Sonarworks Reference 4 Software Adds 26 New Headphone Calibration Profiles
The innovative audio software developers from Sonarworks announced that their Reference 4 software has added 26 new headphone calibration profiles, supporting additional models from AKG, Audio-Technica, beyerdynamic, Bose, HyperX, Master & Dynamic, Philips, Plantronics, Sennheiser, and Sony. The revolutionary monitoring software - which is able to deliver a true reference standard almost anywhere - now includes 150 headphone calibration profiles.   Read More

Harman Professional Announces Availability of JBL C221 and C222 Two-Way ScreenArray Cinema Loudspeakers
Harman Professional Solutions announced that its new JBL C221 and C222 two-way ScreenArray cinema loudspeakers, using patent-pending Dual Dissimilar Arraying Technology, are now shipping in limited quantities, with full availability in May 2018. These next-generation cinema loudspeakers deliver the latest JBL technology in a compact form factor, helping small and mid-sized cinemas improve coverage and achieve smooth sound reproduction at an accessible price point.   Read More

Ward-Beck Introduces New preMO Series of Networked Microphone Preamps
Ward-Beck Systems is moving fast toward full IP convergence, and leading the way in audio networking implementation. At NAB 2018, the Canadian company unveiled its new preMO series of networked microphone preamps, representing the convergence of analog audio, digital conversion, and IP-based networking, and creating completely new workflows. These new products rethink audio operations in the networking age, leveraging the AES67 and ST 2110 protocols, and AES70 for remote control.    Read More

Blackmagic Design Announces DaVinci Resolve 15 with Improved Audio Features and New Fairlight Audio Consoles
Anyone visiting the NAB 2018 show was certainly surprised by the Blackmagic Design booth, the largest ever, filled with so many products that any effort to see everything would take the greater part of a day. Among the many unveilings for NAB, and only a year and a half since the Fairlight acquisition, Blackmagic unveiled a much improved DaVinci Resolve 15 software with important new audio tools and features, while showing the first of a new generation of Fairlight modular audio consoles.   Read More

ICEpower Announces Its Most Powerful Amplifier Module to Date 
ICEpower's latest amplifier power module, the 1200AS, was designed for live sound and concerts, and the Danish company confirms it is its most powerful audio module to date. Based on ICEpower's recent ICEedge Class D chip set, the new ICEpower 1200AS incorporates the latest technologies, enabling superior sound quality and an ultra-low noise floor (less than 30 μV; 130 dB signal-to-noise ratio). The extremely high audio quality also makes it well suited for high-end audio applications.   Read More

HumBeatz Mobile Looper App from AmpTrack Technologies Allows Voice to MIDI Instrument Conversion
There are hundreds of innovative apps for iOS and Android devices, many featuring advanced composition and recording tools. Swedish company AmpTrack Technologies just released its new mobile app HumBeatz, which lets users hum, whistle, or beatbox and turn those sounds into a musical instrument. With HumBeatz, vocal sounds can be used to quickly and easily create a bass line, drum groove, trumpet riff, or other sounds for building musical parts, loops, stems, or song sketches.   Read More

Optocore Festival Box Allows All Protocols to Tunnel Over the Same Fiber
With the festival season fast approaching, Optocore chose the recent Frankfurt Prolight+Sound show to launch its new Festival Box. Having already proven the versatile fiber-based technology in the broadcast industry - marketed through its close technology partner BroaMan - Optocore has responded to the growing demand from the live sound community for a more elegant and efficient signal transport system, with a hot-swappable SFP solution that will radically streamline multiple-band festival bills. Hence the name Festival Box.   Read More

Gayle Sanders Returns with Launch of New Company Eikon at AXPONA 2018
Gayle Sanders, co-founder of MartinLogan and one of the industry's most celebrated speaker designers, turns a new page in his prestigious career with the introduction of his new company, Gayle Sanders Eikon. The world premiere of his new digital active loudspeaker system, featuring the Image1 and Eikontrol, took place at the 2018 AXPONA Show. The Eikon concept combines reference DSP-based electronics with direct connection to multiway speakers, and a new user interface, setting a new standard for total system performance.   Read More


Editor's Desk

Object-Based Audio and Sound Reproduction 

Object-Based Audio. You've heard the term.
It means we no longer need to record audio information in 6 (5.1), 8 (7.1), or an insane number of audio channels - 22.2, as Japan is pitching for its 8K production and broadcast standard - to properly convey the spatial information of sound reproduction. And not only for sophisticated soundtracks in immersive formats.

The European bcom Institute of Research and Technology is one of the most active entities in immersive experiences and 360° audio for multiple platforms. At NAB 2018, bcom showed some of the best content available for virtual reality platforms.
It is true that this entire transition was first inspired by Hollywood proposing immersive audio formats for its blockbuster productions, with Dolby Atmos and DTS:X becoming the new norm in movies. In Europe, Auro-3D was pitched as a generic "immersive" format for any type of production, including music recordings, adding a "height" layer of additional channels to the surround layers and creating a 9.1 to 13.1 approach, depending on the size of the room and the audience.
Dolby Atmos, DTS:X and Auro-3D immersive channel-based formats were intended to add a "critical component in the accurate playback of native 3D audio content," described as "height" or ceiling channels, using a speaker layout that constructs "sound layers." No doubt, this is highly effective in movie theaters, and is not a problem in movie production and distribution.
The problem is, people are not exactly able to build "movie theaters" in their homes for daily use. Some can build dedicated home theaters for movie viewing, creating those immersive format installations, but we do more at home when we consume media than just watch movies. Most of the time, people simply listen to music and watch live and non-fiction TV programs, which don't necessarily need the creative components of immersive audio formats as described by Dolby Atmos, DTS:X, or Auro-3D.
That's why, when the industry started to look at the requirements for next-generation standards for media distribution - including next-generation broadcast standards and OTT (over-the-top) streamed content distribution, consumed in many cases on simple mobile devices - it was obvious that we shouldn't just add more channels. Instead, we should look at alternative approaches for efficient media distribution to any type of platform.

Dolby Atmos on headphones. Starting from a channel-based immersive audio format, Dolby is now promoting the virtues of 3D audio virtualization and binaural reproduction on headphones. It is still called Dolby Atmos.
Also, new types of media (e.g., virtual reality and gaming) were already inspiring a new generation of content creation tools for immersive audio experiences and generated an increased interest in 3D audio concepts and technologies such as Ambisonics/HOA, Binaural, head-related transfer function (HRTF), head-related impulse response (HRIR), and object-based audio techniques.
Not surprisingly, even companies such as Auro Technologies - which always stated that "natively recording Auro-3D in an object-based format is simply not possible" - along with Dolby and DTS, immediately started exploring alternative media distribution techniques for those new platforms, mainly virtual reality (VR) and mobile devices, using enhanced binaural reproduction on headphones. Auro called it 3D Over Headphones, DTS called it DTS Headphone:X, and Dolby still prefers to call it Dolby Atmos in order not to confuse consumers, while its "professional solutions" division markets a multiplicity of Dolby Audio technologies and tools specific to creation, distribution, and playback.
After all, independently of the immersive audio format descriptions, all these companies use object-based audio techniques for content creation (production). Dolby describes the process for its Dolby Atmos Renderer as metadata that creates "multichannel speaker outputs, binaural headphone outputs, and channel-based deliverables."
Basically, all those formats start as audio tools (plug-ins for standard production DAWs, such as Pyramix or Pro Tools) allowing panning manipulation by placing "audio objects in a 3D space" that generate "object metadata that is authored with the final content," using a visual representation and signal metering to monitor the dynamic mix of these objects in a "3D space."
Independently of the distribution and playback platforms, using this simple object metadata we can more easily describe how a sound moves, with better "resolution" regarding the interim stages.
Let's imagine a sound that moves from the right side, directly to the ceiling above us, and then to our left. Imagine a Dolby Atmos soundtrack where a sound moves as I described, but the listener only has two conventional stereo speakers in the living room. Since there are no ceiling speakers, the sound would simply move from the right speaker to the left speaker.
Now imagine the same program material reproduced using a soundbar with multiple drivers, with digital signal and beamforming processing for spatial virtualization. The panoramic motion of the sound is in fact translated using spatial (positional) metadata, and the resulting reproduction sounds as if the sound moves from left, up above our head, and to the right, as it would if we had multiple speaker channels. Only we don't "have channels." We have just the information about the sound's relative position or provenance, and a different playback system that is able to generate an immersive experience. That playback system could be a soundbar, the tiny speakers on a smartphone or tablet, or a sophisticated omnidirectional speaker that is able to analyze the acoustics of the room and project sounds in different directions, creating an immersive stage (not exactly accurate, but similarly impactful) - because the information regarding the position or provenance of the sound(s) is generated through descriptive metadata of objects. Audio objects.
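The idea above can be sketched in a few lines of code. This is a deliberately minimal illustration, not any vendor's actual renderer: a mono audio object carries a position (here just an azimuth in its metadata), and the renderer derives speaker gains for whatever layout actually exists - in this case, plain stereo, where height information is simply folded down. The function name and metadata field are hypothetical.

```python
import math

def render_object_to_stereo(samples, azimuth_deg):
    """Render a mono audio object to two speakers using constant-power panning.

    azimuth_deg is the object's position metadata: -90 (hard left) to
    +90 (hard right). On a two-speaker layout, any "height" component of
    the object's position would simply be discarded, as described in the text.
    """
    # Map azimuth to a pan angle between 0 and pi/2, then derive gains
    # so that gain_l**2 + gain_r**2 == 1 (constant perceived power).
    pan = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    gain_l = math.cos(pan)
    gain_r = math.sin(pan)
    left = [s * gain_l for s in samples]
    right = [s * gain_r for s in samples]
    return left, right

# A "moving" sound is just a time series of positions in the metadata;
# the renderer recomputes gains per position for the speakers it has.
left, right = render_object_to_stereo([1.0, 1.0], azimuth_deg=90.0)  # hard right
```

A renderer for a soundbar or a 7.1.4 layout would use the same object metadata but map it to different driver gains (and beamforming delays), which is exactly why one set of metadata can serve every playback system.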

The National Association of Broadcasters (NAB) show in Las Vegas, NV, is an ideal event to gain a perspective of the latest content production and distribution technologies.
Yes, Dolby Atmos and DTS:X immersive audio formats are both object-based - a combination of raw audio channels and metadata describing the position and other properties of the audio objects - at least from the production point of view. The formats use standard multichannel distribution (5.1 or 7.1, which are part of any standard distribution infrastructure, including broadcast standards) and are able to convey object-based audio for specific overhead and peripheral sounds using metadata that "articulates the intention and direction of that sound: where it's located in a room, what direction it's coming from, how quickly it will travel across the sound field, etc." Standard AV receivers, televisions, and STBs equipped with Dolby Atmos and DTS:X read that metadata and determine how the experience is "rendered" appropriately for the speakers that exist in the playback system. In DTS:X, it is even possible to manually adjust sound objects - interacting with and personalizing the sound.
As I said, not all content material "needs" to be described in such a sophisticated way, and not all metadata is intended to be translated as positional data. There are still excellent mono recordings, there is all sorts of single-channel and stereo broadcast content, there's loads of excellent "stereo field" music, etc., and all of it can benefit from object-based audio or additional metadata for multiplatform distribution and playback. 
Content distribution also faces other, more complex challenges, such as multi-language commentary and dialog dubbing, loudness management for different playback scenarios, room equalization, acoustic compensation, etc. Using object-based metadata, we could also allow some basic interaction with the sound program, enabling users to choose the type of experience they prefer - like watching live events with more or less "sound environment" and focus on commentary, or even removing the commentary altogether. From a broader perspective, we can also see object-based audio becoming the metadata layer that helps solve the multi-platform challenges of today's media distribution, allowing better audio reproduction for any sort of content on any type of playback device and channel configuration, including binaural virtualization of audio on headphones and spatial audio reproduction in digitally processed speaker arrays.
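The personalization described above is straightforward once the program is delivered as separate objects rather than a premixed channel bed. As a rough sketch (illustrative only - the object names, metadata fields, and function are hypothetical, not any standard's actual API), a receiver could apply a per-object gain chosen by the listener before summing:

```python
def mix_objects(objects, user_gains):
    """Sum named audio objects into one output, applying per-object gains.

    objects: dict mapping an object name to {"samples": [...], "default_gain": g}.
    user_gains: listener overrides, e.g. {"commentary": 0.0} to mute commentary.
    """
    length = max(len(obj["samples"]) for obj in objects.values())
    out = [0.0] * length
    for name, obj in objects.items():
        # The listener's preference overrides the broadcaster's default gain.
        gain = user_gains.get(name, obj.get("default_gain", 1.0))
        for i, sample in enumerate(obj["samples"]):
            out[i] += sample * gain
    return out

objects = {
    "commentary": {"samples": [0.5, 0.5], "default_gain": 1.0},
    "crowd":      {"samples": [0.2, 0.2], "default_gain": 1.0},
}
# The listener chooses to remove the commentary altogether:
mixed = mix_objects(objects, {"commentary": 0.0})
```

The same mechanism covers multi-language delivery: each dialog language is carried as its own object, and the receiver mixes in only the one the listener selects.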
And that's where the industry is heading, leaving our precious "audiophile" discussions about the ideal "production standards" for stereo recording and reproduction of music in the dust.

The Sennheiser prototype AMBEO soundbar was presented at CES 2018 for the first time and will probably be the first MPEG-H solution on the market based on the Fraunhofer reference design.

But more importantly... now there's a new object-based audio standard, called MPEG-H.
The new audio system based on the MPEG-H Audio standard is now on-air with the new television standards adopted and under implementation in Korea and the US (ATSC 3.0), Europe (DVB UHD), and China. MPEG-H Audio also offers interactive and immersive sound, employing audio objects, height channels, and Higher-Order Ambisonics for other types of distribution, including OTT services, digital radio, music streaming, VR, AR, and web content. Following Fraunhofer's successful demonstration of a 3D soundbar prototype, there are now real products in production from multiple companies - naturally from the Korean consumer electronics giants (e.g., Samsung and LG), and also from Sennheiser and others. Other playback possibilities are being explored on TVs and smart speakers using 3D virtualization technology such as Fraunhofer's UpHear, enabling immersive sound to be delivered without using multiple speakers.

No doubt, the customization and personalization features of MPEG-H will be decisive in exciting broadcasters, content providers, and consumers, in turn creating awareness of, and demand for, an understanding of object-based audio across all domains of content production. As I stated previously, this could also include music production, since it would allow optimizing content to sound its best on any end device, providing universal delivery in the home theater as well as on headphones, smartphones, tablets, and any speaker configuration.
And the best reason to believe that MPEG-H Audio will create a solid foundation for working with object-based audio content is the fact that it is compatible with today's streaming and broadcast equipment and infrastructure. The MPEG-H Audio codec, together with the channels or objects needed for immersive sound, can be transmitted at bit rates similar to those used today for 5.1 surround broadcasts, and MPEG-H Audio-based systems offer DASH support for streaming, as well as multi-platform loudness control depending on the device and listening environment.
This is about changing the paradigm in sound reproduction. Read my complete article available online, where I provide some examples of companies working in this domain.


You Can DIY!
A Dynamic Microphone Soundcard Amplifier
By Paul Loewenstein
"A good tutorial on interfacing microphones to sound cards." This was the brief description preceding Paul Loewenstein's article in the original edition of audioXpress. We think there's more to it. The author describes a microphone amplifier designed to interface a standard dynamic microphone to a sound card (the original text still referred to a Creative Soundblaster...) or other computer microphone input with a 5 V through 2.2 kΩ bias on the microphone connector ring. The resulting circuit was optimized for speech recognition applications with Dragon NaturallySpeaking technology (from Nuance), one of the earliest voice-to-text transcription and speech recognition applications. This is still a useful circuit for anyone exploring microphone preamp applications. This article was originally published in audioXpress, February 2008.   Read the Full Article Now Available Here

Voice  Coil User Report
Loudsoft FINE R+D Analyzer 
By Vance Dickason
Beginning with the January 2018 issue of Voice Coil magazine, Vance Dickason began implementing the Loudsoft FINE R+D analyzer into the Test Bench measurement protocol. This article provides details about the features of FINE R+D, a Fast Fourier Transform (FFT) analyzer packaged in a 1U rack. FINE R+D comes with the hardware rack, a manual, an external power supply, a USB cable, a loop-back cable for calibration, and a measurement microphone. Even though the included standard measurement microphone is quite good, for enhanced accuracy Loudsoft recommends using the G.R.A.S. Sound & Vibration 1/4" Type 46BE microphone capsule and preamplifier body with the system. This article was originally published in Voice Coil, December 2017.   Read the Full Article Online

AX May 2018: Digital Login
Audio Product Design | DIY Audio Projects | Audio Electronics | Audio Show Reports | Interviews | And More 

Don't Have a Subscription?
VC May 2018: Digital Login
Industry News & Developments | Products & Services | Test Bench | Acoustic Patents | Industry Watch | And More