Industry & Product News
Hypex Electronics Introduces New NCORE Family for Active Speakers
Hypex introduces the NCAS500MP, the first member of an all-new NCORE family of amplifiers specifically designed for two-way active speakers. From pro-audio solutions and studio monitors to high-end hi-fi speakers, the NCAS500MP with its proven NCORE technology can be used in all active speaker designs. With a power rating of 400 W + 100 W into 8 ohms, it is very powerful for its size. Read More
Syntiant and Infineon Offer Always-On Ultra-Low-Power Local Voice Control
Syntiant, a California-based AI chip company focused on moving machine learning from the cloud to edge devices, announced a partnership with Infineon Technologies to serve the rising demand for voice user interfaces (VUI) in electronic devices. Optimized for battery-powered devices, from true wireless earbuds to mobile phones to smart speakers, Syntiant's partnership with Infineon brings more efficient voice control and custom command detection to numerous product configurations. Read More


Worldwide Home Loudspeaker Market Shows Pockets of Growth
Futuresource Consulting unveiled its latest "Home Audio Worldwide Loudspeaker Report," which demonstrates that the market is showing resilience despite shifting listening habits, as consumers look beyond traditional audio products and set their sights on smart speakers and headphones. The recent changes in loudspeaker preferences mean that trade value is faring better than volume, according to the new worldwide loudspeaker report from Futuresource. Read More
 
 
NTi Audio Expands Room Acoustics Optimization Solutions with Room Acoustics Reporter Software
The Room Acoustics Reporter PC software from NTi Audio, in combination with the DS3 Dodecahedron Speaker and the XL2 Sound Level Meter, provides the perfect solution for the acoustic evaluation of a room, performing reverberation time optimization calculations and providing comprehensive reporting according to the ISO 3382 and DIN 18041 standards. Useful for anyone working on the analysis and evaluation of room acoustics, the XL2 and the Room Acoustics Reporter software even allow simulation of acoustic treatment options with absorption data from manufacturers. Read More


JBL Professional Announces One Series 104-BT Desktop Coaxial Speakers with Bluetooth 5
Harman Professional Solutions announced the new JBL Professional One Series 104-BT desktop reference monitors with Bluetooth 5. The new design features a single coaxial driver that delivers clear, detailed sound while mixing and editing. The compact and very affordable speakers were designed to enable content creators and audio professionals to stream accurate, studio-quality audio via Bluetooth or play back audio using a standard wired connection. Read More

DPA Promotes Immersive Sound Capture with New 4560 CORE Binaural Headset Microphone
Danish manufacturer DPA Microphones announced the new 4560 CORE Binaural Headset Microphone, which combines two miniature microphones positioned at the opening of each ear with a flexible headset. It is a high-quality solution for YouTubers, sound designers, podcasters, vloggers, and other content creators, including filmmakers who want to capture authentic immersive audio for their projects. The binaural setup allows users to capture, in real time, exactly what each ear is hearing, so that those listening on headphones can experience the full immersive sound. Read More
   
 
ESS Technology Introduces Third-Generation ES9219 Headphone SoC for Mobile Devices
Just a few short days after launching its new Sabre Smart Power-Amplifier (SPA), a new-generation device designed to boost the audio quality and efficiency of phones, wireless speakers, and even the smallest connected wearable devices, ESS Technology announced the ES9219, the third generation of its Sabre DAC headphone System-on-Chip (SoC) solutions for mobile devices. The ES9219 Headphone SoC, with 32-bit QUAD DAC technology and 2 Vrms output, adds crosstalk reduction and THD compensation while reducing BOM size and cost. Read More

MOTU Introduces Two Compact USB-C Audio Recording Interfaces
Mark of The Unicorn (MOTU) is an engineering-driven music technology company based in Cambridge, MA. The company created Digital Performer, one of the best DAWs in the world, which a lot more people should use. MOTU also makes excellent audio recording interfaces and has now introduced the M2 and M4, two compact units powered via USB-C, both shipping now. USB-C bus-powered audio interfaces for Mac, Windows, and iOS, these two well-designed units offer best-in-class 24-bit/192 kHz audio quality, best-in-class speed (ultra-low latency), and best-in-class metering with a full-color LCD. Read More

Editor's Desk
 
J. Martins
(editor-in-chief)
 


Audio Collaborative 2019
Two Key Audio Trends and What to Expect
Audio Collaborative was a really insightful event, given the relevance of the topics discussed and the amount of knowledge that was shared in just a few hours. In two previous editions of The Audio Voice, I already addressed some of the trends that surprised me the most at this event, and I discussed the loudspeaker market data that Futuresource revealed there. In this third report for The Audio Voice, I explore two important topics, voice control and improved audio experiences, both very relevant discussions during Audio Collaborative 2019.

One of the first presentations at this event was from Simon Forrest, principal analyst at Futuresource Consulting, who focused on the Smart Home Technology Evolution and specifically addressed the topic of voice recognition, voice control, and voice assistants in that domain. I remember tweeting about "reality setting in on voice potential" after this presentation. In effect, Simon Forrest summarized all the main technology trends in the space, calling things what they are and providing a realistic perspective on achievements, technology evolution, challenges, and actual market opportunities.

Simon Forrest, principal analyst at Futuresource Consulting, focused on the Smart Home Technology Evolution.

"Voice is not very smart," stated Forrest in front of an audience that is deeply committed to turning this specific technology into the "next big thing" for the audio industry. "Things are not much different than just a button press," he reinforced when describing what voice control using smart speakers actually allows, moving on to explain why visual platforms are more useful and why manufacturers in the smart home space are increasingly adding displays and cameras to microphone and speakers in their hardware. The benefit, the Futuresource analyst summarized, is in the content focus, combining perception and interaction with graphically enhanced environments, allowing for new Augmented and Mixed Reality applications, which are both visual and tactile.

Like most of the audience and many of those involved in the development of voice recognition - I believe - I strongly agree with Simon Forrest's view that we still have a long way to go to turn today's voice command and control interactions into real "virtual assistants." As the presentation detailed, that means evolving toward configurable wake words, improving language support, and implementing the ability to identify users through face recognition (smart cameras) and voice biometrics - and, eventually, perfecting the conversational abilities of what Forrest called Virtual Assistants 1.0.

His view of the next evolution in this field - Virtual Assistant 2.0 - described the ability of artificial intelligence engines to actually power features such as context awareness in conversations or object recognition in image analysis, paving the way to visual search and audible AR, and ultimately even adding intelligent interjection (influencing consumer thoughts and behavior). That stage implies evolving toward a single VA solution with a fluid user experience across systems and edge-based cognitive reasoning. Futuresource's timeline for this evolution? Five to 10 years.

Voice interaction + Camera perception + Edge AI + Cloud = Intelligence. The technologies necessary to create the essential elements for advanced virtual assistants already exist.

This edition of Audio Collaborative also included an interesting session on "enhanced audio" or improved audio experiences, in this case exploring the exciting concepts of spatial, 3D, Binaural, and Surround Sound, as well as new audio experiences (e.g., headphone-based live events). In "What is Next for the Consumer and the Pro Audio Industry," Jack Wetherill, Principal Analyst at Futuresource Consulting, engaged in an interesting conversation with Guillaume Le Nost, Creative Director, L-Acoustics; Mirek Stiles, Head of Audio Products at Abbey Road Studios; Richard Lindsay-Davies, Digital TV Group; Andreas Ehret, Director of Technology Marketing, Dolby; and Paul Gillies, Director, Headphone Revolution (a pioneering company in headphone-based live events and "Silent Disco" parties).

As Wetherill highlighted, there are too many technologies and buzzwords "out there," and this diversified panel of industry insiders offered a nice perspective into what it all means. This is an important topic, because it is increasingly palpable that many of these "enhanced experiences" are already having a meaningful impact on the audio industry - at both the consumer and professional application levels. In fact, the consumer electronics industry is desperately pushing for something new like this.

Le Nost, bringing the perspective of a leading professional audio manufacturer, provided a nice overview of all that's involved when we currently talk about these "enhanced experiences," from playback to live sound technologies. In his perspective, immersive sound for live events has taken off exponentially and reflects consumers' legitimate interest in and need for enhanced audio experiences.

Le Nost also provided a nice summary of the main immersive audio technologies and the opportunities they convey. Professional audio companies like L-Acoustics have long been trying to "immerse" the spectator with reinforced sound, but only when they started combining spatial audio processing and evolving from channel-based to object-based audio did it become possible to create a new experience that was sufficiently rewarding to generate excitement both in audiences and among promoters of events, live shows, and tours. The key, as Le Nost described it, was not only to surround everyone with sound but also to be able to localize a sound source precisely on stage. "For us, immersion is quite different from localization, and I think that in 3D audio the two terms are still valid for a lot of use cases."

During the interesting session on "enhanced audio" or improved audio experiences, Guillaume Le Nost, Creative Director, L-Acoustics, provided a nice summary of the main immersive audio technologies and the opportunities they convey.

Importantly, Le Nost also explained how scene-based technologies used in gaming have great potential when rendered to binaural and enjoyed on headphones, while object-based content, which is rendered specifically for each target reproduction system, allows a superior immersion effect and is particularly effective for live sound. Representing a leading professional audio company in the live sound and touring space, the L-Acoustics engineer illustrated how millions of spectators have already experienced immersive sound in live shows, as hundreds of major touring artists and productions all over the world are transitioning from conventional left/right or left/center/right rigs to distributed "immersive" arrays.
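As a rough illustration of the object-based approach Le Nost describes, here is a toy Python sketch of my own (not L-Acoustics' L-ISA processing or any commercial renderer): an audio object carries only position metadata, and per-speaker gains are computed at playback time for whatever loudspeaker layout is installed, instead of being baked into a fixed channel format.

    import math

    def render_object(azimuth_deg, speaker_azimuths_deg):
        """Toy object-based renderer: constant-power pan of one source
        between the two nearest speakers of an arbitrary horizontal layout."""
        spk = sorted(speaker_azimuths_deg)
        gains = [0.0] * len(spk)
        for i in range(len(spk)):
            lo, hi = spk[i], spk[(i + 1) % len(spk)]
            span = (hi - lo) % 360 or 360          # angular width of this speaker pair
            offset = (azimuth_deg - lo) % 360
            if offset <= span:
                frac = offset / span               # 0 at 'lo' speaker, 1 at 'hi'
                gains[i] = math.cos(frac * math.pi / 2)          # constant-power law
                gains[(i + 1) % len(spk)] = math.sin(frac * math.pi / 2)
                break
        return dict(zip(spk, gains))

    # A source at 10 degrees rendered to a five-speaker frontal array:
    print(render_object(10, [-45, -22.5, 0, 22.5, 45]))

A real spatial renderer adds elevation, distance, delays, and room compensation, but the principle - position metadata rendered to the layout at hand - is the same.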

Le Nost also appealed to those attending the Audio Collaborative 2019 event to support initiatives that will allow users to bridge all channel-, scene-, and object-based sound models in the future, such as the existing Audio Definition Model (ADM), promoted by the EBU, described in Recommendation BS.2076, and recently approved by the ITU. Some technology providers in the consumer space (e.g., Dolby and DTS) and those involved in the MPEG-H development efforts (e.g., the Fraunhofer Institute) have already adopted the ADM metadata model, and L-Acoustics is now pushing the professional audio industry to do the same. More importantly, Le Nost warned that the existing creative tools for immersive sound are still not friendly enough for creators, that more work is needed to make sure that content creation is correctly translated across different platforms, and that a lot more effort is needed on that front.
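To make the metadata idea more concrete, here is a deliberately simplified, ADM-inspired sketch in Python. The element names and the flattened structure are illustrative only; the actual BS.2076 schema is far richer and nests block formats under channel formats.

    import xml.etree.ElementTree as ET

    # Simplified, ADM-inspired metadata for one audio object (illustrative only;
    # the real BS.2076 Audio Definition Model is considerably more elaborate).
    obj = ET.Element("audioObject", {"audioObjectID": "AO_1001", "audioObjectName": "LeadVocal"})
    block = ET.SubElement(obj, "audioBlockFormat",
                          {"rtime": "00:00:00.00000", "duration": "00:00:05.00000"})
    for axis, value in (("azimuth", "-15.0"), ("elevation", "0.0"), ("distance", "1.0")):
        ET.SubElement(block, "position", {"coordinate": axis}).text = value
    ET.SubElement(block, "gain").text = "0.8"

    print(ET.tostring(obj, encoding="unicode"))

The point of a common model like the ADM is that the same object description can travel with the audio from the production tool to whatever channel-, scene-, or object-based renderer plays it back.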

One of many possible examples of an immersive sound installation with L-Acoustics L-ISA Hyperreal Sound systems. In this case, a live television production of Sing! China, the highly popular re-branded version of The Voice in China. (Image courtesy L-Acoustics)

Overall, this panel provided an excellent overview of the adoption and business opportunities that new enhanced audio experiences can generate. Mirek Stiles, from Abbey Road Studios, explained how game engines are being explored as a creative audio tool, and how the studios are exploring all the production tools available in spatial and 3D audio, from Dolby Atmos to virtual reality. Andreas Ehret discussed how the adoption of Dolby Atmos, from cinema to consumer electronics, has exploded in the last few years, and how hundreds of millions of devices that can play back immersive audio are now in the market.

Strangely, it was Stiles who highlighted how new products such as the Amazon Echo Studio and the Sennheiser AMBEO soundbar are actually bringing that experience closer to the consumer and allowing people to have new enhanced experiences, including interacting with the audio of live sports broadcasting. The fact that streaming services are now supporting and distributing actual content in immersive formats, such as Dolby Atmos or MPEG-H (Sony 360 Reality Audio), is something that the whole panel agreed will contribute to quickly bringing scale to the new formats.

But as Wetherill cleverly asked Andreas Ehret from Dolby, while Dolby Atmos is a wonderful experience in the cinema, it is painfully obvious that a small device such as a smartphone, a single wireless speaker, some soundbars, and even something like the Amazon Echo Studio is not able to generate the same positive impact. Is there a risk that the original concept is endangered? And is this something that Dolby worries about? Wetherill asked. Ehret confirmed, "I worry about that every day. But the approach we have been taking is that Atmos cannot be and should not be just a premium, niche experience to those who install dozens of speakers in their home. That doesn't make sense. You wouldn't have people producing in that format if that was all you can address. The approach we've taken is to make sure that we can address the entire market and all those devices. Whenever you are feeding those devices with Atmos, with an original Atmos mix, you still get 'an uplift' from what you would have with stereo," he shared. "I think that is a lesson we've learned from surround sound, and at the very early stages of Atmos development it was clear to us that we shouldn't repeat the same mistake."

Listening to object-based spatial audio material on a single Sony 360 Reality Audio speaker demonstrated at CES 2019.

Of course, I agree with Ehret that when we hear a song streamed on Amazon HD and mixed in Dolby Atmos on a speaker such as the Amazon Echo Studio, we get something that is different - definitely different from mono, considering that we have one single speaker. I cannot yet confirm whether that is also the case compared to stereo on two separate and correctly configured speakers. As Ehret recommended, "you have to listen for yourself."

From the same session, I found particularly interesting the testimony from Paul Gillies that his company, Headphone Revolution, is having massive success "entertaining people with headphones," not only with Silent Discos - which are becoming a major attraction at festivals throughout the year - but also with all sorts of parties and events in places that don't have a noise license. Increasingly, Gillies explained, there are applications at corporate events where people can use headphones to choose between multiple types of content. And artists and content creators are aware of the potential of headphones, personalization, and enhanced binaural experiences, and want to explore those for multiple people at the same time. For the moment, the main hurdle for these events is the need to bring multiple signals to a lot of headphones at the same time. The next step, according to Headphone Revolution, will be personalization, creating a real advantage for people who choose to listen on headphones during events. Very promising.

During the event's closing session, Carl Hibbert, Head of Consumer Media & Technology at Futuresource, talked with Bill Neighbors, SVP & GM, Cinema and Home, Xperi; and Pete Wood, Senior Vice President, New Media Distribution, Sony Pictures, about a new initiative to bring enhanced experiences to consumers - in this case, involving the IMAX Enhanced (www.IMAXenhanced.com) platform and content.

For those not aware, in September 2019, IMAX Corp. and DTS, a wholly owned subsidiary of Xperi, announced an expansion of the IMAX Enhanced ecosystem spanning new streaming platforms, movies from Sony Pictures Home Entertainment and other studios, and device partners throughout the United States, Europe, and China. IMAX Enhanced is a premier, immersive at-home entertainment experience - combining exclusive, digitally remastered 4K HDR content with high-end consumer electronics to offer a new level of sight, sound, and scale. IMAX Enhanced device partners include Arcam, AudioControl, Denon, Elite, Integra, Lexicon, Marantz, Onkyo, Pioneer, Sony Corp., TCL, and Trinnov, with Anthem and StormAudio recently joining as well.

Sony promoted the first generation of IMAX Enhanced content, hardware, and services earlier in 2019.

On the audio side, IMAX and DTS agreed to use a special variant of the DTS:X codec technology, integrated in home audio equipment, to deliver an IMAX signature sound experience with more immersive, powerful sound. To ensure optimized playback of fully immersive IMAX Enhanced content, the two companies are designing an IMAX Mode, meticulously calibrated to deliver the best viewing and listening experience in the home. IMAX original soundtracks (5.0 to 11.0) have a unique mix and dynamic arrangement that, until now, could only be experienced at an IMAX theater and is now going to be available in the home.

Interestingly, Pete Wood reminded us how, in the digital age, there has been a recurring phenomenon of technology moving forward very quickly, forcing content - and to a certain measure, services - to catch up. He recalled the early days of Netflix, when it was all about convenience versus a not-so-great quality experience, or the early days of HDTV broadcasting and the lack of a consistent consumer experience. Wood agrees that consumers have an expectation of high quality and that there is already an association in consumers' minds with the IMAX brand. That was the reason why Sony decided to offer content to match that experience and provide a complete quality ecosystem from creation to the end consumer. As Bill Neighbors reinforced, IMAX Enhanced is a platform, not a new standard. It actually uses existing standards such as HDR10 because it doesn't want to burden the consumer with another standardization process. On the audio front, it uses DTS' existing formats. IMAX Enhanced sound needs a 5.1.4 speaker system at minimum, while optimum results come with the recommended 7.2.4 configuration, and reproduction is compatible with the DTS Neural:X upmixer found in many AVRs.
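For readers less familiar with the x.y.z layout notation used above, here is a small illustrative Python helper (my own, not part of any DTS or IMAX tooling) that expands it into ear-level, LFE, and height channel counts.

    def parse_layout(layout: str):
        """Expand 'x.y.z' speaker notation into channel counts.
        x = ear-level (bed) speakers, y = LFE/subwoofer channels, z = height speakers."""
        parts = [int(p) for p in layout.split(".")]
        bed, lfe, height = (parts + [0, 0, 0])[:3]   # '5.1' implies no height channels
        return {"bed": bed, "lfe": lfe, "height": height, "total": bed + lfe + height}

    # Minimum and recommended IMAX Enhanced layouts mentioned above:
    print(parse_layout("5.1.4"))   # {'bed': 5, 'lfe': 1, 'height': 4, 'total': 10}
    print(parse_layout("7.2.4"))   # {'bed': 7, 'lfe': 2, 'height': 4, 'total': 13}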
So, for these two Hollywood-minded industry veterans, there's no question that the opportunity is in trying to "enhance" the experience with the best the industry can offer - starting with the best home cinema experiences - and letting consumers vote with their wallets.

R&D Stories
Achieving Frictionless Lower-Power Application Connectivity with Data-Over-Sound
By  Dr. Adib Mehrabi (Chirp)
 
Anytime we need to log into the local Wi-Fi or pair with another Bluetooth device, we think, "There must be a better way to do this!" That is now one of the exciting possibilities for data-over-sound technology, and in this article Dr. Adib Mehrabi (Head of Research, Chirp) explains what else we can do with it and how it works. His article explores how data-over-sound can be realized in embedded scenarios, and how Chirp's new technology, using the Arm Cortex-M4 and Cortex-M7 processors, can be implemented in connected products. Chirp.io is a data-over-sound application pioneer, combining a team of audio scientists and DSP engineers with a mission to simplify machine-to-machine connectivity using sound, with low-power and low-cost device-to-device communication. This article was originally published in audioXpress, November 2019. Read the Full Article Now Available Here
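To get a rough feel for how data-over-sound works, here is a toy frequency-shift-keying sketch in Python - emphatically not Chirp's actual protocol, SDK, or tone plan - in which each 4-bit symbol of the payload is mapped to one of 16 audible tones and concatenated into a waveform that any speaker can play and any microphone can pick up.

    import math

    SAMPLE_RATE = 44100
    SYMBOL_SEC = 0.05                      # 50 ms per symbol
    BASE_HZ, STEP_HZ = 1760.0, 120.0       # 16 tones starting at 1.76 kHz

    def encode(payload: bytes):
        """Toy data-over-sound encoder: map each 4-bit symbol to one of 16 tones."""
        samples = []
        for byte in payload:
            for nibble in (byte >> 4, byte & 0x0F):
                freq = BASE_HZ + nibble * STEP_HZ
                n = int(SAMPLE_RATE * SYMBOL_SEC)
                samples += [math.sin(2 * math.pi * freq * t / SAMPLE_RATE) for t in range(n)]
        return samples                     # write to a WAV file or play back to transmit

    tone_buffer = encode(b"hello")
    print(len(tone_buffer), "samples,", len(tone_buffer) / SAMPLE_RATE, "seconds")

A receiver would run a short FFT or Goertzel filter over each symbol window to recover the nibbles; a production system such as Chirp's adds synchronization, error correction, and robustness to room acoustics on top of this basic idea.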

Voice Coil Spotlight
Zero Phase In Studio Monitors
By  Vance Dickason
 
This Voice Coil Spotlight, written by Vance Dickason with commentary from Floyd Toole, Wolfgang Klippel, Andrew Jones, and James Croft, focuses on Zero Phase in Studio Monitors. As Dickason explains, "the implication of using FIR filters to zero out the phase of a loudspeaker to improve its subjective performance seems inherently flawed given all the research done over years that concluded that the human ear is not very sensitive to phase or group delay. Given this, I decided to get a few other opinions on this subject and sent out emails to a number of individuals whose opinions I greatly respect, and ask them what they thought of the concept of zero phase improving subjective performance in loudspeakers." This article was originally published in Voice Coil, October 2019. Read the Full Article Now Available Here
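For readers who want to experiment with the question Dickason raises, here is a minimal Python/SciPy sketch (my own example, not from the article) that compares the passband group delay of a linear-phase FIR low-pass filter with that of its minimum-phase counterpart.

    import numpy as np
    from scipy.signal import firwin, minimum_phase, freqz

    fs = 48000
    fir_lin = firwin(255, 2000, fs=fs)     # linear-phase FIR low-pass at 2 kHz
    # Homomorphic conversion: magnitude approximates the square root of the original's
    fir_min = minimum_phase(fir_lin)

    for name, taps in (("linear-phase", fir_lin), ("minimum-phase", fir_min)):
        w, h = freqz(taps, worN=2048, fs=fs)
        phase = np.unwrap(np.angle(h))
        band = w < 1500                    # restrict to the passband
        # Group delay approximated as the negative slope of phase vs. angular frequency
        delay_ms = -np.gradient(phase[band], 2 * np.pi * w[band]) * 1000
        print(f"{name}: mean passband group delay = {delay_ms.mean():.2f} ms")

The linear-phase version pays for its flat phase with a constant delay of roughly half the filter length (about 2.6 ms here), which can then be compensated to approach "zero phase"; whether doing so audibly improves a studio monitor is exactly the point the commentators debate.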

AX December 2019: Digital Login
Audio Product Design | DIY Audio Projects | Audio Electronics | Audio Show Reports | Interviews | And More 

Don't Have a Subscription?
VC December 2019: Digital Login
Industry News & Developments | Products & Services | Test Bench | Acoustic Patents | Industry Watch | And More