While the onus for successful acoustics at meetings often falls on planners’ shoulders, are there ways for attendees to individually filter out noises they don’t want to hear? We asked Josh McDermott, Ph.D., an assistant professor in MIT’s Department of Brain and Cognitive Sciences who oversees the school’s Laboratory for Computational Audition. McDermott’s research focuses on how we learn to instantly recognize certain sounds — and why machines can’t — as well as how we segregate sounds and perceive reverberation.
Can you explain reverberation, and some of your studies pertaining to it?
The sound that enters our ears originates from an outer source, but that source can interact with the environment on the way to our ears and enter the ears as a very distorted version of the original source. We’re interested in how it is that people are able to partially separate the effects of the source and the effects of the environment, and we study that by combining computational models of what we think the auditory system is doing with experiments on people to test how people hear. We also spend a lot of time looking into the brains of people while they’re listening to sounds to try to understand the neural representations that are produced by the sound.
Are there techniques or technologies that listeners can use on an individual basis to segregate the sounds that they want to hear from those that they don’t?
That’s a subject that I’m deeply interested in, but those technologies mostly don’t exist yet. I’d like to think that in another 10 or 20 years, we’ll have applications on our cellphones that will essentially process sound for us and help us filter out the stuff that we really don’t want to listen to. As of now, that problem is largely unsolved.
One thing that does exist is directional microphones, which will filter out sounds coming from directions that you’re not interested in. In principle, they’re something that you can equip a person with and that might help; [a listener] kind of points their head toward the source that they’re interested in, and that will be amplified relative to everything else. But it’s not a product that’s actually available to us yet.
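The directional pickup McDermott describes can be approximated in software with delay-and-sum beamforming: delay each microphone channel so that sound from the target direction lines up across channels, then average. A minimal sketch, assuming a two-microphone array and integer sample delays (the function name, signal, and delays here are illustrative, not from the interview):

```python
import numpy as np

def delay_and_sum(mics, delays):
    """Align each channel by its steering delay (in samples), then average.
    Sound from the steered direction adds coherently; sound arriving with a
    different delay pattern is partially cancelled by the averaging."""
    n = min(len(m) - d for m, d in zip(mics, delays))
    aligned = [np.asarray(m)[d:d + n] for m, d in zip(mics, delays)]
    return np.mean(aligned, axis=0)

# A 440 Hz "talker" reaches mic 1 three samples after mic 0.
fs = 8000
t = np.arange(0, 0.1, 1 / fs)
talker = np.sin(2 * np.pi * 440 * t)
mic0 = talker
mic1 = np.concatenate([np.zeros(3), talker[:-3]])

# Steering delays [0, 3] line the two channels up, so the
# talker passes through the average intact.
out = delay_and_sum([mic0, mic1], [0, 3])
```

Real arrays steer with fractional delays derived from microphone geometry and the speed of sound; this integer-delay version only shows the principle.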
I’ve also often thought that it might be kind of useful to create a kind of ear prism, little tubes going into your ears that would allow you to point an artificial ear toward the thing that you’re interested in while enabling you to continue to look at [the subject]. I think that would be a cool thing to actually try.
Since you’ve attended and presented at meetings devoted to sound and hearing, are there any techniques you’ve seen used at these meetings to cut reverberation and control sound?
I’m always amazed at how bad sound is at auditory meetings. You’d think that it wouldn’t be, but meetings are organized by professional meeting organizers, so the fact that [we’re attending] an auditory meeting is irrelevant. It ends up being random whether the rooms are good or not.
I’m amazed at how much reverberation varies from space to space. I’ve given talks in concert halls where the reverberation makes it almost impossible to even present sound demos — you can’t even play them because the reverberation so profoundly alters the sound. In other comparably sized rooms, it will work great. So I think the way that the walls are treated is probably the biggest factor, and that’s usually beyond the control of the people that are coming to the meeting. The actual scientists that go to the meetings, they just don’t have a whole lot of direct control over all of these things.
I think things could be a lot better. The people who are organizing [meetings] are not the people attending them — so they end up not being invested in a detailed way in how things go. It’s possible to manage the reverb, but it’s just usually not the case. There are also some interesting signal-processing tricks that people have looked into for counteracting the effects of reverb, but they’re not widely used at this point.
What kind of tricks?
Reverb has the effect of blurring sound out, giving you these delayed copies of the sound, all these sorts of reflections that arise at later times. [Reverb] causes the sound to lose its resolution in some sense. So the trick would be that if you can break the sound up into little pieces so that the individual pieces are “shorter,” then the blurring causes the pieces to interfere with each other less than normally. So it would be possible to take a speech signal and reproduce it in a way that makes the thing that actually enters the person’s ear a little bit closer to the actual, original sound. This is not something that’s in widespread use, but within 10 years it could be.
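The "delayed copies" picture McDermott gives maps onto the standard signal-processing model of reverberation: convolving the dry source with a room impulse response made up of the direct path plus delayed, attenuated reflections. A minimal sketch of that model (the reflection delays and gains below are invented for illustration):

```python
import numpy as np

def add_reverb(dry, fs, reflections):
    """Simulate reverb by convolving with an impulse response built from
    the direct path plus delayed, attenuated copies of the sound.
    reflections: list of (delay_seconds, gain) pairs."""
    ir = np.zeros(int(fs * max(d for d, _ in reflections)) + 1)
    ir[0] = 1.0                        # direct sound
    for delay, gain in reflections:
        ir[int(fs * delay)] += gain    # each echo is a delayed copy
    return np.convolve(dry, ir)

# A single click gets smeared into the click plus two quieter echoes --
# the "blurring" effect described above.
fs = 1000
click = np.zeros(100)
click[0] = 1.0
wet = add_reverb(click, fs, [(0.010, 0.5), (0.030, 0.25)])
```

Dereverberation methods try to invert (or suppress) exactly this kind of smearing; the shorter each piece of the signal is relative to the echo delays, the less the copies overlap.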
With ever-more staticky environments around us, do you think people are evolving in terms of how they perceive and process sound?
That’s something I’m really interested in, and I think we don’t really know the answer to that. It’s a deep and unresolved issue in neuroscience as to what extent our abilities are due to what we’re born with and to what extent they’re learned and adapted to the particular environments in which we live. Certainly, what’s very clear is that modern industrialized life is much noisier than life was pre-machinery. Once, you walked around in the forest and things were pretty quiet. Now you walk around the city, and the decibel levels are much higher. I think it’s an open question as to whether people have adapted to that, or whether our struggles — such as when we’re in a restaurant or a nightclub — are due to the fact that our auditory system evolved during situations when the world was a bit quieter.
I’ve worked in a few noisy newsrooms, and there is often so much noise that it’s hard to concentrate. Over time, though, some journalists learn to tune out background noise so they can focus on writing. I don’t know what mechanism makes people able to do that.
There are probably two effects that work in opposition. One is that it’s plausible, and likely, that people can learn to get better at most things that they practice. But there’s this other factor, which is that when you work in a noisy environment, you can end up suffering from hearing impairment. For instance, at a construction site, most of the workers probably end up with hearing loss. It would be interesting to look at environments where the overall dB [decibel] level is not that high, like a newsroom, where there’s just lots of sound sources, so filtering ends up being important.
Do you have any other thoughts on manipulating sound at meetings or in noisy environments?
I think that within the next 10 to 20 years, there will be things along the lines of personalized hearing aids, tools that will be in smartphones, where somebody will be able to basically say, “Yeah, I only want to hear the person who’s talking,” and that will come through their earphones. Things will really explode in terms of personal assistance from devices.
Josh McDermott’s tips for creating sound that everyone can hear:
Use music to shift the vibe
“The main function of music is to manipulate people’s mood and energy level,” McDermott said. “You can play music to help people relax and get rid of anxiety, or to energize people and get them moving. You can imagine situations where you’d want to do each of those things.” McDermott suggests playing classical music in speaker prep rooms “to help people chill out,” and using upbeat music in the morning to help wake attendees up.
Don’t forget the hearing-impaired
They’re at pretty much every meeting. “Problems get a lot harder when you have hearing impairment,” McDermott said, so take advantage of loop systems and Bluetooth streamers when possible.