New brain research by a Macquarie University team debunks a 75-year-old theory about how humans hear and determine where sounds are coming from.
It found that a “sparse, energy efficient form of neural circuitry” performs this function.
The previous engineering-based theory was that each location in space was represented by a dedicated neuron in the brain whose sole function was to determine where a sound was coming from. This assumption has guided research and audio technologies for decades.
The new study found spatial hearing circuitry in humans is much simpler than first thought, and similar to what animals use for spatial listening.
It found that rather than having an array of neurons, each tuned to one place, human brains process the location of sounds in the same way as many other mammals, using sparse, multifunctional neural circuitry.
The paper’s senior author, Distinguished Professor David McAlpine, Academic Director of Macquarie University Hearing, explained the findings and implications of the research, published in Current Biology on 7 May 2024, to the university’s magazine, The Lighthouse.
“Just like other animals, we are using our ‘shallow brain’ to pick out very small snippets of sound, including speech, and we use these snippets to tag the location and maybe even the identity of the source,” he told The Lighthouse.
Better hearing devices
He said the findings could lead to better voice recognition technology and more advanced hearing devices, including hearing aids, cochlear implants and smartphones.
The goal for hearing aids and implants was to mimic human hearing and accurately locate the source of sounds, but this remained elusive, he explained.
The current approach stems from a model developed by engineers in the 1940s to explain how humans locate a sound source based on differences of a few tens of millionths of a second in when the sound reaches each ear. The model assumes that each location in space is represented by a dedicated neuron in the brain whose function is to determine where a sound is coming from.
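To make that timing cue concrete, the sketch below (an illustration only, not code from the study) simulates a sound reaching the two ears with a direction-dependent delay and recovers the interaural time difference from the peak of a cross-correlation, the delay-and-compare readout that the 1940s model formalised as a bank of coincidence detectors. The sample rate, head width and simple sine-of-azimuth geometry are assumptions chosen for the demo.

```python
# Illustrative sketch: estimate an interaural time difference (ITD) by
# cross-correlating simulated left/right ear signals. All constants below
# are assumptions for the demo, not details from the study.
import numpy as np

FS = 96_000           # sample rate in Hz, assumed
HEAD_WIDTH = 0.18     # approximate distance between the ears in metres, assumed
SPEED_OF_SOUND = 343  # metres per second

def simulate_ears(azimuth_deg: float, duration: float = 0.05):
    """Simulate a noise burst reaching the two ears with a direction-dependent delay."""
    itd = (HEAD_WIDTH / SPEED_OF_SOUND) * np.sin(np.radians(azimuth_deg))
    delay_samples = int(round(itd * FS))
    source = np.random.default_rng(0).standard_normal(int(duration * FS))
    pad = abs(delay_samples)
    left = np.pad(source, (pad, pad))
    right = np.roll(left, delay_samples)  # positive delay: the sound reaches the left ear first
    return left, right

def estimate_itd(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the ITD in seconds from the peak of the cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / FS

left, right = simulate_ears(azimuth_deg=30.0)
itd = estimate_itd(left, right)
azimuth = np.degrees(np.arcsin(np.clip(itd * SPEED_OF_SOUND / HEAD_WIDTH, -1.0, 1.0)))
print(f"Estimated ITD: {itd * 1e6:.0f} microseconds, implied azimuth about {azimuth:.0f} degrees")
```

Run as written, the recovered delay is roughly 260 microseconds, consistent with the 30-degree source direction assumed in the simulation.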
In 2001, McAlpine challenged the engineering model in Nature Neuroscience, and although his theory met opposition, he continued to gather evidence to support it, showing in species after species that the existing model did not hold, including in the prime animal for spatial listening, the barn owl. Proving it in humans was harder because the process is more difficult to observe in action in the human brain, he said.
“We like to think that our brains must be far more advanced than other animals in every way, but that is just hubris,” he told The Lighthouse. “It was clear to me that this was a function that didn’t require an over-engineered brain because animals come in all shapes and sizes.
“It was always going to be the case that humans would use a similar neural system to other animals for spatial listening, one that had evolved to be energy-efficient and adaptable.”
McAlpine and his team, Dr Robert Luke, Dr Lindsey Van Yper, Associate Professor Jaime Undurraga and Dr Jessica Monaghan, proved this by developing a hearing assessment that asked study participants to determine whether the sounds they heard were focused, like foreground sounds, or fuzzy, like background noise.
Participants underwent electro- and magneto-encephalography (EEG and MEG) imaging while listening to the same sounds.
The imaging showed patterns that were the same as those seen in smaller mammals and could be explained by a multifunctional network of neurons encoding information including the source’s location and size, McAlpine explained.
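As a rough sense of what “sparse” can mean here, the toy example below (a hedged illustration, not the model or data from the paper) contrasts a labelled-line scheme, with one narrowly tuned detector per direction, against a readout that uses only two broadly tuned channels; both recover the source direction, but the second needs far fewer units. The tuning curves, channel count and matching rule are assumptions chosen for the demo.

```python
# Illustrative contrast between a "dedicated detector per direction" code and
# a sparse two-channel code. Tuning shapes and readout are demo assumptions.
import numpy as np

AZIMUTHS = np.linspace(-90, 90, 181)  # candidate source directions in degrees

def dedicated_detectors(azimuth_deg: float) -> np.ndarray:
    """Labelled-line style code: one narrowly tuned unit per direction (181 units)."""
    return np.exp(-0.5 * ((AZIMUTHS - azimuth_deg) / 5.0) ** 2)

def two_broad_channels(azimuth_deg: float) -> np.ndarray:
    """Sparse code: two broadly tuned channels whose balance shifts with direction."""
    right = 1.0 / (1.0 + np.exp(-azimuth_deg / 30.0))  # grows for sounds to one side
    left = 1.0 - right                                  # mirror-image channel
    return np.array([right, left])

def decode(response: np.ndarray, code) -> float:
    """Pick the direction whose predicted response best matches the observed one."""
    predictions = np.stack([code(a) for a in AZIMUTHS])
    return AZIMUTHS[np.argmin(np.linalg.norm(predictions - response, axis=1))]

true_azimuth = 30.0
for name, code in [("181 dedicated detectors", dedicated_detectors),
                   ("2 broad channels", two_broad_channels)]:
    print(f"{name}: decoded {decode(code(true_azimuth), code):.0f} degrees")
```

Both schemes decode the simulated 30-degree source correctly; the point of the sketch is simply that a handful of broadly tuned channels can carry the same directional information as a large bank of dedicated detectors.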
When the distribution of the location detectors was scaled for head size, it was similar to that found in rhesus monkeys, which have large heads and cortices similar to those of humans.
“That was the final check box, and it tells us that primates that have directional hearing are using the same simplified neuronal system as small mammals,” he said. “Gerbils are like guinea pigs, guinea pigs are like rhesus monkeys, and rhesus monkeys are like humans in this regard.”
The findings indicated that human brains use the same network to work out where sounds are coming from as they do to pick speech out of background noise.
The imaging also suggested that a machine, like the human brain, does not need to be trained for language in order to listen effectively, a finding that could have implications for artificial intelligence technology in hearing aids and implants. AI that can understand and generate human language is increasingly used in listening devices.
The next step is to identify the minimum amount of information that can be conveyed in a sound while still achieving the best possible spatial listening.
In 2022, Macquarie University Hearing formed a partnership with Google Australia, Cochlear, National Acoustic Laboratories, NextSense and the Shepherd Centre to explore opportunities for AI in hearing.