Can artificial intelligence match how the brain processes sound?

Biology

Without realising it, our brain continuously processes sounds and infers semantic information from them, such as the presence of birds in a tree from hearing their song. How can this process be replicated with artificial intelligence-based models, or "artificial hearing"? For the first time, a research team led by Bruno Giordano, a CNRS researcher at the Institut de neurosciences de la Timone (CNRS/Aix-Marseille University), in collaboration with Professor Elia Formisano of Maastricht University, has compared current artificial hearing models to determine which one best explains the brain's perception and representation of sounds. Their research shows that deep neural networks1 far outperform the other models, and that the algorithms developed by Google perform best among them. The scientists compared how a bank of natural sounds was processed by these different models and by participants' brains, measured with functional magnetic resonance imaging (fMRI)2. By proposing a framework and methodology for comparing artificial hearing models, these findings, published in Nature Neuroscience on 16 March 2023, could have significant implications for the development of new brain-inspired sound recognition technologies.

  • 1 These networks are made up of artificial "neurons", which can number in the millions and are organised into several dozen layers. These artificial intelligence models can "learn" from a database, with or without human supervision.
  • 2 An fMRI scan makes it possible to visualise brain activity indirectly.
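To give a sense of how a model's sound representations can be compared with brain responses, here is a minimal sketch using representational similarity analysis, one common approach in this field; it is an illustration only, not necessarily the exact method used in the study, and all the names and array sizes (the sound bank, the model embeddings, the voxel responses) are hypothetical placeholders.

```python
# Hypothetical sketch: comparing an artificial hearing model with fMRI data via
# representational similarity analysis (RSA). The random arrays below stand in
# for real model embeddings and real voxel responses; they are not study data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_sounds = 80     # sounds in the (hypothetical) sound bank
model_dim = 128   # dimensionality of the model's sound embeddings
n_voxels = 500    # voxels in an auditory region of interest

# Rows are sounds: one embedding and one fMRI response pattern per sound.
model_embeddings = rng.normal(size=(n_sounds, model_dim))
fmri_responses = rng.normal(size=(n_sounds, n_voxels))

# Representational dissimilarity matrices (condensed form):
# pairwise distances between sounds in each representational space.
model_rdm = pdist(model_embeddings, metric="correlation")
brain_rdm = pdist(fmri_responses, metric="correlation")

# Rank correlation between the two RDMs: a higher value means the geometry of
# the model's sound representations better matches that of the brain responses.
rho, p_value = spearmanr(model_rdm, brain_rdm)
print(f"Model-brain representational similarity: rho={rho:.3f} (p={p_value:.3g})")
```

In this kind of analysis, each candidate artificial hearing model would receive its own similarity score, allowing the models to be ranked by how well they account for the brain's representation of the sounds.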
Bibliography

Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds. Bruno L. Giordano, Michele Esposito, Giancarlo Valente and Elia Formisano. Nature Neuroscience, 16 March 2023. DOI: 10.1038/s41593-023-01285-9

Contact

Bruno Giordano
Researcher
François Maginiot
CNRS Press Officer