Researchers may have found a way to better predict spoken language development in deaf children who receive cochlear implants.
The researchers, from Australia, the US and Hong Kong, used an advanced form of machine learning, an AI model employing deep transfer learning (DTL), in an international study of 278 children who received the implants.
DTL is a type of AI that reuses knowledge learnt from pretraining on a large dataset, rather than learning a new task entirely from scratch.
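As a rough illustration of the transfer-learning idea (a toy sketch only, not the study's actual model or data pipeline): a "pretrained" feature extractor is kept frozen, standing in for knowledge learnt on a large dataset, while only a small new output layer is trained on the target task. All data, weights and sizes below are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" layer: stands in for features learnt on a large dataset.
# Its weights are never updated during fine-tuning.
W_pretrained = rng.normal(size=(8, 4))

def extract_features(x):
    # ReLU features from the frozen pretrained layer
    return np.maximum(x @ W_pretrained, 0.0)

# Small synthetic dataset for the new task (limited labelled examples)
X = rng.normal(size=(32, 8))
y = (X.sum(axis=1) > 0).astype(float)  # invented binary labels

# Trainable head: logistic regression on the frozen features
w = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(200):
    F = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # predicted probabilities
    grad = p - y                            # cross-entropy gradient
    w -= lr * F.T @ grad / len(y)           # only the head weights update
    b -= lr * grad.mean()

preds = 1.0 / (1.0 + np.exp(-(extract_features(X) @ w + b))) > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The key point is that gradient updates touch only `w` and `b`; `W_pretrained` stays fixed, which is what lets a small dataset benefit from prior large-scale training.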
The researchers found the model achieved 92.39% accuracy in predicting spoken language outcomes one to three years after implantation, which they said was superior to conventional approaches.
They said the approach could allow intensified therapy to be offered earlier to children forecast to struggle more with spoken language.
The children, who spoke English, Spanish or Cantonese, had a mean age at cochlear implantation of 18 months, and 49% were female.
Results were published in JAMA Otolaryngology–Head & Neck Surgery on 26 December 2025.
The researchers trained the AI models to predict outcomes based on pre-implantation brain MRI scans from the children. All three centres used different protocols for brain scanning and different outcome measures.
More intensive therapy
The model outperformed traditional machine learning models in all outcome measures.
“Our results support the feasibility of a single AI model as a robust prognostic tool for language outcomes of children served by cochlear implant programs worldwide. This is an exciting advance for the field,” said senior author, Dr Nancy Young.
“This AI-powered tool allows a ‘predict-to-prescribe’ approach to optimise language development by determining which child may benefit from more intensive therapy.”
Dr Young holds the Lillian S. Wells Professorship in Pediatric Otolaryngology at Ann & Robert H. Lurie Children’s Hospital of Chicago, where she is medical director of the Audiology and Cochlear Implant Programs. She is also Professor of Otolaryngology at Northwestern University Feinberg School of Medicine, and Professor and Fellow at the Knowles Hearing Center at Northwestern University School of Communication.
The researchers said that while cochlear implants substantially improved spoken language in children with severe to profound sensorineural hearing loss, outcomes remained more variable than in children with healthy hearing.
“This variability cannot be reliably predicted for individual children using age at implant or residual hearing,” they said. “Development of an artificial intelligence clinical tool to predict which patients will exhibit poorer improvements in language skills may enable an individualised approach to improve language outcomes.”
Other researchers included speech pathologist Associate Professor Shani Dettman from the University of Melbourne’s Department of Audiology and Speech Pathology.