
Submitted by Rachel Gardner on Mon, 26/01/2026 - 10:24
Artificial intelligence (AI) models can help doctors make faster and more accurate diagnoses when they’re trained to analyse medical images, such as scans of tumours or photos of skin growths. Now a researcher is investigating whether AI can diagnose diseases just as effectively by analysing body sounds such as heartbeats, speech, coughing and breathing.
Cecilia Mascolo, Professor of Mobile Systems here, is a researcher with expertise in using AI tools to monitor human health. In the last few years, she and her team have developed and trained machine learning models to help doctors diagnose Covid from recordings of people coughing, breathing and talking.
When they ran a side-by-side comparison of how well the AI performed against human doctors, the findings were striking. While doctors with experience of working on Covid wards made accurate diagnoses, the AI model performed slightly better. But overall, the most accurate results were achieved by a 'committee' of doctors using the AI tool.
Prof Mascolo is now extending the highly promising results of this pilot project into a bigger study. Called AURORA (or 'Audio models for respiratory and cardiac diagnostics and clinical training'), it was today awarded a Proof of Concept grant by the European Research Council. The ERC says that such projects "show how frontier science can open new routes to innovation". Prof Mascolo says that the funding will help her team "lead the way in using AI on audio to diagnose illnesses".
Importantly, Prof Mascolo adds, "while there's a lot of talk about how AI might be used as a substitute for a doctor or health professional, our study highlights that in reality, the skills of AI models and doctors can be very complementary and they work well together."
Exciting advances
In the past few years, there's been growing research into developing AI tools that analyse medical images, and it's leading to exciting advances. In the last two years, a team of mainly UK-based researchers has developed an AI model for diagnosing eye disease from scans of patients' retinas.
And NICE, the UK's National Institute for Health and Care Excellence, is currently trialling an AI tool that triages potential skin cancer cases by analysing photos that doctors take of suspicious lesions on patients' skin.
But research on training AI models to analyse sounds, rather than images, is far less advanced. It's high time it caught up, says Prof Mascolo.
"Doctors have used human sounds to diagnose disease for centuries. Yet today, diagnosing respiratory and cardiovascular disease in a patient still largely relies on expensive tests and the time of a busy consultant," she points out. "By using AI, we can do better."
Better training
As well as improving diagnosis, she adds, this would also benefit doctor training. "My daughter is a medical student and she can only learn what pneumonia sounds like if, among the patients she sees that day, there's a patient with pneumonia and she can listen to their lungs. That's not a very efficient training method when we could be using AI models to generate more and better data for them to train on at any time."
But there's a reason why sounds have been studied less than medical images: a lack of data. AI models are data-hungry and require huge amounts of information to train on. While databanks of medical images – like scans, X-rays, mammograms and photos – have built up over the years and are being used to train AI models, there are fewer databanks of sounds.
However, Prof Mascolo and her team have been collecting audio data from their own previous research. During the pandemic, they collected recordings of people coughing, breathing and speaking via the mobile Covid-19 Sounds app that they developed. They then tested out a machine learning model that they had trained on that data and published the results in the Journal of Medical Internet Research.
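The article doesn't spell out the team's pipeline, but as a rough illustration of how audio clips are commonly turned into something a model can learn from, here is a minimal sketch: summarise each recording as acoustic features (MFCCs in this example) and fit a simple classifier. The file names, labels, feature choice and classifier are all hypothetical, not the published method.

```python
# Illustrative sketch only, not the team's actual pipeline: classify short
# audio clips (e.g. coughs) using MFCC features and a simple classifier.
# Assumes librosa and scikit-learn are installed; file paths are hypothetical.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_features(path, sr=16000):
    """Load a clip and summarise it as the mean MFCC vector."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dimensional vector per clip

# Hypothetical training data: paths to cough recordings plus 0/1 labels.
paths = ["cough_001.wav", "cough_002.wav"]   # placeholder filenames
labels = [1, 0]                              # 1 = Covid-positive

X = np.stack([mfcc_features(p) for p in paths])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X)[:, 1])  # predicted probability per clip
```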
"Our objective was to compare how our model and human clinicians performed in predicting COVID-19 from these sound recordings," explains Prof Mascolo’s colleague Dr Jing Han, a Senior Research Associate here and lead author on the paper Evaluating Listening Performance for COVID-19 Detection by Clinicians and Machine Learning.
Half of the recordings in the dataset were from Covid-positive patients, and half from Covid-negative volunteers. As well as comparing how well the AI model and human doctors performed in diagnosing Covid from listening to the recordings, the researchers also looked at "whether combining the predictions of the model and human experts could further enhance the performance in terms of both accuracy and confidence." The answer was that it did.
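The paper's actual fusion method isn't described in this article, but as a rough illustration of what combining the two kinds of prediction might look like, one simple scheme is to average the model's predicted probability with the clinicians' assessments. The weighting, threshold and numbers below are hypothetical.

```python
# A minimal illustrative sketch (not the authors' published method) of
# fusing an AI model's probability with clinicians' assessments.
import numpy as np

def combined_prediction(model_prob, clinician_probs, model_weight=0.5):
    """Fuse one model probability with several clinician probabilities.

    model_prob: float in [0, 1], the model's predicted probability of Covid.
    clinician_probs: list of floats in [0, 1], each clinician's confidence.
    Returns a fused probability; >= 0.5 would be read as Covid-positive.
    """
    clinician_avg = float(np.mean(clinician_probs))
    return model_weight * model_prob + (1 - model_weight) * clinician_avg

# Example: the model is fairly confident (0.8) and three doctors lean
# positive on average, so the 'committee' verdict is Covid-positive.
print(combined_prediction(0.8, [0.6, 0.7, 0.4]))  # -> ~0.68
```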
"These findings suggest that the doctors and the AI model could make better clinical decisions via a cooperative approach," says Dr Han. She adds that this could help encourage "higher confidence in audio-based respiratory diagnosis."
Potential of human sound-based diagnostics
And this is exactly what the new ERC Proof of Concept project aims to achieve.
"Human sound-based diagnostics have huge potential," says Prof Mascolo. "So in our new project we’ll leverage our ongoing collaboration work (with Dr Tomasz Jadczyk, an Associate Professor and cardiologist at the Medical University of Silesia in Poland) to develop tools that support clinicians in diagnosing respiratory and cardiac conditions and help train student doctors on what they sound like."
