How AI helps medical professionals read confusing EEGs to save lives
AI boosts medical professionals’ accuracy from 47% to 71% while showing its work, assisting their decision-making rather than telling them what to do
The Big Idea
DUKE UNIVERSITY
Imagine a tool that could help doctors save thousands of lives each year by better understanding brain activity. Researchers at Duke University have created a machine learning model that does just that. This tool helps medical professionals read electroencephalography (EEG) charts more accurately, which is crucial for detecting dangerous seizures in unconscious patients.
Doctors analyzing EEG brain activity charts, crucial for detecting seizures and other neurological conditions in patients. Image courtesy of Western Sydney University.
The Story
Researchers at Duke University have developed a new machine learning tool to help read the EEG charts of patients in intensive care. These EEGs are vital because they show the brain's electrical activity through lines that go up and down on a chart. When a patient has a seizure, these lines jump dramatically, like a seismograph during an earthquake, making it easy to spot. However, other harmful events, called seizure-like events, are much harder to detect.
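To see why full seizures are the easy case, consider a minimal sketch (our own illustration, not the Duke team's method): when the amplitude jumps the way the story describes, even naive thresholding of the signal's power will flag it. The subtler seizure-like events are exactly what defeats this kind of simple check.

```python
import numpy as np

# Hypothetical illustration: a full seizure often shows up as a dramatic
# amplitude jump, so a naive detector can flag windows whose power far
# exceeds the baseline. Seizure-like events are subtler and slip past this.

rng = np.random.default_rng(0)
fs = 200                                   # assumed sampling rate, Hz
baseline = rng.normal(0, 10, 60 * fs)      # 60 s of quiet background activity
burst = rng.normal(0, 80, 5 * fs)          # 5 s high-amplitude "seizure" burst
eeg = np.concatenate([baseline[:40 * fs], burst, baseline[40 * fs:]])

win = 2 * fs                               # 2-second analysis windows
windows = eeg[: len(eeg) // win * win].reshape(-1, win)
rms = np.sqrt((windows ** 2).mean(axis=1)) # root-mean-square amplitude per window
threshold = 3 * np.median(rms)             # flag windows well above typical power

for i in np.where(rms > threshold)[0]:
    print(f"possible seizure activity around t = {i * 2} to {i * 2 + 2} s")
```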
“The brain activity we’re looking at exists along a continuum, where seizures are at one end, but there’s still a lot of events in the middle that can also cause harm and require medication,” said Dr. Brandon Westover, associate professor of neurology at Massachusetts General Hospital and Harvard Medical School. “The EEG patterns caused by those events are more difficult to recognize and categorize confidently, even by highly trained neurologists, which not every medical facility has. But doing so is extremely important to the health outcomes of these patients.”
To solve this problem, the team worked with Cynthia Rudin’s lab at Duke. Rudin and her team specialize in creating "interpretable" machine learning models. Unlike typical "black box" models, these models must explain how they reach their conclusions, making them more trustworthy and easier to understand.
The researchers gathered EEG samples from over 2,700 patients and had more than 120 experts annotate the key features in each chart, classifying it as a seizure, one of four types of seizure-like events, or "other." Each type of event has its own pattern, but these patterns can be hard to see because of noisy data or overlapping signals.
“There is a ground truth, but it’s difficult to read,” said Stark Guo, a Ph.D. student working in Rudin’s lab. “The inherent ambiguity in many of these charts meant we had to train the model to place its decisions within a continuum rather than well-defined separate bins.”
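One simplified way to picture what Guo means, in code rather than prose: score the model against soft expert labels instead of hard bins. Everything below, including the class names and the loss, is an assumption made for illustration, not a detail taken from the paper.

```python
import numpy as np

# Assumed, simplified setup: targets are soft distributions over six classes,
# and the model is rewarded for matching the ambiguity, not for picking a bin.
CLASSES = ["seizure", "LPD", "GPD", "LRDA", "GRDA", "other"]  # names assumed

# An ambiguous chart: the expert panel split between two seizure-like patterns.
soft_target = np.array([0.10, 0.45, 0.35, 0.05, 0.03, 0.02])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

logits = np.array([0.2, 1.4, 1.1, 0.1, -0.5, -0.8])  # hypothetical model output
probs = softmax(logits)

# Cross-entropy against the soft target: low when the model spreads its
# belief the same way the experts did.
loss = -(soft_target * np.log(probs)).sum()
print(dict(zip(CLASSES, probs.round(3))), f"loss = {loss:.3f}")
```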
Visual representation of a new AI algorithm aiding medical professionals in interpreting EEG patterns of patients at risk of brain damage from seizures or seizure-like events. Each colored arm signifies a distinct event type, with proximity to the tip indicating algorithm confidence. Image courtesy of Duke University.
Visually, this spectrum looks like a multicolored starfish. Each arm of the starfish represents a different type of seizure-like event. The closer the chart is to the tip of an arm, the more certain the algorithm is about its decision. Charts closer to the center indicate less certainty.
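If you want to play with the geometry yourself, here is one plausible way such a starfish layout can be built (an assumed projection for illustration; the paper's actual visualization may differ): place each chart at the probability-weighted average of the six arm tips, so confident predictions land near a tip and ambiguous ones drift toward the center.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed geometry: each class gets its own arm direction; a chart sits at
# the probability-weighted average of the arm tips, so ambiguity pulls it
# toward the center of the "starfish".
CLASSES = ["seizure", "LPD", "GPD", "LRDA", "GRDA", "other"]  # names assumed
angles = np.linspace(0, 2 * np.pi, len(CLASSES), endpoint=False)
tips = np.column_stack([np.cos(angles), np.sin(angles)])  # arm-tip coordinates

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.full(len(CLASSES), 0.3), size=300)  # fake predictions
points = probs @ tips                                        # weighted average of tips
colors = probs.argmax(axis=1)                                # color by top class

plt.scatter(points[:, 0], points[:, 1], c=colors, cmap="tab10", s=12)
for (x, y), name in zip(tips, CLASSES):
    plt.annotate(name, (x, y), ha="center")
plt.gca().set_aspect("equal")
plt.title("Starfish-style view: distance from center reflects confidence")
plt.show()
```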
Besides this colorful visual aid, the algorithm also shows the brainwave patterns it used to make its decision and provides three examples of similar, professionally diagnosed charts. “This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark,” said Alina Barnett, a postdoctoral research associate in the Rudin lab. “Even if they’re not highly trained to read EEGs, they can make a much more educated decision.”
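The "three examples of similar, professionally diagnosed charts" step is reminiscent of classic nearest-neighbor retrieval. The sketch below assumes each EEG segment has already been reduced to a feature vector; the features and labels are random stand-ins, not the team's actual pipeline.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Assumed setup: every labeled reference chart is summarized as a feature
# vector, and the three closest reference charts are shown to the reader.
rng = np.random.default_rng(2)
library_features = rng.normal(size=(2700, 64))   # stand-in for labeled charts
library_labels = rng.integers(0, 6, size=2700)   # stand-in expert classes

nn = NearestNeighbors(n_neighbors=3).fit(library_features)

new_chart = rng.normal(size=(1, 64))             # chart under review
dist, idx = nn.kneighbors(new_chart)
for d, i in zip(dist[0], idx[0]):
    print(f"similar chart #{i}: expert label {library_labels[i]}, distance {d:.2f}")
```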
Output from a new AI algorithm for EEG interpretation. Bottom graphs highlight EEG sections used in the algorithm's decision-making, while annotated examples on the right show similar professionally reviewed EEGs. Image courtesy of Duke University.
To test the algorithm, the team asked eight medical professionals to sort 100 EEG samples into the six categories, once with AI assistance and once without. Their accuracy improved significantly, from 47% to 71%, outperforming those who used a similar "black box" algorithm in a previous study.
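Using only the figures reported in the story (the individual per-reader scores below are invented for illustration), the arithmetic of the paired comparison looks like this:

```python
import numpy as np

# Eight professionals, 100 charts each; the story reports mean accuracy
# rising from 47% unassisted to 71% with the AI's explanations.
# Per-reader scores here are hypothetical, chosen to match those means.
unassisted = np.array([0.44, 0.49, 0.46, 0.50, 0.43, 0.48, 0.47, 0.49])
assisted   = np.array([0.69, 0.73, 0.70, 0.74, 0.68, 0.72, 0.70, 0.72])

print(f"mean unassisted:  {unassisted.mean():.0%}")       # 47%
print(f"mean assisted:    {assisted.mean():.0%}")         # 71%
print(f"mean paired gain: {(assisted - unassisted).mean():.0%}")
```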
“Usually, people think that black box machine learning models are more accurate, but for many important applications, like this one, it’s just not true,” said Rudin. “It’s much easier to troubleshoot models when they are interpretable. And in this case, the interpretable model was actually more accurate. It also provides a bird’s eye view of the types of anomalous electrical signals that occur in the brain, which is really useful for care of critically ill patients.”
This work was supported by the National Science Foundation (IIS-2147061, HRD-2222336, IIS-2130250, 2014431), the National Institutes of Health (R01NS102190, R01NS102574, R01NS107291, RF1AG064312, RF1NS120947, R01AG073410, R01HL161253, K23NS124656, P20GM130447) and the DHHS LB606 Nebraska Stem Cell Grant.
Source
Story provided by Duke University.
Content may be edited for style and length.
Alina Jade Barnett, Zhicheng Guo, Jin Jing, Wendong Ge, Peter W. Kaplan, Wan Yee Kong, Ioannis Karakis, Aline Herlopian, Lakshman Arcot Jayagopal, Olga Taraschenko, Olga Selioutski, Gamaleldin Osman, Daniel Goldenholz, Cynthia Rudin, M. Brandon Westover. Improving Clinician Performance in Classifying EEG Patterns on the Ictal–Interictal Injury Continuum Using Interpretable Machine Learning. NEJM AI, 2024; 1(6). DOI: 10.1056/AIoa2300331