News Release

Automatic label checking: The missing step in making reliable medical AI

Two automated checks that make deep-learning models for medical imaging more reliable

Peer-Reviewed Publication

Osaka Metropolitan University

Image: Xp-Bodypart-Checker successfully sorted radiographs by body part. (Credit: Osaka Metropolitan University)

Researchers at Osaka Metropolitan University have developed a practical way to detect and fix common labeling errors in large radiographic collections. By automatically verifying body-part, projection, and rotation tags, their method improves the reliability of deep-learning models used for routine clinical tasks and research projects.

Deep-learning models using chest radiography have made remarkable progress in recent years, evolving to accomplish tasks that are challenging for humans, such as estimating cardiac and respiratory function.

However, AI models are only as good as the images fed into them. X-ray images taken at hospitals are labeled with information such as the imaging site and method before being fed into a deep-learning model, but this labeling is mostly done manually, so errors, missing data, and inconsistencies occur, especially at busy hospitals.

Image orientation complicates matters further. A radiograph can be taken from anterior to posterior or vice versa, and it can also be lateral, inverted, or rotated, adding more variability to the dataset.

In large imaging archives, these minor errors quickly add up to hundreds or thousands of mislabeled images.

A research team at Osaka Metropolitan University Graduate School of Medicine, including graduate student Yasuhito Mitsuyama and Professor Daiju Ueda, aimed to improve the detection of mislabeled data by automatically identifying errors before they affect the input data for deep-learning models.

The group developed two models: Xp-Bodypart-Checker, which classifies radiographs by body part; and CXp-Projection-Rotation-Checker, which detects the projection and rotation of chest radiographs. Both act as automated gatekeepers, flagging suspect labels before the images reach a downstream model, as pictured in the sketch below.
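The release does not include the team's code, but the quality-control step it describes can be pictured with a short, hypothetical Python sketch: a trained classifier predicts a tag for each image, and any image whose recorded label disagrees with the prediction is flagged for human review. The names Radiograph, check_labels, and the stand-in classifier below are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Radiograph:
        path: str
        recorded_body_part: str  # tag entered manually at the hospital

    def check_labels(images: list[Radiograph],
                     classify: Callable[[str], str]) -> list[Radiograph]:
        """Return images whose recorded tag disagrees with the classifier."""
        return [img for img in images
                if classify(img.path) != img.recorded_body_part]

    # Toy usage with a dummy classifier that calls everything "chest":
    flagged = check_labels(
        [Radiograph("img001.png", "chest"), Radiograph("img002.png", "hand")],
        classify=lambda path: "chest",
    )
    print([img.path for img in flagged])  # ['img002.png'] -> queued for review

In a real pipeline, classify would be a trained model such as Xp-Bodypart-Checker, and flagged images would be routed to a human for correction rather than silently discarded.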

Xp-Bodypart-Checker achieved an accuracy of 98.5%, while CXp-Projection-Rotation-Checker obtained accuracies of 98.5% for projection and 99.3% for rotation. The researchers are optimistic that integrating both checks into a single model would deliver even stronger performance in clinical settings.

Although the results were outstanding, the team hopes to fine-tune the method further for clinical use. “We plan to retrain the model on radiographs that were flagged despite being correctly labeled, as well as those that were not flagged but were in fact mislabeled, to achieve even greater accuracy,” Mitsuyama said.

The study was published in European Radiology.

###

About OMU

Established in Osaka as one of the largest public universities in Japan, Osaka Metropolitan University is committed to shaping the future of society through the “Convergence of Knowledge” and the promotion of world-class research. For more research news, visit https://www.omu.ac.jp/en/ and follow us on social media: X, Facebook, Instagram, LinkedIn.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.