UMaine Ph.D. students develop AI tool to improve breast cancer detection
University of Maine
Image: Jeremy Juybari (left) and Josh Hamilton developed an artificial intelligence (AI) system that could make it easier and faster for doctors to identify signs of breast cancer in tissue samples.
Credit: Photo courtesy of the University of Maine.
A research team led by two University of Maine Ph.D. students has developed an artificial intelligence (AI) system that could make it easier and faster for doctors to identify signs of breast cancer in tissue samples, possibly preventing delays and saving lives.
The system, named the Context-Guided Segmentation Network (CGS-Net), mimics the way human pathologists study cancer tissue to improve the accuracy of digital cancer diagnosis. Spearheaded by Jeremy Juybari, a doctoral candidate in electrical and computer engineering, and Josh Hamilton, a doctoral candidate in biomedical engineering, the tool introduces a deep learning architecture designed to interpret microscopic images of tissue with greater precision than conventional AI models.
Breast cancer is the second leading cause of cancer-related deaths among women, affecting one in eight over their lifetime. Diagnosis still relies on the microscopic inspection of chemically stained tissue samples, a process that requires expertise and time.
Two-thirds of the world’s pathologists are concentrated in only 10 countries, leaving large regions facing diagnostic delays that contribute to preventable deaths. In India, for example, roughly 70% of cancer deaths are linked to treatable risk factors compounded by limited access to timely diagnostics.
“This model integrates both detailed local tissue regions and broader contextual regions to improve the accuracy of cancer predictions in histological slides,” Juybari said. “By introducing a unique training algorithm and an innovative initialization strategy, this research demonstrates how incorporating surrounding tissue context can significantly enhance model performance. These findings reinforce the importance of holistic image analysis in medical AI applications.”
Juybari and Hamilton described their work in a paper published in Scientific Reports (part of the Nature portfolio), which they co-authored with faculty researchers Andre Khalil, Yifeng Zhu and Chaofan Chen.
At the core of the system lies a dual-encoder model that mirrors the workflow of a pathologist examining a slide. A pathologist gathers information by repeatedly zooming in and out of the image under review; CGS-Net takes in both views at once. One branch of the network processes a high-resolution image patch to capture cell-level detail. The other examines a lower-resolution patch encompassing the surrounding tissue, the broader architectural context that helps specialists distinguish normal from malignant structures.
Each patch shares the same center pixel, ensuring that both views of the tissue align precisely. Together, they feed into a system of interconnected encoders and decoders that uses data from both the high- and low-resolution images for a complete analysis.
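To make the design concrete, here is a minimal sketch of a dual-encoder segmentation model in PyTorch. It is illustrative only, not the published CGS-Net code (which the team has made publicly available): the class name DualEncoderSegNet, the layer sizes and the fusion-by-concatenation step are all assumptions for the example.

```python
import torch
import torch.nn as nn

class DualEncoderSegNet(nn.Module):
    """Illustrative dual-encoder segmentation sketch (not the authors' code).

    One encoder sees a high-resolution patch (cell-level detail); the other
    sees a lower-resolution patch centered on the same pixel (tissue context).
    Their features are fused and decoded into a binary segmentation mask.
    """

    def __init__(self, channels=32):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.detail_enc = encoder()    # high-magnification branch
        self.context_enc = encoder()   # low-magnification branch
        self.decoder = nn.Sequential(  # fuse both branches, upsample to a mask
            nn.Conv2d(channels * 4, channels * 2, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels * 2, 1, 1),  # 1-channel logit map (cancer vs. not)
        )

    def forward(self, detail_patch, context_patch):
        d = self.detail_enc(detail_patch)
        c = self.context_enc(context_patch)
        # Both patches share a center pixel, so the encoded feature maps
        # describe the same tissue location; concatenate and decode.
        fused = torch.cat([d, c], dim=1)
        return self.decoder(fused)

# Example: two 256x256 RGB patches at different magnifications, same center.
model = DualEncoderSegNet()
hi = torch.randn(1, 3, 256, 256)   # e.g., a high-magnification crop
lo = torch.randn(1, 3, 256, 256)   # a lower-magnification crop, wider field
mask_logits = model(hi, lo)        # -> (1, 1, 256, 256) segmentation logits
```

The point the sketch highlights is that fusion happens only after each branch has encoded its own view, so the decoder always sees cell-level detail and the wider tissue context together.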
To test the system, the research team at UMaine trained CGS-Net on 383 digitized whole-slide images of lymph node tissue, teaching it to differentiate healthy from cancerous tissue and to flag slides showing signs of breast cancer. The tool consistently outperformed traditional single-input models.
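The preprocessing this implies is cropping two patches that share a center pixel at different magnifications. Below is a library-free sketch of that step, assuming the slide has already been loaded as arrays at two magnifications; the paired_patches name, the 256-pixel patch size and the 4x scale factor are illustrative, and boundary handling is omitted.

```python
import numpy as np

def paired_patches(hi_res, lo_res, center, patch=256, scale=4):
    """Crop two center-aligned patches from the same slide.

    hi_res: full-magnification image array, shape (H, W, 3)
    lo_res: the same slide downsampled by `scale`, shape (H//scale, W//scale, 3)
    center: (row, col) center pixel in hi_res coordinates
    Returns a (detail, context) pair of `patch`-sized crops; the context
    patch covers a field of view `scale` times wider than the detail patch.
    """
    r, c = center
    half = patch // 2

    # Detail patch: cell-level crop around the center pixel.
    detail = hi_res[r - half:r + half, c - half:c + half]

    # Context patch: the same center, mapped into the downsampled image,
    # so the crop spans scale*patch pixels of the original tissue.
    rs, cs = r // scale, c // scale
    context = lo_res[rs - half:rs + half, cs - half:cs + half]
    return detail, context

# Toy example with random "tissue": a 4096x4096 slide and a 4x downsample.
hi = np.random.rand(4096, 4096, 3)
lo = hi[::4, ::4]                        # crude stand-in for proper downsampling
d, c = paired_patches(hi, lo, center=(2048, 2048))
print(d.shape, c.shape)                  # (256, 256, 3) (256, 256, 3)
```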
“Our model, CGS-Net, successfully mimicked how a pathologist looks at histological samples, by using two encoders that simultaneously examine two views of different levels of magnification power,” the researchers wrote in their paper.
While the current study focused on binary cancer segmentation, the team sees wide potential for expansion. Future research aims to incorporate additional resolutions, apply the system to multiclass tissue segmentation and test its adaptability across other cancer types. The architecture could also integrate multimodal data, such as radiology scans or molecular profiles, which are known to further boost diagnostic accuracy.
Beyond its technical contributions, the project underscores the interdisciplinary strength of UMaine's research ecosystem, merging engineering, computing and biomedical science to tackle global health disparities.
The study was supported by the National Cancer Institute and the National Science Foundation, and utilized computational resources from the University of Maine System Advanced Research Computing, Security, and Information Management (ARCSIM). Both the dataset and source code are publicly available, ensuring transparency and collaboration across the scientific community.
As cancer diagnosis becomes increasingly digital, tools like CGS-Net promise to enhance, not replace, human expertise. By teaching machines to see as doctors do, UMaine researchers are helping chart a future where early, accurate detection becomes accessible to all.