AI-powered image generation sharpens accuracy in crop disease severity assessment
Nanjing Agricultural University
The method, called location-guided lesion representation learning (LLRL), overcomes background interference, a common problem that leads existing models to mistake healthy areas for lesions. By training the system to focus directly on diseased regions, the team achieved more reliable classification of disease severity across apple, potato, and tomato leaves.
Global food production must increase by 50% by 2050 to feed the growing population. Yet plant diseases already cut annual yields by 13%–22%, representing billions of dollars in agricultural losses worldwide. Traditional methods of assessing disease severity rely on human expertise or laboratory testing, both of which are costly, time-intensive, and subjective. Advances in machine learning and deep learning have enabled automated recognition of plant diseases, often with over 90% accuracy. However, most models still struggle to distinguish lesions from background features such as shadows, soil, or healthy tissue, and these errors limit their usefulness for guiding pesticide application. To address these challenges, the researchers developed a method that directly targets lesion areas, improving assessment reliability.
The study (DOI: 10.1016/j.plaphe.2025.100058), published in Plant Phenomics on 26 May 2025 by Qi Wang’s team at Guizhou University, improves the accuracy and interpretability of plant leaf disease severity assessment, enabling more precise pesticide application and advancing sustainable agricultural management.
The location-guided lesion representation learning (LLRL) framework was designed to enhance the accuracy of plant leaf disease severity assessment by combining advanced network architectures with robust experimental validation. The system integrates three components: an image generation network (IG-Net), which employs a diffusion model to generate paired healthy–diseased images; a location-guided lesion representation learning network (LGR-Net), which leverages these pairs to isolate lesion areas and produce a dual-branch feature encoder (DBF-Enc) enriched with lesion-specific knowledge; and a hierarchical lesion fusion assessment network (HLFA-Net), which fuses these features to deliver precise severity classification.

To validate the method, the researchers built a dataset of 12,098 images covering apple, potato, and tomato leaf diseases, supplemented by more than 10,000 generated pairs, and implemented the experiments in Python 3.8.19 with the PyTorch 1.13.1 framework and GPU acceleration. LGR-Net was trained with the Adam optimizer, a weight decay of 1 × 10⁻⁴, and a scheduled learning-rate decay across 4,000 iterations, while HLFA-Net was trained for 100 epochs at a fixed learning rate of 0.01, sharing and freezing the DBF-Enc module.

Compared against 12 benchmark models across real, generated, and mixed datasets, LLRL consistently outperformed the alternatives, achieving at least 1% higher accuracy and reaching up to 92.4% when pre-training and attention mechanisms were combined. Visualization experiments confirmed its ability to precisely localize lesion regions (IoU = 0.934, F1 = 0.9615), and feature maps showed progressive concentration on lesions at lower resolutions. Grad-CAM analysis revealed attention patterns that shifted toward lesions as severity increased, consistent with established pathology knowledge. The framework generalized well across crop species, with particularly robust results on the tomato and potato datasets, highlighting its potential as a versatile and reliable tool for agricultural disease management.
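For readers who want a concrete picture of the two-stage training recipe described above, the sketch below shows a minimal PyTorch setup in that spirit. It is an illustration only, not the authors' released code: the ResNet-18 backbones, the stage-1 base learning rate and step schedule, the stage-2 optimizer, the fusion head, and the number of severity grades are all assumptions made for this sketch. Only the weight decay of 1e-4, the frozen shared encoder, and the fixed stage-2 learning rate of 0.01 come from the settings reported in the paper.

import torch
import torch.nn as nn
from torchvision import models

class DualBranchEncoder(nn.Module):
    # Hypothetical stand-in for DBF-Enc: one branch encodes the generated
    # healthy reference image, the other encodes the diseased image.
    def __init__(self):
        super().__init__()
        self.healthy_branch = models.resnet18(weights=None)   # backbone choice is an assumption
        self.diseased_branch = models.resnet18(weights=None)
        self.healthy_branch.fc = nn.Identity()                # keep the 512-d feature vector
        self.diseased_branch.fc = nn.Identity()

    def forward(self, healthy_img, diseased_img):
        return self.healthy_branch(healthy_img), self.diseased_branch(diseased_img)

class SeverityHead(nn.Module):
    # Hypothetical HLFA-style classifier: fuses both feature vectors and
    # predicts one of num_levels severity grades (the count is an assumption).
    def __init__(self, feat_dim=512, num_levels=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * 2, 256),
            nn.ReLU(),
            nn.Linear(256, num_levels),
        )

    def forward(self, h, d):
        return self.classifier(torch.cat([h, d], dim=1))

encoder = DualBranchEncoder()
head = SeverityHead()

# Stage 1 (lesion representation learning): Adam with the reported weight
# decay of 1e-4; the base LR and the step schedule here are assumptions.
opt1 = torch.optim.Adam(encoder.parameters(), lr=1e-3, weight_decay=1e-4)
sched1 = torch.optim.lr_scheduler.StepLR(opt1, step_size=1000, gamma=0.1)

# Stage 2 (severity assessment): freeze the shared encoder and train only
# the head at the reported fixed learning rate of 0.01 (optimizer assumed).
for p in encoder.parameters():
    p.requires_grad = False
opt2 = torch.optim.SGD(head.parameters(), lr=0.01)

# Smoke test with dummy image pairs.
healthy = torch.randn(2, 3, 224, 224)
diseased = torch.randn(2, 3, 224, 224)
h, d = encoder(healthy, diseased)
print(head(h, d).shape)  # torch.Size([2, 4]) -> severity logits

Freezing the shared encoder in stage 2 mirrors the release's description of HLFA-Net "sharing and freezing the DBF-Enc module," so the severity head learns on top of lesion-aware features rather than re-learning them from scratch.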
By enabling accurate grading of disease severity, LLRL provides a powerful foundation for precision pesticide application. Farmers could use smartphone photos of leaves to instantly assess disease progression and receive guidance on dosage and timing. At larger scales, drones and satellite imaging could integrate the system for automated monitoring across entire fields, significantly reducing manual inspection demands. This not only saves costs but also minimizes unnecessary pesticide use, reducing environmental pollution and safeguarding farmer income.
###
References
DOI: https://doi.org/10.1016/j.plaphe.2025.100058
Funding information
This research was supported by the National Key R&D Program of China (2024YFE0214300), the Guizhou Provincial Science and Technology Projects ([2024]002, CXTD[2023]027), the Guizhou Province Youth Science and Technology Talent Project ([2024]317), the Guiyang Guian Science and Technology Talent Training Project ([2024] 2-15), and the Talent Introduction Program of Guizhou University (Grant No. (2021)89).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that advances all aspects of plant phenotyping, from the cell to the plant population level, using innovative combinations of sensor systems and data analytics. The journal also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. Plant Phenomics thus contributes to advancing plant sciences and agriculture, forestry, and horticulture by addressing key scientific challenges in the area of plant phenomics.
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.