image: A programmable LED array provides multi-angle plane-wave illumination; the transmitted light is collected by the microscope optics and recorded on a camera sensor for subsequent computational reconstruction.
Credit: Advanced Devices & Instrumentation
With the rapid progress of computational imaging and artificial intelligence, overcoming the depth-of-field limitation while maintaining a wide field of view has become a key challenge in microscopy. Classical Fourier ptychographic microscopy (FPM, Figure 1) uses an LED array to provide angle-varied illumination that shifts the sample spectrum in the Fourier domain, effectively extending the system’s synthetic numerical aperture beyond that of the objective lens for high-resolution imaging. However, when the specimen exhibits substantial axial variation, reconstruction on a single focal plane leads to severe blurring, or even loss of structural information, in out-of-focus regions (Figure 2).
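To make the spectrum-shifting idea concrete, below is a minimal NumPy sketch of the standard FPM forward model (the function and variable names are illustrative, not the authors’ code): each LED angle centers the objective’s pupil on a different sub-band of the high-resolution spectrum, and the camera records only the intensity of the resulting low-resolution field.

```python
import numpy as np

def fpm_capture(hr_spectrum, pupil, cx, cy):
    """Simulate one low-resolution FPM measurement.

    hr_spectrum : centered 2D FFT of the high-resolution complex object
    pupil       : objective pupil function, shape (m, m)
    cx, cy      : pupil center in the HR spectrum, set by the LED angle (pixels)
    """
    m = pupil.shape[0]
    # Oblique illumination shifts the object spectrum, so each LED lets the
    # fixed objective pupil sample a different sub-band of the HR spectrum.
    sub = hr_spectrum[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2]
    lowres_field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
    return np.abs(lowres_field) ** 2  # the camera records intensity only
```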
To tackle this issue, a team led by Professor Edmund Y. Lam of the University of Hong Kong, together with doctoral student Jingqian Wu and other members, adopted the concept of three-dimensional implicit neural representation (INR). The sample across the entire depth range is parameterized as a 3D feature volume, and two multilayer perceptrons model the amplitude and phase components of the complex field. Unlike explicit voxel grids or image stacks, the implicit representation learns a continuous mapping from spatial–axial coordinates to physical quantities. This enables high-fidelity modeling of the 3D optical field with a compact set of parameters and avoids the heavy memory cost of storing dense volumetric data. The model details are shown in Figure 3.
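In code, such a representation amounts to a small learnable feature volume queried by trilinear interpolation and decoded by two coordinate MLPs. The PyTorch sketch below is a minimal illustration under assumed sizes and names (feature channels, grid resolution, MLP width), not the authors’ implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class INRField(nn.Module):
    """Continuous 3D representation of the complex optical field:
    a compact feature volume plus two MLP heads (amplitude, phase)."""

    def __init__(self, feat_ch=16, grid=(32, 64, 64), hidden=64):
        super().__init__()
        # Learnable feature volume (1, C, D, H, W): far smaller than
        # storing the dense complex field at every voxel.
        self.volume = nn.Parameter(0.01 * torch.randn(1, feat_ch, *grid))
        def head():
            return nn.Sequential(nn.Linear(feat_ch + 3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.amp_head, self.phase_head = head(), head()

    def forward(self, xyz):
        # xyz: (N, 3) spatial-axial coordinates in [-1, 1]; trilinear
        # interpolation makes the mapping continuous in space and depth.
        g = xyz.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.volume, g, align_corners=True)
        feats = feats.view(self.volume.shape[1], -1).t()   # (N, C)
        h = torch.cat([feats, xyz], dim=-1)
        amp = F.softplus(self.amp_head(h))                 # amplitude >= 0
        phase = self.phase_head(h)                         # unconstrained
        return amp * torch.exp(1j * phase)                 # complex field
```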
Building on this representation, a physics-informed 3D focus-aware weighting map is introduced to encode the degree of focus at each spatial location and depth. During training, the network automatically assigns larger weights to well-focused regions and smaller weights to blurred ones. A weighted fusion along the depth direction then yields a single all-in-focus 2D reconstruction. This process is analogous to manual focusing but implemented in a fully differentiable, end-to-end learnable manner guided by the physical propagation model.
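A minimal sketch of the depth-wise fusion step, assuming the focus-aware map is held as a learnable logit volume (the paper’s physics-informed construction is richer than this):

```python
import torch
import torch.nn.functional as F

def all_in_focus_fusion(stack, focus_logits):
    """Fuse per-depth reconstructions into one all-in-focus image.

    stack        : (D, H, W) images reconstructed at each depth
    focus_logits : (D, H, W) learnable focus scores per location and depth
    """
    # Softmax along depth converts focus scores into per-pixel weights that
    # sum to one, so in-focus depths dominate the fused result.
    weights = F.softmax(focus_logits, dim=0)
    return (weights * stack).sum(dim=0)  # (H, W) all-in-focus image
```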
Since ground-truth all-in-focus images are not available for real 3D samples, the researchers further devise an unsupervised sharpness loss based on gradient sparsity. By encouraging stronger and sparser image gradients, this loss term jointly optimizes the 3D weighting map and the implicit representation toward sharper and more structurally faithful reconstructions, without the need for labeled training data.
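One plausible instantiation of such a gradient-sparsity objective, not necessarily the paper’s exact form, is the L1/L2 ratio of image gradients, which decreases when gradients are strong and sparse, i.e., when edges are crisp rather than smeared by defocus:

```python
import torch

def sharpness_loss(img, eps=1e-8):
    """L1/L2 ratio of image gradients: small when gradients are strong and
    sparse (sharp edges), large when they are weak and diffuse (defocus)."""
    gx = img[:, 1:] - img[:, :-1]   # horizontal finite differences
    gy = img[1:, :] - img[:-1, :]   # vertical finite differences
    g = torch.cat([gx.flatten(), gy.flatten()])
    return g.abs().sum() / (g.norm() + eps)
```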
To evaluate the effectiveness of the proposed method, the researchers conducted experiments on both tilted 3D blood smear specimens and 2D biological samples. As illustrated in Figure 4, for samples with substantial depth variation, both conventional FPM and existing neural approaches suffer from severe defocus blur across different regions. In contrast, the proposed framework successfully reconstructs a genuinely all-in-focus wide-field image, preserving structural consistency and clarity over the entire field of view.
For 2D samples (Figure 5), traditional methods require selecting an appropriate focal plane to obtain a sharp reconstruction. The proposed method, however, leverages its learned 3D weighting map to automatically determine the focused regions, producing sharp and structurally faithful results without manual focus selection.
Moreover, the method substantially improves downstream segmentation performance. As shown in Figure 6, compared with conventional and neural baselines, the proposed approach recovers clearer cellular boundaries across the whole field and achieves more reliable segmentation results using Cellpose. This demonstrates that all-in-focus reconstruction not only enhances visual quality but also provides tangible scientific value in quantitative biological image analysis.
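For readers who want to reproduce the downstream step, segmentation with Cellpose takes only a few lines (API as in Cellpose 2.x; `aif_image` stands in for the all-in-focus reconstruction):

```python
import numpy as np
from cellpose import models  # pip install cellpose

# Placeholder input; in practice, pass the all-in-focus reconstruction.
aif_image = np.random.rand(512, 512).astype(np.float32)

model = models.Cellpose(model_type='cyto')  # pretrained cytoplasm model
masks, flows, styles, diams = model.eval(aif_image, diameter=None,
                                         channels=[0, 0])  # grayscale input
print(f"{masks.max()} cells segmented")
```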
By integrating 3D implicit neural representations with a physics-based propagation model, the authors demonstrate all-in-focus reconstruction of samples with significant depth variation on a standard FPM platform. The proposed method constructs a 3D feature volume, learns a physics-guided focus-aware weighting map, and employs an unsupervised sharpness loss, enabling the recovery of structurally sharp and globally consistent all-in-focus images directly from experimental data without 3D ground-truth labels. Experimental results show that the method greatly reduces defocus blur and markedly improves the accuracy of downstream tasks such as cell segmentation, providing more reliable image inputs for quantitative analysis of 3D biological specimens.
In future work, the team aims to extend the framework to a broader range of complex 3D samples and explore integration with advanced FPM hardware designs. Building datasets that contain real or approximate all-in-focus reference images will further facilitate quantitative evaluation and enhancement of all-in-focus reconstruction algorithms across diverse application scenarios, thereby strengthening the foundation of computational microscopy and intelligent biomedical image analysis.
Journal
Advanced Devices & Instrumentation
Method of Research
Imaging analysis
Subject of Research
Not applicable
Article Title
All-in-Focus Fourier Ptychographic Microscopy via 3D Implicit Neural Representation
Article Publication Date
2-Dec-2025