News Release

Neural array meta-imaging

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS


Fig. 1 Neural array imaging system and the principle behind its ability to break the trade-off between aperture size, field of view, and imaging quality. (a) The system consists of a metalens array, a CMOS sensor, and a computing-layer chip. The metalens array modulates the light from the target scene, which is captured by the CMOS sensor and used as input to the reconstruction algorithm for real-time scene reconstruction. (b) Schematic diagram of the neural array imaging model and the traditional point-to-point imaging model. (c)-(d) and (f)-(g) compare the wavefront aberrations and the modulation transfer function (MTF) distribution of the traditional point-to-point imaging model and the neural array imaging model in large-aperture cameras. The black dashed lines represent a reference lower bound for the recoverable MTF constrained by the system noise level. (e) and (h) compare the performance map of the two imaging models across various metrics.


Credit: Xinbin Cheng et al.

Lightweight, high-quality imaging systems are increasingly in demand across fields such as security monitoring, drone inspection, machine vision, medical imaging, and consumer electronics. Traditional optical systems, composed of multiple bulky refractive lenses, deliver high imaging performance but remain difficult to miniaturize. In contrast, metalenses, built from planar metasurfaces, enable compact form factors but suffer from inherent chromatic aberrations that limit their overall performance in terms of aperture, field of view (FOV), bandwidth, and image quality.

 

Integrating metalenses with computational imaging shifts part of the aberration correction from optics to algorithms, offering a potential route to combine ultrathin design with high-quality imaging. However, no existing work has successfully achieved this goal, primarily due to the intrinsic trade-offs imposed by the point-to-point imaging model. In this model, as aperture and FOV increase, wavefront aberrations grow rapidly across wavelengths, leading to a significant decline in the modulation transfer function (MTF), which quantifies the information transfer capability of the system. While post-processing algorithms can partially compensate for this degradation, their effectiveness is fundamentally limited by system noise. When the MTF falls below a noise-dependent threshold, accurate image reconstruction becomes unreliable, resulting in substantial quality loss.

 

To overcome these limitations, the joint team from Tongji University, Stanford University, and the Shanghai Institute of Technical Physics proposed a neural array imaging system that fundamentally resolves the long-standing trade-offs among aperture, FOV, bandwidth, and image quality.

 

As illustrated in Figure 1, the system adopts a novel neural array imaging model, in which a conventional large-aperture metalens is decomposed into an array of multiple small-aperture metalenses. This configuration ensures that chromatic and off-axis aberrations no longer scale severely with aperture size or FOV. For instance, under design parameters of 2.76 mm aperture, 50° FOV, and 400–700 nm spectral range, the neural array model reduces the wavefront error at 450 nm and 650 nm from over 90 wavelengths to approximately one wavelength. Correspondingly, the MTF curve shows that the system maintains its response above the noise-limited threshold, greatly improving recoverability and image quality.
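The scaling behind this reduction can be sketched with a toy chromatic-defocus model for a diffractive lens, whose focal length shifts as f(λ) = f0·λ0/λ: the peak defocus wavefront error grows roughly quadratically with aperture diameter, so splitting one large aperture into small sub-apertures shrinks the per-lens aberration dramatically. The numbers below are illustrative assumptions for a thin-lens model, not the paper's design values.

```python
import numpy as np

# Toy chromatic-defocus model for a diffractive (meta)lens: f(λ) = f0 * λ0/λ.
# Peak defocus wavefront error in waves: W ≈ Δz * D² / (8 f² λ).
lam0, f0 = 550e-9, 4.0e-3         # assumed design wavelength and focal length (m)

def wavefront_error_waves(D, lam):
    """Peak defocus wavefront error, in waves, at wavelength lam."""
    f_lam = f0 * lam0 / lam       # chromatic focal shift of a diffractive lens
    dz = abs(f0 - f_lam)          # defocus at the fixed sensor plane
    return dz * D**2 / (8 * f_lam**2 * lam)

D_full = 2.76e-3                  # single large aperture
D_sub = D_full / 4                # one lenslet of a hypothetical 4x4-style array
for lam in (450e-9, 650e-9):
    print(f"λ = {lam*1e9:.0f} nm: "
          f"full aperture ≈ {wavefront_error_waves(D_full, lam):.0f} waves, "
          f"sub-aperture ≈ {wavefront_error_waves(D_sub, lam):.1f} waves")
```

Because the error scales with D², each quartering of the aperture cuts the toy-model wavefront error sixteen-fold, which is the qualitative mechanism behind the tens-of-wavelengths reduction reported above.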

 

A key challenge lies in determining the optimal number and arrangement of small-aperture lenses. Simple periodic layouts introduce multiple zero-frequency points in the MTF, whereas breaking periodicity effectively avoids this issue. The research team addressed this through a neural-network-driven, end-to-end design framework for lens array optimization. Furthermore, they developed a Multi-scale Feature-domain Wiener Deconvolution Deep Fusion Network (MFWDFNet) to achieve physically interpretable and detail-preserving image reconstruction.
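MFWDFNet itself is not described in code here, but its physically interpretable core builds on classical frequency-domain Wiener deconvolution applied before learned fusion. The following single-channel sketch shows only that classical step, with a hypothetical Gaussian PSF and SNR parameter; it is a simplified stand-in, not the authors' implementation.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Classical frequency-domain Wiener deconvolution of a 2-D image."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Toy demonstration: blur a synthetic scene with a Gaussian PSF, then restore.
n = 64
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))       # assumed Gaussian PSF
psf /= psf.sum()
scene = np.zeros((n, n))
scene[20:44, 20:44] = 1.0                          # bright square target
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = wiener_deconvolve(blurred, psf)
print(f"blur error {np.abs(blurred - scene).mean():.3f} -> "
      f"restored error {np.abs(restored - scene).mean():.3f}")
```

In the reported network, this kind of deconvolution operates on multi-scale learned feature maps rather than raw pixels, which is what makes the pipeline both interpretable and detail-preserving.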

 

Based on the neural array imaging model and the end-to-end optimization strategy, the team built a neural array imaging prototype with an aperture of 2.76 mm, F-number of 1.45, 50° FOV, frame rate of 25 Hz, and spectral range of 400–700 nm. As shown in Figure 2, the system achieved an average MTF at 72 lp/mm comparable to a commercial compound lens (Edmund 33-300) and demonstrated excellent imaging performance in both indoor and outdoor environments. Meanwhile, the total optical track length was reduced from 57 mm to 4.3 mm, achieving a 13-fold reduction in thickness.

 

In addition, the system's potential for task-level visual perception was validated. As shown in Figure 3, the captured images were successfully processed by the DepthAnythingV2 model for depth estimation and the YOLOv5s model for object detection. The results indicate that the neural array imaging system achieves accuracy comparable to conventional cameras, demonstrating its potential for intelligent vision applications.

