News Release

Artificial intelligence-driven inverse lithography technology

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS

Fig. 1 The lithography process; the projection lithography system. (Credit: Yang, Y., Liu, K., Gao, Y. et al.)

Introduction

The advancement of semiconductor manufacturing is a key driver of innovation in electronic devices. As Moore’s Law progresses, lithography has become a critical process in integrated circuit fabrication. Lithography systems have historically improved resolution by reducing the exposure wavelength or increasing the numerical aperture, but as feature sizes shrink, these approaches face technical bottlenecks and mounting cost pressure.
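
This trade-off is captured by the Rayleigh resolution criterion, the standard scaling relation in optical lithography:

    R = k_1 \, \lambda / \mathrm{NA}

where R is the minimum printable feature size, λ is the exposure wavelength, NA is the numerical aperture, and k1 is the process factor. Hardware improvements reduce λ or raise NA; computational approaches instead work to lower k1.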


Computational lithography instead improves resolution by optimizing the process factor k1. Inverse lithography technology (ILT), with its global optimization capability, has attracted attention from both academia and industry. In recent years, artificial intelligence (AI) has brought breakthroughs to ILT, improving performance in both lithography modeling and mask optimization.


Recently, a research team from Tsinghua University published a review titled “Advancements and challenges in inverse lithography technology: a review of artificial intelligence-based approaches” in Light: Science & Applications. The paper summarizes the principles, development, and AI applications of ILT, and discusses its challenges and prospects.


The corresponding author is Prof. Liangcai Cao of Tsinghua University, with Ph.D. candidate Yixin Yang as the first author. The other contributors are Ph.D. candidates Kexuan Liu and Yunhui Gao, and Prof. Chen Wang.


Computational Lithography

The lithography process comprises several steps, including coating, pre-baking, exposure, post-exposure baking, development, etching, resist stripping, and metrology (Fig. 1). Lithography systems have evolved through contact, proximity, and projection configurations. Contact lithography is prone to mask contamination and damage, while proximity lithography is constrained by wafer flatness. The projection lithography machine, introduced in 1973, enabled pattern transfer by optically projecting the mask onto the wafer.


Resolution enhancement techniques include off-axis illumination, optical proximity correction, and phase-shift masks. Computational lithography models the lithography process and optimizes the illumination source and mask design according to the target wafer pattern (Fig. 2).
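
As a rough illustration of what modeling the lithography process means, the Python sketch below implements a toy forward model: a Gaussian blur stands in for the point-spread function of the projection optics, and a sigmoid threshold stands in for the photoresist response. All function names and parameter values here are illustrative placeholders, not the models used in the review.

    # Toy forward lithography model: mask -> aerial image -> resist pattern.
    # The Gaussian blur stands in for the projection optics; the sigmoid
    # stands in for the photoresist threshold response. All parameter
    # values are illustrative placeholders.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward_model(mask, blur_sigma=2.0, threshold=0.5, steepness=25.0):
        aerial = gaussian_filter(mask.astype(float), sigma=blur_sigma)  # optics
        return 1.0 / (1.0 + np.exp(-steepness * (aerial - threshold)))  # resist

    # Example: a 64x64 mask with a single rectangular feature.
    mask = np.zeros((64, 64))
    mask[24:40, 16:48] = 1.0
    wafer = forward_model(mask)  # predicted printed pattern, values in [0, 1]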


Computational lithography has evolved through rule-based optical proximity correction (RBOPC), model-based optical proximity correction (MBOPC), and ILT (Fig. 3). ILT characterizes the optical imaging process using Hopkins theory and the transmission cross-coefficient, and solves the inverse problem with gradient-based optimization algorithms so that the predicted wafer pattern closely approximates the target design.
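
To make the inverse step concrete, here is a minimal gradient-descent ILT loop built on the same toy forward model sketched above. A production implementation would use Hopkins imaging with the transmission cross-coefficient; the Gaussian-blur optics and all parameter values here are deliberate simplifications. Because the Gaussian blur is self-adjoint and the sigmoids act pixel-wise, the gradient can be written out by hand.

    # Minimal gradient-descent ILT loop on the toy forward model.
    # The mask is parameterized through a sigmoid so pixel values stay in
    # [0, 1]. Illustrative only: real ILT replaces the blur with a
    # rigorous Hopkins imaging model.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    target = np.zeros((64, 64))
    target[24:40, 16:48] = 1.0            # desired wafer pattern
    theta = np.zeros_like(target)         # unconstrained mask parameters
    sigma, thr, k, lr = 2.0, 0.5, 25.0, 0.5

    for step in range(200):
        mask = sigmoid(theta)                  # mask pixels in [0, 1]
        aerial = gaussian_filter(mask, sigma)  # toy projection optics
        wafer = sigmoid(k * (aerial - thr))    # toy resist response
        residual = wafer - target              # imaging error
        # Backpropagate by hand: the blur is self-adjoint, sigmoids are local.
        g = gaussian_filter(2 * residual * k * wafer * (1 - wafer), sigma)
        theta -= lr * g * mask * (1 - mask)    # gradient-descent update

    optimized_mask = sigmoid(theta)  # mask whose print approximates the target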


ILT was first proposed in 1981 by researchers from the University of Wisconsin-Madison. Its first industrial application came in 2003 from Luminescent Technologies, with commercialization accelerated by Intel. In 2010, the introduction of a regularization framework and the conjugate gradient algorithm improved ILT’s computational efficiency. The integration of deep learning in 2017 marked a new stage for ILT, as Advanced Semiconductor Materials Lithography (ASML) adopted convolutional neural networks to optimize the lithography process. Graphics processing unit (GPU)-based computing platforms have further advanced ILT implementation. Over four decades, ILT has evolved from a concept into a critical technology in semiconductor manufacturing (Fig. 4).


AI-Driven Inverse Lithography Technology

AI has brought transformative breakthroughs to ILT (Fig. 5). In lithography modeling, data-driven strategies have improved the computational efficiency of thick-mask near-field simulation and photoresist effect modeling. Deep learning frameworks such as convolutional neural networks and generative adversarial networks have improved the accuracy of the mapping from masks to wafer patterns, reaching predictive capability on par with physical simulation in extreme ultraviolet lithography. The integration of AI effectively mitigates the trade-off between precision and efficiency in lithography modeling.
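
As a sketch of the kind of convolutional surrogate described here, the snippet below (assuming PyTorch, which the review does not prescribe) defines a small network trained to map mask clips to wafer patterns produced by a slower physical simulator. The architecture, sizes, and random tensors are placeholders.

    # Sketch of a convolutional surrogate for lithography modeling: a small
    # CNN that learns the mask -> wafer mapping from simulated pairs.
    # Architecture and tensor sizes are illustrative placeholders.
    import torch
    import torch.nn as nn

    surrogate = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=5, padding=2), nn.Sigmoid(),
    )

    # One training step against pairs from a slow physical simulator
    # (random tensors stand in for real data here).
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    masks = torch.rand(8, 1, 64, 64)   # placeholder mask clips
    wafers = torch.rand(8, 1, 64, 64)  # placeholder simulator outputs
    loss = nn.functional.binary_cross_entropy(surrogate(masks), wafers)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()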


AI is also reshaping ILT’s algorithmic framework. Hybrid architectures that combine physical models with neural networks maintain the physical consistency of the optical system while leveraging the advantages of data-driven learning. Generative models enable rapid synthesis of high-fidelity mask patterns, and graph neural networks handle complex design constraints in layout optimization. AI-driven ILT enhances imaging quality and overcomes computational bottlenecks, laying the foundation for large-scale industrial adoption.
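
One way to read such hybridization, sketched below under the same toy-optics assumption as before: a network proposes a mask, and a differentiable physical forward model scores the printed result, so training gradients flow through the physics itself. This illustrates the general pattern, not any specific method from the review.

    # Physics-embedded training sketch: a network proposes a mask, and a
    # differentiable toy forward model (Gaussian blur + sigmoid resist)
    # scores the predicted wafer, so gradients flow through the physics.
    # Illustrative placeholders throughout.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Differentiable stand-in for the optics: a fixed 9x9 Gaussian kernel.
    x = torch.arange(9) - 4.0
    g = torch.exp(-x**2 / (2 * 2.0**2))
    kernel = (g[:, None] * g[None, :] / g.sum() ** 2).view(1, 1, 9, 9)

    def physics(mask):
        aerial = F.conv2d(mask, kernel, padding=4)
        return torch.sigmoid(25.0 * (aerial - 0.5))

    corrector = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
    )

    target = torch.zeros(1, 1, 64, 64)
    target[..., 24:40, 16:48] = 1.0
    mask = corrector(target)                  # network proposes a mask
    loss = F.mse_loss(physics(mask), target)  # physics scores the result
    loss.backward()                           # gradients flow through both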


Prospects

ILT still faces multiple challenges (Fig. 6). In terms of computational efficiency, ILT requires far more processing time than conventional optical proximity correction, which has largely limited its application to local hotspot correction. Full-chip ILT optimization requires partitioning the layout into smaller units, which introduces boundary stitching artifacts. In mask manufacturing, electron-beam direct writing of curvilinear patterns remains immature and time-consuming, so curvilinear ILT masks must be Manhattanized to comply with manufacturing rules. And although AI techniques enhance the precision and efficiency of lithography modeling, deep learning models still suffer from limited interpretability and heavy dependence on training data.


Looking ahead, ILT development will focus on the integration of AI, model and algorithm optimization, and mask fabrication innovations (Fig. 6). Computational efficiency can be improved through GPU acceleration. Physics-embedded deep learning approaches retain the advantages of data-driven algorithms while enhancing physical consistency and interpretability. Multi-beam mask writing (MBMW) technology is expected to break through the manufacturing bottleneck for curvilinear masks, improving both resolution and throughput. The development of automated mask generation workflows, multi-scale physical modeling frameworks, and source-mask co-optimization will advance ILT in advanced integrated circuit manufacturing.

