Image: Researchers have developed a new hardware platform for AI accelerators based on photonic integrated circuits on a silicon chip.
Credit: Bassem Tossoun / IEEE JSTQE
The emergence of AI has profoundly transformed numerous industries. Driven by deep learning and Big Data, AI requires significant processing power to train its models. While existing AI infrastructure relies on graphics processing units (GPUs), the substantial processing demands and energy costs of operating them remain key challenges. A more efficient and sustainable AI infrastructure would pave the way for future advances in AI.
A recent study published in the IEEE Journal of Selected Topics in Quantum Electronics presents a novel AI acceleration platform based on photonic integrated circuits (PICs), which offer superior scalability and energy efficiency compared to conventional GPU-based architectures. The study, led by Dr. Bassem Tossoun, a Senior Research Scientist at Hewlett Packard Labs, shows how PICs built with III-V compound semiconductors can efficiently execute AI workloads. Unlike traditional AI hardware, which runs deep neural networks (DNNs) on electronic circuits, photonic AI accelerators implement optical neural networks (ONNs), which process signals at the speed of light with minimal energy loss.
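To make the distinction concrete, here is a minimal, purely illustrative numerical sketch (written in Python with NumPy; it is not the authors' design). In an ONN, the weight matrix is realized physically, for example as phase and coupling settings in a mesh of interferometers, so a forward pass corresponds to light propagating through the circuit rather than to clocked multiply-accumulate operations. The function name onn_layer and the matrix sizes below are hypothetical, chosen only to illustrate the mapping.

# Illustrative sketch only: emulate the linear operation that a photonic
# mesh would perform passively in the optical domain.
import numpy as np

rng = np.random.default_rng(0)

def onn_layer(weights: np.ndarray, optical_input: np.ndarray) -> np.ndarray:
    # On chip, 'weights' would be programmed as phase-shifter settings;
    # the matrix-vector product happens as light traverses the mesh.
    return weights @ optical_input

W = rng.normal(size=(4, 4))   # hypothetical 4x4 weight matrix
x = rng.normal(size=4)        # input vector, encoded in optical amplitudes
y = onn_layer(W, x)           # one pass of light instead of many clocked MACs
print(y)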
“While silicon photonics are easy to manufacture, they are difficult to scale for complex integrated circuits. Our device platform can be used as the building blocks for photonic accelerators with far greater energy efficiency and scalability than the current state-of-the-art”, explains Dr. Tossoun.
The team used a heterogeneous integration approach to fabricate the hardware, combining silicon photonics with III-V compound semiconductors to integrate lasers and optical amplifiers directly on chip, reducing optical losses and improving scalability. III-V semiconductors also enable PICs of greater density and complexity. PICs built on these materials can perform all of the operations required to support neural networks, making them prime candidates for next-generation AI accelerator hardware.
Fabrication started with silicon-on-insulator (SOI) wafers with a 400 nm-thick silicon device layer. Lithography and dry etching were followed by doping for the metal-oxide-semiconductor capacitor (MOSCAP) devices and avalanche photodiodes (APDs). Next, silicon and germanium were selectively grown to form the absorption, charge, and multiplication layers of the APDs. III-V compound semiconductors (such as InP or GaAs) were then integrated onto the silicon platform through die-to-wafer bonding. A thin gate oxide layer (Al₂O₃ or HfO₂) was added to improve device efficiency, and finally a thick dielectric layer was deposited for encapsulation and thermal stability.
“The heterogeneous III/V-on-SOI platform provides all essential components required to develop photonic and optoelectronic computing architectures for AI/ML acceleration. This is particularly relevant for analog ML photonic accelerators, which use continuous analog values for data representation”, Dr. Tossoun notes.
This unique photonic platform achieves wafer-scale integration of all of the devices required to build an optical neural network on a single photonic chip, including active devices such as on-chip lasers and amplifiers, high-speed photodetectors, energy-efficient modulators, and non-volatile phase shifters. This enables tensorized optical neural network (TONN)-based accelerators with a footprint-energy efficiency 2.9 × 10² times greater than that of other photonic platforms and 1.4 × 10² times greater than that of the most advanced digital electronics.
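For context, footprint-energy efficiency is commonly expressed as compute throughput normalized by chip area and power; the form below is an assumed, illustrative definition rather than the exact figure of merit used in the paper:

\[
\eta \;=\; \frac{\text{throughput}\ [\mathrm{MAC/s}]}{\text{footprint}\ [\mathrm{mm^2}] \times \text{power}\ [\mathrm{W}]}
\]

Read this way, the reported factors correspond to ratios \(\eta_{\text{TONN}}/\eta_{\text{photonic ref.}} \approx 2.9 \times 10^{2}\) and \(\eta_{\text{TONN}}/\eta_{\text{digital}} \approx 1.4 \times 10^{2}\).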
This is a breakthrough technology for AI/ML acceleration: it reduces energy costs, improves computational efficiency, and enables future AI-driven applications across a range of fields. Going forward, it could allow datacenters to accommodate more AI workloads and help solve complex optimization problems.
By addressing today's computational and energy challenges, the platform paves the way for robust and sustainable AI accelerator hardware.
***
Reference
Authors: Bassem Tossoun¹, Xian Xiao¹, Stanley Cheung¹, Yuan Yuan¹, Yiwei Peng¹, Sudharsanan Srinivasan², George Giamougiannis³, Zhihong Huang¹, Prerana Singaraju¹, Yanir London¹, Matěj Hejda¹, Sri Priya Sundararajan¹, Yingtao Hu¹, Zheng Gong¹, Jongseo Baek¹, Antoine Descos¹, Morten Kapusta¹, Fabian Böhm¹, Thomas Van Vaerenbergh¹, Marco Fiorentino¹, Geza Kurczveil¹, Di Liang⁴, Raymond G. Beausoleil¹
Title of original paper: Large-Scale Integrated Photonic Device Platform for Energy-Efficient AI/ML Accelerators
Journal: IEEE Journal of Selected Topics in Quantum Electronics
DOI: https://doi.org/10.1109/JSTQE.2025.3527904
Affiliations:
¹Hewlett Packard Labs, USA
²Indian Institute of Technology Madras, India
³Microsoft Research, Cambridge, UK
⁴University of Michigan, USA
Article publication date: 9 January 2025