New Chip Enables Lightning-Fast AI Computing

Engineers at the University of Pennsylvania have created a novel chip that utilizes light waves instead of electricity to conduct the intricate calculations required for training AI. This innovation holds the promise of significantly boosting the processing speed of computers while simultaneously decreasing their energy usage.
The new silicon-photonic (SiPh) chip marks a significant advancement by merging two lines of work: the pioneering research of Nader Engheta, Benjamin Franklin Medal Laureate and H. Nedwill Ramsey Professor, on manipulating materials at the nanoscale to perform mathematical computations with light, and the SiPh platform itself, which relies on silicon, an abundant and inexpensive element already used to mass-produce computer chips.

Exploring Light-Matter Interaction for Next-Generation Computing

The interaction between light waves and matter presents a potential pathway for creating computers that transcend the constraints of current chips, which are based on principles dating back to the early days of computing in the 1960s.

In a study published in Nature Photonics, Engheta’s team, alongside Firooz Aflatouni, Associate Professor of Electrical and Systems Engineering, outlines the development of this innovative chip.

Engheta explains that the two groups decided to collaborate, capitalizing on Aflatouni's research group's expertise in nanoscale silicon devices. Their aim was to create a platform capable of performing vector-matrix multiplication, a fundamental mathematical operation in the training and operation of neural networks, the architecture underpinning modern AI tools.
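To make the operation concrete, here is a minimal NumPy sketch of the vector-matrix multiplication at the heart of a neural-network layer. This is purely illustrative of the mathematics: the photonic chip performs the equivalent operation with light rather than with digital arithmetic, and the array sizes and values below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(4)        # input vector (e.g. a layer's activations)
W = rng.standard_normal((4, 3))   # weight matrix of a neural-network layer

# Vector-matrix multiplication: y[j] = sum over i of x[i] * W[i, j].
# In a trained network, this single operation is repeated billions of times,
# which is why accelerating it (optically or otherwise) matters so much.
y = x @ W

print(y.shape)  # a 4-element vector times a 4x3 matrix yields a 3-element vector
```

In a digital processor each multiply-accumulate is a discrete step; the promise of the photonic approach is that the scattering of light through the shaped silicon performs the whole sum at once, at the speed of light.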

Manipulating Silicon Thickness for Light Propagation Control

Engheta elaborates that rather than using a uniform silicon wafer, the approach involves thinning the silicon to around 150 nanometers in specific areas, enabling precise control over light propagation through the chip. These variations in thickness, without the need for additional materials, manipulate light scattering in predetermined patterns, facilitating rapid mathematical computations at the speed of light.

Aflatouni highlights that because the design already conforms to the constraints of the commercial foundry that manufactured the chips, it is poised for commercial deployment and could potentially be integrated into graphics processing units (GPUs), which are in high demand for developing new AI systems.

He explains, “They can incorporate the Silicon Photonics platform as an extension, enabling accelerated training and classification.”

In addition to its enhanced speed and energy efficiency, the chip developed by Engheta and Aflatouni offers privacy benefits. Since multiple computations can occur simultaneously, sensitive data no longer needs to be stored in a computer’s active memory, making a future computer powered by this technology virtually impervious to hacking attempts.

Aflatouni emphasizes, “No one can breach non-existent memory to access your data.”

Other contributors to the research include Vahid Nikkhah, Ali Pirmoradi, Farshid Ashtiani, and Brian Edwards from Penn Engineering.


Read the original article on: Phys.org

Read more: Can Graph Neural Networks Truly Predict Drug Molecule Effectiveness?
