Photonic Processor May Streamline 6G Signal Processing

With more connected devices requiring greater bandwidth for activities like teleworking and cloud computing, managing the limited wireless spectrum available to all users will become increasingly difficult.
Engineers are turning to artificial intelligence to manage wireless spectrum more efficiently, aiming to reduce latency and enhance performance. However, most AI techniques used to classify and process wireless signals consume significant power and struggle to operate in real time.
MIT Unveils Ultrafast Optical AI Chip for Wireless Signal Processing
To address this, MIT researchers have developed a new AI hardware accelerator tailored for wireless signal processing. Their optical processor uses light to carry out machine learning tasks, enabling it to classify wireless signals within nanoseconds. The study is published in Science Advances.
This photonic chip is roughly 100 times faster than leading digital alternatives and achieves about 95% accuracy in signal classification. It’s also scalable and adaptable, making it suitable for a range of high-performance computing tasks—while being more compact, lightweight, cost-effective, and energy-efficient than traditional digital AI accelerators.
The device holds strong potential for future 6G applications, such as cognitive radios that can boost data rates by adjusting wireless modulation formats based on real-time environmental conditions.
Expanding Applications Beyond Signal Processing
By allowing edge devices to run deep-learning computations instantly, this new hardware accelerator could significantly accelerate tasks far beyond signal processing. For example, it could enable autonomous vehicles to respond instantly to environmental shifts or allow smart pacemakers to continuously track and assess a patient’s heart health.
“There are many applications that could benefit from edge devices capable of analyzing wireless signals,” says Dirk Englund, professor in MIT’s Department of Electrical Engineering and Computer Science, and senior author of the paper. “What we’ve introduced could pave the way for real-time, dependable AI inference. This is just the beginning of something with far-reaching impact.”
Collaborative Effort Behind the Breakthrough Optical AI Research
Englund co-authored the paper with lead author Ronald Davis III, Ph.D.; Zaijun Chen, former MIT postdoc and now assistant professor at the University of Southern California; and Ryan Hamerly, visiting scientist at MIT’s Research Laboratory of Electronics (RLE) and senior scientist at NTT Research.
Current digital AI accelerators for wireless signal processing typically convert signals into images and process them through deep-learning models for classification. While this method delivers high accuracy, the heavy computational demands of deep neural networks make it unsuitable for time-critical applications.
Optical systems offer a faster, more energy-efficient alternative by using light to encode and process data. However, general-purpose optical neural networks have struggled to achieve high performance in signal processing while remaining scalable.
A Tailored Optical Neural Network for Signal Processing
To address this, the researchers developed a specialized optical neural network architecture for signal processing, named the multiplicative analog frequency transform optical neural network (MAFT-ONN).
MAFT-ONN solves scalability challenges by encoding all signal data and conducting machine-learning operations entirely in the frequency domain—prior to digitizing the wireless signals.
The team designed the network to perform, directly in the optical pathway, both the linear and nonlinear operations that deep learning requires. This approach lets them use just one MAFT-ONN device per network layer, unlike other techniques that require a separate device for each neuron.
“With this method, we can pack 10,000 neurons onto a single device and perform all the necessary multiplications in one step,” explains Davis.
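The claim above can be made concrete with a small sketch. In conventional terms, "all the necessary multiplications in one step" means the whole layer reduces to a single matrix-vector product rather than one operation per neuron. The sketch below is an illustrative digital analogy only, not the optical computation itself; the input size of 256 frequency bins and the ReLU nonlinearity are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the article says one device can host
# 10,000 neurons; the 256 input frequency bins are assumed here.
n_inputs = 256
n_neurons = 10_000

x = rng.standard_normal(n_inputs)               # frequency-domain input
W = rng.standard_normal((n_neurons, n_inputs))  # one layer's weights

# Every neuron's multiplications happen in one step: a single
# matrix-vector product for the whole layer.
pre_activation = W @ x                          # shape: (10000,)

# Stand-in nonlinearity (the optical device applies its own).
activation = np.maximum(pre_activation, 0.0)

print(activation.shape)
```

In the digital analogy this is one `W @ x` call; in MAFT-ONN the analogous products are formed in analog, in the frequency domain, before the signal is ever digitized.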
Photoelectric Multiplication Powers Efficiency and Scalability
They achieve this efficiency through photoelectric multiplication, in which a photodetector naturally outputs a current proportional to the product of the optical signals it receives, so the multiplications come essentially for free. The technique also makes the optical neural network easy to scale by adding more layers without additional complexity.
MAFT-ONN processes incoming wireless signals by analyzing their data and passing the results to the edge device for further tasks. For example, by identifying a signal’s modulation type, MAFT-ONN allows the device to recognize the signal format and extract the embedded information.
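To illustrate the pipeline described above (not the researchers' actual method), here is a toy sketch of "classify the modulation, then extract the information": a crude stand-in classifier distinguishes BPSK from QPSK by counting phase clusters, and the result selects the matching demodulator. All function names and the cluster-counting heuristic are invented for this example.

```python
import numpy as np

def classify_modulation(iq: np.ndarray) -> str:
    # Toy stand-in for the optical classifier: count distinct phase
    # quadrants in clean IQ samples. Real classification is far richer.
    phases = np.angle(iq)
    clusters = np.unique(np.floor(phases / (np.pi / 2)))
    return "QPSK" if len(clusters) >= 4 else "BPSK"

def demodulate(iq: np.ndarray, scheme: str) -> np.ndarray:
    if scheme == "BPSK":
        return (iq.real > 0).astype(int)          # one bit per symbol
    # QPSK: two bits per symbol, from the signs of I and Q
    bits = np.stack([iq.real > 0, iq.imag > 0], axis=1).astype(int)
    return bits.ravel()

# Example: a clean QPSK burst cycling through all four constellation points
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
scheme = classify_modulation(symbols)   # -> "QPSK"
bits = demodulate(symbols, scheme)
```

The point of the sketch is the dispatch: once the format is known, the device knows how to interpret the signal and recover the embedded bits.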
A major challenge in developing MAFT-ONN was figuring out how to translate machine-learning computations onto optical hardware.
Customizing Machine Learning to Harness Optical Hardware
“We couldn’t just apply a standard machine-learning framework—we had to tailor it specifically to our hardware and find ways to leverage the underlying physics to perform the desired computations,” explains Davis.
When tested through simulations for signal classification, the optical neural network achieved 85% accuracy in a single measurement and could quickly reach over 99% accuracy with multiple measurements. MAFT-ONN completed the entire classification process in just 120 nanoseconds.
“The more you measure, the more accurate it becomes. Since MAFT-ONN performs inference in nanoseconds, you gain accuracy without sacrificing speed,” Davis adds.
While leading digital RF systems handle machine-learning inference in microseconds, optical systems can achieve it in nanoseconds—or even faster.
Looking ahead, the team aims to implement multiplexing strategies to increase the processing capacity and scale of MAFT-ONN. They also plan to expand the architecture to support more advanced deep learning models, such as transformers and large language models (LLMs).
Read the original article on: Techxplore