We now know how to build and train photonic chips that use light to implement highly efficient neural networks. The result, which paves the way for artificial intelligence systems and, more broadly, for systems requiring very high computational capacity, has just been published in Science in a paper entitled "Experimentally realized in situ backpropagation for deep learning in photonic neural networks".
This work is the result of a five-year collaboration between the Photonic Devices group of the Department of Electronics, Information and Bioengineering of the Politecnico di Milano, coordinated by Prof. Andrea Melloni and Prof. Francesco Morichetti, and the Department of Electrical Engineering of Stanford University.
The research activity targets the development of programmable photonic processors for data transmission and processing, and these devices have now been employed to realize photonic neural networks. The device integrates a photonic accelerator on a silicon chip of a few mm² that exploits a programmable mesh of interferometers to implement artificial neurons capable of performing mathematical operations at very high speed (in less than 0.1 ns) and with very low energy consumption. To this end, training strategies for the photonic neurons have been developed that achieve computational accuracy comparable to that of a conventional neural network, but with considerable energy savings and much higher speed.
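The idea behind a programmable interferometer mesh can be sketched numerically: each Mach-Zehnder interferometer (MZI) acts on two optical modes as a tunable 2×2 unitary matrix, and a cascade of MZIs on neighboring modes composes into a larger unitary that performs a matrix-vector product on the optical amplitudes. The sketch below is purely illustrative (the `mzi` convention, the triangular mesh layout, and all function names are assumptions, not the architecture used on the actual chip):

```python
import numpy as np

def mzi(theta, phi):
    """Illustrative 2x2 transfer matrix of a Mach-Zehnder interferometer:
    two 50:50 couplers with an internal phase theta and an input phase phi.
    (One common convention; real device conventions may differ.)"""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 directional coupler
    return bs @ np.diag([np.exp(1j * theta), 1]) @ bs @ np.diag([np.exp(1j * phi), 1])

def mesh_unitary(n, thetas, phis):
    """Compose n*(n-1)/2 MZIs acting on neighboring mode pairs (a
    Reck-style triangular cascade) into an n x n unitary matrix."""
    U = np.eye(n, dtype=complex)
    k = 0
    for col in range(n - 1):
        for row in range(n - 1 - col):
            T = np.eye(n, dtype=complex)
            T[row:row + 2, row:row + 2] = mzi(thetas[k], phis[k])
            U = T @ U  # each MZI mixes two adjacent optical modes
            k += 1
    return U

# A 4-mode mesh with arbitrary phase settings transforms an input field vector.
rng = np.random.default_rng(0)
n = 4
n_mzi = n * (n - 1) // 2
U = mesh_unitary(n, rng.uniform(0, 2 * np.pi, n_mzi),
                    rng.uniform(0, 2 * np.pi, n_mzi))
x = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)  # light in the first input
y = U @ x
# The mesh is lossless in this idealization: total optical power is conserved.
print(np.allclose(np.sum(np.abs(y) ** 2), np.sum(np.abs(x) ** 2)))
```

In a physical device the phases `theta` and `phi` are set by thermo-optic or electro-optic actuators, so the matrix multiplication happens as light propagates through the mesh rather than as a sequence of digital operations.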
Besides applications in the field of neural networks, the developed device can be used as a computing unit for High Performance Computing (HPC) systems, graphics accelerators, mathematical coprocessors, data mining, cryptography, and quantum computers.
Check out the News and Views about this work published in Nature Photonics.