6/5/2023

4 elements of nature trippy

Convolutional neural networks are an important category of deep learning, currently facing the limitations of electrical frequency and memory access time in massive data processing. Optical computing has been demonstrated to enable significant improvements in processing speed and energy efficiency. However, most present optical computing schemes are hardly scalable, since the number of optical elements typically increases quadratically with the computational matrix size. Here, a compact on-chip optical convolutional processing unit is fabricated on a low-loss silicon nitride platform to demonstrate its capability for large-scale integration. Three 2 × 2 correlated real-valued kernels, built from two multimode interference cells and four phase shifters, perform parallel convolution operations. Although the convolution kernels are interrelated, ten-class classification of handwritten digits from the MNIST database is experimentally demonstrated. The linear scalability of the proposed design with respect to computational size translates into a solid potential for large-scale integration.

Inspired by the working mechanisms of biological visual nervous systems, convolutional neural networks (CNNs) have become a powerful category of artificial neural networks 1. CNNs are commonly used in image recognition to greatly reduce network complexity and deliver high-precision predictions, with wide applications in object classification, computer vision, real-time translation and other areas 2, 3, 4, 5. As an increasing number of complex scenarios continues to emerge, including autonomous driving and artificial intelligence services on the cloud 6, 7, it is strongly desired to increase the processing speed of the underlying neuromorphic hardware while reducing its computing energy consumption.

Optical neural networks (ONNs) are regarded as promising candidates for the next generation of neuromorphic hardware processors. However, present schemes are mainly based upon the von Neumann computing paradigm, in which there is an inherent trade-off between data exchange speed and energy consumption, mainly because the memory and the processing unit are separated 8, 9, 10, 11. Photonic devices have low interconnect loss and can overcome the bandwidth bottleneck of their electrical counterparts to achieve ultrahigh computing bandwidths of up to 10 THz 12, 13, 14, 15, 16, 17. Additionally, light transmission in an ONN simultaneously implements data processing, which effectively avoids the tidal data traffic of the von Neumann computing paradigm. In recent years, ONNs have attracted much interest for the realization of high-speed, large-scale and highly parallel optical neuromorphic hardware, with demonstrations including the use of light diffraction 18, 19, 20, 21, 22, 23, 24, light interference 25, 26, 27, 28, 29, 30, light scattering 31, 32 and time-wavelength multiplexing 16, 33, 34, 35, 36, 37, 38, 39. The reported ONNs have been comparable to state-of-the-art digital processors in terms of efficiency but have revealed a huge leap in computing density 40, 41. From the calculation results, ONNs have the potential to improve energy consumption and computing density by at least two orders of magnitude 42. However, most of the reported works point to a quadratic increase in component count, chip size and power consumption as the computational matrix size is scaled up 43, which largely limits the integration potential of the resulting optical computing scheme while significantly increasing the complexity of manipulation. Even the linearly scalable compact integrated diffractive optical network (IDNN) demonstrated in ref. still requires 2N units to implement an input dimension of N.
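The parallel convolution described above can be sketched in software. The following is a minimal NumPy sketch, not the device's actual kernels: it unrolls every 2 × 2 image patch into a vector (im2col layout) so that all kernels are applied through a single matrix multiplication, the same multiply-accumulate pattern an optical unit maps onto interference and phase shifts. The three kernels here are illustrative placeholders, not the fabricated correlated kernels from the paper.

```python
import numpy as np

def conv2x2(image, kernels):
    """Slide each 2x2 kernel over the image (stride 1, no padding) and
    return one feature map per kernel, all computed by one matmul."""
    H, W = image.shape
    # Unroll every 2x2 patch into a row vector (im2col layout).
    patches = np.array([
        image[i:i + 2, j:j + 2].ravel()
        for i in range(H - 1)
        for j in range(W - 1)
    ])                                                        # ((H-1)*(W-1), 4)
    weights = np.stack([k.ravel() for k in kernels], axis=1)  # (4, n_kernels)
    maps = patches @ weights          # one matrix product = all kernels at once
    return maps.T.reshape(len(kernels), H - 1, W - 1)

# Three illustrative 2x2 real-valued kernels (placeholders only).
kernels = [np.array([[1.0, 0.0], [0.0, -1.0]]),
           np.array([[0.0, 1.0], [-1.0, 0.0]]),
           np.array([[0.25, 0.25], [0.25, 0.25]])]
image = np.arange(16, dtype=float).reshape(4, 4)
feature_maps = conv2x2(image, kernels)
print(feature_maps.shape)  # (3, 3, 3): three maps from one 4x4 input
```

Casting convolution as one patch-matrix times kernel-matrix product is what lets a single photonic mesh evaluate several kernels in parallel on the same input light.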
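The scaling contrast drawn above can be made concrete with a toy element count. This is a sketch under stated assumptions: roughly N² elements for mesh-style matrix multipliers, 2N units for the IDNN as quoted in the text, and N for an ideal linearly scalable design; these formulas summarize the scaling claims only, not exact device counts.

```python
# Rough optical-element counts versus input dimension N for the schemes
# discussed in the text (illustrative scaling laws, not measured counts).
def element_counts(N):
    return {
        "quadratic mesh (~N^2)": N * N,  # conventional matrix-multiplier meshes
        "IDNN (2N)": 2 * N,              # linearly scalable diffractive network
        "linear (~N)": N,                # ideal linear scaling
    }

for N in (8, 64, 512):
    print(N, element_counts(N))
```

Even at a modest N = 64, the quadratic scheme needs 4096 elements versus 128 for a 2N design, which is the integration gap the text highlights.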