BIT Makes Great Progress in Applying Optoelectronic Reservoir Computing to Language Learning

Recently, Prof. Sun Linfeng, Dean of the School of Physics at BIT, in collaboration with Prof. Heejun Yang of the Korea Advanced Institute of Science and Technology (KAIST) and Prof. Wang Zhongrui of The University of Hong Kong, proposed a new multi-dimensional optoelectronic fusion memristor based on a low-dimensional material system. The device realizes reservoir computing inside the sensor and has been successfully applied to the recognition and learning of language symbols. Even in the presence of highly similar interfering items, the recognition rate for a complex sentence system reaches 91%. This achievement provides a low-cost training scheme for processing time-series signals in machine learning and edge computing applications. The work was published on May 14 in Science Advances, a sister journal of Science.

In recent years, biologically inspired machine vision has developed rapidly, because visual perception carries more than 80% of the information humans exchange with their surrounding environment. Although great effort has been devoted to emulating the visual cortex of the human brain to realize the function of “seeing”, physically separated sensing, memory, and processing units lead to high energy consumption, time delays, and additional hardware costs. This challenge is aggravated by the rapid development of the Internet of Things (IoT) and the explosive growth of data volume, as the number of sensor nodes on the IoT keeps increasing. In addition, traditional recurrent neural network training algorithms are complex, computationally heavy, slow to converge, and difficult to optimize in their network structure.

Reservoir computing, by contrast, has been shown to significantly reduce computing cost, offering a good solution for tasks such as temporal pattern classification and chaotic state prediction. However, current reservoir computing processes information serially and cannot exploit the greater potential of parallel sensing. Realizing reservoir computing inside the sensor is therefore key to further improving the speed of information processing, and it pushes reservoir computing toward high speed, low power consumption, and easy integration. This research work overcomes the technical bottleneck of physically separating the sensor from the reservoir, greatly reducing system learning complexity and computing cost. The approach meets the urgent need to process the explosively growing data of the IoT era and provides a technological breakthrough toward more effective machine learning and brain-like computing.
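To make the training-cost argument concrete, the sketch below shows a minimal software analogue of reservoir computing (an echo state network): the recurrent "reservoir" weights stay fixed and random, and only a linear readout is fitted with a single regression solve, so there is no backpropagation through time. This is purely illustrative; the paper's reservoir is a physical optoelectronic memristor device, not this software model, and all sizes, names, and the toy task below are assumptions for demonstration only.

```python
# Minimal echo state network sketch (illustrative only; not the paper's device-based reservoir).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100                               # illustrative sizes
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))       # fixed (untrained) input weights
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed (untrained) recurrent weights
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # scale spectral radius below 1 for stability

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave (a stand-in for any time series).
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
states = run_reservoir(u[:-1])
targets = u[1:]

# Training touches ONLY the readout: one ridge-regression solve,
# which is the source of the low training cost cited in the article.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ targets)

pred = states @ W_out
print("readout training MSE:", np.mean((pred - targets) ** 2))
```

Because only the readout is optimized, the same fixed reservoir (whether simulated or implemented as a physical sensor) can be reused across tasks, which is the property that makes in-sensor reservoir computing attractive for edge devices.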

Optoelectronic fusion reservoir computing for language symbol recognition and learning

Prof. Sun Linfeng is the first author, and Prof. Heejun Yang is the corresponding author. This work was supported by the Research Prize of the Young Scholars Program of BIT, the Samsung Research Foundation of Korea, the Samsung Electronics Hatch Fund, and the Korea National Research Foundation.

Linfeng Sun, Zhongrui Wang, Jinbao Jiang, Yeji Kim, Bomin Joo, Shoujun Zheng, Seungyeon Lee, Woo Jong Yu, Bai Sun Kong, and Heejun Yang*, “In-sensor reservoir computing for language learning via two-dimensional memristors”, Science Advances 7 (20), eabg1455 (2021).


Paper link: https://advances.sciencemag.org/content/7/20/eabg1455