Ultrafast Large-scale Neural Network Processor on a Chip

Daniel Lathrop, Professor of Physics and Professor of Geology, IREAP and IPST (two cross-disciplinary institutes), University of Maryland

Friday, December 8, 2017
1:30 p.m.

Hopeman 224

Neural networks allow machines to imitate the way human intelligence solves problems: by inferring from past experience. These networks are composed of large arrays of communicating neurons, each performing a simple nonlinear operation. When combined and trained by varying the connection weights, the network can perform complex perceptual computing tasks such as image and voice recognition and complex pattern prediction. When neural networks are implemented on conventional digital processing hardware, such as the processors at the core of our PCs, an immense inefficiency stands out: neural network computations are inherently parallel, while computers were designed to perform computations serially. This leads to slow computation times and high energy consumption. Here we report a way to overcome this challenge. We implement a silicon chip with thousands, and potentially millions, of interconnected processing ‘neurons’, each operating at 200 ps rates. The chip is thus capable of fully parallel, highly efficient computation. For the design of our network, we follow well-established machine learning algorithms in which the interconnections are described by a random sparse directed graph. We show preliminary laboratory measurements of the network dynamics on the chip and discuss software variations of the design.
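The network structure described above — many simple nonlinear nodes coupled through a random sparse directed graph and updated in parallel — can be sketched in software. The following is a minimal illustration, not the speaker's hardware or code: the node count, connection density, nonlinearity (tanh), and spectral-radius scaling are all assumed values chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of 'neurons' (the chip scales to thousands and beyond)
density = 0.05   # fraction of nonzero directed connections (assumed value)

# Random sparse directed weight matrix: W[i, j] is the weight from node j to node i.
W = rng.normal(0.0, 1.0, size=(N, N))
W *= rng.random((N, N)) < density  # zero out most entries to make the graph sparse

# Rescale so the spectral radius is below 1, keeping the dynamics stable.
radius = np.max(np.abs(np.linalg.eigvals(W)))
W *= 0.9 / radius

def step(state, drive):
    """One fully parallel update: every node applies a simple nonlinearity
    (here tanh) to the weighted sum of its inputs plus an external drive."""
    return np.tanh(W @ state + drive)

# Drive the network with a simple time-varying input and record its dynamics.
state = np.zeros(N)
for t in range(50):
    state = step(state, drive=np.sin(0.1 * t))

print(state.shape)  # one state value per neuron
```

In hardware each node updates physically in parallel, whereas this software loop merely emulates the simultaneous update with a matrix-vector product; that difference is the source of the speed and energy advantage the abstract describes.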