The 168-core prototype chip, called Eyeriss, is built to run the neural networks behind face, object and even sound recognition almost instantly. The chip is designed for use in smartphones, self-driving cars, robots, drones and other devices.
Eyeriss is among a handful of chips being developed so devices can do more things without human intervention. Qualcomm is making chips so mobile devices can learn about users and anticipate actions over time. Nvidia offers a computer for automobiles with its Tegra chip so self-driving cars can recognize signals and street signs.
Computers can be trained to recognize images, faces and sound, as Microsoft, Facebook and Google have demonstrated with deep-learning systems. Deep learning is a branch of machine learning in which algorithms learn to classify data and find correlations in it. Deep-learning systems typically rely on complex neural networks and vast computing resources such as power-hungry GPUs and thousands of servers.
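To make the idea concrete, here is a minimal sketch of the kind of computation a deep-learning classifier performs: an input is pushed through layers of weighted sums and nonlinearities, and the highest-scoring output picks the class. The layer sizes and random weights below are placeholders for illustration, not a trained model or anything specific to Eyeriss.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: a 784-pixel input image, one hidden layer, 10 output classes.
W1, b1 = rng.normal(size=(128, 784)) * 0.01, np.zeros(128)
W2, b2 = rng.normal(size=(10, 128)) * 0.01, np.zeros(10)

image = rng.random(784)             # stand-in for a flattened input image
hidden = relu(W1 @ image + b1)      # layer 1: weighted sums plus nonlinearity
scores = softmax(W2 @ hidden + b2)  # layer 2: scores turned into class probabilities
print("predicted class:", int(np.argmax(scores)))
```

Training a useful network means tuning millions of such weights over huge datasets, which is where the server farms and power-hungry GPUs come in.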
MIT says its chip needs only a fraction of those resources and is 10 times as power efficient as a mobile graphics processor, which would make it practical for wearables, smartphones and battery-operated robots.
Eyeriss would bring self-contained AI capabilities to devices, with most of the processing happening locally. Wi-Fi or cellular connections wouldn't be needed to reach cloud services or servers for image or object recognition.
At CES, Nvidia demonstrated self-driving cars that pulled data from servers to recognize obstructions or objects on a street. With MIT's chip, self-driving cars could have on-board image recognition, which could be useful in remote areas where cellular connections aren't available.
Each Eyeriss core has its own memory bank, unlike the centralized memory of the GPUs and CPUs that power today's deep-learning systems. The chip cuts down on redundant processing by breaking tasks into pieces distributed efficiently across the 168 cores. Its circuitry can be reconfigured for different types of neural networks, and compression helps conserve bandwidth.
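The scheduling details are MIT's, but the general principle of per-core memory banks and data reuse can be sketched in a few lines of Python. In the purely illustrative example below (not Eyeriss's actual dataflow), each simulated "core" copies a filter's weights into its own local buffer once and reuses them across its strip of an image, rather than re-reading them from shared memory for every output pixel; the work is also partitioned up front so no two cores repeat the same computation.

```python
import numpy as np

NUM_CORES = 4          # stand-in for the 168 cores
FILTER = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)   # hypothetical 3x3 filter

def core_task(image, rows, filt):
    """Work assigned to one simulated core: convolve its strip of output rows."""
    local_weights = filt.copy()                # 'local memory bank' holding reused weights
    h, w = image.shape
    out = np.zeros((len(rows), w - 2))
    for i, r in enumerate(rows):
        for c in range(w - 2):
            patch = image[r:r + 3, c:c + 3]
            out[i, c] = np.sum(patch * local_weights)
    return out

image = np.random.default_rng(1).random((32, 32))
valid_rows = np.arange(image.shape[0] - 2)
# Partition the output rows among the cores so no work is duplicated.
chunks = np.array_split(valid_rows, NUM_CORES)
result = np.vstack([core_task(image, rows, FILTER) for rows in chunks])
print(result.shape)   # (30, 30) output feature map
```

Avoiding those repeated fetches and duplicated calculations is the same goal the Eyeriss task breakdown and per-core memory banks are aiming at, just at the scale of a full neural network.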
The chip was demonstrated doing image recognition at the ISSCC (International Solid-State Circuits Conference) in San Francisco on Wednesday.
The researchers haven't said whether the chip will make it into commercial devices. Besides Intel and Qualcomm, chip companies like Movidius are trying to bring AI capabilities to mobile devices.