Imagine a more sustainable future, where cellphones, smartwatches, and other wearables don't have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that snap onto a device's internal chip, like LEGO bricks incorporated into an existing build. Such reconfigurable chips could keep devices up to date while reducing our electronic waste.
MIT engineers have now taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.
The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip's layers to communicate optically. Other modular chip designs use conventional wiring to relay signals between layers; such intricate connections are difficult, if not impossible, to sever and rewire, so those stackable designs cannot be reconfigured.
MIT's design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured: its layers can be swapped out or stacked on, for instance to add new sensors or updated processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says Jihoon Kang, a postdoc at MIT. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
The researchers are eager to apply the design to edge computing devices: self-sufficient sensors and other electronics that work independently of any centralized or distributed resources, such as supercomputers or cloud-based computing.
“As we enter the era of the Internet of Things based on sensor networks, demand for advanced multifunctional computing devices will expand dramatically,” says Jeehwan Kim, an associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide great versatility for future edge computing.”
The team's results are published in Nature Electronics. In addition to Kim and Kang, the MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, and Zhejiang University.
The team's design is currently configured to carry out basic image-recognition tasks. It does so through a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function like a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on the chip, without the need for external software or an Internet connection.
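At the heart of that on-chip classification is the memristor crossbar acting as an analog matrix-vector multiplier: voltages applied to the rows produce column currents proportional to the stored conductances, following Ohm's and Kirchhoff's laws. A minimal numerical sketch of that idea (the dimensions and values here are illustrative assumptions, not figures from the paper):

```python
# Illustrative sketch of a memristor crossbar computing I = G^T · V.
# Each column current sums the v * g contributions of its junctions
# (Kirchhoff's current law); the conductance matrix G plays the role
# of trained synaptic weights. All values are made up for illustration.

def crossbar_currents(voltages, conductances):
    """voltages: list of row voltages (V).
    conductances: rows x columns matrix of conductances (S).
    Returns the output current (A) flowing out of each column."""
    n_cols = len(conductances[0])
    return [
        sum(v * row[c] for v, row in zip(voltages, conductances))
        for c in range(n_cols)
    ]

# 3 input lines feeding 2 output columns
G = [[1e-6, 5e-6],
     [2e-6, 1e-6],
     [3e-6, 2e-6]]
V = [0.5, 1.0, 0.2]

print(crossbar_currents(V, G))  # per-column currents in amperes
```

Training such an array amounts to adjusting the conductances so that the column carrying the largest current corresponds to the correct class.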
In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would relay a sensor's signals to a processor through physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array, so the layers communicate without any physical connection.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you'd need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and connect chips the way we want.”
The team's optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors form an image sensor for receiving data, and the LEDs transmit data to the next layer. As a signal (for instance, an image of a letter) reaches the image sensor, the image's light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal.
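The sensor-to-LED handoff can be pictured as each photodetector pixel gating its paired LED pixel, so the light pattern itself carries the data up to the next layer. A toy sketch of that relay, with a simple brightness threshold standing in for the actual device physics (the threshold and pixel values are assumptions for illustration only):

```python
# Toy model of the optical link between layers: a photodetector array
# senses a light pattern, and each pixel above a threshold switches on
# the paired LED pixel, re-emitting the pattern to the layer above.
# The 0.5 threshold is an illustrative assumption, not from the paper.

def relay_through_leds(image, threshold=0.5):
    """image: 2D list of sensed pixel intensities in [0, 1].
    Returns the binary LED pattern transmitted to the next layer."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

sensed = [[0.9, 0.1, 0.8],
          [0.2, 0.95, 0.3]]
print(relay_through_leds(sensed))  # [[1, 0, 1], [0, 1, 0]]
```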
The team fabricated a single chip with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image-recognition “blocks,” each comprising an image sensor, an optical communication layer, and an artificial synapse array for classifying one of three letters: M, I, or T. The researchers then shone a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the greater the chance that the image is the letter that particular array was trained to recognize.)
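That readout step amounts to a winner-take-all decision over the three blocks' output currents. A hedged sketch of the decision rule, where each block's “current” is simulated as the overlap between the input and an illustrative weight template (the templates and input pattern are toy data, not measurements from the study):

```python
# Illustrative winner-take-all readout: each block's trained array
# responds with a current proportional to the overlap between the
# input image and its letter template; the largest current wins.
# Templates and inputs are toy flattened 2x2 patterns, made up here.

def block_current(image, template):
    """Overlap between a flattened input image and a block's template."""
    return sum(i * t for i, t in zip(image, template))

def classify(image, templates):
    """templates: dict mapping letter -> flattened weight pattern.
    Returns the letter whose block produces the largest current."""
    return max(templates, key=lambda letter: block_current(image, templates[letter]))

templates = {
    "M": [1, 0, 1, 1],
    "I": [0, 1, 0, 1],
    "T": [1, 1, 0, 1],
}
print(classify([1, 0, 1, 1], templates))  # M
```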
By swapping in a better “denoising” processing layer, the team found that the chip could then accurately identify the images.
“We showed stackability, replaceability, and the ability to insert new functions into the chip,” notes Min-Kyu Song, an MIT postdoc.
Researchers plan to add more sensing and processing capabilities to the chip, and they envision unlimited applications.
“We can add layers to a cellphone's camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who previously developed a “smart” skin with Kim for monitoring vital signs.
Another idea, he adds, is for modular chips built into electronics that consumers can choose to build up with the latest sensor and processor “bricks.”
“We can make a general chip platform, and each layer could be sold separately like a video game,” says Jeehwan Kim. “We could make different types of neural networks, such as for image or voice recognition, and let the customer choose what they want and add it to an existing chip, like a LEGO.”
The study was supported, in part, by the South Korean Ministry of Trade, Industry and Energy (MOTIE); the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.