Research Area: Intelligent Computing


Intelligent computing will be a key enabler of future progress in devices. The GREEN lab’s vision is to enable intelligence in resource-constrained devices. Our research on intelligent computing spans architecture, circuit, and device innovations that bring deep learning to hardware. Further, we aim to explore new learning algorithms, and the associated hardware, that can learn the dynamics of complex systems. The key research vectors in this thrust are:

  1. Learning System Dynamics – Algorithm and Architecture: A key challenge for the future growth of artificial intelligence is designing models that can learn the dynamics of a complex system and thereby predict the system’s evolution. We are exploring innovative algorithms for learning system dynamics, as well as developing programmable digital architectures to accelerate a wide class of dynamical system models.
  2. Hardware Architecture for Deep Learning: We are developing a unique in-memory, programmable computing platform, Neurocube, to accelerate training and inference of deep learning networks. The goal is to enable a 100X gain in energy efficiency during training and inference compared to GPUs, while maintaining programmability across various learning models.
  3. Accelerated Training of Deep Learning Networks: Accelerating training is perhaps the most critical challenge for the adoption of deep learning networks in new applications. We are exploring algorithm-architecture co-design techniques for training, including in-memory accelerators, hardware-assisted dynamic precision control, and frequency-domain operations, to enable a 10x-100x improvement in training performance.
  4. Energy-efficient Inference for Deep Learning Networks: Our research explores static and dynamic techniques, such as precision control and energy-accuracy trade-offs, to enhance the energy efficiency of hardware accelerators for deep learning. Techniques to reduce memory demand during inference are being explored as well.
  5. Exploring Post-CMOS Devices for Learning Networks: We have developed device and circuit design techniques to enable ultra-low-power implementations of different types of machine learning networks. The group has presented innovative Gaussian devices using heterojunction tunneling transistors for associative processing, explored the design of ultra-low-power cellular neural networks using Si/Ge tunneling FETs, and investigated deep learning networks built with resistive devices.
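To make vector 1 concrete, the sketch below shows the simplest form of learning a system's dynamics from observed trajectories: fitting a linear one-step transition model x_{t+1} = A·x_t by least squares. This is an illustrative toy example, not the lab's algorithm; the system matrix and trajectory here are hypothetical.

```python
import numpy as np

# Hypothetical ground-truth linear system (a slowly rotating, decaying 2-D state).
A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.9]])

# Simulate a trajectory of 100 states from an initial condition.
x = np.zeros((100, 2))
x[0] = [1.0, 0.0]
for t in range(99):
    x[t + 1] = A_true @ x[t]

# Learn the dynamics: stack (state, next-state) pairs and solve
# X @ W = Y in the least-squares sense, so W is the transpose of A.
X, Y = x[:-1], x[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = W.T

# The learned model can now predict the system's evolution.
x_pred = A_hat @ x[50]
```

With noiseless data from a truly linear system, `A_hat` recovers `A_true` exactly; the research challenge the text describes is doing this for complex, nonlinear systems, and building programmable hardware that accelerates a wide class of such models.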
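The precision-control and energy-accuracy trade-off ideas in vectors 3 and 4 can be illustrated with a minimal uniform-quantization sketch: representing weights with fewer bits cuts memory and arithmetic energy at the cost of quantization error. The weight values and bit-widths below are hypothetical, chosen only to show the trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # hypothetical weight tensor


def quantize(weights, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    # One signed integer range: [-(2^(bits-1)-1), 2^(bits-1)-1].
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale


# Mean absolute quantization error at two precisions.
err4 = np.abs(quantize(w, 4) - w).mean()   # aggressive precision reduction
err8 = np.abs(quantize(w, 8) - w).mean()   # milder precision reduction
```

Each extra bit roughly halves the quantization step, so `err8` is far smaller than `err4`; dynamic precision control exploits this by spending more bits only where accuracy demands it.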