Artificial Intelligence

Our goal is to enable applications of artificial intelligence (AI) to complex problems in vision, science, control, and optimization, as well as to achieve orders-of-magnitude gains in the energy efficiency of AI platforms for internet-of-things (IoT) systems. The key research vectors are:

  1. Learning Algorithms and Applications: GREEN Lab is interested in developing innovative learning algorithms and in applying existing learning algorithms, such as deep neural networks, spiking neural networks, and dynamical systems, to problems in computer vision, science, control, and optimization.
    • Embedded Computer Vision: We are developing AI/ML algorithms that can be embedded in cameras to enhance the information content of sensed images and videos.
    • Dynamical Systems: We are developing models that can learn the dynamics of a complex system to enable prediction of the system’s evolution.
    • Artificial Intelligence for Science: We are developing a hybrid learning theory for applying AI to scientific problems that couples model-based learning of a system’s dynamics with data-driven learning.
    • AI/ML in Control and Optimization: We are interested in applications of AI and ML in real-time control systems and optimization problems.
  2. Robust and Secure Artificial Intelligence: We are exploring algorithmic techniques to enhance the robustness and security of machine learning algorithms.
    • Noise-robust Deep Learning: We are characterizing and reducing the effect of noise on the accuracy of deep learning algorithms.
    • Secure Deep Learning: We are developing training algorithms and defense mechanisms to make deep neural networks secure against adversarial attacks. 
  3. Computationally Efficient Artificial Intelligence: The GREEN Lab’s vision is to enable artificial intelligence in resource-constrained devices. Our research on computationally efficient intelligent computing includes architecture, circuit, and device innovations to enable deep learning in hardware.
    • Learning meets 3D Memory:  We are developing a unique in-memory and programmable computing platform, Neurocube, to accelerate training and inference of deep learning networks.
    • Energy-efficient Training and Inference: We are developing algorithm-architecture co-design techniques to reduce memory and computation demands of learning algorithms during training and inference.
    • Exploring post-CMOS Devices for Learning: We are exploring the application of emerging devices such as resistive RAM (RRAM), ferroelectric FETs (FeFETs), and tunneling transistors for efficient machine learning platforms.
  4. Collaborative Learning: We believe enabling intelligence in future IoT systems will require active collaboration between multiple edge devices and between edge and cloud. We are exploring collaboration between edge and host, where a host running a complex deep learning model guides a simple (non-AI) edge device to increase the quality of information delivered from edge to host under constrained bandwidth. We are also exploring how learning algorithms can be partitioned between host and edge, or among multiple edge devices, to enable collaborative learning and to enhance energy efficiency and quality of service in an IoT environment.
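The defense direction under Robust and Secure Artificial Intelligence can be illustrated with one common approach, adversarial training: a model is trained on inputs perturbed by an attack rather than on clean inputs. This is a minimal sketch in NumPy for a toy logistic-regression model with the fast gradient sign method (FGSM); the data, parameters, and function names are illustrative assumptions, not GREEN Lab code.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable data (illustrative only).
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def fgsm(w, b, X, y, eps):
    """Fast-gradient-sign perturbation of the inputs (a common attack)."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)          # d(loss)/d(x) for logistic loss
    return X + eps * np.sign(grad_x)

def train(w, b, X, y, lr=0.1, eps=0.0, steps=200):
    """Logistic regression; eps > 0 trains on FGSM-perturbed inputs."""
    for _ in range(steps):
        Xa = fgsm(w, b, X, y, eps) if eps > 0 else X
        p = sigmoid(Xa @ w + b)
        w = w - lr * Xa.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

# Adversarial training, then evaluation under the same attack.
w, b = train(np.zeros(2), 0.0, X, y, eps=0.3)
X_adv = fgsm(w, b, X, y, eps=0.3)
acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == (y > 0.5))
print(f"accuracy under attack: {acc:.2f}")
```

For deep networks the same loop applies, with the input gradient obtained by backpropagation; the essential design choice is that the training loss is evaluated on attacked inputs, so robustness is optimized directly.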
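The partitioning idea under Collaborative Learning can be sketched concretely: the edge device runs the early layers of a network and transmits the (smaller) intermediate feature, and the host completes inference. The two-layer model below and its names are hypothetical, chosen only to show that splitting the computation preserves the result while shrinking what crosses the link.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network; weights are random placeholders.
W1 = rng.standard_normal((16, 8))   # early (edge-side) layer
W2 = rng.standard_normal((8, 4))    # late (host-side) layer

def full_model(x):
    """Run the whole network in one place (cloud-only baseline)."""
    h = np.maximum(x @ W1, 0.0)     # ReLU
    return h @ W2

def edge_part(x):
    """Edge device computes early layers and ships the smaller feature."""
    return np.maximum(x @ W1, 0.0)

def host_part(h):
    """Host finishes inference from the edge's intermediate feature."""
    return h @ W2

x = rng.standard_normal((1, 16))
h = edge_part(x)                    # 8 values cross the link instead of 16
y_split = host_part(h)
y_full = full_model(x)
print(np.allclose(y_split, y_full)) # prints True: partitioning is exact
```

Where to cut the network is the real design question: an earlier split lightens the edge's compute load, while a later split usually shrinks the feature that must be transmitted, trading energy for bandwidth.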