Computer Architecture

Computer architecture is the art and science of meeting the performance, power, energy, temperature, reliability, and accuracy goals of software, from large-scale AI and cloud services to safety-critical embedded systems, by composing principled, well-abstracted hardware that harnesses the latest VLSI technologies and trends. As Moore’s Law and Dennard scaling wane, the question of how best to keep building principled, programmable hardware that meets the ever-increasing computational needs of next-generation AI, both in large-scale data centers and in edge and embedded settings, is vital to the continued success of computer science.

Faculty working in this area:

Faculty | Website
Abhishek Bhattacharjee | Bhattacharjee Group
Yongshan Ding | Ding Group


Highlights in this area:

Abhishek Bhattacharjee and his students study hardware architectures that are highly heterogeneous, offering significant performance gains but hampering programmability because of their domain specialization. They have worked on the virtual memory abstraction, with contributions to translation contiguity, memory transistency, and GPU address translation. Their work on coalesced TLBs has been integrated into AMD’s chips, and their large-page optimizations are now in Linux. More recently, Abhishek and his students have been building systems to help treat neurological disorders and advance the brain sciences. Their contributions include HALO, a flexible but ultra-low-power architecture for implantable brain-computer interfaces.
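To give a flavor of translation contiguity, here is a minimal, purely illustrative sketch (not the published AMD design): a toy TLB that coalesces runs of contiguous virtual-to-physical page mappings into single entries, so one entry can serve translations for many pages. All names here are hypothetical.

```python
class CoalescedTLB:
    """Toy TLB: each entry covers a contiguous run (base_vpn, base_pfn, length)."""

    def __init__(self):
        self.entries = []  # list of (base_vpn, base_pfn, length)

    def insert(self, vpn, pfn):
        # If (vpn, pfn) extends an existing contiguous run, grow that entry
        # instead of consuming a new one -- this is the coalescing step.
        for i, (bv, bp, n) in enumerate(self.entries):
            if vpn == bv + n and pfn == bp + n:
                self.entries[i] = (bv, bp, n + 1)
                return
        self.entries.append((vpn, pfn, 1))

    def lookup(self, vpn):
        for bv, bp, n in self.entries:
            if bv <= vpn < bv + n:
                return bp + (vpn - bv)  # hit anywhere inside the run
        return None  # TLB miss


tlb = CoalescedTLB()
for offset in range(4):              # four contiguous page mappings...
    tlb.insert(0x100 + offset, 0x880 + offset)
assert len(tlb.entries) == 1         # ...collapse into one coalesced entry
assert tlb.lookup(0x102) == 0x882    # any page in the run still resolves
```

The payoff is reach: when the OS happens to allocate physically contiguous frames for contiguous virtual pages, one TLB entry covers the whole run, cutting miss rates without enlarging the TLB.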

Yongshan Ding’s group focuses on computer architecture research in the context of quantum computing. Emerging quantum systems are typically too noisy and too small for useful applications. To address this, the group designs techniques that improve the efficiency of quantum algorithms and software by adapting them to the underlying hardware architecture. Working closely with experimentalists, their current efforts include constructing novel error-correcting protocols that guarantee robust computation and designing new algorithms that are less resource-intensive and more error-resilient.
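As a taste of the error-correction idea, here is the textbook three-qubit repetition code, simulated classically for bit-flip errors only. This is deliberately the simplest possible example, not a protocol from the group’s research: real quantum codes must also handle phase errors and cannot copy quantum states directly.

```python
def encode(bit):
    """Encode one logical bit redundantly across three physical bits."""
    return [bit, bit, bit]

def apply_bit_flip(codeword, index):
    """Model a single bit-flip error on one physical bit."""
    flipped = codeword.copy()
    flipped[index] ^= 1
    return flipped

def decode(codeword):
    """Majority vote corrects any single bit-flip error."""
    return 1 if sum(codeword) >= 2 else 0


noisy = apply_bit_flip(encode(1), 0)  # one physical bit corrupted
assert decode(noisy) == 1             # the logical bit still survives
```

The same redundancy-plus-syndrome idea, generalized to quantum states, underlies the error-correcting protocols that make robust quantum computation possible on noisy hardware.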