Society has become so dependent on computing power that any inefficiency in the way we process information can considerably impede productivity and quality of life. Three emerging trends pose challenges to the design of more efficient computer systems. First, energy constraints are becoming stricter amidst rising interest in IoT and mobile computing. Yet traditional architectures waste a great deal of energy ensuring exactness for the naturally approximate applications that run on these systems (e.g., noisy sensor input, user-subjective output). Second, data sets are growing to enormous proportions due to the rapid gathering of information by modern devices. We can no longer rely on data being readily available in on-chip storage. Third, the active chip area is diminishing at smaller technology nodes due to thermal and power density limitations in process technology scaling. We can no longer fully utilize all on-chip hardware resources simultaneously. In this talk, I present new architectural techniques that tackle these challenges by recognizing that they stem from fundamental gaps in the way data is contextualized in hardware. The goal of a processor is to process real-world information; yet in modern architectures, hardware perceives data as nothing more than bits. First, I show that awareness of the type of information encoded in the bits enables approximation of data values for greater efficiency under strict energy constraints. Second, I show that awareness of the location of information enables more concise caching of massive data sets. Third, I show that awareness of the significance of information enables better scheduling of computations based on their impact on the quality of the final result, improving utilization of precious on-chip resources. These ideas aim to mitigate fundamental inefficiencies in the data movement, storage and computation of today's systems.
Joshua San Miguel is a doctoral candidate in Electrical and Computer Engineering at the University of Toronto, where he is advised by Professor Natalie Enright Jerger. He received a BASc in Engineering Science with Honours from the University of Toronto in 2012. His research spans a broad range of topics in computer architecture, touching on caches, memory systems, branch prediction, computation models and networks-on-chip, with his dissertation focusing on approximate computing. He has authored numerous conference and journal papers and has fostered collaborations with Cornell University, IBM Research, INRIA, NVIDIA Research and the University of Washington. His work has received a HiPEAC Paper Award and a NOCS Best Paper Nomination, and has twice been recognized by IEEE Micro Top Picks as among the top contributions to computer architecture research, in 2015 and 2016 (honorable mention). He was also a recipient of the IBM Ph.D. Fellowship in 2016.