Saavan is a 4th-year Ph.D. student in Prof. Sayeef Salahuddin's group with broad interests spanning machine learning and hardware design. He works on training and mapping machine learning algorithms onto in-house hardware accelerators used to solve optimization problems. Through this work he has developed expertise in machine learning, digital and analog circuit design, high-performance computing, and low-level programming. Saavan received an NSF GRFP honorable mention, was a Qualcomm Innovation Fellowship finalist, and has interned at Facebook Reality Labs, where he worked on low-power machine learning accelerators.
Research
As the demand for big data grows, hardware accelerators are needed to solve computing's hardest problems. We have developed a series of hardware accelerators for the NP-hard Ising model problem that are capable of a variety of machine intelligence tasks. This accelerator architecture can solve optimization problems such as Max-Cut, the travelling salesman problem, and others, while also performing sampling-based inference for machine learning. The FPGA-based accelerator demonstrates a proof-of-concept speedup of 100-1000x over current state-of-the-art quantum annealing systems, along with best-in-class performance. Alongside this, we have taped out a stochastic neural accelerator which we expect to be 3-10x faster than the FPGA implementation while being 100x more power efficient. This novel neural accelerator will serve as a proof-of-concept design to inform future Ising-model-based accelerators.
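To make the mapping concrete, the sketch below is an illustrative software analogue only, not the accelerator design or its toolchain: it shows how a Max-Cut instance can be rewritten as an Ising energy minimization and explored with stochastic Gibbs-style spin updates, the kind of update an Ising machine parallelizes in hardware. All function names and parameters here are hypothetical.

```python
# Illustrative sketch (not the actual accelerator): map Max-Cut onto an Ising
# energy and search for low-energy spin states with Gibbs-style updates.
import numpy as np

def maxcut_to_ising(adjacency):
    """For Max-Cut, set couplings J_ij = -w_ij: maximizing the cut weight is
    equivalent to minimizing the Ising energy E = -sum_{i<j} J_ij s_i s_j."""
    return -np.asarray(adjacency, dtype=float)

def ising_energy(J, spins):
    # Each pair is counted twice in the quadratic form, hence the factor 0.5.
    return -0.5 * spins @ J @ spins

def gibbs_sample(J, steps=2000, beta=2.0, rng=None):
    """Sequentially resample spins with probability set by their local field."""
    rng = np.random.default_rng() if rng is None else rng
    n = J.shape[0]
    spins = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        local_field = J[i] @ spins                      # influence of neighbors
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
        spins[i] = 1 if rng.random() < p_up else -1
    return spins

if __name__ == "__main__":
    # Toy 4-node cycle graph with unit edge weights (maximum cut weight = 4).
    w = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
    J = maxcut_to_ising(w)
    s = gibbs_sample(J, rng=np.random.default_rng(0))
    cut = sum(w[i, j] for i in range(4) for j in range(i + 1, 4) if s[i] != s[j])
    print("spins:", s, "cut weight:", cut, "energy:", ising_energy(J, s))
```

A hardware Ising machine performs many such probabilistic spin updates in parallel each cycle, which is where the speedup over a sequential software loop like this one comes from.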