I’m a first-year PhD student at the University of Toronto, supervised by Prof. Gennady Pekhimenko. Before that, I completed my Master’s in Computer Science at the University of British Columbia under the supervision of Prof. Alexandra Fedorova and Prof. Ivan Beschastnikh. I primarily work on systems research for machine learning and distributed/parallel computing.
My recent research projects include optimizing distributed machine learning and analyzing the performance of deep learning training workloads. I also have experience with instruction-prefetching optimizations for multi-core machines and virtual machines. Before returning to academia, I worked as a software engineer for five years at various companies in India.
- Oct 2019: Extended abstract accepted at SOSP SRC 2019.
- Jan 2019: Paper accepted at SysML 2019.
- Jul 2018: Paper accepted at IISWC 2018.
- “Why Should I Trust You?” Explaining the Predictions of Any Classifier
- Beyond Data and Model Parallelism for Deep Neural Networks
- Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding