Sharan Narang is a researcher on the systems team at Baidu's Silicon Valley AI Lab (SVAIL). He has played an important role in improving the performance and programmability of the deep learning framework used by researchers at SVAIL. Sharan's research focuses on reducing the memory requirements of deep learning models, exploring techniques such as pruning neural network weights and quantization. He also proposed the DSD (Dense-Sparse-Dense) training flow, which improved the accuracy of deep learning applications by roughly 5%. Prior to Baidu, Sharan worked on next-generation mobile processors at NVIDIA.