Deep Learning on Mobile Devices


Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Many mobile applications running on smartphones and wearable devices could benefit from the new opportunities enabled by deep learning techniques. However, CNNs are by nature computationally and memory intensive, making them challenging to deploy on a mobile device.

This talk explains how to practically bring the power of convolutional neural networks and deep learning to memory- and power-constrained devices like smartphones. You will learn various strategies to circumvent these obstacles and build mobile-friendly shallow CNN architectures that significantly reduce the memory footprint, making the models easier to store on a smartphone.
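As one illustration of why shallow, mobile-friendly architectures save memory, the sketch below compares the parameter count of a standard convolution with a depthwise-separable convolution, a widely used building block in mobile CNNs (the specific layer sizes here are hypothetical examples, not figures from the talk):

```python
# Parameter counts: standard conv vs. depthwise-separable conv,
# a common trick for shrinking CNNs on mobile devices.
def standard_conv_params(c_in, c_out, k):
    # Each of the c_out filters spans k x k x c_in weights.
    return c_out * k * k * c_in

def separable_conv_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise conv to mix channels.
    return c_in * k * k + c_in * c_out

# Example layer: 128 input channels, 256 output channels, 3x3 kernel.
c_in, c_out, k = 128, 256, 3
std = standard_conv_params(c_in, c_out, k)   # 294,912 parameters
sep = separable_conv_params(c_in, c_out, k)  # 33,920 parameters
print(f"{std / sep:.1f}x fewer parameters")  # roughly 8.7x
```

For 3x3 kernels the separable form cuts parameters by close to an order of magnitude, which is why architectures built from such layers fit comfortably on a smartphone.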

The talk also dives into how to use a family of model compression techniques to prune the network for live image processing, enabling you to build a CNN optimized for inference on mobile devices. Along the way, you will learn practical strategies to preprocess your data in a manner that makes the models more efficient in the real world.
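One of the simplest members of that compression family is magnitude-based weight pruning: zero out the weights closest to zero, since they contribute least to the output. The sketch below is a minimal, framework-free illustration of the idea (the function name and NumPy-based setup are my own, not from the talk):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the
    smallest absolute value, leaving the rest untouched.

    A sparse weight matrix compresses well and can speed up
    inference on constrained devices.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # Magnitude of the k-th smallest weight: everything at or
    # below this threshold gets zeroed.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.9, -0.05],
              [0.02, -0.7]])
pruned = prune_by_magnitude(w, sparsity=0.5)
# The small weights (-0.05, 0.02) are zeroed;
# the large ones (0.9, -0.7) survive.
```

In practice pruning is applied iteratively with fine-tuning in between, so the network can recover accuracy lost at each pruning step; this one-shot version just shows the core operation.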

Siddha Ganju

I am currently an Architect at Nvidia focusing on the self-driving initiative. Previously, I was a Deep Learning Data Scientist at Deep Vision, where I worked on developing and deploying deep learning models on resource-constrained edge devices. I hold a Master's in Computational Data Science from Carnegie Mellon University.
  • Date: May 08, 10:00 (US Pacific Time)
  • Fee: Free
  • Available Seats: 3