30 Golden Rules of Deep Learning Performance

"Watching paint dry is faster than training my deep learning model."
"If only I had ten more GPUs, I could train my model in time."
"I want to run my model on a cheap smartphone, but it's probably too heavy and slow."
If any of this sounds like you, you might like this talk.

Exploring the landscape of training and inference, we cover a myriad of tricks that incrementally improve the efficiency of most deep learning pipelines, reduce wasted hardware cycles, and make them cost-effective. We identify and fix inefficiencies across different parts of the pipeline, including data preparation, reading, and augmentation; training; and inference.

With a data-driven approach and easy-to-replicate TensorFlow examples, fine-tune the knobs of your deep learning pipeline to get the best out of your hardware. And with the money you save, demand a raise!

Anirudh Koul

Anirudh Koul is a noted AI expert, UN/TEDx speaker, author of the Practical Deep Learning book, and a former scientist at Microsoft AI & Research, where he founded Seeing AI, considered the most used technology in the blind community after the iPhone. He also serves as an ML Lead for NASA FDL and coaches a team for Roborace, the Formula One of autonomous driving at 200 mph.
  • Date: Jun 01, 10:00 (US Pacific Time)
  • Fee: Free
  • Available Seats: 81 (max 300)