The Hitchhiker’s Guide to Hyperparameter Optimization

Dec 02, 10:00 AM PST (06:00 PM GMT)
  • Free · 181 attendees
Welcome to the "Deep Learning in Practice" learning series, presented by Allegro AI. This series focuses on methodologies and tools for machine- and deep-learning (ML/DL) projects.

In the 4th session, we will focus on practical experience and best practices for hyperparameter optimization.

Long before machine learning ventured outside of academia, hyperparameters had a different name: "magic numbers". These were unexplained constants appearing in code, and their values have completely (and sometimes destructively) controlled the behavior of programs since the 1960s.
One day, someone (probably a physicist) realized that some of their hardcoded constants could be altered, thereby generating completely new behavior in the software they were working on - without changing anything else. The constants were now parameters as well, and the hyperparameter was invented.
This has made a lot of people very angry and been widely regarded as a bad move.
Fast forward more than half a century, and these lists of constants are no longer tweaked by the shaky hands of weary grad students, but are methodically swept to find performance-enhancing configurations. This is done by very fancy algorithms designed to find these "lucky" combinations using the fewest resources possible.
Such sweeping mechanisms are fairly easy to mismanage and may result in a huge waste of money, time, and energy. Nevertheless, every now and then comes a rare combination of an efficient optimization mechanism for the search, and a robust and easy way to perform it.
In this webinar, we will gawk at the amazing wealth of this realm and use open-source tools to move from a simple grid search to algorithms verging on Bistromathics, all while keeping a neat interface and controllable execution.
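To make the starting point concrete, here is a minimal sketch of the two simplest sweep strategies mentioned above: exhaustive grid search and budget-limited random search. The objective function, parameter names, and values are all hypothetical stand-ins (a real sweep would train and evaluate a model at each configuration), but the mechanics of the sweep itself are representative.

```python
import itertools
import random

# Hypothetical objective: score a (learning_rate, batch_size) configuration.
# In a real sweep this would train a model and return a validation metric;
# here a toy function peaking at lr=0.01, batch_size=64 stands in for it.
def score(lr, batch_size):
    return -(lr - 0.01) ** 2 - (batch_size - 64) ** 2 / 10_000

# Hypothetical search space: the "list of constants" to be swept.
search_space = {
    "lr": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64, 128],
}

# Grid search: exhaustively evaluate every combination (3 x 4 = 12 runs).
grid = list(itertools.product(*search_space.values()))
best_grid = max(grid, key=lambda cfg: score(*cfg))

# Random search: sample a fixed budget of configurations instead of all of
# them -- often nearly as good at a fraction of the cost.
random.seed(0)
budget = 6
samples = [tuple(random.choice(values) for values in search_space.values())
           for _ in range(budget)]
best_random = max(samples, key=lambda cfg: score(*cfg))

print("grid search best config:  ", best_grid)
print("random search best config:", best_random)
```

The cost difference is the whole point: grid search grows multiplicatively with each added hyperparameter, while random search (and the smarter Bayesian or bandit-based methods covered in the session) keeps the budget fixed and spends it where it counts.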

All sessions of the series:

  • Nov 11th: ML Pipelines for Research: This is the Way. Session 3
  • Oct 14th: The Fundamentals of Research-MLOps. Session 2
  • Sep 23rd: Insights on Data Challenge in Deep Learning Projects. Session 1
  • Ariel Biller (AllegroAI)

Ariel recently took up the mantle of Evangelist at AllegroAI. He is enthusiastic about the rapid evolution of MLOps in both academic and industrial research.
Ariel received his Ph.D. from the Weizmann Institute of Science. With broad experience in computational research, he made the transition to the bustling startup scene of Tel Aviv and to cutting-edge deep learning research.
Over the past 5 years, Ariel has worked on projects spanning quantum chemistry, massively parallel supercomputing, deep-learning computer vision, and even the data science of ultra-fast-charging batteries.