Bootcamp: Build AI/ML Pipeline with NLP, TensorFlow, SageMaker

Description
This full-day, hands-on bootcamp brings together developers of all skill levels to learn deep learning for NLP using TensorFlow with Amazon SageMaker.
Get free deep learning training. Together we will work through several deep learning labs and build an end-to-end AI/ML pipeline for natural language processing with Amazon SageMaker. You will get hands-on experience with deep learning, NLP, BERT, TensorFlow, and SageMaker.
Every attendee will receive a free AWS instance for this bootcamp.
The bootcamp includes 6 modules:
  • Ingest, analyze, and visualize a public dataset
  • Transform the raw dataset into machine learning features
  • Train a model with our features
  • Optimize model training using hyper-parameter tuning
  • Deploy and test our model both online (real-time) and offline (batch)
  • Automate the entire process with a SageMaker pipeline
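At a glance, the six modules chain together as a simple sequence of stages. The toy pure-Python sketch below mirrors that flow; every function name is an illustrative placeholder (the actual labs use SageMaker APIs, not these helpers):

```python
# Conceptual sketch of the six bootcamp modules as plain-Python stages.
# All function names here are illustrative placeholders, not SageMaker APIs.

def ingest(raw_rows):
    """Module 1: ingest and lightly clean a raw dataset."""
    return [r.strip().lower() for r in raw_rows if r.strip()]

def featurize(rows):
    """Module 2: turn raw text into simple numeric features."""
    return [{"length": len(r), "words": len(r.split())} for r in rows]

def train(features, learning_rate=0.1):
    """Module 3: 'train' a toy model (here: just an average word count)."""
    avg = sum(f["words"] for f in features) / len(features)
    return {"avg_words": avg, "lr": learning_rate}

def tune(features, candidate_lrs=(0.01, 0.1, 0.3)):
    """Module 4: pick the best hyper-parameter from a small grid (toy objective)."""
    return min((train(features, lr) for lr in candidate_lrs),
               key=lambda m: abs(m["lr"] - 0.1))

def predict(model, row):
    """Module 5: online (single-record) inference."""
    return len(row.split()) >= model["avg_words"]

def pipeline(raw_rows):
    """Module 6: chain every stage end to end."""
    return tune(featurize(ingest(raw_rows)))

model = pipeline(["Great product!", "  ", "Terrible, would not buy again"])
```

In the bootcamp, each of these stages maps to a managed SageMaker component (Processing Jobs, Training Jobs, HPO, Endpoints, Pipelines) rather than local functions.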

Start Date/Time:
  • North America: July 20th, 9:00 PM PDT
  • Europe/Africa/MiddleEast: July 21st, 5:00 AM BST / 6:00 AM CEST
  • India/Asia/Australia: July 21st, 9:30 AM IST / 12:00 PM Singapore / 2:00 PM Sydney
Agenda:

    • [30 mins] Setup
    • [30 mins] Ingest Data
    • [30 mins] Explore Data
    • [15 mins] Q&A / Break
    • [30 mins] Prepare Data
    • [30 mins] Train Model
    • [30 mins] Q&A / Meal Break
    • [30 mins] Optimize Model
    • [30 mins] Deploy Model
    • [30 mins] Create Pipeline
    • [15 mins] Q&A / Wrap Up

    Attendees will learn how to:
    • Ingest data into S3 using Amazon Athena and the Parquet data format
    • Visualize data with pandas and matplotlib in SageMaker notebooks, and with AWS Data Wrangler
    • Analyze data with the Deequ library, Apache Spark, and SageMaker Processing Jobs
    • Perform feature engineering on a raw dataset using Scikit-Learn and SageMaker Processing Jobs
    • Train a custom BERT model using TensorFlow, Keras, and SageMaker Training Jobs
    • Find the best hyper-parameters using SageMaker Hyper-Parameter Optimization Service (HPO)
    • Deploy a model to a REST Inference Endpoint using SageMaker Endpoints
    • Perform batch inference on a model using SageMaker Batch Transformations
    • Automate the entire process using AWS Step Functions, Amazon EventBridge, and S3 triggers
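To give a feel for the Training Jobs step above, the sketch below only builds a request payload for SageMaker's `CreateTrainingJob` API without sending it. The role ARN, bucket names, and container image URI are placeholders, and in the labs the SageMaker Python SDK assembles this payload for you:

```python
# Build (but do not send) a SageMaker CreateTrainingJob request payload.
# All ARNs, S3 URIs, and the image URI below are placeholders.

def make_training_job_request(job_name, hyperparameters):
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            # Placeholder image URI; real TensorFlow images are region-specific.
            "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/tensorflow-training:2.3",
            "TrainingInputMode": "File",
        },
        "RoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/features/train/",  # placeholder
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/models/"},  # placeholder
        "ResourceConfig": {
            "InstanceType": "ml.p3.2xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
        # SageMaker passes hyper-parameters to the training script as strings.
        "HyperParameters": {k: str(v) for k, v in hyperparameters.items()},
    }

request = make_training_job_request("bert-reviews-001",
                                    {"epochs": 3, "learning_rate": 3e-5})
# A real run would submit it with:
#   boto3.client("sagemaker").create_training_job(**request)
```

Submitting the request requires AWS credentials and a SageMaker execution role, which the bootcamp's free AWS instance provides.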

    Pre-requisites:
    A modern browser - and that is it!
    Nothing will be installed on your local laptop.

    Resources
  • https://github.com/data-science-on-aws/workshop
  • https://datascienceonaws.com
  • Chris & Antje

    Chris Fregly
    Chris Fregly is a Developer Advocate for AI and Machine Learning at Amazon Web Services (AWS) based in San Francisco, California.

    Antje Barth
    Antje Barth is a Developer Advocate for AI and Machine Learning at Amazon Web Services (AWS) based in Düsseldorf, Germany.

    • Date: Jul 20, 21:00 (US Pacific Time)
    • Fee: Free
    • Available Seats: 10 (max 700)