Many AI and machine learning courses cover the theory behind algorithms. In this course, we will train you to become a full-stack machine learning engineer, capable of not just training models but also deploying and managing them in production.
You will learn machine learning primarily by building 8 production-grade services, step by step. You will learn how to build production machine learning services on AWS, how to integrate them with an application, and how to manage them through their lifecycle.
Students who take this course will be able to:
Identify and frame business use cases that can be solved with AI and machine learning
Choose the right techniques, tools, and frameworks for those business use cases
Build production machine learning services on AWS and manage them through their lifecycle
Implement end-to-end machine learning services hands-on
- Session 1: Jun 1, 10:00am-11:30am PDT (US Pacific Time, GMT-7)
- Session 2: Jun 3, 10:00am-11:30am PDT
- Session 3: Jun 8, 10:00am-11:30am PDT
- Session 4: Jun 10, 10:00am-11:30am PDT
- Session 5: Jun 15, 10:00am-11:30am PDT
- Session 6: Jun 17, 10:00am-11:30am PDT
- Session 7: Jun 22, 10:00am-11:30am PDT
- Session 8: Jun 24, 10:00am-11:30am PDT
- 4 weeks / 8 sessions / 12 hours
- 8 lectures / 8 hands-on projects
- Capstone project, Github portfolio
- Live sessions (via Zoom) with real-time interaction
- Slack support after class and for homework
Check the Syllabus tab for full course content.
WHO SHOULD LEARN:
Developers, data scientists, students.
Basic familiarity with machine learning
AWS account (free to sign up). Students use their own AWS account to run the hands-on labs, so all artifacts (models, datasets) stay in their account for further use
Est. time spent per week: 3 hours live class (required) + 3 hours homework (required) + 3 hours projects (bonus, optional).
Full refund upon request before the first session ends (Jun 1, 2021, 12PM PDT). The 5% transaction fee is non-refundable.
If you miss a live session, you can watch the recording at any time, along with interactive learning tools, slides, and course notes
Students have lifetime access to course materials
Earn Certificate of Completion
Scholarships are available; contact us for details
Cohort 11: Mar 29 ~ Apr 21, 2021
Cohort 10: Jan 18 ~ Feb 10, 2021
Cohort 9: Nov 3 ~ Nov 26, 2020
Cohort 8: Sep 15 ~ Oct 8, 2020
Cohort 7: Jul 27 ~ Aug 19, 2020
Cohort 6: Jun 15 ~ Jul 8, 2020
Cohort 5: May 11 ~ Jun 3, 2020
Cohort 4: Mar 17 ~ Apr 9, 2020
Cohort 3: Jan 31, 2020
Cohort 2: Jan 28 ~ Feb 20, 2020
Cohort 1: Oct 30 ~ Nov 22, 2019
This course includes 8 hands-on workshops, and each workshop will cover:
A business use case and how machine learning maps to that use case
An AI/ML algorithm and a technical topic on AI/ML internals. For example, how and when to use regression and classification, and how to map the right algorithm to each data type.
How to build a production machine learning service (feature engineering, model training, model validation, endpoints, Gateway/Lambda integration, application integration).
One or more AWS tools. Across the 8 sessions we will cover AWS SageMaker, built-in algorithms, custom Docker containers, endpoints, AWS Lambda, AWS API Gateway, AWS roles and authentication, AWS CloudWatch, AWS S3, Postman API testing, cURL, production AI best practices, and the microservice design pattern for AI deployment. We will also cover Navigator (by Pyxeda.ai), a lifecycle overlay tool that makes AWS tools easier to configure and use.
You will build 7 working production ML services.
We will cover the following projects (business use cases):
Customer Churn: Detect whether your customers are about to leave your business or your product. Predict which customers are at risk of churn.
Pricing Analysis: Understand what factors most affect price. Predict how price can change with features. Assess the viability of prices for future products.
Customer approvals: Should you approve a loan for a particular customer? Predict whether a customer is likely to be delinquent on a bill.
Appointment planning: Is a customer likely to miss an appointment? Can you plan your appointment schedule more effectively?
Removing bias: Bias in your AI data can lead to poor outcomes, unhappy users or even legal problems. How to detect and remove sources of bias?
Sentiment Analysis: Are your customers happy with your product? What do their comments, tweets, and other writings say about their feelings?
Recommendation system: What can you learn about your customers or users? Can you analyze their usage and see what else you can upsell to some users?
You will learn how to use cloud tools:
Configure and use S3 for your data
Feature engineer your data with Python code
Configure and use AWS SageMaker. Deep dive on the built-in AWS SageMaker algorithms KNN and XGBoost. How to hyper-parameter tune SageMaker algorithms.
Bring custom code into AWS SageMaker as a Docker container
Configure and use a SageMaker endpoint.
Connect a SageMaker endpoint to a public URL via AWS API Gateway and Lambda.
Integrate REST microservices with applications. Use cURL/Postman for API testing.
AWS cloud best practices. CloudWatch for logs, managing endpoints.
Navigator (by Pyxeda.ai) for ease of use. How to use Navigator and AWS together.
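The endpoint-invocation step in the list above can be sketched in Python. This is a minimal illustration, not course code: the endpoint name is hypothetical, and the actual `boto3` call (which needs AWS credentials and a deployed endpoint) is left in comments.

```python
def to_csv_payload(rows):
    """Serialize feature rows into the CSV body that SageMaker
    built-in algorithms (e.g. XGBoost, KNN) accept at invocation time."""
    return "\n".join(",".join(str(v) for v in row) for row in rows)

payload = to_csv_payload([[34, 1, 0, 89.5], [51, 0, 1, 42.0]])

# The invocation itself requires a deployed endpoint, so it is only
# sketched here:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="churn-endpoint",   # hypothetical endpoint name
#     ContentType="text/csv",
#     Body=payload,
# )
# predictions = response["Body"].read()
print(payload)
```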
You will learn machine learning algorithms, algorithm internals, and technical concepts:
How to select an algorithm for a use case, train, deploy and use it in production, and measure how well it is doing.
How to map each use case to machine learning - what type of machine learning can be used, for what data type. How to measure.
Powerful general-purpose algorithms like XGBoost, LinearLearner, and KNN: how they work internally and how to hyper-parameter tune them for best performance.
Metrics and practices for algorithmic evaluation and how to map them to use cases.
AI trust, fairness, and bias. Managing AI-related risks.
Machine learning explainability
Advanced aspects of production ML: live monitoring and diagnosis, model versioning, retraining, and others.
Module 1: Build and run a production ML service in the cloud - Customer Churn
In this first session, we will show the steps needed to build and run a production ML service in the cloud. We will illustrate these steps with our first business use case, Customer Churn.
What students will learn:
- Overview of the production ML lifecycle and all of its steps. How to go from data to running production ML
- Description of cloud services that can be used for each stage
- Overview of a customer churn problem and how to build an ML service for it on AWS with binary classification algorithms
- Lab 1: Build a ML service for Customer Churn and test it.
Module 2: Feature Engineering and KNN. Case study - Customer Churn, Pricing Analysis
This session builds upon the first. We will go over an important aspect of the ML pipeline, Feature Engineering. We will go over the transformations that are required before a dataset can be used by an AWS ML algorithm for building a model. We also examine the use case of Pricing Analysis and how a production AI based on regression techniques can address this use case. We follow the lifecycle steps shown in the first session, but delve into more depth on how to perform appropriate feature engineering for different scenarios. We also introduce attendees to an ML algorithm called KNN and how it can be used for both regression and classification problems. In the code lab, attendees will perform feature engineering transforms, save the transformed data back in S3, and get started with configuring a KNN algorithm in AWS SageMaker.
What students will learn:
- How to perform feature engineering as a part of the AI life cycle in production
- Overview of a business Pricing Problem and how to build an AI for it using Regression Algorithms
- Common approaches to Feature Engineering - One Hot Encoding and Missing Value handling
- Lab 2: Perform feature engineering on two datasets, Pricing and Churn. Build an AI in the cloud using AWS SageMaker (KNN)
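The two feature-engineering transforms named in this module can be sketched in plain Python. The toy functions and data below only illustrate the idea; the course labs would apply these transforms to the real Pricing and Churn datasets.

```python
def one_hot(values, categories=None):
    """One-hot encode a categorical column into 0/1 indicator columns."""
    cats = categories or sorted(set(v for v in values if v is not None))
    return [[1 if v == c else 0 for c in cats] for v in values], cats

def impute_mean(values):
    """Replace missing (None) numeric values with the column mean."""
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in values]

# Hypothetical churn-style columns: a subscription plan and a tenure.
plans = ["basic", "pro", "basic", "enterprise"]
encoded, cats = one_hot(plans)
tenure = impute_mean([12, None, 30, 6])
print(cats)     # ['basic', 'enterprise', 'pro']
print(encoded)  # [[1, 0, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]]
print(tenure)   # [12, 16.0, 30, 6]
```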
Module 3: Evaluating effectiveness of production AI and XGBoost
In this session, we will explore a new algorithm called XGBoost and configure its hyper-parameters in AWS SageMaker. We will also learn how to evaluate the performance of an AI algorithm and use this criterion to tune the hyper-parameters and pick the best model. In particular, we delve into evaluation metrics beyond accuracy and cover both ML metrics (such as the Confusion Matrix) and production service metrics (such as latency and scale) that are important for production. As a case study, we will use the Pricing and Churn datasets that were introduced in the previous sessions.
What students will learn:
- How to build and configure hyper-parameters for both regression and classification type of problems
- Understand the hyper-parameters exposed by AWS SageMaker and use KNN and XGBoost for performing training on the two types of problems: Classification and Regression
- How hyper-parameters affect solution quality and performance
- Lab 3: Train and evaluate several ML algorithms in the cloud using AWS SageMaker and AWS CloudWatch
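As a sketch of the evaluation metrics covered in this module, here is a from-scratch confusion matrix for binary labels, with the precision, recall, and accuracy derived from it. The labels are toy data, not the course datasets.

```python
def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) counts for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
precision = tp / (tp + fp)          # of predicted positives, how many real
recall = tp / (tp + fn)             # of real positives, how many found
accuracy = (tp + tn) / len(y_true)
print(tp, fp, fn, tn)               # 3 1 1 3
print(precision, recall, accuracy)  # 0.75 0.75 0.75
```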
Module 4: AI deployment as a microservice in the Cloud
In this session, we will focus on deployment of the trained ML model into production as a microservice. This will involve several aspects, such as endpoint creation, IAM role creation, and configuration of a Lambda and API Gateway. This session will focus on the application integration of production AI and how to test and evaluate this integration. In the code lab, attendees will build an end-to-end working AI and perform external evaluation of the AI to assess its performance.
What students will learn:
- How to deploy an AI in the cloud as a microservice
- Evaluate a production AI with both ML metrics (like Accuracy, Confusion Matrix, True/False Positive/Negative, etc.) and service metrics (throughput, latency, etc.)
- How to test their AI service using cURL and Postman
- Lab 4: Build an end-to-end production pipeline in the cloud
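A client-side test like the ones done here with cURL/Postman can be sketched with the Python standard library. The Gateway URL and response shape below are hypothetical placeholders, so the request is built but not actually sent.

```python
import json
from urllib import request

# Hypothetical API Gateway URL; a real deployment would have its own.
URL = "https://example.execute-api.us-west-2.amazonaws.com/prod/churn"

def build_request(features):
    """Build the JSON POST request a client (or Postman/cURL) would send."""
    body = json.dumps({"data": features}).encode("utf-8")
    return request.Request(URL, data=body,
                           headers={"Content-Type": "application/json"})

# Parsing the kind of JSON body the Lambda might return (shape assumed):
sample_response = '{"prediction": 1, "score": 0.87}'
result = json.loads(sample_response)
print(result["prediction"], result["score"])  # 1 0.87
```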
Module 5: Custom AI algorithms in the Cloud: Use Case - Sentiment Analysis
In this session, we will focus on creating custom algorithms and using them in the cloud. We dive deep into algorithms for Text Classification, particularly the Bag of Words approach: its operation, advantages, and limitations. We will look into how AWS SageMaker allows the deployment of custom algorithms, such as text classification using Bag of Words. This will involve building a custom Docker container and configuring an AWS client in the local environment to push the container into AWS ECR. This session will focus on building custom algorithms and deploying them as a microservice. In the code lab, attendees will build a custom algorithm (Sentiment Analysis use case) and create a Docker container that conforms to AWS SageMaker requirements for custom algorithms. Attendees will build an end-to-end working AI for the custom algorithm.
What students will learn:
- How to build and evaluate production AIs that use Text Classification algorithms
- Overview of a business Sentiment Analysis problem and how to build an AI for it using Text Classification in either Binary or Multiclass form
- How to build custom AI algorithms
- How to build Docker containers with the custom algorithms that conform to AWS requirements.
- How to push the Docker containers into AWS ECR so that they can be used by SageMaker.
- Lab 5: Build an end-to-end production pipeline in the cloud using custom algorithms.
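The Bag of Words idea behind the custom sentiment algorithm can be sketched in a few lines of plain Python. The vocabulary and weights below are made up for illustration; a real model would learn its weights from training data.

```python
from collections import Counter

def bag_of_words(text, vocab):
    """Count how often each vocabulary word appears in the text."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

# Toy lexicon-style scorer (hypothetical weights, not a trained model).
vocab =   ["great", "love", "bad", "terrible"]
weights = [ 1.0,     1.0,   -1.0,  -1.5]

def sentiment(text):
    score = sum(w * x for w, x in zip(weights, bag_of_words(text, vocab)))
    return "positive" if score > 0 else "negative"

print(sentiment("I love this great product"))    # positive
print(sentiment("terrible quality, really bad")) # negative
```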
Module 6: Skew in AI. How to Detect and Remove It in Production AI. Use Case - Appointment Planning and COMPAS Bias
In this session, we will spend part of the time finishing the end-to-end lifecycle using the custom algorithm described in the previous session, as it typically takes more than one session to cover. After that, we focus on the critical issues of data skew and bias in AI. We cover how skew in data impacts the performance of AI and steps that can be taken to eliminate it. We will also cover the common ways that bias can enter a production AI implementation and how to detect and remove it. As a case study, we use the publicly available Appointment planning and COMPAS datasets. In the code lab, attendees will build several AIs using these datasets. We will also introduce the overlay tool Pyxeda Navigator and provide resources for attendees to use it.
What students will learn:
- What data skew is and how to overcome it.
- AI bias and why it should be avoided.
- How can AI bias enter? How can it be detected and removed?
- How to test for Bias and apply Feature Engineering to reduce it.
- Lab 6: Build several AIs and perform analysis to detect skew and bias
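Two of the checks discussed in this module, class skew and per-group outcome gaps, can be sketched in plain Python. The labels and groups below are toy data, not the COMPAS or appointment datasets.

```python
def class_balance(labels):
    """Fraction of positive labels; values far from 0.5 signal class skew."""
    return sum(labels) / len(labels)

def positive_rate_by_group(labels, groups):
    """Positive rate per group; a large gap between groups hints at bias."""
    rates = {}
    for g in set(groups):
        members = [l for l, gg in zip(labels, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return rates

labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = positive_rate_by_group(labels, groups)
print(class_balance(labels))   # 0.5 -> classes are balanced overall
print(rates["A"], rates["B"])  # 0.75 0.25 -> large gap between groups
```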
Module 7: Mapping AI problems to Techniques and Drift in Production: Case Study - Making Recommendations
In the first 6 sessions, we have covered different types of use cases, how to map AI to each use case, and different aspects of production AI (hyper-parameter tuning, REST API testing and integration, etc.). In this session, we combine these into a methodology for mapping problems to AI methods for numerical, categorical, and free-form text data. We cover in depth another important topic called drift and its impact on real-world production deployments. In the code lab, attendees will build a production recommendation system using publicly available data, illustrating the concepts covered so far.
What students will learn:
- A methodology for mapping use cases (with different data types) into AI algorithms and lifecycle steps
- The concept of drift in production and how it impacts a business
- Techniques to detect drift in production
- Lab 7: Build and use a production AI for recommendations
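A simple mean-shift drift check along the lines discussed here can be sketched in plain Python. The two-standard-deviation threshold and the data are illustrative choices, not course-prescribed ones.

```python
def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def drifted(train, live, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold`
    training standard deviations away from the training mean."""
    return abs(mean(live) - mean(train)) > threshold * std(train)

train = [10, 11, 9, 10, 12, 8, 10, 11]  # feature seen at training time
stable = [10, 9, 11, 10]                # live data, same distribution
shifted = [19, 21, 20, 22]              # live data after drift
print(drifted(train, stable))   # False
print(drifted(train, shifted))  # True
```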
Module 8: Advanced topics in Production Cloud AI, Complete and showcase your project
We will provide an overview of advanced topics that attendees can explore beyond this webinar series. These include Model Versioning, interactions between application versioning and model versioning, diagnostics of production AI in the presence of data changes, A/B testing, and others. In the code lab, students will complete their custom project and create a GitHub project repository to showcase their work. They can also create a project video if they choose to and put it in a hosting facility that we will provide.
What students will learn:
- Best practices for Production AI (model versioning, model integrity, retraining cycles, microservice API management and versioning). How to manage production application changes and production AI upgrades in concert
- Best practices (debugging, instances, your cloud bill :-)).
- Lab 8: Finish the custom project and present projects
Head of Machine Learning at Pyxeda. She was a Post-Doctoral Fellow with BIDMC and the Department of Pathology, Harvard Medical School, where she was involved in detection and classification of features from histopathological (breast cancer) images. She worked as a research scientist with Parallel Machines on monitoring the health of machine learning algorithms in production and has many publications on ML innovations.
Founder of Pyxeda AI. Previously, Nisha co-founded ParallelM which pioneered the MLOps practice of managing machine learning in production.
Senior Machine Learning Engineer at Pyxeda AI