Only a small fraction of a real-world, industry-grade AI application consists of the machine learning code or algorithm; the surrounding infrastructure required to build a shared AI service is vast and complex. If you are struggling with these hidden truths of AI, this course is right for you.
This course prepares you to build a shared AI service end to end by answering critical questions, including: What is the best deep learning approach for enterprise AI? What key factors should you consider in an AI engineering platform? How do you become a qualified AI engineer?
There are also two hands-on code labs and one live demo: we walk through a benchmark between Spark machine learning and Spark deep learning using a user-item propensity model example, and show you how to build an end-to-end AI pipeline with Kafka, NiFi, Spark Streaming, and Keras on Spark.
- Session 1: Jan. 30th Wed 10am-12pm PT
- Session 2: Feb. 1st Fri 10am-12pm PT
- 16 lessons / 4 hours of live talks
- 2 code labs / 1 live demo
- Real-time interaction with instructors
- Watch recorded videos any time
Check the content tab for the full course outline.
WHO SHOULD ATTEND:
Architects or engineering managers who are building, or preparing to build, an enterprise AI service platform
Developers interested in building large-scale deep learning systems on the Spark ecosystem
Data scientists, data engineers, or application developers seeking career growth or a transition into an AI engineer role
Basic knowledge of machine learning and Python or Scala/Java programming skills are preferred
Not sure if the course is right for you? Try the first session for free; we will refund you before Jan. 31st, when the second session starts.
If you miss a live session or want to review the material, you can watch the recorded lessons any time.
You can also discuss live with the instructors and other developers in the Slack group.
Module 1: Case study: AI as a Service
- A typical end-to-end AI service
- Hidden truths of AI
- Options for AI as a Service
- The journey to AI as a Service
- Challenges of traditional machine learning
- How deep learning can improve on it
- Enterprise requirements for deep learning
- Evaluating deep learning approaches, and Keras on Spark
Module 2: Keras on Spark
- Keras introduction
- Options for running Keras on Spark
- Building a user-item propensity model with deep learning algorithms
- Use cases for the user-item propensity model
- The Neural Collaborative Filtering (NCF) deep learning algorithm
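As a rough illustration of what a user-item propensity model computes, here is a minimal pure-Python sketch. It is not the course's NCF implementation: the embedding values below are made-up, hand-picked numbers, and NCF replaces the fixed dot product with a learned neural network.

```python
# Sketch: propensity scoring as a dot product of user and item embeddings.
# The embeddings here are hypothetical, hand-picked values for illustration;
# in practice they are learned from interaction data.

user_emb = {
    "alice": [0.9, 0.1],
    "bob":   [0.2, 0.8],
}
item_emb = {
    "laptop": [1.0, 0.0],
    "novel":  [0.0, 1.0],
}

def propensity(user, item):
    """Score how likely `user` is to engage with `item`."""
    return sum(u * i for u, i in zip(user_emb[user], item_emb[item]))

def recommend(user):
    """Rank all items for a user by predicted propensity, highest first."""
    return sorted(item_emb, key=lambda it: propensity(user, it), reverse=True)
```

Here `recommend("alice")` ranks "laptop" first because her embedding aligns with it; NCF generalizes this by feeding the embeddings through a multi-layer network instead of a plain dot product.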
Code Lab 1
- Build a Docker image and run a Keras on Spark container
- Run the NCF deep learning pipeline for the user-item propensity model
Module 3: AI Engineering platform and AI Engineers
- Key factors to consider in an AI engineering platform
- Architecting a data pipeline framework
- Apache NiFi introduction
- The traditional AI tribe and its challenges
- Knowledge and skills required of an AI engineer
- The growth path for an AI engineer
Module 4: Benchmarking Spark machine learning against Spark deep learning
- Traditional collaborative filtering with Spark MLlib ALS (Scala)
- An NCF deep learning approach with Intel Analytics Zoo on Spark (Scala)
Code Lab 2: Spark MLlib (ALS) vs. Intel Analytics Zoo on Spark (NCF)
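Benchmarks like the one in Code Lab 2 are typically scored with a ranking metric such as hit rate at k. Here is a minimal, framework-agnostic Python sketch of that metric (not the lab's Spark code); the model outputs below are toy, made-up data:

```python
def hit_rate_at_k(recommendations, held_out, k=10):
    """Fraction of users whose held-out item appears in their top-k
    recommendations -- a common way to compare recommenders such as
    ALS and NCF on the same test split."""
    hits = sum(1 for user, item in held_out.items()
               if item in recommendations[user][:k])
    return hits / len(held_out)

# Toy example: two models' top-3 lists for the same two users,
# with one held-out true interaction per user.
held_out = {"u1": "i3", "u2": "i7"}
model_a = {"u1": ["i1", "i3", "i2"], "u2": ["i4", "i5", "i6"]}  # hits u1 only
model_b = {"u1": ["i3", "i1", "i2"], "u2": ["i7", "i4", "i5"]}  # hits both users
```

With these toy lists, `hit_rate_at_k(model_a, held_out, k=3)` is 0.5 and `hit_rate_at_k(model_b, held_out, k=3)` is 1.0, so model B wins on this metric.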
Live Demo: Build an end-to-end AI pipeline with Kafka, NiFi, Spark Streaming, and Keras on Spark
Jack is a software engineering director and the chapter leader for data engineering and AI at Mastercard, where he leads worldwide teams building data engineering and AI shared services and capabilities for the company. He has more than 10 years of experience in big data and AI across the telecom and finance industries, and he blends deep business and technical expertise to help business groups build and grow their business insights and intelligence capabilities. Jack is a big data and AI technical evangelist and an active speaker who has delivered many talks at Strata, the Spark + AI Summit, AI NextCon, AI Expo, and other conferences. He also has a passion for coaching people, helping them grow and develop in their areas of expertise and ensuring alignment on the “how” of the work they perform.