Understanding and Removing Unfair Bias in ML

Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined due to illegal bias. So how do you remove bias and discrimination from the machine learning pipeline?

In this webinar you’ll learn debiasing techniques that can be implemented using the open source toolkit AI Fairness 360.

AI Fairness 360 (AIF360) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution to bring together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from top AI fairness researchers across industry and academia.
In this webinar you’ll learn:

  • How to measure bias in your data sets and models
  • How to apply fairness algorithms to reduce bias
  • How to apply bias measurement and mitigation in a practical, data-driven medical care management use case
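To give a flavor of what "measuring bias" means, here is a minimal, self-contained sketch of one widely used metric, statistical parity difference: the favorable-outcome rate for the unprivileged group minus that for the privileged group. This is a plain-Python illustration of the idea the toolkit automates, not AIF360's actual API; the function name and toy data are hypothetical.

```python
# Statistical parity difference, computed by hand on toy data.
# This illustrates the metric AIF360 automates; it is NOT the AIF360 API.

def statistical_parity_difference(labels, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).

    0 means parity; a negative value means the unprivileged group
    receives the favorable outcome less often.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy data: label 1 = favorable outcome; "M" is the privileged group here.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
spd = statistical_parity_difference(labels, groups, privileged="M")
print(spd)  # 0.25 - 0.75 = -0.5
```

A mitigation algorithm such as reweighing would then adjust instance weights so this gap shrinks before a model is trained on the data.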
Speaker: Upkar Lidder

    Upkar Lidder is a full stack developer and data wrangler with a decade of development experience in a variety of roles. He speaks at conferences and participates in local tech groups and meetups.
    • Date: Aug 17, 10:00 (US Pacific Time)
    • Fee: Free
    • Available Seats: 57 (max 200)