There is a growing need for machine learning models that are both explainable and fair, free from bias. In this two-part talk, we will present an introduction to explainability and bias in machine learning.
In the first part, we will give an overview of techniques, such as LIME and SHAP, for explaining machine learning models and their predictions. Beyond explanation, these methods can also help us debug models and uncover their flaws. In the second part of the talk, we will go over some of the state-of-the-art methods for detecting and mitigating bias, and briefly discuss the general challenges of handling bias.
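To make the explainability part concrete: the core idea behind LIME is to explain a single prediction by fitting a simple linear surrogate to the black-box model in a small neighborhood of the input, so the surrogate's weights act as local feature importances. Below is a minimal, self-contained sketch of that idea in plain Python; the toy model and all function names are hypothetical, and real LIME (or SHAP) fits a joint sparse model over perturbed samples rather than per-feature slopes.

```python
import random

# Hypothetical black-box model: predicts a score from two features.
def black_box(x):
    return 3.0 * x[0] ** 2 + 2.0 * x[1] + 1.0

def local_linear_weights(model, x0, num_samples=500, scale=0.1, seed=0):
    """Toy LIME-style explanation: perturb each feature of x0 with small
    Gaussian noise, then regress the change in model output on the
    perturbation (slope = cov(dx, dy) / var(dx)).
    Returns one local weight per feature."""
    rng = random.Random(seed)
    base = model(x0)
    weights = []
    for i in range(len(x0)):
        dxs, dys = [], []
        for _ in range(num_samples):
            dx = rng.gauss(0.0, scale)
            xp = list(x0)
            xp[i] += dx
            dxs.append(dx)
            dys.append(model(xp) - base)
        mean_dx = sum(dxs) / num_samples
        mean_dy = sum(dys) / num_samples
        cov = sum((a - mean_dx) * (b - mean_dy) for a, b in zip(dxs, dys))
        var = sum((a - mean_dx) ** 2 for a in dxs)
        weights.append(cov / var)
    return weights

# At x0 = [1.0, 0.0] the model's local gradient is roughly [6.0, 2.0],
# so the surrogate weights should land near those values.
w = local_linear_weights(black_box, [1.0, 0.0])
print(w)
```

The recovered weights tell us which features drive this particular prediction locally, which is exactly the kind of signal that makes these methods useful for debugging: a large weight on a feature that should be irrelevant is a red flag.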
Research Scientist at RealityEngines.AI