CS Talk
Vikram V. Ramaswamy
Host: Holly Rushmeier
Title: Measuring machine learning models for fairness
Abstract:
Machine learning models are now incredibly powerful and are increasingly used to make important decisions about people. In this lecture, we'll discuss three different metrics for evaluating these models for fairness: demographic parity, equalized odds, and parity of predictive values, and show that they are very often incompatible with one another. We'll reason about the normative standards each of these metrics sets, and finally see how they fit into the larger space of machine learning fairness.
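For reference, below is a minimal sketch of the per-group quantities behind the three metrics named above, for a binary classifier. The data, group names, and the helper `rates` are purely illustrative assumptions, not material from the talk: demographic parity compares P(prediction = 1) across groups, equalized odds compares true- and false-positive rates, and parity of predictive values compares precision (PPV).

```python
import numpy as np

def rates(y_true, y_pred):
    """Per-group quantities behind the three fairness metrics (binary 0/1 labels)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    positive_rate = y_pred.mean()        # demographic parity: P(prediction = 1)
    tpr = y_pred[y_true == 1].mean()     # equalized odds: true-positive rate
    fpr = y_pred[y_true == 0].mean()     # equalized odds: false-positive rate
    ppv = y_true[y_pred == 1].mean()     # parity of predictive values: precision
    return positive_rate, tpr, fpr, ppv

# Two hypothetical groups with different base rates of the true label.
group_a = (np.array([1, 1, 1, 0, 0, 0, 0, 0]),   # y_true
           np.array([1, 1, 0, 1, 0, 0, 0, 0]))   # y_pred
group_b = (np.array([1, 1, 1, 1, 1, 0, 0, 0]),
           np.array([1, 1, 1, 1, 0, 1, 0, 0]))

for name, (yt, yp) in [("A", group_a), ("B", group_b)]:
    pr, tpr, fpr, ppv = rates(yt, yp)
    print(f"group {name}: P(pred=1)={pr:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}  PPV={ppv:.2f}")
```

With this toy data the two groups differ on all three quantities; more generally, when base rates differ across groups, a classifier cannot equalize all three families of metrics at once except in degenerate cases, which is the incompatibility the abstract refers to.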
Bio:
Vikram V. Ramaswamy is a PhD candidate at Princeton University, working with Olga Russakovsky. He is interested in fairness and interpretability in machine learning, and how these apply to visual systems. In addition to his research and teaching, he is passionate about increasing diversity in higher education and has participated in numerous programs toward that goal, including the Princeton Freshman Scholars Initiative, a program for first-generation and low-income students, and Princeton AI4ALL, a program for high school students aimed at increasing diversity in AI.