Algorithmic Fairness

This article was first published on YoungStatS and kindly contributed to R-bloggers.



2nd joint webinar of the IMS New Researchers Group, the Young Data Science Researcher Seminar Zürich, and the YoungStatS Project.

When & Where:

  • Tuesday, October 3rd, 2023, 7:30 PT / 10:30 ET / 16:30 CEST
  • Online, via Zoom. The registration form is available here.

Speakers:

  • Linjun Zhang, Rutgers University: “Fair conformal prediction”

Abstract: Multi-calibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y from covariates x, and for a function class C, multi-calibration requires that the predictor f(x) and the outcome y be indistinguishable under the class of auditors in C. Fairness is captured by including demographic subgroups in the class C. Recent work has shown that, by enriching C with appropriate propensity reweighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen future target domains that are (approximately) captured by the reweightings. This talk extends the multi-calibration notion and explores the power of an enriched class of mappings. It proposes HappyMap, a generalization of multi-calibration that yields a wide range of new applications: a new fairness notion for uncertainty quantification (conformal prediction), a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data. It also provides a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches. A single HappyMap meta-algorithm captures all of these results, together with a sufficient condition for its success.
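To make the definition above concrete, here is a toy audit in R. It is a sketch under simplifying assumptions, not the talk's method: for a handful of subgroup indicators standing in for the auditor class C, it checks whether the residual y − f(x) averages to roughly zero on each subgroup. All data, names, and the auditor class below are purely illustrative.

```r
# Toy multi-calibration-style audit (illustrative sketch only, not the
# talk's HappyMap algorithm). For each auditor in a small class C of
# subgroup indicators, check whether the residual y - f(x) averages to
# roughly zero on that subgroup.
set.seed(1)
n   <- 5000
dat <- data.frame(age = runif(n, 18, 80), grp = rbinom(n, 1, 0.4))

y_prob <- plogis(-2 + 0.03 * dat$age + 0.5 * dat$grp)  # true outcome probability
f_hat  <- plogis(-2 + 0.03 * dat$age)                  # predictor that ignores grp

auditors <- list(                      # stand-in for the auditor class C
  everyone = rep(TRUE, n),
  grp_1    = dat$grp == 1,
  young    = dat$age < 30
)
sapply(auditors, function(idx) mean(y_prob[idx] - f_hat[idx]))
# A mean far from 0 (here, on grp_1) flags a subgroup on which f is
# miscalibrated, i.e., an auditor in C that distinguishes f(x) from y.
```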

  • Mikhail Yurochkin, IBM Research and MIT-IBM Watson AI Lab: “Operationalizing Individual Fairness”

Abstract: Societal applications of ML have proved challenging because algorithms replicate, or even exacerbate, biases in the training data. In response, a growing body of research on algorithmic fairness attempts to address these issues, primarily via group definitions of fairness. In this talk, I will illustrate several shortcomings of group fairness and present an algorithmic fairness pipeline based on individual fairness (IF). IF is often regarded as the more intuitive notion of fairness: we want ML models to treat similar individuals similarly. Despite its benefits, challenges in formalizing the notion of similarity and in enforcing equitable treatment have prevented the adoption of IF. I will present our work addressing these barriers via algorithms for learning the similarity metric from data, and methods for auditing and training fair models that exploit the intriguing connection between individual fairness and adversarial robustness. Finally, I will demonstrate applications of IF with Large Language Models.
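As a minimal illustration of the "treat similar individuals similarly" idea, the R sketch below fits a model on synthetic data and compares its predictions on two inputs that are identical except for a group attribute. This is a hedged toy example, not the speakers' metric-learning or auditing algorithms; all data and variable names are made up for illustration.

```r
# Toy individual-fairness check (illustrative sketch only; the talk's
# methods learn the similarity metric from data and connect IF to
# adversarial robustness, neither of which is attempted here).
set.seed(2)
d   <- data.frame(score = rnorm(200), group = rbinom(200, 1, 0.5))
d$y <- rbinom(200, 1, plogis(d$score + 0.8 * d$group))  # group-biased outcomes

m <- glm(y ~ score + group, family = binomial, data = d)

# Two "similar individuals": identical score, differing only in group.
pair  <- data.frame(score = c(0.5, 0.5), group = c(0, 1))
preds <- predict(m, newdata = pair, type = "response")
abs(diff(preds))  # a large gap flags unequal treatment of similar individuals
```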

Discussant: Razieh Nabi, Emory University

The YoungStatS project of the Young Statisticians Europe initiative (FENStatS) is supported by the Bernoulli Society for Mathematical Statistics and Probability and the Institute of Mathematical Statistics (IMS).

If you missed this webinar, you can watch the recording on our YouTube channel.
