I’m in Washington, DC today to present our FAIR-Frame paper at SIGIR.

In the paper, we present a framework for thinking about fairness in AI holistically: all available demographic dimensions are analyzed together, rather than in isolation, to assess the fairness of a model. Estimating the relationship between demographics and model error is more predictive of downstream fairness than simply computing a fairness metric from empirical probabilities. More broadly, zooming out can give a better perspective on disparities overall.
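
To make the contrast concrete, here is a minimal sketch (not the paper's method) of the difference between a single-axis fairness metric and a joint model of error across demographic dimensions. The data, column names, and logistic-regression choice are all illustrative assumptions, not taken from FAIR-Frame.

```python
# Illustrative sketch: single-axis fairness metrics vs. a joint model of error.
# All attributes, data, and modeling choices here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical demographic attributes and a per-example error indicator
# (1 = the model got this example wrong).
df = pd.DataFrame({
    "gender": rng.choice(["f", "m"], n),
    "age_band": rng.choice(["18-34", "35-54", "55+"], n),
    "region": rng.choice(["urban", "rural"], n),
})
# Simulate error that depends on an interaction of attributes,
# which single-axis metrics tend to average away.
p_err = 0.10 + 0.15 * ((df["gender"] == "f") & (df["region"] == "rural"))
df["error"] = rng.binomial(1, p_err)

# Single-axis view: error-rate gap along one dimension at a time.
for col in ["gender", "age_band", "region"]:
    rates = df.groupby(col)["error"].mean()
    print(f"{col}: max gap = {rates.max() - rates.min():.3f}")

# Joint view: model error as a function of all dimensions (plus an interaction),
# so disparities that only appear at intersections become visible.
X = pd.get_dummies(df[["gender", "age_band", "region"]], drop_first=True)
X["f_x_rural"] = ((df["gender"] == "f") & (df["region"] == "rural")).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, df["error"])
for name, coef in zip(X.columns, clf.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

In this toy setup the per-dimension gaps look modest because the disparity sits at an intersection of attributes, while the joint model surfaces it directly in the interaction term. That is the intuition behind analyzing demographic dimensions together rather than one at a time.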