ACMI Lab | Fairness/Accountability


Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning, hoping to operationalize predictions as drivers of automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice and fairness, have no natural formulation within a purely predictive framework. To mitigate these problems, researchers have proposed a variety of metrics that quantify deviations from the statistical parities we might expect to observe in a fair world, along with algorithms that attempt to satisfy subsets of these parities, or to trade off the degree to which they are satisfied against utility. We are interested in studying the wider systems in which these predictions operate. To give practical guidance to ML practitioners, what must we know about how disparities arose in the first place, which decisions the predictions influence, the normative principles for assigning responsibility, and the impacts of candidate interventions?
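To make the notion of a parity metric concrete, here is a minimal illustrative sketch (not any particular proposal from the literature) of one widely discussed quantity, the demographic parity gap: the difference in positive-prediction rates between two groups. The function name and toy data are our own for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    A gap of 0 means the classifier issues positive predictions at equal
    rates across groups; larger values indicate greater disparity.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: group 0 receives positive predictions at rate 0.75,
# group 1 at rate 0.25, so the gap is 0.5.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # → 0.5
```

Note that a metric like this is purely observational: it says nothing about how the disparity arose, what decision the prediction feeds, or who bears responsibility, which is precisely why we look beyond the metrics to the wider systems.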