
Robustness

We might hope that well-designed software systems would fire off warnings when faced with unexpected inputs. ML systems, however, depend strongly on properties of their inputs (e.g., the i.i.d. assumption) and tend to fail silently. Faced with distribution shift, we wish (i) to detect the shift, (ii) to quantify it, and (iii) to correct our classifiers on the fly, when possible. Unfortunately, absent assumptions, the problem is impossible. Moreover, many modern methods claim robustness without making clear which assumptions (if any) they actually address. Our work focuses both on researching foundational principles for building systems that can be relied upon in the wild and on developing a body of empirical work to guide practitioners.
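As a concrete illustration of (i) and (ii), the sketch below pairs a simple two-sample test, which flags a change in the distribution of a model's scores, with a black-box estimate of label-shift importance weights under the assumption that p(x | y) stays fixed while the label marginal changes. The function names, the choice of statistic, and the test itself are illustrative assumptions, not a prescription from our papers.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(source_scores, target_scores, alpha=0.05):
    """Two-sample KS test on a 1-D statistic of the model's outputs
    (e.g., max softmax confidence). Rejecting the null flags a possible
    distribution shift; it does not identify the kind of shift."""
    stat, p_value = ks_2samp(source_scores, target_scores)
    return p_value < alpha, p_value

def estimate_label_shift_weights(source_preds, source_labels, target_preds, num_classes):
    """Black-box shift estimation under the label-shift assumption:
    solve C w = mu for importance weights w[y] = q(y) / p(y), where
    C[i, j] = p(pred = i, y = j) on source data and mu[i] = q(pred = i)
    on (unlabeled) target data."""
    C = np.zeros((num_classes, num_classes))
    for pred, label in zip(source_preds, source_labels):
        C[pred, label] += 1.0
    C /= len(source_preds)
    mu = np.bincount(target_preds, minlength=num_classes) / len(target_preds)
    # Least-squares solve; clip to keep the estimated weights non-negative.
    w, *_ = np.linalg.lstsq(C, mu, rcond=None)
    return np.clip(w, 0.0, None)
```

In this sketch, the estimated weights could then be used to reweight the training loss or recalibrate the classifier's predicted class priors; whether that correction is valid depends entirely on whether the label-shift assumption holds.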