Cynthia Dwork is a giant in the computer science community. Among her notable contributions to the field is "differential privacy," a family of techniques that safeguards the privacy of individuals in a large database.
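To make the idea concrete, here is a minimal sketch of one classic differentially private mechanism, the Laplace mechanism, applied to a counting query. This is an illustration of the general technique, not code from Dwork's work; the function name and parameters are my own. A count changes by at most 1 when one person is added to or removed from the database (sensitivity 1), so adding Laplace noise with scale 1/ε gives ε-differential privacy.

```python
import numpy as np

def private_count(data, predicate, epsilon, rng=None):
    """Answer a counting query with epsilon-differential privacy.

    Counting queries have sensitivity 1: one individual's presence
    changes the count by at most 1, so Laplace noise with scale
    1/epsilon masks any single person's contribution.
    """
    rng = rng if rng is not None else np.random.default_rng()
    true_count = sum(1 for row in data if predicate(row))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical usage: how many records exceed a threshold?
ages = [23, 45, 67, 34, 52, 71, 29, 48]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

Smaller ε means more noise and stronger privacy; the analyst trades accuracy for a formal guarantee that no single individual's data meaningfully changes the output distribution.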

In a recent interview with Quanta Magazine, she describes her most recent interest: algorithmic fairness. She observes that algorithms increasingly control the kinds of experiences we have: they determine the advertisements we see online, the loans we qualify for, the colleges that students get into. Given this influence, it's important that algorithms classify people in ways that are consistent with commonsense notions of fairness. We wouldn't think it ethical for a bank to offer one set of lending terms to minority applicants and another to white applicants. But as recent work has shown — most notably in the book "Weapons of Math Destruction," by the mathematician Cathy O'Neil — discrimination that we reject in normal life can creep into algorithms.
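One way to quantify whether a classifier treats groups consistently is to measure its "demographic parity" gap: the difference in positive-decision rates across groups. This is just one simple group-level notion of fairness (and the field, including Dwork's own work, debates its adequacy); the sketch below, with hypothetical names and data, shows how such an audit might look.

```python
def demographic_parity_gap(decisions, groups):
    """Gap between the highest and lowest approval rates across groups.

    decisions: list of 0/1 outcomes (e.g. loan approved or not).
    groups: parallel list of group labels for each decision.
    Returns 0.0 when every group is approved at the same rate.
    """
    by_group = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    rates = [sum(ds) / len(ds) for ds in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit: approval outcomes for applicants in two groups.
decisions = [1, 0, 1, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
```

A large gap flags a disparity worth investigating, though equal rates alone do not guarantee that individuals are being treated fairly — which is part of why the definitional questions Dwork raises remain open.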

This is still a field of wide-open problems, and it is bound to remain so as long as people disagree on what "fair" should look like in everyday life. But Dr. Dwork suggests that an important first step will be transparency. I would also argue that, beyond transparency, navigating these issues will require greater literacy (from everyone) in the learning and decision systems that are becoming more and more pervasive.