Credit: Lucy Reading-Ikkanda/Simons Foundation

New Simons Collaboration Aims to Provide a Mathematical Language for Fairness

A joint effort by a group of researchers seeks a rigorous footing on which to evaluate social trade-offs.

It’s easy to consider computers objective observers of reality, parsing data to make disinterested judgments. In 2016, however, an article published by the website ProPublica introduced many people outside computer science to the notion of algorithmic bias: Artificial intelligence can be unfair. The article described a piece of software, called COMPAS, used by some criminal justice systems to predict recidivism — the likelihood of re-arrest — and inform decisions about bail. The article reported that COMPAS overestimated recidivism rates for Black Americans and underestimated them for white Americans.

But the story grew more complicated. It turns out there’s more than one way, mathematically, to define unfairness. You could compare, say, false positive rates between two groups of people — the likelihood that someone will be mistakenly predicted to re-offend. Or false negative rates, the likelihood that someone will be mistakenly predicted not to re-offend. Soon after the ProPublica piece came out, researchers at Cornell and Harvard universities published a paper proving that under most circumstances, it’s impossible for algorithms like COMPAS to satisfy several common definitions of fairness at once. To complicate things further, there are many more ways to be fair. One researcher, in a tutorial at a 2018 conference, described 21 definitions and allowed for many beyond that.
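As a rough illustration of those two error rates, here is a minimal sketch (with entirely invented labels and predictions, not COMPAS data) of how one might compute them separately for two groups:

```python
# Illustration only: computing group-wise false positive and false negative
# rates for a binary "will re-offend" prediction. All data here is invented.

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for 0/1 labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical outcomes (1 = re-arrested) and predictions for two groups.
group_a = {"true": [0, 0, 1, 1, 0, 1], "pred": [1, 0, 1, 1, 1, 0]}
group_b = {"true": [0, 1, 0, 1, 0, 0], "pred": [0, 1, 0, 0, 0, 1]}

for name, group in [("A", group_a), ("B", group_b)]:
    fpr, fnr = error_rates(group["true"], group["pred"])
    print(f"group {name}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

Impossibility results of the kind mentioned above show that, when the groups’ underlying rates of re-arrest differ, no imperfect predictor can equalize both of these error rates while also remaining calibrated for each group.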

To better understand this complex mathematical terrain, the Simons Foundation created the Simons Collaboration on the Theory of Algorithmic Fairness, awarding 13 principal investigators in theoretical computer science a total of $2 million a year for four years (extendable to seven), starting this past January. The researchers, scattered across several institutions, work on their own projects while connecting with one another through weekly research seminars and monthly meetings; they also plan to visit each other’s labs in the future. “What we’re trying to do is give the language necessary for discussing fairness,” says Omer Reingold, a computer scientist at Stanford University and the collaboration’s director.

 

Video courtesy of the Simons Collaboration on the Theory of Algorithmic Fairness

 

Algorithms process large amounts of data to learn patterns and make recommendations: who should receive a loan, or be treated for cancer, or go to jail. “The collaboration’s long-term goals are to build frameworks that can ultimately inform deployments,” says Aaron Roth, a computer scientist at the University of Pennsylvania and a collaboration member, “but the way to get there is to first build strong, rigorous foundations.”

Some transfer of the collaboration’s work from theory to application has already begun. Jon Kleinberg, a computer scientist at Cornell and a collaboration member, co-authored the 2016 paper showing that there was no way for the COMPAS recidivism scores to satisfy several common definitions of fairness simultaneously. He says the work has since found its way into policy briefs and legislative testimony. Kleinberg also co-authored a 2020 paper exploring employers’ use of hiring algorithms from both technical and legal viewpoints, and one of his co-authors now advises companies on the use of such software.

 

An open field

While the notion of fairness has been studied in computer science for decades, the field has expanded rapidly in the last few years, Reingold says, thanks to a couple of factors. First, some foundational papers have been published. One was a 2012 paper, presented at the Innovations in Theoretical Computer Science Conference, in which several researchers who are now collaboration members defined a notion of fairness in which individuals are compared with similar individuals, rather than groups being compared with groups. Another was Kleinberg’s 2016 paper about incompatible notions of fairness. “Theoreticians really love impossibilities,” Reingold says. “It can push them to be more creative and to understand things better.” The second factor has been the explosion in applications of machine learning, including systems developed by the biggest companies in the world that show evidence of discrimination.

“This area feels to me a little bit like cryptography in the ’80s,” Reingold says. “There had already been a lot of good work, but still many of the basic notions and definitions were missing. That was a very productive decade, and a lot of the things that we now find completely trivial came into place. My hope is that this project will do the same for fairness.”

But aren’t there already more than 20 definitions of fairness? Yes, Reingold says, and the field has accumulated even more since that 2018 tutorial. He says they’re all needed, comparing them to the many notions of secrecy in cryptography. But what exists now is mostly a collection of isolated definitions. He’d like to connect them into a landscape, or organize them into families, so that insights about one kind of fairness can help illuminate others.

One such family is group fairness, which compares statistics such as average false positive or false negative rates between two groups. Reingold says group fairness is the most commonly used family, which is unfortunate, because definitions in this category are also the weakest. An algorithm might seem fair when we consider how it treats two groups on average, but it may still discriminate, because people within each of those groups aren’t homogeneous. Other conceptions of fairness compare individuals to other individuals who are similar to them, or consider intersectionality, the overlap of demographic variables. The possibilities are endless.
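To see why group-level averages can be weak, consider a made-up acceptance rate (hypothetical numbers, not from any study) that looks identical for two groups overall yet differs sharply once each group is split into subgroups:

```python
# Invented numbers: an acceptance rate that is identical for two groups on
# average, yet hides large disparities within each group's subgroups.

rates = {
    # group: {subgroup: (accepted, applicants)}
    "A": {"young": (45, 50), "old": (5, 50)},
    "B": {"young": (5, 50), "old": (45, 50)},
}

for group, subgroups in rates.items():
    accepted = sum(a for a, n in subgroups.values())
    applicants = sum(n for a, n in subgroups.values())
    print(f"group {group}: overall acceptance {accepted / applicants:.0%}")
    for name, (a, n) in subgroups.items():
        print(f"  {name}: {a / n:.0%}")
```

Both groups are accepted at 50 percent overall, yet each contains a subgroup accepted at only 10 percent, exactly the kind of disparity that intersectional definitions are designed to catch.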

Indeed, there are some big challenges for the field, Reingold says. One is the multitude of definitions of fairness, which can contradict one another and force trade-offs. Another is the compositional nature of computer systems: multiple components or parties can each be fair, yet discrimination can result when they interact.

Reingold says theoreticians are well suited to solve some of these issues. First, they’re good at finding the right level of abstraction for a solution, so that it’s specific enough to help develop new algorithms but broad enough to apply widely. Second, “we bring a lot of tools to the mix,” he says. “Cryptography, privacy, game theory, economics.”

The theorists in this collaboration should make some significant progress, Reingold says. “What we have is a group of people who have been very active in this area, responsible for some of the biggest successes so far, working together, identifying challenges, communicating and creating a community.”

The concept of algorithmic fairness is highly complex, touching on many aspects of society. Collection of terms provided by Omer Reingold.

 

It’s all about trade-offs

While fairness may seem to be the domain of philosophers, it’s a natural application for mathematicians and game theorists, says Katrina Ligett, a computer scientist at the Hebrew University of Jerusalem and a collaboration member. “In my mind, theoretical computer science is about trade-offs. We’re trying to understand the tensions between various things that you might want,” such as accuracy, efficient use of data and efficient use of computation. “And fairness is just another thing in conflict with other things.” (Not to mention the conflicts between different types of fairness.)

Fairness might come at the cost of privacy, for example. To make sure you’re treating groups equally, you need to collect data about the people in those groups. “I don’t think the interesting question is ever: Let me maximize in one of these dimensions,” Ligett says. “It’s more: Let me expose the trade-offs so decision-makers are aware of them.”

One set of conflicts Ligett studies involves signaling, the strategic transmission of information. Imagine a group of high schools that want to get their students into college. A school could give all of its students A’s, but then an A would cease to carry meaning, so schools might game the system by giving out just the right number of A’s. Some schools might be better than others at such signaling. To help colleges, one might introduce standardized tests. But, Ligett has found, even when the tests help colleges make more accurate admissions decisions, if there’s a bit of noise in the tests — if they don’t predict college performance perfectly — they can increase the disparity between schools. Here, trying to increase fairness actually reduces it. In the field of fairness, not only are there trade-offs, there are also counterintuitive effects.

 

A certain measure of unfairness towards schools without strategic signaling (versus schools that signal strategically) can be worse in the presence of a standardized test than without the standardized test (for certain ranges of the model parameters). Adapted from Immorlica et al. "Access to population-level signaling as a source of inequality." Proceedings of the Conference on Fairness, Accountability, and Transparency. 2019.
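The model in Immorlica et al. is considerably richer than can fit here, but a toy sandbox (invented parameters; it does not reproduce the paper’s finding) shows the kind of experiment involved: compare how many students from each school clear an admissions cutoff when a noisy test score is combined with strategically inflated grades.

```python
import random

random.seed(0)

# Toy sandbox, not the model from Immorlica et al.: two schools have the same
# distribution of student ability, but school A inflates grades more than
# school B. Colleges admit students whose combined score clears a cutoff.

def admissions(use_test, test_noise=1.0, n=50_000, cutoff=1.5):
    admitted = {"A": 0, "B": 0}
    inflation = {"A": 0.5, "B": 0.0}  # hypothetical grade inflation per school
    for school in ("A", "B"):
        for _ in range(n):
            ability = random.gauss(0, 1)
            grade = ability + inflation[school]           # strategic signal
            if use_test:
                test = ability + random.gauss(0, test_noise)  # imperfect test
                score = (grade + test) / 2
            else:
                score = grade
            if score > cutoff:
                admitted[school] += 1
    return admitted

print("grades only:  ", admissions(use_test=False))
print("grades + test:", admissions(use_test=True))
```

In the paper’s full model, schools choose their grading policies strategically and differ in how well they can signal, which is where the counterintuitive increase in disparity arises; this sketch only sets up the comparison.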

 

Ligett also studies asymmetries in data. For example, a bank might try to predict who will be most likely to pay off a loan, but it must make decisions based on the behavior of people who have already received loans. If some of the people who didn’t receive loans would have repaid them, the bank wouldn’t know. Ligett has shown that to paint a better picture of who will and won’t repay a loan, and to increase fair lending, the bank might benefit from exploration — giving some seemingly risky loans just to learn if they’re truly risky. Some demographics may seem risky only because they’ve never been given a chance.
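Here is a minimal sketch of that feedback loop (invented numbers and a deliberately simple rule, not Ligett’s model): a bank that never approves a group it believes to be risky can never correct its belief, while a small amount of exploration lets the true repayment rate emerge.

```python
import random

random.seed(1)

# Sketch of the selective-feedback issue described above. The bank observes
# repayment only for approved loans, so its estimate of a group it never
# approves stays frozen unless it explores: approving some seemingly risky
# applicants just to learn the truth. All numbers are invented.

TRUE_REPAY_RATE = 0.8      # the group's actual repayment rate (unknown to bank)
PRIOR_ESTIMATE = 0.4       # bank's initial, pessimistic estimate
APPROVAL_THRESHOLD = 0.6   # bank approves when its estimate exceeds this

def simulate(explore_rate, rounds=5000):
    estimate, repaid_seen, loans_seen = PRIOR_ESTIMATE, 0, 0
    for _ in range(rounds):
        approve = estimate > APPROVAL_THRESHOLD or random.random() < explore_rate
        if approve:
            repaid = random.random() < TRUE_REPAY_RATE
            loans_seen += 1
            repaid_seen += repaid
            estimate = repaid_seen / loans_seen
    return estimate

print("no exploration: estimate stays at", simulate(explore_rate=0.0))
print("5% exploration: estimate becomes ", round(simulate(explore_rate=0.05), 2))
```

With no exploration the estimate stays stuck at the pessimistic prior; with occasional exploratory loans it converges to the group’s true repayment rate, after which the bank approves on the merits.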

Ligett is working with law scholars to translate this work on predicting creditworthiness into recommendations for policies in Israel.

 

For the good of all

Kleinberg has revealed another counterintuitive finding about fairness. In one project, he and collaborators looked at the multigenerational effects of social interventions. Societies must decide how to distribute limited opportunities, such as college admissions or scholarships. Should they go to people who will perform best, or people who will benefit the most?

In the college admissions scenario, imagine a single decision-making system that does what it can to benefit society the most in the long run, defining benefit as average college performance across generations. On the one hand, people of higher socioeconomic status (SES) have advantages that may help their performance in college. But people of lower SES stand to gain more from the opportunity. Kleinberg employed several tools to study this imaginary world: empirical data on college performance, theoretical proofs, and simulations for complex versions of the problem.

He saw something interesting in the results: Even when the system didn’t care about distributing opportunity fairly, it ended up granting admission disproportionately to people of lower SES. That’s because those people used their degrees to move up to a higher SES, which improved the SES of their eventual children, and thus their children’s college performance. After a couple of generations, society as a whole benefited. The “selfish” decision-maker, Kleinberg says, had landed on “socioeconomic affirmative action.”
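A crude sandbox (invented numbers and a deliberately simplified admissions rule, not Kleinberg’s model, which optimizes across generations) shows the shape of such a simulation: each generation, slots go to the highest predicted performers, and admitted low-SES students’ children start the next generation with high SES.

```python
import random

random.seed(2)

# A crude multigenerational sandbox (invented numbers, not Kleinberg's model).
# Each generation, a fixed number of college slots goes to the applicants with
# the highest predicted performance; an admitted low-SES student's children
# start the next generation with high SES, which feeds back into performance.

POP, SLOTS, GENERATIONS = 1000, 200, 5
SES_BOOST = 0.5  # hypothetical performance advantage of high SES

ses = ["low"] * (POP // 2) + ["high"] * (POP // 2)
for gen in range(GENERATIONS):
    ability = [random.gauss(0, 1) for _ in range(POP)]
    performance = [a + (SES_BOOST if s == "high" else 0.0)
                   for a, s in zip(ability, ses)]
    admitted = set(sorted(range(POP), key=lambda i: performance[i])[-SLOTS:])
    low_admits = sum(1 for i in admitted if ses[i] == "low")
    avg = sum(performance[i] for i in admitted) / SLOTS
    print(f"generation {gen}: {low_admits} low-SES admits, "
          f"average admitted performance {avg:.2f}")
    # Mobility: children of admitted low-SES students start out high-SES.
    ses = ["high" if (i in admitted and ses[i] == "low") else ses[i]
           for i in range(POP)]
```

This greedy version contains the mobility feedback loop but not the long-horizon optimization that produces the “socioeconomic affirmative action” effect; the paper’s decision-maker plans across generations rather than one at a time.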

Of course, in the real world there’s no such omnipotent decider, but forward-thinking governments can offer scholarships to the same effect, and Kleinberg says he’s served on admissions committees that aim to take these long-range considerations into account.

In other work, Kleinberg aimed not to prescribe solutions but to interpret human behavior. If there is a group of 100 people, and one of them will be drafted for the military, behavioral studies suggest that people prefer to distribute the risk among a subgroup — any subgroup — rather than giving everyone a 1 percent chance or one person a 100 percent chance of being drafted. To explain the odd preference, Kleinberg’s model incorporated a psychological phenomenon called probability weighting: reducing risk from 1 percent to 0 percent reduces worry (arguably irrationally) much more than reducing it from, say, 20 percent to 19 percent, so the model aims to maximize the number of people with 0 percent risk. On the other hand, a psychological principle called the identifiable victim effect makes us uncomfortable when we can put a face on suffering, so the model avoids concentrating all the risk on one individual. Instead, it prefers a middle ground, just as humans do. “It’s important to understand when the public has what might seem to us an unexpected reaction to a proposed policy,” Kleinberg says.
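A toy version of those two effects (invented constants and a standard weighting function, not Kleinberg’s actual model) makes the preference concrete: probability weighting charges a surprisingly high perceived cost for many small risks, while an identifiable-victim penalty is charged when all the risk lands on one person, so an intermediate allocation scores best.

```python
# Toy model of the two psychological effects described above. The weighting
# function is the standard Tversky-Kahneman form; the constants and the
# victim penalty are invented for illustration.

GAMMA = 0.61            # curvature of the probability-weighting function
VICTIM_PENALTY = 5.0    # hypothetical extra cost of one certain, identifiable victim

def weight(p):
    """Perceived weight of an objective probability p (overweights small p)."""
    if p == 0:
        return 0.0
    return p**GAMMA / (p**GAMMA + (1 - p)**GAMMA) ** (1 / GAMMA)

def perceived_cost(risks):
    """Total perceived cost of an allocation of draft risk over 100 people."""
    return sum(weight(p) for p in risks) + VICTIM_PENALTY * sum(p >= 1.0 for p in risks)

allocations = {
    "everyone at 1%":        [0.01] * 100,
    "subgroup of 10 at 10%": [0.10] * 10 + [0.0] * 90,
    "one person at 100%":    [1.0] + [0.0] * 99,
}

for name, risks in allocations.items():
    print(f"{name}: perceived cost {perceived_cost(risks):.2f}")
```

With these made-up numbers, spreading the risk over a subgroup of 10 has the lowest perceived cost, echoing the middle-ground preference seen in the behavioral studies.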

 

A deadly game

Suppose you walk into a doctor’s office and receive a prescription. Your recommended dose, according to a hypothetical algorithm, is 27 mg, with a 95 percent prediction interval between 20 mg and 35 mg. That prediction interval is supposed to mean that there’s a 95 percent chance your correct dose is between the low and high numbers. But really your correct dose either is or isn’t in that range. The interval just means that across a population, the correct dose is in that range 95 percent of the time. Perhaps that range of doses is always right for a certain 95 percent of people and always wrong for another 5 percent of people.
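A few made-up numbers (a hypothetical interval and population, not a real clinical model) show how “95 percent across a population” can coexist with “0 percent for certain people”:

```python
# Made-up numbers illustrating the gap described above: an interval that is
# "95 percent correct" across a population can still be wrong for the same
# people every time, for example a subgroup whose correct dose is higher.

INTERVAL = (20, 35)  # hypothetical prediction interval, in mg

# 95 patients whose correct dose really is in the range, 5 whose dose is not.
correct_doses = [27] * 95 + [50] * 5

covered = sum(INTERVAL[0] <= dose <= INTERVAL[1] for dose in correct_doses)
print(f"population coverage: {covered / len(correct_doses):.0%}")
print("coverage for the last 5 patients: 0% (the interval never contains 50 mg)")
```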

Reingold, along with fellow collaboration member Guy Rothblum, has worked on the problem of seeking fairness in algorithmic accuracy. Aaron Roth has expanded the work to look at fairness not just in accuracy but according to other metrics such as uncertainty — the narrowness of prediction intervals. If an algorithm were to suggest a range between zero and 1 million milligrams, that would not be helpful. Roth has also expanded the question to look at cases where a model must make predictions about (and learn from) examples one at a time, rather than by analyzing a bunch of data at once before deployment.

The solution, Roth and colleagues found, was to use game theory, the mathematical analysis of competition and cooperation. They define a game in which a model makes these sequential decisions and tries to minimize its error, while an omniscient adversary who knows the patients’ correct dosages sends those patients to the model in an order designed to maximize the model’s error. Through some mathematical wizardry of their own, the researchers can then solve for the equilibrium of this game, where both players are doing as well as they can. That equilibrium defines the model’s best strategy for learning from examples. Based on how each patient responds to treatment, it can then make better predictions for the next patient.
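The researchers’ game-theoretic construction is well beyond a short sketch, but a much simpler sequential rule (plain quantile tracking with invented numbers, not their algorithm) illustrates what learning one patient at a time can look like: widen the next interval after a miss, narrow it slightly after a hit, so that misses settle near 5 percent in the long run.

```python
import random

random.seed(3)

# A deliberately simple online rule in the spirit of sequential prediction
# (plain quantile tracking, not the game-theoretic algorithm described above).
# After seeing whether a patient's true dose fell inside the current interval,
# the rule widens or narrows the next interval.

TARGET_MISS = 0.05  # desired long-run miss rate
STEP = 0.5          # learning rate for the width update (hypothetical)

width, misses = 5.0, 0
n_patients = 10_000

for _ in range(n_patients):
    predicted = 27.0                         # stand-in point prediction
    true_dose = 27.0 + random.gauss(0, 6)    # invented patient-to-patient variation
    missed = abs(true_dose - predicted) > width
    misses += missed
    # Widen after a miss, narrow slightly after a hit.
    width += STEP * (missed - TARGET_MISS)

print(f"final half-width: {width:.1f} mg, miss rate: {misses / n_patients:.1%}")
```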

 

Making connections

The researchers are focusing on theory, but the hope is that their work will have a life outside of equations. They’ll send regular status reports to the Simons Foundation, and success will be measured not only by the number of published papers but by the expansion of their research community: getting more students and postdocs in theoretical computer science to study fairness and strengthening partnerships between researchers who already do.

“There is a huge hunger for algorithmic fairness in industry, and so there is a transfer of knowledge from academia that is amazingly quick,” Reingold says. “The quickness of implementations in the real world is kind of scary for a theoretician like me, and I love this.” He’s used to developing and debating ideas for a decade or two before something happens with them, but this work is already being applied even outside the realm of fairness. Rothblum and his collaborators designed a system to predict COVID-19 mortality risk in Israel, using an algorithm from fairness theory and calibrating it for subpopulations for whom data may be lacking. According to Reingold, “any good theory has lots of connections and unexpected consequences.”
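The calibration-by-subpopulation idea can be sketched with a tiny check (invented records, not the Israeli system): within each group, the average predicted risk should match the observed rate of the outcome, and a gap flags a group whose scores need recalibrating.

```python
from collections import defaultdict

# Hedged sketch of checking calibration by subgroup; the records are made up.
# (group, predicted risk, outcome) triples.
records = [
    ("young", 0.10, 0), ("young", 0.10, 0), ("young", 0.10, 1), ("young", 0.10, 0),
    ("old",   0.20, 1), ("old",   0.20, 1), ("old",   0.20, 0), ("old",   0.20, 1),
]

by_group = defaultdict(list)
for group, risk, outcome in records:
    by_group[group].append((risk, outcome))

for group, rows in by_group.items():
    mean_predicted = sum(r for r, _ in rows) / len(rows)
    observed_rate = sum(o for _, o in rows) / len(rows)
    print(f"{group}: predicted {mean_predicted:.0%}, observed {observed_rate:.0%}")
```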
