2023 Simons Collaboration on the Theory of Algorithmic Fairness Annual Meeting

Date & Time

Thursday, February 2, 2023: 8:30 AM - 5:00 PM
Friday, February 3, 2023: 8:30 AM - 2:00 PM

Location

Gerald D. Fischbach Auditorium
160 5th Ave
New York, NY 10010 United States

Registration Closed

Invitation Only

All participants must register.

Organizer:
Omer Reingold, Stanford University

Speakers:
Bailey Flanigan, Carnegie Mellon University
Moritz Hardt, Max Planck Institute for Intelligent Systems, Tübingen
Jon Kleinberg, Cornell University
Katrina Ligett, Hebrew University
Huijia (Rachel) Lin, University of Washington
Emma Pierson, Jacobs Technion-Cornell Institute
Salil Vadhan, Harvard University
Manolis Zampetakis, UC Berkeley

Previous Meeting:
2022 Annual Meeting

Meeting Goals:
The Simons Collaboration on the Theory of Algorithmic Fairness aims to establish firm mathematical foundations, through the lens of computer science theory, for the emerging area of algorithmic fairness. Given the unique multidisciplinary nature of this topic, the collaboration's research looks both inward and outward: inward, toward the models, definitions, algorithms and mathematical connections that the theory of computation can contribute to this effort; and outward, toward other areas in computer science and beyond. Looking outward facilitates a discourse with different, and at times opposing, perspectives; a fruitful exchange of techniques; and exposure to important application areas.

The second annual meeting of the collaboration will showcase progress in Algorithmic Fairness, as well as exciting insights and breakthrough results in the areas of Algorithms, Computational Social Sciences, Cryptography, Differential Privacy, Dynamical Systems, Game Theory and Machine Learning, along with fields further afield such as the medical sciences, public policy and law. The diverse set of participants will allow for informal discussions and new collaborations.

  • Agenda

    THURSDAY, FEBRUARY 2

    8:30 AM   CHECK-IN & BREAKFAST
    9:30 AM   Jon Kleinberg | The Challenge of Understanding What Users Want: Inconsistent Preferences and Engagement Optimization
    10:30 AM  BREAK
    11:00 AM  Bailey Flanigan | Accounting for Stakes in Democratic Decision Processes
    12:00 PM  LUNCH
    1:00 PM   Moritz Hardt | Algorithmic Collective Action in Machine Learning
    2:00 PM   BREAK
    2:30 PM   Manolis Zampetakis | Analyzing Data with Systematic Bias: Truncation and Self-Selection
    3:30 PM   BREAK
    4:00 PM   Emma Pierson | Using Machine Learning to Increase Equity in Healthcare and Public Health
    5:00 PM   DAY ONE CONCLUDES

    FRIDAY, FEBRUARY 3

    8:30 AM   CHECK-IN & BREAKFAST
    9:30 AM   Salil Vadhan | Differential Privacy's Transition from Theory to Practice
    10:30 AM  BREAK
    11:00 AM  Katrina Ligett | Data Privacy is Important, But It's Not Enough
    12:00 PM  LUNCH
    1:00 PM   Huijia (Rachel) Lin | How to Hide Secrets in Software?
    2:00 PM   MEETING CONCLUDES
  • Abstracts

    Bailey Flanigan
    Carnegie Mellon University

    Accounting for Stakes in Democratic Decision Processes

    It is a ubiquitous idea that, to ensure a high-quality outcome of a collective decision, the greatest stakeholders should have sufficient say in the decision process. As we show, the cost of failing to account for stakes can be formalized in the language of voting theory: when voters have different stakes in a decision, all deterministic voting rules (which do not account for voters’ differing stakes) have unbounded welfare loss; that is, all such rules will sometimes select outcomes that give essentially zero welfare to the population overall. This is alarming given that, in practice, the voting rules we use tend to be deterministic and voters’ stakes are likely to vary.

    Bailey Flanigan will explore two solutions to this problem, both of which involve accounting for voters’ stakes. In the first, Flanigan will show that the welfare loss is substantially reduced when voters are altruistic: when deciding how to vote, they weigh not only their own interests but also the interests of others. Altruism can thus be interpreted as voters implicitly accounting for each other’s stakes. In the second, we formalize how to design democratic processes that explicitly account for voters’ stakes, and we show that doing so can again substantially reduce the welfare loss. Throughout the talk, Flanigan will discuss how these conclusions can be operationalized in the design of democratic processes.
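
    To make “unbounded welfare loss” concrete, one common formalization for an ordinal rule (a sketch in the spirit of the distortion literature; the talk’s exact definition may differ) is

    \[
    \mathrm{WelfareLoss}(f) \;=\; \sup_{u}\, \frac{\max_{a \in A} \sum_i u_i(a)}{\sum_i u_i\big(f(\succ_u)\big)},
    \]

    where the supremum ranges over utility profiles $u$, $A$ is the set of alternatives, and the deterministic rule $f$ sees only the ordinal ballots $\succ_u$ induced by $u$. “Unbounded welfare loss” then says this supremum is infinite for every such $f$.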
     

    Moritz Hardt
    Max Planck Institute for Intelligent Systems

    Algorithmic Collective Action in Machine Learning

    We initiate a principled study of algorithmic collective action on digital platforms that deploy machine learning algorithms. We propose a simple theoretical model of a collective interacting with a firm’s learning algorithm. The collective pools the data of participating individuals and executes an algorithmic strategy by instructing participants how to modify their own data to achieve a collective goal. We investigate the consequences of this model in three fundamental learning-theoretic settings: the case of a non-parametric optimal learning algorithm, a parametric risk minimizer, and gradient-based optimization. In each setting, we devise coordinated algorithmic strategies and characterize natural success criteria as a function of the collective’s size. Each setting admits a vanishingly small critical threshold for the fractional size of the collective at which success occurs. Complementing our theory, we conduct systematic experiments on a skill classification task involving tens of thousands of resumes from a gig platform for freelancers. Through more than two thousand model training runs of a BERT-like language model, we see a striking correspondence emerge between our empirical observations and the predictions made by our theory. Taken together, our theory and experiments broadly support the conclusion that algorithmic collectives of exceedingly small fractional size can exert significant control over a platform’s learning algorithm.
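
    As a toy illustration of the planted-signal idea in the non-parametric setting (a sketch on synthetic data; the trigger feature, the 1-nearest-neighbor stand-in for the optimal learner, and all parameters are assumptions, not the paper’s experiments):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 10_000, 10
    X = rng.normal(size=(n, d))
    y = (X[:, 0] > 0).astype(int)       # the platform's underlying task

    alpha = 0.01                        # collective is 1% of the data
    m = int(alpha * n)
    trigger = 5.0                       # hypothetical planted feature value
    X[:m, -1] = trigger                 # collective plants the signal...
    y[:m] = 1                           # ...and relabels to the target class

    def predict(x, X_train, y_train):
        """1-nearest-neighbor stand-in for the optimal learner."""
        return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

    # Success criterion: fresh points carrying the trigger get the target label.
    X_new = rng.normal(size=(200, d))
    X_new[:, -1] = trigger
    success = np.mean([predict(x, X, y) == 1 for x in X_new])
    print(f"success rate with a 1% collective: {success:.2f}")  # typically near 1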
     

    Jon Kleinberg
    Cornell University

    The Challenge of Understanding What Users Want: Inconsistent Preferences and Engagement Optimization

    As we think about the fundamental settings in which algorithms mediate decisions about how people are treated, one of the most central is the way that online content is presented to users on large Internet platforms. Online platforms have a wealth of data, run countless experiments, and use industrial-scale algorithms to optimize user experience. Despite this, many users seem to regret the time they spend on these platforms. One possible explanation is that incentives are misaligned: platforms are not optimizing for user happiness. We suggest the problem runs deeper, transcending the specific incentives of any platform, and instead stems from a mistaken foundational assumption. To understand what users want, platforms look at what users do. This is a kind of revealed-preference assumption that is ubiquitous in user models. Yet research has demonstrated, and personal experience affirms, that we often make choices in the moment that are inconsistent with what we want: we can choose mindlessly or myopically, behaviors that feel entirely familiar on online platforms.

    In this work, Jon Kleinberg and collaborators have developed a model of media consumption where users have inconsistent preferences. Kleinberg will consider what happens when a platform that simply wants to maximize user utility is only able to observe behavioral data in the form of user engagement. The framework is based on a stochastic model of user behavior, in which users are guided by two conflicting sets of preferences: one that operates impulsively in the moment, and another that makes plans over longer timescales. By linking the behavior of this model to abstractions of platform design choices, we can develop a theoretical framework and vocabulary in which to explore interactions between algorithmic intermediation, behavioral science and social media.
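
    A deliberately crude simulation of the two-selves idea (all parameters here are invented for illustration; this is not the talk’s model): the user sometimes chooses by long-run utility and sometimes by in-the-moment temptation, while the platform observes only clicks.

    import numpy as np

    rng = np.random.default_rng(1)
    k = 5                               # content categories
    utility = rng.uniform(size=k)       # long-run value to the user
    temptation = rng.uniform(size=k)    # in-the-moment pull
    p_impulsive = 0.7                   # assumed chance a choice is impulsive

    clicks = np.zeros(k)
    for _ in range(10_000):
        scores = temptation if rng.random() < p_impulsive else utility
        clicks[np.argmax(scores + 0.05 * rng.normal(size=k))] += 1

    # Engagement data tracks temptation rather than utility, so a platform
    # that promotes the most-clicked category caters to the impulsive self.
    print("most clicked:   ", int(np.argmax(clicks)))
    print("highest utility:", int(np.argmax(utility)))
    print("most tempting:  ", int(np.argmax(temptation)))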

    This talk is based on joint work with Sendhil Mullainathan and Manish Raghavan.
     

    Katrina Ligett
    Hebrew University

    Data Privacy is Important, But It’s Not Enough

    Our current data ecosystem leaves individuals, groups and society vulnerable to a wide range of harms, from privacy violations to subversion of autonomy to discrimination to erosion of trust in institutions. In this talk, Katrina Ligett will discuss the Data Co-ops Project, a multi-institution, multi-disciplinary effort Ligett co-leads with Kobbi Nissim. The project seeks to organize our understanding of these harms and to coordinate a set of technical and legal approaches to addressing them. In particular, Ligett will discuss recent joint work with Ayelet Tapiero-Gordon and Alex Wood, in which they argue that legal and technical tools aimed at controlling data and addressing privacy concerns are inherently insufficient for addressing the full range of these harms.
     

    Huijia (Rachel) Lin
    University of Washington

    How to Hide Secrets in Software?

    Since the initial public proposal of public-key cryptography based on computational hardness conjectures (Diffie and Hellman, 1976), cryptographers have contemplated the possibility of a “one-way compiler” that can transform computer programs into unintelligible forms while preserving functionality.

    In this talk, Rachel Lin will introduce a mathematical formalization of this goal called indistinguishability obfuscation (iO). In the past decade, it has been shown that iO is a “master tool” for enabling a wide range of cryptographic goals. Lin will then describe our recent construction of iO based on three well-studied computational hardness conjectures.
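
    For reference, the standard formalization (the usual definition from the literature, paraphrased; not necessarily verbatim from the talk): an efficient randomized compiler $\mathsf{iO}$ is an indistinguishability obfuscator if

    \[
    \forall C:\ \mathsf{iO}(C) \equiv C, \qquad \text{and} \qquad \big(C_0 \equiv C_1 \,\wedge\, |C_0| = |C_1|\big) \;\Longrightarrow\; \mathsf{iO}(C_0) \approx_c \mathsf{iO}(C_1),
    \]

    where $\equiv$ denotes functional equivalence of circuits and $\approx_c$ denotes computational indistinguishability: the obfuscations of any two same-size programs computing the same function cannot be efficiently told apart.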

    This talk is based on joint work with Aayush Jain and Amit Sahai.
     

    Emma Pierson
    Cornell Tech

    Using Machine Learning to Increase Equity in Healthcare and Public Health

    Our society remains profoundly unequal. Worse, there is abundant evidence that algorithms can, improperly applied, exacerbate inequality in healthcare and other domains. This talk pursues a more optimistic counterpoint — that data science and machine learning can also be used to illuminate and reduce inequity in healthcare and public health — by presenting vignettes from domains including policing, women’s health and cancer-risk prediction.
     

    Salil Vadhan
    Harvard University

    Differential Privacy’s Transition from Theory to Practice

    Differential privacy is a mathematical framework for ensuring that individual-level information is not revealed through statistical analysis or machine learning on sensitive datasets. It emerged from the theoretical computer science literature through a series of papers from 2003–2006 and has since become the subject of a rich body of research spanning many disciplines. In parallel, differential privacy made a remarkably rapid transition to practice, with large-scale deployments by government agencies like the U.S. Census Bureau and by large tech companies such as Google, Apple, Microsoft and Meta, and differential privacy is also playing a core role in an emerging industry of privacy tech startups. In this talk, Salil Vadhan will try to summarize differential privacy’s transition to practice, speculate about the features of differential privacy and the surrounding context that enabled this transition, and discuss some of the challenges encountered along the way. 
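
    For readers new to the area, the definition at the heart of this transition (standard, from Dwork, McSherry, Nissim and Smith, 2006): a randomized mechanism $M$ is $\varepsilon$-differentially private if for all datasets $D, D'$ differing in one individual’s record and all sets $S$ of outputs,

    \[
    \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].
    \]

    Informally, no single person’s data can change what an observer learns from the output by more than an $e^{\varepsilon}$ factor.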
     

    Manolis Zampetakis
    University of California, Berkeley

    Analyzing Data with Systematic Bias: Truncation and Self-Selection

    In a wide range of data analysis problems, across many scientific disciplines, we only have access to biased data due to some systematic bias of the data-collection procedure. In this talk, Manolis Zampetakis will present a general formulation of fundamental statistical estimation tasks with systematic bias and show recent results from joint work on how to handle two fundamental types of systematic bias that arise frequently in econometric studies: truncation bias and self-selection bias.
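
    As a minimal illustration of truncation bias and its correction (a toy with synthetic data and an assumed known truncation threshold; the talk’s methods handle far more general settings):

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    true_mu, sigma, c = 0.0, 1.0, 0.5   # samples survive only if x > c
    x = rng.normal(true_mu, sigma, size=200_000)
    x = x[x > c]                        # collection process drops everything below c

    print(f"naive sample mean: {x.mean():.3f}")   # biased well above true_mu = 0

    # Truncation-aware MLE: an observed point has density
    # phi((x - mu)/sigma) / (sigma * Pr[X > c; mu]), so maximize the adjusted likelihood.
    def neg_loglik(mu):
        return -(norm.logpdf(x, mu, sigma).sum() - x.size * norm.logsf(c, mu, sigma))

    mu_hat = minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded").x
    print(f"truncation-aware MLE: {mu_hat:.3f}")  # close to true_mu = 0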

    This talk is based on joint work with Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Vasilis Kontonis and Christos Tzamos.

  • Participation & Funding

    Participation in the meeting falls into the following four categories. An individual’s participation category is communicated via their letter of invitation.

    Group A – PIs and Speakers
    The foundation will arrange and pay for all air and train travel to the conference as well as hotel accommodations and reimbursement of local expenses.

    Group B – Out-of-town Participants
    The foundation will arrange and pay for all air and train travel to the conference as well as hotel accommodations and reimbursement of local expenses.

    Group C – Local Participants
    Individuals in Group C will not receive financial support, but are encouraged to enjoy all conference-hosted meals.

    Group D – Remote Participants
    Individuals in Group D will participate in the meeting remotely. Please register at the link above, and a remote participation link will be sent to you approximately two weeks prior to the meeting.

  • Travel & Hotel

    Air and Train
    The foundation will arrange and pay for all air and train travel to the conference for those in Groups A and B. Please provide your travel specifications by clicking the registration link above. If you are unsure of your group, please refer to your invitation sent via email.

    Personal Car
    For participants in Groups A & B driving to Manhattan, The James NoMad Hotel offers valet parking. Please note there are no in-and-out privileges when using the hotel’s garage; therefore, participants are encouraged to walk or take public transportation to the Simons Foundation.

    Hotel
    Participants in Groups A & B who require accommodations are hosted by the foundation for a maximum of three nights at The James NoMad Hotel. Any additional nights are at the attendee’s own expense. To arrange accommodations, please register at the link above.

    The James NoMad Hotel
    22 E 29th St
    New York, NY 10016
    (between 28th and 29th Streets)
    https://www.jameshotels.com/new-york-nomad/


  • COVID-19 Policy

    All in-person meeting attendees must be vaccinated against COVID-19 and must wear a mask when not eating or drinking.

  • Reimbursement

    Individuals in Groups A & B will be reimbursed for meals not hosted by the Simons Foundation as well as local expenses, including ground transportation. Additional information in this regard will be emailed on the final day of the meeting.

  • Contacts

    Registration and Travel Assistance

    Ovation Travel Group
    sfnevents@ovationtravel.com
    (917) 408-8384 (24 hours)
    www.ovationtravel.com

    Meeting Questions and Assistance
    Meghan Fazzi
    Manager, Events and Administration, MPS, Simons Foundation
    mfazzi@simonsfoundation.org
    (212) 524-6080
