2026 Simons Collaboration on the Theory of Algorithmic Fairness Annual Meeting

Date

Thursday, February 5 – Friday, February 6, 2026

Location

Gerald D. Fischbach Auditorium
160 5th Ave
New York, NY 10010 United States

Thurs.: 8:30 AM – 5 PM
Fri.: 8:30 AM – 2 PM

Invitation Only

Organizers:

Omer Reingold, Stanford University

Speakers:

Avrim Blum, Toyota Technological Institute at Chicago (TTIC)
Sarah H. Cen, Carnegie Mellon University
Natalie Collina, University of Pennsylvania
Cynthia Dwork, Harvard University
Noémie Elhadad, Columbia University
Allison Koenecke, Cornell Tech
Frauke Kreuter, LMU Munich and University of Maryland
Charlotte Peale, Stanford University

Meeting Goals:

The Simons Collaboration on the Theory of Algorithmic Fairness continues its mission of establishing firm mathematical foundations, through the lens of computer science theory, for the evolving field of algorithmic fairness.

The fifth annual meeting will highlight recent progress across the collaboration and the wider research community, drawing together theoretical advances and insights from related disciplines. Alongside presenting new results, the meeting offers space for reflection on ongoing challenges and opportunities, recognizing how the field continues to develop in response to both technological and societal change.
As in previous years, the meeting is designed to foster exchange of ideas, inspire collaborations, and deepen our shared understanding of fairness in algorithms.

Visit the Simons Collaboration on the Theory of Algorithmic Fairness Website:

https://toc4fairness.org/

Previous Meetings:

2022 Annual Meeting
2023 Annual Meeting
2024 Annual Meeting
2025 Annual Meeting

  • Thursday, February 5, 2026

    8:30 AM   CHECK-IN & BREAKFAST
    9:30 AM   Avrim Blum | Pessimism Traps, Algorithmic Interventions, and Replicability
    10:30 AM  BREAK
    11:00 AM  Charlotte Peale | Uncertainty Quantification Beyond Calibration
    12:00 PM  LUNCH
    1:00 PM   Sarah H. Cen | Bridging the Gap Between Research and Policy in AI Safety and Accountability
    2:00 PM   BREAK
    2:30 PM   Frauke Kreuter | Adaptive Integrity of AI Models Aligned with Society
    3:30 PM   BREAK
    4:00 PM   Noémie Elhadad | Foundation Models in Medicine and Healthcare
    5:00 PM   DAY ONE CONCLUDES

  • Friday, February 6, 2026

    8:30 AM   CHECK-IN & BREAKFAST
    9:30 AM   Allison Koenecke | Algorithmic Decisions in the SNAP Benefits Pipeline
    10:30 AM  BREAK
    11:00 AM  Natalie Collina | Learning and Incentives in Human-AI Collaboration
    12:00 PM  LUNCH
    1:00 PM   Cynthia Dwork | Equitable Evaluation via Elicitation
    2:00 PM   MEETING CONCLUDES
  • Avrim Blum
    Toyota Technological Institute at Chicago

    Pessimism Traps, Algorithmic Interventions, and Replicability

    In this talk, Avrim Blum will discuss two lines of work involving sequential decision-making. The first involves “pessimism traps,” a societal phenomenon in which a community gets locked into a cycle of suboptimal decisions due to a self-reinforcing pessimism about their chances of success at more ambitious activities. In this work, we use information cascades to model this phenomenon as rational behavior under uncertainty, and examine how algorithmic interventions can be used to break these traps. We show this for both single-community and multi-community models, even when the intervening entity does not know which actions are best for which community. The second line of work involves replicability in single-agent sequential decision-making. Replicability has generally been studied in the i.i.d. data setting. Here, we propose a natural definition for replicability in the context of adversarial online decision-making, and provide algorithms that achieve both replicability and sublinear regret. We leave open the question of the optimal regret achievable in this setting.

    This work is joint with Saba Ahmadi, Siddharth Bhandari, Emily Diana, Kavya Ravichandran, and Alexander Williams Tolbert.
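
    For readers who want the benchmark pinned down, a standard formalization of the regret notion referenced above (our notation, not specific to this work) is external regret: for an online algorithm choosing actions $a_t$ against loss functions $\ell_t$,

    \[ \mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(a_t) - \min_{a \in \mathcal{A}} \sum_{t=1}^{T} \ell_t(a), \]

    and sublinear regret means $\mathrm{Regret}_T = o(T)$, so the algorithm's average per-round loss approaches that of the best fixed action in hindsight.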
     

    Sarah H. Cen
    Carnegie Mellon University

    Bridging the Gap Between Research and Policy in AI Safety and Accountability

    As AI becomes increasingly integrated into both the private and public sectors, challenges around AI safety and accountability have arisen. There is a growing, compelling body of work around the legal and societal challenges that come with AI, but there is a gap in our rigorous understanding of these problems.

    In this talk, Sarah H. Cen will dive deep into a few topics in AI safety and accountability. We will discuss AI supply chains (the increasingly complex ecosystem of AI actors and components that contribute to AI products) and study how AI supply chains complicate machine learning objectives. We’ll then shift our discussion to AI audits and evidentiary burdens in cases involving AI. Using Pareto frontiers as a tool for assessing performance-fairness tradeoffs, we will show how a closed-form expression for performance-fairness Pareto frontiers can help plaintiffs (or auditors) overcome evidentiary burdens or a lack of access in AI contexts. Cen will conclude with a longitudinal study of LLMs during the 2024 US election season. If time permits, we may touch on formal notions of trustworthiness.
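
    As a rough sketch of the Pareto-frontier framing (notation ours, assumed rather than taken from the talk): writing $\mathrm{err}(h)$ for a model's error and $\mathrm{unf}(h)$ for a fairness-violation measure over a model class $\mathcal{H}$, the performance-fairness Pareto frontier is

    \[ F(\tau) = \min_{h \in \mathcal{H}} \{\, \mathrm{err}(h) : \mathrm{unf}(h) \le \tau \,\}. \]

    A closed-form expression for $F$ would let a plaintiff or auditor argue that a deployed model's observed (error, unfairness) pair sits well above the frontier without needing access to the model's internals.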
     

    Natalie Collina
    University of Pennsylvania

    Learning and Incentives in Human-AI Collaboration

    As AI systems become more capable, a central challenge is designing them to work effectively with humans. Natalie Collina will first consider collaborative prediction, motivated by a doctor consulting an AI that shares the goal of accurate diagnosis. Even when the doctor and AI have only partial and incomparable knowledge, repeated interaction enables richer forms of collaboration: we give distribution-free guarantees that their combined predictions are strictly better than either alone, with regret bounds against benchmarks defined on their joint information. Collina will then revisit the alignment assumption itself. If an AI is developed by, say, a pharmaceutical company with its own incentives, how can we encourage helpful behavior? A natural scenario is that the doctor has access to multiple models, each from a different provider. Under a milder “market alignment” assumption—that the doctor’s utility lies in the convex hull of the providers’ utilities—we show that in a Nash equilibrium of this competition, the doctor can achieve the same outcomes as if a perfectly aligned provider were present.

    Based on joint work: Tractable Agreement Protocols (STOC’25), Collaborative Prediction (SODA’26), and Emergent Alignment via Competition (in submission).
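
    The market-alignment assumption has a direct formal reading (notation ours): if provider $i$'s utility is $u_i$, the doctor's utility $u_D$ is assumed to lie in the convex hull of the providers' utilities, i.e.,

    \[ u_D = \sum_i \lambda_i u_i, \qquad \lambda_i \ge 0, \quad \sum_i \lambda_i = 1. \]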
     

    Cynthia Dwork
    Harvard University

    Equitable Evaluation via Elicitation

    Individuals with similar qualifications and skills may vary in their demeanor, or outward manner: some tend toward self-promotion while others are modest to the point of omitting crucial information. Equitable evaluation based on the self-descriptions of equally qualified job-seekers with different self-presentation styles is therefore problematic.

    We build an interactive AI for skill elicitation that provides accurate determination of skills while simultaneously allowing individuals to speak in their own voice. Such a system can be deployed, for example, when a new user joins a professional networking platform, or when matching employees to needs during a company reorganization. To obtain sufficient training data, we train an LLM to act as synthetic humans.

    Elicitation mitigates endogenous bias arising from individuals’ own self-reports. To address systematic model bias, we enforce a mathematically rigorous notion of equitability ensuring that the covariance between self-presentation manner and skill evaluation error is small.
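
    One way to read the covariance condition above (notation ours): writing $m(X)$ for an individual's self-presentation manner, $s(X)$ for true skill, and $\hat{s}(X)$ for the elicited evaluation, the requirement is

    \[ \bigl| \mathrm{Cov}\bigl( m(X),\, \hat{s}(X) - s(X) \bigr) \bigr| \le \varepsilon \]

    for small $\varepsilon$, so that evaluation error is approximately uncorrelated with how individuals present themselves.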
     

    Noémie Elhadad
    Columbia University

    Foundation Models in Medicine and Healthcare

    Large language models have rapidly emerged as a transformative force in medicine and have inspired a range of applications now deployed at the point of patient care. Despite their growing adoption in clinical practice and research, our understanding of their capabilities remains incomplete—particularly with respect to their impact on patient outcomes and clinician wellbeing. Moreover, there is increasing recognition that current LLMs exhibit fundamental limitations in domains such as complex reasoning and causal inference.

    In this presentation, Noémie Elhadad will outline current research directions in foundation models, beyond conventional large language models, that are tailored to the unique structure of clinical and electronic health record data and that integrate both empirical observations and the underlying principles of human physiology.
     

    Allison Koenecke
    Cornell Tech

    Algorithmic Decisions in the SNAP Benefits Pipeline

    America’s Supplemental Nutrition Assistance Program (SNAP), formerly known as food stamps, helps low-income households buy nutritious food. Social workers provide pivotal support at many points of the SNAP pipeline: from spreading awareness of the program, to helping applicants determine eligibility and fill out forms, to advising on changes to benefits. In this talk, I describe two projects that ask how algorithms can play a role in easing social worker burdens without perpetuating algorithmic biases. First, we study biases in online advertising for SNAP benefits, arising from the cost differentials between English-speaking and Spanish-speaking ad recipients. We propose a methodological framework for advertisers to determine a demographically equitable allocation for ads, and find broad consensus across political identities for some degree of equity over pure efficiency in this context. Second, we study the efficiency-bias tradeoffs of a chatbot for assisting social workers in answering SNAP eligibility questions. In a randomized experiment varying the quality of LLM-generated responses, we find that social workers with access to LLM suggestions perform significantly better than those without; however, while human accuracy increases with LLM response accuracy, improvement plateaus—likely due to an LLM-distrust effect. Taken together, these studies can inform government service providers on procurement strategies in the AI age while ameliorating administrative burdens and biases.
     

    Frauke Kreuter
    LMU Munich and University of Maryland

    Adaptive Integrity of AI Models Aligned with Society

    Consider disparate cases: a teenager asking an AI assistant whether it is “okay to have sex before marriage,” or a research institution using AI to test the cultural sensitivity of new employment surveys. In both scenarios, these systems’ responses implicitly prioritize certain cultural values over others. Large language models (LLMs) reproduce common viewpoints from training data while failing to represent population diversity or value change. As LLMs shape the most value-laden areas of our lives, this risks profound individual and societal harm. Social scientists have developed population-level datasets for studying diverse attitudes and behaviors, yet alignment methods remain disconnected from this resource, staying static and opaque.

    This talk introduces the idea of adaptive alignment: a novel approach to improving generative AI that represents diverse societal values and normative principles while acknowledging trade-offs and conflicts. With the aim of creating dynamic, living benchmarks, we propose a method to leverage decades of carefully selected social science data currently unused in LLMs due to privacy and format challenges. The talk will present pilots in public health and labor, domains with contested values and cross-cultural variation, with the goal of fostering an interdisciplinary discussion on the usefulness, scalability, and challenges of this approach.
     

    Charlotte Peale
    Stanford University

    Uncertainty Quantification Beyond Calibration

    Calibration has emerged as a standard approach to uncertainty quantification, providing valuable insights into model reliability. However, for modern machine learning, calibration exhibits two fundamental limitations that restrict its utility:

    1. Inability to decompose uncertainty: Calibration fails to distinguish between epistemic uncertainty (model-based) and aleatoric uncertainty (data-based). This distinction is vital for understanding prediction errors, especially in complex, subjective tasks (e.g., language modeling), and for determining whether collecting more data could improve performance.
    2. Suboptimal error prediction: The uncertainty estimates produced by a calibrated model can be substantially worse than those derived from externally trained models specifically designed to predict a model’s error. This gap suggests that stronger notions of uncertainty quantification performance are required to guarantee a model’s ability to accurately self-assess its limitations.

    This talk will overview two research contributions that address these fundamental limitations. First, we introduce higher-order calibration, a rigorous theoretical foundation for decomposing a model’s total uncertainty into its aleatoric and epistemic components, with formal guarantees relating the decomposition to real-world data distributions. We demonstrate the practical utility of this decomposition in uncertainty-aware model routing, where estimates are used to efficiently route queries to small models, larger models, or human experts. Second, we establish an equivalence between a model’s level of multicalibration and its competitiveness with externally-trained loss predictors. This equivalence reveals the precise conditions under which models can—or cannot—accurately assess their own limitations.
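
    For reference, the baseline notion that both contributions strengthen can be stated as follows (a standard definition, not specific to this work): a predictor $f$ of a binary outcome $Y$ is calibrated if

    \[ \mathbb{E}[\, Y \mid f(X) = p \,] = p \quad \text{for all } p \text{ in the range of } f, \]

    i.e., among inputs assigned predicted probability $p$, the outcome occurs a $p$ fraction of the time; multicalibration asks this to hold simultaneously across a rich collection of subpopulations.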

  • Participation in the meeting falls into the following four categories. An individual’s participation category is communicated via their letter of invitation.

    The Simons Foundation will never ask for credit card information or require payment for registration to our events.

    Group A – PIs, Speakers & Organizers

    Individuals in Group A receive travel and hotel coordination within the following parameters:

    Travel
    Economy Class: For flights that are three hours or less to your destination, the maximum allowable class of service is Economy class.
    Premium Economy Class: For flights where the total air travel time (excluding connection time) is more than three hours and less than seven hours per segment to your destination, the maximum allowable class of service is premium economy.
    Business Class: When traveling internationally (or to Hawaii/Alaska), travelers are permitted to travel in Business Class on those segments that are seven hours or more. If the routing is over budget, a premium economy or mixed-class ticket will be booked.

    Hotel
    Up to three nights at the conference hotel, arriving on Wednesday, February 4 and departing on Saturday, February 7.

    Group B – Funded Participants

    Individuals in Group B receive travel and hotel coordination within the following parameters:

    Travel
    Economy class travel will be booked regardless of flight length.

    Hotel
    Up to three nights at the conference hotel, arriving on Wednesday, February 4 and departing on Saturday, February 7.

    Group C – Unfunded Participants

    Individuals in Group C will not receive financial support but are encouraged to enjoy all conference-hosted meals.

    Group D – Remote Participants

    Individuals in Group D will participate in the meeting remotely.

  • Air and Rail

    For funded individuals, the foundation will arrange and pay for round-trip travel from their home city to the conference city. All travel and hotel arrangements must be booked through the Simons Foundation’s preferred travel agency.

    Travel Deviations

    The following travel specifications are considered deviations and will only be accommodated if the cost is less than or equal to the amount the Simons Foundation would pay for a standard round-trip ticket from your home city to the conference city:

    • Preferred airline
    • Preferred travel class
    • Specific flights/flight times
    • Travel dates outside those associated with the conference
    • Arriving or departing from an airport other than your home city or conference city airports, i.e. multi-segment or triangle trips.

    All deviations must be reviewed and approved by the Simons Foundation and, if the cost is more than what would normally be paid, a reimbursement quote must be obtained through the foundation’s travel agency before proceeding to book and pay for travel out of pocket. All reimbursements for travel booked directly will be paid after the conclusion of the meeting.

    Changes After Ticketing

    All costs related to changes made to ticketed travel are to be paid for by the participant and are not reimbursable. Please contact the foundation’s travel agency for further assistance.

    Personal & Rental Cars

    Personal car and rental trips over 250 miles each way require prior approval from the Simons Foundation via email.

    Rental cars must be pre-approved by the Simons Foundation.

    The Royalton Park Avenue offers valet parking. Please note there are no in-and-out privileges when using the hotel’s garage; participants are therefore encouraged to walk or take public transportation to the Simons Foundation.

    Hotel

    Funded individuals who require hotel accommodations are hosted by the foundation for a maximum of three nights, arriving on Wednesday, February 4 and departing on Saturday, February 7.

    Any additional nights are at the attendee’s own expense. To arrange accommodations, please register at the link included in your invitation.

    Royalton Park Avenue
    420 Park Ave S.
    New York, NY 10016
    https://www.royaltonparkavenue.com/

    For driving directions to the Royalton Park Avenue, please click here.

  • Overview

    In-person participants will be reimbursed for meals and local expenses including ground transportation. Expenses should be submitted through the foundation’s online expense reimbursement platform after the meeting’s conclusion.

    Expenses accrued because of meetings not directly related to the Simons Foundation-hosted meeting (a satellite meeting or meeting held at another institution, for example) will not be reimbursed by the Simons Foundation and should be paid by other sources.

    Below are key reimbursement takeaways; a full policy will be provided with the final logistics email circulated approximately 2 weeks prior to the meeting’s start.

    Meals

    The daily meal limit is $125; itemized receipts are required for expenses over $24. The foundation DOES NOT provide a meal per diem and only reimburses actual meal expenses up to the following amounts:

    • Breakfast $20
    • Lunch $30
    • Dinner $75

    Allowable Meal Expenses

    • Meals taken on travel days (when you traveled by air or train).
    • Meals not provided on a meeting day (for example, dinner on Friday).
    • Group dinners consisting of fellow meeting participants paid by a single person will be reimbursed up to $75 per person and the amount will count towards the $125 daily meal limit.

    Unallowable Meal Expenses

    • Meals taken outside those provided by the foundation (breakfast, lunch, breaks and/or dinner).
    • Meals taken on days not associated with Simons Foundation-coordinated events.
    • Minibar expenses.
    • Meal expenses for a non-foundation guest.
    • Ubers, Lyfts, taxis, etc., taken to and from restaurants in Manhattan.

      • Accommodations will be made for those with mobility restrictions.

    Ground Transportation

    Expenses for ground transportation will be reimbursed for travel days (i.e., traveling to/from the airport or train station); subway and bus fares while in Manhattan are also reimbursable.

    Transportation to/from satellite meetings is not reimbursable.

  • Attendance

    In-person participants and speakers are expected to attend all meeting days. Participants receiving hotel and travel support who wish to arrive on a meeting day that concludes at 2:00 PM will be asked to attend remotely.

    Entry & Building Access

    Upon arrival, guests will be required to show their photo ID to enter the Simons Foundation and Flatiron Institute buildings. After checking in at the meeting reception desk, guests will be able to show their meeting name badge to re-enter the building. If you forget your name badge, you will need to provide your photo ID.

    The Simons Foundation and Flatiron Institute buildings are not considered “open campuses” and meeting participants will only have access to the spaces in which the meeting will take place. All other areas are off limits without prior approval.

    If you require a private space to conduct a phone call or remote meeting, please contact your meeting manager at least 48 hours ahead of time so that they may book a space for you within the foundation’s room reservation system.

    Guests & Children

    Meeting participants are required to give 24-hour advance notice of any guests meeting them at the Simons Foundation either before or after the meeting. Outside guests are discouraged from joining meeting activities, including meals.

    With the exception of Simons Foundation and Flatiron Institute staff, ad hoc meeting participants who did not receive a meeting invitation directly from the Simons Foundation are not permitted.

    Children under the age of 18 are not permitted to attend meetings at the Simons Foundation. Furthermore, the Simons Foundation does not provide childcare facilities or support of any kind. Special accommodations will be made for nursing parents.

  • Meeting & Policy Questions

    Meghan Fazzi
    Senior Manager, Events & Administration, MPS
    [email protected]

    Travel & Hotel Support

    FCM Travel Meetings & Events
    [email protected]
    Hours: M-F, 8:30 AM-5:00 PM ET
    +1-888-789-6639
