2024 Simons Collaboration on the Theory of Algorithmic Fairness Annual Meeting

Date & Time


Organizers:
Omer Reingold, Stanford University

Speakers:
Ran Balicer, Clalit Research Institute and Ben-Gurion University
Parikshit Gopalan, Apple
Toni Pitassi, University of Toronto
Omer Reingold, Stanford University
Aaron Roth, University of Pennsylvania
Guy Rothblum, Apple and Weizmann Institute of Science
Jessica Sorrell, University of Pennsylvania
Ashia Wilson, MIT

Meeting Goals:
The third annual meeting of the Simons Collaboration on the Theory of Algorithmic Fairness will serve as an opportunity to reflect on the collaboration’s journey so far and the envisioned path ahead. Committed to establishing firm mathematical foundations for the emerging area of algorithmic fairness through the lens of computer science theory, the collaboration has made significant advances. These include foundational work, community building, and outreach to other communities within this highly multidisciplinary domain. This year’s meeting will spotlight selected threads of our research and explore focus areas for the future.

  • Agenda

    Thursday

    8:30 AM  CHECK-IN & BREAKFAST
    9:30 AM  Omer Reingold | TOC 4 Fairness
    10:30 AM  BREAK
    11:00 AM  Parikshit Gopalan | Omniprediction and Multigroup Fairness
    12:00 PM  LUNCH
    1:00 PM  Aaron Roth | What Should We "Trust" in Trustworthy Machine Learning?
    2:00 PM  BREAK
    2:30 PM  Toni Pitassi | Replicability
    3:30 PM  BREAK
    4:00 PM  Jessica Sorrell | Replicable Reinforcement Learning
    5:00 PM  DAY ONE CONCLUDES

    Friday

    8:30 AM  CHECK-IN & BREAKFAST
    9:30 AM  Guy Rothblum | Verifiable Data Science: Why and How
    10:30 AM  BREAK
    11:00 AM  Ran Balicer | AI in Healthcare Practice – Real-World Strategies and Risk Mitigation
    12:00 PM  LUNCH
    1:00 PM  Ashia Wilson | Societal Challenges and Opportunities in Generative AI
    2:00 PM  MEETING CONCLUDES
  • Abstracts

    Ran Balicer
    Clalit Research Institute and Ben-Gurion University

    AI in Healthcare Practice – Real-World Strategies and Risk Mitigation

    Healthcare organizations have watched the global race to develop ever more powerful AI tools with interest, hope and sometimes fear: fear of missing out, but also fear of introducing error and bias. While some medical machine-learning tools have become mainstream, most notably in imaging, most other potential use cases have largely lagged behind. Evidence of unfairness introduced by decision-support tools is now abundant in many domains, but less so in healthcare. In the absence of clear, concrete regulation for the use of AI in healthcare, or of relevant real-world case studies, healthcare organizations face a significant challenge as they consider frameworks for self-regulation and risk management of AI-powered solutions. Such frameworks offer an opportunity for confluence between the issues discussed by the fairness collaboration and the real-world needs of practitioners and healthcare organizations.

    In this talk, Ran Balicer will discuss the strategic view of a large healthcare organization facing the challenge of using AI-driven tools in practice: aims, priorities and risk management. He will provide case studies from practice at scale, early assessments of impact, and ongoing work to address issues of fairness and societal impact. Balicer will also present an implemented organization-level self-regulation framework in ongoing use when introducing AI-driven tools in Israel’s largest integrated healthcare organization. This framework, heavily influenced by the work of the fairness collaboration and its annual meetings, will serve as a starting point for discussing potential high-impact future work of the collaboration, as well as additional real-world needs that merit attention.

    Parikshit Gopalan
    Apple

    Omniprediction and Multigroup Fairness

    Consider a scenario where we are learning a predictor whose predictions will be evaluated by their expected loss. What if we do not know the precise loss function at the time of learning, beyond some generic properties such as convexity? What if the same predictor will be used in several applications in the future, each with its own loss function? Can we learn predictors that have strong guarantees in such settings?

    This motivates the notion of omnipredictors: predictors with strong loss minimization guarantees across a broad family of loss functions, relative to a benchmark hypothesis class. Omniprediction turns out to be intimately connected to multigroup fairness notions such as multicalibration, and also to other topics like boosting, swap regret minimization, and the approximate rank of matrices. This talk will survey some recent work in this area, emphasizing these connections.
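
    As a minimal sketch of the core idea (illustrative Python, not code from the talk; all names are invented): if a predictor is calibrated, a downstream user can treat its output as the true probability and choose the action that minimizes expected loss for whichever loss function arrives later, so a single predictor serves many losses.

      import numpy as np

      def bayes_optimal_action(p, loss, actions):
          # For predicted probability p of the outcome y = 1, choose the action
          # minimizing expected loss under the belief y ~ Bernoulli(p).
          return min(actions, key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))

      squared = lambda a, y: (a - y) ** 2    # two losses unknown at training time
      absolute = lambda a, y: abs(a - y)
      actions = np.linspace(0, 1, 101)       # discretized action space

      preds = [0.1, 0.45, 0.8]               # outputs of one calibrated predictor
      print([bayes_optimal_action(p, squared, actions) for p in preds])   # acts a = p
      print([bayes_optimal_action(p, absolute, actions) for p in preds])  # thresholds at 1/2

    Multicalibration strengthens this picture: the talk surveys how calibration conditioned on a rich class of subpopulations yields loss-minimization guarantees relative to a benchmark hypothesis class.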

    Toni Pitassi
    University of Toronto

    Replicability

    Replicability is vital to ensuring scientific conclusions are reliable, but failures of replicability have been a major issue in nearly all scientific areas of study in recent decades. A key issue underlying the replicability crisis is the explosion of methods for data generation, screening, testing, and analysis, where, crucially, only the combinations producing the most significant results are reported. Such practices (e.g., p-hacking, data dredging) can lead to erroneous findings that appear to be significant but that don’t hold up when other researchers attempt to replicate them.

    In this talk, Toni Pitassi will initiate a theory of replicable algorithms. Informally, a replicable learning algorithm is resilient to variations in its samples: with high probability, it returns the exact same output when run on different samples from the same underlying distribution. Pitassi will begin by unpacking the definition, clarifying how randomness is instrumental in balancing accuracy and replicability. She will then demonstrate the utility of the concept by giving sample-efficient replicable algorithms for a variety of problems, including standard statistical tasks and any problem that can be learned differentially privately.
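
    As a minimal sketch of one standard device behind such results, a randomly shifted rounding grid driven by randomness shared across runs (illustrative Python, not code from the talk):

      import numpy as np

      def replicable_mean(samples, alpha, shared_seed):
          # Estimate the mean, then snap it to a grid of width alpha whose offset
          # comes from randomness SHARED across runs. Two runs on different samples
          # land in the same grid cell, and hence return bit-identical outputs,
          # with high probability once the sampling error is much smaller than alpha.
          offset = np.random.default_rng(shared_seed).uniform(0, alpha)
          return alpha * np.round((np.mean(samples) - offset) / alpha) + offset

      s1 = np.random.default_rng(1).normal(0.7, 1.0, 100_000)  # two independent samples
      s2 = np.random.default_rng(2).normal(0.7, 1.0, 100_000)  # from the same distribution
      print(replicable_mean(s1, alpha=0.05, shared_seed=42))
      print(replicable_mean(s2, alpha=0.05, shared_seed=42))   # identical w.h.p.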

    Finally, Pitassi will discuss the computational limitations imposed by enforcing replicability in machine-learning algorithms and argue that replicability works in tandem with the goals of differential privacy and generalization. She will conclude with a discussion of recent developments in replicability and open problems.

    Omer Reingold
    Stanford University

    TOC 4 Fairness

    The Simons Collaboration on the Theory of Algorithmic Fairness aims at establishing firm mathematical foundations, through the lens of computer science (CS) theory, for algorithmic fairness. Omer Reingold will discuss the pivotal role CS theory has within this highly multidisciplinary area. Reingold will also consider the collaboration’s journey so far and the envisioned path ahead.

    Aaron Roth
    University of Pennsylvania

    What Should We “Trust” in Trustworthy Machine Learning?

    “Trustworthy” machine learning has become a buzzword in recent years. But what exactly are the semantics of the promise that we are supposed to trust? In this talk, we will make a proposal through the lens of downstream decision makers who use machine-learning predictions of payoff-relevant states: predictions are “trustworthy” if it is in the interest of the downstream decision makers to act as if the predictions are correct, rather than to game the system in some way. We will find that this is a fruitful idea. For many kinds of downstream tasks, predictions of the payoff-relevant state that are statistically unbiased, subject to a modest number of conditioning events, suffice to give downstream decision makers strong guarantees when they act optimally as if the predictions were correct; moreover, it is possible to efficiently produce predictions satisfying these bias properties, even in adversarial environments. This methodology also yields an algorithm design principle that gives new, efficient algorithms for a variety of adversarial learning problems, including obtaining subsequence regret in online combinatorial optimization problems and extensive-form games, and obtaining sequential prediction sets for multiclass classification with strong, conditional coverage guarantees, directly from a black-box prediction technology and without the need to choose a “score function” as in conformal prediction.

    This is joint work with Georgy Noarov, Ramya Ramalingam and Stephan Xie, and builds on many foundational works arising from the Simons Collaboration on the Theory of Algorithmic Fairness.
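
    A toy sketch of the “act as if the predictions are correct” semantics (illustrative Python; the task and utilities are invented, not from the talk): the decision maker simply best-responds to the reported probability, and the trust guarantee is that, when predictions are unbiased conditional on the events the decision maker cares about, this policy leaves no incentive to game the predictor.

      # Hypothetical downstream task: carry an umbrella given a predicted
      # probability of rain. UTILITY[action][state], with state 1 = rain.
      UTILITY = {
          "umbrella":    {0: -1.0, 1:  2.0},
          "no_umbrella": {0:  0.5, 1: -3.0},
      }

      def act_as_if_correct(p_rain):
          # Best-respond to the prediction as if it were the true probability.
          expected_u = lambda a: (1 - p_rain) * UTILITY[a][0] + p_rain * UTILITY[a][1]
          return max(UTILITY, key=expected_u)

      for p in (0.1, 0.4, 0.9):
          print(p, act_as_if_correct(p))   # switches to "umbrella" once p is large enough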

    Guy Rothblum
    Apple and Weizmann Institute of Science

    Verifiable Data Science: Why and How

    With the explosive growth and impact of machine learning and data analysis algorithms, there are growing concerns that these algorithms might be corrupted. How can we guarantee that a complicated data analysis was performed correctly? Interactive proof systems, originally studied in the context of cryptography, are protocols that allow a weak verifier to verify that a complex computation was performed correctly. Guy Rothblum will survey a line of recent work, showing how to use such proof systems to verify the results of complex data analyses, where verification is super-efficient in terms of data access, communication and runtime, and generating the proof is not much more expensive than simply running the computation.
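
    As a toy illustration of verification that is cheap in data access (a plain spot-check, much weaker than the interactive proofs surveyed in the talk; illustrative Python with invented names): a verifier can audit a prover’s claimed mean over a large dataset by querying only a few random records.

      import numpy as np

      def verify_claimed_mean(read_record, n, claim, eps=0.05, k=2000, seed=0):
          # Query k random records out of n instead of reading everything. For
          # values in [0, 1], a Hoeffding bound implies that a claim off by more
          # than eps is rejected with high probability.
          idx = np.random.default_rng(seed).integers(0, n, size=k)
          return abs(np.mean([read_record(i) for i in idx]) - claim) <= eps

      data = np.random.default_rng(7).uniform(0, 1, 10**6)    # the (large) dataset
      honest = float(np.mean(data))
      print(verify_claimed_mean(lambda i: data[i], len(data), honest))         # True
      print(verify_claimed_mean(lambda i: data[i], len(data), honest + 0.2))   # False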

    Jessica Sorrell
    University of Pennsylvania

    Replicable Reinforcement Learning

    The replicability crisis in the social, behavioral and data sciences has led to the formulation of algorithmic frameworks for replicability, i.e., the requirement that an algorithm produce identical outputs (with high probability) when run on two different samples from the same underlying distribution. While the study of provably replicable algorithms is still in its infancy, such algorithms have been developed for many fundamental tasks in machine learning and statistics, including statistical query learning, the heavy-hitters problem and distribution testing. In this talk, Jessica Sorrell will introduce the study of replicable reinforcement learning and discuss the first formal replicability results for control problems, giving algorithms that converge to the same policy with high probability, even when exploring stochastic environments.

    This talk is based on joint work with Eric Eaton, Marcel Hussing and Michael Kearns.
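
    A toy sketch of the flavor of such a guarantee, using a two-armed bandit as a stand-in for the control problems in the talk and reusing the shared-randomness rounding device sketched under the replicability abstract above (illustrative Python only):

      import numpy as np

      def replicable_greedy_policy(reward_samples, alpha, shared_seed):
          # Estimate each action's mean reward, snap each estimate to a randomly
          # shifted grid (offsets drawn from SHARED randomness in a fixed action
          # order), and return the greedy action. Two runs on independent data
          # then output the identical policy with high probability.
          rng = np.random.default_rng(shared_seed)
          rounded = {}
          for action in sorted(reward_samples):          # fixed order across runs
              offset = rng.uniform(0, alpha)
              est = np.mean(reward_samples[action])
              rounded[action] = alpha * np.round((est - offset) / alpha) + offset
          return max(rounded, key=rounded.get)

      for data_seed in (10, 11):                         # two independent data collections
          rng = np.random.default_rng(data_seed)
          samples = {"left": rng.normal(0.2, 1.0, 5_000),
                     "right": rng.normal(0.5, 1.0, 5_000)}
          print(replicable_greedy_policy(samples, alpha=0.1, shared_seed=99))  # same both runs w.h.p.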

    Ashia Wilson
    MIT

    Societal Challenges and Opportunities in Generative AI

    Ashia Wilson will outline both the promising opportunities and the critical challenges presented by new generative AI technologies. Wilson will then turn the spotlight on the audience and delve into perspectives on where researchers should channel their efforts, particularly as they relate to aligning generative AI technologies with social values such as equity and transparency. The primary aim of this presentation is to foster an interactive discussion.

  • Participation & Funding

    Participation in the meeting falls into the following four categories. An individual’s participation category is communicated via their letter of invitation.

    Group A – PIs and Speakers
    The foundation will arrange and pay for all air and train travel to the conference as well as hotel accommodations and reimbursement of local expenses. Business-class or premium economy airfare will be booked for all flights over five hours.

    Group B – Out-of-town Participants
    The foundation will arrange and pay for all air and train travel to the conference as well as hotel accommodations and reimbursement of local expenses. Economy-class airfare will be booked for all flights.

    Group C – Local Participants
    Individuals in Group C are considered local and will not receive financial support, but are encouraged to enjoy all conference-hosted meals.

    Group D – Remote Participants
    Individuals in Group D will participate in the meeting remotely. Please register at the link above, and a remote participation link will be sent to you approximately two weeks prior to the meeting.

  • Travel & Hotel

    Air and Rail
    For individuals in Groups A and B, the foundation will arrange and pay for round-trip travel from their home city to the conference.

    All travel and hotel arrangements must be booked through the Simons Foundation’s preferred travel agency.

    Travel specifications, including preferred airline, will be accommodated provided that these specifications are reasonable and within budget.

    Travel arrangements not booked through the preferred agency, including triangle trips and routings or preferred airlines outside budget, must be pre-approved by the Simons Foundation, and a reimbursement quote must be obtained through the foundation’s travel agency.

    Personal & Rental Cars
    Trips by personal or rental car of more than 250 miles each way require prior approval from the Simons Foundation via email.

    Rental cars must be pre-approved by the Simons Foundation.

    The James NoMad Hotel offers valet parking. Please note that there are no in-and-out privileges when using the hotel’s garage; participants are therefore encouraged to walk or take public transportation to the Simons Foundation.

    Hotel
    Participants in Groups A & B who require accommodations are hosted by the foundation for a maximum of three nights at The James NoMad Hotel. Any additional nights are at the attendee’s own expense. To arrange accommodations, please register at the link above.

    The James NoMad Hotel
    22 E 29th St
    New York, NY 10016
    (between 28th and 29th Streets)
    https://www.jameshotels.com/new-york-nomad/

    For driving directions to The James NoMad, please click here.

  • Reimbursement

    Overview
    Individuals in Groups A & B will be reimbursed for meals and local expenses including ground transportation. Expenses should be submitted through the foundation’s online expense reimbursement platform after the meeting’s conclusion.

    Expenses accrued as a result of meetings not directly related to the Simons Foundation-hosted meeting (a satellite collaboration meeting held at another institution, for example) will not be reimbursed by the Simons Foundation and should be paid by other sources.

    Below are key reimbursement takeaways; the full policy will be provided in the final logistics email circulated approximately two weeks prior to the meeting’s start.

    Meals
    The daily meal limit is $125, and itemized receipts are required for expenses over $24. The foundation DOES NOT provide a meal per diem and only reimburses actual meal expenses.

    • Meals taken on travel days are reimbursable.
    • Meals purchased in place of those provided by the foundation (breakfast, lunch, breaks and/or dinner) are not reimbursable.
    • If a meal was not provided on a meeting day (dinner, for example), that expense is reimbursable.
    • Meals taken on days not associated with Simons Foundation-coordinated events are not reimbursable.
    • Minibar expenses are not reimbursable.
    • Meal expenses for a non-foundation guest are not reimbursable.

    Group meals with fellow meeting participants paid for by a single person will be reimbursed at up to $65 per person per meal, and the amount will count toward each individual’s $125 daily meal limit.

    Ground Transportation
    Expenses for ground transportation will be reimbursed for travel days (i.e., traveling to/from the airport) as well as for local transportation. While in NYC, individuals are encouraged to use public transportation rather than taxi, Uber or Lyft services.

  • Attendance & Building Protocols

    Attendance
    In-person participants and speakers are expected to attend all meeting days. Partial participation is permitted so long as the individual fully attends the first day, which is typically Thursday for two-day meetings. Participants receiving hotel and travel support who wish to arrive only on a meeting day that concludes at 2:00 PM will be asked to attend remotely instead.

    COVID-19 Vaccination
    Individuals accessing Simons Foundation and Flatiron Institute buildings must be fully vaccinated against COVID-19.

    Entry & Building Access
    Upon arrival, guests will be required to show their photo ID to enter the Simons Foundation and Flatiron Institute buildings. After checking in at the meeting reception desk, guests can show their meeting name badge to re-enter the building. If you forget your name badge, you will need to show your photo ID.

    The Simons Foundation and Flatiron Institute buildings are not considered “open campuses” and meeting participants will only have access to the spaces in which the meeting will take place. All other areas are off limits without prior approval.

    If you require a private space to conduct a phone call or remote meeting, please contact your meeting manager at least 48 hours ahead of time so that they may book a space for you within the foundation’s room reservation system.

    Guests & Children
    Meeting participants are required to give 24 hours’ advance notice of any guests meeting them at the Simons Foundation either before or after the meeting. Outside guests are discouraged from joining meeting activities, including meals.

    With the exception of Simons Foundation and Flatiron Institute staff, ad hoc meeting participants who did not receive a meeting invitation directly from the Simons Foundation are not permitted.

    Children under the age of 18 are not permitted to attend meetings at the Simons Foundation. Furthermore, the Simons Foundation does not provide childcare facilities or support of any kind. Special accommodations will be made for nursing parents.

  • Contacts

    Registration and Travel Assistance
    Ovation Travel Group
    [email protected]
    (917) 408-8384 (24-Hours)
    www.ovationtravel.com

    Meeting Questions and Assistance
    Meghan Fazzi
    Manager, Events and Administration, MPS, Simons Foundation
    [email protected]
    (212) 524-6080
