Simons Collaboration on Algorithms and Geometry Monthly Meeting, September 2019

  • Agenda
    9:00 - 10:00 AM: Breakfast
    10:00 - 11:00 AM: Hao Huang, A Proof of the Sensitivity Conjecture
    11:00 - 11:15 AM: Break
    11:15 AM - 12:15 PM: Ben Krause, Discrete Analogues in Harmonic Analysis: Bourgain and Beyond
    12:15 - 2:00 PM: Break
    2:00 - 3:00 PM: Anand Natarajan, Efficient self-tests for high-dimensional entanglement
    3:00 - 3:15 PM: Break
    3:15 - 4:15 PM: John Wright, NEEXP in MIP*
  • Abstracts

    Hao Huang
    A Proof of the Sensitivity Conjecture

    In the n-dimensional hypercube graph, one can easily choose half of the vertices so that they induce an empty graph. However, adding even one more vertex forces the induced subgraph to contain a vertex of degree at least \sqrt{n}. This bound is best possible, and improves a logarithmic lower bound shown by Chung, Füredi, Graham and Seymour in 1988. In this talk we will discuss a very short algebraic proof of this result.
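    Both claims are easy to verify by brute force in low dimensions. The sketch below (illustrative Python, unrelated to the algebraic proof in the talk) checks, for n = 4, that the even-weight vertices induce an empty subgraph, and that every set of 2^{n-1} + 1 vertices induces a vertex of degree at least \sqrt{n} = 2:

```python
from itertools import combinations

def max_induced_degree(n, vertex_set):
    """Maximum degree of the subgraph of the n-cube induced on vertex_set."""
    s = set(vertex_set)
    # Neighbors of v in the hypercube are v with one bit flipped.
    return max(sum((v ^ (1 << i)) in s for i in range(n)) for v in s)

n = 4
vertices = range(2 ** n)

# Half of the vertices can induce an empty graph: take the even-weight vertices
# (every hypercube edge joins vertices of opposite parity).
even = [v for v in vertices if bin(v).count("1") % 2 == 0]
assert max_induced_degree(n, even) == 0

# But every set of 2^(n-1) + 1 vertices induces a vertex of degree >= sqrt(n).
worst = min(max_induced_degree(n, s)
            for s in combinations(vertices, 2 ** (n - 1) + 1))
print(worst)  # 2, matching the bound sqrt(4) = 2 (and showing it is tight here)
```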

    As a direct corollary of this purely combinatorial result, the sensitivity and degree of every Boolean function are polynomially related. This resolves an outstanding foundational problem in theoretical computer science: the Sensitivity Conjecture of Nisan and Szegedy.
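    To make the two quantities concrete: the sensitivity s(f) is the maximum, over inputs x, of the number of single-bit flips that change f(x), and the degree deg(f) is the degree of the unique multilinear real polynomial agreeing with f on {0,1}^n. The sketch below (an illustration, not from the talk) computes both for a two-level AND-of-ORs, a standard example where Huang's bound deg(f) <= s(f)^2 is tight:

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of bits whose flip changes f(x)."""
    return max(sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
               for x in product((0, 1), repeat=n))

def degree(f, n):
    """Degree of the unique multilinear real polynomial agreeing with f."""
    deg = 0
    for S in product((0, 1), repeat=n):  # indicator vector of a monomial
        # Moebius inversion gives the coefficient of prod_{i in S} x_i.
        coeff = sum((-1) ** (sum(S) - sum(T)) * f(T)
                    for T in product((0, 1), repeat=n)
                    if all(t <= s for t, s in zip(T, S)))
        if coeff != 0:
            deg = max(deg, sum(S))
    return deg

# (x1 OR x2) AND (x3 OR x4): sensitivity 2, degree 4 = 2^2.
f = lambda x: (x[0] | x[1]) & (x[2] | x[3])
print(sensitivity(f, 4), degree(f, 4))  # 2 4
```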
     

    Ben Krause
    Discrete Analogues in Harmonic Analysis: Bourgain and Beyond

    Numerical integration is one of the fundamental problems in mathematics, physics, and engineering: accurately and cheaply determining the area underneath a curve that lives above a given interval, say [0,1]. For general curves, even continuous ones, this is a computationally expensive problem, since in principle it requires understanding the behavior of the curve over the entire interval. A beautiful result in Fourier analysis, though, is that one can approximate the area underneath a continuous curve arbitrarily well by “sampling” it at a suitable collection of points; in particular, this problem in numerical integration becomes dynamical.

    Seeking to extend this sampling approach to rougher curves, or to increase computational power by sampling along sparse sequences, leads to hard analytic questions that live at the interface of harmonic analysis and analytic number theory: discrete harmonic analysis. In this talk, I will discuss some of the central problems in the field, from the sampling question above, first addressed by Bourgain, to current lines of inquiry.
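    As one concrete instance of the sampling phenomenon described (with illustrative choices of function and sampling sequence that are mine, not the talk's), the classical Weyl equidistribution theorem says that for irrational alpha, averages of a continuous 1-periodic function along the orbit k*alpha mod 1 converge to its integral over [0,1]:

```python
import math

# Illustrative choices (assumptions, not taken from the abstract):
alpha = math.sqrt(2)                           # an irrational rotation number
f = lambda x: math.cos(2 * math.pi * x) ** 2   # integral over [0,1] is 1/2

# Sampling averages along the orbit k*alpha mod 1 approach the true integral.
for N in (10, 1000, 100000):
    avg = sum(f((k * alpha) % 1.0) for k in range(1, N + 1)) / N
    print(N, avg)
```

The dynamical reformulation is visible here: the integral is recovered purely from the orbit of a single rotation of the circle.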
     

    Anand Natarajan
    Efficient self-tests for high-dimensional entanglement

    How much, and what sort of, entanglement is needed to win a nonlocal game? In many ways this is the central question in the study of nonlocal games, and a full understanding of this question could resolve such conundrums as Tsirelson’s problem, the complexity of MIP*, and Connes’ embedding conjecture. One approach to this question which has proved fruitful is to design *self-tests*: games for which players who wish to play almost optimally must share a quantum state that is close to a specific entangled state. In this talk, I’ll present a self-test for high-dimensional maximally entangled states that is *efficient* and *robust*: to test n qubits of entanglement requires a game of poly(n) size, and the test gives guarantees even for strategies that are a constant distance from optimal.
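    The prototypical (and smallest) self-test is the CHSH game, which certifies a single EPR pair: any strategy achieving the optimal quantum value cos^2(pi/8) must use a state locally equivalent to the maximally entangled state of two qubits. The sketch below (a warm-up illustration, not the test from the talk) computes the game's classical and quantum values:

```python
import itertools, math

# CHSH: players receive uniform bits x, y and, without communicating,
# output bits a, b; they win iff a XOR b == x AND y.

# Classical value: best deterministic strategy, i.e. a is a function of x
# and b is a function of y (shared randomness cannot help).
classical = max(
    sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1)) / 4
    for a in itertools.product((0, 1), repeat=2)
    for b in itertools.product((0, 1), repeat=2)
)
print(classical)          # 0.75

# Quantum value with one shared EPR pair (Tsirelson's bound): cos^2(pi/8).
quantum = math.cos(math.pi / 8) ** 2
print(round(quantum, 4))  # 0.8536
```

The gap between 0.75 and ~0.854 is what makes the game a test for entanglement at all; the talk's challenge is scaling such guarantees to n qubits with only poly(n)-size games.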

    These properties are motivated by the complexity-theoretic goal of showing that the entangled value of a nonlocal game is strictly harder to approximate than the classical value.

    The following talk by John will show how to achieve this separation, using self-testing as an ingredient.

    Based on joint work with Thomas Vidick (arXiv:1801.03821).
     

    John Wright
    NEEXP in MIP*

    A long-standing puzzle in quantum complexity theory is to understand the power of the class MIP* of multiprover interactive proofs with shared entanglement. This question is closely related to the study of entanglement through non-local games, which dates back to the pioneering work of Bell. In this work we show that MIP* contains NEEXP (non-deterministic doubly-exponential time), exponentially improving the prior lower bound of NEXP due to Ito and Vidick. Our result shows that shared entanglement exponentially increases the power of these proof systems, as the class MIP of multiprover interactive proofs without shared entanglement is known to be equal to NEXP.

    A crucial part of our proof is the technique of self-testing, covered by Anand in the previous talk.
