As in previous years, the A&G annual meeting started with conference-style talks on recent important progress and ended with a public talk by an A&G member. Irit Dinur started the day with a talk that presented an interesting connection between the agreement-testing problem that occurs in complexity theory (mainly in probabilistically checkable proofs) and the expansion of simplicial complexes. Here, the notion of expansion has more than one possible interpretation, and Dinur formulated a notion that is suitable for the agreement-testing problem. This notion holds for simplicial complexes that arise from the Ramanujan complexes of Lubotzky, Samuels and Vishne, thus leading to new strong derandomization results.

Tengyu Ma presented rigorous results that explain the empirically observed success of nonconvex optimization on typical instances drawn from certain probability distributions on inputs. He did so by showing that for certain problems and distributions, with high probability every local minimum is also a global minimum. Such issues are ubiquitous in learning theory, and this result is a step toward explaining the success of currently used heuristic methods. Jelani Nelson presented his recent solution of the longstanding question of the optimality of the Johnson-Lindenstrauss dimensionality reduction lemma. This 1984 lemma, which is a powerful tool in many applications, asserts that for every \(\epsilon>0\) and any \(n\) vectors \(x_1,\dots,x_n\in R^n\), there exist \(k=O(\epsilon^{-2}\log n)\) and vectors \(y_1,\dots,y_n\in R^k\) such that $$(1-\epsilon)\|x_i-x_j\| \leq \|y_i-y_j\| \leq (1+\epsilon)\|x_i-x_j\|$$ for all \(i,j\). The dependence of the reduced dimension \(k\) on the number of vectors \(n\) is easily seen to be sharp, but it was unknown whether the dependence on \(\epsilon\) as \(\epsilon\rightarrow0\) is sharp. Nelson proved that this dependence on \(\epsilon\) is sharp for all values of \(\epsilon\) that are not very small (namely, not smaller than a power of \(1/n\)). The proof is based on a new and clever coding scheme.
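The guarantee of the lemma is easy to observe numerically. The following is a minimal sketch of the lemma's statement (not of Nelson's lower-bound construction) using a Gaussian random projection; the constant 8 in the target dimension, and the choice of Gaussian test data, are illustrative assumptions rather than part of the lemma:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, eps = 100, 500, 0.5
# target dimension k = O(eps^-2 log n); the constant 8 is an arbitrary common choice
k = int(np.ceil(8 * np.log(n) / eps ** 2))

x = rng.standard_normal((n, d))                  # arbitrary input points in R^d
proj = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian random projection matrix
y = x @ proj                                     # reduced points in R^k

def pairwise_dists(a):
    # all pairwise Euclidean distances (upper triangle, i < j)
    diff = a[:, None, :] - a[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(a), k=1)
    return dists[iu]

ratio = pairwise_dists(y) / pairwise_dists(x)
print(ratio.min(), ratio.max())  # typically within (1 - eps, 1 + eps)
```

Note that the projection matrix is drawn without looking at the input points, which is why the guarantee holds for an arbitrary point set (with high probability over the draw).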

Moritz Hardt presented interesting work on learning an unknown linear, time-invariant dynamical system from a sequence of noisy observations generated by the system. This work relates the question to certain issues in complex analysis. These results are a thought-provoking first step toward a rigorous theory for linear system identification, a problem that has long been investigated using heuristic methods. The final talk of the nonpublic part of the A&G annual meeting was by Afonso Bandeira, who discussed questions about random matrix models in which a low-rank matrix is added to a random matrix. The question is how large the low-rank perturbation must be for it to be statistically detectable. Such questions arise in natural statistical models for principal component analysis, and Bandeira’s talk demonstrated fundamental limitations on the ability of statistical methods to perform these tasks.
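One concrete instance of such a model (chosen here for illustration; the talk treated such questions more generally) is a rank-one spike added to a symmetric Gaussian (Wigner) matrix, where a known sharp transition occurs at spike strength 1: below it the top eigenvalue stays at the bulk edge near 2, and above it the top eigenvalue separates from the bulk. A small sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800

def top_eig(lam):
    # symmetric Gaussian (Wigner) noise, scaled so its spectrum fills ~[-2, 2]
    g = rng.standard_normal((n, n))
    w = (g + g.T) / np.sqrt(2 * n)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    m = w + lam * np.outer(v, v)  # rank-one perturbation of strength lam
    return np.linalg.eigvalsh(m)[-1]

print(top_eig(0.5))  # below the transition: top eigenvalue stays near the bulk edge 2
print(top_eig(2.0))  # above it: top eigenvalue separates, near lam + 1/lam = 2.5
```

In the subcritical regime the spiked and unspiked matrices have essentially the same spectrum, which is one way such fundamental statistical limitations manifest themselves.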

The day ended with a public talk by Alex Andoni on nearest-neighbor search, a very important algorithmic task with a wide range of applications in many areas. Such questions are investigated intensively at A&G (including very recent and ongoing progress), but this talk was aimed mainly at introducing the topic to nonspecialists. The talk was well attended, with a great deal of audience interest and participation. I don’t know the vast majority of those who came (you have the list of registrants, so you may know more), but judging from the questions asked throughout the talk, the attendees were people from industry or academia with an interest in such topics. The overall theme of Andoni’s talk was how to build data structures that answer approximate nearest-neighbor queries in sublinear time. The methods here are geometric, and, quite remarkably, they yield efficient procedures that work irrespective of the input point set. Andoni presented the fundamental questions in this context, as well as some of the fundamental techniques (including his recent work on data-dependent locality-sensitive hashing).
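To give a flavor of the locality-sensitive hashing idea, here is a minimal sketch of the classical random-hyperplane scheme for angular distance (a data-independent precursor, not Andoni's data-dependent construction); the class name and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

class HyperplaneLSH:
    """Minimal random-hyperplane LSH index for angular distance (one hash table)."""

    def __init__(self, dim, num_bits):
        # each random hyperplane contributes one bit of the hash
        self.planes = rng.standard_normal((num_bits, dim))
        self.table = {}

    def _key(self, x):
        # sign pattern of x against the hyperplanes; nearby points tend to agree
        return tuple(self.planes @ x > 0)

    def add(self, i, x):
        self.table.setdefault(self._key(x), []).append(i)

    def candidates(self, q):
        # only points whose bucket matches the query's bucket are examined
        return self.table.get(self._key(q), [])

dim, n = 50, 1000
data = rng.standard_normal((n, dim))
index = HyperplaneLSH(dim, num_bits=8)
for i, x in enumerate(data):
    index.add(i, x)

q = data[0] + 0.05 * rng.standard_normal(dim)  # a slightly perturbed copy of point 0
cands = index.candidates(q)
print(len(cands))  # far fewer than n; the near neighbor usually lands in this bucket
```

The sublinearity comes from examining only one bucket instead of all \(n\) points; in practice several independent tables are used to boost the probability that a true near neighbor is retrieved.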