# Fusing Insights from Computer Science and Pure Mathematics

Michael Chapman loves to solve riddles and logic puzzles. This passion was one of the catalysts for him to pursue a mathematical career. As a doctoral student at the Hebrew University of Jerusalem in Israel, Chapman became an expert in group theory, the mathematical study of symmetry, and its various connections to theoretical computer science. For example, he studied the dynamics of random walks on groups, a field that examines, among other things, how many shuffles are needed for a deck of cards to be properly ‘mixed.’ In more recent explorations, Chapman has focused on property testing problems in groups and their relation to approximation theory of algebraic structures.

Chapman is a second-year Junior Fellow with the Simons Society of Fellows. He has a postdoctoral appointment at New York University’s Courant Institute of Mathematical Sciences under the mentorship of Subhash Khot and Oded Regev. We recently spoke about his current work and doctoral achievements, as well as the general mathematical concepts that guide his work today.

Our conversation has been edited for clarity.

**Your academic interests include combinatorics, group theory and probability. How do these fields relate to one another?**

One could — and many do — study each of these topics separately. But even when these topics are studied together, that combination takes various forms. Combinatorics is the study of discrete — often finite — objects, such as graphs (in the sense of a network of nodes and the connections between them). Group theory focuses on the symmetries of an object. For example, a square has eight symmetries, while a circle has infinitely many. In mathematics, finite collections of symmetries are both ‘group theoretic’ and ‘combinatorial’ in their nature and can thus be studied through the lenses of both fields.

An interesting thing about symmetries is that they can either *commute* or *not commute* with one another. A pair of symmetries commutes if applying them in either order produces the same result.

For example, let’s say you have a clock with its hand pointing to 12. You then rotate the clock 90 degrees clockwise and then reflect it through its (vertical) middle. The hand will now point to 9. But if you first reflect and *then* rotate, you get a different outcome: The hand will now point to 3. This means that, for the clock, not all symmetries commute with one another.
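The clock computation above can be checked directly. A minimal Python sketch, representing the hand's position as an hour from 0 to 11 (with 0 standing for 12):

```python
def rotate(h):
    """Rotate the clock 90 degrees clockwise: the hand advances 3 hours."""
    return (h + 3) % 12

def reflect(h):
    """Reflect through the vertical axis: hour h maps to 12 - h."""
    return (12 - h) % 12

hand = 0  # the hand starts at 12
print(reflect(rotate(hand)))  # rotate, then reflect: the hand ends at 9
print(rotate(reflect(hand)))  # reflect, then rotate: the hand ends at 3
```

The two orders give different answers, which is exactly the non-commutativity described above.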

Probability theory is the rigorous study of random events, and this is a point in which it can enter the scene: Given two randomly chosen symmetries, what is the likelihood of them commuting? Another interesting random event is the following: Let’s say I start flipping a coin again and again. Each time it comes up heads, I rotate my clock by 90 degrees. But each time it comes up tails, I reflect the clock through its middle. What would be the expected position of the hand of the clock after, say, 500 coin flips? This second type of problem, in which you randomly apply symmetries on an object and study the expected behavior, was the focus of one of my doctoral projects. These processes are called *random walks* or *drunken walks* because you can view them as traveling in a random (or unsober) way on a graph of symmetries.
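That coin-flipping walk is easy to simulate. Here is a minimal sketch (the trial count is an arbitrary choice) that estimates where the hand tends to end up after 500 flips:

```python
import random
from collections import Counter

def walk(steps=500, rng=random):
    """Flip a fair coin `steps` times: heads rotates the clock 90 degrees
    clockwise, tails reflects it through its vertical middle."""
    h = 0  # the hand starts at 12
    for _ in range(steps):
        if rng.random() < 0.5:
            h = (h + 3) % 12   # heads: rotate
        else:
            h = (12 - h) % 12  # tails: reflect
    return h

random.seed(0)
counts = Counter(walk() for _ in range(10_000))
# The hand can only land on 12, 3, 6 or 9, and after 500 steps each of
# those four positions is very nearly equally likely.
for hour, c in sorted(counts.items()):
    print(hour, c / 10_000)
```

The empirical distribution is close to uniform over the four reachable positions, which is the kind of expected long-run behavior these random-walk questions ask about.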

**Tell us more about that.**

Shuffling a deck of cards is an example of applying a symmetry to the deck. When you shuffle a deck of cards, your goal is to end up with a scenario in which every ordering of the cards is equally likely. So the longer you shuffle, the closer you’d expect to get to this ideal uniform distribution over all orderings. This is true, but surprisingly, the transition from being far from uniform to being close to uniform occurs abruptly.

This is similar to the physics phenomenon of phase transition. Water is a liquid between 0 and 100 degrees Celsius, which is a vast temperature range. But if you cool the water from 1 degree Celsius to –1 degree — a change of just two degrees — it turns to ice, a substantial change in global behavior.

The same happens with shuffling. For a long stretch of the shuffling process, the resulting deck is still very reminiscent of the original ordering. But after only a few extra shuffles, the cards reach their ideal uniform distribution over all orderings. In the study of random walks, this temporal phase-transition phenomenon is known as ‘cutoff.’

I study the cutoff phenomenon in various naturally occurring random walks.
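The approach to uniformity can be computed exactly for a toy example. A sketch, assuming a top-to-random shuffle on a 5-card deck (a stand-in for the riffle shuffles discussed above, chosen so the full state space of 5! orderings stays tiny), tracking the total-variation distance from the uniform distribution:

```python
from itertools import permutations
from collections import defaultdict

N = 5  # deck size; 5! = 120 orderings in total
states = list(permutations(range(N)))
uniform = 1 / len(states)

def shuffle_step(dist):
    """One top-to-random shuffle: remove the top card and reinsert it
    at a uniformly random position in the deck."""
    new = defaultdict(float)
    for perm, p in dist.items():
        top, rest = perm[0], perm[1:]
        for i in range(N):
            new[rest[:i] + (top,) + rest[i:]] += p / N
    return new

dist = {tuple(range(N)): 1.0}  # start from a perfectly ordered deck
tvs = []
for n in range(1, 16):
    dist = shuffle_step(dist)
    tv = 0.5 * sum(abs(dist.get(s, 0.0) - uniform) for s in states)
    tvs.append(tv)
    print(f"after {n:2d} shuffles: distance from uniform = {tv:.3f}")
```

For a deck this small the drop is still fairly gradual; the sharp cutoff emerges asymptotically as the deck grows. But even here the distance is near its maximum for the first few shuffles and then decays toward zero.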

**How does this work relate to theoretical computer science?**

Random walks appear throughout theoretical computer science. So, the examples I described above are already on the cusp of mathematics and computer science. But my focus on computer science comes from a different angle. Further into my doctorate, I started studying property testing problems in the context of group theory. These are the kinds of problems that are still my main interest today.

A property tester is a fast, randomized algorithm with the goal of deciding whether a big object, which the tester does not have time to examine thoroughly, satisfies a certain property. For example, say you want to group all Instagram users into 200 sets, such that no users of the same set follow each other. If this can be done, we would say that Instagram is ‘200-colorable.’ Determining whether this is indeed the case requires an immense amount of computing resources. But there is a simpler way: Sample a small, random portion of the people on Instagram, analyze who follows whom, and then declare a verdict for all of Instagram according to this local view.

Clearly, we cannot use this method to deduce *with certainty* that Instagram is 200-colorable. However, we may construct our fast random process such that the further Instagram is from being 200-colorable, the higher the chance that this process would spot this uncolorability property.
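A minimal sketch of such a tester, assuming a small follow graph and brute-force coloring of the sample (the function names and the choice of `k` and sample size are illustrative, not part of any real system):

```python
import random
from itertools import product

def is_k_colorable(vertices, edges, k):
    """Brute-force coloring check; feasible only because the tester's
    sample is tiny."""
    vs = list(vertices)
    for coloring in product(range(k), repeat=len(vs)):
        color = dict(zip(vs, coloring))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

def colorability_tester(users, follows, k, sample_size, rng=random):
    """Sample a few users, look only at who follows whom inside the sample,
    and declare a verdict for the whole graph from that local view."""
    sample = set(rng.sample(users, sample_size))
    local = [(u, v) for u, v in follows if u in sample and v in sample]
    return is_k_colorable(sample, local, k)

# A graph that is far from 2-colorable: everyone follows everyone (K6).
users = list(range(6))
follows = [(u, v) for u in users for v in users if u < v]
print(colorability_tester(users, follows, 2, 3))  # any 3 users form a
# triangle, which is not 2-colorable, so the tester always answers False
```

Here every sample exposes a violation, so rejection is guaranteed; in general, the guarantee is only probabilistic, as the answer describes next.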

It turns out that finding good property testers for certain properties of functions between groups has far-reaching applications — both theoretical and practical. The fields of probabilistically checkable proofs in computer science, as well as approximation theory of algebraic structures, are closely tied to such questions. In my recent work with Oren Becker, we studied a generalized version of the linearity, or homomorphism, test of Blum–Luby–Rubinfeld for amenable groups, and proved that, in this more general setup, it is a good tester.
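For the special case of functions f from (Z/2)^n to Z/2, the Blum–Luby–Rubinfeld test is simple to sketch. This toy version, with group elements encoded as n-bit integers and the group operation as bitwise XOR, is only an illustration, not the amenable-group generalization from the paper:

```python
import random

def blr_test(f, n, trials=100, rng=random):
    """Sample random pairs (x, y) and check the homomorphism equation
    f(x + y) = f(x) + f(y); over (Z/2)^n, addition is bitwise XOR."""
    for _ in range(trials):
        x = rng.randrange(2 ** n)
        y = rng.randrange(2 ** n)
        if f(x ^ y) != f(x) ^ f(y):
            return False  # caught a violation of linearity
    return True  # consistent with a homomorphism on every sampled pair

# A genuine homomorphism: the parity of the bits selected by a fixed mask.
def linear(x):
    return bin(x & 0b1011).count("1") % 2

# The same function corrupted at a single input.
def corrupted(x):
    if x == 5:
        return linear(x) ^ 1  # flip the answer on one input
    return linear(x)
```

`blr_test(linear, 4)` always accepts, while `corrupted` differs from a homomorphism on only one of 16 inputs and so is caught only with some probability per sampled pair — mirroring the guarantee that the farther a function is from every homomorphism, the more likely the test is to reject it.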

A homomorphism is a structure-preserving map between groups, and it is an essential building block in the theory of groups. Currently, I study more efficient candidate testers for verifying that a function is close to being a homomorphism. Inspired by the MIP*=RE breakthrough in quantum information theory, my collaborators and I recently devised complexity classes that are connected to big open problems in pure mathematics, such as the Aldous–Lyons conjecture. The hope is that a better understanding of these complexity classes may resolve these problems in pure math.

**Finally, what are your thoughts about the Simons Junior Fellowship?**

When I was getting my doctorate, I had many colleagues who were around my age. We socialized a lot, and generally shared in the struggles and triumphs of working through a Ph.D.

Now I am older, with a wife and two young daughters. I generally socialize much less. Also, postdoc life tends to be a bit more individualistic. For all these reasons, the Simons Society of Fellows community has been most welcome. I have found genuine friends who will hopefully remain part of my life in the years to come.

I also enjoy learning about what happens in science in general, and the Simons Society of Fellows enables me to do that. I love attending the foundation’s weekly Presidential Lectures, and relish the countless opportunities I have to chat informally with other Junior Fellows, whether about their work, my work, or the lecture we just attended. Overall, it has been a truly enlightening experience.