Untangling Quantum Entanglement

Two quantum particles can be intimately connected even when they are far apart, forming patterns beyond the scope of classical physics. When vast numbers of them link up, the outcome seems beyond comprehension altogether. The pattern-matching power of neural networks may be the key.

When Miles Stoudenmire began to study artificial neural networks, they seemed oddly familiar. A physicist who focuses on condensed matter, he was doing his postdoc at the Perimeter Institute and, like many people, got interested in how neural networks had become adept at interpreting images and translating languages. He didn’t expect they would have any relevance to his own work. He says he remembers thinking, “Wait a second, this is a lot like the techniques we use to store quantum wave functions.”

Physicists have made a lot of connections like that in recent years. Not just condensed matter and neural networks but also quantum computing and quantum gravity turn out to have unexpected parallels in their mathematics and methods. The common theme of these connections is … connections. Things in the world form bonds with one another. They begin to act in harmony. The things might be electrons in a metal, building blocks of space-time, qubits in a quantum computer, or computing units in a neural network. In all these cases the connections among them follow common principles.

When the basic elements are quantum in nature, they can develop a peculiar linkage known as entanglement. Twenty-five years ago, entanglement was a niche subject, and the few who studied it focused on two, maybe three particles at a time. Today, researchers puzzle over vast numbers of particles in unimaginably intricate webs of interaction. As if the sheer quantity of particles weren’t demanding enough, the number of possible configurations grows exponentially, and in a quantum system all their probabilities have to be tracked. “This is really at the origin of all the problems we are trying to solve at CCQ,” says Giuseppe Carleo, who works with Stoudenmire at the Flatiron Institute’s Center for Computational Quantum Physics. “When you have many electrons, such as those that you find in a typical material, providing a full, complete description of the system is impossible.”

LOOKING FOR SOLUTIONS: Color map of two aspects of the self-energy of an interacting fermion system — namely, the modulus and phase (represented by saturation and hue). The red dot indicates the physical solution to be reached. The Center for Computational Quantum Physics is developing methods to solve these kinds of quantum many-body problems. Credit: Photo courtesy of Michel Ferrero of École Polytechnique and Collège de France in Paris

Given recent progress, however, it is a fair bet that in the next 25 years, researchers will make sense of entanglement in these seemingly intractable systems. Armed with that knowledge, they might have systematic ways to predict material properties and could even home in on a unified theory of physics.

What makes this goal plausible is the progress that has come from recognizing and exploiting commonalities among areas of research that hardly ever used to interact. “You really learn an astounding amount by bringing several conceptually different techniques to bear on the same thing,” says Andrew Millis, co-director of CCQ. Patrick Hayden of Stanford University, director of the It from Qubit collaboration, which focuses on the connections between entanglement and quantum gravity, remarks on how intellectually refreshing the interdisciplinary contact can be: “Everybody can publicly acknowledge their ignorance and embrace it. The only thing to be ashamed of is not being interested or not wanting to learn things that we don’t know.”

Entanglement has an aura of mystery because there doesn’t seem to be any mechanistic explanation for it. But if you set aside questions about its deeper meaning, entanglement is nothing more or less than a pattern within nature — a form of collective organization. Find the pattern, and you can make sense of complexity.

Those patterns can be highly subtle. As the parts of a system assemble themselves, they become correlated. The system no longer ranges over its full space of possibilities but collapses to one corner of that space, and the billions upon billions of variables that originally defined the system are repackaged into relatively few quantities. “The observations of interest are controlled by a very low-dimensional subspace of your big space, and the problem is that you just don’t know how to find that region,” Millis explains. For lack of other options, physicists have tended to rely on quantities drawn from classical physics, potentially missing out on distinctive quantum effects. “There are more intrinsically quantum-mechanical ways to classify phases,” he says.

Finding patterns is the forte of neural networks. For reasons that are still not fully understood, neural networks are uncannily good at extracting the salient features of a sprawling dataset to identify, for instance, people’s movie preferences or pet pictures. Carleo has found that a network can repeat the trick for complex quantum systems. “It understands the main features that characterize a cat or dog, and this is the same challenge we face when we treat quantum systems,” he says.

The network he built with Matthias Troyer, now at Microsoft Research, analyzes one- and two-dimensional grids of electrons. The electrons are fixed in place but can reorient their spins in one of two directions, up or down. Their interactions work at cross-purposes, some favoring spins that point the same way, others favoring spins that point opposite ways. How the system will settle down is not at all obvious.

The network predicts the outcome not by directly solving the relevant equations — good luck doing that for a system of this size — but by trial and error. Given one possible configuration of spins as an input, the network returns the probability and energy of that configuration. In so doing, it creates various arithmetic combinations of the spins to capture their correlations. Any one electron might be associated with any other electron or group of electrons.
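In code, the core object is simply a function from a spin configuration to an amplitude. The sketch below is a minimal illustration, assuming a small restricted-Boltzmann-machine-style ansatz of the general kind Carleo and Troyer described, with toy sizes and random parameters chosen purely for demonstration; it is not their actual model or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_hidden = 8, 16                               # assumed toy sizes
a = rng.normal(scale=0.01, size=n_spins)                # visible biases
b = rng.normal(scale=0.01, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.01, size=(n_spins, n_hidden))    # spin-hidden couplings

def log_amplitude(spins):
    """Log of the unnormalized amplitude psi(s) for a configuration of +/-1 spins."""
    theta = b + spins @ W
    # The hidden units can be summed out analytically: prod_j 2*cosh(theta_j).
    return a @ spins + np.sum(np.log(2.0 * np.cosh(theta)))

# One possible configuration of spins, entered as input:
config = rng.choice([-1.0, 1.0], size=n_spins)
print("log psi(s) =", log_amplitude(config))
```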

To train the network, the researchers first initialize it to random values. They enter several trial configurations to see how the energy varies and tune the network to reduce that quantity. They repeat the procedure until they reach a point of diminishing returns. The network then describes the lowest-energy state of the system, from which the user can calculate all the system’s other properties. Of all the astronomically many correlations that might have been meaningful, the network zeroes in on the most important, revealing the essential physics within the system.
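That training loop can be sketched in the same toy setting. The snippet below assumes a transverse-field Ising chain small enough to enumerate every spin configuration exactly, the same toy ansatz as above, and plain finite-difference gradient descent standing in for whatever optimizer the researchers actually use; the point is only the cycle of trying configurations, measuring the energy, and nudging the network toward lower energy.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_spins, n_hidden = 6, 12          # assumed toy sizes
J, h = 1.0, 0.5                    # assumed couplings of a transverse-field Ising chain

# Every configuration of the tiny chain (2**6 = 64 of them), so the energy is exact.
configs = np.array(list(product([-1.0, 1.0], repeat=n_spins)))

def log_psi(params, s):
    """Log-amplitude of the same toy ansatz as in the previous sketch."""
    a, b, W = params
    return s @ a + np.sum(np.log(2.0 * np.cosh(b + s @ W)), axis=-1)

def energy(params):
    """Variational energy <psi|H|psi> / <psi|psi> for the toy chain."""
    logp = log_psi(params, configs)
    psi = np.exp(logp - logp.max())                       # unnormalized amplitudes
    diag = -J * np.sum(configs[:, :-1] * configs[:, 1:], axis=1)
    e = np.sum(psi**2 * diag)                             # spin-alignment term
    for i in range(n_spins):                              # transverse field flips one spin at a time
        flipped = configs.copy()
        flipped[:, i] *= -1
        e += -h * np.sum(psi * np.exp(log_psi(params, flipped) - logp.max()))
    return e / np.sum(psi**2)

# Initialize the network to random values.
params = [rng.normal(scale=0.05, size=n_spins),
          rng.normal(scale=0.05, size=n_hidden),
          rng.normal(scale=0.05, size=(n_spins, n_hidden))]

lr, eps = 0.1, 1e-5
for step in range(200):
    base = energy(params)
    grads = []
    for p in params:                                      # crude finite-difference gradient
        g = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            p[idx] += eps
            g[idx] = (energy(params) - base) / eps
            p[idx] -= eps
        grads.append(g)
    for p, g in zip(params, grads):
        p -= lr * g                                       # tune the network to lower the energy
    if step % 50 == 0:
        print(f"step {step:3d}   variational energy = {base:.4f}")
```

In a realistic calculation the configurations would be sampled rather than enumerated and the gradient computed analytically, but the structure of the loop is the same.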

In an alternative procedure, the network can be tuned not to achieve the lowest energy but to match experimental data. Whereas conventional techniques may need a million data points to reconstruct the workings of even a modest system, Carleo’s method might get by with just 100. “On some benchmark calculations,” he says, “we clearly see that using a neural network is a substantial improvement over existing approaches.” He says he is now considering what happens when the electrons are free to hop from one spot on the grid to another. That profoundly complicates the problem, because electrons resist clumping together, and the model needs to track the correlations they develop as they come together and draw apart. The reward, though, may be an explanation of high-temperature superconductivity.

Whereas Carleo applies neural networks to quantum systems, Stoudenmire goes the other way, taking the expertise that physicists have acquired with mammoth systems and adapting it to machine learning.

For most systems physicists encounter, correlations are strongest on small scales and diminish with distance. Distant regions can act in a coordinated way, but their linkage is mediated by what happens on smaller scales. Machine learning, too, often deals with data whose correlations weaken with distance. The distance need not be spatial. In language processing, for example, words spoken in succession are usually related, whereas those uttered a minute apart have a looser connection.

A system with such correlations has a hierarchical structure. Neighboring components can be lumped together and treated as one unit. That unit, in turn, interacts with neighboring units and can be lumped with them. Continuing in this vein, you block the system into progressively larger pieces. Physics occurs on all these levels, just as a country has vibrant politics at every scale, from school board elections to international diplomacy. “Things are happening at different zoom levels and can be related to one another,” Stoudenmire says.

Building on older techniques for describing hierarchical systems, physicists in the early 1990s represented this chunking as a chain of mathematical operations involving tensors, which are a generalization of matrices: grids of numbers that can have not just rows and columns, but third, fourth, fifth and more dimensions. Crucially, you do not need to handle tensors entry by entry; you can manipulate them as single units, however big they may be. “We can work with these vectors that you would never imagine you could work with in terms of their size,” Stoudenmire says. With one stroke, you can operate on effectively infinitely many numbers at once, which gives even the simplest operations an Olympian power.
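Here is a minimal sketch of that chunking, under assumed toy sizes: the state of 40 spins is stored as a chain of small three-index tensors, and a global quantity (here the norm) is obtained by contracting the chain one site at a time, without ever writing out the 2^40 amplitudes the chain implicitly encodes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_spins, bond_dim = 40, 8    # 2**40 amplitudes are implied, but never written out

# One small tensor per spin, with indices (left bond, physical spin, right bond).
mps = [rng.normal(scale=0.3,
                  size=(1 if i == 0 else bond_dim,
                        2,
                        1 if i == n_spins - 1 else bond_dim))
       for i in range(n_spins)]

def norm_squared(tensors):
    """Contract <psi|psi> site by site, carrying only a small 'environment' matrix."""
    env = np.ones((1, 1))
    for A in tensors:
        # env has indices (bond of psi, bond of psi*); absorb one site of each.
        env = np.einsum("ab,asc,bsd->cd", env, A, A.conj())
    return env.item()

print("<psi|psi> =", norm_squared(mps))
```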

In fact, all you ever need are linear operations such as rotating and scaling the tensors. To a machine-learning researcher, that beggars belief. A neural network could hardly do a thing with only linear operations. But there’s a crucial difference with these tensors: Linear operations on them are performed in an enormous abstract space. In effect, the complexity is offloaded to that space. “Do you either work in low-dimensional space and do complicated things on it, or do you work in a high-dimensional space and do simple things on it?” Stoudenmire asks. Which trade-off is better depends on the problem. “There’s some really compelling things about physics tensor networks that are either really hard to do with neural networks or completely unimaginable,” he says. He hopes his colleagues in machine learning will feel the same excitement he did when he first saw the links between these fields.
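To make the trade-off concrete, here is a minimal toy in the spirit of that idea, with every detail an illustrative assumption: each feature is lifted by a simple two-component map, the per-feature maps are combined into one large product space, and a purely linear fit in that space handles XOR-style data that no linear rule can separate in the original two dimensions.

```python
import numpy as np

def local_map(x):
    """Lift one feature in [0, 1] to a two-component vector (a common choice in
    tensor-network machine-learning papers)."""
    return np.array([np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)])

def lift(sample):
    """Tensor product of the per-feature maps: 2 features give 4 dimensions here,
    and N features would give 2**N."""
    phi = local_map(sample[0])
    for x in sample[1:]:
        phi = np.kron(phi, local_map(x))
    return phi

# XOR-style data: the label is +1 exactly when one of the two features is "on".
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])

Phi = np.array([lift(s) for s in X])           # data lifted into the big product space
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # a purely linear fit in that space
print("predictions:", np.sign(Phi @ w))        # recovers [-1, 1, 1, -1]
```

With more features the product space grows exponentially, which is exactly the regime where tensor-network techniques for keeping it manageable come into play.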