How to Make the Universe Think for Us

The group also implemented their scheme in an optical system — where the input image and weights are encoded in two beams of light that get jumbled together by a crystal — and in an electronic circuit capable of similarly shuffling inputs. In principle, any system with byzantine behavior will do, though the researchers believe the optical system holds particular promise. Not only can a crystal blend light extremely quickly, but light also contains abundant data about the world. McMahon imagines miniaturized versions of his optical neural network someday serving as the eyes of self-driving cars, identifying stop signs and pedestrians before feeding that information to the vehicle’s computer chip, much as our retinas perform some basic visual processing on incoming light.

The Achilles heel of these systems, however, is that training them requires a return to the digital world. Backpropagation involves running a neural network in reverse, but plates and crystals don’t readily unmix sounds and light. So the group constructed a digital model of each physical system. Reversing these models on a laptop, they could use the backpropagation algorithm to calculate how to adjust the weights to give accurate answers.
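
To make that hybrid scheme concrete, here is a minimal sketch in PyTorch. It assumes a hypothetical `physical_system` function standing in for the hardware and a `digital_model` that approximates it in software; every name and number is illustrative, not the group’s actual code. The forward pass uses the physical system’s output, while the backward pass borrows gradients from the digital surrogate.

```python
import torch

# Illustrative stand-ins (assumed names, not the group's code): in a real setup,
# `physical_system` would send inputs to the hardware and measure its response,
# while `digital_model` is a differentiable software surrogate of that hardware.
def physical_system(x, weights):
    # Stand-in for the plate/crystal/circuit: assumed to resemble the surrogate,
    # plus noise and imperfections.
    return torch.tanh(x @ weights) + 0.01 * torch.randn(x.shape[0], weights.shape[1])

def digital_model(x, weights):
    # Imperfect but differentiable model of the hardware.
    return torch.tanh(x @ weights)

class HybridLayer(torch.autograd.Function):
    """Forward pass on the physical system; backward pass through the digital model."""

    @staticmethod
    def forward(ctx, x, weights):
        ctx.save_for_backward(x, weights)
        return physical_system(x, weights)

    @staticmethod
    def backward(ctx, grad_output):
        x, weights = ctx.saved_tensors
        with torch.enable_grad():
            x_d = x.detach().requires_grad_(True)
            w_d = weights.detach().requires_grad_(True)
            y = digital_model(x_d, w_d)
            grad_x, grad_w = torch.autograd.grad(y, (x_d, w_d), grad_output)
        return grad_x, grad_w

# Toy training loop: the "thinking" happens in the (here simulated) hardware,
# while the weight updates are computed by ordinary backpropagation on a laptop.
weights = torch.randn(4, 3, requires_grad=True)
optimizer = torch.optim.SGD([weights], lr=0.1)
for step in range(200):
    x = torch.randn(8, 4)
    target = torch.zeros(8, 3)
    output = HybridLayer.apply(x, weights)
    loss = ((output - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this picture, only the weight updates run on a conventional computer; the forward “thinking” stays in the physical system.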

With this training, the plate learned to classify handwritten digits correctly 87% of the time. The circuit and laser reached 93% and 97% accuracy, respectively. The results showed “that not only standard neural networks can be trained through backpropagation,” said Julie Grollier, a physicist at the French National Center for Scientific Research (CNRS). “That’s beautiful.”

The group’s quivering metal plate has not yet brought computing closer to the shocking efficiency of the brain. It doesn’t even approach the speed of digital neural networks. But McMahon views his devices as striking, if modest, proof that you don’t need a brain or computer chip to think. “Any physical system can be a neural network,” he said.

The Learning Part

Ideas abound for the other half of the puzzle — getting a system to learn all by itself.

Florian Marquardt, a physicist at the Max Planck Institute for the Science of Light in Germany, believes one option is to build a machine that runs backward. Last year, he and a collaborator proposed a physical analogue of the backpropagation algorithm that could run on such a system.

To show that it works, they digitally simulated a laser setup somewhat like McMahon’s, with the adjustable weights encoded in a light wave that mixes with another input wave (encoding, say, an image). They nudge the output to be closer to the right answer and use optical components to unmix the waves, reversing the process. “The magic,” Marquardt said, is that “when you try the device once more with the same input, [the output] now has a tendency to be closer to where you want it to be.” Now they are collaborating with experimentalists to build such a system.

But focusing on systems that run in reverse limits the options, so other researchers are leaving backpropagation behind entirely. They take encouragement from knowing that the brain learns in some other way than standard backpropagation. “The brain doesn’t work like this,” said Scellier. Neuron A communicates with neuron B, “but it’s only one-way.”

In 2017, Scellier and Yoshua Bengio, a computer scientist at the University of Montreal, developed a unidirectional learning method called equilibrium propagation. To get a sense of how it works, imagine a network of arrows that act like neurons, their direction indicating a 0 or 1, connected in a grid by springs that act as synaptic weights. The looser a spring, the less the linked arrows tend to snap into alignment.

First, you twist arrows in the leftmost row to reflect the pixels of your handwritten digit and hold them fixed while the disturbance ripples out through the springs, flipping other arrows. When the flipping stops, the rightmost arrows give the answer.

Crucially, you don’t have to train this system by un-flipping the arrows. Instead, you connect another set of arrows showing the correct answer along the bottom of the network; these flip arrows in the upper set, and the whole grid settles into a new equilibrium. Finally, you compare the new orientations of the arrows with the old orientations and tighten or loosen each spring accordingly. Over many trials, the springs acquire smarter tensions in a way that Scellier and Bengio have shown is equivalent to backpropagation.
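
In code, the spring-and-arrow picture boils down to two relaxations and one local comparison. The toy NumPy sketch below is an illustrative simplification, not Scellier and Bengio’s exact formulation: the network sizes, learning rates and dynamics are assumptions. It lets the network settle freely, nudges its outputs toward the correct answer, and then adjusts each “spring” based on how the local correlations changed.

```python
import numpy as np

# Toy equilibrium propagation: symmetric weights play the role of springs,
# unit states play the role of arrows, and tanh measures how far each
# "arrow" has flipped. All sizes and constants are illustrative.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2                     # springs pull both ends equally
np.fill_diagonal(W, 0.0)
rho = np.tanh

def relax(s, x, beta=0.0, target=None, steps=300, dt=0.1):
    """Hold the input units fixed and let the rest settle toward equilibrium."""
    s = s.copy()
    s[:n_in] = x
    for _ in range(steps):
        grad = -s + W @ rho(s)                      # relaxation dynamics
        if beta > 0.0:                              # weak pull toward the answer
            grad[-n_out:] += beta * (target - s[-n_out:])
        s[n_in:] += dt * grad[n_in:]
    return s

def train_step(x, target, beta=0.5, lr=0.02):
    global W
    s_free = relax(np.zeros(n), x)                          # free ("thinking") phase
    s_nudged = relax(s_free, x, beta=beta, target=target)   # nudged phase
    # Local rule: compare correlations before and after the nudge.
    dW = (np.outer(rho(s_nudged), rho(s_nudged)) -
          np.outer(rho(s_free), rho(s_free))) / beta
    W += lr * dW
    np.fill_diagonal(W, 0.0)
    return s_free[-n_out:]

# Toy usage: push the two outputs toward (+1, -1) for one fixed input pattern.
x = rng.standard_normal(n_in)
for _ in range(100):
    out = train_step(x, np.array([1.0, -1.0]))
print(out)
```

Over many such trials, the weights drift toward values that make the free equilibrium itself produce the right answer, which is the sense in which the procedure turns out to be equivalent to backpropagation.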

“It was thought that there was no possible link between physical neural networks and backpropagation,” said Grollier. “Very recently that’s what changed, and that’s very exciting.”

Initial work on equilibrium propagation was all theoretical. But in an upcoming publication, Grollier and Jérémie Laydevant, a physicist at CNRS, describe an implementation of the algorithm on a machine called a quantum annealer, built by the company D-Wave. The apparatus has a network of thousands of interacting superconductors that can act like arrows linked by springs and naturally calculate how the “springs” should be updated. The system cannot update these synaptic weights automatically, though.

Closing the Circle

At least one team has gathered the pieces to build an electronic circuit that does all the heavy lifting — thinking, learning and updating weights — with physics. “We’ve been able to close the loop for a small system,” said Sam Dillavou, a physicist at the University of Pennsylvania.

The goal for Dillavou and his collaborators is to emulate the brain, a literal smart substance: a relatively uniform system that learns without any single structure calling the shots. “Every neuron is doing its own thing,” he said.

To this end, they built a self-learning circuit in which variable resistors serve as the synaptic weights and the voltages measured between them play the role of neurons. To classify a given input, the circuit translates the data into voltages that are applied to a few nodes. Electric current courses through the circuit, seeking the paths that dissipate the least energy and changing the voltages as it stabilizes. The answer is the voltage at specified output nodes.

Their major innovation came in the ever-challenging learning step, for which they devised a scheme similar to equilibrium propagation called coupled learning. As one circuit takes in data and “thinks up” a guess, an identical second circuit starts with the correct answer and incorporates it into its behavior. Finally, electronics connecting each corresponding pair of resistors in the two circuits automatically compare their values and adjust them to achieve a “smarter” configuration.
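
To give a flavor of coupled learning, here is a toy sketch for an idealized linear resistor network, an assumption for illustration rather than the group’s actual circuit; the network layout, targets and constants are all made up. The network is solved once with only the inputs applied and once with the outputs gently pulled toward the right answer, and each edge then adjusts its conductance using nothing but the voltage drops it sees locally.

```python
import numpy as np

# Toy coupled learning on a small linear resistor network (illustrative only).
rng = np.random.default_rng(1)
nodes = 6
edges = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 4), (2, 5), (3, 4), (3, 5), (2, 3), (4, 5)]
G = rng.uniform(0.5, 1.5, len(edges))      # edge conductances = learnable "weights"
inputs, outputs = [0, 1], [4, 5]

def solve(G, clamped):
    """Node voltages when some nodes are held fixed (Kirchhoff's current law)."""
    L = np.zeros((nodes, nodes))
    for (i, j), g in zip(edges, G):
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    fixed = list(clamped)
    free = [k for k in range(nodes) if k not in clamped]
    v = np.zeros(nodes)
    v[fixed] = list(clamped.values())
    rhs = -L[np.ix_(free, fixed)] @ v[fixed]
    v[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return v

def train_step(x, target, eta=0.1, lr=0.05):
    global G
    v_free = solve(G, dict(zip(inputs, x)))                        # free ("thinking") state
    nudged = v_free[outputs] + eta * (target - v_free[outputs])    # gentle pull toward answer
    v_clamped = solve(G, {**dict(zip(inputs, x)), **dict(zip(outputs, nudged))})
    for e, (i, j) in enumerate(edges):
        dv_free = v_free[i] - v_free[j]
        dv_clamped = v_clamped[i] - v_clamped[j]
        # Local rule: each edge compares only its own two voltage drops.
        G[e] += (lr / eta) * (dv_free ** 2 - dv_clamped ** 2)
        G[e] = max(G[e], 1e-3)                 # conductances must stay positive
    return v_free[outputs]

# Toy usage: teach the circuit to map input voltages (1.0, 0.0) to outputs (0.3, 0.7).
for _ in range(300):
    out = train_step(np.array([1.0, 0.0]), np.array([0.3, 0.7]))
print(out)
```

Because each conductance update depends only on that edge’s own voltage drops, no central processor has to orchestrate the learning.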

The group described their rudimentary circuit in a preprint last summer, showing that it could learn to distinguish three types of flowers with 95% accuracy. Now they’re working on a faster, more capable device.

Even that upgrade won’t come close to beating a state-of-the-art silicon chip. But the physicists building these systems suspect that digital neural networks — as mighty as they seem today — will eventually appear slow and inadequate next to their analog cousins. Digital neural networks can only scale up so much before getting bogged down by excessive computation, but bigger physical networks need not do anything but be themselves.

“It’s such a big, fast-moving and varied field that I find it hard to believe that there won’t be some pretty powerful computers made with these principles,” Dillavou said.
