Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)
But computer chips are now borrowing the architecture of the human brain to become more efficient.
Building on decades of work by Mead and others, engineers have been racing to roll out the first so-called neuromorphic chips for consumer use. Kwabena Boahen’s research group at Stanford unveiled its low-power Neurogrid chip in 2014, and Qualcomm has announced that its brain-inspired Zeroth processor will reach the market in 2018. Another model, I.B.M.’s TrueNorth, only recently moved from digital prototype to usable product. It consists of a million silicon neurons, tiny cores that communicate directly with one another using synapse-like connections. Here, the medium is the message; each neuron is both program and processing unit. The sensory data that the chip receives, rather than marching along single file, fan out through its synaptic networks. TrueNorth ultimately arrives at a decision—say, classifying the emotional timbre of its user’s voice—by group vote, as a choir of individual singers might strike on a harmony. I.B.M. claims the chip is useful in real-time pattern recognition, as for speech processing or image classification. But the biggest advance is its energy efficiency: it uses twenty milliwatts per square centimetre, more than a thousand times less than a traditional chip.
That is from a fascinating New Yorker article by Kelly Clancy.
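The "group vote" idea in the passage can be illustrated with a toy sketch. This is not TrueNorth's actual programming model; it is a minimal, hypothetical illustration of how a population of simple threshold units can reach a collective decision by majority vote, with each unit acting as both program and processing element. All names and parameters here are invented for the example.

```python
import random

random.seed(0)  # reproducible population


def make_neuron(weight, threshold):
    """A toy 'neuron': fires (returns 1) when its weighted input exceeds its threshold."""
    def neuron(x):
        return 1 if weight * x > threshold else 0
    return neuron


# A population of noisy neurons, each with a slightly different firing threshold,
# standing in for the many silicon neurons on a neuromorphic chip.
population = [make_neuron(1.0, random.uniform(0.4, 0.6)) for _ in range(101)]


def classify(x):
    """Group vote: the population's decision is the majority of individual spikes."""
    votes = sum(neuron(x) for neuron in population)
    return 1 if votes > len(population) // 2 else 0
```

A strong input (say, `classify(1.0)`) drives nearly every unit past its threshold and the vote is unanimous; a weak input (`classify(0.0)`) leaves the population silent. No single neuron is authoritative, which is part of what makes such designs tolerant of individual component failures.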