Bestselling science writer John Gribbin’s new book Computing with Quantum Cats (Prometheus Books) holds more than a few surprises for readers interested in the history of computers–from the massive decoding devices developed to crack German codes during World War II to the latest attempts by companies to build working quantum computers.

Quantum computing exploits the strange two-states-at-once nature of subatomic particles: a quantum bit (or qubit) can exist in a superposition of 0 and 1 simultaneously, whereas a standard (classical) computer stores each bit as strictly 0 or 1. A collection of n qubits can thus represent 2^n values at once.
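A minimal numerical sketch of that idea, using nothing beyond Python's standard library (the equal-superposition amplitudes are the textbook example, not anything specific to Gribbin's book):

```python
import math

# A classical bit holds exactly one value.
bit = 1

# A qubit's state is a pair of complex amplitudes (a, b) over the basis
# states |0> and |1>, constrained so that |a|^2 + |b|^2 = 1.
# An equal superposition -- "both at once" until measured:
a = 1 / math.sqrt(2)
b = 1 / math.sqrt(2)
prob_0 = abs(a) ** 2  # probability of reading 0 on measurement
prob_1 = abs(b) ** 2  # probability of reading 1
assert math.isclose(prob_0 + prob_1, 1.0)

# Two qubits need 4 amplitudes, three need 8: n qubits span 2**n basis
# states, which is where a quantum computer's exponential workspace
# comes from.
n = 10
amplitudes_needed = 2 ** n  # 1024 numbers to describe just 10 qubits
```

The exponential count of amplitudes, not any per-bit doubling, is what gives quantum machines their potential edge.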

As Jeremy Hsu at IEEE Spectrum writes:

One key to making quantum computing practical involves harnessing the quantum physics phenomenon of entanglement—separate qubits sharing the same quantum state—so that quantum computers can scale up effectively to tackle more complex challenges.
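The "separate qubits sharing the same quantum state" that Hsu describes can be sketched in a few lines. This toy simulation of the standard two-qubit Bell state is illustrative only; the sampling helper and seed are my own choices:

```python
import math
import random

# Entanglement in miniature: the Bell state (|00> + |11>)/sqrt(2),
# written as four amplitudes over the joint basis states 00, 01, 10, 11.
amplitudes = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}

def measure(rng):
    """Sample one joint outcome with probability |amplitude|^2 (the Born rule)."""
    outcomes = list(amplitudes)
    weights = [abs(a) ** 2 for a in amplitudes.values()]
    return rng.choices(outcomes, weights=weights)[0]

rng = random.Random(42)
samples = [measure(rng) for _ in range(1000)]

# Each qubit alone looks like a fair coin flip, yet the two always agree:
# measuring one fixes the other. That correlation is entanglement.
assert all(s in ("00", "11") for s in samples)
```

Scaling this sharing up across many qubits, without losing it, is the engineering problem Hsu is pointing at.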

One such challenge is encryption for data security. But there are other problems quantum computing could help scientists tackle. For example, Gribbin writes, drug development companies could use quantum computers to more efficiently design small molecules that can interact with cell proteins targeted in certain diseases.

But a lot of work remains to be done. Currently, quantum computing operations last for just a fraction of a second before the system stops functioning and what is called decoherence sets in: disturbed by interactions with their surroundings, the system’s key quantum particles cease to maintain the shared quantum state the computation depends on.

Gribbin:

A classical computer is good for as long as the hardware lasts, and my wife is not alone in having a computer nearly ten years old that she is still entirely happy with. By contrast, a quantum computer–the virtual Turing machine inside the hardware–“lasts” for about a millionth of a second. In fact, this is not quite the whole story. What really matters are the relative values of the (quantum) decoherence time and the time it takes for a (logic) gate to operate. The gate operation time may be pushed to a millionth of a millionth of a second, allowing for a million operations before complete decoherence occurs. Putting it another way, during the course of the operation of a gate, only one in a million qubits will “dephase.” This just about makes quantum computing feasible… (p. 228)

Still, some huge calculations can be done even in that short a span.
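Gribbin’s back-of-envelope ratio is easy to check directly. The figures below are the illustrative ones from the passage, not measured values for any real machine:

```python
# Illustrative figures from Gribbin's passage:
decoherence_time = 1e-6   # seconds before the quantum state dephases
gate_time = 1e-12         # seconds per logic-gate operation

# How many gate operations fit inside one coherence window:
operations_before_decoherence = decoherence_time / gate_time  # ~one million

# Equivalently, each gate operation risks dephasing about
# one in a million qubits:
dephase_fraction_per_gate = gate_time / decoherence_time  # ~1e-6
```

It is this ratio of timescales, rather than the raw coherence time, that decides whether useful computation fits in before decoherence wins.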

Developers in business and academia are working to bring quantum computing to fruition, but it could still take years before we see such machines working at the level of what Gribbin calls the Turing machines now present in our smartphones and other portable devices.

The strength of Gribbin’s book is in the historical connections he draws from the very first days of machine computation to the present.

The initial idea for quantum computing, for example, developed in the mid-1980s as a means of proving the existence of parallel universes, one of the oddest consequences of quantum mechanics.

That’s the work of David Deutsch, to whom Gribbin devotes the third part of his book. But I’m getting way ahead. How you get from the origins of the first computing machines to Deutsch is all the fun.

And Gribbin ties it all together in an engrossing three parts.

The book opens with the work of Alan Turing: the British genius who first theorized a machine that can solve problems based on simple programs.

Turing, always idiosyncratic and literal-minded, writes Gribbin, “saw that a ‘mechanical process’ carried out by a team of people could be carried out by a machine, in the everyday sense of the word.”

As a young student at Cambridge, Turing set about writing exactly how such a machine would work. In a paper that has become famous, he was able to show:

…that there are uncomputable problems, and that there is no way to distinguish provable statements in mathematics from unprovable statements in mathematics using some set of rules applied in a certain way. That was impressive enough. But what is even more impressive, and the main reason why Turing’s paper “On Computable Numbers” is held in such awe today, is that he realized that his “automatic machine” could be a universal computer. The way the machine works on a particular problem depends on its initial state. It is a limited machine that can only solve a single problem. But as Turing appreciated, the initial state can be set up by the machine reading a string of 1s and 0s from a tape–what we now call a computer program. The same piece of machinery (what we now call hardware) could be made to do any possible task in accordance with appropriate sets of instructions (what we now call software). One such machine can simulate the work of any such machine. And such machines are now known as Turing machines. In his own words, “it is possible to invent a single machine which can be used to compute any computable sequence.” (p. 20)
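The “automatic machine” of the passage can be captured in a few dozen lines. This is a minimal sketch, not Turing’s own notation: the rule table plays the role of the program on the tape, and the toy program here (flip every bit, then halt) is my own invented example:

```python
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """Run a one-tape Turing machine: a tape, a head, and a rule table."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank cells read "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head]
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# The "program": (state, read symbol) -> (write, move, next state).
# Changing only this table changes what the same machinery computes --
# the hardware/software split the passage describes.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

result = run_turing_machine(flip_rules, "1011")  # flips to "0100"
```

Swap in a different rule table and the identical simulator computes something else entirely, which is Turing’s universality point in miniature.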

From Turing, whose untimely and tragic death, Gribbin points out, may well not have been the suicide it has widely been believed to be, we then move on to the work of John von Neumann: the Hungarian-born immigrant to America and leading quantum physicist who inspired the field of artificial intelligence by writing about machines that can self-replicate and learn by trial and error.

Both men were instrumental in the war effort: Turing helped crack the German ENIGMA code, while von Neumann worked on the Manhattan Project and helped improve computers to aid in that work.

After the war, von Neumann and Edward Teller pressed for the development of the hydrogen bomb, the ‘super’ bomb as it was known at the time. But designing such a bomb, Gribbin tells us, would require much more computation than the design of the fission bombs that were dropped on Hiroshima and Nagasaki.

Von Neumann was uniquely placed–the only person who was privy to both the secrets of Los Alamos and (computer) developments at the Moore School. He was also the only person with the prestige and influence to ensure that the first program actually run on the ENIAC, starting in December 1945 before the machine was used for its intended purpose, was a simulation for the super project. In a striking example of pragmatism, the ENIAC team were allowed to see the equations, which were not classified, without being told anything about the super bomb, which was. (p. 78)

But von Neumann’s work in pure theory–quantum mechanics–is also key to the story, as Gribbin shows. He was responsible, in a negative sense, for inspiring some of the world’s leading physicists to come up with a way to restore classical determinism to the physics of subatomic particles: to challenge the standard view of quantum mechanics.

That standard view is the so-called Copenhagen Interpretation, one of whose more dissatisfying principles for many scientists at the time (including Einstein) was the notion that reality is fundamentally probabilistic and unpredictable.

The fundamental feature of the Copenhagen Interpretation is that a quantum entity such as an electron can be represented by a wave, described by a wave equation (also known as a wave “function”). This wave occupies a large volume of space (potentially, an infinitely large volume). The wave function has a value at any point in space, and this number is interpreted, following a suggestion made by Max Born, as representing the probability of finding the electron at that point. In some places, the wave is, in a sense, strong (the number associated with the wave function is large), and there is a high probability that if we look for the electron we will find it in one of those places; in other places, the wave is weak, and there is a small probability of finding the electron in one of those places. But when we look for the electron we do find it in a particular place, like a particle, not as a spread-out wave. The wave function is said to “collapse” onto that place. But as soon as the experiment is over, the wave starts spreading out across the universe. It is this combination of waves, probability and collapse which makes up the Copenhagen Interpretation, and which von Neumann wrapped up in a neat package in his book. (p. 106)
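Gribbin’s description of the Born rule translates directly into a few lines of arithmetic. The amplitude values below are invented purely for illustration, and the “wave” is sampled at just five points:

```python
import math
import random

# A toy wave function: an amplitude at each of a handful of positions.
# Where the wave is "strong" the number is large, as the passage says.
positions = [0, 1, 2, 3, 4]
amplitude = [0.1, 0.4, 0.8, 0.4, 0.1]

# Born's suggestion: probability of finding the electron at a point is
# the squared magnitude of the amplitude there (after normalizing).
norm = math.sqrt(sum(a * a for a in amplitude))
prob = [(a / norm) ** 2 for a in amplitude]

# "Collapse": a measurement finds the electron at exactly one position,
# sampled with these probabilities -- most often where the wave is strong.
rng = random.Random(0)
found_at = rng.choices(positions, weights=prob)[0]

assert math.isclose(sum(prob), 1.0)
```

Squaring the amplitudes and sampling one definite outcome is the whole probabilistic machinery that so bothered Einstein.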

As Gribbin shows in the second part of his book (‘Quanta’), a lot of physicists from the very start of the quantum revolution worked on ways to get around the indeterminacy of the Copenhagen Interpretation.

To breeze through all too quickly, they included: Louis de Broglie, whose Pilot Wave theory (combining the physics of waves and particles) allowed both the wave and the particle to be physically determined. But his work was dismissed and then forgotten.

Erwin Schrödinger, whose wave equation, in spite of his misgivings, became the foundation of the standard indeterminate view of quantum physics. He also devised the famous cat paradox (referenced in the book’s title) to illustrate the absurdity of this view.

Einstein, along with Podolsky and Rosen, who devised a famous thought experiment to challenge the Copenhagen view.

And finally the haunted, tragic David Bohm, whose hidden variables theory picked up where de Broglie left off.

All of them believed the accepted view of quantum mechanics could not be the final word, and that its champions (Niels Bohr and Werner Heisenberg among others) were blind to the possibility of hidden variables that could re-establish a fully determinist view of reality. (I’m neglecting John Stewart Bell, who was inspired by Bohm’s hidden-variables model to develop his famous inequality theorem. Gribbin devotes a chapter to his work and its importance to this history.)
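Bell’s bound on hidden-variable theories can even be verified by brute force. The sketch below checks the standard CHSH form of Bell’s inequality by enumerating every local deterministic strategy; the framing as a small enumeration is mine, not Gribbin’s:

```python
import itertools
import math

# CHSH combination of correlations: S = E00 + E01 + E10 - E11.
# In any local hidden-variable model, each side pre-assigns an answer
# (+1 or -1) to each of its two measurement settings. Enumerate all
# 16 deterministic strategies and find the best achievable S:
best_classical = max(
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in itertools.product([+1, -1], repeat=4)
)
# No local strategy beats 2 -- that is Bell's inequality.

# Entangled qubits can reach 2*sqrt(2), about 2.83 (Tsirelson's bound),
# and experiments observe violations of the classical limit.
quantum_value = 2 * math.sqrt(2)
assert quantum_value > best_classical
```

The gap between 2 and 2√2 is what lets experiments decide between hidden variables and quantum mechanics, and it is why Bell’s theorem matters to this history.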

For Gribbin, Schrödinger, in many ways one of the most conservative of the group of reactionaries, was also the most surprising. For it turns out he suggested the idea of a ‘many worlds’ interpretation of quantum mechanics …before anyone else.

Flash forward to the 1980s: the upshot of David Deutsch’s work–following in Schrödinger’s steps– was to propose quantum computing as the means to demonstrate the reality of Schrödinger’s parallel worlds.

In the mid-1930s, Turing’s insight pointed the way toward universal classical computers; in the mid-1980s, Deutsch’s insight pointed the way towards universal quantum computers. He described a quantum generalization of the Turing machine, and showed that a “universal quantum computer” could be built in accordance with the principle that “every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means… Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any [classical] Turing machine,” although they would also be able to simulate perfectly any such Turing machine. He stressed that a strong motivation for developing such machines is that “classical physics is false.” And in particular he drew attention to the way in which “quantum parallelism” would allow such computers to perform certain tasks faster than any classical computer. (p. 196)
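Deutsch’s own algorithm is the textbook illustration of the “quantum parallelism” mentioned in the passage: deciding whether a one-bit function f is constant or balanced with a single call to f, where a classical machine needs two. The sketch below simulates it with plain lists of four amplitudes over the two-qubit basis |x,y⟩; the helper names are mine:

```python
import math

H = 1 / math.sqrt(2)  # Hadamard normalization factor

def hadamard_both(s):
    """Apply a Hadamard gate to each qubit of state s = (a, b, c, d)."""
    a, b, c, d = s
    return [H * H * (a + b + c + d), H * H * (a - b + c - d),
            H * H * (a + b - c - d), H * H * (a - b - c + d)]

def hadamard_x(s):
    """Apply a Hadamard to the x (first) qubit only."""
    a, b, c, d = s
    return [H * (a + c), H * (b + d), H * (a - c), H * (b - d)]

def oracle(s, f):
    """Map |x,y> to |x, y XOR f(x)> -- a single 'query' to f."""
    out = [0.0] * 4
    for idx, amp in enumerate(s):
        x, y = idx >> 1, idx & 1
        out[(x << 1) | (y ^ f(x))] += amp
    return out

def deutsch(f):
    state = [0.0, 1.0, 0.0, 0.0]             # start in |x=0, y=1>
    state = hadamard_both(state)             # spread over all inputs at once
    state = oracle(state, f)                 # one call to f, on the superposition
    state = hadamard_x(state)                # interference reveals the answer
    prob_x1 = state[2] ** 2 + state[3] ** 2  # probability of measuring x = 1
    return "balanced" if prob_x1 > 0.5 else "constant"

results = {
    "f(x)=0": deutsch(lambda x: 0),
    "f(x)=1": deutsch(lambda x: 1),
    "f(x)=x": deutsch(lambda x: x),
    "f(x)=1-x": deutsch(lambda x: 1 - x),
}
```

One oracle call suffices because the superposition evaluates f on both inputs at once and interference extracts the single global property (constant vs. balanced) that distinguishes them.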

I’ve only just scratched the surface here–and I haven’t even reached the final sections where Gribbin discusses the many challenges that still must be faced before you and I ever see a quantum desktop or handheld device.

But by the time you get to the end of this fascinating book, you will get a sense of just how rich and strange was the historical path that led from Turing… to quantum computing.

Available in hardcover and on Kindle.



Source: Forbes