Information Is Physical (It from Bit)

 

The more energy, the faster the bits flip. Earth, air, fire, and water in the end are all made of energy, but the different forms they take are determined by information. To do anything requires energy. To specify what is done requires information.
—Seth Lloyd (2006)

 

QUANTUM MECHANICS HAS WEATHERED in its short history more crises, controversies, interpretations (the Copenhagen, the Bohm, the Many Worlds, the Many Minds), factional implosions, and general philosophical breast-beating than any other science. It is happily riddled with mysteries. It blithely disregards human intuition. Albert Einstein died unreconciled to its consequences, and Richard Feynman was not joking when he said no one understands it. Perhaps arguments about the nature of reality are to be expected; quantum physics, so uncannily successful in practice, deals in theory with the foundations of all things, and its own foundations are continually being rebuilt. Even so, the ferment sometimes seems more religious than scientific.


“How did this come about?” asks Christopher Fuchs, a quantum theorist at Bell Labs and then the Perimeter Institute in Canada.

Go to any meeting, and it is like being in a holy city in great tumult. You will find all the religions with all their priests pitted in holy war—the Bohmians, the Consistent Historians, the Transactionalists, the Spontaneous Collapseans, the Einselectionists, the Contextual Objectivists, the outright Everettics, and many more beyond that. They all declare to see the light, the ultimate light. Each tells us that if we will accept their solution as our savior, then we too will see the light.
 

 

It is time, he says, to start fresh. Throw away the existing quantum axioms, exquisite and mathematical as they are, and turn to deep physical principles. “Those principles should be crisp; they should be compelling. They should stir the soul.” And where should these physical principles be found? Fuchs answers his own question: in quantum information theory.

“The reason is simple, and I think inescapable,” he declares. “Quantum mechanics has always been about information; it is just that the physics community has forgotten this.”


VISUAL AID BY CHRISTOPHER FUCHS (Illustration credit 13.1)

 

 

One who did not forget—or who rediscovered it—was John Archibald Wheeler, pioneer of nuclear fission, student of Bohr and teacher of Feynman, namer of black holes, the last giant of twentieth-century physics. Wheeler was given to epigrams and gnomic utterances. A black hole has no hair was his famous way of stating that nothing but mass, charge, and spin can be perceived from outside. “It teaches us,” he wrote, “that space can be crumpled like a piece of paper into an infinitesimal dot, that time can be extinguished like a blown-out flame, and that the laws of physics that we regard as ‘sacred,’ as immutable, are anything but.” In 1989 he offered his final catchphrase: It from Bit. His view was extreme. It was immaterialist: information first, everything else later. “Otherwise put,” he said,

every it—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence … from bits.
 

 

Why does nature appear quantized? Because information is quantized. The bit is the ultimate unsplittable particle.

Among the physics phenomena that pushed information front and center, none were more spectacular than black holes. At first, of course, they had not seemed to involve information at all.

Black holes were the brainchild of Einstein, though he did not live to know about them. He established by 1915 that light must submit to the pull of gravity; that gravity curves the fabric of spacetime; and that a sufficient mass, compacted together, as in a dense star, would collapse utterly, intensifying its own gravity and contracting without limit. It took almost a half century more to face up to the consequences, because they are strange. Anything goes in, nothing comes out. At the center lies the singularity. Density becomes infinite; gravity becomes infinite; spacetime curves infinitely. Time and space are interchanged. Because no light, no signal of any kind, can escape the interior, such things are quintessentially invisible. Wheeler began calling them “black holes” in 1967. Astronomers are sure they have found some, by gravitational inference, and no one can ever know what is inside.

At first astrophysicists focused on matter and energy falling in. Later they began to worry about the information. A problem arose when Stephen Hawking, adding quantum effects to the usual calculations of general relativity, argued in 1974 that black holes should, after all, radiate particles—a consequence of quantum fluctuations near the event horizon. Black holes slowly evaporate, in other words. The problem was that Hawking radiation is featureless and dull. It is thermal radiation—heat. But matter falling into the black hole carries information, in its very structure, its organization, its quantum states—in terms of statistical mechanics, its accessible microstates. As long as the missing information stayed out of reach beyond the event horizon, physicists did not have to worry about it. They could say it was inaccessible but not obliterated. “All colours will agree in the dark,” as Francis Bacon said in 1625.

The outbound Hawking radiation carries no information, however. If the black hole evaporates, where does the information go? According to quantum mechanics, information may never be destroyed. The deterministic laws of physics require the states of a physical system at one instant to determine the states at the next instant; in microscopic detail, the laws are reversible, and information must be preserved. Hawking was the first to state firmly—even alarmingly—that this was a problem challenging the very foundations of quantum mechanics. The loss of information would violate unitarity, the principle that probabilities must add up to one. “God not only plays dice, He sometimes throws the dice where they cannot be seen,” Hawking said. In the summer of 1975, he submitted a paper to the Physical Review with a dramatic headline, “The Breakdown of Physics in Gravitational Collapse.” The journal held it for more than a year before publishing it with a milder title.

As Hawking expected, other physicists objected vehemently. Among them was John Preskill at the California Institute of Technology, who continued to believe in the principle that information cannot be lost: even when a book goes up in flames, in physicists’ terms, if you could track every photon and every fragment of ash, you should be able to integrate backward and reconstruct the book. “Information loss is highly infectious,” warned Preskill at a Caltech Theory Seminar. “It is very hard to modify quantum theory so as to accommodate a little bit of information loss without it leaking into all processes.” In 1997 he made a much-publicized wager with Hawking that the information must be escaping the black hole somehow. They bet an encyclopedia of the winner’s choice. “Some physicists feel the question of what happens in a black hole is academic or even theological, like counting angels on pinheads,” said Leonard Susskind of Stanford, siding with Preskill. “But it is not so at all: at stake are the future rules of physics.” Over the next few years a cornucopia of solutions was proposed. Hawking himself said at one point: “I think the information probably goes off into another universe. I have not been able to show it yet mathematically.”

It was not until 2004 that Hawking, then sixty-two, reversed himself and conceded the bet. He announced that he had found a way to show that quantum gravity is unitary after all and that information is preserved. He applied a formalism of quantum indeterminacy—the “sum over histories” path integrals of Richard Feynman—to the very topology of spacetime and declared, in effect, that black holes are never unambiguously black. “The confusion and paradox arose because people thought classically in terms of a single topology for space-time,” he wrote. His new formulation struck some physicists as cloudy and left many questions unanswered, but he was firm on one point. “There is no baby universe branching off, as I once thought,” he wrote. “The information remains firmly in our universe. I’m sorry to disappoint science fiction fans.” He gave Preskill a copy of Total Baseball: The Ultimate Baseball Encyclopedia, weighing in at 2,688 pages—“from which information can be recovered with ease,” he said. “But maybe I should have just given him the ashes.”

Charles Bennett came to quantum information theory by a very different route. Long before he developed his idea of logical depth, he was thinking about the “thermodynamics of computation”—a peculiar topic, because information processing was mostly treated as disembodied. “The thermodynamics of computation, if anyone had stopped to wonder about it, would probably have seemed no more urgent as a topic of scientific inquiry than, say, the thermodynamics of love,” says Bennett. It is like the energy of thought. Calories may be expended, but no one is counting.

Stranger still, Bennett tried investigating the thermodynamics of the least thermodynamic computer of all—the nonexistent, abstract, idealized Turing machine. Turing himself never worried about his thought experiment consuming any energy or radiating any heat as it goes about its business of marching up and down imaginary paper tapes. Yet in the early 1980s Bennett was talking about using Turing-machine tapes for fuel, their caloric content to be measured in bits. Still a thought experiment, of course, meant to focus on a very real question: What is the physical cost of logical work? “Computers,” he wrote provocatively, “may be thought of as engines for transforming free energy into waste heat and mathematical work.” Entropy surfaced again. A tape full of zeroes, or a tape encoding the works of Shakespeare, or a tape rehearsing the digits of π, has “fuel value.” A random tape has none.

Bennett, the son of two music teachers, grew up in the Westchester suburbs of New York; he studied chemistry at Brandeis and then Harvard in the 1960s. James Watson was at Harvard then, teaching about the genetic code, and Bennett worked for him one year as a teaching assistant. He got his doctorate in molecular dynamics, doing computer simulations that ran overnight on a machine with a memory of about twenty thousand decimal digits and generated output on pages and pages of fan-fold paper. Looking for more computing power to continue his molecular-motion research, he went to the Lawrence Livermore Laboratory in California and Argonne National Laboratory in Illinois, and then joined IBM Research in 1972.

IBM did not manufacture Turing machines, of course. But at some point it dawned on Bennett that a special-purpose Turing machine had already been found in nature: namely RNA polymerase. He had learned about polymerase directly from Watson; it is the enzyme that crawls along a gene—its “tape”—transcribing the DNA. It steps left and right; its logical state changes according to the chemical information written in sequence; and its thermodynamic behavior can be measured.

In the real world of 1970s computing, hardware had rapidly grown thousands of times more energy-efficient than during the early vacuum-tube era. Nonetheless, electronic computers dissipate considerable energy in the form of waste heat. The closer they come to their theoretical minimum of energy use, the more urgently scientists want to know just what that theoretical minimum is. Von Neumann, working with his big computers, made a back-of-the-envelope calculation as early as 1949, proposing an amount of heat that must be dissipated “per elementary act of information, that is per elementary decision of a two-way alternative and per elementary transmittal of one unit of information.” He based it on the molecular work done in a model thermodynamic system by Maxwell’s demon, as reimagined by Leó Szilárd. Von Neumann said the price is paid by every elementary act of information processing, every choice between two alternatives. By the 1970s this was generally accepted. But it was wrong.

Von Neumann’s error was discovered by the scientist who became Bennett’s mentor at IBM, Rolf Landauer, an exile from Nazi Germany. Landauer devoted his career to establishing the physical basis of information. “Information Is Physical” was the title of one famous paper, meant to remind the community that computation requires physical objects and obeys the laws of physics. Lest anyone forget, he titled a later essay—his last, it turned out—“Information Is Inevitably Physical.” Whether a bit is a mark on a stone tablet or a hole in a punched card or a particle with spin up or down, he insisted that it could not exist without some embodiment. Landauer tried in 1961 to prove von Neumann’s formula for the cost of information processing and discovered that he could not. On the contrary, it seemed that most logical operations have no entropy cost at all. When a bit flips from zero to one, or vice-versa, the information is preserved. The process is reversible. Entropy is unchanged; no heat needs to be dissipated. Only an irreversible operation, he argued, increases entropy.

Landauer and Bennett were a double act: a straight and narrow old IBM type and a scruffy hippie (in Bennett’s view, anyway). The younger man pursued Landauer’s principle by analyzing every kind of computer he could imagine, real and abstract, from Turing machines and messenger RNA to “ballistic” computers, carrying signals via something like billiard balls. He confirmed that a great deal of computation can be done with no energy cost at all. In every case, Bennett found, heat dissipation occurs only when information is erased. Erasure is the irreversible logical operation. When the head on a Turing machine erases one square of the tape, or when an electronic computer clears a capacitor, a bit is lost, and then heat must be dissipated. In Szilárd’s thought experiment, the demon does not incur an entropy cost when it observes or chooses a molecule. The payback comes at the moment of clearing the record, when the demon erases one observation to make room for the next.

Forgetting takes work.
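The cost of forgetting can even be put in numbers. Here is a minimal sketch, in Python, of the bound Landauer identified (von Neumann's kT ln 2, now charged only to erasure); the figure of 300 kelvin, roughly room temperature, is an assumption chosen for illustration:

```python
import math

k = 1.380649e-23   # Boltzmann constant, joules per kelvin
T = 300.0          # assumed operating temperature, about room temperature

landauer_limit = k * T * math.log(2)   # minimum heat per erased bit, in joules
print(f"Erasing one bit at {T:.0f} K dissipates at least {landauer_limit:.2e} J")
# Reversible steps (flipping a bit, copying it onto blank tape) carry no such
# minimum; only erasure, the irreversible operation, must pay it.
```

At room temperature the price comes to a few zeptojoules per forgotten bit, far below what any real computer of Bennett's era dissipated, which is exactly why the question had seemed academic.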

“You might say this is the revenge of information theory on quantum mechanics,” Bennett says. Sometimes a successful idea in one field can impede progress in another. In this case the successful idea was the uncertainty principle, which brought home the central role played by the measurement process itself. One can no longer talk simply about “looking” at a molecule; the observer needs to employ photons, and the photons must be more energetic than the thermal background, and complications ensue. In quantum mechanics the act of observation has consequences of its own, whether performed by a laboratory scientist or by Maxwell’s demon. Nature is sensitive to our experiments.

“The quantum theory of radiation helped people come to the incorrect conclusion that computing had an irreducible thermodynamic cost per step,” Bennett says. “In the other case, the success of Shannon’s theory of information processing led people to abstract away all of the physics from information processing and think of it as a totally mathematical thing.” As communications engineers and chip designers came closer and closer to atomic levels, they worried increasingly about quantum limitations interfering with their clean, classical ability to distinguish zero and one states. But now they looked again—and this, finally, is where quantum information science is born. Bennett and others began to think differently: that quantum effects, rather than being a nuisance, might be turned to advantage.

Wedged like a hope chest against a wall of his office at IBM’s research laboratory in the wooded hills of Westchester is a light-sealed device called Aunt Martha (short for Aunt Martha’s coffin). Bennett and his research assistant John Smolin jury-rigged it in 1988 and 1989 with a little help from the machine shop: an aluminum box spray-painted dull black on the inside and further sealed with rubber stoppers and black velvet. With a helium-neon laser for alignment and high-voltage cells to polarize the photons, they sent the first message ever to be encoded by quantum cryptography. It was a demonstration of an information-processing task that could be effectively accomplished only via a quantum system. Quantum error correction, quantum teleportation, and quantum computers followed shortly behind.

The quantum message passed between Alice and Bob, a ubiquitous mythical pair. Alice and Bob got their start in cryptography, but the quantum people own them now. Occasionally they are joined by Charlie. They are constantly walking into different rooms and flipping quarters and sending each other sealed envelopes. They choose states and perform Pauli rotations. “We say things such as ‘Alice sends Bob a qubit and forgets what she did,’ ‘Bob does a measurement and tells Alice,’” explains Barbara Terhal, a colleague of Bennett’s and one of the next generation of quantum information theorists. Terhal herself has investigated whether Alice and Bob are monogamous—another term of art, naturally.

In the Aunt Martha experiment, Alice sends information to Bob, encrypted so that it cannot be read by a malevolent third party (Eve the eavesdropper). If they both know their private key, Bob can decipher the message. But how is Alice to send Bob the key in the first place? Bennett and Gilles Brassard, a computer scientist in Montreal, began by encoding each bit of information as a single quantum object, such as a photon. The information resides in the photon’s quantum states—for example, its horizontal or vertical polarization. Whereas an object in classical physics, typically composed of billions of particles, can be intercepted, monitored, observed, and passed along, a quantum object cannot. Nor can it be copied or cloned. The act of observation inevitably disrupts the message. No matter how delicately eavesdroppers try to listen in, they can be detected. Following an intricate and complex protocol worked out by Bennett and Brassard, Alice generates a sequence of random bits to use as the key, and Bob is able to establish an identical sequence at his end of the line.
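A toy rendering of that key exchange is easy to write down, with random choices standing in for photons and polarizers; the sifting step at the end is the heart of the Bennett and Brassard scheme, while the variable names and the absence of Eve are simplifications for illustration only:

```python
import random

N = 32  # number of photons Alice sends (an arbitrary choice)

alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]   # '+' rectilinear, 'x' diagonal
bob_bases   = [random.choice("+x") for _ in range(N)]

# If Bob guesses Alice's basis he reads her bit exactly; if he guesses wrong,
# quantum mechanics hands him a 50/50 random result.
bob_bits = [bit if a_basis == b_basis else random.randint(0, 1)
            for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)]

# Public discussion: they compare bases (never bits) and keep the positions
# where the bases matched. That surviving sequence is the sifted key.
key = [b for b, a_basis, b_basis in zip(bob_bits, alice_bases, bob_bases)
       if a_basis == b_basis]
print("shared secret key:", key)
```

An eavesdropper would have to guess bases too, and her wrong guesses would scramble some of Bob's matching bits; Alice and Bob can expose her simply by comparing a sample of the key in public.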

The first experiments with Aunt Martha’s coffin managed to send quantum bits across thirty-two centimeters of free air. It was not Mr. Watson, come here, I want to see you, but it was a first in the history of cryptography: an absolutely unbreakable cryptographic key. Later experimenters moved on to optical fiber. Bennett, meanwhile, moved on to quantum teleportation.

He regretted that name soon enough, when the IBM marketing department featured his work in an advertisement with the line “Stand by: I’ll teleport you some goulash.” But the name stuck, because teleportation worked. Alice does not send goulash; she sends qubits.

The qubit is the smallest nontrivial quantum system. Like a classical bit, a qubit has two possible values, zero or one—which is to say, two states that can be reliably distinguished. In a classical system, all states are distinguishable in principle. (If you cannot tell one color from another, you merely have an imperfect measuring device.) But in a quantum system, imperfect distinguishability is everywhere, thanks to Heisenberg’s uncertainty principle. When you measure any property of a quantum object, you thereby lose the ability to measure a complementary property. You can discover a particle’s momentum or its position but not both. Other complementary properties include directions of spin and, as in Aunt Martha’s coffin, polarization. Physicists think of these quantum states in a geometrical way—the states of a system corresponding to directions in space (a space of many possible dimensions), and their distinguishability depending on whether those directions are perpendicular (or “orthogonal”).

This imperfect distinguishability is what gives quantum physics its dreamlike character: the inability to observe systems without disturbing them; the inability to clone quantum objects or broadcast them to many listeners. The qubit has this dreamlike character, too. It is not just either-or. Its 0 and 1 values are represented by quantum states that can be reliably distinguished—for example, horizontal and vertical polarizations—but coexisting with these are the whole continuum of intermediate states, such as diagonal polarizations, that lean toward 0 or 1 with different probabilities. So a physicist says that a qubit is a superposition of states; a combination of probability amplitudes. It is a determinate thing with a cloud of indeterminacy living inside. But the qubit is not a muddle; a superposition is not a hodgepodge but a combining of probabilistic elements according to clear and elegant mathematical rules.
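The arithmetic behind that picture fits in a few lines. A minimal sketch, with NumPy arrays standing in for polarization states; the diagonal state is the illustrative choice here:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)    # |0>, say horizontal polarization
one  = np.array([0, 1], dtype=complex)    # |1>, say vertical polarization

# A diagonal polarization: a superposition leaning equally toward 0 and 1.
diagonal = (zero + one) / np.sqrt(2)

# The Born rule: measurement probabilities are the squared magnitudes of the
# amplitudes, and they always sum to one.
probabilities = np.abs(diagonal) ** 2
print(probabilities, probabilities.sum())   # [0.5 0.5] 1.0
```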

“A nonrandom whole can have random parts,” says Bennett. “This is the most counterintuitive part of quantum mechanics, yet it follows from the superposition principle and is the way nature works, as far as we know. People may not like it at first, but after a while you get used to it, and the alternatives are far worse.”

The key to teleportation and to so much of the quantum information science that followed is the phenomenon known as entanglement. Entanglement takes the superposition principle and extends it across space, to a pair of qubits far apart from each other. They have a definite state as a pair even while neither has a measurable state on its own. Before entanglement could be discovered, it had to be invented, in this case by Einstein. Then it had to be named, not by Einstein but by Schrödinger. Einstein invented it for a thought experiment designed to illuminate what he considered flaws in quantum mechanics as it stood in 1935. He publicized it in a famous paper with Boris Podolsky and Nathan Rosen titled “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” It was famous in part for provoking Wolfgang Pauli to write to Werner Heisenberg, “Einstein has once again expressed himself publicly on quantum mechanics.… As is well known, this is a catastrophe every time it happens.” The thought experiment imagined a pair of particles correlated in a special way, as when, for example, a pair of photons are emitted by a single atom. Their polarization is random but identical—now and as long as they last.


THE QUBIT

 

 

Einstein, Podolsky, and Rosen investigated what would happen when the photons are far apart and a measurement is performed on one of them. In the case of entangled particles—the pair of photons, created together and now light-years apart—it seems that the measurement performed on one has an effect on the other. The instant Alice measures the vertical polarization of her photon, Bob’s photon will also have a definite polarization state on that axis, whereas its diagonal polarization will be indefinite. The measurement thus creates an influence apparently traveling faster than light. It seemed a paradox, and Einstein abhorred it. “That which really exists in B should not depend on what kind of measurement is carried out in space A,” he wrote. The paper concluded sternly, “No reasonable definition of reality could be expected to permit this.” He gave it the indelible name spukhafte Fernwirkung, “spooky action at a distance.”
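The correlation itself, stripped of its interpretation, is easy to exhibit numerically. Here is a small sketch of the entangled pair as a joint state, (|00> + |11>)/√2, sampled the way repeated measurements would sample it; the simulation is ordinary classical bookkeeping, offered only as illustration:

```python
import numpy as np

# Amplitudes over the four joint outcomes 00, 01, 10, 11 for Alice and Bob.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(bell) ** 2        # [0.5, 0, 0, 0.5]
outcomes = ["00", "01", "10", "11"]

# Ten joint measurements: neither photon has a definite value in advance,
# yet Alice's digit and Bob's digit always agree.
print(np.random.choice(outcomes, size=10, p=probabilities))
```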

In 2003 the Israeli physicist Asher Peres proposed one answer to the Einstein-Podolsky-Rosen (EPR) puzzle. The paper was not exactly wrong, he said, but it had been written too soon: before Shannon published his theory of information, “and it took many more years before the latter was included in the physicist’s toolbox.” Information is physical. It is no use talking about quantum states without considering the information about the quantum states.

Information is not just an abstract notion. It requires a physical carrier, and the latter is (approximately) localized. After all, it was the business of the Bell Telephone Company to transport information from one telephone to another telephone, in a different location.
 
… When Alice measures her spin, the information she gets is localized at her position, and will remain so until she decides to broadcast it. Absolutely nothing happens at Bob’s location.… It is only if and when Alice informs Bob of the result she got (by mail, telephone, radio, or by means of any other material carrier, which is naturally restricted to the speed of light) that Bob realizes that his particle has a definite pure state.

 

For that matter, Christopher Fuchs argues that it is no use talking about quantum states at all. The quantum state is a construct of the observer—from which many troubles spring. Exit states; enter information. “Terminology can say it all: A practitioner in this field, whether she has ever thought an ounce about quantum foundations, is just as likely to say ‘quantum information’ as ‘quantum state’…‘What does the quantum teleportation protocol do?’ A now completely standard answer would be: ‘It transfers quantum information from Alice’s site to Bob’s.’ What we have here is a change of mind-set.”

The puzzle of spooky action at a distance has not been altogether resolved. Nonlocality has been demonstrated in a variety of clever experiments all descended from the EPR thought experiment. Entanglement turns out to be not only real but ubiquitous. The atom pair in every hydrogen molecule, H2, is quantumly entangled (“verschränkt,” as Schrödinger said). Bennett put entanglement to work in quantum teleportation, presented publicly for the first time in 1993. Teleportation uses an entangled pair to project quantum information from a third particle across an arbitrary distance. Alice cannot measure this third particle directly; rather, she measures something about its relation to one of the entangled particles. Even though Alice herself remains ignorant about the original, because of the uncertainty principle, Bob is able to receive an exact replica. Alice’s object is disembodied in the process. Communication is not faster than light, because Alice must also send Bob a classical (nonquantum) message on the side. “The net result of teleportation is completely prosaic: the removal of [the quantum object] from Alice’s hands and its appearance in Bob’s hands a suitable time later,” wrote Bennett and his colleagues. “The only remarkable feature is that in the interim, the information has been cleanly separated into classical and nonclassical parts.”
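For readers who want to see the bookkeeping, here is a bare-bones state-vector simulation of the protocol, a sketch for illustration rather than anyone's laboratory procedure: qubit 0 holds Alice's unknown state, qubits 1 and 2 are the shared entangled pair, and the two measured bits are the prosaic classical message.

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def on(gate, target):
    """Lift a one-qubit gate onto the three-qubit register (qubit 0 leftmost)."""
    ops = [I, I, I]
    ops[target] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(control, target):
    """Controlled-NOT on the three-qubit register, built from projectors."""
    P0, P1 = np.diag([1, 0]), np.diag([0, 1])
    a, b = [I, I, I], [I, I, I]
    a[control], b[control], b[target] = P0, P1, X
    kron3 = lambda ops: np.kron(np.kron(ops[0], ops[1]), ops[2])
    return kron3(a) + kron3(b)

# Alice's unknown qubit (any normalized amplitudes will do) and the shared pair.
psi = np.array([0.6, 0.8])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell).astype(complex)

# Alice's side: entangle her qubit with her half of the pair, then measure.
state = cnot(0, 1) @ state
state = on(H, 0) @ state

probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs / probs.sum())     # Born-rule sample
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1          # the two classical bits

# Collapse the register onto Alice's result and renormalize.
keep = np.array([((i >> 2) & 1, (i >> 1) & 1) == (m0, m1) for i in range(8)])
state = np.where(keep, state, 0)
state /= np.linalg.norm(state)

# Bob's side: apply the corrections dictated by the classical message.
if m1: state = on(X, 2) @ state
if m0: state = on(Z, 2) @ state

# Bob's qubit now carries Alice's original amplitudes (0.6, 0.8).
base = (m0 << 2) | (m1 << 1)
print(state[base].real, state[base | 1].real)
```

Run it a few times: Alice's measured bits come out differently on every run, but Bob's qubit always ends up carrying 0.6 and 0.8, and those two classical bits alone tell an eavesdropper nothing about them.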

Researchers quickly imagined many applications, such as transfer of volatile information into secure storage, or memory. With or without goulash, teleportation created excitement, because it opened up new possibilities for the very real but still elusive dream of quantum computing.

The idea of a quantum computer is strange. Richard Feynman chose the strangeness as his starting point in 1981, speaking at MIT, when he first explored the possibility of using a quantum system to compute hard quantum problems. He began with a supposedly naughty digression—“Secret! Secret! Close the doors …”

We have always had a great deal of difficulty in understanding the world view that quantum mechanics represents. At least I do, because I’m an old enough man [he was sixty-two] that I haven’t got to the point that this stuff is obvious to me. Okay, I still get nervous with it.… It has not yet become obvious to me that there is no real problem. I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.
 

 

He knew very well what the problem was for computation—for simulating quantum physics with a computer. The problem was probability. Every quantum variable involved probabilities, and that made the difficulty of computation grow exponentially. “The number of information bits is the same as the number of points in space, and therefore you’d have to have something like N^N configurations to be described to get the probability out, and that’s too big for our computer to hold.… It is therefore impossible, according to the rules stated, to simulate by calculating the probability.”
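The scale of the problem is easy to make concrete. A quick sketch of the bookkeeping burden, counting the complex amplitudes a classical simulation of n two-state quantum systems must track (the sixteen bytes per amplitude is an assumption about double-precision storage):

```python
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n                 # one complex amplitude per basis state
    gigabytes = amplitudes * 16 / 1e9   # 16 bytes per double-precision complex number
    print(f"{n:2d} qubits: {amplitudes:>16,d} amplitudes  (~{gigabytes:,.1f} GB)")
```

Every added particle doubles the table, which is Feynman's point: the honest classical simulation quickly becomes “too big for our computer to hold.”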

So he proposed fighting fire with fire. “The other way to simulate a probabilistic Nature, which I’ll call N for the moment, might still be to simulate the probabilistic Nature by a computer C which itself is probabilistic.” A quantum computer would not be a Turing machine, he said. It would be something altogether new.

“Feynman’s insight,” says Bennett, “was that a quantum system is, in a sense, computing its own future all the time. You may say it’s an analog computer of its own dynamics.” Researchers quickly realized that if a quantum computer had special powers in cutting through problems in simulating physics, it might be able to solve other types of intractable problems as well.

The power comes from that shimmering, untouchable object the qubit. The probabilities are built in. Embodying a superposition of states gives the qubit more power than the classical bit, always in only one state or the other, zero or one, “a pretty miserable specimen of a two-dimensional vector,” as David Mermin says. “When we learned to count on our sticky little classical fingers, we were misled,” Rolf Landauer said dryly. “We thought that an integer had to have a particular and unique value.” But no—not in the real world, which is to say the quantum world.

In quantum computing, multiple qubits are entangled. Putting qubits at work together does not merely multiply their power; the power increases exponentially. In classical computing, where a bit is either-or, n bits can encode any one of 2^n values. Qubits can encode these Boolean values along with all their possible superpositions. This gives a quantum computer a potential for parallel processing that has no classical equivalent. So quantum computers—in theory—can solve certain classes of problems that had otherwise been considered computationally infeasible.

An example is finding the prime factors of very large numbers. This happens to be the key to cracking the most widespread cryptographic algorithms in use today, particularly RSA encryption. The world’s Internet commerce depends on it. In effect, the very large number is a public key used to encrypt a message; if eavesdroppers can figure out its prime factors (also large), they can decipher the message. But whereas multiplying a pair of large prime numbers is easy, the inverse is exceedingly difficult. The procedure is an informational one-way street. So factoring RSA numbers has been an ongoing challenge for classical computing. In December 2009 a team distributed in Lausanne, Amsterdam, Tokyo, Paris, Bonn, and Redmond, Washington, used many hundreds of machines working almost two years to discover that 1230186684530117755130494958384962720772853569595334792197322452151726400507263657518745202199786469389956474942774063845925192557326303453731548268507917026122142913461670429214311602221240479274737794080665351419597459856902143413 is the product of 33478071698956898786044169848212690817704794983713768568912431388982883793878002287614711652531743087737814467999489 and 36746043666799590428244633799627952632279158164343087642676032283815739666511279233373417143396810270092798736308917. They estimated that the computation used more than 10^20 operations.

This was one of the smaller RSA numbers, but, had the solution come earlier, the team could have won a $50,000 prize offered by RSA Laboratories. As far as classical computing is concerned, such encryption is considered quite secure. Larger numbers take exponentially longer time, and at some point the time exceeds the age of the universe.
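The asymmetry of the one-way street can be felt even at toy scale. In this sketch, two small primes (placeholders, nothing like the hundreds-of-digit RSA moduli) are multiplied in an instant and then recovered by the naive classical method, trial division:

```python
import time

p, q = 999_983, 1_000_003   # two small primes, stand-ins for RSA-sized ones
n = p * q                   # the easy direction: one multiplication

def smallest_factor(n):
    """Brute-force trial division: the slow, classical way back."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

start = time.perf_counter()
f = smallest_factor(n)
print(f, n // f, f"recovered in {time.perf_counter() - start:.3f} s")
```

Doubling the number of digits roughly squares the work for trial division, and the best known classical algorithms, while far cleverer, still blow up faster than any polynomial; that is why the 232-digit factorization above occupied hundreds of machines for two years.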

Quantum computing is another matter. The ability of a quantum computer to occupy many states at once opens new vistas. In 1994, before anyone knew how actually to build any sort of quantum computer, a mathematician at Bell Labs figured out how to program one to solve the factoring problem. He was Peter Shor, a problem-solving prodigy who made an early mark in math olympiads and prize competitions. His ingenious algorithm, which broke the field wide open, is known by him simply as the factoring algorithm, and by everyone else as Shor’s algorithm. Two years later Lov Grover, also at Bell Labs, came up with a quantum algorithm for searching a vast unsorted database. That is the canonical hard problem for a world of limitless information—needles and haystacks.
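What Shor's algorithm actually asks of the quantum hardware is narrower than “factor this number”: it asks for the period of a modular exponential, and a classical afterstep does the rest. A toy sketch, with the period found by brute force here merely to show how a period yields factors (only the last two steps are Shor's classical reduction; everything quantum is omitted):

```python
from math import gcd

N, a = 15, 7   # a toy modulus and base, chosen for illustration

# The hard part, the step a quantum computer accelerates, is finding the
# period r: the smallest r > 0 with a**r = 1 (mod N). Brute force here.
r = 1
while pow(a, r, N) != 1:
    r += 1

# Shor's classical reduction: for even r with a**(r/2) != -1 (mod N),
# gcd(a**(r/2) +/- 1, N) gives nontrivial factors.
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
x = pow(a, r // 2, N)
print(gcd(x - 1, N), gcd(x + 1, N))    # prints 3 5: the factors of 15
```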

“Quantum computers were basically a revolution,” Dorit Aharonov of Hebrew University told an audience in 2009. “The revolution was launched into the air by Shor’s algorithm. But the reason for the revolution—other than the amazing practical implications—is that they redefine what is an easy and what is a hard problem.”

What gives quantum computers their power also makes them exceedingly difficult to work with. Extracting information from a system means observing it, and observing a system means interfering with the quantum magic. Qubits cannot be watched as they do their exponentially many operations in parallel; measuring that shadow-mesh of possibilities reduces it to a classical bit. Quantum information is fragile. The only way to learn the result of a computation is to wait until after the quantum work is done.

Quantum information is like a dream—evanescent, never quite existing as firmly as a word on a printed page. “Many people can read a book and get the same message,” Bennett says, “but trying to tell people about your dream changes your memory of it, so that eventually you forget the dream and remember only what you said about it.” Quantum erasure, in turn, amounts to a true undoing: “One can fairly say that even God has forgotten.”

As for Shannon himself, he was unable to witness this flowering of the seeds he had planted. “If Shannon were around now, I would say he would be very enthusiastic about the entanglement-assisted capacity of a channel,” says Bennett. “The same form, a generalization of Shannon’s formula, covers both classic and quantum channels in a very elegant way. So it’s pretty well established that the quantum generalization of classical information has led to a cleaner and more powerful theory, both of computing and communication.” Shannon lived till 2001, his last years dimmed and isolated by the disease of erasure, Alzheimer’s. His life had spanned the twentieth century and helped to define it. As much as any one person, he was the progenitor of the information age. Cyberspace is in part his creation; he never knew it, though he told his last interviewer, in 1987, that he was investigating the idea of mirrored rooms: “to work out all the possible mirrored rooms that make sense, in that if you looked everywhere from inside one, space would be divided into a bunch of rooms, and you would be in each room and this would go on to infinity without contradiction.” He hoped to build a gallery of mirrors in his house near MIT, but he never did.

It was John Wheeler who left behind an agenda for quantum information science—a modest to-do list for the next generation of physicists and computer scientists together:

“Translate the quantum versions of string theory and of Einstein’s geometrodynamics from the language of continuum to the language of bit,” he exhorted his heirs.

“Survey one by one with an imaginative eye the powerful tools that mathematics—including mathematical logic—has won … and for each such technique work out the transcription into the world of bits.”

And, “From the wheels-upon-wheels-upon-wheels evolution of computer programming dig out, systematize and display every feature that illuminates the level-upon-level-upon-level structure of physics.”

And, “Finally. Deplore? No, celebrate the absence of a clean clear definition of the term ‘bit’ as elementary unit in the establishment of meaning.… If and when we learn how to combine bits in fantastically large numbers to obtain what we call existence, we will know better what we mean both by bit and by existence.”

This is the challenge that remains, and not just for scientists: the establishment of meaning.


“It was either R^4 or a black hole. But the Feynman sum over histories allows it to be both at once.”

Von Neumann’s formula for the theoretical energy cost of every logical operation was kT ln 2 joules per bit, where T is the computer’s operating temperature and k is the Boltzmann constant. Szilárd had proved that the demon in his engine can get kT ln 2 of work out of every molecule it selects, so that energy cost must be paid somewhere in the cycle.

This word is not universally accepted, though the OED recognized it as of December 2007. David Mermin wrote that same year: “Unfortunately the preposterous spelling qubit currently holds sway.… Although ‘qubit’ honors the English (German, Italian,…) rule that q should be followed by u, it ignores the equally powerful requirement that qu should be followed by a vowel. My guess is that ‘qubit’ has gained acceptance because it visually resembles an obsolete English unit of distance, the homonymic cubit. To see its ungainliness with fresh eyes, it suffices to imagine … that one erased transparencies and cleaned one’s ears with Qutips.”