(You Cannot Stir Things Apart)
Thought interferes with the probability of events, and, in the long run therefore, with entropy.
—David L. Watson (1930)♦
IT WOULD BE AN EXAGGERATION TO SAY that no one knew what entropy meant. Still, it was one of those words. The rumor at Bell Labs was that Shannon had gotten it from John von Neumann, who advised him he would win every argument because no one would understand it.♦ Untrue, but plausible. The word began by meaning the opposite of itself. It remains excruciatingly difficult to define. The Oxford English Dictionary, uncharacteristically, punts:
1. The name given to one of the quantitative elements which determine the thermodynamic condition of a portion of matter.
Rudolf Clausius coined the word in 1865, in the course of creating a science of thermodynamics. He needed to name a certain quantity that he had discovered—a quantity related to energy, but not energy.
Thermodynamics arose hand in hand with steam engines; it was at first nothing more than “the theoretical study of the steam engine.”♦ It concerned itself with the conversion of heat, or energy, into work. As this occurs—heat drives an engine—Clausius observed that the heat does not actually get lost; it merely passes from a hotter body into a cooler body. On its way, it accomplishes something. This is like a waterwheel, as Nicolas Sadi Carnot kept pointing out in France: water begins at the top and ends at the bottom, and no water is gained or lost, but the water performs work on the way down. Carnot imagined heat as just such a substance. The ability of a thermodynamic system to produce work depends not on the heat itself, but on the contrast between hot and cold. A hot stone plunged into cold water can generate work—for example, by creating steam that drives a turbine—but the total heat in the system (stone plus water) remains constant. Eventually, the stone and the water reach the same temperature. No matter how much energy a closed system contains, when everything is the same temperature, no work can be done.
It is the unavailability of this energy—its uselessness for work—that Clausius wanted to measure. He came up with the word entropy, formed from Greek to mean “transformation content.” His English counterparts immediately saw the point but decided Clausius had it backward in focusing on the negative. James Clerk Maxwell suggested in his Theory of Heat that it would be “more convenient” to make entropy mean the opposite: “the part which can be converted into mechanical work.” Thus:
When the pressure and temperature of the system have become uniform the entropy is exhausted.
Within a few years, though, Maxwell did an about-face and decided to follow Clausius.♦ He rewrote his book and added an abashed footnote:
In former editions of this book the meaning of the term Entropy, as introduced by Clausius, was erroneously stated to be that part of the energy which cannot be converted into work. The book then proceeded to use the term as equivalent to the available energy; thus introducing great confusion into the language of thermodynamics. In this edition I have endeavoured to use the word Entropy according to its original definition by Clausius.
The problem was not just in choosing between positive and negative. It was subtler than that. Maxwell had first considered entropy as a subtype of energy: the energy available for work. On reconsideration, he recognized that thermodynamics needed an entirely different measure. Entropy was not a kind of energy or an amount of energy; it was, as Clausius had said, the unavailability of energy. Abstract though this was, it turned out to be a quantity as measurable as temperature, volume, or pressure.
It became a totemic concept. With entropy, the “laws” of thermodynamics could be neatly expressed:
First law: The energy of the universe is constant.
Second law: The entropy of the universe always increases.
There are many other formulations of these laws, from the mathematical to the whimsical, e.g., “1. You can’t win; 2. You can’t break even either.”♦ But this is the cosmic, fateful one. The universe is running down. It is a degenerative one-way street. The final state of maximum entropy is our destiny.
William Thomson, Lord Kelvin, imprinted the second law on the popular imagination by reveling in its bleakness: “Although mechanical energy is indestructible,” he declared in 1862, “there is a universal tendency to its dissipation, which produces gradual augmentation and diffusion of heat, cessation of motion, and exhaustion of potential energy through the material universe. The result of this would be a state of universal rest and death.”♦ Thus entropy dictated the universe’s fate in H. G. Wells’s novel The Time Machine: the life ebbing away, the dying sun, the “abominable desolation that hung over the world.” Heat death is not cold; it is lukewarm and dull. Freud thought he saw something useful there in 1918, though he muddled it: “In considering the conversion of psychical energy no less than of physical, we must make use of the concept of an entropy, which opposes the undoing of what has already occurred.”♦
Thomson liked the word dissipation for this. Energy is not lost, but it dissipates. Dissipated energy is present but useless. It was Maxwell, though, who began to focus on the confusion itself—the disorder—as entropy’s essential quality. Disorder seemed strangely unphysical. It implied that a piece of the equation must be something like knowledge, or intelligence, or judgment. “The idea of dissipation of energy depends on the extent of our knowledge,” Maxwell said. “Available energy is energy which we can direct into any desired channel. Dissipated energy is energy which we cannot lay hold of and direct at pleasure, such as the energy of the confused agitation of molecules which we call heat.” What we can do, or know, became part of the definition. It seemed impossible to talk about order and disorder without involving an agent or an observer—without talking about the mind:
Confusion, like the correlative term order, is not a property of material things in themselves, but only in relation to the mind which perceives them. A memorandum-book does not, provided it is neatly written, appear confused to an illiterate person, or to the owner who understands it thoroughly, but to any other person able to read it appears to be inextricably confused. Similarly the notion of dissipated energy could not occur to a being who could not turn any of the energies of nature to his own account, or to one who could trace the motion of every molecule and seize it at the right moment.♦
Order is subjective—in the eye of the beholder. Order and confusion are not the sorts of things a mathematician would try to define or measure. Or are they? If disorder corresponded to entropy, maybe it was ready for scientific treatment after all.
As an ideal case, the pioneers of thermodynamics considered a box of gas. Being made of atoms, it is far from simple or calm. It is a vast ensemble of agitating particles. Atoms were unseen and hypothetical, but these theorists—Clausius, Kelvin, Maxwell, Ludwig Boltzmann, Willard Gibbs—accepted the atomic nature of a fluid and tried to work out the consequences: mixing, violence, continuous motion. This motion constitutes heat, they now understood. Heat is no substance, no fluid, no “phlogiston”—just the motion of molecules.
Individually the molecules must be obeying Newton’s laws—every action, every collision, measurable and calculable, in theory. But there were too many to measure and calculate individually. Probability entered the picture. The new science of statistical mechanics made a bridge between the microscopic details and the macroscopic behavior. Suppose the box of gas is divided by a diaphragm. The gas on side A is hotter than the gas on side B—that is, the A molecules are moving faster, with greater energy. As soon as the divider is removed, the molecules begin to mix; the fast collide with the slow; energy is exchanged; and after some time the gas reaches a uniform temperature. The mystery is this: Why can the process not be reversed? In Newton’s equations of motion, time can have a plus sign or a minus sign; the mathematics works either way. In the real world past and future cannot be interchanged so easily.
“Time flows on, never comes back,” said Léon Brillouin in 1949. “When the physicist is confronted with this fact he is greatly disturbed.”♦ Maxwell had been mildly disturbed. He wrote to Lord Rayleigh:
If this world is a purely dynamical system, and if you accurately reverse the motion of every particle of it at the same instant, then all things will happen backwards to the beginning of things, the raindrops will collect themselves from the ground and fly up to the clouds, etc, etc, and men will see their friends passing from the grave to the cradle till we ourselves become the reverse of born, whatever that is.
His point was that in the microscopic details, if we watch the motions of individual molecules, their behavior is the same forward and backward in time. We can run the film backward. But pan out, watch the box of gas as an ensemble, and statistically the mixing process becomes a one-way street. We can watch the fluid for all eternity, and it will never divide itself into hot molecules on one side and cool on the other. The clever young Thomasina says in Tom Stoppard’s Arcadia, “You cannot stir things apart,” and this is precisely the same as “Time flows on, never comes back.” Such processes run in one direction only. Probability is the reason. What is remarkable—physicists took a long time to accept it—is that every irreversible process must be explained the same way. Time itself depends on chance, or “the accidents of life,” as Richard Feynman liked to say: “Well, you see that all there is to it is that the irreversibility is caused by the general accidents of life.”♦ For the box of gas to come unmixed is not physically impossible; it is just improbable in the extreme. So the second law is merely probabilistic. Statistically, everything tends toward maximum entropy.
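To put a rough number on “improbable in the extreme” (the molecule counts here are invented purely for scale, not taken from any of these authors): if each of $N$ molecules is equally likely to be found in either half of the box at a given instant, the chance of catching all of them on one side at once is

$$P = 2^{-N},$$

which for even a toy sample of $N = 100$ molecules is already about $10^{-30}$, and for a realistic $N$ on the order of $10^{23}$ gives an exponent too large to write out.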
Yet probability is enough: enough for the second law to stand as a pillar of science. As Maxwell put it:
Moral. The 2nd law of Thermodynamics has the same degree of truth as the statement that if you throw a tumblerful of water into the sea, you cannot get the same tumblerful of water out again.♦
The improbability of heat passing from a colder to a warmer body (without help from elsewhere) is identical to the improbability of order arranging itself from disorder (without help from elsewhere). Both, fundamentally, are due only to statistics. Count all the possible ways a system can be arranged, and the disorderly ones far outnumber the orderly ones. There are many arrangements, or “states,” in which molecules are all jumbled, and few in which they are neatly sorted. The orderly states have low probability and low entropy. For impressive degrees of orderliness, the probabilities may be very low. Alan Turing once whimsically proposed a number N, defined as “the odds against a piece of chalk leaping across the room and writing a line of Shakespeare on the board.”♦
Eventually physicists began speaking of microstates and macrostates. A macrostate might be: all the gas in the top half of the box. The corresponding microstates would be all the possible arrangements of all particles—positions and velocities. Entropy thus became a physical equivalent of probability: the entropy of a given macrostate is the logarithm of the number of its possible microstates. The second law, then, is the tendency of the universe to flow from less likely (orderly) to more likely (disorderly) macrostates.
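A toy calculation, assuming a made-up gas of a hundred distinguishable molecules that can each occupy the top or bottom half of the box (none of this comes from the historical sources; it only illustrates the log-of-microstates bookkeeping), might look like this:

```python
from math import comb, log

N = 100  # hypothetical number of molecules in the box

def microstates(n_top):
    """Number of ways to place exactly n_top of the N molecules in the top half."""
    return comb(N, n_top)

def entropy(n_top):
    """Boltzmann-style entropy of the macrostate, in units of k: log of the microstate count."""
    return log(microstates(n_top))

print(entropy(100))  # all molecules in the top half: 1 microstate, entropy 0.0
print(entropy(50))   # evenly mixed: ~1.01e29 microstates, entropy ~66.8
```

The evenly mixed macrostate is not favored by any force; it simply owns the overwhelming majority of the possible arrangements.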
It was still puzzling, though, to hang so much of physics on a matter of mere probability. Can it be right to say that nothing in physics is stopping a gas from dividing itself into hot and cold—that it is only a matter of chance and statistics? Maxwell illustrated this conundrum with a thought experiment. Imagine, he suggested, “a finite being” who stands watch over a tiny hole in the diaphragm dividing the box of gas. This creature can see molecules coming, can tell whether they are fast or slow, and can choose whether or not to let them pass. Thus he could tilt the odds. By sorting fast from slow, he could make side A hotter and side B colder—“and yet no work has been done, only the intelligence of a very observant and neat-fingered being has been employed.”♦ The being defies ordinary probabilities. The chances are, things get mixed together. To sort them out requires information.
Thomson loved this idea. He dubbed the notional creature a demon: “Maxwell’s intelligent demon,” “Maxwell’s sorting demon,” and soon just “Maxwell’s demon.” Thomson waxed eloquent about the little fellow: “He differs from real living animals only [only!] in extreme smallness and agility.”♦ Lecturing to an evening crowd at the Royal Institution of Great Britain, with the help of tubes of liquid dyed two different colors, Thomson demonstrated the apparently irreversible process of diffusion and declared that only the demon can counteract it:
He can cause one-half of a closed jar of air, or of a bar of iron, to become glowingly hot and the other ice cold; can direct the energy of the moving molecules of a basin of water to throw the water up to a height and leave it there proportionately cooled; can “sort” the molecules in a solution of salt or in a mixture of two gases, so as to reverse the natural process of diffusion, and produce concentration of the solution in one portion of the water, leaving pure water in the remainder of the space occupied; or, in the other case, separate the gases into different parts of the containing vessel.
The reporter for The Popular Science Monthly thought this was ridiculous. “All nature is supposed to be filled with infinite swarms of absurd little microscopic imps,” he sniffed. “When men like Maxwell, of Cambridge, and Thomson, of Glasgow, lend their sanction to such a crude hypothetical fancy as that of little devils knocking and kicking the atoms this way and that …, we may well ask, What next?”♦ He missed the point. Maxwell had not meant his demon to exist, except as a teaching device.
The demon sees what we cannot—because we are so gross and slow—namely, that the second law is statistical, not mechanical. At the level of molecules, it is violated all the time, here and there, purely by chance. The demon replaces chance with purpose. It uses information to reduce entropy. Maxwell never imagined how popular his demon would become, nor how long-lived. Henry Adams, who wanted to work some version of entropy into his theory of history, wrote to his brother Brooks in 1903, “Clerk Maxwell’s demon who runs the second law of Thermo-dynamics ought to be made President.”♦ The demon presided over a gateway—at first, a magical gateway—from the world of physics to the world of information.
Scientists envied the demon’s powers. It became a familiar character in cartoons enlivening physics journals. To be sure, the creature was a fantasy, but the atom itself had seemed fantastic, and the demon had helped tame it. Implacable as the laws of nature now seemed, the demon defied these laws. It was a burglar, picking the lock one molecule at a time. It had “infinitely subtile senses,” wrote Henri Poincaré, and “could turn back the course of the universe.”♦ Was this not just what humans dreamed of doing?
Through their ever better microscopes, scientists of the early twentieth century examined the active, sorting processes of biological membranes. They discovered that living cells act as pumps, filters, and factories. Purposeful processes seemed to operate at tiny scales. Who or what was in control? Life itself seemed an organizing force. “Now we must not introduce demonology into science,” wrote the British biologist James Johnstone in 1914. In physics, he said, individual molecules must remain beyond our control. “These motions and paths are un-co-ordinated—‘helter-skelter’—if we like so to term them. Physics considers only the statistical mean velocities.” That is why the phenomena of physics are irreversible, “so that for the latter science Maxwell’s demons do not exist.” But what of life? What of physiology? The processes of terrestrial life are reversible, he argued. “We must therefore seek for evidence that the organism can control the, otherwise, un-co-ordinated motions of the individual molecules.”♦
Is it not strange that while we see that most of our human effort is that of directing natural agencies and energies into paths which they would not otherwise take, we should yet have failed to think of primitive organisms, or even of the tissue elements in the bodies of the higher organisms, as possessing also the power of directing physico-chemical processes?
When life remained so mysterious, maybe Maxwell’s demon was not just a cartoon.
Then the demon began to haunt Leó Szilárd, a very young Hungarian physicist with a productive imagination who would later conceive the electron microscope and, not incidentally, the nuclear chain reaction. One of his more famous teachers, Albert Einstein, advised him out of avuncular protectiveness to take a paying job with the patent office, but Szilárd ignored the advice. He was thinking in the 1920s about how thermodynamics should deal with incessant molecular fluctuations. By definition, fluctuations ran counter to averages, like fish swimming momentarily upstream, and people naturally wondered: what if you could harness them? This irresistible idea led to a version of the perpetual motion machine, perpetuum mobile, holy grail of cranks and hucksters. It was another way of saying, “All that heat—why can’t we use it?”
It was also another of the paradoxes engendered by Maxwell’s demon. In a closed system, a demon who could catch the fast molecules and let the slow molecules pass would have a source of useful energy, continually refreshed. Or, if not the chimerical imp, what about some other “intelligent being”? An experimental physicist, perhaps? A perpetual motion machine should be possible, declared Szilárd, “if we view the experimenting man as a sort of deus ex machina, one who is continuously informed of the existing state of nature.”♦ For his version of the thought experiment, Szilárd made clear that he did not wish to invoke a living demon, with, say, a brain—biology brought troubles of its own. “The very existence of a nervous system,” he noted, “is dependent on continual dissipation of energy.” (His friend Carl Eckart pithily rephrased this: “Thinking generates entropy.”♦) Instead he proposed a “nonliving device,” intervening in a model thermodynamic system, operating a piston in a cylinder of fluid. He pointed out that this device would need, in effect, “a sort of memory faculty.” (Alan Turing was now, in 1929, a teenager. In Turing’s terms, Szilárd was treating the mind of the demon as a computer with a two-state memory.)
Szilárd showed that even this perpetual motion machine would have to fail. What was the catch? Simply put: information is not free. Maxwell, Thomson, and the rest had implicitly talked as though knowledge was there for the taking—knowledge of the velocities and trajectories of molecules coming and going before the demon’s eyes. They did not consider the cost of this information. They could not; for them, in a simpler time, it was as if the information belonged to a parallel universe, an astral plane, not linked to the universe of matter and energy, particles and forces, whose behavior they were learning to calculate.
But information is physical. Maxwell’s demon makes the link. The demon performs a conversion between information and energy, one particle at a time. Szilárd—who did not yet use the word information—found that, if he accounted exactly for each measurement and memory, then the conversion could be computed exactly. So he computed it. He calculated that each unit of information brings a corresponding increase in entropy—specifically, by k log 2 units. Every time the demon makes a choice between one particle and another, it costs one bit of information. The payback comes at the end of the cycle, when it has to clear its memory (Szilárd did not specify this last detail in words, but in mathematics). Accounting for this properly is the only way to eliminate the paradox of perpetual motion, to bring the universe back into harmony, to “restore concordance with the Second Law.”
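In modern units (using today’s value of Boltzmann’s constant $k$; the conversion is a rough modern reading, not Szilárd’s own numbers), one binary choice carries an entropy cost of

$$\Delta S = k \ln 2 \approx 1.38 \times 10^{-23}\ \mathrm{J/K} \times 0.693 \approx 9.6 \times 10^{-24}\ \mathrm{J/K},$$

and at room temperature, roughly 300 K, the corresponding energy is on the order of $kT \ln 2 \approx 2.9 \times 10^{-21}$ joules per bit: negligible on any human scale, but never zero.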
Szilárd had thus closed a loop leading to Shannon’s conception of entropy as information. For his part, Shannon did not read German and did not follow Zeitschrift für Physik. “I think actually Szilárd was thinking of this,” he said much later, “and he talked to von Neumann about it, and von Neumann may have talked to Wiener about it. But none of these people actually talked to me about it.”♦ Shannon reinvented the mathematics of entropy nonetheless.
To the physicist, entropy is a measure of uncertainty about the state of a physical system: one state among all the possible states it can be in. These microstates may not be equally likely, so the physicist writes $S = -\sum_i p_i \log p_i$.
To the information theorist, entropy is a measure of uncertainty about a message: one message among all the possible messages that a communications source can produce. The possible messages may not be equally likely, so Shannon wrote $H = -\sum_i p_i \log p_i$.
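As an illustrative sketch with invented probabilities (not an example of Shannon’s), the formula is easy to evaluate for a small message source:

```python
from math import log2

# A hypothetical four-symbol source; the probabilities are made up for illustration.
probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Shannon entropy H = -sum of p_i * log2(p_i), measured in bits per symbol.
H = -sum(p * log2(p) for p in probs.values())
print(H)  # 1.75 bits, less than the 2 bits a uniform four-symbol source would give
```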
It is not just a coincidence of formalism: nature providing similar answers to similar problems. It is all one problem. To reduce entropy in a box of gas, to perform useful work, one pays a price in information. Likewise, a particular message reduces the entropy in the ensemble of possible messages—in terms of dynamical systems, a phase space.
That was how Shannon saw it. Wiener’s version was slightly different. It was fitting—for a word that began by meaning the opposite of itself—that these colleagues and rivals placed opposite signs on their formulations of entropy. Where Shannon identified information with entropy, Wiener said it was negative entropy. Wiener was saying that information meant order, but an orderly thing does not necessarily embody much information. Shannon himself pointed out their difference and minimized it, calling it a sort of “mathematical pun.” They get the same numerical answers, he noted:
I consider how much information is produced when a choice is made from a set—the larger the set the more information. You consider the larger uncertainty in the case of a larger set to mean less knowledge of the situation and hence less information.♦
Put another way, H is a measure of surprise. Put yet another way, H is the average number of yes-no questions needed to guess the unknown message. Shannon had it right—at least, his approach proved fertile for mathematicians and physicists a generation later—but the confusion lingered for some years. Order and disorder still needed some sorting.
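Continuing the invented four-symbol source above, one efficient questioning strategy takes, on average, exactly as many yes-no questions as the entropy; for less tidy probabilities, H is a lower bound rather than an exact match:

```python
probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Questions asked before the symbol is pinned down, under the strategy
# "Is it A?" -> "Is it B?" -> "Is it C?" (a "no" to all three means D).
questions = {"A": 1, "B": 2, "C": 3, "D": 3}

avg = sum(probs[s] * questions[s] for s in probs)
print(avg)  # 1.75, matching the entropy computed above
```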
We all behave like Maxwell’s demon. Organisms organize. In everyday experience lies the reason sober physicists across two centuries kept this cartoon fantasy alive. We sort the mail, build sand castles, solve jigsaw puzzles, separate wheat from chaff, rearrange chess pieces, collect stamps, alphabetize books, create symmetry, compose sonnets and sonatas, and put our rooms in order, and to do all this requires no great energy, as long as we can apply intelligence. We propagate structure (not just we humans but we who are alive). We disturb the tendency toward equilibrium. It would be absurd to attempt a thermodynamic accounting for such processes, but it is not absurd to say we are reducing entropy, piece by piece. Bit by bit. The original demon, discerning one molecule at a time, distinguishing fast from slow, and operating his little gateway, is sometimes described as “superintelligent,” but compared to a real organism it is an idiot savant. Not only do living things lessen the disorder in their environments; they are in themselves, their skeletons and their flesh, vesicles and membranes, shells and carapaces, leaves and blossoms, circulatory systems and metabolic pathways—miracles of pattern and structure. It sometimes seems as if curbing entropy is our quixotic purpose in this universe.
In 1943 Erwin Schrödinger, the chain-smoking, bow-tied pioneer of quantum physics, asked to deliver the Statutory Public Lectures at Trinity College, Dublin, decided the time had come to answer one of the greatest of unanswerable questions: What is life? The equation bearing his name was the essential formulation of quantum mechanics. In looking beyond his field, as middle-aged Nobel laureates so often do, Schrödinger traded rigor for speculation and began by apologizing “that some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them—and at the risk of making fools of ourselves.”♦ Nonetheless, the little book he made from these lectures became influential. Without discovering or even stating anything new, it laid a foundation for a nascent science, as yet unnamed, combining genetics and biochemistry. “Schrödinger’s book became a kind of Uncle Tom’s Cabin of the revolution in biology that, when the dust had cleared, left molecular biology as its legacy,”♦ one of the discipline’s founders wrote later. Biologists had not read anything like it before, and physicists took it as a signal that the next great problems might lie in biology.
Schrödinger began with what he called the enigma of biological stability. In notable contrast to a box of gas, with its vagaries of probability and fluctuation, and in seeming disregard of Schrödinger’s own wave mechanics, where uncertainty is the rule, the structures of a living creature exhibit remarkable permanence. They persist, both in the life of the organism and across generations, through heredity. This struck Schrödinger as requiring explanation.
“When is a piece of matter said to be alive?”♦ he asked. He skipped past the usual suggestions—growth, feeding, reproduction—and answered as simply as possible: “When it goes on ‘doing something,’ moving, exchanging material with its environment, and so forth, for a much longer period than we would expect an inanimate piece of matter to ‘keep going’ under similar circumstances.” Ordinarily, a piece of matter comes to a standstill; a box of gas reaches a uniform temperature; a chemical system “fades away into a dead, inert lump of matter”—one way or another, the second law is obeyed and maximum entropy is reached. Living things manage to remain unstable. Norbert Wiener pursued this thought in Cybernetics: enzymes, he wrote, may be “metastable” Maxwell’s demons—meaning not quite stable, or precariously stable. “The stable state of an enzyme is to be deconditioned,” he noted, “and the stable state of a living organism is to be dead.”♦
Schrödinger felt that evading the second law for a while, or seeming to, is exactly why a living creature “appears so enigmatic.” The organism’s ability to feign perpetual motion leads so many people to believe in a special, supernatural life force. He mocked this idea—vis viva or entelechy—and he also mocked the popular notion that organisms “feed upon energy.” Energy and matter were just two sides of a coin, and anyway one calorie is as good as another. No, he said: the organism feeds upon negative entropy.
“To put it less paradoxically,” he added paradoxically, “the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive.”♦
In other words, the organism sucks orderliness from its surroundings. Herbivores and carnivores dine on a smorgasbord of structure; they feed on organic compounds, matter in a well-ordered state, and return it “in a very much degraded form—not entirely degraded, however, for plants can make use of it.” Plants meanwhile draw not just energy but negative entropy from sunlight. In terms of energy, the accounting can be more or less rigorously performed. In terms of order, calculations are not so simple. The mathematical reckoning of order and chaos remains more ticklish, the relevant definitions being subject to feedback loops of their own.
Much more remained to be learned, Schrödinger said, about how life stores and perpetuates the orderliness it draws from nature. Biologists with their microscopes had learned a great deal about cells. They could see gametes—sperm cells and egg cells. Inside them were the rodlike fibers called chromosomes, arranged in pairs, with consistent numbers from species to species, and known to be carriers of hereditary features. As Schrödinger put it now, they hold within them, somehow, the “pattern” of the organism: “It is these chromosomes, or probably only an axial skeleton fibre of what we actually see under the microscope as the chromosome, that contain in some kind of code-script the entire pattern of the individual’s future development.” He considered it amazing—mysterious, but surely crucial in some way as yet unknown—that every single cell of an organism “should be in possession of a complete (double) copy of the code-script.”♦ He compared this to an army in which every soldier knows every detail of the general’s plans.
These details were the many discrete “properties” of an organism, though it remained far from clear what a property entailed. (“It seems neither adequate nor possible to dissect into discrete ‘properties’ the pattern of an organism which is essentially a unity, a ‘whole,’ ”♦ Schrödinger mused.) The color of an animal’s eyes, blue or brown, might be a property, but it is more useful to focus on the difference from one individual to another, and this difference was understood to be controlled by something conveyed in the chromosomes. He used the term gene: “the hypothetical material carrier of a definite hereditary feature.” No one could yet see these hypothetical genes, but surely the time was not far off. Microscopic observations made it possible to estimate their size: perhaps 100 or 150 atomic distances; perhaps one thousand atoms or fewer. Yet somehow these tiny entities must encapsulate the entire pattern of a living creature—a fly or a rhododendron, a mouse or a human. And we must understand this pattern as a four-dimensional object: the structure of the organism through the whole of its ontogenetic development, every stage from embryo to adult.
In seeking a clue to the gene’s molecular structure, it seemed natural to look to the most organized forms of matter, crystals. Solids in crystalline form have a relative permanence; they can begin with a tiny germ and build up larger and larger structures; and quantum mechanics was beginning to give deep insight into the forces involved in their bonding. But Schrödinger felt something was missing. Crystals are too orderly—built up in “the comparatively dull way of repeating the same structure in three directions again and again.” Elaborate though they seem, crystalline solids contain just a few types of atoms. Life must depend on a higher level of complexity, structure without predictable repetition, he argued. He invented a term: aperiodic crystals. This was his hypothesis: We believe a gene—or perhaps the whole chromosome fiber—to be an aperiodic solid.♦ He could hardly emphasize enough the glory of this difference, between periodic and aperiodic:
The difference in structure is of the same kind as that between an ordinary wallpaper in which the same pattern is repeated again and again in regular periodicity and a masterpiece of embroidery, say a Raphael tapestry, which shows no dull repetition, but an elaborate, coherent, meaningful design.♦
Some of his most admiring readers, such as Léon Brillouin, the French physicist recently decamped to the United States, said that Schrödinger was too clever to be completely convincing, even as they demonstrated in their own work just how convinced they were. Brillouin was particularly taken with the comparison to crystals, with their elaborate but inanimate structures. Crystals have some capacity for self-repair, he noted; under stress, their atoms may shift to new positions for the sake of equilibrium. That may be understood in terms of thermodynamics and now quantum mechanics. How much more exalted, then, is self-repair in the organism: “The living organism heals its own wounds, cures its sicknesses, and may rebuild large portions of its structure when they have been destroyed by some accident. This is the most striking and unexpected behavior.”♦ He followed Schrödinger, too, in using entropy to connect the smallest and largest scales.
The earth is not a closed system, and life feeds upon energy and negative entropy leaking into the earth system.… The cycle reads: first, creation of unstable equilibriums (fuels, food, waterfalls, etc.); then use of these reserves by all living creatures.
Living creatures confound the usual computation of entropy. More generally, so does information. “Take an issue of The New York Times, the book on cybernetics, and an equal weight of scrap paper,” suggested Brillouin. “Do they have the same entropy?” If you are feeding the furnace, yes. But not if you are a reader. There is entropy in the arrangement of the ink spots.
For that matter, physicists themselves go around transforming negative entropy into information, said Brillouin. From observations and measurements, the physicist derives scientific laws; with these laws, people create machines never seen in nature, with the most improbable structures. He wrote this in 1950, as he was leaving Harvard to join the IBM Corporation in Poughkeepsie.♦
That was not the end for Maxwell’s demon—far from it. The problem could not truly be solved, nor the demon effectively banished, without a deeper understanding of a realm far removed from thermodynamics: mechanical computing. Later, Peter Landsberg wrote its obituary this way: “Maxwell’s demon died at the age of 62 (when a paper by Leó Szilárd appeared), but it continues to haunt the castles of physics as a restless and lovable poltergeist.”♦