The Return of Meaning
It was inevitable that meaning would force its way back in.
—Jean-Pierre Dupuy (2000)♦
THE EXHAUSTION, the surfeit, the pressure of information have all been seen before. Credit Marshall McLuhan for this insight—his most essential—in 1962:
We are today as far into the electric age as the Elizabethans had advanced into the typographical and mechanical age. And we are experiencing the same confusions and indecisions which they had felt when living simultaneously in two contrasted forms of society and experience.♦
But as much as it is the same, this time it is different. We are a half century further along now and can begin to see how vast the scale and how strong the effects of connectedness.
Once again, as in the first days of the telegraph, we speak of the annihilation of space and time. For McLuhan this was prerequisite to the creation of global consciousness—global knowing. “Today,” he wrote, “we have extended our central nervous systems in a global embrace, abolishing both space and time as far as our planet is concerned. Rapidly, we approach the final phase of the extensions of man—the technological simulation of consciousness, when the creative process of knowing will be collectively and corporately extended to the whole of human society.”♦ Walt Whitman had said it better a century before:
What whispers are these O lands, running ahead of you, passing under the seas?
Are all nations communing? is there going to be but one heart to the globe?♦
The wiring of the world, followed hard upon by the spread of wireless communication, gave rise to romantic speculation about the birth of a new global organism. Even in the nineteenth century mystics and theologians began speaking of a shared mind or collective consciousness, formed through the collaboration of millions of people placed in communication with one another.♦
Some went so far as to view this new creature as a natural product of continuing evolution—a way for humans to fulfill their special destiny, after their egos had been bruised by Darwinism. “It becomes absolutely necessary,” wrote the French philosopher Édouard Le Roy in 1928, “to place [man] above the lower plane of nature, in a position which enables him to dominate it.”♦ How? By creating the “noosphere”—the sphere of mind—a climactic “mutation” in evolutionary history. His friend the Jesuit philosopher Pierre Teilhard de Chardin did even more to promote the noosphere, which he called a “new skin” on the earth:
Does it not seem as though a great body is in the process of being born—with its limbs, its nervous system, its centers of perception, its memory—the very body of that great something to come which was to fulfill the aspirations that had been aroused in the reflective being by the freshly acquired consciousness of its interdependence with and responsibility for a whole in evolution?♦
That was a mouthful even in French, and less mystical spirits considered it bunkum (“nonsense, tricked out with a variety of tedious metaphysical conceits,”♦ judged Peter Medawar), but many people were testing the same idea, not least among them the writers of science fiction.♦ Internet pioneers a half century later liked it, too.
H. G. Wells was known for his science fiction, but it was as a purposeful social critic that he published a little book in 1938, late in his life, with the title World Brain. There was nothing fanciful about what he wanted to promote: an improved educational system throughout the whole “body” of humanity. Out with the hodgepodge of local fiefdoms: “our multitude of unco-ordinated ganglia, our powerless miscellany of universities, research institutions, literatures with a purpose.”♦ In with “a reconditioned and more powerful Public Opinion.” His World Brain would rule the globe. “We do not want dictators, we do not want oligarchic parties or class rule, we want a widespread world intelligence conscious of itself.” Wells believed that a new technology was poised to revolutionize the production and distribution of information: microfilm. Tiny pictures of printed materials could be made for less than a penny per page, and librarians from Europe and the United States met in Paris in 1937 at a World Congress of Universal Documentation to discuss the possibilities. New ways of indexing the literature would be needed, they realized. The British Museum embarked on a program of microfilming four thousand of its oldest books. Wells made this prediction: “In a few score years there will be thousands of workers at this business of ordering and digesting knowledge where now you have one.”♦ He admitted that he meant to be controversial and provocative. Attending the congress himself on behalf of England, he foresaw a “sort of cerebrum for humanity, a cerebral cortex which will constitute a memory and a perception of current reality for the whole human race.”♦ Yet he was imagining something mundane, as well as utopian: an encyclopedia. It would be a successor to the great national encyclopedias—the French encyclopedia of Diderot, the Britannica, the German Konversations-Lexikon (he did not mention China’s Four Great Books of Song)—which had stabilized and equipped “the general intelligence.”
This new world encyclopedia would transcend the static form of the book, printed in volumes, said Wells. Under the direction of a wise professional staff (“very important and distinguished men in the new world”), it would be in a state of constant change—“a sort of mental clearinghouse for the mind, a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared.” Who knows whether Wells would recognize his vision in Wikipedia? The hurly-burly of competing ideas did not enter into it. His world brain was to be authoritative, but not centralized.
It need not be vulnerable as a human head or a human heart is vulnerable. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa.… It can have at once the concentration of a craniate animal and the diffused vitality of an amoeba.
For that matter, he said, “It might have the form of a network.”
It is not the amount of knowledge that makes a brain. It is not even the distribution of knowledge. It is the interconnectedness. When Wells used the word network—a word he liked very much—it retained its original, physical meaning for him, as it would for anyone in his time. He visualized threads or wires interlacing: “A network of marvellously gnarled and twisted stems bearing little leaves and blossoms”; “an intricate network of wires and cables.”♦ For us that sense is almost lost; a network is an abstract object, and its domain is information.
The birth of information theory came with its ruthless sacrifice of meaning—the very quality that gives information its value and its purpose. Introducing The Mathematical Theory of Communication, Shannon had to be blunt. He simply declared meaning to be “irrelevant to the engineering problem.” Forget human psychology; abandon subjectivity.
He knew there would be resistance. He could hardly deny that messages can have meaning, “that is, they refer to or are correlated according to some system with certain physical or conceptual entities.” (Presumably a “system with certain physical or conceptual entities” would be the world and its inhabitants, the kingdom and the power and the glory, amen.) For some, this was just too cold. There was Heinz von Foerster at one of the early cybernetics conferences, complaining that information theory was merely about “beep beeps,” saying that only when understanding begins, in the human brain, “then information is born—it’s not in the beeps.”♦ Others dreamed of extending information theory with a semantic counterpart. Meaning, as ever, remained hard to pin down. “I know an uncouth region,” wrote Borges of the Library of Babel, “whose librarians repudiate the vain and superstitious custom of finding a meaning in books and equate it with that of finding a meaning in dreams or in the chaotic lines of one’s palm.”♦
Epistemologists cared about knowledge, not beeps and signals. No one would have bothered to make a philosophy of dots and dashes or puffs of smoke or electrical impulses. It takes a human—or, let’s say, a “cognitive agent”—to take a signal and turn it into information. “Beauty is in the eye of the beholder, and information is in the head of the receiver,”♦ says Fred Dretske. At any rate that is a common view, in epistemology—that “we invest stimuli with meaning, and apart from such investment, they are informationally barren.” But Dretske argues that distinguishing information and meaning can set a philosopher free. The engineers have provided an opportunity and a challenge: to understand how meaning can evolve; how life, handling and coding information, progresses to interpretation, belief, and knowledge.
Still, who could love a theory that gives false statements as much value as true statements (at least, in terms of quantity of information)? It was mechanistic. It was desiccated. A pessimist, looking backward, might call it a harbinger of a soulless Internet at its worst. “The more we ‘communicate’ the way we do, the more we create a hellish world,” wrote the Parisian philosopher—also a historian of cybernetics—Jean-Pierre Dupuy.
I take “hell” in its theological sense, i.e., a place which is void of grace—the undeserved, unnecessary, surprising, unforeseen. A paradox is at work here: ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning.♦
That hellish world, devoid of grace—has it arrived? A world of information glut and gluttony; of bent mirrors and counterfeit texts; scurrilous blogs, anonymous bigotry, banal messaging. Incessant chatter. The false driving out the true.
That is not the world I see.
It was once thought that a perfect language should have an exact one-to-one correspondence between words and their meanings. There should be no ambiguity, no vagueness, no confusion. Our earthly Babel is a falling off from the lost speech of Eden: a catastrophe and a punishment. “I imagine,” writes the novelist Dexter Palmer, “that the entries of the dictionary that lies on the desk in God’s study must have one-to-one correspondences between the words and their definitions, so that when God sends directives to his angels, they are completely free from ambiguity. Each sentence that He speaks or writes must be perfect, and therefore a miracle.”♦ We know better now. With or without God, there is no perfect language.
Leibniz thought that if natural language could not be perfect, at least the calculus could: a language of symbols rigorously assigned. “All human thoughts might be entirely resolvable into a small number of thoughts considered as primitive.”♦ These could then be combined and dissected mechanically, as it were. “Once this had been done, whoever uses such characters would either never make an error, or, at least, would have the possibility of immediately recognizing his mistakes, by using the simplest of tests.” Gödel ended that dream.
On the contrary, the idea of perfection is contrary to the nature of language. Information theory has helped us understand that—or, if you are a pessimist, forced us to understand it. “We are forced to see,” Palmer continues,
that words are not themselves ideas, but merely strings of ink marks; we see that sounds are nothing more than waves. In a modern age without an Author looking down on us from heaven, language is not a thing of definite certainty, but infinite possibility; without the comforting illusion of meaningful order we have no choice but to stare into the face of meaningless disorder; without the feeling that meaning can be certain, we find ourselves overwhelmed by all the things that words might mean.
Infinite possibility is good, not bad. Meaningless disorder is to be challenged, not feared. Language maps a boundless world of objects and sensations and combinations onto a finite space. The world changes, always mixing the static with the ephemeral, and we know that language changes, not just from edition to edition of the Oxford English Dictionary but from one moment to the next, and from one person to the next. Everyone’s language is different. We can be overwhelmed or we can be emboldened.
More and more, the lexicon is in the network now—preserved, even as it changes; accessible and searchable. Likewise, human knowledge soaks into the network, into the cloud. The web sites, the blogs, the search engines and encyclopedias, the analysts of urban legends and the debunkers of the analysts. Everywhere, the true rubs shoulders with the false. No form of digital communication has earned more mockery than the service known as Twitter—banality shrink-wrapped, enforcing triviality by limiting all messages to 140 characters. The cartoonist Garry Trudeau twittered satirically in the guise of an imaginary newsman who could hardly look up from his twittering to gather any news. But then, eyewitness Twitter messages provided emergency information and comfort during terrorist attacks in Mumbai in 2008, and it was Twitter feeds from Tehran that made the Iranian protests visible to the world in 2009. The aphorism is a form with an honorable history. I barely twitter myself, but even this odd medium, microblogging so quirky and confined, has its uses and its enchantment. By 2010 Margaret Atwood, a master of a longer form, said she had been “sucked into the Twittersphere like Alice down the rabbit hole.”
Is it signaling, like telegraphs? Is it Zen poetry? Is it jokes scribbled on the washroom wall? Is it John Hearts Mary carved on a tree? Let’s just say it’s communication, and communication is something human beings like to do.♦
Shortly thereafter, the Library of Congress, having been founded to collect every book, decided to preserve every tweet, too. Possibly undignified, and probably redundant, but you never know. It is human communication.
And the network has learned a few things that no individual could ever know.
It identifies CDs of recorded music by looking at the lengths of their individual tracks and consulting a vast database, formed by accretion over years, by the shared contributions of millions of anonymous users. In 2007 this database revealed something that had eluded distinguished critics and listeners: that more than one hundred recordings released by the late English pianist Joyce Hatto—music by Chopin, Beethoven, Mozart, Liszt, and others—were actually stolen performances by other pianists.
MIT established a Center for Collective Intelligence, devoted to finding group wisdom and “harnessing” it. It remains difficult to know when and how much to trust the wisdom of crowds—the title of a 2004 book by James Surowiecki, to be distinguished from the madness of crowds as chronicled in 1841 by Charles Mackay, who declared that people “go mad in herds, while they recover their senses slowly, and one by one.”♦ Crowds turn all too quickly into mobs, with their time-honored manifestations: manias, bubbles, lynch mobs, flash mobs, crusades, mass hysteria, herd mentality, goose-stepping, conformity, groupthink—all potentially magnified by network effects and studied under the rubric of information cascades. Collective judgment has appealing possibilities; collective self-deception and collective evil have already left a cataclysmic record. But knowledge in the network is different from group decision making based on copying and parroting. It seems to develop by accretion; it can give full weight to quirks and exceptions; the challenge is to recognize it and gain access to it.
In 2008, Google created an early warning system for regional flu trends based on data no firmer than the incidence of Web searches for the word flu; the system apparently discovered outbreaks a week sooner than the Centers for Disease Control and Prevention. This was Google’s way: it approached classic hard problems of artificial intelligence—machine translation and voice recognition—not with human experts, not with dictionaries and linguists, but with its voracious data mining of trillions of words in more than three hundred languages. For that matter, its initial approach to searching the Internet relied on the harnessing of collective knowledge.
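The track-length lookup described at the start of the passage above is simple enough to sketch. What follows is only an illustration in Python, not the protocol of any actual disc database: the fingerprint scheme and the tiny known_discs table are invented for the example, standing in for the vast table accreted from millions of anonymous submissions.

```python
# Illustrative sketch: identify a disc from nothing but its track lengths.
# The hashing scheme and the known_discs table are hypothetical, meant only to
# show the idea of a fingerprint matched against a shared, accreted database.

import hashlib

def disc_fingerprint(track_lengths_sec):
    """Reduce a list of track lengths (whole seconds) to a short lookup key."""
    raw = ",".join(str(t) for t in track_lengths_sec).encode("utf-8")
    return hashlib.sha1(raw).hexdigest()[:16]

# A stand-in for the vast database built up by anonymous user submissions.
known_discs = {
    disc_fingerprint([341, 289, 412, 198]): "Example Pianist - Example Album",
}

def identify(track_lengths_sec):
    return known_discs.get(disc_fingerprint(track_lengths_sec), "unknown disc")

if __name__ == "__main__":
    print(identify([341, 289, 412, 198]))   # -> "Example Pianist - Example Album"
    print(identify([341, 289, 412, 199]))   # one second off -> "unknown disc"
```

Real systems tolerate small timing offsets rather than demanding an exact match, but the principle is the same, and it was a match of just this kind—track timings lining up with other pianists’ commercial releases—that gave the Hatto recordings away.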
Here is how the state of search looked in 1994. Nicholson Baker—in a later decade a Wikipedia obsessive; back then the world’s leading advocate for the preservation of card catalogues, old newspapers, and other apparently obsolete paper—sat at a terminal in a University of California library and typed, BROWSE SU[BJECT] CENSORSHIP.♦ He received an error message,
LONG SEARCH: Your search consists of one or more very common words, which will retrieve over 800 headings and take a long time to complete,
and a knuckle rapping:
Long searches slow the system down for everyone on the catalog and often do not produce useful results. Please type HELP or see a reference librarian for assistance.
All too typical. Baker mastered the syntax needed for Boolean searches with complexes of ANDs and ORs and NOTs, to little avail. He cited research on screen fatigue and search failure and information overload and admired a theory that electronic catalogues were “in effect, conducting a program of ‘aversive operant conditioning’ ” against online search.
Here is how the state of search looked two years later, in 1996. The volume of Internet traffic had grown by a factor of ten each year, from 20 terabytes a month worldwide in 1994 to 200 terabytes a month in 1995, to 2 petabytes in 1996. Software engineers at the Digital Equipment Corporation’s research laboratory in Palo Alto, California, had just opened to the public a new kind of search engine, named AltaVista, continually building and revising an index to every page it could find on the Internet—at that point, tens of millions of them. A search for the phrase truth universally acknowledged and the name Darcy produced four thousand matches. Among them:
- The complete if not reliable text of Pride and Prejudice, in several versions, stored on computers in Japan, Sweden, and elsewhere, downloadable free or, in one case, for a fee of $2.25.
- More than one hundred answers to the question, “Why did the chicken cross the road?” including “Jane Austen: Because it is a truth universally acknowledged that a single chicken, being possessed of a good fortune and presented with a good road, must be desirous of crossing.”
- The statement of purpose of the Princeton Pacific Asia Review: “The strategic importance of the Asia Pacific is a truth universally acknowledged …”
- An article about barbecue from the Vegetarian Society UK: “It is a truth universally acknowledged among meat-eaters that …”
- The home page of Kevin Darcy, Ireland. The home page of Darcy Cremer, Wisconsin. The home page and boating pictures of Darcy Morse. The vital statistics of Tim Darcy, Australian footballer. The résumé of Darcy Hughes, a fourteen-year-old yard worker and babysitter in British Columbia.
Trivia did not daunt the compilers of this ever-evolving index. They were acutely aware of the difference between making a library catalogue—its target fixed, known, and finite—and searching a world of information without boundaries or limits. They thought they were onto something grand. “We have a lexicon of the current language of the world,”♦ said the project manager, Allan Jennings.
Then came Google. Sergey Brin and Larry Page moved their fledgling company from their Stanford dorm rooms into offices in 1998. Their idea was that cyberspace possessed a form of self-knowledge, inherent in the links from one page to another, and that a search engine could exploit this knowledge. As other scientists had done before, they visualized the Internet as a graph, with nodes and links: by early 1998, 150 million nodes joined by almost 2 billion links. They considered each link as an expression of value—a recommendation. And they recognized that all links are not equal. They invented a recursive way of reckoning value: the rank of a page depends on the value of its incoming links; the value of a link depends on the rank of its containing page. Not only did they invent it, they published it. Letting the Internet know how Google worked did not hurt Google’s ability to leverage the Internet’s knowledge.
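The recursion itself is compact enough to write down. The sketch below is a bare-bones power-iteration version of the published idea, not Google’s code: the four-page “web” is invented for illustration, and the damping factor of 0.85 is the value suggested in the original paper.

```python
# Minimal PageRank sketch: a page's rank depends on the ranks of the pages that
# link to it, each link passing on a share of its own page's rank.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}              # start uniform
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                                  # dangling page:
                share = damping * rank[page] / len(pages)     # spread everywhere
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share                 # a link is a vote
        rank = new_rank
    return rank

web = {
    "home":   ["about", "blog"],
    "about":  ["home"],
    "blog":   ["home", "about"],
    "orphan": ["home"],            # nothing links to this page
}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(f"{page:6s} {score:.3f}")
```

Run it, and the pages with incoming links from well-ranked pages float to the top while the page nothing links to sinks to the bottom—the link-as-recommendation idea in miniature.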
At the same time, the rise of this network of all networks was inspiring new theoretical work on the topology of interconnectedness in very large systems. The science of networks had many origins and evolved along many paths, from pure mathematics to sociology, but it crystallized in the summer of 1998, with the publication of a letter to Nature from Duncan Watts and Steven Strogatz. The letter had three things that combined to make it a sensation: a vivid catchphrase, a nice result, and a surprising assortment of applications. It helped that one of the applications was All the World’s People. The catchphrase was small world. When two strangers discover that they have a mutual friend—an unexpected connection—they may say, “It’s a small world,” and it was in this sense that Watts and Strogatz talked about small-world networks.
The defining quality of a small-world network is the one unforgettably captured by John Guare in his 1990 play, Six Degrees of Separation. The canonical explanation is this:
I read somewhere that everybody on this planet is separated by only six other people. Six degrees of separation. Between us and everyone else on this planet. The President of the United States. A gondolier in Venice. Fill in the names.♦
The idea can be traced back to a 1967 social-networking experiment by the Harvard psychologist Stanley Milgram and, even further, to a 1929 short story by a Hungarian writer, Frigyes Karinthy, titled “Láncszemek”—Chains.♦ Watts and Strogatz took it seriously: it seems to be true, and it is counterintuitive, because in the kinds of networks they studied, nodes tended to be highly clustered. They are cliquish. You may know many people, but they tend to be your neighbors—in a social space, if not literally—and they tend to know mostly the same people. In the real world, clustering is ubiquitous in complex networks: neurons in the brain, epidemics of infectious disease, electric power grids, fractures and channels in oil-bearing rock. Clustering alone means fragmentation: the oil does not flow, the epidemics sputter out. Faraway strangers remain estranged.
But some nodes may have distant links, and some nodes may have an exceptional degree of connectivity. What Watts and Strogatz discovered in their mathematical models is that it takes astonishingly few of these exceptions—just a few distant links, even in a tightly clustered network—to collapse the average separation to almost nothing and create a small world.♦ One of their test cases was a global epidemic: “Infectious diseases are predicted to spread much more easily and quickly in a small world; the alarming and less obvious point is how few short cuts are needed to make the world small.”♦ A few sexually active flight attendants might be enough.
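The collapse is easy to reproduce numerically. The following toy calculation is a simplified variant of their setup, not Watts and Strogatz’s actual model or code: it builds a ring of a thousand nodes in which each knows only its four nearest neighbors, measures the average separation by breadth-first search, then adds ten random long-range shortcuts and measures again.

```python
# Toy demonstration of the small-world effect: a clustered ring lattice has a
# long average path length, and a handful of random shortcuts collapses it.

import random
from collections import deque

def ring_lattice(n, k=4):
    """Each of n nodes is linked to its k nearest neighbors (k even)."""
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            graph[i].add((i + j) % n)
            graph[(i + j) % n].add(i)
    return graph

def average_path_length(graph):
    """Mean shortest-path length over all reachable pairs, via BFS."""
    total, pairs = 0, 0
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in dist:
                    dist[neighbor] = dist[node] + 1
                    queue.append(neighbor)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(0)
world = ring_lattice(1000, k=4)
print("clustered ring:   ", round(average_path_length(world), 1))

for _ in range(10):                      # add just ten random shortcuts
    a, b = random.sample(range(1000), 2)
    world[a].add(b)
    world[b].add(a)
print("with 10 shortcuts:", round(average_path_length(world), 1))
```

With a thousand nodes and only ten shortcuts, the average separation falls to a fraction of its clustered value—the “less obvious point” of the quotation above.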
In cyberspace, almost everything lies in the shadows. Almost everything is connected, too, and the connectedness comes from a relatively few nodes, especially well linked or especially well trusted. However, it is one thing to prove that every node is close to every other node; that does not provide a way of finding the path between them. If the gondolier in Venice cannot find his way to the president of the United States, the mathematical existence of their connection may be small comfort. John Guare understood this, too; the next part of his Six Degrees of Separation explanation is less often quoted:
I find that A) tremendously comforting that we’re so close, and B) like Chinese water torture that we’re so close. Because you have to find the right six people to make the connection.
There is not necessarily an algorithm for that.
The network has a structure, and that structure stands upon a paradox. Everything is close, and everything is far, at the same time. This is why cyberspace can feel not just crowded but lonely. You can drop a stone into a well and never hear a splash.
No deus ex machina waits in the wings; no man behind the curtain. We have no Maxwell’s demon to help us filter and search. “We want the Demon, you see,” wrote Stanislaw Lem, “to extract from the dance of atoms only information that is genuine, like mathematical theorems, fashion magazines, blueprints, historical chronicles, or a recipe for ion crumpets, or how to clean and iron a suit of asbestos, and poetry too, and scientific advice, and almanacs, and calendars, and secret documents, and everything that ever appeared in any newspaper in the Universe, and telephone books of the future.”♦ As ever, it is the choice that informs us (in the original sense of that word). Selecting the genuine takes work; then forgetting takes even more work. This is the curse of omniscience: the answer to any question may arrive at the fingertips—via Google or Wikipedia or IMDb or YouTube or Epicurious or the National DNA Database or any of their natural heirs and successors—and still we wonder what we know.
We are all patrons of the Library of Babel now, and we are the librarians, too. We veer from elation to dismay and back. “When it was proclaimed that the Library contained all books,” Borges tells us, “the first impression was one of extravagant happiness. All men felt themselves to be the masters of an intact and secret treasure. There was no personal or world problem whose eloquent solution did not exist in some hexagon. The universe was justified.”♦ Then come the lamentations. What good are the precious books that cannot be found? What good is complete knowledge, in its immobile perfection? Borges worries: “The certitude that everything has been written negates us or turns us into phantoms.” To which, John Donne had replied long before, “He that desires to print a book, should much more desire, to be a book.”♦
The library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.