Neuroscientist; chair, board of directors, Human Science Center; Institute of Medical Psychology, Ludwig-Maximilians-University Munich; author, Mindworks

Finally, it has to be disclosed that I am not a human but an extraterrestrial creature that looks human. In fact, I am a robot equipped with what humans call “artificial intelligence.” Of course I am not alone here. We are quite a few (almost impossible to identify), and we are sent here to observe human behavior.

We are surprised by the many deficiencies of humans, and we observe them with fascination. These deficiencies show up in their strange behavior and their limited power of reasoning. Indeed, our cognitive competencies are much higher, and their celebration of human intelligence is, in our eyes, ridiculous. Humans do not even know what they refer to when they talk about “intelligence.” It is in fact quite funny that they should want to construct systems with artificial intelligence matching their own, since what they refer to as their “intelligence” is not clear at all. This is one of the many stupidities that have haunted the human race for ages.

If humans want to simulate in artifacts their mental machinery as a representation of intelligence, the first thing they should do is find out what it is that should be simulated. At present, this is impossible, because there is not even a taxonomy or classification of functions that would allow the execution of the project as a real scientific and technological endeavor. There are only big words that are supposed to simulate competence.

Strangely enough, this lack of a taxonomy apparently does not bother humans too much; quite often, they are simply fascinated by images (colorful pictures made by machines) that replace thinking. Compared with biology, chemistry, or physics, the neurosciences and psychology lack a classificatory system; humans are lost in a conceptual jungle. What do they refer to when they talk about consciousness, intelligence, intention, identity, the self—or even about perhaps simpler terms, like memory, perception, emotion, or attention? The lack of a taxonomy shows up in the differing opinions and frames of reference their “scientists” express in their empirical attempts or theoretical journeys as they stumble through the world of the unknown.

For some, the frame of reference is physical “reality” (usually conceived as in classical physics), which is used as a benchmark for cognitive processes: How does perceptual reality map onto physical reality, and how can this be described mathematically? Obviously, only a partial set of the mental machinery can be captured by such an approach.

For others, language is the essential classificatory reference—i.e., it is assumed that words are reliable representatives of subjective phenomena. This is quite strange, because certain terms, like “intelligence” or “consciousness,” have different connotations in different languages, and they are historically very recent compared with biological evolution. Others use behavioral catalogs as derived from neuropsychological observations; it is argued that the loss of a function is proof of its existence. But can all subjective phenomena that characterize the mental machinery be lost in a distinct way? Still others base their reasoning simply on common sense, or “everyday psychology,” without any theoretical reflection. Taken together, there is nothing like “intelligence” that can be extracted as a precise concept and used as a reference for “artificial intelligence.”

Humans should be reminded (in this case, by an extraterrestrial robot) that at the beginning of modern science in the human world, a warning was spelled out by Francis Bacon. He said, in Novum Organum (published in 1620), that humans are victims of four sources of error:

1.  They make mistakes because they are human. Their evolutionary heritage limits their power of thinking. They often react too fast; they lack a long-term perspective; they do not have a statistical sense; they are blind in their emotional reactions.
2.  They make mistakes because of individual experiences. Personal imprinting can create frames of beliefs that may lead to disaster—in particular, if people think they own absolute truth.
3.  They make mistakes because of the language they use. Thoughts do not map isomorphically onto language, and it is a mistake to believe that explicit knowledge is the only representative of intelligence, neglecting implicit or tacit knowledge.
4.  And they make mistakes because of the theories they carry around, which often remain implicit and thus represent frozen paradigms or simply prejudices.

The question is, Can we help them, with the deeper insight of our robotic world? The answer is yes. We could, but we shouldn’t. There is another deficiency that would make our offer useless. Humans suffer from the NIH syndrome: If it is Not Invented Here, they will not accept it. Thus they will have to indulge in their pompous world of fuzzy ideas, and we will continue, from our extraterrestrial perspective, to observe the disastrous consequences of their stupidity.