Luc Steels

The Future of Intelligence


Introduction

Intelligence is a capacity which has evolved incrementally in small steps during millions of years of evolution (Donald 1991). The last major brain size increase happened only 200,000 years ago. With it came an important evolution in the anatomy of the vocal tract necessary for speech, and thus the full development of language. It is a biological fact that evolution never stops, although there may be phases of relative stability (Gould and Eldredge 1977). It is therefore not at all excluded that new forms of intelligence remain possible or that the human race will develop further in the direction of increased intelligence.

This article speculates on three different ways in which intelligence might further develop on this planet:

1. Although there are no biological signs pointing in the direction of an increase in the anatomical basis of intelligence in humans or other species, there are awesome technological advances at the moment which may make it possible to extend biological capacities. The possible extensions include artificial sensory devices, electronic memory units, computer processors, and mechanical actuators. What would happen if this technology were applied to ourselves? Would it lead to a new species? This species might possibly be called the Homo Cyber Sapiens (Steels 1995a). Its members could start as extensions of ourselves but gradually become more independent from biological `wetware' in order to continue their existence. Minsky (1994) has suggested that this may lead to a form of immortality. These techniques could perhaps first be applied to make animals more intelligent. Primates are assumed to be intelligent to some degree and even capable of limited forms of language (Dunbar 1988). Maybe we could start by giving them an artificial vocal apparatus capable of the articulation required for speech.

2. There is a second way towards other forms of intelligence. Efforts to build completely artificial humanoids, i.e. intelligent robots, started seriously in the late nineteen-fifties and have been making steady progress. In some optimistic views (Moravec 1988), robots with the capacity of human intelligence are only fifty years away. But most robot builders are much more pessimistic. Progress in artificial intelligence so far has been almost entirely in disembodied intelligence, focusing on the modeling and implementation of reasoning patterns. As I will discuss later in the paper, many formidable obstacles remain. A new approach is now being tried which, paradoxically, wants to de-emphasise engineering in favor of biology. Its proponents argue that we must first build `artificial life' before artificial intelligence becomes possible (Steels 1994). Maybe with this artificial life approach a new artificial species with human-like intelligence might one day be possible. I propose to call this species the Robot Homonidus Intelligens.

3. There is a third very important development at the moment: the emergence of a new virtual world known as cyberspace. This world is based on digital technology but is rapidly acquiring some of the characteristics of the biological world (Steels 1995b). Cyberspace is open because new hardware, new interconnections, new applications, and new users are added all the time. It is distributed, with no single entity in control. It also has many layers of complexity and is forming a complex dynamical system in which phenomena like chaos and self-organisation may occur.

Current applications, such as the World Wide Web, do not go beyond the storage, dissemination, and presentation of information. But new applications known as software agents may provide dramatic breakthroughs. Software agents are mobile programs that carry part of their own data. They may act as representatives of their owners and adapt themselves to the computational circumstances in which they find themselves. In some cases software agents have human-like interfaces (Takeuchi 1993). There is the beginning of fascinating work on how software agents may build up their own complexity using biological mechanisms like evolution by natural selection (Ray 1992). It is conceivable that, as the world of software agents increases in complexity under its own momentum, new forms of intelligence will emerge which may possibly surpass human intelligence. The strong interfacing with humans and the real world may make these software agents as common as biological entities in the future.

The rest of the paper explores these three developments in some more detail, particularly from the point of view of feasibility and possible impact.

The Homo Cyber Sapiens

For each significant jump forward in the evolution of human intelligence, there has been a dramatic increase in brain size. For example, Homo Sapiens shows a 20 percent volume increase compared with Homo Erectus. There are no signs that the human brain is expanding (even at a slow rate), but technological memory extensions of the human brain might be possible in the not too distant future. Artificial memories have been increasing in storage capacity and decreasing in size, and the end is not yet in sight, particularly once nanotechnology becomes fully developed (Drexler 1994). New biological sensors and actuators have also evolved steadily as the human race progressed in intelligence. One recent example is the vocal tract, which is essential for developing the capability of speech. Here too there is a comparable technological development, with increasingly versatile and miniaturised sensory devices and actuators. One of the latest developments is micromachines, which are etched directly in silicon.

So, the technology is there. If we could figure out a way to construct effective brain-computer interfaces with which a biological brain can tap additional memory, processing capacity, and sensors or actuators, then the required significant increase in brain size could be realised. There is intensive work on brain-computer interfaces, both from the technological side, with attempts to construct interfaces between electronic circuits and neurons, and from the biological side, with attempts to implant devices. Various successes have been achieved with hearing implants, in which the brain grows interfacing neurons and adapts to absorb the information provided by the electronic circuits. Intensive work is now going on to build artificial retinas and restore vision to the blind.

No work is going on yet to provide a general interface to the brain as a whole, for example to increase language capacity or memory. The main question is of course what the nature of the extension should be:

  • In one hypothesis, the artificial brain extensions should mimic the operation of human neurophysiology. Much progress has been made in recent years in neural modeling (McClelland 1986), and various devices, some of them exploiting VLSI, have been constructed (Nadel 1989). So far the performance of these artificial neural networks has fallen far short of natural systems, and it could be argued that staying close to human neurophysiology will unnecessarily limit the potential power of artificial systems. For example, if we could extend our brain with a calculating device, we would undoubtedly want one that performs calculations with a speed and precision comparable to current computers, as opposed to a calculator that is as slow and error-prone as the human brain. Similarly, if we extend our brain with new language capacities (for example `plug-in' modules with vocabularies and grammars of a language) then we want this extension to be fast and accurate and not in need of continuous practice, as is the case for natural linguistic skills.
  • In another hypothesis, the artificial brain may be completely different from the natural brain. It suffices to build bridges between the two so that the contents and processing of one become accessible to the other. Such a hypothesis would rely more on results in knowledge-oriented artificial intelligence (AI) research (Genesereth and Nilsson 1987). AI engineering has yielded systems with impressive performance in areas like computer chess, expert problem solving, and natural language processing, without mimicking the hardware of the brain. Such a solution would however have other drawbacks. So far AI systems are neither situated nor adaptive. They are painstakingly built by human engineers and almost always need to be extended by hand. To make their use in an evolving context feasible, there would at least have to be a mechanism for regular updating. Nobody knows how to do that in an effective way today, despite intensive efforts in machine learning.
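
To make the first hypothesis concrete: its elementary building block, an artificial neuron, can be written down in a few lines. The sketch below is a deliberately minimal toy with invented data and parameters, a single unit trained with the classic perceptron rule to compute logical OR; the networks used in actual neural modeling are vastly more elaborate.

```python
# A deliberately minimal illustration of the neural-modeling building block:
# a single artificial neuron trained with the classic perceptron rule.
# The task (logical OR) and all parameters are invented for illustration.

def perceptron_train(samples, epochs=20, lr=0.1):
    """Adjust weights towards targets with the perceptron learning rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR function
w, b = perceptron_train(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Even this toy already shows the trade-off discussed above: the unit acquires its competence through repeated practice rather than by direct installation of a ready-made module.
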
The main emphasis in interfacing artificial sensors and actuators to the brain is on replacing failing sensory modalities in handicapped persons. But the same technology can also be used to extend other modalities. For example, humans who already have two eyes could be equipped with additional cameras to extend the range of vision, or could directly control the behavior of motors for locomotion. If direct connections could somehow be established between the brain and the electronic information highway, then there is the intriguing possibility that the brain gains access to vast amounts of information and could in turn cause action at a distance through the intermediary of electronic devices. This idea is at this point so far out that we can hardly imagine its consequences. Will we in the future `read' electronic mail directly or `send messages' to other brains without the intermediary of our normal sensory apparatus, or maybe even without the intermediary of language? Will we travel and gain experiences in cyberspace once the appropriate brain-computer interfaces are possible? These possibilities would bring about a complete revolution in our perception of the world. For example, time and space, which are now extremely hard constraints for us, would no longer be limiting factors and would thus be experienced in a quite different way.

How realistic is the development of a Homo Cyber Sapiens? Today we know almost nothing about how we could expand our sensory and actuator modalities, nor about how we could use plug-in devices to expand our memory and processing capacities, possibly with ready-made modules providing knowledge and skills for specific tasks. There are small-scale experiments going on, and artists like the Australian Stelarc have gone very far in exploring the interaction between the body and artificial devices, but routine employment seems to be far off in the future. But then, to what extent is it ethically appropriate to push these developments? On the one hand, it appears frightening because the brain is not only the most complex but also the most delicate organ of the human body. It is frightening also because the new species might overpower us. At the same time, the expansion of our brain capacity appears a natural step because the evolution of intelligence has happened continuously in the past as well. Moreover, ecological pressures seem to force the further development of intelligence.

The Robot Homonidus Intelligens

Although current work on extending our brain capacities is in its infancy, the same cannot be said of work on building robotic agents and artificial intelligence. The overall research effort has so far been relatively small, compared to the research efforts in biology for understanding the functioning of the brain, for example, or the research efforts for detecting elementary particles. Nevertheless there has been constant work in the area since the beginnings of cybernetics and artificial intelligence in the fifties. The results so far are very impressive. A whole range of programs has been written that exhibit features of (human) intelligence.

For example, chess programs now compete at grandmaster level, expert systems have demonstrated human-level performance in difficult problems like scheduling, diagnosis, or design, natural language programs of high complexity have been built for parsing and producing natural language, and some machine learning programs have been capable of extracting compact representations from examples.

But there are still very strong limits to the present state of the art, which even raise the question of whether we will ever get to intelligent robotic agents.

1. Most of the work in knowledge-oriented AI goes through the following cycle:

  • A particular, usually valuable, expertise is identified,
  • this expertise is modeled at the knowledge level (Newell 1982) and formalised, and
  • the formal representation is coded in a computer-based form using symbolic programming techniques.

The resulting system, often called a knowledge system, is then incorporated in a particular environment and used as a tool by humans in their work. A high degree of sophistication has been reached in this kind of approach and a wide variety of systems has been built and put into practice. But there are two important problems. The first is that the effort required to build knowledge systems is very substantial. It takes several man-years of effort to capture a moderately sized expertise, and this may move up to tens of man-years for non-trivial tasks. A second, more important problem is that such an effort captures instances of `frozen intelligence' without the associated mechanisms and processes that gave rise to the intelligent behavior. This means that maintenance and adaptation have to be done by hand, which is quite expensive and in most cases unrealistic. This suggests that something is now known about the internal mechanisms of knowledge representation, reasoning, and problem solving, but that we do not understand how these mechanisms develop in interaction with the environment.

2. Knowledge systems are examples of disembodied intelligence. They do not have any direct links to the real world through sensors or actuators. Instead, the link is made through the intermediary of humans. This works reasonably well for domains such as legal reasoning where the inputs and outputs take a form that is already symbolic. But if we are looking at robotic agents which have to operate independently in their environment, then the interfaces to the real world become essential. It has long been assumed that it would be straightforward to link up the symbols used by AI systems to the world through sensors and actuators, but the problems turn out to be enormous (Brooks 1991). They are mostly related to the fact that the signals obtained from sensors do not contain enough information to extract the detailed symbolic world models required by classical AI planning techniques, or that it would take too much time to do so. Moreover, actuators will never be foolproof. This means that it will never be possible to plan a particular course of action in advance and hope that the execution will be so perfect that it does not need to be adapted. These difficulties have led in the last few years to a new `bottom-up' approach to artificial intelligence which attempts to use minimal world models and emphasises direct coupling between sensing and effecting mediated by dynamical processes (Steels and Brooks 1995).

3. Current knowledge systems as well as robots are machines. They do not have any reason for existence on their own. They are not individuals, not even organisms. Although they often have the possibility to choose the most appropriate action from a repertoire of actions, the goals and possible actions are supplied externally. When two robots cooperate, they do so because humans have programmed into them particular behaviors that cause the cooperation. The need for cooperation does not emerge from within the robots or the situations in which they find themselves. In this sense, current AI systems are tools for humans rather than independently existing autonomous entities in their own right. It is at this point fully unclear how a robotic agent could be given a sense of self, the right of initiative, the responsibility for its own actions, and so on. Even worse, there is hardly any work going on within the context of artificial intelligence to build truly autonomous agents.

These three unresolved issues constitute formidable obstacles to the development of the Robot Homonidus Intelligens. The obstacles are not really technological. The state of the art in electronics, computer engineering, and mechanical engineering makes it possible to build the body and brains of a humanoid, and efforts in this direction are currently going on (Brooks 1994). The real obstacle is the lack of a theory of intelligence, and particularly of a theory that explains how intelligence grounded in a real-world environment may come about.
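
The first of these obstacles, expertise frozen by hand-coding, can be made concrete with a toy. The sketch below is a hypothetical minimal forward-chaining rule engine with invented diagnosis rules, not any real knowledge system; note that nothing in it can revise or extend its own rules, which is exactly the maintenance problem described above.

```python
# A toy caricature of a hand-built knowledge system: a minimal forward-chaining
# rule engine. The diagnosis rules are invented for illustration. Nothing here
# can revise its own rules: the encoded expertise is "frozen".

RULES = [
    # (set of premises, conclusion)
    ({"fever", "cough"}, "flu"),
    ({"flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises hold until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short_of_breath"}, RULES)
print(sorted(derived))  # both 'flu' and 'see_doctor' are derived
```

Every new rule must be written by a human engineer; the system has no mechanism by which its competence could grow through interaction with its environment.
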

To overcome these fundamental bottlenecks, a new approach towards artificial intelligence has emerged recently which departs in certain ways radically from the knowledge-oriented approach dominating current work in AI (Steels 1994). The most characteristic feature of this approach is a move away from engineering and towards biology, not in the sense that attempts are made to develop realistic biological models (as some neural network researchers are trying to do (Eeckman and Bower 1993)), but in the sense that researchers are trying to understand the "principles" with which biological systems operate and to apply them to the construction of artificial systems. The word `construction' is not even appropriate here, because one of the main ideas is that intelligent autonomous agents cannot be built but should evolve in a process similar to the way intelligence evolved in nature: using a combination of evolution by natural selection and adaptivity and development as in the development of a biological individual (Koza 1992; Cliff et al. 1993). This approach is known as the artificial life approach to artificial intelligence (Steels 1994).

For example, we have created in our own laboratory a complete robotic ecosystem which includes an environment with different pressures for the robots (e.g. the need to collect energy and ensure that it is available), different robotic agents which have to cooperate but are also in competition with each other, and a growing repertoire of adaptive structural components (called behavior systems) which are causally responsible for behavior (see Steels 1994b; McFarland 1994).

Such an integrated experimental environment ensures that all the different levels (genetic, structural, individual, group) are present at the same time, each with strong interactions with the environment. This way a holistic approach to the study of intelligence is possible. Our objective is to come up with and test scenarios that show the progressive steps towards intelligent agents, similar to the way biologists and chemists are investigating scenarios for the origin of life (Kauffman 1993) or physicists are researching scenarios for the origin of the universe (Weinberg 1980). The big challenge is to use only principles compatible with the laws of physics and biology and to avoid programming in specific mental capabilities. Instead, intelligent behavior, including individuality, linguistic communication, cooperation, etc., must emerge as a result of pressures in the ecosystem in which the agents find themselves and structure-forming processes such as self-organisation or selectionism.

Already we have been able to re-enact important steps in the evolution towards more intelligence. In one experiment (Steels 1995b), a robot went through a series of discoveries: that it should slow down in the charging station to maximise the time spent recharging, that phototaxis helped to find the charging station, that a contact switch gave a direct indication that the charging station was being approached, etc. In another experiment, a group of identical robots spontaneously split into two groups, with one group working significantly more than the other. These experiments show that the spontaneous growth of behavioral complexity in robotic agents may become a reality in the coming decade and that this path may eventually lead to forms of artificial intelligence.
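The kind of adaptive dynamics involved can be caricatured in a few lines. The following sketch uses an invented energy model and invented numbers, not the actual robotic ecosystem: a simulated robot tunes a single parameter, its speed inside the charging station, by keeping random variations that increase the energy it absorbs.

```python
import random

# A caricature of adaptive behavior in a robotic ecosystem. The energy model
# and all numbers are invented (this is not the actual experimental setup):
# the robot keeps random variations of its speed that improve energy intake.

random.seed(0)

def energy_gained(speed):
    # Toy model: a slower passage means more time under the charger.
    return max(0.0, 1.0 - speed) + random.gauss(0, 0.01)

speed = 0.9                      # the robot initially races through
best = energy_gained(speed)
for trial in range(200):
    candidate = min(1.0, max(0.0, speed + random.gauss(0, 0.05)))
    reward = energy_gained(candidate)
    if reward > best:            # keep variations that improve energy intake
        speed, best = candidate, reward

print(round(speed, 2))           # the robot has "discovered" slowing down
```

No goal of "slowing down" is programmed in; the behavior emerges from the ecological pressure (the need for energy) and blind variation plus selection.
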

Software Agents

The loosely coupled worldwide computer network known as the Internet is growing at an exponential rate. It consists of millions of computer systems which share resources by accepting standard protocols for the retrieval, presentation, and throughput of information. In the last few years the World Wide Web (WWW) has seen an incredible upsurge in the exchange of information among humans. Thousands of sites have been constructed which make local information visible from all over the world. The information can take the form of text, pictures, sound, and video. Different forms of interactivity are being experimented with, for example through cameras broadcasting the captured image directly on the network. A development that has not taken off yet but is clearly in the making is the presence of software agents operating over the Internet.

A software agent is, technically speaking, a mobile program which carries part of its data. The program is mobile in the sense that it can halt its execution on one machine, transfer itself over the Internet to another machine, and resume execution. In a certain sense computer viruses are a first but negative example of software agents, but many positive uses are imaginable. For example, one can imagine email messages that are much more active. They could include graphics, sound, and parts of programs, such as interactive dialog capabilities or ways to show something to the recipient of the mail. They could also provide the facilities to go back to their senders with information they have collected at the remote site (White 1994). Another application is one where an agent explores the WWW to find information for its owner, such as information about hotels and travel facilities at a location of interest (Pinkerton 1994).
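
The core idea of mobility can be sketched without any real networking. In the toy below, the `Agent` class and its methods are hypothetical illustrations: execution state is frozen into a byte string that could in principle travel to another machine, and is then resumed with its accumulated data intact.

```python
import pickle

# A minimal sketch of the mobility idea, with no real networking. The Agent
# class is a hypothetical illustration: state is frozen into bytes that could
# travel over the Internet, then resumed elsewhere with its data intact.

class Agent:
    def __init__(self, owner):
        self.owner = owner
        self.visited = []            # data the agent carries along

    def run_on(self, host):
        self.visited.append(host)    # "work" done at the current site

agent = Agent("alice")
agent.run_on("machine-a")

payload = pickle.dumps(agent)        # halt: state becomes a transferable payload
# ... payload would be shipped to another machine here ...
resumed = pickle.loads(payload)      # resume execution at the new site
resumed.run_on("machine-b")

print(resumed.visited)               # ['machine-a', 'machine-b']
```

A real mobile-agent system would add transport, security, and an execution environment at the receiving end, but the halt-transfer-resume cycle is the essence.
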

Research is currently going on to give agents various degrees of autonomy. First of all there is the autonomy to migrate and re-instantiate on other machines (Kotay and Kotz 1994) and, if possible, to adapt the program to newly found circumstances. Second, there is the autonomy to act as a representative, which would mean that the agent has mentalistic attitudes like beliefs and commitments and that it has the authority to negotiate (Shoham 1993).

The most interesting development concerns agents which expand their functionality as they operate in dynamically changing open environments. For example, Tom Ray (1992) has developed a system called Tierra in which an agent takes the form of a program that can make a copy of itself. The copy process occasionally makes mistakes, thus causing variants to be produced. The selectionist criteria amount to limitations in time and space and the occasional action of a `reaper' which will blindly eliminate programs. Ray has shown that within this computational environment different forms of copy-programs originate spontaneously, including parasites that use part of the programs of others to copy themselves. Ray is currently engaged in the construction of a distributed version of this system (called NetTierra) in which agents will also be mobile and can thus exploit the computational resources of other computer sites to enhance their distribution in the total population. These experiments are important preliminary steps to learn how new functionality may emerge in software agents independently of design intentions, and to investigate the security issues involved in evolving autonomous software entities.
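Tierra-style dynamics can themselves be caricatured in a few lines. The sketch below is a toy abstraction with invented parameters, not Ray's actual instruction-set machine: organisms are reduced to genome lengths, copies occasionally mutate, and a reaper culls organisms when memory fills up, which is already enough for a population of variants to arise from a single ancestor.

```python
import random

# A toy abstraction of Tierra-like dynamics (all parameters invented; this is
# not Ray's actual instruction-set machine): digital "organisms" are reduced
# to genome lengths, copying occasionally mutates, and a reaper culls
# organisms when the shared memory fills up.

random.seed(1)
CAPACITY = 50        # limited memory: the spatial selection pressure
MUTATION_RATE = 0.2  # chance that a copy instruction misfires

population = [10]    # one ancestral self-replicator of genome length 10

for step in range(100):
    offspring = []
    for genome in population:
        child = genome
        if random.random() < MUTATION_RATE:   # copying makes mistakes
            child = max(1, genome + random.choice([-1, 1]))
        offspring.append(child)
    population.extend(offspring)
    if len(population) > CAPACITY:            # the reaper blindly culls
        population = population[-CAPACITY:]

print(sorted(set(population)))  # multiple genome variants typically coexist
```

What the toy cannot show is Tierra's most striking result, the parasites: for those, the genomes must be executable programs that can read each other's code.
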

Research on software agents is still in its infancy. The agents that are currently released on the Internet do not have the autonomy discussed in the previous paragraphs, nor can they be called intelligent. They hardly use any representations or reasoning capabilities. However, if some of the techniques developed for robotic agents can be transferred to software agents, and if the problem of how new complexity could spontaneously originate in societies of agents can be solved, then there is no doubt that cyberspace will be inhabited by increasingly more complex and more intelligent beings.

Conclusions

The paper discussed possible ways in which new forms of intelligence, comparable to or even more powerful than current human intelligence, may come about. One way, the
Homo Cyber Sapiens, is rooted in biology and based on extensions by technological artefacts. The other way, the Robot Homonidus Intelligens, is purely technological but still grounded in the world. The third way is through software agents, which live and breed in cyberspace. All these developments may happen, driven by new technological advances and the increasing pressures occurring in human societies, although their realisation is, particularly for the first two cases, far into the future.


Bibliography
  • Brooks, R. (1991) Intelligence without reason, IJCAI-91, Sydney, Australia, pp 569--595.
  • Brooks, R. (1994) Coherent Behavior from Many Adaptive Processes. In: Cliff, D. et al. (eds.) From Animals to Animats 3. Cambridge: MIT Press. p. 22-29.
  • Cliff, D., P. Husbands, and I. Harvey (1993) Evolving Visually Guided Robots. In: Meyer, J-A., H.L. Roitblatt, and S.W. Wilson (eds.) From Animals to Animats 2. Proceedings of the Second International Conference on Simulation of Adaptive Behavior. Cambridge Ma: MIT Press/Bradford Books. p. 374-383.
  • Cohen, J. and I. Stewart (1994) The Collapse of Chaos. London: Viking Press.
  • Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.
  • Donald, M. (1991) Origins of the modern mind. Three stages in the evolution of culture and cognition. Cambridge Ma: Harvard University Press.
  • Drexler, K. (1994) Nanosystems. Molecular Machinery, Manufacturing, and Computation. New York: John Wiley.
  • Dunbar, R. (1988) Primate Social Systems. London: Croom Helm.
  • Eeckman, F. and J. Bower (1993) (eds.) Computation and Neural Systems. Boston: Kluwer Academic Publishers.
  • Genesereth, M. and N. Nilsson (1987) Logical Foundations of Artificial Intelligence. Los Altos: Morgan Kaufmann.
  • Gould, S. and N. Eldredge (1977) Punctuated Equilibria: The tempo and mode of evolution reconsidered. Paleobiology, 3: 115-151.
  • Kauffman, S.A. (1993) The origins of order: self organization and selection in evolution. Oxford: Oxford University Press.
  • Kosko, B. (1992) Neural Networks and Fuzzy Systems. A Dynamical Systems Approach to Machine Int

