Cologne Conference Futures CCF14, October 2014
The theory of memes builds on the idea that words, songs, stories, skills and technologies are replicators, as genes are. They are information that is copied with variation and selection and that therefore sustains an evolutionary process. The culture we see around us is the result: the winners in the memes’ competition to use human brains to get themselves copied, stored, recombined and passed on. No other species on earth is capable of imitating well enough to support cumulative evolution of this kind. When our ancestors reached this critical turning point they let loose this second replicator and the world changed forever.
The memetic evolution that followed not only transformed human brains and bodies into meme machines but changed the atmosphere and biosphere of the whole planet, and those changes continue. Thus the advent of a new replicator was dangerous (and we may speculate about how dangerous).
Could there be another such transition? Understanding the gene-meme transition may help us to imagine how. Genes evolved from ‘naked replicators’ into replicators housed inside the bodies of bacteria, plants and animals. These are the phenotypes that express the genes they carry and compete with each other to survive and reproduce. Memes appeared when one of these phenotypes (a hominin ancestor) began a new kind of copying. Instead of cells copying chemical structures, they copied a new kind of information: gestures, sounds, skills and, ultimately, technologies.
Could the same happen again? Could memes build phemotypes that became a new kind of machine copying a new kind of information? I suggest that this is happening right now. Although most human memes are still naked replicators, without anything corresponding to a phenotype, culture is evolving towards a more effective system in which printing presses and factories churn out books, cars and phones that compete to be bought and so cause more copies to be produced. These are the meme phenotypes, as are our computers and all the rest of our information technology.
So the question becomes: could one of these machines copy, store, vary and select digital information without us? They could, and in part they already do. But is there yet any information that undergoes all of these essential evolutionary processes without any human intervention? I do not know. It is interesting to realise, though, that if there were, we would not know about it, by definition. It would be a kind of dark information, detectable only by its use of energy and computing resources.
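The essential operations named here (copying, storing, varying and selecting information) can be shown in a toy sketch. Everything in it is an illustrative assumption, not anything from the essay: the target string, mutation rate and population size are arbitrary choices, and real digital evolution would of course have no fixed human-chosen target at all.

```python
import random

random.seed(0)  # deterministic run for the sketch

TARGET = "replicator"  # an arbitrary selection criterion for this toy
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    """Selection: score a string by how many positions match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Copying with variation: each character may be miscopied."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(generations: int = 500, pop_size: int = 100) -> str:
    # Storage: a population of digital strings kept from one generation to the next.
    population = [
        "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        for _ in range(pop_size)
    ]
    best = population[0]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if best == TARGET:
            break
        # The best copy survives unchanged; the rest are variant copies of it.
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return best

print(evolve())
```

Once started, the loop needs no further human decisions: random strings reliably converge on the target through blind copying, variation and selection alone.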
If this happens a new evolutionary process will take off that is greater and faster than any yet. It will demand enormous resources for its expansion and, being human, we will probably be unable to resist these demands without depriving ourselves of the communication technologies we so love. A possible future here is one of rapidly increasing artificial thinking, dependent on massive increases in hardware and software and having dramatic effects on the planet (again).
And what would our role be? A few billion years ago, in a process of endosymbiosis, one kind of bacterium was absorbed into another to become the mitochondria that power living cells today. In a similar way I wonder whether we biological thinkers might become absorbed into artificial ones for the same purpose – to provide the energy. We would scurry around living our human lives, building ever bigger and better machines, while the intellectual action and evolution go on beyond our ken.
Would these new machines be conscious? Would we need to worry about their welfare or suffering? Much current thinking on consciousness treats it as an evolved ability with effects and functions that make it adaptive for human beings. I have argued that it is nothing of the kind. Consciousness is an illusion that arises because we build self-models, or stories about inner selves, to which we attribute the power of consciousness and free will. So it is not whole humans that are conscious but representations of selves that do not really exist. Doing this may be adaptive for us in various ways, but it misleads us terribly when we try to understand our own minds.
If this is correct then machines could only become conscious in the way we are if they became as deluded as we are. This would happen if they constructed models of self-like entities to which they attributed the power of consciousness and free will. We might expect all sorts of distributed representations of entities to appear in this new world of tremes, the products of a third replicator. The trouble is, they would be way beyond our grasp. We might find ourselves rather like a mitochondrion trying to understand the animal inside which it lives, and to which it gives its energy.
The whole of my argument here suggests that the future for artificial intelligence and consciousness lies not with us clever humans intelligently designing better and better machines until they cross some threshold or reach a singularity but with a third replicator and a new evolving world that we will, unwittingly, have let loose and cannot control.