The mind is alive, but the intelligence explosion is rubbish

The “intelligence explosion” or “intelligence singularity” is a fun idea. Computers are getting more powerful, they are integrated into all parts of our lives, and machine learning tools allow us to search for and process information better than ever before. Maybe we are nearing the point where computers become as intelligent as, or even more intelligent than, we are? Maybe computers will be able to evolve and develop their own intelligence? Maybe we are nearing the intelligence singularity?

Earlier this week I heard a BBC radio interview with Margaret Boden, a pioneer of and authority on Artificial Intelligence. Contrary to much of the hype surrounding her subject area, she said that people talking about the intelligence singularity were “talking rubbish” and the idea of humans developing a computer with human conversational ability “seems to me to be a fantasy”.

I couldn’t agree more. “Rubbish” is just about the level I would put the idea at, and in an earlier post I used the word “nonsense”. But I was still interested, partly because of a continued challenge by Olle Häggström, to hear a thorough scientific argument against the singularity. If Stephen Hawking and some of his eminent colleagues think it can happen, we shouldn’t dismiss it out of hand.

Unfortunately, the radio programme wasn’t long enough for Boden to give a full argument, and I couldn’t find an article where she addresses the singularity directly. So this is my attempt to make the type of argument I think she might make. My argument is based on my own reflections after reading Boden (2006). This is a nice article, and if you want to really understand the issues then I recommend you stop reading this and carefully study her article instead. But if you just want to watch me stick it to the singularity using the ideas I gained from my own reading, then read on.

The first step in my argument is to identify a connection between mind and life. Intelligence is only intelligence if it comes from a living entity. Artificial intelligence that just mimics the human mind through a series of programmed steps is not really a living mind; it is just a computer program that follows logical steps. For example, we can imagine a version of Apple’s Siri system that eventually fools people into thinking they have a real personal assistant in their phone. This is a conceivable technological development, one that could happen in the next 20 years. By covering enough eventualities, Apple’s Super-Siri might pass a Turing test. But even if it did pass the test, I think most people would intuitively agree that it is just a clever piece of software. It is not really a living mind.

Intuition alone is not quite good enough if we really want to connect mind to life. Can we say more precisely why Super-Siri (if it existed) isn't alive? The argument starts with John Searle's Chinese room thought experiment. I won't go into the details, but suffice it to say that he argues that if I had access to a lookup table which allowed me to answer questions in Chinese, this would not mean that I understand Chinese. Likewise, when Super-Siri answers questions it isn't really understanding English; it is just following a set of rules.
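
To make the lookup-table intuition concrete, here is a toy sketch in Python (entirely hypothetical; it has nothing to do with how Siri actually works). It "answers" questions by pure symbol matching:

```python
# A toy Chinese-room-style responder. It maps input strings to canned
# output strings; nothing in it "understands" what the words mean.
LOOKUP = {
    "what is the weather like?": "It looks sunny today.",
    "what time is it?": "It is just past noon.",
}

def respond(question: str) -> str:
    # Normalise the symbols, then look them up.
    return LOOKUP.get(question.strip().lower(), "Could you rephrase that?")

print(respond("What is the weather like?"))  # -> It looks sunny today.
```

Scale the table up far enough and the answers get impressive, but the mechanism stays the same: rule-following, not understanding.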

In isolation, Searle’s argument isn't the reason Siri isn't alive, and I don't think the Chinese room gets to the heart of the problem. But thinking a little further, as Margaret Boden does, we see the bigger problem that follows from Searle’s room. Super-Siri, in its first Turing-test-passing version, will lack a few other important properties of a real brain. Firstly, it will not be particularly robust. If I remove a couple of hundred randomly selected neurons from my brain, I don’t think I’ll have any problem finishing this sentence. But if I remove several hundred lines of randomly selected code from Siri, it will probably crash. Secondly, Super-Siri will not be adaptive. If we ask it to do a visual pattern recognition task, it will be lost; it wasn’t coded for that purpose. Thirdly, think how difficult it would be to kill me compared with eliminating Siri. For Siri, all you have to do is smash it with a hammer. You won’t get me without a fight. I am an autonomous agent, capable of controlling my own destiny and predicting whether or not you are about to hit me.
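
The code-deletion point is easy to demonstrate in miniature. Here is a small experiment (my own illustration, assuming nothing about Siri’s real code): take a tiny, correct Python program and delete its lines one at a time.

```python
# Delete lines one at a time from a small, correct program and see
# how often the damaged version still runs.
program = [
    "total = 0",
    "for i in range(10):",
    "    total += i",
    "print(total)",
]

for i in range(len(program)):
    damaged = "\n".join(program[:i] + program[i + 1:])
    try:
        exec(damaged, {})  # run the damaged program in a fresh namespace
    except Exception as e:
        print(f"without line {i}: crashed ({type(e).__name__})")
    else:
        print(f"without line {i}: ran, but no longer does its whole job")
```

Three of the four deletions crash the program outright; brains degrade far more gracefully than that.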

One point that arises above is that the Turing Test, as useful as it has been as a thinking tool, is not a test of General Artificial Intelligence. But this isn’t the major point. Indeed, most ‘futurists’ wouldn’t claim that passing the Turing test lies at the singularity. Rather, the major point is that the mind and life are almost impossible to separate. If we want to create a proper mind, we need to reproduce many other aspects of life. We need to create robustness, adaptivity, autonomy and many other life-like properties. If we want to understand mind, we need to understand life.

Once I realised this, working as I do with biology, I saw clearly just how far away we are from understanding ‘mind’. We are nowhere near. In biology, apart from when we talk to journalists or write grant proposals, we tend to be modest about what we say we understand. Yes, biologists have sequenced the human genome, they have developed drugs that can almost control HIV, they scan brains in various ways, they can grow stem cells, they understand developmental pathways and they can modify small numbers of genes in crops. But if you ask a typical biologist if they understand ‘life’, they would probably just laugh at the question.

No. We don’t understand life yet. We understand a few practical things here and there, and have some rather large data sets which we would love to understand better. But no biologist would claim we are on the cusp of a detailed understanding of ‘life’, even in an overblown research proposal.

A good way to see how little understanding of ‘life’ we really have is to ask how good we are at simulating it. Simulating life is part of the relatively small research field of Artificial Life. ALife aims to reproduce features of life inside a computer, just as Artificial Intelligence aims to create features of a mind inside a computer. Boden gives ten examples of properties of life that Artificial Life tries to capture: self-organization, autonomy, emergence, development, adaptation, responsiveness, evolution, reproduction, growth, and metabolism. Two of these are properties I claimed were lacking in Super-Siri. In fact, maybe the only one of the ten that Super-Siri would have is responsiveness. Super-Siri would not be alive and it would not be a mind.
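
To give a flavour of what the simplest of these properties look like in silico, here is a minimal sketch of Conway’s Game of Life, a classic toy model from the ALife tradition (my illustration, not one of Boden’s examples). Self-organization and emergence appear from purely local rules:

```python
from collections import Counter

def step(live):
    """One Game of Life update; `live` is a set of (x, y) live cells."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": a five-cell pattern that, with no central control,
# reassembles itself one step diagonally every four updates.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for t in range(5):
    print(f"t={t}: {sorted(glider)}")
    glider = step(glider)
```

No line of this code mentions gliders, yet gliders emerge and propagate. That is the flavour of emergence ALife is after, and it is still a world away from metabolism, development, or mind.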

How far are we from replicating these ten properties of life in a computer? In a review, Bedau et al. (2000) provide a list of open questions in ALife. From these we can see just how far we have to go before we can simulate a brain. For example, simulating a bacterium-like organism over an entire lifecycle (question 4) and generating an RNA-like molecule from simpler molecules (question 1) are both open questions. I would maintain that no discernible progress has been made on these questions in the 15 years since the publication of Bedau et al. (2000). So in this context, the answer to question 11, demonstrating the emergence of intelligence and mind in an artificial living system, is very far off indeed.

If someone does answer question 4, and simulates a single cell over an entire lifecycle, then I might start to take an interest in General Artificial Intelligence. But at that point I’ll want to know how groups of bacterial cells self-organize to form complex functional patterns, how environment and genes interact to produce structure, and how evolution shapes the biodiversity of bacteria and other organisms. These are just some of the unanswered questions about multi-cellular biology. Boden, for example, raises the problem of integrating a proper metabolic system into an Artificial Intelligence. There are so many challenges to be addressed in understanding multi-cellular evolution that even the as-yet-uncompleted step of modelling single cells would be only a tiny piece of progress.

So despite the small group of vocal evangelizers calling on us to take a closer look at the consequences and potential dangers of a general AI, if we consider all the steps involved in understanding life then we can see that the singularity is still a long way off. More importantly, if we consider the potential risks to our security posed by technology, general AI is among the least relevant of them. The issue is not so much how long it will take to reach the singularity, but all the potential dangers along the way. As I have argued, the steps towards creating ‘mind’ necessarily involve smaller steps towards creating ‘life’. And each of these steps will involve its own ethical and moral dilemmas. For example, when we completely understand bacterial lifecycles, we will probably be able to create whole new forms of bacteria. Maybe we will start by putting health-promoting, muscle-building or brain-boosting bacteria in our morning yoghurt, leading to unexpected longer-term consequences. Maybe we will create deadly bacteria that we lose control of on the battlefield. The nightmare scenarios are endless, and we will have to deal with them in a rational manner, but they all start a long time before we have created a virtual ‘mind’.

I can’t stop myself from closing with some critical remarks about the general AI 'philosophers', or whatever they are. For someone like myself, working on the science of ‘life’, they are slightly irksome. Here we are, working away at unravelling the secrets of evolution and biology, and there they are, setting themselves up as great thinkers about the future of humanity. Instead of working on the science, these futurists are carried away by their own thought processes. Yes, scientists need to be better at looking at the ethical and social consequences of their work. But we shouldn’t just pick out the coolest, most self-indulgent, headline-catching problem we can find and then claim we need a new way of thinking about the dilemmas involved. That isn’t good quality science. It is poor quality science fiction.
