You might not believe a word he says, but BT does. Ian Pearson has been BT's futurologist since 1991. His job is to imagine where today's technologies will lead us. Artificial intelligence, genetic modification, intelligent viruses, imaginary civilisations and Second Life 10.0, as well as some pretty nasty scenarios involving robots such as the Terminator, are all real possibilities he has included in his Technology Timeline.
In this interview, Pearson talks about his profession, explains why he doesn't think we will understand intelligent machines when they finally arise, and warns of the big ethical dilemmas our technological civilisation will have to face sooner or later.
Why does BT have a futurologist?
You can use the term futurist, if you prefer. It is pretty much the international term. Futurologist is a peculiarly British one; everybody else uses futurist. We like to think that having futurologists in BT is kind of like looking out the window of your car when you're driving along through fog. You can't see a very clear picture of what is ahead. You try to look out for every obstacle. Sometimes you will misinterpret an apparent shape in the distance, but few of us would drive through fog without bothering to look out the window. Blurred vision is a lot better than none at all. The same is true for business, which is why BT employs me.
So the further ahead you can see, the better you can plan. It's a useful function, but BT didn't have a futurologist before me. It just treated that kind of thinking as part of planning. People would think ahead a little way, but there wasn't very much long-term thinking before I came along. I joined BT in 1985, but I only became a full-time futurologist in 1991.
Royal Dutch Shell has a famous scenario planning research team. Do you work the same way?
We work in different ways. Shell basically invented the field of corporate futurology, as far as I can tell. But what they mostly do is what is called scenario planning: sketching different possibilities for what lies ahead, and then planning for each of those possible scenarios. In BT we use that here and there throughout the company for various reasons, but I personally don't think it works very well for working out what the future will actually look like. We can look at different scenarios. But when you think about the future a lot in a tech-dominated area like telecoms, you can work out pretty much what it is going to look like, rather than just planning scenarios. So I find it much better to try to predict what's going to happen than to keep a list of a few possibilities.
How do you make your predictions?
I do a lot of reading. I try to keep in touch with what's happening. I read business and news magazines, technology journals and websites, to keep up with what's happening around the world. And then I spend a lot of time listening to other people and getting their insights on what they think will happen in their respective fields. Reading consumes a lot of my time, as does being in touch with other people, one way or another. Then I spend a long time daydreaming, thinking about how all these things interact, and gradually I come up with a view of the future. When I talk with other people about it, of course, they sometimes argue with me. Someone may say: "That is a very stupid conclusion," and I think again. Sharing my ideas with colleagues allows me to refine them and reach better conclusions.
10 years ago, in May 1997, Deep Blue won its chess match against Garry Kasparov. Do you consider, as Kasparov did, that this was the first glimpse of a new kind of intelligence?
Yes, it's a very good example of what you can do with computer-based intelligence. What it pointed out was that a machine doesn't have to do things the same way people do in order to achieve goals that people need intelligence to achieve. Deep Blue didn't work the same way as people. Deep Blue used a great deal of number crunching. It was not a conscious machine. It was just a very dumb machine, not aware of its own existence, which crunched numbers in order to solve problems that might otherwise require one of the finest human minds on the planet. But it was a big breakthrough. I think it was a very important breakthrough for thinking in the field. A lot of us realised then that it wasn't necessarily going to be essential to figure out exactly how the brain works in order to tackle a lot of problems which require intelligence, because to solve these things one can use number crunching rather than a big computer with consciousness or self-awareness.
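The "dumb number crunching" Pearson describes can be sketched as a plain minimax search, though this is only an illustration: Deep Blue's real search was a far more elaborate alpha-beta search with a hand-tuned evaluation function running on custom chess hardware. The toy game below (a position is a number; each player either adds one or doubles it) is an invented example; the point is that the machine scores positions purely by exhaustive look-ahead, with no understanding of the game.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Best achievable score, searching `depth` plies ahead exhaustively."""
    if depth == 0:
        return evaluate(state)
    scores = [minimax(move(state), depth - 1, not maximizing, moves, evaluate)
              for move in moves]
    # The maximizing player picks the highest score, the opponent the lowest.
    return max(scores) if maximizing else min(scores)

# Toy game: a position is just a number; a move adds 1 or doubles it,
# and the maximizer wants the final number to be as large as possible.
moves = [lambda s: s + 1, lambda s: s * 2]
score = minimax(1, 3, True, moves, lambda s: s)
```

The search assumes the opponent also plays perfectly, which is exactly why a sufficiently deep, sufficiently fast brute-force search needs no chess "insight" at all.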
Nonetheless, I think the task of producing machines with consciousness or self-awareness is still important. We will probably make conscious machines sometime between 2015 and 2020, I think. But they probably won't be like you and me. Such a machine will be conscious and aware of itself, conscious in pretty much the same sense as you and I are, but it will work in a very different way. It will be an alien: a different way of thinking from ours, but nonetheless still thinking. It doesn't have to be built like us in order to be able to think.
But as soon as machines become intelligent, by Moore's Law they will soon surpass humans. Incidentally, BT's 2006 Technology Timeline predicts that AI entities will be awarded Nobel Prizes by 2020, and that soon after robots will become mentally superior to humans. What comes after that: superintelligence, or God 2.0?
I would certainly still go along with those time frames for superhuman intelligence, but I won't comment on God 2.0. I think we should still expect a conscious computer smarter than people by 2020. I see no reason why it is not going to happen in that time frame. But I don't think we will understand it. The reason is that we don't even understand how some of the principal functions of consciousness should work.
I'll give you an example. In the early 1990s at the University of Sussex, there was an experiment that used a programme to evolve circuits to distinguish between different tones on a telephone line, allowing the circuits to work in whatever ways emerged. The circuits the computer came up with worked in very different ways from those people came up with. The computer doesn't use the conventions that people use, but it came up with solutions that were more elegant and worked very differently. Even with the simplest of systems it takes us a long time to figure out how they work, because they are so different from the way people would solve the same problems. Therefore, I don't think we will understand how these smart machines work.
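The evolutionary search Pearson describes can be sketched in miniature. This is not the Sussex setup, which evolved real circuit configurations (part of why its solutions were so hard to interpret); here the "circuit" is just a bit string and the goal, a target pattern, is invented for illustration. The loop is the essence of the technique: score random candidates, keep the best half, and mutate them, with no human design step anywhere.

```python
import random

def evolve(target, pop_size=30, generations=300, seed=0):
    """Evolve a bit string towards `target` by selection and mutation."""
    rng = random.Random(seed)
    n = len(target)
    # Fitness = number of bits that already match the target.
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    # Start from a completely random population.
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:           # perfect solution found
            break
        survivors = pop[:pop_size // 2]    # selection: keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(n)] ^= 1   # mutation: flip one random bit
            children.append(child)
        pop = survivors + children         # survivors persist unchanged
    return max(pop, key=fitness)

best = evolve([1, 0, 1, 1, 0, 0, 1, 0])
```

Because the search only ever sees the fitness score, the path it takes through the space of candidates owes nothing to how a human would decompose the problem, which is the point Pearson is making about evolved designs being opaque.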
If you think about machines becoming much more intelligent than people, well, I agree with the logic that machines clever enough to design still cleverer machines will get very, very clever. It's kind of like a hamster trying to understand a human being. It simply can't grasp the problem. How could it possibly think in the same way?
It's as if a human being were compared with an alien intelligence hundreds of millions of times smarter. We don't have the right capabilities even to start thinking in the same way. So we put machines winning Nobel Prizes in our Technology Timeline because we have good reasons to do so. You see, even if we like to think that we are reasonably intelligent, most of us are not capable of doing something wonderful enough to win a Nobel Prize. And we wouldn't expect to be able to understand everything a Nobel laureate produces, because these smarter people work at a different level from the rest of us. How could we possibly understand it? With computers it will be pretty much the same, sometime in the not very distant future.
In this context, can we consider today's Second Life as some kind of 'The Matrix' 1.0?
That's an interesting way of looking at it. I never thought of it in those terms, and I don't really think we can. Both The Matrix and Second Life are about socialisation, and Second Life is an imaginary world that we can inhabit, but the key difference is that people are very aware of being there. In The Matrix, the key thing in the movie was precisely that it was a virtual environment where people didn't know they were in it. I don't think Second Life will ever evolve into a place where people aren't aware that they are online. I think people will always be able to distinguish between being connected to an imaginary world and actually being in real life. We will always be able to distinguish real life from imaginary life. That's the key difference between The Matrix and Second Life.
But certainly we can extend the concept of Second Life beyond simple virtual environments, and we could add full sensory capabilities to it, so we could make it completely convincing. In that regard a future version of it could be very like The Matrix, and then we could have 'Second Life meets Total Recall'. It would be a bit like The Matrix then: a very large environment with people connected together at a very convincing level of reality.
It sounds like the holodeck in Star Trek, doesn't it?
Yeah, the holodeck on Star Trek was a bit like the future capabilities of virtual environments. We were looking into that sort of future: by 2020 we should be able to induce sensations, record sensations and replace sensations. Then we can do something approximating Star Trek's holodeck or Total Recall, and we could have something a little bit like The Matrix, or a Second Life 10.0. I think the future is converging on something combining most of those things, rather than just Second Life. A metaphor we like to use is The Sims, the game with imaginary characters interacting with each other. They are not humans, but they interact with each other. With the arrival of artificial intelligence, we could end up with something like The Sims but with real consciousness. That will be a very interesting situation, when you have an imaginary civilisation living imaginary lives. From a human point of view they are imaginary, but for the members of that civilisation it will be quite real, and they will have their real existence within the network, within cyberspace.
I understand you're interested in NBIC (nanotechnology, biotechnology, information technology and cognitive science) convergence. A lot of people have real concerns about it. For example, Bill Joy, Sun's former chief scientist, wrote a famous manifesto in Wired magazine in 2000 warning that this convergence could represent a threat to mankind's very existence. In BT's Technology Timeline I read that by the 2030s a nanotech-based virus could be transmitted between machines and people over the net. Wouldn't that be a real nightmare?
It would if you put things in those terms. We put those things into the Technology Timeline to highlight the possibilities of future technology. But I think in some cases we will probably want to make regulations to prevent people from doing some of those things. The NBIC convergence allows you to do a lot of very powerful things which will bring huge benefits for mankind. And it likewise makes possible some very formidable weapons and some pretty nasty nightmare scenarios. The point that Bill Joy was getting at in his article is that this is entirely possible. So at some point we will have to figure out how to stop these things from happening, and we will have to persuade governments around the world that there are serious problems which need to be regulated in order to be prevented.
Take, for example, genetic modification technology. Governments may do something about it, and they may produce some agreements, although there will be some countries which aren't covered. In most of the world it's illegal to clone people, and there are very strong restrictions on what you can do in terms of genetic modification. I would expect that sort of thing to happen with the more extreme NBIC convergences, so that scientists couldn't get access to the level of technology that would bring them close to being able to do things like nano-assembly with viruses and similar stuff.
I would think that a lot of people at that point will be screaming about the risk of stopping the development of very clever new technology. But probably there will be very, very tight restrictions on NBIC. The trouble is that a lot of these technologies will be very difficult to police. Even if they're made illegal across the world through international treaties and the like, how do you police what somebody's doing in his backyard? It could be very small equipment and a very smart guy... You can't spot what's happening by satellite surveillance, because it's very difficult to see what's going on. In that regard we can't do very much about it; we will just have to accept the risk. And that's not news. Once the technology exists, or even once the technology is halfway to existing, all you need is a few smart guys in a very small space spending some time together and they might come up with something. How can we forbid them to do it?
So the whole concept of NBIC convergence, in the form of viral extreme AI, conscious machines, super-humans, nano-assemblers, genetic modification... linking these together does give you these capabilities. We may decide how we release them, though we may have only limited ability to do so. So I would agree with Bill Joy to a degree. I'm not so optimistic that we are going to find a solution for that; at the moment we can't see one. I think he made a very valid point!
Stephen Hawking argued in 2001 for genetically enhancing our species in order to compete with intelligent machines. Do you believe human genetic enhancement would be feasible, or even practical?
We are developing a good deal of understanding of how a human being is constructed and how it works, with all the armies of proteins and everything that goes with them, and of how the processes involved in life work as well. This progress is going to accelerate over the next decade. So it's very likely indeed that we will have the capability to modify people in several ways. Again, we will have to have regulations to police that to some degree, but at the same time we should have the ability to make pretty much any minor modification to a human being that we want. For example, people will look at genetic modification using genes that actually do something useful, as well as getting rid of genes that don't. I don't know how to answer this question fully, and most scientists don't either.
We should be able to take genes from other organisms, and to modify organisms by mixing genes together. But eventually I think we could go a lot further than that, once we really understand the basic principles by which those genes operate and gather the other insights of nature that took eons to evolve on their own. We could go much further than just taking genes from other organisms: we should be able to design genes from the ground up to achieve whatever goals we are trying to achieve. We should be able to decide what characteristics we want to create, and to design the specific proteins and systems to achieve them.
We will need a great deal of capability if we want to decide what people look like, or even to determine what their personality might be like. But it is quite likely that over time we will start doing all these things. We will modify people, and quite likely we will eventually end up with a number of much-augmented human capabilities. The question is: how far do we go before we decide that we want to add genetic enhancements that allow people, for example, to link directly to machines? Do we want them to link to the network? Should we produce genetics that allows people to connect directly to the internet, say in 2050 or 2060, just by thinking, and therefore to communicate telepathically with other people who are walking around? Why not follow that path? People might well want to, if the capability is there and the engineering exists to do it. All these things would be possible. Very interesting possibilities.
Right now the Pentagon is using some 5,000 robots in Iraq and Afghanistan, patrolling cities, disarming explosives and making reconnaissance flights. The next step is allowing them to carry weapons. Does this path lead to a Terminator scenario?
It's certainly one of the top concerns engineers have been worrying about: whether this path leads towards the Terminator as it happens in the film, where you design a robot that should be under your command, but then it becomes self-aware and decides not to follow your commands. In developing robotic weapons, the US is taking a step in that direction. The question is how far anyone can go down that path without enormous assistance. Could a small dictatorship, for example, afford to take it all the way and make such a weapon system?
Well, probably not yet, because they would lack capabilities that so far have not been built, and that development path would need enormous resources. It would take a long time to get to the point where it would be possible. But I think the potential is there. If you're aware that the possibility exists, you obviously have to think about it when designing these machines, and not blindly build something which is quite likely to head down the line towards the Terminator scenarios. We certainly exercise a degree of self-censorship: we try not to be stupid enough to destroy the world.
Are you an optimist about the future? Do you believe we can improve technology and at the same time save the world from hunger, overpopulation, pollution and environmental destruction?
I am an optimist. I recognise that there are dangers in the future, but somehow I still believe that we will manage to avoid those problems and that the future will be much better than today. If you go far enough ahead, we will solve a lot of those problems using advanced machines. Somehow or other we will find a way to avert disaster without destroying the world. That's what I believe. Looking at the negative side, there is a risk, a significant risk, that we might destroy the world in ways that we won't be able to avert. And I think that over the next several decades there will be a balance between problems caused by technologies and solutions provided by them. In the short to medium term things probably won't be much better or much worse than they are today: we will have some new problems, but we will also have new solutions. In the very long term there's a lot of room for optimism that we will solve a lot of the problems we have caused, and eventually catch up with the new problems caused by coming technologies. So concerning the far future I am an optimist, because the opposite is too nasty to think about.
BT's 2006 Technology Timeline predicts that by 2051 humanoid robots will beat the England football team. But would they beat Brazil?
I am not a football fan by any means. The last time I was interested in football I was really quite small, but I remember this guy called Pelé. I'm pretty sure he was Brazilian. You guys seem good at football. I think that if any country in the world is still going to beat the robots it will be Brazil.