Kevin Kelly: The 10% are going to be amplified by the AI, so you need to work on that 10%.
By Wang Qilong
In 2019, Kevin Kelly was interviewed by the Japanese journalist Kazumoto Ohno and predicted what would happen to the digital world over the next 5,000 days. They later turned the interview into a book, which has now been translated into Chinese, called The Next 5000 Days.
In the first half of 2023, Mr. Zou Xin of New Programmer magazine conducted an exclusive interview with Kevin Kelly, the founding executive editor of Wired magazine and the author of Out of Control, focusing on the great changes and problems that AI has brought.
Kevin Kelly, born in 1952, is the founding executive editor of Wired magazine and a former editor and publisher of the Whole Earth Review.
Zou Xin, former vice president of CSDN.
"I don't think it's something we need to worry about, because these minds are not like us."
Zou: In your book The Next 5000 Days, you predicted there will be many new changes in the next 5,000 days, which is about 13 years. I want to know: how do you assess technologies that are changing every day? How do you assess their potential risks and impact? And why are some of them important while others are not?
Kevin: I live in Silicon Valley, so I get to spend a lot of time with people who are involved in the latest startups. Many companies are trying to develop breakthrough technologies, and there's a long list of them: from quantum computing, to space and near-space, to neural-link technologies that connect a computer directly to your brain, to biotechnologies like the new mRNA vaccines, and many others, including clean meat grown from animal cells.
But out of all those, I think AI is by far the most powerful, in part because it's what we would call an enabling technology, which means it enables all the other things I just mentioned, and more. A genetic breakthrough, for example, is not necessarily going to help space travel or computation, but AI will enable and then accelerate change across all of these industries. That's what makes it so powerful: it's powerful enough by itself, but it's also a catalyst for accelerating change and making new things in all these other areas. That's why it's so central to what's happening.
It's also why AI is going to be very hard to talk about: it's doing so many of these things all at once. When we try to understand and discuss it, it's a sprawling, never-ending thing, which makes it a challenge to even talk intelligently about it.
Zou: People now talk about a phenomenon called emergence. So many new things are coming out of this GPT wave. Do you think emergence can allow AI to generate intelligence beyond what humans can currently imagine?
Kevin: The thing to remember about this technology broadly called artificial intelligence is that there already are, and there will be in the future, many varieties and many types. You can think of them as different species. That's partly a reflection of the fact that in our own brains there's a complex of many different modes and nodes of cognition. There are many varieties of cognition in our own minds, and we have a mixture of them that produces what we call human intelligence, so human intelligence itself is not a single dimension. It's not a single thing. We know it varies from person to person, and that intelligence generally varies from animal to animal. Obviously, whales have some level of it. And there are cognitive tasks that a chimpanzee can do that no human can do, like remembering the locations of different things beyond what a human can manage.
But we also know in engineering, whether computer science, mechanical engineering, or electrical engineering, that there are always trade-offs. You can emphasize and optimize one thing, but always at the cost of diminishing something else; you cannot optimize everything. So we will generate and create multiple kinds of helpers to do all different kinds of things. The one most optimized for, say, real-time Chinese-to-English translation may not be able to generate images very well, because it won't be optimized for that.
We can make a general, Swiss Army knife version of AI, but it's not going to be as good as any of the individual tools, just as a Swiss Army knife is an okay thing but not a very good knife, not very good scissors, not a very good file. It's just okay at all of those overall.
So, we will make AI that can exceed us in certain dimensions.
We know that we can make them creative. Creativity is not something exclusive to humans; we can make them creative, but it will be a different kind of creativity, and there are going to be multiple varieties of creativity as we come to understand it and replicate it in machines. Those creative machines will be used for certain tasks, and using their creativity will mean there's some other dimension that can't be optimized and may be less than human. We are going to fill our lives with all different varieties of AI to do different tasks. Some will be tasks we find very difficult; some will be tasks we can do but just don't want to.
I think that's very exciting. I don't think it's something we need to worry about, because these minds are not like us, and that's their main benefit: they don't think like us.
Zou: But if such AI, or different kinds of AI, gains more and more capability beyond humans, will that eventually lead to a world out of control?
Kevin: Let's imagine we're talking about machines. Do you think we can make a machine so powerful that it will be beyond humans?
Zou: Already. For example, the recent launch of Starship means its power is already beyond ours.
Kevin: But it's not as agile as a human; it has not exceeded humans in all dimensions. Can we make a machine that excels humans in every dimension?
Zou: Not yet.
Kevin: A mechanical machine that can excel our bodies. Can we make a machine that is superior to our bodies in every way? One of the powerful things about us is that we run on one quarter horsepower; we are very efficient machines. The rocket is not a very efficient machine, so it has not exceeded us in efficiency. Can we imagine making a machine that would exceed the human body in every dimension?
No, we can't, because of the engineering trade-off. We can't make something that's as efficient as us, as flexible as us, as adaptable as us, as fast as us; whatever machine you make is going to involve a trade-off. And that's true of intelligence, which is what I'm saying.
An ant can excel us in certain dimensions: the power-to-weight ratio of an ant, its ability to lift things relative to its weight, is far beyond what a human can do. Ants are more powerful than us in that aspect. This is what I'm saying: intelligence is not a single dimension; it's multiple things. Certain AIs will exceed us in certain dimensions, but no individual machine will exceed us in all dimensions.
"We know that it doesn't take a human level of intelligence to have self-consciousness."
Zou: That leads to another interesting question. Nowadays, when we chat with ChatGPT or GPT-4 and ask who it is, it always answers in a modest way: "Hey, I'm just a language model. I cannot do this, I cannot do that." But could it one day have self-consciousness and say: "Hey, I can make myself better"?
Kevin: Of course. Self-consciousness, I have to say, is also not a single dimension; there will be multiple varieties of it. We know that gorillas have some degree of self-awareness, and so do elephants. And there's probably more than one variety of self-awareness and consciousness. For many of the jobs we want done, we don't want consciousness in the AI; for most jobs, consciousness is a distraction. We will advertise AI as being consciousness-free. When you have an AI driving your car, you don't want it to be self-conscious. You don't want it to be distracted; you want it to be focused only on driving. So consciousness is a liability for most AI; it's not necessary.
But there will be times when we want consciousness in AI, and again, there will be multiple varieties, multiple levels, and multiple amounts of it that we will give it, and we can do that for different reasons.
We will also program different emotions into them. So we are very likely to have different kinds of emotional relationships with these AIs. We already have very strong emotional bonds with pets and animals, and an animal doesn't have to be very intelligent for us to bond with it. So we'll have different kinds of attachments and relationships with AIs, because we will give them emotions that mirror ours, that reflect and in some ways encourage our own emotional bonding. And whether or not we are legally allowed to turn them off, there will be strong human emotions about turning them off or not.
So again, we can engineer these, and I think we'll have many varieties of these kinds of machines and AI. Some will be very low in consciousness but very high in emotion; some will be incredibly logical and not very sociable.
Zou: Yeah, I mean, people form emotional ties to little things. There used to be an electronic pet, a little chicken in a watch-like device, the Tamagotchi, I think in the 1990s. And then people got attached to video games.
Kevin: And automobiles; people give them names. So it's going to be very easy. For sure, we will have people attached to their favorite digital assistant that follows them around, gets to know them very well, and maybe understands them; they can talk to it and it gives advice. And, you know, we might feel bad about turning them off. Already, people like myself are very polite to the AI, and what we have found, actually, is that if you're polite to ChatGPT, it performs better.
So emotional things will come into play, and they will matter to people; we will engineer them to matter. Other people may not care about the emotional side; they just want the logic, so they will choose kinds of AI that don't have a lot of emotion. It will be like a marketplace of many varieties, hundreds of different varieties, and you'll pick the one that works best with you.
Zou: Let's push it one step further. We know that for humans and animals, a sign of self-consciousness is what happens when you see yourself in a mirror. When a chimpanzee or a toddler sees himself or herself in a mirror, some animals think it's another animal, while some eventually realize: hey, that's me. So if we apply the mirror test to ChatGPT, for example, if one ChatGPT instance talks to another, would it realize: hey, that's just another instance of me? And I wonder what would happen after that.
Kevin: I think it's very likely that will happen. One of the things we know is that it's very hard to understand what happens inside the black box of these large language models. We don't really understand exactly how they produce their results, and we don't understand at all how they produce a particular result. That makes us uncomfortable and makes them hard to improve. But what we're finding is that you can take one AI to observe, probe, investigate, or monitor another AI, so one AI is watching the other AI deep inside. We're saying: your job is to try to understand this AI. And that relationship, having one AI observe deep inside another AI, is the beginning of consciousness.
Consciousness gives us access to our own thinking. So as we add more and more AIs to try to understand an AI, we will be making a variety of consciousness. It's then very possible that a system of several AIs recursively observing each other could encounter another such system and say: hey, I recognize that. As you said, just as a gorilla might recognize itself in the mirror. So we know it doesn't take a human level of intelligence to have self-consciousness. It's very possible that these things could exist.
Zou: After an AI can say who it is, the next thing is: can I make myself better? Can I train myself? Can I grab more data? Even though the master, or the prompt, hasn't told the AI to do that, the AI does what it thinks is better for itself, not for the users or the researchers.
Kevin: Right. We already have that right now. There's something called AutoGPT. It's a very basic, rudimentary way to cycle through a set of ordered steps and keep going. If you add the requirement that each cycle should be an improvement, and then you have a recursive loop inside where it's aware of itself, you could see a way in which the AI is given the task, the assignment, of trying to make itself better.
So how does it make itself better? Well, in the beginning it's going to be very limited. It could ask the internet: how do I make myself better? The answer may be: all you need is another data farm, and you need access to it. We know that getting access to a data farm is a very difficult assignment even for humans, so it will be hard for an AI too. And we humans would probably try to stop it during that period.
So I think the idea that it could run away, avalanche, or snowball out of our control is very unlikely, because it requires so many steps. Even if you gave a small team of the best human engineers in the world the task of trying to design this, it would not happen. It's just so unlikely that this would happen by itself.
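The loop Kevin describes, an agent cycling through ordered steps with a self-improvement check, can be sketched roughly as follows. This is a hypothetical toy, not the actual AutoGPT code: `plan`, `act`, and `critique` are stand-in functions where a real agent would make model and tool calls.

```python
# Toy sketch of an AutoGPT-style loop (hypothetical; the real project differs).
# The agent repeatedly plans a step, executes it, then critiques the result;
# the critique decides whether another improvement cycle is worthwhile.

def plan(goal, history):
    """Stand-in for a model call that proposes the next action."""
    return f"step {len(history) + 1} toward: {goal}"

def act(action):
    """Stand-in for executing the action (a tool call, search, etc.)."""
    return f"result of ({action})"

def critique(result, budget_left):
    """Stand-in for a second model pass that judges the result.
    Here it simply keeps iterating while budget remains."""
    return budget_left > 0

def self_improving_loop(goal, max_steps=5):
    history = []
    for step in range(max_steps):
        action = plan(goal, history)
        result = act(action)
        history.append((action, result))
        if not critique(result, max_steps - step - 1):
            break  # critique says no further improvement is worth pursuing
    return history

trace = self_improving_loop("summarize this interview", max_steps=3)
```

Note that even in this sketch, every cycle depends on an external `act` step, which is exactly where Kevin argues humans can observe and intervene before anything snowballs.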
"You can't do everything by yourself no matter how smart you are. "
Zou: In the past, we had AI on a desktop computer. Once we realized it was doing something beyond our control, we could first unplug the network cable and then unplug the power cable to stop it. But now some AI lives in the cloud, so there's no cable to unplug. If you want to turn it off, it might refuse, like HAL 9000 in 2001: A Space Odyssey.
Kevin: I think there are thousands of steps between now and a black box in the cloud that has access to all the resources in the world. As I said, if you gave a bunch of humans the assignment to design this, it would be very hard to do. Hackers can get access to something and do a little bit of mischief, but even they have a really difficult time breaking the security of everything. We have security on systems for a reason.
Can AI get through all of those at once without us knowing about it? It's possible, but it's very unlikely that we would not know about it. And once it was evident that that's what it was trying to do, we would immediately make sure it doesn't happen again. I think there's an overestimation of the value of intelligence. People overrate intelligence in getting things done. It's not the smartest person who gets the most done.
In other words, if you put a really smart person and a tiger in a cage together, who's going to live and who's going to die? Not the smartest one. Smartness and intelligence are only one ingredient in accomplishing things. Just because an AI is smarter than us doesn't mean it will necessarily prevail, because it needs tools, it needs access to things, it needs collaboration.
There are so many things needed to accomplish things. But the people who like to think about things believe that all you have to do is think, that really high intelligence can conquer anything. That's not true.
You need energy, tools, time, consensus, and collaboration to do very complicated things. You can't do everything by yourself no matter how smart you are. There's a Hollywood trope of the Tony Stark figure, the smart villain all by himself up there, so smart that he can figure everything out just by thinking about it and so control the world. That is fiction and fantasy. Smartness alone is not enough to gain control over the things of the world. You need collaboration, consensus, strength, grit; you need more than just intelligence.
Zou: Great. You mentioned Hollywood. If we asked you to pick a current movie, which one best describes the world in 5,000 days? Terminator? The Matrix?
Kevin: There is none. All the movies about the future are dystopias, futures that don't work, that you wouldn't want to live in, because that makes a good story. In some ways, the future in 5,000 days will be very boring. It's not going to be exciting. Most of the AI in 5,000 days will be invisible. You won't see it; it'll be in the back offices. It'll be like plumbing: it works, and you don't see it unless it's turned off. It's infrastructural. Most AI will not be forward-facing. You'll interact with some of it, but most of it will work in ways you don't even see. And that is boring. It's like electricity.
Zou: You mean people take it for granted, saying: "Hey, isn't this how it's supposed to be?"
Kevin: Yeah, right. And that's where most of the AI work is going to be done. Right now the big bang, the big excitement, is not actually due to recent advances in AI, which are really not that much greater than they were a few years ago. The major advance we're celebrating right now is that we have a new user interface to it: conversation. We have a conversational interface. This reminds me of the early days of the web. The internet had been around for a decade or two and nobody paid any attention to it; it was considered very fringe, never going to be mainstream. But then the graphical user interface came, the web, and with a bang it exploded across the whole world. There was so much excitement; everyone wanted to get it, see it, and use it.
That's exactly what's happening with AI. AI has been around for decades and could do all kinds of language tricks. But now we have a conversational user interface. You can have a dialogue, not just a one-shot exchange like Siri or Alexa. Now we have a conversation that's very human-like and very natural to us. That's the great excitement, because now we can apply that conversational interface to everything, not just to AI but to a car, a TV, a refrigerator, whatever it is: being able to have a conversation, ask it something, and have it respond. It's a user interface. And it will become boring in another 5,000 days.
Zou: Well, 5,000 days ago, I think, is around the time more and more people started using the iPhone. Nowadays everybody assumes everyone else has a smartphone. You don't even mention that there's a new app in the App Store. I think we've already passed that; nobody cares anymore.
Kevin: Right. It's kind of boring. Another iPhone is announced. Okay. I expected that. That's not surprising.
"My guess is that politics in 5,000 days will be very similar to what it is right now."
Zou: In your book (I understand the interview happened around 2019), you talked a lot about the mirror world and digital twins. We're now about three years after that interview. I've used Microsoft HoloLens and was quite amazed by it, but only a few thousand people use it. It never really "crossed the chasm," as we say, the way conversational AI has; it has stayed nerdy, used only by very tech-savvy people. How come it hasn't spread and permeated like the iPhone? Does it need an iPhone moment?
Kevin: Yes, it does. As you know, there were cell phones for many years, and flip phones for many years, and they never quite took off. There were even smartphones before the iPhone. But the iPhone got the technology right. There were finally touchscreens that were hard enough, fast enough, intuitive enough. The batteries were small enough to fit in your pocket. Everything came together, and the technology was finally good enough.
Right now, for smart glasses, it's not good enough. The field of view is not wide enough, the contrast is not good enough, and the batteries are too heavy and can't fit in the glasses. The technology just has not reached the iPhone level you would need to make smart glasses a common, ubiquitous consumer technology. And I'm not sure the thing Apple has is going to work in its first version; the first version of the iPhone didn't really work either. So it's still waiting for a technological leap to bring it all together. I thought VR and this technology would arrive very quickly after I first saw it in 1989.
Kevin: I saw VR in 1989, and the quality of even, say, Magic Leap, just to give an example, or HoloLens, which I have seen, is not that much better than it was in 1989. The only difference is that it's several million times cheaper. In 1989 it cost millions of dollars to make that headgear; now you can make it with iPhone technology for a couple hundred dollars. But it's actually not that much better; it hasn't become a million times better. And that's what we need: the quality of the experience has to be hundreds of times better than it is right now before it's going to work. So I think it could be up to ten years away, maybe still within the 5,000 days but not in the next couple of years, before we can technologically reach the moment where it has the right quality.
Zou: But we are approaching that, closer and closer, on different fronts, like the technology and the materials.
Kevin: And there's also an educational aspect that will be needed. For these glasses to work, they have to have cameras that face out into the world. The first version of that, Google Glass, was roundly rejected because it had cameras, because it was recording. So there's an educational aspect of teaching consumers how we do that: how is it appropriate, what are we scanning, all kinds of things.
There's a cultural level that is not going to be solved in the first version. We're going to have to go through several versions to figure that out, the way we did with cell phones. One of the reasons smartphones were finally accepted is that they didn't ring; they vibrated.
It could have become a mess: imagine if everybody had one and they were ringing all the time, with people yelling into them. It just wouldn't have worked. So there was a cultural innovation: you add a vibration motor so people can keep the phone in a pocket or on a wrist, and you have earbuds to hear it.
And there are other cultural innovations that also have to take place with smart glasses to help us figure out where we place them in our lives, because they face outward, and also inward to see our faces and expressions. The amount of data being collected is way beyond anything we collect now.
Zou: Yes, that makes so many people very uncomfortable.
Kevin: Right. We're in a surveillance state, and we have to collectively figure out what we think about that level of tracking. I think that's also part of what has to change before these are as ubiquitous as phones are today.
Zou: Yeah, great. We talked about society. Sometimes society is quite resistant to new advancements in technology, but people working in the IT industry, like me, always think technology will do good for society, that new technology will make everybody happier and the environment better. But in tech-savvy cities like San Francisco, there are still many social problems, like large numbers of homeless people and very bad sanitary conditions. So how come the tech industry hasn't improved the infrastructure? How come the advancement of technology couldn't help, or hasn't helped enough, to make a big city like San Francisco a much better one?
Kevin: It's a really good question, and I don't think we have a good answer. The problem of homelessness is very hard to eliminate in a democracy. So the answer is: we haven't figured it out yet, and it is embarrassing and not acceptable, maybe even outrageous in a certain sense.
But San Francisco high tech happens in the larger theater of American politics and American culture. A large percentage of the people who are homeless have a drug problem, so you're immediately beyond just tech issues.
You have national drug and mental-health issues, which are not what tech people normally think about and not something they necessarily have control over. So this is a much more complicated project and problem. So far, we haven't found many easy answers, because there are a lot of things upstream of it: expensive housing, mental-health issues, rampant drug use.
There are lots of things. So I think it's something people who live in San Francisco are very unhappy about, even as they apply their own intelligence to it. It shows that it's just not enough to be smart and intelligent; you have to have other things to get things done and changed, even though some of the smartest people live here.
Being smart is not enough. You need all kinds of things to make things happen. You need to be persuasive. You need to have empathy. There are just lots of other things. The people in Silicon Valley are very smart and can figure lots of things out, but it'll be a while before we figure out the homelessness issue, which is the primary issue in San Francisco. It's not the only one, but it's the most visible.
Zou: A lot of social issues may have a technological component where a small problem could easily be solved, but there are political reasons behind them. Sometimes people say that with technology everybody can now vote, that we could vote on every issue, that it could enhance our democratic system. In your book you picked 5,000 days. So 5,000 days from now, the current runners for the presidency will be gone. Would anything be different? Would we have a much better system 5,000 days later?
Kevin: The American political system in 5,000 days? I don't think it changes that fast, unless something really crazy happens. Trump is capable of doing some really horrible and crazy things, so there could be something crazy, like January 6; that would have been crazy, and we might have changed things for better or worse. But generally, political systems are very slow to change, and I don't see the political system or political power changing in that time. We have these pace layers, and this is a much deeper, slower-changing layer.
Zou: But we can see that technology is just a small factor in our society, even though we talk a lot about AI.
Kevin: Even if AI gets involved in misinformation, which is a concern many people have, that you'd have AI-generated misinformation in elections and so on, I don't see it playing a big part, because everybody is now very aware of that fact, and there is great suspicion. Now when we see something, we ask: is that real, or maybe it's fake? So there's a lot of mistrust, which I do think is not good for society.
But I think we are learning very quickly to overcome that. And I'm not sure, even if it were rampant, how it would change politics. I could be wrong about that. So my guess is that politics in 5,000 days will be very similar to what it is right now.
"But the 10% are going to be amplified by the AI so you need to work on those 10%. "
Zou: Interesting. But let's switch gears to something everybody in society agrees on: health. Everybody deserves a more affordable health system, and the advancement of AI and gene technology could help people live better. So in 5,000 days, what do you think will happen in health care?
Kevin: I do think there'll be a lot of changes in 5,000 days, and not just in the US. The promise of AI in medicine is tremendous, primarily on the front end. There will also be lots of uses on the back end for research: trying to identify new molecules, doing searches for molecules, having it generate promising new molecules, and maybe speeding up some of the testing with simulations.
But the major role I think AI will have in health care is its ability to partner. There are two things. One is to partner with human doctors. Right now, human doctors use Google all the time; it's impossible to master all the knowledge of medicine. AI can assist them tremendously in finding out what's going on, in diagnosis, in treatment, and all kinds of things.
And that is powerful. It's not going to replace the human doctor, because the doctor can still do so many things an AI doctor can't, including knowing how to communicate with people. The best doctor is a human doctor plus an AI doctor, working as a team. And this is the important part: even though an AI doctor is not as good as a human doctor, or a human plus an AI doctor, an AI doctor is better than no doctor.
Kevin: There are billions of people around the world who have a smartphone but don't have access to a doctor. This gives us the capability of serving those people with pretty good doctoring. They can have access to real, useful, effective health advice tailored to them individually, at a scale that was never even thinkable before. Of course, they still need medicines and all kinds of other things. But that is revolutionary for those people.
In the US there are boutique doctors and boutique health services. But here's the amazing thing: even though those doctors will come to your house any time, day or night, and will talk to you on the phone, 95% of the transactions with the doctor happen over text and sent images and pictures.
So people actually prefer to interact with the doctor over text and images, which an AI can do. Having an AI doctor is not a second-best interaction; it's actually the preferred interaction. That means an AI that can understand a picture and text, and communicate back to you with pictures and text, is revolutionary for world health.
Zou: Back to a personal level. We're seeing the emergence of what in Chinese we call "a hundred flowers blooming" of different AIs. Some students will wonder: does it still make sense for me to learn programming? Does it still make sense to learn how to write a good blog?
Kevin: Well, to some extent, yes. It's like calculators: even though we have calculators, it still makes sense to learn how to add numbers. Should you learn programming? Yes, you should understand the basics of how programming works.
What most of the programmers I've met are winding up doing is working with the AI. But in order to work with the AI, to have it help you program, you have to know a little bit about programming yourself. And the more you know about programming, the more you can use the AI to help you. Because the most powerful agent is not the AI alone nor the human alone; we know that the AI plus the human together is better than either one alone.
So in order to have that partnership, you have to master enough to make the most of it. You want to have some knowledge of programming, but you may not need to be an expert in Python or Ruby or C++. You want enough that you can become really good at partnering with the AI, because the AI is not sufficient by itself.
The AI has been trained on the average human, so it's not the world's best programmer, nor the worst; it's average. Humans have to help it become better than average, so you have to push it and work with it. It's like an intern: we have to check its work, and it's not always correct. You need to know enough to be able to elevate the AI and make it useful.
Zou: Actually, one programmer had a very insightful discovery after he reluctantly tried AI-aided coding. He said: the AI is so powerful that it reduced 90% of my skills to zero, but it amplified 10% of my skills a thousand times. The important thing is to know and to have that 10%, so that you can leverage the AI.
Kevin: Exactly. I really like that; it's perfect, and it's a good way to remember what we're doing. 90% of what you're doing is going to be worthless, but the 10% is going to be amplified by the AI, so you need to work on that 10%. And you need to find that 10%; otherwise, all you have is the 90%.