Learn English with Rishi Sunak and Elon Musk. Join the fascinating conversation between British Prime Minister Rishi Sunak and tech visionary Elon Musk. They delve into critical topics like AI safety, the role of government in AI regulation, the impact of AI on labor markets, and its potential in education and companionship. Discover their insights on the need for global collaboration and the pursuit of truth in the ever-evolving world of artificial intelligence.

Transcript

Rishi Sunak: Good evening, everybody. Welcome. Elon, thanks for being here.

Elon Musk: Thank you for having me.

Rishi Sunak: We feel very privileged, we’re excited to have you. Right, so I’m going to start with some questions and then we’re going to open it up. Let me get straight into it. So Bill Gates said, there is no one in our time who has done more to push the bounds of science and innovation than you.

Elon Musk: Well, that’s kind of him to say.

Rishi Sunak: Yeah, well, that’s a nice thing to have anyone say about you. Nice coming from Bill Gates. But oddly enough, when it comes to AI, actually for around a decade, you’ve almost been doing the opposite and saying, hang on, we need to think about what we’re doing and what we’re pushing here and what do we do to make this safe and maybe we shouldn’t be pushing as fast or as hard as we are. Like, I mean, you’ve been doing it for a decade. What was it that caused you to think about it that way? And why do we need to be worried?

Elon Musk: Yeah, I’ve been somewhat of a Cassandra for quite a while, where I’d say we should really be concerned about AI, and people would be like, what are you talking about? They’d never really had any experience with AI. But since I’ve been immersed in technology for a long time, I could see it coming. But I think this year there have been a number of breakthroughs. I mean, the point at which someone can see a dynamically created video of themselves, you know, a video of you, or me, saying anything in real time. So there are the deepfake videos, which are incredibly good, in fact sometimes more convincing than real ones. And then obviously, things like ChatGPT were quite remarkable. And I saw GPT-1, GPT-2, GPT-3, GPT-4, you know, the whole lead-up to that.

So it was easy for me to kind of see where it’s going. If you just sort of extrapolate the points on a curve, and assume that trend will continue, then we will have profound artificial intelligence, and obviously, at a level that far exceeds human intelligence. So… I’m glad to see at this point that people are taking safety seriously. And I’d like to say thank you for holding this AI safety conference. I think actually, it will go down in history as being very important. I think it’s really quite profound. And I do think overall that the potential is there for artificial intelligence, AI, to have most likely a positive effect, and to create a future of abundance, where there is no scarcity of goods and services. But it is somewhat of the magic genie problem, where if you have a magic genie that can grant all the wishes, usually those stories don’t end well. Be careful what you wish for, including wishes.

Rishi Sunak: Yeah. So you talked a little bit about the summit. And thank you for being engaged in it, which has been great. People enjoyed having you there, participating in this dialogue. Now, one of the things that we achieved today in the meetings between the companies and the leaders was an agreement that external parties, ideally governments, should be doing safety testing of models before they’re released. I think this is something that you’ve spoken about a little bit. It was something we worked really hard on, because my job in government is to say, hang on, there is a potential risk here. Not a definite risk, but a potential risk of something that could be bad. My job is to protect the country. And we can only do that if we develop the capability we need in our Safety Institute, and then go in and make sure we can test the models before they are released. Delighted that that happened today. But what’s your view on what we should be doing? You’ve talked about the potential risk. And again, we don’t know, but what are the types of things governments like ours should be doing to manage and mitigate those risks?

Elon Musk: Well, I generally think that it is good for government to play a role when the public safety is at risk. So really, for the vast majority of software, the public safety is not at risk. If the app crashes on your phone or your laptop, it’s not a massive catastrophe. But when you’re talking about digital superintelligence, I think, which does pose a risk to the public, then there is a role for government to play to safeguard the interests of the public. And this is, of course, true in many fields, you know, aviation, cars. I deal with regulators throughout the world because of Starlink being communications, rockets being aerospace, and cars being vehicle transport. So I’m very familiar with dealing with regulators. And I actually agree with the vast majority of regulations.

There are a few that I disagree with from time to time, but it’s probably 1%, or less than 1%, of regulations that I disagree with. So there is some concern from people in Silicon Valley who have never dealt with regulators before, and they think that this is going to just crush innovation and slow them down and be annoying. And it will be annoying. It’s true. They’re not wrong about that. But I think we’ve learned over the years that having a referee is a good thing. If you look at any sports game, there’s always a referee, and nobody’s suggesting, I think, to have a sports game without one. And I think that’s the right way to think about this: for government to be a referee, to make sure there is sportsmanlike conduct and that public safety is addressed, that we care about public safety. Because I think there might be, at times, too much optimism about technology. And I say that as a technologist, so I ought to know. And on balance, I think that AI will be a force for good, most likely. But the probability of it going bad is not 0%. So we just need to mitigate the downside potential.

Rishi Sunak: And then how… you talk about a referee, and that’s what we’re trying to do.

Elon Musk: Demis is right there.

Rishi Sunak: Yeah, well, there we go. I mean, you know, and we talked about this and Demis and I discussed this a long time ago.

Elon Musk: We’re like literally facing right at him.

Rishi Sunak: And actually, you know, Demis,  to his credit and the credit of people in the industry, did say that to us. I think Demis said it’s not right that Demis and his colleagues are marking their own homework, right? There needs to be someone independent. And that’s why we’ve developed the Safety Institute here. Do you think governments can develop the expertise? One of the things we need to do is say, hang on, you know, Demis, Sam, all the others have got a lot of very smart people doing this. Governments need to quickly tool up capability-wise, personnel-wise, which is what we’re doing. I mean, do you think it is possible for governments to do that fast enough, given how quickly the technology is developing? Or what do we need to do to make sure we do do it quick enough?

Elon Musk: No, I think it’s a great point you’re making. The pace of AI is faster than any technology I’ve seen in history by far. And it seems to be growing in capability by at least five-fold, perhaps ten-fold per year. It’ll certainly grow by an order of magnitude next year. And government isn’t used to moving at that speed. But I think even if there are not firm regulations, even if there isn’t an enforcement capability, simply having insight and being able to highlight concerns to the public will be very powerful. So… even if that’s all that’s accomplished, I think that will be very, very good.

Rishi Sunak: OK. Well, hopefully we can do better than that.

Elon Musk: Hopefully, yeah.

Rishi Sunak: Yeah. No, but that’s helpful. Actually, we were talking before. It was striking. People here have spent their lives in technology, living Moore’s Law. And what was interesting over the last couple of days, talking to everyone who’s doing the development of this, and I think you concur with this, is that the pace of advancement here is unlike anything any of you have seen in your careers in technology. Is that fair? Because you’ve got these kind of compounding effects from the hardware and the data and the personnel.

Elon Musk: Yeah. I mean, the two… currently, the two leading centers for AI development are the San Francisco Bay Area and the London area. And there are many other places where it’s being done, but those are the two leading areas. So I think if… if the United States and the UK and China are aligned on safety, that’s all going to be a good thing. That’s where the leadership is generally.

Rishi Sunak: Actually, it’s interesting. You mentioned China there. So I took a decision to invite China to the summit over the last couple of days. And it was not an easy decision. A lot of people criticized me for it. My view is if you’re going to try to have a serious conversation, you need to. But what were your thoughts? You do business all around the world. You just talked about it there. Should we be engaging with them? Can we trust them? Is that the right thing to have done?

Elon Musk: If China is not on board with AI safety, it’s somewhat of a moot situation. The single biggest objection that I get to any kind of AI regulation or safety controls is, well, China’s not going to do it, and therefore they will just jump into the lead and exceed us all. But actually, China is willing to participate in AI safety. And thank you for inviting them. And I think we should thank China for attending. When I was in China earlier this year, my main subject of discussion with the leadership in China was AI safety, saying that this is really something that they should care about. And they took it seriously. And you are too, which is great. And having them here, I think, was essential, really. If they’re not participants, it’s pointless.

Rishi Sunak: It’s pointless. Yeah. No, that’s – and I think we were pleased. I think they were engaged yesterday in the discussions and actually ended up signing the same communique that everyone else did.

Elon Musk: That’s great.

Rishi Sunak: Which is a good start, right? And as I said, we need everyone to approach this in a similar way if we’re going to have, I think, a realistic chance of resolving it. I was going to – you talked about innovation earlier and regulation being annoying. There was a good debate today we had about open source. And I think you’ve kind of been a proponent of algorithmic transparency and making some of the X algorithms public. And actually, we were talking about Geoffrey Hinton on the way in. He’s particularly – he’s been very concerned about open source models being used by bad actors. You’ve got a group of people who say they are critical to innovation happening in that distributed way. Look, there’s probably no perfect answer and there’s a tricky balance. What are your thoughts on how we should approach this open source question? Or where should we be targeting whatever regulatory or monitoring work we’re going to do?

Elon Musk: Well, the open source algorithms and data tend to lag the closed source by six to 12 months. But given the rate of improvement, there’s actually therefore quite a big difference between the closed source and the open. If things are improving by a factor of, let’s say, five or more, then being a year behind is five times worse. So it’s a pretty big difference. And that might be actually an okay situation. But it certainly will get to the point where you’ve got open source AI that can do… that will start to approach human level intelligence or perhaps exceed it. I don’t know quite what to do about it. I think it’s somewhat inevitable that there will be some amount of open source and I guess I would have a slight bias towards open source because at least you can see what’s going on. Whereas closed source, you don’t know what’s going on.

Now, it should be said with AI that even if it’s open source, do you actually know what’s going on? Because if you’ve got a gigantic data file and… sort of billions of data points or weights and parameters, you can’t just read it and see what it’s going to do. It’s a gigantic file of inscrutable numbers. You can test it when you run it. You can run a bunch of tests to see what it’s going to do. But… it’s probabilistic as opposed to deterministic. It’s not like traditional programming where you’ve got… very discrete logic and the outcome is very predictable and you can read each line and see what each line is going to do. A neural net is just a whole bunch of probabilities. It sort of ends up being a giant comma separated value file. It’s like our digital god is a CSV file. Really? Okay. But that is kind of what it is.
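
To make Musk’s point here concrete, the following is a minimal toy sketch in Python with NumPy; the network shape, the file name, and the values are all invented for the illustration. It shows that a trained model is just huge arrays of numbers you could dump to a CSV file but not read any meaning from, and that its output is a probability distribution you sample from rather than logic you can step through line by line.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "trained model" is just arrays of floating-point numbers (weights).
# Real models have billions of them; these toy matrices stand in for that.
W1 = rng.normal(size=(512, 2048))
W2 = rng.normal(size=(2048, 1000))

# Dump a corner of the weights to CSV: perfectly readable, utterly inscrutable.
np.savetxt("weights.csv", W1[:4, :8], delimiter=",")

def forward(x):
    """Toy forward pass: numbers in, a probability distribution out."""
    h = np.maximum(0, x @ W1)          # ReLU activation
    logits = h @ W2
    p = np.exp(logits - logits.max())  # softmax
    return p / p.sum()

x = rng.normal(size=512)
probs = forward(x)

# The only way to learn what the model "does" is to run it and sample from it;
# the same distribution can give different outputs on different draws.
print(rng.choice(len(probs), p=probs))
print(rng.choice(len(probs), p=probs))
```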

Rishi Sunak: Now, that point you’ve just made is one that we have been talking about a lot, because, again, conversations with people who are developing this technology make the point that you’ve just made. It is not like normal software, where there’s predictability about how improving the inputs leads to a particular improvement in the output. As the models iterate and improve, we don’t quite know what’s going to come out the other end. I think Demis would agree with that, which is why I think there is this bias that we need to get in there while the training runs are being done, before the models are released, to understand what this new iteration has brought about in terms of capability, which it sounds like you would agree with. I was going to shift gears a little bit. You’ve talked a lot about human consciousness, human agency, which actually might strike people as strange, given that you are known for being such a brilliant innovator and technologist. But it’s quite heartfelt when I hear you talk about it and the importance of maintaining that agency in technology and preserving human consciousness.

Now, that kind of links to the thing I was going to ask, which is: when I do interviews or talk to people out and about in this job about AI, the thing that comes up most, actually, is probably not so much the stuff we’ve been talking about, but jobs. What does AI mean for my job? Is it going to mean that I don’t have a job, or that my kids are not going to have a job? Now, my answer as a policymaker, as a leader, is that AI is already creating jobs. You can see that in the companies that are starting. Also, the way it’s being used is a little bit more as a co-pilot, versus necessarily replacing the person. There’s still human agency, but it’s helping you do your job better, which is a good thing. As we’ve seen with technological revolutions in the past, clearly there’s change in the labor market and the number of jobs. I was quoting an MIT study today that they did a couple of years ago. Something like 60% of the jobs at that moment didn’t exist 40 years ago, so it’s hard to predict.

My job is to create an incredible education system, whether it’s at school or retraining people at any point in their career, because ultimately, if we’ve got a skilled population, then we ought to be able to keep up with the pace of change and have a good life. It’s still a concern. What would your observation be on AI and the impact on labor markets and people’s jobs, and how should they feel about that as they think about this?

Elon Musk: I think we are seeing the most disruptive force in history here. For the first time, we will have something that is smarter than the smartest human. It’s hard to say exactly when that moment comes, but there will come a point where no job is needed. You can have a job if you want to have a job for personal satisfaction, but the AI will be able to do everything. I don’t know if that makes people comfortable or uncomfortable. That’s why I say it’s like a magic genie that gives you any wishes you want, and there’s no limit. You don’t have that three-wish limit nonsense. You just have as many wishes as you want. It’s both good and bad.

One of the challenges in the future will be, how do we find meaning in life if you have a magic genie that can do everything you want? When there’s new technology, it tends to follow an S-curve. In this case, we’re going to be on the exponential portion of the S-curve for a long time. You’ll be able to ask for anything. We won’t have universal basic income; we’ll have universal high income. In some sense, it’ll be somewhat of a leveler or an equalizer, because really, I think everyone will have access to this magic genie. You’ll be able to ask any question. It’ll certainly be good for education. It’ll be the best tutor, the most patient tutor. There will be no shortage of goods and services. We’ll be in an age of abundance. I’d recommend people read Iain Banks. The Banks Culture books are probably the best envisioning of an AI future. In fact, not probably: they’re definitely by far the best envisioning. There’s nothing even close. So I’d really recommend Banks. I’m a very big fan. All his books are good. I’m not going to say which one; all of them. That’ll give you a sense of what is a fairly utopian or pro-utopian future with AI.

Rishi Sunak: Which is good, as you said, it’s universal high income, which is a nice phrase. It’s good in a materialistic sense, an age of abundance. Actually, that then leads to the question that you posed. I’m somebody who believes work gives you meaning. I think a lot about that. I think work is a good thing. It gives people purpose in their lives. If you then remove a large chunk of that, what does that mean? Where do you get that? Where do you get that drive, that motivation, that purpose? I mean, you were talking about it. You work a lot of hours.

Elon Musk: I do. As I was mentioning when we were talking earlier, I have to somewhat engage in deliberate suspension of disbelief because I’m putting so much blood, sweat, and tears into a work project and burning the 3 am oil. Then I’m like, wait, why am I doing this? I can just wait for the AI to do it. I’m just lashing myself for no reason. I must be a glutton for punishment or something.

Rishi Sunak: We’ll call Demis and tell him to hurry up, and then you can have a holiday. That’s the plan. Yeah. No, it’s a tricky thing, because part of our job is to make sure that we can navigate to that very, I think, largely positive place that you’re describing, and help people through it between now and then, because these things bring about a lot of change in the labor market, as we’ve seen.

Elon Musk: Yeah. I think it probably is generally a good thing because there are a lot of jobs that are uncomfortable or dangerous or tedious and the computer will have no problem doing that. They’re happy to do that all day long. It’s fun to cook food, but it’s not that fun to wash the dishes. The computer is perfectly happy to wash the dishes. We still have sports where humans compete in the Olympics. Obviously a machine can go faster than any human, but we still have humans race against each other and have these sports competitions against each other where even though the machines are better, we’re still, I guess, competing to see who can be the best human at something. People do find fulfillment in that. I guess that’s perhaps a good example of how even when machines are faster than us, stronger than us…

Rishi Sunak: We still find a way.

Elon Musk: We still enjoy competing against other humans to at least see who’s the best human.

Rishi Sunak: Yeah. That’s a good analogy. We’ve been talking a lot about managing the risks. Before we move on and finish on AI, let’s just talk a little bit about the opportunities. You’re engaged in lots of different companies, Neuralink being an obvious one, which is doing some exciting stuff. You touched on the thing that I’m probably most excited about, which is in education. I think many people will have seen Sal Khan’s TED talk from earlier this year; as you talked about, it’s like a personal tutor.

Elon Musk: Yeah, personal tutor. An amazing personal tutor.

Rishi Sunak: An amazing personal tutor. We know the difference that having a personalized tutor makes to learning; it’s incredible compared to classroom learning. If every child can have a personal tutor specifically for them that then just evolves with them over time, that could be extraordinary. For me, I look at it and I think, gosh, that is within reach at this point. That’s one of the benefits I’m most excited about. When you look at the landscape of things that you see as possible, what is it that you are particularly excited about?

Elon Musk: I think certainly AI tutors are going to be amazing, perhaps already are. I think there’s also perhaps companionship, which may seem odd because how can the computer really be your friend? If you have an AI that has memory and remembers all of your interactions and has read everything, you can actually give it permission to read everything you’ve ever done. It really will know you better than anyone, perhaps even yourself, where you can talk to it every day and those conversations build upon each other. You will actually have a great friend. As long as that friend can stay your friend and not get turned off or something. Don’t turn off my friends. I think that will actually be a real thing. One of my sons has some learning disabilities and has trouble making friends, actually. I was like, well, an AI friend would actually be great for him.

Rishi Sunak: That was a surprising answer. That’s actually worth reflecting on. That’s really interesting. We’re already seeing it, actually, in how we deliver psychotherapy, where we now do far more digitally and by telephone. It’s making a huge difference, and you can see a world in which AI can provide that social benefit to people. Just a quick question on X, and then we should open it up to everybody. You made a change, well, you made many changes.

Elon Musk: Yeah, quite a few.

Rishi Sunak: One of the changes goes into the space that we have to operate in: this balance between free speech and moderation is something we grapple with as politicians. You were grappling with your own version of that, and you moved away from a manual, human way of doing the moderation to community notes. I think it was an interesting change. It’s not what everyone else has done. It would be good to hear: what was the reasoning behind that, and why do you think that is a better way to do it?

Elon Musk: Part of the problem is that if you empower people as censors, then there’s going to be some amount of bias they have, and whoever appoints the censors is effectively in control of information. The idea behind community notes is, well, how do we have a consensus-driven, it’s not really censoring, but a consensus-driven approach to truth? How do we make things the least amount untrue? Perhaps you can’t get to pure truth, but you can aspire to be more truthful. The thing about community notes is it doesn’t actually delete anything. It simply adds context. Now, that context could be that this thing is untrue for the following reasons. Importantly, with community notes, everything is open source. You can see the software, every line of the software. You can see all of the data that went into a community note. You can independently create that community note. If you see manipulation of the data, you can actually highlight that and say, well, there appears to be some gaming of the system. You can suggest improvements. It’s maximum transparency.

Rishi Sunak: Combined with the kind of wisdom of the crowds and transparency to get to a better answer.

Elon Musk: Really, one of the key elements of community notes is that in order for a note to be shown, people who have historically disagreed must agree. There is a bit of AI usage here. There’s a populated parameter space around each contributor to community notes. Everyone’s got, basically, these vectors associated with them. It’s not as simple as right or left; it’s several hundred vectors, because things are more complicated than simply right or left. Then we do an inverse correlation and say, okay, these people generally disagree, but they agree about this note.

That gives the note credibility. That’s the core of it. It’s working quite well. I’ve yet to see a note that is incorrect actually stay up for more than a few hours. The batting average is extremely good. When I ask people, they’ll say, oh, they’re worried about community notes being disinformation, and I say, well, send me one. Then they can’t. I think it’s quite good. The general aspiration with the X platform is to inform and entertain the public and to be as accurate as possible and as truthful as possible, even if someone doesn’t like the truth. People don’t always like the truth.
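
The actual Community Notes scoring code is published openly by X; the sketch below is not that code, just a simplified Python illustration of the idea Musk describes, with invented rater vectors and a made-up threshold. Each rater carries a vector summarizing their past rating behaviour, and a note is surfaced only when raters whose vectors are negatively correlated, meaning they usually disagree, both mark it helpful.

```python
import numpy as np

def should_show_note(helpful_rater_vectors, threshold=-0.2):
    """
    Simplified illustration of the consensus idea described above.

    Each rater is represented by a vector summarising past rating behaviour
    (several hundred dimensions in the real system; a few here). A note earns
    credibility only if at least one pair of raters who historically disagree
    (negatively correlated vectors) both found it helpful.
    """
    for i in range(len(helpful_rater_vectors)):
        for j in range(i + 1, len(helpful_rater_vectors)):
            a, b = helpful_rater_vectors[i], helpful_rater_vectors[j]
            similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            if similarity < threshold:   # these two usually disagree...
                return True              # ...yet both rated this note helpful
    return False

# Two raters with opposed viewpoints both found the note helpful -> show it.
opposed = [np.array([1.0, -0.8, 0.3]), np.array([-0.9, 0.7, -0.2])]
print(should_show_note(opposed))      # True

# Only like-minded raters found it helpful -> not enough signal.
like_minded = [np.array([1.0, -0.8, 0.3]), np.array([0.9, -0.7, 0.4])]
print(should_show_note(like_minded))  # False
```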

Rishi Sunak: No.

Elon Musk: Not always.

Rishi Sunak: Yeah.

Elon Musk: That’s the aspiration. I think if we stay true to the truth, then I think we’ll find that people use the system to learn what is going on. I think actually truth pays. Assuming you don’t want to engage in self-delusion, then… I think it’s the smart move.

Rishi Sunak: It’s been a huge privilege and pleasure to have you here.

Elon Musk: Thank you. Thank you very much for having me.