
  • The algorithm discovered the easiest way to grab people's attention and keep them glued to the screen is by pressing the greed or hate or fear button in our minds.

  • The people who manage the social media companies, they are not evil, they just really didn't foresee.

  • This is the problem.

  • I mean because we don't know if they really have consciousness or they're only very very good at mimicking consciousness.

  • In the Hollywood scenario, you have the killer robots shooting people.

  • In real life, it's the humans pulling the trigger but the AI is choosing the targets.

  • I think maybe the most important thing is really to understand what AI is because now there is so much hype around AI that it's becoming difficult for people to understand what is AI.

  • Now everything is AI.

  • So you know, your coffee machine is now an AI coffee machine and your shoes are AI shoes.

  • And what is AI?

  • You know the key thing to understand is that AIs are able to learn and change by themselves, to make decisions by themselves, to invent new ideas by themselves.

  • If a machine cannot do that, it's not really an AI. And AI is therefore, by definition, something whose development and evolution we cannot predict, for good or for bad.

  • It can invent medicines and treatments we never thought about but it can also invent weapons and dangerous strategies that go beyond our imagination.

  • You characterize AI not as artificial intelligence but as alien intelligence.

  • You give it a different term.

  • Can you explain the difference there and why you've landed on that word?

  • Traditionally the acronym AI stood for artificial intelligence but with every passing year AI becomes less artificial and more alien.

  • Alien not in the sense that it's coming from outer space, it's not.

  • We create it.

  • But alien in the sense it analyzes information, makes decisions, invents new things in a fundamentally different way than human beings.

  • And artificial is from artifact.

  • It gives us the impression that this is an artifact that we control, and this is misleading, because yes, we design the baby AIs.

  • We gave them the ability to learn and change by themselves and then we release them to the world and they do things that are not under our control, that are unpredictable.

  • So in this sense they are alien, not in the sense that they came from Mars.

  • As I said earlier, AIs can make decisions. They are not just tools in our hands, they are agents creating new realities.

  • So you may think okay this is a prophecy for the future, a prediction about the future, but it's already in the past because even though social media algorithms they are very very primitive AIs, you know the first generation of AIs, they still reshaped the world with the decisions they made.

  • In social media, Facebook, Twitter, TikTok, all that, the ones that make the decision what you will see at the top of your news feed or the next video that you'll be recommended, it's not a human being sitting there making these decisions.

  • It's an AI, it's an algorithm.

  • And these algorithms were given a relatively simple and seemingly benign goal by the corporations.

  • The goal was increase user engagement, which means in simple English make people spend more time on the platform, because the more time people spend on TikTok or Facebook or Twitter or whatever, the company makes more money.

  • It sells more advertisements, it harvests more data that it can then sell. This is the goal of the algorithm.

  • Now engagement sounds like a good thing, who doesn't want to be engaged?

  • But the algorithms then experimented on billions of human guinea pigs and discovered something, which was of course discovered even earlier by humans, but now the algorithms discovered it.

  • The algorithms discovered that the easiest way to increase user engagement, the easiest way to grab people's attention and keep them glued to the screen, is by pressing the greed or hate or fear button in our minds.

  • You show us some hate-filled conspiracy theory and we become very angry, we want to see more, we tell all our friends about it, user engagement goes up.

  • And this is what they did over the last 10 or 15 years.

  • They flooded the world with hate and greed and fear, which is why the conversation is breaking down.

  • These are kind of unintended consequences.

  • Like the people who manage the social media companies, they are not evil, they didn't set out to destroy democracy or to flood the world with hate and so forth.

  • They just really didn't foresee that when they give the algorithm the goal of increasing user engagement, the algorithm will start to spread hate.

  • But initially, when they started this whole ball rolling, they really didn't know.

  • And this is just kind of a warning of look what happens with even very primitive AIs.

  • And the AIs of today, which are far more sophisticated than in 2016, they too are still just at the very early stages of the AI evolutionary process.

  • We can think about it like the evolution of animals.

  • Until you get to humans, you have 4 billion years of evolution.

  • You start with microorganisms like amoebas, and it took billions of years of evolution to get to dinosaurs and mammals and humans.

  • Now, AIs are at present at the beginning of a parallel process. ChatGPT and so forth, they are the amoebas of the AI world.

  • But AI evolution is not organic.

  • It's inorganic, it's digital, and it's billions of times faster.

  • So while it took billions of years to get from amoebas to dinosaurs, it might take just 10 or 20 years to get from the AI amoebas of today to AI T-Rex.

  • Consumers, like all of us, we're being lured into trusting something so powerful we can't comprehend, and we're ill-equipped to kind of cast our gaze into the future and imagine where this is leading us.

  • Absolutely.

  • I mean, part of it is that there is enormous positive potential in AI.

  • It's not like it's all doom and gloom.

  • There is really enormous positive potential if you think about the implications for healthcare, that, you know, AI doctors available 24 hours a day that know our entire medical history and have read every medical paper that was ever published and can tailor their advice, their treatment to our specific life history and our blood pressure, our genetics.

  • It can be the biggest revolution in healthcare ever. Or think about self-driving vehicles.

  • So every year, more than a million people die all over the world in car accidents.

  • Most of them are caused by human error, like people drinking and then driving or falling asleep at the wheel or whatever.

  • Self-driving vehicles are likely to save about a million lives every year.

  • This is amazing.

  • You think about climate change.

  • So yes, developing the AIs will consume a lot of energy, but they could also find new sources of energy, new ways to harness energy that could be our best shot at preventing ecological collapse.

  • So there is enormous positive potential.

  • We shouldn't deny that.

  • We should be aware of it.

  • And on the other hand, it's very difficult to appreciate the dangers because the dangers, again, they're kind of alien.

  • Like if you think about nuclear energy, yeah, it also had positive potential, cheap nuclear energy, but people had a very good grasp of the danger, nuclear war.

  • Anybody can understand the danger of that.

  • With AI, it's much more complex because the danger is not straightforward.

  • The danger is really, I mean, we've seen the Hollywood science fiction scenarios of the big robot rebellion, that one day a big computer or the AI decides to take over the world and kill us or enslave us.

  • And this is extremely unlikely to happen anytime soon because the AIs are still a kind of very narrow intelligence.

  • Like the AI that can summarize a book, it doesn't know how to act in the physical world outside.

  • You have AIs that can fold proteins.

  • You have AIs that can play chess, but we don't have this kind of general AI that can just find its way in the world. It's hard to understand.

  • So what's so dangerous about something which is so kind of narrow in its abilities?

  • And I would say that the danger doesn't come from the big robot rebellion.

  • It comes from the AI bureaucracies.

  • Already today and more and more, we will have not one big AI trying to take over the world.

  • We will have millions and billions of AIs constantly making decisions about us everywhere.

  • You apply to a bank to get a loan, it's an AI deciding whether to give you a loan.

  • You apply to get a job, it's an AI deciding whether to give you a job.

  • You're in court or you're found guilty of some crime, the AI will decide whether you go to prison for six months or three years or whatever.

  • Even in armies, we already see now in the war in Gaza and the war in Ukraine, AIs make the decisions about what to bomb.

  • And in the Hollywood scenario, you have the killer robots shooting people.

  • In real life, it's the humans pulling the trigger, but the AI is choosing the targets.

  • I start thinking about like this, this bias I have around the originality of human thought and emotion and this kind of assumption that AI will never be able to fully mimic the human experience, right?

  • There's something indelible about what it means to be human that the machines will never be able to fully replicate.

  • And when you talk about, you know, information, the purpose of information being to create connection, a big piece there is intimacy, like intimacy between human beings.

  • So information is meant to create connection, but now we have so much information and we're feeling very disconnected.

  • So there's something broken in this system.

  • And I think it's driving this loneliness epidemic, but on the other side, it's making us value like intimacy, maybe a little bit more than we were previously.

  • And so I'm curious about where intimacy kind of fits into this, you know, post-human world in which culture is being dictated by machines.

  • I mean, human beings are wired for that kind of intimacy.

  • And I think our radar or our kind of ability to, you know, identify it when we see it is part of what makes us human to begin with.

  • Maybe the most important part.

  • I think the key distinction here that is often lost is the distinction between intelligence and consciousness.

  • That intelligence is the ability to pursue goals and to overcome problems and obstacles on the way to the goal.

  • The goal could be a self-driving vehicle trying to get from here to San Francisco.

  • The goal could be increasing user engagement.

  • And an intelligent agent knows how to overcome the problems on the way to the goal.

  • This is intelligence.

  • And this is something that AI is definitely acquiring.

  • In at least certain fields, AI is now much more intelligent than us.

  • Like in playing chess, much more intelligent than human beings.

  • But consciousness is a different thing than intelligence.

  • Consciousness is the ability to feel things, pain, pleasure, love, hate.

  • When the AI wins a game of chess, it feels no joy. If there is a tense moment in the game, it's not clear who is going to win.

  • The AI is not tense.

  • It's only the human player which is tense or frightened or anxious.

  • The AI doesn't feel anything.

  • Now there is a big confusion because in humans and also in other mammals, in other animals, in dogs and pigs and horses and whatever, intelligence and consciousness go together.

  • We solve problems based on our feelings.

  • Our feelings are not some kind of evolutionary decoration.

  • They are the core system through which mammals make decisions and solve problems.

  • So we tend to think that consciousness and intelligence must go together.

  • And in all these science fiction movies, you see that as the computer or robot becomes more intelligent, then at some point, it also gains consciousness.

  • It falls in love with the human or whatever.

  • And we have no reason to think like that.

  • Yeah.

  • Consciousness is not a mere extrapolation of intelligence.

  • Absolutely not.

  • It's a qualitatively different thing.

  • Yeah.

  • And again, if you think in terms of evolution, so yes, the evolution of mammals took a certain path, a certain road in which you develop intelligence based on consciousness.

  • But so far, what we see is that computers took a different route.

  • Their road develops intelligence without consciousness.

  • I mean, computers have been developing, you know, for 60, 70 years now.

  • They are now very intelligent, at least in some fields, and still have zero consciousness.

  • Now, this could continue indefinitely.

  • Maybe they are just on a different path.

  • Maybe eventually they will be far more intelligent than us in everything and still will have zero consciousness, will not feel pain or pleasure or love or hate.

  • Now, what adds to the problem is that there is nevertheless a very strong commercial and political incentive to develop AIs that mimic feelings, to develop AIs that can create intimate relations with human beings, that can cause human beings to be emotionally attached to the AIs.

  • Even if the AIs have no feelings of their own, they could be trained, they are already trained, to make us feel that they have feelings and to start developing relationships with them.

  • Why is there such an incentive?

  • Because intimacy is the most powerful thing that a human can have.

  • That intimacy is not a liability.

  • It's not something bad that, oh, I need this.

  • No, it's the greatest thing in the world.

  • But it's also potentially the most powerful weapon in the world.

  • If you want to convince somebody to buy a product, if you want to convince somebody to vote for a certain politician or party, intimacy is like the ultimate weapon.

  • Now it is possible technically to mass produce intimacy.

  • You can create all these AIs that will interact with us and they will understand our feelings because even feelings are also patterns.

  • You can predict a person's feelings by watching them for weeks and months and learning their patterns and facial expression and tone of voice and so forth.

  • And then, if it's in the wrong hands, it could be used to manipulate us like never before.

  • Sure, it's our ultimate vulnerability.

  • This beautiful thing that makes us human becomes this great weakness that we have because as these AIs continue to self-iterate, their capacity to mimic consciousness and human intimacy will reach such a degree of fidelity that it will be indistinguishable to the human brain and then humans become like these unbelievably easy to hack machines who can be directed wherever the AI, you know, chooses to direct them.

  • Yeah, it's not a prophecy.

  • We can take actions today to prevent this.

  • We can have regulations about it.

  • We can, for instance, have a regulation that AIs are welcome to interact with humans but on condition that they disclose that they are AIs.

  • If you talk with an AI doctor, that's good, but the AI should not pretend to be a human being.

  • I should know that I'm talking with an AI.

  • I mean, it's not that there is no possibility that AI will develop consciousness.

  • We don't know.

  • I mean, there could be that AIs will really develop consciousness.

  • But does it matter if it's mimicking it to such a degree of fidelity?

  • Does it even, in terms of like how human beings interact with it, does it matter?

  • For the human beings, no.

  • I mean, this is the problem.

  • I mean, because we don't know if they really have consciousness or they're only very, very good at mimicking consciousness.

  • So the key question is ultimately political and ethical.

  • If they have consciousness, if they can feel pain and pleasure and love and hate, this means that they are ethical and political subjects.

  • They have rights: you should not inflict pain on an AI, the same way you should not inflict pain on a human being.

  • And the other thing is, it's very difficult to understand what is happening.

  • If we want humans around the world to cooperate on this, to build guardrails, to regulate the development of AI, first of all, you need humans to understand what is happening.

  • Secondly, you need the humans to trust each other.

  • And most people around the world are still not aware of what is happening on the AI front.

  • You have a very small number of people in just a few countries, mostly the U.S. and China and a few others, who understand.

  • Most people in Brazil, in Nigeria, in India, they don't understand.

  • And this is very dangerous because it means that a few people, many of them not even elected by the citizens, will make these decisions.

  • They are just, you know, private companies.

  • They will make the most important decisions.

  • And the even bigger problem is that even if people start to understand, they don't trust each other.

  • Like, I had the opportunity to talk to some of the people who are leading the AI revolution.

  • And you meet with these, you know, entrepreneurs and business tycoons and politicians also in the U.S., in China, in Europe, and they all tell you the same thing, basically.

  • They all say, we know that this thing is very, very dangerous, but we can't trust the other humans.

  • If we slow down, how do we know that our competitors will also slow down?

  • Whether our business competitors, let's say here in the U.S., or our Chinese competitors across the ocean.

  • And you go and talk with the competitors, they say the same thing.

  • We know it's dangerous.

  • We would like to slow down to give us more time to understand, to assess the dangers, to debate regulations, but we can't.

  • We have to rush even faster because we can't trust the other corporation, the other country.

  • And if they get it before we get it, it will be a disaster.

  • And so you have this kind of paradoxical situation where the humans can't trust each other, but they think they can trust the AIs.

  • Because when you talk with the same people and you tell them, okay, I understand you can't trust the Chinese or you can't trust OpenAI, so you need to move faster developing this super AI.

  • How do you know you could trust the AI?

  • One of the things I heard you say that really struck me was this.

  • It's a quote.

  • If something ultimately destroys us, it will be our own delusions.

  • So can you elaborate on that a little bit and how that applies to what we've been talking about?

  • Yeah, I mean, the AIs, at least of the present day, they cannot escape our control and they cannot destroy us unless we allow them or unless we kind of order them to do that.

  • We are still in control.

  • But because of our, you know, political and mythological delusions, we cannot trust the other humans.

  • And we think we need to develop these AIs faster and faster and give them more and more power because we have to compete with the other humans.

  • And this is the thing that could really destroy us.
