MALE SPEAKER: So today we're here to see Jerry Kaplan.
He's co-founded four startups, two of which
have gone public-- serial entrepreneur, widely
respected as a technical innovator,
and a bestselling author.
In the interest of time though, I'm
going to abridge his long list of many accomplishments
and talk about a few things that I think will especially
interest you before he talks about things that especially
interest him.
So one especially interesting thing that he's done-- he
was the co-founder, alongside his friends Kevin Doren
and Robert Carr, of the GO Corporation in 1987.
They were pioneers in the work of tablet and stylus-based
computing, the precursors to the Apple Newton, the Palm Pilot,
and later smartphones and tablets of today.
He actually chronicled his time there
in a very interesting book called "Startup:
The Silicon Valley Adventure."
Some of you may have heard of this.
A fun little fact.
Omid Kordestani-- some of you may know him
as our former chief business officer, started
at Google in 1999-- got his start, actually, at Go Corp.
Jerry may talk about that.
I don't know.
It's possible.
Here he is today to talk about artificial intelligence
and the changing world of work and automation.
So we are here for "Humans Need Not Apply."
Give a warm welcome to Jerry Kaplan, everyone.
[APPLAUSE]
JERRY KAPLAN: Thanks, how's the mic?
Oh, that's good.
All right, well, a mentor of mine used to say never
give a talk the first time.
I want you to know I've put together
a special talk for you guys.
This is the first I'm giving it.
We'll see what happens.
I should leave some time.
I have lots of weird anecdotes about Google
that I will be happy to tell when I'm not on the camera,
as long as I'm not being recorded.
OK, so now for something completely different, as
they used to say on "Monty Python."
The common wisdom about artificial intelligence
is that we're building increasingly intelligent
machines that are ultimately going
to surpass human capabilities and steal our jobs
and maybe even escape human control
and take over the world.
So I'm going to present the case today
that that narrative is both misguided and
counterproductive-- that a more appropriate way to frame
this, which is really better supported
by actual historical and current events,
is that AI is simply a natural extension
of long-standing efforts to automate tasks that date back
at least to the start of the Industrial Revolution.
And then I want to talk about the consequences
if you think about it in that particular way.
But let me ask about the audience-- how many of you
are engineers?
OK, how many of you are not engineers?
Two.
How many people haven't raised their hand yet?
Nobody.
OK.
And that's called closure, right?
OK, and how many of you are doing anything
even vaguely related to AI?
Oh, not that many, OK.
Cool.
At least you won't admit it by the time I'm done with my talk,
I think.
OK, so let me start with a little bit of a history lesson.
I'm teaching Impact of Artificial Intelligence
at Stanford.
And much to my shock, the students
who studied artificial intelligence
don't know much about its history.
So here's a kind of irreverent view.
I'm going to start with an unorthodox history of AI.
Now, here's a news flash for you.
Science does not proceed scientifically.
So it's like the making of legislation and sausage.
Perhaps this is better done outside of the public view.
More than you might want to believe,
progress is often due to the clash of egos
and ideas and institutions.
You guys work in an institution.
I'm sure you see that occasionally.
And artificial intelligence is no exception,
so let me start right at the beginning.
Dartmouth College, 1956.
A group of scientists-- they got together
for an extended working session.
How many of you know who John McCarthy is?
Oh, man, OK.
He's a mathematician who was then employed at Dartmouth.
Now, he hosted this meeting along with-- raise your hand
if you know these guys-- Marvin Minsky.
Oh, more than John, OK.
He was then at Harvard.
Claude Shannon?
That's good.
You guys should know who Shannon is.
He was at Bell Laboratories.
And Nathaniel Rochester?
Probably no hands.
One hand.
Are you his son?
Sorry?
AUDIENCE: [INAUDIBLE].
JERRY KAPLAN: OK, there you go.
He was at IBM.
Now, here's what these guys had to say,
or John McCarthy had to say.
He called his proposal "A Proposal
for the Dartmouth Summer Research Project
on Artificial Intelligence."
Now, this was the first known use of the term artificial
intelligence.
But what's not commonly known is why did John McCarthy choose
that particular name?
He explained this later-- much later,
actually-- his motivation.
He said, "As for myself, one of the reasons
for inventing the term artificial intelligence
was to escape the association with cybernetics.
Its concentration on analog feedback seemed misguided,
and I wished to avoid having either
to accept Norbert Wiener as a guru
or having to argue with him."
Now, Norbert Wiener, as you may know,
was a highly respected-- Norbert Wiener?
Anybody?
Oh, my god.
OK.
Cybernetics.
Cybernetics.
Good, you've heard the term at least.
He was a highly respected senior mathematician
and a philosopher at MIT.
Now, while Wiener was all that, McCarthy, this guy,
was just a junior professor at Dartmouth.
So he didn't want to have to go up against the powers that be.
So to understand the original intention of the founding
fathers of AI, it's worth reading
some of the actual text of this conference proposal.
I think it's on the screen.
"The study is to proceed on the basis of the conjecture
that every aspect of learning or any other feature
of intelligence can in principle be so precisely described
that a machine can be made to simulate it.
An attempt will be made to find out
how to make machines use language,
form abstractions and concepts, solve
kinds of problems now reserved for humans,
and improve themselves."
It's 1950-- what was it, 1956?
"We think that a significant advance
could be made in one or more these problems if a carefully
selected group of scientists work on it together
for a summer."
Now, that's a pretty dubious agenda for a summer break.
Now, many of the Dartmouth conference participants
had their own view about how to best approach
artificial intelligence.
But John McCarthy's specialty was mathematical logic.
In particular, he believed that logical inference
was the key to, using his words, simulated intelligence.
That's what he thought AI was.
Now, his approach, skipping ahead quite a ways,
but his approach eventually became
known as what's called the physical symbol systems
hypothesis.
Anybody here heard of that?
One.
Good man, OK, you can take over for the rest of the talk.
Now, that was the dominant paradigm
in the field of artificial intelligence for the first 30
years or so after the Dartmouth Conference.
Now, here's John McCarthy.
I'm old enough to have known John McCarthy when
I was a postdoc at Stanford, where he founded the Stanford
Artificial Intelligence Lab.
Now, John was definitely a brilliant scientist.
He invented the programming language Lisp.
Good.
And he invented the concept of time sharing.
Not too many people know that.
But he definitely had the mad professor thing going.
Let's see if this works.
Almost.
I'm using somebody else's computer.
You know, he had the wild eyes and the hair.
The guy on the right, as you may know,
is Professor Emmett Brown, who invented the-- what is it?
The flux capacitor time machine.
How many people know the flux capacitor?
OK, good.
I'm just checking to make sure this talk works.
But I'm confident that John McCarthy, having met him,
never really expected that his clever name for this emerging
field was going to turn out to be one
of the great accidental marketing coups of all time.
It not only inspired generations of researchers,
including myself, but it spawned a virtual industry
of science fiction and Hollywood blockbusters and media
attention and pontificating pundits, also including myself.
Had he named the field something less arousing,
like logical programming or symbolic systems,
I doubt very many of us would have ever heard
of the field today.
The field would have just motored
along, automating various tasks, while we marvelled
at the cleverness not of the creations,
but of the engineers.
I'm getting a little bit ahead of my story.
In any case, McCarthy's hypothesis
that logic was the basis of human intelligence is, at best,
questionable.
Today, in fact, most AI researchers
have abandoned this approach and believe
it was just plain wrong.
The symbolic system approach has been almost entirely abandoned
in favor of generally what's now referred
to as machine learning.
How many people here are doing machine learning?
Good.
OK, or you certainly know about it.
But rejecting that old approach is throwing the baby out
with the bathwater.
Some truly important advances in computing
came out of symbolic systems, including
things like heuristic search algorithms, logical problem
solvers, game players, reasoning systems.
These were all the old approach.
And many of the results of all that work
are in wide practical use today.
For example, formulating driving directions-- I got
lost coming here.
I didn't know the difference between the express lane
and the regular lane.
I thought I was in the other one.
Take this exit.
No exit.
Laying out factories and warehouses,
proving that complex computer chips actually
meet their specifications-- this all uses early AI techniques.
And I'm sure that there are many more of these to come.
Now, did I mention machine learning?
It's certainly the focus of most current research,
and in some circles, at least where I am,
it's considered a serious candidate
for the real basis of human intelligence.
Now, my personal opinion is that while it's
a very powerful technology, and it's
going to have a very significant practical impact,
it's very unlikely to be the computational equivalent
of the human mind.
And whatever your view, you might
be surprised to learn a little more about where
the fundamental concepts that underlie what's
called the connectionist or neural network approach
to machine learning came from.
There are some other approaches, mainly in the statistical area.
So let's see.
Frank Rosenblatt, anybody heard of him?
Wow, OK, great.
I didn't until I started researching this.
Back in the late 1950s, John McCarthy
wasn't the only one interested in building
intelligent machines.
There was another highly optimistic proponent,
and that was Professor Frank Rosenblatt
at Cornell-- another competing prominent institution.
You've got Cornell, and you've got Dartmouth--
and lots of people at MIT.
And Rosenblatt was intrigued by some pioneering research
by psychologists Warren McCulloch and Walter Pitts
at the University of Chicago.
And McCulloch and Pitts had observed
that a network of brain neurons could
be modeled by, of all things, logical expressions.
So Rosenblatt got the bright idea
to implement their ideas in a computer program, which
he rebranded as a perceptron.
Anybody heard of perceptrons?
Oh, good.
Cool.
He built an early version of what today,
we would call a simple neural network.
This is the geekiest looking guy.
He looks like he's 12 years old.
That's him.
That's his actual photo cell array right there.
Now, he wasn't about to be outdone by McCarthy and Minsky,
so Rosenblatt heavily promoted his work in the popular press.
For instance, he was quoted in "The New York Times"
in 1958 saying that the machine he was going to build
would be the first device to think as the human brain.
In principle, it would be possible to build brains
that could reproduce themselves on an assembly line,
and which would be conscious of their existence.
This is 1958.
The article went on to call it the embryo
of an electronic computer that
will be able to walk, talk, see, write, reproduce itself, and be
conscious of its existence.
And here's what I love.
"It is expected to be finished in about a year
at a cost of about $100,000."
So much for the journalistic accuracy of "The New York
Times."
By the way, I'm usually debating John Markoff.
He's the science writer there.
We love to beat each other up.
I wish he was here.
He'd go crazy.
Now, that might seem a little bit optimistic,
given that Rosenblatt's demonstration included
only 400 photo cells connected to 1,000 perceptrons, which,
after 50 trials, was able to tell whether a card had
a square marked on the right side or on the left side.
That's what he could do.
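Since this is an engineering crowd, here is a minimal sketch of roughly what that demo amounts to in modern terms: a single perceptron trained with the classic mistake-driven update rule on the square-left-or-square-right task. The grid size, square size, and counts below are assumptions for illustration, not details of Rosenblatt's actual hardware.

```python
# A toy reconstruction of the demo: one perceptron learning to report
# whether a square sits on the left or right half of a small grid of
# "photo cells." Grid size, square size, and counts are all invented.
import random

SIZE = 20  # hypothetical SIZE x SIZE photo-cell array

def make_card(side):
    """Flattened binary image with a 4x4 square on the given half."""
    grid = [[0.0] * SIZE for _ in range(SIZE)]
    col0 = (random.randint(0, SIZE // 2 - 4) if side == "left"
            else random.randint(SIZE // 2, SIZE - 4))
    row0 = random.randint(0, SIZE - 4)
    for r in range(row0, row0 + 4):
        for c in range(col0, col0 + 4):
            grid[r][c] = 1.0
    return [cell for row in grid for cell in row]

def predict(weights, bias, x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else -1  # +1 = right, -1 = left

weights, bias = [0.0] * (SIZE * SIZE), 0.0
for trial in range(50):  # "after 50 trials"
    side = random.choice(["left", "right"])
    x, y = make_card(side), (1 if side == "right" else -1)
    if predict(weights, bias, x) != y:
        # Classic perceptron rule: update the weights only on mistakes.
        weights = [w + y * xi for w, xi in zip(weights, x)]
        bias += y

test_sides = [random.choice(["left", "right"]) for _ in range(100)]
correct = sum(predict(weights, bias, make_card(s)) == (1 if s == "right" else -1)
              for s in test_sides)
print(f"correct on {correct} of 100 fresh cards")
```

The point is how little machinery this takes: the learning rule is a few lines, and the task is linearly separable, which is exactly the narrow class of problems Minsky and Papert would later show single-layer perceptrons are limited to.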
Now, on a more positive note, and this
is also pretty remarkable, I can't help
but notice that many of his wilder
prophecies in the article have actually now become reality.
He went on to say-- listen to this closely-- remember, 1958.
"Later, perceptrons will be able to recognize people, call out
their names, instantly translate speech
from one language to speech or writing in another language."
He was right.
It only took 50 years longer than he predicted.
OK, now, Rosenblatt's work was well known to at least some
of the participants at that Dartmouth Conference.
In particular, he attended the Bronx High School of Science--
anybody here go there?
Not one, wrong coast-- with Marvin Minsky.
They were one year apart.
So they later wound up going to these different forums
and debating each other, promoting
their respectively favored approaches
to artificial intelligence.
Until in 1969, Minsky, who was by then at MIT-- remember,
one guy's at Cornell, the other guy's
at MIT-- along with a colleague,
Seymour Papert, published a book called
"Perceptrons," in which they went to pains to discredit, somewhat
unfairly I might add, a simplified
version of Rosenblatt's work.
Now, here's the way science really works.
Now, Rosenblatt was unable to mount a proper defense
for a very simple reason.
Anybody guess what it was?
He died in a boating accident in 1971, two years later.
He couldn't defend himself.
Now, the book, however, proved highly influential,
effectively foreclosing funding and research
on perceptrons and artificial neural networks
in general for more than a decade.
So after 50 years, which is better,
the symbolic systems approach or the machine learning approach?
The plain fact is both of these approaches
have different strengths and weaknesses.
In general, symbolic reasoning is
more appropriate for problems that
require abstract reasoning.
And machine learning, on the other hand,
is better for problems that require sensory perception
or extracting patterns from large collections
of noisy data.
So you might ask the question, why
was the symbolic approach dominant
in the last half of the 20th century and machine learning
is dominant today?
The answer is fairly simple-- the machines.
They are literally a million times faster and cheaper,
and they have a million times more memory at the same price
as they had back then.
That's a qualitative difference.
In the early days of AI, machines just
weren't powerful enough to automatically learn
anything of interest-- the square is on the right,
the square is on the left.
They had only a minuscule fraction of the processing
speed and a vanishingly small amount of memory
in which to store data compared to today's computers.
But most importantly, there simply
weren't many sources of machine readable
data available to learn from.
What were you going to learn?
Most communication at that time
was on paper.
And for real-time learning, the data from sensors
was equally primitive and only available, usually,
in an analog form that really resisted processing digitally.
So you had four trends-- improvement
in computing speed, memory, the transition
from physical to electronically stored data, and easier access
to large bodies of data.
God knows you guys know about that.
It's mainly due to the internet and low-cost, high-resolution
digital sensors.
I don't know how I came up with five, but I can't count.
These were the prime drivers-- never
give a talk the first time-- in the refocusing of efforts
from the symbolic reasoning approach to the machine
learning approach.
OK, there's a little bit of history for you.
Now let me get to the main issue.
Can machines think?
So what is artificial intelligence, really?
After a lifetime of work in this field
and a great deal of reflection on this question,
my reluctant and disappointing answer is simple.
No.
Or at least they can't think the way people think.
So far, at least, there's no obvious road map from here
to there.
Machines are not people.
And there's simply no persuasive argument
that they're on the same path to becoming
generally intelligent, sentient beings,
despite what you see in the movies.
Now, wait a minute, you might say.
Jerry, can't they solve all sorts of complex reasoning
and perception problems?
Sure they can.
They can perform tasks that humans
solve using intelligence.
But that doesn't mean that the machines are intelligent.
It merely means that many tasks that we thought
required general intelligence are in fact subject to solution
by other kinds of mechanical means.
Now, there's an old joke in AI, which
is that once an AI problem is solved, it's no longer AI.
Anybody heard that?
A couple of people, good.
Now, personally, I don't think that's any longer a joke.
I'm going to look at some of the signature
accomplishments of artificial intelligence
from this different perspective.
Let's start with computer chess.
Now, for decades-- most of you guys
weren't around to see this, but I
was-- the archetypal test of the coming of age of AI
wasn't the Turing test.
It was, could a machine ever beat
the world's chess champion?
For a long time, you see, chess was
considered the quintessential demonstration
of human intelligence.
So surely when a computer was world chess champion,
AI would have arrived.
That's it.
We'd have smart machines.
Well, it happened in 1997 when IBM's Deep Blue
beat the then champion, Garry Kasparov.
Lots of ink was spilled in the media lamenting the arrival
of super-intelligent machines.
There was a lot of hand-wringing over what this meant
for the future of mankind.
But the truth is it meant nothing other than that you
could do a lot of clever programming
and use the increases in speed of computers to play chess.
The techniques used have applications
to similar classes of problems.
But they hardly proved to be the harbingers of the robot
apocalypse.
So let me tell you what people said
after that non-event happened.
They said, OK, sure, computers can play chess.
But they'll never be able to drive a car.
This really was what happened.
That requires a broad understanding
of the real world-- the ability to make split-second judgments
in chaotic circumstances.
And, of course, common sense-- machines will never have that.
Well, as you know, this bulwark of human supremacy
was breached in 2004 with the DARPA Grand
Challenge for autonomous vehicles,
which are soon coming, if they're not here,
to a parking lot near you.
How many of you guys have taken a ride
in the Google self-driving cars?
What?
Oh, they should send one up here.
Have you been at least down to the Tesla dealership
to take a test drive?
I did that over the weekend.
The self-driving car was cool.
OK, now our self-driving cars do just that.
They drive cars.
They don't build houses.
They don't cook meals.
They don't make beds.
That's what they do.
So computers can play chess and drive cars.
But then they said-- people said,
but they could never play Jeopardy.
OK, well, that requires too much world knowledge
and understanding metaphors and clever wordplay.
Well, thanks again to the ingenious people at IBM,
this hurdle has also been cleared.
As undoubtedly you know, IBM's Watson system
beat Ken Jennings, the world Jeopardy champion in 2011.
Now, what is Watson?
The reality is it's a collection of facts and figures encoded
into cleverly organized modules that
can quickly and accurately answer
various types of common Jeopardy questions.
Watson's main advantage over the human contestants,
believe it or not, was that it could ring in
before they could when it estimated a high likelihood
that it had a correct answer.
I would love to go into this in more detail for you.
It turns out most of the Jeopardy champions
know the answers.
They're just not that fast.
And so the machine had numerous advantages.
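A toy model makes the buzzer point concrete. Assume, hypothetically, that the machine rings in at electronics speed whenever its confidence estimate clears a threshold, while a human who also knows the answer has ordinary human reaction time. All the numbers here are invented for illustration; they are not Watson's actual parameters.

```python
# Toy model of the buzzer advantage: the machine abstains when unsure,
# but when confident it buzzes far faster than any human can react.
import random

def machine_buzz(confidence, threshold=0.8, latency_ms=10):
    return latency_ms if confidence >= threshold else None  # abstain when unsure

def human_buzz_ms():
    return random.gauss(250, 50)  # rough human reaction time, in milliseconds

trials, wins = 10_000, 0
for _ in range(trials):
    m = machine_buzz(confidence=random.uniform(0.5, 1.0))
    if m is not None and m < human_buzz_ms():
        wins += 1
print(f"machine buzzes first on {wins / trials:.0%} of clues")
```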
It's a long-- it was kind of a magic show.
It's a wonderful accomplishment.
It's a really remarkable and very sophisticated
knowledge-based retrieval system and an inference system
that was honed, at least at that time,
to a particular problem set.
Now they're trying to apply it to lots of others.
Now, how many of you saw that, or pictures of it?
OK, now here's what bothers me.
Is this supposed to be animated?
OK.
Now, in my opinion, IBM didn't do the field of AI
any favors by wrapping Watson in a theatrical suite
of anthropomorphic features.
There's really no technical reason
to have a system say its responses in a calm, didactic
tone of voice.
Yes, Alex, the answer is such and such.
Much less to put up a head-like graphic of swirling lights,
suggesting that the machine had a mind
and was thinking about the problem.
These were incidental adornments to a tremendous technical
achievement.
Now, without a deep understanding
of how these systems work, and with humans
as the only available exemplars with which
to interpret the results, the temptation
to view them as human-like is really irresistible.
But they aren't those things.
OK, so let me give you a couple more interesting examples--
more contemporary, things that you're probably
more familiar with.
What about these machine learning systems?
Aren't they more like human intelligence?
Well, not really.
True, I could argue this for two hours here.
Lots of people sticking their hand up.
In reality, the use of the term neural networks is
little more than an analogy, in the same sense as saying
airplane design was inspired by birds.
It's in the same category.
Consider how machines and people learn.
You can teach a computer to recognize cats
by showing it a million images.
You guys know Andrew Ng?
He was at Google when he did that work.
You can show it a million images,
or you could simply point one out to a three-year-old
and get the same job done.
That's a cat.
Oh, that's it.
Now, from then on, the three-year-old knows what a cat is.
Obviously, humans and machines do not learn the same way.
And let me give you another interesting example.
Anybody here doing machine translation?
One Google site.
OK, I'm going into the lion's den in about two weeks.
I'm going to talk to the machine translation people,
among others.
Now, tremendous strides have been
made in this field in the past few years
mainly by applying statistical and machine learning
techniques to large bodies of concordant text.
But how do people perform this difficult task?
Think about how people do it.
They learn two or more languages,
along with the respective cultures and conventions.
Then they read some text in one language,
they understand what it says, and they render the meaning
as closely as possible in another language.
Now, machine translation, as successful as it is today,
bears no relationship to the human translation process.
Its success simply means there's another way
to approximate the same results.
It's mostly just concordances of text.
It doesn't relate to the way people solve that problem.
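To see how far this is from understanding, here is a toy sketch of the statistical idea: extract a crude word-for-word translation table purely from co-occurrence counts over aligned sentence pairs. The miniature corpus and the Dice-coefficient scoring are my illustrative choices; production systems are enormously more sophisticated, but the spirit-- statistics over concordant text, with no comprehension anywhere-- is the same.

```python
# Derive a crude word-level "translation table" purely from
# co-occurrence counts in a tiny hypothetical parallel corpus.
from collections import Counter, defaultdict

parallel = [
    ("the house is small", "das haus ist klein"),
    ("the house is big",   "das haus ist gross"),
    ("the house is old",   "das haus ist alt"),
    ("the book is small",  "das buch ist klein"),
]

src_freq, tgt_freq = Counter(), Counter()
cooc = defaultdict(Counter)
for en, de in parallel:
    src_freq.update(en.split())
    tgt_freq.update(de.split())
    for e in set(en.split()):
        for d in set(de.split()):
            cooc[e][d] += 1  # e and d appeared in aligned sentences

def best_translation(e):
    """Score candidates with the Dice coefficient, so frequent filler
    words like 'das' and 'ist' don't win on raw counts alone."""
    return max(cooc[e], key=lambda d: 2 * cooc[e][d] / (src_freq[e] + tgt_freq[d]))

for e in ["house", "book", "small", "big"]:
    print(e, "->", best_translation(e))
# house -> haus, book -> buch, small -> klein, big -> gross
```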
What do we learn from this?
We just didn't
think there was another solution, but there is,
besides having people understand the text.
Now, let me go on to one you are all carrying around.
You carry around smartphones.
They're reminiscent of the capabilities
of the computer on the Star Trek Enterprise-- "Star Trek?"
Everybody?
Good, OK.
I started talking about "The Jetsons"
in my class at Stanford, nobody knew what I was talking about.
What's that?
You know, Rosie and-- you know, OK.
That's called getting old.
So this is more like-- I lost complete track,
and I got the whole thing right in front of me.
Hey, Siri, you know?
You can talk to your phone, and it talks back.
It also becomes more capable every day
as you download new apps and upgrade the operating system.
So I'm using examples of [INAUDIBLE].
But do you really think of your phone
as getting smarter in the human sense
when you download an app or you enable voice recognition?
Certainly not in the same sense that you
get smarter when you learn calculus
or when you learn philosophy.
It's the electronic equivalent of a Swiss Army knife.
It's a bunch of different information processing tools
that are bound together into a single unit,
taking advantage of some commonalities, like detailed
maps, and like internet access.
Now, you have one integrated mind,
while your phone has no mind at all.
There's no one home.
So I've tried to make the case that machines perform
an increasingly diverse array of tasks
that people perform by applying their native intelligence.
Now, does that mean that machines are smart?
Well, now things get interesting.
Let's talk about how you might measure supposed machine
intelligence.
I pulled that picture off the internet.
So I didn't make it up.
That's part of the point I'm trying to make.
We can start by looking at how we measure human intelligence.
Now, a common method is with IQ tests, but even for humans,
this is a deeply flawed concept.
We love to measure and rank things with numbers.
But let's face it, reducing human intelligence
to a flat, linear scale is highly questionable.
Little Sally did two more arithmetic problems than Johnny
did in the time allotted, so her IQ is seven points higher
than his.
Bull.
But that's not to say that some people aren't smarter
than others-- only that simple numerical measures provide
an inappropriate patina of objectivity and precision.
As psychologists are fond of pointing out,
there are many different kinds of intelligence.
There's social and emotional, analytic, athletic, musical, et
cetera.
But what on Earth does it mean to say that Mozart and Einstein
have the same IQ?
Now, suppose we gave the same intelligence
tests to a machine.
Wow!
And it only took one millisecond to accurately complete
all of the sums that took Sally and Johnny an hour.
It must be super-smart.
It also outperforms all humans on memory tests,
logical reasoning tests, and god knows what else.
Maybe it can shoot straighter, read faster,
and can outrun the fastest human.
Oh my god.
Robots can outperform us.
What are we all going to do?
So are the robots taking over?
Are the robots taking over?
Of course, by the logic I just gave you,
machines took over a long time ago whether they are smart
or not.
They move our freight.
They score our tests.
They explore the cosmos.
They plant and pick most of our crops.
They trade stocks.
They store and retrieve our documents, as Jacob knows,
in petabytes.
They manufacture just about everything,
including themselves.
And sometimes they do it with human help,
and sometimes without human intervention.
And yet, they aren't taking over our businesses.
They aren't marrying our children.
They are not watching the SyFy channel when we're not around.
So what's wrong with the traditional picture of AI?
We can build machines and write programs and perform
tasks that previously required human intelligence
and attention, but there's really nothing new about that.
Each new technological breakthrough
from the invention of the plow to the CGI
rendering of Rapunzel's hair is better
understood as an advance in automation,
not as a usurpation of human primacy.
We can program machines to solve very complex problems,
and they may operate with increasing independence.
But as a friend of mine once observed,
a vehicle will really be autonomous
when you instruct it to take you to the office,
and it decides to go to the beach instead.
My point is simple.
Lots of problems we think require human intelligence
to solve actually don't.
There are lots of other ways to solve them,
and that's what the machines are doing.
Calculating used to be the province
of highly trained specialists.
Did you guys know that?
You used to go see somebody when you
wanted to do some calculation.
Now all it takes is the $0.99 calculator.
Making money in the stock market used
to be the province of experts.
Now the majority of trading is initiated by computers.
It's the same for driving directions, picking and packing
orders in warehouses, and designing more efficient wings
for airplanes.
But you don't have to worry about the robots taking over.
Robots don't have feelings, except in the movies.
Here's a news flash for you.
They aren't male or female.
As I like to say to my Stanford students,
what does it mean for a robot to be gay?
So robots don't have independent goals and desires.
A robot that's designed to wash and fold laundry
isn't going to wake up one day and say, oh my god, what a fool
I've been.
I really want to play the great concert halls of Europe.
So just as we can teach bears to ride bikes,
and we can teach chimps to use sign language,
we could build machines to perform
tasks the way people do, and even
to simulate human emotions.
We can make them say ouch when you pinch them or wag
their tails when you pet them.
But there's simply no compelling reason
to believe this bears any meaningful relationship
to human behavior or experience.
Machines aren't people, even if we
build them to talk and walk and chew gum the way that we do.
OK, now I've given you a new way to think
about artificial intelligence.
Let's talk about the implications
of this new perspective.
I'm going to try to run through this
pretty quickly because I was warned people like
to ask questions.
Now, it's certainly true that AI is
going to have a serious impact on labor
markets and employment.
But perhaps not in the way that people expect.
If you think of machines as becoming even more intelligent
and threatening our livelihoods, the obvious solution
is to prevent them from getting smarter, and to lock our doors
and arm ourselves with Tasers against these robots that
are coming to take our jobs.
Well, the robots are coming, but not exactly for our jobs.
Machines and computers don't perform jobs.
They automate tasks.
Now, except in extreme cases, you don't roll in a robot
and show an employee to the door.
Instead, the new technologies hollow out and change the jobs
that people perform.
Even experts spend most of their time doing
mundane, repetitive tasks.
They review lab tests.
They draft simple contracts.
They write straightforward press releases.
They fill out paperwork and forms.
On the blue collar side, lots of workers
lay bricks, paint houses, mow lawns, drive cars, load trucks,
pack boxes, and take blood samples.
They fight fires, make deliveries, direct traffic, et cetera.
And many of these intellectual and physical tasks
require straightforward logic or simple hand-eye coordination.
Now, the new technologies, mainly driven
by artificial intelligence, are poised to automate these tasks,
not to replace the jobs.
Now, if your job involves a narrow, well-defined set
of duties, and many do, then indeed,
your employment is at risk.
If you have a broader set of responsibilities,
or if your job requires a human touch such as expressing
sympathy or providing companionship,
I don't think you have too much to worry about.
Now, just check out this comparison
of the job duties between licensed practical nurses
and bricklayers.
Whose job do you think is most at risk from automation?
By the way, this list is hilarious.
"Monitoring fluid and food intake and output."
I was like, OK, I didn't know they measure output.
"Providing emotional support."
What you guys are working on-- (ROBOT VOICE)
I am so sorry about your problem.
I mean, come on.
Most jobs, as opposed to tasks, involve
a mix of general capabilities and specific skills.
And as machines perform the more routine tasks,
the plain fact is that fewer people are
needed to get the jobs done.
So one person's productivity enhancing tool
is in fact another's pink slip, or more likely,
a job opening that no longer needs to be filled.
Now, this is called structural unemployment.
Automation, whether it's driven by artificial intelligence
or not, changes the skills that
are necessary to perform work.
I need to move ahead, because we're running out of time.
So this is called structural unemployment,
and it's the mismatch of skills against the needs of employers.
People get put out of work not
so much because there's a lack of jobs,
but because there's a disconnect with the training
that people need to perform those jobs.
Now, historically, as automation has eliminated
the need for workers, the resulting increase in wealth
has eventually generated new kinds of jobs
to take up the slack.
And I see no reason that pattern is not going to continue,
but the keyword there is eventually.
Let's talk about farm employment.
This stuff is amazing, if you look into it.
200 years ago, more than 90% of the US population
worked in agriculture.
Basically, almost all anyone did was grow and prepare food.
That's what it meant to work.
Now, today, less than 2% of the population
is required to feed everybody, as you can
see in the free food over here.
Oh my god, is everybody out of work?
Of course not.
We've had plenty of time to adapt,
and as our standard of living has relentlessly increased,
which I'll get to in a minute, new opportunities
have always arisen for people to fill the expanding
expectations of our ever richer and greedier society.
Now, if a person from 1800 could see us today,
they'd think we'd all gone nuts.
Why not work a few hours a week, buy a sack of potatoes
and a jug of wine, build a shack in the woods,
dig a hole for an outhouse, and live a life of leisure?
Somehow, our rising expectations seem
to magically keep pace with our wealth.
OK, so what are the jobs of the future?
I don't see why we can't be a society of competitive gamers,
artisans, personal shoppers, flower arrangers, tennis pros,
party planners, and no doubt a lot
of other things that don't exist yet.
You might say, well, who is going to do the real work?
Well, our great grandchildren may
think of our idea of real work as so 21st century.
As with agriculture, it may only
take 2% of the population, assisted by some pretty
remarkable automation, to accomplish what's
taking 90% of our labor today.
So what?
It may be as important to them to have
fresh flowers in the house each day
as it is for us to take a shower every day,
which 70% of the US population does.
By the way, in 1900, the average was once a week.
I'm glad I'm not there.
So let me move ahead.
That's the good news.
The bad news is it's going to take time
for this transition to happen.
And there's a new wave of AI-enabled applications that's
likely to accelerate the normal cycle of job creation
and destruction.
So we're going to need to find new ways
to retrain displaced workers.
I was going to go into this.
I know Jacob's interested in this,
but unfortunately, we're going to have to skip over this idea.
Our problem is our vocational training system
is really messed up.
It's mainly because the government today is the lender
of first resort for students.
So the skills that people learn are
disconnected from the needs of the employers
in the marketplace.
So we're not actually investing in education
so much as we're handing out money
to people to learn things that won't help them pay it back.
You can't get a job?
It's too bad.
Your student loan is still due.
How many of you guys have student loans?
OK, not bad.
So there are different ways to do this,
and we need to create new financial instruments
that tie the deployment of capital
to the return on the investment.
And I've got this concept that I talk
about in my book, which is somewhere around here,
that I call a job mortgage.
So you get a mortgage for your education,
and it is payable solely out of your future earnings stream.
And that causes all the right incentives
to align so that we're teaching people the right things.
Otherwise, people aren't going to give them
the money if they don't know that there's going to be
a likelihood of a payback.
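For concreteness, here is a minimal sketch of the arithmetic behind a job mortgage, assuming an income-share style structure. The rate, income floor, loan size, and income path are all invented numbers for illustration, not figures from the book.

```python
# Hypothetical "job mortgage" arithmetic: repayment comes solely out of
# the borrower's future earnings stream, so the lender's return depends
# on whether the training actually leads to income.

def yearly_payment(income, share=0.05, floor=25_000):
    """Pay `share` of income above a subsistence floor; no income, no payment."""
    return max(0.0, (income - floor) * share)

loan = 40_000
balance = loan
for year, income in enumerate([0, 30_000, 55_000, 80_000, 90_000], start=1):
    payment = min(yearly_payment(income), balance)
    balance -= payment
    print(f"year {year}: income ${income:,}, paid ${payment:,.0f}, "
          f"balance ${balance:,.0f}")
```

Notice the incentive: the lender only gets repaid if the education produces earnings, which is exactly what aligns the curriculum with the marketplace.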
Finally, there's one other dark cloud.
I painted a very optimistic view of the future.
While it's true that automation makes a society richer,
there are serious questions about whose pockets
are filled by that wealth.
You may be aware that in high tech,
we tend to believe we are developing
dazzling technologies for a needy and grateful world,
and indeed, we've made great progress
in raising the standard of living for the poorest
people on Earth.
But for the developed world, the news is not so good.
Up until about 1970, on and off, we
found ways to distribute at least some
of those economic benefits across society,
and this was the rise of the supposed-- the mythical middle
class.
But it doesn't take much to see that those days are over.
Around then, the trends began to diverge.
So as economists know, automation
is the substitution of capital for labor.
And Karl Marx was right.
The struggle between capital and labor
is a losing proposition for workers.
What that means is that the benefits of automation
naturally accrue to those who can invest in the new systems.
And why not?
People aren't really working harder than they used to work.
In fact, they aren't really smarter than they used to be.
Working hours have actually decreased
slowly but consistently for about the last 100 years.
The reason we can do more with less
is that the business owners invest some of their capital
into process and productivity improvements.
And they reap most of the rewards.
So what has all this got to do with AI?
Now, the technologies that are on the drawing
boards in our labs are quickening the hearts
of entrepreneurs and investors everywhere,
as you guys are well aware.
And they are the ones who stand to benefit
while they export more and more of the risk out
to the rest of society.
Workers are less secure today.
Wages are stagnant.
Pension funds can go bust.
We're raising a generation of contractors
for the gig economy.
They're working variable hours, and health benefits
are their own problem.
That's not true for you guys.
You have regular employment jobs.
But if you really find out what's
going on in the rest of the world, this is true.
Now, some people have the mistaken impression
that the free market will naturally
address these problems if only we can get the government out
of the way.
And I'm here to tell you that our economy is hardly
an example of unfettered capitalism.
The fact is that there are all sorts of rules and policies
that drive where the capital goes, how it's deployed,
and who gets the returns.
And the basic problem is-- ah, this is great.
I should show the slides while I give the talk.
The basic problem is that our economic and regulatory
policies have become decoupled from our social goals.
And we have to fix that.
But the question is how?
Now here's the good news.
Most people have no idea about this.
The good news is that the economy isn't static.
It doubles about every 40 years.
You guys are familiar with the singularity and Moore's
Law and all that.
That's happening with the economy too,
not just with computers.
It doubles about every 40 years, and it's done that reliably
since the start of the Industrial
Revolution in the 1700s.
In 1800, the average household income was $1,000.
And that's about the same as it is
today in Malawi and Mozambique.
And probably not coincidentally, their economies
look surprisingly similar to what the US was 200 years ago.
Yet I doubt that people in Ben Franklin's time
thought of themselves as dirt poor--
that they were barely scratching out an existence.
So what this means is that 40 years from now, most likely
there's literally going to be twice as much wealth
to go around.
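The arithmetic behind those round numbers is easy to check: doubling every 40 years implies annual growth of about 1.75%, and five doublings take the $1,000 of 1800 to roughly $32,000 today. A quick sketch, using only the figures from the talk:

```python
# Sanity-checking the doubling arithmetic in the talk.
annual = 2 ** (1 / 40) - 1
print(f"doubling every 40 years implies {annual:.2%} annual growth")  # ~1.75%

income_1800 = 1_000
doublings = 200 / 40  # five doublings between 1800 and 2000
print(f"${income_1800:,} in 1800 projects to "
      f"${income_1800 * 2 ** doublings:,.0f} today")  # $32,000

# And 40 years from now, there's one more doubling: twice today's wealth.
```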
So the challenge for us is to implement policies
that will encourage that wealth to be more broadly distributed.
We don't have to take from the rich and give to the poor.
We need to provide incentives for entrepreneurs
and businesses to find ways to benefit
ever larger swaths of society.
So in my book, again, I just give you
an example of the kinds of policies
that smart folks like you could come up with.
And the idea here is to make corporate taxes progressive.
I'm not saying this is the answer or even an answer.
It's just the kind of thinking we need to do.
You can make corporate taxes progressive
based on how widely distributed the equity in a company is.
So companies that have larger stockholder bases
have a lower tax rate.
Microsoft, to use them as an example,
would pay a far lower tax rate
than Bechtel, which is privately held.
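As a sketch of the mechanism he's gesturing at-- with thresholds and rates invented purely for illustration, since the book's actual proposal may differ-- a schedule could look something like this:

```python
# Hypothetical schedule: the more broadly held a company's equity,
# the lower its corporate tax rate. All brackets below are invented.

def corporate_tax_rate(num_beneficial_owners):
    brackets = [
        (1_000_000, 0.15),  # very widely held (e.g., large public companies)
        (10_000,    0.25),
        (100,       0.30),
    ]
    for min_owners, rate in brackets:
        if num_beneficial_owners >= min_owners:
            return rate
    return 0.35  # closely or privately held pays the top rate

print(corporate_tax_rate(5_000_000))  # broadly held: 0.15
print(corporate_tax_rate(12))         # privately held: 0.35
```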
Now, progressive policies like this,
to promote our social goals-- by the way,
I flesh that out in the book in quite a bit of detail,
how it would work, and I encourage you to buy a copy,
if not read one.
Progressive policies like that can promote our social goals
without stifling the economy.
We just have to get on with it and stop believing the myth
that unfettered capitalism is the answer to the world's
problems.
So let me wrap things up and recap.
I don't want you to think I'm anti-AI.
Nothing's further from the truth.
I think the potential impact on the world-- and I'm not
exaggerating this-- is
about the same as the invention of the wheel.
We need to think of it not as some sort
of magical discontinuity in the development of intelligent life
on earth, but as a powerful collection of automation tools
with the potential to transform our livelihoods
and to vastly increase our wealth.
The challenge we face is that our existing institutions,
without some enlightened rethinking,
run a serious risk of making a mess of this opportunity.
I'm supremely confident that our future
is very bright-- that it's more
"Star Trek" than "Terminator."
But the transition is going to be protracted and brutal
unless we pay attention to the issues
that I tried to raise with you here today.
We have to find new and better ways
to ensure that our economy doesn't motor on,
going faster and faster, while throwing ever more people
overboard.
Our technology and our economy should serve us, not
the other way around.
So thank you.
I'm sorry to run so long.
Next time I give the talk, it'll be half this long.
[APPLAUSE]
Do we have time for questions?
MALE SPEAKER: Yeah, so we do have a little bit of time
for questions.
There is a mic placed right over there
for those who want to ask questions, please line up.
You guys need to do that.
Go for it.
AUDIENCE: So for the task of taking over the world,
are there other means except for having human intelligence?
JERRY KAPLAN: Yeah, it's a very dangerous issue,
as a matter of fact, because a lot of the AI technologies,
we talk about the productivity and all of that.
But they have very serious applications in, for example,
military use.
And this is a very difficult problem.
A lot of very smart people are actually--
I wouldn't say secretly, but not publicly-- working on it.
A friend of mine is on his way to Geneva right now.
There's meetings regularly with the UN.
There's a lot going on in the US military,
because they recognize that just going ahead and implementing
the kinds of technologies to the battlefield that are currently
being applied to driving cars and other things
might backfire, because it would be a lot easier
for non-state actors, to put it politely,
and dictators to-- today it takes enormous investment
to make and buy bombers and all that kind of stuff.
It's going to be really cheap, just like everything else,
like cellphones.
And there are some really creepy things
that can be done to take over the world
and wipe out humanity at a very low cost,
and that's going to be a big problem.
AUDIENCE: So you gave a lot of examples of problems
that we thought that were innately human,
but we were later able to describe it in a different way.
So why should we doubt that unbounded learning
is a problem we can't describe in a different way?
By unbounded learning, I mean the example
you gave-- oh, that's a cat.
Or for humans, you know, if you ask them, pass me
the purple cup, they'll learn that's purple
if they didn't know that word.
Why can't that unbounded learning be described in a way
that we can train machines to learn in the same way
that babies learn?
JERRY KAPLAN: Well, I'm going to turn your question
around a little bit to put it in the context of what
I said here.
I'm not saying that any particular task is completely
impervious to machine learning.
In fact, it might very well be.
However, machine learning is really
good at picking out patterns in large volumes of data
as it's practiced today.
There could be a future form of software technology, which
could do something more like what you said,
but that doesn't mean that it has human-like characteristics
or human learning.
And it doesn't mean that all of our jobs
will go away because a lot of our jobs
require face-to-face interaction or the expression of empathy.
And I don't buy the idea that machines can effectively
express empathy in their dealings with other people.
So the tasks can go away, and maybe they
will be able to learn as well.
That would be great.
That's a cat.
That's a chair.
That's that-- boom, boom, boom, the machine's got it.
But that's just another step in a long line of things
where people looked, and they went, wow!
It used to take people to do that.
Now a machine can do it.
It's just the next step.
So a lot of that stuff is going to go away.
And if we all wind up-- nobody wants
to watch a robot play competitive tennis.
It's just not interesting.
So I mean, there are lots of these jobs that inherently
require human beings.
So I'm trying to separate the task of automation
from what will people do.
And I hope that began--
AUDIENCE: Yeah, I guess it was even just the empathy example.
You say that robots can't be empathetic, but maybe they can.
Maybe we just think that's an innately human thing.
The tennis example actually is very convincing to me,
like, I would never watch a robot play tennis.
But if a robot was just as empathetic
and had a human form, and you couldn't
tell that it was a robot when you walked into the doctor.
We think it's a human problem, but maybe it's not.
JERRY KAPLAN: Boy, this is a really complicated topic.
What you said is right.
We can fool people.
And we do this all the time.
We can build a way-- this has gone on since the 16th century,
where they built automatons.
We went, oh, my god, it's just like a person.
And they were amazing.
Have you ever seen these?
These mechanical devices are absolutely incredible-- 16th
and 17th century.
It was fun entertainment for the courts of kings.
But if you know it's a machine, the fact that the screen comes
up and says, thank you, I really appreciate
that you placed your order.
I mean, come on.
It just doesn't compute emotionally.
We're not going to buy that story.
So a lot of it has to do with, like, toys that look like dogs
or look like children or play.
It's all very complicated, because play--
if you're doing it knowingly, that's perfectly reasonable.
If you're doing play because you're being fooled, or being
persuaded to buy something because the machine has gone,
oh, come on.
I got a whole bunch of hungry mouths to feed.
Oh, please, buy this car for me.
We're not going to like that.
That's my point.
AUDIENCE: Thank you.
JERRY KAPLAN: Thanks.
Yes, sir.
No waiting on check stand 2.
AUDIENCE: So I agreed with most everything you said.
I had a problem with one of your examples,
and this may seem like a nitpick.
But I'm going to flip this around.
You said teaching a machine a new task was like teaching
a primate to use sign language.
So do you think primates don't use intelligence the way
we do and don't understand what's being said?
JERRY KAPLAN: That's a good point.
I think that the point I was trying to make
is different than the one that you-- I'm not saying I
didn't say what you said, but that's not really what I meant.
What I meant was you can take something
that has no natural affinity for that particular task,
and you can get it to do that task
to some level of competence.
That's what I was trying to say.
Your point about chimp stuff is pretty interesting,
because obviously they have brains.
And I think most reasonable people think
they have rudimentary minds.
But the point is they don't naturally use sign language,
and as I say, you teach a bear to ride a bike.
That's not like, oh, my god, what are we going to do?
Next thing you know we'll be teaching bears to drive cars.
It's that we can make machines that also appear human-like
and do these human-like activities,
but it's not a natural part of the process.
Machines have certain characteristics,
and I can give another talk on that subject.
What are those characteristics?
And they're different than people,
and we need to understand that and stop
thinking about ourselves as we're making more and more
intelligent machines.
I'm just giving you the framing to help
us to understand the economic results that are appropriate.
Thank you.
MALE SPEAKER: If we go quickly, we
have time for two more questions.
JERRY KAPLAN: Two more, OK.
AUDIENCE: You were talking a bit about the social impacts,
and in the end, the people who own the machines
get the benefit from the machines.
And I agreed there when you talked about ways
to change policy, but historically,
social-focused policies have come
from things like labor movements-- people
controlling the means of production or whatever.
These kinds of things where people go on strike.
Who is going to strike if all the people doing tasks
have just been replaced?
I own a machine.
I don't have any workers, right?
I got a factory that builds everything I need,
and I try to sell it to people.
And eventually it might implode, I don't know,
if everybody is doing it.
But the remaining jobs are really
just kind of these neat, supervisory things or the 1%
of the population that can get a big audience on whatever
medium.
How do you run a whole economy based on that?
JERRY KAPLAN: Well, there are two basic points you're making.
Let me try to respond to them each briefly.
Because you're right in a lot of senses.
When you automate people out of jobs, for those jobs,
we don't need people.
And the question is when are the new jobs
going to arrive, if ever?
Most people are thinking about this statically,
like, well, we're just going to automate away 90% of the jobs
when we can build machines that dig ditches
and do all that kind of stuff.
But I think historically what happens
is new jobs are created that require humans
for one reason or another.
And I tried to kind of make that point.
We really can be a leisure society.
What we would think of as a leisure society,
that's what you'll get paid for in the future.
In 80 years, if these trends hold,
the average American household would
be making $200,000 a year.
Now, most of you guys make $200,000 a year, I understand.
It pays well here, and I was making a joke.
But my point about that is that there
are going to be people-- when you're
making that kind of money, you want those fresh flowers
every day, and you may be willing to pay somebody
to do that and pay them a living wage to do it.
So I think the nature of work is going to shift.
Those people from 1800 would look at us today and think
we're crazy, because we're doing
stuff we don't need to do for people who don't need it done.
And that is the way they would look at it.
And I can't say for sure, but I think that pattern
is likely to continue.
It's just very hard to visualize what that's
going to be like in 80 years.
AUDIENCE: My question is when do you
think AI will get to the point where
it can predict the behavior of other AI actors?
Because I think that's the heart of human intelligence
in the social context, and we haven't really
seen much in that task space.
JERRY KAPLAN: Wow, again, a very complicated-- I
could go on for a long time on this.
This has come up already in things like the flash
crash of 2010, which I cover in my book, which you're all
encouraged to take a look at.
It's a real problem, because people are stealthily
developing these systems, and it creates
what's called systemic risk.
Because these machines are like gods in terms of the damage
they can inflict in milliseconds.
And they could shut down the US power grid.
You can bet that China today, or I shouldn't pick out
China-- powerful players today have the ability
to completely decimate our economy for a fair period
of time almost instantly-- almost at the press
of a button.
And so these are difficult issues,
because you get two of those.
You ever seen the old movie "Colossus:
The Forbin Project"?
Anybody?
So one guy, two guys will know what I'm talking about.
It's about just that.
They created an intelligent system.
The Russians-- this was like 1960 when they made the film,
it was great-- also did that.
And the two of them figured there had to be another one,
and they got together, and they took over the world.
It's actually a pretty cool film.
It's not as stupid as it sounds.
So it's a real issue.
These side effects, the unintended
consequences-- the kinds of technology
we're developing-- that's another hour-long talk.
It doesn't mean we shouldn't do it.
It means we need to be aware of it
to figure out how to control it in reasonable ways.
So I apologize [INAUDIBLE].
If anybody wants to stay and hear stories about early Google
like I would-- well here he is.
I won't do it on camera because I don't want anybody
to record it.
But thank you.
Thank you so much for listening to my crazy rants.
[APPLAUSE]