[MUSIC PLAYING]
DIANE GREENE: Hello.
FEI-FEI LI: Hi.
DIANE GREENE: Who's interested in AI?
[CHEERING]
Me too.
Me three.
OK.
So I'm the moderator today.
I'm Diane Greene, and I'm running Google Cloud
and on the Alphabet board.
And I'm going to briefly introduce
our really amazing guests we have here.
I also live on the Stanford campus,
so I've known one of our guests for a long time,
because she's a neighbor.
So let me just introduce them.
First is Dr. Fei-Fei Li, and she is the Chief Scientist
for Google Cloud.
She also runs the AI Lab at Stanford University, the Vision Lab,
and then she also founded SAILORS,
which is now AI4ALL, which you'll
hear about a little bit later.
And is there anything you want to add to that, Fei-Fei?
FEI-FEI LI: I'm your neighbor.
[LAUGHTER]
That's the best.
DIANE GREENE: And so now we have Greg Corrado.
And actually there's one amazing coincidence.
Both Fei-Fei and Greg were undergraduate physics majors
at Princeton together at the same time.
And didn't really know each other that well
in the 18-person class.
FEI-FEI LI: We were studying too hard.
GREG CORRADO: No, it was kind of surprising to go
to undergrad together, neither of us in computer science,
and then rejoin later only once we were here at Google.
DIANE GREENE: All paths lead to AI and neural networks
and so forth.
But anyhow, so Greg is the Principal Scientist
in the Google Brain Group.
He co-founded it.
And more recently, he's been doing
a lot of amazing work in health with neural networks
and machine learning.
He has a PhD in neuroscience from Stanford.
And so he came into AI in a very interesting way.
And maybe he'll talk about the similarities between the brain
and what's going on in AI.
Would you like to add anything else?
GREG CORRADO: No, sounds good.
DIANE GREENE: OK.
So since both of them have been involved in the AI field for a while, and it's recently become a really big deal, I thought it'd be nice to get a little perspective on the history, yours in vision and yours in neuroscience, about AI and how it was so natural for it to evolve to where it is now, and what you're doing. Let's start with Fei-Fei.
FEI-FEI LI: I guess I'll start.
So first of all, AI is a very nascent field in the history of science and human civilization.
This is a field of only 60 years of age.
And it started with a very, very simple but fundamental quest: can machines think? And we all know thinkers and thought leaders like Alan Turing challenged humanity with that question. Can machines think?
So about 60 years ago, a group of very pioneering scientists,
computer scientists like Marvin Minsky, John McCarthy,
started really this field.
In fact, John McCarthy, who founded Stanford's AI lab,
coined the very word artificial intelligence.
So where do we begin to build machines that think?
Humanity is best at looking inward in ourselves
and try to draw inspiration from who we are.
So we started thinking about building machines that
resemble human thinking.
And when you think about human intelligence,
you start thinking about different aspects like ability
to reason and ability to see and ability
to hear, to speak, to move around, make decisions,
manipulate.
So AI started from that very core, foundational dream 60 years ago and proliferated into a field of multiple subfields, including robotics, computer vision, natural language processing, and speech recognition.
And then a very important development happened around the '80s and '90s: a sister field called machine learning started to blossom. That's a field combining statistics with computer science. And by combining the quest for machine intelligence, which is what AI was born out of, with the tools and capabilities of machine learning, AI as a field went through an extremely fruitful, productive, blossoming period of time.
And fast-forward to the second decade of the 21st century. The latest machine learning boom that we are observing is called deep learning, which has deep roots in neuroscience, which I'll let you talk about. So we are combining deep learning, as a powerful statistical machine learning tool, with the quest of making machines more intelligent, whether it's to see, to hear, or to speak, and we're seeing this blossom.
And last, I just want to say that three critical computing factors converged around the last decade, the 2000s and the beginning of the 2010s.
One is the advance of hardware that
enabled more powerful and capable computing.
Second is the emergence of big data,
powerful data that can drive the statistical learning
algorithms.
And I was lucky to be involved myself in some of the effort.
And then the third one is the advances in machine learning and deep learning algorithms.
So this convergence of three major factors
brought us the AI boom that we're seeing today.
And Google has been investing in all three areas, honestly, ahead of the curve. Much of the effort started back in the early 2000s.
And as a company, we're doing a lot of AI work
from research to products.
GREG CORRADO: And it's been really interesting to watch
the divergence in exploration in various academic fields
and then the re-convergence as we see ideas that are aligned.
So it wasn't, as Fei-Fei says, it wasn't so long
ago that fields like cognitive science, neuroscience,
artificial intelligence, even things
that we don't talk about much more like cybernetics,
were really all aligned in a single discipline.
And then they've moved apart from each other
and explored these ideas independently
for a couple of decades.
And then with the renaissance in artificial neural networks
and deep learning, we're starting
to see some re-convergence.
So some of these ideas that were popular
only in a small community for a couple of decades
are now coming back into the mainstream
of what artificial intelligence is, what statistical pattern
recognition is, and it's really been delightful to see.
But it's not just one idea.
It's actually multiple ideas that you
see that were maintained for a long time in fields
like cognitive science that are coming back into the fold.
So another example beyond deep learning
is actually reinforcement learning.
So for the longest time, if you looked at a university
catalog of courses and you were looking
for any mention of reinforcement learning whatsoever,
you were going to find it in a psychology
department or a cognitive science department.
But today, as we all know, we look
at reinforcement learning as a new opportunity,
as something that we actually look
at for the future of AI that might be something that's
important to get machines to really learn
in completely dynamic environments,
in environments where they have to explore entirely
new stimuli.
So I've been really excited to see how this convergence has
happened back in the direction from those ideas
into mainstream computer science.
And I think that there's some hope for exchange
back in the other direction.
So neuroscientists and cognitive scientists
today are starting to ask whether we
can take the kind of computer vision models
that Fei-Fei helped pioneer and use those as hypotheses for how
it is that neural systems actually compute, how
our own biological brains see.
And I think that that's really exciting
to see this kind of exchange between disciplines
that have been separated for a little while.
DIANE GREENE: You know, one little piece of history I think
that's also interesting is what you did, Fei-Fei,
with ImageNet, which is a nice way of explaining building
these neural networks where you labeled all these images
and then people could refine their algorithms by--
go ahead and explain that just real quickly.
FEI-FEI LI: OK, sure.
So about 10 years ago, the whole community of computer vision,
which is a subfield of AI, was working on a holy grail problem
of object recognition, which is you open your eyes,
you can see the world full of objects
like flowers, chairs, people.
And that's a building block of visual intelligence
and intelligence in general.
And to crack that problem, we were building, as a field, different machine learning models. We were making small progress, but we were hitting a lot of walls. And when my student and I started working on this problem, and started thinking deeply about what was missing in the way we were approaching it, we recognized an important interplay between data and statistical machine learning models. They really reinforce each other in very deep mathematical ways that we're not going to detail here.
That realization was also inspired by human vision.
If you look at how children learn,
it's a lot of learning through big data
experiences and exploration.
So combining that, we decided to put together a pretty epic effort: we wanted to label all the images we could get on the internet. And of course, we Google Searched a lot, and we downloaded billions of images and used crowdsourcing technology to label them, organizing them into a data set of 15 million images in 22,000 categories of objects. We put that together, and that's the ImageNet project. And we democratized it to the research world and released it open source.
And then starting in 2010, we held
an international challenge for the whole AI community
called ImageNet Challenge.
And one of the teams from Toronto, which is now at Google, won the ImageNet Challenge with a deep learning convolutional neural network model. And that was the year 2012.
And a lot of people think the combination of ImageNet
and the deep learning model in 2012
was the onset of what Greg--
DIANE GREENE: A way to compare how they were doing.
And it was really good.
So yeah.
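The crowdsourced labeling effort Fei-Fei describes relied on many workers tagging each image. A minimal sketch of one common aggregation strategy, majority vote over worker labels (the function and example data here are illustrative, not ImageNet's actual pipeline):

```python
from collections import Counter

def aggregate_labels(votes):
    """Majority-vote aggregation: each image keeps the label
    most workers agreed on, a common crowdsourcing strategy."""
    consensus = {}
    for image_id, labels in votes.items():
        label, _count = Counter(labels).most_common(1)[0]
        consensus[image_id] = label
    return consensus

votes = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["chair", "chair", "chair"],
}
print(aggregate_labels(votes))  # {'img_001': 'cat', 'img_002': 'chair'}
```

Real pipelines go further, weighting workers by reliability and routing ambiguous images to additional labelers.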
And so Greg, you've been doing a lot of brain-inspired research,
very interesting research.
And I know you've been doing a lot of very impactful research
in the health area.
Could you tell us a little bit about that?
GREG CORRADO: Sure.
So I mean, I think the ImageNet example actually sets a playbook for how we can try to approach a problem. The kind of machine learning and AI that is most practical and most useful today is the kind where machines learn through imitation. It's an imitation game: if you have examples of a task being performed correctly, the machine can learn to imitate them. And this is called supervised learning.
And so what happened in the image recognition case is that because Fei-Fei built an object recognition data set, we could all focus on that problem in a really concrete, tractable way in order to compare different methods. And it turned out that methods like deep learning and artificial neural networks were able to do something really interesting in that space that previous machine learning and artificial intelligence methods had not: they were able to go directly from the data to the predictions, and break the problem up into many smaller steps, without being told exactly how to do that.
That's what we were doing before: trying to engineer features or cues, things that we could see in the stimuli, and then doing a little bit of statistical learning on top to figure out how to combine those signals. But with artificial neural networks and deep learning, we're actually learning to do all of those things together.
And this applies not only to computer vision,
but it applies to most things that you could
imagine a machine imitating.
And so the kinds of things that we've
done like with Google Smart Reply and now Smart Compose,
we're taking that same approach.
If you have a lot of text data, which it turns out the internet is full of, what you can actually do is look at the sequence of words so far in a conversation or an email exchange and try to guess what comes next.
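Smart Reply and Smart Compose use large neural sequence models, but the core idea Greg describes, guessing what comes next from the words so far, can be sketched with a toy bigram counter (purely illustrative; the actual models are far more sophisticated):

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    # Count, for each word, which words follow it in the corpus.
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def guess_next(model, word):
    # Predict the most frequent follower seen in training.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigrams([
    "see you tomorrow",
    "see you soon",
    "see you tomorrow morning",
])
print(guess_next(model, "you"))  # tomorrow
```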
DIANE GREENE: I'm going to interrupt here a little bit
and get a little more provocative here.
GREG CORRADO: All right.
DIANE GREENE: So you're talking about neural-inspired machine
learning and so forth.
And so this artificial intelligence
is kind of bringing into question what are we humans?
And then there's this thing out there
called AGI, Artificial General Intelligence.
What do you think's going on here?
Are we getting to AGI?
GREG CORRADO: I really don't think so.
So there's a variety of opinions in the community.
But my feeling is that, OK, we've finally
gotten artificial neural networks
to be able to recognize photos of cats.
That's really great.
We also now can--
DIANE GREENE: Fei-Fei, was that AGI when we recognized a cat?
FEI-FEI LI: No.
That's not enough to define AGI.
GREG CORRADO: So the kind of thing that's working well right
now is this sort of pattern recognition,
this immediate response where we're able to recognize
something kind of reflexively.
And we now have, I believe, machines
can do pattern recognition every bit as well as humans can.
And that's why they can recognize objects
in photos, that's why they can do speech recognition,
and that's why they can win at a game like Go.
But that is only one small sliver, a tiny sliver,
of what goes into something like intelligence.
Notions of memory and planning and strategy and contingencies, even emotional intelligence, these are things where we haven't even scratched the surface.
And so to me, I feel like it's really a leap
too far to imagine that having finally cracked pattern
recognition, after some decades of trying,
that we are therefore on the verge of cracking all
of these other problems that go into what constitutes
general intelligence.
DIANE GREENE: Although we have gone
way faster than either of you ever expected us to go,
I believe.
FEI-FEI LI: Yes and no.
Humanity has a tendency to overestimate
short-term progress and underestimate
long-term progress.
So eventually, we will be achieving things that we cannot
dream of.
But Diane and Greg, I want to just give a simple example
to define AGI.
So the definition of AGI, again, is an introspective definition
of what humans and human intelligence can do.
I have a two-year-old daughter who doesn't like napping. And I thought I was smart enough to scheme to put her in a very complicated sleeping bag, so that she couldn't get herself out of the crib.
And just a couple of months ago, I was on the monitor watching this kid, a two-year-old, when for the first time I was training her to nap by herself. She was very angry. So she looked around, figured out a weak spot on the crib where she might be able to climb out, figured out how to unzip the complicated sleeping bag I had schemed up to prevent exactly that, and figured out a way to climb out of a crib that's way taller than she is, and managed to escape safely, without breaking her legs.
DIANE GREENE: OK, how about AGI equivalent to my cat
or equivalent to a mouse?
FEI-FEI LI: If you're shifting the definition, sure.
DIANE GREENE: I see, OK.
FEI-FEI LI: But even cat, I think
there are things that a cat is capable of doing.
GREG CORRADO: So I do think that if you look at an organism like a cat at a behavioral level, how cats behave and how they respond to their environments, I think you could imagine a world where you have something like a toy, for entertainment purposes, that approximates a cat in a bunch of ways, in the sorts of behaviors a human observes: oh, it walks around, it doesn't bump into things, it meows at me every once in a while. I do believe that we can build a system like that.
But what you can't do is you can't take that robot
and then dump it in the forest and have it figure out
what it needs to do in order to survive and make things work.
FEI-FEI LI: But it's a goal.
It's a healthy goal.
DIANE GREENE: It's a healthy goal.
And along the way, at least we all three agree
that AI's capacity to help us solve all our big problems
is going to outweigh any kind of negative,
and we're pretty excited about that, I guess.
In Cloud, you're kind of doing some cool things with AutoML
and so forth.
FEI-FEI LI: Yeah, so we talk a lot, Diane, about the belief in building benevolent technology for human use. Our technology reflects our values. So I personally, and I know Greg's whole team, are working on bringing AI to people and to the fields that really need it, to make a positive difference.
So at Cloud, we're very lucky to be working with customers
and partners from all kinds of vertical industries,
from health care where we collaborate,
to agriculture, to sustainability,
to entertainment, to retail, to commerce, to finance. Our customers bring us some of their toughest problems and pain points, and we can work with them hand-in-hand to solve some of them.
So for example, recently we rolled out AutoML. And that came from recognizing the pain of entering machine learning. It's still a highly technical field. The bar is still high. Not enough people in the world are trained experts in machine learning. But our industry already has so much need to tag pictures and understand imagery, just as an example in vision.
So how do we answer that call of need? We worked hard and thought about this suite of products called AutoML, where we lower the entry barrier by relieving customers from coding custom machine learning models themselves. All they have to do is provide the kind of data and concepts they need.
Here's an example: a ramen company in Tokyo that has many ramen shops wants to build an app that recognizes which of their stores a bowl of ramen comes from. So they give us pictures of ramen and the concepts of their stores: store one, store two, store three. And what we do is use a machine learning technique that Google and many others have developed, called learning to learn, to build a customized model for the customer that recognizes ramen from their different stores. And then the customer can take that model and do what they want with it.
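AutoML's learning-to-learn machinery is beyond a short sketch, but the customer workflow, labeled examples in, custom classifier out, can be illustrated with a deliberately simple nearest-centroid classifier (the two-dimensional feature vectors and store labels here are made up for illustration):

```python
import math

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs. Compute one
    centroid per label, a stand-in for training a custom model
    from a customer's labeled data."""
    sums, counts = {}, {}
    for vec, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, vec):
    # Assign the label of the nearest centroid.
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))

# Hypothetical feature vectors for ramen photos from two stores.
examples = [([0.9, 0.1], "store_1"), ([0.8, 0.2], "store_1"),
            ([0.1, 0.9], "store_2"), ([0.2, 0.8], "store_2")]
centroids = train_centroids(examples)
print(classify(centroids, [0.85, 0.15]))  # store_1
```

AutoML replaces this hand-rolled step with a learned model-design process, but the contract is the same: the customer supplies only labeled data.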
DIANE GREENE: I can write a little C++,
maybe some JavaScript.
Could I do AutoML?
FEI-FEI LI: Absolutely.
Absolutely.
We're working with teams that don't even have C++ experience. And we have a drag-and-drop interface, and you can use AutoML that way.
GREG CORRADO: Because I really believe that there are so
many problems that can be solved using this technique that it's
critical that we share as much as possible about how
these things work.
I don't believe that these technologies should
live in walled gardens, but instead we
should develop tools that can be used
by everyone in the community.
And that's part of why we have a very aggressive open source
stance to our software packages, particularly in AI.
And that includes things like TensorFlow
that are available completely freely,
and it includes the kinds of services
that are available on Cloud to do the kind of compute,
storage, and model tuning and serving that you need to use
these things in practice.
And I think it's amazing that the same tools
that my applied machine learning team
uses to tackle problems that we're interested
in, those same tools are accessible to all of you
as well to try to solve the same problems in the same way.
And I've been really excited with how great the uptake is and how we're seeing it expand to other languages. You mentioned JavaScript. Quick plug: tensorflow.js is actually really awesome.
DIANE GREENE: Oh, and you should probably run it on a TPU.
GREG CORRADO: Yes, exactly.
DIANE GREENE: It does give a nice boost.
So with machine learning, we're bringing it to market in so many ways, because we have the tools to build your own models, with TensorFlow. We have AutoML, which brings it to any programmer.
And then what's going on with all the APIs,
and how is that going to affect every industry,
and what do you see going on there?
FEI-FEI LI: So Cloud already has a suite
of APIs for a lot of our industry partners
and customers, from Translate to Speech to Vision.
DIANE GREENE: Which are based on models we build.
FEI-FEI LI: Yes.
For example, Box is a major partner of Google Cloud. They recognized a tremendous need to help customers organize their imagery data. So they actually use Google's Vision API to do that. And that's a model easily delivered to our customers through our service.
DIANE GREENE: Yeah, it's pretty exciting.
I mean, Greg, how do you think that's going to play out
in the health industry?
I know you've been thinking about that.
GREG CORRADO: So health care is one of the problems
that a bunch of people are working on at Google,
and a lot of people are working on outside as well, because I
think there's a huge opportunity to use these technologies
to expand the availability and the accuracy of health care.
And part of that is because doctors today are basically
trying to weather an information hurricane in order
to provide care.
And so I think there are thousands
of individual opportunities to make doctors' work more fluid,
to build tools to solve problems that they want solved,
and to do things that help patients
and improve patient care.
DIANE GREENE: I mean, I think you were telling me
that so many doctors are so unhappy because they
have so much drudgery to do.
Is this a big breakthrough?
GREG CORRADO: Yeah, absolutely.
I mean, I believe that there's been a great--
when you go to a doctor, you're looking for medical attention.
And right now a huge amount of their attention
is not actually focused on the practice of medicine,
but is focused on a whole bunch of other work
that they have to do that doesn't require
the kind of insights and care and connection
the real practice of medicine does.
And so I believe that machine learning and AI
is going to come in for health care
through assistive technologies that help the doctors do
what they want to do better.
DIANE GREENE: By understanding what they do in a system.
No substitute for the humans.
GREG CORRADO: No.
FEI-FEI LI: No substitutes.
DIANE GREENE: Speaking of human, Fei-Fei,
do you want to talk a little bit about why
you think this humanistic AI approach is so critical?
FEI-FEI LI: Yeah.
Thank you.
So if we look at the history of AI, we've entered phase two. The first 60 years were AI as a more or less niche technical field, where we were still laying down scientific foundations. But from this point on, AI is one of the biggest drivers of societal changes to come. So how do we think about AI in the next phase? What frame of mind should be driving us has been on top of my mind.
And I think deeply about the need for human-centered AI, which in my opinion includes three elements.
The first element is really advancing AI to the next stage. And here we bring our collective background from neuroscience and cognitive science. Whether we get to AGI tomorrow or in 50 years, there is a need for AI to be a lot more flexible and nuanced, to learn faster in more unsupervised and semi-supervised [INAUDIBLE] learning ways, to be able to understand emotion, and to be able to communicate with humans. So that is the more human-centered way of advancing AI science.
The second element is human-centered AI technology and application. I love what you said, that there's no substitute for humans. This technology, like all technology, is here to enhance humans, to augment humans, not to replace humans. We'll replace certain tasks. We'll take humans out of danger, or take over tasks that we cannot perform.
But the bottom line is we can use AI to help our doctors,
to help our disaster relief workers,
to help decision makers.
So there is a lot of technology in robotics,
in design, in natural language processing that
is centered around human-centered AI
technology and application.
The third element of human-centered AI is really to think about AI as a technology together with its societal impact. We are so early in seeing the impact of this technology. But already, like Diane said, we are seeing the impact in different ways, ways that we might not even predict. So I think it's really important, and it's a responsibility of everyone, from academia to industry to government, to bring social scientists, philosophers, law scholars, policy makers, ethicists, and historians to the table, and to study more deeply AI's social and humanistic impact.
And that is the three elements of human-centered AI.
DIANE GREENE: That's pretty wonderful.
And I think we at Google here, Alphabet, are working as hard
as we can to do humanistic AI.
You mentioned what we need to be careful about out there with AI and regulation.
What are some of the barriers to--
I think every company in the world
has a use for AI in many, many ways.
I mean, it's just exploding in all the verticals.
But there are some impediments to adoption.
For example, in the financial industry
they need to have something called explainable AI.
And could you just talk about some of the different barriers
you see to being able to take advantage of AI?
FEI-FEI LI: We should start with health care.
GREG CORRADO: Yeah, so I think that there
are a bunch of really important things to consider.
So one of the things is, of course, we
want to have machine learning systems that
are designed to fit the needs of the folks that are
using them and applying them.
And that can often include not just giving me the answer,
but telling me something about how that was derived.
So some kind of explainability.
So in the health care space, for example,
we've been working on a bunch of things in medical imaging.
And it's not acceptable to just tell the doctor that,
oh, something looks fishy in this x-ray
or this pathology slide or this retinal scan.
You have to tell them, well, what do you think is wrong?
But more importantly, you actually
have to show them where in the image
you think the evidence for that conclusion
lies so that they can then look at it
and decide whether they concur or they disagree
or, oh, well, there was a speck of dust there
and that's what the machine is picking up on.
And the good news is that these things actually are possible.
And I think there's kind of been this unfortunate mythology
that AI and deep learning in particular is a black box.
And it really isn't.
We didn't study how it worked, because for a long time
it really didn't work that well.
But now that it's working well, there
are a lot of tools and techniques
that go into examining how these systems work.
And I think explainability is a big part of it
in terms of making these things available for a bunch
of applications.
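One family of explainability techniques Greg alludes to starts from gradients: which inputs, if changed slightly, most affect the model's output. A minimal sketch on a toy linear model (real medical-imaging saliency maps are computed from the gradients of a trained deep network, not hand-set weights like these):

```python
def saliency(weights, x):
    """For a linear score w . x, the gradient with respect to each
    input is just w, so gradient-times-input highlights which inputs
    drove the prediction, a minimal version of the saliency maps
    used to show doctors where the evidence lies."""
    return [abs(w * xi) for w, xi in zip(weights, x)]

# Toy "model": three input features with fixed weights.
weights = [2.0, -0.1, 0.0]
x = [1.0, 5.0, 3.0]
scores = saliency(weights, x)
print(scores)  # [2.0, 0.5, 0.0] -> feature 0 mattered most
print(max(range(len(scores)), key=scores.__getitem__))  # 0
```

In imaging, the analogous per-pixel scores let the doctor inspect the highest-scoring regions rather than accept a bare verdict.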
FEI-FEI LI: So in addition to explainability, I would add bias. I think bias is an issue we need to address in AI. And from where I sit, I see two major kinds of bias we need to address. One is in the pipeline of AI development, from bias in the data to bias in the outcome. And we have heard a lot about how, if the machine learning algorithm is fed data that does not represent the problem domain in a fair way, we will introduce bias.
Whether it's missing a group of people's data
or biasing it to a skewed distribution,
those are things that would have deep consequences,
whether you're in the health care domain or finance
or legal decision making.
So I think that is a huge issue, and Google is already very nicely addressing it. We have a whole team at Google working on bias.
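A first, very basic check for the data bias Fei-Fei describes is to look at how classes or groups are represented in a training set. A sketch (the group names and counts here are hypothetical):

```python
from collections import Counter

def class_balance(labels):
    """Report each class's share of the dataset, a first,
    very basic check for skewed or missing groups."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

labels = ["group_a"] * 90 + ["group_b"] * 10
shares = class_balance(labels)
print(shares)  # {'group_a': 0.9, 'group_b': 0.1}
# A 90/10 split like this one may badly underrepresent group_b.
```

Distribution checks like this catch only the most obvious skew; deeper audits look at outcomes per group, not just counts.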
DIANE GREENE: Yeah.
That's true.
FEI-FEI LI: And another bias I think is important comes from the people who are developing AI. Human bias and the lack of diversity is another kind of bias.
DIANE GREENE: It's so important.
And that kind of brings me to maybe some of our--
we're getting close to the end.
But where is AI going?
I mean, how prevalent is it going to be?
I mean, we look at our universities and these machine
learning classes have 800 people, 900 people.
There is such a demand.
Every computer science graduate wants to know it.
Where is it going?
I mean, will every high school graduating senior
be able to customize AI to their own purposes?
And what does it look like five, 10 years from now?
FEI-FEI LI: So from a technology point of view, I think that because of the tremendous investment and resources, both in the private sector and now in the public sector, with many countries waking up to investing in AI, we're going to see a huge continued development of AI technology. What excites me most, whether at Cloud or in seeing what Greg's team is doing, is AI being delivered to the industries that really matter to people's lives, work quality, and productivity.
But Diane, I think what you're also asking is how we are educating more people in AI.
DIANE GREENE: Both making it easier to use
and educating them and what's it going to look like?
What do you predict?
FEI-FEI LI: That's a really tough question,
because at the core of today's AI is still calculus.
And that's not going to change.
GREG CORRADO: So I think that from the kind of tech industry
perspective or from the computer science education perspective,
I think that we're going to see AI and ML become
as essential as networking is.
No one really thinks, oh, well, I'm going to write some software, and it's going to be standalone on a box, and it's not going to have a TCP/IP connection. We all know that you're going to have a TCP/IP connection somewhere at the end of the day. And everyone understands the basics of the networking stack.
And that's not just at the level of engineers.
That's the level of designers, of executives,
of product developers and leaders.
And the same thing, I think, is going
to happen with machine learning and AI, which
is that designers are going to start to understand, how can I
make a completely revolutionary kind of product that folds
in machine learning the same way that we fold in networking
and internet technologies into almost everything we build?
So I think we're going to see tremendous uptake, with it becoming a kind of pervasive background part of our technologies.
But I think in that process the ways
that we use AI are going to evolve.
So I think right now we're seeing
a lot of things where AI and machine learning
add some spice, some extra, a little coolness on a feature.
And I think that what you're going to see over
the next decade is you're going to see more
of a core integration into what it means for the product
to actually work.
And I think that one of the great opportunities
there is actually going to be the development
of artificial emotional intelligence
that allows products to actually have much more natural and much
more fluid human interaction.
We're beginning to see that in the Assistant now with speech
recognition, speech synthesis, understanding
dialogues and exchanges.
But I think that this is still in its infancy.
We're going to get to a point where the products that we build interact with humans in the way that humans find most useful, just out of the box.
FEI-FEI LI: And I spend a lot of time with high schoolers, because I really believe in the future. We always talk about AI changing the world. And I always say the question is, who is changing AI? And to me, bringing more humanistic thinking into technology development and thought leadership is really important.
Not only important for the future
of our technology and the value we instill in our technology,
but also in bringing the diverse group of students
and future leaders into the development of AI.
So at [? Server ?] at Google, we all work a lot on this issue.
And personally, I'm very involved
with AI4ALL, which is a nonprofit that
educates high schoolers around the country
from diverse backgrounds, whether they're girls or students from underrepresented minority groups. And we bring them onto university campuses and work with them on AI thinking and AI studies.
DIANE GREENE: And at Google, we're
just completely committed to bringing all our best
technologies to everybody in the world.
And we're doing that through the cloud: we're bringing these tools, these APIs, the training, the partnering, and the processors. And we're pretty excited to see what all of you are going to do with it.
Thank you very much.
GREG CORRADO: Thanks, everybody.
[MUSIC PLAYING]