(bright music)
>> Narrator: Live from Austin, Texas.
It's theCUBE, covering South by Southwest 2017.
Brought to you by Intel.
Now here's John Furrier.
>> We're here live at South by Southwest in Austin, Texas.
SiliconANGLE, theCUBE, our broadcast,
we go out and extract the signal from the noise.
I'm John Furrier, I'm here with Naveen Rao,
the vice president and general manager of
the artificial intelligence solutions group at Intel.
Welcome to theCUBE.
>> Thank you, yeah.
>> So we're here, big crowd here at Intel, Intel AI lounge.
Okay, so that's your wheelhouse.
You're the general manager of AI solutions.
>> Naveen: That's right.
>> What is AI? (laughs)
I mean--
>> AI has been redefined a few times over the years.
Today, AI generally means applied machine learning:
basically, ways to find useful structure
in data and do something with it.
It's a tool, really, more than anything else.
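As a rough illustration of that definition of finding useful structure in data, here is a minimal sketch, assuming scikit-learn and purely synthetic data; it is only an example of a model surfacing structure nobody labeled by hand, not anything tied to Intel's products.

```python
# Minimal sketch: unsupervised "structure finding" on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D points drawn from three hidden groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# With no labels provided, k-means still recovers the grouping,
# which is the kind of "useful structure" a decision can act on.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(model.cluster_centers_)
```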
>> So obviously AI is a mental model,
people can understand kind of what's going on with software.
Machine learning and IoT are hot areas in the industry,
but this really points to a future world
where you're seeing software
tackle new problems at scale.
So cloud computing, what you guys are doing with the chips
and software has now created a scale dynamic.
Similar to Moore's, but Moore's Law is done for devices.
You're starting to see software impact society.
So what are some of those game changing impacts
that you see and that you're looking at at Intel?
>> There are many kinds of thought labor
that many of us would characterize as drudgery.
For instance, if I'm an insurance company,
and I want to assess the risk of 10 million pages of text,
I can't do that very easily.
I have to have a team of analysts run through it
and write summaries.
These are the kind of problems we can start to attack.
So the way I always look at it is,
what a bulldozer was to physical labor, AI is to data and thought labor.
We can really get through
much more of it and use more data
to make our decisions better.
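To make the insurance example concrete, here is a hedged sketch of scoring documents for risky language with a simple text classifier, assuming scikit-learn; the two training documents and their risk labels are invented purely for illustration.

```python
# Hypothetical sketch: triage many documents instead of reading them all.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = risky language, 0 = routine.
labeled_docs = ["claim denied after prior flood damage",
                "routine renewal, no incidents reported"]
labels = [1, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(labeled_docs, labels)

# In practice this loop would run over millions of pages,
# surfacing the ones an analyst actually needs to read.
for doc in ["new claim references repeated flood damage"]:
    print(doc, "->", clf.predict_proba([doc])[0][1])
```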
>> So what are the big game changing things
that are going on that people can relate to?
Obviously, autonomous vehicles
is one that we can all look at and say,
"Wow, that's mind blowing."
Smart cities is one that you say,
"Oh my god, I'm a resident of a community.
"Do they have to re-change the roads?
"Who writes the software, is there a budget for that?"
Smart home, you see Alexa with Amazon,
you see Google with their home product.
Voice bots, voice interfaces.
So the user interface is certainly changing.
How is that impacting some of the things
that you guys are working on?
>> Well, as to the user interface changing,
I think that changes the entire dynamic of how people use tools.
The easier something is, the more people use it,
the more pervasive it becomes,
and we start discovering these emergent dynamics.
Take the iPod, for instance.
Storing music in digital form on
small devices was around before the iPod.
But when it was made easy to use,
that sort of gave rise to the smartphone.
So I think we're going to start seeing
some really interesting dynamics like that.
>> One of the things that I liked
about this past week in San Francisco,
Google had their big event, their cloud event,
and they talked a lot about, and by the way,
Intel was on stage with the new Xeon processor,
up to 72 cores, amazing compute capabilities,
but cloud computing does bring that scale together.
But you start thinking about how data science
has moved into using data, and now you have
a tsunami of data, whether it's from taking
an analog view of the world
and now having multiple datasets available.
If you can connect the dots, okay, a lot of data,
now you have a lot of data plus a lot of datasets,
and you have almost unlimited compute capability.
That starts to fill in some of the picture a little bit.
>> It does, but actually there's one thing missing
from what you just described: our ability
to scale data storage and data collection
has outpaced our ability to compute on it.
Computing on it typically scales as some sort
of quadratic function, something faster
than the growth in the amount of data.
And our compute has really not caught up with that,
and a lot of that has been more about focus.
Computers were really built to automate streams of tasks,
and this sort of idea of going highly parallel
and distributed, it's something somewhat new.
It's been around a lot in academic circles,
but the real use case to drive it home
and build technologies around it is relatively new.
And so we're right now in the midst of
transforming computer architecture
into something that becomes a data inference machine,
not just a way to automate compute tasks,
but to actually do data inference
and find useful inferences in data.
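The gap he describes can be made concrete with a back-of-the-envelope comparison: storage grows roughly linearly with the number of records, while a naive all-pairs computation over those records grows quadratically. A small sketch, with arbitrary record counts:

```python
# Illustration of linear data growth versus quadratic compute growth.
for n in (1_000, 10_000, 100_000):
    storage_units = n                  # roughly linear in the data
    pairwise_ops = n * (n - 1) // 2    # roughly quadratic in the data
    print(f"n={n:>7,}  storage ~{storage_units:,}  all-pairs ops ~{pairwise_ops:,}")
```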
>> And so machine learning is the hottest trend right now
that kind of powers AI, but there's also some talk
in thought-leader circles around learning machines,
machines learning from the data they engage with, or whatever
you want to call it, which also brings up another question.
How do you see that evolving, because do we need to
have algorithms to police the algorithms?
Who teaches the algorithms?
So you bring in this human aspect of it.
So how does the machine become a learning machine?
Who teaches the machine, is it...
(laughs) I mean, it's crazy.
>> Let me answer that a little bit with a question.
Do you have kids?
>> Yes, four.
>> Does anyone police you on raising your kids?
>> (laughs) Kind of, a little bit, but not much.
They complain a lot.
>> I would argue that it's not so dissimilar.
As a parent, your job is to expose them to
the right kinds of biases, or to unbiased data,
as much as possible; experiences are exactly that.
I think this idea of shepherding data
is extremely important.
And we've seen it in solutions that Google has brought out.
There are these little unexpected biases,
and a lot of those come from just what we have in the data.
And AI is no different than a regular intelligence
in that way, it's presented with certain data,
it learns from that data and its biases are formed that way.
There's nothing inherent about the algorithm itself
that causes that bias other than the data.
>> So you're saying to me that exposing more data
is actually probably a good thing?
>> It is.
Exposing different kinds of data, diverse data.
To give you an example from the biological world,
children who have never seen people of different races
tend to react to it more; it's something new and unique
and they'll tease it out.
It's like, oh, that's something different.
Whereas children who are raised
around people of many diverse face types or whatever
are perfectly okay seeing new diverse face types.
So it's the same kind of thing in AI, right?
It's going to home in on the trends that are coming,
and things that are outliers we're going to call out as such.
So having good, balanced datasets, the way we collect
that data, the way we sift through it
and actually present it to an AI is extremely important.
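A small, hypothetical sketch of that point, assuming scikit-learn and synthetic data: the same algorithm trained on a heavily skewed sample versus a class-reweighted one usually behaves quite differently on the rare class, and nothing about the algorithm itself changes between the two runs.

```python
# Same model, different data weighting: the bias lives in the data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic data where 95% of examples come from one class,
# a stand-in for an unbalanced set of "experiences."
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
rebalanced = LogisticRegression(max_iter=1000,
                                class_weight="balanced").fit(X_tr, y_tr)

# Recall on the rare class typically improves once the data is reweighted.
print("as collected:", recall_score(y_te, plain.predict(X_te)))
print("rebalanced  :", recall_score(y_te, rebalanced.predict(X_te)))
```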
>> So one of the most exciting things
that I like, obviously, is autonomous vehicles.
I geek out on it, not because I'm a car head,
gear head, or car buff, but just because
you look at what it encapsulates technically.
5G overlay, essentially sensors all over the car,
you have software powering it,
you now have augmented reality, mixed reality
coming into it, and you have an interface to consumers
and their real world in a car.
Some say it's a moving data center,
some say it's also a human interface
to the world, as they move around in transportation.
So it kind of brings out the AI question,
and I want to ask you specifically.
Intel talks about this a lot in their super demos.
What actually is Intel doing with the compute
and what are you guys doing to make that accelerate faster
and create a good safe environment?
Is it just more chips, is it software?
Can you explain, take a minute to explain
what Intel's doing specifically?
>> Intel is uniquely positioned in this space,
'cause it's a great example of a full end to end problem.
We have in-car compute, we have software,
we have interfaces, we have actuators.
That's maybe not Intel's suite.
Then we have connectivity, and then we have cloud.
Intel is in every one of those things,
and so we're extremely well positioned
to drive this field forward.
Now you ask what are we doing in terms of hardware
and software, yes, it's all of it.
This is a big focus area for Intel now.
We see autonomous vehicles as being
one of the major ways that people interact
with the world, like locality between cars
and interaction through social networks
and these kinds of things.
This is a big focus area; we are working
on the in-car compute actively,
and we're going to lead that. 5G is a huge focus for Intel,
as you might've seen at Mobile World Congress
and other places.
And then the data center.
And so we own the data center today,
and we're going to continue to do that
with new technologies and actually enable
these solutions, not just from
a pure hardware primitives perspective,
but from the software-hardware interaction across the full stack.
>> So for those people who think of Intel
as a chip company, obviously you guys
abstract away complexities and put it into silicon,
I obviously get that.
Google Next this week, one thing I was really impressed by
was the TensorFlow machine learning algorithms
in open source, you guys are optimizing the Xeon processor
to offload, not offload, but kind of take on...
Is this kind of the paradigm that Intel looks at,
that you guys will optimize for the highest performance
in the chip where possible, and then let the software
be more functional?
Is that a guiding principle, is that a one off?
>> I would say that Intel is not just a chip company.
We make chips, but we're a platform solutions company.
So we sell primitives to various levels,
and so, in certain cases, yes, we do optimize for software
that's out there because that drives adoption
of our solutions, of course.
But in new areas, like the car for instance,
we are driving the whole stack, it's not just the chip,
it's the entire package end to end.
And so with TensorFlow, definitely.
Google is a very strong partner of ours,
and we continue to team up on activities like that.
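Neither speaker walks through code here, but as a hedged sketch of one software-side piece of optimizing for Xeon, pinning TensorFlow's CPU thread pools to the host's core layout, the example below uses TensorFlow's current threading API (the 2017-era equivalent was a session ConfigProto). The thread counts are placeholders to tune per machine, and Intel's MKL/oneDNN-enabled TensorFlow builds are a separate, complementary piece.

```python
# Sketch: configure TensorFlow's CPU threading for a many-core Xeon host.
import os
import tensorflow as tf

# Placeholder values; tune to the actual core count and workload.
os.environ.setdefault("OMP_NUM_THREADS", "36")            # threads for MKL/oneDNN kernels
tf.config.threading.set_intra_op_parallelism_threads(36)  # parallelism inside a single op
tf.config.threading.set_inter_op_parallelism_threads(2)   # ops allowed to run concurrently

# Any model built after this point uses the configured thread pools.
print(tf.config.threading.get_intra_op_parallelism_threads())
```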
>> We are talking with Naveen Rao,
vice president and general manager of Intel's AI solutions group.
Breaking it down for us.
This end to end thing is really interesting to me.
So I want to just double-click on that a little bit.
It requires a community to do that, right?
So it's not just Intel, right?
Intel's always had a great rising tide
floats all boats kind of concept
over the life of the company, but now, more than ever,
it's an API world, you see integration points
between companies.
This becomes an interesting part.
Can you talk to that point about
how you guys are enabling partners to work with,
and if people want to work with Intel,
how do they work, from a developer to whoever?
How do you guys view this community aspect?
I mean, I'm sure you'd agree with that, right?
>> Yeah, absolutely.
Working with Intel can take on many different forms.
We're very active in the open source community.
The Intel Nervana AI solutions are completely open source.
We're very happy to enable people in the open source,
help them develop their solutions on our hardware, but also,
the open source is there to form that community
and actually give us feedback on what to build.
The next piece is kind of one click down:
if you're actually trying to build an end to end solution,
like you're saying, you've got a camera.
We're not building cameras.
But these interfaces are pretty well defined.
Generally what we'll do is, we like to select some partners
that we think are high value add.
And we work with them very closely,
and we build stuff that our customers can rely on.
Intel stands for quality.
We're not going to put Intel branding on something,
unless it sort of conforms to some really high standard.
And so that's, I think, a big part of the power here.
It doesn't mean we're not going to enable the people
that aren't our channel partners or whatever,
they're going to have to be enabled
through more of a standard set of interfaces,
software or hardware.
>> Naveen, I'll ask you, in the final couple minutes
we have left, to kind of zoom out and look at the coolness
of the industry right now.
So given your background, you've got your PhD,
and now you're heading up the AI solutions group.
You probably see a lot of stuff.
Go down the list of what's cool to you,
share with the audience some of the cool things
that you can point to that we should pay attention to
or even things that are cool that we should be aware
that we might not be aware of.
What are some of the coolest things
that are out there that you could share?
>> New things to share, we'll get to that in a second.
One of my favorites, I think, is AlphaGo.
I know this is, like, maybe it's hackneyed.
But as an engineering student in CS in the mid-90s,
studying artificial intelligence back then
or what we called artificial intelligence,
Go was just off the table.
That was only about 20 years ago.
At that time, it looked like such an insurmountable problem,
like the brain is doing something so special
that we're just not going to figure it out in my lifetime.
To go from that to actually doing it is incredible.
So to me, that represents a lot.
So that's a big one.
Interesting things that you may not be aware of
are other use cases of AI, like we see it in farming.
This is something we take for granted.
We go to the grocery store, we pick up our food
and we're happy, but the reality is,
that's a whole economy in and of itself,
and scaling it as our population scales
is an extremely difficult thing to do.
And we're actually interacting with companies
that are doing this at multiple levels.
One is at the farming level itself, automating things,
using AI to determine the state of different crops
and actually taking action in the field automatically.
That's huge, this is back-breaking work.
Humans don't necessarily--
>> And it's important too, because people are worried about
the farming industry in general.
>> Absolutely.
And what I love about that use case of like
applying AI to farming techniques is that,
by doing that, we actually get more consistency
and you get better yields.
And you're doing it without any additional chemicals,
no genetic engineering, nothing like that,
you're just applying the same principles we know better.
And so I think that's where we see
a lot of wonderful things happening.
It's a solved problem, but just not at scale.
How do I scale this problem up?
I can't do that in many instances,
like I talked about with the legal documents
and trying to come up with a summary.
You just can't scale it today.
But with these techniques, we can.
And so that's what I think is extremely exciting,
any interaction there, where we start to see scale--
>> And new stuff, and new stuff?
>> New stuff.
Well, some of it I can't necessarily talk about.
In the robot space, there's a lot happening there.
I'm seeing a lot in the startup world right now.
We have a convergence of the mechanical part of it
becoming cheaper and easier to build
with 3D printing, the Maker revolution,
all these kinds of things happening,
which our CEO is really big on.
So that, combined with these techniques becoming mature,
is going to come up with some really cool stuff.
We're going to start seeing The Jetsons kind of thing.
It's kind of neat to think about, really.
I don't want to clean my room, hey robot, go clean my room.
>> John: I'd love that.
>> I'd love that too.
Make me dinner, maybe like a gourmet dinner,
that'd be really awesome.
So we're actually getting to a point
where there's a line of sight.
We're not there yet, I can see it in the next 10 years.
>> So the fog is lifting.
All right, final question, just more of a personal note.
Obviously, you have a neuroscience background,
you mentioned that Go is cool.
But the humanization factor's coming in.
And we mentioned ethics, that came up, we don't have time
to talk about the role of ethics, but as societal changes
are happening with these new impacts of technologies,
there's real impact.
Whether it's solving diseases and farming,
or finding missing children, there's some serious stuff
that's really being done.
But the human aspects of converging with algorithms
and software and scale.
Your thoughts on that, how do you see it?
A lot of people are trying
to really put this in a framework to advance
sociological thinking: how do I bring sociology
into computer science in a way that's relevant?
What are some of your thoughts here?
Can you share any color commentary?
>> I think it's a very difficult thing to comment on,
especially because there are these emergent dynamics.
But I think what we'll see is,
just as social networks have interfered in some ways with,
and actually helped, our interaction with each other,
we're going to start seeing that more and more.
We can have AIs that are filtering interactions for us.
A positive of that is that we can actually
understand more about what's going on in the world around us,
and we're more tightly interconnected.
You can sort of think of it as
a higher bandwidth communication between all of us.
When we were in hunter-gatherer societies,
we could only talk to so many people in a day.
Now we can actually do more, and so
we can gather more information.
Bad things are maybe that things become more impersonal,
or people have to start doing weird things
to stand out in other people's view.
There's all these weird interactions--
>> It's kind of like Twitter. (laughs)
>> A little bit like Twitter.
You can say ridiculous things sometimes to get noticed.
We're going to continue to see that,
we're already starting to see that at this point.
And so I think that's really
where the social dynamic happened.
It's just how it impacts our day to day communication.
>> Thanks to Naveen Rao, great conversation here
inside the Intel AI lounge.
These are the kind of conversations
that are going to be on more and more kitchen tables
across the world, I'm John Furrier with theCUBE.
Be right back with more after this short break.
>> Thanks, John.
(bright music)