Hello, and welcome to Tech Decoded.
I'm Henry Gabb, editor of The Parallel Universe,
Intel's magazine devoted to software innovation.
And today, I'm here with Charlotte Dryden,
who leads a team developing software for visual computing,
from Edge to Cloud.
Charlotte has 20 years of experience in semiconductors
and international business.
She holds a degree in electrical and biomedical engineering
from Vanderbilt University, and an MBA from the University
of San Diego.
Welcome, Charlotte.
Thanks for coming.
Thank you, Henry.
I've done some work in computational linguistics--
essentially, teaching a computer how to read.
You're teaching computers how to see more like humans.
What do you see as the future of computer vision?
So, I work in the computer vision software technology
space.
And our whole goal is to enable machines
to see exactly as we do.
And what that will allow is many,
many things, such as autonomous vehicles, valet copters,
and robots that move as humans move and see as humans see.
So I see a huge set of opportunities there.
Computer vision is obviously
a hot topic right now.
But you and I may define it differently.
How do you define computer vision, as the expert?
All the science and technology that's needed--
both hardware and software--
to enable what we are discussing,
which is allowing machines to see as humans see.
Do you still need a PhD to do it?
No.
I think that this technology is rapidly becoming
more accessible to developers of all types:
the ninja developers who are the PhD experts
(or maybe they're ninjas without PhDs),
the maker community, students, and even just
people who like technology and want to try it out.
I think that there's enough abstraction
in the technologies that we've built
that they're now accessible to many more people.
The barrier to entry seems to have come down quite a bit.
And I agree, part of that has to do with the software
abstraction.
But what else is lowering the barrier to entry
for people who want to get into computer vision?
Yeah, the one thing that's helped
is the reduction in hardware cost.
So, it used to require a big set of servers and a lot of storage
to develop any computer vision technology.
If you look at deep learning, it used
to require very expensive hardware, large amounts
of time, and large pools of data to train a model
to do any sort of object recognition.
But now, processors are faster,
the price of the hardware has come down,
and the hardware is more available
to the average person.
So, with that, we see more innovation
from many different types of developers.
So Charlotte, Intel is heavily invested
in the area of computer vision.
Can you tell us more about what we're doing in this segment?
Yes.
So, Intel has a broad portfolio of hardware and software
for computer vision.
And in addition to the CPU and the GPU for visual computing,
we now have a suite of hardware accelerator options
that can deliver the right performance
and power for the right visual computing workload.
So we have the Movidius IP that we recently acquired,
the Myriad product.
We also have the FPGA technology that we acquired from Altera
a few years ago.
And now, we've recently-- last year--
acquired Mobileye, so that we have computer vision hardware
for autonomous driving.
With that in mind, in the developer products group
we've designed software tools to make
that hardware accessible to computer vision algorithm
developers.
For example, we've developed the Computer Vision SDK,
which includes a large number of OpenCV library functions
that have been finely tuned for all of the various hardware
accelerators.
And then, we have deep learning tools to help with optimizing
trained models for object detection--
or facial recognition, for example--
so that they run best on Intel hardware.
And then we have a host of tools for custom coding.
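To make that concrete, here is a rough sketch of the kind of
workflow Charlotte describes, written against plain OpenCV's
dnn module rather than the SDK itself. The model and image file
names are placeholders, and the Inference Engine backend assumes
an OpenCV build with that support enabled:

```python
import cv2

# Load a trained face-detection model (the file names are
# placeholders for any Caffe model OpenCV's dnn module supports).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "weights.caffemodel")

# Route inference through Intel's Inference Engine and pick a
# target device: the CPU, or a Movidius Myriad VPU.
# This assumes an OpenCV build with the Inference Engine enabled.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # or DNN_TARGET_CPU

# Run one image through the network.
image = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(image, size=(300, 300))
net.setInput(blob)
detections = net.forward()  # one row per candidate detection
```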
You bring up OpenCV,
which has an interesting history,
because it originated in Intel Labs almost 20 years ago.
And as a matter of fact, a few months
after I joined Intel back in 2000,
Gary Bradski, the creator, had just
published an interesting article in Dr. Dobb's Journal
describing the OpenCV library, and the things
it could do to teach your computer how to see.
And at the time, as a new Intel employee, I thought,
I didn't know Intel was doing computer vision.
But now, almost 20 years later, it's
almost the de facto industry standard for computer vision.
It's open source.
It has come a long way.
What are some of the opportunities
now for developers who are using OpenCV for computer vision
apps?
OpenCV is the de facto standard.
A lot of expert technologists over the years
have made OpenCV a great starting point for computer
vision algorithm developers who want
to add any sort of vision function
to their application or their algorithm.
So, OpenCV will continue to evolve, especially
as Intel leverages various hardware
accelerators, from low-power situations
to high-performance compute situations in the cloud.
So OpenCV will continue to evolve,
and will continue to add more and more functions so
that machines can see more like the human eye.
As for the future, I see developers
continuing to leverage OpenCV.
I see us continuing to educate developers of all types
on the use of OpenCV, so that they know that it's accessible
and it's not as hard as it used to be.
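As a minimal sketch of how little code that now takes -- using
plain OpenCV with one of its bundled pre-trained face detectors;
the image file name is a placeholder:

```python
import cv2

# Load a pre-trained Haar cascade face detector that ships
# with the opencv-python package.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Detection runs on a grayscale copy of the image.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detected face comes back as an (x, y, width, height) box.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
```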
See, one of the things I love about OpenCV
is that it makes me feel like a computer vision expert
when I'm not.
I love that.
Most people don't admit that.
They use OpenCV, and then they act
as if they built the whole thing from the ground up.
It's raised the level of abstraction
to allow that to happen.
And I get access to highly-tuned functions
that do computer vision.
Exactly.
What do you see coming with OpenCV.js?
So, I see a lot in its future.
OpenCV.js is a big one, because it
makes computer vision functions accessible to web developers.
So, with everything from the Edge to the Cloud and this whole
Internet of Things, we're going to have a lot of web apps.
And strong vision functions
available to web app developers-- those
are worlds that didn't use to come together.
When you combine those worlds with some of the deep learning
technologies that we have, and other advancements in custom
coding,
I see a bright future for the Internet
of Things with very sophisticated vision functions.
And now that virtual reality and augmented reality
are meeting computer vision, what kind
of compelling applications do you see coming in the future?
So we have useful applications for AR today.
We can take our smartphones, point them
at an image, and get extra information
about that particular image.
Or we can see video streams or other images on top of the image
that we've selected.
That's augmented reality.
And for virtual reality, I think we're just getting started.
We see virtual reality a lot at trade shows.
But the price of the hardware still puts it out of reach for many.
So I see opportunities for that area to evolve.
When you're taking multiple video streams
and combining them with motion detection
to create a surreal environment, that's
very heavy and complicated technology.
But I can see where that would help medicine quite a bit--
or education, or industrial use cases.
And if we change gears a little bit
and think about the socio-technical issues
of living in an era of ubiquitous cameras--
we're under constant surveillance,
cameras are everywhere-- there's certainly good and bad.
But what's your take on what it means
to live in the era of constant surveillance?
And what do you try to convey to your team
as they develop these software solutions?
Some people love that we live in an era of ubiquitous cameras.
Some people just love their selfies, and their videos,
and their cameras.
I, personally, am a little stressed
out by all of the cameras everywhere.
I value my privacy and I value the safety of my data.
I'm guessing I'm not alone.
So, to me, the ubiquity of cameras
brings up the concerns around ethics.
And for my team, which builds developer products,
we need to think about how to help developers
be more responsible when they're developing their vision
applications.
How do we give them the right controls
so that they can give the end user more
choices around privacy and security,
while still providing the convenience that these vision
applications allow?
Well, thank you, Charlotte, for sharing your insights
on computer vision and its broader impact.
I'm Henry Gabb with Tech Decoded.
Please visit our website for more detailed webinars
on computer vision and on Intel's hardware and software
portfolio for visual computing.
[INTEL THEME JINGLE]