[MUSIC PLAYING]
SAM BEDER: Hi, everyone.
My name is Sam Beder, and I'm a product manager
on Android Things.
Today, I'm going to talk to you about Google
services on Android Things, and how
adding these services to your device
can unlock your device's potential.
What I really want to convince you of today
is not only is integrating Google services on Android
Things really, really easy and really, really seamless,
but it can make a huge difference in the use cases
that you can put on your device as well as for your end users.
And I know this year, we have many sessions on Android Things
as well as demos in the sandbox area,
and code labs to learn more about what's
possible on Android Things.
I also know that many of you are coming to this session
already with ideas of devices that you
want to make on Android Things or for IoT devices in general.
And I want to show you today all the compelling use cases
that you can get when you integrate some of these Google
services.
So I'm going to go through a number of services today.
First, I'm going to talk about Google Play services, which
includes a whole suite of tools such as the Mobile Vision
APIs, location services, as well as Firebase.
After that, I'm going to dive into Firebase in a little bit
more detail to show you how the real-time
database that Firebase provides can
allow you to publish and persist data
and events in interesting ways.
After that, I'm going to go into TensorFlow,
and how TensorFlow--
we think-- is the perfect application
of the powerful on-device processing
of your Android Things device to really add intelligence
to that device.
Next, I'm going to talk about Google Cloud Platform
and how, using Google Cloud Platform,
you can train, visualize, and take action
on your devices in the field.
Finally, I'm going to touch on the Google Assistant and all
the amazing use cases that you can
get when you integrate the Google Assistant on Android
Things.
Before I dive into these services,
I want to quickly go over Android Things.
So, Android Things is based on a system-on-module design.
This means that we work really closely with our silicon
partners to bring you modules which you can place directly
into your IoT devices.
Now, these modules are designed so that it's economical to put them
in devices whether you're making millions of units,
doing a very small run, or just prototyping a device.
So earlier today, we actually had a session
specifically on going from prototype to production
on Android Things, which can give you more detail about how
it's feasible to do all this, all the hardware design,
and bring your device to production on Android Things.
The Android Things operating system
is then placed on top of these modules.
So Android Things is a new vertical
of Android built for IoT devices.
Since we work so closely with our silicon partners,
we're able to maintain these modules in new ways,
which allows these devices to be more secure and updateable.
Also, since it's an Android vertical,
you get all the Android APIs you're used to for Android development,
as well as the developer tools and the Android ecosystem.
In addition, on Android Things we've
added some new APIs, such as Peripheral I/O and user
drivers, that allow you to control the hardware
on your device in new ways.
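To give a rough idea of what those Peripheral I/O APIs look like, here is a minimal sketch that drives a GPIO pin high. The pin name "BCM6" is just an example for a Raspberry Pi, and the manager class name is the one used in the preview releases.

    import com.google.android.things.pio.Gpio;
    import com.google.android.things.pio.PeripheralManagerService;

    import java.io.IOException;

    // Minimal sketch: drive an LED on a GPIO pin with Peripheral I/O.
    // "BCM6" is an example pin name for a Raspberry Pi; use the names
    // your board actually exposes.
    public class LedSketch {
        public void turnOnLed() throws IOException {
            PeripheralManagerService manager = new PeripheralManagerService();
            Gpio led = manager.openGpio("BCM6");
            try {
                led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
                led.setValue(true);  // drive the pin high
            } finally {
                led.close();         // always release the peripheral
            }
        }
    }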
We've also added support for a zero-display
build for IoT devices without a screen.
But really the key piece of Android Things, I believe,
is the services on top.
Because of the API surface that Android Things provides,
it makes it much easier for Google
to put our services on top of Android Things.
I say endless possibilities here because not only does Google
already support all the services I'm
going to walk you through today, but any services
that Google makes in the future will be much more portable
on Android Things because of this API surface.
So now, let's start diving into some of these services.
Let's talk about Google Play services and all
the useful tools that it provides.
Google Play services gives you access
to a suite of tools, some of which you see here.
So you get things like the Mobile Vision APIs,
which allow you to leverage the intelligence in your Android
camera to identify people in an image,
as well as faces and their expressions.
You also get the Nearby APIs which,
when you have two devices near each other,
allow those devices to interact with each other
in interesting ways.
You get all the Cast APIs, which let you
cast from your Android device to a Cast-enabled device
somewhere else.
Next, you get all the location services,
which lets you query things like,
what are the cafes near me and what are their hours.
You also get the Google Fit APIs,
which allow you to attach sensors and accelerometers
to your device and then visualize
this data as steps or other activities in interesting ways.
Finally, you get Firebase, which we'll
talk about more in a minute.
Some of you might know about CTS certification
and how CTS certification is a necessary step in order
to get these Google Play services.
With Android Things, because of our hardware model
that I just talked about, these modules
actually come pre-certified.
So they're all pre-certified for CTS,
meaning Google Play services will work right out of the box.
You have to do absolutely no work
to get these Google Play services on your Android Things
device.
We also have, for Android Things,
a custom IoT variant of Google Play services.
Now I actually think this is a pretty big deal.
This allows us to make Google Play services more lightweight
by taking out things like phone specific UI elements
and game libraries that we don't think
are relevant for IoT devices.
We also give you a signed-out experience
of Google Play services.
So, no authenticated APIs, because user sign-in just isn't
relevant for many IoT devices.
So now, let's dive into Firebase in a little bit more detail.
I'm going to walk you through one of our code samples.
So this is the code sample for a smart doorbell using Firebase.
It involves one of our supported boards,
as well as a button and a camera.
So I'm going to walk you through this diagram.
On the left, you see a user interacting
with the smart doorbell.
What happens is, they press the button on the smart doorbell
and the camera takes a picture of them.
On the right, there's another user
who, on their Android phone,
can use an app to connect to a Firebase database
and retrieve that image in real time.
So how does this work?
When you press the button on the smart doorbell,
the camera takes a picture of you.
Then, using the Android Firebase SDK,
which uses the Google Play services APIs
all on the device, it sends this image
to the Firebase database in the cloud.
The user on the other end can then
use the exact same Google Play services and Android Firebase
SDK on their phone to connect to this Firebase database
and retrieve that image.
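As a rough sketch of what that phone side could look like, the app just attaches a child listener to the Realtime Database; the "doorbell/logs" path and field names here are illustrative, not necessarily what the sample uses.

    import com.google.firebase.database.ChildEventListener;
    import com.google.firebase.database.DataSnapshot;
    import com.google.firebase.database.DatabaseError;
    import com.google.firebase.database.DatabaseReference;
    import com.google.firebase.database.FirebaseDatabase;

    // Hedged phone-side sketch: receive each new doorbell event in real time.
    // "doorbell/logs" and the "image"/"timestamp" field names are illustrative.
    public class DoorbellListenerSketch {
        public void listenForRings() {
            DatabaseReference logs =
                    FirebaseDatabase.getInstance().getReference("doorbell/logs");
            logs.addChildEventListener(new ChildEventListener() {
                @Override
                public void onChildAdded(DataSnapshot snapshot, String previousChild) {
                    String imageUrl = snapshot.child("image").getValue(String.class);
                    Long timestamp = snapshot.child("timestamp").getValue(Long.class);
                    // Update the UI with the new image and timestamp.
                }
                @Override public void onChildChanged(DataSnapshot s, String p) {}
                @Override public void onChildRemoved(DataSnapshot s) {}
                @Override public void onChildMoved(DataSnapshot s, String p) {}
                @Override public void onCancelled(DatabaseError error) {}
            });
        }
    }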
In our code sample, we also send this image
to the Cloud Vision APIs to get additional annotations
about what's in the image.
So these annotations could be something like, in this image
there is a person holding a package.
So that can give you additional context about what's going on.
It's pretty cool.
If you actually go and build this demo, you can see.
When you press the button and it takes a picture, in less than a
second the picture will appear.
And then a few seconds later, after the image
is propagated through the Cloud Vision APIs,
the annotations will appear as well.
So to really show you how this works,
I'm going to walk through some of the code that
pushes this data to Firebase.
So the first line you see here is just
creating a new door ring instance
that we're going to use in our Firebase database.
Then, all we need to do to make this data appear
in our Firebase database is set the appropriate fields
of our door ring instance.
So here you can see, in the highlighted portion,
we're setting the timestamp field to the server timestamp
and the image field to the image URL,
so that this image as well as the timestamp
will appear in our Firebase database,
to be retrieved by the user on the other side.
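In code, that push could look something like the sketch below; the "doorbell/logs" path and the field names are illustrative, and imageUrl is assumed to already point at the uploaded picture.

    import com.google.firebase.database.DatabaseReference;
    import com.google.firebase.database.FirebaseDatabase;
    import com.google.firebase.database.ServerValue;

    // Hedged device-side sketch: log one doorbell ring. The database path and
    // field names are illustrative; imageUrl points at the uploaded picture.
    public class DoorbellLoggerSketch {
        public void logRing(String imageUrl) {
            DatabaseReference ring = FirebaseDatabase.getInstance()
                    .getReference("doorbell/logs")
                    .push();                       // new entry with a unique key
            ring.child("timestamp").setValue(ServerValue.TIMESTAMP);  // server time
            ring.child("image").setValue(imageUrl);
        }
    }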
As I mentioned in our code sample,
we also send our images to the Cloud Vision APIs
to get those annotations.
So, we do that by calling the Cloud Vision APIs
and then simply setting the appropriate field
for those annotations so that that additional context
will appear as well for the user on the other end.
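Continuing the sketch above, the annotation step might look like this; annotateImage() is a hypothetical helper standing in for the Cloud Vision call, not an API from the sample.

    import com.google.firebase.database.DatabaseReference;
    import java.util.Map;

    // Hedged sketch: attach Cloud Vision labels to an existing doorbell entry.
    // annotateImage() is a hypothetical helper that calls the Cloud Vision API
    // and returns a map of label -> confidence.
    public class DoorbellAnnotatorSketch {
        public void addAnnotations(DatabaseReference ring, byte[] imageBytes) {
            Map<String, Float> annotations = annotateImage(imageBytes);
            ring.child("annotations").setValue(annotations);
        }

        private Map<String, Float> annotateImage(byte[] imageBytes) {
            // Call the Cloud Vision API here and convert the response to a map.
            throw new UnsupportedOperationException("sketch only");
        }
    }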
So, Firebase is one of the many Google Play services
that you get with Android Things.
But in the interest of time, I can't talk about
all the Google Play services.
So instead, I want to move on to TensorFlow.
We really think that TensorFlow is the perfect application
for the on-device processing of your Android Things device.
So, as you've heard from some of the previous talks on Android
Things, Android Things is not really
well suited if you're just making a simple sensor.
To fully utilize the Android Things platform,
it should be doing more.
There should be some intelligence on this device.
You might wonder, though, if you're making an internet-connected
device-- an IoT device--
why do you actually need this on-device processing?
There's actually several reasons why
this could be really important.
One reason has to do with bandwidth.
If, for example, you're making a camera that's
counting the number of people in a line
and you just care about that number,
by only propagating out that number
you save huge amounts on bandwidth
by not needing to send the image anywhere.
The second reason for on-device processing
has to do with when you have intermittent connectivity.
So if your device is only sometimes connected
to the internet, for it to be really functional
it needs to have on-device processing for when
it's offline.
The next reason for on-device processing
has to do with the principle of least privilege.
So if you, again, had that camera where all you care about
is the number of people standing in a line,
by the principle of least privilege
you should only be propagating that number,
even if you trust the other end where you're sending it.
There's also some regulatory reasons
where this could be important for your use case.
The final reason for on-device processing
has to do with real-time applications.
So if you're, for example, making
a robot that has to navigate through an environment,
you want to have on-device processing
so that if something comes in front of that robot,
you'll be able to react to the situation.
Again, I want to mention that we have a code lab for TensorFlow
and Android Things.
So you can try it out in the code lab area or at home.
But to really show you TensorFlow in action,
I actually want to do a live demo so we can really
see that it works.
So what I have here--
it's a pretty simple setup.
We have one of our supported boards, which
is a Raspberry Pi in this case, as well as a button, a camera,
and a speaker.
The button's here on top.
The camera is actually located in this little Android head's
eye.
So it's in its eye right there.
And then the speaker's in its mouth.
So what's going to happen is, when I press the button,
the camera will take a picture.
That image is then sent through a TensorFlow model located
locally on the device.
And then the speaker will say what that TensorFlow
model thinks it saw.
So for you here today, I have pictures of various dog
breeds, because the TensorFlow model I have locally on this device
is what's called the Inception model.
Now, the Inception model is a model provided by Google
that's able to identify thousands of objects, including
dog breeds.
So let's see if it can do it.
I just need to line up the image and--
GOOGLE ASSISTANT: I see a Dalmatian.
SAM BEDER: All right.
So for those of you who couldn't see--
Yeah.
[APPLAUSE]
Deserves an applause.
It is, in fact, a Dalmatian.
But let's do it one more time to show you that it, you know,
can do more than just one dog breed.
So this time I have a French bulldog.
All right.
Line it up again.
Hope for the best.
GOOGLE ASSISTANT: Hey, that looks like me.
Just kidding.
I see a French bulldog.
[APPLAUSE]
SAM BEDER: All right.
Yeah.
Good job, little guy.
So as I mentioned, this is all running totally locally.
So this is not connected to the internet at all,
and since this is battery powered, it's totally portable.
So I think that this example really
shows you some of the power you can get with TensorFlow.
So now, let's actually walk through some
of the code that makes this integration possible.
This first page, as you can see, is pretty simple.
And this just shows us loading up the appropriate TensorFlow
library to be used by our device.
The first thing I want you to note here
is that we're actually only loading the same libraries
as are used by Android.
So, all the TensorFlow code that works on Android
will also work on Android Things.
All of the samples that you already
have on Android for TensorFlow you can import immediately
to Android Things.
The second thing I want you to note
is that here we're actually only loading
in the inference libraries of TensorFlow.
TensorFlow is basically composed of two sets of libraries.
There are the training libraries, where you give the model
thousands of images along with labels,
so you can build that model that can make predictions.
And then there's the inference libraries,
where you're using that model that you trained to actually
make those predictions.
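As a rough sketch of that setup, the loading code is the same code you would write on a phone; the asset path is illustrative, and the constructor form shown here is the one from more recent versions of the library.

    import android.content.res.AssetManager;
    import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

    // Hedged sketch: load the frozen Inception graph with the same TensorFlow
    // Android inference library that phone apps use. The asset path is
    // illustrative and depends on how you package the model.
    public class ClassifierSetupSketch {
        private static final String MODEL_FILE =
                "file:///android_asset/tensorflow_inception_graph.pb";

        private final TensorFlowInferenceInterface inferenceInterface;

        public ClassifierSetupSketch(AssetManager assets) {
            // Loads the native inference library and imports the graph.
            inferenceInterface = new TensorFlowInferenceInterface(assets, MODEL_FILE);
        }
    }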
So now, let's go through some of the core functionality
to actually do those predictions.
So these are the steps to actually run input data
through a TensorFlow model.
The first method you see there, the feed method,
is where you're actually loading in your input data.
So we have three arguments.
There's the input layer name, which
is simply that first layer of your TensorFlow model
where you're going to put your input data.
Next, there are the tensor dimensions, which simply describe
the structure of your input layer
so you can understand what's going into your model.
Then you have image pixels, which
is the actual input data which you are
going to make predictions on.
So here in our case, since we're taking a picture, of course
the input data is pixels.
But this same type of TensorFlow model
will work across many use cases.
So if instead you had just sensor data or a combination
of sensor data and camera data, you
could use the same type of TensorFlow model
and it would still work.
So the next slide, the highlighted portion,
is where the actual work gets done.
We call the run method
to actually run this input data through our TensorFlow model
and get that prediction on the other side.
So here, we just need to provide the output layer
where we want the data to go.
Finally, we need to fetch our data so we can use it.
So we call the fetch method along with an output array
to store our data.
Now, this output array is composed
of elements that correspond to the confidence
that a given object is what we saw in the image.
So in our first example, we predicted Dalmatian.
That means that the element with the highest confidence
was the one that corresponded to Dalmatian.
You could actually do a little bit more nuanced things
with these results.
So for example, if there's two results that
both were highly confident, you could say,
I think it's one of these two things.
And if there were no results above a certain threshold
of confidence, you could say, I don't
know what's in this image.
So even once you have your output of confidences,
you can do a little bit extra depending on your use case.
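Putting those three calls together, a minimal sketch looks like this; the layer names ("input", "output"), the 224x224x3 dimensions, and the 0.1 confidence threshold are illustrative and depend on the model you load.

    import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

    // Hedged sketch of the feed / run / fetch sequence described above.
    public class InferenceSketch {
        public String classify(TensorFlowInferenceInterface tf,
                               float[] imagePixels, String[] labels) {
            // 1. Feed the pixels into the input layer
            //    (batch of 1, 224x224 pixels, 3 color channels).
            tf.feed("input", imagePixels, 1, 224, 224, 3);

            // 2. Run the graph up to the named output layer.
            tf.run(new String[] {"output"});

            // 3. Fetch the per-label confidences into an output array.
            float[] confidences = new float[labels.length];
            tf.fetch("output", confidences);

            // Post-processing: pick the most confident label, or admit uncertainty.
            int best = 0;
            for (int i = 1; i < confidences.length; i++) {
                if (confidences[i] > confidences[best]) {
                    best = i;
                }
            }
            return confidences[best] > 0.1f ? labels[best] : "not sure";
        }
    }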
So as I mentioned, this demo is running completely locally.
But I think that there's actually
more interesting things that we can do once we also connect
devices like this to the cloud.
So next, I want to talk about Google Cloud
Platform and specifically Cloud IoT Core.
So Cloud IoT Core is a new offering
that we're announcing here at I/O that's specifically
for connecting IoT devices to the Google Cloud Platform.
Now, Cloud IoT Core has a number of features.
You get things like MQTT protocol support.
MQTT is a lightweight protocol that's
used for communications in many industrial settings.
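As a hedged sketch of what that connection can look like with the Eclipse Paho client: the project, region, registry, and device names below are placeholders, and createJwt() is a hypothetical helper that signs a token with the device's private key.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    // Hedged sketch: publish one telemetry message to the Cloud IoT Core MQTT
    // bridge. The project/region/registry/device IDs are placeholders, and
    // createJwt() is a hypothetical helper that signs a short-lived JWT with
    // the device's private key (the bridge uses the JWT as the MQTT password).
    public class TelemetrySketch {
        public void publish(byte[] payload) throws MqttException {
            String clientId = "projects/my-project/locations/us-central1"
                    + "/registries/my-registry/devices/my-device";
            MqttClient client =
                    new MqttClient("ssl://mqtt.googleapis.com:8883", clientId);

            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName("unused");                  // ignored by the bridge
            options.setPassword(createJwt().toCharArray()); // JWT as password
            client.connect(options);

            MqttMessage message = new MqttMessage(payload);
            message.setQos(1);
            client.publish("/devices/my-device/events", message); // telemetry topic
            client.disconnect();
        }

        private String createJwt() {
            // Sign a short-lived JWT with the device private key (omitted here).
            throw new UnsupportedOperationException("sketch only");
        }
    }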
Cloud IoT Core is also a 100% managed service.
This means you get things like automatic load balancing
and resource pre-provisioning.
You can connect one device to Cloud IoT Core or a million
devices, and all these things still work the same way.
There's also a global access point,
which means that no matter what region your device is in,
it can use the same configurations
and connect to the same Google Cloud.
Cloud IoT Core also comes with a Device Manager
that can allow you to interact with your devices in the field.
So you get things like the ability
to configure individual devices that you have in the field,
as well as control those devices,
set up alerts, and set up role-level access controls.
Role-level access controls could be something
like allowing one user to have read and write
access over a set of devices, while another user
has only read access over a subset of those devices.
So as I mentioned, Cloud IoT Core
also connects you to all the benefits
of Google Cloud Platform.
This diagram shows you a bunch of the benefits
that Google Cloud Platform provides.
And I'm not going to go through all of them,
but just to point out a few.
You get things like BigQuery and other big data tools
that allow you to take in all the data that you're gathering
from your Android Things devices and then visualize and query
over that data.
You also get CloudML, to make even more complicated machine
learning models based on all the data you've collected
using the power of the cloud.
Finally, you get all the analytics tools
that Google Cloud Platform provides,
to visualize and set up alerts on your data
and take action on the devices you have in the field.
So to understand these analytics a little bit better,
I'm going to go through one more demo.
So this demo is actually running live in our sandbox area.
And this is just a screenshot of it working.
What we've done here is we've set up
a bunch of environmental stations
running on Android Things and spread them
around Mountain View campus.
Now, these environmental stations
have a bunch of sensors on them, things
like a humidity sensor, temperature sensor, air
pressure sensor, luminosity sensor, and motion detection.
And then we're able to aggregate all this data in the cloud
by connecting the stations through Cloud IoT Core.
So on the left, you can see some of the data
from these devices that we were able to aggregate.
We can also see average temperatures
and other analytics on our data.
We can also dive into one specific device
to really see more data on what's
going on with that device as well as more time series
data on how that device has performed over time.
You might notice, though, that this demo shows you
really well that you can connect these devices to Google Cloud.
But it doesn't really utilize the on-device processing
that I talked about with my TensorFlow demo.
So next, I want to go over a few more examples that
show you these two services working together.
Because when you combine TensorFlow and Google Cloud
Platform, I think you can do some really amazingly
powerful things.
So my first example kind of extends
this environmental station demo that I just walked you through.
Imagine that instead of just putting these environmental stations
around, we actually connected them
to a smart vending machine.
We could then use all the input
data from our environmental station
to drive a machine learning model, using TensorFlow, running
locally on the device.
You could predict things like supply and demand
based on that vending machine's environment,
and then optimize when this vending
machine would be restocked.
You could also connect all of your vending
machines to the cloud and do even more complicated analysis
on those vending machines.
You could do inventory analysis to figure out
which items are performing best in which environments,
and you could also do even better prediction models
based on all the data you're collecting.
This is actually a perfect example
to do what we call federated learning.
So, federated learning is when we have multiple machines that
are all able to learn locally, but based
on that local learning, we can aggregate
what they've learned to make an even better machine learning
model in the cloud.
So here, you can imagine having one vending machine in a school
and another vending machine in a stadium,
and both vending machines would have very personalized models
based on their environment.
But they would also both benefit from each other
by aggregating their data in the cloud.
This is also a good example that shows
you can do interesting things without a camera, just using
sensor data.
But my next example covers a camera use case,
because I think that cameras are a perfect application for doing
some of this on-device processing.
So imagine you have a grocery store.
And the grocery store puts up cameras
to count the number of people standing in line.
This camera would use a TensorFlow model
that's locally able to count the number of people
in the image and propagate that number to the cloud.
You could use this data to open the optimal number of registers
at any given time so you never have
to wait in line at the grocery store again.
With all of your aggregated data,
you could also do more complicated machine
learning models.
You could predict how many people
you should staff at your grocery store on any given day.
You could also see how optimally each grocery
store is performing and the differences
between grocery stores.
This could even be useful for the shoppers--
the end users.
You can imagine making a mobile app
where, at home, you can check how long the grocery store
line is so that you never are frustrated
by having to wait in line because you'll know in advance
what the situation will be.
The next use case I want to go over
broadens this camera example a little bit more
and applies it to an industrial use case.
So imagine a factory that, let's say, makes pizzas.
And we add a camera that's able to do quality control
to increase both the quality and the efficiency
for this industrial application.
I should note that we have another talk that's
specifically on enterprise use cases on Android Things.
So you should listen to that talk
if you want to know more about what's
possible on Android Things for some
of these industrial applications.
So in this case, we would have a TensorFlow model
that's locally able to learn how to accept and reject pizzas by,
for example, counting the number of toppings on each pizza.
So as we see some of these pizzas go by,
most of them, we'll see, will have six tomatoes and five olives.
And so they're accepted.
But then soon, we'll come to one-- this one--
that has too few tomatoes and too many olives.
So we reject that pizza.
We could also propagate this data
to the cloud to do more analysis, such as tracking
our throughput and flagging if our error rate goes
above a certain threshold, meaning we want
to do a manual check on our machines.
There's one more use case I want to go over
that uses machine learning in a slightly different way.
So that's going to be reinforcement learning applied
to an agricultural use case.
So imagine we have a field that has
a bunch of moisture sensors in the ground,
as well as sprinklers.
And these are all connected to a central hub
running Android Things.
Now, this Android Things hub could
do some machine learning to optimize
exactly when and how much
water each sprinkler should output
to maximize our crop growth.
You may have heard of DeepMind.
Sundar actually mentioned it in his keynote
as a company at Alphabet that recently
made AlphaGo, which beat the best Go player in the world.
Now, this used reinforcement learning
in really powerful ways.
And I think that reinforcement learning
is an amazing tool that could also be used on Android Things
really well.
With reinforcement learning, you could discover some nuanced use
cases, such as--
imagine your field had a hill on it.
In that case, you may actually want
to water the crops at the bottom of the hill
less than those at the top,
because the sprinklers at the top of the hill
might have runoff water that adds
extra water to the crops at the bottom.
So Android Things makes integrations
like these really seamless, and provides you
the tools to do anything that you imagine.
And I think that using things like TensorFlow and the cloud
together can enable some really amazing use cases that you
can't achieve with just one.
Combining these services can do so much more for your device
and for your end users.
There's one more service I want to talk about today,
and that's the Google Assistant.
So Android Things supports the Google Assistant SDK.
Now, there is a huge number of use cases
that we think the Assistant can enable for you.
It allows you to connect to all the knowledge of Google,
as well as to control the devices in your home.
Again, we have a code lab that goes
over getting Android Things to work with the Google Assistant.
So you can do it at home or you can do it in our sandbox area.
We also partnered with AIY, which
is a group at Google that makes kits
for do-it-yourself artificial intelligence makers.
And so what you see on the screen here is the kit
they recently released--
the voice kit-- that is one of the easiest ways
that you can get started with Android Things
working with the Google Assistant.
Before I end my talk today, I want
to go over one more feature of Android Things,
and that's the Android Things Developer Console.
The Android Things Developer Console
brings all these services together.
It's our new Developer Portal, which
we're going to release soon, that
lets you add all these services to a device in a really
simple way.
The key with the Android Things Developer Console
is customization.
You get ultimate control over exactly what services
will go on your device when using the Android Things
Developer Console.
You also get device management and updates.
This allows you to create your projects,
as well as upload your own APKs for your own device
functionality and push those feature updates
to your devices in the field.
The Android Things Developer Console
is also where you'll get all the updates from Google.
So these are the security updates
and the feature updates that will make your devices secure.
Now, since you get total control with the Developer Console,
you get to choose which updates you take
and exactly when these updates push out.
But I believe that the customization of the Developer
Console gives you the control to really create anything
that you can imagine, unlocking the unlimited potential
of what we think is possible on Android Things,
especially when combined with Google services.
So to summarize, Android Things gives you
that platform that makes hardware development feasible.
It gives you all the Android APIs
to make your development process easy,
combined with this system on module design
to make it quick and economical to make a prototype
and also bring that device to production.
But the services on top, I believe,
are the huge factor that allows you to really
innovate and enhance your device as well as bring
new features to your users.
So we have Google Play services, which
gives you this suite of tools like the mobile vision APIs,
location services, as well as Firebase.
You get TensorFlow, which uses the powerful on-device
processing of your Android Things
device to add that intelligence to your device.
You also get Google Cloud Platform, and specifically
Cloud IoT Core to connect your device
to the even greater intelligence of the cloud.
And finally, you get the Google Assistant,
the latest and greatest in Google's
personal assistant technology.
All these services, and any that come in the future,
will fit on top of Android Things
to unlock this potential of your device.
I want to leave you today with my call to action.
We have a huge number of sessions
on Android Things this year, as well as demos and code
labs for you to learn more about what's
possible on Android Things.
We also have a developer site where
you can visit to download the latest Android Things image
and start making your idea.
I encourage you to add some of these Google services
to your device to see how powerful they really can
be, and then tell us about it.
Join our developer community, where thousands of people
are already asking questions, sharing their ideas,
sharing their prototypes, and getting feedback.
Again, I'm Sam Beder.
And I look forward to hearing about all the amazing devices
that you're building on Android Things that integrate
these powerful Google services.
Thank you.
[APPLAUSE]
[MUSIC PLAYING]