[MUSIC PLAYING]
EITAN MARDER-EPPSTEIN: How's everyone doing today?
Yeah?
Good.
All right.
Well, welcome to Google I/O. My name is Eitan Marder-Eppstein.
And I am an engineering manager here at Google.
And I work on augmented reality.
And I'm going to take a few polls throughout this talk.
And the first one is how many of you
are familiar with augmented reality in general?
OK.
Every time I give a talk like this, more hands go up,
which is a really, really great thing.
And today, what I'm going to do is
give a quick refresher about augmented reality for those
of you who maybe aren't quite as familiar with it,
and especially how augmented reality relates to smartphones,
which is something that we're really, really
excited about here at Google.
And then I'm going to talk about some of the things
that we're doing at Google to improve
our platform for augmented reality and the capabilities
that we give to some of these devices.
All right.
So I need my clicker.
So I'm actually going to go over here to get
the presentation started.
But off we go.
So smartphone AR stems from this observation
that over the last decade, our phones
have gotten immensely more powerful,
CPUs and GPUs have improved a lot.
But the ability of phones to see and understand
their environments, and really make sense of the world
around them, until very recently was
largely unchanged and limited.
So if you pointed your phone at this table,
it would allow you to take a picture of the table
or even a video of your friend climbing over the table.
But your phone wouldn't really have an understanding
of the geometry of the table, of its position
relative to the table as it moves through space.
And so what augmented reality seeks to do on smartphones
is to take all of this amazing advancement in computing power
and leverage it to bring new capabilities to your phone,
and to take your phone beyond just the screen,
beyond its own little box, and expand it
to understand the world around it.
So now, when my phone looks at this table,
it can see that there's a surface there,
that there are chairs next to it.
And as I move through the environment,
my phone can actually track its position as it moves.
And we think at Google that augmented reality
is really exciting.
And we've been excited to see some of the stuff
that you've built. And we've categorized it
into two main buckets where we think
augmented reality can be really, really great for applications.
So the first bucket is we think that augmented reality can
be useful on smartphones.
So recently, I was remodeling my kitchen.
All right, another poll-- how many of you
have remodeled anything in a house?
All right.
So if you've done that, you know that taking measurements
is a real pain.
And what I needed to do was measure for a backsplash.
We were buying some subway tile for our kitchen.
And I, instead of taking a tape measure out,
actually pulled out my phone, went to my counter,
and measured from point A to B to C. And I did all of that
without moving any of my appliances
where I would have normally had to move in order
to get an accurate measurement with my tape measure.
So AR can be useful in that way, just
from providing a better geometric understanding
about your environment.
AR can also be useful for shopping applications.
So recently, we had some very old chairs at my house.
And my partner and I were looking to replace them,
kind of like these chairs here.
And we were getting into a debate over which
chairs we liked more.
And so with augmented reality, we
were able to take a 3D model of a chair,
place it in the environment, see the exact size and scale
and color.
And we could have our inevitable arguments
about what kind of chair we would have at home
rather than having them in front of everyone at the store,
be more targeted about how we made our purchase,
and even buy this furniture online and feel much more
comfortable with it.
So that's how AR can just provide more utility
in your daily life.
But AR can also be fun.
So imagine a character running across the floor,
jumping onto this chair, and jumping onto this table,
or me sitting in one of these chairs
and having the floor drop out from under me
to create an ice fishing game.
Ice fishing sounds a little bit boring,
but I can tell you that in this game,
it's actually a lot of fun.
And AR can also be used for creative expression.
So here, now in your pocket, you have the ability
to go out and create new things that
previously could only be created by professionals.
So you can create computer-generated content
on the go, on the fly.
You can take your favorite character
and put them into your scene, and have
your friend pose next to them.
Or you can take pizza or hot dogs or your favorite food
items, as we showed here, and put them
on the table in front of you.
But now, you have this amazing video editing capability
in your pocket.
And for those of you who have seen our AR Stickers
application on the Google Pixel phone,
you know what I'm talking about.
And for those who haven't, please check it out.
It's really, really cool to have this creation
power in your pocket.
All right.
So that's great.
AR can be useful.
AR can be fun.
But how do you actually build applications for AR?
How do you get involved as developers?
This is a developer conference.
So how many of you are familiar with ARCore, when I say ARCore?
All right, about half of you.
So ARCore is Google's development platform
for augmented reality.
We want to make it easy for you to build applications that
take advantage of these new capabilities
that phones provide, of the ability of phones
to see and understand their environments,
and to build applications that actually
react to this understanding.
And ARCore was launched a few months ago.
And it provides three main capabilities
to allow you to do this.
The first is something we call motion tracking.
So here, consider the example of taking the Scarecrow
from "The Wizard of Oz" and wanting
to place the Scarecrow at a taco stand
and make it seem like he's waiting in line for tacos
because everyone loves tacos.
So here, if I look at the Scarecrow with my phone,
ARCore actually understands its position
relative to a virtual object that I've placed in space.
So as I move a meter forward, the phone
knows that I've moved a meter in this direction.
And as I turn left, the phone also knows that.
It's able to track its motion as I move through space.
And now, if I combine that with my desire
to place the Scarecrow a meter in front of me,
I can put the Scarecrow right here.
And as I move my phone around, I can
change where I'm rendering the Scarecrow in the virtual scene
to match my physical environment.
So that allows you to register virtual objects
to your physical scene in a very natural and intuitive way.
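As a minimal sketch of that flow with ARCore's Java API (assuming an active Session and the current Frame already exist in your app; rendering the Scarecrow itself is up to you):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Pose;

// "Place the Scarecrow a meter in front of me": compose the camera's
// current pose with a one-meter translation along -Z (the direction
// the camera faces in its own coordinate frame).
Pose oneMeterAhead = frame.getCamera().getPose()
        .compose(Pose.makeTranslation(0, 0, -1));

// Anchor the Scarecrow there. As the phone moves, ARCore keeps the
// anchor fixed in the world, so the rendered Scarecrow stays put.
Anchor scarecrowAnchor = session.createAnchor(oneMeterAhead);
```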
The second capability that ARCore provides
is something called lighting estimation.
So here, continuing our "Wizard of Oz" theme,
we've got the Cowardly Lion.
And when you turn off the lights,
say we want to make the lion afraid because it's cowardly.
So here, ARCore is looking at the camera feed
and it is estimating the real world
lighting of your environment.
And with that estimate, ARCore can now
light characters in a realistic fashion,
helping you to build a more immersive experience that
looks natural because the virtual objects that you're
putting in your scene look correct.
So you can see the tone on the lion change
when it goes from light to dark.
And you can even script interactions
for your characters.
In this case, making the lion afraid when the lights go off.
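A rough sketch of consuming that estimate each frame (the lion renderer hooks and the darkness threshold are hypothetical app code, not ARCore APIs):

```java
import com.google.ar.core.LightEstimate;

// Read ARCore's estimate of the real-world lighting for this frame.
LightEstimate estimate = frame.getLightEstimate();
if (estimate.getState() == LightEstimate.State.VALID) {
    // Average intensity of the camera image, roughly 0.0 (dark) to 1.0 (bright).
    float intensity = estimate.getPixelIntensity();
    lionRenderer.setLightIntensity(intensity); // hypothetical renderer hook
    if (intensity < 0.2f) {
        lionRenderer.playAnimation("afraid");  // hypothetical scripted reaction
    }
}
```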
And the third capability that ARCore provides
is environment understanding.
So here, as ARCore is moving around the world
and it's tracking its motion and it's also
estimating the lighting of the environment,
ARCore is also trying to recognize surfaces.
So ARCore might recognize this plane
below me which is the ground, or this surface here
which is the table, or even maybe
this vertical surface behind me.
And it allows you to place objects
that are grounded to reality.
So if we want to place the Android character
on this table, I can detect the surface
and actually place my virtual character on a physical object
in the world.
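A sketch of that placement flow, assuming a tap at screen coordinates tapX and tapY and the current Frame:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Trackable;

// Hit-test the tap against surfaces ARCore has detected, and anchor the
// Android character to the first plane the tap actually landed on.
for (HitResult hit : frame.hitTest(tapX, tapY)) {
    Trackable trackable = hit.getTrackable();
    if (trackable instanceof Plane
            && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
        Anchor tableAnchor = hit.createAnchor();
        // Render the character at tableAnchor's pose each frame.
        break;
    }
}
```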
So those are three capabilities-- motion tracking,
lighting estimation, and environment understanding.
And when you combine them together,
it allows you to build these experiences that
were previously impossible, that bring
the virtual and physical worlds together
and meld them into a new reality that
enables people to see and experience your application
in a new and different light.
And we're really excited about this and the opportunity
to bring apps to our ecosystem for it.
And so we have worked really, really
hard to expose support for ARCore
on as many devices as possible.
And with help from our partners in our Android OEM ecosystem,
today ARCore is supported on over 100 million devices.
And we're working to increase that number every single day.
We believe that augmented reality
is the next shift in computing, and that soon everyone
will take for granted that this power is in their devices.
So that's our scale.
But we're also interested in scaling
the capabilities of ARCore.
We want to teach ARCore to do new and interesting things.
And that's what the rest of the talk is going to be about.
So today, we're announcing some new things in ARCore.
And they fall broadly into two categories.
The first is we're announcing some new capabilities
for ARCore, improving what these devices can do.
And those are Augmented Images and Cloud Anchors.
And we'll talk about them in the talk today.
And then we're also announcing some new tools for ARCore.
One new tool is how you can use augmented reality
on the web, which we think is really exciting.
And you can check out a talk on that later today at 12:30 PM.
And another is how you can more easily
write 3D applications for Android and AR specifically.
We've introduced our Sceneform library,
which is a helper library for 3D rendering on Android.
And we encourage you to check out that talk at 5:30 today.
So enough about the preamble.
We're now going to get into the meat of it
and talk about what's new in ARCore.
And I'm going to kick it off with our first feature, which
are augmented images.
So augmented images stem from your feedback.
We've heard you as you develop augmented reality applications,
ask us, hey, AR is great.
Wouldn't it be better if we could also
trigger augmented reality experiences off
of 2D images in our environment, like movie posters
or textbooks?
And so augmented images seek to do just that.
Augmented images provide a mechanism
to take a 2D texture in the world
and make it more engaging by expanding it
to a 3D interactive object.
And to show a concrete example of this,
consider the case where we have a new children's toy.
It's called Castle Toy, I think.
And we have told ARCore, hey, we want
you to recognize the surface of this Castle Toy box.
So now, as part of the product, you
can hold up your phone to it and you can actually
have an immersive experience come out
of that box, a more engaging experience for your product.
So augmented images allow you to detect these kinds of textures.
And then script behaviors, and take this 2D flat surface
and turn it into 3D, which we think is really exciting.
And it's based on your feedback.
You told us that you wanted this feature and now we have it.
So that's the feature in a nutshell.
But I want to tell you about how it works
and also how you can use it in your applications.
So augmented images fundamentally
work in three major steps.
The first step is you need to tell ARCore what
images you're interested in.
And there are two ways that you can do this.
The first way to do this is to tell ARCore
that you want to detect certain kinds of images in real time.
So you could download an image from a server.
You could have it bundled in your application.
And you tell ARCore at runtime that,
hey, please load this image, learn
how to detect it in the scene, and tell me when you do.
The second option is to tell ARCore in advance.
So we've provided tools where you, on your desktop computer,
can take up to 1,000 images and train ARCore on them
in an offline fashion, saying, I would
like you to be able to recognize any of these 1,000 images
when I run my application on device.
All right.
So the next step is now that we've
trained ARCore to recognize these images,
we actually want to detect them on device.
We want to show ARCore a scene and have it detect
the images that we've trained.
So now, when ARCore moves around the environment
with your phone, ARCore will also
look for textures in the environment
and try to match those to the textures that you trained on.
And when it finds a match, ARCore
provides you information on that match
with the third step, which is it gives you a tracked object.
So for those of you who are familiar with ARCore,
tracked objects are a notion for the physical objects
in space that ARCore knows about.
To this point, that's been planes
like these surfaces, both horizontal and now vertical.
But it can also give you points of interest in the environment
that you can attach to.
And now, an augmented image is just another tracked object.
So you use it just like you would
use any plane or any point.
And you can attach your virtual content
to the detection of the physical object in the world.
So that's it.
Really simple, three simple steps--
number one, tell ARCore what you're looking for.
Number two, have ARCore detect objects in the scene.
And number three, attach your virtual content
to these physical objects.
And because this is a developer conference,
I want to show you those same steps in code.
We're going to go through them in Java really quick.
But this is also the same for Unity and Unreal.
The concepts apply across all of our development environments.
So we'll go through the same exact steps again.
Step number one is you need to add images to ARCore's memory.
You need to tell it what images it's interested in.
And so here, we're creating this new augmented images database
and just adding an image to it.
And we're doing this in real time on the phone.
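A minimal sketch of that runtime flow in Java (exception handling elided; "poster.jpg" is a stand-in for whatever image your app bundles or downloads, and the session and context variables are assumed to exist in your app):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;

// Step 1: build a database and teach ARCore one image at runtime.
Bitmap poster = BitmapFactory.decodeStream(context.getAssets().open("poster.jpg"));
AugmentedImageDatabase imageDatabase = new AugmentedImageDatabase(session);
imageDatabase.addImage("poster", poster); // each added image costs compute time

// Point the session config at the database so detection can begin.
Config config = new Config(session);
config.setAugmentedImageDatabase(imageDatabase);
session.configure(config);
```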
Now, this is a little bit expensive.
You have to pay a cost, computationally,
for each image you add.
So a little bit later, I'll also show you
how to create it with the alternate flow on the computer.
But once ARCore has a database of images that it can detect,
we go to the second step.
So the second step is ARCore is always looking
for those images for you.
And you can get them from the frame, each and every frame
that ARCore sees in the world.
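In code, step 2 is a minimal per-frame check (exception handling elided):

```java
import java.util.Collection;
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.Frame;

// Step 2: each frame, ask ARCore which of your trained images it has
// newly detected or updated in the camera view.
Frame frame = session.update();
Collection<AugmentedImage> images =
        frame.getUpdatedTrackables(AugmentedImage.class);
```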
So now, you've got a list of all the augmented images
in the scene.
And you want to attach virtual content to it.
So that brings me to the third step.
So for step number three, you just
take the augmented images, the augmented image that you want.
And you create an anchor off of it.
And then you can attach virtual content to that anchor.
And it's the same as you would for any kind of plane detection
or point detection that you've been used to in the past.
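And step 3, sketched the same way:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.TrackingState;

// Step 3: once an image is tracked, anchor content at its center,
// exactly as you would with a detected plane or point.
for (AugmentedImage image : images) {
    if (image.getTrackingState() == TrackingState.TRACKING) {
        Anchor anchor = image.createAnchor(image.getCenterPose());
        // Attach your virtual content to this anchor in your renderer.
    }
}
```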
So that's it, three simple steps.
And if you want to do the pre-computation
on the computer, this is what you run.
So there's a command called build-db.
And you can pass up to 1,000 images into this command.
And it'll build an image database in advance
that you can then load in ARCore using this code.
So this loads the database from file, pulls it in.
It's computationally efficient because ARCore has already
done the work that it needs to do to recognize
these images later.
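Roughly, that offline flow looks like this (the arcoreimg flag names are approximate, myimages.imgdb is a hypothetical asset name, and exception handling is elided):

```java
import java.io.InputStream;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;

// Built beforehand on your desktop, for example:
//   arcoreimg build-db --input_image_list_path=images.txt \
//                      --output_db_path=myimages.imgdb

// At runtime, deserialize the prebuilt database instead of adding
// images one by one.
InputStream dbStream = context.getAssets().open("myimages.imgdb");
AugmentedImageDatabase imageDatabase =
        AugmentedImageDatabase.deserialize(session, dbStream);

Config config = new Config(session);
config.setAugmentedImageDatabase(imageDatabase);
session.configure(config);
```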
And now, you can be off and running with the same two
steps that we showed before, which are detecting the image
and then placing content relative to it.
All right.
Pretty simple.
Now, I want to show you a demo of this in action.
So we're going to switch to the Pixel phone here.
And we're going to run this augmented images demo.
So here, we've actually trained ARCore to recognize this poster
on the wall.
And so when I look at the poster,
you can see that it fades out and it goes from 2D into 3D.
And now as I move, the perspective that I see changes.
So I've got a 3D object coming out of this 2D texture.
Nothing's really changed in the world.
But I can make it more engaging and immersive.
All right.
So that's the demo of augmented images, pretty simple.
And now, I want to talk a little bit about some use cases.
Posters are great for demos, but we
think augmented images have a lot more potential as well.
So the first use case that we're excited about is education.
Imagine a textbook coming to life in front of you,
or going into a museum tour where artwork on the wall
jumps out at you and gives you more information
about the artists or maybe their progression as they
were sketching a painting.
We think augmented images are useful for advertising.
Advertising is all about engagement.
Imagine being at a movie theater and holding your phone
up to a movie poster and having content come out
or telling you showtimes.
Or imagine being at a bus stop with a little bit of time
to kill and engaging with the ad that you
have on the side of the bus stop station.
We think augmented images can also be useful for the products
that you're advertising.
So here, you can build products that
meld the physical and digital worlds, that
bring both together.
It could be Castle Toy, where you have an experience that
comes out of the box itself, or it
could be a how-to guide for your coffee machine
as you try to make coffee for the first time
with your expensive espresso machine
and you have no idea what to do.
So we think augmented images expand
the capabilities and usefulness of AR in general.
And we're really, really excited to see
what you build with them.
And we also are not done yet.
We're going to talk about one more feature today.
And for that, I'm going to bring out
James Birney, who's a product manager who works with me.
And he's going to talk to you about Cloud Anchors.
I think you'll really enjoy it.
Thanks very much.
Come on up, James.
[APPLAUSE]
JAMES BIRNEY: So real quick, before we get
started-- you guys have been sitting for a while.
And I really like doing this at the beginning of our things.
We're going to do the wave real quick going across the room.
All right?
You guys ready?
Laptops ready?
All right, three, two, one-- up, up, up, up, up, up, up.
Yay, ARCore.
Woo-hoo!
It worked.
[LAUGHS]
All right.
Thank you, guys.
All right.
So like Eitan mentioned, my name's James Birney.
I'm a product manager on ARCore, and specifically
on Cloud Anchors.
Raise your hand if you saw the Cloud Anchors announcement
yesterday.
All right, good.
That's slightly more than half, awesome.
So that's what we're going to cover in this section.
Hopefully you guys are going to be
really excited by the time we get through talking about Cloud
Anchors and you're going to want to immediately start building.
So before we hop into Cloud Anchors,
it's really important to start with where AR is today.
So could I get a quick hand if you've built an AR app before?
All right, so that's roughly about half of you.
So for the other half, what happens when--
let's say that together we're going
to build an app where we're going to place some dinosaurs.
And so we're going to have a T-Rex over here
and maybe a Triceratops over here.
And they're going to interact.
The way that we would do that in the AR app today
is we would plant an anchor.
And then the T-Rex and the Triceratops
would be placed as relative offsets from those anchors.
And that becomes your reference frame in your AR app.
Now, let's say that Eitan were to come back up on stage.
He's not going to come up because that's a long walk.
But Eitan goes ahead and creates a separate dinosaur app over
here.
And he places, say, a bunch of pterodactyls.
And again, he plants an anchor.
And his pterodactyls are all placed relative to that anchor.
Now, what's missing is Eitan's app
is running in a different reality,
a different augmented reality than the app
that we have over here.
And the reason why is those two anchors
can't talk to each other.
So this is what Cloud Anchors solve:
we give you the ability to create a shared reference
frame.
So with that reference frame I was mentioning before,
where you have the anchor and you have the offsets
to our T-Rex and to our pterodactyl,
we can now have a common anchor in the middle
for all the AR content.
So everything from pterodactyls to T-Rexes
is able to interact and play.
And then you can create these really fun experiences where
not only is my content interacting
with Eitan's content, but I can control Eitan's content.
He can control mine.
That's pretty cool.
So that's kind of an abstract thing where I'm literally
moving my hands around onstage.
A more concrete example would be our Just a Line app,
which if you haven't seen it before
is an experimental app that we as Google built.
It literally draws a single line in space.
And what we added to it is the ability
to do not just one artist, but multiple artists drawing
in the same space.
So I'm going to show you an extended version
of the video they showed you really quickly yesterday,
where you can see multiple artists drawing together.
And hopefully you see from this video
the powerful experience that you get out
of this, where now, you're able to interact with your friends
and draw together.
And when one person draws a line,
you can build on top of that.
So I'll give it a second here for the video to finish
and for you guys to absorb what's going on
because that's a new concept.
OK, so let's talk a little bit about how
we create these cloud anchors.
We've done an awful lot of work to make it very simple.
So it's only a few steps.
Let me walk you through them.
So step one is--
let's say in this example, we make our stick woman.
Her name is going to be Alice.
And Alice is going to place a cloud anchor.
Now, the verb that we use to create a cloud anchor
is called hosting.
The reason why is we're going to host that native anchor up
to the cloud.
So when we host that cloud anchor,
what gets uploaded are the visual
features of the environment.
So let's say that Alice is standing here.
And as Alice is looking at the table,
she places a cloud anchor or the app
will place a cloud anchor for her on the stage, right
here next to our beautiful succulent.
Do you guys like our succulent?
OK.
[LAUGHS] Thank you.
I appreciate the one person.
OK.
So what the phone is going to extract from the environment
is all the points where these leaves come
to an edge, what the phone will see as contrast points, where
the colors change, where the lighting changes.
So the edge of this table, the edge
of this tablecloth, every point where
the leaves kind of change, those are the visual features that
get extracted and then get uploaded to the cloud.
That then gets saved and processed.
And what Alice gets back in a couple
seconds is that cloud anchor.
Now, in that cloud anchor is a really important attribute.
That attribute is the Cloud Anchor ID.
So you can think about Cloud Anchors
the same way you think about a file.
So say you're going to save a file to Google Drive.
And when you save it, you need to create a file name, right?
Well, with Cloud Anchors, we're going
to create essentially that file name or that ID for you.
And that ID is the way that you're
going to reference it later.
It would be really hard to find the file without knowing the name,
right?
So the Cloud Anchor ID is the same concept.
So how this comes into play: all
Alice needs to do to get Bob, our stick man over there,
to connect to Alice's cloud anchor
is to send over that Cloud Anchor ID to Bob.
And that's all she needs to send over to Bob.
Once Bob has the Cloud Anchor ID,
he then uses the Cloud Anchor ID,
and our verb here is resolve.
In resolving, we'll add the cloud anchor
to Bob's reference frame.
So let's say that Bob is standing right here as well.
He looks at the same area.
His visual features will get uploaded to the cloud,
and the cloud will match them
against the visual features that Alice had previously uploaded.
And we will give Bob back a cloud anchor
that will be relative to where his device is.
So even though both devices are in different locations,
we'll create the cloud anchor in a consistent physical location.
And that's the magic.
Because they're in a consistent physical location,
you then have a shared reference frame.
And then at that point, we can place--
again, let's use dinosaurs because everybody
loves dinosaurs, right--
we can place our dinosaurs relative to that cloud anchor
and we can start our shared AR experience.
Hopefully that makes sense.
Oh, cloud anchor comes back.
And then I'm going to tie it all together here.
We created a very fancy visualization.
The orange dots that come up, those
are the visual features we were talking about.
They go up to the cloud.
Bob uploads his visual features up to the cloud.
They get matched.
And then the two of them create the same shared reference
frame.
And then once that shared reference frame is created--
wait a second for the GIF to loop around--
you'll see that spaceship show up.
And then the two of them can follow the spaceship
around the room.
And once they're paired, then the devices
can go anywhere in the room.
And they're in the same reference frame.
And they can interact together.
All right.
So let's keep on going one level deeper, like "Inception,"
with some sample code.
OK, so same format as before, but before we
get to those two methods of hosting and resolving,
it's really important that we enable the feature.
So when you're working with ARCore,
interact with the session config and turn on our feature.
You need to do this on all devices.
But hopefully this is pretty straightforward.
Then on the first device--
so this is Alice's device, the one
that creates the Cloud Anchor.
The main method we need to call here is hostCloudAnchor.
With hostCloudAnchor, you can feed
in any preexisting native anchor.
So as Eitan was mentioning before,
normally this is created from a horizontal plane or now
from a vertical plane.
And you can pass that anchor into hostCloudAnchor.
Asynchronously that call will complete
in a couple of seconds.
And what comes back is your cloud anchor.
Now, what did we talk about is the really important thing that
comes from the cloud anchor?
All right, Cloud Anchor ID.
Thank you.
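Alice's side, as a minimal sketch (localAnchor is assumed to be an existing anchor, for example from a plane hit test):

```java
import com.google.ar.core.Anchor;

// Host the local anchor to the cloud. The call returns immediately and
// hosting completes asynchronously over the next couple of seconds.
Anchor cloudAnchor = session.hostCloudAnchor(localAnchor);

// Check each frame until hosting finishes, then read off the ID to share.
if (cloudAnchor.getCloudAnchorState() == Anchor.CloudAnchorState.SUCCESS) {
    String cloudAnchorId = cloudAnchor.getCloudAnchorId();
    // Send cloudAnchorId to the other players.
}
```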
[LAUGHS] So then, it is completely up
to you what means of device-to-device communication
you want to use.
The demo that we're going to show you in a second
uses Firebase.
There's also two other demos in the Sandboxes
I'd encourage you to check out.
Those also use Firebase as well.
It's a great means to communicate between devices.
But you can use any means you want.
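For example, a minimal Firebase Realtime Database sketch for sharing the ID (the room path is a hypothetical app convention):

```java
import com.google.firebase.database.FirebaseDatabase;

// Alice publishes the ID; Bob listens on the same path to receive it.
FirebaseDatabase.getInstance()
        .getReference("rooms/room42/cloudAnchorId")
        .setValue(cloudAnchorId);
```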
So then on Bob's device--
and it's a really important point here.
This is not limited to just Bob.
We could also have Bob, Jerry, Johnny, Eitan.
And it can be as many users as we want.
All they need to do to join that cloud anchor is
receive the Cloud Anchor ID.
That's the one that Alice just sent over.
And then we need to resolve that cloud anchor.
In order to resolve the cloud anchor, it's dead simple.
All you need to do is pass in the Cloud Anchor ID.
In the background, under the hood,
we will take those visual features
from what the user is currently looking at.
So it's important that the user is, again, currently looking
where Alice was.
And we'll upload those features and then give you
that cloud anchor back.
And then at that point, you're good to go.
You can start placing assets relative to that cloud anchor.
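Bob's side, sketched the same way:

```java
import com.google.ar.core.Anchor;

// Resolve the shared anchor from the ID Alice sent over. Bob should be
// looking at roughly the spot Alice hosted from, so his visual features
// can be matched in the cloud.
Anchor resolvedAnchor = session.resolveCloudAnchor(cloudAnchorId);

// Once resolving succeeds, both devices share one reference frame.
if (resolvedAnchor.getCloudAnchorState() == Anchor.CloudAnchorState.SUCCESS) {
    // Place shared content (dinosaurs, light boards) relative to resolvedAnchor.
}
```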
So quick question, what operating system
were those devices in that code example running on?
Anyone?
All right.
So the really important point here
is Cloud Anchors work on both Android--
which means any ARCore-enabled Android device--
and any iOS ARKit-enabled device, which for today
is going to be iPhones.
And we believe this is incredibly important to making
shared AR a reality.
There's no reason that we should discriminate
which of our friends can play a game with us
based on which operating system they run on their phone.
That's not really important to whether or not
Eitan and I are friends.
If he has an iPhone, he should be
able to play shared AR with me, right?
So now, I'm going to invite Eitan on up on stage.
And we're going to give you guys a live demo.
Because it's one thing to say that everything
works cross-platform, but it's another thing
to show you guys with a live demo.
EITAN MARDER-EPPSTEIN: All right.
So maybe one last poll just to get started.
Who thinks I'm going to win this game?
Raise your hand.
Oh.
Oh, it's tough.
All right, who thinks James is going to win?
That's the rest of you, right?
[LAUGHS]
JAMES BIRNEY: So you guys are getting to know Eitan
better every minute.
And it's really important to know Eitan sandbags a lot.
EITAN MARDER-EPPSTEIN: OK.
I just got to join this room.
OK.
So now, I'm going to set up my board.
And James, you set up yours.
JAMES BIRNEY: Yeah.
EITAN MARDER-EPPSTEIN: And you want to get close to me, right?
JAMES BIRNEY: [LAUGHS]
EITAN MARDER-EPPSTEIN: You need that help.
JAMES BIRNEY: I'm also showing off a little bit here.
You can see as mine's moving around,
the same state is being reflected in both
at the same physical location.
So I'm going to press that.
Here's our futuristic-looking light boards.
EITAN MARDER-EPPSTEIN: All right, here we go.
JAMES BIRNEY: And we have people in the back that
are organizing bets, in case anybody wants
to make money off of this.
EITAN MARDER-EPPSTEIN: Yeah.
So the goal here is to turn the other person's board
your color.
And I feel like James has been sandbagging me
in all of our practice sessions because he's doing much
better than he has in the past.
Let's see.
JAMES BIRNEY: Oh, no.
That was [INAUDIBLE].
EITAN MARDER-EPPSTEIN: Oh, so close.
Hold on.
All right.
Just one more shot, one more shot.
JAMES BIRNEY: You'll notice that Eitan and I both can't
multitask--
EITAN MARDER-EPPSTEIN: Oh, no.
JAMES BIRNEY: --very well.
EITAN MARDER-EPPSTEIN: Did I get him?
JAMES BIRNEY: Oh.
All right.
Thank you.
Hey, it worked.
[APPLAUSE]
All right.
And just to reiterate, so that was an iPhone
that Eitan was using.
This is a Pixel 2.
But this very well could have been any Android
ARCore-enabled device.
That could have been any ARKit-enabled device.
And there's my clicker.
OK, so let's talk about use cases.
That was gaming.
That was an example of gaming working really well.
But shared AR does not need to stop at gaming.
We think there's a whole lot of other categories
where shared AR can make a big difference in the world.
Oops.
Lance, help me.
Can you go back a slide, please?
Pretty please?
Thank you.
OK, so let's briefly talk about four categories.
So one is in the education space.
This is an example of--
let me phrase this as a question instead.
Raise your hand after I say two options.
Option A, you can learn about what
it's like to explore on Mars and the Mars missions
from a textbook, option A. Option B,
you can learn from an interactive 3D
model of the Rover that you and your friends can play with.
All for option A?
OK.
Option B?
All right.
See, we're making improvements in how people learn.
And the demo that we're showing you here,
this is an example that NASA built for us.
This doesn't need to stop at space exploration,
although that's a pretty big area to explore.
You could do this as well in any sort
of visual area such as biology.
There's a couple cool demos where you can explore
the human body together.
And I'll leave it at that.
Let's hop on down to creative expression.
So you saw our Just a Line example,
which is where we draw the white line in space.
But we can go beyond that.
Take for example this block building
app that was built by [INAUDIBLE], where
you can build a full block building thing
and then 3D print it later.
It's very, very cool.
And you can imagine what this would look like
as well with the AR Stickers.
Raise your hand if you played with AR Stickers.
So you can imagine what this would look like if now
as you're placing Stormtroopers or--
help me, the Demogorgon--
as you're placing Demogorgon, someone else can place El
and have the fight be between your different phones.
That would be a very fun experience.
Gaming-- so now, you can do ice fishing with your friends.
Haven't you guys always wanted to do that?
[LAUGHING] Believe me, it actually
is an awful lot more fun than it sounds when you just say
ice fishing with your friends.
It's particularly fun on a hot day in San Francisco
to be able to look down at the sidewalk
and turn the sidewalk into an ice fishing pool.
Beyond ice fishing, you can imagine playing laser tag
with your friends.
It can now be just with your phones.
You don't need to buy special gear.
You can just-- two people, quickly pair,
do host and resolve.
And then you're off and going, and playing laser tag
with as many of your friends as possible
because Cloud Anchors are not limited just to two devices.
You can use any number of devices.
And then shopping-- so how many of you
guys have bought something and then had your partner, when
it actually showed up, veto it?
Then you had to return it.
Show of hands.
Yeah, that's a big pain, right?
Then you have to go through find the UPS store, the FedEx store,
mail it back.
That's not a good experience.
It's a lot better if you can preview it with your partners.
So now, with Cloud Anchors, if I'm placing a speaker system
here, I can have my wife also look at that speaker system
from her phone.
And there's a feeling of consistency and trust
that you build if you're the advertiser
or the e-commerce site.
If you have two users looking at it
and it shows up consistently for both of them,
you build this trust that the product I'm buying,
when I'm previewing it, is actually
going to look that way when it shows up,
because it's showing up on multiple devices.
All right.
So that's everything for Cloud Anchors.
Now, let's talk about getting started.
So ARCore, no surprise, already supports Unity and Unreal,
your standard game engines.
And then obviously we support Android Studio
for Android Native Development.
And since Cloud Anchors are cross-platform,
we provide an SDK so that you can do your development
in Xcode as well.
All four of these environments are live as of yesterday
at 1:00 PM.
[APPLAUSE]
Thank you.
So for the folks here at I/O, you
have a bunch of resources at your disposal.
Please take advantage of them.
There are three awesome demos in the Sandbox.
If you guys liked playing Light Board,
and especially if you want to play Eitan in Light Board,
our Sandbox is right over there in the AR Sandbox.
Eitan will be there up until somebody beats him.
Right, Eitan?
Yeah, thank you.
We also have the Just a Line demo over in the Experiments
Sandbox.
Please check that out.
And then the demo that Eitan showed with this picture
frame as well as two others are available in the AR Sandbox.
It's a really, really fun exhibit.
Please go ahead and play around with it.
I suspect it'll give you a bunch of very cool ideas for what
you can build.
For Codelabs, we have over 80 workstations set up.
Please play around with them.
Every workstation is also paired with an Android device.
So not only can you go through the code,
but you can actually compile it onto the phone.
And then you can see how the code you just built actually
works on a phone.
And then we also have office hours.
Please take advantage of that.
We have some incredibly intelligent guru staff
to answer any questions you have.
And then a quick shameless plug.
Our team, the ARCore team, is incredibly busy
giving talks this week.
Please take advantage of those.
We've done an awful lot of work putting those together
to give you a very concise explanation.
There's two more today and two more tomorrow.
And then after I/O or for the folks online,
developers.google.com/ar has all the extra resources,
plus all the Codelabs are also available on there.
And again, all four of our SDKs are available as of yesterday.
So thank you very much.
Appreciate your time.
[MUSIC PLAYING]