[DING]
Hello.
And welcome to a new tutorial series
on The Coding Train about a piece of software
called Runway.
So what is Runway?
How do you download and install Runway and kind of tinker
around with it?
That's all I'm going to do in this particular video.
Now, let me be clear, Runway is not something that I've made.
Runway is made by a new company,
also called Runway.
And it's a piece of software.
You can download it and use it for free.
There are aspects of it that require Cloud GPU credits,
which I'll get into later.
And you can get some free credits and a coupon code
that you'll find in the description of this video.
But really I want to just talk to you
about what it is because I'm so excited about it,
and I'm planning to use it in the future,
in a lot of future tutorials and coding challenges, and teaching
things that I'm going to do.
And I also should just mention that I
am an advisor to the company Runway itself.
So I'm involved in that capacity.
All right.
So what is Runway?
Right here it says machine learning for creatives.
Bring the power of artificial intelligence
to your creative projects with an intuitive and simple
visual interface.
Start exploring new ways of creating today.
So this, to me, is the core of Runway.
I am somebody who's a creative coder.
I'm working with Processing and p5.js.
You might be working with other pieces of software,
whether commercial tools or coding environments
where you're writing your own software.
And you want to make use of recent advances
in machine learning.
You've read about this model.
You saw this YouTube video about this model.
Can you use it in your thing?
Well, before Runway, one of the things you might have done
is find your way to some GitHub repo
with a very long README about all
the different dependencies you need to install and configure.
And then you've got to download this and install this, and then
build this library.
And you can really get stuck there for a long time.
So Runway is an all-in-one piece of software
with an interface that basically will run machine learning
models for you, install and configure them
without you having to do any other work
but press a button called Install.
And it gives you an interface to play with those models,
experiment with those models, and then broadcast
the results of those models to some other piece of software.
And there's a variety of ways you
can do that broadcasting, through HTTP requests,
through OSC messages.
And all these things might not make sense
to you, which is totally fine.
I am going to poke through them and show you
how they work, with an eye towards at least showing you
how to pair Runway with processing,
and how to pair Runway with p5.js,
and I'll also show you where there's lots of other examples
and things you can do with other platforms, and stuff like that.
So the first thing you should do is click here
under Download Runway Beta.
It will automatically trigger a download
for Mac OS, Windows, or Linux.
I've actually already downloaded and installed Runway.
So I'm going to kind of skip that step,
and just actually now run the software.
Ah.
And now it's saying, welcome to Runway.
Sign in to get started.
OK.
So if you already have an account,
you could just sign in with your account.
I do already have an account.
But I'm going to create a new one, just so we can
follow along with the process.
So I'm going to go here.
Create an account.
I'm going to enter my email address, which is-- shh.
Don't tell anyone-- daniel@thecodingtrain.com.
Then I'm going to make a username and password.
Now that I've put in my very strong password,
I'm going to click Next.
And I'm going to give my details, Daniel Shiffman,
The Coding Train.
Create account.
Ah.
And it's giving me a verification code
to daniel@thecodingtrain.com.
Account has now been created, and I can click Start.
So once you've downloaded and installed Runway, signed up
for an account, and logged in,
you will find this screen.
So if you've been using Runway for a while,
you might then end up here, clicking on open workspaces,
because workspaces are a way of collecting
a bunch of different models that you
want to use for a particular project into a workspace.
But we haven't done any of that.
So the first thing that I'm going to do
is just click on Browse Models.
So the first thing that I might suggest that you do
is just click on a model and see what
you can do to play with it in the Runway interface itself,
because one of the things that's really wonderful about Runway
is as a piece of software and an interface you can explore
and experiment with the model to understand how it works,
what it does well, what it doesn't do well,
what it does at all, before starting
to bring it into your own software or your own project.
So I'm going to pick this SPADE-COCO model, which I have never
looked at before.
This is a genuine first look.
I have no idea what's going to happen when I click on that.
And now, here I can find out some more information
about the model.
So I can find out: what does the model do?
It generates realistic images from sketches and doodles.
I can find out more information about the model.
For example, this is the paper that describes this model,
"Semantic Image Synthesis with Spatially-Adaptive
Normalization," trained on the COCO-Stuff dataset.
Remember when someone asked, is this a tutorial for beginners?
Well, it is for beginners in that you're a beginner.
You can come here and play around with it.
But you can go very deep too if you want to find the paper,
read through the notes, and understand
more about this model, how it was built,
what data it was trained on, which is always
a very important question to ask whenever you're
using a machine learning model.
So we can see there are attributions here.
So this is the organization that trained the model.
These are the authors of the paper.
We can see the size of it, when it was created,
and whether it's CPU- and GPU-supported.
We could also go under Gallery.
And we can see just some images that have been created.
So we can get an idea.
This is a model that's themed around something
called image segmentation.
So I have an image over here.
What does it mean to do image segmentation?
Well, this image is segmented, divided into a bunch
of different segments.
Those segments are denoted by color.
So there's a purple segment, a pink segment,
a light green segment.
And those colors are tied to labels in the model,
essentially, that know about a kind of thing
that it could draw in that area.
So you could do image segmentation in two ways.
I could take an existing image, like an image of me,
and try to say, oh, I'm going to segment it.
This is where my head is.
This is where my hand is.
This is where my other hand is.
Or I could generate images by sort
of drawing on a blank image, saying put a hand over here.
Put a head over here.
So that's what image segmentation
is, at least in the way that I understand it.
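To make that idea concrete, here is a toy sketch in JavaScript of how a segmentation map ties colors to labels. The actual palette comes from the COCO-Stuff label set the model was trained on; the specific colors and labels below are made up purely for illustration.

```javascript
// Toy palette mapping segmentation labels to RGB colors.
// These colors are invented for illustration; Runway's segmentation
// interface uses the real COCO-Stuff palette.
const LABEL_COLORS = {
  person: [150, 5, 61],
  sky: [120, 120, 200],
  tree: [4, 200, 3],
};

// Look up which label a painted pixel color corresponds to.
function labelForColor(rgb) {
  for (const [label, color] of Object.entries(LABEL_COLORS)) {
    if (color.every((c, i) => c === rgb[i])) return label;
  }
  return null; // no matching label: unlabeled / background
}

console.log(labelForColor([120, 120, 200])); // "sky"
console.log(labelForColor([0, 0, 0])); // null
```

So "segmenting" an image is really just painting regions whose colors the model can map back to labels like these.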
What have I done so far?
I've downloaded Runway.
I've poked around the models.
And I've just clicked on one.
Now, I want to use that model.
I want to play with it.
I want to see it run.
So I'm going to go here to Add to Workspace.
It's right up here.
Add to Workspace.
Now, I don't have a workspace yet.
So I need to make one.
And I'm going to call this workspace,
I'm going to say Coding Train Live Stream.
So I'm going to do that.
I'm going to hit Create.
Now, I have a workspace.
You can see, this is my workspace.
I have only one model added to this workspace over here.
And it's kind of highlighting for me right now what to do.
I need to choose an input source.
So every machine learning model is different.
Some of them expect text input.
Some of them expect image input.
Some of them might expect input that's
arbitrary scientific data from a spreadsheet.
Then the model is going to take that input in, run it
through the model, and produce an output.
And that output might be numbers.
Or it also might be an image.
Or it might be more text.
So now we're in sort of case-by-case territory.
But if I understand image segmentation correctly,
I'm pretty sure the input and the output
are both going to be an image.
Let's make a little diagram.
So we have this--
what was this model called again?
SPADE-COCO.
So we have this machine learning model.
Presumably there's some neural network architecture in here.
Maybe it has some convolutional layers.
This is something we would want to read that paper
to find out more.
Runway is going to allow us to just use it out of the box.
And I certainly would always recommend
reading more about this to learn more about how to use it.
So my assumption here is in my software that I want to build,
I want to maybe create a drawing piece of software
that allows a user to draw a segmented image.
So you can imagine maybe I'm going to kind of draw
something that's one color.
Look, I could use different colored markers.
I'm going to sort of fill this image in with a bunch
of different colors.
And then I am going to feed that into the model.
And out will come an image.
So we have input.
And we have output.
And again, this is going to be different for every model
that we might pick in Runway.
Although, there's a lot of conventions.
A lot of the models expect images
as input and output images.
Some of them expect text as input, and output an image,
or image as input and output text.
Et cetera.
And so on and so forth.
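That input/output contract can be sketched as plain data. To be clear, this is not Runway's actual model spec format, just an illustration of the idea that every model maps one kind of input to one kind of output:

```javascript
// Each model's "contract": what type it takes in, what type it emits.
// These objects are illustrative only, not Runway's real spec format.
const spadeCoco = {
  input: { type: 'image', description: 'segmentation map' },
  output: { type: 'image', description: 'generated photo' },
};

const captioner = {
  input: { type: 'image', description: 'any photo' },
  output: { type: 'text', description: 'a caption' },
};

// Two models can be chained when one's output type matches
// the other's input type.
function canChain(a, b) {
  return a.output.type === b.input.type;
}

console.log(canChain(spadeCoco, captioner)); // true: image out, image in
console.log(canChain(captioner, spadeCoco)); // false: text out, image in
```

Thinking of models this way makes it obvious why choosing the right input source in Runway matters: the source has to produce the type the model expects.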
And so now what I want to do is choose the input source
in Runway for the model.
So something that's going to produce a segmented image.
So that could be coming from a file.
It could actually come from a network connection, which
I'll get into maybe in a future video,
or you can explore on your own.
I'm just going to pick segmentation.
I know.
This is like the greatest thing ever.
Because what's just happened is image segmentation
is a common enough feature of machine learning models
that Runway has built into it an entire drawing engine so
that you can play around with image segmentation.
And you can see, these are the colors for different labels.
So it looks like it's a lot of transportation stuff.
So maybe what I want is let's try
let's try drawing some people.
[MUSIC PLAYING]
Two people with an airplane and a wineglass flying overhead.
OK.
How are we doing?
Now, I'm going to choose an output.
And I just want to preview.
Right?
Because with preview, I don't need to export this.
I don't need to use it somewhere else.
I just want to play around with it in Runway itself.
So I'm going to hit Preview.
Now I have selected my input, which is just the segmentation
interface of Runway itself.
I have selected my output, which is just a preview.
Now, it's time for me to run the model.
And here we go.
Run Remotely.
So remote GPU enabled.
And you can see, just by signing up
for Runway I have $10 in remote GPU credits.
It'll be interesting to see how much just running this once
actually uses.
So one thing I'll mention now, if you
want to get additional credits, I can go over here.
This is like the sort of icon for my profile.
I can click on this.
I'm going to go now to here.
I'm going to go to Get More Credits.
And this is going to take me to a browser page.
And I could certainly pay for more credits.
But I'm going to click here.
And I'm going to redeem credits by saying CODINGTRAIN
right here.
So if you would like to get an additional $10 in credits,
you can do this.
And we can see now I should have $20 in credits.
So this icon up here, just so we're clear,
is your workspaces,
of which I only have one with one model that's
connected to a remote GPU.
And if I wanted to look at other models,
I would go here to this icon.
All right.
Now, I'm going to press Run Remotely.
[DRUM ROLL]
Running the model remotely.
Whoa!
[TA-DA]
Oh, my.
Oh, it is so beautiful.
Mwuah.
I cannot believe it.
So this is what the SPADE-COCO machine learning
model generated.
It's really interesting to see the result here.
So you can see that, knowing nothing
about this model, how it works or what to expect,
you get some pretty weird results with it.
If I'd been a bit more thoughtful, maybe I would even
have filled in the entire space.
I probably left too much of it blank,
and also included a giant wine glass with two people.
It's kind of creepy looking.
Although, I think this sort of resembles me
in some strange sort of way.
And we can see here.
Look at this.
$0.05.
So one thing I should mention: the reason
that took a long time is that it was spinning up
the server and everything to actually
start running the model.
But now that it's running in real time,
it can happen much more quickly.
So let's try filling it.
So what would be a good thing to fill it with?
Let's try floor wood.
Let's try filling it with wood floor.
Oh, whoa.
Then let's try to put some fruit.
Ooh.
This is looking much better now.
Let's put an orange next to it.
Let's put a couple oranges and make a little bowl of fruit.
Wow.
This is crazy.
Wow.
I got to stop.
That's pretty amazing.
So again, this was just a moment
of being a little more thoughtful about how
this model actually works.
And if I looked at the data set, the fairly well-known
COCO image dataset, then
that's probably going to give me even more information
to think about what it's going to do well.
But you can see how it's able to sort of see
a little pile of fruit here on a wood background.
It almost looks a little more like cloth,
like it's sitting on a table.
Pretty realistic.
And yes.
Charlie England correctly points out
that this is continuing to use the GPU credits.
And we can see that still, though,
even with doing a bunch of live painting,
I've just used $0.10 there.
So you can do a lot with the free $10,
just in playing around.
So input-wise, I chose to do segmentation here.
But I could also use a file.
So if I wanted to open a file on the computer,
I could do it that way.
And then output, if I change to export,
I could also actually export that
to a variety of different formats.
But, of course, I could also right here just
under Preview I can click this Download Save button.
And now I am saving forever more this particular image
as a file.
Now, what's really important here,
even more important, is under Network.
So if what I wanted to do was click over here under Network,
this means I can now communicate with this particular machine
learning model from my own software.
Whether that's software that I've downloaded or purchased
that somebody else has made that speaks
one of these particular protocols,
or my own software that I'm writing in
just about any programming language or environment
if you have a framework, or module, or a library,
or support these types of protocols.
And one of the nice things here, if I click on JavaScript,
we can see there's a bit of code here that you can
just copy/paste into your JavaScript to run it
directly.
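To give you a feel for what that JavaScript looks like, here is a rough sketch. The exact snippet Runway shows in its Network panel is the authoritative one; the port, route, and field name below (localhost:8000, POST /query, `semantic_map`) are assumptions for illustration, so copy yours from the panel instead.

```javascript
// Sketch of querying a Runway model over HTTP from plain JavaScript.
// The port, route, and field name here are ASSUMPTIONS; use the exact
// snippet from Runway's Network panel for your model.

// Build the JSON request body for an image-to-image model: the input
// image goes in as a base64 data URI under whatever field name the
// model expects.
function buildQuery(inputField, dataUri) {
  return JSON.stringify({ [inputField]: dataUri });
}

// Send the query and return the parsed response, which for an
// image-output model would contain another base64 data URI.
async function queryRunway(dataUri) {
  const res = await fetch('http://localhost:8000/query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildQuery('semantic_map', dataUri),
  });
  return res.json();
}
```

The nice part is that any environment that can make an HTTP request, p5.js, Processing, or anything else, can talk to the model this way.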
So I'm going to come back.
OSC is also a really popular messaging network protocol
for creative coders.
It stands for Open Sound Control and allows
you to send data between applications.
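To give a feel for what an OSC message actually is on the wire, here is a minimal hand-rolled encoder for a message with one string argument. In practice you would use a library (oscP5 for Processing, or an npm OSC package for JavaScript); this sketch just shows the format: null-terminated strings padded to 4-byte boundaries, with a hypothetical address like "/runway/query".

```javascript
// Minimal OSC message encoder, for illustration only; real projects
// should use an OSC library rather than hand-rolling the wire format.

// Pad a byte array with zeros to a multiple of 4, as OSC requires.
function pad4(bytes) {
  const out = bytes.slice();
  while (out.length % 4 !== 0) out.push(0);
  return out;
}

// Encode an OSC string: ASCII bytes, a null terminator, 4-byte padded.
function oscString(s) {
  const bytes = Array.from(s, (c) => c.charCodeAt(0));
  bytes.push(0); // null terminator
  return pad4(bytes);
}

// Build an OSC message carrying a single string argument.
function encodeOscMessage(address, arg) {
  return Uint8Array.from([
    ...oscString(address), // address pattern, e.g. "/runway/query"
    ...oscString(',s'),    // type tag: one string argument
    ...oscString(arg),     // the argument itself
  ]);
}

const packet = encodeOscMessage('/runway/query', '{"caption":"hello"}');
console.log(packet.length); // total bytes, always a multiple of 4
```

That packet would then be sent over UDP to whatever port the receiving application is listening on.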
So I will also kind of come back in a separate video
and show you about how some of these work.
I should also probably mention that your Runway software
itself works in a very similar way to a piece of software
called Wekinator that you might be familiar with.
Wekinator is a software that was created by Rebecca Fiebrink
years ago that allows you to train
a neural network with data sent over OSC messaging,
and then get the results of that after the fact.
Though, I think the real sort of key difference here is Runway
is really set up to support a huge treasure
trove of pre-trained models.
Whereas Wekinator was more for training neural networks
on the fly with small bits of data.
And I will say that one of the things that Runway is planning,
maybe by September, is to start coming out
with features for training your own model as well.
So thanks for watching this introduction to Runway,
just sort of the basics of downloading and installing
the software, what it is from a high level point of view, what
features of the interface work, how to get some free cloud
credits.
And what I would suggest that you
do after watching this video is download, run the software,
and go to this Browse Models page.
So you can see, there's a lot of different models
for looking at motion, generative, community, text,
recognition.
Click around here.
Let's try this recognition one.
Face recognition.
DenseCap.
Where is PoseNet in here?
That might be under motion?
DensePose, PoseNet.
So here's a model called PoseNet which
performs real time skeletal tracking of one or more people.
I've covered this model in other libraries,
like the ml5.js library with TensorFlow.js.
And so what I'm going to do in the next video
is use this model, PoseNet, in Runway with my webcam,
running it locally on this computer
without requiring cloud credits, and then
get the results of this model in Processing itself.
So I'm going to show you that whole workflow.
But poke around.
Click around.
Find a model that you like.
Let me know about it in the comments.
Share images that you made with it.
And I look forward to seeing what you make with Runway.
Great.
Thanks for watching.
[MUSIC PLAYING]