[MUSIC PLAYING]
PAIGE BAILEY: Hi.
I'm Paige, and I'm a Developer Advocate for TensorFlow.
Machine learning techniques like Convolutional Neural Networks,
also called CNNs, and Generative Adversarial Networks, or GANs,
have shown great promise in a diverse range of applications--
everything from image classification
to scene reconstruction to speech recognition.
To efficiently train these models
on massive amounts of data, machine learning engineers
often need to use specialized hardware, such as Graphics
Processing Units (GPUs) or Tensor Processing Units (TPUs).
GPUs and TPUs are used as accelerators
for the portions of the model that can be broken up
into parallelizable operations.
Think of these chips as very specialized tools
that can do one particular task extremely well and extremely
quickly.
When using this specialized hardware,
tasks that used to take days or even weeks
can now be completed in minutes.
The good news is that you can develop deep learning models
on Google Colab using GPU and TPU at no cost.
Let's dive into a notebook and check it out.
To change your runtime in Google Colab, all you have to do
is select Runtime, Change Runtime Type, and then
opt for None, GPU, or TPU.
For this selection, we'll go with GPU, and we'll hit Save.
Let's install TensorFlow 2.0 with GPU support.
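Here's a minimal sketch of that install cell; the exact
version pin used in the video isn't captured in the
transcript, so the 2.0 alpha pin below is an assumption.

    # Install the GPU build of TensorFlow 2.0 in Colab.
    # The exact version pin from the video is an assumption.
    !pip install -q tensorflow-gpu==2.0.0-alpha0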
We can confirm that TensorFlow can see the GPU
by running device_name = tf.test.gpu_device_name().
Once the command has run, we can see that the device
is located at slot 0.
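That check looks something like this, using the standard
Colab snippet for verifying GPU availability:

    import tensorflow as tf

    # Returns the name of the GPU device, e.g. '/device:GPU:0',
    # or an empty string if no GPU is visible.
    device_name = tf.test.gpu_device_name()
    if device_name != '/device:GPU:0':
      raise SystemError('GPU device not found')
    print('Found GPU at: {}'.format(device_name))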
To observe the TensorFlow speed-up on GPU
relative to CPU, let's use this basic Keras model.
We hit Run, and we find that MNIST
trains to completion in around 43 seconds.
On CPU, it would take MNIST 69 seconds
to reach the same accuracy.
So you've cut your training time by more than a third.
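The notebook's exact model isn't shown in the transcript,
but a basic Keras MNIST classifier along these lines
behaves similarly:

    import tensorflow as tf

    # Load and normalize MNIST.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small fully connected model; the architecture is illustrative.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)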
If you're interested in obtaining
additional information about hardware,
you can run these two commands from any Colab notebook.
This will tell you everything that you
need to know about CPU, RAM, and GPU.
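The transcript doesn't capture the exact commands, but
shell commands like these are the usual way to inspect
Colab's hardware:

    !cat /proc/cpuinfo   # CPU model, cores, and clock speed
    !cat /proc/meminfo   # total and available RAM
    !nvidia-smi          # GPU model, memory, and utilization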
For TPUs, let's try a slightly more interesting example.
We'll change the runtime type again, select TPU,
and hit Save.
Here, we're predicting Shakespeare
with TPUs and Keras.
In this Colab notebook, we'll build a two-layer, forward-LSTM
model, we'll convert the Keras model to its equivalent TPU
version, and then we'll use the standard Keras methods fit,
predict, and evaluate.
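As a sketch, the conversion step in the TF 1.x-era notebook
used tf.contrib.tpu.keras_to_tpu_model; the stand-in LSTM
below is illustrative, not the video's exact model.

    import os
    import tensorflow as tf

    # Stand-in for the two-layer forward-LSTM model from the notebook:
    # character IDs in, next-character probabilities out.
    training_model = tf.keras.Sequential([
        tf.keras.layers.Embedding(256, 512, input_length=100),
        tf.keras.layers.LSTM(512, return_sequences=True),
        tf.keras.layers.LSTM(512, return_sequences=True),
        tf.keras.layers.TimeDistributed(
            tf.keras.layers.Dense(256, activation='softmax')),
    ])
    training_model.compile(
        optimizer='adam', loss='sparse_categorical_crossentropy')

    # Connect to the Colab TPU and convert the Keras model.
    resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
        tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    strategy = tf.contrib.tpu.TPUDistributionStrategy(resolver)
    tpu_model = tf.contrib.tpu.keras_to_tpu_model(
        training_model, strategy=strategy)
    # tpu_model now exposes the usual Keras fit, predict, and evaluate.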
As we scroll down, we can see that we're downloading data,
we're building a data generator with TF logging,
we're checking to see the size of the array coming in
from Project Gutenberg, and we're building our model.
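The download cell pulls the complete works of Shakespeare;
treat the Project Gutenberg URL below as an assumption about
the exact source.

    # Fetch the complete works of Shakespeare as plain text.
    !wget --show-progress --continue -O /content/shakespeare.txt \
        http://www.gutenberg.org/files/100/100-0.txt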
This model will take quite a while to train, but remember,
you're training against every single aspect
of Shakespeare's corpus.
After we've cycled through 10 epochs
and we have pretty good accuracy,
we can start making predictions with our model.
Let's take a look at what a neural network thinks
might be a Shakespearean play.
So it's obviously not perfect.
AI can't be the Bard yet.
But this does look like a traditional script.
And you can see that if you start adding layers, adding
nodes, and even adding clusters of TPUs as opposed to just one,
you'll improve your accuracy even more
and start generating Shakespeare-like plays
all on your own.
You just learned how to accelerate your machine
learning projects with GPUs and TPUs in Google Colab.
In the next video, I'll walk you through how
to upgrade your existing TensorFlow
code to TensorFlow 2.0.
So subscribe to the TensorFlow YouTube channel
and be notified when that video lands.
In the meantime, keep building cool projects with TensorFlow,
and make sure to share them on Twitter with the hashtag
#PoweredbyTF.
We're looking forward to seeing what you create.
And I'll see you next time.
[MUSIC PLAYING]