[MUSIC PLAYING]
DA-CHENG JUAN: Welcome to the fourth episode of the Neural Structured Learning series.
In this video, we are going to talk
about learning with implicit structured signals constructed
from adversarial learning.
I'm Da-Cheng, and I'm going to be your guide.
You do not need to know a lot about adversarial learning
to get started.
And we will learn the concept along the way.
Let's first quickly refresh the concept of Neural Structured Learning.
This is the example we mentioned in the first episode,
building a neural net to classify
an image as a cat or a dog.
In reality, there are usually other similar images
related to that input image, forming
a structure that represents the similarity among all
these images.
And the Neural Structured Learning framework jointly optimizes both the sample features and the structured signals among the samples to learn a better neural net.
You may want to ask, what if there's no explicit structure
we can use to train a neural net?
One approach is to construct the structure dynamically
by creating adversarial neighbors.
You may have another question, what
is an adversarial neighbor?
An adversarial neighbor is a modified version of the original sample, designed to mislead the neural net into producing an incorrect classification.
And the next natural question is, how do we generate such adversarial neighbors?
We craft a small, carefully designed perturbation, usually based on the reverse gradient direction (that is, the direction that increases the loss), and apply that perturbation to the original sample.
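To make this concrete, here is a minimal sketch of this idea in TensorFlow, in the style of the fast gradient sign method; the function name, step size, and clipping range are our own assumptions for illustration, not the framework's internals.

```python
import tensorflow as tf

def make_adversarial_neighbor(model, loss_fn, x, y, step_size=0.05):
    """Create an adversarial neighbor by perturbing x along the loss gradient.

    FGSM-style sketch: the perturbation follows the sign of the gradient of
    the loss with respect to the input -- the direction that increases the
    loss, i.e., the reverse of the descent direction used during training.
    """
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    x_adv = x + step_size * tf.sign(grad)
    # Keep the perturbed features in the valid input range (assumed [0, 1]).
    return tf.clip_by_value(x_adv, 0.0, 1.0)
```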
Let's look at an example.
Say the sample is a panda image.
The constructed adversarial neighbor
also looks like a panda image.
Usually, human eyes cannot tell the difference between the two.
However, the neural net is confused
by the adversarial neighbor and incorrectly classifies it as a gibbon.
This is because the small perturbation applied to the sample successfully confuses the model, even though we humans cannot detect it.
After the adversarial neighbor is generated,
we add an edge to connect the sample
with its adversarial neighbor to dynamically construct
the structure.
Then this structure can be used in the Neural Structured Learning framework.
Let's pause for a bit and ask ourselves,
why do we want to have a structure connecting the sample
with its adversarial neighbor?
In the Neural Structured Learning framework,
the neural net learns to maintain a structure
by keeping the similarity between a sample
and its neighbor.
So essentially, this is telling the neural net
the sample and its adversarial neighbor
are actually pretty similar.
So please keep their similarity, and don't be confused
by the small perturbation.
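In loss terms, this idea can be sketched roughly as follows; the function and argument names are illustrative, not the framework's actual implementation.

```python
def adversarially_regularized_loss(model, loss_fn, x, x_adv, y, multiplier=0.2):
    """Schematic objective: the supervised loss on the original sample plus a
    weighted penalty for mispredicting its adversarial neighbor."""
    supervised_loss = loss_fn(y, model(x))
    # Penalize the model when the adversarial neighbor is not treated like
    # the original sample -- i.e., "keep their similarity".
    neighbor_loss = loss_fn(y, model(x_adv))
    return supervised_loss + multiplier * neighbor_loss
```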
The Neural Structured Learning framework provides TensorFlow libraries and functions that you can use to generate adversarial neighbors.
We also provide Keras APIs that you
can use to enable easy-to-use end-to-end training
with adversarial learning.
If you are interested in the details of these libraries and APIs, please visit our website.
Let's use a task in computer vision
to see how adversarial learning works.
Say we want to train a neural net to recognize
these handwritten digits.
In the next slide, we are going to write some Python
code to design this neural net and train it
with adversarial learning.
Are you ready?
In this code example, we are going
to train a neural net to recognize
the handwritten digits using adversarial learning.
First, we load the MNIST dataset, which contains images of handwritten digits and their corresponding labels.
The features of each image are pixel values ranging from 0 to 255, so here we normalize these features to the range from 0 to 1.
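This step might look like the following sketch, using tf.keras.datasets; the variable names are our own.

```python
import tensorflow as tf

# Load MNIST: 28x28 grayscale images of handwritten digits with labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize pixel values from the range [0, 255] to [0, 1].
x_train, x_test = x_train / 255.0, x_test / 255.0
```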
Next, we build a neural net by using Keras APIs.
You can use the Keras sequential API, the functional API, or build your model via subclassing.
The Neural Structured Learning framework supports all three types of Keras APIs, so feel free to use your favorite one.
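For instance, continuing the sketch above, a simple sequential model for this task could look like the following; the layer sizes are our choice, and the input is named 'feature' so it can be fed as a dictionary key later.

```python
# A simple feed-forward classifier built with the Keras sequential API.
model = tf.keras.Sequential([
    tf.keras.Input((28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```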
Here we invoke the Keras APIs from the Neural Structured Learning framework to enable adversarial learning.
There are several hyper-parameters
we need to configure.
For example, we need to specify the multiplier applied to the adversarial regularization. For each hyper-parameter, we also provide default values that we empirically know work pretty well.
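As a sketch, this configuration step might look like the following; the specific values shown here are illustrative choices, not recommendations.

```python
import neural_structured_learning as nsl

# multiplier: weight of the adversarial regularization term in the total loss.
# adv_step_size: magnitude of the perturbation used to craft adversarial
# neighbors during training.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
```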
Then we invoke the adversarial regularization from the Neural Structured Learning framework to wrap the neural net we just constructed.
After that, the rest of the workflow
follows the standard Keras workflow--
compile, fit, and eval.
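Putting these last two steps together, a sketch following the framework's Keras API might look like this; note that the wrapped model consumes inputs as a dictionary containing both the feature and the label.

```python
# Wrap the base model so adversarial neighbors are generated and used as
# regularization during training.
adv_model = nsl.keras.AdversarialRegularization(
    model, label_keys=['label'], adv_config=adv_config)

# Standard Keras workflow: compile, fit, and evaluate.
adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)
adv_model.evaluate({'feature': x_test, 'label': y_test})
```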
That's it.
With the APIs from the Neural Structured Learning framework, we are able to enable adversarial learning with just three lines of code.
Let's take a look at the comparisons
between the neural nets with and without adversarial learning.
The true label of this image is a six.
Both the baseline model and the model with adversarial learning
correctly recognize this image as a six.
The next image is a nine.
Again, both models correctly recognize this image as a nine.
Let's see the third image.
This image is actually an adversarial image.
The baseline model is confused and recognizes it incorrectly
as a five, whereas the model with adversarial learning
successfully recognizes it as a six.
Let's look at one more image.
This image is again an adversarial image.
The baseline misclassifies it as an eight,
whereas the model with adversarial learning
correctly classifies it as a three.
So yes, adversarial learning indeed makes a neural net more robust against these small but misleading perturbations.
To summarize, in this video, we introduced
how to construct the structure by generating
adversarial neighbors.
We also guided you through a code example using the APIs from the Neural Structured Learning framework to enable adversarial learning.
There's more information in the video description below, along with a link to a Colab tutorial covering the example we discussed.
Don't forget to subscribe to this channel.
Thank you.
[MUSIC PLAYING]