
[MUSIC PLAYING]

ARJUN GOPALAN: Hi, I'm Arjun Gopalan, and welcome to episode 3 of Neural Structured Learning.

In the previous episode, we learned about natural graphs and how they can be used in Neural Structured Learning. While natural graphs are common, there are many machine learning tasks where the input data does not form a natural graph. For instance, if you recall the document classification task from the previous episode, we used citations to form a natural graph. In the absence of citations, we wouldn't have had a graph. Similarly, if you're doing simple image classification or text classification where the input data contains just raw images or text, we may not have a natural graph in either case.

In this episode, we'll discuss how we can apply Neural Structured Learning to such tasks. So what do we do if we don't have a natural graph to begin with?

The title of the video might have given this away, but the main idea is to build, or synthesize, a graph from the input data. Building a graph can be done in many ways, but in this video, we'll use the notion of similarity between instances to build a graph.

In order to define a similarity metric, we need to convert raw instances, whether they are documents, text, or images, to corresponding embeddings, or dense representations. We can do this using pretrained embedding models, such as those on TensorFlow Hub. Once we convert raw instances to their embeddings, we can use a similarity function, such as cosine similarity, to compare how similar pairs of embeddings are. If the similarity score is greater than a given threshold, then we add a corresponding edge in the resulting graph. Repeating this process to cover the entire data set builds a graph.
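The graph-building procedure just described can be sketched in plain Python. Everything here is illustrative, not the library's actual implementation: the toy embeddings, the build_similarity_graph helper, and the 0.8 threshold are assumptions for the sake of the example.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def build_similarity_graph(embeddings, threshold=0.8):
    """Add an edge between every pair of instances whose
    embedding similarity exceeds the threshold."""
    edges = []
    ids = sorted(embeddings)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            sim = cosine_similarity(embeddings[a], embeddings[b])
            if sim > threshold:
                edges.append((a, b, sim))
    return edges

# Toy embeddings: doc1 and doc2 point in similar directions,
# while doc3 is nearly orthogonal to both.
embeddings = {
    "doc1": [1.0, 0.9, 0.1],
    "doc2": [0.9, 1.0, 0.0],
    "doc3": [0.0, 0.1, 1.0],
}
edges = build_similarity_graph(embeddings, threshold=0.8)
# Only the (doc1, doc2) pair clears the 0.8 threshold.
```

Note that comparing all pairs is quadratic in the data set size; a production graph builder would use approximate nearest-neighbor search for large corpora.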

And once we have a graph, using Neural Structured Learning is straightforward, as we saw in the previous episode. Let's illustrate this workflow for the task of sentiment classification using the IMDb data set. This data set contains movie reviews, and the task is to classify them as good or bad. Let's see what the code looks like to build a neural structured learning model for this task.

Here again, we use Keras for illustration, but Neural Structured Learning also supports Estimators. The first step is to load the IMDb data set. For simplicity, we use a version of it that is part of Keras. Once that is done, we want to convert the raw text in the movie reviews to embeddings. We use Swivel embeddings in this example, but any other embedding model, such as BERT, word2vec, or an embedding model of your choice, may also be used instead.

Once we have created the embeddings, we can build a graph using those embeddings. Neural Structured Learning provides an API called build_graph to do so. Notice that it accepts the similarity threshold as one of its arguments. This allows you to control the threshold below which edges are dropped from the resulting graph. In this example, we use a threshold of 0.8.

Once we have the graph, we define the features of interest for our model and combine these features with the graph using the pack_nbrs API in Neural Structured Learning. In this example, we use a maximum of three neighbors to augment our training data.
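To give a feel for what that packing step does, here is a minimal sketch. The pack_neighbors function, the feature dictionaries, and the max_nbrs limit are illustrative assumptions, not NSL's actual implementation; the real API also handles serialized tf.train.Example records and file I/O.

```python
def pack_neighbors(features, edges, max_nbrs=3):
    """Augment each training example with the features of up to
    `max_nbrs` of its graph neighbors, highest edge weight first."""
    # Collect each node's neighbors along with the edge weights.
    nbrs = {node: [] for node in features}
    for a, b, weight in edges:
        nbrs[a].append((weight, b))
        nbrs[b].append((weight, a))

    augmented = {}
    for node, feats in features.items():
        # Keep only the top `max_nbrs` neighbors by similarity weight.
        top = sorted(nbrs[node], reverse=True)[:max_nbrs]
        augmented[node] = {
            "features": feats,
            "neighbor_features": [features[n] for _, n in top],
            "neighbor_weights": [w for w, _ in top],
        }
    return augmented

# Toy example: review r1 is similar to both r2 and r3.
features = {"r1": [0.2, 0.7], "r2": [0.3, 0.6], "r3": [0.9, 0.1]}
edges = [("r1", "r2", 0.95), ("r1", "r3", 0.85)]
packed = pack_neighbors(features, edges, max_nbrs=3)
```

Each augmented record now carries its own features plus its neighbors' features and edge weights, which is exactly what the graph regularization step consumes during training.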

Now that we have the augmented training data, the next step is to create a graph-regularized model. This part is similar to what we did in the previous episode. First, we define a base model, which can be any type of Keras model, whether it's a sequential model, a functional API-based model, or a subclassed model. It can also have an arbitrary architecture.

Then we define a graph regularization configuration object, which allows you to specify various hyperparameters. In this example, we use three neighbors for graph regularization. Once this configuration object is created, you can wrap the base model with the graph regularization wrapper class. This will create a new graph-regularized Keras model whose training objective includes a graph regularization term. What's left is then just compiling, training, and evaluating the graph-regularized model.
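Conceptually, the wrapped model's training objective adds a term that penalizes disagreement between a sample's prediction and its neighbors' predictions. The numeric sketch below is a simplification under assumed choices: a made-up regularization multiplier and a weighted squared-distance penalty standing in for the library's configurable distance options.

```python
def graph_regularized_loss(supervised_loss, pred, nbr_preds,
                           nbr_weights, multiplier=0.1):
    """Total loss = supervised loss + multiplier * weighted squared
    distance between a sample's prediction and each neighbor's."""
    reg = 0.0
    for nbr_pred, w in zip(nbr_preds, nbr_weights):
        reg += w * sum((p - q) ** 2 for p, q in zip(pred, nbr_pred))
    return supervised_loss + multiplier * reg

# The same supervised loss is penalized more when the sample's
# prediction disagrees with its neighbor's prediction.
close = graph_regularized_loss(0.5, [0.9, 0.1], [[0.8, 0.2]], [1.0])
far = graph_regularized_loss(0.5, [0.9, 0.1], [[0.1, 0.9]], [1.0])
```

Minimizing this combined loss nudges neighboring samples toward similar predictions, which is how the graph structure influences training without changing the base model's architecture.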

This example is also available as a Colab-based tutorial on our website. You can find that link in the description below. In summary, we looked at how to build a graph-regularized model when the input data does not form a natural graph.

This technique can be applied to all kinds of input data, such as text, images, and videos. Now, graph building is not the only approach to handle input data that does not form a natural graph. In fact, in the next video, you will learn about another aspect of Neural Structured Learning called adversarial learning, which can be very useful to improve a model's robustness to adversarial attacks.

That's it for this video. There's more information in the description below. And before you get to the next video, don't forget to hit the Subscribe button. Thank you.

[MUSIC PLAYING]
