

  • ♪ (music) ♪

  • Hi, everybody and welcome to this episode

  • of TensorFlow Meets.

  • I'm absolutely ecstatic to be meeting with Yannick Assogba,

  • who works on the feverishly popular,

  • the thing that it seems everybody loves-- TensorFlow.js.

  • I'm really delighted to have you here.

  • So, can you tell us, what it is that you do at TensorFlow.js?

  • Thank you.

  • So, I'm a software engineer on the TensorFlow.js team,

  • and I work a lot on the visualization parts of the library,

  • as well as core parts and tutorials and documentation and examples.

  • (Laurence) Okay. Now, one of the things

  • that's really interesting about TensorFlow.js

  • that maybe doesn't strike a lot of people right away

  • is that it's not just for running inference in the browser,

  • it's also training in the browser.

  • So can you tell us a little bit about that,

  • and what is it that just makes it work so well?

  • Sure, yeah.

  • So, we support full training in the browser.

  • We have an API to do that.

  • And in the browser we actually get GPU acceleration via WebGL,

  • so it can actually be pretty performant.

  • That is quite useful, whether you're doing transfer learning

  • or training small models from scratch right in the browser.

  • And you can make use of client data

  • that you don't need to send to the server,

  • which can be quite nice for certain applications.

  • Yes, that's actually really exciting, now that you say it.

  • So privacy is very important, so to be able to have that client data

  • to retrain in the browser, to build your models.

  • Have you seen any really good examples of that?

  • Yeah, so one example that I just heard about here at the Dev Summit

  • was in the medical domain,

  • which is one that we sort of had suspicions would happen,

  • which is basically being able to deploy models

  • to doctors in hospitals.

  • One, the install process is very easy,

  • because you can just open a web page.

  • And none of that data needs to leave that machine,

  • so all of the compute can happen right on the doctor's device

  • and get their answers there.

  • We've heard some fantastic stories about that here.

  • Nice! I can't wait to be able to share those with the world.

  • So maybe some case studies or something would be really cool.

  • Now, from the programmer's perspective, for training in the browser--

  • things like Keras-- do you still have Keras layers?

  • Basically our main API that we suggest people use

  • is what we call tf.layers, and it's pretty much the same as Keras.

  • It's compatible.

  • We can save models out that are convertible to Keras,

  • and we can convert Keras models

  • to something that can run in the browser.

  • So it's very familiar. It's Keras with JavaScript conventions.

  • Okay, (chuckles). I like the way you say that--

  • "Keras with JavaScript conventions."

  • And also managing data-- tf.data just works.

  • Yes. So we provide tf.data--

  • a nice API for data transformations, and some convenience, also,

  • for managing what's in memory versus out of memory,

  • which can be a bit constrained depending on the device you're on

  • in terms of how much GPU memory you're allowed to access.

  • So tf.data helps you have more control over that,

  • and will do the work of pulling data

  • from either main memory or a disk at the right time

  • to drive the training loop, so that's really convenient.

  • Nice, and then you can make the training much more efficient,

  • so you're not stuffing memory when you don't need to.

  • You can have a nice pipeline of data coming in, that kind of thing.

  • And we're also going to provide some nice interfaces

  • to sensors like web cams and other things,

  • just to make it easy and make that experience easy

  • - to build interactive applications. - Cool.

  • And there is the great Pac-Man demo right now,

  • that uses the web cam.

  • I actually showed it on a live stream at the TensorFlow Dev Summit.

  • That's always a crowd-pleaser. People love that one

  • because it does so much-- it's transfer learning,

  • it's using the web cam, it's in the browser, it's familiar,

  • those kind of things.

  • Now one of the things is that what I noticed with the release

  • is that there are some models, like BodyPix and Toxicity, built in.

  • I found Toxicity actually really cool for web developers.

  • Can you give me a little bit more of the background of that?

  • Yeah, so we have an interest in building these pre-trained models

  • for developers to use right off the bat

  • to get into building ML-powered apps

  • without having to get into the nitty-gritty of writing a model.

  • So we provide each of these models--

  • and we have about 8-10 of them now-- as individual NPM modules,

  • so you can install them individually and get to prototyping.

  • BodyPix does this nice person segmentation,

  • and tells you which pixels in an image belong to a body,

  • and the Toxicity model is a new one that, given a piece of text,

  • gives you a sense of... "Are there insults here?

  • Is this sort of toxic content?"

  • which is useful in a moderation context.
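A hedged sketch of how a moderation flow might use the Toxicity model. It assumes the npm packages @tensorflow/tfjs and @tensorflow-models/toxicity; loading the model downloads weights, so the `moderate` function needs network access. The `flagged` helper is a hypothetical name invented for this example.

```javascript
// Extract per-sentence matches for one label from model.classify() output,
// which is an array of {label, results: [{probabilities, match}]} entries.
function flagged(predictions, label = 'toxicity') {
  const entry = predictions.find((p) => p.label === label);
  return entry ? entry.results.map((r) => r.match === true) : [];
}

async function moderate(comments) {
  // Lazy require: only needed when actually classifying (downloads weights).
  const toxicity = require('@tensorflow-models/toxicity');
  const model = await toxicity.load(0.9); // 0.9 = match confidence threshold
  const predictions = await model.classify(comments);
  return flagged(predictions);
}
```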

  • Yeah, almost any web site now has some kind of community element

  • or something like that,

  • and to moderate that is very, very difficult.

  • So to have that built in with TensorFlow.js,

  • that's reason enough alone to use the library, for many developers,

  • much less the rest of it.

  • And, from a deployment perspective,

  • you can run this on the reader or client's device,

  • so you don't have to set up a lot of infrastructure

  • to do this [inaudible].

  • Yeah, exactly. So the Toxicity model's really great.

  • Any other cool ones?

  • Yeah, one that we also recently released is the Universal Sentence Encoder,

  • and this is really a building block

  • for building all kinds of text models.

  • It will convert an input text into a 512-dimensional vector

  • that you can use to train all kinds of tasks,

  • and we really hope people build cool stuff.
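As one illustration of using those 512-dimensional vectors as a building block, here is a hedged sketch that compares two sentences by cosine similarity. It assumes the npm package @tensorflow-models/universal-sentence-encoder; loading the encoder needs network access, and the function names are invented for the example.

```javascript
// Cosine similarity between two embedding vectors (e.g. 512-d encoder outputs).
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function sentenceSimilarity(sentenceA, sentenceB) {
  // Lazy require: only needed when actually embedding (downloads weights).
  const use = require('@tensorflow-models/universal-sentence-encoder');
  const model = await use.load();
  const embeddings = await model.embed([sentenceA, sentenceB]); // shape [2, 512]
  const [a, b] = await embeddings.array();
  return cosineSimilarity(a, b); // near 1.0 for semantically similar sentences
}
```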

  • Another one is a speech commands model

  • that we released for audio recognition.

  • Cool! So you can actually... Is it a limited set of speech commands,

  • or can it be trained to a voice?

  • Yeah, it's pre-trained to about, I think, 20 commands,

  • and you can retrain it to your commands

  • right there in the browser.

  • - All in JavaScript? - Yeah, all in JavaScript.

  • Nice. So 1.0 is now available. Where should people go?

  • How should they get started with this?

  • Best place to go is TensorFlow.org/js,

  • and we have our guides and tutorials and examples,

  • as well as demos that you can check out, with links to source code,

  • and models ready to use right away.

  • So, thank you, Yannick. This has been really fun stuff,

  • really inspirational stuff, as always,

  • and thanks, everybody, for watching this episode.

  • If you've any questions for me, or if you've any questions for Yannick,

  • just please leave them in the comments below.

  • And if you want to learn more, please go to TensorFlow.org/js.

  • Thank you.

  • ♪ (music) ♪

