LAURENCE MORONEY: Hi, everybody.
Laurence Moroney here.
I'm at TensorFlow World, and it's a great privilege
to chat with Megan Kacholia, vice president
of engineering working on TensorFlow.
And you just gave a great keynote about TensorFlow
and about TensorFlow 2 and some of the new things in it.
Could you tell us a little bit about it?
MEGAN KACHOLIA: Yeah, we've been working on TensorFlow 2
for a little while.
We talked about it at the Dev Summit
earlier this year in the spring, and then
just finalized the release in September, so just last month.
One big thing with TensorFlow 2 is
that we're focused a lot on usability:
making it easier for users to do what they need
to do with cleaner, more streamlined APIs and better
debuggability with eager mode, just trying to make things
simpler and easier for folks.
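As an illustration of what eager mode means in practice, here's a minimal sketch (assuming TensorFlow 2.x is installed): operations run immediately and return concrete values, so you can debug with ordinary print statements instead of building a graph first.

```python
import tensorflow as tf

# In TF 2, eager execution is on by default: ops run immediately
# and return concrete values, with no graph/session boilerplate.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

print(tf.executing_eagerly())  # True
print(y.numpy())               # concrete values: [[7, 10], [15, 22]]
```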
LAURENCE MORONEY: One of the things I really
liked in your keynote was where you called out
three audiences, right?
There was researchers, there was data scientists,
and there was developers.
I spend most of my time working with developers,
but what I'd love to really drill down on
is researchers and some of the great stuff
that's in TensorFlow 2 for researchers.
Could you tell us about that?
MEGAN KACHOLIA: Sure.
I think sometimes people worry that if they
want a lot of control--
that's one thing researchers really like,
especially ML researchers: they want
to be able to try out new things,
and they want to make sure they have that control
and are able to do that.
One thing to emphasize is that researchers still have
that control with TensorFlow 2.
Yes, we have done a lot to streamline
the high-level APIs, because we want
to make it easier for folks to come in at that level.
But I know there's a lot of folks who just want
to be able to go in deeper.
They want to go under the covers.
They want to go under the hood and be able to figure out,
well, I have this new model type I want to try out.
I have some new model architecture
I want to play with.
And all the things they loved about TensorFlow
and being able to do that before, that's all still there.
We're just trying to make sure there's also
a nice clean high-level surface as well.
It's not an either/or situation.
There's still both.
LAURENCE MORONEY: Yeah.
And then, all that great work that we've
done to make it easier for software developers who
aren't necessarily researchers doesn't preclude that.
MEGAN KACHOLIA: Exactly.
LAURENCE MORONEY: Everything the researchers
loved is still there.
MEGAN KACHOLIA: Exactly.
LAURENCE MORONEY: So-- and with TensorFlow 2,
there's also been some state-of-the-art research
going on, right?
MEGAN KACHOLIA: Yes, definitely.
I mean, just even showing today the example that I went through
from Hugging Face.
Hugging Face has done so many cool things
in the NLP, the natural language processing space.
Everyone's very excited about BERT models right now.
I think you hear it everywhere.
Everyone's talking about BERT.
LAURENCE MORONEY: They just like saying BERT.
MEGAN KACHOLIA: It's a fun name.
Everyone's talking about BERT right now, and transformers.
And you can see that even they were able to implement
some of the more advanced models in TensorFlow 2.0,
and they did it recently.
And it was really exciting for us to be
able to highlight that and show that no, it
wasn't us, the TensorFlow team, who
has all the intricate knowledge of things.
It was the external community being
able to use this for really advanced research use cases.
LAURENCE MORONEY: And that's a really important thing, right?
So it's like we can build what we know, but it's
when people are able to take that platform
and bring their expertise to it--
MEGAN KACHOLIA: Yes, exactly.
LAURENCE MORONEY: --and make changes.
MEGAN KACHOLIA: And do what they need to do with it.
LAURENCE MORONEY: That's what I find particularly inspiring,
really, really cool.
And one of the other things that you drilled into
was performance.
You've been working really hard to tweak performance
on TensorFlow as well.
MEGAN KACHOLIA: Yes.
LAURENCE MORONEY: Could you talk a little bit about that?
MEGAN KACHOLIA: Yeah, so performance
is something that we always pay attention to.
One big thing that came with TensorFlow 2.0
was distribution strategies:
making it easy for people to scale things up without
having to worry about how to set everything up
or how to do everything.
Like, no, let us handle more of that for you.
So that's something that we focused a lot on:
how do you get that performance,
and get it in an easy way, so that users don't
have to worry?
Now, again, if people want to go under the covers
and tinker and tweak every last thing, that's fine.
That's all still there, but we want
to make sure that it's easy for them
to get the performance they need.
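The scaling Megan describes is exposed through `tf.distribute`; a minimal sketch (assuming TensorFlow 2 is installed) using `MirroredStrategy`, which replicates a Keras model across whatever GPUs are visible and falls back to a single device otherwise:

```python
import tensorflow as tf

# MirroredStrategy handles replication, input sharding, and
# gradient aggregation across all visible GPUs (or one CPU).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Only model/optimizer creation moves inside the scope;
# the rest of the training code stays unchanged.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```

Calling `model.fit(...)` after this runs data-parallel training with no further changes, which is the "let us handle more for you" idea.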
I shared some numbers for GPUs,
looking at different types of NVIDIA GPUs, and the 3x
performance improvement we were able to get there with 2.0.
LAURENCE MORONEY: Which is amazing.
MEGAN KACHOLIA: Which is great.
Yeah, it's great, those things.
And then in the upcoming releases,
we'll have more things around TPUs,
as well as mixed precision support.
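Mixed precision support later surfaced in Keras as a global dtype policy; a minimal sketch (assuming a TensorFlow version that ships `tf.keras.mixed_precision`):

```python
import tensorflow as tf

# Compute in float16 where it's numerically safe, while keeping
# variables in float32 (biggest wins are on recent GPUs/TPUs).
tf.keras.mixed_precision.set_global_policy("mixed_float16")

layer = tf.keras.layers.Dense(8)
out = layer(tf.ones((2, 4)))
print(out.dtype)           # compute dtype: float16
print(layer.kernel.dtype)  # variable dtype stays float32

# Reset so the policy doesn't leak into unrelated code.
tf.keras.mixed_precision.set_global_policy("float32")
```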
LAURENCE MORONEY: Wow.
OK, so still working hard.
MEGAN KACHOLIA: Yes.
It's never-- I mean, it's just like anything else.
The work keeps going.
LAURENCE MORONEY: Yes, exactly.
MEGAN KACHOLIA: There's always new things happening.
LAURENCE MORONEY: So I'm preventing
the advancement of this performance
by having you here to talk to us.
Thanks for taking the time.
MEGAN KACHOLIA: No, that's all right.
LAURENCE MORONEY: But a couple of other things
as well for developers, and for tooling around developers,
there's TensorBoard, right?
MEGAN KACHOLIA: Yes.
LAURENCE MORONEY: There have been advances in that.
MEGAN KACHOLIA: Yes.
LAURENCE MORONEY: Could you--
MEGAN KACHOLIA: One big thing with TensorBoard
that we announced here at TensorFlow World
is hosted TensorBoard.
So the whole idea here is that, again, people
love to be able to share their experiments.
Sometimes it's because they need help.
Let me show you.
Can you help me see what's going on?
Sometimes it's because like, oh, my gosh, look.
Look at what I found.
Look at the results I got.
Let me show someone else.
And right now, folks are generally
sharing with screenshots.
They'll take screenshots of TensorBoard
just showing what's going on with their experiment
or their current set-up.
And instead, we want to make it so folks can actually
share their TensorBoard instead of showing
a picture of some snapshot in time of their TensorBoard.
And that's where hosted TensorBoard comes in.
So this is something that we're just starting
to release as a preview.
There are a lot of features we'll be adding
over the coming months as we stabilize it and move
toward general availability.
But I think it's really exciting for folks
to be able to take that and then make use of it
to more easily share with others,
like, look what I'm doing.
Again, going back to the community aspect.
The more we can enable the community to share things,
the better off everyone is.
LAURENCE MORONEY: And I think it's a really powerful sharing
mechanism as well, as you've mentioned.
Because, I mean, I've gone on Twitter,
I've seen screenshots of TensorBoard,
and sometimes it's hard to believe
a discovery when you just see a screenshot--
MEGAN KACHOLIA: Yes.
LAURENCE MORONEY: But when you can get hands-on
and you can poke around--
MEGAN KACHOLIA: Yes, exactly.
LAURENCE MORONEY: And a discovery
made in isolation isn't really a discovery.
It's when you're able to share like that
that it's really powerful.
And then on the theme of sharing, there's also TF Hub.
MEGAN KACHOLIA: Yes.
LAURENCE MORONEY: And there's some good stuff
happening there.
MEGAN KACHOLIA: Yes. Ease of use
is a big thing that we're continuing to emphasize,
and with TensorFlow Hub, we wanted to make it
much easier for folks to go to Hub and find things.
How can they know what there is?
How do they find the models they're looking for?
Discoverability is always a big thing
with any sort of UI element:
how do people discover things?
And so we've made a lot of improvements on TensorFlow Hub
in order to improve that discoverability
and just make it easier for folks
to find the types of pre-trained models
that they want to use.
LAURENCE MORONEY: And when they can find them easily,
they can reuse them easily.
MEGAN KACHOLIA: Exactly.
LAURENCE MORONEY: They can transfer-learn.
They can make their own discoveries.
MEGAN KACHOLIA: Exactly.
And then, again, give back.
If there's something that they find that they think, oh,
this would be a great addition, I want
to be able to share this pre-trained model--
we want to make sure we also have models
from the community on TensorFlow Hub,
and curated models available there,
so that other folks can come and take them and use them.
LAURENCE MORONEY: And kickstart that
virtuous cycle, which is really, really cool.
So Megan, thank you so much.
As always, it's very informative, and it's always a pleasure.
MEGAN KACHOLIA: Yeah, thank you.
It's always fun to chat.
LAURENCE MORONEY: Thanks.
And thanks, everybody, for watching this video.
If you have any questions for me or for Megan,
please leave them in the comments below,
and don't forget to hit that Subscribe button.
Thank you.
[MUSIC PLAYING]