>> [Announcer] From Denver, Colorado, it's the Cube
covering Supercomputing 17, brought to you by Intel.
(techno music)
>> Welcome back, everybody, Jeff Frick with the Cube.
We are at Supercomputing 2017 here in Denver, Colorado.
12,000 people talking about big iron, heavy lifting,
the stars, the future, mapping the brain,
all kinds of big applications.
We're here, first time ever for the Cube, great to be here.
We're excited for our next guest.
She's Susan Bobholtz, the Fabric Alliance Manager
for Omni-Path at Intel. Susan, welcome.
>> Thank you.
>> So what is Omni-Path, for those that don't know?
>> Omni-Path is Intel's high performance fabric.
What it does is it allows you to connect systems
and make big huge supercomputers.
>> Okay, so for the royal three-headed horsemen
of compute, store, and networking,
you're really into data center networking,
connecting the compute and the store.
>> Exactly, correct, yes. >> Okay.
How long has this product been around?
>> We started shipping 18 months ago.
>> Oh, so pretty new?
>> Very new.
>> Great, okay and target market, I'm guessing
has something to do with high performance computing.
>> (laughing) Yes, our target market is high performance
computing, but we're also seeing a lot of deployments
in artificial intelligence now.
>> Okay and so what's different?
Why did Intel feel compelled that they needed
to come out with a new connectivity solution?
>> We were getting people telling us they were concerned
that the existing solutions were becoming too expensive
and weren't going to scale into the future,
so they said Intel, can you do something
about it, so we did.
We made a couple of strategic acquisitions,
we combined that with some of our own IP
and came up with Omni-Path.
Omni-Path is very much a proprietary protocol,
but we use all the same software interfaces
as InfiniBand, so your software applications just run.
>> Okay, so to the machines it looks like InfiniBand?
>> Yes. >> Just plug and play and run.
>> Very much so, it's very similar.
>> Okay what are some of the attributes
that make it so special?
>> The reason it's really going very well is that it's the
price performance benefits, so we have equal to,
or better, performance than InfiniBand today,
but we also have our switch technology
is 48 ports verses InfiniBand is 36 ports.
So that means you can build denser clusters
in less space and less cables, lower power,
total cost of ownership goes down,
and that's why people are buying it.
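To make the port-count comparison concrete, here is a small illustrative calculation, not from the interview, assuming a non-blocking two-level fat tree: with switch radix r, each edge switch splits its ports evenly between nodes and uplinks, so the topology tops out at r * r / 2 endpoints. That gives 1,152 nodes with 48-port switches versus 648 with 36-port switches from the same two switch tiers.

```c
#include <stdio.h>

/* Illustrative only: maximum endpoints of a non-blocking
 * two-level fat tree built from switches of a given radix.
 * Each edge switch uses radix/2 ports for nodes and radix/2
 * for uplinks, and up to 'radix' edge switches can hang off
 * the core tier, giving radix * radix / 2 endpoints. */
static int fat_tree_two_level_max_nodes(int radix)
{
    return radix * radix / 2;
}

int main(void)
{
    printf("48-port switches: up to %d nodes\n",
           fat_tree_two_level_max_nodes(48));   /* 1152 */
    printf("36-port switches: up to %d nodes\n",
           fat_tree_two_level_max_nodes(36));   /* 648 */
    return 0;
}
```

Needing fewer, higher-radix switches for the same node count is where the less-space, fewer-cables, lower-power argument comes from; real deployments also factor in oversubscription and director-class switches.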
>> Really fits into the data center strategy
that Intel's executing very aggressively right now.
>> Fits very nicely, absolutely, yes, very much so.
>> Okay, awesome, so what are your thoughts here at the show?
Any announcements, anything that you've seen
that's of interest?
>> Oh yeah, so, a couple things.
We've had really good luck on the Top 500 list.
60% of the servers on the Top 500 list that are running
100 gigabit fabrics are connected via Omni-Path.
>> What percentage again?
>> 60%
>> 60? >> Yes.
>> You've only been at it for 18 months?
>> Yes, exactly.
>> Impressive. >> Very, very good.
We've got systems in the Top 10 already.
Some of the Top 10 systems in the world are using Omni-Path.
>> Is it rip and replace, do you find,
or these are new systems that people are putting in.
>> Yeah, these are new systems.
Usually when somebody's got a system they like
and it runs, they don't want to touch it.
>> Right.
>> These are people saying I need a new system.
I need more power, I need more oomph.
They have the money, the budget,
they want to put in something new,
and that's when they look to Omni-Path.
>> Okay, so what are you working on now,
what's kind of next for Omni-Path?
>> What's next for us is we are announcing a new
higher, denser switch technology,
so that will allow you to go for your director class
switches, which is the really big ones,
is now rather than having 768 ports,
you can go to 1152, and that means, again,
denser topologies, lower power, less cabling,
it reduces your total cost of ownership.
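As a rough back-of-the-envelope, not from the interview: if you treat a director switch simply as a block of host ports and ignore oversubscription, inter-switch links, and real topology design, going from 768 to 1,152 ports per director cuts the number of directors needed for a given node count by roughly a third. The cluster size below is hypothetical.

```c
#include <stdio.h>

/* Rough, illustrative estimate only: directors needed if each
 * one simply provides 'ports' host connections (ignores
 * oversubscription, inter-switch links, and topology design). */
static int directors_needed(int nodes, int ports)
{
    return (nodes + ports - 1) / ports;   /* ceiling division */
}

int main(void)
{
    int nodes = 10000;   /* hypothetical cluster size */
    printf("%d nodes with  768-port directors: %d\n",
           nodes, directors_needed(nodes, 768));    /* 14 */
    printf("%d nodes with 1152-port directors: %d\n",
           nodes, directors_needed(nodes, 1152));   /* 9  */
    return 0;
}
```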
>> Right, I think you just answered my question,
but I'm going to ask you anyway.
>> (laughs) Okay.
>> We talked a little bit before we turned the camera on
about AI and some of the really unique challenges of AI,
and that was part of the motivation behind this product.
So what are some of the special attributes of AI
that really require this type of connectivity?
>> It's very much what you see
even with high performance computing.
You need low latency, you need high bandwidth.
It's the same technologies, and in fact,
in a lot of cases, it's the same systems,
or sometimes they're running a software load
that is HPC focused, and sometimes they're running
a software load that is artificial intelligence focused.
But they have the same exact needs.
>> Okay.
>> Do it fast, do it quick.
>> Right, right, that's why I said
you already answered the question.
Higher density, more computing, more storing, faster.
>> Exactly, right, exactly.
>> And price performance.
All right, good, so if we come back a year from now
for Supercomputing 2018, which I guess is in Dallas
in November, they just announced.
What are we going to be talking about,
what are some of your priorities
and the team's priorities as you look ahead to 2018?
>> Oh we're continuing to advance the Omni-Path
technology with software and additional capabilities
moving forward, so we're hoping to have
some really cool announcements next year.
>> All right, well, we'll look forward to it,
and we'll see you in Dallas in a year.
>> Thanks, Cube.
>> All right, she's Susan, and I'm Jeff.
You're watching the Cube from Supercomputing 2017.
Thanks for watching, see ya next time.
(techno music)