Subtitles

  • I work for Hastexo, and

  • unfortunately my co-speaker couldn't make it today, so it's just me.

  • All right?

  • So forgive me if the talk's not as good as it would've been, but I'll do my best.

  • This is my Twitter handle, and this is our company's Twitter handle as well.

  • Feel free to drop a note if you feel like it.

  • We have these exact slides up on GitHub pages,

  • so if you wanna follow along, if you can't see right now, or if you

  • wanna look at them later, just point your browsers to hastexo.github.io/2015.

  • All Right?

  • It's exactly what I'm gonna show here, sans the little pieces of demo.

  • Okay?

  • So without further ado, this is OpenStack for Open edX, inside and out.

  • What is this really about though?

  • What made us decide to use edX and OpenStack?

  • The motivation was this.

  • We are a consulting company, and we consult on OpenStack, and

  • we also do a lot of training.

  • Right? So we fly out to places, do training.

  • And we've discovered, as all trainers do, that training is time-consuming,

  • and we'd rather develop the course once and have trainees and

  • students take it at their own pace.

  • Right? So, of course, edX came to mind as the MOOC platform of choice nowadays.

  • The problem is our training environments: when we give trainings regularly, we give each and every student a training environment.

  • In other words, if we're teaching OpenStack or if we're teaching Ceph,

  • we give students a cluster to play with, their own cluster so

  • they can deploy things, break things.

  • And then, we go in and fix them and so on and so forth.

  • How could we achieve the same thing in a self-paced massive online course?

  • Right?

  • So, this is the motivation.

  • How do we do that and not bankrupt anybody at the same time?

  • Okay?

  • I'm gonna assume a little bit of knowledge about OpenStack here, not too much.

  • Okay? So

  • I'm gonna use a couple of terms that I'm not gonna spend too much time to explain.

  • So you're gonna get the most out of this if you know a little OpenStack.

  • You should know about edX configuration, okay?

  • In other words, you should have probably deployed edX at least once, so you can understand what I mean when I say playbooks and roles.

  • Okay?

  • And you should probably have heard of XBlocks.

  • Has anybody not heard of XBlocks here?

  • Okay. Cool.

  • So you're gonna be fine.

  • Okay, so now, why?

  • Why did we choose OpenEdX?

  • Why did we choose OpenStack?

  • First off, why Open edX?

  • Well, mostly, since we work from a background of open-source and

  • open-source consulting, we wanted to build our training on an open-source platform.

  • And of course Open edX qualifies.

  • Now, that's not the only thing.

  • Being open source is not all it takes for an open source project to be good.

  • It needs to be alive as well.

  • Right? There needs to be leadership that

  • embraces the community and contributions.

  • And Open edX is, you know, top marks on that, thanks to, for instance, singingwolfboy here in the front.

  • [LAUGH] Or what's your real name again?

  • >> DB. >> DB?

  • >> Yeah.

  • >> Okay, so DB,

  • known as singingwolfboy on IRC, and nedbat, also here in the room, who helped us a lot.

  • So this is what we think is a good open source project,

  • one that has people behind it that talk to you.

  • Okay so Open edX top marks on that, community and openness.

  • Technology.

  • Open edX is built on Python.

  • Right? And Python is

  • basically just English that a computer can understand.

  • Right? So it's awesome that you can go in and

  • read the code, okay, because of PEP 8, right, thanks to Guido.

  • Right?

  • So the technology behind edX is rockable.

  • We can easily go in and understand it.

  • So that's another point.

  • The third and almost the most important point is extensibility.

  • We knew that no LMS platform out there would do what we wanted out of the box.

  • That's a given.

  • So we wanted something that we could use to build on.

  • Okay?

  • And Xblocks.

  • Right?

  • That's what caught our attention, and that's why I'm here, really.

  • And of course the coolness factor.

  • Right? Open edX is cool.

  • It's the cool kid on the block in LMS.

  • All right?

  • So we wanted to work with something that we would be glad to be

  • working with every day, and Open edX certainly qualifies.

  • Why OpenStack?

  • Well, openness in community, technology, extensibility, coolness factor.

  • Do you see the pattern here?

  • It's also an awesome open source project that has a great community around it.

  • The technology?

  • Well, it's written in Python as well.

  • So, right?

  • Awesome.

  • It's extensible by definition as all Cloud platforms should be.

  • You need to be able to build your own stuff in there.

  • Alright?

  • And it's also the latest,

  • greatest, coolest kid on the block as far as Cloud platforms go.

  • So bingo.

  • OpenStack, Open edX.

  • Yes. First problem we faced,

  • deploying Open edX on OpenStack.

  • So how do you go about doing that?

  • If we're just deploying a single node, it's pretty easy.

  • Right?

  • You take the sandbox.sh script in edX configuration,

  • port it to cloud-config, put it in a heat template, and

  • then use the edX sandbox playbook to fire up a single node.

  • Okay?

  • So that's pretty easy to do, really.

  • And it's well documented in the edX docs.

  • Okay? So we didn't have a lot of trouble there,

  • except with the fact that you do need eight gigs of RAM to run a single VM.

  • A four-gig VM won't do; you know, it'll crawl to its knees soon enough, and we took a while to discover that.

  • Okay? So that's easy.
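
As a rough illustration, the single-node recipe described above amounts to a cloud-config along these lines (a sketch under assumptions: the script URL, branch, and package list are illustrative, not verbatim from edx/configuration):

```yaml
#cloud-config
# Hypothetical sketch of a single-node Open edX boot: fetch the sandbox
# installer from the edx/configuration repo and run it at first boot.
package_update: true
packages:
  - git
  - wget
runcmd:
  - wget https://raw.githubusercontent.com/edx/configuration/master/util/install/sandbox.sh -O /tmp/sandbox.sh
  - bash /tmp/sandbox.sh
```

Embedding this as the server's user_data in a Heat template is what turns the sandbox.sh procedure into a one-command deployment.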

  • How about a cluster on OpenStack?

  • Right? That's the whole point.

  • And, you know, I sat with, what's your name again?

  • Sorry. >> Brandon.

  • >> Brandon.

  • I sat with Brandon over lunch, and

  • we discussed clusters on OpenStack and otherwise.

  • Right? And

  • we all have our little stories to tell.

  • And doing clusters, anything, really, is not very easy.

  • Luckily, edX is not all that hard, either.

  • Okay? So, we'll get to that in a moment.

  • But the first question is, all right.

  • So these guys, these edX guys, they have the edX.org site, right, with edX.

  • And supposedly with hundreds of courses in there, thousands of students.

  • So, they probably know what they're doing with edX [LAUGH].

  • Right? So how is open edX deployed on edX.org?

  • Luckily, last year there was a talk by a guy named Feanil Patel.

  • Is he here, by the way?

  • >> He's here today.

  • >> Is he in the room?

  • >> No.

  • >> That's too bad.

  • I wanted to shake his hand cuz that was sort of our golden brick road to what to do about it.

  • Okay. So he had really nice slides with: if you're a medium installation, you do this.

  • If you're bigger, you do that.

  • So that was awesome.

  • The only thing is, he didn't include any technical details on how to do it, [LAUGH] as he shouldn't, really.

  • But anyway, we were still sort of in the dark as to exactly what to do, but

  • we have a way to design a cluster.

  • So now if you look, yeah,

  • it's gonna be kind of hard to understand here, but bear with me.

  • There's a user guy up here and an administrator guy here.

  • So, our cluster has a load balancer in front

  • of an arbitrary number of app servers, where an app server

  • is composed of the LMS, the CMS, and everything that's not the MongoDB data store or the MySQL store.

  • So everything resides on a scalable, or rather arbitrary, number of app servers that talk to three, or actually in our case exactly three, back-end nodes.

  • Why three?

  • Because that's the minimum number of nodes for Galera.

  • So Galera was our choice, of course,

  • based on the fact that there's a nice MariaDB role

  • in edX configuration for clustering MySQL.

  • There you go.

  • So we use that, and it requires at least three nodes to work.

  • So that's why we have three back-end servers here.

  • All of the app servers talk to one of the nodes for now.

  • It's an improvement to our cluster layout to have a load balancer between the two.

  • And the administrator, and this was key to how we did it.

  • The administrator gets a deployment node in the cluster

  • from which he runs the edX configuration roles, and deploys everything to the rest.

  • I'll get to how in a minute.

  • So we have a design for a cluster, how do we implement it?

  • Can we use examples from edX configuration?

  • For instance, the AWS playbooks. Or actually, it wasn't the playbooks, it was the CloudFormation templates.

  • Can we use those?

  • No, because they were removed.

  • [LAUGH] Right?

  • And I talked to a couple of the guys here as to, okay,

  • so why did you guys remove these templates?

  • And the reason is nobody used them.

  • All right. So, all right.

  • Fine. We'll have to come up with something from

  • scratch.

  • How about playbooks for clusters?

  • We combed through edX configuration and we found something called vagrant-cluster.yml, which is the sample cluster playbook, I guess.

  • And it gives us clues as to what to do about the components in a cluster.

  • Okay, so that was awesome.

  • Thanks to whoever wrote this.

  • Is he or she in the room?

  • >> [INAUDIBLE] >> Do you know who it is?

  • >> Carson. >> Oh, Carson.

  • Oh man, too bad, yeah.

  • I love Carson.

  • He did a lot of things that we ended up using.

  • Okay, so we have a playbook.

  • What variables do we set?

  • So we went on a quest for finding the proper variables to set so

  • that the cluster would work.

  • Okay, so for back-end nodes you need to set the MariaDB clustered, Elasticsearch clustered, and MongoDB clustered variables to yes.

  • So awesome, that worked pretty well.

  • There are a couple more.

  • And for app servers you need to point all of the database variables and Mongo variables to the database servers that you're deploying.

  • Okay, so this is kind of hard to read here, but we pulled a little Ansible trick here, which is to say: for MySQL, if you're an app server, always look to the first back-end server, whatever its IP address is.

  • And for Mongo,

  • you just say use all of the back end servers that we have configured here.
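
That variable trick might look roughly like this in the app servers' extra vars (a sketch: the group name is an assumption, and the EDXAPP_* variable names only approximate the ones in edx/configuration):

```yaml
# Point every app server at the back-end nodes found in the dynamic inventory.
# MySQL: always the first back-end server, whatever its IP address is.
EDXAPP_MYSQL_HOST: "{{ hostvars[groups['backend_servers'][0]].ansible_default_ipv4.address }}"
# Mongo: all of the back-end servers configured in the cluster.
EDXAPP_MONGO_HOSTS: "{{ groups['backend_servers'] }}"
```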

  • I'll get to how this works in a minute, and this is the end result of a dynamically generated inventory, which OpenStack helps us with.

  • All right, next we needed to write a heat template.

  • Heat is OpenStack's equivalent of Amazon's CloudFormation; it does basically the same thing.

  • You write a sort of recipe for the deployment of a whole cluster.

  • In our case, we needed the following shopping list: one security group, one private network with a router, two cloud-configs, a deploy node with an IP, a load balancer with an IP, three back-end servers, and an arbitrary number of app servers.

  • So this is all coded in a heat template, and

  • you give this to OpenStack, and OpenStack will fire it up for you.
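
A skeletal HOT template for that shopping list might look like this (a heavily trimmed sketch; resource names, flavor, and image are illustrative, not the fork's actual template):

```yaml
heat_template_version: 2013-05-23
description: Open edX cluster (sketch)

parameters:
  app_count:
    type: number
    default: 1

resources:
  private_net:
    type: OS::Neutron::Net
  # ...security group, router, cloud-configs, deploy node,
  # load balancer, and the three back-end servers go here...
  app_servers:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: app_count }
      resource_def:
        type: OS::Nova::Server
        properties:
          flavor: m1.large
          image: ubuntu-12.04
          networks:
            - network: { get_resource: private_net }

outputs:
  app_ips:
    value: { get_attr: [app_servers, first_address] }
```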

  • It won't deploy edX for you, but

  • it'll fire up the cluster, which is a nice step forward.

  • And you can configure parameters to the heat template,

  • such as, importantly, the number of app servers we want.

  • And you get as an output the public IP addresses that were created.

  • Next, the inventory generator I had mentioned before.

  • What is this?

  • Well we have an arbitrary number of app servers in the cluster.

  • When I run Ansible from the deploy node, how do I know which ones they are?

  • What are their IP addresses?

  • How do I get this out of the OpenStack cloud?

  • And, from our OpenStack experience, we knew we could define something called metadata for a VM.

  • So remember the deploy node that I mentioned that the administrator can use?

  • You can define as metadata for that VM the IP addresses of all of the app servers, which is very neat.

  • So when the administrator logs into the deploy node, he can make

  • an HTTP request to a fixed URL and he gets what the app servers are.

  • And this is awesome because I can then feed this into Ansible, so that I just run the Ansible playbook with the OpenStack multi-node template and it'll do its thing without me needing to know, as a human, what the app servers are.

  • So this is actually the URL.

  • It's fixed for any and all VMs in OpenStack.

  • If you request this information from any VM in an OpenStack cloud, you get metadata.

  • It's generic metadata, but in the heat template,

  • I can tell it to include information about the rest of the cluster.

  • Let's see how that works.

  • Sorry about this.

  • It's gonna be a little bit hard to read, but bear with me.

  • Okay, so first things first.

  • This is a fork of edX configuration on the Cypress named release.

  • This is up on GitHub.

  • It's public, you can go in and use this if you want.

  • To fire up a stack on OpenStack,

  • whoops, sorry.

  • You run a command such as this one.

  • So, let's see.

  • heat stack-create, the path to the heat template which we have put up in that edX configuration fork, and a couple of parameters specific to the public cloud we are using.

  • So I'm launching a heat stack or a set of VMs on a public OpenStack cloud.

  • It's not ours.

  • We pay for it, as you would pay Amazon for a VM or two, we pay these guys, right?

  • So we have no involvement with them, and

  • I'm gonna fire up a stack here with one app server.

  • I can't even see it.

  • There it is.

  • app_count equals one, and key_name equals my SSH key, so I can log in later, okay?

  • So this is how you create a stack. Enter.

  • Of course we don't have time to wait for this to finish, it takes a while, okay?

  • I'm just gonna show you what it looks like as it is created.

  • All right. So my

  • open edX 2015 stack has been created.

  • It takes about five minutes for the stack itself to come up.

  • And then you can log in and run the Ansible playbooks.

  • Since we don't have a lot of time, I'm going to cheat a little and

  • use a stack already created before.

  • Okay?

  • Fine with you?

  • All right.

  • So I'm SSHing into that stack I created earlier.

  • And I'm going to issue that very HTTP request I mentioned to you guys a little bit earlier.

  • So, once again, curl that address.

  • And I'm piping the output through Python's json.tool, so that we can understand what it says, really.

  • Okay?

  • All right.

  • So, on this stack, can you see the meta item here?

  • I can see a list of my app servers and my list of back-end servers, all right.

  • How is this useful?

  • Well, this is useful because I can write a little script in Python, or any language really, that outputs JSON in a certain format.

  • And Ansible can then read this, it has a list of hosts to deploy to, right?
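
A dynamic inventory script along these lines can be sketched in a few lines of Python (a simplified illustration, not the actual script from the fork; the metadata key names are assumptions):

```python
import json


def build_inventory(metadata):
    """Turn the cluster info stored in the VM's Nova metadata
    (the 'meta' section of meta_data.json) into Ansible's dynamic
    inventory format: group name -> {"hosts": [...]}."""
    return {
        "app_servers": {"hosts": metadata["app_servers"].split(",")},
        "backend_servers": {"hosts": metadata["backend_servers"].split(",")},
    }


if __name__ == "__main__":
    # In the real script this would come from an HTTP request to the
    # fixed OpenStack metadata URL; here we use a canned example.
    meta = {
        "app_servers": "192.168.1.10,192.168.1.11",
        "backend_servers": "192.168.1.20,192.168.1.21,192.168.1.22",
    }
    print(json.dumps(build_inventory(meta), indent=2))
```

Ansible runs any such script with --list and consumes the printed JSON as its host inventory.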

  • So, we created such a script.

  • Right? And

  • it lives in playbooks/openstack in edX configuration.

  • So if I run openstack_inventory.py --list, right?

  • Yeah, there you go.

  • I got a nice little list of app servers and

  • back end servers on my cloud from the deployment, right, cool?

  • Do you take my word for it that if I run Ansible here it'll work?

  • [LAUGH] Please do.

  • [LAUGH] Because I already ran it, so yes it does work.

  • You simply run ansible-playbook, pointing at the hosts with dash i, openstack_inventory.py, and the OpenStack Ansible multi-node playbook that we have also put up there.

  • And this will deploy the back end servers and the app servers, and you're done.

  • That's it, you have an Open edX installation on OpenStack.

  • That's one thing. So you have a nice cluster, a nice cluster with edX on it.

  • How about if we want to teach clusters on that?

  • Okay, so getting back to our initial motivation.

  • Remember, we want to give students each and every one of the students or

  • trainees their own cluster to play with.

  • We think that's essential for the learning experience, so to speak.

  • So, how do you do that on edX?

  • Well, you can imagine it's not that hard to automate the creation

  • of a cluster on OpenStack or on Amazon.

  • Right, or anything like that.

  • Okay? So, the problem is, by definition, these are self-paced courses.

  • If the student takes three months to finish the course, you're gonna have

  • a cluster that's been running for three months and it's gonna cost a lot of money.

  • All right, so we needed a way out of this.

  • And once again, heat to the rescue.

  • What you do is you create a heat stack for each student and

  • then you suspend it when he or she is not using it.

  • All right?

  • So no charge if it's not being used.

  • So that works financially.

  • Okay? That's the financial solution, whereas technically we knew we could do it.

  • Okay? So, enter XBlocks.

  • The cool part, or the coolest part, of edX, we think, is that XBlocks are just Python.

  • You can write anything you want in there and edX will,

  • the edX platform, the LMS or the CMS will run it for you, so that's awesome.

  • We have a way now to automate the creation and suspension of stacks, okay?

  • By the way, the XBlock I'm gonna demonstrate in a while is up on GitHub as well.

  • AGPL, open source, etcetera.

  • Feel free to mess with it, criticize it, contribute, who knows?

  • [LAUGH] This, what is this then?

  • This is how you would define or

  • invoke this XBlock in a course.

  • You specify where your cloud is, where your OpenStack cloud is, the authentication information.

  • You upload the heat template to the data store of the course

  • you're developing and you refer to it here.

  • hot_lab.yaml.

  • You give it the SSH user that's going to connect to it and basically that's it.
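
In the courseware, invoking the XBlock might look something like this (a hypothetical sketch: the tag and attribute names here are assumptions for illustration, not the XBlock's exact API):

```xml
<!-- Hypothetical invocation; attribute names are illustrative only. -->
<hastexo
    stack_template_path="hot_lab.yaml"
    stack_user_name="training"
    os_auth_url="https://cloud.example.com:5000/v2.0"
    os_tenant_name="example-tenant"
    os_username="course-author"
    os_password="example-password"/>
```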

  • Right?

  • The only other thing is that the heat template must output two

  • things which the XBlock expects.

  • And that's a public IP address of the stack that's been created for

  • the student, of course.

  • And a private key that needs to be generated by the heat template.

  • So just two constraints on the heat template itself.
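
Those two required outputs could be produced like this in the heat template (a sketch: the output names the XBlock expects are assumptions here, though OS::Nova::KeyPair with save_private_key is Heat's standard way of generating a key):

```yaml
resources:
  training_key:
    type: OS::Nova::KeyPair
    properties:
      name: { get_param: "OS::stack_name" }
      save_private_key: true
  # ...the rest of the student environment...

outputs:
  public_ip:
    description: Floating IP the student connects to
    value: { get_attr: [student_floating_ip, floating_ip_address] }
  private_key:
    description: Generated SSH key for the student's session
    value: { get_attr: [training_key, private_key] }
```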

  • Okay, now here's where we became a little naughty.

  • Since it's just Python, we access the data store directly from the XBlock.

  • Oh. [LAUGH] Right?

  • The edX developers probably won't like this, to hear this, but this is what we came up with, so, yeah, Matt is shaking his head.

  • [LAUGH] Don't do that man.

  • Yeah.

  • It's part of the reason I'm saying this here.

  • We want suggestions as to how to improve it.

  • Okay?

  • And we also use the LMS auxiliary workers.

  • We hijack those.

  • In our defense, we sort of copied what the mentoring XBlock already does.

  • It's now called, what's it called now?

  • Problem builder?

  • Anyway, is Xavier around?

  • Yeah, there's Xavier.

  • Hey, man.

  • Long time no see.

  • So we do that.

  • So we want a better way to do this and we'd like suggestions later on.

  • Okay.

  • Finally connecting the browser to the lab environment.

  • We have an awesome auto-suspending, auto-resuming environment; how does the student get to it?

  • Right. We came across something called Gate One, an SSH terminal emulator written in Python and JavaScript.

  • And it has a nice API, so you can embed that into, for instance, the output of an XBlock.

  • So that's exactly what we did.

  • And that's what we used to connect the student's view of the LMS into the actual cluster.

  • All right?

  • And a couple more bells and whistles.

  • We developed a little role to deploy this XBlock.

  • And a role to deploy courses from Git.

  • Because we like Git, as does MIT I learned just a while ago.

  • Okay?

  • Unfortunately, I won't have time for

  • an actual demo because I have, what 20 seconds?

  • 10 seconds left?

  • Five minutes.

  • Okay you guys wanna see this working?

  • All right.

  • Damn [LAUGH].

  • All right, there you go.

  • So this is that stack I was SSHing in to.

  • It's a regular edX stack with a Stanford theme.

  • And that XBlock I was telling you about.

  • Now I'm going to view a course here.

  • And we built a demo course on Ceph, just, as is obvious by the name, for demonstration purposes.

  • Where's my mouse?

  • All right, so I'm going into the Introduction.

  • And what's happening now? All right, I don't know if you can read this.

  • It says, please wait until your environment has been fired up.

  • What's happening behind the scenes is a heat stack,

  • described by the course author, is being deployed to an OpenStack public cloud.

  • By sheer coincidence, it's the same cloud that edX is running on, but it didn't have to be.

  • It could be somewhere else entirely, okay?

  • I had a stack already up there.

  • And it's being resumed right now.

  • And if I am lucky, the demo gods are going to be nice to me, and

  • this is gonna work.

  • And in a minute, we're gonna be SSH'd into the environment itself.

  • So let me check... hey, it worked!

  • [LAUGH] There we are.

  • So, as you can see, the student didn't have to enter

  • IP addresses, passwords, anything.

  • This is all the course author's work.

  • So the course author defines the heat template that fires up the environment.

  • This particular environment has four VMs, I think.

  • So they all talk to each other.

  • You can fire up any kinds of services in there that work in a cloud.

  • Okay? So, that's it.

  • Any questions?

  • One?

  • >> [INAUDIBLE] >> Good question.

  • The credentials are... oh, repeat the question, sorry.

  • I think he asked, where are the OpenStack credentials defined?

  • Right? So

  • they're defined by the course author in the courseware.

  • So if you're using XML to write the course, it's in the XML.

  • If you're using Studio to develop the course,

  • it's when you instantiate the XBlock, the Advanced Module I think it's called.

  • It's in those options right there.

  • So you define options for an advanced module.

  • You can do that from Studio.

  • Does that answer your question?

  • Okay, anything else?

  • Any other questions?

  • >> [INAUDIBLE]

  • >> Good question.

  • He asked about this edX cluster: am I self-maintaining an OpenStack cloud just for this, or am I using an external provider?

  • In this case I'm using an external provider.

  • Both edX itself is running on a public cloud that's not run by us, and the students' environments as well.

  • They're running on a public cloud, no fancy stuff going on.

  • It's even an old version of OpenStack these guys are running.

  • I think it's Icehouse.

  • It's a year old.

  • Really old in open source terms.

  • Okay.

  • Cool. Another question.

  • >> [INAUDIBLE]

  • >> Excellent question.

  • So I cheated, as I said.

  • I cheated a little bit.

  • [LAUGH] Right?

  • I'm repeating the question again.

  • I always forget about that.

  • So he asks, how much time does it take if the environment wasn't already fired

  • up before?

  • So in this case, as I said, I cheated.

  • The environment was already there, just suspended.

  • So what you saw is the time it takes to resume a student environment.

  • So it was about, what, 10, 20 seconds, 30 seconds, maybe?

  • From scratch it takes more like a minute or two.

  • Depending on the cloud and the number of vm's you have in your cluster.

  • So it's a function of the cloud you're using and

  • the size of your heat template.

  • Any more questions?

  • >> [INAUDIBLE] >> Excellent question.

  • Does the cluster, or does the student cluster, suspend on a timeout, or what?

  • Okay, so what happens is if I close the browser window now.

  • Well, this thing's already on a dead man's timer or dead man's switch, right?

  • If the browser window is open, it sends a request every, I think it's two minutes, or a minute.

  • And if that request is not sent for two minutes then the stack is suspended.

  • So that's the workaround which we came up with.
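
That dead man's switch logic can be sketched in a few lines of Python (a simplified illustration; the real XBlock drives this through the LMS workers, and the exact interval may differ):

```python
import time

SUSPEND_AFTER = 120  # seconds without a keepalive before suspending


class DeadMansSwitch:
    """Sketch of the keepalive logic: the browser pings periodically;
    if no ping arrives for SUSPEND_AFTER seconds, the student's heat
    stack should be suspended."""

    def __init__(self, now=None):
        # Record when we last heard from the browser.
        self.last_keepalive = now if now is not None else time.time()

    def keepalive(self, now=None):
        # Called each time the open browser window checks in.
        self.last_keepalive = now if now is not None else time.time()

    def should_suspend(self, now=None):
        # True once the silence has exceeded the grace period.
        now = now if now is not None else time.time()
        return now - self.last_keepalive > SUSPEND_AFTER
```

A periodic worker task would then call should_suspend() for each student stack and issue the Heat suspend action when it returns true.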

  • So I don't have any more time, that's it.

  • Thank you very much.

  • I hope you enjoyed it.

  • All of this, by the way, sorry, is open source AGPL on GitHub, all right?

  • Our fork of edX configuration, and the XBlock is the same deal.

  • We'd love it for people to actually use this stuff.

  • And we do intend to contribute as much as we can to edX itself.

  • Okay, thank you.

  • >> [APPLAUSE]
