
  • What's going on, everybody? And welcome to a video about cloud versus local GPUs, and when you should use which one.

  • I think it's pretty clear that the answer is most likely cloud, or local, or both, so the real question is in which circumstances you should use which one.

  • So most of the hosts are going to vary in price at least a little bit.

  • But in general, a good average number to work with is about $2 an hour per $4,000 of GPU.

  • So you're going to pay about $2 an hour for every $4,000 worth of GPU that you rent.

  • Regardless of which GPU it is, the ratio stays roughly the same: what would that be, 2 divided by 4,000, so 0.0005 dollars per hour per dollar of GPU. I think that's correct.

  • So whether you rent a smaller GPU or something even more powerful than a $4,000 GPU, it comes out to close to the same ratio, right?

  • It does vary significantly, though; as we've seen, some hosts literally cost twice as much for the same amount of VRAM.

  • So keep that in mind, but that's the number I'm going to be working off of.

  • So let's say it's $2 an hour for a $4,000 GPU.

  • Well, it's pretty simple math: how many hours is it going to take to equal $4,000?

  • It's just 4,000 divided by 2, so 2,000 hours.

  • That's about 83 days of nonstop use, which is roughly 2.8 months.

  • So basically, it takes about 2.8 months of continuous renting to equal $4,000.
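
To make that arithmetic concrete, here's a minimal sketch of the break-even calculation, assuming the roughly $2-an-hour-per-$4,000-of-GPU ratio from above; the numbers are illustrative, not quotes from any particular host.

```python
# Rough cloud-vs-buy break-even sketch (illustrative numbers only).
gpu_price = 4000.0   # purchase price of the GPU, in dollars
hourly_rate = 2.0    # assumed cloud rental rate, in dollars per hour

ratio = hourly_rate / gpu_price              # ~0.0005 $/hour per $ of GPU
break_even_hours = gpu_price / hourly_rate   # 2000 hours
break_even_days = break_even_hours / 24      # ~83 days of nonstop training
break_even_months = break_even_days / 30     # ~2.8 months

print(f"ratio: {ratio:.4f} $/hr per $ of GPU")
print(f"break-even: {break_even_hours:.0f} hours "
      f"(~{break_even_days:.0f} days, ~{break_even_months:.1f} months)")
```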

  • So no matter which GPU you're going to rent, let's assume you're also considering buying it.

  • You would have to spend about 2,000 hours constantly training on that GPU to equal the cost of buying it locally, as opposed to renting it.

  • Now, of course, there are other considerations; for example, if you buy the hardware, you could later sell it.

  • I try to budget about a 50% depreciation per year, partly because it's older and used, but also because by the time you're ready to sell it, there are usually newer and better cards out.

  • So it depreciates for that reason too.

  • Generally, a 50% hit per year tends to be the case on high-end GPUs; some of the mid-range ones hold their price better, but anyway, there's that to consider as well.
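
As a quick worked sketch of that depreciation rule of thumb (the 50%-per-year figure is just the rough budget mentioned above, not a market rate):

```python
# Rough resale-value sketch assuming ~50% depreciation per year.
purchase_price = 4000.0    # illustrative high-end GPU price
annual_depreciation = 0.5  # assumed 50% loss of value per year

for year in range(1, 4):
    resale_value = purchase_price * (1 - annual_depreciation) ** year
    print(f"after year {year}: roughly ${resale_value:,.0f}")
# after year 1: ~$2,000; year 2: ~$1,000; year 3: ~$500
```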

  • But the other thing you have to consider if you run locally is that it's not just the cost of the GPU.

  • You could, in theory, take that GPU and put it in your main machine, and that's what I did when I first started out.

  • And I'm actually going to recommend against doing that if you're just starting out, because it's not cost efficient.

  • The only reason I did it is that there were no cloud GPU hosts when I first got into machine learning; GPUs in the cloud are actually kind of a new thing that hasn't been around for very long.

  • So if you're just starting out, you can't possibly fathom actually training models for 2,000 hours, so just rent in the cloud.

  • That's where you should go for now.

  • I will throw in one more caveat, and that is a warning to watch out for these value-added hosts.

  • I see them recommended, or people asking about them, all the time.

  • An example of this might be FloydHub. Now, nothing against FloydHub; I think they have a decent enough service.

  • It's just that I think some people mistakenly go to them when they shouldn't.

  • So if you're trying to learn how to do machine learning, actually program things yourself, run it yourself, and go into production yourself, a place like FloydHub kind of becomes a trap.

  • Again, they're not trying to be a trap; it's your fault, not theirs.

  • But you'll get stuck there, because you get used to that specific platform and leaving becomes a significant hardship: you've got to learn how to actually do hosting yourself rather than using their pretty GUIs and their pre-built models and all that stuff.

  • So you want to be careful about those services. The other problem, and another reason it becomes a trap, is that FloydHub and sites like it are always going to be a little more expensive; they're value-added.

  • Comparing the cost of FloydHub to other hosts, even some of the more expensive ones, you're still going to pay something like a 33% premium over what you would pay on a more traditional host.

  • That does not scale well, so I would recommend against making that mistake.

  • There are also free tiers that you can get. Honestly, when you're just learning and you don't want to rent a GPU, or you don't want to buy one, or you don't have one powerful enough, just go off of your CPU.

  • Your CPU is powerful enough to learn deep learning; you really don't need a GPU.

  • You start needing a GPU when datasets are millions of samples large, and obviously it depends on how many features per sample and things like that.

  • But still, you get the idea: you don't really need a GPU to learn.

  • You need a GPU to actually produce things.

  • You also don't need a GPU for production; once you actually go into production with a specific model, chances are you're running it on a CPU, just because you don't need the GPU.

  • We use GPUs during training so we can feed in batches of samples.

  • Unless you're running a big business, or you're processing things for someone like Facebook who needs to classify a ton of stuff every second, you can just run it on a CPU.

  • I mean, every model I've ever put into production ran on the CPU, even though we trained on the GPU.
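
To illustrate that point, here's a minimal sketch of forcing inference onto the CPU with TensorFlow/Keras; the model filename and input shape are hypothetical, and this is just one common way to do it, not necessarily how any of the models mentioned here were actually deployed.

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide any GPUs so TensorFlow falls back to CPU

import numpy as np
import tensorflow as tf

# Hypothetical model file: train on a GPU somewhere, then serve like this on CPU-only hardware.
model = tf.keras.models.load_model("my_trained_model.h5")

# Single-sample (or small-batch) inference is typically fast enough on a CPU.
sample = np.random.random((1, 224, 224, 3)).astype("float32")  # placeholder input shape
prediction = model.predict(sample)
print(prediction)
```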

  • So anyway, getting back to our $4,000 GPU for $2 an hour: the other cost you have when you're local is electricity.

  • Let's say you are running a $4,000 GPU 24 hours a day; the entire machine is going to draw maybe $50 to $70 per month just in power.

  • The machine is also going to generate heat, which usually has to be accounted for with air conditioning, unless it's the winter.

  • Here in the south, I tend to budget about $100 per machine per month if I'm running it 24/7 training models, so again, that adds up.

  • So again, the break-even basically becomes about three months.

  • So if you were training nonstop for three months straight, or a cumulative three months over the course of a year, meaning about 25% of the time you're training some sort of model on that GPU, then it starts to make sense to buy the GPU and have it local.
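
Extending the earlier break-even sketch with the assumed power-and-cooling budget from above (again, rough illustrative numbers rather than real quotes):

```python
# Break-even sketch including rough local running costs (illustrative numbers).
gpu_price = 4000.0                 # purchase price of the GPU, in dollars
cloud_rate = 2.0                   # assumed cloud rental rate, dollars per hour
local_overhead_per_month = 100.0   # assumed power + cooling budget per machine

hours_per_month = 24 * 30
local_overhead_per_hour = local_overhead_per_month / hours_per_month  # ~$0.14/hr

# Hours of 24/7 use at which cloud spend catches up with purchase + running costs:
break_even_hours = gpu_price / (cloud_rate - local_overhead_per_hour)
break_even_months = break_even_hours / hours_per_month
utilization = break_even_hours / (24 * 365)

print(f"~{break_even_hours:.0f} hours, ~{break_even_months:.1f} months of 24/7 use")
print(f"that's ~{utilization:.0%} utilization over a year")
```

That lands right around the three-month, roughly 25%-utilization mark described above.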

  • But there are other reasons you might want to buy a GPU and have it local, right?

  • First of all, computer hardware is just cool; I understand that.

  • So sometimes you might be trying to justify buying it locally because it's going to save you money, but that's not true.

  • Just be realistic and honest with yourself: admit you want to buy cool stuff, and that's okay.

  • If you just want to buy it to own it, that's also cool.

  • But you're not saving any money; the cloud just makes more sense because you have options, right?

  • It's opportunity cost: if you're in the cloud, at any point you can change to a different cloud provider.

  • You can also scale back down, or at any point decide to just go local; you have that opportunity.

  • But if you go local, you might spend $5,000 and then train for only 100 hours over the course of a year, because, to be honest, it's actually pretty difficult to hit that 2,000-hours-of-training mark in a year.

  • I've done it, but I've also gone long periods of time where I don't, and it really depends; the only reason I'm able to do it is that I have clients, and I also do tutorials, so I'm always working on something.

  • But if you're a student or a hobbyist and you're just trying to learn, chances are a local GPU is a huge fiscal mistake.

  • If you're doing it for business purposes, it makes a little more sense.

  • Like if you're earning a profit in some way, getting grant money, being paid by some corporation, or you've got clients or whatever, it makes a little bit more sense.

  • Plus, if you're running it through a business, it's tax deductible, either through depreciation or just the purchase price.

  • So then it starts to make more sense to have things locally.

  • But anyway, there are so many things to think about; honestly, for me personally, I end up with a bit of a hybrid approach.

  • So I've got a Titan RTX in my main machine.

  • Prior to that, I had the 1080 Ti, which I strongly recommend; that was like the best bang-for-your-buck card to date.

  • Before that, it was the 970. Was it a Ti? I think it was a 970 Ti.

  • Anyways, that was an amazing one, and we haven't had an amazing bang-for-your-buck card since.

  • The new RTX cards are pretty awesome for their price, but I would've kept my 1080 Ti; I just happened to get a Titan RTX, so I use that now.

  • But I think for the typical average person, you really want to have something that you can develop on locally.

  • So something like a 1080 Ti, or maybe one of the new RTX cards, the 2080s or whatever; something you can at least develop models on locally.

  • But then, when you actually start training for long durations, you'd probably do it in the cloud.

  • The cloud is way more convenient.

  • Also, for me, I train models locally, but then sometimes I want to train many models at the exact same time.

  • Well, to train many models at the exact same time, I just expand into the cloud.

  • So maybe I've got one model running locally, but then I expand out and run a bunch of other models in the cloud; I kind of do it that way.

  • But I'd say 95% of my GPU time nowadays happens in the cloud.

  • It's just more convenient, and there's more power; as much power as you could possibly afford and desire.

  • So anyway, I hope that was clear.

  • Like I said, I recognize most people are not going to be buying even a $4,000 GPU, and then maybe some people want even more than that.

  • So, for example, at some point I'm going to review this Lenovo Data Science Workstation, which is a $30,000 computer that you can plausibly buy.

  • Well, even this three-month break-even extrapolates to high-end machines like that, where if you want 96 gigabytes of VRAM, it's going to be something like $13,000 a month from Azure or AWS, about $10,000 a month from Google, or as cheap as $4,000 a month from Linode.
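
As a rough side-by-side using those ballpark monthly figures (assumed here purely for illustration, not current pricing from any of these providers):

```python
# Rough comparison of a ~$30,000 workstation vs renting a 96 GB VRAM instance.
# All prices are the ballpark monthly figures mentioned above, not current quotes.
workstation_price = 30_000.0
monthly_rates = {"Azure/AWS": 13_000.0, "Google": 10_000.0, "Linode": 4_000.0}

for host, monthly in monthly_rates.items():
    months_to_equal_purchase = workstation_price / monthly
    print(f"{host}: ~{months_to_equal_purchase:.1f} months of rent "
          f"equals the purchase price")
```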

  • So yeah, I'm quite the Linode shill lately.

  • If you want to check out Linode for cloud GPUs, by the way, it's linode.com/sentdex for a $20 credit.

  • Honestly, I think the other benefit to cloud GPUs is what we're seeing with Linode right now.

  • Per dollar, they're coming in with faster teraflops, tensor cores, and the same amount of VRAM for less than half the price.

  • It's less than half, and it's not because Linode is amazing.

  • It's because all of these cloud hosting providers are going to be clawing at each other and fighting it out for the actual users.

  • Because right now, this cloud-hosted GPU thing is a brand new market with a lot of money coming in.

  • And if you ask me, I think GPU rentals are going to be a far larger market.

  • The average person who rents a GPU to do data analysis, and all the companies that are starting to need to do this, are spending way more money than they were ever going to spend on CPU and RAM, because it's a completely different challenge.

  • I mean, we're talking like 10x the hosting cost.

  • So I think GPU hosting, especially as we move forward, is so valuable that these companies are going to be competing the heck out of each other.

  • And in the meantime, you and I get to enjoy really, really cheap cloud hosting prices; I'm pretty sure, I don't know, I can't imagine how, but I think Linode has to be losing money on this offering.

  • And Linode is not going to have the last word, I don't think; someone else is going to come in with even better prices.

  • So anyways, stay tuned for that.

  • But as long as these companies are fighting each other for actual users, the cloud just becomes even more enticing to use over local.

  • But there are reasons to have local hardware; it can definitely be easier and sometimes more convenient.

  • You can develop right on your main machine, then push it to the cloud and let it run while you forget about it for a few days.

  • So anyway, hopefully that was helpful for some of you guys.

  • I didn't want to get too deep into the numbers, but basically, no matter which way you cut it, it's some number of thousands of hours that you need to spend on a GPU, and I don't think most people are going to spend thousands of hours on a GPU per year.

  • And if you just start in the cloud and you find that you are spending thousands of hours, you really haven't lost that much money, right?

  • You've retained your opportunity cost, so you can then think about getting some sort of local GPU power.

  • In the meantime, you can either get by with CPU and RAM, or get a mid-range GPU that you can dev on locally and then just use the cloud.

  • So anyways, that's it for now.

  • Questions, comments, disagreements, whatever; feel free to leave those below.

  • Come hang out with us in the Discord: discord.gg/sentdex.

  • Otherwise, I'll see you guys in some other video.
