
  • What's up, everybody?

  • And welcome to part seven of our Unconventional Neural Networks series.

  • In this tutorial, what we're doing is getting into Deep Dream.

  • So the following here is a video I posted a little bit ago.

  • It's a video of a deep dream.

  • Basically, I let the machine have a deep dream, we zoom in a little bit, it has a deep dream a little bit more.

  • It's actually a few pixels at a time.

  • And then you can basically take those as frames and render a video out of it.

  • And this is the result.

  • We're gonna run through all the steps there.

  • But the first step, of course, is just to have a single deep dream example.

  • So with that, this is like an example of just one frame.

  • So we start with the Andromeda Galaxy and then let a deep dream kind of run wild for a few iterations, and you get something like this, and that's basically what Deep Dream is.

  • So how do we do that?

  • So, first of all, let me close out of this; the main bit of code that we're gonna use is actually from here, from these TensorFlow tutorials, which I'll link in the text-based version of this tutorial, because they're actually really good tutorials.

  • They're a bit dated, but the information that's in them is very good.

  • So not only does it cover stuff on Deep Dream, but there's just a whole bunch of other stuff that is useful to know when working with TensorFlow.

  • Just a real good tutorial series.

  • We're currently just gonna be working with part 14 here, but yeah, definitely check it out.

  • Um, so in order to follow along, you will need one thing for sure.

  • Well, first you're gonna need Python 3.6.

  • I'm using TensorFlow 1.7; you should actually be able to get by with anything 1.4 or greater.

  • Um, I'm using 1.7, and then you're also gonna need OpenCV.

  • So pip install opencv-python.

  • Otherwise, you can always go to the unofficial binaries.

  • If you're on Windows and you can't get it, just search for OpenCV unofficial binaries.

  • Actually, just search 'opencv', one word, click on that, boom, done, and grab the one that matches your setup.
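For reference, a quick way to sanity-check the setup described above (Python 3.6, TensorFlow 1.4 or newer, and OpenCV installed via `pip install opencv-python`). The version comments are just what the video uses, not hard requirements:

```python
# Minimal sanity check that the dependencies import; versions are illustrative.
import sys
import tensorflow as tf
import cv2

print(sys.version)       # Python 3.6 in the video
print(tf.__version__)    # 1.7 in the video; anything 1.4+ should be fine
print(cv2.__version__)   # installed with `pip install opencv-python`
```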

  • Okay, um, otherwise, what I've done is I hosted it on pythonprogramming.net.

  • I'll try to remember to put a link to this in the description, but I might forget; it's just the 'deep dreaming start' zip.

  • So go ahead and grab that.

  • I'm just gonna grab that and put that in my working directory here.

  • Right?

  • And once you have that, let's go ahead and just extract it.

  • It's kind of large, because we have the Inception model, and we also have a starting image.

  • I would suggest you use something relatively small.

  • This is an 800 by 450. The larger the image, the more processing it's gonna take to run through that entire image.

  • Um, but at the end of this, I'll show you another deep dream that I made, and this time I did a full 1080p dream.

  • It starts off as The Starry Night, but it's actually 1080p.

  • You could even go to 4K, but it will take a really long time.

  • Anyway, once you've got that extracted, this is all our starting code.

  • And, ah, the Inception model is basically just a download; we don't really need to touch that.

  • We've got the dream image script, and then later we'll make something for the deep dream video.

  • So this is basically the actual dream code, which belongs to, ah, Magnus Erik Hvass Pedersen.

  • Ah, anyway, this is basically identical to what we were just looking at in that IPython notebook.

  • I just kind of turned it into a, ah, module.

  • And then we can use the dream image script here, which, let's see... yeah.

  • So this one I have populated.

  • Um, so we don't even have to really run it.

  • I'll just run through what I've done.

  • I can't remember if I actually populated this or not.

  • So, um, what's going on here is, let's just explain it, I guess, line by line; I have some notes up here that we can reference as well.

  • But basically, everything you need to know is in here.

  • In fact, I'm gonna open this up in Sublime and scoot this over.

  • So from the little package that we grabbed, I'm just going to say we're gonna import the model, load image, and recursive optimize, which is the recursive optimize function there.

  • That's what's basically doing the deep dream.
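As a rough sketch, the top of the dream script presumably looks something like the following. The module name `deepdreamer` and the exact helper names are assumptions based on the description here (the starter code mirrors the Hvass Labs notebook):

```python
# Assumed imports from the starter zip, following the names described in the video.
from deepdreamer import model, load_image, recursive_optimize

import numpy as np
import PIL.Image
```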

  • Then we specify which layer we want to use.

  • Now each layer is a little different.

  • So, like, layer one is just kind of like waviness.

  • Basically, there's really not much there; then layer two, and basically, as you go, the layers are likely to get more and more complex.

  • Layer two is like lines.

  • Layer three is like boxes and squares and stuff; layer four is circles.

  • Um, I could even just add it in here: layer five is like eyes; it's very good at... it's like it sees eyes.

  • It's kind of a creepy layer, to be honest.

  • Ah, but it very quickly transitions into like dogs, bears, just kind of cute, furry animals.

  • Layer seven gets into like faces and buildings.

  • Uh, eight is like fish, frogs, reptiles, stuff like that.

  • Ten is like monkeys, lizards, snakes, ducks, kind of like...

  • And it's sort of like everything before it; um, and then layer 11.

  • I can't remember what Layer 11 is.

  • Sometimes it's like weird-looking things; I can't really figure out exactly what layer 11 is, but again, it's always like a combination of everything before it.

  • So anyway, coming back down here, that's the layer.

  • So you'll specify which layer we're going to attempt to use in this case.
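In code, picking a layer and an input file might look like this. `layer_tensors` is how the Hvass Labs model presumably exposes its layers, and the file name is just a stand-in for whatever image you use:

```python
# Pick which layer to dream on; indices start at 0, as noted later in the video.
layer_tensor = model.layer_tensors[3]

# Input image; substitute your own file here (hypothetical name for the 800x450 image).
file_name = 'starry_night_800x450.jpg'
```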

  • Uh, we also need a file name.

  • In my case, it's not saved under a path; it's actually just local.

  • So it's just the starry night, 800 by 450.

  • And let me just confirm that quick.

  • Make sure.

  • So it's actually not in this directory yet.

  • It is in the starry night one.

  • Oh, and I saved... oh, man, so I made this all like days ago.

  • And, you know, days in Internet time is, you know, a year.

  • So I forgot I did this; anyway, they're all here for you, but I encourage you, you don't have to use my images.

  • Um, I encourage you to use your own images, but I guess I'll just leave them there; they're cool.

  • I just made my job so easy.

  • Um, anyway, you can use any image you want.

  • Just take note of the resolution.

  • But other than that, even in this case, we don't even have to pass a resolution.

  • Later, to make the video, you do need to pass a resolution, because we're going to be zooming in and resizing.

  • Hopefully some of you guys will come up with a better way than what I'm doing; I'm sure someone has something smarter than what I'm doing.
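For what it's worth, the zoom-and-resize step being hinted at for the video version (covered in the next tutorial) could be sketched roughly like this with OpenCV; the crop amount and resolution here are illustrative, not taken from the video:

```python
import cv2

def zoom_frame(frame, x_size=800, y_size=450, crop=2):
    # Trim a few pixels from every edge, then scale back up to the original size,
    # which is what produces the slow zoom between dream frames.
    cropped = frame[crop:y_size - crop, crop:x_size - crop]
    return cv2.resize(cropped, (x_size, y_size))
```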

  • So anyways, maybe you won't need it.

  • So then, once we have a file name, we're just loading in the image, um, basically using the deep dream load image.

  • And in fact, I'm kind of curious what their little function is for that.

  • So I'm not sure what's special about theirs.

  • Um, dreaming... is that the deep dreamer load image?

  • Yeah, I was gonna say... okay, yeah.

  • So basically, all it does is, using PIL, load the image and convert it to float32.
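Based on that description (open the image with PIL, convert it to float32), their `load_image` helper is probably no more than something like this; a sketch, not necessarily the exact code from the zip:

```python
import numpy as np
import PIL.Image

def load_image(filename):
    # Open with PIL and return a float32 array for the network to work on.
    image = PIL.Image.open(filename)
    return np.float32(image)
```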

  • Anyway, nothing fancy.

  • So we read in the image, and that's gonna get passed to that recursive optimize.

  • That's what's actually gonna do the dream; the other things to pay attention to are how many iterations we're gonna take, what's the rescale factor, the number of repeats, and the blend.

  • The two major ones that I tend to kind of muck around with are the number of iterations and the number of repeats.

  • Also, the rescale factor kind of takes into consideration how many iterations you're going to do.

  • Um, but all these things you can just kind of tinker with, and I think the best way to get an idea for what they do is to tinker with them and see what they do.

  • Um, but I will just say the number of iterations is generally gonna be, um, it's like, how clear is the dream versus...

  • So, like, how smooth a transition are we going to make?

  • So, like, for example, from here to here is a lot of iterations; that's gonna take a lot.

  • I forget how many exactly that is.

  • But I would guess, I don't know, 40, 60, something like that.

  • Like many, many, many iterations.

  • Um, whereas when you're doing a deep dream video, for example, if you want it to be relatively smooth and not super jumpy, you'd probably have the number of iterations at one, like each frame is just one generation from the previous one.

  • Um, so yeah. Oh yeah, and then the number of repeats.

  • This is just basically how clear the dream itself will be.

  • So, like, the number of repeats that you're gonna have, that's like how many passes over the data, or how far deep into the layer we go.

  • I guess it's really hard for me to put it into words.

  • Really, like I said, I think the best thing to do is probably just to run this through and see what I mean.
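Putting those knobs together, the dream call itself plausibly looks like the following; the keyword names follow the Hvass Labs notebook the starter code is based on, and the values are just the kind of settings discussed above:

```python
# One dream pass over the image; num_iterations, num_repeats, rescale_factor and
# blend are the knobs discussed above - tweak them and compare the results.
img_result = recursive_optimize(layer_tensor=layer_tensor,
                                image=load_image(file_name),
                                num_iterations=20,
                                step_size=1.0,
                                rescale_factor=0.7,
                                num_repeats=8,
                                blend=0.2)
```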

  • But anyway, moving along, um, then we're going to just go ahead and clip it.

  • So the reason why we're doing that is because we let the neural network run wild.

  • It doesn't understand that the bound is 255, like it doesn't understand that you can't go below zero.

  • It probably won't go below zero, because it never got negative data in the first place.

  • But it doesn't understand that it can't necessarily go beyond 255, so we want to make sure we clip it.

  • Ah, Then we're gonna go ahead and convert the type, and then we convert it to an image.

  • We save the image, and then if we want to, we can make that image pop up.
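The clip, convert, save, and show steps just described map to a few lines like these (the output file name is just an example):

```python
# Keep pixel values in the valid 0-255 range, since the network itself has no such bounds.
img_result = np.clip(img_result, 0.0, 255.0)
img_result = img_result.astype(np.uint8)

# Convert back to a PIL image, save it, and optionally pop it up on screen.
result = PIL.Image.fromarray(img_result, mode='RGB')
result.save('dream_output.jpg')  # example output name
result.show()
```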

  • So what I'm gonna go and do is run it.

  • I'm not quite sure what I changed.

  • It says I changed some things.

  • I'll save it, and then...

  • And in fact, Actually, I could just run it from here, so this should work.

  • Um, now I'm doing 20 iterations, so that might actually take a decent amount of time.

  • I might pause it while this is going; um, each generation that you do is gonna be a little slower, and then depending on how deep you are in the network, um, it's gonna take even longer.

  • I forget overall; right now we're on layer three.

  • So this actually should be pretty quick.

  • So we're starting with, uh, this image, which is, you know, a pretty wavy image in general.

  • Um, so the result's already up for us.

  • So where we started was with this thing, and we ended on this image, which is, you know, actually kind of interesting, because you've pretty much just changed it.

  • For anyone who's like an art person, like, you've changed like the entire thing.

  • Like it's supposed to just be like strokes and very wavy.

  • And, like, almost no jagged edges except for the buildings.

  • And then, boom, everything's jagged edges.

  • Uh, anyway, pretty cool; it just looks cool.

  • Um, so that's how you can at least perform a deep dream on a single image; again, I suggest that you go ahead and try to do it with a variety of images.

  • You don't have to use my images if you don't want to.

  • Um, and then, like, one other thing I'll do is, let's say instead, let's do layer three, but in this case, let's do only one iteration... actually, let's do two iterations, just so it's not just like the original image; that should go pretty darn quick.

  • We'll see.

  • Yes, as you can see, this doesn't... I mean, it kind of changed it, but barely; like, I think if you just saw this image, ah, you wouldn't say 'oh, that's been changed from the original' unless you saw them side by side.

  • I don't think you'd even recognize that there's a difference.

  • Um, but then, let's go back to, like, 20 iterations, and then let's do layer... well, so actually, let me just be clear.

  • Layers start at zero.

  • So we would say layer 11.

  • But that's actually layer 12, just for the record.

  • So we'll do that and let's see what that one looks like; and for this, I'm gonna pause because it's probably gonna be a little slow, so I'll pause it and we'll be back when it's done.

  • Okay, So actually, here are a few different ones.

  • So this would be with eight repeats just on layer three.

  • So just a little more; um, this was with 10 repeats.

  • So you can already see, like, how much more in-depth it gets.

  • Um, this would be layer 10 with 20 iterations and eight repeats.

  • Looks like another layer three, just with a different rescale.

  • Ah, then this would be 20 iterations at a 0.9 rescale.

  • Looks like layer eight or nine.

  • I didn't actually save which layer that was.

  • And then again, um, I'm gonna guess this is layer 11, well, actually layer 12, but the index would be 11.

  • Anyway, this would be 40 iterations, 25 repeats, and a 0.99 rescale.

  • So anyways, uh, those were some examples, But like I said, you definitely want to play around.

  • You could probably spend a long time playing with your own images.

  • Um, even just doing single images, the outputs are pretty cool.

  • But in the next tutorial, what I'm gonna show you guys is how you can do it to a video just on basically your own dream and how you can actually just make it frame by frame.

  • So that's what we're gonna be doing in the next tutorial.

  • If you have questions, comments, concerns, whatever, feel free to leave them below.

  • Otherwise I will see you in the next tutorial.
