
  • All right, so look, if you follow this weekly show, you know I don't say this lightly.

  • I believe in this week's edition, there's going to be a use case with AI for everybody watching this video.

  • And that's a big claim, because most people don't even use AI regularly.

  • But I believe this week is so colorful in terms of different applications, that even the AI haters will find something they might want to try.

  • Because we do have a brand new ChatGPT feature, we have to talk about that.

  • And then the best in class image generator, Midjourney, finally is rolling out the alpha access to the website to almost all users.

  • This took a while.

  • But beyond that, there are specialty websites where you can try on clothes or create virtual avatars, which should be interesting to people who don't even use AI in their everyday workflow.

  • So without further ado, let's dive into this week's AI news you can use.

  • Okay, first things first: the ChatGPT memory update.

  • I almost created a separate video on this, but then I played with it for a few hours.

  • I was like, this is going to be a segment in the weekly news show.

  • And that's it, because it's not as revolutionary as many people make it out to be.

  • What's the deal?

  • Well, let me give you a super quick summary, and then I'll give you some tips on how to actually get something out of this feature.

  • So basically, they have a new feature here where you can go to Settings, Personalization, Memory.

  • Now, if you're in Europe or South Korea, you don't get this feature.

  • And if you have a Teams or Enterprise account, you don't get this feature.

  • Let me tell you, as somebody who sits in Europe and has a ChatGPT Teams account for the entire AI Advantage team, I was not pleased to hear this at all.

  • So I had to create a brand new one and use a VPN to tell my browser I'm in the US.

  • And that's why I have access to it over here.

  • It comes turned off by default, and it's basically automatically generated custom instructions for all of your ChatGPT usage, meaning this is a beginner feature.

  • If you've been creating your custom instructions manually, as I've been teaching on this channel since the day they were released, you should know the power of adding context to all of your prompts.

  • Here, it does it automatically for you.
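
If you work with the API rather than the ChatGPT app, the closest equivalent to custom instructions is simply prepending the same system message to every request. Here's a minimal sketch of that idea, assuming the official openai Python package and a placeholder instruction string; it's an illustration, not how ChatGPT implements the feature internally.

```python
# Minimal sketch: emulating "custom instructions" outside the ChatGPT UI by
# prepending the same system message to every API call.
# Assumes the official `openai` package and an OPENAI_API_KEY env var;
# the instruction text below is just a placeholder example.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "I run an AI education channel. "
    "Answer concisely, skip disclaimers, and prefer practical examples."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # any chat model works here
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # added to every prompt
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a short YouTube video description about ChatGPT memory."))
```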

  • So look, I had some conversations with it.

  • I mentioned that my boss's name is Karen.

  • Are you filming me?

  • Okay, Karen.

  • Luckily, that is not true.

  • And then I had it store my style and my name.

  • And it simply does this by picking up on some conversations you have with it here, okay?

  • So as you can see, you can have a conversation here.

  • And as you go back and forth, you can say something like, great, store details about my preferences and feedback in memory.

  • And then it updates the memory like so.

  • You can always manage them.

  • You can delete some of them.

  • What I did up here is I just fed it all my custom instructions like I teach them on the channel.

  • It's probably the most important video on ChatGPT that you should check out.

  • But basically, I fed it my custom instructions, and then it stored them to memory.

  • Fun fact, it rewrote them.

  • So they're in natural language instead of bullet points.

  • But pretty much, it's the preset I have in custom instructions all the time.

  • Again, in some other videos, I teach you how to set these up for yourself.

  • But the point here is this.

  • This is not novel.

  • We have had custom instructions for a while now.

  • And they do the same thing as this.

  • Now, look, there is one difference.

  • I know there's going to be somebody in the comments pointing out: but hey, Igor, this memory feature doesn't work exactly like custom instructions.

  • It creates an embedding that is stored in a vector database.

  • And then it retrieves it every time.

  • That's why it doesn't add to your context window the way custom instructions do.

  • And while it's true that what's inside your memory doesn't count against the limited context window, here's the thing.

  • There's no confirmation whatsoever that this is using a vector database.

  • People are just speculating on this.
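
Just to make that speculation concrete, here's roughly what an embedding-plus-retrieval memory would look like if it did work that way. This is a toy sketch of the classic RAG pattern, not OpenAI's confirmed implementation; it assumes the official openai Python package for embeddings and uses a plain in-memory list as the "vector database".

```python
# Toy sketch of the speculated mechanism: store memories as embeddings,
# then retrieve the closest ones for each new prompt (classic RAG pattern).
# This is NOT confirmed to be how ChatGPT memory works; it's just an illustration.
# Assumes the official `openai` package; the "vector database" is a plain list.
import math
from openai import OpenAI

client = OpenAI()
memory_store: list[tuple[str, list[float]]] = []  # (text, embedding) pairs

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def remember(fact: str) -> None:
    memory_store.append((fact, embed(fact)))

def recall(query: str, top_k: int = 2) -> list[str]:
    q = embed(query)

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    ranked = sorted(memory_store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

remember("The user's boss is named Karen.")
remember("The user prefers short, bullet-point answers.")
print(recall("Write an email to my boss"))  # the Karen memory should rank first
```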

  • At the end of the day, a few extra tokens in your custom instructions are not going to change the game for you.

  • We have a 32k context limit with GPT-4 these days.

  • Anything you want it to retain, you can just put into the custom instructions and keep it there, without having to worry about the fact that ChatGPT might just pick up some context here and there and add it to your memory.

  • I mean, what if you're prompting something for work and then you're maybe creating a trivia quiz for the birthday party of your child?

  • You don't want your work context to infect that generation.

  • But that's what this does automatically.

  • So for anybody who's a little more advanced, and if you're watching this channel, you're probably at that level:

  • I still recommend using custom instructions.

  • There you get full control.

  • Now, look, where this does get interesting is something that they announced for the future, but that's simply not the case today.

  • They announced that GPTs are going to have localized memory.

  • So as you use a GPT, it's going to pick up a piece of context and store it in the local memory of that GPT.

  • And that's an amazing feature.

  • You want that because the GPT is context specific.

  • But regular ChatGPT usage is not context specific; you use it for all sorts of things.

  • So in summary, I think this feature makes a lot of sense for GPTs.

  • It might be a great addition for newcomers that have no idea that custom instructions exist and that you can put your preferences in there.

  • But for anybody watching this channel, chances are you're better off using custom instructions as of now.

  • And with that being said, let's move on to the next one.

  • Which is a very interesting app.

  • It's called AnyTopic and it's in beta.

  • And look, this is not something revolutionary, but it's really cool.

  • And at the time of this recording, it's free.

  • You can go in here and you can pick one of these categories.

  • What it does is it turns anything into an audiobook that you can then listen to.

  • So for example, I could go in here and say, hey, I want an audiobook for my commute.

  • I could set that my commute is, let's say, 15 minutes long.

  • And then I could post the article in here.

  • So you might have caught the speculation about the gpt2-chatbot that came out this week.

  • We're not covering that in depth because this is AI news you can use.

  • And as of now, you can't even use the gpt2-chatbot.

  • It's down.

  • Anyway, that's besides the point.

  • The point is I have this article here, and maybe I want to consume this on my commute as an audiobook and not read it right now.

  • So what you could do is you could just copy this.

  • You could paste the link in here and it would create, let's say, like a five-minute audiobook.

  • You just hit this and for free, it's going to send you an email.

  • Now, I actually did exactly that with a 10-minute duration.

  • So here, you can have a listen to the audiobook it created; it uses this narrative structure.

  • So watch out for that.

  • I'm going to skip around here a little bit so you can get a feeling.

  • But kind of a cool use case, isn't it?

  • Imagine stepping into an arena where titans of technology spar with words instead of weapons, showcasing their prowess in solving puzzles and answering questions with the precision of a grandmaster chess player.

  • OK, so it introduces it like a story.

  • And then let's dive a little deeper here.

  • ...waters of a new technological epoch, GPT-4.5, or more ambitiously, GPT-5.

  • What made this debutante chatbot compelling and curious all at once?

  • And there you go, you get the entire article as a little audio file that you can listen to while you commute.

  • That's a cool little use case.

  • I personally love consuming content via audio or video.

  • And yeah, I can see myself bookmarking this and using this maybe on a long train ride, load a bunch of articles into it and then have a little listening session like this.
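
If you'd rather build a quick DIY version instead of betting on the site staying free, the rough pipeline would be: fetch the article, have an LLM compress it to roughly commute length, then run the script through a text-to-speech model. Here's a hedged sketch under those assumptions; the helper name, the ~150-words-per-minute estimate, and the model choices are mine, not how Anytopic actually works.

```python
# DIY sketch of the "article to commute-length audiobook" idea, assuming the
# `requests` and `openai` packages. The helper name and the ~150 words-per-minute
# estimate are my own assumptions, not Anytopic's actual pipeline.
import requests
from openai import OpenAI

client = OpenAI()

def article_to_audio(url: str, minutes: int, out_path: str = "commute.mp3") -> None:
    html = requests.get(url, timeout=30).text
    target_words = minutes * 150  # ~150 spoken words per minute is a common rule of thumb

    summary = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Rewrite the article in this HTML as a narrated script of about "
                f"{target_words} words, keeping the key points:\n\n{html[:50000]}"
            ),
        }],
    ).choices[0].message.content

    # Note: the TTS endpoint caps input at ~4096 characters per request,
    # so a longer script would need to be chunked into several calls.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=summary[:4096])
    speech.stream_to_file(out_path)

article_to_audio("https://example.com/some-article", minutes=15)
```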

  • Oh, and here's another super quick one.

  • This one is called IDM-VTON.

  • And look, there's many alternatives to this.

  • We've seen apps like this before, where you can take pieces of clothing and put them on pictures of people.

  • This is a free Hugging Face Space demo.

  • Link is in the description, as with all the others.

  • And basically, you can pick a model, you can pick a piece of clothing, you can upload your own pictures, and then you click try on. In about 20 to 30 seconds, you get a result.
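
And if you'd rather script a demo like this than click through the page, Hugging Face Spaces built on Gradio can usually be driven with the gradio_client package. The sketch below only shows the general pattern; the Space ID and the predict() arguments are assumptions, so you'd check view_api() for the real endpoint names rather than trusting the placeholders here.

```python
# General pattern for driving a Gradio-based Hugging Face Space from Python.
# The Space ID below is an assumption, and the predict() arguments are
# hypothetical; always call view_api() first to see the real endpoint names
# and parameters for the specific try-on demo you're using.
from gradio_client import Client

client = Client("yisol/IDM-VTON")  # assumed Space ID; swap in the demo you're using
print(client.view_api())           # lists the real endpoints and their parameters

# Hypothetical call shape once view_api() tells you the real signature:
# result = client.predict(
#     "me.jpg",              # photo of the person
#     "jacket.png",          # photo of the garment
#     api_name="/tryon",     # placeholder endpoint name
# )
# print(result)              # typically a path to the generated image
```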

  • Look, the thing with these apps is usually they're not perfect.

  • This is one of those use cases where, in order to be useful, it needs to be perfect.

  • But nevertheless, I wanted to feature it because it's something fun you can play with.

  • It's something fun you can use.

  • Honestly, I don't think this one is way better than all the other ones I've seen.

  • It is new though.

  • And maybe there's an incremental improvement in these.

  • Sooner or later, we're going to get perfect ones.

  • Nevertheless, something fun that can really show off the potential of AI well.

  • All right, the next one is going to be super fast.

  • And this is something we had, I think, a day or two after the Llama 3 release.

  • But if you follow the show, you know that we cover Poe here and there.

  • This is an alternative to the GPT store.

  • They have all kinds of bots.

  • And you can just simply log in with your Google account and access this for free too.

  • And what they did is they took the Llama 3 70B model, namely the version running on Groq chips, which we also covered before.

  • Essentially, they're the fastest chips for inference that we have right now.

  • Meaning the text generates really, really fast.

  • And in combination with the Llama 3 model, this makes for a fantastic user experience.

  • And as I really like the Poe interface, I guess I'm recommending it right now.

  • You can use Llama 3 70B in a great interface that is super, super fast at generating.

  • And essentially, you have a model that handles 90 to 95% of the GPT-4 use cases pretty much equally well, for free.

  • And it generates way faster.

  • Have a look at this.

  • I'll just click one of these prompts.

  • Boom, it's almost done.

  • Isn't that incredible?

  • So sure, these have been available, but this is so simple.

  • You just log in with your Google account.

  • You can bookmark this.

  • And yeah, I just wanted to let you know that the combo of Llama 3 and Groq is fantastic.

  • And if you're not paying for GPT-4, Claude, or Gemini Advanced, this is probably your best bet as of today.
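
And if you want that same Llama 3 on Groq combo from code instead of through Poe's interface, Groq exposes an OpenAI-style API with a free tier. A minimal sketch, assuming the groq Python package, a GROQ_API_KEY environment variable, and the Llama 3 70B model ID that was current at the time (model names do get rotated):

```python
# Minimal sketch of using Llama 3 70B on Groq's inference API directly,
# assuming the `groq` Python package and a GROQ_API_KEY environment variable.
# The model ID was current at the time but may be rotated later.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

completion = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[
        {"role": "user", "content": "Give me three ideas for a kid's birthday trivia quiz."},
    ],
)
print(completion.choices[0].message.content)
```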

  • Okay, so next up, this is where I would normally tell you a little story about this week's sponsor, how their product is really interesting, and that you should check it out.

  • Not this week.

  • We're keeping this video sponsor-free in order to tell you, in about one minute, why I haven't been uploading as much to YouTube as I wish I would have, and why hopefully that's going to change soon.

  • And it's essentially because of the AI Advantage community.

  • So hear me out.

  • A few months ago, I ran into this wall with uploading to YouTube that felt super restraining and super limiting, because you just can't do everything on YouTube.

  • You're goddamn right.

  • And don't get me wrong.

  • I love YouTube.

  • It's been my main content consumption platform all my life, but it doesn't cater well to long-form content.

  • It doesn't cater well to content that requires some previous knowledge.

  • The community interaction is very limited.

  • You can leave a comment here.

  • There's a 100% chance that I'll read it.

  • I might reply, but that's usually where the interaction ends.

  • And I wanted more.

  • I love technologies.

  • I'm really into AI.

  • I generally care about these topics and I want to teach people more.

  • And this is where I felt limited on YouTube.

  • You just can't do one-hour lectures that build up on previous videos here.

  • It just doesn't work.

  • It'll get slaughtered by the algorithm, and then it pulls all your other content down with it.

  • That's just how it works.

  • And fun side note, this Monday, we held a stream with some of the other biggest YouTube creators around AI.

  • Now, that stream brought this question up, because this has been a serious frustration for me.

  • I asked them, guys, look, if analytics and views weren't a thing, what would you be creating more of?

  • And everybody had their unique example.

  • Matthew, for example, said he would be creating way more interviews, but they don't perform so well, etc.

  • You can go listen to the stream.

  • It has been a fantastic stream.

  • I'm so glad we did this.

  • By the way, we plan on doing more on like a bi-weekly basis with rotating guests.

  • But for me, the one thing that I would like to do more of is teach people in depth.

  • Hold workshops, hire an entire team that holds workshops, creates guides, and maybe go beyond that and play with creative ideas, like hosting challenges where we teach you an AI skill and then a community of people completes it.

  • Then you get to compare, compete, learn from each other.

  • All of this is just not possible on YouTube, but it is possible if we create a separate platform.

  • So that's what I did.

  • And that's where a lot of my attention went recently.

  • And I named the whole thing AI Advantage Community.

  • It is paid, yes.

  • You can check it out in the description below.

  • But essentially, I've been holding bi-weekly events since last June.

  • And these are all the topics.

  • You can check them out here.

  • There's way more.

  • The team also holds events on development, Midjourney, and Stable Diffusion.

  • But look at that.

  • Yesterday, I held a one-hour lecture on ChatGPT memory, RAG, and local LLMs.

  • It was basically a beginner-friendly explanation of RAG and vector databases.

  • And it showed off what memory is trying to be, but isn't yet.

  • So this was an opportunity to talk about memory, share my thoughts on it, but then go into a 40-minute prepared lecture teaching you what RAG is.

  • And let's be honest, what if I created a YouTube video titled "How Does RAG Work for Beginners?"

  • If you understand how YouTube works, you just know that this would harm the channel.

  • But on here, I can do that and cover all these other topics.

  • So I hold these bi-weekly lectures right now.

  • Not promising I'll be doing this forever.

  • And then we do all this other stuff in the community.

  • I'm gonna end my little rant here.

  • Let's move on to the next use case.

  • But I just wanted to share this with you because this is where most of my time went.

  • I built a team of 10 people, four of them full-time.

  • And all of their work goes into the community and YouTube.

  • And as you can tell, the YouTube activity has been slowing down recently.

  • And my plan is to pick it back up.

  • I want to share a lot of the content from the community with you here and in my newsletter.

  • We're gonna give away guides, live events, and a lot of the learnings we produce in there for free.

  • But you know, if you have that many mouths to feed, there's only so much you can do for free.

  • So that's it.

  • This is this week's sponsored segment.

  • And I just wanted to share this so you know that I've been working my ass off in the AI space.

  • Just not with a focus on YouTube.

  • And I just wanted to share this because the show that we're doing today is all about "what can you use?"

  • And the community is the second part to that, which is how can you use it?

  • All right, let's get back to the next piece of AI news you can use.

  • Which is as simple as it is useful.

  • Now, granted, this is one of those apps that could just be a prompt.

  • But it's really good to have it in an interface like this.

  • You don't have to worry about crafting it.

  • It's basically an FAQ generator for websites.

  • So what I'm gonna do is I'm just gonna take our website here.

  • And I'm gonna throw the URL in here.

  • And basically, this is gonna look at the website.

  • And it's gonna generate a set of frequently asked questions that people might have.

  • And then you can add this to your website.

  • Fantastic little use case.

  • So simple.

  • If you ever built a website for yourself, whether it's a portfolio website or whatever, adding a little FAQ section has never been simpler.

  • You can just use this.

  • Again, at the time of this recording, this is free.

  • If that changes down the line, honestly, it's no big deal.

  • You could just take screenshots of your website.

  • Go into GPT-4, upload them, and then tell it, hey, create a FAQ for this website of mine, considering the target audience.

  • And then define your target audience.

  • That should do the trick.
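
For the record, here's roughly what that fallback looks like if you script it against the API instead of pasting into the ChatGPT interface: encode a screenshot, send it with a prompt that names your target audience, and ask for the FAQ. Just a sketch under those assumptions; the file name, model choice, and audience description are placeholders.

```python
# Sketch of the screenshot-based fallback: send a website screenshot to a
# vision-capable GPT-4 model and ask for an FAQ aimed at a stated audience.
# Assumes the official `openai` package; file name and audience are placeholders.
import base64
from openai import OpenAI

client = OpenAI()

with open("homepage.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-turbo",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "Create an FAQ section for the website in this screenshot. "
                "The target audience is beginners who are curious about AI tools."
            )},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```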

  • Cool.

  • In the next piece of AI news you can use, Synthesia AI actually came up with brand new avatars.

  • And these avatars are upgraded.

  • They now express emotions.

  • Now, here's the thing.

  • We talked about this.

  • You might be aware of this.

  • Most people agree that HeyGen is currently the market leader in generating the most realistic AI avatars.

  • And they're really good at that because they look very human-like.

  • And what do humans do when they speak?

  • They express emotions with their face.

  • They don't just talk like a robot.

  • And this is Synthesia's new update.

  • So what I decided to do is a quick little shoot-out.

  • I just took a little piece of text that tells you to like this video if you enjoyed it.

  • And I ran it for both Synthesia and HeyGen.

  • So now we get to compare these two.

  • And you get to make up your own mind about which is better, or whether they're equally good.

  • Let's have a look.

  • Don't forget to hit the like button.

  • It really helps the channel.

  • As you can see, there were some emotions in her face.

  • Whereas before, it was very robotic.

  • It was just talking like this and it moved the lips accordingly.

  • But there was really no emotion at all.

  • Okay, so that was Synthesia.

  • Let's look at HeyGen.

  • Don't forget to hit the like button.

  • It really helps the channel.

  • And now, editing team, let's do a little side-by-side comparison.

  • On one side, Synthesia.

  • On the other one, HeyGen.

  • So my opinion here is that they're at a similar level.

  • But these are the preset avatars.

  • So if you want a custom avatar of yourself, I believe that Synthesia doesn't offer this yet.

  • It's only on their preset avatars and a limited number of them.

  • When you do this...

  • And by the way, I don't have a premium account.

  • You can just try this by yourself as of now.

  • All the ones that have this little star sign have the expressive avatars with the emotions.

  • So as you can see, there's not that many.

  • HeyGen has way more.

  • But my verdict would be they did catch up with HeyGen.

  • Which is great because HeyGen is really good.

  • All right, moving on to the next use case.

  • This is going to be a super quick one for all you Midjourney enthusiasts.

  • They rolled out Alpha to all users who have generated 100 images or more.

  • And I believe this is the first time where the bar is so low that it's reasonable.

  • Anybody who has played a bit with Midjourney will have generated 100 images or more, because you just iterate a lot.

  • And you can just go to midjourney.com, log in with your Discord account, and then you're finally free of Discord.

  • You don't have to use Discord anymore to generate.

  • You can just imagine stuff up here.

  • Of course, we do a quick cat with a hat and it just creates it in here.

  • You don't need Discord anymore.

  • You also get to explore as before, but the main thing is that you can generate images in here and also refine them.

  • Now look, some of the advanced features that are available in Discord, like in-painting, you don't have here, but you have a lot.

  • You can use all the flags.

  • You can use all the different features that Midjourney offers in a web interface, which I think is fantastic.

  • So if you're a Midjourney user and you didn't have this yet because you hadn't done 1,000 images or more, there you go.

  • Midjourney is obviously paid, but it is the best image generator as of now.

  • I think most people would agree on that.

  • Okay, and then I have one more thing that I want to mention.

  • And this is kind of like a future use case.

  • And I like to do this in the end.

  • And this is essentially a little discussion around the Rabbit R1.

  • If you're not familiar, it's a little orange square that runs very basic software where you can basically ask it questions and it answers.

  • So it's like an LLM with voice input and output.

  • And you can also use a basic camera to take pictures and prompt on top of those.

  • Now, if you're familiar with this, you'll also be familiar with the fact that again, the reviews have not been so great as this came out, just like with the Humane AI pin that came out a few weeks ago.

  • But this is the interesting part.

  • People that took it apart found out that it actually runs on an Android operating system, which means you can really easily run the entire software on an Android phone.

  • And that's exactly what they did right here.

  • All the features, like shaking to return and using the side buttons, port directly over.

  • And I guess my point here is the one that has been made on Twitter a lot, which is that this $200 device could have easily been a $2 app on a phone.

  • Nevertheless, I think devices like this have a place in the marketplace.

  • At $200, it's semi-reasonable.

  • If you want to get your hands on AI and like use it in your everyday routine, hey, it might just be worth it.

  • You be the judge of that.

  • But I think this is interesting because it paints a picture of the future.

  • And the picture of the future is that we're going to get these apps that are AI powered.

  • I think they're going to come out of Apple or Google.

  • And you bet I'll be here covering all those use cases, whether it's on iOS or Android.

  • I think the future is bright.

  • All software is going to be enhanced by this.

  • And for the foreseeable future, the consumers are going to be getting the upside here.

  • We're going to be getting quicker apps that can do more, but you do need a little bit of skill to operate them, right?

  • You do need to have certain communication skills to talk to these large language models, aka AIs.

  • People call it prompt engineering, but really it's just the art of asking relevant questions.

  • And if a device like the Rabbit helps you do that more on a daily basis, get a little bit of practice in, then great, because it prepares you for a future where knowing yourself, having confidence in your taste, and having refined communication skills will be an advantage like no other.

  • Anyway, I hope these weekly videos help you out.

  • If you enjoy this and you want to check out more, feel free to check out the full playlist.

  • I do these every single week and the apps from a few weeks back are equally as good today as they were back then.

  • Maybe they're even better at this point.

  • All right, that's really all I got for today.

  • Thank you so much for hanging around all the way till the end.

  • And I'll see you next Friday.
