
  • So the example I'm going to try out is almost a code-cracking exercise: a badly corrupted Korean sentence.

  • So here I pasted in the prompt, and I'm asking the model to translate this badly corrupted Korean sentence to English.

  • And as you can see, this is not a valid Korean sentence.

  • So let's start with the existing model, GPT-4o, and see how it does.

  • The model is just not able to understand this text, which is a reasonable response, because this is not valid Korean.

  • So what's happening here?

  • So Korean is an interesting language in that when we form a character, we combine vowels and consonants, sometimes putting a consonant at the bottom, and so on.

  • One way to corrupt a character is to add some extra, unnecessary consonants to it.

  • That combination looks so unnatural to native speakers that, when they see it, they can automatically undo the change and understand the text.

  • So this is character-level corruption.

  • We can do that at the phrase level, we can also do that at the sound level, and so on.

  • People have come up with various methods like this, and I found it really interesting, so I adopted a few of them to create this example.
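
To make the character-level trick concrete, here is a minimal sketch in Python. This is not the exact scheme used in the demo, just one hypothetical variant: a precomposed Hangul syllable is encoded as 0xAC00 + (lead * 21 + vowel) * 28 + final, so appending an unnecessary final consonant (batchim) to syllables that have none produces text that looks garbled on the surface but that a Korean reader can mentally undo from context.

```python
# A hypothetical corruption sketch, not the presenter's actual method:
# bolt an extra final consonant onto syllables that have none.
import unicodedata

S_BASE = 0xAC00                 # first precomposed Hangul syllable
T_COUNT = 28                    # 27 possible finals plus "no final"
S_COUNT = 19 * 21 * 28          # 11,172 precomposed syllables in total

def corrupt(text, extra_final=18):          # 18 = final 'ㅄ', chosen arbitrarily
    out = []
    for ch in text:
        idx = ord(ch) - S_BASE
        if 0 <= idx < S_COUNT and idx % T_COUNT == 0:   # syllable with no final
            out.append(chr(ord(ch) + extra_final))      # add an unneeded final
        else:
            out.append(ch)                              # leave everything else alone
    return "".join(out)

garbled = corrupt("안녕하세요")
print(garbled)                             # -> 안녕핪셊욦 (garbled but guessable)
print(unicodedata.normalize("NFD", "핪"))  # -> the jamo pieces ᄒ + ᅡ + ᆹ
```

Undoing the change is easy for a human who can tell which finals look out of place, but there is no purely mechanical inverse: the reader, or the model, has to reason about which consonants belong and which were added, which is exactly what makes this feel like code cracking.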

  • So if you understand Korean, you can read off this part that I'm highlighting now. I'm not going to read off the whole thing, but this is the idea.

  • Koreans can read it, but the models find it so difficult to understand.

  • So now let's go on to our new model, o1-preview, and see if reasoning can help solve this problem.

  • So I typed in the same thing.

  • Unlike GPT-4o, this model starts thinking through this problem before outputting the answer.

  • So it's decoding the garbled text.

  • So that's actually the right approach, because I gave it a translation task, but the underlying task is actually decoding the text.

  • So it started off on the right path, and then it says it is examining the text and deciphering the text.

  • Deciphering is actually the right verb to use here; then it moves on to enhancing the translation.

  • And then actually, it starts unpacking some part of it.

  • So here, and so on.

  • This is already a decrypted part of the text.

  • And then once the model figures out that part, everything else is easy enough.

  • So it does the other sentence too.

  • And so let's close this thought.

  • So it thought for 15 seconds.

  • The final translation the model outputs is: No translator on earth can do this, but Koreans can easily recognize it.

  • There's a method of encrypting Hangul by inputting various transformations of vowels and consonants.

  • It creates a way to make it look different on the surface.

  • It can even confuse AI models.

  • I think this is a perfect translation of the sentence.

  • So this illustrates how general-purpose reasoning models like o1-preview can help with seemingly unrelated questions like this one, which is almost like code cracking.

  • So I hope this illustrates how reasoning can be a powerful tool for solving your problems.
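
For anyone who wants to try the same comparison outside the ChatGPT interface, here is a rough sketch using the OpenAI Python SDK. The demo above was run in the UI, and the prompt text below is only a placeholder; the corrupted sentence itself is not reproduced here.

```python
# A rough sketch of reproducing the GPT-4o vs. o1-preview comparison via the API.
# Assumes OPENAI_API_KEY is set in the environment; the prompt is a placeholder.
from openai import OpenAI

client = OpenAI()

prompt = "Translate this badly corrupted Korean sentence to English: <corrupted text>"

for model in ["gpt-4o", "o1-preview"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```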
