In computer games and animated movies, we have a 3D scene with objects and materials, and we wish to see an image of how it would look in reality. That is called rendering. One of my favorite ways to do rendering is ray tracing, an incredible technique that simulates how light interacts with the scene and gives you these beautiful results. That was rendering: a scene goes in, an image comes out.

But now, imagine the opposite. What is this crazy talk? Well, hear me out. Imagine that you have images, and you want to recover the scene behind them, a digital 3D model of the scene. An image goes in, and a scene comes out. And imagine how cool that would be: when you want to create a video game, you just make an image, and out comes the scene for the game. However, not so fast.

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Here, the question is: what is the geometry, what materials are the objects made of, what is the lighting, everything? That needs an expert and many, many work hours. Here you see Andrew Price assembling such a scene in Blender, a 3D editor program. Imagine that you have a photo of the result; then, you have to do all this sculpting, assigning materials, lighting, and more. And then, when you are done, rendering. But at this point, you just think that you are done, because your result is obviously not the same as the target image, so you have to play with the geometry, lighting, and materials, and render the image again. This takes forever. Hours. Days. Even weeks, depending on the complexity of the scene. And it is super challenging.

So, we look at a photo and build a scene by hand. I imagine that in a science fiction world, we could, perhaps, have an amazing algorithm that could do this automatically. Let's call this inverse rendering. How would that even work? A small sketch of the idea follows below. Well, as some of you Fellow Scholars know, there are previous works that do this, and they are absolutely amazing. You see how these beautiful pieces of geometry grow out of nothing. You just specify what you want with a 2D image, and it creates a 3D model. And we are talking detailed 3D models here. Super incredible. But this is just 3D modeling. What about materials?

And now, have a look at this research paper from the University of California, Irvine and NVIDIA. It can do some amazing things. First, if we put a few light sources on a painting, it can reconstruct the painting itself. It is similarly good at reconstructing the material of this object too, if it can take a look at a set of images of it.

But here come my favorites. Here is a tree. Now, hold on to your papers, Fellow Scholars, because the task is super challenging. What you get is not even a view of the plant. No, no, you get a look at just the shadow of the plant. Now, figure out what the plant looks like. Previous techniques couldn't do it; they were just trying and trying, and not getting anywhere. But the new method first tries to sculpt the object in different possible ways to match its shadow (a toy version of this is also sketched below), and as it does that, you can see its current guess for the geometry of the tree itself. So, can it do it? It can! Wow! Absolutely amazing! And all this from just seeing a shadow. The processes that you see here are sped up, and this one took only 16 minutes. I think, for a human being, doing this sounds almost impossible in any amount of time, let alone in a few minutes. Incredible! Now, we have two more really cool tests for it, and then, a huge surprise!
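To make the "play with the geometry, lighting, and materials, and render again" loop concrete, here is a minimal sketch of inverse rendering as analysis by synthesis: guess the scene parameters, render, compare to the target image, adjust. Everything here is an assumption for illustration; the toy_render function is a stand-in Lambertian sphere shader, not the paper's renderer, and real systems use a differentiable ray tracer to get gradients analytically instead of finite differences.

```python
# A minimal sketch of inverse rendering as "analysis by synthesis".
# toy_render is a hypothetical stand-in renderer, not the paper's.
import numpy as np

def toy_render(albedo, light_dir):
    """Shade a fixed sphere with the given (3,) RGB Lambertian albedo."""
    h, w = 32, 32
    ys, xs = np.mgrid[-1:1:h*1j, -1:1:w*1j]
    mask = xs**2 + ys**2 < 1.0                       # sphere silhouette
    zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0, 1))
    normals = np.stack([xs, ys, zs], axis=-1)
    shade = np.clip(normals @ light_dir, 0, 1) * mask
    return shade[..., None] * albedo                 # (h, w, 3) image

def loss(albedo, target, light_dir):
    return np.mean((toy_render(albedo, light_dir) - target) ** 2)

light = np.array([0.3, 0.3, 0.9]); light /= np.linalg.norm(light)
true_albedo = np.array([0.8, 0.2, 0.1])
target = toy_render(true_albedo, light)              # the "photo" to match

guess = np.array([0.5, 0.5, 0.5])                    # initial scene guess
lr, eps = 2.0, 1e-4
for step in range(200):
    # Finite-difference gradient; a differentiable renderer would give
    # this analytically, which is part of what makes real methods fast.
    grad = np.zeros_like(guess)
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        grad[i] = (loss(guess + d, target, light) -
                   loss(guess - d, target, light)) / (2 * eps)
    guess -= lr * grad

print("recovered albedo:", np.round(guess, 3))       # ~ [0.8, 0.2, 0.1]
```

The same guess-render-compare structure applies whether the unknowns are materials, lighting, or geometry; only the parameterization and the renderer change.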
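And here is an equally toy version of the shadow-sculpting idea: a 2D occupancy grid, one directional light assumed to shine straight down, and random flips that are kept only if the shadow match does not get worse. This cartoon only shows the match-the-shadow objective; the real method optimizes a full 3D shape with gradients.

```python
# A toy, discrete version of "sculpt until the shadow matches".
# Assumptions: 2D occupancy grid, orthographic top-down light, so the
# shadow is simply the set of occupied columns.
import numpy as np

def shadow_of(grid):
    """Columns containing at least one occupied cell cast shadow."""
    return grid.any(axis=0).astype(float)

rng = np.random.default_rng(0)
H, W = 16, 24
target_shadow = np.zeros(W); target_shadow[6:18] = 1.0  # observed shadow

grid = rng.random((H, W)) < 0.5                      # random initial guess
def mismatch(g): return np.abs(shadow_of(g) - target_shadow).sum()

for step in range(20000):
    before = mismatch(grid)
    i, j = rng.integers(H), rng.integers(W)
    grid[i, j] ^= True                               # try flipping a cell
    after = mismatch(grid)
    # Keep flips that improve the shadow match; on ties, prefer removing
    # material so the guess gets carved down to a simple shape.
    if after > before or (after == before and grid[i, j]):
        grid[i, j] ^= True                           # revert the flip

print("shadow mismatch:", mismatch(grid))            # 0.0 when matched
print("occupied cells :", int(grid.sum()))           # the carved guess
```

Note that many different shapes cast the same shadow, which is exactly why the problem is so hard and why the method's morphing guesses are so fun to watch.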
Here is one more case where we reconstruct an octagon by merely looking at its shadow again. This one checks out. And now, we have a world map relief in a large room with a window. It gets to look at some images of it, but ultimately, what it needs to reconstruct is the bumps in the geometry that create this effect. And I love seeing how it morphs this solution into being. So cool!

And here comes the best part: this technique is up to a hundred times faster than its predecessors. A hundred X! That is an incredible leap forward in just one research paper. And since reconstructing a scene takes from 12 minutes for the quickest one up to about 2 hours, that can already be useful for some tasks. But just imagine what we will be capable of two more papers down the line! My goodness! So, we don't need to live in a science fiction world for a technique like this; it is right here! I am getting goosebumps. And it is clearly not as good as Andrew, no one is, but it is an amazing step forward in creating virtual worlds, maybe even video games, from just an image or a drawing. Don't forget, scientists at Google DeepMind are already working on the video game part. And good news: the source code is also available. So we get all this knowledge for free! I don't know if I told you before, but I love research papers.

And if you wish to run your own experiments in the cloud, make sure to check out Microsoft Azure AI. Azure AI is a powerful cloud platform that offers you the best tools for your AI projects, with responsible AI built in. And here comes the best part: you can even try it out for free through the link in the video description.