Nvidia: Photorealistic VR Around the Corner
Generating lifelike visuals on any system is an extremely difficult task. And unfortunately, lackluster visuals are one of the existing barriers to a truly immersive VR experience.
Sure, there is some promise in global illumination algorithms (a group of algorithms used in 3D computer graphics that add more realistic lighting to 3D scenes). But the best of them are very computationally expensive.
Some would say today’s graphics engines just can’t handle the requirements...
But Nvidia disagrees. They recently published some exciting updates that made VR enthusiasts giddy.
First, Nvidia announced their new Titan RTX graphics card. This card is the latest to use their Turing architecture.
The Turing architecture is notable because it's optimized for fast, realistic graphics. Standard graphics cards have thousands of cores, which lets them divide and conquer highly parallel problems, like rendering graphics, where each pixel can be computed independently. The Turing architecture goes further by adding specialized core types for specific mathematical workloads.
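To see why rendering parallelizes so well, here is a minimal sketch (with a hypothetical toy shader, not any real graphics API) showing that each pixel is computed independently of every other pixel, which is exactly the property a GPU's thousands of cores exploit:

```python
def shade_pixel(x, y, width, height):
    """Compute one pixel's brightness independently of all others.
    This independence is what makes rendering 'embarrassingly parallel'."""
    # Toy shader: a simple horizontal brightness gradient.
    return int(255 * x / (width - 1))

def render(width, height):
    # On a CPU this loop runs one pixel at a time; a GPU instead hands
    # each pixel (or small tile of pixels) to one of its many cores.
    return [[shade_pixel(x, y, width, height) for x in range(width)]
            for y in range(height)]

image = render(4, 2)
```

Because `shade_pixel` reads no shared state, the pixels could be computed in any order, or all at once.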
The Titan RTX has 72 RT Cores and 576 Tensor Cores. RT Cores are optimized for ray tracing, a computationally expensive technique that produces lifelike visuals by simulating how rays of light travel through a scene.
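The heart of ray tracing is testing whether a ray hits an object. Here is a minimal sketch of the classic ray-sphere intersection test, the kind of calculation an RT Core accelerates billions of times per frame (the function and scene here are illustrative, not Nvidia's actual implementation):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit,
    or None on a miss. Solves |origin + t*direction - center|^2 = r^2
    for t, a quadratic in t."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray fired straight down the z-axis at a unit sphere 5 units away
# hits its near surface at distance 4.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1)
```

A full ray tracer repeats this test against every object in the scene for every ray, then spawns more rays for reflections and shadows, which is why dedicated hardware matters.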
The Tensor Cores are optimized for 4x4 matrix multiply-accumulate operations, a key calculation in many deep learning algorithms. For reference, deep learning models are loosely inspired by the information processing found in biological nervous systems, like your brain.
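The operation a Tensor Core performs in a single step is D = A × B + C on small matrices. Here is a plain-Python sketch of that fused multiply-accumulate on 4x4 matrices, just to make the math concrete (real hardware does this in one clock for mixed-precision inputs):

```python
def fma_4x4(a, b, c):
    """Fused multiply-accumulate D = A @ B + C on 4x4 matrices,
    the primitive operation a Tensor Core computes in one step."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) + c[i][j]
             for j in range(4)]
            for i in range(4)]
```

Deep learning workloads decompose their huge matrix multiplications into tiles of exactly this shape, which is why hundreds of these cores translate directly into training and inference speed.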
The Tensor Cores can also support graphics rendering by training the system to fill in the gaps of images that were partially rendered. But that’s not all, because in the same week, Nvidia announced they had created the first AI capable of rendering an interactive virtual world.
Since this AI was trained on real-world footage, the graphics it generates look realistic, with the exception of some smearing, which is common with today's image-generating AIs.
Nvidia used the Unreal Engine 4 to create a bare-bones 3D environment. Then, in real time, the AI could paint over the models to add color—and life—to them.
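Conceptually, the engine hands the AI a labeled sketch of the scene (this region is road, this region is car) and the network fills in realistic detail. Here is a toy stand-in for that pipeline, where a hard-coded color palette substitutes for the neural network; the label names and palette are invented for illustration and are not Nvidia's actual data:

```python
# Hypothetical label IDs for objects in the bare-bones 3D scene.
ROAD, CAR, SKY = 0, 1, 2

# Where Nvidia's network synthesizes photorealistic texture for each
# labeled region, this toy stand-in just looks up a flat RGB color.
PALETTE = {ROAD: (60, 60, 60), CAR: (180, 20, 20), SKY: (120, 180, 255)}

def paint(label_map):
    """Turn a 2D grid of semantic labels into an RGB image."""
    return [[PALETTE[label] for label in row] for row in label_map]

frame = paint([[SKY, SKY],
               [ROAD, CAR]])
```

The point of the sketch is the division of labor: the game engine only has to say *what* is where, and the learned model decides what it should *look* like.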
Letting the neural network produce the detailed graphics is much more efficient than today’s methods of graphics generation.
Not only is time saved in rendering; time is also saved in creating the 3D models. Virtual reality and graphics content creators could see improvements in both development time and project costs. The lack of content is a big hurdle for the extended reality industry, so this is huge news.
With photorealism being tackled from both sides, ray tracing on one front and deep neural networks on the other, truly immersive experiences are within reach. And with the software and hardware improving every day, photorealism may not be too far away.