8K HEVC on mainstream consumer GPUs


Finally, while NVIDIA only briefly touched upon the subject, we do know that their video encoder block, NVENC, has been updated for Turing. The latest iteration of NVENC specifically adds support for 8K HEVC encoding. Meanwhile NVIDIA has also been able to further tune the quality of their encoder, allowing them to achieve similar quality as before with a 25% lower video bitrate.
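To put that 25% figure in concrete terms, here is a minimal sketch (the helper name and the 40 Mbps example are mine, not from NVIDIA) of what "similar quality at a 25% lower bitrate" means for a stream's bandwidth budget:

```python
# Hypothetical helper: estimate the bitrate Turing's NVENC would need to
# match Pascal-era quality, assuming the ~25% reduction NVIDIA claims.
def turing_equivalent_bitrate(pascal_bitrate_mbps: float,
                              reduction: float = 0.25) -> float:
    """Bitrate (Mbps) needed on Turing for similar quality."""
    return pascal_bitrate_mbps * (1.0 - reduction)

# A hypothetical stream that needed 40 Mbps on Pascal would need
# about 30 Mbps on Turing at comparable quality.
print(turing_equivalent_bitrate(40.0))  # 30.0
```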


Way back in the early 2000s I remember doing basic ray tracing on my personal PC, and it was very, very, very slooooow. It's cool that there are now mainstream consumer cards specifically targeted at doing this, and in real time! Astonishing.





Graphics processing units, or GPUs, have been improving since 1990 at 10 times the rate of Moore's Law (which predicts a doubling every two years), Huang said. Relying on Moore's Law advances alone, it would take another 10 years to get from teraflops to petaflops of performance.

“We didn’t want to wait that long, so we invented the Nvidia RTX,” Huang said. “Ray tracing on a brute force level won’t get us there unless we use this technology [the industry] discovered six years ago called deep learning.”

Nvidia is leveraging deep learning artificial intelligence in its RTX series, and Huang said the most powerful lever the company can use in speeding up Moore’s Law is architecture, or the underlying design of its graphics chips. Nvidia also worked with Microsoft, Industrial Light & Magic, and Epic Games to integrate RTX into their tech and create a real-time ray tracing demo, the Star Wars Stormtroopers.

The demo has reflections of reflections and soft shadows, and it requires a massive amount of graphics processing. It took four Tesla V100 graphics cards in a $68,000 supercomputer (or 3,000 easy payments of $19.95, Huang said), the Nvidia DGX, to do the real-time ray tracing demo at 20-something frames per second, Huang said.

A Turing-based chip can do the same thing, Huang said. There are 18.9 billion transistors in Nvidia's first Turing chip, which combines a few types of processors to enable hybrid rendering. It can do in 45 milliseconds what the prior generation, Pascal, could do in 308 milliseconds.
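Those two frame times imply the generational speedup directly; a quick check of the quoted numbers:

```python
# Frame times quoted for the same ray-traced workload.
pascal_ms = 308.0  # Pascal generation
turing_ms = 45.0   # first Turing chip

# Speedup is simply the ratio of frame times: about 6.8x.
speedup = pascal_ms / turing_ms
print(f"Turing speedup over Pascal: {speedup:.1f}x")
```

So the roughly 7x jump comes straight from the 308 ms vs. 45 ms figures, not from a separate benchmark.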

“It took us 10 years to do this,” Huang said.

Turing packs three different kinds of subprocessors that can do a lot of work in parallel: Turing SMs, Tensor Cores, and RT Cores. The RT Cores alone can do 10 Giga Rays of processing per second, 10 times an older Nvidia GeForce 1080 Ti chip. The Tensor Cores can do 110 teraflops of processing, once again 10 times the performance of a 1080 Ti.




But will it play Crysis?
