
How Important is 10-Bit Really?


Mark Romero 2


20 minutes ago, cantsin said:

I found exactly three references for this equation, all in camera forums, and all posted by a forum member called tupp...

But seriously, I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image. Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

In digital photography and video, however, the number of possible color values is predetermined through the color matrix for each pixel. Therefore, in digital imaging, color depth = bit depth.

 

And this is where my creative brain sucker punches my technical brain.


57 minutes ago, cantsin said:

I found exactly three references for this equation, all in camera forums, and all posted by a forum member called tupp...

So what?

 

 

57 minutes ago, cantsin said:

But seriously, I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image. Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

It works the same with digital imaging. However, in both realms (analog and digital), the absolute color depth of an image is a property of the entire image.

 

I will try to demonstrate how color depth increases in digital imaging as resolution is increased. Consider a single RGB pixel group of size "X," positioned at a distance at which the red, green, and blue pixels blend together and cannot be discerned separately by the viewer. This RGB pixel group employs a bit depth capable of producing "Y" colors.

 

Now, keeping the same viewing distance and the same bit depth, what if we squeezed two RGB pixel groups into the space of size "X"? Would the viewer still see only "Y" colors -- the same number as the single pixel group that previously filled size "X" -- or would the (slightly) differing shades/colors of the two RGB pixel groups blend to create more colors?

 

What if we fit 4 RGB pixel groups into space "X"? ... or 8 RGB pixel groups into space "X"?
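
To put rough numbers on the thought experiment, here is a minimal sketch (my own illustration, not a quote of anyone's formula). It assumes the eye simply averages two blended 8-bit values and ignores viewing distance, gamma and chroma:

```python
import numpy as np

# One 8-bit channel can represent 2^8 = 256 distinct values.
single_pixel_values = 2 ** 8

# If two adjacent 8-bit pixels blend in the viewer's eye, model the perceived
# value as their average and count the distinct averages that can occur.
levels = np.arange(256)
pair_averages = (levels[:, None] + levels[None, :]) / 2.0
blended_values = np.unique(pair_averages).size

print(f"one pixel:          {single_pixel_values} distinct values")
print(f"two blended pixels: {blended_values} distinct values")   # 511 half-steps
```

Under that simple averaging assumption, the blended pair can land on 511 distinct values instead of 256; with full RGB pixel groups the combinations grow accordingly.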

 

 

57 minutes ago, cantsin said:

In digital photography and video, however, the number of possible color values is predetermined through the color matrix for each pixel. Therefore, in digital imaging, color depth = bit depth.

As I think I have shown above, resolution plays a fundamental role in digital color depth.

 

Resolution is, in fact, a factor weighted equally with bit depth in determining digital color depth. I would be happy to explain this further if you accept the above example.


You talk about perceptual color depth, created through dithering, not technical color depth. And even that can't be measured by your formula, because it doesn't factor in viewing distance.

Or to phrase it less politely: this is bogus science.


5 hours ago, cantsin said:

You talk about perceptual color depth,

No.  I am referring to the actual color depth inherent in an image (or imaging system).

 

 

5 hours ago, cantsin said:

created through dithering,

I never mentioned dithering. Dithering is the deliberate addition of noise to parts of an image, or the re-patterning of areas of an image, typically done to mask quantization artifacts.

 

The viewer's eye blending adjacent colors in a given image is not dithering.
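
For reference, here is a minimal sketch of dithering in that sense: noise is deliberately added before quantization so that coarse levels are broken up into a fine pattern. The gradient, bit depth and noise amplitude are arbitrary illustration choices, not anyone's actual workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth horizontal gradient in [0, 1].
gradient = np.tile(np.linspace(0.0, 1.0, 1024), (256, 1))

def quantize(img, bits):
    """Round an image in [0, 1] onto 2^bits - 1 evenly spaced levels."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

# Plain quantization to 4 bits: visible banding.
banded = quantize(gradient, bits=4)

# Dithering: add small random noise *before* quantizing, trading the
# banding for fine-grained noise that the eye averages out.
noise = (rng.random(gradient.shape) - 0.5) / (2 ** 4 - 1)
dithered = quantize(np.clip(gradient + noise, 0.0, 1.0), bits=4)

print("levels used, plain:   ", np.unique(banded).size)    # 16
print("levels used, dithered:", np.unique(dithered).size)  # still 16, but scrambled spatially
```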

 

 

5 hours ago, cantsin said:

And even that can't be measured by your formula, because it doesn't factor in viewing distance.

Again, I am not talking about dithering -- I am talking about the actual color depth of an image.


The formula doesn't require viewing distance because it does not involve perception.  It gives an absolute value of color depth inherent in an entire image.  Furthermore, the formula and the point of my example are two different things.


By the way, the formula can also be applied to smaller, local areas of images to compare their relative color depth, but the areas must be of proportionally identical size for such a comparison to be valid.

 

 

5 hours ago, cantsin said:

Or to phrase it less politely: this is bogus science.

What I assert is perfectly valid and fundamental to imaging.  The formula is also very simple, straightforward math.

 

However, let's forget the formula for a moment.  You apparently admit that resolution affects color depth in analog imaging:

6 hours ago, cantsin said:

I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image.  Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

I am not sure why the same principle would fail to apply to digital imaging. Your suggestion that the "non-discrete color values" of analog imaging necessitate measuring the color of each part of an image to determine color depth does not negate the fact that the same process works with a digital image.

 

I gave the two-RGB-pixel example merely to show, in a basic way, that an increase in resolution brings an increase in digital color depth (just as it does with an analog image). Once one grasps that rudimentary concept, it is fairly easy to see how the formula simply quantifies digital RGB color depth.

 

In a subsequent post, I'll give a different example that should demonstrate the strong influence of resolution on color depth.


 

 

Here is a 1-bit image (in an 8-bit PNG container):

[Image: radial_halftone.png]

 

If you download this image and zoom into it, you will see that it consists only of black and white dots -- no grey shades (except for a few unavoidable PNG artifacts near the center).  It is essentially a 1-bit image.

 

If you zoom out gradually, you should at some point be able to eliminate most of the moire and see a continuous gradation from black to white.

 

Now, the question is:  how can a 1-bit image consisting of only black and white dots exhibit continuous gradation from black to white?

 

The answer is resolution. If an image has fine enough resolution, it can produce zillions of shades, which, of course, readily applies to each color channel of an RGB digital image.

 

So, resolution is integral to color depth.
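
For anyone who wants to reproduce the effect without downloading the PNG, here is a minimal sketch that substitutes an ordered (Bayer) halftone for the radial halftone above: the image is strictly 1-bit, yet block-averaging it (i.e. viewing it at lower effective resolution) yields many intermediate grey levels.

```python
import numpy as np

# 4x4 Bayer ordered-dither matrix, normalized to thresholds in (0, 1).
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

# A smooth horizontal gradient.
h, w = 512, 512
gradient = np.tile(np.linspace(0.0, 1.0, w), (h, 1))

# Threshold against the tiled Bayer matrix: the result is strictly 1-bit.
thresholds = np.tile(bayer4, (h // 4, w // 4))
one_bit = (gradient > thresholds).astype(float)
print("distinct values in the 1-bit image:", np.unique(one_bit).size)    # 2

# "Zoom out": average 8x8 blocks, emulating the eye blending dots at a distance.
block = 8
small = one_bit.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
print("distinct grey levels after downscaling:", np.unique(small).size)  # many shades
```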


Actually, if well exposed, you can do wonderful things with 8-bit, if you choose the right software.

But if you shoot log/HLG or plan to do a lot of grading/keying/VFX inserts, and don't want any banding or artifacts, 10-bit is necessary for perfect results.

That being said, "perfect" depends a lot on your sensitivity, your eyes, and of course, the broadcast medium.

Since I have a GH5 with its 10-bit, my videos are not fundamentally better, but I'm more satisfied with what I can do in post-production :)


1 hour ago, Alex Uzan said:

Actually, if well exposed, you can do wonderful things with 8-bit, if you choose the right software.

But if you shoot log/HLG or plan to do a lot of grading/keying/VFX inserts, and don't want any banding or artifacts, 10-bit is necessary for perfect results.

That being said, "perfect" depends a lot on your sensitivity, your eyes, and of course, the broadcast medium.

Since I have a GH5 with its 10-bit, my videos are not fundamentally better, but I'm more satisfied with what I can do in post-production :)

8-bit has very little bandwidth to be pushed around in the grade. 8-bit cameras are set up by the manufacturer, through internal testing, to capture the optimum image gradation for their respective sensor encoding. Just because an 8-bit camera offers log or exposure tools doesn't mean that the image is that malleable.

The 8-bit HEVC images coming out of my NX1 are vibrant and brilliant stock, and I notice very few artifacts as is. But I have little room to push channels in Lumetri before I break it and see artifacts, banding, macroblocking, noise, etc.

Thankfully, Neat Video comes to the rescue and does a stellar job of cleaning up that mess, which is exactly what it was designed for. When I then view those results on my 10-bit monitor, the image holds up rather nicely.

For those with 8-bit systems, don't feel discouraged or left out of the conversation; you have viable options.


20 hours ago, Matthew Hartman said:

8-bit has very little bandwidth to be pushed around in the grade. 8-bit cameras are set up by the manufacturer, through internal testing, to capture the optimum image gradation for their respective sensor encoding. Just because an 8-bit camera offers log or exposure tools doesn't mean that the image is that malleable.

The 8-bit HEVC images coming out of my NX1 are vibrant and brilliant stock, and I notice very few artifacts as is. But I have little room to push channels in Lumetri before I break it and see artifacts, banding, macroblocking, noise, etc.

Barring compression variables, the "bandwidth" of 8-bit (or any other bit depth) is largely determined by resolution. Banding artifacts that are sometimes encountered when pushing 8-bit files (uncompressed) are not due to lack of "bandwidth" per se, but result from the reduced number of incremental thresholds (quantization levels) into which all of the image information must fit.
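
A toy example of the "incremental thresholds" point (my own sketch, not the formula under discussion): quantize the same noiseless ramp to 8-bit and 10-bit, apply an aggressive contrast push, and compare the size of the largest step left between neighbouring output values.

```python
import numpy as np

# A smooth ramp in [0, 1], as if it came from a noiseless, uncompressed source.
ramp = np.linspace(0.0, 1.0, 4096)

def quantize(x, bits):
    """Round values in [0, 1] onto 2^bits - 1 evenly spaced levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def push_contrast(x, gain=4.0, pivot=0.5):
    """A crude grading move: steepen the curve around a mid-grey pivot."""
    return np.clip((x - pivot) * gain + pivot, 0.0, 1.0)

for bits in (8, 10):
    graded = push_contrast(quantize(ramp, bits))
    # The largest jump between neighbouring output values is a rough proxy
    # for how visible banding will be after the push.
    largest_step = np.diff(np.unique(graded)).max()
    print(f"{bits}-bit: largest step after grade = {largest_step:.5f}")
```

After the same push, the 8-bit ramp is left with steps roughly four times larger than the 10-bit ramp, and that gap is what reads as banding.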

 

 


2 hours ago, tupp said:

Barring compression variables, the "bandwidth" of 8-bit (or any other bit depth) is largely determined by resolution. Banding artifacts that are sometimes encountered when pushing 8-bit files (uncompressed) are not due to lack of "bandwidth" per se, but result from the reduced number of incremental thresholds (quantization levels) into which all of the image information must fit.

 

 

One more time in English. :grin:


10 hours ago, Matthew Hartman said:

One more time in English. :grin:

Ha, ha!

 

If you start out with an uncompressed file with negligible noise, the only thing other than bit depth that determines the "bandwidth" is resolution.  Of course, if the curve/tone-mapping is too contrasty in such a file, there is less room to push grading, but the bandwidth is there, nevertheless.

 

Bandwidth probably isn't the reason that 8-bit produces banding more often than 10-bit, because some 8-bit files have more bandwidth than some 10-bit files.
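
For instance, if "bandwidth" is read as raw, uncompressed data per frame (my assumption about the intended meaning, ignoring chroma subsampling and compression), a UHD 8-bit frame carries more bits per channel than an HD 10-bit frame:

```python
# Raw bits per colour channel per frame, ignoring chroma subsampling and compression.
uhd_8bit = 3840 * 2160 * 8    # UHD frame at 8 bits per sample
hd_10bit = 1920 * 1080 * 10   # HD frame at 10 bits per sample

print(f"UHD 8-bit : {uhd_8bit / 1e6:.1f} Mbit per channel per frame")   # ~66.4
print(f"HD 10-bit : {hd_10bit / 1e6:.1f} Mbit per channel per frame")   # ~20.7
```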

 

 


On 27/2/2018 at 7:24 PM, Deadcode said:

Dave Dugdale proved to me with this video that 10-bit is fairy dust if you are a run-and-gun shooter, a wedding cinematographer, or in love with architecture cinematography.

I have never faced banding issues with an 8-bit codec, and if you shoot with correct WB you don't have to twist your footage till it breaks. And if it breaks, it's probably because of the 100 Mbps bitrate and not the 8-bit...

I think in most cases 10-bit gives you just a little more advantage over 8-bit if you nail your exposure and WB.

And I have worked with the BMPCC in ProRes/RAW and with 5D RAW before.

It seems like the next big thing is shooting HLG in 8-bit, which works perfectly fine.

I'm sorry, what is HLG? Thx


Since Deadcode revised the standard for HLG from 10-bit to 8-bit, it must be fine. After all, he says so. Never mind that every single authority without exception says otherwise. And I’m sure he’s also uploaded countless HLG HDR videos online.


1 hour ago, jonpais said:

Since Deadcode revised the standard for HLG from 10-bit to 8-bit, it must be fine. After all, he says so. Never mind that every single authority without exception says otherwise. And I’m sure he’s also uploaded countless HLG HDR videos online.

Actually, I never said anything even remotely similar :D

Both 8-bit and 10-bit HLG are awesome; finally, those who don't know anything about color grading can have a decent-looking image without effort or knowing how to use their camera! Or something like that.


3 minutes ago, jonpais said:

 

I see your point. HLG can work perfectly fine with an 8-bit codec. Of course 10-bit is better, but do you see the difference in the out-of-camera footage?

I think even on your Shogun you cannot see any difference between 8-bit HLG and 10-bit HLG from the GH5.

And nowadays consumer HDR TVs are around 500-700 nits bright... not 1500.


This article is about TVs, but it has a lot of good info about HDR, and, well, 8-bit ain't cutting it. Sorry.

https://dgit.com/4k-hdr-guide-45905/

And this article is probably more than you want to read, lol. But there's some really good stuff in it. Even I understand a little of it.

http://www.murideo.com/news/what-you-need-to-know-about-uhdtv-hdr-hlg-wcg


21 hours ago, jonpais said:

Panasonic does not even offer 8-bit HLG.

 

15 hours ago, Dan Wake said:

Which cameras offer HLG, 8- or 10-bit? Thx

The Panasonic GH5 can actually shoot 8-bit HLG 4K 30p using a special trick. I think it is a bug that allows 8-bit HLG, but only in 30p.

When I save 4K 60p to C-memory and then select the HEVC record mode, the camera uses H.264 8-bit 4K 30p at 100 Mbps but allows HLG.

