
How much bit depth? (10-bit / 12-bit / 14-bit)


kye

Recommended Posts

5 hours ago, KnightsFan said:

@Deadcode each bit you add doubles the number of possible values. Adding two bits means there are 4 TIMES as many shades. 10 bit has 1/16 the shades vs 14 bit.

If I remember correctly, the lower bit depth raw is developed the way I wrote earlier: the least important bits are chopped off while recording and added back in post processing. Unfortunately, I can't find the discussion about it in the 12-bit (and 10-bit) RAW video development discussion thread; it was probably in its parent topic.



@Deadcode That's correct. 10 and 12 bit simply truncate the lowest 4 or 2 bits in Magic Lantern. Removing each bit divides the number of possible values by two.

(So, to answer the original question, lower bit depths will manifest themselves by crushing the blacks. Though, as has been mentioned, the "blacks" have a low signal to noise ratio anyway).
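To make the truncation concrete, here is a minimal sketch (my own illustration, not Magic Lantern's actual code; the function and variable names are made up) of what dropping the lowest bits of a 14-bit sample does to nearby shadow values:

```python
def truncate_raw(sample_14bit: int, target_bits: int) -> int:
    """Drop the least significant bits of a 14-bit raw sample, then shift
    back up so the value occupies the original range. This mimics, in
    spirit, recording at a lower bit depth and restoring the nominal range
    afterwards; the discarded bits stay zero."""
    shift = 14 - target_bits           # 4 for 10-bit, 2 for 12-bit
    return (sample_14bit >> shift) << shift

# Two deep-shadow values that differ only in their lowest four bits...
a, b = 37, 44
print(truncate_raw(a, 10), truncate_raw(b, 10))  # 32 32 -> the distinction is gone
print(truncate_raw(a, 12), truncate_raw(b, 12))  # 36 44 -> some separation survives
```

That quantisation step is tiny relative to a midtone or highlight value, which is why the damage shows up as crushed or banded shadows rather than lost highlights.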


1 hour ago, KnightsFan said:

@Deadcode That's correct. 10 and 12 bit simply truncate the lowest 4 or 2 bits in Magic Lantern. Removing each bit divides the number of possible values by two.

(So, to answer the original question, lower bit depths will manifest themselves by crushing the blacks. Though, as has been mentioned, the "blacks" have a low signal to noise ratio anyway).

Interesting... I take back what I wrote before.

Though 12 bit might still be okay. The 5D Mark III never seemed to have more than 12 stops of DR to me, noisy shadows etc.


@HockeyFan12 you are most certainly right about that. The shadows are filled with chroma noise. You have to expose as far to the right as possible... hell, you might even have to sacrifice some DR in the highlight range to get better shadow performance! That's why I feel like every bit counts with these things, but you shouldn't see any visible decline in image quality at 12-bit.


@HockeyFan12 I re-read your original post. I agree wholeheartedly that there is no practical difference between 12 and 14 bit color video. However, Raw is different. Essentially, each 14 bit raw sample becomes three separate 8 bit values, to make a 24 bit color pixel. That means that each pixel in the final video has 1024 times as many possible colors as each corresponding raw pixel--and that's when you're outputting to just 8 bit! (I said "essentially" because color pixels are reconstructed based on surrounding pixels as well)

I don't know if it makes a practical difference on a 5D with magic lantern, so you could be completely right about 12 bit being practically equivalent to 14. I haven't had access to a 5D since before the lower bit hacks became available, unfortunately.


 

11 minutes ago, KnightsFan said:

@HockeyFan12 I re-read your original post. I agree wholeheartedly that there is no practical difference between 12 and 14 bit color video. However, Raw is different. Essentially, each 14 bit raw sample becomes three separate 8 bit values, to make a 24 bit color pixel. That means that each pixel in the final video has 1024 times as many possible colors as each corresponding raw pixel--and that's when you're outputting to just 8 bit! (I said "essentially" because color pixels are reconstructed based on surrounding pixels as well)

I don't know if it makes a practical difference on a 5D with magic lantern, so you could be completely right about 12 bit being practically equivalent to 14. I haven't had access to a 5D since before the lower bit hacks became available, unfortunately.

 

Hang on, your maths isn't correct.

When we talk about "8-bit colour" we mean that every pixel has 8 bits of Red, 8 bits of Green and 8 bits of Blue.

If it truly were 8 bits per pixel, then each pixel could only be one of 256 colours, and we'd be back to the early days of colour VGA, and images like the one on the right :)

carousel-colour-depth.jpg
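To put numbers on that (my own arithmetic, not from the post above): three 8-bit channels give 2^24 combinations per pixel, whereas a true 8-bits-per-pixel image only allows 2^8.

```python
# Per-channel vs per-pixel bit depth (illustrative arithmetic only)
bits_per_channel = 8
channels = 3
print(2 ** (bits_per_channel * channels))  # 16,777,216 colours for 8 bits per channel
print(2 ** bits_per_channel)               # 256 colours if the whole pixel had only 8 bits
```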

 


26 minutes ago, KnightsFan said:

I said "three separate 8 bit values, to make a 24 bit color pixel"

I'm not so sure I agree with that, either.

I agree that 12 bit vs 14 bit raster video should be irrelevant... both are more than good enough if the range of recorded values is compressed into that space. Even for HDR it's way more than you need.

And I also agree that if 12 bit raw is cutting out two bits in the shadows rather than compressing the range of values into a smaller space, you're losing information permanently in the shadows, not just losing precision (which matters much less). Of course, the 5D has such noisy shadows (maybe 11.5 stops DR total) that it's possible 12 bit is still effectively identical to 14 bit since those last two bits are entirely noise... but 10 bit would almost definitely imply losing actual shadow detail, probably cutting your dynamic range hard at 10 stops. (Unless the scene is overexposed or ETTR with low scene dynamic range.)

The rest I disagree with. With a Bayer array, each photosite only represents a single R, G, or B value anyway, because it only has one color filter on it. With the 5D, it's a 14 bit grayscale value that's recorded for each pixel. And that 14 bit grayscale value isn't interpolated directly into three 8 bit values for R, G, and B; it can't be, because there's literally only one color represented there, either R, G, or B depending on the color filter.

When that value is transformed into an RGB value in the final image, it's through interpolating the nearby pixels, which have different color filters, and while I'm not sure what the exact algorithm is, it's definitely drawing on multiple pixels, each with 14 bit precision in the case of the 5D. So the loss of color detail there doesn't have to do with bit depth but rather with Bayer interpolation being imprecise. You're losing resolution through Bayer interpolation, but not bit depth. (Which is one reason single-chip sensors don't really have "4:4:4" color...)
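A rough sketch of what that interpolation looks like (my own illustration, using naive bilinear demosaicing rather than whatever ACR or the camera maker actually uses; the RGGB layout and helper names are assumptions): each photosite carries one sample behind one color filter, and the two missing channels at every pixel are estimated from neighbouring photosites, all at the sensor's full precision.

```python
import numpy as np

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """Very naive bilinear demosaic of an RGGB Bayer mosaic.
    'raw' holds one (e.g. 14-bit) sample per photosite; the result is an
    H x W x 3 image. Real converters use far smarter algorithms."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)          # R photosites
    b_mask = (y % 2 == 1) & (x % 2 == 1)          # B photosites
    g_mask = ~(r_mask | b_mask)                   # G photosites

    rgb = np.zeros((h, w, 3), dtype=np.float64)
    for ch, mask in enumerate([r_mask, g_mask, b_mask]):
        plane = np.where(mask, raw, 0.0)          # known samples for this channel
        count = mask.astype(np.float64)
        acc = np.zeros_like(plane)
        cnt = np.zeros_like(plane)
        for dy in (-1, 0, 1):                     # average the 3x3 neighbourhood
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                cnt += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        rgb[..., ch] = acc / np.maximum(cnt, 1)
    return rgb

# Tiny fake 14-bit mosaic, just to show one RGB pixel comes out per photosite.
mosaic = np.random.randint(0, 2**14, size=(4, 4)).astype(np.float64)
print(bilinear_demosaic(mosaic).shape)            # (4, 4, 3)
```

Each interpolated channel is built from full-precision neighbours, so what gets lost is spatial color resolution, not bit depth, which is the point above.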


All that is true. I was responding to your first post, where you said:

"The human eye is estimated to see about 10 million colors and most people can't flawlessly pass color tests online, even though most decent 8 bit or 6 bit FRC monitors can display well over ten million colors: 8 bit color is 16.7 million colors, more than 10 million."

My point was that this reasoning does not apply to raw, and that Raw samples NEED a higher bit depth than each individual color sample in a video file. I was illustrating the point by showing that if you count the bits in a lossless 1920 x 1080 Raw image at 14 bit depth, it will be considerably smaller than a 1920 x 1080 color image at 8 bit.
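For reference, the bit counting works out like this (my own arithmetic, assuming uncompressed frames and ignoring headers):

```python
# Uncompressed per-frame size, raw Bayer vs 8-bit RGB (illustrative only)
width, height = 1920, 1080
raw_bits = width * height * 14       # one 14-bit sample per photosite
rgb_bits = width * height * 3 * 8    # three 8-bit samples per output pixel
print(raw_bits / 8 / 2**20)          # ~3.46 MiB for the raw frame
print(rgb_bits / 8 / 2**20)          # ~5.93 MiB for the 8-bit RGB frame
```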

11 minutes ago, HockeyFan12 said:

But that 14 bit grayscale value isn't interpolated directly into three 8 bit values for R, G, and B; it can't be because there's literally only one color represented there.

True! I was simplifying. But aggregated over the entire image, you still end up with less overall data in the Raw file than in the color file.

12 minutes ago, HockeyFan12 said:

So the loss of color detail there doesn't have to do with bit depth but rather bayer interpolation.

Two ways of saying the same thing, in terms of data rate.

I think we're in agreement for the most part :)


25 minutes ago, KnightsFan said:

All that is true. I was responding to your first post, where you said:

"The human eye is estimated to see about 10 million colors and most people can't flawlessly pass color tests online, even though most decent 8 bit or 6 bit FRC monitors can display well over ten million colors: 8 bit color is 16.7 million colors, more than 10 million."

My point was that this reasoning does not apply to raw, and that Raw samples NEED a higher bit depth than each individual color sample in a video file. I was illustrating the point by showing that if you count the bits in a lossless 1920 x 1080 Raw image at 14 bit depth, it will be considerably smaller than a 1920 x 1080 color image at 8 bit.

True! I was simplifying. But aggregated over the entire image, you still end up with less overall data in the Raw file than in the color file.

Two ways of saying the same thing, in terms of data rate.

I think we're in agreement for the most part :)

I don't agree entirely on the semantics, but I don't understand all the details there anyway, to be fair. On the overall message, I agree.


  • 2 years later...

I know this is an old topic, but I wanted to see for myself the difference that full frame MLV raw video bit depth makes on the 5D2, so here is a quick test I did.

First the variables :

Canon 5D Mark II 2.1.2 w/94k actuations
Canon 35mm F/2 IS @ F/7
VAF 5D2 AA Filter
SanDisk ExtremePro 32Gb UDMA 7 160MB/s CF card

magiclantern-crop_rec-3k_Updated_Center_4.20pm-5D2-eXperimental.2019Nov14.5D2212 from Reddeercity
100 ISO
1866 x 1044 1.00x Crop @ 23.976 fps 1/50s

I shot the same scene with both highlights and shadows clipping so we can have a good sense of dynamic range.

Using the ML spot-meter, the sky is 100%, the wall in-between the two windows is 55% and the towel is 1%.

Here is an example of the ACR settings I used. I only changed two things: Highlights and Shadows.

Example-of-ACR.jpg

First test: ACR settings -100 Highlights & +100 Shadows. This is an extreme test; you can see banding, so this is not usable footage, but it is a good example.

100-HIGH-SHAD.jpg

Second test: ACR settings -70 Highlights & +70 Shadows. This is the furthest I could push the shadows before seeing banding, so I would consider this the limit of usable footage.

70-HIGH-SHAD.jpg

You can click on the images to view them in full resolution. Areas to look at: the electric baseboard, as it shows the color shifting very clearly; the towel shows the banding; and the wall in between the two windows shows the effects on middle exposure (55%).

Conclusion :

I found that the cleanest was the 14-bit (not surprising) compared to the other two. However, this is not a continuous setting on the 5D2, as it starts skipping frames around 24-26 seconds. The 12-bit seems like a good compromise, as I can pretty much get it continuous with sound. It does have a very slight color shift towards green in the -100/+100 test and a bit more banding visible on the towel, but in the -70/+70 test it looks (to my eye) 99% the same as the 14-bit. The 10-bit is the worst of the bunch (again, not surprising), but I was very surprised at the green color shifting in the shadows; it was much more visible than I anticipated. The banding is less severe in the towel, but the green noise takes over that whole area anyway. Overall, the highlight retention of all the bit depths seems to be roughly equal, as I cannot see a difference between them.

Conclusion of the conclusion :

I will be using 12-bit for 95% of shooting scenarios, especially ones where I need takes longer than 24-26 seconds. I'll use 14-bit if I am in a low-light situation where I know I will need to pull up exposure and shadows. Finally, I'll never touch 10-bit, as the color shifting is too severe for my taste.

Hope you found my tests useful!


16 hours ago, heart0less said:

Cool!

It's nice to see 5D2 getting some proper love. 

Thank you! I love this camera.

OK, so following feedback I got from the Magic Lantern forum, I did another test. This time, I am testing the highlight detail retention of different bit depths.

First the variables :

Canon 5D Mark II 2.1.2 w/94k actuations
Canon 35mm F/2 IS
VAF 5D2 AA Filter
SanDisk ExtremePro 32Gb UDMA 7 160MB/s CF card

magiclantern-crop_rec-3k_Updated_Center_4.20pm-5D2-eXperimental.2019Nov14.5D2212 from Reddeercity
100 ISO
1866 x 1044 1.00x Crop @ 23.976 fps 1/50s

Methodology :

I started with a very over-exposed sky and brought the exposure down in 1/3 EV steps using the aperture.

The test starts at f/6.4, using the RAW zebras as mentioned in the thread. I then pulled back two settings in ACR: Exposure and Highlights. Exposure is brought down relative to the aperture setting so that the resulting image has the same exposure across the board. Highlights are always pulled back to -100.

I did two rounds, the first at 10-bit and the second at 14-bit. Something I encountered during recording, however, is that the RAW zebras did not show while recording 10-bit footage. They displayed fine in live view when not recording, but as soon as I started recording, the zebras reverted to the standard ones.

Here is what I am talking about. The first image is while recording 14-bit, the second while recording 10-bit.

IMG-3822.jpgIMG-3821.jpg

Here is a reference picture of the scene I made with my iPhone 7 Plus :

Reference-picture.jpg

Onto the test itself.

Here is the 14-bit :

14-Bit-Exposure.jpg

And here is the 10-bit :

10-Bit-Exposures.jpg

Don't forget that you can click on these images to view in full resolution.

Can you tell the difference? 😛


Here are all my files from the test: DNGs, XMP profiles created in ACR, PSD files, exported JPEGs, and everything in between.

Conclusion :

It seems that both 14-bit and 10-bit handle highlight detail very, very well. Both can be "metered" about the same for the highlights. From what I can tell, we lose detail in the clouds at f/6.4 and f/7, and we start seeing consistent detail in the clouds at f/8. Coincidentally, f/8 is when the black bars in the RAW zebras stop appearing. I would conclude that using zebras for exposing, and looking for an exposure just before the black bars appear, is a great way of getting all the highlight capacity of the sensor. I had trouble seeing the difference between 10-bit and 14-bit in this scene; to my eye, they both seem equal. I did not test 12-bit because the difference was marginal compared to 14-bit in the first test.

The bulk of the difference that I wanted to show is between the two extremes. This supports my hypothesis from the first test that highlight detail seems unaffected by bit depth.

I also took the well-exposed picture (f/8) and pushed the shadows up to +100, then looked for areas where I could see differences. Here is another example of the green cast in the shadows when pushed to the extreme (Shadows +100):

Shadow-CUT.jpg

You can see what I am talking about in two places: the back of the stop sign and the window frame. In the 10-bit portion (left) you can see some green cast in the recovered shadow areas, but the cast is much less severe than in the initial test. There is, however, much more grain. It's especially visible in the window frame section in the middle.

Conclusion of the conclusion :

10-bit seems AS GOOD AS 14-bit for highlight detail retention @ 100 ISO. For shadow detail when pushed up +100, 14-bit still holds an advantage, albeit less than previously thought.


@Volumetrik  Nice tests!

 

 

7 hours ago, Volumetrik said:

It seems that both 14-bit and 10-bit depths handle high exposure detail very, very well. Both can be ''metered'' about the same for the highlights.  [snip]

I had trouble seeing the difference in this scene from 10-bit and 14-bit. In my eye, they both seem equal.

That's because bit depth and dynamic range are two completely independent properties.

 

There seems to be a common misconception that bit depth and dynamic range (or contrast) are the same thing or are somehow tied together -- they're not.

 

Testing the look of various bit depths is more suited to a color depth perception experiment, but we're not viewing your images on a 14-bit monitor, so the results don't give true bit depth (and, hence, true color depth).  Of course, greater bit depths give more color choices when grading, but bit depth doesn't inherently affect contrast.

 

By the way, another common misconception is that bit depth and color depth are the same property -- they aren't.  Bit depth is merely one of two equally-weighted factors of color depth, with the other factor being resolution.  Essentially,  COLOR DEPTH = BIT DEPTH x RESOLUTION.
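Taking that formula at face value (purely an illustration of the equation as stated above; the resulting figure is in arbitrary "bit x pixel" units, not a standard metric):

```python
# COLOR DEPTH = BIT DEPTH x RESOLUTION, exactly as stated above
bit_depth = 10                  # bits per sample
resolution = 1920 * 1080        # pixels in the frame
print(bit_depth * resolution)   # 20,736,000 under this definition
```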

