
10-bit vs 8-bit: Hype or Real?


Anil Royal

I bought an Atomos Ninja external recorder after reading a good deal on the internet and watching a ton of YouTube videos explaining why 10-bit color depth is superior to 8-bit, how it makes a huge difference during color grading, and so on. In theory, 10-bit gives each pixel the ability to pick from over 1 billion shades of color, compared to the 16 million shades offered by 8-bit. This allows for smoother color gradations, avoids banding, etc. I got that part. But what about in reality? Does it make a noticeable difference?
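Just to put numbers on that claim, here's a quick back-of-the-envelope sketch (purely illustrative):

```python
# Shades per channel and total colors for 8-bit vs 10-bit video.
# Pure arithmetic, nothing camera-specific.

def color_counts(bits_per_channel: int) -> tuple[int, int]:
    levels = 2 ** bits_per_channel   # levels per channel (R, G, B)
    total = levels ** 3              # combinations across three channels
    return levels, total

for bits in (8, 10):
    levels, total = color_counts(bits)
    print(f"{bits}-bit: {levels} levels/channel, {total:,} total colors")
# 8-bit: 256 levels/channel, 16,777,216 total colors
# 10-bit: 1024 levels/channel, 1,073,741,824 total colors
```

So per channel the jump is 256 to 1024 levels; that 4x finer spacing is where the smoother gradations are supposed to come from.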

I searched everywhere for a side-by-side comparison of the same subject shot on 8-bit 4:2:0 internal vs 10-bit 4:2:2 external. I found none. Everybody explained why 10-bit is great but offered no visual proof, or at least nothing my eyes could spot.

So I made these comparison tests myself after getting the Atomos Ninja. Same test subject, filmed on the EOS R. Three clips: 8-bit 4:2:0 internal H.264; 8-bit 4:2:2 ProRes external; 10-bit 4:2:2 ProRes external. All 4K, all CLOG.

At first look, there's one obvious difference: the 8-bit 4:2:0 internal CLOG footage looked noticeably flatter than the external CLOG footage (both 8-bit and 10-bit). I don't know if that's a good thing or bad; I'd expect CLOG to be quite flat. Why is the externally recorded footage less flat? Is the ProRes codec used by the Atomos Ninja playing with the contrast levels before recording? On the waveform monitor, the internal 8-bit CLOG recording showed more room for brightness and darkness adjustments before clipping than the external recordings. Isn't it supposed to be the other way around?

Other than that difference in flatness, I can't find anything else. 10-bit or 8-bit, neither externally recorded clip gave me any additional legroom while color grading compared to the 8-bit internal clip.

What's more, the 8-bit and 10-bit external recordings looked exactly the same, both untouched and after applying the same grades. No matter how hard I looked (zooming in, for instance), I couldn't find a difference. It may have to do with ProRes encoding everything as 10-bit 4:2:2 irrespective of the source bit depth. It is said that extreme color grading breaks apart 8-bit while 10-bit holds up pretty well. That argument is backed by solid theory. But if I can't see how much better 10-bit is than 8-bit, what's the point of these external recorders? Is the Atomos Ninja merely an overpriced monitor? (It does a good job of that, for sure.)
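If that's what's happening, the padding can't add information: once the camera has quantised to 8-bit, a 10-bit container just rescales the same 256 codes. A toy sketch of that idea (illustrative only, not the actual ProRes pipeline):

```python
# Sketch: an 8-bit source wrapped in a 10-bit container gains no new shades.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(100_000)                  # "true" light values in [0, 1)

eight_bit = np.round(scene * 255)            # quantised by the camera first
in_ten_bit_container = eight_bit * 4         # rescaled into 0..1020 codes

true_ten_bit = np.round(scene * 1023)        # what a real 10-bit capture keeps

print(len(np.unique(in_ten_bit_container)))  # still at most 256 distinct codes
print(len(np.unique(true_ten_bit)))          # far more distinct codes
```

So an 8-bit HDMI feed recorded as 10-bit ProRes should grade exactly like the 8-bit source; only a genuine 10-bit feed has the extra shades.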

And, assuming 10-bit gives color-graded videos fewer artifacts, what happens when the edited video is exported using an 8-bit codec and viewed on an 8-bit monitor?

There may be rare cases where 10-bit is useful. But for most scenarios, I'm starting to believe 10-bit isn't any better than 8-bit.

What am I missing? I'd be glad if somebody could prove me wrong and confirm that 10-bit is worth it.


The different contrast levels are probably due to data vs video levels. I'm not sure if your particular camera lets you choose which to use over HDMI, but if not, you can fix it in your NLE. Resolve has an option to swap between video and data levels, and in Premiere you need to add a Fast Color Corrector with output set to 16-235 before you add any other colour effects.
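To illustrate the mismatch: 8-bit "data" (full) range uses codes 0-255, while "video" (legal) range puts luma between 16 and 235. Interpret one as the other and your blacks and whites shift, which reads as a contrast difference, not a bit-depth difference. A rough sketch (function names are mine; the exact conversion depends on the NLE):

```python
# Full-range vs legal-range 8-bit level mapping.

def data_to_video(code: float) -> float:
    """Map a full-range 0-255 code into the 16-235 legal luma range."""
    return 16 + code * (235 - 16) / 255

def video_to_data(code: float) -> float:
    """Map a legal-range 16-235 code back to full-range 0-255."""
    return (code - 16) * 255 / (235 - 16)

print(data_to_video(0), data_to_video(255))   # 16.0 235.0
print(video_to_data(16), video_to_data(235))  # 0.0 255.0
```

If the recorder tags the file with the wrong interpretation, full-range blacks (0) get lifted to 16 and whites squashed to 235, or the reverse: exactly the "flatter vs less flat" difference described above.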

You might not always notice the difference when you're editing, but on the occasions you do, you'll be very thankful. It also helps you meet the minimum delivery requirements of a lot of broadcasters and agencies, which means more work opportunities.


19 hours ago, Anil Royal said:


I can tell you that I'm a believer. I have the S5, and I've had to recover from some really horrible lighting, pushing the WB to the other side of the spectrum, and the footage held up flawlessly. I'm able to recover from situations where I had no control over the lighting in ways that I simply couldn't when shooting 8-bit. I'm the first one to say when I think something is overrated: I personally think ALL-I is just a waste of storage space, RAW footage for YouTube and Vimeo is an even bigger waste of storage space (unless your editing machine really can't handle Long GOP), external recorders are a waste of money for most people, etc. I've reached the conclusion that for anything short of TV-quality commercial work, documentaries (maybe), and feature-length films, Long GOP H.264 and H.265 are good enough.

But after shooting 8-bit for years and fighting with the color grade, especially when the WB was off, I'm a believer in 10-bit. I also think 10-bit handles highlight roll-off better than 8-bit, which is something even YouTube viewers will notice if you don't get it under control during editing. But anything beyond that (ProRes, BRAW, etc.), as I mentioned, I think is a waste of space, time, and money. External recorders for certain cameras do let you record higher resolutions (like 6K, for example), but unless you really do need to crop and recompose that much, I think anything over 4K is also a waste.

There is a video on YouTube that shows 8-bit vs 10-bit side by side. I don't remember the title now, but in his side-by-side tests he reached the same conclusions that I reached on my own: there is no noticeable difference in the final footage, but 10-bit handles highlights and WB correction better. I think 8-bit is the better way to go if you have the time to properly white-balance and expose the footage, but when you're shooting hectic events like I typically shoot, or real estate where the lighting is always some weird mixture you can't control, the extra editing latitude that 10-bit provides is very noticeable.


9 minutes ago, herein2020 said:


Thank you. I guess I'll have to use 10-bit a lot more (than the few tests I did) to see the real benefits.


How about a curve ball: 10-bit internal vs. Ninja/ProRes? With the a7S III, the image just looks a bit better recorded externally using the same settings. Everything just looks better. And I'm not talking raw. I had the EOS R for a few months and I thought the Ninja footage looked better as well. It's cool to have options.

chris


48 minutes ago, barefoot_dp said:


Thank you. 


1 minute ago, Trek of Joy said:


The EOS R won't record 10-bit internally, which is why I had to get the Ninja 🙂

The Ninja footage didn't look bad, but so far it didn't look any better than 8-bit internal either. Then again, maybe my tests didn't cover the right use cases. I think I'll have to spend more time figuring this out and experimenting.

Thank you.


Here's a flat scene filmed in 8-bit C-Log on the XC10.

Ungraded:

[image]

With a conservative grade:

[image]

Cropped in:

[image]

Here's the vectorscope that shows the 8-bit quantisation, which is made worse by the low-contrast lighting conditions and the low-contrast gamma profile (C-Log):

[image]

I managed to "fix" the noise through various NR techniques, which also fixed the 8-bit quantisation:

[image]

[image]
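The NR "fix" makes sense, by the way: spatially averaging a noisy, quantised image synthesises values in between the 8-bit codes, much like dithering. A toy 1-D sketch of that effect (values picked just for illustration):

```python
# Spatial averaging of a noisy quantised ramp recovers in-between values
# that the 8-bit codes alone can't represent.
import numpy as np

rng = np.random.default_rng(1)
gradient = np.linspace(0.40, 0.42, 5_000)     # subtle, band-prone ramp

clean_q = np.round(gradient * 255) / 255      # quantised, no noise: hard steps
noisy = gradient + rng.normal(0, 0.01, gradient.size)
noisy_q = np.round(noisy * 255) / 255         # noise dithers the quantisation

kernel = np.ones(101) / 101                   # crude box-filter standing in for NR
smoothed = np.convolve(noisy_q, kernel, mode="same")

print(len(np.unique(clean_q)))                # only a handful of hard steps
print(len(np.unique(smoothed)))               # many intermediate values reappear
```

That's also why banding tends to show up worst in clean, noise-free gradients: there's no noise left to dither the steps.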

Yes, this is a real-world situation. Is it a disaster? Not for me, an amateur shooting my own personal projects. Would it be a disaster for someone else? That's probably a matter of taste and personal preference.

I have other clips where I really struggled to grade the footage, and although the 8-bit codec wasn't the cause, it added to the difficulty I experienced.

I now shoot 10-bit with the GH5 and don't remember seeing a 'broken' vectorscope plot like the one shown above.

When I tested 10-bit vs 12-bit vs 14-bit (using Magic Lantern RAW) I personally saw a difference going from 8 to 10-bit, I saw a very small difference going from 10 to 12-bit, and I couldn't see a difference going from 12 to 14-bit.  Others said they could see differences, and maybe they could.  A couple swore that 14-bit is their minimum.

I've also seen YT videos where people test 8-bit vs 10-bit: some tests found they could break the 8-bit image under normal circumstances, while others couldn't break it even under ridiculously heavy-handed grading that you'd never do in real life.
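You can run a toy numeric version of those torture tests: quantise a flat, low-contrast region at each bit depth, apply a heavy contrast expansion, and count the distinct output levels. Fewer distinct levels means coarser steps, which is exactly what banding looks like. A sketch (the 0.4-0.6 range and the stretch are arbitrary choices of mine):

```python
# Torture-grade a shallow ramp at 8-bit and 10-bit and compare the steps.
import numpy as np

ramp = np.linspace(0.4, 0.6, 10_000)            # a flat, low-contrast region

def grade_after_quantise(signal, bits):
    codes = np.round(signal * (2**bits - 1)) / (2**bits - 1)
    return np.clip((codes - 0.4) / 0.2, 0, 1)   # stretch 0.4-0.6 out to 0-1

for bits in (8, 10):
    graded = grade_after_quantise(ramp, bits)
    print(f"{bits}-bit after grade: {len(np.unique(graded))} distinct levels")
```

The 10-bit version survives the same stretch with roughly four times as many levels, which is the whole argument for 10-bit in one number.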

Here's me trying to break a 10-bit file from the GH5. Ungraded 4K 150Mbps HLG:

[image]

Unbroken despite horrific grading:

[image]

It's probably a matter of tolerance. In my experience, 8-bit is good enough in most situations and 10-bit is basically bullet-proof, but others have different tolerances and shoot in different situations.

Also, different cameras perform differently.


Thank you @kye. That's a great explanation from real experience. 

When I was color grading my last short film (shot on the EOS R, 8-bit internal CLOG; graded in DaVinci Resolve), I noticed artifacts popping up if I went extreme, so I had to be gentle. (It was a horror flick with all indoor, night scenes, mostly underlit.) I later learned that 10-bit would've let me grade to the extremes, so I got an external recorder. My test shots (for the 8-bit vs 10-bit comparison) mimicked some of the indoor night shots from my short film. And I found no visual differences between 8- and 10-bit, before or after grading. That's the story behind my lengthy post.

I know just one test can't disprove what a lot of people believe in, which is why I started this thread.


There definitely is a difference, though I don't have experience recording 10-bit from a Canon camera on a Ninja. Canon has always impressed me with their 8-bit files, though; the C100 is the camera I have the most experience with.

I shoot with a GH5 and have found 10-bit to be really beneficial, but I generally use 8-bit most of the time now. I used to film everything in 10-bit, but it was just unnecessary for 90% of my work. It's wonderful to have, for sure, but if I don't need that extra data, there's not much reason to deal with storing the larger files.

So I wouldn't say it's just hype, but I think it's maybe slightly overrated. 


10-bit over 8-bit, and 4K over 1080p, never mind 10-bit 4K over 8-bit 1080p.

It’s all about latitudes for me.

In non-tricky lighting, there's less noticeable difference, but when it's tricky and you need to salvage something...

Anything more, though, is overkill for me. For now anyway, though I think it will be for some time.


It's Canon. I'm not sure I trust their 10-bit output to an external recorder. What sort of signal is being sent to the recorder? Is it RAW data converted to a 10-bit ProRes file, or an H.264 10-bit signal that is recorded as ProRes?

That said, under certain conditions, 8-bit and 10-bit don't yield much difference, depending on the grade. I have found differences on occasions where I've switched to 50p on my GH5, which is 8-bit. Filming V-Log in both 8-bit and 10-bit, there are noticeable shifts in how far you can push grades under certain lighting conditions.

That said, I'm a huge fan of BRAW, and that is a whole different level of quality when it comes to grading, so I rarely seriously grade 10-bit footage these days, except to colour-balance my C and D camera footage on multi-camera shoots.

 


On 1/19/2021 at 11:02 PM, Anil Royal said:

Internal 8-bit CLOG recording, when viewed through waveform monitor, showed more room for brightness and darkness adjustments before clipping - compared to the external recordings! Isn't it supposed to be the other way!?! 

No.  Bit depth and dynamic range are two completely independent properties (as the contrast "discrepancy" readily proves).

 

Merely mapping two different bit depths to the same contrast range should not change the contrast. Something else is causing the difference in contrast. Note that there is no difference in contrast between the 8-bit and 10-bit images from the recorder, but the internal 8-bit differs. So, the camera is affecting the contrast.

 

In regards to generally seeing a difference between 8-bit and 10-bit, you would likely see a difference if you compared the 8-bit and 10-bit images on a 10-bit monitor/projector.


2 hours ago, Anil Royal said:


No worries. You may find differences in how Canon and the external recorder encode things. Not all electronics and algorithms are equal, and even something like throwing more CPU power at the problem might mean the encoding is done at higher quality.

Ultimately when you push your footage you're pushing everything, not just the bit-depth.  14-bit footage will still break if you've got lots of noise, or if it's heavily compressed with lots of jagged edges.

53 minutes ago, MrSMW said:


I came to the conclusion it's all about latitudes too.  

I then extrapolated that to the idea that the less I have to stress an image the better, so now I shoot in a modified Cine-D profile but still in 10-bit, rather than shooting HLG 10-bit and then having to push it around to get a 709 type of image. To put it another way, I get the camera to push the exposure towards 709 before the conversion from RAW to 10-bit, instead of me doing it afterwards in post.

Recording something flat and then expanding it to full contrast in post is really just stretching the bits further apart. If you recorded a LOG profile whose full exposure range only spanned 30IRE to 80IRE in 10-bit, and then expanded that range to 0-100IRE, you've multiplied the signal by two, effectively giving you a 9-bit image. If that LOG image was 8-bit to begin with, you're dealing with a 7-bit image by the time it gets to 709.
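That arithmetic can be put as a one-liner. It assumes a uniform stretch (real log curves aren't linear in IRE), so treat it as a rough bound rather than an exact figure:

```python
# Effective bit depth left after stretching a partial code range to full scale.
import math

def effective_bits(bits: int, used_fraction: float) -> float:
    """Bits of precision remaining when only `used_fraction` of the
    code range was used and is then expanded to full scale."""
    return bits + math.log2(used_fraction)

# A 30-80 IRE log signal uses half of a 0-100 IRE range:
print(round(effective_bits(10, 0.5), 1))   # -> 9.0
print(round(effective_bits(8, 0.5), 1))    # -> 7.0
```

Halving the used range costs exactly one bit, which matches the 10-to-9 and 8-to-7 figures above.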

Cinematographers talk about things like the Alexa only having a few stops of latitude, and they're shooting RAW!  If we're shooting 8-bit or even 10-bit then that means we have almost no latitude at all, so that's the philosophy I've taken.


I can only speak from my own experience, but grading the 10-bit HD output from my FS5 is a very different experience (in a good way) from grading the 8-bit HD I used to get from my C100 (I've had both the Mk2 and the Mk1). I loved that Canon colour, but given that the artistic vision I'm pursuing involves quite extensive grading as I chase the elusive 'film' look (my ideal would have my footage looking like a moving 35mm still print), the lure of 10-bit on the FS5 was too much to resist. Simply, it holds up to the grade in a way that the Canon's recordings just couldn't, in terms of artefacts, banding and grain.


20 hours ago, kye said:


I think the biggest thing I've discovered is that 10-bit lets you push the mids around more than 8-bit without losing color quality. Everyone typically talks about recovering the highlights, preventing noise in the lows, and dynamic range; for me the biggest problem is usually recovering exposure on skin in difficult lighting, when the highs and lows are all well within the WFM but the skin tones are underexposed, and you can't properly light the subject. I run into that situation all the time, and 10-bit lets you fix the exposure on the skin tones without losing color quality. 8-bit used to fall apart every time, and typically it was better to just leave the skin underexposed than to try to fix it without fill lighting.

Even with 10-bit there's only so much you can do to the mids before you start losing color (that's where the latitude limitations come in), but I'll take 10-bit over 8-bit any day for this situation. A good example is a sunset behind a subject with no fill lighting: with 8-bit you have no choice but to severely underexpose the subject or completely blow out the sunset; with 10-bit (and the benefits of V-Log and the S5's sensor) I'm able to recover the skin tones almost to the bottom edge of the proper IRE range while retaining the highlights and not washing out the lows.

One day I may shoot the exact same sunset-behind-the-subject scene back to back, once in 4:2:0 8-bit and once in 4:2:2 10-bit, and demonstrate how the mids fall apart when trying to recover the subject's exposure, but I typically don't have that kind of time during a shoot.


I don't know if it's a 10-bit vs 8-bit thing, an S-Log vs V-Log thing, a Sony vs Panasonic thing, or a color-science thing, but I just don't get the same sort of weird colors in V-Log on the S1 that I got in S-Log2 on the a6500.

I know a couple of the more notorious people on YouTube have recommended workarounds for 8-bit Sony: Dunna (Dunna Did It) said to just crank the saturation all the way up, and Caleb (DSLR Video Shooter) said to use the Rec.709 matrix gamut instead of S-Gamut.


12 hours ago, MrSMW said:

All that aside @herein2020 when are you going to update your forum name to Herein2021? 😀

Haha, I doubt they'd let me change my username. It will remind me what year I signed up, if I'm still here talking about 8-bit vs 24-bit or whatever we're up to by then 🙂

 

10 hours ago, Mark Romero 2 said:


 

I have that problem too. I've never done a real side-by-side test with the exact same sensor, color profile, and scene where the only change is the bit depth, partly because until now I didn't have a camera that would let me test that all internally. Going from 8-bit internal to 10-bit external is three changes at once (an external recorder, going over HDMI, and going from 8-bit to 10-bit), so I don't feel that would be a true test of just 8-bit vs 10-bit out of the same camera.

