
Sony A7S III – 10bit vs 8bit 4K/60p


Recommended Posts

  • Administrators

In 99% of cases there seems to be little benefit to using 10bit, other than larger, more difficult to edit files.

10bit is one of those specs that's easy to get excited about on paper. It's a bit like the amount of memory in your PC or the speed of the CPU.

The higher the number, the better!

What actually makes the difference when it comes to image quality? Well it’s a bit complicated. RAW is the ultimate codec, and the ultimate pain in the ass. LOG is great, no question. Lots of dynamic range, small file sizes, easy to edit and to apply a LUT. And for great LOG you need 10bit, right?

Well it seems that on the A7S III the difference is much of the time impossible to even see.

Not that this wasn't already proven by the Canon 1D C and that infamous MJPEG codec. The 8bit Canon LOG mode on that camera was incredibly nice. Fast forward nearly 10 years and there's far less difference between 10bit 4:2:2 and 8bit 4:2:0 on the Sony A7S III than you might expect.

New blog post:

https://www.eoshd.com/news/sony-a7s-iii-10bit-image-quality-vs-same-camera-in-8bit-with-surprising-results/

DSCF0083.jpg


It seems to vary camera to camera, the whole 8-bit vs 10-bit thing.

On the Panasonic S1H, for example, the 8-bit in V-Log has that weird banded magenta thing and can render it unusable, sometimes even in non-V-Log profiles, but the 10-bit is great.

But the 12-bit BRAW from the Video Assist is just excellent! Such a huge jump up if you're going to do keying and heavy grading and so on.


I have to agree that when looking at 8-bit vs 10-bit files straight out of the camera, there is not a lot of difference if exposed correctly. When doing heavy grading or keying, or if improperly exposed, it makes a difference.

Also, 8-bit H.265 decoded with the more recent H.265 decoder algorithm can look excellent - comparing the older decoder to the newer one, the newer provides more highlight recovery, so something was tweaked even though the recording itself did not change. I imagine if you have a perfect 8-bit exposure, encode and decode, 10-bit would not be worth the extra data. I feel like 12-bit, with the ability to change exposure after it has been recorded, is a clear advantage - but is RAW worth the extra data? Is heavily compressed 12-bit RAW worth the extra data? I think so...

  • Administrators
58 minutes ago, jgharding said:

It seems to vary camera to camera, the whole 8-bit vs 10-bit thing.

On the Panasonic S1H, for example, the 8-bit in V-Log has that weird banded magenta thing and can render it unusable, sometimes even in non-V-Log profiles, but the 10-bit is great.

But the 12-bit BRAW from the Video Assist is just excellent! Such a huge jump up if you're going to do keying and heavy grading and so on.

Yeah, RAW and ProRes are the natural codecs for 10bit and above.

I can't work out if Sony's 8bit is just very good, or if 10bit H.264 and H.265 is a waste of disk space.

  • Administrators
2 minutes ago, majoraxis said:

I have to agree that when looking at 8-bit vs 10-bit files straight out of the camera, there is not a lot of difference if exposed correctly. When doing heavy grading or keying, or if improperly exposed, it makes a difference.

Yeah, but I show that in the article. I raised the exposure 5 stops!

Still no difference.

  • Administrators
17 minutes ago, independent said:

Andrew, don't you have the R5? I'm surprised you didn't compare its internal raw w/ these compressed codecs. 

It's sold.


The more I learn about each aspect of the image pipeline, the more I realise that everything matters.

If you have a camera like the A7S3 with a good sensor, good image processor, and good codec, then the fact that it's 8-bit isn't too bad, because the other parts of the pipeline are fine.

I've run into issues shooting 8-bit C-Log with the Canon XC10.  This should be a reasonable example of 8-bit done well, as it's a Canon cine/video camera with a 4K 300Mbps 8-bit codec, but alas....

Here's this shot, which I will admit is underexposed:

Violinist1_1.1.1.png

which gives us this when we convert the colour space:

Violinist1_1.1.2.png

but unfortunately, if we zoom into the seat in the bottom left:

Violinist1_1.1.6.png

Is this an 8-bit issue?  Is this a poor codec issue?  I'll leave that as an exercise for the reader.

Let's take another example:

Cinque de Terre_1.9.1.jpg

Which after a transform gives this:

Cinque de Terre_1.9.2.jpg

and look at the noise!

image.png

Now, is that ISO noise?  8-bit quantisation?  A "poor" 300Mbps codec on a Canon cine camera?

This is the vectorscope, which clearly shows the 8-bit quantisation:

image.png

Yes, this is an extreme example with a low-contrast scene, a flat log profile, and an 8-bit codec, but this was a $2K Canon cine camera with a 300Mbps codec.

So, I replaced it with a GH5, with the decisive factor being the internal 10-bit.  

Let's take this image here, a 10-bit HLG frame with almost zero contrast:

Screen Shot 2019-01-09 at 5.30.02 pm.png

and actually try to break it by applying more contrast than you would ever use in real life:

Screen Shot 2019-01-09 at 5.30.55 pm.png

and it holds together.  

Is that the 10-bit?  Is that the 150Mbps 4:2:2 codec?  Is that the fact it's a downsampled 5K image?  Who knows, but it's a night and day difference: with the 8-bit camera I struggled to get a good image from a real-world shot, while I couldn't break the 10-bit image even when I tried.

I like to think about it like this: 8-bit can be good enough if you have a good enough implementation of it (unlike the XC10), but a good 10-bit implementation gives you security that you're not going to run into bit depth issues, so for me it's a safety thing.


I'm always glad to see people doing actual tests on these things! Thanks Andrew. I don't really see any difference in the pictures you posted.

I did quickly grab some images with my Z Cam. I can't do the same format in 8 and 10 bit, so I compared 8-bit H.264 at 120Mbps vs. 10-bit H.265 at 60Mbps. Big difference there... here's a 1:1 crop of the sky dropped 4 stops in post with no other color adjustment. 4 stops is a lot, but not unreasonable when exposing a Zlog2 image, particularly with high dynamic range. Lots of very ugly colored bands on the H.264, and it actually looks worse in the full image. It might be worth comparing your highlights as well as your shadows, especially in log footage. I don't see any banding in the shadows in my image when pushed, just in the highlights.
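For anyone curious, the effect of that 4-stop push is easy to sketch in a few lines of Python. This is a toy model, not the Z Cam's actual pipeline: just a smooth ramp quantised to each bit depth and then pulled down in float, counting how many distinct levels survive (fewer levels = coarser, more visible bands).

```python
import numpy as np

# a bright, sky-like gradient on a 0..1 scale, one sample per UHD column
ramp = np.linspace(0.5, 1.0, 3840)

def quantise_and_push(signal, bits, stops=-4):
    codes = 2 ** bits - 1
    quantised = np.round(signal * codes) / codes  # what the camera records
    return quantised * 2.0 ** stops               # 4-stop pull in 32-bit float

levels_8 = len(np.unique(quantise_and_push(ramp, 8)))
levels_10 = len(np.unique(quantise_and_push(ramp, 10)))
print(levels_8, levels_10)
```

The 10-bit ramp keeps roughly four times as many distinct levels after the push, which is why its banding is so much finer.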

banding.thumb.png.44a8437df90dd205faa7dd1a249f1187.png

 

So I will agree with @jgharding that it will vary camera to camera, and encoder to encoder.


What the specs never say is the bit depth used to scan the sensor. When stills are taken one might expect the full bit depth is used (14-bit?). But in video mode, to reduce rolling shutter, a read-out mode with reduced bit depth may be used. For 8-bit log or gamma-corrected recordings a 10-bit (linear) read mode might be used. If that same 10-bit linear mode is used to create a 10-bit log, the quality wouldn't be much better than with an 8-bit log, apart from a reduction in round-off errors.
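You can demonstrate that point with a toy log curve in Python (the curve is made up for illustration, not any real camera's transfer function): feed every code a 10-bit linear ADC can produce through a log encode and count how many output codes actually get used.

```python
import numpy as np

def log_encode(linear, bits_out):
    # toy log curve: normalise, log-encode to 0..1, quantise to the output depth
    x = linear / linear.max()
    encoded = np.log2(1 + 1023 * x) / np.log2(1024)
    return np.round(encoded * (2 ** bits_out - 1)).astype(int)

linear_codes = np.arange(1024)   # every value a 10-bit linear readout can give
used_10 = len(np.unique(log_encode(linear_codes, 10)))
used_8 = len(np.unique(log_encode(linear_codes, 8)))
print(used_10, used_8)
```

The 10-bit log file never uses anywhere near all 1024 of its codes, because the information simply isn't in the 10-bit linear source - it only avoids the extra round-off error of squeezing into 8-bit.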

19 minutes ago, Michael S said:

What the specs never say is the bit depth used to scan the sensor. When stills are taken one might expect the full bit depth is used (14-bit?). But in video mode, to reduce rolling shutter, a read-out mode with reduced bit depth may be used. For 8-bit log or gamma-corrected recordings a 10-bit (linear) read mode might be used. If that same 10-bit linear mode is used to create a 10-bit log, the quality wouldn't be much better than with an 8-bit log, apart from a reduction in round-off errors.

Sony sensors use linear 12bit ADC readout in video mode.


The 14-bit RAW of the Canon 5D2 and 5D3 in Magic Lantern remains the best image quality I've seen outside of Alexa and RED.  As in your test with the still photo, this is because Magic Lantern essentially made the cameras shoot raw photos in quick succession.

In my real world shoots Andrew's Z-Log picture profile in 8-bit holds up extremely well against 10-bit N-Log on the Nikon z6.  I like having the Atomos Ninja V monitor for the LUT preview and other monitoring features, but the extra bitrate really doesn't seem to add much.

18 hours ago, kye said:

Is this an 8-bit issue?  Is this a poor codec issue?  I'll leave that as an exercise for the reader.

It's the implementation of the codec. Sometimes steps are cut to speed up encoding because of hardware or power consumption constraints.

4 hours ago, Michael S said:

What the specs never say is the bit depth used to scan the sensor.

This is also the source of the endless confusion that photographers and videographers fight over, because most cameras use different readouts (12-bit vs 14-bit for stills) for different modes.

I wish I had 14-bit RAW video on the R5 - it would be amazing. Well, I do have 14-bit RAW, it's just at 12fps and in photo mode only.


A lot of this is due to the 32bit float color space in NLEs. As long as the 8bit capture has enough levels to avoid posterization, the 32bit float pipeline will likely be able to fill in any gaps as the image is pushed hard. Filling gaps with maths is much easier in grading than in, say, upscaling an image.

In the case of upscaling, new pixels can be averaged, but averaging doesn't work for fine details like a hair. In grading, however, we are trying to prevent posterizing, which is done through smooth gradients, and there averaging the surrounding values sometimes works perfectly.

For example, if you have a color of value 200 and another of value 250, it's easy in grading to average an in-between value of 225, which still creates a nice smooth gradient.
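That 200/250 example in code form - trivial, but it shows what float precision buys you over integer code values:

```python
a, b = 200, 250                  # two 8-bit code values either side of a gap

midpoint = (a + b) / 2.0         # float maths lands neatly on 225.0
quarter = a + (b - a) * 0.25     # float can also hold 212.5 - a value that
                                 # doesn't exist as an 8-bit integer code
print(midpoint, quarter)
```

A 32bit float pipeline can keep those in-between values all the way through the grade and only round once at output.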

Where 10bit is important is making sure the shot is captured well the first time.  Once you have posterization it will always be there, and no 32bit float processing can magically make it go away. Visually, if the shot has no posterizing, then no matter how hard it is pushed it likely never will - or pushing the 10bit version would show just as much. That's why 32bit float was created.

10bit is a lot like 32-bit audio, or 16 stops of DR graded down to 10 stops.  We record more so we have it and can better manipulate it.  Most of the shots above would likely have still looked good with 6 bits. You need a very long and complex gradient to break 8bit.  It can and does happen.  The more noise the camera has, the less it will happen, because of dithering. I think this is partially why Sony always had such a high base ISO for log.

Finally, 10bit never promised better color, more dynamic range or fewer compression artifacts. That's not what bit depth does.  It's all just about how many different color samples can be used across the image, and the single real side effect is posterizing. Many computer monitors at one point were only 6-bit panels even if they claimed 8bit.  Most people never noticed unless they did something like use the gradient tool in Photoshop to span a full 1920-wide image. In the case of the clear blue sky image in the article, that wasn't even a difficult gradient - most of the sky was a similar shade of blue. To break 8bit you need a gradient going from 0 blue to 255 blue across the full 3840 pixels of 4K video, which means a unique blue sample every 15 pixels. So your sky needs to go from black on one end of the screen to bright blue on the other side. Not always realistic, but you can shoot skies around dusk and dawn that spread the values out a lot more than midday. By comparison, 10bit has a unique blue sample every 3.75 pixels for UHD video.
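The 15-pixel and 3.75-pixel figures come straight from dividing the frame width by the number of code values - in Python:

```python
width = 3840   # UHD frame width in pixels

step = {}
for bits in (8, 10):
    codes = 2 ** bits            # distinct values per channel at this depth
    step[bits] = width / codes   # pixels wide each gradient "step" becomes
print(step)    # {8: 15.0, 10: 3.75}
```

A 15-pixel-wide band is at the edge of visibility on a clean gradient; a 3.75-pixel band effectively never is.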

It doesn't even have to be something that covers the full screen.  If you have a gradient over 100 pixels from 200 blue to 205 blue, that still means a new blue sample every 20 pixels, even though the screen area is very small. I develop mobile apps, and when I add a gradient I run into the same problem trying to do something subtle across an element like a button: the gradient needs enough range to cover the area of pixels or it will look steppy. 10bit and higher is a safety net - a near guarantee of never having any kind of posterizing. In the professional world that's important, and nobody wants surprises after the shoot is done.
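And the button case is the same arithmetic at a smaller scale:

```python
width_px = 100                   # a small UI element, not a full frame
start_code, end_code = 200, 205  # a subtle gradient spanning only 5 code steps
pixels_per_step = width_px / (end_code - start_code)
print(pixels_per_step)           # 20.0 - a visible band every 20 pixels
```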

