
Compulsory viewing for EOSHD readers!




Maybe I'm too naive about the dynamic range of an analogue image, and maybe this leads too far off topic... but I thought:

 

13 stops of dynamic range gives you 2¹³ = 8192 discernible grey levels, while a 10-bit image can only convey 1024 levels of grey. So you're losing what your camera delivered in the encoding process.
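The arithmetic behind that worry is easy to check in a couple of lines. Note that it assumes a linear encoding, where every extra stop doubles the number of code values (the assumption the rest of the thread takes issue with):

```python
# Levels implied by the argument above (linear encoding assumed):
sensor_levels = 2 ** 13   # 8192 grey levels for 13 stops, if each stop doubles the count
file_levels = 2 ** 10     # 1024 code values in a 10-bit file

print(sensor_levels)                 # 8192
print(file_levels)                   # 1024
print(sensor_levels // file_levels)  # 8 sensor levels collapsed into each code value
```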

 

Oh OK, I see what you're saying now. Well, I don't know much about this stuff, but the BMPCC, for example, even in 10-bit ProRes mode, has far superior DR to the 8-bit hybrids out there. I assume it has something to do with the base capabilities of the sensor (before compression).

 

We know that with resolution, when you downscale the full-pixel readout of an 8 or 12 megapixel sensor down to 1080, it looks very detailed, even though the resulting output is only HD. I assume it's the same with DR and 10-bit. But I have no idea, really.

 

I know what my eyes see though. My original point is that a more robust image (13 stops, 10 bit, whatever) for both colours and DR is more important to me than resolution.  :)



the key word is "discernible".

 

*********** technobabble below ***************

 

In a 10-bit file you have 2^10 = 1024 shades. You could interpret that as holding 10 stops if you read it as a contrast ratio of 1:1024, but that has nothing to do with what the eye actually notices in terms of steps of contrast. So the 13-stop formula above is not what you need for the number of shades to encode 13 stops of light from a scene. The just-noticeable difference for levels of contrast actually depends on other factors, including resolution, intensity and refresh rate. You can also encode light intensity as linear or as a log curve, which has a major impact on the number of shades necessary before steps become noticeable. Some tests indicate 12-bit is the minimum for linear, but it's standard when working with film or 2K digital footage (which adds grain and/or noise) that you can encode 13-14 stops into a 10-bit log gamma without discernible steps.

 

**************** end babble **********************

 

Translation: 10-bit log is good enough for 13-14 stops.
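The linear-vs-log point can be made concrete with a rough sketch. Here the 13-stop log curve is a toy, not any real camera gamma; it just counts how many of a 10-bit file's 1024 code values land in each stop of scene light under each encoding:

```python
import math

CODES = 1024   # a 10-bit file
STOPS = 13     # scene dynamic range to squeeze in, as discussed above

def codes_in_stop(encode, stop):
    """Distinct code values covering one stop; stop 0 = brightest, 12 = darkest."""
    hi = 2.0 ** -stop        # top of the stop, relative to full scale
    lo = hi / 2.0
    return round(encode(hi) * (CODES - 1)) - round(encode(lo) * (CODES - 1))

linear = lambda L: L
log13 = lambda L: (math.log2(L) + STOPS) / STOPS   # toy log curve: equal codes per stop

for stop in (0, 1, 6, 12):
    print(stop, codes_in_stop(linear, stop), codes_in_stop(log13, stop))
```

Linear spends half the code values on the brightest stop and leaves the deepest stops with no codes at all, while the log curve hands each stop the same ~79 codes; that's why 10-bit log can carry 13-14 stops where 10-bit linear can't.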


  • 2 weeks later...


Well, somewhere in your technobabble you're still losing (or actually throwing away) information.

Then there's the argument that the human eye can distinguish grey levels better at some brightness levels than at others, so you can compress certain ranges, be it via log, a gamma curve, or something else; that *may* be true.

 

I would argue that

 

  a) You should never ever reduce the number of bits before you have done everything you could possibly want to do to the image!

  b) Human vision varies between individuals, and the areas of dynamic range that you have compressed might be indiscernible to some, but not to others. Since you have to "unlog" (i.e. exponentiate) before displaying on a screen, some individuals may still notice banding where others don't.
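Point a) can be demonstrated with a toy example (all numbers here are illustrative, and a 2-stop shadow lift is modelled as a crude multiply by 4): quantizing to 10 bits before the grade leaves far fewer distinct shadow levels than grading at full precision and quantizing last.

```python
def quantize(x, bits):
    """Round a 0..1 signal to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

# 410 distinct 13-bit shadow values, all in the bottom ~5% of the range
shadows = [i / 8191 for i in range(410)]

lift = lambda x: min(1.0, x * 4)   # crude 2-stop shadow push

grade_then_quantize = {quantize(lift(x), 10) for x in shadows}
quantize_then_grade = {lift(quantize(x, 10)) for x in shadows}

print(len(grade_then_quantize))  # many distinct output levels survive
print(len(quantize_then_grade))  # far fewer -- the discarded bits show up as banding
```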


Yes, you are always losing information, from the start of the signal chain to the end; the question is what you can get away with (for a given target end use). But there actually have been extensive human tests done on the visual system, as I keep saying, e.g. on the just-noticeable difference with respect to contrast levels and colour changes. Most of the movies you've watched projected in a theater were probably scanned at 2K 10-bit Cineon log (which sets the 18% middle grey card to code value 470); for me, probably around half of them. I'm assuming you're a little younger than me :( . Even the Alexa, which is 16-bit internal and usually records to 10-bit 2K Log-C, uses a gamma curve very similar to Cineon (that's what the "C" stands for).

 

This is from Arri's Alexa FAQ page:

 

"If there are not enough steps between the brightest and the darkest part of the image, you will see banding artifacts, where, rather than seeing a gradual change in lightness, you will see distinct bands of lightness. However, 10 bit images usually have enough steps to avoid such artifacts."


I do not doubt this, but I think you should not start throwing away information right after you have extracted it from the sensor.

Inevitably, you are going to lose further information in every post-processing step you apply. Compressing to a log scale and fewer bits *before* going into that process would be like applying Dolby noise reduction (anyone remember that?) when recording a master tape... that system also applied a sort of compression curve to store information in a restricted frequency spectrum. Or the RIAA mastering curves.

 

No one would ever have had the idea of applying those before the data got written to the final medium!

 

Imagine, e.g., filming the night sky... 90% of the image will be below about 5% of peak brightness, i.e. all changes in most of the image will use only about 50 of the 1024 levels when recording 10-bit. That's with linear encoding; some clever algorithm might even assign fewer values, since the eye can certainly not see any differences in that area anyway.

 

13 stops of dynamic range, on the other hand, could of course in principle deliver 8192 levels, and roughly 400 of these would be available to reproduce grey levels in the very large dark part of the image.

Now you may say that the eye is not fit to resolve anything in this part anyway, which is true. But some clever post-production guy might want to bring up the Milky Way or even more subtle nebular structures. Guess which he's better off with when he starts stretching those 5% to make the contrasts visible...?
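The night-sky arithmetic above, spelled out (linear encoding assumed; the 5% cut-off is the illustrative figure from the post):

```python
DARK_FRACTION = 0.05   # everything below ~5% of peak brightness

codes_10bit = int(2 ** 10 * DARK_FRACTION)   # code values left for the dark 90% of the frame
codes_13bit = int(2 ** 13 * DARK_FRACTION)

print(codes_10bit)   # 51  ("only 50 different grey scale values")
print(codes_13bit)   # 409 ("roughly 400 of these")

# Stretching those shadows up to full 10-bit range leaves this many output steps per band:
print(1024 // codes_10bit)   # ~20 apart -- coarse, visible banding
print(1024 // codes_13bit)   # ~2 apart  -- much smoother gradients
```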

