What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?


Andrew - EOSHD

The high dynamic range (using DGO technology) in the Sony A7 V is for low-to-middle-ISO stills when using the mechanical shutter; DGO is not used for video, and there certainly won't be any 16-stop dynamic range at ISO 3200 or 8000. The claimed 16 stops is likely achieved on a significantly downsampled ISO 100 still image, with the criterion based on engineering dynamic range (SNR = 1).
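To see how a headline 16-stop figure can fall out of those two choices, here's a rough sketch (the sensor numbers are hypothetical, purely for illustration):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    # "Engineering" DR counts everything down to SNR = 1,
    # i.e. full-well capacity divided by the read-noise floor.
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor figures for illustration (not Sony's actual specs)
per_pixel = engineering_dr_stops(full_well_e=50_000, read_noise_e=1.5)

# Averaging N pixels in a downsample cuts noise by sqrt(N),
# which adds 0.5 * log2(N) stops to the measured figure.
n = 8
downsampled = per_pixel + 0.5 * math.log2(n)

print(f"per-pixel: {per_pixel:.1f} stops, downsampled: {downsampled:.1f} stops")
# per-pixel: 15.0 stops, downsampled: 16.5 stops
```

A photographic DR criterion (SNR of 10 or 20) would knock several stops off both numbers, which is why chart measurements and marketing claims rarely agree.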

 

Do the EOSHD website and the browsers used by visitors support high dynamic range photos on Super Retina XDR and other HDR screens? Otherwise, I'm not sure what the OP is looking to see. Having lower noise can't harm the image, and it's up to the user to make use of the higher fidelity or not.


On 12/14/2025 at 5:07 AM, Andrew Reid said:

I want to see it.

Real world subjects only (like landscape scenes, people, and so on).

For a still image?  Photons to Photos has never measured one - their highest measured DR is between 13 and 14 stops, on some Phase One. They have the A7V and the GFX 100 II about equal at ISO 80/100. If you want some landscape photos from my GFX 100 II, I can certainly share a few.

For video, it's not exactly a mirrorless (but it has the same sensor as the mirrorless S1R II), but supposedly the Ronin 4D 8K in DR expansion mode has 16.3 total stops on the CineD chart - though, like most cameras, a lot less than that at a usable SNR.


  • 3 weeks later...

I shoot in uncontrolled conditions, using only available light, and shoot what is happening with no directing and no do-overs.  This means I'm frequently pointing the camera in the wrong direction, shooting people backlit against the sunset, or shooting urban stuff in the midday sun with deep shadows in the shade in the same frame as direct sun hitting pure-white objects.

This was a regular headache on the GH5 with its 9.7/10.8 stops.  The OG BMPCC with 11.2/12.5 stops was MUCH better but still not perfect, and while I haven't used my GH7 in every possible scenario, so far its 11.9/13.2 stops are more than enough.

The only reason you need DR is if you want to heavily manipulate the shot in post by pulling the highlights down or lifting the shadows up.

Beyond the DR of the GH7 I can't think of many uses other than bragging rights.  When the Alexa 35 came out and DPs were talking about its extended DR, it was only in very specific situations that it really mattered.  

Rec709 only has about 6 stops of DR, so unless you're mastering for HDR (and if you are, umm - why?), adding more DR into the scene only gives you more headaches in post when you have to compress and throw away the majority of the DR in the image.
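To put rough numbers on that squeeze, here's a minimal sketch that uniformly compresses a hypothetical 12-stop scene into a 6-stop display range around mid-grey (real tone curves use a shoulder and toe rather than a uniform squeeze, but the ratio is the point):

```python
import math

def squeeze_stops(linear, scene_stops=12.0, display_stops=6.0, pivot=0.18):
    # Convert to stops relative to mid-grey, scale the stop range,
    # then convert back to linear. It's just a power function in
    # disguise, but it makes the discarded stops explicit.
    stops = math.log2(max(linear, 1e-6) / pivot)
    return pivot * 2.0 ** (stops * display_stops / scene_stops)

# A highlight 6 stops above mid-grey lands only 3 stops above it
print(squeeze_stops(0.18 * 2**6))  # ~1.44 instead of 11.52
```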


15 hours ago, kye said:

Rec709 only has about 6 stops of DR, so unless you're mastering for HDR (and if you are, umm - why?), adding more DR into the scene only gives you more headaches in post when you have to compress and throw away the majority of the DR in the image.

I also shoot mostly with available light, and after the sun has set, in the light of dim headlamps. So being able to push and pull shadows and highlights is extremely important. In that regard the GH7 is no slouch, but it is not quite the same as the Z6iii, ZR or even the S5ii.

If you have a good HDR-capable display (and I don't mean tiny phone, laptop or medium-sized displays, but a 65" or bigger OLED with infinite contrast, or a JVC projector with good contrast and inky blacks), you'd have to have a wooden eye not to notice the difference between SDR and HDR masters.
At least with my grading skills, the 6 stops of DR in SDR always look worse than what I can get from HDR.


11 hours ago, Jahleh said:

I also shoot mostly with available light, and after the sun has set, in the light of dim headlamps. So being able to push and pull shadows and highlights is extremely important. In that regard the GH7 is no slouch, but it is not quite the same as the Z6iii, ZR or even the S5ii.

If you have a good HDR-capable display (and I don't mean tiny phone, laptop or medium-sized displays, but a 65" or bigger OLED with infinite contrast, or a JVC projector with good contrast and inky blacks), you'd have to have a wooden eye not to notice the difference between SDR and HDR masters.
At least with my grading skills, the 6 stops of DR in SDR always look worse than what I can get from HDR.

I'm seeing a lot of connected things here.

To put it bluntly, if your HDR grades are better than your SDR grades, that's just a limitation of your grading skills.  I say this as someone who took an embarrassing amount of time to learn to colour grade myself, and even now I still feel like I'm not getting the results I'd like.
But this just goes to reinforce my original point - that one of the hardest challenges of colour grading is squeezing the camera's DR into the display space's DR.  The less squeezing required, the less flexibility you have in grading, but the easier it is to get something that looks good.  The average quality of colour grading dropped significantly when people went from shooting 709 and publishing 709 to shooting LOG and publishing 709.

Shooting with headlamps in situations where there is essentially no ambient light is definitely tough though - you're pushing the limits of what current cameras can do, and it's more than they were designed for!

Perhaps a practical step might be to mount a small light on the hot-shoe of the camera, just to fill in the shadows a bit.  Obviously it wouldn't be perfect, and would have the same proximity issues where things too close to the light are too bright and things too far away are too dark, but as the light is aligned with the direction the camera is pointing it will probably be a net benefit (and also won't disturb whatever you're doing too much).

In terms of noticing the difference between SDR and HDR, sure, it'll definitely be noticeable; I'd just question whether it's desirable.  I've heard a number of professionals speak about it and it's a surprisingly complicated topic.  Like a lot of things, the depth of knowledge and discussion online is embarrassingly shallow, and more reminiscent of toddlers eating crayons than educated people discussing the pros and cons of the subject.

If you're curious, the best free resource I'd recommend is "The Colour Book" from FilmLight.  It's a free PDF download (no registration required) from here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php

In case you're unaware, FilmLight are the makers of Baselight, which is the alternative to Resolve except it costs as much as a house.

The problem with the book is that when you download it, the first thing you'll notice is that it's 12 chapters and 300 pages.  Here's the uncomfortable truth though: to actually understand what is going on, you need a solid understanding of the human visual system (our eyes, our brains, what we can see, what we can't see, how our vision responds to the various situations we encounter, etc).  This explanation legitimately requires hundreds of pages because it's an enormously complex system, much more so than any reasonable person would ever guess.
This is the reason most discussions of HDR vs SDR are so comically rudimentary in comparison.  If camera forums had the same level of knowledge about cameras that they do about the human visual system, half the forum would be discussing how to navigate a menu, and the most fervent arguments would be about topics like whether cameras need lenses at all.


On 1/3/2026 at 10:16 PM, kye said:

I shoot in uncontrolled conditions, using only available light, and shoot what is happening with no directing and no do-overs.  This means I'm frequently pointing the camera in the wrong direction, shooting people backlit against the sunset, or shooting urban stuff in the midday sun with deep shadows in the shade in the same frame as direct sun hitting pure-white objects.

This was a regular headache on the GH5 with its 9.7/10.8 stops.  The OG BMPCC with 11.2/12.5 stops was MUCH better but still not perfect, and while I haven't used my GH7 in every possible scenario, so far its 11.9/13.2 stops are more than enough.

The only reason you need DR is if you want to heavily manipulate the shot in post by pulling the highlights down or lifting the shadows up.

Beyond the DR of the GH7 I can't think of many uses other than bragging rights.  When the Alexa 35 came out and DPs were talking about its extended DR, it was only in very specific situations that it really mattered.  

Rec709 only has about 6 stops of DR, so unless you're mastering for HDR (and if you are, umm - why?), adding more DR into the scene only gives you more headaches in post when you have to compress and throw away the majority of the DR in the image.

I think the one major use case for the high DR of the Alexa 35 is the ability to record fire with no clipping. It's a party trick really, but a cool one. It's kind of fun to see a clip from a show and be able to pick out the Alexa 35 shots simply because of the fire luminance. That said, it has no real benefit to the story.

I did notice a huge improvement in the quality of my doc shoots when moving from 9-10 stop cameras to 11-12 stop cameras, though. But beyond around 12-12.5 stops, I feel like there's a rapidly diminishing rate of return. 12 stops of DR, in my opinion, can record most of the real world in a meaningful way, and anything that clips outside of those 12 stops is normally fine being clipped. This means most modern cameras can record the real world in a beautiful, meaningful way if used correctly.


11 hours ago, kye said:

I'm seeing a lot of connected things here.

To put it bluntly, if your HDR grades are better than your SDR grades, that's just a limitation of your grading skills.  I say this as someone who took an embarrassing amount of time to learn to colour grade myself, and even now I still feel like I'm not getting the results I'd like.
But this just goes to reinforce my original point - that one of the hardest challenges of colour grading is squeezing the camera's DR into the display space's DR.  The less squeezing required, the less flexibility you have in grading, but the easier it is to get something that looks good.  The average quality of colour grading dropped significantly when people went from shooting 709 and publishing 709 to shooting LOG and publishing 709.

 

Appreciate the long reply, and the blunt start in response to my own blunt message earlier 😀 Part of the reason my HDR grades look better to me might be that the MacBook Pro has such a good HDR display, with an accurate P3-ST 2084 display color space for grading in a Rec.2020 ST 2084 1000-nit timeline in Resolve.

For SDR it has been a bit of a shit show in macOS: whether to use Rec.709-A, Rec.709 (Scene) or just Rec.709 gamma 2.4, and then whether to set the MacBook's display to the default Apple XDR Display (P3-1600 nits) or to HDTV Video (BT.709-BT.1886), which should be Rec.709 gamma 2.4 I believe, but makes the display much darker.

The other part could very well be what you wrote, that squeezing the camera's DR into the display space's DR is not easy. From the Canon 550D up to the GH5's 4K 8-bit Rec.709, I remember grading felt easier; the image looked as good or bad as it was shot, as there was not much room to correct it. But from the GH5's 5K 10-bit H.265 HLG onwards, things have gotten more complicated, as you have more room to try different things with the image.

11 hours ago, kye said:

Shooting with headlamps in situations where there is essentially no ambient light is definitely tough though - you're pushing the limits of what current cameras can do, and it's more than they were designed for!

Perhaps a practical step might be to mount a small light on the hot-shoe of the camera, just to fill in the shadows a bit.  Obviously it wouldn't be perfect, and would have the same proximity issues where things too close to the light are too bright and things too far away are too dark, but as the light is aligned with the direction the camera is pointing it will probably be a net benefit (and also won't disturb whatever you're doing too much).

Sorry for giving the impression that we use the headlamps only as intended, on the head. Usually we have two headlamps rigged to trees to softly light the whole point-of-interest area from two angles. A third one used as ambient light for the background is even better and can create some moody backgrounds instead of complete darkness. The brightness of the headlamps is adjustable too, as too brightly lit subjects won't look any good. In grading it can then be decided whether to black out the background completely and give more focus to the subject.

In that context shooting NRaw has worked pretty well: overexpose to just below the clipping point and bring it down in post, maybe lift the shadows a bit and add NR. The GH7 should have somewhat similar DR to the Z6iii and ZR, but for some unknown reason my grading skills can't get as good results from it. Of course it's also GH7 H.265 vs NRaw or R3D. In a normal, less challenging scenario there is not that big a difference between the GH7 and NRaw, but the difference is there nevertheless.

11 hours ago, kye said:

In terms of noticing the difference between SDR and HDR, sure, it'll definitely be noticeable; I'd just question whether it's desirable.  I've heard a number of professionals speak about it and it's a surprisingly complicated topic.  Like a lot of things, the depth of knowledge and discussion online is embarrassingly shallow, and more reminiscent of toddlers eating crayons than educated people discussing the pros and cons of the subject.

If you're curious, the best free resource I'd recommend is "The Colour Book" from FilmLight.  It's a free PDF download (no registration required) from here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php

In case you're unaware, FilmLight are the makers of Baselight, which is the alternative to Resolve except it costs as much as a house.

The problem with the book is that when you download it, the first thing you'll notice is that it's 12 chapters and 300 pages.  Here's the uncomfortable truth though: to actually understand what is going on, you need a solid understanding of the human visual system (our eyes, our brains, what we can see, what we can't see, how our vision responds to the various situations we encounter, etc).  This explanation legitimately requires hundreds of pages because it's an enormously complex system, much more so than any reasonable person would ever guess.
This is the reason most discussions of HDR vs SDR are so comically rudimentary in comparison.  If camera forums had the same level of knowledge about cameras that they do about the human visual system, half the forum would be discussing how to navigate a menu, and the most fervent arguments would be about topics like whether cameras need lenses at all.

I actually downloaded that book back when you brought up the subject in another thread. Just scrolled through the first half of it again. Very interesting subject. I wonder, if more camera forum people were into hi-fi and big-display technology too (not just monitors), whether it would make them more interested in where they view their end results, be it video or photos.

To my eyes, the very bright HDR videos that most people nowadays post straight from their phones to social media just burn the eyeballs out. Bluntly put, it looks like shit. I have had a proper 4K (not UHD) HDR projector for about 6 years (contrast ratio about 40,000:1, and it uses tone mapping for HDR), watched a good amount of SDR and HDR movies, series and my own content on it, and to my eyes well-graded HDR always has more information than well-graded SDR and is more pleasing to watch. This is also something I try to pursue with my grading, as 99.9% of it is for my own use, viewed on the big screen or, in the worst case, on the tiny 65" OLED. Before the good HDR projector I had many cheap SDR projectors (contrast ratio about 2,000:1 at best), and grading SDR for them was easy, as you could not see shit in the shadows anyway because of the projectors' contrast limitations.
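For anyone wondering what those contrast ratios mean in stops, a quick conversion (keeping in mind that on/off contrast flatters a projector; real in-scene contrast is much lower once room reflections get involved):

```python
import math

# Stops of on/off contrast = log2(contrast ratio)
for name, ratio in [("cheap SDR projector", 2_000),
                    ("HDR projector", 40_000)]:
    print(f"{name}: {math.log2(ratio):.1f} stops of on/off contrast")
# cheap SDR projector: 11.0 stops
# HDR projector: 15.3 stops
```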


7 hours ago, Benjamin Hilton said:

I think the one major use case for the high DR of the Alexa 35 is the ability to record fire with no clipping. It's a party trick really, but a cool one. It's kind of fun to see a clip from a show and be able to pick out the Alexa 35 shots simply because of the fire luminance. That said, it has no real benefit to the story.

I did notice a huge improvement in the quality of my doc shoots when moving from 9-10 stop cameras to 11-12 stop cameras, though. But beyond around 12-12.5 stops, I feel like there's a rapidly diminishing rate of return. 12 stops of DR, in my opinion, can record most of the real world in a meaningful way, and anything that clips outside of those 12 stops is normally fine being clipped. This means most modern cameras can record the real world in a beautiful, meaningful way if used correctly.

I agree with this 100%. More DR is great, but it's at the extremes of the image, and all the magic happens in the middle.


On 1/6/2026 at 1:58 AM, Benjamin Hilton said:

I think the one major use case for the high DR of the Alexa 35 is the ability to record fire with no clipping. It's a party trick really, but a cool one. It's kind of fun to see a clip from a show and be able to pick out the Alexa 35 shots simply because of the fire luminance. That said, it has no real benefit to the story.

I did notice a huge improvement in the quality of my doc shoots when moving from 9-10 stop cameras to 11-12 stop cameras, though. But beyond around 12-12.5 stops, I feel like there's a rapidly diminishing rate of return. 12 stops of DR, in my opinion, can record most of the real world in a meaningful way, and anything that clips outside of those 12 stops is normally fine being clipped. This means most modern cameras can record the real world in a beautiful, meaningful way if used correctly.

I remember the discussions about shooting scenes of people sitting around a fire, and the benefit was that it turned something that was a logistical nightmare for the grip crew into something that was basically like any other setup, potentially cutting days from a shoot schedule and easily justifying the premium on camera rental costs.

The way I see it is any camera advancement probably does a few things:

  • makes something previously routine much easier / faster / cheaper
  • makes something previously possible but really difficult into something that can be done with far less fuss and therefore the quality of everything else can go up substantially
  • makes something previously not possible become possible

...but the further the edge of possible/impossible advances, the fewer situations and circumstances are impacted.

Another recent example might be filming in a "volume", where the VFX background is on a wall around the character.  Having the surroundings there on set instead of added in post means camera angles, sight-lines, etc. can be worked out on the spot instead of operating blind, so acting and camera work can improve.


On 1/6/2026 at 2:26 AM, Jahleh said:

Appreciate the long reply, and the blunt start in response to my own blunt message earlier 😀 Part of the reason my HDR grades look better to me might be that the MacBook Pro has such a good HDR display, with an accurate P3-ST 2084 display color space for grading in a Rec.2020 ST 2084 1000-nit timeline in Resolve.

For SDR it has been a bit of a shit show in macOS: whether to use Rec.709-A, Rec.709 (Scene) or just Rec.709 gamma 2.4, and then whether to set the MacBook's display to the default Apple XDR Display (P3-1600 nits) or to HDTV Video (BT.709-BT.1886), which should be Rec.709 gamma 2.4 I believe, but makes the display much darker.

My advice is to forget about "accuracy".  I've been down the rabbit-hole of calibration and discovered it's actually a mine-field, not a rabbit hole, and there's a reason there are professionals who do this full-time - the tools are structured in a way that deliberately prevents people from being able to do it themselves.

But, even more importantly, it doesn't matter.  You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable.  Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client and then only accept notes from the client when viewed on the device the colourist shipped them.

It's not even that the devices out there aren't calibrated, or even that manufacturers now ship things with motion smoothing and other hijinks on by default; it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't even tagged and interpreted correctly.

On 1/6/2026 at 2:26 AM, Jahleh said:

The other part could very well be what you wrote, that squeezing the camera's DR into the display space's DR is not easy. From the Canon 550D up to the GH5's 4K 8-bit Rec.709, I remember grading felt easier; the image looked as good or bad as it was shot, as there was not much room to correct it. But from the GH5's 5K 10-bit H.265 HLG onwards, things have gotten more complicated, as you have more room to try different things with the image.

Here's an experiment for you.

Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile.  Use the default 709 colour profile without any modifications.

Then in post, take the LOG shots and try to match each one to its respective 709 image manually, using only normal grading tools (not plugins or LUTs).
Then try just grading each of the LOG shots to look nice, using only normal tools.

If your high-DR scene involves actually having the sun in-frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, a CST into other camera spaces with their manufacturers' LUTs, etc.

On 1/6/2026 at 2:26 AM, Jahleh said:

Sorry for giving the impression that we use the headlamps only as intended, on the head. Usually we have two headlamps rigged to trees to softly light the whole point-of-interest area from two angles. A third one used as ambient light for the background is even better and can create some moody backgrounds instead of complete darkness. The brightness of the headlamps is adjustable too, as too brightly lit subjects won't look any good. In grading it can then be decided whether to black out the background completely and give more focus to the subject.

Gotcha.  I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and have them further away.  The inverse-square law is what is giving you the DR issues.
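To put numbers on that falloff, a tiny sketch (the distances are made up, just to show the shape of the problem):

```python
import math

def falloff_stops(d_near_m, d_far_m):
    # Illuminance goes as 1/d^2, so the exposure difference between
    # two distances from the same lamp is 2 * log2(d_far / d_near).
    return 2 * math.log2(d_far_m / d_near_m)

print(falloff_stops(1.0, 4.0))  # 4.0 stops: lamp 1 m from subject, 4 m from background
print(falloff_stops(4.0, 7.0))  # ~1.6 stops: same 3 m gap, lamp pulled further back
```

Same gap between subject and background, but pulling the lamp back (and turning it up) more than halves the stop difference.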

On 1/6/2026 at 2:26 AM, Jahleh said:

In that context shooting NRaw has worked pretty well: overexpose to just below the clipping point and bring it down in post, maybe lift the shadows a bit and add NR. The GH7 should have somewhat similar DR to the Z6iii and ZR, but for some unknown reason my grading skills can't get as good results from it. Of course it's also GH7 H.265 vs NRaw or R3D. In a normal, less challenging scenario there is not that big a difference between the GH7 and NRaw, but the difference is there nevertheless.

That's like comparing two cars, but one is stuck in first gear.  Compare N-RAW with Prores RAW (or at least Prores HQ) on the GH7.

I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar so your grading techniques will be applicable to both and be less of a variable in the equation.

On 1/6/2026 at 2:26 AM, Jahleh said:

I actually downloaded that book back when you brought up the subject in another thread. Just scrolled through the first half of it again. Very interesting subject. I wonder, if more camera forum people were into hi-fi and big-display technology too (not just monitors), whether it would make them more interested in where they view their end results, be it video or photos.

People interested in technology are not interested in human perception.  

Almost everyone interested in "accuracy" will either avoid such a book out of principle, or will die of shock while reading it.  The impression that I was left with after I read it was that it's amazing that we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc) is so far away from how we see that asking "how many megapixels is the human eye" is sort-of like asking "What does loud purple smell like?".

Did you get to the chapter about HDR?  I thought it was more towards the end, but could be wrong.

On 1/6/2026 at 2:26 AM, Jahleh said:

To my eyes, the very bright HDR videos that most people nowadays post straight from their phones to social media just burn the eyeballs out. Bluntly put, it looks like shit. I have had a proper 4K (not UHD) HDR projector for about 6 years (contrast ratio about 40,000:1, and it uses tone mapping for HDR), watched a good amount of SDR and HDR movies, series and my own content on it, and to my eyes well-graded HDR always has more information than well-graded SDR and is more pleasing to watch. This is also something I try to pursue with my grading, as 99.9% of it is for my own use, viewed on the big screen or, in the worst case, on the tiny 65" OLED. Before the good HDR projector I had many cheap SDR projectors (contrast ratio about 2,000:1 at best), and grading SDR for them was easy, as you could not see shit in the shadows anyway because of the projectors' contrast limitations.

Yes - the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car.

This is all for completely predictable and explainable reasons... which are all in the colour book.

I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.

I don't know how much you know about the Apple Gamma Shift issue (you spoke about it before but I don't know if you actually understand it deeply enough) but I watched a great ~1hr walk-through of the issue and in the end the conclusion is that because the device doesn't know enough about the viewing conditions under which the video is being watched, the idea of displaying an image with any degree of fidelity is impossible, and the gamma shift issue is a product of that problem.

Happy to dig up that video if you're curious.  Every other video I've seen on the subject covered less than half of the information involved.
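For a rough feel of what a transfer mismatch like that does to an image, here's a toy calculation; the ~1.96 gamma is the figure commonly cited for QuickTime's Rec.709 handling, so treat it as an assumption rather than gospel:

```python
# A pixel graded on a BT.1886 / gamma 2.4 reference display,
# then decoded by a player assuming a lighter ~1.96 transfer
# (the figure commonly cited for QuickTime's Rec.709 handling).
code_value = 0.25                  # dark-ish encoded value, 0..1

intended  = code_value ** 2.4      # luminance the colourist saw
displayed = code_value ** 1.96     # luminance the viewer gets

print(f"intended {intended:.4f}, displayed {displayed:.4f}, "
      f"shadows lifted {displayed / intended:.2f}x")
# intended 0.0359, displayed 0.0662, shadows lifted 1.84x
```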


6 hours ago, kye said:

My advice is to forget about "accuracy".  I've been down the rabbit-hole of calibration and discovered it's actually a mine-field, not a rabbit hole, and there's a reason there are professionals who do this full-time - the tools are structured in a way that deliberately prevents people from being able to do it themselves.

But, even more importantly, it doesn't matter.  You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable.  Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client and then only accept notes from the client when viewed on the device the colourist shipped them.

It's not even that the devices out there aren't calibrated, or even that manufacturers now ship things with motion smoothing and other hijinks on by default; it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't even tagged and interpreted correctly.

It seems we have gone through the same rabbit holes already 🙂 Back in the day with MacBook Airs I tried to calibrate their displays for WB, but it was hit and miss. Since then I have not bothered. By accuracy I just meant that the HDR timeline looks the same in Resolve as the exported file does on my MacBook Pro screen, on the MBP connected to my projector or OLED via HDMI, or when I am watching the video from Vimeo.
Like you said, the real pain is other people's displays. You have no way of knowing what they are capable of, or how they will interpret the video and its gamma tags, even if set properly, with or without HDR10 tags. The quick way to check this mess is to take your video to your phone. If it looks the same, start the Instagram etc. post. It usually messes up the gamma at the first step if the tags are wrong or Meta just can't understand them. After 30 days it degrades the HDR to SDR anyway; the highlights are butchered and the quality is gone. And IG stories, for example, just don't seem to understand HDR at all. Also, my iPad has only an SDR display, and watching my HDR videos on it is not pretty.

6 hours ago, kye said:

Here's an experiment for you.

Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile.  Use the default 709 colour profile without any modifications.

Then in post, take the LOG shots and try to match each one to its respective 709 image manually, using only normal grading tools (not plugins or LUTs).
Then try just grading each of the LOG shots to look nice, using only normal tools.

If your high-DR scene involves actually having the sun in-frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, a CST into other camera spaces with their manufacturers' LUTs, etc.

Usually I don't use LUTs, as I prefer Resolve's color-managed pipeline, but I've also tried non-color-managed with LUTs and CSTs. I've tried the non-LOG SDR profiles on the Z6iii and on various Panasonic cameras, as well as LOG with a LUT baked in, and did not like them. They clipped earlier than LOG, though they were cleaner in the shadows if you needed to raise them.

I usually grade my footage to SDR first, because I want to take screen captures as images from it. Then I duplicate the timeline, set it to HDR, adjust the grade and compare both. I've spent quite a lot of time getting both looking good, but still almost always HDR looks better, as it should, since it has 10x the brightness (1000 nits vs 100 nits) and a wider color space. Some auto correction of the SDR video in the iPhone usually takes it closer to my HDR grade, so clearly my grading skills are just lacking when pushing the 11-12 stops of DR into SDR 😆
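That 10x works out to about 3.3 stops of extra highlight headroom, which is roughly what the HDR grade gets to keep instead of compress:

```python
import math

sdr_peak_nits, hdr_peak_nits = 100, 1000
print(f"{math.log2(hdr_peak_nits / sdr_peak_nits):.2f} stops")  # 3.32 stops
```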

7 hours ago, kye said:

Gotcha.  I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and have them further away.  The inverse-square law is what is giving you the DR issues.

With the S5ii in the same low-light situations with headlamps you could say I had problems, as the image always looked quite bad. Now with the ZR, Z6iii and GH7 the results are much better in that regard, I would say. Dimmer lights are always better than too-bright ones, or ones placed too close to the subject.

7 hours ago, kye said:

That's like comparing two cars, but one is stuck in first gear.  Compare N-RAW with Prores RAW (or at least Prores HQ) on the GH7.

I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar so your grading techniques will be applicable to both and be less of a variable in the equation.

The first thing I did after getting the GH7 was shoot Prores RAW HQ, Prores RAW, Prores and H.265 on it and compare them. Recently I also shot R3D, NRaw and Prores RAW on the ZR, and just did not like the Prores RAW. Its raw panel controls were limited, and it looked just worse, or needed more adjusting. On the GH7 the Prores RAW was maybe slightly better than its H.265, but the file sizes were bigger than 6K 50p R3D on the ZR 🙄

I have made power grades for all the Panasonic cameras I've had, for Z6iii NLog and NRaw, for ZR R3D, and also for iPhone Prores. So the pipeline is pretty similar no matter what footage I grade. I also have a node set to each specific camera's color space and gamma for easier exposure and WB changes when they can't be done in the raw panel.

The best option at the moment, in my opinion, is NRaw, as its file size is half that of R3D, and trimming and saving the trimmed NRaw files in Resolve works too. R3D is slightly better in low light, but as long as saving only the trimmed parts does not work you need to save everything, and that sucks, big time.

7 hours ago, kye said:

People interested in technology are not interested in human perception.  

Almost everyone interested in "accuracy" will either avoid such a book out of principle, or will die of shock while reading it.  The impression that I was left with after I read it was that it's amazing that we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc) is so far away from how we see that asking "how many megapixels is the human eye" is sort-of like asking "What does loud purple smell like?".

Did you get to the chapter about HDR?  I thought it was more towards the end, but could be wrong.

The HDR section was right in the middle of the book, and the last thing I read. If someone prefers a projected image over TVs and monitors, I would think they also care about how the image is experienced, how it feels to look at the image (after you get over the hi-fi nerd phase of adjusting it). At least I do, even though my OLED's specs are superior to my projector's. So the specs are not everything, even though they are important.

7 hours ago, kye said:

Yes - the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car.

This is all for completely predictable and explainable reasons... which are all in the colour book.

I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.

I don't know how much you know about the Apple Gamma Shift issue (you spoke about it before but I don't know if you actually understand it deeply enough) but I watched a great ~1hr walk-through of the issue and in the end the conclusion is that because the device doesn't know enough about the viewing conditions under which the video is being watched, the idea of displaying an image with any degree of fidelity is impossible, and the gamma shift issue is a product of that problem.

Happy to dig up that video if you're curious.  Every other video I've seen on the subject covered less than half of the information involved.

Yes, delivering HDR to social media or directly to other people's displays, you never know how their display will interpret and show the image. Like you said, it is an even bigger mess than with SDR, but to me it's worth exploring, as my delivery is mostly to my own displays, and delivery to Vimeo somewhat works.

The Apple gamma shift issue should be fixed by now for SDR. I watched some YT video about it linked on the Resolve forum, claiming that Rec.709 (Scene) should fix everything, but it is not that straightforward.

The brightness of the conditions you grade in has an effect too. If you grade in a dark room and another person watches the end result in bright daylight, it very likely does not look as intended. With projectors it is even worse: your room will mess up your image very easily, no matter how good a projector you have.

Appreciate all the input; I'm not trying to argue with you. I would say these are more matters of opinion. And the deeper you dig, the more you realize it is a mess, even more so with HDR.


8 hours ago, kye said:

My advice is to forget about "accuracy".  I've been down the rabbit-hole of calibration and discovered it's actually a mine-field, not a rabbit hole, and there's a reason there are professionals who do this full-time - the tools are structured in a way that deliberately prevents people from being able to do it themselves.

But, even more importantly, it doesn't matter.  You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable.  Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client and then only accept notes from the client when viewed on the device the colourist shipped them.

It's not even that the devices out there aren't calibrated, or even that manufacturers now ship things with motion smoothing and other hijinks on by default; it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't even tagged and interpreted correctly.

Here's an experiment for you.

Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile.  Use the default 709 colour profile without any modifications.

Then in post, take the LOG shots and try to match each one to its respective 709 image manually, using only normal grading tools (not plugins or LUTs).
Then try just grading each of the LOG shots to look nice, using only normal tools.

If your high-DR scene involves actually having the sun in-frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, a CST into other camera spaces with their manufacturers' LUTs, etc.

Gotcha.  I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and have them further away.  The inverse-square law is what is giving you the DR issues.

That's like comparing two cars, but one is stuck in first gear.  Compare N-RAW with Prores RAW (or at least Prores HQ) on the GH7.

I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar so your grading techniques will be applicable to both and be less of a variable in the equation.

People interested in technology are not interested in human perception.  

Almost everyone interested in "accuracy" will either avoid such a book out of principle, or will die of shock while reading it.  The impression that I was left with after I read it was that it's amazing that we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc) is so far away from how we see that asking "how many megapixels is the human eye" is sort-of like asking "What does loud purple smell like?".

Did you get to the chapter about HDR?  I thought it was more towards the end, but could be wrong.

Yes - the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car.

This is all for completely predictable and explainable reasons... which are all in the colour book.

I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.

I don't know how much you know about the Apple Gamma Shift issue (you spoke about it before but I don't know if you actually understand it deeply enough) but I watched a great ~1hr walk-through of the issue and in the end the conclusion is that because the device doesn't know enough about the viewing conditions under which the video is being watched, the idea of displaying an image with any degree of fidelity is impossible, and the gamma shift issue is a product of that problem.

Happy to dig up that video if you're curious.  Every other video I've seen on the subject covered less than half of the information involved.

 

How about using Dolby Vision? On supported devices and streaming services, with suitably prepared videos, it adjusts the image to the device's capabilities automatically, and can even do this on a scene-by-scene basis. I have not tried exporting my own videos for Dolby Vision yet, but it seems to work very nicely on my Sony XR-48A90K TV. The TV adjusts itself based on ambient light, and Dolby Vision adjusts the video content to the capabilities of the device. It also seems to be supported on my Lenovo X1 Carbon G13 laptop.

 

High-dynamic-range scenes are quite common: when the sun is in the frame, for example, or at night after the sky has gone completely dark, if one does not want blown-out lamps or very noisy shadows in dark places. In landscape photography, people sometimes bracket up to 11 stops to avoid blowing out the sun, and it takes quite a bit of artistry to map that in a beautiful way onto SDR displays or paper. This kind of bracketing is unrealistic for video, so the native dynamic range of the camera becomes important.

For me it is usually more important to have reasonably good SNR in the main subject in low-light conditions than dynamic range, as in video it's not possible to use very slow shutter speeds or flash. From this point of view I can understand why Canon went for three native ISOs in their latest C80/C400 instead of the dynamic-range-optimised DGO technology of the C70/C300 III. For documentary videos with limited lighting options (one-person shoots), high-ISO image quality is probably a higher priority than dynamic range at the lowest base ISO, given how good the latter already is on many cameras. However, I'd take more dynamic range any day if it were offered without making the camera larger or much more expensive - not because I want to produce HDR content, but because the scenes are what they are, and usually for what I do the use of lighting is not possible.
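As a rough sketch of why an 11-stop bracket covers scenes no single exposure can (the numbers here are hypothetical): the capturable range is roughly one frame's own DR plus the exposure span of the bracket.

```python
import math

def bracketed_dr(single_frame_dr, shortest_s, longest_s):
    # Total range of a merged bracket: one frame's own DR plus the
    # stop span between the longest and shortest exposures.
    return single_frame_dr + math.log2(longest_s / shortest_s)

# A 12-stop stills camera bracketing 1/4000 s to 1/2 s (an ~11-stop span)
print(f"{bracketed_dr(12, 1/4000, 1/2):.1f} stops")  # ~23.0
```

Video has no equivalent of that 1/2 s frame, which is exactly why the single-frame DR of the sensor matters so much more there.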
