
SIGMA FP with ProRes RAW and BRAW !


Trankilstef

2 minutes ago, TomTheDP said:

Just trying to find the ideal color workflow. I am sure for the majority of people this is a pointless discussion.

I didn't mean any disrespect to you or anyone specifically. I've been very interested in the FP since it was released. It seems like a natural progression for me, but it is also perhaps too similar. It also seems like the true successor to the OG Pocket.

With that said, the FP is basically a camera with 12 stops of DR and 2 native ISOs. As long as you protect the highlights, the rest can be tweaked in post.

As far as color... there seem to be a dozen different ways to get to a similar point. You can use the Camera Raw Panel in Rec709 and adjust your exposure, WB and saturation. You can process the clips as BM Film, then use LUTs or grade by hand, or you can do a CST/ACES Transform and process as LogC. Anything more than those 3 is probably more complicated than necessary for a $1500 camera.

I process my footage through Resolve as BM Film or sometimes as a LogC CST... depending on the footage... then export as ProRes 4444 for a final color and edit in FCPX. I've also found that processing the footage as BM Film and then using the Ursa 4.6K to Rec709 LUT really opens up the waveform and utilizes most of the available DR of the image. At that point you can adjust your skin tones to where you want them in the Raw Panel... etc. An aggressive highlight curve can usually add some depth to your highlights.

Anyway, just my two cents.


On 5/7/2022 at 7:23 PM, hyalinejim said:

That's pretty cool, how was that chart generated? I find it hard to figure out sometimes how a Colorchecker should be represented by a given camera. Here the rendering looks a bit low contrast and darker than it should be. Patch D4 is just above 18% grey in reality so should be represented by at least 126, 126, 126 RGB in Rec709. And the brightest patch is 90% reflectance so it should be considerably brighter than shown here.

This is what the spectral data provided by X-Rite looks like in a linear gamma, converted to sRGB:

[Image: ColorChecker24_After_Nov2014_RGB_16-bit.jpg]

But of course a camera recording of this would be in a different gamma. Nevertheless, I always feel that D4 should be at whatever the middle grey value of the colour space is. Here's a photo of a Colorchecker using the Adobe Standard picture profile:

[Image: MHM_9547.jpg]

The D4 patch is the same but the highlights are lower (there's a highlight roll-off rather than the clipping at 100% reflectance which you would get in scene linear) and the shadows are darker.

All this is to say: be wary of digital representations of colour charts for video applications. The spectral data file provided by X-Rite is useful for calibrating the linear colour response of RAW camera profiles, but there are gamma issues to consider when matching actual shots to representations of charts: the gamma of the camera may not (and perhaps should not) match the gamma of the generated chart.

In this case, if somebody can get the different flavours of DNG into a scene linear gamma with no highlight recovery, then the middle image above is what the visual reference should be for testing which flavour has the more accurate colour.

I got the linear ACES image from the colour-science.org github repository:

https://github.com/colour-science/colour-nuke/tree/master/colour_nuke/resources/images/ColorChecker2014

The author and maintainer is one of the core ACES contributors and is also a colour scientist at Weta. There are definitely variables though, yes. For example, I output to Rec709 based on the ACES rrt/odt, so it's been passed through the ACES procedurally generated idealised "film print emulation", which is somewhat like Arri K1S1, but more contrasty. And since the output intent is for Rec709, it's going to be different to your sRGB rendering. Additionally, it would obviously be easier to view these both with or without the borders, because that affects perceptual rendering by our eyes.

I try to cut out variables such as "what is the value of middle grey" by viewing camera data in linear space and in a common wide gamut - ACES is designed for this. It's meant to be a "linear to light" representation of the scene reflectance. So it cuts out the variables of different gamma encodings and wide gamuts (Adobe, et al) for a ground truth that can be used to transform to and from other colour spaces accurately.

While I pasted a Rec709 image, it's a perceptual rendering viewed in an sRGB web browser, but I just saw it as a rendering relative to the other Rec709 images on this thread. Ultimately, I would check the colour values in ACES linear space underneath the viewer LUT in Nuke. For example, on my Macbook Pro, I have the ACES output transform set to P3D65 but the underlying linear floating point values never change.

This need for scene-linear data is why a white paper from the manufacturer is necessary. At minimum it would describe a transform through an IDT, or ideally it would come from measuring the spectral sensitivities of that particular sensor. Since the Sigma fp doesn't have that, Resolve building an IDT from the matrices stored in a DNG file is a next-best fallback compared to a dedicated IDT based on measured spectral sensitivity data.

At this stage, the notion that an ACES input transform of a Sigma fp DNG image is accurate is based on Sigma putting accurate metadata there. I'm fairly positive it's solid though, based on the large format DP Pawel Achtel's spectral sensitivity tests comparing favourably against other cameras like the original Sony Venice.

I also would not measure a colour in RGB integer values, but rather floating point values where, for example, 18% grey is 0.18/0.18/0.18 as an RGB triplet. I can sanity check my scene exposure and colour rendition by exposing with a light meter, importing the resulting image through an IDT, and checking that the grey value exposure is close to 0.18. I can then balance the grey value to a uniform 0.18 in each channel and see where the other chips fall, to get an idea of what kind of "neutral" image a camera creates as a starting point before applying a "look", such as a print film emulation that requires an Arri LogC intermediate transform.
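To make that concrete, here's a minimal numpy sketch of the kind of check I'm describing - the file name and patch coordinates are just placeholders, and it assumes you've already got a scene-linear ACES image loaded as a float array (e.g. an EXR exported from Resolve):

```python
import numpy as np

MIDDLE_GREY = 0.18

def check_and_balance_grey(img, patch_slice):
    """Sample a grey chip in a scene-linear image, report how far it is from
    0.18, and return per-channel gains that neutralise it."""
    patch = img[patch_slice].reshape(-1, 3).mean(axis=0)
    stops_off = np.log2(patch.mean() / MIDDLE_GREY)
    print("grey patch RGB:", patch, "| exposure offset: %.2f stops" % stops_off)
    gains = MIDDLE_GREY / patch
    return img * gains, gains

# hypothetical usage:
# img = load_exr("fp_chart_aces.exr")   # any linear float image loader
# balanced, gains = check_and_balance_grey(img, np.s_[400:450, 600:650])
```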

The level of accuracy I'm describing is at a baseline level to get a decent image without too much effort or technical knowledge required - definitely enough for us here!

 


8 hours ago, hyalinejim said:

In my experience the Adobe profiles (Color, Standard and Neutral - they are basically 3 different contrast levels of the same profile) are quite accurate in terms of colour for all the cameras I've looked at.

If colour accuracy is the goal in Resolve then the Adobe rendering will give a good indication of what to look out for.

But of course many people (myself included) think that accurate colour is not necessarily the most pleasing colour.

A note of caution about the Adobe profiles (I assume you mean the ones that come with Lightroom) is that they apply a hue twist to the DNG metadata to make it "look good". So if the goal is to get a baseline representation of the scene reflectance as captured by the sensor, as a starting point for, say, transforming to Alexa LogC, then you're going to get a more accurate transform from as neutral a processing chain as possible.

So the linear raw ACES image through the default output transform LUT for the viewing device may look neutral and somewhat unappealing. Although sometimes it looks fucking great with some cameras - IMO the Sony a7sIII/FX3/FX6 just look great through the default ACES IDT/rrt/odt.

I mean with this camera, the linear raw ACES image based on an "OFF" DNG actually looks a tad desaturated. I don't know for sure though, without more tests, if that's really the case. The aim would be to then apply an additional look modification transform if necessary (additional contrast and saturation, print film emulation, etc.) to get the look that you want.

It will also be worth comparing how the interpreted DNG "OFF" image compares to the captured ProRes RAW "OFF" image, i.e. are they the same? Did Atomos improve on anything? etc.


12 minutes ago, Llaasseerr said:

I try to cut out variables such as "what is the value of middle grey"

But middle grey is not (or shouldn't be) a variable. It's usually understood to represent 18% reflectivity.

14 minutes ago, Llaasseerr said:

I also would not measure a colour in RGB integer values

Why not? You totally can, and should. 18% grey is 119 in sRGB, 100 in ProPhoto, 126 in Rec709, 50 in Lab. These are known values and you can use them to check things like exposure but also the validity of your workflow. In the image you posted above, I'm pointing out that the value of middle grey is too low. So something may be "wrong" in your image pipeline.

This is the whole point of a Colorchecker, to compare your results with what those values are supposed to be. If you're getting the D4 patch at 119 (actually more like 122 as it's a bit brighter than 18%) you know that your exposure is correct. If it's lower maybe the exposure was too low or you're using the wrong gamma curve.


5 hours ago, mercer said:

I may be completely wrong here, but it seems like a lot of people are overthinking the workflow of this camera... from exposure to color... I don't know if it should be this problematic. Perhaps I am so used to workarounds with my 5D3 and ML Raw that I don't understand the issues. But it shouldn't be this difficult.

You're right, it shouldn't be this difficult, but Sigma have not published any accurate information about the camera's colour science, and I don't think they really know the camera's spectral characteristics or how to communicate a modern colour science pipeline to a DP, even though they claim the camera is aimed at professional DPs who know what they're doing.

It depends on how accurate you want to be and how predictable you want your workflow to be without just twiddling knobs in Resolve on a per-shot basis until it looks good.

I'm happy with the baseline accuracy I get with the DNG tags being interpreted in an ACES project in Resolve. That's why I started posting in this thread: to tell other people that it's a valid starting point that's not gaslighting you.

But then I started being told by other users about the weird inconsistent behaviour with ISO and how it varies depending on whether you're capturing internally or with a Ninja V. Then there's the fact that the false colours don't update in some cases between ISO 100-800, or the fact that the DNGs look different based on whether the colour profile is set to ON or OFF. Also the complete lack of ability to monitor the highlights, unless I guess you underexpose the image with an ND and add a LUT on an external monitor to bring the exposure back up to where it will be in post.

Having said all that, it's still a great, intriguing camera. I would really like it if this and the FX3 were smushed together with an extra stop of dynamic range.

 

 


11 minutes ago, Llaasseerr said:

they apply a hue twist to the DNG metadata to make it "look good"

Even applying a simple curve to linear data will alter the hue. Perhaps the Adobe Profiles try to compensate for this so that hues stay accurate.


7 minutes ago, hyalinejim said:

But middle grey is not (or shouldn't be) a variable. It's usually understood to represent 18% reflectivity.

Why not? You totally can, and should. 18% grey is 119 in sRGB, 100 in ProPhoto, 126 in Rec709, 50 in Lab. These are known values and you can use them to check things like exposure but also the validity of your workflow. In the image you posted above, I'm pointing out that the value of middle grey is too low. So something may be "wrong" in your image pipeline.

This is the whole point of a Colorchecker, to compare your results with what those values are supposed to be. If you're getting the D4 patch at 119 (actually more like 122 as it's a bit brighter than 18%) you know that your exposure is correct. If it's lower maybe the exposure was too low or you're using the wrong gamma curve.

Sorry for the misunderstanding, I'm not saying that it's a variable, I'm saying it's a constant. I got the impression that others were saying it was a variable and I'm saying I'm cutting that out.

And to be clear, I don't need to know the RGB integer value of middle grey in any of those other color spaces if I know that it's 0.18 in scene linear. Checking exposure is also infinitely easier in linear space because it behaves the same as exposure does in the real world. That's the beauty of scene referred, linear to light imaging.

The other values you're talking about are display referred, influenced not just by luminance but also by the primaries, which could change the RGB weighting. So there's plenty of room for confusion.
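A toy example of what I mean about exposure in linear (nothing camera-specific here):

```python
# One stop in scene linear is just a factor of two, whatever the pixel value,
# and values above 1.0 are perfectly legal - nothing clips in floating point.
middle_grey = 0.18
print(middle_grey * 2 ** 1)   # 0.36 -> one stop up
print(middle_grey * 2 ** 3)   # 1.44 -> three stops up, still a valid scene value

# In a display-referred 8-bit encoding none of that holds: doubling 119 gives 238,
# which is nowhere near "one stop brighter", and anything past 255 is simply gone.
```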

I've worked on a lot of very large blockbuster films for Disney, Lucasfilm, etc and helped set up their colour pipeline in some cases, so I know what I'm talking about.


6 minutes ago, hyalinejim said:

Even applying a simple curve to linear data will alter the hue. Perhaps the Adobe Profiles try to compensate for this so that hues stay accurate.

The metadata in the DNG provides the transforms to import a neutral linear image in the sensor native gamut. Anything Adobe adds on top of that is purely their recipe for what "looks good", and in a film imaging pipeline, typically on a cg-heavy film, source images that have been processed through Lightroom will typically need the Adobe special sauce hue twists removed so that the image is true to the captured scene reflectance.


23 minutes ago, hyalinejim said:

But middle grey is not (or shouldn't be) a variable. It's usually understood to represent 18% reflectivity.

Why not? You totally can, and should. 18% grey is 119 in sRGB, 100 in ProPhoto, 126 in Rec709, 50 in Lab. These are known values and you can use them to check things like exposure but also the validity of your workflow. In the image you posted above, I'm pointing out that the value of middle grey is too low. So something may be "wrong" in your image pipeline.

This is the whole point of a Colorchecker, to compare your results with what those values are supposed to be. If you're getting the D4 patch at 119 (actually more like 122 as it's a bit brighter than 18%) you know that your exposure is correct. If it's lower maybe the exposure was too low or you're using the wrong gamma curve.

Just as a follow-up, I want to reiterate that I only posted the ACES Rec709 rendering of the colorchecker as a point of comparison to the Rec709 ACES "OFF" image I posted that had the colorchecker in frame, so that people could look at an idealised synthetic version of the colorchecker vs an actual photographed colorchecker in the scene that had gone through the same imaging pipeline. So relative to each other, those two images were able to be compared. There definitely was nothing absolute about the RGB values of that colorchecker I posted.


1 hour ago, Llaasseerr said:

You're right, it shouldn't be this difficult, but Sigma have not published any accurate information about the camera's colour science, and I don't think they really know the camera's spectral characteristics or how to communicate a modern colour science pipeline to a DP, even though they claim the camera is aimed at professional DPs who know what they're doing.

It depends on how accurate you want to be and how predictable you want your workflow to be without just twiddling knobs in Resolve on a per-shot basis until it looks good.

I'm happy with the baseline accuracy I get with the DNG tags being interpreted in an ACES project in Resolve. That's why I started posting in this thread: to tell other people that it's a valid starting point that's not gaslighting you.

But then I started being told by other users about the weird inconsistent behaviour with ISO and how it varies depending on whether you're capturing internally or with a Ninja V. Then there's the fact that the false colours don't update in some cases between ISO 100-800, or the fact that the DNGs look different based on whether the colour profile is set to ON or OFF. Also the complete lack of ability to monitor the highlights, unless I guess you underexpose the image with an ND and add a LUT on an external monitor to bring the exposure back up to where it will be in post.

Having said all that, it's still a great, intriguing camera. I would really like it if this and the FX3 were smushed together with an extra stop of dynamic range.

 

 

No matter how you look at it, the FP is barely a prosumer camera. It apparently has its quirks, but it seems to me that as long as you protect your highlights, or offset the in-camera meter reading by whatever the inconsistency is, then you should be able to get a consistent exposure... if it's reading a stop under, then adjust accordingly.

As I said previously, I don't know a lot about ACES and have found your contribution to the discussion very interesting.

I wasn't really speaking of anybody specifically. Since the FP was released, there has been a steady flow of discussions that make it seem like the camera is almost unusable. And I have seen more than enough samples from the FP that scream otherwise, and most of the creators of those videos rarely discuss these issues. So it seems that some of the complaints are either overblown or created from theory rather than practicality. I assume we have all shot in imperfect scenarios and we just have to make it work.

But I'm usually the minority around here. I see the FP as what it is... the smallest FF raw video camera and that's how I would use it... a handgrip, a lens and a camera strap. Other than the added size due to the SSD, I feel like I should be able to slip it into a pocket in between shots. But I shoot DIY, run and gun narratives, so YMMV.


We're probably at risk of going off on one or more tangents here, but it's an interesting conversation to me so I'll go ahead.

I understand that the image you posted wasn't supposed to be at the correct luminance level. Nevertheless:

1 hour ago, Llaasseerr said:

The other values you're talking about are display referred, influenced not just by luminance but also by the primaries, which could change the RGB weighting.

But the primaries for each of those colour spaces are defined, no? So there should be no variance in the RGB integers for middle grey. Correct me if I'm wrong, but my understanding is that the value for middle grey (18% reflectance) in, for example, sRGB colour space is 119, 119, 119. It's not 0, 0, 0 nor is it 255, 255, 255. And in your example of 0.18 in linear light, if you transform correctly to sRGB you should end up with 119. If it's any other value then something is "incorrect" in the transformation. Just as middle grey is a constant, so are the characteristics of each colour space with defined primaries and gamma.
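For what it's worth, a quick sketch of the maths behind those targets - which exact integer you land on depends on which curve you treat as "Rec709" (the camera OETF, a 2.4 display gamma, or the sRGB curve), so the numbers below are illustrative rather than gospel:

```python
def srgb_encode(l):
    """IEC 61966-2-1 sRGB transfer function (linear 0-1 -> encoded 0-1)."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def bt709_oetf(l):
    """ITU-R BT.709 camera OETF (scene linear -> encoded)."""
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

grey = 0.18
for name, v in [("sRGB", srgb_encode(grey)),
                ("BT.709 OETF", bt709_oetf(grey)),
                ("2.4 display gamma", grey ** (1 / 2.4))]:
    print(f"{name:18s} {v:.4f}  -> 8-bit {round(v * 255)}")
# prints roughly 118, 104 and 125 respectively
```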

1 hour ago, Llaasseerr said:

Anything Adobe adds on top of that is purely their recipe for what "looks good", and in a film imaging pipeline, typically a cg-heavy film, those source image that have been processed through Lightroom will typically need the Adobe special sauce hue twists removed

I don't know anything about what's involved in a cg-heavy film imaging pipeline, but in my experience of looking at the results of Adobe's colour engine and comparing it to colour charts, I would suggest that Adobe's motivation is not to create colours that look good, but colours that look accurate. I would guess that for each new camera model they shoot a colour chart and then devise a profile that tries to match the colours of the chart as closely as possible while retaining a healthy amount of contrast (as opposed to a linear profile). The camera manufacturers, on the other hand, create colour profiles that are not as accurate but arguably look better than what Adobe offers. This is what I've noticed in relation to RAW still photography. I don't know anything about what's up with the Sigma FP 🙂


6 hours ago, mercer said:

No matter how you look at it, the FP is barely a prosumer camera. It apparently has its quirks, but it seems to me that as long as you protect your highlights, or offset the in-camera meter reading by whatever the inconsistency is, then you should be able to get a consistent exposure... if it's reading a stop under, then adjust accordingly.

As I said previously, I don't know a lot about ACES and have found your contribution to the discussion very interesting.

I wasn't really speaking of anybody specifically. Since the FP was released, there has been a steady flow of discussions that make it seem like the camera is almost unusable. And I have seen more than enough samples from the FP that scream otherwise, and most of the creators of those videos rarely discuss these issues. So it seems that some of the complaints are either overblown or created from theory rather than practicality. I assume we have all shot in imperfect scenarios and we just have to make it work.

But I'm usually the minority around here. I see the FP as what it is... the smallest FF raw video camera and that's how I would use it... a handgrip, a lens and a camera strap. Other than the added size due to the SSD, I feel like I should be able to slip it into a pocket in between shots. But I shoot DIY, run and gun narratives, so YMMV.

Yes, agreed - just protect the highlights and shoot good stuff. Do the work figuring out where the sensor clips, and expose accordingly even if you can't see that while monitoring. But we have been spoiled by being able to monitor a LUT-ted log image so we can see while shooting exactly what we will see in Resolve, if the colour pipeline is transparent. I have that with the Digital Bolex and fundamentally it's the same in that it just shoots raw DNGs.

As for imperfect scenarios, I do find that there are so many stressful factors when shooting a creative low budget project with some friends that having to babysit the camera is a real killer to the spontaneity, rather than just being able to confidently know what you're getting.

I think what possibly interests me is shooting with an ND and using the exposure compensation, to see if that would allow capturing the highlights while still viewing an image exposed for the middle. I would need to try it out. I personally don't like the whole ETTR approach because your shots are all over the place, and IMO you then really need to shoot a grey card to get back to a baseline exposure, otherwise you're just eyeballing it.

To your larger point, this camera really does seem like something that can just spark some joy and spontaneity because it's so small to be carried around in your pocket and whip out and do some manual focus raw recording, with some heavy hitting metrics in comparison to a Red or a Sony Venice. I get that. I do really feel though that Sigma would not have had to do too much work to make it more objectively usable at image capture, and after its announcement I was interested to give feedback prior to the release, but I didn't know how to get my thoughts up the chain to the relevant people. Then when it was released, they seemed to have made all the classic mistakes.


7 hours ago, hyalinejim said:

But the primaries for each of those colour spaces are defined, no? So there should be no variance in the RGB integers for middle grey. Correct me if I'm wrong, but my understanding is that the value for middle grey (18% reflectance) in, for example, sRGB colour space is 119, 119, 119. It's not 0, 0, 0 nor is it 255, 255, 255. And in your example of 0.18 in linear light, if you transform correctly to sRGB you should end up with 119. If it's any other value then something is "incorrect" in the transformation. Just as middle grey is a constant, so are the characteristics of each colour space with defined primaries and gamma.

You're right, they are defined and that's what allows us to transform in and out of them to other colour spaces. It's not like we have to work in linear floating point in ACES gamut, Rec2020 or Alexa wide gamut, but to me working in ACES is a kind of lingua franca where the mathematics behave in a simple predictable way that is the same as the way exposure works in the real world. And under the hood, the Resolve colour corrections are still applied in a log space (ACEScc or ACEScct). I'm not saying it's perfect, but it makes a lot of sense to me.
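For reference, the ACEScct curve that the grading actually happens under is published and pretty simple - a sketch from the published constants (worth double-checking against the official ACES documentation):

```python
import math

# ACEScct: linear AP1 -> log working space (published ACES constants).
A, B = 10.5402377416545, 0.0729055341958355   # linear-segment slope / offset
X_BRK = 0.0078125                             # linear-segment breakpoint

def lin_to_acescct(x):
    if x <= X_BRK:
        return A * x + B
    return (math.log2(x) + 9.72) / 17.52

print(lin_to_acescct(0.18))   # ~0.4135, ACEScct middle grey
```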

The caveat though is that, more often than not, an image that is presented as sRGB or Rec709 has had the linear dynamic range compressed into the 0-1 range by more than just a straight transform from linear to sRGB or linear to Rec709, because that on its own would cut out a lot of highlights. So there's some form of highlight rolloff - but what did they do? In addition, they probably apply an s-curve - what did they do? Arri publish their LogC to Rec709 transform, which goes via their K1S1 look, so this is knowable. But if you transform LogC to Rec709 with the CST it will do a pure mathematical transform from the log curve to the Rec709 curve, and it will look different.
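To show what that pure mathematical transform looks like at the curve level, here's the Alexa LogC (v3, EI 800) decode back to scene linear, using the constants from Arri's published LogC white paper - just the curve, not the K1S1 look, and worth verifying against the original document:

```python
import math

# Arri LogC v3 (EI 800) constants, per Arri's published white paper.
cut, a, b = 0.010591, 5.555556, 0.052272
c, d = 0.247190, 0.385537
e, f = 5.367655, 0.092809

def logc_to_linear(t):
    """LogC-encoded value (0-1) -> scene linear, relative to 0.18 middle grey."""
    if t > e * cut + f:
        return (10 ** ((t - d) / c) - b) / a
    return (t - f) / e

def linear_to_logc(x):
    if x > cut:
        return c * math.log10(a * x + b) + d
    return e * x + f

print(linear_to_logc(0.18))    # ~0.391, the familiar LogC grey level
print(logc_to_linear(0.391))   # ~0.18 back again
```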

So basically, to say an image is sRGB or Rec709 isn't accounting for the secret sauce the manufacturer is adding to their jpeg output to shove the linear gamma/native wide gamut sensor data into what they consider the most pleasing sRGB or Rec709 container. Sorry if you knew all that, I'm not trying to be didactic.

Just as a side note, Sigma apparently didn't do this with the OFF profile, which is both helpful and not helpful (see below).

 

Quote

I don't know anything about what's involved in cg-heavy film imaging pipeline, but in my experience of looking at the results of Adobe's colour engine and comparing it to colour charts I would suggest that Adobe's motivation is not to create colours that look good, but colours that look accurate. I would guess that for each new camera model they shoot a colour chart and then devise a profile that tries to match the colours of the chart as closely as possible while retaining a healthy amount of contrast (as opposed to a linear profile). The camera manufacturers, on the other hand, create colour profiles that are not as accurate but arguably look better than what Adobe offers. This is what I've noticed in relation to RAW still photography. I don't know anything about what's up with the Sigma FP

As I stated before, the work Adobe did with CinemaDNG, and the way a tool like Resolve and a few more specialised command line tools like oiiotool and rawtoaces interpret the DNG metadata, is based around capturing and then interpreting a linear raw image as a linear-to-light, scene-referred floating point image that preserves the entirety of the dynamic range and the sensor wide gamut in a demosaiced RGB image. Not that CinemaDNG is perfect, but its aim is noble enough.
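Just to illustrate that idea outside of Resolve (this isn't the rawtoaces/Resolve path itself), a DNG can be demosaiced to a linear image with something like rawpy/LibRaw - the file name is hypothetical and the camera-RGB-to-ACES gamut step is left out:

```python
import numpy as np
import rawpy

with rawpy.imread("SDIM0001.DNG") as raw:    # hypothetical Sigma fp frame
    rgb = raw.postprocess(
        gamma=(1, 1),         # keep the output linear, no display gamma
        no_auto_bright=True,  # don't let LibRaw rescale exposure for us
        use_camera_wb=True,   # apply the as-shot white balance from metadata
        output_bps=16,        # 16-bit output to keep some precision
    )

lin = rgb.astype(np.float32) / 65535.0       # rough scene-linear float image
print("mean RGB:", lin.reshape(-1, 3).mean(axis=0))
```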

What you're describing with shooting a chart and devising a profile is what the DNG metadata tags are meant to contain courtesy of the manufacturer (not Adobe), which is why it's a pragmatic option for an ACES input transform in the absence of the more expensive and technically involved idea of creating an IDT based on measured spectral sensitivity data. If you look in the DNG spec under Camera Profiles, it lays out the tags that are measured and supplied by the manufacturer.

Separate from that is the additional sauce employed in the name of aesthetics in Lightroom, and above I described the process by which the manufacturers add a look to their internal jpegs or baked Rec709 movies. And what does it matter if there's something extra that makes the photos look good? Well I'd rather have a clean imaging pipeline where I can put in knowable transforms so that I can come up with my own workflow, when Sigma have kind of fucked up on that count.

What I will say about Sigma's OFF profile that was introduced after feedback is that, as best I can tell, they just put a Rec709 curve/gamut on their DNG image almost as malicious compliance, but they didn't tell us exactly what they did. In this case they did not do any highlight rolloff, which in some ways is good because it's more transparent how to match it to the DNG images, but unlike a log curve the highlights are lost.

So the most I've been able to deduce is that by inverting a Rec709 curve into ACES I get a reasonable match to the linear DNG viewed through ACES, but with clipped highlights. For viewing and exposing the mid range, though, it's usable to get a reasonable match - ideally with another monitoring LUT applied on top of it for the final transform, closer to what you would see in Resolve.
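The inversion itself is trivial - a sketch, assuming the OFF profile really is a plain BT.709 encode with no rolloff, which is itself an assumption:

```python
def bt709_oetf_inverse(v):
    """Undo the BT.709 OETF: encoded value (0-1) back to scene linear."""
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)

# Middle grey round-trips (~0.409 encoded -> ~0.18 linear); anything that hit 1.0
# in the OFF recording comes back as ~1.0, i.e. the clipped highlights stay clipped.
print(bt709_oetf_inverse(0.409))
print(bt709_oetf_inverse(1.0))
```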

And I absolutely don't want to say that we all must be using these mid-range cameras like we're working on a big budget cg movie, thus sucking the joy out of it.  But like it or not, a lot of the concepts in previously eye-wateringly expensive and esoteric film colour pipelines have filtered down to affordable cameras, mainly through things like log encoding and wide gamut, as well as software like Resolve. But the requisite knowledge has not been passed down as well, as to how to use these tools in the way they were designed. So it has created a huge online cottage industry out of false assumptions. Referring back to the OG authors and current maintainers pushing the high end space forward can go a long way to personal empowerment as to what you can get out of an affordable camera.


@Llaasseerr would you mind explaining this ISO / DR and middle grey combination for me? I am not able to grasp it.

I tried today with my incident meter to check if my exposure table is correct. ISO 800 (which measures as ISO 100 per the meter) and ISO 3200 (ISO 640), with the aperture closed down to compensate, look exactly the same. No shift in DR can be seen whatsoever.

Could it be that ProRes RAW is distributing the DR linearly, so nothing changes no matter the ISO value? As long as you are not clipping the sensor, all is good it seems.

For more than one light source my approach is now the following: dial in the exposure needed, for example to not blow out a window which I would like to see through (with NDs and/or aperture), then measure the exposure for the talent's skin with either false colour or my incident meter and the correct ISO value as per my table, and adjust the second light source accordingly to match the camera's settings.

Confusion... Feedback highly appreciated 🙂 

 


This camera to me sounds like it is more of a pain in the ass to use than it is worth. The OG BMPCC is sort of like that also. Sure, when it works it is great, when it doesn't you end up with shit footage.  Why the hell bother. There are too many cameras out now that you just pick up and shoot and bingo, 90% of what you wanted. One and done as they say.

And then you still have to add stuff onto it to make it somewhat usable, I just don't get it. You have to be a glutton for punishment to use this camera, sort of the same as the EOS-M using ML. Hit or miss, mostly miss.


20 minutes ago, OleB said:

@Llaasseerr would you mind explaining this ISO / DR and middle grey combination for me? I am not able to grasp it.

I tried today with my incident meter to check if my exposure table is correct. ISO 800 (which measures as ISO 100 per the meter) and ISO 3200 (ISO 640), with the aperture closed down to compensate, look exactly the same. No shift in DR can be seen whatsoever.

Could it be that ProRes RAW is distributing the DR linearly, so nothing changes no matter the ISO value? As long as you are not clipping the sensor, all is good it seems.

For more than one light source my approach is now the following: dial in the exposure needed, for example to not blow out a window which I would like to see through (with NDs and/or aperture), then measure the exposure for the talent's skin with either false colour or my incident meter and the correct ISO value as per my table, and adjust the second light source accordingly to match the camera's settings.

Confusion... Feedback highly appreciated 🙂 

 

Are you referring to my idea of using the built-in exposure compensation? Of course, without having the camera I'm making assumptions about how it would work and whether it would work at all. But broadly speaking, I was thinking that the sensor clips too early in the highlights but that the shadows are very clean, so the whole range could probably be pushed easily by +3 stops. So say, shooting 800 for 100, which is what you say it does anyway.

But that could be pushed further. Beyond the fact it's internally pushing 100 to 800, I'm saying maybe shoot with -3 ND and compensate for that in the light meter reading and the exposure compensation on the camera, so that it displays the image at +3 so middle grey looks correct. Again, assumptions.

What this could synergistically offer is that if you can toggle exposure comp off then you could see the highlights without them clipping, then toggle it back on and see the image exposed correctly for middle grey.

Given the quirks of the camera, I don't know if this workflow is possible fully internally, or if a Ninja V or an external monitor and a separate LUT would need to be applied. Theoretically, outputting an underexposed image via HDMI could then allow a LUT to do the exposure difference and roll off the highlights that would otherwise be visually clipped while shooting.
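As a sketch of what that monitoring LUT could look like - assuming the HDMI feed is a plain Rec709-encoded image, which may well not be true of the fp - something like this would bake a +3 stop push and a crude highlight shoulder into a .cube file:

```python
# Writes a simple 33x33x33 .cube LUT: decode Rec709 -> +3 stops -> soft shoulder
# -> re-encode. The "Reinhard" shoulder is just a placeholder rolloff, not any
# camera manufacturer's curve.
SIZE = 33
STOPS = 3.0

def rec709_decode(v):
    return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1 / 0.45)

def rec709_encode(l):
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def shoulder(l, knee=0.6):
    # leave values below the knee alone, compress everything above it
    if l <= knee:
        return l
    x = l - knee
    return knee + x / (1.0 + x)   # Reinhard-style rolloff above the knee

with open("fp_plus3_monitor.cube", "w") as cube:
    cube.write("TITLE \"fp +3 stop monitor\"\nLUT_3D_SIZE %d\n" % SIZE)
    for b in range(SIZE):          # red varies fastest in the .cube format
        for g in range(SIZE):
            for r in range(SIZE):
                rgb = []
                for ch in (r, g, b):
                    lin = rec709_decode(ch / (SIZE - 1)) * 2 ** STOPS
                    rgb.append(min(max(rec709_encode(shoulder(lin)), 0.0), 1.0))
                cube.write("%.6f %.6f %.6f\n" % tuple(rgb))
```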

Yes, ProRes RAW should absolutely be linear raw. I noticed the way it looks on the Mac can be interpreted as log or some other thing, but that's to do with metadata tags that are non-destructive. The underlying data is unclipped linear raw. I'm not sure about ISO invariance though. I'm taking a stab and saying that it only looks different based on whether the camera is using one or the other of the base ISOs, and then the "intended" ISO is just metadata.

The overall idea is that I'm proposing shifting the highlight range beyond what Sigma recommends with their ISO/DR chart via a fixed underexposure (not ETTR with a histogram on a per-shot basis), to the point where you're still just about comfortable with the raised noise floor - and even then you can consider pushing it past that and applying a bit of denoising with Neat Video.

 

 


10 minutes ago, webrunner5 said:

This camera to me sounds like it is more of a pain in the ass to use than it is worth. The OG BMPCC is sort of like that also. Sure, when it works it is great, when it doesn't you end up with shit footage.  Why the hell bother. There are too many cameras out now that you just pick up and shoot and bingo, 90% of what you wanted. One and done as they say.

And then you still have to add stuff onto it to make it somewhat usable, I just don't get it. You have to be a glutton for punishment to use this camera, sort of the same as the EOS-M using ML. Hit or miss, mostly miss.

Definitely a valid opinion. I don't think it's as bad as the EOS-M, but there are issues for sure.

Although the original BMPCC had a decent colour workflow that was a bit more predictable. The main issue for me though, was that BMD did not actually publish their log curve/gamut, so it was still a bit of a black box.

I find the Sony alpha cams (a7sIII/FX3/FX6) to be great and predictable, and the most like an Alexa in their workflow, although having about 2 stops less DR.

 


I was going to say just buy a Sony Alpha camera but didn't lol. Even the original Canon R would be ok. I get the idea of DNG files. You make it happen not the camera. Cameras that do that are great as a learning tool. Not so great for getting what you want on a daily basis.

I would not mind having one to be honest. But I can think of a Lot of "better" cameras for my use than it for the same or near money. I am tired of spending my time going somewhere, long drive to get some footage and it is crap when you look at it back home. Wow not a good way to spend your time. But I guess if you shoot Raw you can fix a lot of F ups. I guess I have been there done that too many times to consider it to be honest. We have come to a point in time where most newer cameras now are a Lot smarter than we are.


2 hours ago, Llaasseerr said:

Are you referring to my idea of using the built-in exposure compensation? Of course, without having the camera I'm making assumptions about how it would work and whether it would work at all. But broadly speaking, I was thinking that the sensor clips too early in the highlights but that the shadows are very clean, so the whole range could probably be pushed easily by +3 stops. So say, shooting 800 for 100, which is what you say it does anyway.

But that could be pushed further. Beyond the fact it's internally pushing 100 to 800, I'm saying maybe shoot with -3 ND and compensate for that in the light meter reading and the exposure compensation on the camera, so that it displays the image at +3 so middle grey looks correct. Again, assumptions.

What this could synergistically offer is that if you can toggle exposure comp off then you could see the highlights without them clipping, then toggle it back on and see the image exposed correctly for middle grey.

Given the quirks of the camera, I don't know if this workflow is possible fully internally, or if a Ninja V or an external monitor and a separate LUT would need to be applied. Theoretically, outputting an underexposed image via HDMI could then allow a LUT to do the exposure difference and roll off the highlights that would otherwise be visually clipped while shooting.

Yes, ProRes RAW should absolutely be linear raw. I noticed the way it looks on the Mac can be interpreted as log or some other thing, but that's to do with metadata tags that are non-destructive. The underlying data is unclipped linear raw. I'm not sure about ISO invariance though. I'm taking a stab and saying that it only looks different based on whether the camera is using one or the other of the base ISOs, and then the "intended" ISO is just metadata.

The overall idea is that I'm proposing shifting the highlight range beyond what Sigma recommends with their ISO/DR chart via a fixed underexposure (not ETTR with a histogram on a per-shot basis), to the point where you're still just about comfortable with the raised noise floor - and even then you can consider pushing it past that and applying a bit of denoising with Neat Video.

 

 

Thank you for this extensive and comprehensive write-up. 

In regards to pushing DR like you have described: my understanding of overall dynamic range is that you measure stops above the point where nothing is distinguishable from the camera's noise floor any more. So if the fp has 12.5 stops, that means 12.5 stops above the black noise floor. So if you do not push the upper limit into the white clouds and instead underexpose them, what happens then?


From my experience ETTR only works for photography, not video. YMMV. For video you have to think 0 to 100, and you have to stay in between those values. You are screwed if you go above or below those figures. You really can't be near the limit at either value, to be honest. You would have to have 18 stops or more to use it all.

