
SIGMA FP with ProRes RAW and BRAW !


Trankilstef
On 5/28/2022 at 6:15 PM, OleB said:

Native monitoring does indeed give you the correct middle grey value for each ISO setting of the camera, but the white point is not at 100 IRE. It changes with every ISO setting; sometimes white is at 80 IRE, sometimes 65 IRE, and so on. Additionally, the black point is not at 0 IRE but most of the time at 10-15 IRE.

All in all, despite the latest method I have described, there seems to be no way to see a picture on the Ninja V that truly represents what you get in the file. That even applies to the in-camera tools if you record RAW. And filming with no tools to visually judge what you will get is, with today's mirrorless cameras, more than a little cumbersome.

It seems they opened the camera up for RAW recording, but in the end that workflow was never fully developed. Since most people tend to use CDNG from what I have heard, and that runs more or less smoothly, I guess there is no need for Sigma to address this.

This is all super frustrating, and clearly it goes back to Sigma not thinking through a more predictable workflow and then Atomos just running with whatever they could get out of Sigma. I do think if I personally get hold of the camera that with some extra attention I'll find a predictable workflow despite that. But it's definitely not easy communicating back and forth via a chat forum.

The value proposition of the Ninja V is clearly compressed raw. It has a bit less highlight dynamic range because it doesn't have the highlight recovery you get with DNGs, but that recovered data is mostly fake information at best; sometimes it works well, other times not.

I do think that there's a way to incorporate the Ninja V with more predictable exposure and monitoring. But again, I would need to sit down and test it myself. Obviously, there's no straightforward solution for people who have not got some industry knowledge about color imaging pipelines and know their linear raw vs log vs rec709, etc.

I would also recommend looking into transcoding the ProRes RAW (PRR) footage to ProRes 4444 in Assimilate Play Pro; Assimilate seems to have the most mature PRR ingest workflow. I managed to get a free license with my Ninja V. Then you can, for example, export full-quality Arri log footage from the linear raw and take it back to FCPX.

So what is the value proposition of this camera? It is very affordable for what it does, and it's quite unique at the moment. It seems like a camera that's best to experiment with because it may drive you mad if you try to apply a predictable workflow.

It is a lot more affordable than a Sony FX3. I was initially turned off by the slow rolling shutter and the fact that the raw output is downsampled from the full sensor; I still don't get how that is even possible for a raw file. But it seems the camera still has a lot to offer.

So I feel like that is it for now! Hopefully I'll get one in the next few weeks and I can do my own tests.

 


Unique indeed. I will try the Assimilate Play Pro route; it sounds like a good idea given what it does.

For me the solution with the curve presets is working fine. I agree that one should not have to fiddle with that kind of thing, but anyway. After a year and four months of trying to find my feet with using this camera properly and predictably, I am reasonably happy now.

Today I finally focused on something different: highlight rolloff. Trying to take the sharp edge off a little. Seems to be working quite well. 🙂

SOOC:

71523156_hightlightsooc.thumb.png.0896344dbc9b8a55fec6d185b1234913.png

Highlight rolloff:

846382641_hightlightrolloff.thumb.png.afd38c592d340027388a727e56dc4cc8.png

 


3 hours ago, OleB said:

Today I finally focused on something different: highlight rolloff. Trying to take the sharp edge off a little. Seems to be working quite well.

SOOC:

71523156_hightlightsooc.thumb.png.0896344dbc9b8a55fec6d185b1234913.png

Highlight rolloff:

846382641_hightlightrolloff.thumb.png.afd38c592d340027388a727e56dc4cc8.png

 

I am most impressed. Can you tell us a bit about your process? Thanks.


6 hours ago, OleB said:

One possibly a little more real-world sample of what can be achieved quite easily.

In my opinion this does not look bad at all. 🙂

SOOC:

SOOC.thumb.png.442658d097207ebfa6e59d24da53a840.png

Highlight rolloff:

307976759_Highlightrolloff.thumb.png.1410c98f33aac1a490300138f15e9b40.png

 

7 hours ago, OleB said:

Unique indeed. I will try the Assimilate Play Pro route; it sounds like a good idea given what it does.

For me the solution with the curve presets is working fine. I agree that one should not have to fiddle with that kind of thing, but anyway. After a year and four months of trying to find my feet with using this camera properly and predictably, I am reasonably happy now.

Today I finally focused on something different: highlight rolloff. Trying to take the sharp edge off a little. Seems to be working quite well. 🙂

SOOC:

71523156_hightlightsooc.thumb.png.0896344dbc9b8a55fec6d185b1234913.png

Highlight rolloff:

846382641_hightlightrolloff.thumb.png.afd38c592d340027388a727e56dc4cc8.png

 

With raw there's no such thing as SOOC. I think that what you're seeing as SOOC is just some random interpretation by FCP. But as long as it works for you that's the main thing.

You should get a good predictable starting point just by transcoding raw to Arri LogC and applying the standard Arri log to Rec709 LUT, which has a visually similar highlight rolloff to the default ACES look transform but it's a bit less contrasty. Both are based roughly on the s-curve of a film print. Why make life hard? Everyone talks about Alexa highlight rolloff, but all they are really looking at most of the time is the rolloff of the 709 LUT from the raw sensor data since the sensor itself has no highlight rolloff in the captured image.
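
For reference, "transcoding raw to Arri LogC" just means re-encoding the scene-linear values with ARRI's published LogC curve before any display LUT is applied. Here is a minimal Python sketch of that encode step, using the LogC3 (EI 800) constants; it is purely for illustration and not tied to any particular tool:

```python
import numpy as np

def linear_to_logc3_ei800(x):
    """Encode scene-linear values into ARRI LogC3 (EI 800).

    Constants are ARRI's published LogC (v3) parameters for EI 800;
    18% grey (0.18 linear) lands at roughly 0.39 in LogC.
    """
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > cut, c * np.log10(a * x + b) + d, e * x + f)

# Middle grey maps to ~0.39; the rolloff only appears once a display LUT
# (e.g. ARRI's LogC-to-Rec709) is applied to these LogC values afterwards.
print(linear_to_logc3_ei800([0.0, 0.18, 1.0]))
```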

My opinion from limited testing is that working internally with DNGs seems pretty straightforward.

 

 


Here's the image rendered through the default ACES look transform. In this case, I first exported the PRR files as Arri LogC/AWG, then used the Alexa input device transform to get into ACES.

This is from the ISO 3200 image you provided. It's quite a graceful transition for blown-out highlights.

iso3200_hr.thumb.jpg.c46dd5189471220e49aa14f7e40b71ad.jpg
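
For anyone who wants to reproduce this kind of ACES render outside a GUI, here is a rough sketch using OpenColorIO with an ACES config. The config path is hypothetical and the colourspace names are the ones used in the ACES 1.2 OCIO config, so adjust both for your own setup:

```python
import PyOpenColorIO as ocio

# Hypothetical path to an ACES OCIO config (e.g. the aces_1.2 config).
config = ocio.Config.CreateFromFile("/path/to/aces_1.2/config.ocio")

# LogC/AWG in, Rec.709 display render out (RRT + ODT baked into the output space).
# These names come from the ACES 1.2 config; other configs name them differently.
processor = config.getProcessor(
    "Input - ARRI - V3 LogC (EI800) - Wide Gamut",
    "Output - Rec.709",
)
cpu = processor.getDefaultCPUProcessor()

# Apply to a single LogC-encoded pixel (use a full image buffer in practice).
print(cpu.applyRGB([0.391, 0.391, 0.391]))  # LogC middle grey -> display-referred RGB
```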

 

If we pull the exposure on this image down 2 stops, it's not too bad, because the LUT is doing a good job interpreting the raw data. But you start to see the sensor clipping, and the rolloff is more abrupt towards the core, which is lacking detail. This is where the Alexa wins out, as it will allow the LUT to keep the rolloff going. It's the LUT that provides the rolloff - the sensor data is just linear - but the Alexa captures a lot more of it on the top end by default.

 

iso3200_hr-2stops.thumb.jpg.fefdb93bf1bae74f1549aeeec4700420.jpg

 

If you underexpose with the Sigma, obviously you will keep that rolloff going too and it will not look like a flat white blob with magenta edges at the transition.

 


Here is the same thing, but with your ISO 6400 image, and the same image pulled down 2 stops.

 

iso6400_hr.jpg

iso6400_hr-2stops.jpg

 

The point of these is not just to show another look; this is a "good enough", technically accurate pipeline for most people, based on a scene-referred image, and I did not need to make any special modifications.

 


1 hour ago, Llaasseerr said:

Everyone talks about Alexa highlight rolloff, but all they are really looking at most of the time is the rolloff of the 709 LUT from the raw sensor data since the sensor itself has no highlight rolloff in the captured image.

Someone told me once that part of the mojo from the Alexa is built into their LogC to Rec709 LUT like it was designed to be part of the image pipeline.

Honestly, I don't know about all that, but when I do a LogC CST, with my ML Raw footage, and then apply the LUT, the image does seem to open up nicely. It's usually best using Linear as an input, but even a BM Film input helps.


1 hour ago, Llaasseerr said:

You should get a good predictable starting point just by transcoding raw to Arri LogC and applying the standard Arri log to Rec709 LUT, which has a visually similar highlight rolloff to the default ACES look transform but it's a bit less contrasty. Both are based roughly on the s-curve of a film print. Why make life hard? Everyone talks about Alexa highlight rolloff, but all they are really looking at most of the time is the rolloff of the 709 LUT from the raw sensor data since the sensor itself has no highlight rolloff in the captured image.

Thanks Llaasseerr, I will try the ACES route, but I have absolutely no experience with that so far. Sounds mega logical that in reality the Alexa rolloff is based on the LUT plus more stops of headroom. If that proves to be even easier, perfect.

5 hours ago, Owlgreen said:

I am most impressed. Can you tell us a bit about your process? Thanks.

Of course, this is the reason we are here, no? 🙂

The first step is to bring the exposure in FCPX back to where it belongs. Since I am only shifting exposure down to match the picture as it was at the time I recorded it, I don't think I am breaking anything. I have already explained that in detail with the curves above. Now let us take the example with the sun. It was captured at ISO 800 (which for the Ninja V is base ISO, so as per my latest interpretation really ISO 250). Additionally, I used a Black Pro-Mist 1/4 for a little more base highlight bloom.

Undoubtedly the sun was clipping in the recording. I have read a lot about highlight rolloff in film stock to get an idea of how it is supposed to look. From what I found, you get a soft transition from pure white into the other highlight values, and additionally, the closer the values get to the white point, the less saturated the colors become, until they reach white.

To recreate this in FCPX, on top of the exposure correction, my curves looked like this:

Luma:

luma.png.0323f241751c05ab6fd6c20ee2667f0d.png

Luma vs. Saturation:

367665172_lumavssat.png.db5685db4bcdd56ad1c7472c2714d57e.png

Basically, that did the trick for me. Looking at the curves, you can clearly tell that something is off with the fp RAW files in FCPX, since the white point should be somewhere else, but so be it.

Hope that helps. 🙂
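
To put those two ideas into code terms: the sketch below is not OleB's exact FCPX curves, just a rough illustration of a soft luma knee plus desaturation towards the white point, with made-up knee/start values:

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def rolloff_highlights(rgb, knee=0.75, white=1.0):
    """Soft-knee rolloff: values above `knee` approach `white` asymptotically."""
    rgb = np.asarray(rgb, dtype=np.float64)
    over = np.maximum(rgb - knee, 0.0)
    soft = knee + (white - knee) * (1.0 - np.exp(-over / (white - knee)))
    return np.where(rgb > knee, soft, rgb)

def desaturate_towards_white(rgb, start=0.7):
    """Fade saturation out as luma rises from `start` towards the white point."""
    rgb = np.asarray(rgb, dtype=np.float64)
    luma = rgb @ REC709_LUMA
    t = np.clip((luma - start) / (1.0 - start), 0.0, 1.0)[..., None]
    return rgb * (1.0 - t) + luma[..., None] * t

pixel = np.array([[1.3, 0.9, 0.6]])  # a hot, warm highlight sample
print(desaturate_towards_white(rolloff_highlights(pixel)))
```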

 


3 hours ago, Llaasseerr said:

Everyone talks about Alexa highlight rolloff, but all they are really looking at most of the time is the rolloff of the 709 LUT from the raw sensor data since the sensor itself has no highlight rolloff in the captured image.

I have spoken about this at length on other forums and been convinced that the Alexa is simply a Linear capture and all the processing is in the LUT.

Here is what happens when you under/over expose the Alexa and compensate for that exposure under the LUT:

https://imgur.com/a/OGbI2To

Result: identical looking images (apart from noise and clipping limits)

Of course, the Alexa is a very high DR Linear capture device so I'm not criticising it at all.  

However, the FP is also a high-quality Linear capture device, and the fact that the ARRI colour magic is in the LUT means that we can all use it on our own footage if we can convert that footage back to Linear / LOG from whatever is recorded in-camera.
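
The arithmetic behind that under/over exposure test is simple enough to show in a few lines. The scene values and clip level below are made up; the point is just that a 2^stops gain in scene-linear round-trips exactly, except where the sensor clipped:

```python
import numpy as np

def expose(linear_rgb, stops):
    """Exposure shift in scene-linear: one stop = a factor of two."""
    return np.asarray(linear_rgb, dtype=np.float64) * (2.0 ** stops)

scene = np.array([0.01, 0.18, 0.90, 4.00])   # made-up scene-linear samples
clip  = 8.0                                  # made-up sensor clip level (linear)

captured  = np.minimum(expose(scene, +2), clip)  # shot 2 stops "hot", sensor clips
equalised = expose(captured, -2)                 # compensated in linear, before any LUT

print(scene)      # [0.01 0.18 0.9  4. ]
print(equalised)  # [0.01 0.18 0.9  2. ]  <- identical except the clipped sample
```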


5 hours ago, mercer said:

Someone told me once that part of the mojo from the Alexa is built into their LogC to Rec709 LUT like it was designed to be part of the image pipeline.

Honestly, I don't know about all that, but when I do a LogC CST, with my ML Raw footage, and then apply the LUT, the image does seem to open up nicely. It's usually best using Linear as an input, but even a BM Film input helps.

That LUT is a default example of how well the rolloff can be utilised.

Re: ML Raw, yes it's really the same workflow as with any raw footage. At some point, the raw footage needs to be transformed to log to apply a film print style LUT. In ACES that transform is built into the reference rendering transform/output device transform. If doing it manually, it's the process of transforming to, for example, LogC then to Rec709. Those are the two basic, most common examples. 

 


2 hours ago, kye said:

I have spoken about this at length on other forums and been convinced that the Alexa is simply a Linear capture and all the processing is in the LUT.

Here is what happens when you under/over expose the Alexa and compensate for that exposure under the LUT:

https://imgur.com/a/OGbI2To

Result: identical looking images (apart from noise and clipping limits)

Of course, the Alexa is a very high DR Linear capture device so I'm not criticising it at all.  

However, the FP is also a high-quality Linear capture device, and the fact that the ARRI colour magic is in the LUT means that we can all use it on our own footage if we can convert that footage back to Linear / LOG from whatever is recorded in-camera.

You don't need to convince yourself because it's true. I already posted a wedge test in an ACES project where I did the same thing with the Sigma FP footage posted. I equalized the input image just by doing exposure shifts in linear space and the result is the same image with different clipping points and levels of sensor noise.

Which is why shifting the range towards highlight capture will give Arri-like, or really, film-like rolloff at the expense of more noise obviously.

And it's important to know that it's not the Arri Rec709 LUT in itself that is the mojo; that LUT expects LogC/AWG input. Using ACES gives the same kind of result too with the basic display transform. Both can serve as a starting point, and then other kinds of look modification can be added.

And then obviously people like Steve Yedlin are doing an accurate film print emulation instead, and have their own managed workflow that is not specifically ACES, but it's the same idea. It depends on the show and the people involved.

 

730986667_exposureincreasezerodout.thumb.jpg.691d86b9859c8f7172823ad813807de9.jpg


5 hours ago, OleB said:

Thanks Llaasseerr, I will try the ACES route, but I have absolutely no experience with that so far. Sounds mega logical that in reality the Alexa rolloff is based on the LUT plus more stops of headroom. If that proves to be even easier, perfect.

I looked online at the raw-to-log conversion options available in FCPX's Info Inspector, and I can see that you can transform to Panasonic V-log/Vgamut, which is coincidentally the same default log curve the Ninja V adds to the FP file metadata.

This is similar to Arri LogC, but I created a LUT that transforms from V-log/Vgamut to LogC/AWG (attached) as a next step so that it's more accurate. It is just a slight shift.

Then you can later apply the FCPX built-in Arri to Rec709 LUT, or download the LUT from Arri, or use any LUT that expects LogC/AWG as an input, which is many of them. So it's convenient and helps quickly wrangle the raw footage with predictable results.

This is kind of a workaround for a lack of a managed color workflow in FCPX. You want to break apart the linear to log then log to display transform into separate operations, rather than apply them in the same operation on import in the "Info Inspector", so that you can grade in-between.

https://www.dropbox.com/s/43qwzizvbid79c1/vlog_vgamut_to_logC_AWG.cube?dl=0
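
For anyone who would rather roll their own, here is roughly how such a conversion LUT could be generated. This is not the exact LUT linked above: it chains only the published V-Log decode and LogC3 (EI 800) encode curves, and leaves out the V-Gamut to ARRI Wide Gamut matrix that a proper version would also apply to the linear values:

```python
import numpy as np

def vlog_to_linear(v):
    """Panasonic V-Log decode (published curve), code value -> scene linear."""
    v = np.asarray(v, dtype=np.float64)
    b, c, d = 0.00873, 0.241514, 0.598206
    return np.where(v < 0.181, (v - 0.125) / 5.6, 10.0 ** ((v - d) / c) - b)

def linear_to_logc3_ei800(x):
    """ARRI LogC3 (EI 800) encode, scene linear -> code value."""
    x = np.asarray(x, dtype=np.float64)
    cut, a, b, c, d, e, f = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809
    return np.where(x > cut, c * np.log10(a * x + b) + d, e * x + f)

SIZE = 33
grid = np.linspace(0.0, 1.0, SIZE)
with open("vlog_to_logc_curve_only.cube", "w") as fh:
    fh.write(f"LUT_3D_SIZE {SIZE}\n")
    for b_ in grid:            # .cube convention: red varies fastest, blue slowest
        for g_ in grid:
            for r_ in grid:
                rgb = np.array([r_, g_, b_])
                out = np.clip(linear_to_logc3_ei800(vlog_to_linear(rgb)), 0.0, 1.0)
                fh.write("{:.6f} {:.6f} {:.6f}\n".format(*out))
```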


1 hour ago, Llaasseerr said:

You don't need to convince yourself because it's true.

If only it were the case that something simply being true meant that no-one needed to be convinced of it!

At this point, I've read so much BS online that I require quite a high amount of evidence that something is true before I repeat it to others, which was the point of me phrasing it like that in my original comment 🙂 

It really seems like the FP is a great FF cinema camera really just lacking a good post-process and support for it in the NLE space.  I really hope they rectify this in future firmware updates - the sensor has soooo much potential!


1 minute ago, kye said:

If only it were the case that something simply being true meant that no-one needed to be convinced of it!

At this point, I've read so much BS online that I require quite a high amount of evidence that something is true before I repeat it to others, which was the point of me phrasing it like that in my original comment 🙂 

It really seems like the FP is a great FF cinema camera really just lacking a good post-process and support for it in the NLE space.  I really hope they rectify this in future firmware updates - the sensor has soooo much potential!

The trick is that with something like Resolve that has more high end colour management, it can be dealt with. But with tools like Premiere and FCPX, it's more likely that custom LUTs need to be generated from other tools then chained together. 

I think the post process is actually fine, but it's more the lack of monitoring that will give you the same image as your starting point in post, and we are still a little unclear as to how to expose using camera tools. A light meter would be best for now, probably. But I'm sure there's an obvious solution to expose based on the middle with the new false color feature, even if the highlights remain unseen.

With the Ninja V it's a little more tricky to say for now. According to Atomos there's no log monitoring option apparently, but OleB has come up with a workaround of using PQ to view the highlights which is handy to know.

It would be good if the Atomos display were actually V-log/Vgamut, which is what they are embedding in the file metadata, but I think that is not the case. I think it's more similar to the OFF mode from the camera itself. It'll be interesting to check this if I get a chance.


26 minutes ago, Llaasseerr said:

The trick is that with something like Resolve that has more high end colour management, it can be dealt with. But with tools like Premiere and FCPX, it's more likely that custom LUTs need to be generated from other tools then chained together. 

I think the post process is actually fine, but it's more the lack of monitoring that will give you the same image as your starting point in post, and we are still a little unclear as to how to expose using camera tools. A light meter would be best for now, probably. But I'm sure there's an obvious solution to expose based on the middle with the new false color feature, even if the highlights remain unseen.

With the Ninja V it's a little more tricky to say for now. According to Atomos there's no log monitoring option apparently, but OleB has come up with a workaround of using PQ to view the highlights which is handy to know.

It would be good if the Atomos display were actually V-log/Vgamut, which is what they are embedding in the file metadata, but I think that is not the case. I think it's more similar to the OFF mode from the camera itself. It'll be interesting to check this if I get a chance.

I must admit that I haven't kept up with your discussions on this, but I got the impression that you can't use the False Colour mode on the FP to accurately monitor things across all the ISOs - is that correct?

The way I would use this camera would be either manually exposing or using it in auto-ISO and using exposure compensation, but I would be using the false-colours in either mode to tell me what was clipped and where the middle was.

I'd be happy to adjust levels shot-by-shot in post (unlike professional workflows when working with a team) and I know how to do that in Resolve, so I'd be comfortable raising or lowering the exposure based on what was clipping and what I wanted to retain in post. If the false-colour doesn't tell me those things then it would sort of defeat its entire purpose.


3 hours ago, Llaasseerr said:

I looked online at the raw-to-log conversion options available in FCPX's Info Inspector, and I can see that you can transform to Panasonic V-log/Vgamut, which is coincidentally the same default log curve the Ninja V adds to the FP file metadata.

This is similar to Arri LogC, but I created a LUT that transforms from V-log/Vgamut to LogC/AWG (attached) as a next step so that it's more accurate. It is just a slight shift.

Then you can later apply the FCPX built-in Arri to Rec709 LUT, or download the LUT from Arri, or use any LUT that expects LogC/AWG as an input, which is many of them. So it's convenient and helps quickly wrangle the raw footage with predictable results.

This is kind of a workaround for a lack of a managed color workflow in FCPX. You want to break apart the linear to log then log to display transform into separate operations, rather than apply them in the same operation on import in the "Info Inspector", so that you can grade in-between.

https://www.dropbox.com/s/43qwzizvbid79c1/vlog_vgamut_to_logC_AWG.cube?dl=0

Thank you. Highly appreciated! I will take a look at this later.

Maybe the fault is the not-so-great color management in FCPX. Possibly it would be way easier with DaVinci Resolve using BRAW as the base. I do not have such a recording device at hand for testing, though.

3 hours ago, Llaasseerr said:

It would be good if the Atomos display were actually V-log/Vgamut, which is what they are embedding in the file metadata, but I think that is not the case. I think it's more similar to the OFF mode from the camera itself. It'll be interesting to check this if I get a chance.

The native mode looks quite similar to what I would expect V-log to look like. However, both the black and white points are weird: black sits at about 10 IRE and white at about 65 IRE. Should they not span the full scale, so 0-100? Middle grey, though, is at its correct position for the respective ISO setting as far as I can tell.


By the way, the Ninja also uses the LUTs for its false color readout. Could a solution be something like a custom LUT that basically converts Vgamut to the Arri LUT? Load this into the monitor to check exposure, and in post apply the workflow Llaasseerr described?

