Andrew Reid

Sigma Fp review (part 1) and interview - Cinema DNG RAW internal recording!

Recommended Posts

Ok, slightly o/t, but anyone have any decent "starter" type tutorials for working with DNG in Resolve? The results y'all are posting have piqued my interest. REALLY nice. As far as experience goes, I've shot stills for much longer than I've shot video, so I'd rate myself *so-so* with Premiere and quasi-expert with Lightroom, but I've only messed with Resolve a little bit, and that was a long time ago.

 

I've created my own LUTs for Premiere, and would probably still do timeline editing there, but I'd like to use Resolve just to import and do basic grading/corrections, then export to Premiere, which can't currently deal with DNG, unless I'm mistaken.
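For what it's worth, a CUBE LUT is just a text file, so the Lightroom-preset-to-LUT step can even be scripted. Here's a minimal Python sketch of the .cube format; the identity mapping, filename, and 17-point size are placeholders (this is not what LUTGenerator actually produces), and you'd swap in your own colour transform:

```python
# Minimal sketch of the .cube 3D LUT format. The identity mapping and
# 17-point size are placeholders - replace with your own transform.
def write_identity_cube(path, size=17):
    with open(path, "w") as f:
        f.write('TITLE "identity"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        # .cube convention: red varies fastest, then green, then blue
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")

write_identity_cube("identity.cube")
```

Both Resolve and Premiere can load .cube files, so a LUT generated this way drops into either.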

 

Thanks in advance...


@imagesfromobjects
One exercise that helped me get started with Resolve was opening an image in Lightroom and tweaking it to taste, and then opening that same image in Resolve & using the same controls (exposure, contrast, saturation, Hue vs Hue, Hue vs Lum curves, etc.) to try to recreate the same result that I saw in Lightroom. I know that video adjustments tend to exist in their own universe when you dig in to try to make things more technically sound for video compared to a still, but it might be worthwhile to do something similar. 

With that approach, I could still think in terms of editing a still but developed more of a video mindset to how the controls translate differently between stills and video. Curves still largely function the same, though they tend to be more sensitive in Resolve than Lightroom, and instead of using one master stack for operations, you apply them to nodes which can then be re-wired to create different results depending on the flow of the math. I also had experience with the Unreal Engine material editor for years, so node-based editing and order of operations wasn't a huge leap. 
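That order-of-operations point can be made concrete with a toy example (deliberately simplified math, not Resolve's actual operators): a gamma "curve" node feeding a saturation node produces a different pixel than the same two nodes wired the other way round.

```python
# Toy node operations (simplified, not Resolve's real math) showing
# that node order changes the result when operations are non-linear.
def gamma(rgb, g=2.2):
    # A simple power-function "curve" node
    return tuple(c ** (1 / g) for c in rgb)

def saturation(rgb, amount=1.5):
    # Push each channel away from the channel average
    avg = sum(rgb) / 3
    return tuple(avg + (c - avg) * amount for c in rgb)

px = (0.6, 0.4, 0.3)
a = saturation(gamma(px))  # gamma node feeding a saturation node
b = gamma(saturation(px))  # the same nodes re-wired
print(a)
print(b)  # different numbers: the flow of the math matters
```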

I think @paulinventome 's approach with injecting a DNG into the RED IPP2 workflow is one of the faster ways to get to a solid image that I've seen recently.

23 minutes ago, Scott_Warren said:

@imagesfromobjects
One exercise that helped me get started with Resolve was opening an image in Lightroom and tweaking it to taste, and then opening that same image in Resolve & using the same controls (exposure, contrast, saturation, Hue vs Hue, Hue vs Lum curves, etc.) to try to recreate the same result that I saw in Lightroom. I know that video adjustments tend to exist in their own universe when you dig in to try to make things more technically sound for video compared to a still, but it might be worthwhile to do something similar. 

With that approach, I could still think in terms of editing a still but developed more of a video mindset to how the controls translate differently between stills and video. Curves still largely function the same, though they tend to be more sensitive in Resolve than Lightroom, and instead of using one master stack for operations, you apply them to nodes which can then be re-wired to create different results depending on the flow of the math. I also had experience with the Unreal Engine material editor for years, so node-based editing and order of operations wasn't a huge leap. 

I think @paulinventome 's approach with injecting a DNG into the RED IPP2 workflow is one of the faster ways to get to a solid image that I've seen recently.

Awesome, yes, that sounds reasonable. The LUTs I made for Premiere were based on LR presets I tweaked, then exported as CUBE files using LUTGenerator. That process is easy enough for me to do, and sounds similar to what you're suggesting.

 

Only issue is, I don't even know how to load a DNG clip into Resolve to begin with hahaha. So mostly I'm just asking how to do that for now, or if you have any suggestions on decent basic tutorials for a Resolve DNG workflow, I'd love to hear them. Thanks again!


@imagesfromobjects 
You'd want to browse to the root of where the DNG sequence lives on your drive in the media browser, then either drag it into the media pool or right-click it and Add to the media pool. From the media pool, you would create a new timeline with it and off you go! It's easier to add DNG sequences through the media pool browser instead of manually adding a folder filled with frames where you'd need to select all the frames, then add... it gets tedious after several clips. 
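As a side note, if an imported sequence ever comes in shorter than expected, it's worth checking the frame numbering on disk, since a gap in the numbering splits a folder into more than one clip. A hypothetical little helper (not part of Resolve, and the `CLIP_000123.DNG` naming is an assumption) to spot gaps:

```python
# Hypothetical helper: report missing frame numbers in a folder of DNG
# frames named like CLIP_000123.DNG, since a gap in the numbering will
# make the folder import as more than one clip.
import os
import re

def sequence_gaps(folder):
    frames = []
    for name in os.listdir(folder):
        m = re.search(r"(\d+)\.dng$", name, re.IGNORECASE)
        if m:
            frames.append(int(m.group(1)))
    if not frames:
        return []
    present = set(frames)
    # Any number between the first and last frame that has no file
    return [n for n in range(min(frames), max(frames) + 1) if n not in present]
```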
 

dngclip.jpg


You start on the import page. Browse your hard drive for the camera footage folder. Open it, then drag all the individual sequence folders on to the Resolve media window.

Resolve automatically creates single continuous clips for you, not hundreds of DNGs.

There are great free tutorials on YouTube - Ripple Training, Avery Peck, Casey Farris, Color Grading Central, The Modern Filmmaker and BMD themselves.

Lots of "get going in Resolve fast" type content.

 

It's actually pretty easy to create a graded image without using a LUT.

On 3/25/2020 at 11:34 AM, paulinventome said:

As far as I understand, the first version of both the Elmarit and the Summicron had huge issues on non-M cameras because the filter stacks are too thick and the rear of the lens produces very shallow ray angles. Jono Slack did a comparison of the V1 and V2 Summicron on a Sony a7 and the results were very different - the performance on the a7 was abysmal before. Again, focused at infinity.

But the fp has no OLPF and there's no data on how thick that stack is. I've been after a 28 Cron for a while, but the V2 is expensive and elusive, whereas there are plenty of V1s around - but there are no tests on an fp.

The 50 and 90 are incredible on the fp, but I need a wide to go with them.

So I'd be super interested to see how that fares when you do get your fp.

This is less FC and more smearing. More an issue with the camera than the lens.

thanks
Paul

I shot some landscape-type stuff with the Elmarit-M 28/2.8 Asph and Sigma fp the other day.  My personal verdict on the combo is that, at f/8 and a hair back from the infinity hard stop, it could be described as "unnecessarily sharp" across the frame, but I'd be happy to upload the DNGs for you to check out if you're interested.


Also, is there anywhere we could start a comprehensive list of current quirks? Something that could easily be referenced by Sigma to hopefully be addressed in future firmware updates. I feel like a lot of stuff has been mentioned here, but it's spread out over 50+ pages...

On 3/24/2020 at 9:56 PM, Scott_Warren said:

No offense to Crimson Engine, but it's a strange suggestion for people to use an incorrect colorspace as a starting point with a camera that has pretty great color in its own right as Paul has mentioned!

Blackmagic Design Film for colour space/gamut (Colour Science Gen 1) in a DNG workflow is 'sensor space'. So it's really how the sensor sees colour. This is not meant to be presentable on a monitor, you're meant to transform to a display ready space but to do so requires the proper transforms for that specific camera/sensor (so you can't use the CST plugin for non BM cameras for example) in which the transform varies depending on the white balance setting you choose.

If you're happy to grade from sensor space, that's fine but I would also join in recommending Rec.709 or an ACES workflow in this case since neither approach will clip any data and will also do the heavy lifting of transforming the sensor response no matter the wb setting.

50 minutes ago, CaptainHook said:

Blackmagic Design Film for colour space/gamut (Colour Science Gen 1) in a DNG workflow is 'sensor space'. So it's really how the sensor sees colour. This is not meant to be presentable on a monitor, you're meant to transform to a display ready space but to do so requires the proper transforms for that specific camera/sensor (so you can't use the CST plugin for non BM cameras for example) in which the transform varies depending on the white balance setting you choose.

If you're happy to grade from sensor space, that's fine but I would also join in recommending Rec.709 or an ACES workflow in this case since neither approach will clip any data and will also do the heavy lifting of transforming the sensor response no matter the wb setting.

You learn something new every day! That makes a lot of sense, actually. I assumed that the Blackmagic setting was expecting specific colors from your own color filters and so would produce "wrong" results if you started that way. Thank you for the information :) 

7 hours ago, CaptainHook said:

Blackmagic Design Film for colour space/gamut (Colour Science Gen 1) in a DNG workflow is 'sensor space'. So it's really how the sensor sees colour. This is not meant to be presentable on a monitor, you're meant to transform to a display ready space but to do so requires the proper transforms for that specific camera/sensor (so you can't use the CST plugin for non BM cameras for example) in which the transform varies depending on the white balance setting you choose.

And is that sensor space defined by the matrices in the DNG, or is it built into Resolve specifically for BMD cameras?

I did try to shoot a P3 image off my reference monitor - actually an RGB image in both P3 and 709 space. The idea is to try various methods of displaying it to see if the sensor space is larger than 709. Results so far inconclusive! Too many variables that could mess the results up. If you turn off Resolve colour management then you can choose the space to debayer into. But if you choose P3 then the image is scaled into P3 - correct? If you choose 709 it is scaled into there. So it seems that all of the options scale to fit the selected space.

Can you suggest a workflow that might reveal native gamut?

For some reason I cannot get the Sigma DNGs into Nuke, otherwise I'd be able to confirm there.

In my experience of Sony sensors, usually the reds go way beyond 709. So end to end there's a bunch of things to check. One example is whether, on the camera itself, choosing a colourspace alters the matrices in the DNG - i.e. how much pre-processing happens to the colour in the camera.

Cheers
Paul

17 hours ago, imagesfromobjects said:

but I'd be happy to upload the DNGs for you to check out if you're interested.

Just one with detail in the corners would do. My pre-Asph looks swirly in the far corners even at f/8.

I wonder if the adapter has an impact. I'm using a Novoflex - not the most expensive, but not cheap either.

22 hours ago, imagesfromobjects said:

I shot some landscape-type stuff with the Elmarit-M 28/2.8 Asph and Sigma fp the other day.  My personal verdict on the combo is that, at f/8 and a hair back from the infinity hard stop, it could be described as "unnecessarily sharp" across the frame, but I'd be happy to upload the DNGs for you to check out if you're interested.

That would be awesome! Yes, thank you. Sorry i missed this as i was scanning the forum earlier.

Much appreciated!
Paul

23 hours ago, CaptainHook said:

Blackmagic Design Film for colour space/gamut (Colour Science Gen 1) in a DNG workflow is 'sensor space'. So it's really how the sensor sees colour. This is not meant to be presentable on a monitor, you're meant to transform to a display ready space but to do so requires the proper transforms for that specific camera/sensor (so you can't use the CST plugin for non BM cameras for example) in which the transform varies depending on the white balance setting you choose.

If you're happy to grade from sensor space, that's fine but I would also join in recommending Rec.709 or an ACES workflow in this case since neither approach will clip any data and will also do the heavy lifting of transforming the sensor response no matter the wb setting.

What’s a “universal” color space to work in? Let’s say I wanted to deliver to major movie theaters, but also export for YouTube. Can’t we just work in the largest color space, and while exporting, convert the colors for different destinations (i.e. YouTube vs movie theater)?

also, is Adobe DNG converter any use right now to make DNG file size smaller from the FP?

What does Resolve do with Adobe’s Smart Previews?


Are any of you guys seeing these random flashes? I don't think it's the same as the noise flicker/pulsating... it's different, and is completely intermittent - not consistent like the other shadow pulses. This was at ISO 400. Ugh!

 

9 hours ago, redepicguy said:

Are any of you guys seeing these random flashes? I don't think it's the same as the noise flicker/pulsating... it's different, and is completely intermittent - not consistent like the other shadow pulses. This was at ISO 400. Ugh!

If you're talking about the shadow flash half way through then yes. At ISO400 only.

And yes, Sigma are aware of it, and I'm awaiting some data as to whether we can guarantee it only happens at 400.

This is the same for me. But it's only 400 for me, whereas others have seen it at other ISOs...

cheers
Paul

9 hours ago, Devon said:

What’s a “universal” color space to work in? Let’s say I wanted to deliver to major movie theaters, but also export for YouTube. Can’t we just work in the largest color space, and while exporting, convert the colors for different destinations (i.e. YouTube vs movie theater)?

Actually you don't really want to work in a huge colourspace, because the colour math goes wrong and some things become more difficult to do. Extreme examples here:

https://ninedegreesbelow.com/photography/unbounded-srgb-as-universal-working-space.html

There were even issues with CGI rendering in P3 space as well.

You want to work in a space large enough to contain your colour gamut. These days it's probably prudent to work in 2020 or one of the ACES spaces specifically designed as working spaces.

What you say is essentially true. If you are mastering, then you'd work in a large space and do a trim pass for each deliverable - film, digital cinema, TV, YouTube, etc. You can transform from a large space into each deliverable, but if your source uses colours and values beyond your destination, then a manual trim is the best approach.

You see this more often now with HDR - a master is done in a wide colour space in HDR, and then additional passes are done for SDR etc.

However, this is at the high end. Even in indie cinema the vast majority of grading and delivering is still done in 709 space. We are still a little way off that changing.

Bear in mind that the vast majority of cameras are really seeing in 709.

P3 is plenty big enough - 2020 IMHO is a bit over the top. 
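The trim-pass point can be shown numerically: push a saturated linear Rec.2020 pixel through the standard 3x3 BT.2020-to-BT.709 conversion matrix and the result lands outside the 0-1 range, which is exactly what a manual trim has to deal with. A rough Python sketch (the coefficients are the commonly published BT.2020→BT.709 values; the test pixel is made up):

```python
# Standard linear BT.2020 -> BT.709 3x3 conversion matrix
M_2020_TO_709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def to_rec709(rgb2020):
    """Matrix-multiply a linear Rec.2020 pixel into Rec.709 primaries."""
    return tuple(
        sum(M_2020_TO_709[r][c] * rgb2020[c] for c in range(3))
        for r in range(3)
    )

saturated_red = (0.9, 0.05, 0.05)   # close to a pure Rec.2020 red
converted = to_rec709(saturated_red)
print(converted)  # red exceeds 1.0 and green goes negative: out of gamut
```

A blind transform would simply clip those values; a trim pass lets you decide how that saturated red should actually land in the smaller space.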

cheers
Paul

7 hours ago, Chris Whitten said:

DNG or MOV?

DNG. 

 

4 hours ago, paulinventome said:

If you're talking about the shadow flash half way through then yes. At ISO400 only.

And yes, Sigma are aware of it, and I'm awaiting some data as to whether we can guarantee it only happens at 400.

This is the same for me. But it's only 400 for me, whereas others have seen it at other ISOs...

cheers
Paul

Ok... here's hoping it's only at 400. This isn't the same (or at least I don't think it is) as the continuous shadow level hits/bouncing I experience at a wide range of ISOs. Sigma is entirely shut down right now... so I don't know that anything will be remedied soon.

On 3/30/2020 at 7:45 PM, paulinventome said:

And is that sensor space defined by the matrices in the DNG, or is it built into Resolve specifically for BMD cameras?

I did try to shoot a P3 image off my reference monitor - actually an RGB image in both P3 and 709 space. The idea is to try various methods of displaying it to see if the sensor space is larger than 709. Results so far inconclusive! Too many variables that could mess the results up. If you turn off Resolve colour management then you can choose the space to debayer into. But if you choose P3 then the image is scaled into P3 - correct? If you choose 709 it is scaled into there. So it seems that all of the options scale to fit the selected space.

Can you suggest a workflow that might reveal native gamut?

Resolve uses the matrices in the DNG. "Sensor space" isn't really defined like a normal gamut with 3 primaries and a white point. It's usually described by spectral sensitivities, and if you were to attempt to plot the response on an x/y (or u/v) plot it wouldn't be straight lines connecting 3 points.

You can get an idea by calculating the primaries from the matrices in the DNG, since they describe XYZ-to-sensor-space transforms, and then plotting that. You would want to do it for both illuminants in the DNG metadata and take both into account. I haven't read every post as it's hard to keep up, but I'm curious as to why you want to know?
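That calculation can be sketched in a few lines: invert the XYZ→camera ColorMatrix to get a camera→XYZ transform, then normalise each column to an xy chromaticity. The matrix values below are purely illustrative, not a real fp ColorMatrix tag:

```python
# Sketch: derive approximate sensor "primaries" from a DNG ColorMatrix.
# The ColorMatrix tag maps XYZ -> camera, so its inverse maps
# camera -> XYZ; each column of the inverse is one primary in XYZ.

def inverse3(m):
    """Invert a 3x3 matrix via the cofactor/adjugate formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    return [
        [(e*i - f*h) / det, (c*h - b*i) / det, (b*f - c*e) / det],
        [(f*g - d*i) / det, (a*i - c*g) / det, (c*d - a*f) / det],
        [(d*h - e*g) / det, (b*g - a*h) / det, (a*e - b*d) / det],
    ]

def sensor_primaries(color_matrix):
    """xy chromaticity of each sensor primary from an XYZ->camera matrix."""
    cam_to_xyz = inverse3(color_matrix)
    primaries = []
    for col in range(3):
        X, Y, Z = (cam_to_xyz[row][col] for row in range(3))
        s = X + Y + Z
        primaries.append((X / s, Y / s))
    return primaries

# Illustrative XYZ->camera matrix only - NOT a real fp ColorMatrix tag
cm = [[ 0.9, -0.2, -0.1],
      [-0.3,  1.1,  0.2],
      [ 0.0,  0.1,  0.7]]
print(sensor_primaries(cm))
```

As CaptainHook notes, you'd repeat this for both calibration illuminants in the DNG metadata, since the matrix (and so the plotted triangle) shifts with the illuminant.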

On 3/31/2020 at 11:04 AM, Devon said:

What’s a “universal” color space to work in? Let’s say I wanted to deliver to major movie theaters, but also export for YouTube. Can’t we just work in the largest color space, and while exporting, convert the colors for different destinations (ie: YouTube vs movie theater?)

Depends on how you want to work and what suits, but this is exactly the intention of either Resolve's Colour Management option (rather than YRGB) or using ACES.

18 hours ago, redepicguy said:

Ok... here's hoping it's only at 400. This isn't the same (or at least I don't think it is) as the continuous shadow level hits/bouncing I experience at a wide range of ISOs. Sigma is entirely shut down right now... so I don't know that anything will be remedied soon.

I got a reply from them last week - have they shut down recently?

We need to double check we're talking about the same thing. What I see half way through is an exposure 'pulse' - not a single-frame flash, but more of a pulse. I believe it's an exposure thing in linear space, but when viewed like this it affects the shadows much more. Can you post a direct link to the video, and maybe enable download, so we can go frame by frame through it? Mild pulsing is also very common in compression, so we want to make sure we're not looking at that.

cheers
Paul
