Andrew Reid

Comprehensive Sigma Fp first impressions and interview - Cinema DNG RAW internal recording!


14 hours ago, CaptainHook said:

This is why DNGs from our cameras look different across various apps, because they are free to interpret them as they want.

I'm digging into the 8bit 4K DNGs on the Fp, comparing single frames in Adobe Camera Raw to the 12bit SSD recordings and some 14bit stills from other cameras.

Curiously both the 8bit DNGs in video mode and the Fp's 24 megapixel DNGs in stills mode report as 8bit in Adobe Camera Raw, using Adobe RGB colour space.

You have the choice between 8bit and 16bit.

Why does Adobe default to 8bit colour space for raw?

Where is 12bit, 14bit?

[Screenshot: 8bit-acr.jpg]

[Screenshot: adobe-rgb.jpg]

49 minutes ago, Andrew Reid said:

I'm digging into the 8bit 4K DNGs on the Fp, comparing single frames in Adobe Camera Raw to the 12bit SSD recordings and some 14bit stills from other cameras.

Curiously both the 8bit DNGs in video mode and the Fp's 24 megapixel DNGs in stills mode report as 8bit in Adobe Camera Raw, using Adobe RGB colour space.

You have the choice between 8bit and 16bit.

Why does Adobe default to 8bit colour space for raw?

Where is 12bit, 14bit?

It is not that the files are reported as such. Rather, this is the internal working precision of ACR. Just set it to 16 bits and you are good.

2 minutes ago, cpc said:

It is not that the files are reported as such. Rather, this is the internal working precision of ACR. Just set it to 16 bits and you are good.

Basically what it does with 8-bit cDNG is interpolate them. I'd still agree on working in 16 bit to avoid processing artifacts inside ACR (and Photoshop/After Effects, for that matter).
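To illustrate cpc's point, here is a minimal NumPy sketch (my own toy example, not ACR's actual pipeline) of why a 16-bit working space matters even for 8-bit sources: the same exposure pull/push round-trips cleanly at 16 bits but destroys codes at 8 bits.

```python
import numpy as np

# Toy 8-bit source ramp, standing in for an 8-bit cDNG frame.
src = np.arange(256, dtype=np.uint8)

# Pull exposure down and back up, rounding to 8 bits at each step.
down8 = np.round(src * 0.3).astype(np.uint8)
back8 = np.clip(np.round(down8 / 0.3), 0, 255).astype(np.uint8)

# The same edit in a 16-bit working space (what ACR does at 16 bits):
wide = src.astype(np.uint16) * 257              # promote 8 -> 16 bit
down16 = np.round(wide * 0.3)
back16 = np.clip(np.round(down16 / 0.3), 0, 65535)
back8_from16 = np.round(back16 / 257).astype(np.uint8)

err8 = int(np.abs(back8.astype(int) - src.astype(int)).max())
err16 = int(np.abs(back8_from16.astype(int) - src.astype(int)).max())
# err8 > 0: codes collapse and bands appear; err16 == 0: the ramp survives.
```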

15 hours ago, CaptainHook said:

Resolve has support through RCM (Resolve Colour Management) and CST for all major camera manufacturers' log curves and gamuts, so you could interpret the DNGs using RCM, for instance, into any gamma/gamut correctly for the given white balance setting if you would prefer. But that's why I recommend the Rec.709 approach in Resolve for the Sigma DNGs (or RCM, as mentioned, would also work). One major issue with DNG for us was that there is no standard way defined to interpret DNGs into different colour spaces or even ISOs through metadata, which was a big focus for Blackmagic RAW*. This is why DNGs from our cameras look different across various apps, because they are free to interpret them as they want. So we had the same problem with other apps and our DNGs, but it was worse, as most other apps don't have an equivalent to RCM or CST to manage the transform into Blackmagic colour spaces.

*That's ignoring how slow DNG is to decode (relatively) and that even at 4:1 compression, DNGs from our cameras had image artefacts in certain scenes and situations we weren't happy with (5:1 was evaluated to be not useable/releasable to the public), which is a real problem as resolution and frame rates increase, and is even a problem for recording 6K to affordable media (or even 4.6K at high frame rates). Even if we hadn't been put into the situation of dropping DNG when we did, IMHO it's unlikely it would be a viable solution long term with where things are heading and the amount of complaints about DNG we got/saw and had ourselves. It was great when the first Blackmagic cameras were HD (2.4K) and 30fps, but given that even Adobe themselves seemed to have no interest in maintaining or developing it further, its limitations now can't be ignored.

I think there are a few upgrades BM could have done to DNG. First, perhaps introduce MXF containers, as per the CinemaDNG spec. This would have decreased filesystem load, reduced wasted storage space, and eliminated the need to replicate metadata in each frame. With the rare ability to produce both cameras and Resolve, this sounded like a no-brainer. Also, there were some choices which handicapped BM cameras in terms of achievable file sizes. First, the non-linear curve used was not friendly to lossless compression, which basically resulted in ~20-25% bigger lossless files from the start (~1.5:1 vs. ~2:1). Then, BM essentially had the freedom to introduce a lossy codec of their choice; it was a proprietary thing anyway, even though it was wrapped in DNG clothes. Even with the codec BM did choose, if they hadn't treated the Bayer image as monochrome during compression, they would likely have been able to push 5:1 with acceptable quality (as it stands, 4:1 could drop to as low as 7 bits of actual precision, never mind the nominal tag of 12 bits). And finally, BM could have ditched byte stuffing in lossy modes (remember, it is essentially a proprietary thing, you could have done anything!), which would have boosted decoding (and encoding) speed significantly.

Of course, reproducibility of results across apps is a valid argument, and is something that the likes of Arri bet on from the beginning. But you need an SDK for this anyway, and it is by no means bound to the actual format. To promote a uniform look, BM could have done an SDK/plugins/whatever for the BM DNG image, the same way they did with BRAW.

5 hours ago, Andrew Reid said:

I'm digging into the 8bit 4K DNGs on the Fp, comparing single frames in Adobe Camera Raw to the 12bit SSD recordings and some 14bit stills from other cameras.

 

I'd really recommend looking at RawDigger; it will let you open the files and see what is really inside, more so than going through Adobe/Resolve.

It would be awesome to be able to download some samples.

cheers
Paul

5 hours ago, cpc said:

I think there are a few upgrades BM could have done to DNG. First, perhaps introduce MXF containers, as per the CinemaDNG spec. This would have decreased filesystem load, reduced wasted storage space, and eliminated the need to replicate metadata in each frame. With the rare ability to produce both cameras and Resolve, this sounded like a no-brainer. Also, there were some choices which handicapped BM cameras in terms of achievable file sizes. First, the non-linear curve used was not friendly to lossless compression, which basically resulted in ~20-25% bigger lossless files from the start (~1.5:1 vs. ~2:1). Then, BM essentially had the freedom to introduce a lossy codec of their choice; it was a proprietary thing anyway, even though it was wrapped in DNG clothes. Even with the codec BM did choose, if they hadn't treated the Bayer image as monochrome during compression, they would likely have been able to push 5:1 with acceptable quality (as it stands, 4:1 could drop to as low as 7 bits of actual precision, never mind the nominal tag of 12 bits). And finally, BM could have ditched byte stuffing in lossy modes (remember, it is essentially a proprietary thing, you could have done anything!), which would have boosted decoding (and encoding) speed significantly.

Of course, reproducibility of results across apps is a valid argument, and is something that the likes of Arri bet on from the beginning. But you need an SDK for this anyway, and it is by no means bound to the actual format. To promote a uniform look, BM could have done an SDK/plugins/whatever for the BM DNG image, the same way they did with BRAW.

Many of your points don't really take in the big picture that we have (which is understandable) or consider hardware implementations in real time on a camera - for instance, implying we shouldn't have encoded the DNGs with a non-linear curve - as for the 4.6K with 15 stops that would mean 16bit linear, which would negate the 20-25% savings you mention (uncompressed ~17.42MB per frame for 12bit non-linear vs ~23.22MB per frame for 16bit linear), needing a completely different and much, MUCH more expensive hardware design for the camera for basically the same file size. Doing this stuff in camera is completely different to desktop applications, where even saving a single bit can make a HUGE difference to what you can actually do because of the hardware processing and other bandwidth restrictions (e.g. to keep it somewhat relevant to this thread, look at the bit depth restrictions in the Sigma DNGs). It's also not a "proprietary" byte stuffing; it's just the standard JPEG extension spec for 12bit, since DNG only specified JPEG for up to 10bit lossy compression and we needed higher bit depth than that. Blackmagic always tries to use existing standards when possible, like using what's in the available JPEG spec rather than doing 'anything'. We also evaluated many things for compression to get better quality, and we ultimately ended up with Blackmagic RAW. 😉
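For what it's worth, the per-frame arithmetic is easy to sanity-check. The sketch below assumes a 4608x2592 photosite count for a 4.6K-class Bayer sensor (an assumption; the exact figures quoted above will depend on the real active area and per-frame metadata):

```python
# Rough per-frame raw payload arithmetic. The 4608x2592 photosite count
# below is an assumption for a 4.6K-class Bayer sensor; real files add
# headers and metadata on top of the raw payload.
def frame_mb(width, height, bits_per_photosite):
    return width * height * bits_per_photosite / 8 / 1e6  # decimal MB

w, h = 4608, 2592
mb_12bit = frame_mb(w, h, 12)   # ~17.9 MB per frame
mb_16bit = frame_mb(w, h, 16)   # ~23.9 MB per frame
ratio = mb_16bit / mb_12bit     # 16/12: linear encoding eats the savings
```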

Also, if we had spent any more time on DNG instead of spending the last 3 or so years developing Blackmagic RAW, we would likely currently be in the position of having no RAW option at all on our cameras, because we would have had to remove it anyway. Or if we had spent our limited resources trying to do and manage both, then we would still just be left with Blackmagic RAW, except we would have wasted time/resources developing DNG further only to have had to remove it, and we wouldn't have had time for some of the other features we have done. That was obvious to me even when I started at Blackmagic just over 5 years ago. And as you start to list the things you want to improve or change with DNG, you end up realizing it's not DNG anymore, and you end up with a new codec anyway.

So I'm not sure if you're actually suggesting that we SHOULD have done any of that, or just that in "theory" we could have (where the truth on camera hardware is quite different) - because we could of course do or try many things that wouldn't make business or long-term sense (or even short term) - but you might just be listing 'possibilities' like many other people do, in which case my response is unnecessary. Especially in a Sigma thread. 🙂

6 hours ago, cpc said:

I think there are a few upgrades BM could have done to DNG. First, perhaps introduce MXF containers, as per the CinemaDNG spec. This would have decreased filesystem load, reduced wasted storage space, and eliminated the need to replicate metadata in each frame. With the rare ability to produce both cameras and Resolve, this sounded like a no-brainer.

I prefer file sequences, so I'm fine with what the Sigma fp is doing there. Fingers crossed DNG compression gets added eventually. Coming from using software like Nuke and Resolve in film production: all shows are ingested from raw or ProRes as DPX or OpenEXR and go all the way through to DI that way (editing is DNx in Avid). Indie features and second-tier TV shows are typically ProRes or Avid DNx, and I understand at the DSLR level it's all movie formats, not frames. It's only at the DSLR level that anyone is actually trying to edit raw. IMO it would be like trying to edit with the original camera negative on a Steenbeck.

Maybe we can just say that it's a cultural difference. But discrete frames allow the following:

- Varying metadata can be stored per frame.

- The whole file isn't hosed because of one bad frame.

- File transfer is easier: way less load on the network, and this is a biggie with transfer to/from the cloud.

Having the option of MXF is fine though. I remember testing a CinemaDNG mxf when the spec was just released, but no-one actually used it.

2 hours ago, CaptainHook said:

Many of your points don't really take in the big picture that we have (which is understandable) or consider hardware implementations in real time on a camera - for instance, implying we shouldn't have encoded the DNGs with a non-linear curve - as for the 4.6K with 15 stops that would mean 16bit linear, which would negate the 20-25% savings you mention (uncompressed ~17.42MB per frame for 12bit non-linear vs ~23.22MB per frame for 16bit linear), needing a completely different and much, MUCH more expensive hardware design for the camera for basically the same file size. Doing this stuff in camera is completely different to desktop applications, where even saving a single bit can make a HUGE difference to what you can actually do because of the hardware processing and other bandwidth restrictions (e.g. to keep it somewhat relevant to this thread, look at the bit depth restrictions in the Sigma DNGs). It's also not a "proprietary" byte stuffing; it's just the standard JPEG extension spec for 12bit, since DNG only specified JPEG for up to 10bit lossy compression and we needed higher bit depth than that. Blackmagic always tries to use existing standards when possible, like using what's in the available JPEG spec rather than doing 'anything'. We also evaluated many things for compression to get better quality, and we ultimately ended up with Blackmagic RAW. 😉

You seem to have misunderstood this part. :)

What I am saying is that the choice of the actual curve handicapped the results. It is a curve with lots of holes (unused values) at the low end, which is good for quantization (e.g. quantized DCT) but bad for entropy-coded predictors (as in lossless DNG). Also, my point was that if you ditched byte stuffing altogether (which is a trivial mod), without changing anything else, this would speed up both decoding and encoding, as well as give a small bonus in size. For all practical purposes, lossy BM DNG was proprietary, because there was zero publicly available information about it, so BM was in a position to simplify and optimize.
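A toy illustration of the "holes" argument (the curve below is a made-up sqrt-style encode, not Blackmagic's actual transfer function): a steep low end maps consecutive linear values onto codes with gaps between them, which inflates the residuals a lossless DNG-style predictor has to entropy-code.

```python
import numpy as np

# Toy non-linear encode (NOT the actual BM curve): map a 16-bit linear
# signal into 12-bit codes with a sqrt-like law. At the low end the
# mapping is steeper than 1:1, so consecutive linear values land on
# codes with unused values ("holes") between them.
linear = np.arange(0, 1 << 16, dtype=np.float64)
codes = np.round(np.sqrt(linear / 65535.0) * 4095).astype(np.int32)

low_end = np.unique(codes[:256])   # codes used by the first 256 linear values
gaps = np.diff(low_end)            # spacing between adjacent used codes
# gaps > 1 mean unused code values. A DPCM-style predictor then sees
# residuals jumping in multiples of the gap, which entropy codes poorly.
```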

2 hours ago, CaptainHook said:

Also, if we had spent any more time on DNG instead of spending the last 3 or so years developing Blackmagic RAW, we would likely currently be in the position of having no RAW option at all on our cameras, because we would have had to remove it anyway. Or if we had spent our limited resources trying to do and manage both, then we would still just be left with Blackmagic RAW, except we would have wasted time/resources developing DNG further only to have had to remove it, and we wouldn't have had time for some of the other features we have done. That was obvious to me even when I started at Blackmagic just over 5 years ago. And as you start to list the things you want to improve or change with DNG, you end up realizing it's not DNG anymore, and you end up with a new codec anyway.

So I'm not sure if you're actually suggesting that we SHOULD have done any of that, or just that in "theory" we could have (where the truth on camera hardware is quite different) - because we could of course do or try many things that wouldn't make business or long-term sense (or even short term) - but you might just be listing 'possibilities' like many other people do, in which case my response is unnecessary. Especially in a Sigma thread. 🙂

Of course I am just theorizing a parallel future. Certainly in hindsight, having an alternative ready would have helped tremendously when the patent thing came up. :)

 

1 hour ago, Llaasseerr said:

I prefer file sequences, so I'm fine with what the Sigma fp is doing there. Fingers crossed DNG compression gets added eventually. Coming from using software like Nuke and Resolve in film production: all shows are ingested from raw or ProRes as DPX or OpenEXR and go all the way through to DI that way (editing is DNx in Avid). Indie features and second-tier TV shows are typically ProRes or Avid DNx, and I understand at the DSLR level it's all movie formats, not frames. It's only at the DSLR level that anyone is actually trying to edit raw. IMO it would be like trying to edit with the original camera negative on a Steenbeck.

Maybe we can just say that it's a cultural difference. But discrete frames allow the following:

- Varying metadata can be stored per frame.

- The whole file isn't hosed because of one bad frame.

- File transfer is easier: way less load on the network, and this is a biggie with transfer to/from the cloud.

Having the option of MXF is fine though. I remember testing a CinemaDNG mxf when the spec was just released, but no-one actually used it.

Sigma have no option but to go the sequence way since there is no support for MXF CinemaDNG anywhere. BM were in the unique position of making cameras AND Resolve.

Certainly there are some advantages to discrete frames; the biggest one might be that you can easily swap bad frames. I can't think of a case where you need frame-specific metadata (other than time codes and stuff), but you can have this in the MXF version of the CinemaDNG spec too. Also, with frame indexing you can have your file working fine with bad frames in it. And your third point I am not sure I understand: individual frames equal more bytes equal more load on the network; there is no way around this. And certainly reading more files puts more load on the file system, as you need to access the file index for every frame. :)

On a related note, there are benefits to keeping the raw files all the way through DI: raw controls can actually be used creatively in grading, and raw files are significantly smaller than DPX frames. And if you edit in Resolve, you might as well edit raw for anything that doesn't need VFX, as long as your hardware can handle the resolution. After all, working on the raw base all the way is the beauty of the non-destructive raw workflow.

2 hours ago, cpc said:

Sigma have no option but to go the sequence way since there is no support for MXF CinemaDNG anywhere. BM were in the unique position of making cameras AND Resolve.

Certainly there are some advantages to discrete frames; the biggest one might be that you can easily swap bad frames. I can't think of a case where you need frame specific metadata (other than time codes and stuff), but you can have this in the MXF version of the CinemaDNG spec too. Also, with frame indexing you can have your file working fine with bad frames in it.

 

Per frame metadata is a big part of feature film production and is not going anywhere. But at the lower end of the market it is probably not that relevant.

Quote

 

And your third point I am not sure I understand: individual frames equal more bytes equal more load on the network; there is no way around this. And certainly reading more files puts more load on the file system, as you need to access the file index for every frame. :)

 

Have you actually tried it? There are many reasons that VFX and DI facilities work with frames. A single frame is much smaller to read across the network than a large movie file, and the software will display a random frame much faster when scrubbing around - not factoring in CPU overhead for things like compression or debayering. As for my example of uploading to the cloud, a multithreaded command-line upload of a frame sequence is much faster than a movie file, and I'm able to stream frames from the cloud to the timeline with a fast enough internet connection. But in a small setup where you are just making your own movies at home, this may all be a moot point.
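The parallel-per-frame transfer pattern described above can be sketched in a few lines of Python. This is a local stand-in, not a real cloud uploader: swap the copy call for an S3/GCS put from your vendor's SDK.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def upload_sequence(src_dir, dst_dir, workers=8):
    """Copy every frame of a sequence in parallel, one task per file.

    shutil.copy2 is a stand-in for a real uploader (e.g. an S3 put).
    Per-frame transfers parallelize and resume trivially, unlike one
    monolithic movie file.
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    frames = sorted(Path(src_dir).glob("*.dng"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda f: shutil.copy2(f, dst / f.name), frames))
    return len(frames)
```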

Quote

On a related note, there are benefits to keeping the raw files all the way through DI: raw controls can actually be used creatively in grading, and raw files are significantly smaller than DPX frames. And if you edit in Resolve, you might as well edit raw for anything that doesn't need VFX, as long as your hardware can handle the resolution. After all, working on the raw base all the way is the beauty of the non-destructive raw workflow.

In a film post production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you are a one person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in Resolve would be possible. Our workflows are very different because of different working environments. And for what you are doing, your way of working may be best for you.

11 hours ago, cpc said:

You seem to have misunderstood this part. :)

Oh, I see what you're trying to say now. Again, there are reasons for the decisions we make where theory and practice in hardware diverge, and you have to make trade-offs to balance one thing against another - more bigger-picture stuff again. This is an area I can't discuss publicly, but I guess what I'll say is: if we could have implemented things that way, or differently, back then, we would have.

And it's not that we didn't know some ways we could improve what we had initially done with DNG (much of it informed by the hardware problems we were solving back then); it just didn't make sense to spend more time on it when we already knew we could do something else that would fit our needs better. Like I said, the problems you describe were solved for us with Blackmagic RAW, where we were able to achieve the image quality we wanted with small file sizes and very fast performance on desktop with highly optimized GPU and CPU decode, the ability to embed multiple 3D LUTs, etc. etc. THAT is a no-brainer to me. 😉

I do understand your point of view, especially as someone who developed a desktop app around DNG, but there are so many more considerations we have that I can't even begin to discuss. Something I've learned being at a company like this is how often other people can't understand some of the decisions some companies make, but I find it much easier now to have an idea of what other considerations likely led them to choose the path they did. It's hard to explain until you've experienced it, but even when I was just beta testing Resolve and then the cameras, I had no idea what actually goes on and the types of decisions and challenges faced.

I see people online almost daily berate other camera manufacturers about things "that should be so obvious, why don't they do it" and I just have to shake my head and shrug because I have a very good idea why the company HASN'T done it or why they DID choose to do something else. I'm sure other companies have a very similar insight into Blackmagic as well, because for the most part we all have similar goals and face similar challenges.

46 minutes ago, nathlas said:

Half nude tragedy

Ok. Let's agree on this.

No one buys those two cameras for AF performance. They both rely on the same technology, which we all know is not efficient.

Let's concentrate on the image quality they can deliver.

 

 

In the YouTube comments, one person asks, "What is happening in the highlights?" The fp does look relatively more blown out as the girl moves into the sunlight.

On 10/25/2019 at 8:01 PM, CaptainHook said:

It's not needed in Resolve, as Sigma have already added the required matrices and linearization table (for their log-encoded versions), so you can convert to ACES as outlined. I can't speak for other apps or workflows though.

 

Are you talking about Sigma adding DNG metadata? Obviously that's great, and I like Resolve's ability to create an IDT on the fly using that data. If it's something else you're talking about, I'd be interested to know. What I was describing is a higher-precision IDT based on spectral sensitivity measurements - not "just" a matrix and a 1D delog LUT. If you look at the rawtoaces GitHub page, the intermediate method is, as you describe, the way Resolve works. It only falls back to that method if the aforementioned spectral sensitivity data is not available.
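For readers following along, the "intermediate" path is roughly: linearize with a 1D curve, then apply a 3x3 matrix into ACES primaries. The sketch below uses placeholder values throughout (neither Sigma's metadata nor rawtoaces' solved matrices).

```python
import numpy as np

# Minimal sketch of the matrix + 1D-LUT style of IDT that Resolve can
# build from DNG metadata. The curve and matrix are placeholders.
def delog(x):
    # Toy log-to-linear curve standing in for the DNG linearization table;
    # maps 0 -> 0 and 1 -> 1.
    return (10.0 ** (np.asarray(x, dtype=np.float64) * 2.0) - 1.0) / 99.0

M = np.array([                 # placeholder camera-RGB -> ACES matrix
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.1, 0.8],
])

def idt(rgb_log):
    lin = delog(rgb_log)       # 1D linearization per channel
    return lin @ M.T           # per-pixel 3x3 transform into ACES

# A spectral-sensitivity IDT (rawtoaces' preferred path) instead solves
# for the matrix from measured camera curves rather than shipped metadata.
```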

On 10/27/2019 at 4:40 AM, Llaasseerr said:

In a film post production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you are a one person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in

Actually, having the RAW files in VFX is very useful; I can develop the RAW in different ways to maximise the keying possibilities, producing images that look awful to the eye and have soft edges, but have better colour separation. So while a lot of pipelines do work the way you say, ideally they shouldn't....

+1 on working with frames, especially EXR. One benefit is that if a render crashes halfway through, you still have the frames from before and can start where you left off. That's a real-world thing!
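That resume-after-crash behaviour falls out of frames being independent files. A hedged sketch (`render_frame` is a hypothetical per-frame renderer, and the `shot.####.exr` naming is just an example):

```python
from pathlib import Path

def render_sequence(out_dir, first, last, render_frame):
    """Render frames [first, last], skipping any already on disk.

    render_frame(frame) -> bytes is a hypothetical per-frame renderer.
    Because each EXR frame is its own file, a crashed job resumes from
    wherever it stopped instead of re-rendering the whole shot.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    rendered = 0
    for frame in range(first, last + 1):
        path = out / f"shot.{frame:04d}.exr"
        if path.exists():          # survivor from the crashed run
            continue
        path.write_bytes(render_frame(frame))
        rendered += 1
    return rendered
```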
 

cheers
Paul

6 hours ago, mat33 said:

I really, really hope this is the spiritual successor to the Digital Bolex.

Agreed. So curious about what Sigma manage to achieve with video resolution on the Foveon version, which could be more like the Bolex reincarnated. Purely from memory, Andrew wrote that the Bolex shoots at ISO 100 and pretty much has to stay there, and one expects a similar video compromise with Foveon, but it might really shine if played to its strengths.

