CaptainHook last won the day on October 24 2019


About CaptainHook

Profile Information

  • Location
    Melbourne, Australia

  1. BMDFilm Gen 1 doesn't touch gamut, so for DNGs what you get is the sensor's native response to colour, which is not suitable for a typical display. You can't really map a sensor's spectral response to just 3 chromaticity points like a typical gamut and show it on a CIE plot either, but people do it anyway to give a rough idea. Most sensors will definitely have a response beyond visible colours, so this is expected. I don't work in the Resolve team, but the CIE scope in Resolve will be plotting our Blackmagic Cinema Camera spectral response as best it can, because it assumes that if you're using that colour space, you're using our camera. It doesn't try to work out a gamut for whatever camera clip you're currently viewing, so the chromaticity points for the gamut you see are based on the original Blackmagic Cinema/Pocket Camera.
When you transform from one space to another, you are scaling values by a 3x3 matrix. There's no algorithm deciding whether a pixel value is within a gamut and leaving it alone depending on the target gamut - it just scales every value by the 3x3 matrix regardless. Resolve does have a ResolveFX plugin, though, that will allow you to selectively decide how to deal with out-of-gamut colours when transforming from a larger space to a smaller one - it's under "Gamut Mapping" in the Colour Space Transform plugin. But this isn't typical 'standard colour science', because generally you want your transforms to be "reversible", and when you start doing non-linear manipulations it gets very difficult to reverse.
It's best to white balance and do highlight recovery in native camera/sensor RGB space. Blackmagic RAW is not white balanced - it's the native sensor response, and we white balance it during decode.
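To illustrate the "scaling values by a 3x3 matrix" point, here is a minimal sketch (not Resolve's actual code). The matrix used is the standard linear Rec.709-to-XYZ (D65) transform; any gamut-to-gamut conversion works the same way with different coefficients:

```python
import numpy as np

# Standard linear Rec.709 -> CIE XYZ (D65) matrix. Any gamut-to-gamut
# transform has this same shape, just with different coefficients.
M_709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def transform(rgb, matrix):
    """Apply a 3x3 colour matrix to an (..., 3) array of linear RGB.

    Note there is no per-pixel "is this in gamut?" decision: every
    pixel is scaled by the same matrix, and results can land outside
    [0, 1] (i.e. out of the target gamut).
    """
    return rgb @ matrix.T

pixel = np.array([1.0, 0.0, 0.0])      # pure Rec.709 red
print(transform(pixel, M_709_TO_XYZ))  # -> [0.4124 0.2126 0.0193]
```

The same function applies to a whole image at once (an H x W x 3 array), since the matrix multiply broadcasts over the leading axes.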
XYZ is okay to white balance in as an alternative if you can't do it in sensor space - this is possible because you can transform to XYZ from any gamut. But if you've already white balanced, you can't do common highlight recovery (in XYZ or otherwise) that relies on unclipped channels, because during white balance you clip the highlights to ensure white = white.
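A small sketch of why white balancing before highlight recovery is a problem. This is illustrative only - the gain values are hypothetical, not from any real camera, and real highlight recovery is more involved than this:

```python
import numpy as np

def white_balance(raw_rgb, gains, clip=True):
    """Per-channel white balance in native sensor RGB.

    `gains` are hypothetical example values, not from any real camera.
    With clip=True the balanced channels are clipped so white = white,
    which throws away the unclipped-channel information that common
    highlight-recovery techniques rely on.
    """
    out = raw_rgb * gains
    return np.clip(out, 0.0, 1.0) if clip else out

# A clipped sensor highlight: green has hit 1.0 but red/blue have not,
# so red/blue still carry recoverable information.
highlight = np.array([0.62, 1.0, 0.55])
gains = np.array([1.9, 1.0, 2.0])  # hypothetical daylight gains

print(white_balance(highlight, gains))             # red/blue scale past 1.0 and clip
print(white_balance(highlight, gains, clip=False)) # recovery info survives
```

After the clipped version, all three channels read 1.0 and a recovery algorithm can no longer tell which channels were genuinely saturated on the sensor.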
  2. "BMD Film" (version 1, which will be applied to non-BMD DNGs) will just apply a log-type gamma curve and not touch colour (gamut). So you will be getting Sigma fp sensor RGB space in terms of gamut, with no colour 'errors' except that a sensor response isn't meaningful on a display. Transforming into a common display space would not be straightforward, though, unless Sigma or someone else provided the correct conversion. So unless you can get something you're happy with by manually correcting the colours, it might be easier and more straightforward to decode into a known defined space like Rec.709 and go from there.
  3. Also head over to our gallery page for some footage downloads. https://www.blackmagicdesign.com/products/blackmagicpocketcinemacamera/gallery
  4. The issue shown at Slashcam is not the same as the crosshatching issue you linked, and it has a different cause. The "solution" of adjusting pan/tilt in Resolve (I would call it a workaround, not a solution) might help hide the crosshatching issue (which was addressed in a firmware update), but it doesn't change the underlying artefacts from the DNG debayer shown at Slashcam.
  5. Ah, gotcha. I'm not really directing this at you but at anyone in general who isn't aware or wants to learn more, and I'm probably being a 'stickler for accuracy' here, but obviously a sensor/camera has no inherent highlight rolloff, as the sensor is linear (as close as practically possible with calibration). So highlight rolloff is mostly down to the dynamic range of the sensor, how the image is exposed, and how the colourist chooses to roll off the highlights. All typical CMOS sensors will have "hard clipping" though, being linear capture devices. I only say this because I see a few people mention 'highlight rolloff' as part of a log curve or colour science or something, when in that respect it is just a by-product of the log curve optimising the dynamic range of the sensor within the container it's being stored in. It's still assumed the user will create their own highlight rolloff in grading (I'm speaking just of RAW and log captures, not "profiles" or looks applied in camera intended for display on Rec.709 devices).
ARRI, for example, have a lot of stops above where they recommend middle grey be exposed - due partly to their very large pixels and the large dynamic range they have - and when a log curve is calculated to map that dynamic range from middle grey to 940 (video white in 10-bit, which is where they map sensor saturation in their log curves), you get a very flat curve at the top as it maps that range. When you flatten contrast that much, it also appears to desaturate. I've seen some people claim that ARRI purposely desaturate their highlights, but if they did that in processing before creating RAW files or LogC ProRes clips, you wouldn't be able to inverse it correctly into linear for ACES workflows etc., because processing like that is non-linear. They possibly do something like that in their LogC to Rec.709 LUTs, but people seem to attribute it to their LogC/RAW files too.
For our cameras we also map sensor saturation at our "native ISO" to 940, but for ISO curves above that we go into "super whites" to make better use of the bit depth available, especially since we deal so much with SDI output (10-bit) and ProRes 422 HQ is common for our customers (also 10-bit). ARRI say ProRes 444 (12-bit) is the minimum for LogC because they don't use the full range available. We may change that in the future, but the caveat would be that you would also need to use 12-bit for best results. In theory you could expose a 10-stop camera/sensor so that you place middle grey at the second-bottom stop, giving you 8 stops to create a very gentle highlight rolloff - you would just have VERY little range for the shadows.
So long story short, the question I would suggest people ask is 'what is the dynamic range like compared to other cameras', as that will really tell you what kind of highlight rolloff YOU (the user) can create for your preference, given how you like to expose, the amount of shadow information you like to retain, tolerance for noise, etc.
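The "flat curve at the top" effect can be seen with a toy pure-log encode. Everything here is illustrative - the grey anchor point and the stop count are made up, and this is not ARRI LogC, BMDFilm, or any manufacturer's actual curve; it only shows how mapping many stops above middle grey up to a fixed code value (940 in 10-bit, normalised below) flattens the top of the curve:

```python
import math

def make_log_curve(stops_above_grey, grey_code=0.41, white_code=940 / 1023):
    """Build a generic pure-log encode: middle grey (linear 0.18) maps
    to grey_code and sensor saturation (stops_above_grey stops over
    grey) maps to white_code. Parameter values are illustrative only,
    not any manufacturer's actual curve.
    """
    saturation = 0.18 * 2 ** stops_above_grey
    # Solve code = a*log2(linear) + b through the two anchor points.
    a = (white_code - grey_code) / (math.log2(saturation) - math.log2(0.18))
    b = grey_code - a * math.log2(0.18)
    return lambda linear: a * math.log2(linear) + b

encode = make_log_curve(stops_above_grey=6.0)

# Each successive stop above grey gets the same share of code values,
# so the more stops you cram above grey, the flatter (lower contrast)
# the top of the curve looks.
for stop in range(7):
    print(stop, round(encode(0.18 * 2 ** stop), 3))
```

With 6 stops above grey, each stop only gets about 0.085 of the normalised code range, which is the flattening (and apparent desaturation) described above.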
  6. That's an oversight on the Resolve team's part - I alerted them to it last week, when a user reported that some options are also missing from the ACES ResolveFX plugin. We are looking into it. It's not specific to the Pocket 4K; it happens with the G2 and other cameras. It's down to how gamut mapping is done - I've seen the same artefact on footage shot with ARRI/RED/Sony/etc. in test files with saturated red highlights clipping, and I've also seen it on publicly broadcast TV shows and movies with huge budgets shooting Alexa etc., where the gamut mapping is not handled by the colourist (it's very common on TV series on Netflix, HBO, Amazon, etc.). FYI, ARRI Wide Gamut is very similar in size and location of primaries to BMD Wide Gamut Gen 4.
The issue is that it's a non-linear transform that addresses the problem, so once you apply that correction to footage, it's no longer easily reversible in standard colour science workflows like ACES. So if you applied it in camera to ProRes footage and then transformed it to another colour space, "technically" it would be wrong. The same applies if it were an option in Blackmagic RAW decode and you took that output to a VFX workflow etc. that requires linear or some other transform. So the user has to be careful about when to use this, but we are looking into making it easier. Otherwise, for problem shots, people can decode into another space and handle the gamut mapping themselves (I personally think this is preferable when possible, so it can be tailored to each shot and target gamut, but it does require the user to have a certain amount of knowledge and time to address it in post, which is not reasonable to assume).
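A toy sketch of the kind of non-linear roll-off used in gamut mapping - not Resolve's actual implementation, and all parameter values here are made up. It compresses "distance past the gamut boundary" style values so out-of-gamut colours fit inside the target, which is exactly the kind of per-pixel non-linear operation that doesn't survive a later matrix transform cleanly:

```python
def compress(distance, limit=1.2, threshold=0.8):
    """Soft-compress an out-of-gamut 'distance' value (toy version).

    Values up to `threshold` pass through untouched; values above it
    are rolled off asymptotically toward 1.0 so they land in gamut.
    Parameters are illustrative only. Real implementations (e.g. the
    ACES reference gamut compression) use similar shaped functions,
    applied per colour component against the gamut boundary.
    """
    if distance <= threshold:
        return distance
    t = (distance - threshold) / (limit - threshold)
    return threshold + (1.0 - threshold) * t / (1.0 + t)

for d in (0.5, 1.0, 1.2):
    print(d, round(compress(d), 4))
```

Because the mapping is non-linear and flattens values together near the boundary, applying it early (in camera, or at decode) bakes it into the image in a way a downstream linear/ACES pipeline can't cleanly undo - which is the caution above.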
  7. What do you mean by Magic Lantern's highlight rolloff then, sorry? I think I missed something.
  8. Incorrect. And we provide our colour science data to 3rd party apps, post houses, and studios like Netflix on request so they have the data to transform to ACES outside of Resolve if needed.
  9. This has nothing to do with the decisions Netflix make regarding our cameras (they ask that we don't publicly reveal the criteria they use internally, so I can't say), and what you attribute to "colour science" is also misunderstood.
  10. I would offer that for matching shots (the majority of most grading work), adjusting white balance in sensor space (or even XYZ as a fallback) and exposure in linear makes a huge difference to how well shots match and flow. I see many other colourists claim they can white balance just as well with the normal primaries controls, but I think if they actually spent considerable time with both approaches instead of just one, they would develop a sensitivity that would make them rethink just how 'good' the results with primaries are. It's one area where I think photographers experienced with dialing in white balance on RAW files develop that sensitivity and eye for how it looks when white balance is transformed more accurately - more so than those in the motion image world who still aren't used to it. I've been a fan of Ian Vertovec from Light Iron for quite a few years, and I was not surprised to learn recently that he likes to do basic adjustments in linear, because there was something in his work that stood out to me (alongside his eye/talent/skill/experience, of course).
  11. I can't speak for Sigma with certainty, but the matrices used in DNGs are generally calculated from spectral sensitivity measurements - rawtoaces is just providing a tool to calculate the matrices from the spectral response data (the Ceres solver they use is just one method to do a regression fit) and then convert the file in the same way as with an IDT. They may prefer to keep the calculated matrices in float without rounding, but you don't need that many decimal points of precision to reduce any error delta down to "insignificant", so in this case rounding is of no real concern. Rawtoaces doing the calculation also removes any preference the manufacturer may have about regression fit techniques, weighting certain colours for higher accuracy over others, the training data used (like skin patches etc.), how to deal with chromatic adaptation, and so on.
This is really the only area where a manufacturer can impart their 'taste' into their "colour science" (apart from maybe picking primaries, which is irrelevant in an ACES workflow unless you want to start from a particular manufacturer's gamut for grading). Noise and other attributes are IMHO not "colour science" but calibration and other image processing decisions. The ideal goal for ACES is to remove the manufacturer's colour science preferences, which leaves the rest down to the metamerism of the sensor and its overall dynamic range - the elements that survive ACES trying to make all sensors look as similar as possible once transformed into the same space. It's also why ACES can't fully succeed at that goal, but to be fair, they do allow the preferences in colour science to remain somewhat intact, since manufacturers can provide their own IDTs. They would prefer all IDTs be created the same way though, as it would get them closer to their goal.
The Academy document they link describes the basic principles of calculating matrices from spectral sensitivity data, but also offers an alternative in the appendix based on capturing known targets (colour charts) under various colour temperatures/source illuminants. I also mentioned both of these on the previous page here.
So an actual IDT generated from spectral response data just contains a matrix to convert from sensor RGB/space to ACES, and a way to transform to linear if needed (it can be an equation or a LUT).
Take a look at an IDT from Sony for Slog3 and SGamut3 - it just has the 3x3 matrix and a log-to-linear equation: https://github.com/ampas/aces-dev/blob/master/transforms/ctl/idt/vendorSupplied/sony/IDT.Sony.SLog3_SGamut3.ctl
Or look at an ARRI one for LogC - a 3x3 matrix (at the bottom of the long file) and a log-to-linear LUT: https://raw.githubusercontent.com/ampas/aces-dev/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3/EI800/IDT.ARRI.Alexa-v3-logC-EI800.ctl
Also notice with ARRI that not only is there a folder for each ISO, but there are multiple IDTs for the raw files for each colour temperature (CCT = correlated colour temperature) - going back to what I described earlier about needing different transforms per colour temperature (DNG processing pipelines handle this automatically if the author uses two matrices in combination with AsShotNeutral tags). ARRI also has different matrices for when you use their internal NDs, as they deemed it necessary to compensate for the colour shift introduced by their NDs: https://github.com/ampas/aces-dev/tree/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3/EI800
If you're really curious, ARRI even provide the Python script they use to calculate the IDTs (it uses pre-calculated matrices for each CCT, likely generated from the spectral response data).
https://github.com/ampas/aces-dev/blob/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3_IDT_maker.py
So a DNG actually already contains the ingredients needed for an IDT - a way to convert to linear (if not already in linear) and the matrix (or matrices) required to transform from sensor space to ACES - most likely calculated from spectral sensitivity/response data (in the DNG case you get to ACES primaries via a standard transform from XYZ). If you have a DNG, you don't need an IDT. The information is there.
Hope that clears up what I was trying to say.
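The "two matrices in combination with AsShotNeutral tags" point can be sketched as follows. Per the DNG spec, a processor blends ColorMatrix1 and ColorMatrix2 (calibrated for two illuminants, commonly Standard A at ~2856K and D65 at ~6504K) linearly in inverse-CCT (mired) space. This is a minimal sketch - the matrices below are placeholders, not real camera data:

```python
import numpy as np

def interpolate_color_matrix(cct, cm1, cct1, cm2, cct2):
    """Blend two DNG colour matrices for an arbitrary white point.

    Per the DNG spec, interpolation is linear in 1/CCT (mired) space
    between the two calibration illuminants. Outside the calibrated
    range, the nearer matrix is used as-is.
    """
    lo, hi = (cm1, cm2) if cct1 < cct2 else (cm2, cm1)
    if cct <= min(cct1, cct2):
        return lo
    if cct >= max(cct1, cct2):
        return hi
    w = (1 / cct - 1 / cct2) / (1 / cct1 - 1 / cct2)
    return w * cm1 + (1 - w) * cm2

# Placeholder matrices standing in for ColorMatrix1 (Standard A,
# ~2856K) and ColorMatrix2 (D65, ~6504K) from a DNG's tags.
cm_a = np.eye(3)
cm_d65 = np.diag([1.1, 1.0, 0.9])

print(interpolate_color_matrix(4000, cm_a, 2856, cm_d65, 6504))
```

The white point itself comes from the AsShotNeutral tag (or a user white-balance pick), which the processor converts to a CCT to feed into this interpolation.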
  12. Oh, I see what you're trying to say now. Again, there are reasons for the decisions we make where theory and practice in hardware diverge and you have to make trade-offs to balance one thing against another - more big-picture stuff again. This is an area I can't discuss publicly, but I guess what I'll say is: if we could have implemented things that way or differently back then, we would have. It's not that we didn't know some ways we could improve what we had initially done with DNG (much of it informed by the hardware problems we were solving back then); it just didn't make sense to spend more time on it when we already knew we could do something else that would fit our needs better. Like I said, the problems you describe were solved for us with Blackmagic RAW, where we were able to achieve the image quality we wanted with small file sizes, very fast performance on desktop with highly optimized GPU and CPU decode, the ability to embed multiple 3D LUTs, etc. THAT is a no-brainer to me.
I do understand your point of view, especially as someone who developed a desktop app around DNG, but there are so many more considerations we have that I can't even begin to discuss. Something I've learned being at a company like this is how often other people can't understand some of the decisions companies make, but I now find it much easier to have an idea of what other considerations likely led them to choose the path they did. It's hard to explain until you've experienced it, but even when I was just beta testing Resolve and then the cameras, I had no idea what actually goes on and the types of decisions and challenges faced. I see people online almost daily berate other camera manufacturers about things "that should be so obvious, why don't they do it", and I just have to shake my head and shrug, because I have a very good idea why the company HASN'T done it or why they DID choose to do something else.
I'm sure other companies have a very similar insight into Blackmagic as well, because for the most part we all have similar goals and face similar challenges.
  13. Many of your points don't really take in the big picture that we have (which is understandable) or consider hardware implementations in real time on a camera - for instance, implying we shouldn't have encoded the DNGs with a non-linear curve. For the 4.6K with 15 stops, that would mean 16-bit linear, which would negate the 20-25% savings you mention (roughly 17.42MB per frame uncompressed for 12-bit non-linear vs roughly 23.22MB per frame for 16-bit linear), requiring a completely different and much, MUCH more expensive hardware design for the camera for basically the same file size. Doing this stuff in camera is completely different to desktop applications - even saving a single bit can make a HUGE difference to what you can actually do, because of the hardware processing and other bandwidth restrictions (e.g., to keep it somewhat relevant to this thread, look at the bit depth restrictions in the Sigma DNGs).
It's also not a "proprietary" byte stuffing - it's just the standard JPEG extension spec for 12-bit, since DNG only specified JPEG for up to 10-bit lossy compression and we needed a higher bit depth than that. Blackmagic always tries to use existing standards when possible, like using what's in the available JPEG spec rather than doing 'anything'. We also evaluated many things for compression to get better quality, and we ultimately ended up with Blackmagic RAW.
Also, if we had spent any more time on DNG instead of spending the last 3 or so years developing Blackmagic RAW, we would likely currently be in the position of having no RAW option at all on our cameras, because we would have had to remove it anyway. Or if we had spent our limited resources trying to do and manage both, we would still just be left with Blackmagic RAW, except we would have wasted time/resources developing DNG further only to have had to remove it, and we wouldn't have had time for some of the other features we have done. That was obvious to me even when I started at Blackmagic just over 5 years ago.
And as you start to list the things you want to improve or change about DNG, you end up realizing it's not DNG anymore, and you end up with a new codec anyway. So I'm not sure if you're actually suggesting that we SHOULD have done any of that, or just that in "theory" we could have (where the truth on camera hardware is quite different) - because we could of course do or try many things that wouldn't make business or long-term sense (or even short-term). But you might just be listing 'possibilities', like many other people do, in which case my response is unnecessary. Especially in a Sigma thread.
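The per-frame figures above can be sanity-checked with simple arithmetic. This is a sketch under stated assumptions - a 4608x2592 photosite array is assumed for the '4.6K' sensor purely for illustration, and real recorded dimensions, padding, and container overhead will shift the exact numbers:

```python
def frame_bytes(width, height, bits_per_photosite):
    """Uncompressed Bayer raw frame size: one value per photosite."""
    return width * height * bits_per_photosite / 8

# Assumed photosite dimensions for illustration only.
for bits in (12, 16):
    mb = frame_bytes(4608, 2592, bits) / 1e6
    print(f"{bits}-bit: ~{mb:.1f} MB per frame")
```

The ratio is what matters: 16-bit linear storage costs 16/12 = 1.33x the bandwidth of 12-bit non-linear for the same photosite count, which is the savings the curve encoding buys back.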
  14. It's not needed in Resolve, as Sigma have already added the required matrices and linearization table (for their log-encoded versions), so you can convert to ACES as outlined. I can't speak for other apps or workflows though. Resolve has support through RCM (Resolve Colour Management) and CST for all major camera manufacturers' log curves and gamuts, so you could interpret the DNGs using RCM, for instance, into any gamma/gamut correctly for the given white balance setting if you would prefer. That's why I recommend the Rec.709 approach in Resolve for the Sigma DNGs (or RCM, as mentioned, would also work).
One major issue with DNG for us was that there is no standard way defined to interpret DNGs into different colour spaces or even ISOs through metadata, which was a big focus for Blackmagic RAW*. This is why DNGs from our cameras look different across various apps - they are free to interpret them as they want. So we had the same problem with other apps and our DNGs, but it was worse, as most other apps don't have an equivalent to RCM or CST to manage the transform into Blackmagic colour spaces.
*That's ignoring how slow DNG is to decode (relatively) and that even at 4:1 compression, DNGs from our cameras had image artefacts in certain scenes and situations we weren't happy with (5:1 was evaluated to be not usable/releasable to the public), which is a real problem as resolution and frame rates increase, and is even a problem for recording 6K to affordable media (or even 4.6K at high frame rates). Even if we hadn't been put in the situation of dropping DNG when we did, IMHO it's unlikely it would have been a good, viable solution long term with where things are heading and the amount of complaints about DNG we got/saw and had ourselves. It was great when the first Blackmagic cameras were HD (2.4K) and 30fps, but when even Adobe themselves seem to have no interest in maintaining or developing it further, its limitations now can't be ignored.