CaptainHook

Members · 69 posts

Reputation Activity

  1. Like
    CaptainHook got a reaction from leslie in Sigma Fp review and interview / Cinema DNG RAW   
    Ah gotcha.

    I'm not really directing this at you, but at anyone in general who isn't aware or wants to learn more, and I'm probably being a 'stickler for accuracy' here, but obviously a sensor/camera has no inherent highlight rolloff: the sensor is linear (as close as practically possible with calibration), so highlight rolloff comes down mostly to the dynamic range of the sensor, how the image is exposed, and how the colourist chooses to roll off the highlights. All typical CMOS sensors will "hard clip" though, being linear capture devices.
    I only say this because I see a few people mention 'highlight rolloff' as part of a log curve or colour science or something, when in that respect it is just a by-product of the log curve optimising the dynamic range of the sensor for the container it's being stored in. It's still assumed the user will create their own highlight rolloff in grading (I'm speaking just of RAW and log captures, not "profiles" or looks applied in camera intended for display on Rec.709 devices).

    ARRI for example have a lot of stops above where they recommend middle grey be exposed - due partly to their very large pixels and the large dynamic range they have - and when a log curve is calculated to map that dynamic range from middle grey up to 940 (video white in 10bit, which is where they map sensor saturation in their log curves), you get a very flat curve at the top as it maps that range. When you flatten contrast that much it also appears to desaturate. I've seen some people claim ARRI purposely desaturate their highlights, but if they did that in processing before creating RAW files or LogC ProRes clips, you wouldn't be able to invert it correctly back to linear for ACES workflows etc., because processing like that is non-linear. They possibly do something like that for their LogC to Rec709 LUTs, but people seem to attribute it to their LogC/RAW files too.

    For our cameras we also map sensor saturation at our "native ISO" to 940, but for ISO curves above that we go into "super whites" to make better use of the bit depth available, especially since we deal so much with SDI output (10bit), and ProRes 422HQ (10bit also) is common for our customers. ARRI say ProRes 444 (12bit) is the minimum for LogC because they don't use the full range available. We may change that in the future, but the caveat would be that you would also need to use 12bit for best results.
    In theory you could expose a 10 stop camera/sensor so that you place middle grey at the second stop from the bottom, giving you 8 stops to create a very gentle highlight rolloff. You would just have VERY little range for the shadows.

    So long story short, the question I would suggest people ask is 'what is the dynamic range like compared to other cameras', as that will really tell you what kind of highlight roll off YOU (the user) can create for your preference, given how you like to expose, the amount of shadow information you like to retain, your tolerance for noise, etc.
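
    To make that mapping concrete, here's a minimal sketch of a generic 10-bit log encode - my own toy curve, not ARRI's or Blackmagic's actual maths - that places sensor saturation at code 940, so you can see how more stops above middle grey flatten the top of the curve:

    import math

    def stops_to_code(linear, mid_grey=0.18, stops_above=7.0, stops_below=8.0,
                      code_white=940, code_black=64):
        """Toy 10-bit log mapping: place sensor saturation (mid_grey * 2**stops_above)
        at code_white (940) and spread the captured range evenly in stops.
        Purely illustrative - not any camera's real log curve."""
        lin = max(linear, mid_grey * 2.0 ** -stops_below)        # clamp at the bottom of the range
        stops = math.log2(lin / mid_grey)                        # stops relative to middle grey
        t = (stops + stops_below) / (stops_above + stops_below)  # 0..1 across the captured range
        t = min(max(t, 0.0), 1.0)
        return round(code_black + t * (code_white - code_black))

    # Middle grey vs. sensor clip for a hypothetical camera with 7 stops above grey:
    print(stops_to_code(0.18))              # middle grey lands well below 940
    print(stops_to_code(0.18 * 2 ** 7))     # sensor saturation -> 940
    # The more stops a sensor has above grey, the fewer code values each highlight
    # stop gets, i.e. the flatter (and apparently more desaturated) the top of the curve.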
  2. Like
    CaptainHook got a reaction from Jonathan422 in Sigma Fp review and interview / Cinema DNG RAW   
  3. Like
    CaptainHook got a reaction from deezid in Panasonic S1H review / hands-on - a true 6K full frame cinema camera   
    That's an oversight on the Resolve team's part; I alerted them to it last week when a user reported that some options are also missing from the ACES ResolveFX plugin.
    We are looking into it. It's not specific to the Pocket 4K, it happens with the G2 and other cameras. It comes down to how gamut mapping is done - I've seen the same artefact on footage shot with ARRI/Red/Sony/etc. in test files with saturated red highlights clipping, and I've also seen it on publicly broadcast TV shows and movies with huge budgets shooting Alexa etc., where the gamut mapping is not handled by the colourist (it's very common in TV series on Netflix, HBO, Amazon, etc.). FYI, ARRI Wide Gamut is very similar in size and location of primaries to BMD Wide Gamut Gen 4.

    The issue is that it's a non-linear transform that addresses the problem, so if you apply that correction to footage it is no longer easily reversible in standard colour science workflows like ACES. So if you applied it in camera to ProRes footage and then transformed it to another colour space, "technically" it would be wrong. Same if it were an option in Blackmagic RAW decode and you took that output into a VFX workflow etc. that requires linear or some other transform. So the user has to be careful about when to use this. But we are looking into it to make it easier. Otherwise, for problem shots, people can decode into another space and handle the gamut mapping themselves (I personally think this is preferable when possible, so it can be tailored to each shot and target gamut, but it does require the user to have a certain amount of knowledge and time to address it in post, which is not reasonable to assume).
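
    As a rough illustration of why a fix like that isn't cleanly reversible, here's a toy sketch (my own example, not the Resolve or ACES gamut mapping algorithm) of a soft highlight/gamut compression: once two different input values are squeezed onto (nearly) the same output value, no later transform can recover the original linear data.

    def soft_compress(rgb, threshold=0.8, limit=1.2):
        """Toy compression: channel values between `threshold` and `limit` are
        smoothly rolled off, and values above `limit` share the same output.
        Illustrative only - the point is that it is not invertible."""
        out = []
        for v in rgb:
            if v <= threshold:
                out.append(v)
            else:
                # normalised position above the threshold, squeezed with a simple
                # rational rolloff so the curve flattens as v approaches `limit`
                x = (min(v, limit) - threshold) / (limit - threshold)
                out.append(threshold + (1.0 - threshold) * (x / (1.0 + x)))
        return out

    # A clipped saturated red highlight and an even hotter one collapse to the same
    # result, which is why baking this into footage breaks exact inversion for ACES:
    print(soft_compress([1.2, 0.2, 0.1]))
    print(soft_compress([1.5, 0.2, 0.1]))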
  4. Like
    CaptainHook got a reaction from The ghost of squig in Sigma Fp review and interview / Cinema DNG RAW   
    What do you mean by Magic Lantern's highlight rolloff then, sorry? I think I missed something.
  5. Like
    CaptainHook got a reaction from Jonathan422 in Panasonic S1H review / hands-on - a true 6K full frame cinema camera   
    Incorrect.

    And we provide our colour science data to 3rd party apps, post houses, and studios like Netflix on request so they have the data to transform to ACES outside of Resolve if needed.
  6. Like
    CaptainHook got a reaction from Adept in Sigma Fp review and interview / Cinema DNG RAW   
    I would offer that for matching shots (the majority of most grading work), adjusting white balance in sensor space (or even XYZ as a fallback) and exposure in linear makes a huge difference to how well shots match and flow. I see many colourists claim they can white balance just as well with the normal primaries controls, but I think if they actually spent considerable time with both approaches instead of just one, they would develop a sensitivity that would make them rethink just how 'good' the results with primaries are. It's one area where I think photographers experienced with dialling in white balance on RAW files develop that sensitivity and eye for how it looks when white balance is transformed more accurately - more so than those in the motion image world, who still aren't used to it.

    I've been a fan of Ian Vertovec from Light Iron for quite a few years, and I was not surprised to learn recently that he likes to do basic adjustments in linear, because there was something in his work that stood out to me (along with his eye/talent/skill/experience, of course).
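
    A minimal sketch of the difference being described, under my own simplifying assumptions: white balance applied as per-channel gains on linear sensor data neutralises a grey patch exactly, while the same gains applied after a display gamma has been baked in no longer do, because the gamma changes the ratios between channels differently at different levels.

    def wb_linear(rgb_linear, gains):
        """White balance as per-channel gains on linear (scene-referred) data."""
        return [c * g for c, g in zip(rgb_linear, gains)]

    def wb_after_gamma(rgb_linear, gains, gamma=2.4):
        """The same gains applied after a display gamma has been baked in."""
        encoded = [c ** (1.0 / gamma) for c in rgb_linear]
        return [c * g for c, g in zip(encoded, gains)]

    # A neutral grey patch lit by warm light (R > G > B in linear):
    patch = [0.24, 0.18, 0.12]
    gains = [0.18 / 0.24, 1.0, 0.18 / 0.12]   # gains that neutralise it in linear
    print(wb_linear(patch, gains))            # -> [0.18, 0.18, 0.18], truly neutral
    print(wb_after_gamma(patch, gains))       # -> channels no longer match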
  7. Like
    CaptainHook got a reaction from ntblowz in Panasonic S1H review / hands-on - a true 6K full frame cinema camera   
    This has nothing to do with the decisions made by Netflix regarding our cameras (they ask that we don't reveal publicly the criteria they use internally, so I can't say), and what you attribute to "colour science" is also misunderstood.
  8. Like
    CaptainHook got a reaction from paulinventome in Sigma Fp review and interview / Cinema DNG RAW   
  9. Like
    CaptainHook got a reaction from Adept in Sigma Fp review and interview / Cinema DNG RAW   
    I can't speak for Sigma with certainty, but the matrices used in DNGs are generally calculated from spectral sensitivity measurements - rawtoaces just provides a tool to calculate the matrices from the spectral response data (the Ceres solver they use is just one method of doing a regression fit) and then convert the file in the same way as with an IDT. They may prefer to keep the calculated matrices in float without rounding, but you don't need that many decimal points of precision to reduce any error delta down to "insignificant", so rounding is of no real concern here. Rawtoaces doing the calculation also removes any preferences the manufacturer may have about regression fit techniques, about weighting certain colours for higher accuracy over others and the training data used (like skin patches etc.), about how to deal with chromatic adaptation, and so on. This is really the only area where a manufacturer can impart their 'taste' into their "colour science" (apart from maybe picking primaries, which is irrelevant in an ACES workflow unless you want to start from a particular manufacturer's gamut for grading). Noise and other attributes are IMHO not "colour science" but calibration and other image processing decisions. The ideal goal for ACES is to remove the manufacturer's colour science preferences, which leaves the rest down to the metamerism of the sensor and its overall dynamic range - the elements that survive ACES trying to make all sensors look as similar as possible once transformed into the same space. It's also why ACES can't be fully successful at that goal, though to be fair they do allow colour science preferences to remain somewhat intact, since manufacturers can provide their own IDTs. But they would prefer all IDTs be created the same way, as that would get them closer to their goal.

    The Academy document they link describes the basic principles of calculating matrices from spectral sensitivity data, but it also offers an alternative in the appendix based on capturing known targets (colour charts) under various colour temperatures/source illuminants. I also mentioned both of these on the previous page here. So an actual IDT generated from spectral response data just contains a matrix to convert from sensor RGB/space to ACES, and a way to transform to linear if needed (it can be an equation or a LUT).

    Take a look at an IDT from Sony for Slog3 and SGamut3 - it just has the 3x3 matrix and a log to linear equation: 
    https://github.com/ampas/aces-dev/blob/master/transforms/ctl/idt/vendorSupplied/sony/IDT.Sony.SLog3_SGamut3.ctl

    Or look at an ARRI one for LogC - a 3x3 matrix (at the bottom of the long file) and a log-to-linear LUT:
    https://raw.githubusercontent.com/ampas/aces-dev/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3/EI800/IDT.ARRI.Alexa-v3-logC-EI800.ctl

    Also notice that with ARRI, not only is there a folder for each ISO, but there are multiple IDTs for the raw files, one for each colour temperature (CCT = correlated colour temperature) - going back to what I described earlier about needing different transforms per colour temperature (DNG processing pipelines handle this automatically if the author uses two matrices in combination with the AsShotNeutral tags). ARRI also has different matrices for when you use their internal NDs, as they deemed it necessary to compensate for the colour shift introduced by their NDs:
    https://github.com/ampas/aces-dev/tree/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3/EI800

    If you're really curious, ARRI even provide the Python script they use to calculate the IDTs (it uses pre-calculated matrices for each CCT, likely generated from the spectral response data):
    https://github.com/ampas/aces-dev/blob/master/transforms/ctl/idt/vendorSupplied/arri/alexa/v3_IDT_maker.py

    So a DNG actually already contains the ingredients needed for an IDT - a way to convert to linear (if not already in linear) and the matrix (or matrices) required to transform from sensor space to ACES, most likely calculated from spectral sensitivity/response data (in the DNG case you get to ACES primaries via a standard transform from XYZ). If you have a DNG, you don't need an IDT. The information is there. Hope that clears up what I was trying to say.
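
    To show how little is actually inside one of those IDTs, here's a minimal sketch of the same two ingredients in Python: Sony's published S-Log3 log-to-linear equation plus a 3x3 matrix into ACES AP0. The matrix below is deliberately left as an identity placeholder - the real S-Gamut3 values are in the linked Sony CTL file.

    def slog3_to_linear(code):
        """S-Log3 (0..1 code value) to linear scene reflectance, per Sony's published formula."""
        if code >= 171.2102946929 / 1023.0:
            return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
        return (code * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

    SGAMUT3_TO_AP0 = [      # PLACEHOLDER - copy the real 3x3 from the linked CTL file
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ]

    def idt(rgb_code):
        """Apply the IDT: linearise each channel, then the 3x3 into ACES AP0."""
        lin = [slog3_to_linear(c) for c in rgb_code]
        return [sum(m * v for m, v in zip(row, lin)) for row in SGAMUT3_TO_AP0]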
  10. Like
    CaptainHook got a reaction from IronFilm in Sigma Fp review and interview / Cinema DNG RAW   
    Oh, I see what you're trying to say now. Again, there are reasons for the decisions we make where theory and practice in hardware diverge and you have to make trade-offs to balance one thing against another - more big-picture stuff again. This is an area I can't discuss publicly, but I guess what I'll say is: if we could have implemented things that way, or differently, back then, we would have.

    And it's not that we didn't know some ways we could improve what we had initially done with DNG (much of it informed by the hardware problems we were solving back then); it just didn't make sense to spend more time on it when we already knew we could do something else that would fit our needs better. Like I said, the problems you describe were solved for us with Blackmagic RAW, where we were able to achieve the image quality we wanted with small file sizes and very fast performance on desktop with highly optimised GPU and CPU decode, the ability to embed multiple 3D LUTs, and so on. THAT is a no-brainer to me.

    I do understand your point of view, especially as someone who developed a desktop app around DNG, but there are so many more considerations we have that I can't even begin to discuss. Something I've learned being at a company like this is how often people can't understand some of the decisions companies make, but I find it much easier now to have an idea of what other considerations likely led them to choose the path they did. It's hard to explain until you've experienced it, but even when I was just beta testing Resolve and then the cameras, I had no idea what actually goes on and the types of decisions and challenges faced.

    I see people online almost daily berate other camera manufacturers about things "that should be so obvious, why don't they do it", and I just have to shake my head and shrug because I have a very good idea why the company HASN'T done it, or why they DID choose to do something else. I'm sure other companies have a very similar insight into Blackmagic as well, because for the most part we all have similar goals and face similar challenges.
  11. Like
    CaptainHook got a reaction from Lars Steenhoff in Sigma Fp review and interview / Cinema DNG RAW   
  12. Like
    CaptainHook got a reaction from Kisaha in Sigma Fp review and interview / Cinema DNG RAW   
    It's not needed in Resolve, as Sigma have already added the required matrices and linearisation table (for their log-encoded versions), so you can convert to ACES as outlined. I can't speak for other apps or workflows though.
    Resolve has support through RCM (Resolve Colour Management) and CST for all major camera manufacturers' log curves and gamuts, so you could interpret the DNGs using RCM, for instance, into any gamma/gamut correctly for the given white balance setting if you would prefer. But that's why I recommend the Rec709 approach in Resolve for the Sigma DNGs (or RCM, as mentioned, would also work). One major issue with DNG for us was that there is no standard way defined in metadata to interpret DNGs into different colour spaces or even ISOs, which was a big focus for Blackmagic RAW*. This is why DNGs from our cameras look different across various apps - they are free to interpret them however they want. So we had the same problem with other apps and our DNGs, but it was worse, as most other apps don't have an equivalent to RCM or CST to manage the transform into Blackmagic colour spaces.

    *That's ignoring how slow DNG is to decode (relatively), and that even at 4:1 compression, DNGs from our cameras had image artefacts in certain scenes and situations we weren't happy with (5:1 was evaluated to be not usable/releasable to the public), which is a real problem as resolution and frame rates increase, and is even a problem for recording 6K to affordable media (or even 4.6K at high frame rates). Even if we hadn't been put in the situation of dropping DNG when we did, IMHO it's unlikely it would be a good, viable solution long term with where things are heading and the amount of complaints about DNG we got, saw, and had ourselves. It was great when the first Blackmagic cameras were HD (2.4K) and 30fps, but given that even Adobe themselves seemed to have no interest in maintaining or developing it further, its limitations now can't be ignored.
  13. Thanks
    CaptainHook got a reaction from leslie in Blackmagic Pocket Cinema Camera 4K   
    Here's a more detailed list:
    Support for Blackmagic Pocket Battery Grip
    Support for new horizon meter tool
    4K 2.40:1 (4096 x 1712) recording up to 60 fps in Blackmagic RAW (4K model)
    2.6K (2688 x 1512) recording up to 120 fps in Blackmagic RAW (4K model)
    2.8K 4:3 (2280 x 2160) recording up to 80 fps in Blackmagic RAW (4K model)
    2x desqueeze preview when recording 2.8K 4:3
    1.33x desqueeze preview when recording in 16:9 and 17:9 formats
    Pinch to zoom up to 8x magnification
    USB PTP control support
    New 1:1 and 4:5 frame guide options
    New ability to type in custom frame guide ratios
    Improved auto focus performance
    Language localizations
  14. Like
    CaptainHook got a reaction from majoraxis in Sigma Fp review and interview / Cinema DNG RAW   
    FYI, if you interpret a (non Blackmagic Design) DNG in Resolve as "BMD Film", this will be Gen 1, which means for gamut there is NO transform - you are getting the sensor's spectral response to colour, which is not represented by 3 primaries like most gamuts and can't be treated as such. That means to "accurately" convert to Rec709 you would need the spectral response data for the sensor in the camera to calculate the transform to Rec709 for a given illuminant's colour temperature, or you would need to shoot known targets under various illuminants and do a regression fit from there. It means multiple transforms depending on your white balance.

    The 709 gamut option in Resolve for DNGs uses the 2 colour matrices and the "AsShotNeutral" tags (for white balance/tint) provided in the DNG metadata, which have been calculated by the manufacturer (using one or both of the methods described above) to provide a transform to XYZ for the given white balance (the matrices are actually used to transform from XYZ to sensor space, but you can invert that), and from XYZ a standard conversion to 709 is performed. The ACES pipeline in Resolve also uses the same matrices in the DNG to convert to XYZ and then to ACES primaries.

    Of course, many people just graded BMCC/Pocket footage with Gen 1 by hand and were also pleased with the result, but one should not expect the sensor response to appear in a meaningful way on a typical display without some kind of transform (calculated or done manually).

    But this is mostly to say that if someone created a LUT or otherwise for BMCC/Pocket Gen 1 DNG footage, it will not be "correct" for other cameras (it may coincidentally look okay, but that may also be scene specific). And most importantly, for our cameras or others, if grading or creating a LUT etc. for footage left in sensor space, you need to adjust the correction/transform for illuminants with different colour temperatures, i.e. something done for footage shot under daylight will not usually work well for something shot under tungsten.

    If you use the Rec709 gamut option, you could use the CST OFX plugin to transform to another gamut safely, as the RAW decode and CST plugin will not clip data and the white balance specific response has already been taken care of (the way the manufacturer wants it done) in the transform to 709. This is the workflow I would recommend with such footage. YMMV.
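
    As a concrete (and deliberately simplified) sketch of that DNG path, the snippet below inverts a single ColorMatrix tag, applies the AsShotNeutral white balance, and converts XYZ to linear Rec709 with the standard matrix. A real pipeline would interpolate the two colour matrices by the shot's correlated colour temperature and handle chromatic adaptation and the other calibration tags, which I've skipped here.

    import numpy as np

    # Standard XYZ (D65) -> linear Rec709 primaries matrix.
    XYZ_TO_REC709 = np.array([
        [ 3.2404542, -1.5371385, -0.4985314],
        [-0.9692660,  1.8760108,  0.0415560],
        [ 0.0556434, -0.2040259,  1.0572252],
    ])

    def dng_to_rec709(cam_rgb, color_matrix, as_shot_neutral):
        """Simplified DNG decode sketch (not Resolve's implementation):
        cam_rgb         - demosaiced linear camera RGB, black level subtracted
        color_matrix    - a 3x3 ColorMatrix tag (XYZ -> camera space per the DNG spec)
        as_shot_neutral - camera RGB of a neutral patch under the shooting illuminant
        """
        cam_rgb = np.asarray(cam_rgb, dtype=float)
        # White balance: scale channels so the as-shot neutral becomes equal RGB.
        balanced = cam_rgb / np.asarray(as_shot_neutral, dtype=float)
        # Camera -> XYZ is the inverse of the ColorMatrix tag.
        cam_to_xyz = np.linalg.inv(np.asarray(color_matrix, dtype=float))
        xyz = cam_to_xyz @ balanced
        return XYZ_TO_REC709 @ xyz      # still linear; a display gamma comes after this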
  15. Thanks
    CaptainHook got a reaction from wyrlyn in Blackmagic Pocket Cinema Camera 4K   
  16. Like
    CaptainHook got a reaction from deezid in Blackmagic Pocket Cinema Camera 4K   
  17. Like
    CaptainHook got a reaction from deezid in Sigma Fp review and interview / Cinema DNG RAW   
  18. Like
    CaptainHook got a reaction from Phil A in Sigma Fp review and interview / Cinema DNG RAW   
  19. Thanks
    CaptainHook got a reaction from Kisaha in Blackmagic Pocket Cinema Camera 4K   
  20. Thanks
    CaptainHook got a reaction from dslnc in Blackmagic Pocket Cinema Camera 4K   
  21. Like
    CaptainHook got a reaction from kye in Sigma Fp review and interview / Cinema DNG RAW   
  22. Like
    CaptainHook got a reaction from Gregormannschaft in Blackmagic Pocket Cinema Camera 6K   
    The Blackmagic RAW highlight and shadow rolloff sliders affect the roll-off of the "S" contrast curve you create with the custom gamma controls. If you add no contrast, they will do nothing, as there's nothing to roll off. Increase (or lower) contrast and they will start to take effect. Choose "Extended Video" from the gamma selection in the Blackmagic RAW panel and you will see it's actually just a custom S curve with saturation, preset via the custom gamma controls - you can start to modify it to your taste, and the "gamma" automatically changes from "Extended Video" to "Custom". Or you can start from scratch with "Blackmagic Film".

    If you want the highlight/shadow sliders that were in the DNG panel, those are not RAW-specific controls and are still available in the 2nd tab of the primaries (same algorithm as found in the DNG tab and the same exact effect on Blackmagic RAW clips as on DNG).

    If transforming to another log curve or colour space helps you get what you want faster, great! Different log curves like ARRI/Red, or highlight roll-off sliders/presets, don't give you anything you can't do yourself using the curves tool etc., but not everyone is experienced with that stuff. Our log curves preserve the full sensor dynamic range though, so the highlight roll-off is completely in your hands, or you can start with another log curve if it suits.

    Something I tend to do a bit is expose for the native ISO in camera, then later in the RAW tab I switch to the highest ISO possible and compensate middle grey with the exposure slider, which lowers contrast and gives the most highlight roll-off we offer in our log curves for a particular camera. I start with Extended Video from the gamma drop-down and tweak the contrast/saturation and highlight/shadow roll-off sliders for the shot. Adjust white balance/tint and that gets me 90-95% there with most footage.
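
    A toy sketch of the behaviour described above (my own formulation, not the actual Blackmagic RAW curve maths): a contrast curve around a pivot, where the highlight rolloff only acts on the contrast you have added, so with contrast at 1.0 there is nothing for it to roll off.

    def s_curve(x, contrast=1.0, pivot=0.435, highlight_rolloff=0.0):
        """Toy contrast curve with a highlight rolloff that scales back the extra
        lift the added contrast gives to highlights. Illustrative only."""
        hard = pivot + (x - pivot) * contrast        # plain contrast around the pivot
        if x <= pivot:
            return hard                              # rolloff only touches highlights
        added = hard - x                             # lift contributed by the added contrast
        fade = highlight_rolloff * (x - pivot) / (1.0 - pivot)   # 0 at the pivot, max near white
        return x + added * (1.0 - fade)

    # With no added contrast the slider does nothing; with contrast it eases the top in:
    print(s_curve(0.9, contrast=1.0, highlight_rolloff=1.0))   # 0.9, unchanged
    print(s_curve(0.9, contrast=1.4, highlight_rolloff=0.0))   # ~1.09, running at the clip
    print(s_curve(0.9, contrast=1.4, highlight_rolloff=1.0))   # pulled back toward 0.9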
  23. Like
    CaptainHook got a reaction from drm in Blackmagic Pocket Cinema Camera 6K   
  24. Like
    CaptainHook got a reaction from majoraxis in Blackmagic Pocket Cinema Camera 6K   
  25. Like
    CaptainHook got a reaction from leslie in Blackmagic Pocket Cinema Camera 6K   