Everything posted by Llaasseerr

  1. It's a common myth that there is "highlight rolloff" from a sensor. It reads linear light values and hard clips at a certain point - there is no highlight compression, unlike with film negative. Highlight rolloff is a product of how much dynamic range a sensor captures combined with the filmic s-curve applied either in-camera or in post (there's a small numeric sketch of this after this list). If you want more clarification on this, here are some links: https://www.fie.us/2013/09/09/basic-color-science-for-cinema/ http://yedlin.net/OnColorScience/
  2. Just stepped away for a sec and it occurred to me maybe you're saying that checking zebras in log is helpful for checking sensor clipping. If so, then yes I agree because the sensor DR should be normalised to 0-1 space in a log display, if the log curve encompasses the entire DR of the sensor. I say "if", because in my tests the Fuji F-log max linear value before encoding to log is below the X-T3 sensor clipping point, which is why I use HLG because it's more generous. It's a bit annoying needing to toggle back and forth between log and your display LUT, but yeah it's a workable option.
  3. Yeah it sounds like a good feature to have. But adjusting a raw image +/- 2 stops with an ISO invariant sensor is a lossless operation, so what makes you think that? I routinely shoot -2 stops to protect highlights and I sanity check it with a light meter. Generally I use an ND to go down -2 stops in that case though. You don't lose 2 stops, it's just a trade-off of whether you get noisy shadows or clipped highlights. I'd rather have noisy shadows because I like the texture and I can always degrain a bit, vs not getting the highlights at all. Again, I might be missing something specific about this camera. If so then sorry about that. I'll just have to try it out to get what you're saying. I would never use zebras to adjust exposure. I know the S1H has a built-in luminance spot meter mode for checking exposure based on 18% grey. That's more appropriate IMO. Internally, cameras that save sRGB JPEGs are still metering in linear light.
  4. Your first paragraph is a given and I stated that myself, so I'm not sure why you are rephrasing it back at me. Basically everything you're describing is the way I work shooting raw with the D16, except it's not dual ISO. But I do get predictable results. As for why the fp sensor is rated at 100 vs the BMD sensor at 400 for the same brightness, that's because they are completely different sensors and the manufacturers rate them with different native ISOs based on their technology. Your second paragraph I don't get. I think you're trying to explain how the ISO readings differ based on whether you're looking at the Rec709 preview vs what the raw exposure is? ISO values are always defined in linear light, because they're based on the sensor's voltage response; in photographic terms it's the same thing: a doubling of light means +1 stop brighter. This always happens in linear light and it's unambiguous. The sRGB/Rec709 curve is added afterwards for display, but it makes no difference. sRGB has nothing to do with slide film or how film is exposed. It's an old display standard: a gamma curve that compensates for the fact that gamma 1.0 images don't display correctly, and that makes more efficient use of what was traditionally only 8 bits of tonal information. ETTR exposing is nothing but a workaround for people with more affordable digital video cameras who want to protect their highlights. With an Alexa you just expose for middle grey and the highlights come along for the ride. Sigma would not need to adjust the metering if the camera had a log profile. Metering is done in linear light values, i.e. in digital terms "scene referred gamma 1.0". All log encoding does is turn exposure adjustment into an additive operation instead of a multiplicative one (there's a sketch of this after this list). Log is just an encoding method that is very efficient and creates uniform code values between stops. A pure log curve is ACEScc. It doesn't sound to me like Sigma are doing anything wrong, except maybe there are a few rough edges.
  5. Edit: If I understand you correctly, then based on re-reading what you're saying, I don't see why it's an issue that at ISO 800 it records as if at ISO 100, since it's ISO invariant, unless you switch to ISO 3200. Assuming that when shooting at 800 the display is ISO 100 +3 stops, then that would be correct, because my understanding with dual ISO cameras is that you can't shoot at a lower ISO once you switch to the higher native ISO. So the fact that you shoot at ISO 800 when it's set to 3200 and it's two stops overexposed makes sense. The way this is dealt with in, say, the Fuji X-T3 is that you can't shoot below the native ISO when in, say, HLG. Sigma should probably enforce a limitation in the next firmware update to avoid confusion. I would probably only use ISO 100 (i.e. stay below 3200) anyway, since that's the max 12.5 stops, and I would just add lights if it's too dark.
  6. Okay, this is a workaround, but could I, for example, just leave it at ISO 100 where the sensor DR is at its max (12.5 stops) and then, assuming I have an external monitor, just have a LUT that exposes it up to ISO 800 and shoot everything like that? I don't need high ISOs and I would use an ND. Or light appropriately if it's too dark. So did you discover this through detective work? It does sound like Sigma have not thought through their exposure implementation with a consistent user experience across ISOs, and the camera needs further firmware updates. Or, in the case of not being able to implement log, a hardware refresh (not sure if I totally buy that, either. But I also don't see why they would lie). Back to the original topic of the thread: hopefully there are alternative exposure monitoring workflows presented with the external raw recorders. I guess I would lean towards the Blackmagic Video Assist since I would use Resolve. I'm starting to see the videos filter in on YouTube, but I'd like to see the actual settings available on the recorders.
  7. I seem to remember Sigma have an excellent photo developing app of their own?
  8. Sounds like a hassle. I still don't totally get what you're saying when I compare it to my own experiences shooting DNG with my D16, but I won't argue with your experience on the fp before I actually try it out for myself. 😉 This seems like another reason to shoot a grey card so you can bring the exposure in post back to where your intent is. I think you're going to get clipping anyway; it's just a question of how bad it's going to be. Because I think "profile off" is just taking a slice of the linear scene-referred output from the sensor and then applying a Rec709 curve (essentially a gamma correction). I'm guessing that the nominal 0-1 slice is positioned by where the ISO setting pegs 18% grey (0.18 in linear floating point), so all the highlights above "1" get thrown out (there's a sketch of this guess after this list). With a log curve designed for the full range of the sensor, they would all get recorded in a normalised 0-1 space which can be reversed out later, thus offering a faithful preview of the raw image.
  9. I personally agree that exposure is not really an issue. I'm used to using a light meter or a grey card and false color with the Digital Bolex or Fuji X-T3, and for the most part that can be done with the fp. Highlight clipping is more tricky though. Sorry if the topic has been derailed. I was originally commenting on what the external raw recording may bring to the table as far as more accurate monitoring, and how that may relate to the "none" picture mode. If you want I can post on the other thread instead. I did also mention that people are creating great images with this camera. I saw some of your gorgeous images. Regardless of its quirks when it comes to color workflow, it has a lot going for it. It just depends on the context. I work in film post production and so maybe I'm being too demanding as far as what I expect from this camera, but it seems close to usable in that context. What I'm actually advocating is something much more simple and robust than the rather complex ideas that I see people posting using the log curves of other cameras, which are not necessarily technically accurate workflows anyway. But I understand people need to fill in workflow gaps.
  10. BTW I meant 18% grey, couldn't edit my message. Yes I'm talking about if you had an external monitor with false color plugged in. Which kind of disrupts the minimalist design vibe. This baked-in ISO doesn't sound like an issue. If you're measuring false color from a grey card with baked-in gain from the HDMI, then that's what you want as a preview of where the metadata will place the ISO in post. At least to show the "intent". For example, the D16 is ISO invariant with the DNG, but when you increase the ISO it reflects that in the log image sent over HDMI. Is that any different to this? But I did also consider the idea of just shooting at 100 and having different LUTs in +1 stop increments instead. Could be worth a shot. Usually I do this anyway, but just +1 and -1 under/over the current exposure.
  11. I had a look at the images of the toy Ferrari posted by @Lars Steenhoff . If I bring the 4K DNG and the "none" MPEG-4 mov into Resolve, it actually seems that the mov needs to be treated as Rec709 gamut - not Rec2020 - to fall in line with the way the DNG is being interpreted by the Resolve ACES IDT (unless the DNG metadata is wrong). I would need to check with the exact same frame though, but I'm basing that off the way the red was interpreted. I'm also seeing the following tags in the mov metadata, but that doesn't mean the actual image doesn't have different primaries: Color primaries: BT.709; Transfer characteristics: BT.709; Matrix coefficients: BT.709
  12. I see, so you're using the clipping indicators with the attendant issues you describe. What I'm saying, and I mentioned it a few months ago, is don't use them; IMO they are useless, and you're saying they are misleading you anyway. It's a DSLR thing. Use false color to expose 18% grey (there's a sketch of the idea after this list). Have faith (and do some tests) that there is enough range in the raw image to check clipping. I mean, babysitting highlight clipping is admittedly always a battle unless you are shooting on Alexa or film. Yes, it would be good if there was a better indicator of the sensor clipping point. I mentioned the D16 has a raw b+w display mode that is just the bayer sensor, and you can easily see if it's clipping if you need to protect your highlights. Another way of doing this with log is that on the monitor you have your log to Rec709 dailies LUT, and you can quite easily see whether the image is clipping vs nicely rolling off. But that's not possible with this camera unfortunately.
  13. My bad, I forgot to apply the sRGB output transform to the two darker images. I wasn't able to edit the original post. Here they are uploaded again:
  14. For sure, I can't say without having the camera to mess around with. But I just did a quick test in Nuke with a few frames of the "none" mode. Unfortunately it's displaying very dark in the browser, but in Nuke it's displaying correctly. EDIT: I forgot to apply the sRGB output transform to the two darker images which is why they look too dark, I'll upload the corrected versions. But the image of the girl is correct. Basically what they seem to have done is mapped the raw gamma 1.0 footage to Rec.709 gamma and called it a day, which is what I suspected. So anything outside of a nominal 0-1 range is getting clipped. They also left it in the native Rec2020-ish gamut which is fine. But that doesn't mean it can't be used with false color because most false color is I believe set up for rec709, and often you can adjust false color bands based on IRE levels. So if it doesn't work, you just need to do some tests and set it up. Without the camera, I can't say though. For example, I monitor F-log by applying an F-log to ACES Rec709 LUT and then I can use false color to confirm that I'm hitting 18% grey exposure, and it matches a light meter reading as an additional sanity check. Whether that would work for this camera and a particular monitor with false color would need to be checked. But I also monitor DNG raw from a Digital Bolex by monitoring the Bolex log signal in the same way, and that matches the DNG to ACES image in Resolve later. Again, with the lightmeter sanity check. As long as you know the colour transform path, this should be totally doable.
  15. Without details from Sigma, here's a quick look at perhaps making sense of footage shot with the new "none" profile, and how it could be used for internal raw monitoring. I grabbed the images from this page: https://blog.sigmaphoto.com/2020/the-sigma-fp-evolves-firmware-2-0-for-the-cinema-crowd/ I assumed they are linear 1.0 in a Rec2020 type of gamut, but that they have had a Rec709 curve applied. So I took them into Nuke in an ACES color managed script, and I inverted the Rec709 curve so that they are again linear 1.0 gamma (there's a sketch of that inversion after this list). I then transformed the Rec2020 gamut to ACES. The default ACES RRT/ODT film look is applied, and I rendered out JPEGs in sRGB, which I'm attaching here. Of course the highlights are completely clipped, but the midtones can be useful for accurately exposing with a grey card. I'm not sure if they will display correctly in the browser window, but they look "correct" in Nuke. EDIT: they look too dark in a Chromium-based browser compared to Nuke, but it may be because my screen is currently set to P3D65. YMMV.
  16. Decided to drill down a bit further on the Ninja V ProRes raw capabilities, and I found a video on setting it up for the ZCam. This link starts on a frame showing the raw display settings, and it seems very comprehensive and flexible, offering a choice of log/HDR, gamut, etc. At least I think so. From 2:16. If that's the case with the fp, that would be awesome. But it would be great to tighten up the internal raw options too. As far as internal recording, I have an old Ikan VL35 3.5" monitor with 4K HDMI input that has false colour. This could be a great, somewhat unobtrusive monitor to attach to the camera to do exposure checks in, I guess, the "none" mode, but probably any relatively neutral Rec709 mode would be okay. I'd be interested if current owners think this could be a good solution.
  17. Well actually, thinking about it again it should be trivial to just add an X-Trans debayer after the raw data has been saved as ProRes raw. It would just be a tweak to the display debayer on the Ninja V, I would imagine. Something like a firmware update. Totally guessing, but if there's a flag in the raw stream, then it would tell the Ninja V to display with an alternative debayer method. Once it's on the computer there shouldn't be any constraints in software as to how bayered raw data is debayered. I just imagine they could add the X-Trans method the same way Lightroom does.
  18. Assuming you shoot with a grey card in your preroll, the easiest workaround is probably to shoot in the "none" mode and just add a cheap external monitor with false color, and adjust exposure to land on that. Which is annoying because it's more bulk, so hopefully Sigma will add false color in the next firmware update. I mean, I'm happy to shoot with a light meter, but if you're using a consumer-level vari ND it gets a bit more complex figuring out how many stops you're going down, so it's easier to dial it in with false color mode pointed at a grey card. I've accepted that the solution shooting external ProRes raw involves an ingest step through FCP before moving to your preferred editor. I'm not so concerned about the lack of standard raw controls in some software, because these are just basic RGB multipliers that can be done in the grading environment as long as you are in either linear or ACEScc log space.
  19. If Fuji released an X-T5 or X-H2 with ProRes raw recording that also allowed monitoring in F-log, then that would cover a lot of what the fp should do. But you would still need the external Ninja V recorder. Does anyone know any more about recording with the director's viewfinder? I love the industrial design and the minimalism of the fp, as well as the internal raw option. Not sure if I've already stated it, but they should add true 24p and 2K DCI raw internal recording. The X-T3 has 2K DCI which is pretty neat, but of course it's crappy 4:2:0 chroma compressed. If you look at the U and V channels they are just garbage, and it really affects even minor grading with qualifiers in Resolve. It would be great if a future Sigma video camera uses a Foveon sensor and also sorts out some of these remaining workflow issues.
  20. Just checking in on this thread because I'm still intrigued by this camera. There are two workflows of interest: shooting raw internally or externally. Obviously internally is preferable, but accurate monitoring is an issue. The Digital Bolex solved this so simply by just outputting a known log version over HDMI of the raw image recorded internally. That is all Sigma would have to do. And when I say log, I also mean wide gamut - Rec2020 in this case. Digital Bolex supported this with a white paper in the same way that Arri, Fuji, Sony et al. do. If the D16 was still getting firmware updates, then by this point we would probably see it outputting an even more accurate image over HDMI based on further spectral analysis of the CCD sensor, and it would be in Alexa LogC/AWG for further workflow compatibility. Without any technical details, the "none" profile on this camera looks like it's just in the original wide gamut, and maybe with no s-curve applied for highlight rolloff, and that's the only difference. Luminance-wise, it still clearly is gamma-corrected to Rec709, and that means the highlights are clipped, since a linear raw image holds a lot more range than Rec709, which is normalized to 0-1. That's why the custom Rec709 display curves from individual camera companies have highlight rolloff baked in to hold the dynamic range. So it's pretty much useless for monitoring highlights, but you could at least use it for checking middle grey exposure with false color. Does it have false color in the new firmware update? That's not to say people aren't getting great results anyway, but it just puts this camera more in the enthusiast category. We are now at the point where people are busting out rather arcane, complex color transform combos in Resolve, but none of these amount to a simple, technically accurate workflow. As for how to handle the DNG frames in Resolve, I personally just use ACES, since the Resolve IDT does a decent job of interpreting the DNG matrix metadata and delivering a result consistent with what I saw monitoring on set. It's a decent solution when you don't have a specific IDT based on spectral analysis of the camera sensor. You just need to think of ACES on one level as being the biggest and most convenient bucket for capturing the dynamic range and wide gamut of the raw image. It maps it to RGB as linear floating point wide gamut at the correct exposure and displays it in a meaningful way. If you wanted, you could then transform ACES to Alexa LogC/AWG and render ProRes, then work in any Alexa log workflow. The other option is to use an external raw recorder, either BRAW or ProRes raw. I don't know anything about these workflows. For example, how do you monitor ProRes Raw on a Ninja V? Is there a way of displaying it as a known log image so that you can put your own LUT on top? Can you do that with BRAW and a BMD monitor?
  21. I'd love to know more about this mode. From what I saw, it doesn't look like anything useful, except that they did not apply the s-curve and gamut transformation, and it still looked gamma encoded. A linear raw image can be transformed by your LUT to your final output display look, but it needs to first be in a 0-1 space. There are two typical ways of doing this. 1. First transform to log as a shaper LUT, then apply your look LUT, which is a typical log to Rec709 LUT, maybe with a print film emulation as well. 2. The "none" image may be linear, but to retain all the highlight information a raw image must be exposed down. This is how tools like DCraw export images to disk as 16 bit integer files without any dynamic range loss (there's a sketch of this after this list). In order to apply your LUT, you need to first adjust the exposure back to what it's meant to be, then transform with a log shaper LUT, then proceed with the shaper LUT -> log to Rec709 combo. For this to be accurate, at every step of the way you need to know what the colour transform is, so that you can account for it in your LUT creation.
  22. The main issue for me with ProResHQ is chroma subsampling. You also miss out on some highlight reconstruction options, but that's just putting data into the image that was not there. But I agree, ISO and white balance operations are just RGB multipliers, so they are not particularly destructive. To replicate what happens in linear raw, you just need to delog the footage and, if need be, set the gamma to 1.0 before you do those operations, or use the correct mathematical operation in log space (add vs multiply) - there's a sketch of this after this list. This will work in ACEScc log encoding. Edit: this is assuming you're actually log encoding your ProRes, which we can't do with this camera. I'm personally fine with raw, but there needs to be an accurate monitoring method. Accurate to your final intent, that is, since the raw image without context is meaningless.
  23. Right, that doesn't make sense on re-reading. The article seemed to be saying there was an option to capture 8 bit 4:2:2 ProRes from the incoming raw stream, instead of writing ProRes Raw proper. It could be a mistake on their part. I haven't actually used a Ninja V before, but if there was an 8 bit compressed option I thought maybe it would be some kind of log version. Maybe some sort of dump to disk of the debayer that you are looking at while monitoring?
  24. This is a pretty interesting upgrade. I'd still like to see them add true 24p and also internal raw DCI-2k like the X-T3 does. I'm also very interested in what the output looks like when you capture the director's viewfinder mode. Can we do Super 16? Anamorphic 35mm? Does it have a big black area around it? As for the omission of a log profile, here's my take. For me, this would be most useful for monitoring with your dailies LUT while recording raw since currently there does not appear to be an accurate way to monitor raw. The Digital Bolex handles this brilliantly and simply by outputting log over HDMI. If you hook up an external monitor that accepts 3D LUTs then the log can be transformed to your final Rec709 look. That is also because they published their log spec, so you can do the transform accurately. Same idea as an Arri LogC to Rec709 LUT. This workflow means your on-set LUT matches what you will see in Resolve when viewing your raw dailies footage. Ideally you could capture raw but view some kind of Sigma log on the camera screen, and crucially, also add your own log to rec709 transform LUT on top. That way you don't need an external monitor at all. If the new HDR mode can be output over HDMI while recording internal raw, then if the HDR is a known spec like HLG then with the addition of a LUT it can be used to monitor the raw image, and it can also be used in place of a log profile. Another option is if you can display log versions for ProRes or BMD raw captures and you can put your own LUT on top of the log image. I'm not sure of the capabilities of the recorders though. I don't think the new 'none' mode is going to help much, but if Sigma released details explaining exactly what is captured in this mode, then we can make our own decisions. It certainly doesn't look like linear 1.0 gamma since it's not dark enough, and it most likely is clipping highlight information above 1.0. Edit: does this output some kind of 8 bit ProRes raw over HDMI? This was mentioned on another site. If this is some kind of log signal so it fits in 8 bits then again it can be used for monitoring.
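A tiny numeric sketch of the point in post 1: the sensor hard-clips in linear light, and "rolloff" only appears once a display s-curve is applied. The clip point and s-curve below are arbitrary illustrative choices, not any manufacturer's actual transform.

```python
import numpy as np

def sensor_response(scene_linear, clip_point=4.0):
    # A sensor integrates light linearly, then hard-clips at full well / ADC max.
    return np.minimum(scene_linear, clip_point)

def simple_s_curve(x):
    # Illustrative filmic-style curve (Reinhard-like): compresses highlights smoothly.
    return x / (1.0 + x)

stops_above_grey = np.arange(6)
scene = 0.18 * 2.0 ** stops_above_grey       # linear scene values, 0 to +5 stops over grey
raw = sensor_response(scene)                 # hard clip only, no highlight compression
display = simple_s_curve(raw)                # the "rolloff" appears only at this stage

for s, r, d in zip(stops_above_grey, raw, display):
    print(f"+{s} stops: raw={r:.3f}  display={d:.3f}")
```

Note how the +5 stop value is already clipped in the raw numbers; the gentle-looking display values come entirely from the s-curve, which is the whole point.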
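For post 4's point that log encoding turns an exposure multiply into an add, here's a minimal sketch using the pure-log segment of ACEScc (the encoding's toe for very small values is left out to keep it short).

```python
import math

def acescc_encode(lin):
    # Valid for lin >= 2**-15 (the pure-log segment of ACEScc).
    assert lin >= 2 ** -15
    return (math.log2(lin) + 9.72) / 17.52

grey = 0.18
one_stop_up = grey * 2.0                        # +1 stop in linear = multiply by 2

cc_grey = acescc_encode(grey)
cc_up = acescc_encode(one_stop_up)

# The difference is a fixed code-value offset, independent of the starting value:
print(cc_up - cc_grey)                          # == 1/17.52, about 0.057 per stop
print(acescc_encode(1.0) - acescc_encode(0.5))  # same offset again
```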
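For the guess in post 8 about what "profile off" does (a 0-1 slice of linear with a Rec709-style curve on top), here's a sketch. To be clear, this is my assumption about Sigma's processing, not anything they have confirmed.

```python
import numpy as np

def bt709_oetf(lin):
    lin = np.clip(lin, 0.0, 1.0)                # everything above 1.0 is thrown away
    return np.where(lin < 0.018,
                    4.5 * lin,
                    1.099 * lin ** 0.45 - 0.099)

scene = np.array([0.18, 0.5, 1.0, 2.0, 8.0])    # linear, grey plus some highlights
print(bt709_oetf(scene))
# 18% grey lands around 0.41 signal; 2.0 and 8.0 both collapse to 1.0 (clipped),
# whereas a log curve covering the full sensor range would keep them distinct.
```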
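For post 12's suggestion to expose with false color against an 18% grey card, a sketch of the idea. The IRE band edges are hypothetical; real monitors define their own bands, and some let you configure them.

```python
def false_color_band(ire):
    # Hypothetical bands for illustration only.
    if ire < 5:
        return "crushed"
    if ire < 35:
        return "shadow"
    if ire < 50:
        return "mid grey"      # where an 18% grey card should land for a Rec709-ish curve
    if ire < 90:
        return "highlight"
    return "clipped"

# An 18% grey card through a BT.709-style curve sits around 41 IRE:
grey_signal = 1.099 * 0.18 ** 0.45 - 0.099
print(false_color_band(grey_signal * 100))   # -> "mid grey", i.e. exposure is on target
```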
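For post 15, the curve inversion step on its own: undoing the assumed Rec709 encoding to get back to linear 1.0. The subsequent Rec2020-to-ACES step is a 3x3 matrix multiply on the linearised RGB, which is omitted here.

```python
import numpy as np

def bt709_oetf_inverse(v):
    # Inverse of the BT.709 opto-electronic transfer function.
    v = np.asarray(v, dtype=np.float64)
    return np.where(v < 0.081,
                    v / 4.5,
                    ((v + 0.099) / 1.099) ** (1.0 / 0.45))

encoded = np.array([0.081, 0.409, 1.0])
print(bt709_oetf_inverse(encoded))   # ~[0.018, 0.18, 1.0] -- back to scene linear
```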
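For option 2 in post 21, a sketch of exposing a linear raw image down so its highlights fit in a 0-1 container (e.g. a 16-bit integer file, DCraw-style), then restoring the exposure in post before the shaper/look LUTs. The stop count and pixel values are made up for illustration.

```python
import numpy as np

def to_uint16(lin_0_1):
    # Store a 0-1 linear image as 16-bit integers.
    return np.round(np.clip(lin_0_1, 0.0, 1.0) * 65535).astype(np.uint16)

def from_uint16(u16):
    return u16.astype(np.float64) / 65535.0

scene = np.array([0.18, 1.5, 6.0])      # linear, with highlights well above 1.0
stops_down = 3                          # chosen so the brightest value fits under 1.0

stored = to_uint16(scene * 2.0 ** -stops_down)      # write to disk with highlights intact
restored = from_uint16(stored) * 2.0 ** stops_down  # bring exposure back in post

print(restored)   # approximately [0.18, 1.5, 6.0] -- ready for the log shaper step
```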
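For post 22's point that ISO/white balance are plain RGB multipliers, a sketch showing the two equivalent routes on log material: delog, multiply, relog, versus adding the matching offset directly in a pure log encoding (the ACEScc log segment here; the multiplier is an arbitrary example).

```python
import math

def acescc_encode(lin):
    return (math.log2(lin) + 9.72) / 17.52      # pure-log segment only

def acescc_decode(cc):
    return 2.0 ** (cc * 17.52 - 9.72)

gain = 1.5                                      # e.g. a red-channel WB multiplier
lin = 0.18

# Route 1: delog -> multiply in linear -> relog
via_linear = acescc_encode(acescc_decode(acescc_encode(lin)) * gain)

# Route 2: stay in log and add the matching offset
via_offset = acescc_encode(lin) + math.log2(gain) / 17.52

print(via_linear, via_offset)                   # identical (up to float error)
```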