Everything posted by Llaasseerr

  1. Agreed, slimRAW is a great workaround even if it does add a step to the ingest process. But it's a shame the Sigma probably won't see a lossless or lossy DNG variant that might have allowed 12-bit internal recording. Still, external SSDs are cheap and they allow for fast offloads. I do hope Apple ultimately prevails here though.
  2. Yes there are - I know of a few approaches already, though we're really talking out of the box here. Yes, Resolve is powerful for colour, but it doesn't make that power explicit and precise the way Nuke does; it's dumbed down in some ways. Using Gain is the obvious method if you can't use the Raw tab (ProRes footage?). As I mentioned, I think using an ACEScc log curve will make the printer lights behave the same as Gain - I need to check, it's been a while. I personally don't know what a 1-stop increment is with printer lights. Gain is easier: you double it for each stop up or halve it for each stop down (see the sketch below). But of course, there should just be a linear Exposure mode in stops that can be toggled right there on the panel as a fully fledged part of the UI. It would require that you tell it what the input transfer function is, and it would then bracket that behind the scenes.
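A minimal sketch of that doubling/halving rule, just to make the arithmetic concrete (the function name is mine, not anything in Resolve):

```python
def gain_from_stops(stops: float) -> float:
    """Convert an exposure change in stops to a linear gain multiplier."""
    return 2.0 ** stops

# +1 stop -> 2x gain, -1 stop -> 0.5x gain, +2 stops -> 4x gain
for stops in (-2, -1, 0, 1, 2):
    print(f"{stops:+d} stop(s) -> gain {gain_from_stops(stops):g}")
```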
  3. Right. It would be great if there were an option to use the "printer lights" feature as a multiply (gain) operation in linear space in Resolve. That's how it works in Nuke, for example. I can't remember to be honest, but I think ACEScc may be a direct log conversion in the blacks, because the bottom of the curve is pinned at black - which is why ACEScct was developed, to feel more like a traditional log curve when grading. I'd need to check, but I have been able to get exactly the same results with Offset in log (probably ACEScc) vs Gain in linear - the sketch below shows why.
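For a pure log curve like ACEScc's main segment, an offset in log is mathematically identical to a gain in linear. A quick check, using the ACEScc encode from the ACES spec (valid for inputs >= 2^-15):

```python
import math

def acescc_encode(x: float) -> float:
    # ACEScc pure-log segment: (log2(x) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

x = 0.18                            # scene-linear mid grey
gain = 2.0                          # +1 stop in linear
offset = math.log2(gain) / 17.52    # the equivalent ACEScc offset

print(acescc_encode(gain * x))      # ~0.4707
print(acescc_encode(x) + offset)    # ~0.4707 - identical
```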
  4. Exactly - it's not intuitive, or consistent across different working scenarios. I do use that. There is also the option to do it with Offset in log space, but that's not particularly transparent either. In theory the more recently added right-click option to bracket a grade operation within a certain colour space should make this work a little better, but in practice I've found it doesn't work as consistently and explicitly as it does in, say, Nuke. The presumption in Resolve's design that one would only ever want to do a linear exposure adjustment on raw footage, in the Raw tab, is a little odd.
  5. Well said. To speak to part of what you're saying: I'm still not clear on why Blackmagic can't publish their log curve and gamut so we can go to linear. It's one of the main reasons I never ended up buying a Blackmagic camera.
  6. If you mean some kind of container with still frames inside, then yes. I'm not clear on how MXF works, but I'd like it if it were the best of both worlds: just a wrapper around the DNG frames that you can right-click on and then go inside. I'll agree this is all pretty old tech, but I'm not as opposed to the format as others. Personally I do think that the same general advantages I talked about with frames apply to intermediate sequences and source media as far as file handling. I'm not 100% on this, but if you record a movie, can't a couple of bad frames corrupt the whole mov file, rather than letting you salvage most of what was shot, as a frame-based format would?

     I'm talking about source metadata that is carried all the way through from shoot to ingest, through post and VFX, to DI. It could be matrices, CDL, lens information, etc. So if you go to an intermediate format like EXR, all the footage metadata from the shoot comes along for the ride. But I'll admit that this workflow is not something most people on this forum are considering. That's cool.

     You wrote slimRAW, right? I own it 😀. I have a Digital Bolex and it's 100% necessary for that camera. I'm sure it will be gold for the Sigma fp.

     I personally don't believe it's worth the overhead of debayering raw on the timeline, but then again I haven't tried out ProRes RAW. BRaw seems lossy in the chroma channels so I'm not going there. I just feel it's better to ingest to full dynamic range floating point RGB (EXR) or log DPX/ProRes. And I can assure you that raw processing controls are utilised on big productions, but at the ingest stage. There is always the option to go back and reprocess the raw if the debayer algorithm needs changing - or the colour temperature, but in that case the data bucket of the EXR is so huge that a temperature shift on the RGB image is 99% of the time totally fine. The reality is that when you capture the whole thing to an EXR or DPX, very little is baked in.

     As to what you're saying about grading at a raw level in a more precise way - I do agree that grading software can seem kind of slapdash in ways that are just really odd. I personally use Nuke for things like exposure and temperature changes, and Nuke's grading nodes are much more mathematically oriented than Resolve's. Not that Nuke is a good grading tool per se, but it's more suited to making precise changes. It bothers me that if a DP says "can you push this plate +1 stop" there is no obvious go-to linear exposure control in Resolve - except in the Raw controls tab (a quick sketch of that push is below). It's also really weird that Resolve doesn't allow direct colour matrix input.

     Mate, you're preaching to the choir. I'm 100% with you on exposure in linear - then why doesn't Resolve offer a linear exposure adjustment tool except on the Raw tab? This baffles me. You work for Blackmagic, right? I'm not actually a colorist though. I mainly use Nuke (all-linear, all the time) and am reluctantly learning Resolve. For me, going to and from XYZ to do some comp operation is a lot more intuitive there than it is in Resolve, so that concept is not foreign to me either. I tech-proof things in Nuke before trying to rebuild them with Resolve nodes. I appreciate that Resolve 16 added the colour temperature adjustment node, but I do agree that white balance in raw is the best way to do it.

     A friend who worked on Rogue One told me the DP, Greig Fraser, apparently shot the Alexa 65 all at 6500K, since it was all raw capture and the temperature can of course be adjusted on raw ingest. He may be wrong about this, but it's what I heard: he's a UI designer and wanted the white point of the UI elements to match the footage, so it came out of a conversation with the DP on set. So yes, we are talking about exposure balance, white balance/colour temperature and debayer at the raw ingest stage. I.e. you are proving my point - this is best done at ingest, as a kind of tech-grade first pass that can always be revisited if need be. What you say about the Light Iron guy further backs up what I'm saying: if you need to "match shots" in DI, you should already be 90% of the way there with your first pass and CDL, since by the DI stage the film is 90% complete.
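Since the "+1 stop" push comes up a few times: on scene-linear pixels it really is just a multiply by two. A runnable toy sketch (the random array stands in for a decoded EXR plate):

```python
import numpy as np

def push_stops(rgb_linear: np.ndarray, stops: float) -> np.ndarray:
    """Exposure push on scene-linear pixels: +1 stop doubles the values."""
    return rgb_linear * (2.0 ** stops)

# Stand-in for a decoded EXR plate (float32, scene-linear RGB).
plate = np.random.rand(1080, 1920, 3).astype(np.float32)
plus_one = push_stops(plate, +1.0)   # "push this plate +1 stop"
assert np.allclose(plus_one, plate * 2.0)
```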
  7. Are you talking about Sigma adding DNG metadata? Obviously that's great, and I like Resolve's ability to create an IDT on the fly using that data. If it's something else you're talking about, I'd be interested to know. What I was describing is a higher-precision IDT based on spectral sensitivity measurements - not "just" a matrix and a 1D delog LUT (sketched below). If you look at the rawtoaces GitHub page, the intermediate method is, as you describe, the way Resolve works. It only tries that method if the aforementioned spectral sensitivity data is not available.
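For clarity, the "matrix and a 1D delog LUT" style of IDT boils down to something like this - a hedged sketch with placeholder numbers, not any camera's real IDT:

```python
import numpy as np

# Placeholder camera-RGB -> ACES matrix; a real one comes from the
# manufacturer or from rawtoaces, not these made-up values.
M = np.array([[0.65, 0.25, 0.10],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def linearize(x: np.ndarray) -> np.ndarray:
    # Stand-in 1D "delog" curve; a real IDT uses the camera's published curve.
    return np.maximum(x, 0.0) ** 2.2

def simple_idt(camera_log_rgb: np.ndarray) -> np.ndarray:
    """1D delog per channel, then one 3x3 matrix per pixel."""
    return linearize(camera_log_rgb) @ M.T

print(simple_idt(np.random.rand(4, 3)))
```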
  8. Per-frame metadata is a big part of feature film production and is not going anywhere, but at the lower end of the market it's probably not that relevant. Have you actually tried it? There are many reasons that VFX and DI facilities work with frames. A single frame is much smaller to read across the network than a large movie file, and the software will display a random frame much faster when scrubbing around - not even factoring in CPU overhead for things like compression or debayering. As for my example of upload to cloud: a multithreaded command-line upload of a frame sequence is much faster than a movie file, and I'm able to stream frames from cloud to timeline with a fast enough internet connection (see the sketch below). But in a small setup where you are just making your own movies at home, this may all be a moot point. In a film post-production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you're a one-person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in Resolve would be possible. Our workflows are very different because of different working environments, and for what you are doing, your way of working may be best for you.
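On the cloud point above: because each frame is its own file, a sequence parallelizes trivially. A minimal sketch, with upload() as a stand-in for whatever transfer tool you actually use, and the path a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def upload(frame: Path) -> None:
    # Stand-in for the real transfer (aws s3 cp, gsutil, a signed PUT...).
    print(f"uploading {frame.name}")

# Independent files -> trivially parallel transfers, unlike one huge movie.
frames = sorted(Path("shot_010/plate").glob("*.dng"))
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(upload, frames))
```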
  9. I prefer file sequences, so I'm fine with what the Sigma fp is doing there. Fingers crossed DNG compression gets added eventually. Coming from using software like Nuke and Resolve in film production: all shows are ingested from raw or ProRes as DPX or OpenEXR and go all the way through to DI that way (editorial is DNxHD in Avid). Indie features and second-tier TV shows are typically ProRes or Avid DNxHD, and I understand at the DSLR level it's all movie formats, not frames. It's only at the DSLR level that anyone is actually trying to edit raw - IMO it would be like trying to edit the original camera negative on a Steenbeck. Maybe we can just say it's a cultural difference. But discrete frames allow the following:
     - varying metadata can be stored per frame;
     - the whole file isn't hosed because of one bad frame (see the sketch below);
     - file transfer is easier - way less load on the network, and that's a big one with transfer to/from cloud.
     Having the option of MXF is fine though. I remember testing a CinemaDNG MXF when the spec was first released, but no one actually used it.
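To illustrate the bad-frame point: with a sequence you can isolate damage per frame and salvage the rest. A toy integrity pass over a placeholder path (a real check would fully decode each DNG):

```python
from pathlib import Path

def frame_is_readable(frame: Path) -> bool:
    # Toy check; a real one would decode the DNG and catch errors.
    try:
        return frame.stat().st_size > 0
    except OSError:
        return False

frames = sorted(Path("shot_010/plate").glob("*.dng"))
bad = [f.name for f in frames if not frame_is_readable(f)]
print(f"{len(bad)} bad frame(s): {bad}")  # everything else is still usable
```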
  10. It would be great if Sigma made an ACES IDT based on measuring the sensor's spectral sensitivity data. From the rawtoaces documentation on GitHub: "The preferred, and most accurate, method of converting RAW image files to ACES is to use camera spectral sensitivities and illuminant spectral power distributions, if available. If spectral sensitivity data is available for the camera, rawtoaces uses the method described in Academy document P-2013-001 (.pdf download)."
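Conceptually, that spectral method comes down to fitting a matrix that best maps camera responses to ACES responses over a set of training spectra. A toy sketch with random placeholder data (the real fit in P-2013-001 uses measured sensitivities and illuminant SPDs, plus a white-balance step):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 40                                  # e.g. 380-770nm in 10nm steps
train_spectra = rng.random((24, n_bands))     # training reflectances
cam_sens  = rng.random((3, n_bands))          # camera RGB spectral sensitivities
aces_sens = rng.random((3, n_bands))          # target ACES colour-matching data

cam_rgb  = train_spectra @ cam_sens.T         # what the camera would record
aces_rgb = train_spectra @ aces_sens.T        # what ACES should record

# Least-squares fit: find B such that cam_rgb @ B ~= aces_rgb
B, *_ = np.linalg.lstsq(cam_rgb, aces_rgb, rcond=None)
print(B.T)                                    # the fitted 3x3 IDT matrix
```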
  11. My "raw workflow" is just: expose with a light meter, shoot a grey card, then import to Resolve. In the raw settings I may use highlight reconstruction and do an exposure adjustment. Generally from there, if not working in ACES, I do a CST to log (Cineon or Alexa LogC), then put the PFE LUT on it to see how it will look before any log-space grading. Edit: to be clear, the PFE LUT is the last thing in the chain, but I'll put it on before grading (time-wise, not in the node graph). This gives me a nice quick one-light and should match the on-set monitor. I didn't see anything about these images that would make me deviate from that. They seem like regular raw images to me once they're in Resolve, but I'll have to see when more becomes available.
  12. Thanks for clarifying there is some kind of curve on the 8-bit image - nice to know they thought about that! I'm not sure what you mean as far as colour tables in DNG (I'll need to give the spec another look), but I'm mainly referring to Resolve interpreting the colour matrices stored as DNG metadata - which it does (sketch below). That should be the crux of the colour transform decisions it makes on the raw image. Resolve actually does the best mainstream job of interpreting a CinemaDNG image, because it puts it into a high dynamic range space with no highlight clipping if the Resolve project is set up correctly, and it exposes it more or less correctly for middle grey.
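If you want to peek at those matrices yourself: DNG is TIFF-based, so a TIFF reader like tifffile can inspect the tags. A hedged sketch - it assumes tifffile resolves the DNG tag names (ColorMatrix1 is TIFF tag 50721, ColorMatrix2 is 50722), and "frame.dng" is a placeholder path:

```python
import tifffile

with tifffile.TiffFile("frame.dng") as dng:
    tags = dng.pages[0].tags
    for name in ("ColorMatrix1", "ColorMatrix2", "AsShotNeutral"):
        tag = tags.get(name)
        if tag is not None:
            print(name, tag.value)
```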
  13. I took a look at the CDNGs in Resolve with just a basic ACES setup and they both seem pretty decent - even the 8-bit one, which is interesting, because an 8-bit linear image should be practically useless (see the quick math below). Maybe it's got a log curve under the hood? That lakeside image has a lot of dynamic range variation between highlight and shadow; it might just be the right kind of image to camouflage potential issues. I'll try a non-ACES workflow where I'd just interpret to Alexa LogC/AWG and write out ProRes or something from there. I like cinema5d, but if I understood their review correctly, they seem to be saying the camera's internal picture profiles influence the raw recording. I don't see how that's possible - those profiles should only be baked into the h.264s. They are also advocating for a log profile to interpret the raw image, as well as an option for the baked h.264 footage. Resolve should do a decent job of translating the DNG frames based on metadata, and from there you can convert to your chosen log profile with CST nodes or similar. If Sigma did create a log profile and gamut and put out a white paper, that would be nice, but I'm assuming the DNG metadata currently there isn't garbage, because the images seem to look okay. A known, published log profile and gamut is essential, though, when recording raw for monitoring over HDMI - assuming you can add a custom LUT to your monitor. That way you can get a decent "one light" match to what you will eventually see in Resolve after ingesting your raw footage and applying your LUT.
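The quick math on why 8-bit linear ought to fall apart: half of all code values go to the top stop, half of the remainder to the next stop down, and so on:

```python
codes = 256                    # 8-bit
for stop in range(1, 9):
    hi = codes // 2 ** (stop - 1)
    lo = codes // 2 ** stop
    print(f"stop {stop} below clip: {hi - lo} code value(s)")
# 128 values in the first stop, but only 2 by stop 7 and 1 by stop 8 -
# shadows would band horribly without some curve under the hood.
```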
  14. If I disable hardware decoding of h.265 in Resolve, the image displays correctly. As @androidlad said,
  15. I've been able to fix this clamping issue in Resolve 16 Studio beta 050 on my Mac by disabling the hardware decoding for h.265 in the preferences. I'm 99% sure I tried this months ago and it had no effect, but now it's working. Not ideal, but I wasn't planning to edit in h.265 anyway, so I'm just punting the transcode to Resolve. I haven't done any time comparisons between rendering, say, ProRes out of Resolve vs an h.265-to-ProRes transcode in EditReady. But the advantage of doing the transcode in Resolve may be that it's part of a general ingest step where technical LUTs are applied, as well as sound syncing.
  16. Glad it worked for you! Right, I'm just making sure to set the ProRes clips to Full levels when importing to Resolve. The missing image detail is most definitely back. And just to emphasise: I've done tests with both F-log and HLG, and F-log is definitely recorded at Full levels, since lens cap black comes in at 95 when the transcoded ProRes is imported into Resolve at Full levels. I'm 95% sure HLG is also meant to be imported as Full levels - why would they make it awkward for the user and say F-log is Full range but HLG is Video range? The HLG black level with the lens cap is not 0, it's about 32. But that's still half of "video levels" black, which is 64 - a stop more shadow data than "video levels" black.
  17. EditReady is working for me. I transcode from h.265 to ProRes 422HQ and I'm then able to import the full data levels into Resolve.
  18. This looks like a problem I've also got on my 15" MacBook Pro 2018. It looks like the difference between the h.265 displaying at full-range levels (the less contrasty version, on the iMac) and video-range levels (the more contrasty version, on the MacBook Pro). I believe the correct one is the iMac version. I'm wondering if the reason the MacBook Pro displays the image incorrectly is that it's using the T2 chip to play back h.265 movies. I've found this display issue in Resolve. My workaround has been to transcode from h.265 to ProRes HQ in EditReady before importing to Resolve, and to set the imported clip data levels in Resolve to Full range. I should add that the reason I concluded the Full range (iMac) version is correct is that I did a lens cap test with F-log h.265 and the black level fell at 95/1023, which is the correct code value for black in native F-log footage according to the Fuji F-log white paper (quick levels math below). Additionally, I've found that when software like Resolve interprets the h.265 as Video range, it clips shadow and highlight detail that is visible in the Full range version and is not recoverable.
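The levels math behind that lens cap test, in 10-bit terms: a decoder that wrongly treats full-range data as video range (64-940) rescales that interval to 0-1023, so F-log's black at 95 lands far too low, and anything below 64 or above 940 clips outright:

```python
FLOG_BLACK_FULL = 95   # F-log black, 10-bit full range, per Fuji's white paper

# Video-range (64-940) to full-range (0-1023) expansion:
misread = (FLOG_BLACK_FULL - 64) * 1023 / (940 - 64)
print(round(misread))  # ~36: blacks crushed, and sub-64 detail is clipped
```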
  19. Thanks for letting me know that you are getting consistent behaviour. I think there's some issue just on my Mac, where Resolve full range is not matching h.265 full range: when I rolled back to Resolve 15, I suddenly had the same issue there too, even though 15 had previously behaved the way you describe. I have no idea what has happened on my computer, because I haven't updated the system software. I've reported this to BMD but they have no idea either.
  20. I just tried turning off GPU decoding of HEVC files and the h.265 clip is now way too lifted and flat when set to Full levels. Clearly this is buggy. So doing it on the GPU yields the "correct" result, except for the clipping. Transcoding to ProRes 422HQ in EditReady seems to be the best workaround for me right now to get the full image displaying correctly in Resolve.
  21. I've tried this out in Resolve 16b3 with the same files and unlike Resolve 15, I need to set the ProRes clip to Video levels, not Full levels. Besides that, the colours now apparently match with the latest bug fixes. However I'm getting a separate luma clipping issue. Details over here in the original thread.
  22. Gotcha, the old concatenation issue. I find that Nuke is a lot more transparent about stuff like this, so I don't have to dig too deep. In fact I use Nuke as a sanity check and prototype for stuff I'm trying to do in Resolve. I was reading the main X-T3 thread and wanted to check in on this thread to say that I tested the original clips to see if the new Resolve 16 beta 3 fixed your original 601 vs 709 issue. On my machine (MacBook Pro with Vega 20 GPU) it does indeed now match the reds in his jacket. However, in 16 I now need to set the ProRes clip data levels to Video, not Full. That kind of makes sense considering the typical ProRes usage scenario; as long as the full dynamic range is coming in and the luma is as intended, I'm fine with it. Separately, I'm still getting an issue, on my machine at least, with clipped detail in the h.265 footage straight from camera. I think this is just a v16 beta issue, but I'd be interested to know if anyone else is seeing it. I've reported it to BMD. Attached are screen grabs where I graded up to clearly show the issue; the waveforms (not attached) also show the clipping. My workaround for now has been to transcode the h.265 to ProRes 422HQ using EditReady, and then in Resolve set the ProRes to Video levels to match the h.265 at Full levels - the luma levels then match, but it's not clipped. I believe the issue may be a bug with GPU decoding of h.265 clips.
  23. Fair enough that the correction needs to go first and work on the camera-original image. In that case there's no need to break up the CST operation with HLG-to-linear in one node and linear-to-LogC in the next. So if I'm understanding you correctly, could you just use a CST node after your matrix fix, followed by an Arri LogC-to-Rec709 LUT, then concatenate the whole thing? There is then no explicit intermediary stage with a floating point output outside 0-1, and your start and end points are normalized images. Having said that, you should be able to handle values outside 0-1 with a shaper LUT so as to prevent clipping. I can definitely do this in Nuke. But then, is Resolve really clipping anything if you generate a LUT that goes from HLG to linear, then linear to LogC, then Rec709? I haven't tried it in Resolve. But going to LogC is inherently a shaper LUT - see the sketch below.
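To make that concrete, here's a hedged sketch of the HLG-to-LogC shaper idea, using the published BT.2100 HLG inverse OETF and the Alexa LogC (V3, EI800) encode constants. The composed 1D function maps every HLG code value into a well-behaved 0-1 LogC range, which is exactly the job of a shaper:

```python
import math

def hlg_to_linear(v: float) -> float:
    """BT.2100 HLG inverse OETF (normalized so code 1.0 -> linear ~1.0)."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    return (v * v) / 3.0 if v <= 0.5 else (math.exp((v - c) / a) + b) / 12.0

def linear_to_logc(x: float) -> float:
    """Alexa LogC (V3, EI800) encode."""
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809
    return c * math.log10(a * x + b) + d if x > cut else e * x + f

# A 1024-entry 1D shaper: HLG code values straight to LogC code values.
shaper = [linear_to_logc(hlg_to_linear(i / 1023)) for i in range(1024)]
print(min(shaper), max(shaper))   # everything lands comfortably inside 0-1
```

And if you rescale the linear values to line up mid grey or diffuse white, highlight values above 1.0 still encode inside 0-1 thanks to the log curve - which is the clipping protection being discussed.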
  24. You can break it down by doing HLG to Scene Linear under the HLG LUT menu -> custom matrix fix -> Rec2100 (Rec2020) to AWG (CST node) -> Linear AWG to the Alexa 709 look (download a LUT from the Arri LUT generator). I would do a sanity check to confirm it matches without the matrix fix, then add it in. As for baking it down into a single LUT, I know I can do this in Nuke but that's an expensive piece of software so it may not be an option for everyone. Edit: corrected order of operations.