About Llaasseerr

  1. Raw over HDMI - just force the bayer image down HDMI then debayer later. It's a neat idea but not really useful for this camera since it will record raw natively. There may be some nice advantages that become apparent later. I didn't mention it with regard to monitoring though. What I'm saying is that to monitor raw in a practical, usable way, you need a log image to compress the full dynamic range to a normalised space that can be sent over HDMI/SDI to any typical screen, then you add a transform from log to your final look. The final look is typically something like an Arri look LUT when shooting Alexa, or an ACES RRT/ODT combo (they are both similar anyway). I make the LUT in Nuke or Lattice and then upload it to the monitor. So yes to confirm, not monitoring literally in log, but sending a log signal over HDMI to put the LUT on it. You have to think of log in this case as being the same as raw, but the sensor's whole dynamic range has been compressed to fit down the HDMI with no clipping, via a log transfer function. That way, you get to monitor the raw image. Sure, some things are baked in. But you get the dynamic range. You get a very good, close representation of what you will look at in Resolve with the raw images later. What you are asking for - seeing the clipping point of the raw channels - you can do by monitoring a log signal, since the Sigma log curve will encompass the entire dynamic range of the sensor. However there is a neat view mode on the Digital Bolex that is just the output of the bayer sensor, and you can clearly see where it's clipping the highlights and adjust the exposure accordingly. That would be nice to add. It's actually very similar to what ProRes RAW is doing - just outputting the bayer image over HDMI. Yes, the log monitoring approach doesn't account for the extra white balance or highlight reconstruction flexibility with raw. That's a nice little bonus you get to play with in Resolve.
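To sketch what that Digital Bolex-style bayer clipping view boils down to: count how many photosites sit at the sensor's white level. This is a toy illustration, not actual camera code - the 12-bit white level of 4095 is an assumption, not a published Sigma fp figure.

```python
# Hypothetical 12-bit sensor: 4095 marks the raw clipping point.
WHITE_LEVEL = 4095

def clipped_fraction(bayer, white_level=WHITE_LEVEL):
    """Fraction of photosites at or above the clipping point."""
    total = sum(len(row) for row in bayer)
    clipped = sum(1 for row in bayer for v in row if v >= white_level)
    return clipped / total

# Synthetic 8x8 frame: mostly mid-tone, with a blown 2x2 highlight patch.
frame = [[1800] * 8 for _ in range(8)]
for r in range(2):
    for c in range(2):
        frame[r][c] = 4095

print(clipped_fraction(frame))  # 4 of 64 photosites -> 0.0625
```

A camera view mode (or zebra-style overlay) would just paint the photosites where this predicate is true, letting you pull exposure down until the fraction drops to zero.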
  2. I'm glad they're adding log, but I would only use it for monitoring raw recording. For that, it's crucial. The log image with LUT will then match the RAW image with LUT in Resolve (different input but same output image). So as long as: - They publish the log curve and wide gamut they are using. They need a conventional Cineon/AlexaLogC/ACEScct type of log curve that will hold all the highlight information of the raw sensor. - It can send a log signal over HDMI so that you can record DNG raw and monitor with a LUT on the HDMI log image. If it's sending out raw over HDMI that's quite cool, but as far as monitoring, you are limited to the Atomos Ninja V vs. every other EVF or external monitor on the market that allows a custom LUT. As for exposing raw, the only two things you need to think about are: 1. Expose with a grey card and a light meter like you are exposing film. You can also use false colour and a grey card, but just make sure you're aware of what gamma setting the false colour is expecting, or make your own false colour LUT. 2. Check where the highlights are clipping. If you need to protect the highlights further, then underexpose by one or two stops and push by the same amount in post. Use Neat Video to clean up the noise floor if necessary. Zebras are for a video world. They can be semi-relevant for checking highlights in log since it puts everything in a 0-1 range. Again, we need log output for monitoring and if, for example, shooting for ACES we can put a log to ACES RRT/ODT LUT on the camera monitor/EVF and see how the full dynamic range of the highlights is rolling off. Then get the exact same result on the ingested DNG raw images to ACES in Resolve. Log for monitoring raw recording is crucial as it allows any Chinese 8-bit monitor with a custom LUT option to display the full raw image dynamic range and get a very close match to what you will see in Resolve with your beautiful DNGs - as long as you have your technical LUTs set up correctly.
A published log curve and gamut are a MUST though. Sigma, please do a white paper documenting this. This way, we can do a direct correlation with the linear raw image. And allow sending log over HDMI while recording raw.
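To make the "log fits the whole sensor range down HDMI" point concrete, here is a toy log curve. It is purely illustrative - not Sigma's actual, unpublished curve - and it assumes a 14-stop sensor encoded symmetrically around middle grey:

```python
import math

# Toy log curve: maps linear scene values spanning ~14 stops into [0, 1]
# for an HDMI signal, so a monitor LUT can see the whole sensor range.
# The 7-stops-either-side-of-grey split is an assumption for illustration.
STOPS_BELOW_GREY = 7.0
STOPS_ABOVE_GREY = 7.0

def to_log(lin, grey=0.18):
    stops = math.log2(lin / grey)  # stops relative to middle grey
    stops = max(-STOPS_BELOW_GREY, min(STOPS_ABOVE_GREY, stops))
    return (stops + STOPS_BELOW_GREY) / (STOPS_BELOW_GREY + STOPS_ABOVE_GREY)

# Middle grey lands at signal centre; a value 7 stops over grey still
# fits inside the 0-1 range instead of clipping.
print(round(to_log(0.18), 3))          # 0.5
print(round(to_log(0.18 * 2**7), 3))   # 1.0
```

The point of the real curve is the same: nothing in the sensor's range clips on the way to the monitor, so the LUT downstream decides the look, not the transport.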
  3. Agreed, slimRAW is a great workaround even if it does add a step to the ingest process. But it's a shame the Sigma probably won't see a lossless or lossy DNG variant that may have allowed 12-bit internal recording. Still, external SSDs are cheap and it allows for fast offloads. I do hope Apple ultimately prevails here though.
  4. Yes there are - I know of a few approaches already, but we are really talking OOB here. Yes Resolve is powerful for colour, but it does not make that power explicit and precise like Nuke. It's dumbed down in some ways. Using Gain is the obvious method if you can't use the Raw tab (ProRes footage?). As I mentioned, I think using an ACEScc log curve will make the printer lights behave the same as a Gain - I need to check, it's been a while. I personally don't know, though, what a 1-stop increment is with printer lights. Using Gain is easier: you double it for each stop up or halve it for each stop down. But of course, there should just be a linear Exposure mode in stops that can be toggled right there on the panel as a fully fledged part of the UI. It would require that you tell it what the input transfer function is, then it would bracket that behind the scenes.
  5. Right. It would be great if there was an option to use the "printer lights" feature as a multiply (gain) operation in linear space in Resolve. That's how it is in Nuke, for example. I can't remember to be honest, but I think ACEScc may be a direct log conversion in the blacks, because the bottom of the curve is pinned at black - which is why ACEScct was developed, to feel more like a traditional log curve when grading. I'd need to check but I have been able to get the exact same results with offset in log (probably ACEScc) vs Gain in linear.
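The offset-in-log vs gain-in-linear equivalence can be shown directly with the ACEScc mid-segment formula from the spec (S-2014-003); the toe behaviour below 2^-15 is ignored here, which is exactly why ACEScc's pinned blacks feel odd when grading:

```python
import math

# ACEScc forward/inverse transforms for values above the linear toe
# (x > 2**-15); constants are from the ACEScc spec (S-2014-003).
def acescc(x):
    return (math.log2(x) + 9.72) / 17.52

def acescc_inv(y):
    return 2.0 ** (y * 17.52 - 9.72)

lin = 0.18
one_stop_offset = 1.0 / 17.52  # constant offset in log worth exactly +1 stop

pushed = acescc_inv(acescc(lin) + one_stop_offset)
print(round(pushed / lin, 6))  # 2.0: offset in log == 2x gain in linear
```

Because the curve is a pure log2 in this segment, adding a constant in ACEScc is identical to multiplying in linear - which would explain getting the exact same results from Offset in log and Gain in linear.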
  6. Exactly - it's not intuitive, or consistent across different working scenarios. I do use that. There is also the option to do it in the Offset tab in log space, but that is not particularly transparent either. In theory the more recently added right-click option to bracket a grade operation within a certain color space should make this work a little better, but in practice I've found it doesn't work as consistently and explicitly as it does in, say, Nuke. The presumption in the design of Resolve that one would only want to do a linear exposure adjustment on raw footage in the raw tab is a little odd.
  7. Well said. To speak to part of what you're saying, I'm still not clear on why Blackmagic can't publish their log curve and gamut, so we can go to linear. I mean, it's one of the main reasons I never ended up buying a Blackmagic camera.
  8. If you mean some kind of container with still frames inside, then yes. I'm not clear on how MXF works but I'd like it if it was the best of both worlds and just a wrapper around the DNG frames that you can right-click on and then go inside. I'll agree this is all pretty old tech but I'm not as opposed to the format as others. Personally I do think that the same general advantages I talked about with frames apply to intermediate sequences and source media as far as file handling. I'm not 100% on this, but if you record a movie then can't a couple of bad frames corrupt the whole mov file, rather than allowing you to salvage most of what was shot because it was frame based? I'm talking about source metadata that is carried all the way through from shoot to ingest through post and VFX to DI. It could be matrices, CDL, lens information, etc. So if you go to an intermediate format like EXR then all the footage metadata from the shoot comes along for the ride. But I'll admit that this workflow is not something that most people on this forum are considering. That's cool - you wrote slimRAW, right? I own it! I have a Digital Bolex and it's 100% necessary for that camera. I'm sure it will be gold for the Sigma fp. I personally don't believe it's worth the overhead of debayering raw on the timeline, but then again I haven't tried out ProRes RAW. BRAW seems lossy in the chroma channels so I'm not going there. I just feel it's better to ingest to full-dynamic-range floating-point RGB (EXR) or log DPX/ProRes. And I can assure you that raw processing controls are utilised on big productions, but at an ingest stage. There is always the option to go back and reprocess the raw if the debayer algorithm needs changing - or the colour temperature, but in that case the data bucket of the EXR is so huge that a temperature shift to the RGB image is 99% of the time totally fine.
The reality is that when you capture the whole thing to an EXR or DPX there is very little that is baked in. As to what you are saying about grading at a raw level in a more precise way - I do agree that grading software can seem kind of slapdash in ways that are just really odd. I personally use Nuke for things like exposure and temperature changes, and Nuke's grading nodes are much more mathematically oriented than Resolve's. Not that Nuke is a good grading tool per se, but it's more suited to making precise changes. It bothers me that if a DP says "can you push this plate +1 stop" there is no obvious go-to linear exposure control in Resolve - except in the raw controls tab. Also it's really weird the way Resolve does not allow direct colour matrix input. Mate, you're preaching to the choir. I'm 100% with you on exposure in linear - then why doesn't Resolve offer a linear exposure adjustment tool except on the raw tab? This baffles me. You work for Blackmagic, right? I'm not actually a colorist though. I mainly use Nuke (all-linear all the time) and am reluctantly learning Resolve. For me, going to and from XYZ in Nuke to do some comp operation is a lot more intuitive than it is in Resolve. So that concept is not foreign to me either. I tech-proof things in Nuke before trying to rebuild them with Resolve nodes. I appreciate that Resolve 16 added the colour temperature adjustment node, but I do agree about white balance in raw being the best way to do it. A friend who worked on Rogue One told me the DP Greig Fraser apparently shot the Alexa 65 all at 6500K since it was all RAW capture, and then of course that can be adjusted on raw ingest. He may be wrong about this, but this is what I heard since he is a UI designer and he wanted the white point of the UI elements to match the footage - so that came out of a conversation with the DP on set.
So yes, we are talking about exposure balance, white balance/colour temperature and debayer at the raw ingest stage. I.e. you are proving my point - this is best done at ingest as a kind of tech-grade first-pass step that can always be revisited if need be. What you say about this Light Iron guy further backs up what I'm saying. If you need to "match shots" in DI then you should already be 90% of the way there with your first pass and CDL, since by the DI stage the film is 90% complete.
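Both of the linear-light operations above are trivially precise once you have floating-point RGB - which is the whole argument for doing them at ingest. A minimal sketch (the white balance gains here are illustrative, not from any real camera profile):

```python
# "Push this plate +1 stop" in linear light is just multiplication, and a
# white balance tweak on linear RGB (e.g. an EXR pulled from raw) is
# per-channel gain. Gains below are made-up illustrative numbers.
def push_stops(rgb, stops):
    return [c * 2.0 ** stops for c in rgb]

def white_balance(rgb, gains=(1.15, 1.0, 0.88)):
    return [c * g for c, g in zip(rgb, gains)]

grey = [0.18, 0.18, 0.18]
print(push_stops(grey, 1))   # doubles every channel: [0.36, 0.36, 0.36]
print(white_balance(grey))   # warmer: R gained up, B gained down
```

This is essentially what Nuke's Multiply/Exposure nodes do, and why a linear exposure control in stops is such an obvious thing to want outside the raw tab.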
  9. Are you talking about Sigma adding DNG metadata? Obviously that's great, and I like Resolve's ability to create an IDT on the fly using that data. If it's something else you're talking about, I'd be interested to know. What I was describing is a higher-precision IDT based on spectral sensitivity measurements - not "just" a matrix and a 1D delog LUT. If you look at the rawtoaces GitHub page, the intermediate method is, as you describe, the way Resolve works. It only tries that method if the aforementioned spectral sensitivity data is not available.
  10. Per-frame metadata is a big part of feature film production and is not going anywhere. But at the lower end of the market it is probably not that relevant. Have you actually tried it? There are many reasons that VFX and DI facilities work with frames. A single frame is much smaller to read across the network than a large movie file, and the software will display a random frame much faster when scrubbing around - not factoring in CPU overhead for things like compression - or debayering. As for my example of upload to cloud, a multithreaded command line upload of a frame sequence is much faster than a movie file, and I'm able to stream frames from cloud to timeline with a fast enough internet connection. But in a small setup where you are just making your own movies at home then this all may be a moot point. In a film post production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you are a one person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in Resolve would be possible. Our workflows are very different because of different working environments. And for what you are doing, your way of working may be best for you.
  11. I prefer file sequences so I'm fine with what the Sigma fp is doing there. Fingers crossed DNG compression gets added eventually. Coming from using software like Nuke and Resolve in film production, all shows are ingested from raw or ProRes as DPX or OpenEXR and go all the way through to DI in this way (editing is DNx in Avid). Indie features and second-tier TV shows are typically ProRes or Avid DNx, and I understand at the DSLR level it's all movie formats, not frames. It's only at the DSLR level that anyone is actually trying to edit raw. IMO it would be like trying to edit with the original camera negative on a Steenbeck. Maybe we can just say that it's a cultural difference. But discrete frames allow the following: - Varying metadata to be stored per frame. - The whole file isn't hosed because of one bad frame. - File transfer is easier: way less load on the network, and this is a biggie with transfer to/from the cloud. Having the option of MXF is fine though. I remember testing a CinemaDNG MXF when the spec was just released, but no one actually used it.
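The multithreaded-transfer advantage comes from each frame being an independent job. A minimal sketch of the shape of it - `upload_frame` here is a stand-in for a real storage client call (S3, GCS, etc.), not an actual API:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Each DNG/EXR frame in a sequence is an independent upload job, unlike one
# large .mov, so transfers parallelise trivially across threads/connections.
def upload_frame(path):
    # Stand-in: a real implementation would call your storage client here.
    return path.name

def upload_sequence(frames, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_frame, frames))

frames = [Path(f"shot_010.{n:07d}.dng") for n in range(1, 5)]
print(upload_sequence(frames))
```

A corrupted byte range also only costs you the one frame whose job it touched, rather than risking the whole container.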
  12. It would be great if Sigma made an ACES IDT based on measuring the sensor's spectral sensitivity data. From the rawtoaces documentation on GitHub: The preferred, and most accurate, method of converting RAW image files to ACES is to use camera spectral sensitivities and illuminant spectral power distributions, if available. If spectral sensitivity data is available for the camera, rawtoaces uses the method described in Academy document P-2013-001 (.pdf download)
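For contrast, the fallback "intermediate" path (matrix plus linearisation, the way Resolve reads DNG metadata) reduces to a 3x3 multiply on linearised camera RGB. The matrix below is purely illustrative - a real IDT matrix comes from the camera's measured or published data:

```python
# Sketch of the matrix-based IDT path: a 3x3 matrix (from DNG metadata or
# a regression fit) takes linearised camera RGB to ACES primaries.
# These coefficients are made up for illustration only.
CAM_TO_ACES = [
    [0.75, 0.15, 0.10],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
]

def idt(camera_rgb):
    return [sum(m * c for m, c in zip(row, camera_rgb)) for row in CAM_TO_ACES]

# Each row sums to 1.0, so a neutral grey patch stays neutral through the IDT.
print(idt([0.18, 0.18, 0.18]))
```

The spectral method is preferred precisely because a single matrix can only approximate the sensor's response across all illuminants.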
  13. Thank you for talking some sanity on this oft-misunderstood subject.
  14. My "raw workflow" is just: expose with a light meter, also get a grey card, and then import to Resolve. In the raw settings I may use highlight reconstruction and do an exposure adjustment. Generally from there, if not working in ACES I do a CST to log (Cineon or Alexa LogC) then put the PFE LUT on it to see how it will look before any log space grading. Edit: to be clear, the PFE LUT is the last thing in the chain but I'll put it on before grading (time-wise, not in the node graph). This gives me a nice quick one-light and should match the on-set monitor. I didn't see anything about these images that would make me deviate from that. They seem like regular raw images to me once they are in Resolve, but I'll have to see when more becomes available.
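The exposure-then-CST part of that one-light is simple to sketch. This uses a simplified Cineon-style encode (685 = reference white, 300 code values per decade of exposure, so roughly 90 per stop) and ignores Cineon's soft black offset, so it is an illustration rather than a spec match:

```python
import math

# One-light sketch: push exposure in linear, then encode to a simplified
# Cineon-style log. Real CSTs also handle the black offset and gamut.
def push(lin, stops):
    return lin * 2.0 ** stops

def cineon(lin):
    cv = 685.0 + 300.0 * math.log10(max(lin, 1e-6))
    return min(max(cv, 0.0), 1023.0) / 1023.0

grey = 0.18
print(round(cineon(grey), 3))            # ~0.451
print(round(cineon(push(grey, 1)), 3))   # one stop up: ~0.539
```

The PFE LUT would then sit on top of this log image, which is why the Resolve output and the on-set monitor can match: same log input, same LUT.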
  15. Thanks for clarifying there is some kind of curve on the 8-bit image - nice to know they thought about that! I'm not sure what you mean as far as color tables in DNG (will need to give the spec another look) but I'm mainly referring to Resolve interpreting colour matrices stored as DNG metadata - which it does. So that should be the crux of the colour transform decisions it's making on the raw image. Resolve actually does the best mainstream job at interpreting a CinemaDNG image because it puts it into a high dynamic range space with no highlight clipping if you have the Resolve project set up correctly, and it exposes it more or less correctly for middle grey.