Everything posted by Llaasseerr

  1. I'd love to know more about this mode. From what I saw, it doesn't look like anything useful, except that they did not apply the s-curve and gamut transformation, and it still looked gamma encoded. A linear raw image can be transformed by your LUT to your final output display look, but it needs to be in a 0-1 space first. There are two typical ways of doing this. 1. First transform to log with a shaper LUT, then apply your look LUT, which is typically a log-to-Rec709 LUT, maybe with a print film emulation as well. 2. The "none" image may be linear, but to retain all the highlight information a raw image must be exposed down. This is how tools like DCraw export images to disk as 16 bit integer files without any dynamic range loss. In order to apply your LUT, you first need to adjust the exposure back to what it's meant to be, then transform through a log shaper LUT, then proceed with the shaper LUT -> log-to-Rec709 combo. For this to be accurate, at every step of the way you need to know what the colour transform is so that you can account for it in your LUT creation.
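The exposure-up-then-shape steps in method 2 can be sketched as follows. The shaper here is a toy log2 normalisation over an assumed 16-stop range, not any camera's published curve:

```python
import math

def expose(lin, stops):
    # Exposure compensation in linear light: each stop is a factor of two.
    return lin * 2.0 ** stops

def lin_to_shaper(lin, min_stops=-8.0, max_stops=8.0):
    # Toy log shaper: maps linear values (including >1.0 highlights) into
    # a 0-1 range so a look LUT can be applied. The stop range covered is
    # an assumption; a real shaper would match the camera's dynamic range.
    lin = max(lin, 2.0 ** min_stops)
    v = (math.log2(lin) - min_stops) / (max_stops - min_stops)
    return min(v, 1.0)

# A DCraw-style export exposed down 3 stops gets pushed back up first,
# then shaped into 0-1, and only then is the log-to-Rec709 look applied.
grey = expose(0.18 / 8.0, 3.0)   # back to middle grey 0.18
shaped = lin_to_shaper(grey)     # ~0.345, ready for the look LUT
```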
  2. The main issue for me with ProResHQ is chroma subsampling. You also miss out on some highlight reconstruction options, but that's just putting data into the image that was not there. But I agree, ISO and white balance operations are just RGB multipliers, so they are not particularly destructive. To replicate what happens in linear raw you just need to delog the footage and, if need be, set the gamma to 1.0 before you do those operations, or use the correct mathematical operation in log space (add vs multiply). This will work in ACEScc log encoding. Edit: this is assuming you're actually log encoding your ProRes, which we can't do with this camera. I'm personally fine with raw, but there needs to be an accurate monitoring method. Accurate to your final intent, that is, since the raw image without context is meaningless.
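The add-vs-multiply point can be checked with a pure log2 encoding as a stand-in for a camera log curve (ACEScc behaves like this over most of its range, though not near black):

```python
import math

def lin_to_log2(x):
    # Pure log2 stand-in for a camera log curve.
    return math.log2(x)

x = 0.18
# One stop up as a multiply in linear light...
via_linear = lin_to_log2(x * 2.0)
# ...is the same image as a constant add (offset) in log space.
via_log_offset = lin_to_log2(x) + 1.0
assert abs(via_linear - via_log_offset) < 1e-12
```

This is just log(a*b) = log(a) + log(b): a gain in linear becomes an offset in log, which is why the wrong operation in the wrong space skews the image.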
  3. Right, that doesn't make sense on re-reading. The article seemed to be saying there was an option to capture 8 bit 4:2:2 ProRes from the incoming raw stream, instead of writing it as ProRes RAW proper. It could be a mistake on their part. I haven't actually used a Ninja V before, but if there was an 8 bit compressed option I thought maybe it would be some kind of log version. Maybe some sort of dump to disk of the debayer that you are looking at while monitoring?
  4. This is a pretty interesting upgrade. I'd still like to see them add true 24p and also internal raw DCI 2K like the X-T3 does. I'm also very interested in what the output looks like when you capture the director's viewfinder mode. Can we do Super 16? Anamorphic 35mm? Does it have a big black area around it? As for the omission of a log profile, here's my take. For me, this would be most useful for monitoring with your dailies LUT while recording raw, since currently there does not appear to be an accurate way to monitor raw. The Digital Bolex handles this brilliantly and simply by outputting log over HDMI. If you hook up an external monitor that accepts 3D LUTs, the log can be transformed to your final Rec709 look. That is also because they published their log spec, so you can do the transform accurately. Same idea as an Arri LogC to Rec709 LUT. This workflow means your on-set LUT matches what you will see in Resolve when viewing your raw dailies footage. Ideally you could capture raw but view some kind of Sigma log on the camera screen and, crucially, also add your own log-to-Rec709 transform LUT on top. That way you don't need an external monitor at all. If the new HDR mode can be output over HDMI while recording internal raw, and the HDR is a known spec like HLG, then with the addition of a LUT it can be used to monitor the raw image, and it can also be used in place of a log profile. Another option is if you can display log versions of ProRes or BMD raw captures and put your own LUT on top of the log image. I'm not sure of the capabilities of the recorders though. I don't think the new 'none' mode is going to help much, but if Sigma released details explaining exactly what is captured in this mode, then we could make our own decisions. It certainly doesn't look like linear 1.0 gamma since it's not dark enough, and it is most likely clipping highlight information above 1.0. Edit: does this output some kind of 8 bit ProRes raw over HDMI? This was mentioned on another site. If this is some kind of log signal so it fits in 8 bits, then again it can be used for monitoring.
  5. Yeah, it seems the Cinematics lens may be good because it "just works", which on-set, with limited time, is valuable. Correct me if I'm wrong, but even if you can use shims on an Ursa to get the regular Sigma lens just so, that's something you have to think about redoing if you want to change lenses. Right, purchasing anything is all theoretical as of right now.
  6. Well that's great to know. Choice is good. Before, you said that neither of them was parfocal, and now you're saying they are both "near parfocal". Which is it? My information was based on Sigma's own specs, and also Cinematics saying "The lens has been changed into parfocal after modification" (quoted from their website). I don't really care either way; I was just throwing out some talking points in connection with maybe buying the Ursa Mini 4K and also seeing if anyone had any DNG frames to share. Instead you've gone out of your way to tell me my choices of sensor, lens mount, lens length, lens type and lens features are wrong or misinformed. Okay, thanks.
  7. The original isn't parfocal. I have a Cooke S16 zoom for my Digital Bolex and am kind of spoilt by that. A lot of cine lenses are rehoused photo lenses - Duclos etc. for vintage photo lenses, but also things like Arri's original 65mm lenses, which were rehoused Hasselblad V lens elements.
  8. Don't assume I have no knowledge of the pros and cons of camera sensor DR. I realise it's lower DR than the 4.6K, but I'm a big fan of global shutter among other things. As a personal challenge I'm interested to see what I can eke out of this largely ignored camera that continues to get cheaper. I'm not going for the Pro version; if that was my budget I would get an Alexa Classic. Tbh I'm not a fan of Blackmagic cameras, but this old Ursa 4K intrigues me. The Cinematics rehoused Sigma lens I mentioned in EF mount is "a few hundred cheaper" than the PL version. You can check the pricing: http://www.pchood.com/index.php?route=common/home I think you're referring to the original Sigma photo lens, but I want something parfocal. There are no rules beyond my individual filmmaking needs. I'm not a gigging camera person who needs a stack of equipment to meet a client's whims. I work in the film industry as a VFX supervisor, but this is for my own no-budget creative projects where I determine the aesthetic, including the lens lengths. And also maybe for shooting the odd VFX element, where global shutter is very useful. I never said I wouldn't rent an entire set. I said I was interested in maybe getting a cheap, great looking Chinese 35 like the 7Artisans Leica copy, and I name-checked a movie that shot on a single 35 that I loved the look of. I would say for the majority of what I would do the 18-35 is great as a "variable prime", and if I needed more then I would buy the 50-100 later. On a 35mm film back like the Ursa 4K, going longer than a 35 focal length has not been a need of mine so far. Cheers!
  9. Thanks for your feedback on this camera, and yes, if you have any sample DNG frames that would be great! For sure, the constrained DR is an issue. It may be a dealbreaker but I want to do more research. I have a feeling that the extra DR in raw may only be because of highlight reconstruction, but it's still useful. The trick is in how I choose to create the highlight rolloff with the sensor's captured linear data. Even with one less stop, a graceful rolloff created in post goes a long way. I'm concerned about FPN, but it seems that if handled correctly with fill and maybe some Neat Video it's manageable. What I'm not clear on is whether any firmware tweaks and the V2 sensor have improved this since the BMPC4K. But it seems best to just shoot ISO 400 in most situations. Agreed, global shutter is HUGE for me. I own a Digital Bolex. And CFast cards being much cheaper too. I just want this as an experimental camera right now. No one wants this camera and it's going to keep getting cheaper. And what you're saying about just adding a V-mount - exactly. I'm looking at getting an FXLion Nano that I can use on both my Digital Bolex and this. Yes, this would be great for shooting fashion, esp. if adding the RawLite OLPF. And dealing with flashes/strobes, which even an Alexa handles badly.
  10. Like some others, I'm looking into a second-hand Ursa Mini 4K now that the price has come down. I'm not sure about the EF vs PL mount. The EF is more practical, but the PL feels more legit, and one of the Chinese Cinematics rehoused Sigma 18-35 zooms could be the go. However, that also comes in EF, which is a few hundred cheaper. The decision would rest on whether I'm likely to want to rent a Cooke lens or something. I also like the idea of a 7artisans 35mm lens on the EF mount - shout out to "Call Me By Your Name"! My main concern is DR; however, I noticed how much better the raw footage is on the BMPC4K (earlier gen sensor) than the ProRes, so I plan to shoot raw. I've been searching online for some sample DNGs from this sensor (preferably the Ursa Mini 4K), but so far I haven't found much, since a lot of the stuff posted a few years ago has expired Dropbox links. If anyone has any raw footage from this camera I'd love to download a few DNG frames! Hit me up! The alternative is to rent one for a day.
  11. Raw over HDMI - just force the bayer image down HDMI then debayer later. It's a neat idea but not really useful for this camera since it will record raw natively. There may be some nice advantages that become apparent later. I didn't mention it with regards to monitoring though. What I'm saying is that to monitor raw in a practical, usable way, you need a log image to compress the full dynamic range to a normalised space that can be sent over HDMI/SDI to any typical screen, then you add a transform from log to your final look. The final look is typically something like an Arri look LUT when shooting Alexa, or the ACES RRT/ODT combo (they are both similar anyway). I make the LUT in Nuke or Lattice and then upload it to the monitor. So yes, to confirm: not monitoring literally in log, but sending a log signal over HDMI to put the LUT on. You have to think of log in this case as being the same as raw, but with the sensor's whole dynamic range compressed to fit down the HDMI with no clipping, via a log transfer function. That way, you get to monitor the raw image. Sure, some things are baked in. But you get the dynamic range. You get a very good, close representation of what you will look at in Resolve with the raw images later. What you are asking for, to see the clipping point of the raw channels - you can do that by monitoring a log signal, since the Sigma log curve will encompass the entire dynamic range of the sensor. However, there is a neat view mode on the Digital Bolex that is just the output of the bayer sensor, and you can clearly see where it's clipping the highlights and adjust the exposure accordingly. That would be nice to add. It's actually very similar to what ProRes RAW is doing - just outputting the bayer image over HDMI. Yes, the log monitoring approach doesn't account for the extra white balance or highlight reconstruction flexibility of raw. That's a nice little bonus you get to play with in Resolve.
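Why a log signal survives an 8 bit HDMI path: with a toy log2 normalisation over an assumed 14-stop sensor range (illustrative numbers, not a published camera curve), every stop gets the same share of the 256 codes, instead of the top stop eating half of them the way a linear encoding would:

```python
import math

STOPS = 14                # assumed sensor dynamic range, in stops
FLOOR = 2.0 ** -STOPS     # darkest encodable linear value

def log_encode(lin):
    # Toy normalisation: compress the whole range into 0-1 so it fits
    # down HDMI with no clipping.
    lin = max(lin, FLOOR)
    return min((math.log2(lin) + STOPS) / STOPS, 1.0)

def to_8bit(v):
    return round(v * 255)

# Each stop lands a uniform ~18 codes apart across the whole range.
codes_per_stop = to_8bit(log_encode(0.36)) - to_8bit(log_encode(0.18))
```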
  12. I'm glad they're adding log, but I would only use it for monitoring raw recording. For that, it's crucial. The log image with LUT will then match the raw image with LUT in Resolve (different input but same output image). So as long as: - They publish the log curve and wide gamut they are using. They need a conventional Cineon/Alexa LogC/ACEScct type of log curve that will hold all the highlight information of the raw sensor. - It can send a log signal over HDMI so that you can record DNG raw and monitor with a LUT on the HDMI log image. If it's sending out raw over HDMI that's quite cool, but as far as monitoring goes, you are limited to the Atomos Ninja V vs. every other EVF or external monitor on the market that allows a custom LUT. As for exposing raw, the only two things you need to think about are: 1. Expose with a grey card and a light meter like you are exposing film. You can also use false colour and a grey card, but just make sure you're aware of what gamma setting the false colour is expecting, or make your own false colour LUT. 2. Check where the highlights are clipping. If you need to protect the highlights further, then underexpose by one or two stops and push by the same amount in post. Use Neat Video to clean up the noise floor if necessary. Zebras are for a video world. They can be semi-relevant for checking highlights in log since it puts everything in a 0-1 range. Again, we need log output for monitoring, and if for example shooting for ACES, we can put a log-to-ACES RRT/ODT LUT on the camera monitor/EVF and see how the full dynamic range of the highlights is rolling off. Then get the exact same result on the ingested DNG raw images to ACES in Resolve. Log for monitoring raw recording is crucial as it allows any Chinese 8 bit monitor with a custom LUT option to display the full raw image dynamic range and get a very close match to what you will see in Resolve with your beautiful DNGs - as long as you have your technical LUTs set up correctly. 
Published log and gamut is a MUST though. Sigma, please do a white paper documenting this. This way, we can do a direct correlation with the linear raw image. And allow sending log over HDMI while recording raw.
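On the false-colour caveat above: the bands only mean anything if the thresholds match the encoding the image is actually in. A toy mapping for a 0-1 log-encoded signal (the thresholds are illustrative, not any vendor's standard):

```python
def false_colour(v):
    # Bucket a 0-1 log code value into exposure bands. Where "middle grey"
    # sits depends entirely on the log curve - which is the whole point:
    # false colour built for one gamma is wrong for another.
    if v < 0.10:
        return "purple"   # crushed blacks
    if v < 0.35:
        return "blue"     # shadows
    if v < 0.45:
        return "green"    # around middle grey (curve-dependent)
    if v < 0.85:
        return "grey"     # normal exposure range
    if v < 0.95:
        return "yellow"   # approaching clip
    return "red"          # clipped
```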
  13. Agreed, slimRAW is a great workaround even if it does add a step to the ingest process. But it's a shame the Sigma probably won't see a lossless or lossy DNG variant that may have allowed 12 bit internal recording. Still, external SSDs are cheap and it allows for fast offloads. I do hope Apple ultimately prevails here though.
  14. Yes there are - I know of a few approaches already; we are really talking OOB though. Yes, Resolve is powerful for colour, but it does not make that power explicit and precise like Nuke does. It's dumbed down in some ways. Using Gain is the obvious method if you can't use the Raw tab (ProRes footage?). As I mentioned, I think using an ACEScc log curve will make the printer lights behave the same as Gain - I need to check, it's been a while. I personally don't know, though, what a 1 stop increment is with printer lights. Using Gain is easier: you double it for each stop up or halve it for each stop down. But of course, there should just be a linear Exposure mode in stops that can be toggled right there on the panel as a fully fledged part of the UI. It would require that you tell it what the input transfer function is, then it would bracket that behind the scenes.
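The Gain arithmetic above, written out as the stops-based control the post is asking for (a sketch of the idea, not a Resolve feature or API):

```python
def gain_for_stops(stops):
    # Gain is a plain linear multiplier, so stops are powers of two:
    # +1 stop doubles the value, -1 stop halves it.
    return 2.0 ** stops

assert gain_for_stops(1) == 2.0    # one stop up
assert gain_for_stops(-1) == 0.5   # one stop down
assert gain_for_stops(2) == 4.0    # two stops up
```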
  15. Right. It would be great if there was an option to use the "printer lights" feature as a multiply (gain) operation in linear space in Resolve. That's how it is in Nuke, for example. I can't remember, to be honest, but I think ACEScc may be a direct log conversion in the blacks, because the bottom of the curve is pinned at black - which is why ACEScct was developed, to feel more like a traditional log curve when grading. I'd need to check, but I have been able to get the exact same results with offset in log (probably ACEScc) vs Gain in linear.
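This can be checked against the published ACEScc curve (S-2014-003): above its toe the curve is pure log2, so a one-stop Gain in linear equals a fixed Offset of 1/17.52 in code values; in the toe blend below 2^-15 the match breaks down, which is the pinned-black behaviour mentioned above.

```python
import math

def acescc_encode(x):
    # ACEScc lin-to-log per S-2014-003.
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

ONE_STOP = 1.0 / 17.52  # one stop expressed as an ACEScc offset

# Midtones: multiply-by-2 in linear == +ONE_STOP offset in ACEScc.
x = 0.18
assert abs(acescc_encode(2 * x) - (acescc_encode(x) + ONE_STOP)) < 1e-9

# Deep toe: the equivalence no longer holds, because the curve is no
# longer a pure log there.
t = 2.0 ** -17
assert abs(acescc_encode(2 * t) - (acescc_encode(t) + ONE_STOP)) > 1e-3
```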
  16. Exactly - it's not intuitive, or consistent across different working scenarios. I do use that. There is also the option to do it in the Offset tab in log space, but that is not particularly transparent either. In theory the more recently added right-click option to bracket a grade operation within a certain colour space should make this work a little better, but in practice I've found it doesn't work as consistently and explicitly as it does in, say, Nuke. The presumption in the design of Resolve that one would only want to do a linear exposure adjustment on raw footage in the raw tab is a little odd.
  17. Well said. To speak to part of what you're saying, I'm still not clear on why Blackmagic can't publish their log curve and gamut, though, so we can go to linear. I mean, it's one of the main reasons I never ended up buying a Blackmagic camera.
  18. If you mean some kind of container with still frames inside, then yes. I'm not clear on how MXF works, but I'd like it if it was the best of both worlds and just a wrapper around the DNG frames that you can right-click on and then go inside. I'll agree this is all pretty old tech, but I'm not as opposed to the format as others. Personally I do think that the same general advantages I talked about with frames apply to intermediate sequences and source media as far as file handling. I'm not 100% on this, but if you record a movie then can't a couple of bad frames corrupt the whole mov file, rather than allowing you to salvage most of what was shot because it was frame based? I'm talking about source metadata that is carried all the way through from shoot to ingest through post and VFX to DI. It could be matrices, CDL, lens information, etc. So if you go to an intermediate format like EXR then all the footage metadata from the shoot comes along for the ride. But I'll admit that this workflow is not something that most people on this forum are considering. That's cool. You wrote slimRAW, right? I own it. I have a Digital Bolex and it's 100% necessary for that camera. I'm sure it will be gold for the Sigma fp. I personally don't believe it's worth the overhead of debayering raw on the timeline, but then again I haven't tried out ProRes RAW. BRaw seems lossy in the chroma channels so I'm not going there. I just feel it's better to ingest to full dynamic range floating point RGB (EXR) or log DPX/ProRes. And I can assure you that raw processing controls are utilised on big productions, but at an ingest stage. There is always the option to go back and reprocess the raw if the debayer algorithm needs changing - or the colour temperature, but in that case the data bucket of the EXR is so huge that a temperature shift to the RGB image is 99% of the time totally fine. 
The reality is that when you capture the whole thing to an EXR or DPX there is very little that is baked in. As to what you are saying about grading at a raw level in a more precise way - I do agree that grading software can seem kind of slapdash in some ways that are just really odd. I personally use Nuke for things like exposure and temperature changes, and Nuke's grading nodes are much more mathematically oriented than Resolve's. Not that Nuke is a good grading tool per se, but it's more suited to making precise changes. It bothers me that if a DP says "can you push this plate +1 stop" there is no obvious go-to linear exposure control in Resolve - except in the raw controls tab. Also, it's really weird the way Resolve does not allow direct colour matrix input. Mate, you're preaching to the choir. I'm 100% with you on exposure in linear - then why doesn't Resolve offer a linear exposure adjustment tool except on the raw tab? This baffles me. You work for Blackmagic, right? I'm not actually a colorist though. I mainly use Nuke (all-linear, all the time) and am reluctantly learning Resolve. For me, going to and from XYZ to do some comp operation is a lot more intuitive than it is in Resolve. So that concept is not foreign to me either. I tech-proof things in Nuke before trying to rebuild them with Resolve nodes. I appreciate that Resolve 16 added the colour temperature adjustment node, but I do agree about white balance in raw being the best way to do it. A friend who worked on Rogue One told me the DP Greig Fraser apparently shot the Alexa 65 all at 6500K since it was all raw capture, and then of course that can be adjusted on raw ingest. He may be wrong about this, but this is what I heard, since he is a UI designer and he was wanting the white point of the UI elements to match the footage - so that came out of a conversation with the DP on set. 
So yes, we are talking about exposure balance, white balance/colour temperature and debayer at the raw ingest stage. I.e. you are proving my point - this is best done at ingest as a kind of tech grade first pass step that can always be revisited if need be. The thing you said about this Light Iron guy further backs up what I'm saying. If you need to "match shots" in DI then you should already be 90% of the way there with your first pass and CDL, since by DI stage the film is 90% complete.
  19. Are you talking about Sigma adding DNG metadata? Obviously that's great, and I like Resolve's ability to create an IDT on the fly using that data. If it's something else you're talking about, I'd be interested to know. What I was describing is a higher precision IDT based on spectral sensitivity measurements - not "just" a matrix and a 1D delog LUT. If you look at the rawtoaces GitHub page, the intermediate method is as you describe the way Resolve works. It only tries that method if the aforementioned spectral sensitivity data is not available.
  20. Per-frame metadata is a big part of feature film production and is not going anywhere. But at the lower end of the market it is probably not that relevant. Have you actually tried it? There are many reasons that VFX and DI facilities work with frames. A single frame is much smaller to read across the network than a large movie file, and the software will display a random frame much faster when scrubbing around - not factoring in CPU overhead for things like compression or debayering. As for my example of upload to cloud, a multithreaded command-line upload of a frame sequence is much faster than a movie file, and I'm able to stream frames from cloud to timeline with a fast enough internet connection. But in a small setup where you are just making your own movies at home, this may all be a moot point. In a film post-production pipeline, raw controls are really for ingest at the start of post, not grading at the end. But I agree that if you are a one-person band who doesn't need to share files with anyone, then ingesting raw, editing and finishing in Resolve would be possible. Our workflows are very different because of different working environments. And for what you are doing, your way of working may be best for you.
  21. I prefer file sequences, so I'm fine with what the Sigma fp is doing there. Fingers crossed DNG compression gets added eventually. Coming from using software like Nuke and Resolve in film production, all shows are ingested from raw or ProRes as DPX or OpenEXR and go all the way through to DI in this way (editing is DNx in Avid). Indie features and second-tier TV shows are typically ProRes or Avid DNx, and I understand at the DSLR level it's all movie formats, not frames. It's only at the DSLR level that anyone is actually trying to edit raw. IMO it would be like trying to edit with the original camera negative on a Steenbeck. Maybe we can just say that it's a cultural difference. But discrete frames allow the following: - Varying metadata to be stored per frame. - The whole file isn't hosed because of one bad frame. - File transfer is easier: way less load on the network, and this is a biggy with transfer to/from cloud. Having the option of MXF is fine though. I remember testing a CinemaDNG MXF when the spec was just released, but no-one actually used it.
  22. It would be great if Sigma made an ACES IDT based on measuring the sensor's spectral sensitivity data. From the rawtoaces documentation on GitHub: "The preferred, and most accurate, method of converting RAW image files to ACES is to use camera spectral sensitivities and illuminant spectral power distributions, if available. If spectral sensitivity data is available for the camera, rawtoaces uses the method described in Academy document P-2013-001 (.pdf download)."
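For contrast, the fallback method (used when no spectral data is available) reduces to a 1D linearization plus a 3x3 matrix. A skeleton of that shape, with an identity placeholder (a real IDT would use a measured camera-to-AP0 matrix and the camera's actual delog curve):

```python
# Placeholder matrix -- NOT a measured camera matrix, just the identity
# so the skeleton is runnable.
CAM_TO_AP0 = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def apply_idt(rgb, delog=lambda v: v):
    # 1) undo the transfer function per channel,
    # 2) matrix the result into ACES AP0 primaries.
    lin = [delog(c) for c in rgb]
    return [sum(CAM_TO_AP0[r][c] * lin[c] for c in range(3)) for r in range(3)]
```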
  23. Thank you for talking some sanity on this oft-misunderstood subject.
  24. My "raw workflow" is just: expose with a light meter, also get a grey card, and then import to Resolve. In the raw settings I may use highlight reconstruction and do an exposure adjustment. Generally from there, if not working in ACES, I do a CST to log (Cineon or Alexa LogC) then put the PFE LUT on it to see how it will look before any log space grading. Edit: to be clear, the PFE LUT is the last thing in the chain, but I'll put it on before grading (time-wise, not in the node graph). This gives me a nice quick one-light and should match the on-set monitor. I didn't see anything about these images that would make me deviate from that. They seem like regular raw images to me once they are in Resolve, but I'll have to see when more becomes available.
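The order of operations in that one-light, as a sketch. The shaper and PFE here are toy stand-ins (a log2 normalisation and a smoothstep), not real camera or print-film transforms:

```python
import math

def lin_to_log(c, lo=-8.0, hi=8.0):
    # Toy log shaper into 0-1 (stand-in for Cineon / Alexa LogC).
    return min(max((math.log2(max(c, 2.0 ** lo)) - lo) / (hi - lo), 0.0), 1.0)

def pfe_lut(c):
    # Toy print-film emulation: a gentle s-curve on the log signal.
    return c * c * (3.0 - 2.0 * c)  # smoothstep

def one_light(linear_rgb, exposure_stops=0.0):
    pushed = [c * 2.0 ** exposure_stops for c in linear_rgb]  # raw-tab exposure
    log_rgb = [lin_to_log(c) for c in pushed]                 # CST to log
    # ...log-space grading would sit here, before the PFE...
    return [pfe_lut(c) for c in log_rgb]                      # PFE last in chain
```

The point of the ordering is that exposure happens in linear (raw tab), grading happens in log, and the PFE stays at the end of the node graph regardless of when it was added.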