Everything posted by Llaasseerr

  1. So I took a look at this, and it does look promising as far as where the highlights have been placed in the Ninja V recording. I noticed in the metadata that the gamut and log curve have been set to Vlog/Vgamut, and the monitoring mode is set to PQ. Now I don't know if any of that is destructive, because what I can see is that the underlying image file is still linear raw. But it does affect the way it's displayed in different software. So is it possible to eliminate some variables and record a clip similar to this (especially with the clipping due to the sun hitting the window frame), but set to the Atomos recommended settings of linear/Sigma gamut for the camera output gamma/gamut on the Ninja V, and with the monitoring mode set to native? I've queued up the relevant part in the video below. What is also not clear to me is, were you able to monitor the image in Vlog/Vgamut by choosing that on the Ninja V? In which case, why also add PQ as the monitoring mode on top of that? Having said that, my understanding is that it's not possible to monitor the Sigma fp PRR image in another camera's log format, only in some HDR formats like PQ. So maybe you can only monitor in PQ, but can save with Vlog/Vgamut embedded for when opening on your Mac.
  2. It looks like you could shoot a clip with a grid or checkerboard by walking around it a bit to get some parallax and then 3D track that in the Fusion tab with the 3D tracker node. As part of the track it will calculate a lens distortion model+values that can be plugged into the Lens Distortion node, or maybe it will auto generate a Lens Distortion node with the correct settings already. I haven't used Fusion, but it seems to mirror the process in Nuke.
  3. Thank you! This looks perfect. Like you said, it includes the sensor clipping and also there are fairly dark shadow areas. I'll have a play with this.
  4. Yeah, I don't think Sigma really thought through the colour science aspect at all. So using the settings for a completely different camera is only ever going to be a workaround. Not to say you can't get a pleasing result, but it's not a professional solution. A good counterpoint example is the Digital Bolex. When it came out, there also was not an established workflow or monitoring method for the DNGs and they ended up recommending using the BMD log/color space. However, they did eventually develop a log profile for monitoring over HDMI and they also fleshed out how to accurately interpret the DNGs in a way that was true to the camera, plus they published a white paper. That's really what Sigma needs to do for the next model, or retroactively for the fp.
  5. The Alexa is about 8 1/3 stops over middle grey, it's a monster. In linear values, where grey is 0.18, the Alexa clips at roughly 55-65. Agreed, it gets noisy quickly compared to a camera like the fp, so in my mind with the fp there's room to hack shifting the entire DR to the right, because I can live with a bit of noise as long as it looks fairly organic. I don't get why all the smaller cams are obsessed with a low noise floor at the expense of highlight DR. One of my pet theories is that the increased sensor voltage for highlights uses more power, so they focus on clean shadows instead. And yes, the monitoring is sorted. That is severely lacking on the fp.
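The arithmetic behind that clip figure can be sketched quickly (assuming middle grey at 0.18 scene-linear, as above):

```python
def clip_level(stops_over_grey, grey=0.18):
    """Scene-linear value where the sensor clips, given headroom in stops above middle grey."""
    return grey * 2 ** stops_over_grey

# ~8 1/3 stops of highlight headroom over 0.18 grey:
clip_level(8 + 1/3)  # about 58, consistent with the ~55-65 range quoted above
```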
  6. Seriously, whatever you have is good! I just want to check out the ProRes Raw linear gamma/Sigma gamut output. An outdoor scene as you describe would be interesting! By exposing for the clouds so they aren't clipped, you are probably underexposing middle grey, so it's just a variant on what I'm talking about. I'm looking at a more fixed method of shifting the entire highlight DR towards an Alexa by pushing up the noise floor, so I would shoot with a -3 stop ND, for example. It would allow me to "always" expose for middle grey as if shooting an Alexa or film, assuming a +3 stop exposure compensation in post. I would also try +4 or +5 and see where it breaks. The highlights just clip way too early on this camera, so it could be a solution.
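A minimal sketch of the ND-plus-post-gain idea: a -3 stop ND scales the light by 2^-3, and a +3 stop gain in post restores middle grey, while anything that used to clip now sits 3 stops lower on the sensor:

```python
grey = 0.18

def nd(stops):          # ND filter transmittance
    return 2.0 ** -stops

def gain(stops):        # post exposure compensation
    return 2.0 ** stops

recorded = grey * nd(3)          # 0.0225 reaches the sensor
restored = recorded * gain(3)    # back to 0.18 after +3 in post

# a highlight that used to clip now lands 3 stops lower on the sensor,
# so the effective scene-referred clip point moves up by 3 stops
```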
  7. On the subject of how this compares to an Alexa, obviously it would be closest to the LF. Purely based on image, my hot take is that the Alexa has 3.5-4 stops more DR in the highlights and better rolling shutter performance, but given the low noise of this camera, it could be underexposed to make up for that to some extent. Easily by 2 stops.
  8. What do you think of the loupe vs the side viewfinder? I think I might have a soft spot for the loupe.
  9. There's the whole issue with uploading to youtube as well, so yes it's better to look at the full quality frames to assess sensor noise. I don't even know if youtube applies a denoise before compressing the hell out of the image, but it would make sense if it did.
  10. @OleB is there any chance you can upload a 4k DCI ProRes Raw clip? Preferably one with clipped highlights shot at what you think is the best ISO. I'd be interested to try developing it in Assimilate Play Pro which, in my experience, is able to handle ProRes Raw import as well as Resolve can handle DNGs. I saw this test on youtube shot at ISO 3200 with PRR and I must say, it looks super clean which suggests that the exposure could really be pushed with an ND.
  11. Yes, if you want to stay in ACES then those settings are fine. If you want to use Arri LogC/AWG, I suggest rendering it out to something like a ProRes 4444 file. You can then bring it into a new non-ACES project, because the DNGs will have been correctly interpreted by the DNG IDT within the ACES project. If reusing it in a new ACES project (or the same one), import it with the Alexa IDT, which I think is called "Arri LogC EI800 AWG". To render: first set the Output Transform to "no output transform", then apply an ACES Transform node to the footage, with the input set to "no input transform" and the output set to "Arri LogC EI800 AWG". Also set the gamut compress type to "reference gamut compress". Then render. Note that you can create your own ACES workflow in a non-ACES project by using the ACES Transform node, but this doesn't work for importing DNGs, because the IDT is only available in an ACES project. So I do that in a dedicated ACES project as a first step.
  12. Been using SlimRaw with my Digital Bolex for some time now! Glad to hear the false color works with DNGs when the camera is 'naked' - tbh, that would be my main user application. I think someone here - perhaps you - mentioned that it reads the value directly off the sensor regardless of the picture profile set, which sounds nice.
  13. Thanks for the distinction there with the inconsistency between internal recording and ProRes RAW. I'm wondering what the value proposition of the Ninja V is if you don't need DCI 4K and can deal with the larger file sizes internally. But I'd still like to mess around with both of them if I do end up getting the camera. It does seem, though, like you've got a bit of a system there with the PQ workaround to check the highlights on the Ninja V. It's a shame Sigma have not thought things through more carefully and offered a more consistent solution without unknowns. I noticed on their blog they have guest writers offering their own ad-hoc solutions for developing the DNGs in Resolve, so it seems like they don't even really know what they're doing. That doesn't mean it's impossible, though. I was hoping that the linear image on the Ninja V without the PQ setting, set up as per the Atomos video, would display the highlights in a flat-ish image like the "none" profile, but it appears not.
  14. Nice to hear the false color and exposure indicator works well. I tend to shoot a grey card if possible, rather than white balance, especially with raw. Right, it sounds like this camera can expose well, but we don't get the full nice preview across the whole image range. Appreciate you guys fleshing out the details here. The workflow you describe is pretty much how someone would shoot with an Alexa - expose for middle grey, and the highlights fall where they do. The difference is that the Alexa has monstrous dynamic range so you don't need to worry really about highlights clipping too soon. Maybe underexposing with an ND and setting the exposure compensation would do the trick though? There is this idea I've had in the back of my head that may be stupid, but fun. There's a tiny BMD micro converter that allows adding a LUT. So it's basically an affordable LUT box: https://www.blackmagicdesign.com/products/microconverters/techspecs/W-CONU-18 I also have a 3.5" Ikan VL-35 monitor lying around, so I could see adding these two if I had a cage around the camera and it wouldn't dominate the size so much. Then a more correct viewer could be added. But it probably would not deal with the clipped highlights issue. It would just offer a closer image to the final graded image in Resolve.
  15. I unfortunately don't have the camera on hand, but I would check the internal false color display by using my Sekonic incident light meter and a grey card, then seeing whether the false color green lands on the grey card on the internal display when I set the exposure per the meter reading.
  16. Okay, if I understand you correctly then that makes sense, in that I was expecting the highlights to be clipped on the internal display (unfortunately). My read of the "none" picture mode is that it's the raw image with a Rec709 curve applied, and then anything above 1.0 is clipped because there's no rolloff - it has no image processing beyond the Rec709 adjustment. Whereas in reality, a log curve is required to keep everything normalized to the display space of 0-1. But I had surmised that, assuming a decent exposure, it would probably be accurate for checking a middle grey card across the ISO range. Is that a fair assumption?
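That reading of the "none" profile can be sketched as follows. This assumes the standard BT.709 OETF with a hard clip at 1.0 - the camera's actual processing is unknown, so treat it as an illustration of the guess above, not a description of the firmware:

```python
def none_profile(L):
    """Hypothetical 'none' mode: hard-clip scene-linear to [0, 1], then apply the BT.709 OETF."""
    L = min(max(L, 0.0), 1.0)
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

none_profile(0.18)  # middle grey lands around 0.41, so a grey-card check still works
none_profile(4.0)   # a highlight 2 stops over 1.0 reads as 1.0 - lost to clipping, no rolloff
```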
  17. So when working with just the naked camera the false color preview in the new firmware 4.0 is only accurate when set to ISO 100?
  18. For the most flexibility, you can just use an ACES project to import the DNG files to Resolve, then export to a format you're familiar with (like Alexa LogC/Alexa wide gamut) and use in a non-ACES project. Depending on how the metadata is written in the ML raw files, it should work pretty well but obviously it's good to compare how well they work within the MLV app as well.
  19. Forgot to mention that you don't want an input transform applied in the color management tab either.
  20. This is a little out of date, but that's the general idea.
  21. It's actually done under the hood, so all you have to do is import DNGs into an ACES project. If you then wanted to render out, say, Alexa LogC ProRes 4444, you would apply an ACES Transform node and keep the input transform as "no input transform" which means linear, and set the output transform to Alexa. Also make sure regardless of whether you render linear EXRs or log Alexa footage that you don't have an output display transform set in the color management tab in the project settings, then render out and use in a new Resolve project. Or if you're going to stay in ACES, then that's unnecessary.
  22. Right. The issue is being unable to see clipping like you describe while monitoring with the default profiles on the camera. It would be a good reason to consider underexposing at a fixed level with an ND, and having the ability to toggle a LUT on and off that includes an exposure boost. Again, this is where a published Sigma log curve would be useful - for monitoring. As for working with the image, just to be clear, I'm not advocating working in Rec709. Personally I would work either in linear ACES or convert the linear image to Arri LogC. The DNG IDT is the most accurate DNG raw import available in Resolve since it's generated on the fly. In the case of transforming to LogC, the inverse ACES Alexa IDT will do a pretty accurate transform from the linear image to Alexa. Arri's log color space is well documented and not confusing compared to BMD's. Basically, I could round trip the image from LogC back to linear ACES and it would match the original imported DNG (via the ACES IDT), so I know the image integrity has been maintained. My assumptions are based on sound DNG metadata, though, as this is what Resolve relies on for an accurate ACES input transform.
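The round-trip check described above can be illustrated with Arri's published Log C (v3, EI 800) transfer curve. Resolve's ACES transforms do this internally; the parameters here are from Arri's Log C white paper, and this sketch only covers the tone curve, not the AWG gamut side:

```python
import math

# ALEXA Log C (v3, EI 800) published parameters
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def linear_to_logc(x):
    """Scene-linear (0.18 = middle grey) to Log C code value."""
    return C * math.log10(A * x + B) + D if x > CUT else E * x + F

def logc_to_linear(t):
    """Inverse transform: Log C code value back to scene-linear."""
    return (10 ** ((t - D) / C) - B) / A if t > E * CUT + F else (t - F) / E

grey = 0.18
encoded = linear_to_logc(grey)            # middle grey lands at ~0.391
roundtrip = logc_to_linear(encoded)       # matches the original linear value
```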
  23. On the subject of false colors, I was making my own LUT to conform to the EL Zone standard, but I still have to get around to finishing it. Did you say the new false colors are usable on the camera itself while recording out to the Ninja?
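Not the actual EL Zone LUT, but the core mapping such a LUT builds on is exposure banded in whole stops relative to 18% grey. The band colors defined by the published standard are deliberately omitted here; `zone_index` is a hypothetical helper for illustration:

```python
import math

def stops_from_grey(linear, grey=0.18):
    """Exposure in stops relative to middle grey - the axis EL Zone bands along."""
    return math.log2(linear / grey)

def zone_index(linear):
    # nearest whole stop; a real LUT would map each index to the standard's color
    return round(stops_from_grey(linear))

zone_index(0.18)  # 0: the middle grey band
zone_index(0.72)  # +2 stops over grey
```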
  24. Gotcha. I appreciate your work translating the scaled ISO values. The inconsistency between internal and external recording does seem crazy, and I would need to try both to fully get it. My first impulse is that I would probably not apply PQ, but instead choose Native and create a custom LUT. But that is based on the assumption that the linear image as displayed by the Ninja V appears similar to the "None" profile on the camera, i.e. flat-ish but not log. In other words, they've scaled the linear image for display, because an actual linear image is super contrasty and clipped on a Rec709 monitor. I'm basing that on watching the Atomos Sigma fp setup video on youtube. The LUT approach may not be a good idea if the highlights are getting clipped in the default Ninja V display, whereas with PQ they will be fully present and rolling off according to the PQ spec. But the overall appearance of PQ would not match my intended final image, so it's a trade-off.
  25. Also, I recently got a Ninja V, so it would be interesting to try out ProRes Raw and see if a more accurate monitoring LUT can be applied, despite the fact that it somewhat negates the size advantage of the camera. It could be cool to mount the Ninja V underneath the camera instead.