
androidlad


Posts posted by androidlad

  1. BMD patent document on the URSA 12K sensor design:

    https://patentimages.storage.googleapis.com/88/08/39/428dbbc9e5dfca/US20190306472A1.pdf

     

    Actual CFA pattern:

    [image: diagram of the actual CFA pattern]

     

    In the document, it specifically discusses colour-aware pixel-binning to increase sensitivity and double framerate, at the expense of lower resolutions. So the "oversampling" mentioned in marketing is without doubt BS, at least for full FOV framerates higher than 60fps.

    Because if it oversampled from the full 12K readout, the oversampled modes could not exceed 60fps, which is the maximum frame rate for a full readout.

    Maybe there's a feature similar to R5's "4K HQ" mode.
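The readout-time argument above can be sketched with back-of-envelope arithmetic. The 12288x6480 photosite grid and the 60fps full-readout ceiling are assumptions based on the published specs, not figures from the patent:

```python
# If the full 6480-row readout tops out at 60 fps, the per-row readout
# time is fixed, so higher frame rates can only come from reading fewer
# rows (binning/subsampling) - not from oversampling the full frame.
FULL_ROWS = 6480                      # assumed vertical resolution
ROW_TIME = 1 / (60 * FULL_ROWS)       # seconds per row at the 60 fps ceiling

def max_fps(binning: int) -> float:
    """Max frame rate when rows are combined/binned by `binning`x."""
    rows_read = FULL_ROWS / binning
    return 1 / (rows_read * ROW_TIME)

print(max_fps(1))  # ~60 fps  -> full 12K readout
print(max_fps(2))  # ~120 fps -> 2x binned mode
print(max_fps(3))  # ~180 fps -> 3x binned mode
```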

  2. This is the CFA pattern of the URSA 12K sensor:

    [image: CFA pattern of the URSA 12K sensor]

    It uses a 6x6 block instead of the traditional 2x2: of the 36 photosites per block, 18 are RGB and 18 are white/transparent, which improves SNR a bit but reduces resolution.

    So the optimal shooting mode will be 4K (full RGB info without interpolation from 3x3 pixel-binning), 8K will be softer, 12K 1:1 will be very very soft.
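As a quick resolution check (the 12288x6480 native grid is an assumption based on the published "12K" spec), 3x3 binning of the 6x6 CFA block lands exactly on a DCI-4K-class frame:

```python
# 3x3 pixel-binning yields one full-RGB sample per 3x3 group,
# so the binned output is a third of the native resolution.
NATIVE_W, NATIVE_H = 12288, 6480   # assumed sensor photosite grid
BIN = 3

# Each 6x6 CFA block: 18 RGB + 18 white/transparent photosites
assert 6 * 6 == 18 + 18

print(NATIVE_W // BIN, NATIVE_H // BIN)  # 4096 2160 -> DCI 4K frame
```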

     

  3. 4 hours ago, Caleb Genheimer said:

    The complete non-starter issue for me on all these external RAW updates over HDMI is the fact that it’s always C4K or 16:9. What’s the heckin point of a “medium format” or “full frame” RAW camera if I can’t even use the FULL frame? 

    #anamorphicproblems

    Seriously though, I want to use the full sensor height. Anything else is just infuriating. I assume it’s some limitation of the HDMI hardware, either the camera’s output, the signal format, or the Atomos’ input, but that doesn’t make it less frustrating.

    When there’s a camera that can shoot RAW and be speedboosted into IMAX 15/70 territory, I’ll be there with bells on. Until then, my GH5S and Blackmagic Pocket 4K will tide me over just fine.

    It requires far more aggressive line-skipping to read out the full height of the sensor, which is 8736 pixels.

    Currently the GFX100 uses 2/3 vertical subsampling to derive a 4352-pixel-tall Bayer image from the 6528-pixel sensor height, and that already saturates the 32ms maximum readout time required to achieve a 30fps video frame rate.
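A rough sketch of why full-height readout breaks the 30fps budget, using only the figures above (per-row readout time is assumed constant):

```python
# 4352 rows (a 2/3 subsample of 6528) already consume the full 32 ms
# budget at 30 fps, which fixes the per-row readout time.
ROWS_CURRENT = 4352
READOUT_MS = 32.0
row_time_ms = READOUT_MS / ROWS_CURRENT

# A 2/3 subsample of the full 8736-pixel height would need 5824 rows:
rows_full_height = 8736 * 2 // 3
full_height_ms = rows_full_height * row_time_ms

print(full_height_ms)  # ~42.8 ms per frame -> only about 23 fps max
```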

  4. 1 hour ago, thebrothersthre3 said:

    Don't dynamic range and latitude directly correlate to each other tho? A sensor with more dynamic range will have more latitude and vice versa?

    Related, but not really directly correlated.

    Dynamic range is a measure of a camera system - how far it can see into the shadows and how far it can see into the highlights. Dynamic range can be measured objectively, but even then there's a subjective component as each and every viewer will have their own noise tolerance threshold. This governs how much of the shadow part of the dynamic range they find actually usable.

    Latitude is related to dynamic range, but it is also scene dependent. Latitude is the degree to which you can over or under expose a scene and be able to bring it back to a usable exposure value after recording. It is dependent upon dynamic range, which is going to set the overall boundary of by how much you can over and under expose, but it's limited by the scene too, and how bright and dark the scene itself goes.

    Say a scene has a brightness range of 5 stops (a typical Macbeth chart, for instance), and let's use a camera that has a 12 stop dynamic range. If we place the scene in the middle of that camera's recordable range, we have 7 stops to play with, so we could over or under expose by 3.5 stops and still recover the scene.

    But if the scene was a real world scene of an actor against a sunlit window, with a brightness range of 15 stops, you don't have any latitude at all - no matter how you expose that scene you're going to lose shadow or highlight information.

    So yes, latitude and dynamic range are related, but different. Latitude can't really be used to infer how much total dynamic range a camera has.
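The two examples above reduce to a one-line formula, assuming the scene is placed symmetrically in the middle of the camera's recordable range:

```python
def latitude_stops(camera_dr: float, scene_range: float) -> float:
    """Stops of over/under-exposure headroom when the scene is centred
    in the camera's recordable range; zero once the scene exceeds it."""
    return max((camera_dr - scene_range) / 2.0, 0.0)

print(latitude_stops(12, 5))   # 3.5 -> Macbeth chart in a 12-stop camera
print(latitude_stops(12, 15))  # 0.0 -> 15-stop sunlit-window scene
```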

  5. 1 hour ago, Andrew Reid said:

    I'd have nothing against it having an OLPF but...

    6K is low pixel density?

    No OLPF is the modern standard on almost all cameras in 2020.

    Moire and aliasing are trumped up today by chart tests.

    The real world situation is what matters more.

    The 5D Mark II was an example of where moire issue impacted real world shooting regularly. Even the surface of rivers and water had moire.

    Compare the Fp to that and you'll see the moire issues are virtually non-existent outside of pixel peeping.

    The AF is better on the Fp than the Blackmagic Pocket 4K & 6K but you don't see Pocket users complaining about AF! Why is this?

    The Sigma fp, as well as the Panasonic S1, which also has no OLPF, are indeed very prone to moiré in real-world shooting scenarios:

    [image: a still from the timeline showing moiré artifacts]

    This is exactly the reason why S1H has OLPF.

  6. ProRes HQ cannot compare to ProRes RAW for adjusting white balance or ISO: RAW is linear and scene-referred, so those adjustments produce much better results than they do in gamma-encoded colour spaces. You can linearise gamma-encoded footage, but it adds quite a few additional steps.
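A minimal sketch of why linear data matters, using a generic power-law gamma of 2.4 as a stand-in for a real transfer function (the values and gain are illustrative, not from any camera):

```python
import numpy as np

GAMMA = 2.4  # generic power-law stand-in for a real transfer curve

def encode(x):
    return np.clip(x, 0.0, 1.0) ** (1.0 / GAMMA)

linear = np.array([0.05, 0.18, 0.50])  # scene-referred linear values
wb_gain = 1.5                          # hypothetical white-balance gain

# RAW workflow: apply the gain in linear light, then encode
correct = encode(linear * wb_gain)

# Baked-in workflow: apply the same gain to already-encoded values
naive = encode(linear) * wb_gain

print(np.round(correct, 3))
print(np.round(naive, 3))  # diverges, and can even exceed legal range
```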

  7. 16 minutes ago, TheBoogieKnight said:

    Hmm maybe I'm confused. As far as I saw it, if it's 2X skipped and binned, 1/4 of the light is reaching the same sensor area (fewer photo sites are used) compared to oversampling from the full sensor. I realise that each individual pixel is getting the same amount of light with skipping/binning, but you're losing the oversampling which would be taking 4 pixels and combining them into one, effectively giving two stops lower noise.

     

    I can get that exact same lower noise benefit by shooting with a 2 stop wider aperture. If I do this with skipping/binning, I'd reduce my DOF 2 stops. If I do this with a crop, I have to step back (or use a wider focal length) which means I get that DOF back to where it was.

     

    Of course oversampling has other advantages, but you're going to lose them with both binning/skipping and a crop.

    I know what you meant, but it's worded incorrectly - what you wanted to say is that it would lose SNR.

    Note that pure pixel-binning actually increases SNR (2x2, 3x3 etc. you see on smartphone sensors).
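A quick Monte Carlo sketch of that SNR claim, assuming a simple Gaussian noise model and averaging-style binning:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sigma = 100.0, 10.0

# Simulate a flat patch of noisy photosites
pixels = signal + rng.normal(0.0, noise_sigma, size=(1000, 1000))

def snr(img):
    return img.mean() / img.std()

# 2x2 binning: average each 2x2 group of photosites into one output pixel
binned = pixels.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(snr(pixels))  # ~10
print(snr(binned))  # ~20 -> combining 4 pixels doubles SNR (sqrt(4))
```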

  8. 59 minutes ago, TheBoogieKnight said:

    I don't think anyone has even touched one yet. Well maybe the peeps at Canon....

    Oh yeah it's already in the hands of many influencers/industry pros, who are anxiously awaiting the NDA lift.

  9. 48 minutes ago, Anaconda_ said:

    But then why can't you take a camera that could record external ProRes raw, plug it into the VA and record Braw? If you can, then why is their EVA1 support even a thing? If that's correct, by default BMD also support any other camera that can output raw over HDMI.

    I'm under the impression that the VA can't wrap any raw signal into Braw. BMD need to know the sensor data for their partial de-mosaic stuff. With that said, I still feel that Braw is sensor specific.

    Of course, please correct anything that's wrong here. Would love to understand it better.

    Most cameras that output ProRes RAW at the moment are mirrorless cameras with HDMI output. Atomos developed the RAW-over-HDMI protocol, which they license to camera manufacturers for free.

    For cameras that output RAW over SDI, BMD need to develop support for each manufacturer's RAW spec (the EVA1 outputs 10bit log-encoded RAW, Sony CineAlta cameras output 16bit linear RAW). The same applies to Atomos, but since Atomos owns the RAW-over-HDMI protocol and it's being widely adopted, they pretty much have full control over that RAW spec.

    So instead of saying BRAW is sensor specific, you can say it's brand specific.

  10. 12 hours ago, Anaconda_ said:

    The codec is built individually for each sensor, so Braw for the Pocket 4k will be different to Braw for the Pocket 6k and every other camera that currently records it.

    Nope.

    BRAW is just a codec, it has nothing to do with sensors or camera models. It requires BMD's FPGA for the encoding.

    Same for ProRes RAW, Apple has licensed the encoder to Atomos and DJI, it can encode any incoming RAW signal.

  11. 30 minutes ago, Alt Shoo said:

    They addressed the issue with the lower DR and global shutter. 

    They did their best to optimise the DR. For a charge-domain global shutter it's doing OK, but it's still poor compared to conventional rolling-shutter sensors.

    It's positioned primarily as a high-end crash cam, only global shutter can guarantee zero skew and zero flash banding.
