


Everything posted by androidlad

  1. Free download of F-log ACES IDT DCTL for use in Resolve: https://blog.dehancer.com/articles/fuji-f-log-aces-idt-for-davinci-resolve-download-dctl/
  2. The R5/R6 both have 10-bit internal recording. BTW, the new look of the forum, especially the new font, is obnoxious.
  3. The actual resolution in pixels is 2048 x 1536, versus the current high-end EVF with 1600 x 1200.
  4. That's their 100MP X-Trans project that has since been shelved.
  5. It requires far more aggressive line-skipping to read out the full height of the sensor, which is 8736 pixels. Currently the GFX100 uses 2/3 vertical subsampling to derive a 4352-pixel Bayer readout from a 6528-pixel height, and that already saturates the 32ms maximum readout time required to achieve a 30fps video frame rate.
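A quick sanity check of the figures in that post (the 2/3 subsampling ratio, the 6528-pixel height and the 32ms budget are taken as given from the post itself):

```python
# Rough arithmetic behind the GFX100 readout claim above.
sensor_height_px = 6528                 # active height used for video
subsampled_height = sensor_height_px * 2 // 3
print(subsampled_height)                # 4352-line Bayer output, as quoted

# At 30 fps the frame period is ~33.3 ms, so a 32 ms readout leaves
# almost no headroom for a faster scan of the full 8736-pixel height.
frame_period_ms = 1000 / 30
print(round(frame_period_ms, 1))
```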
  6. Related, but not directly correlated. Dynamic range is a measure of a camera system: how far it can see into the shadows and how far it can see into the highlights. Dynamic range can be measured objectively, but even then there's a subjective component, since each viewer has their own noise tolerance threshold, which governs how much of the shadow end of the dynamic range they find actually usable. Latitude is related to dynamic range, but it is also scene dependent: it's the degree to which you can over- or underexpose a scene and still be able to bring it back.
  7. Great, this aligns nicely with what C5D did with their over/under tests. But, this is testing the latitude, not dynamic range.
  8. The Sigma fp, as well as the Panasonic S1, has no OLPF, and both are indeed very prone to moiré in real-world shooting scenarios. This is exactly the reason why the S1H has an OLPF.
  9. It's a shame the FP doesn't have an OLPF, with such a low pixel density it's very prone to moire/aliasing.
  10. ProRes HQ cannot compare to ProRes RAW for adjusting white balance or ISO: because RAW is linear and scene-referred, the results are much better than with gamma-encoded colour spaces. You can linearise gamma-encoded footage, but that adds quite a few additional steps.
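A toy illustration of the point above: applying a gain (white balance or ISO) to linear scene-referred values behaves very differently from applying it to gamma-encoded values. A pure 2.2 power gamma is assumed here for simplicity; real camera transfer functions (Log curves, Rec.709) differ in detail, but the principle is the same.

```python
# Gain in linear light vs. gain on gamma-encoded values (2.2 power
# gamma assumed; not any specific camera's transfer function).
GAMMA = 2.2

def encode(linear):   # linear light -> gamma-encoded value
    return linear ** (1 / GAMMA)

def decode(encoded):  # gamma-encoded value -> linear light
    return encoded ** GAMMA

raw_linear = 0.18     # mid-grey in linear light
gain = 2.0            # e.g. a one-stop gain on one colour channel

correct = encode(raw_linear * gain)   # gain applied in linear, then encoded
naive = encode(raw_linear) * gain     # gain applied to the encoded value

print(round(correct, 3), round(naive, 3))  # the two results differ a lot
```

This is why gamma-encoded footage has to be decoded back to linear before such adjustments, which is the "additional steps" mentioned above.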
  11. "Supports HDR in movie shooting" Anyone tested this? I wonder what it does.
  12. If this were true, there would be accompanying empirical evidence to support it. For now, it's only your subjective opinion. Also, in BRAW the 6K scored 11.8 stops and the URSA Mini G2 scored 12.1.
  13. Is it difficult to make a 0.65x speedbooster? That way full frame glass would have a true, precise full frame equivalent field of view on APS-C cameras. With the current 0.71x speedboosters there's still a crop factor of 1.09x, so a 24mm lens would have a 26mm field of view.
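The crop arithmetic behind that post, assuming Fujifilm's 1.53x APS-C crop factor (other brands use roughly 1.5-1.6x, so the exact residual crop varies slightly):

```python
# Residual crop factor after a focal reducer: sensor crop x booster ratio.
aps_c_crop = 1.53                      # assumed Fujifilm APS-C crop factor

residual_071 = aps_c_crop * 0.71       # current 0.71x speedboosters
residual_065 = aps_c_crop * 0.65       # hypothetical 0.65x speedbooster

print(round(residual_071, 2))          # ~1.09x residual crop, as quoted
print(round(24 * residual_071))        # a 24mm lens behaves like ~26mm
print(round(residual_065, 2))          # ~0.99x: essentially full frame
```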
  14. I know what you meant, but it's worded incorrectly; what you wanted to say is that it would lose SNR. Note that pure pixel binning actually increases SNR (the 2x2, 3x3 etc. you see on smartphone sensors).
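A quick Monte-Carlo sketch of why binning raises SNR: averaging four pixels with independent noise cuts the noise standard deviation by sqrt(4) = 2. Additive Gaussian read noise is assumed here for simplicity; summed photon shot noise scales the same way.

```python
# Simulate single pixels vs. 2x2-binned (averaged) pixels and compare
# the noise. The signal level and noise figures are arbitrary.
import random
import statistics

random.seed(0)
SIGNAL, NOISE_STD, TRIALS = 100.0, 10.0, 20000

def pixel():
    return SIGNAL + random.gauss(0, NOISE_STD)

single = [pixel() for _ in range(TRIALS)]
binned = [statistics.fmean(pixel() for _ in range(4)) for _ in range(TRIALS)]

ratio = statistics.stdev(single) / statistics.stdev(binned)
print(round(ratio, 1))   # close to 2.0: binned SNR roughly doubles
```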
  15. Oh yeah it's already in the hands of many influencers/industry pros, who are anxiously awaiting the NDA lift.
  16. Most cameras that output ProRes RAW at the moment are mirrorless cameras with HDMI output. Atomos developed the RAW-over-HDMI protocol, and they license it to camera manufacturers for free. For cameras that output RAW over SDI, BMD needs to develop support for each maker's RAW spec (the EVA1 outputs 10-bit log-encoded RAW, Sony CineAlta cameras output 16-bit linear RAW). The same applies to Atomos, but since Atomos owns the RAW-over-HDMI protocol and it's being widely adopted, they pretty much have full control over the RAW spec. So instead of saying BRAW is sensor specific, you could say it's brand specific.
  17. Nope. BRAW is just a codec; it has nothing to do with sensors or camera models, it only requires BMD's FPGA for the encoding. Same for ProRes RAW: Apple has licensed the encoder to Atomos and DJI, and it can encode any incoming RAW signal.
  18. The A9/A9 II use 12-parallel ADCs and stacked DRAM to achieve a 162fps full-sensor readout at 14-bit (internal speed, subject to I/O limitations). Obviously, due to power and thermal constraints, the DRAM is disabled and only 2-parallel ADCs are used in video mode.
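A back-of-envelope look at what those figures imply. The 24.2 MP pixel count is the A9's published resolution (an assumption on my part; the 162fps and 14-bit figures are from the post):

```python
# Readout time and raw internal data rate implied by the quoted numbers.
pixels = 24.2e6      # assumed A9 pixel count
fps, bits = 162, 14  # figures quoted in the post

readout_ms = 1000 / fps
gbits_per_s = pixels * bits * fps / 1e9
print(round(readout_ms, 2))    # ~6.2 ms full-sensor readout
print(round(gbits_per_s, 1))   # tens of Gbit/s of raw internal data
```

The data rate makes it obvious why the full 12-parallel-ADC + DRAM path is a power and thermal problem for continuous video.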
  19. (re: RED Komodo) They did their best to optimise the DR. For a charge-domain-based global shutter it's doing OK, but it's poor compared to conventional rolling-shutter sensors. It's positioned primarily as a high-end crash cam; only a global shutter can guarantee zero skew and zero flash banding.
  20. According to a source, X-H2 is likely to have a conventional Bayer CFA.
  21. It was a custom-made Petzval 85mm lens, used sparingly in the film, only for some portrait shots of Mulan. All other shots used Panavision Sphero 65 lenses: very vintage, but with some modern touches, and with noticeable CA. Mandy Walker simplified it a bit too much in the interview. It's not really about the CA; it's the distinctive field curvature and sharpness roll-off from centre to edge, which is another way of isolating the subject instead of just using very shallow depth of field. This is one of the shots with the Petzval 85mm lens:
  22. YouTube re-compresses all uploads into H.264. For popular channels, YouTube uses better quality compression.
  23. XT4 has a Sports Finder mode with 1.25x crop for stills, and a 1.29x crop option for 4K video.
  24. Export H.264 directly from the timeline with an x264-based plug-in (for example AfterCodecs), use the highest-quality encoding parameters, and a bitrate of 12-15Mbps for 1080p depending on the length of your deliverables. YouTube is resolution agnostic now, so do not add any bars; export the picture in its native aspect ratio/resolution. If you want to discourage people from watching on phones and smaller screens, then maybe don't upload it to YouTube at all.
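To judge how the suggested 12-15Mbps range translates into deliverable size, file size is simply bitrate times duration (audio and container overhead are ignored here for simplicity):

```python
# Rough H.264 deliverable size for the 1080p bitrates suggested above.
def size_gb(bitrate_mbps, minutes):
    """Approximate file size in GB: bits/s x seconds / 8 bits per byte."""
    return bitrate_mbps * 1e6 * minutes * 60 / 8 / 1e9

print(round(size_gb(12, 10), 2))   # 10-minute video at 12 Mbps -> ~0.9 GB
print(round(size_gb(15, 10), 2))   # 10-minute video at 15 Mbps
```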