
joema

Members
  • Content Count

    153
  • Joined

  • Last visited

About joema

  • Rank
    Active member

Profile Information

  • Gender
    Male
  • Interests
    Digital video, photography, documentary production

Recent Profile Visitors

3,117 profile views
  1. The R5 has zebras, but does it have a waveform monitor?
  2. Excellent point, and that would be the other possible pathway to pursue. It avoids the mechanical and form-factor issues. Maybe someone knowledgeable about sensor development could comment: given a high development priority, what are the practical limits on low ISO? Of course, ISO adjustments are mostly digital, so going far below the native ISO would hurt image quality. In theory you could have a triple-native-ISO design, with one super-low ISO to satisfy the "pseudo ND" requirement and the other two native ISOs covering the normal range. But that would likely require going six stops below the normal range (a back-of-envelope sketch appears after this list) and perhaps a separate chain of analog amplifiers per pixel. You wouldn't want any hitch in image quality as it rises from (say) ISO 4 or 8 up to normal levels. It just sounds difficult and expensive.
  3. See attached for Sony's internal eND mechanism. You are right that there are various approaches, just none really good that fits a typical large-sensor, small-body mirrorless camera. This is a very important area, but it's a significant development and manufacturing cost whose benefit (from the customer's standpoint) is isolated mostly to upper-end, hard-core video-centric cameras like the S1H or A7SIII. It's not impossible, just really hard. Maybe someone will eventually do it. The Dave Dugdale drawing (which I can't find) was more like a reflex mirror than a vertical/horizontal slide. It may have required similar volume to a mirror box, which would be costly. For a typical short-flange mirrorless design, I can't see physically where the retracted eND element would fit.
  4. I think the Ricoh GR is limited to 2 stops and the Fuji X100 is limited to 3 stops. This is likely because there's no physical space in the camera to move the ND element out of the light path when disabled, due to the large sensor size and small camera body. I don't know the specs, but a 2-3 stop variable ND can probably get closer to 0 stops of attenuation when disabled. A more practical 6-stop variable ND can't go to zero attenuation and will have at least 1 stop on the low end, and nobody wants to give away a stop in low-light conditions. A boxy camera like the FS5 has a flat front which allows an internal mechanism to vertically slide the variable ND out of the light path when disabled. A typical mirrorless camera doesn't have this space. I agree an internal electronic variable ND would be a highly valuable feature, and Sony has the technology. If the A7SIII's features, price, and video orientation are similar to the S1H's, maybe they could somehow do it. Some time ago Dave Dugdale drew a rough cutaway diagram of a theoretical large-sensor mirrorless camera which he thought could house a variable ND, with an in-camera mechanism to move it out of the light path. I can't remember which video that was in.
  5. On the post-production side, the problem I see is poor or inconsistent NLE performance on the compressed codecs. The 400 Mbps HEVC from Fuji is a good example: on a 10-core Vega 64 iMac Pro running FCPX 10.4.8 or Premiere 14.3.0 it is almost impossible to edit. Likewise Sony's XAVC-S and XAVC-L, and Panasonic's 10-bit 400 Mbps 4:2:2 All-I H264. Resolve Studio 16.2.3 is a bit better on some of those, but even it struggles. Of course you can transcode to ProRes (a minimal batch-transcode sketch appears after this list), but then why not just use ProRes acquisition via Atomos? The problem is there are many different flavors of HEVC and H264, and the currently available hardware accelerators (Quick Sync, NVENC/NVDEC, UVD/VCE) exist in many different versions, each with unique limitations. On the acquisition side it's nice to have a high-quality Long GOP or compressed All-I codec: it fits on a little card, the data offload rate is high due to compression, archiving is easy due to smaller file sizes, etc. But it eventually must be edited, and that's the problem.
  6. I hope that is not a consumerish focus on something like 8k, or yet another proprietary raw format, at the cost of actual real-world features wanted by videographers. I'd like to see 10-bit or 12-bit ProRes acquisition from day one (if only to a Ninja V), improved IBIS, and simultaneous dual-gain capture like on the C300 Mark III (a conceptual sketch appears after this list): https://www.newsshooter.com/2020/06/05/canons-dual-gain-output-sensor-explained/ Obviously internal ND would be great, but I just can't see mirrorless manufacturers doing that due to cost and space issues.
  7. All good points. Maybe the lack of ProRes on Japanese *mirrorless* cameras is a storage issue coupled with a lack of design priority on higher-end video features. They have little SD-type cards which cannot hold enough data, especially at the high rates needed (rough data-rate math appears after this list). I think you'd need UHS-II cards, which are relatively small and expensive. The BMPCC4K sends data out over USB-C, so a Samsung T5 can record it at up to 500 MB/sec. The little mirrorless cameras could theoretically do that, but as a class they just aren't as video-centric. The S1H has USB-C data output and is video-centric, but it doesn't do ProRes encoding. Why not? That also doesn't explain why larger higher-end cameras like the Canon C-series, Sony FS-series, and EVA-1 don't have a ProRes option. Maybe it's because their data processing is based on ASICs and they don't have the general-purpose CPU horsepower to encode ProRes. I think Blackmagic cameras all use FPGAs, which burn a lot more power but can be field-programmed for almost anything; in fact that's how they added BRAW. But the DJI Inspire 2's X5S camera has a ProRes option, so I can't explain that. For a little mirrorless camera it's not that big a deal: those can do external ProRes recording via HDMI to an Atomos. Even given in-camera ProRes encoding, they'd likely need a USB-C-connected SSD to store it. Many people would use at least an external 5" monitor, which means you'd have two cable-connected external devices; a Ninja V is both a monitor and an external ProRes recorder in one device. So in hindsight it seems the only user group benefiting from internal ProRes on a small camera would be those not using an external monitor, and they'd likely need external SSD storage due to the data rate of 4k ProRes.
  8. That might be a grey area. The RED patent describes the recorder as either internal to the camera or physically attached. Maybe you could argue the hypothetical future iPhone is not recording compressed raw video and there is no recorder physically inside or attached outside (as described in the patent); rather, it's sending the data via 5G wireless to somewhere else. Theoretically you could send it to a beige-box file server with a 5G card, concurrently ingesting many diverse data streams from various clients. Next year you could probably send that unrecorded raw data stream halfway across the planet using SpaceX Starlink satellites. However, it seems more likely Apple will not use that approach, but rather attack the "broad patent" issue in a better-prepared manner.
  9. That seems to be the case. Another issue: despite the hardware-oriented camera/device aspect, it would seem the RED patent is more akin to a software patent. You can patent a highly specific software algorithmic implementation such as HEVC, but not the broad concept of high-efficiency Long GOP compression. E.g., the HEVC patents do not preclude Google from developing the functionally similar AV1 codec. However the RED patent seems to preclude any non-licensed use of the broad, fundamental concept of raw video compression, at least in regard to a camera and recorder. Hypothetically it would cover a future iPhone recording ProRes RAW, even if streaming it for recording via a tethered 5G wireless link to a computer. In RF telecommunications there are now "software-defined radios" where the entire signal path is implemented in general-purpose software. Similarly, we are starting to see the term "software-defined camera". It would seem RED would want their patent enforced whether the camera internally used discrete chips or a general-purpose CPU fast enough to execute the entire signal chain and data path. If the RED patent can be interpreted as a software-type patent, this might be affected by recent legal rulings on software patents such as Alice Corp. v. CLS Bank: https://en.wikipedia.org/wiki/Alice_Corp._v._CLS_Bank_International
  10. Apparently correct. While the RED patent has sometimes been described as internal-only, the patent clearly states: "In some embodiments, the storage device can be mounted on an exterior of the housing...the storage device can be connected to the housing with a flexible cable, thus allowing the storage device to be moved somewhat independently from the housing". The confusion may arise because most implementations of compressed raw recording use external storage devices, hence it might appear these are evading the patent. But it's possible external recorders merely isolate the license fee and associated hardware to that additional (optional) device so as not to burden the camera itself, when a significant percentage of purchasers won't use raw recording. Edit/add: Blackmagic RAW avoids this since their cameras partially debayer the data before internal recording, allegedly to facilitate downstream NLE performance. BRAW is now supported externally on a few non-Blackmagic cameras, but presumably licensing is handled by Atomos or whoever makes the recorder.
  11. The XT3 can use the H264 or H265 video codecs, plus it can do H264 "All Intra" (IOW no interframe compression), which might be easier to edit, but the bitrate is higher. The key for all those except maybe All Intra is that you need hardware-accelerated decode/encode, plus editing software that supports it (a quick way to check is sketched after this list). The most common and widely adopted version is Intel's Quick Sync; AMD CPUs do not have that. Premiere Pro started supporting Quick Sync relatively recently, so if you have an updated subscription that should help. Normal GPU acceleration doesn't help for this due to the sequential nature of the compression algorithm: it cannot be meaningfully parallelized to harness hundreds of lightweight GPU threads. In theory both Nvidia and AMD GPUs have separate fixed-function video acceleration hardware similar to Quick Sync, bundled on the same die but functionally totally separate. However each has gone through many versions, and each requires its own software framework for developers to harness it. For these reasons Quick Sync is much more widely used. The i7-2600 (Sandy Bridge) has Quick Sync, but that was the first version and I'm not sure how well it worked; starting with Kaby Lake it was greatly improved from a performance standpoint. In general, editing a 4k H264 or H265 acquisition codec is very CPU-bound due to the compute-intensive decode/encode operations. The I/O rate is not that high, e.g., 200 Mbps is only 25 megabytes per second. As previously stated you can transcode to proxies, but that is a separate (possibly time-consuming) step.
  12. Thanks for posting that. It appears that file is UHD 4k/25 10-bit 4:2:2, encoded by Resolve using Avid's DNxHR HQX codec in a QuickTime container. I see the smearing effect on movement; was this also in the original camera file? The filename states a 180-degree shutter, which at 25 fps works out to 1/50th (the arithmetic is sketched after this list). Can you switch the camera to another display mode and verify it is 1/50th?
  13. I have shot lots of field documentary material and I basically agree. We use Ursas, rigged Sony Alphas, the DVX200, a rigged BMPCC4K, etc. I am disappointed the video-centric S1H does not allow punch-in to check focus while recording; the BMPCC4K and even my old A7R2 did that. An external EVF or monitor/recorder can provide that on the S1H, but if the goal is retaining a highly functional minimal configuration, the lack of focus punch-in while recording is unfortunate. Panasonic's Matt Frazer said this was a limitation of their current imaging pipeline and would likely not be fixable via firmware. The Blackmagic battery grip allows the BMPCC6K to run for 2 hrs. If Blackmagic produced a BMPCC6K "Pro" version which had IBIS, a brighter tilting screen, waveform, and maybe 4k 12-bit ProRes or 6k ProRes, priced at $4k, that would be compelling.
  14. I worked on a collaborative team editing a large documentary consisting of 8,500 4k H264 clips, 220 camera hours, and 20 terabytes, including about 120 multi-camera interviews. The final product was 22 min. In this case we used FCPX, which has extensive database features such as range-based (vs. clip-based) keywording and rating (a sketch of that idea appears after this list). Before touching a timeline, there was a heavy organizational phase where a consistent keyword dictionary and rating criteria were devised and proxy-only media were distributed among several geographically distributed assistant editors. All multicam material and content with external audio was first synchronized. FCPX was used to apply range-based keywords and ratings; the ratings included rejecting all unusable or low-quality material, which FCPX thereafter suppresses from display. We used XML files and the 3rd-party utility MergeX to interchange library metadata for the assigned media: http://www.merge.software By these methods, before timeline editing started the material was culled down to a more manageable size, with all content organized by a consistent keyword system. The material was shot at 12 different locations over two years, so it was crucial to thoroughly organize the content before starting the timeline editing phase. Once the timeline phase began, a preliminary brief demo version was produced to evaluate overall concept and feel. This worked out well, and the final version was a fleshed-out version of that demo. It is true that in documentary, the true story is often discovered during the editorial process. However, during preproduction planning there should be some idea of possible story directions, otherwise you can't shoot for proper coverage and the production phase is inefficient. Before using FCPX I edited large documentaries using Premiere Pro CS6 and used an Excel spreadsheet to keep track of clips and metadata. Editor Walter Murch has described using a FileMaker Pro database for this purpose. There are 3rd-party media asset managers such as CatDV: http://www.squarebox.com/products/desktop/ and KeyFlow Pro: http://www.keyflowpro.com Kyno is a simpler screening and media management app which you could use as a front end, especially for NLEs that don't have good built-in organizing features: https://lesspain.software/kyno/ However, it is not always necessary to use spreadsheets, databases, or other tools. In the above-mentioned video about the "Process of a Pro Editor", he just uses Avid's bin system and a bunch of small timelines. That was an excellent video; thanks to BTM_Pix for posting it.
  15. In the mirrorless ILC form factor at APS-C size and above, it is a difficult technical and price problem. Technical: no ND can go to zero attenuation while in place, so to avoid losing a stop in low light, the ND must mechanically retract, slide, or rotate out of the optical path. This is easier with small sensors, since the optical path is smaller and hence the mechanism is smaller. A box-shaped camcorder has space for a large, even APS-C-size ND to slide in and out of the optical path; on the FS7 it slides vertically: https://photos.smugmug.com/photos/i-k7Zrm9Z/0/135323b7/L/i-k7Zrm9Z-L.jpg There is no place for such machinery in the typical mirrorless ILC camera. That said, in theory if a DSLR were modified for mirrorless operation, the space occupied by the former pentaprism might contain a variable ND mechanism; I think Dave Dugdale did a video speculating on this. Economic: hybrid cameras are intended for both still and video use, and are typically tilted toward stills. A high-quality, large-diameter internal variable ND would be expensive, yet mostly the video users would benefit. In theory the manufacturer could make two different versions, one with ND and one without, but that would reduce economy of scale in manufacturing. Yet another option is an OEM-designed variable ND "throttle" adapter. It would eliminate the ND-per-lens issue and avoid the problem of the lens hood not fitting over a screw-in ND filter. But this still has the issue of requiring removal every time you switch to high-ISO shooting.
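Addendum to post 2: a back-of-envelope sketch of the "pseudo ND" ISO math. The base ISO of 640 is an assumed example, not any particular camera's spec; the point is only that each stop of simulated ND halves the effective exposure index.

```python
# Each stop of "pseudo ND" is equivalent to halving the effective ISO.
# BASE_ISO is an assumed example value, not a real camera spec.
BASE_ISO = 640

for nd_stops in range(7):
    equivalent_iso = BASE_ISO / 2 ** nd_stops
    print(f"{nd_stops}-stop pseudo ND -> effective ISO {equivalent_iso:g}")

# Last line prints: 6-stop pseudo ND -> effective ISO 10
# i.e. roughly the ISO 4-10 territory discussed above, far below any
# current large-sensor camera's native range.
```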
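Addendum to post 5: a minimal batch-transcode sketch, assuming ffmpeg is installed and using its software ProRes encoder (prores_ks). The folder names and the 1920-wide proxy size are arbitrary examples, not a recommended workflow.

```python
# Batch-transcode camera HEVC/H264 files to ProRes 422 Proxy with ffmpeg.
# Folder names and proxy size are placeholder examples.
import subprocess
from pathlib import Path

SRC = Path("camera_originals")   # hypothetical source folder
DST = Path("proxies")
DST.mkdir(exist_ok=True)

for clip in SRC.glob("*.mov"):
    out = DST / (clip.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes 422 Proxy
        "-vf", "scale=1920:-2",                   # downscale, keep even height
        "-c:a", "pcm_s16le",                      # uncompressed audio
        str(out),
    ], check=True)
```

The tradeoff is exactly as described in the post: the proxies edit smoothly, but the transcode itself is a slow, CPU-bound extra step.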
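Addendum to post 6: a conceptual toy of the dual-gain idea only; this is not Canon's actual DGO algorithm, and all the numbers are invented. The same exposure is read out at low gain (protecting highlights) and high gain (cleaner shadows), then merged per pixel.

```python
# Toy dual-gain merge: NOT Canon's DGO algorithm, just the basic concept
# with invented numbers.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, size=(4, 4))  # "true" linear luminance
FULL_WELL = 4.0                             # readout clips at this level
GAIN_HI = 4.0                               # high-gain amplification

def read_noise():
    return rng.normal(0.0, 0.02, scene.shape)   # noise added at readout

low = np.clip(scene + read_noise(), 0, FULL_WELL)                       # low gain
high = np.clip(scene * GAIN_HI + read_noise(), 0, FULL_WELL) / GAIN_HI  # high gain

# The high-gain path has ~1/4 the input-referred read noise, but clips a
# quarter of the way up the range; fall back to the low-gain sample there.
merged = np.where(high < FULL_WELL / GAIN_HI * 0.99, high, low)
print(merged.round(3))
```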
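Addendum to post 7: rough data-rate math behind the storage argument. The ProRes figure is scaled up from the well-known ~220 Mbps target for 1080p29.97 ProRes 422 HQ; the sustained-write speeds for the media are ballpark assumptions, not measurements.

```python
# Rough storage math: can common media sustain UHD ProRes 422 HQ?
MBIT = 1_000_000

prores_hq_uhd_bps = 220 * MBIT * 4          # ~220 Mbps for 1080p, ~4x for UHD
mb_per_sec = prores_hq_uhd_bps / 8 / MBIT   # ~110 MB/s
gb_per_hour = mb_per_sec * 3600 / 1000      # ~396 GB/hour

print(f"UHD ProRes HQ ~ {mb_per_sec:.0f} MB/s, {gb_per_hour:.0f} GB/hour")

# Ballpark sustained-write speeds (assumed, not measured):
for name, write_mb_s in [("UHS-I SD", 80), ("UHS-II SD", 250), ("Samsung T5 SSD", 500)]:
    verdict = "can sustain it" if write_mb_s > mb_per_sec else "too slow"
    print(f"{name}: ~{write_mb_s} MB/s -> {verdict}")
```

Which matches the argument in the post: UHS-I SD cards are out, UHS-II works but is small and expensive per gigabyte, and a USB-C SSD like the T5 has headroom to spare.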
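Addendum to post 11: a quick sketch for checking which fixed-function decode paths an ffmpeg build exposes, then timing a decode-only run with and without Quick Sync. "clip.mov" is a placeholder for one of your own camera files, and the qsv run will simply fail on machines without an Intel iGPU.

```python
# List ffmpeg's available hardware decode methods, then benchmark a
# decode-only pass of one clip in software vs. Quick Sync (qsv).
import subprocess

subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"], check=True)

for label, hw_args in [("software", []), ("quick sync", ["-hwaccel", "qsv"])]:
    print(f"--- decode test: {label} ---")
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-benchmark", *hw_args,
         "-i", "clip.mov", "-f", "null", "-"],   # decode and discard frames
        check=False,  # tolerate failure where qsv isn't available
    )
```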
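Addendum to post 12: the shutter-angle arithmetic mentioned above, just to show where 1/50th comes from.

```python
# Shutter angle to exposure time: t = (angle / 360) / fps
def shutter_time(angle_deg: float, fps: float) -> float:
    return (angle_deg / 360.0) / fps

t = shutter_time(180, 25)
print(f"180 degrees at 25 fps = 1/{1 / t:.0f} s")   # -> 1/50 s
```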
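Addendum to post 14: a minimal sketch of what range-based (vs. clip-based) keywording means as a data structure. All clip names, timecodes, and keywords are invented examples; this only illustrates the idea of tagging and rating ranges within a clip, with rejected ranges suppressed from search.

```python
# Range-based keywording: keywords and ratings attach to time ranges
# within a clip, not to the whole clip. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class Range:
    start_s: float
    end_s: float
    keywords: set
    rating: str = "unrated"   # "favorite" | "rejected" | "unrated"

@dataclass
class Clip:
    name: str
    ranges: list = field(default_factory=list)

library = [
    Clip("interview_cam_A_001", [
        Range(0, 95, {"interview", "dr_smith"}, "favorite"),
        Range(95, 140, {"interview", "dr_smith", "off_topic"}, "rejected"),
        Range(140, 300, {"interview", "dr_smith", "origin_story"}),
    ]),
]

def find(library, keyword):
    # Mimic the NLE suppressing rejected material from search results.
    for clip in library:
        for r in clip.ranges:
            if keyword in r.keywords and r.rating != "rejected":
                yield clip.name, r.start_s, r.end_s

for hit in find(library, "dr_smith"):
    print(hit)
```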