joema

Members
  • Content Count

    140
  • Joined

  • Last visited

About joema

  • Rank
    Member

Profile Information

  • Gender
    Male
  • Interests
    Digital video, photography, documentary production

  1. I worked on a collaborative team editing a large documentary: 8,500 4K H264 clips, 220 camera hours, and 20 terabytes, including about 120 multi-camera interviews. The final product was 22 minutes. In this case we used FCPX, which has extensive database features such as range-based (vs. clip-based) keywording and rating.

    Before touching a timeline, there was a heavy organizational phase in which a consistent keyword dictionary and rating criteria were devised and proxy-only media was distributed among several geographically dispersed assistant editors. All multicam material and content with external audio was synchronized first. FCPX was used to apply range-based keywords and ratings; the ratings included rejecting all unusable or low-quality material, which FCPX thereafter suppresses from display. We used XML files and the 3rd-party utility MergeX to interchange library metadata for the assigned media: http://www.merge.software (see the sketch after this post for one way to inspect that metadata).

    By these methods, before timeline editing started, the material was culled down to a more manageable size with all content organized under a consistent keyword system. The material was shot at 12 different locations over two years, so it was crucial to thoroughly organize the content before starting the timeline editing phase.

    Once the timeline phase began, a brief preliminary demo version was produced to evaluate the overall concept and feel. This worked out well, and the final cut was a more fleshed-out version of that demo. It is true that in documentary, the real story is often discovered during the editorial process. However, during preproduction planning there should be some idea of possible story directions, otherwise you can't shoot for proper coverage and the production phase is inefficient.

    Before using FCPX I edited large documentaries using Premiere Pro CS6 and used an Excel spreadsheet to keep track of clips and metadata. Editor Walter Murch has described using a FileMaker Pro database for this purpose. There are 3rd-party media asset managers such as CatDV (http://www.squarebox.com/products/desktop/) and KeyFlow Pro (http://www.keyflowpro.com). Kyno is a simpler screening and media management app which you could use as a front end, especially for NLEs that don't have good built-in organizing features: https://lesspain.software/kyno/

    However, it is not always necessary to use spreadsheets, databases or other tools. In the above-mentioned video about "Process of a Pro Editor", he just uses Avid's bin system and a bunch of small timelines. That was an excellent video; thanks to BTM_Pix for posting it.
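As an aside, FCPX library metadata exported as FCPXML is plain XML, so it can be inspected or tallied outside the NLE. Below is a minimal sketch, assuming keywords appear as <keyword> elements with a "value" attribute (element and attribute names vary across FCPXML versions, so verify against your own export); it is only an illustration, not part of the workflow described above.

```python
# Minimal sketch: tally range-based keywords from an exported FCPXML file.
# Assumes keywords appear as <keyword value="..." .../> elements; check your
# own FCPXML export, since attribute names differ between FCPXML versions.
import sys
import xml.etree.ElementTree as ET
from collections import Counter

def keyword_counts(fcpxml_path):
    tree = ET.parse(fcpxml_path)
    counts = Counter()
    for kw in tree.iter("keyword"):
        value = kw.get("value")
        if value:
            # A single element may carry comma-separated keywords.
            for term in value.split(","):
                counts[term.strip()] += 1
    return counts

if __name__ == "__main__":
    for term, n in keyword_counts(sys.argv[1]).most_common():
        print(f"{n:5d}  {term}")
```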
  2. In the mirrorless ILC form factor at APS-C size and above, it is a difficult technical and price problem.

    Technical: no ND can go to zero attenuation while in place. To avoid losing a stop in low light, the ND must mechanically retract, slide or rotate out of the optical path. This is easier with small sensors since the optical path is smaller, so the mechanism is smaller. A box-shaped camcorder has space for a large, even APS-C-size, ND to slide in and out of the optical path. On the FS7 it slides vertically: https://photos.smugmug.com/photos/i-k7Zrm9Z/0/135323b7/L/i-k7Zrm9Z-L.jpg There is no place for such machinery in the typical mirrorless ILC. That said, in theory if a DSLR were modified for mirrorless operation, the space occupied by the former pentaprism might contain a variable ND mechanism. I think Dave Dugdale did a video speculating on this.

    Economic: hybrid cameras are intended for both still and video use, and are typically tilted toward stills. A high-quality, large-diameter internal variable ND would be expensive, yet mostly the video users would benefit. In theory the manufacturer could make two different versions, one with ND and one without, but that would reduce economy of scale in manufacturing.

    Yet another option is an OEM-designed variable ND "throttle" adapter. It would eliminate the ND-per-lens issue and avoid problems with the lens hood fitting over a screw-in ND filter. But it still has the drawback of requiring removal every time you switch to high-ISO shooting.
  3. Given a finite development budget, a lower-end 8K camera will have to trade off other features. Compare this 8K "affordable" prosumer camera to, say, a revised 4K Sony FS5 II, and assume they are about the same price for a "ready to use" configuration.

    I think it's reasonable to expect the FS5 II may have a greatly improved EVF and LCD, improved high ISO, sensor-based stabilization, an internal 4K 10-bit 4:2:2 XAVC-L codec, maybe internal 4K at 60 fps (or above), maybe even optional internal ProRes recording. It already has electronic variable ND. Let's say the 8K RED/Foxconn camera does nice-looking 8K and has good dynamic range, but isn't great at low light, has no sensor-based stabilization, and doesn't have built-in variable ND. These are just guesses, but I think they are legitimate possibilities. As a documentary filmmaker, I know which one I'd rather use.

    What about needing 8K for "professional" work? In 2015 the Oscar for best documentary went to CITIZENFOUR by Laura Poitras. It was shot in 1080p on a Sony FS100.
  4. As a documentary editor with 200 terabytes of archival 4K H264 material, I've extensively tested the 12-core D700 nMP vs a top-spec 2017 iMac 27. In FCPX the iMac is about 2x faster at importing and creating ProRes proxies, and 2x faster at exporting to 4K or 1080p H264. There is a major and generally unreported performance increase between the 2015 and 2017 iMac 27 on H264 material in FCPX. I don't know why the 2017 model is so much faster; maybe it's the Kaby Lake Quick Sync. This is mostly 4K 8-bit 4:2:0 material; I haven't tested 10-bit 4:2:2 or HEVC. Obviously performance in Premiere or Resolve may be different.

    On ProRes it's a different story. If you do ProRes acquisition and have an end-to-end ProRes workflow, the nMP is pretty fast, at least in my tests. However, if you acquire H264 then transcode to ProRes for editing, the 12-core D700 nMP transcodes only half as fast as the 2017 iMac (using FCPX). That said, the nMP is very quiet, whereas the iMac fans spin up under sustained load. The nMP has lots of ports and multiple Thunderbolt 2 controllers feeding those ports. However, I have multiple 32TB Thunderbolt 2 arrays simultaneously attached to my 2017 iMac and they work OK. Regarding acoustic noise, those spinning Thunderbolt RAID arrays also make noise, so the iMac fan noise under load is just one more thing. But I can understand people who don't like the noise.

    What about compute-intensive plugins such as Neat Video, Imagenomic Portraiture and Digital Anarchy Flicker Free? I tested all of those and the nMP wasn't much faster than the 2017 iMac. Based on this, if you're using FCPX I'd definitely recommend the 2017 iMac over even a good deal on a 12-core nMP -- unless you have an all-ProRes workflow. Resolve and Premiere are each unique workloads, even when processing the same material; they must be evaluated separately, and performance results in one NLE don't necessarily apply to another.

    I've also done preliminary FCPX performance testing on both 8-core and 10-core iMac Pros. The iMac Pro is much faster than the nMP, especially on H264, because FCPX is apparently calling the UVD/VCE transcoding hardware on the Vega GPU. However, even the 10-core Vega 64 iMac Pro isn't vastly faster on H264 than the top-spec 2017 iMac. I'm still testing, but on complex real-world H264 timelines with lots of edits and many effects, the iMac Pro's rendering and encoding performance to H264 is only about 15-20% faster than the 2017 iMac. That's not much improvement for an $8,000 computer. On some specific effects such as sharpen and aged film, the iMac Pro is about 2x faster whether the codec is ProRes or H264. The iMac Pro remains very quiet under heavy load, more like the nMP.

    In your situation I'd be tempted to either get a top-spec 2017 iMac, take that $4,000 deal on the base-model iMac Pro, or wait for the modular Mac Pro. However, Resolve and Premiere are both cross-platform, so you also have the option of going Windows, which gives you many hardware choices -- a blessing and a curse.
  5. iMac Pro

    That is a perceptive comment. When we first heard of the iMac Pro and the upcoming "modular" Mac Pro, a key question was how those Xeon CPUs would handle the world's most common codec, H264. The previous "new" Mac Pro doesn't handle it well, and the 2017 i7 iMac 27 is about 2x as fast at ingesting H264 and transcoding to ProRes proxy or exporting to H264 (using FCPX). This includes most H264 variants, such as Sony's XAVC-S.

    For people on the high end, the iMac Pro appears to handle RED RAW very well (at least using FCPX). For people with all-ProRes acquisition it seems pretty fast. For the currently-specialized case of H265/HEVC it seems fast. However, for the common case of H264, the iMac Pro doesn't seem faster than the 2017 i7 iMac, and in fact it may be slower, at least using Apple's own FCPX.

    Before the iMac Pro's release there was speculation it might use a customized Xeon with Quick Sync, or that maybe Apple would write to AMD's UVD and VCE transcoding hardware. That now appears not to be the case. If further testing corroborates that the iMac Pro is weak on H264, it might well prove to be the codec equivalent of the MacBook Pro's USB-C design: it doesn't work well with current technology, but years in the future it might do better -- assuming, of course, customers have not abandoned it in the meantime.
  6. I have several Tiffen NDs. The optical quality is OK but (as with most 8-stop variable NDs) they have polarization artifacts at high attenuation. Another problem is that Tiffen filters have no hard stops at each end, so you can't tell by feel where you are. I have some NiSi variable NDs and I like them much better. They have hard stops and don't have the "X" effect at high attenuation; OTOH they are limited to six stops: https://www.dpreview.com/news/8909959108/nisi-launches-variable-nd-filter-with-the-dreaded-x-effect

    My favorite filter is the Heliopan 77mm, which also has hard stops and also avoids the "X" effect. Its minimum attenuation is only 1 stop and its maximum is 6 stops (see the stop/ND-factor conversion sketched after this post). It is expensive but it's an excellent filter. IMO it doesn't make sense to put a cheap filter on a $2500 lens, but if you test a cheaper filter and it works for you, go ahead and use it. https://www.bhphotovideo.com/c/product/829300-REG/Heliopan_708290_82mm_Variable_Gray_ND.html

    Although not commonly discussed, a major factor with variable NDs is whether they fit inside the lens hood. You typically use them when shooting outside, which often means the sun is out and you need a lens hood for best results if shooting within 180 degrees of the sun angle. There are various strap-on hoods, french flags, etc., but they can be cumbersome. Ironically, even some very expensive cameras like the RED Raven have no built-in ND, so you can end up using the same screw-on variable ND as somebody with a GH5.

    This is a very difficult area because neither lens manufacturers nor filter manufacturers have specs on filter/lens hood fitment. A big place like B&H can sometimes give advice, but not always. You basically need to take all your lenses to some place with a huge in-stock supply that would let you try them all; maybe B&H or the NAB show? If people would methodically post (maybe in a sticky thread) what filter fits inside the hood of what lens, that would help. I know from personal testing that the Heliopan 77mm variable ND fits inside the lens hood of my Canon 70-200 2.8 IS II, and I can easily reach inside (with the lens hood attached) and turn the filter. It will not fit inside the hood of the Sony 70-200 2.8 G-Master, and none of the NiSi, Tiffen or GenusTech 77mm variable NDs I've tried will fit. I have this 95mm filter which NiSi makes for Hasselblad, and it fits inside the lens hood of my Sony 28-135 f/4 cinema lens: https://www.aliexpress.com/item/NiSi-95-mm-Slim-Fader-Variable-ND-Filter-ND4-to-ND500-Adjustable-Neutral-Density-for-Hasselblad/32311172283.html Some of the longer Sony A-mount and FE-mount lenses actually have a cutout in the bottom of the lens hood where you can turn a variable filter -- provided it fits.

    Dave Dugdale did a variable ND test here:
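For reference, attenuation in "stops" is just log2 of the ND filter factor, so the ranges quoted above convert directly. A minimal sketch follows; the factor values are generic examples, not specifications of the filters discussed.

```python
# Convert between ND filter factor and stops of attenuation:
# stops = log2(factor), factor = 2 ** stops.
import math

for factor in (2, 8, 64, 400, 1000):       # generic example factors
    print(f"ND{factor:<5d} ~= {math.log2(factor):4.1f} stops")
# e.g. a 1-6 stop variable ND spans roughly ND2 to ND64;
# an 8-stop design reaches about ND256.
```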
  7. I have done extensive documentary editing with 4K XAVC-S and GH5 files using FCPX on 2015 and 2017 iMac 27s and 2014, 2015 and 2016 MacBook Pro 15s. I used Premiere extensively from CS4 through CS6 and have a Premiere CC subscription, but mainly use it for testing.

    Obtaining smooth editing performance on 4K H264 is difficult on almost any hardware or software. Unlike Premiere, FCPX uses Intel's Quick Sync acceleration for H264 and is much faster on the same Mac hardware -- yet even FCPX can be sluggish without proxies. Using 1080p proxies, FCPX is lightning fast at 4K on any recent Mac, even a 2013 MacBook Air. However, compute-intensive effects such as Neat Video or Imagenomic Portraiture can slow down anything, no matter what the hardware or editing software.

    Editing 4K H264 using Premiere on a Mac tends to be CPU-bound, not I/O- or GPU-bound. You can see this yourself by watching CPU and I/O with Activity Monitor; iStat Menus ver. 6 also allows monitoring the GPU (a simple CPU/disk logging sketch follows this post). The I/O data rate for 4K H264 is not very high, and with proxies it's even lower. I/O optimizations like SSD, RAID, etc., tend not to help because you're already bottlenecked on the CPU. This is a generalization -- if you are editing four-angle multicam off a 5400 rpm USB bus-powered portable drive, then you could be I/O-bound.

    I have done a lot of back-to-back testing of a 2014 vs 2016 top-spec MBP when editing 4K H264 XAVC-S and GH5 material using FCPX. The 2016 is much faster, although I'm not sure how representative this would be for Premiere. In FCPX my 2017 iMac 27 is about 2x faster than the 2015 iMac (both top spec) when transcoding or exporting H264. I think this is due to the improved Kaby Lake Quick Sync, but am not sure. A top-spec 2017 MBP might be considerably faster than your 2014, but this depends a lot on the software. Comparing top-spec configurations, the GPU is about 2x faster but the CPU only modestly faster. It might be enough to compensate while staying on Premiere, especially if your problem was the GPU.

    But I'm suspicious about why it's so slow if you're using 720p proxies. In my testing Premiere was very fast on 4K H264 when using proxies. This makes me think it's Warp Stabilizer or some other effect slowing it down. Can you reproduce the slowdown without any effects? Without effects, does the extreme sluggishness only diminish, or does it go away entirely?

    Resolve performance has been greatly improved in the latest version, and in some benchmarks it's as fast as FCPX. You might want to consider that. FCPX is very good but it's a bigger transition from a conceptual standpoint, whereas Resolve is track-oriented like Premiere.
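The post uses Activity Monitor and iStat Menus for this; as an alternative, here is a minimal sketch using the third-party psutil package (an assumption on my part, not something the post recommends) that logs overall CPU use and disk read throughput once per second while you scrub a timeline, so you can see which resource saturates first.

```python
# Minimal sketch: log CPU use and disk read throughput once per second while
# scrubbing a 4K H264 timeline, to see whether playback is CPU-bound or
# I/O-bound. Requires the third-party psutil package (pip install psutil).
# Stop with Ctrl-C.
import psutil

prev = psutil.disk_io_counters()
while True:
    cpu = psutil.cpu_percent(interval=1.0)   # percent across all cores, 1 s sample
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    prev = cur
    print(f"CPU {cpu:5.1f}%   disk reads {read_mb:7.1f} MB/s")
```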
  8. Always thoroughly test this type of utility on numerous clips before relying on it. I haven't tested this one, but I've seen several cases where such utilities produce a damaged output file. I also suggest testing playback with two different players, such as VLC plus Windows Media Player or QuickTime Player. Make sure the player properly shows the beginning and end of the trimmed file, and that fast forward and fast reverse followed by normal-speed playback work OK. Otherwise it is possible to trim a bunch of files, think they are OK, discard the originals, then later find out the trimmed files are compromised in some way. (A quick automated sanity check is sketched below, though it does not replace playback testing.)
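As a supplement to the manual checks above, a minimal sketch using ffprobe (assumes ffmpeg/ffprobe is installed and on the PATH; the file names in the usage comment are hypothetical) that compares the reported duration of a trimmed clip against its original:

```python
# Minimal sketch: sanity-check a trimmed clip against its original with ffprobe.
# This only verifies the container reports a plausible duration -- it does not
# replace the playback checks described above.
import subprocess

def probe_duration(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def check_trim(original, trimmed, expected_trim_seconds):
    orig = probe_duration(original)
    trim = probe_duration(trimmed)
    # Allow some slack: trims on long-GOP H264 often snap to keyframes.
    if abs((orig - trim) - expected_trim_seconds) > 2.0:
        print(f"WARNING: {trimmed} is {trim:.1f}s, expected about "
              f"{orig - expected_trim_seconds:.1f}s")
    else:
        print(f"OK: {trimmed} ({trim:.1f}s)")

# Hypothetical usage:
# check_trim("A001_original.MP4", "A001_trimmed.MP4", expected_trim_seconds=30)
```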
  9. The *only* viable option? I have shot hundreds of hours of documentary video on the A7RII and even *it* works very well. We also use the GH5 and do two-camera interviews with it and the A7RII. The GH5 is excellent, but in the real world each has pros and cons. Interestingly, both the A7RII and GH5 share a feature the A7SII (and probably A7SIII) don't have: the ability to shoot in a crop mode that gives 1.5x on the A7R series and 1.4x on the GH5. That is really handy because it's like a flawless tele-converter without changing lenses.

    From actual hands-on field documentary work, the biggest A7RII issues are not the 8-bit 100 Mbps codec or the lack of 4K/60. It is things like this:

    - Inability to rapidly switch between Super35 and full-frame mode
    - Slow boot-up to fully operational status
    - Intermittently laggy control input
    - Cumbersome menu system with limited customization
    - Poor button ergonomics and poor tactile feedback
    - Poor battery life (although the battery grip on the A7RII fixes much of that)
    - No 1080p/120
    - Focus peaking could be better

    For stills, the biggest issue is the incredibly slow write rate to the SD card and the almost nonexistent multitasking during those long periods. Most or all of these are addressed in the A7RIII. So I don't see the GH5 as "the only viable option", even though my doc team uses one. I would much rather have Eye AF in video mode than a 10-bit codec. These are the differences between real-world use and comparing specs.

    If you want to see informed, experienced commentary about the A7RIII and video, check out Max Yuryev's latest commentary. This is the difference between someone who owns and uses both the GH5 and A7RII vs someone who looks at specs:
  10. It shows the difference when you shoot 8-bit log then push the colors hard in post. Of course 8-bit will degrade faster if it was captured in a flat profile. The question is how the comparison would look if the 8-bit side was *not* captured flat, then both sides were graded as well as possible. It would likely look different, but the 8-bit side would not have artifacts and banding. The 10-bit side might have more dynamic range due to the flat acquisition. But in that case it would be two different "looks", not one side with significant artifacts and one without. (A small numeric illustration of why stretching a flat 8-bit capture produces banding is sketched below.)
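A minimal sketch of the banding argument, using NumPy: a smooth gradient captured "flat" (squeezed into an assumed 40% of the code-value range, a figure chosen purely for illustration) retains far fewer distinct levels after the contrast is stretched back out in post at 8-bit than at 10-bit.

```python
# Quantize a smooth ramp into a narrow ("flat") part of the available range,
# then stretch it back to full range, and count how many distinct code values
# survive -- fewer distinct values means more visible banding.
import numpy as np

gradient = np.linspace(0.0, 1.0, 4096)          # ideal smooth ramp, 0..1

def distinct_levels(bit_depth, flat_fraction):
    levels = 2 ** bit_depth - 1
    captured = np.round(gradient * flat_fraction * levels)   # quantized flat capture
    graded = captured / flat_fraction                         # contrast stretch in post
    return len(np.unique(graded))

for bits in (8, 10):
    print(f"{bits}-bit flat capture -> {distinct_levels(bits, 0.4)} distinct levels after grading")
# With the assumed 40% "flat" range, 8-bit keeps roughly 100 levels vs roughly 400 for 10-bit.
```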
  11. This is correct, and (once again) the OP equated 4K solely with distribution resolution. There are several reasons to shoot 4K:

    (1) It allows reframing in post.
    (2) It may allow better stabilization in post, provided the shot is framed a little loose. OTOH digital stabilization often induces artifacts, so the goal is not to rely on it.
    (3) Each frame is an 8-megapixel still, so frame grabs are a lot better.
    (4) Shooting in 4K may give better "shelf life" for the material, similar to how shooting color for TV did in the 1960s. Even though initially few people had color TVs, eventually everyone would, so the additional cost of color film was often worthwhile.
    (5) Large 4K productions impose a major new post-production load. It is vastly harder than 1080p due to the volume and possible transcoding and workflow changes. When my doc group shot H264 1080p, we could just distribute that through the post-production pipeline without a thought. With H264 4K, it must be transcoded to proxy, and collaborative editing often requires a proxy-only workflow, which in turn can expose complications for re-sync, etc. The reason this is an argument *for* 4K is that it takes a long time to figure out the post-production issues. Computers won't be much faster next year, so if you're *ever* going to transition to 4K and are shooting multicam and high-shooting-ratio material, you may as well start the learning curve now.

    Arguments against 4K: it may not look much (or any) better than good-quality 1080p when viewed on typical playback devices, so why take the huge hit in post production? It can actually look worse than 1080p, depending on what cameras are used. And even though #5 above was listed as a 4K advantage, it is also one of the strongest arguments *against* 4K: the huge post-production load. Whether you shoot ProRes, DNxHD, H264, etc., it can be a huge burden. Worst of all is H264, since few computers are fast enough to edit it smoothly; it therefore generally requires transcoding, proxies, and various knock-on effects regarding media management. It's not that bad if you're playing around with 4K or shooting a little commercial, but for larger productions I'd roughly estimate it's over 10x (not 4x) as difficult as 1080p from an IT and data-wrangling standpoint.
  12. It does not. His BruceX time was 18 sec; mine is 15.8 sec (average of several runs). My 2017 iMac 27 is the 4.2 GHz i7, 32GB, 2TB SSD, RP 580 GPU. His Geekbench 4 multi-core CPU score was 20,300; mine was 20,257. His Cinebench R15 CPU score was better at 1102 vs my 936. These are the vagaries of benchmarking and don't indicate a clear improvement over a factory top-spec iMac.

    The BruceX benchmark is especially sensitive to technique. Whether you restart FCPX or reboot macOS each time, whether background rendering is off, whether you disable Spotlight indexing and Time Machine before running the benchmark -- all of those have an effect. He was either unaware of these or did not mention them.
  13. Just keep in mind the 2017 iMac 27 i7 is twice as fast as a 2016 MBP or 2015 iMac 27 ONLY on *some* things -- specifically transcoding H264 to ProRes proxy. Considering that's a very time-consuming part of many 4K H264 workflows, that's really useful. It's also specific to FCPX; the performance difference varies with each application. The 2017 iMac 27 i7 is also about 2x as fast (vs a 2015 iMac i7 or a 2016 MBP i7) on the GPU-oriented BruceX benchmark, but that is also a narrow task. On other GPU-heavy or mixed CPU/GPU tasks like Neat Video, it's usefully faster but not 2x faster.

    On a few H264 long-GOP 4K codecs I tested, the 2017 iMac 27 i7 seems marginally fast enough to edit single-camera 4K material without transcoding (in FCPX), which is a big improvement over the 2015 iMac 27 i7 or 2016 top-spec MBP. However, multicam still requires transcoding to proxy, and if you want to really blitz through the material, proxy still helps.

    If you now use, or will ever use, ProRes or DNxHD acquisition, this picture totally changes. It then becomes less CPU-intensive but much more I/O-intensive. You usually don't need to transcode in those cases, but the data volume and I/O rates increase by 6x, 8x or more (rough arithmetic below).
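The 6x-8x figure follows directly from typical data rates. A rough sketch using approximate bitrates for UHD/30p (about 100 Mbps for common 4K long-GOP H264 acquisition, roughly 500 Mbps for ProRes 422 and 750 Mbps for ProRes 422 HQ -- ballpark assumptions, not exact specifications; consult Apple's ProRes white paper for precise numbers):

```python
# Rough comparison of acquisition data rates and per-hour storage.
# Bitrates below are approximations for UHD/30p, not exact specifications.
rates_mbps = {
    "H264 long GOP (camera)": 100,
    "ProRes 422":             500,
    "ProRes 422 HQ":          750,
}

base = rates_mbps["H264 long GOP (camera)"]
for name, mbps in rates_mbps.items():
    gb_per_hour = mbps * 3600 / 8 / 1000    # Mbit/s -> GB per hour
    print(f"{name:24s} {mbps:4d} Mbps  ~{gb_per_hour:5.0f} GB/hr  "
          f"({mbps / base:.1f}x the H264 rate)")
```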
  14. I have top-spec versions of these: 2015 iMac 27, 2017 iMac 27, and 2016 MacBook Pro. I do FCPX editing professionally. In general the 2017 iMac 27 is much faster in FCPX than the previous 2015 iMac and also the 2016 MBP. The FCPX performance improvement (especially in H264 transcoding and rendering) of the 2017 model is far greater than synthetic benchmarks would indicate. The 2017 iMac 27 is the only machine I've ever used -- including a 12-core Mac Pro D700 -- that was fast enough to edit single-camera H264 long-GOP 4K without transcoding.

    While it's about 2x the performance of the i7 2015 iMac 27 when rendering or exporting H264, and 1.6x faster on the GPU-intensive BruceX benchmark, it's not equally faster on all FCPX tasks and plugins. E.g., it's about 12% faster on Neat Video and 18% faster on Digital Anarchy flicker reduction. In theory you'd expect the 2017 iMac to be fastest on GPU-oriented tasks, since the Radeon Pro 580 is much faster than the M395X in the 2015 iMac. However, in FCPX the greatest improvement I've seen is in encode/decode and rendering of H264 material. There were Quick Sync improvements in Kaby Lake, but I thought those mainly expanded H265 coverage rather than improving H264 performance; maybe I was wrong.

    Below: time to import and transcode to ProRes proxy ten 4K XAVC-S clips from a Sony A7RII, total running time 11 min 43 sec. It's interesting that in this particular test the 2016 MBP was actually faster than the 2015 iMac, so a 2016 MBP is no slouch -- it just can't touch the 2017 iMac 27. Unfortunately I haven't tested the 2017 MBP. All tests were repeated three times.

    2015 iMac 27: 5 min 37 sec
    2017 iMac 27: 2 min 40 sec
    2016 MBP: 3 min 46 sec
  15. Yes, there is no place to put it. Also, the A7 series is full frame, requiring an even larger ND than the FS5's. This also increases manufacturing cost, since the surface area of an optical element increases as the square of its radius. As you stated, it must drop out of the light path or else you take an ISO hit. The FS5 has space for this, but a small mirrorless camera does not. I would like to have built-in variable ND, but I can't see mirrorless manufacturers doing it. There are lots of other improvements the A7S/R series could benefit from, as seen on the A6500 and A9. They could make a factory-designed variable ND "throttle" adapter. That would avoid increasing the cost and complexity of the base camera, but give us video people what we want.