Everything posted by joema

  1. I have several Tiffen NDs. The optical quality is OK but (as with most 8-stop variable NDs) they have polarization artifacts at high attenuation. Another problem is Tiffen filters have no hard stops at each end, so you can't tell by feel where you are. I have some NiSi variable NDs and I like them much better. They have hard stops plus don't have the "X" effect at high attenuation; OTOH they are limited to six stops: https://***URL removed***/news/8909959108/nisi-launches-variable-nd-filter-with-the-dreaded-x-effect My favorite filter is the Heliopan 77mm, which also has hard stops and also avoids the "X" effect. Its minimum attenuation is only 1 stop and max is 6 stops. It is expensive but it's an excellent filter. IMO it doesn't make sense to put a cheap filter on a $2500 lens, but if you test a cheaper filter and it works for you, go ahead and use it. https://www.bhphotovideo.com/c/product/829300-REG/Heliopan_708290_82mm_Variable_Gray_ND.html Although not commonly discussed, a major factor with variable NDs is whether they fit inside the lens hood. You typically use them when shooting outside, which often means the sun is out and you need a lens hood for best results if shooting within 180 degrees of the sun angle. There are various strap-on hoods, French flags, etc. but they can be cumbersome. Ironically even some very expensive cameras like the RED Raven have no built-in ND, so you can end up using the same screw-on variable ND as somebody with a GH5. This is a very difficult area because neither lens manufacturers nor filter manufacturers have specs on filter/lens hood fitment. A big place like B&H can sometimes give advice but not always. You basically need to take all your lenses to some place with a huge in-stock supply that would let you try them all; maybe B&H or the NAB show? If people would methodically post (maybe on a sticky thread) what filter fits inside the hood of what lens, that would help.
I know from personal testing the Heliopan 77mm variable ND fits inside the lens hood of my Canon 70-200 2.8 IS II, and I can easily reach inside (with lens hood attached) and turn the filter. It will not fit inside the hood of the Sony 70-200 2.8 G-Master, and none of the NiSi, Tiffen or GenusTech 77mm variable NDs I've tried will fit. I have this 95mm filter which NiSi makes for Hasselblad, and it fits inside the lens hood of my Sony 28-135 f/4 cinema lens: https://www.aliexpress.com/item/NiSi-95-mm-Slim-Fader-Variable-ND-Filter-ND4-to-ND500-Adjustable-Neutral-Density-for-Hasselblad/32311172283.html Some of the longer Sony A-mount and FE-mount lenses actually have a cutout in the bottom of the lens hood where you can turn a variable filter -- provided it fits. Dave Dugdale did a variable ND test here:
  2. I have done extensive documentary editing using 4K XAVC-S and GH5 files using FCPX on 2015 and 2017 iMac 27 and 2014, 2015 and 2016 MacBook Pro 15. I used Premiere extensively from CS4 through CS6 and have a Premiere CC subscription but mainly use it for testing. Obtaining smooth editing performance on 4K H264 is difficult on almost any hardware or software. Unlike Premiere, FCPX uses Intel's Quick Sync acceleration for H264 and is much faster on the same Mac hardware -- yet even FCPX can be sluggish without proxies. Using 1080p proxies, FCPX is lightning fast at 4K on any recent Mac, even a 2013 MacBook Air. However compute-intensive effects such as Neat Video or Imagenomic Portraiture can slow down anything, no matter what the hardware or editing software. Editing 4K H264 using Premiere on a Mac tends to be CPU-bound, not I/O or GPU bound. You can see this yourself by watching the CPU and I/O with Activity Monitor. iStat Menus ver. 6 also allows monitoring the GPU. The I/O data rate for 4K H264 is not very high, and using proxies it's even lower. Using I/O optimizations like SSD, RAID, etc., tends not to help because you're already bottlenecked on the CPU. This is a generalization -- if you are editing four-angle multicam off a 5400 rpm USB bus-powered portable drive, then you could be I/O bound. I have done a lot of back-to-back testing of a 2014 vs 2016 top-spec MBP when editing 4K H264 XAVC-S and GH5 material using FCPX. The 2016 is much faster, although I'm not sure how representative this would be for Premiere. On FCPX my 2017 iMac 27 is about 2x faster than the 2015 iMac (both top spec) when transcoding or exporting H264 from FCPX. I think this is due to the improved Kaby Lake Quick Sync, but am not sure. A top-spec 2017 MBP might be considerably faster than your 2014 but this depends a lot on the software. Comparing top-spec configurations, the GPU is about 2x faster but the CPU only modestly faster.
It might be enough to compensate while staying on Premiere, especially if your problem was GPU-related. But I'm suspicious about why it's so slow if you're using 720p proxies. In my testing Premiere was very fast on 4K H264 if using proxies. This makes me think it's Warp Stabilizer or some effect slowing it down. Can you reproduce the slowdown without any effects? Without effects, does the extreme sluggishness only diminish or does it go away entirely? Resolve performance has been greatly improved in the latest version and in some benchmarks it's as fast as FCPX. You might want to consider that. FCPX is very good but it's a bigger transition from a conceptual standpoint, whereas Resolve is track-oriented like Premiere.
  3. Always thoroughly test this type of utility on numerous clips before relying on it. I haven't tested this one but I've seen several cases where such utilities produce a damaged output file. I also suggest testing playback with two different players such as VLC and Windows Media Player or Quicktime Player. Make sure the player will properly show the beginning and end of the trimmed file, also that fast forward and fast reverse followed by normal speed playback works OK. Otherwise it is possible to trim a bunch of files, think they are OK, discard the originals, then later find out the trimmed files are compromised in some way.
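A sketch of the kind of automated sanity check described above, assuming ffmpeg/ffprobe are installed; `probe_duration` and `duration_ok` are hypothetical helper names, and keyframe-aligned trims can legitimately land a second or two off the requested cut points:

```python
import subprocess

def probe_duration(path):
    """Report the container duration in seconds via ffprobe (must be on PATH)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def duration_ok(reported, expected, tolerance=2.0):
    """Keyframe-only trimmers may cut a little wide of the requested points,
    so compare against the expected length with a small tolerance."""
    return abs(reported - expected) <= tolerance

# e.g. duration_ok(probe_duration("trimmed.mp4"), 10.0)
print(duration_ok(11.3, 10.0))   # near the requested 10 s cut: plausible
print(duration_ok(25.0, 10.0))   # way off: flag this file for manual review
```

A check like this only catches gross damage; it doesn't replace the manual playback tests (start/end, fast forward/reverse) described above.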
  4. The *only* viable option? I have shot hundreds of hours of documentary video on the A7RII and even *it* works very well. We also use the GH5 and do two-camera interviews with it and the A7RII. The GH5 is excellent but in the real world each has pros and cons. Interestingly both A7RII and GH5 share a feature the A7SII (and probably A7SIII) don't have: ability to shoot in a crop mode that gives 1.5x on the A7R series and 1.4x on the GH5. That is really handy because it's like a flawless tele-converter without changing lenses. From actual hands-on field documentary work, the biggest A7RII issues are not the 8-bit 100 mbps codec or lacking 4k/60. It is things like this: - Inability to rapidly switch between Super35 and full frame mode - Slow boot up to fully operational status - Intermittently laggy control input - Cumbersome menu system with limited customization - Poor button ergonomics and poor tactile feedback - Poor battery life (although the battery grip on the A7RII fixes much of that) - No 1080p/120 - Focus peaking could be better For stills the biggest issue is incredibly slow writing rate to the SD card and almost non-existent multi-tasking during those long periods. Most or all of these are addressed in the A7RIII. So I don't see the GH5 as "the only viable option", even though my doc team uses one. I would much rather have Eye AF in video mode than a 10-bit codec. These are the differences between real world use vs comparing specs. If you want to see informed, experienced commentary about the A7RIII and video, check out Max Yuryev's latest commentary. This is the difference between someone who owns and uses both GH5 and A7RII vs someone who looks at specs:
  5. It shows the difference when you shoot 8-bit log then push the colors hard in post. Of course 8-bit will degrade faster if it was captured in a flat profile. The question is how would the comparison look if the 8-bit side was *not* captured flat, then both sides were graded as best possible. It would likely look different but the 8-bit side would not have artifacts and banding. The 10-bit side might have more dynamic range due to the flat acquisition. But in that case it would be two different "looks", not one side with significant artifacts and one without.
  6. This is correct, and (once again) the OP equated 4k solely with distribution resolution. There are several reasons to shoot 4k: (1) Allows reframing in post (2) May allow better stabilization in post provided the shot is framed a little loose. OTOH digital stabilization often induces artifacts so the goal is not to use this. (3) Each frame is an 8 megapixel still so frame grabs are a lot better. (4) Shooting in 4k may give better "shelf life" for the material, similar to how shooting color for TV did in the 1960s. Even though initially few people had color TVs, eventually everyone would so the additional cost of color film was often worthwhile. (5) Large 4k productions impose a major new post production load. It is vastly harder than 1080p due to the volume and possible transcoding and workflow changes. When my doc group shot H264 1080p we could just distribute that in the post-production pipeline without a thought. With H264 4k, it must be transcoded to proxy, collaborative editing often requires a proxy-only workflow which then can expose complications for re-sync, etc. The reason this is an argument *for* 4k is it takes a long time to figure out the post production issues. Computers won't be much faster next year, so if you're *ever* going to transition to 4k and are shooting multicam and high shooting ratio material, you may as well start the learning curve now. Arguments against 4k: It may not look much (or any) better than good quality 1080p when viewed on typical playback devices, so why take the huge hit in post production? It can actually look worse than 1080p, depending on what cameras are used. Even though #5 above was listed as a 4k advantage, this is also one of the strongest arguments *against* 4k: the huge post production load. Whether you shoot in ProRes, DNxHD, H264, etc. it can be a huge burden. Worst of all is H264 since few computers are fast enough to smoothly edit this.
Therefore it generally requires transcoding, proxies, and various knock-on effects regarding media management. It's not that bad if you're playing around with 4k or shooting a little commercial, but for larger productions I'd roughly estimate it's over 10x (not 4x) as difficult as 1080p from an IT and data wrangling standpoint.
  7. It does not. His BruceX time was 18 sec, mine is 15.8 sec (average of several runs). My 2017 iMac 27 is the 4.2Ghz i7, 32GB, 2TB SSD, RP 580 GPU. His GeekBench 4 multi-core CPU score was 20,300, mine was 20,257. His Cinebench R15 CPU score was better at 1102, mine was 936. These are the vagaries of benchmarking and don't indicate a clear improvement over a factory top-spec iMac. The BruceX benchmark is especially sensitive to technique. Whether you reboot FCPX or macOS each time, whether you have background rendering off, whether you disable Spotlight indexing and Time Machine before running the benchmark -- all those have an effect. He was either unaware of these or did not mention them.
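For anyone repeating these benchmarks, reporting the mean and spread of several runs (as the averaged times above do) makes small gaps like 15.8 s vs 18 s easier to interpret. A minimal sketch with made-up run times:

```python
from statistics import mean, stdev

# Made-up BruceX-style wall-clock times (seconds) from five repeated runs
# on the same machine; real runs drift with thermal state, Spotlight, etc.
runs = [15.6, 15.9, 15.8, 16.1, 15.6]

print(f"mean {mean(runs):.1f} s, stdev {stdev(runs):.2f} s")
# A single-run gap smaller than a couple of standard deviations is noise,
# not evidence that one machine or configuration is genuinely faster.
```

With run-to-run spread like this, controlling the conditions (reboots, background rendering, Spotlight, Time Machine) matters as much as the hardware.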
  8. Just keep in mind the 2017 iMac 27 i7 is twice as fast as a 2016 MBP or 2015 iMac 27 ONLY on *some* things -- specifically transcoding H264 to ProRes proxy. Considering that's a very time-consuming part of many H264 4k workflows, that's really useful. It's also limited to FCPX; the performance difference varies with each software. The 2017 iMac 27 i7 is also about 2x as fast (vs a 2015 iMac i7 or a 2016 MBP i7) on the GPU-oriented BruceX benchmark, but this is also a narrow task. On other GPU-heavy or mixed CPU/GPU tasks like Neat Video, it's usefully faster but not 2x faster. On a few H264 long GOP 4k codecs I tested, the 2017 iMac 27 i7 seems marginally fast enough to edit single-camera 4k material without transcoding (on FCPX), which is a big improvement from the 2015 iMac 27 i7 or 2016 top-spec MBP. However multicam still requires transcoding to proxy, and if you want to really blitz through the material, then proxy still helps. If you now or will ever use ProRes or DNxHD acquisition, this picture totally changes. It then becomes less CPU intensive but much more I/O intensive. You usually don't need to transcode in those cases but the data volume and I/O rates increase by 6x, 8x or more.
  9. I have top-spec versions of these: 2015 iMac 27, 2017 iMac 27, and 2016 MacBook Pro. I do FCPX editing professionally. In general the new 2017 iMac 27 is much faster on FCPX than the previous 2015 iMac and also the 2016 MBP. The FCPX performance improvement (esp. in H264 transcoding and rendering) of the 2017 model is far greater than synthetic benchmarks would indicate. The 2017 iMac 27 is the only machine I've ever used -- including a 12-core Mac Pro D700 -- that was fast enough to edit single-camera H264 long GOP 4k without transcoding. While it's about 2x the performance of the i7 2015 iMac 27 when rendering or exporting H264, and 1.6x faster on the GPU-intensive BruceX benchmark, it's not equally faster on all FCPX tasks and plugins. E.g, it's about 12% faster on Neat Video and 18% faster on Digital Anarchy flicker reduction. In theory you'd expect the 2017 iMac to be fastest on GPU-oriented tasks since the Radeon Pro 580 is much faster than the M395X in the 2015 iMac. However in FCPX the greatest improvement I've seen is in encode/decode and rendering of H264 material. There were Quick Sync improvements in Kaby Lake but I didn't think they were performance-related on H264, rather they expanded H265 coverage, but maybe I was wrong. Below: time to import and transcode to ProRes proxy ten 4k XAVC-S clips from a Sony A7RII, total running time 11 min 43 sec. It's interesting in this particular test the 2016 MBP was actually faster than the 2015 iMac, so a 2016 MBP is no slouch -- it just can't touch the 2017 iMac 27. Unfortunately I haven't tested the 2017 MBP. All tests repeated three times. 2015 iMac 27: 5 min 37 sec 2017 iMac 27: 2 min 40 sec 2016 MBP: 3 min 46 sec
  10. Yes, there is no place to put it. Also the A7 series is full frame, requiring an even larger ND than the FS5. This also increases the manufacturing cost as surface area of an optical element increases as the square of the radius. As you stated it must drop out of the light path else you take an ISO hit. The FS5 has space for this but a small mirrorless camera does not. I would like to have built-in variable ND but I can't see mirrorless manufacturers doing that. There are lots of other improvements the A7S/R series could benefit from, as have been seen on the A6500 and A9. They could make a factory-designed variable ND throttle adapter. That would avoid increasing the cost and complexity of the base camera but give us video people what we want.
  11. 180 minute final program length, 12TB storage and two GX85s using a 100 megabit/sec codec implies an approximate 60:1 shooting ratio, which is typical for a documentary, or even a bit low by today's standards. Your hardware shows it is possible to do quality professional work on a "shoestring" budget and with fairly low cost equipment. However I think most people who previously did large documentary projects on DV and H264 1080 (which did not require transcoding for performance on FCPX or Premiere) have been or will be shocked at the huge IT and workflow burden imposed by large-scale H264 4k. This has three components: (1) camera-native material can no longer be smoothly edited but requires time-consuming transcoding, (2) camera file sizes are much larger, and (3) even at 1/4 size, the proxy files themselves take considerable space. I edited a documentary in 2010 shot on DV by multiple DVX100 cameras. The whole thing was about 500 gigabytes, about 40 hr of material. Initial post processing was trivial -- just capture the tapes, import and edit directly in CS4. By contrast I'm now working on an all-4k documentary which will ultimately be about 20 terabytes. Just to transcode the material of each shooting location to proxy takes days. It cannot be handed off to downstream editors on a portable hard drive -- we must use a complex FCPX proxy-only workflow, all the while testing and verifying the final relink will work. Right now I have 96 terabytes of Thunderbolt 2 RAID arrays connected to my iMac, and another 200 TB of off-line storage (part for other concurrent projects). I tested a 12-core Mac Pro last week and it was no faster for the time-consuming transcode phase. In the old days -- esp. after Premiere's Mercury Playback Engine -- things were very simple. We'd just import and edit. Part of this was the ability to edit camera native, and part was the lower volume of material. We didn't even need an assistant editor.
Today (whether on Premiere or FCPX) H264 4k requires time-consuming transcoding to proxy, and the higher shooting ratios require even more time-consuming organizational steps to tag and log. It takes two assistant editors continuously busy handling this, and our storage capacity rivals some datacenters from the 1990s. If we shot the same footage in all 1080p H264, it would still require major logging and tagging but the IT issues in transcoding, proxy management, storage management and media distribution for collaborative work would be vastly easier. Thus the thread title is very valid -- is 4k really necessary? As an editor I like 4k. As an assistant editor, DIT and Data Wrangler I hate 4k. Good quality 1080 is so good I don't think most people on most viewing devices will spontaneously notice a difference. But 4k is the mandated future and content producers must eventually figure out how to handle that. As of today virtually no computers or editing software are fast enough to smoothly edit H264 4k without transcoding. That is a big adjustment for people coming from 1080.
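A quick back-of-envelope check of the shooting-ratio arithmetic in the post above (the function and figures are illustrative; if every byte of the 12 TB were unique footage this lands nearer 90:1 than 60:1, so the lower figure presumably allows for backups, proxies and other non-camera files):

```python
# Illustrative arithmetic only: hours of footage that fit in a given store
# at a given recording bitrate, and the implied shooting ratio.
def hours_of_footage(storage_tb, bitrate_mbps):
    bytes_total = storage_tb * 1e12
    bytes_per_hour = bitrate_mbps * 1e6 / 8 * 3600   # Mb/s -> bytes/hour
    return bytes_total / bytes_per_hour

hours = hours_of_footage(12, 100)   # ~267 hr if every byte were footage
ratio = hours / 3.0                 # against a 180-minute final program
print(f"~{hours:.0f} hr recorded, roughly {ratio:.0f}:1 shooting ratio")
```

Either way, the order of magnitude is the point: hundreds of hours of material behind a three-hour program is what drives the transcoding and logging burden.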
  12. For a hobbyist playing around, 4k storage and processing is no problem. For a feature film shooting 200 hr of ProRes or raw and an IT team to handle it, also no problem. But for a large swath of production -- including documentary and news gathering -- 4k H264 post processing and storage is a big problem. It is too compute intensive to edit natively so requires transcoding to proxy or optimized media. This in turn greatly increases storage size, post processing time, general IT requirements and complexity. I just spent a week testing a 12-core Mac Pro seeking a better way to handle this and just ordered a 32 terabyte Thunderbolt 2 RAID (on top of many other RAID boxes). 4k is the driver for this. In a sense I wish 4k had never been invented since good quality 1080 is really good. In fact everything ABC, Fox and ESPN shoots and broadcasts is 720p/60. All that beautiful Hawaiian cinematography on the ABC TV series "Lost" was broadcast in 720p, but it looked very good. However 4k is the new standard and there's no sense fighting it. Discussions about 4k image quality vs 1080 are just the tip of the iceberg. All that 4k content must be processed somehow. It can take a long time to develop post processing hardware, software and procedures to adequately handle a large 4k production. That's why my doc team started two years ago on this, and we are just now getting a handle on it.
  13. The 2-bay Lacie at 400 megabytes/sec is probably OK. However that should be backed up regularly. You will likely have to transcode 4k H264 to proxy for smoothest editing, no matter how fast the iMac is. Even on a 12-core Mac Pro with dual D700 GPUs that is often required. I personally would prefer the 4.2Ghz i7 CPU, since so much of video editing and transcoding is CPU-bound. You can save some money by getting the lowest 8GB memory config and using third-party RAM. The exact internal storage is up to you but I would not get the 1TB Fusion Drive. I have tested 3TB Fusion Drive and 1TB SSD iMacs side-by-side and for FCPX editing with media on external storage, there's no significant performance difference, nor difference in FCPX startup time. However SSD is simpler, might be a little more reliable and if you're using external media anyway, why not use SSD.
  14. Using RAW or ProRes acquisition works for feature films, short commercials and people experimenting with one camera. It does not work as well for larger documentaries or news gathering. The video marketplace is a lot bigger than scripted narratives. E.g, my doc team has 12 GoPros, five drones, cable cams, crane cams, gimbal cams, 3-axis motion control cams, two separate two-camera interview teams, etc -- all 4k AVC/H264. We have Atomos recorders but prefer not using those in the field unless absolutely required because of complexity and HDMI fragility. The Inspire 2 drone can shoot ProRes internally but this doesn't help if everything else is H264. With all our crews & cameras operating we can easily shoot 1 terabyte of 4k H264 per day at a single site. If we used all ProRes acquisition that would be 8 terabytes per day, which must be offloaded, checksummed and duplicated on site -- each day. So the idea of "just use RAW or ProRes acquisition" may work on smaller productions with lower shooting ratios, but it doesn't work for us. BTW it seems possible that 4k H264 8-bit 4:2:0 can be transcoded to 1080p 10-bit 4:4:4: https://www.provideocoalition.com/can-4k-4-2-0-8-bit-become-1080p-4-4-4-10-bit-does-it-matter/
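The one-terabyte-per-day figure is easy to sanity-check with a sketch like this (the 22 camera-hours/day and the 800 Mb/s ProRes-class rate are assumptions for illustration, not figures from the post):

```python
# Daily acquisition volume at a sustained recording bitrate. The 22
# camera-hours/day and the 800 Mb/s ProRes-class rate are assumptions.
def tb_per_day(bitrate_mbps, recorded_hours):
    return bitrate_mbps * 1e6 / 8 * 3600 * recorded_hours / 1e12

print(f"H264 @100 Mb/s:    ~{tb_per_day(100, 22):.1f} TB/day")
print(f"ProRes-class @800: ~{tb_per_day(800, 22):.1f} TB/day")
```

About 22 camera-hours across all crews at 100 Mb/s is roughly 1 TB; the same day at an 8x-heavier mezzanine bitrate is roughly 8 TB, all of which must be offloaded, checksummed and duplicated on site.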
  15. What exactly does an FS5 (for example) bring to the table that an A6500 or A7SII does not -- and at what price? Yes it has variable ND and XLR, but the internal codec seems identical: UHD 4k 8-bit 4:2:0 at 100 mbps. My doc team uses a multi-channel field recorder, so we don't need XLRs on every camera. Unlike the A7SII, A7RII and A6500, the FS5 doesn't have IBIS sensor-based stabilization. The FS5 EVF resolution is half that of those cameras. Most lenses on the FS5 will require manual zoom, just like a mirrorless camera or DSLR. There are the Sony 28-135 f/4 and 18-110 f/4 power zoom lenses. I have the 28-135 on my A7RII; it's pretty good. I'm not slamming the FS5 -- I almost bought one. But considering the above it seems like the world's most expensive variable ND filter. We do have a Panasonic DVX200, which occupies a unique price/feature zone. If you need a run and gun ENG-type camera it seems a better fit than the FS5 or similar cameras. It gives a bit of shallow DOF with a good quality fixed lens in a familiar camcorder package. For more planned shooting the FS5 is good, but so is an A6500 or A7SII with the same lens. Since the advent of video DSLRs and mirrorless cameras, the manufacturers have not really provided uniquely compelling large-sensor camcorders at a price/feature point that justifies their use in run-and-gun or lower-budget scenarios. E.g, I see the FS5 often sold with the 18-105 kit lens. Which would produce a better image -- that or an A6500 with a Sony 18-110 f/4? They both have camcorder-like functionality. You can get into places (physically, perceptually and from a regulatory standpoint) with the A6500 that you can't reach with an FS5. Besides the Sony product line the GH5 even further raises the question of when exactly do you need a "pro" camcorder. It's true a pro camcorder has certain "street cred" but that's a pretty expensive price to pay for PR.
It might be cheaper to rent one, do your behind-the-camera production shots for your demo reel, then use whatever camera works best in the real world.
  16. I have a 2013 iMac 27 with 3TB FD, a 2015 top-spec iMac 27 with 1TB SSD, 2015 and 2016 top-spec MBP 15s and am testing a 12-core nMP with D700s. This is FCPX 4k documentary editing where the primary codecs are some variant of H264. Even though FCPX is very efficient, in general H264 4k requires transcoding to proxy for smooth, fluid editing and skimming -- even on a top-spec 12-core nMP. If you have a top-spec MBP, iMac or Mac Pro, smaller 4k H264 projects can be done using the camera-native codec, but multicam can be laggy and frustrating. The Mac Pro is especially handicapped on H264 since the Xeon CPU does not have Quick Sync. In my tests, transcoding 4k H264 to ProRes proxy on a 12-core Mac Pro takes nearly twice as long as on a 2015 top-spec iMac 27. For short projects with lower shooting ratios it's not an issue but for larger projects with high shooting ratios it's a major problem. We've got ProRes HDMI recorders but strapping on a bunch of 4k recorders is expensive and operationally more complex in a field documentary situation. That would eliminate the transcoding and editing performance problems but would exacerbate the data wrangling task by about 8x. This is especially difficult for multi-day field shoots where the data must be offloaded and backed up. However in part the viability of editing camera-native 4k depends on your preferences. If you do mainly single-cam work, and use modest shooting ratios so you don't need to skim and keyword a ton of material, and don't mind a bit of lag during edit, a top-spec iMac 27 is probably OK for H264 4k. Re effects, those can either be CPU-bound or GPU-bound, or a combination of both. Some like Neat Video allow you to configure the split between CPU and GPU. But in general effects use a lot of GPU, and like encode/decode, are slowed down by 4k since it's 4x the data per frame of 1080p. Re Fusion Drive vs SSD, for a while I had both 2013 and 2015 iMac 27s on my desk, one with 3TB FD and the other 1TB SSD.
I tested a few small cases with all media on the boot drive, and really couldn't see much FCPX real-world performance difference. You are usually waiting on CPU or GPU. However if you transcode to ProRes, I/O rates skyrocket, making it more likely to hit an I/O constraint. Fusion Drive is pretty good but ideally you don't want media on the boot drive. SSD is fast enough to put media there but it's not big enough. Fusion Drive is big enough but may not be fast enough, thus the dilemma. A 3TB FD is actually a pretty good solution for small scale 1080p video editing, but 4k (even H264) chews through space rapidly. Also, performance will degrade on any spinning drive (even FD) as it fills up. Thus you don't really have 3TB at full performance, but need to maintain considerable slack space. In general we often under-estimate our storage needs, so end up using external storage even for "smaller" projects. If this is your likely destiny, why not use an SSD iMac which is at least a bit faster at a few things like booting? Just don't spend your entire budget on an SSD machine then use a slow, cheap bus-powered USB drive. If I were getting a 2017 iMac 27 for H264 4k editing, it would be a high-spec version, e.g, 4.2Ghz i7, 32GB RAM, 580 GPU, and probably 1TB SSD. Re the iMac Pro, what little we know indicates the base model will be considerably faster than the top-spec iMac 27 -- it has double the cores (albeit at slower clock rate) and roughly double the GPU performance. However the iMac Pro's Xeon will not have Quick Sync, so unless Apple pulls a miracle out of their hat and upgrades FCPX to use AMD's VCE coding engine, it will be handicapped just like the current Mac Pro for that workflow. Apple is limited by what Intel provides but this is an increasingly critical situation for productions using H264 or H265 acquisition codecs and high shooting ratios.
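To see why transcoding to ProRes makes I/O rates "skyrocket", it helps to translate bitrates into the sustained disk throughput a timeline demands (a sketch; the 800 Mb/s ProRes-class figure is an assumption, not a measured value):

```python
# Sustained read throughput needed to play N camera angles simultaneously.
# Bitrates are assumptions: ~100 Mb/s long-GOP H264 vs ~800 Mb/s ProRes-class.
def playback_mb_per_sec(angles, bitrate_mbps):
    return angles * bitrate_mbps / 8   # Mbit/s -> MB/s

print(playback_mb_per_sec(4, 100))   # 4-angle H264 multicam: 50 MB/s
print(playback_mb_per_sec(4, 800))   # same angles, ProRes-class: 400 MB/s
```

A bus-powered 5400 rpm USB drive can manage the first figure but not the second, which is why the codec choice changes whether you're CPU-bound or I/O-bound.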
  17. joema

    iMac Pro

    Although we loosely speak of H264 as a codec, it is actually a coding format. The codec is the specific software implementation which produces that coding format. E.g, x264 and DivX both are codecs which encode to the H264 format. Codecs which produce the H264 format are allowed to use varying levels of compression and bitrates. You would not want a single set of rigid encoding parameters for all H264 formats. The codecs that produce H264 formats are not inferior, any more than ProRes is inferior to raw video. Each has specific tradeoffs and advantages. Like H264, there are many variants of codecs that produce various ProRes formats: http://www.4kshooters.net/2015/01/26/choose-the-version-of-prores-best-suited-to-your-project/ Typically the variants of ProRes are less compressed and easier to edit. However they are also much larger. This decreases the CPU burden to decode/encode, but greatly increases the I/O burden and disk storage required. It is all a tradeoff. My documentary team can shoot one terabyte of 4k H264 per day. If we only used ProRes it would be 8x that large, so we are thankful H264 exists. H264 does increase the computational burden to edit, just like H265 will increase this further. Fortunately computer hardware is getting faster, especially aided by fixed-function hardware accelerators like Quick Sync: https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video, AMD's VCE: https://en.wikipedia.org/wiki/Video_Coding_Engine and nVidia's NVENC: https://en.wikipedia.org/wiki/Nvidia_NVENC
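As a rough sense of the size tradeoff described above, here's a sketch comparing approximate ProRes target bitrates (3840x2160 at 23.98 fps, ballpark figures from Apple's ProRes white paper) against a 100 Mb/s long-GOP H264 acquisition codec:

```python
# Approximate ProRes target bitrates at 3840x2160/23.98 (Mbit/s), per
# Apple's ProRes white paper -- ballpark figures; actual rates vary.
prores_uhd24 = {
    "422 Proxy": 145,
    "422 LT": 328,
    "422": 471,
    "422 HQ": 707,
}

H264_MBPS = 100  # a typical long-GOP acquisition bitrate (e.g. XAVC-S)

for name, mbps in prores_uhd24.items():
    print(f"ProRes {name:9s} ~{mbps} Mb/s = {mbps / H264_MBPS:.1f}x the H264 storage")
```

At roughly 7x the data rate of 100 Mb/s H264, ProRes 422 HQ is consistent with the "one terabyte becomes eight" arithmetic above once overhead and higher frame rates are factored in.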
  18.
    There is no simple answer since video editing and codecs span a wide range. H264 1080p can be edited natively with good performance using either Premiere or FCPX on most machines. You don't need a top-end CPU or GPU for this. OTOH most H264 4k codecs are difficult to edit, even on top-end machines, and often require transcoding to proxy for smoothest editing. Exceptions are H264 4k codecs like Canon's XF-AVC Intra, which are very fast to edit. There can also be a big difference between (say) Premiere and FCPX, especially on a Mac. In general FCPX is considerably more responsive, especially for editing H264 4k. It is about 4x faster exporting to H264 since it uses Quick Sync and Premiere does not. However Premiere has gotten faster the last year or so, even without proxy, which it now also has. That's the editing; effects are different. No matter how lightweight the codec, a computationally-intensive effect must be calculated for each 4k frame. Effects can be implemented entirely in the CPU, entirely in the GPU or a mixture of both. Some effects like Neat Video let you choose CPU or GPU rendering, a mix of both, and how many CPU cores to use. In general 4k is really difficult to edit. From a CPU standpoint the more (and faster) cores the better. An i7 iMac can be significantly faster than an i5 iMac of the same generation because (1) the CPU clock is faster, and (2) hyperthreading. On the current iMac 27 the i7 is about 11% faster just from clock speed. Benefit from hyperthreading varies widely. I used the 3rd party CPUSetter utility to disable/enable hyperthreading on an i7 iMac, and this made about 30% difference in FCPX export speed to H264. For other tasks such as Lightroom import and preview generation, it made no difference. Re Radeon 580, I haven't seen any good benchmarks yet. However only certain tasks are amenable to GPU acceleration; e.g, H264 encode/decode cannot be meaningfully accelerated.
The core algorithm is inherently sequential and not amenable to applying hundreds of lightweight GPU threads. But in general software developers increasingly try to leverage the GPU where possible. You can't update the GPU in an iMac so I'd tend to get the fastest one available.
  19.
    I don't work on long features in terms of deliverables, but on 4k documentaries with high shooting ratios and lots of multicam. In this era that's not unusual -- 4k GoPros and drones are everywhere, A and B cam are 4k, etc. I shot a wedding last year and we used lots of 4k multicam. There is a major editing performance variation among various H264 codecs. E.g, the UHD 4k 4:2:0 100 mbps output from a DVX200 or Sony A7RII is very sluggish -- even in FCPX and on the fastest available iMac. By contrast the UHD 4k 4:2:2 300 mbps output from a Canon XC10 is also H264 but it's very smooth and fast to edit. I don't need proxy for that. But I can't control what codecs camera manufacturers use, just have to deal with it. We have ProRes recorders but generally don't use them due to added complexity in the field. The bottom line is in an era when high shooting ratios and H264 4k are common, current hardware is often not fast enough without transcoding -- even for FCPX. This isn't just timeline performance; the ability to skim material and mark keyword and favorite ranges is also greatly degraded. It is for these increasingly common cases that the iMac Pro is needed and definitely not overkill. Of course we defer compute-intensive effects to the very last step, but ultimately they must be applied *and* iteratively adjusted. Each tweak or adjustment to stabilization, Neat Video, de-flickering, etc. must be rendered in the timeline to fully evaluate, and this is agonizingly slow on 4k. The greatly improved CPU and GPU performance of the iMac Pro is vitally needed for this.
  20. joema

    iMac Pro

    That video was of the 2014 iMac 27. It was improved in 2015 (what I have), and Max re-tested it and determined it did not have the thermal throttling issues of the 2014 model.

    Re editing camera-native H264, I am a fan of that where possible -- lots of FCPX users needlessly transcode to optimized media. However for large quantities of H264 4k you pretty much need proxy -- even if NOT multicam and without Neat Video or multiple effects. Even for single-cam material, the skimmer is just not fast enough on a top-spec 2015 iMac 27 to blitz through large quantities of H264 4k content. If you play around with a 5-minute 4k iPhone video, it's OK without proxy. If you have a long single 4k video (e.g., a classroom lecture) and all you need to do is chop the head and tail, you don't need proxy for that. But for evaluating and seriously editing lots of content, it's just too slow without proxy.

    Re the claim that the iMac 27 is the wrong machine for large proxy transcodes: there really isn't a much better machine. A 12-core nMP has 3x the cores, but they run at 2.7 GHz, so overall it's about 2x the CPU throughput -- and without Quick Sync it might not be *any* faster. And buying a four-year-old nMP now? Now *that's* the wrong machine. By the same token the iMac Pro might not be hugely faster for creating proxies unless Apple figures out some way to use hardware-accelerated H264 decoding on a Xeon machine. But (like the nMP) it would be faster for various other editing and effects-related tasks.
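    The "3x the cores but about 2x the throughput" estimate above is just cores times clock, ignoring IPC differences, hyperthreading and Quick Sync. Worked out with the figures from the post (a 4 GHz 4-core iMac vs. a 2.7 GHz 12-core nMP):

```python
# Back-of-envelope aggregate CPU throughput: cores x clock.
# Ignores IPC, hyperthreading, and Quick Sync, as noted above.
def aggregate_ghz(cores, clock_ghz):
    return cores * clock_ghz

imac = aggregate_ghz(4, 4.0)    # top-spec 2015 iMac 27
nmp  = aggregate_ghz(12, 2.7)   # 12-core nMP
print(f"nMP / iMac ~= {nmp / imac:.1f}x")  # roughly 2x
```

    And since Quick Sync alone can be worth several x on H264 encode, it's easy to see how the Xeon machine could come out no faster overall for proxy creation.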
  21. joema

    iMac Pro

    That is not my experience. I have a top-spec 2015 iMac 27 with multiple Thunderbolt 2 RAIDs (incl. SSD), and I am constantly struggling with performance issues when editing H264 4k in FCPX. As fast and efficient as FCPX is, it's not sufficient to smoothly edit H264 4k multicam on a 4 GHz iMac. It takes one *week* to transcode to proxy the content from the documentary I'm working on. Applying compute-intensive effects like Neat Video, de-flicker or stabilization takes forever on 4k. My iMac is *frequently* at the limit, and this is obvious from the pegged CPU cores in iStat Menus.

    For playing around with some short single-cam H264 4k videos, a $5000 iMac Pro might be overkill. But 4k is no longer an esoteric niche -- it is the new standard. If your cameras capture in ProRes it's less of a problem, but this is typically only an option for scripted narratives, commercials and short-form material. For larger documentaries and news programs it is very common to capture in H264. So handling larger amounts of H264 4k, and soon H265, is a big issue. For me an 8-core iMac Pro is not overkill; it's likely inadequate. That's why I'm glad they will offer an 18-core version. It's currently unclear whether the iMac Pro will support Quick Sync or some other hardware-accelerated encoding, so the only way to compensate is lots of cores.
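    To see how a week-long proxy transcode arises, you can estimate wall time from footage hours and the machine's realtime transcode factor. Both numbers below are illustrative assumptions (not from the post), picked only to show that a high-shooting-ratio documentary lands in week territory:

```python
# Rough proxy-transcode wall-time estimate.
# footage_hours and realtime_factor are illustrative assumptions.
def transcode_days(footage_hours, realtime_factor):
    """realtime_factor: hours of footage transcoded per hour of wall time."""
    return footage_hours / realtime_factor / 24

# e.g. 200 hours of H264 4k at ~1.2x realtime runs for about a week
days = transcode_days(200, 1.2)
print(f"~{days:.1f} days")
```

    The arithmetic also shows why more cores help so directly here: doubling the realtime factor halves the wall time, with no serial bottleneck to speak of since each clip transcodes independently.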
  22. Thank you. BTW a lot of professional news organizations shoot video with DSLRs. They don't want or need something like a C200. Note this three-camera interview in front of the White House: https://joema.smugmug.com/Photography/ABC-News-Using-DSLRs/n-BsScJC/ CNN using a 5D Mark III: https://joema.smugmug.com/Photography/CNN-Using-5D-Mark-III/n-5JqGgB/

    When they do use camcorders, these are often quite old; e.g., this recent 60 Minutes field interview was shot with a Panasonic HMC-150. They need to upgrade to something like a DVX200, not a C200. https://joema.smugmug.com/Photography/60-Minutes-using-Panasonic-HMC/n-MFg8L9/

    The lens shown on the C200 is the same Canon 24-105 f/4 used on DSLRs. Of course you can put any lens on there you want, but why use a $7500 camera with a $950 lens? Also, that lens (and most in that category) is manual zoom. Does Canon have any EF lens in the price/quality range of the Sony 18-110? https://www.bhphotovideo.com/c/product/1280873-REG/sony_selp18110g_e_mount_lens.html The 18-110 lens rehabilitates the FS5 into a camcorder, but at $9300 total. That is double the price of a DVX200, and I doubt most news organizations care about the technical differences.

    Field news and field documentary often use external audio, so XLR inputs on the camera (while nice) are frequently unused, or a 1/8" input would suffice. From one viewpoint, compared to a GH5 the FS5 or C200 are the world's most expensive ND filters.
  23. So it appears the data was OK when played from the camera, but the problem happened when or after the card was removed. Then it appears you tried recovery software. The moment you first see a problem, it's best to re-insert the card in the camera and try to view it again. If it works, you can transfer the data via cable. There's no guarantee that would have worked, but the moment you start manipulating the card outside the camera with recovery software, you never know what that will do. However the fact that the card could not be formatted indicates it totally failed or was somehow damaged.

    My documentary team has many SanDisk 128GB and 256GB cards plus similar Lexar cards. Over seven years and after shooting many terabytes, we've had one SanDisk card fail in the field (which was recoverable) and one Lexar 128GB card fail during initial testing, but no other failures in the field. While genuine Lexar and SanDisk cards are generally very reliable, any card (or camera) can fail.

    The GH5 supports dual-card recording for video, which helps in some cases. However the camera can be lost, stolen, or have a firmware issue that corrupts data on both cards. For critically important shooting, we download and duplicate (on site) the data from all cards twice per day, and keep the two backup drives in separate locations. When traveling home we carry the two drives in separate vehicles.
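    The two-copy field backup workflow described above is only as good as your confidence that both copies match. A minimal sketch of that verification step, hashing every file on the primary backup drive and comparing against the mirror (directory names are hypothetical):

```python
# Sketch: verify a second backup drive is a byte-for-byte mirror of the
# first by comparing SHA-256 hashes. Paths are hypothetical examples.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MB chunks so large video files don't exhaust RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_mirror(primary: Path, mirror: Path) -> list[Path]:
    """Return relative paths that are missing or differ on the mirror."""
    bad = []
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(primary)
        dst = mirror / rel
        if not dst.is_file() or sha256_of(src) != sha256_of(dst):
            bad.append(rel)
    return bad

# Usage (hypothetical drive mount points):
# problems = verify_mirror(Path("/Volumes/BackupA"), Path("/Volumes/BackupB"))
```

    Running this once per download session catches silent copy errors before the cards get reformatted for the next day's shoot.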
  24. In 2011 this was the top-spec iMac 27, except for the 1TB HDD. It should do OK on 1080p H264 video. The CPU is "Sandy Bridge" so has Quick Sync. However if the 1TB hard drive is original, it is 6 yrs old and should be replaced, probably with an SSD. This is not particularly easy but can be done: https://eshop.macsales.com/installvideos/imac_mid27_2011_ssddiy/
  25. Of those three, I'd suggest this one since it has a "Sandy Bridge" CPU and should support Quick Sync. However as fuzzynormal posted, a Hackintosh can also be a cost-effective option with good performance.