
joema
Members · 160 posts

Reputation Activity

  1. Like
    joema got a reaction from jonpais in Which iMac 2017 for editing and grading?   
    It does not. His BruceX time was 18 sec; mine is 15.8 sec (average of several runs). My 2017 iMac 27 is the 4.2GHz i7, 32GB, 2TB SSD, Radeon Pro 580 GPU. 
    His GeekBench 4 multi-core CPU score was 20,300, mine was 20,257. His Cinebench R15 CPU score was better at 1102, mine was 936. 
    These are the vagaries of benchmarking and don't indicate a clear improvement over a factory top-spec iMac. The BruceX benchmark is especially sensitive to technique: whether you restart FCPX or reboot macOS between runs, whether background rendering is off, and whether Spotlight indexing and Time Machine are disabled before the run all affect the result. He was either unaware of these or did not mention them.
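    Run-to-run noise is exactly why a single BruceX number means little. A minimal sketch of reporting the mean and spread over several runs (the timing values below are illustrative, not measurements):

```python
import statistics

# Hypothetical BruceX export times in seconds from repeated runs
# (illustrative values only, not measured results).
runs = [15.7, 15.8, 15.9, 16.2, 15.4]

mean = statistics.mean(runs)
spread = max(runs) - min(runs)  # best-to-worst variation between runs

print(f"mean {mean:.1f} s, spread {spread:.1f} s over {len(runs)} runs")
```

    A spread approaching the gap between two machines means the comparison is within measurement noise.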
  2. Like
    joema got a reaction from Jimmy G in Which iMac 2017 for editing and grading?   
    Just keep in mind the 2017 iMac 27 i7 is twice as fast as a 2016 MBP or 2015 iMac 27 ONLY on *some* things -- specifically transcoding H264 to ProRes proxy. Considering that's a very time-consuming part of many H264 4k workflows, that's really useful. It's also limited to FCPX; the performance difference varies with each application.
    The 2017 iMac 27 i7 is also about 2x as fast (vs a 2015 iMac i7 or a 2016 MBP i7) on the GPU-oriented BruceX benchmark, but this is also a narrow task. On other GPU-heavy or mixed CPU/GPU tasks like Neat Video, it's usefully faster but not 2x faster.
    On a few H264 long GOP 4k codecs I tested, the 2017 iMac 27 i7 seems marginally fast enough to edit single-camera 4k material without transcoding (on FCPX), which is a big improvement from the 2015 iMac 27 i7 or 2016 top-spec MBP. However multicam still requires transcoding to proxy, and if you want to really blitz through the material, then proxy still helps.
    If you use (or ever plan to use) ProRes or DNxHD acquisition, this picture changes completely. The work becomes less CPU intensive but much more I/O intensive. You usually don't need to transcode in those cases, but the data volume and I/O rates increase by 6x, 8x or more.
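    The 6x-8x figure is easy to sanity-check from bitrates alone. A rough back-of-envelope sketch (the 600 Mbps ProRes figure is an approximate UHD data rate used for illustration, not a measurement):

```python
# Approximate video bitrates in megabits per second (illustrative:
# ~100 Mbps for a camera H264 UHD codec, ~600 Mbps for ProRes 422 at UHD).
H264_MBPS = 100
PRORES_MBPS = 600

def gb_per_hour(mbps: float) -> float:
    """Gigabytes of media generated per hour of recording at a given bitrate."""
    return mbps / 8 * 3600 / 1000

print(f"H264:   {gb_per_hour(H264_MBPS):.0f} GB/hr")
print(f"ProRes: {gb_per_hour(PRORES_MBPS):.0f} GB/hr "
      f"({PRORES_MBPS / H264_MBPS:.0f}x the data volume and I/O rate)")
```

    The same arithmetic explains why ProRes acquisition shifts the bottleneck from CPU to storage throughput.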
     
  4. Like
    joema got a reaction from Fredrik Lyhne in Which iMac 2017 for editing and grading?   
    I have top-spec versions of these: 2015 iMac 27, 2017 iMac 27, and 2016 MacBook Pro. I do FCPX editing professionally. In general the new 2017 iMac 27 is much faster on FCPX than the previous 2015 iMac and also the 2016 MBP. The FCPX performance improvement (esp. in H264 transcoding and rendering) of the 2017 model is far greater than synthetic benchmarks would indicate.
    The 2017 iMac 27 is the only machine I've ever used -- including a 12-core Mac Pro D700 -- that was fast enough to edit single-camera H264 long GOP 4k without transcoding. While it's about 2x the performance of the i7 2015 iMac 27 when rendering or exporting H264, and 1.6x faster on the GPU-intensive BruceX benchmark, it's not equally faster on all FCPX tasks and plugins. E.g., it's about 12% faster on Neat Video and 18% faster on Digital Anarchy flicker reduction.
    In theory you'd expect the 2017 iMac to be fastest on GPU-oriented tasks, since the Radeon Pro 580 is much faster than the M395X in the 2015 iMac. However, in FCPX the greatest improvement I've seen is in encode/decode and rendering of H264 material. Kaby Lake did bring Quick Sync improvements, but I thought those mainly expanded H265 coverage rather than boosting H264 performance -- maybe I was wrong.
    Below: time to import and transcode to ProRes proxy ten 4k XAVC-S clips from a Sony A7RII, total running time 11 min 43 sec. It's interesting in this particular test the 2016 MBP was actually faster than the 2015 iMac, so a 2016 MBP is no slouch -- it just can't touch the 2017 iMac 27. Unfortunately I haven't tested the 2017 MBP. All tests repeated three times. 
    2015 iMac 27: 5 min 37 sec
    2017 iMac 27: 2 min 40 sec
    2016 MBP: 3 min 46 sec
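    Converting the times above into ratios relative to the fastest machine:

```python
# Transcode times in seconds for the same ten-clip XAVC-S import,
# taken from the figures quoted above (5:37, 2:40, 3:46).
times = {
    "2015 iMac 27": 5 * 60 + 37,   # 337 s
    "2017 iMac 27": 2 * 60 + 40,   # 160 s
    "2016 MBP":     3 * 60 + 46,   # 226 s
}

fastest = min(times.values())
for machine, secs in times.items():
    print(f"{machine}: {secs} s ({secs / fastest:.2f}x the 2017 time)")
```

    So the 2017 iMac is roughly 2.1x faster than the 2015 iMac and about 1.4x faster than the 2016 MBP on this transcode.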
     
  5. Like
    joema got a reaction from webrunner5 in 1080 vs. 4K: What is REALLY necessary?   
    180 minute final program length, 12TB storage and two GX85s using a 100 megabit/sec codec implies an approximate 60:1 shooting ratio, which is typical for a documentary, or even a bit low by today's standards. Your hardware shows it is possible to do quality professional work on a "shoestring" budget and with fairly low cost equipment.
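    The shooting-ratio arithmetic can be sketched from the codec bitrate. Note this treats the full 12 TB as camera originals, which gives an upper bound closer to 90:1; the quoted ~60:1 presumably allows for proxies, renders and duplicates sharing that storage:

```python
# Back-of-envelope: hours of footage that fit in the stated storage.
CODEC_MBPS = 100        # 100 megabit/sec camera codec
STORAGE_TB = 12
PROGRAM_HOURS = 3       # 180 minute final program

gb_per_hour = CODEC_MBPS / 8 * 3600 / 1000      # 45 GB per recorded hour
total_hours = STORAGE_TB * 1000 / gb_per_hour   # ~267 hr if it were all footage
ratio = total_hours / PROGRAM_HOURS             # ~89:1 upper bound

print(f"{gb_per_hour:.0f} GB/hr, up to {total_hours:.0f} hr of footage, "
      f"<= {ratio:.0f}:1 shooting ratio")
```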
    However I think most people who previously did large documentary projects on DV and H264 1080 (which did not require transcoding for performance on FCPX or Premiere) have been or will be shocked at the huge IT and workflow burden imposed by large-scale H264 4k. This has three components: (1) camera-native material can no longer be edited smoothly and requires time-consuming transcoding; (2) camera file sizes are much larger; and (3) even at 1/4 size, the proxy files themselves consume considerable space.
    I edited a documentary in 2010 shot on DV by multiple DVX100 cameras. The whole thing was about 500 gigabytes, about 40 hr of material. Initial post processing was trivial -- just capture the tapes, import and edit directly in CS4.
    By contrast I'm now working on an all-4k documentary which will ultimately be about 20 terabytes. Just to transcode the material of each shooting location to proxy takes days. It cannot be handed off to downstream editors on a portable hard drive -- we must use a complex FCPX proxy-only workflow, all the while testing and verifying the final relink will work. Right now I have 96 terabytes of Thunderbolt 2 RAID arrays connected to my iMac, and another 200 TB of off-line storage (part for other concurrent projects). I tested a 12-core Mac Pro last week and it was no faster for the time-consuming transcode phase.
    In the old days -- esp. after Premiere's Mercury Playback Engine -- things were very simple. We'd just import and edit. Part of this was the ability to edit camera native, and part was the lower volume of material. We didn't even need an assistant editor. Today (whether on Premiere or FCPX) H264 4k requires time-consuming transcoding to proxy, and the higher shooting ratios require even more time-consuming organizational steps to tag and log. It takes two assistant editors continuously busy handling this, and our storage capacity rivals some datacenters from the 1990s.
    If we shot the same footage in all 1080p H264, it would still require major logging and tagging but the IT issues in transcoding, proxy management, storage management and media distribution for collaborative work would be vastly easier. Thus the thread title is very valid -- is 4k really necessary? As an editor I like 4k. As an assistant editor, DIT and Data Wrangler I hate 4k. Good quality 1080 is so good I don't think most people on most viewing devices will spontaneously notice a difference. But 4k is the mandated future and content producers must eventually figure out how to handle that. As of today virtually no computers or editing software are fast enough to smoothly edit H264 4k without transcoding. That is a big adjustment for people coming from 1080.
  7. Like
    joema got a reaction from jonpais in 1080 vs. 4K: What is REALLY necessary?   
    For a hobbyist playing around, 4k storage and processing is no problem. For a feature film shooting 200 hr of ProRes or raw and an IT team to handle it, also no problem. But for a large swath of production -- including documentary and news gathering -- 4k H264 post processing and storage is a big problem. It is too compute intensive to edit natively so requires transcoding to proxy or optimized media. This in turn greatly increases storage size, post processing time, general IT requirements and complexity. I just spent a week testing a 12-core Mac Pro seeking a better way to handle this and just ordered a 32 terabyte Thunderbolt 2 RAID (on top of many other RAID boxes). 4k is the driver for this.
    In a sense I wish 4k had never been invented since good quality 1080 is really good. In fact everything ABC, Fox and ESPN shoots and broadcasts is 720p/60. All that beautiful Hawaiian cinematography on the ABC TV series "Lost" was broadcast in 720p, but it looked very good.
    However 4k is the new standard and there's no sense fighting it. Discussions about 4k image quality vs 1080 are just the tip of the iceberg. All that 4k content must be processed somehow. It can take a long time to develop post processing hardware, software and procedures to adequately handle a large 4k production. That's why my doc team started two years ago on this, and we are just now getting a handle on it.
  8. Like
    joema got a reaction from jonpais in Which iMac 2017 for editing and grading?   
    The 2-bay LaCie at 400 megabytes/sec is probably OK. However, it should be backed up regularly.
    You will likely have to transcode 4k H264 to proxy for smoothest editing, no matter how fast the iMac is. Even on a 12-core Mac Pro with dual D700 GPUs that is often required. I personally would prefer the 4.2GHz i7 CPU, since so much of video editing and transcoding is CPU-bound. You can save some money by getting the lowest 8GB memory config and using third-party RAM. The exact internal storage is up to you, but I would not get the 1TB Fusion Drive. I have tested 3TB Fusion Drive and 1TB SSD iMacs side-by-side, and for FCPX editing with media on external storage there's no significant performance difference, nor any difference in FCPX startup time. However SSD is simpler, might be a little more reliable, and if you're using external media anyway, why not use SSD?
  10. Like
    joema got a reaction from Jimmy G in Which iMac 2017 for editing and grading?   
    I have a 2013 iMac 27 with 3TB FD, a 2015 top-spec iMac 27 with 1TB SSD, and 2015 and 2016 top-spec MBP 15s, and am testing a 12-core nMP with D700s. This is FCPX 4k documentary editing where the primary codecs are variants of H264.
    Even though FCPX is very efficient, in general H264 4k requires transcoding to proxy for smooth, fluid editing and skimming -- even on a top-spec 12-core nMP. If you have a top-spec MBP, iMac or Mac Pro, smaller 4k H264 projects can be done using the camera-native codec, but multicam can be laggy and frustrating. The Mac Pro is especially handicapped on H264 since the Xeon CPU does not have Quick Sync. In my tests, transcoding 4k H264 to ProRes proxy on a 12-core Mac Pro is nearly twice as slow as a 2015 top-spec iMac 27. For short projects with lower shooting ratios it's not an issue but for larger projects with high shooting ratios it's a major problem.
    We've got ProRes HDMI recorders but strapping on a bunch of 4k recorders is expensive and operationally more complex in a field documentary situation. That would eliminate the transcoding and editing performance problems but would exacerbate the data wrangling task by about 8x. This is especially difficult for multi-day field shoots where the data must be offloaded and backed up.
    However in part the viability of editing camera-native 4k depends on your preferences. If you do mainly single-cam work, and use modest shooting ratios so you don't need to skim and keyword a ton of material, and don't mind a bit of lag during edit, a top-spec iMac 27 is probably OK for H264 4k. 
    Re effects, those can either be CPU-bound or GPU-bound, or a combination of both. Some like Neat Video allow you to configure the split between CPU and GPU. But in general effects use a lot of GPU, and like encode/decode, are slowed down by 4k since it's 4x the data per frame as 1080p. 
    Re Fusion Drive vs SSD, for a while I had both 2013 and 2015 iMac 27s on my desk, one with 3TB FD and the other 1TB SSD. I tested a few small cases with all media on the boot drive, and really couldn't see much FCPX real-world performance difference. You are usually waiting on CPU or GPU. However if you transcode to ProRes, I/O rates skyrocket, making it more likely to hit an I/O constraint.
    Fusion Drive is pretty good, but ideally you don't want media on the boot drive. SSD is fast enough to put media there but it's not big enough; Fusion Drive is big enough but may not be fast enough, thus the dilemma. A 3TB FD is actually a pretty good solution for small-scale 1080p video editing, but 4k (even H264) chews through space rapidly. Also, performance will degrade on any spinning drive (even FD) as it fills up. Thus you don't really have 3TB at full performance, but need to maintain considerable slack space. In general we often underestimate our storage needs, so we end up using external storage even for "smaller" projects. If this is your likely destiny, why not use an SSD iMac, which is at least a bit faster at a few things like booting? Just don't spend your entire budget on an SSD machine and then use a slow, cheap bus-powered USB drive.
    If I were getting a 2017 iMac 27 for H264 4k editing, it would be a high-spec version, e.g., 4.2GHz i7, 32GB RAM, 580 GPU, and probably 1TB SSD. Re the iMac Pro, what little we know indicates the base model will be considerably faster than the top-spec iMac 27 -- it has double the cores (albeit at a slower clock rate) and roughly double the GPU performance. However, unless Apple pulls a miracle out of their hat and upgrades FCPX to use AMD's VCE coding engine, the iMac Pro will not have Quick Sync, so it will be handicapped just like the current Mac Pro for that workflow. Apple is limited by what Intel provides, but this is an increasingly critical situation for productions using H264 or H265 acquisition codecs and high shooting ratios.
  12. Like
    joema got a reaction from KrisAK in iMac Pro   
    There is no simple answer since video editing and codecs span a wide range. H264 1080p can be edited natively with good performance using either Premiere or FCPX on most machines. You don't need a top-end CPU or GPU for this.
    OTOH most H264 4k codecs are difficult to edit, even on top-end machines, and often require transcoding to proxy for smoothest editing. Exceptions are H264 4k codecs like Canon's XF-AVC Intra, which is very fast to edit. There can also be a big difference between (say) Premiere and FCPX, especially on a Mac. In general FCPX is considerably more responsive, especially for editing H264 4k. It is about 4x faster exporting to H264, since it uses Quick Sync and Premiere does not. However Premiere has gotten faster in the last year or so, even without proxy, which it now also has.
    That's the editing; effects are different. No matter how lightweight the codec, a computationally-intensive effect must be calculated for each 4k frame. Effects can be implemented entirely in the CPU, entirely in the GPU or a mixture of both. Some effects like Neat Video allow CPU vs GPU rendering, a mix of both and how many CPU cores to use. 
    In general 4k is really difficult to edit. From a CPU standpoint, the more (and faster) cores the better. An i7 iMac can be significantly faster than an i5 iMac of the same generation because (1) the CPU clock is faster, and (2) hyperthreading. In the current iMac 27, the i7 is about 11% faster just from clock speed. The benefit from hyperthreading varies widely. I used the third-party CPUSetter utility to disable/enable hyperthreading on an i7 iMac, and this made about a 30% difference in FCPX export speed to H264. For other tasks such as Lightroom import and preview generation, it made no difference.
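    Those two factors compound multiplicatively. A rough estimate of the i7-over-i5 advantage on a hyperthreading-friendly task like H264 export, using the ~11% and ~30% figures above (a back-of-envelope sketch, not a measured result):

```python
# Rough multiplicative estimate of i7 vs i5 on an HT-friendly workload.
clock_gain = 1.11   # ~11% from clock speed alone
ht_gain = 1.30      # ~30% from hyperthreading (highly task-dependent)

combined = clock_gain * ht_gain
print(f"~{(combined - 1) * 100:.0f}% faster overall")
```

    On tasks that don't benefit from hyperthreading, only the ~11% clock advantage remains.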
    Re the Radeon 580, I haven't seen any good benchmarks yet. However only certain tasks are amenable to GPU acceleration; e.g., H264 encode/decode cannot be meaningfully accelerated. The core algorithm is inherently sequential and not amenable to applying hundreds of lightweight GPU threads. But in general software developers increasingly try to leverage the GPU where possible. You can't upgrade the GPU in an iMac, so I'd tend to get the fastest one available.
  15. Like
    joema got a reaction from jonpais in iMac Pro   
    I don't work on long features as deliverables, but on 4k documentaries with high shooting ratios and lots of multicam. In this era that's not unusual -- 4k GoPros and drones are everywhere, A and B cam are 4k, etc. I shot a wedding last year and we used lots of 4k multicam.
    There is major editing performance variation among H264 codecs. E.g., the UHD 4k 4:2:0 100 Mbps output from a DVX200 or Sony A7RII is very sluggish -- even in FCPX and on the fastest available iMac. By contrast, the UHD 4k 4:2:2 300 Mbps output from a Canon XC10 is also H264, but it's very smooth and fast to edit. I don't need proxy for that.
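    Much of that difference comes down to GOP structure: long-GOP files force the decoder to unwind many inter-frames per seek, while intra-frame codecs decode every frame independently. One way to see the difference is to count frames between keyframes; a minimal sketch, assuming you've dumped per-frame types with something like `ffprobe -select_streams v -show_entries frame=pict_type` (the sample sequences below are made up for illustration):

```python
def avg_gop_length(frame_types: str) -> float:
    """Average distance between keyframes ('I') in a frame-type sequence.

    frame_types: a string like 'IPBBPBB...' of per-frame picture types,
    as reported by a tool such as ffprobe (samples below are hypothetical).
    """
    key_positions = [i for i, t in enumerate(frame_types) if t == "I"]
    if len(key_positions) < 2:
        return float(len(frame_types))
    gaps = [b - a for a, b in zip(key_positions, key_positions[1:])]
    return sum(gaps) / len(gaps)

long_gop = "I" + "PBB" * 9 + "I" + "PBB" * 9 + "I"   # keyframe every 28 frames
all_intra = "I" * 30                                  # every frame standalone

print(avg_gop_length(long_gop))   # long gaps: costly to seek and scrub
print(avg_gop_length(all_intra))  # 1.0: each frame decodes independently
```

    A long average GOP means skimming and scrubbing must decode long dependency chains, which is why 100 Mbps long-GOP can feel slower than 300 Mbps intra.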
    But I can't control what codecs camera manufacturers use; I just have to deal with it. We have ProRes recorders but generally don't use them due to added complexity in the field. The bottom line is that in an era when high shooting ratios and H264 4k are common, current hardware is often not fast enough without transcoding -- even for FCPX. This isn't just timeline performance: the ability to skim material and mark keyword and favorite ranges is also greatly degraded. It is for these increasingly common cases that the iMac Pro is needed and definitely not overkill.
    Of course we defer compute-intensive effects to the very last step, but ultimately they must be applied *and* iteratively adjusted. Each tweak to stabilization, Neat Video, de-flickering, etc. must be rendered in the timeline to fully evaluate, and this is agonizingly slow on 4k. The greatly improved CPU and GPU performance of the iMac Pro is vitally needed for this.
  16. Like
    joema got a reaction from Axel in iMac Pro   
    That video was the 2014 iMac 27. It was improved in 2015 (what I have) and Max re-tested it and determined it did not have the thermal throttling issues of the 2014 model. 
    Re editing camera-native H264, I am a fan of that where possible -- lots of FCPX users needlessly transcode to optimized media. However, for large quantities of H264 4k you pretty much need proxy -- even if NOT multicam and without Neat or multiple effects. Even for single-cam material, the skimmer is just not fast enough on a top-spec 2015 iMac 27 to blitz through large quantities of H264 4k content. If you play around with a 5 min 4k iPhone video, it's OK without proxy. If you have a single long 4k video (e.g., a classroom lecture) and all you need to do is chop the head and tail, you don't need proxy for that. But for evaluating and seriously editing lots of content, it's just too slow without proxy.
    Re the claim that the iMac 27 is the wrong machine for large proxy transcodes: there really isn't a much better machine. A 12-core nMP has 3x the cores, but they run at 2.7 GHz, so overall it's about 2x the CPU throughput -- and it lacks Quick Sync, so it might not be *any* faster. And buying a four-year-old nMP now? Now *that's* the wrong machine.
    By the same token the iMac Pro might not be hugely faster (for creating proxies) unless Apple figures out some way to use hardware acceleration for H264 decoding on a Xeon machine. But (like the nMP) it would be faster for various other editing and effects-related tasks.
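    The "about 2x the CPU throughput" figure above is simple cores-times-clock arithmetic. A naive sketch (the clock figures are assumptions; it deliberately ignores IPC differences, turbo behavior and Quick Sync, which is exactly why the nMP might still not win on H264 work):

```python
# Naive aggregate CPU throughput estimate: cores x sustained clock (GHz).
# Ignores IPC, turbo and fixed-function hardware like Quick Sync.
def relative_throughput(cores_a: int, ghz_a: float,
                        cores_b: int, ghz_b: float) -> float:
    return (cores_a * ghz_a) / (cores_b * ghz_b)

# 12-core nMP at 2.7 GHz vs a quad-core iMac 27 i7 at an assumed ~4.0 GHz sustained:
print(f"~{relative_throughput(12, 2.7, 4, 4.0):.1f}x the aggregate CPU throughput")
```

    So triple the cores buys only about double the raw throughput, and none of the hardware H264 decode assist.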
  17. Like
    joema got a reaction from jonpais in Don't count Apple (FCPX) out yet .........   
    I edit lots of H264 4k on a 2015 iMac 27 using FCPX. Using proxy and deferring Neat Video to the very last step is best. Also jcs had excellent advice about limiting use of Neat Video. If you haven't used proxy before, this will produce huge performance gains during the edit phase. Before the final export, you must switch back to optimized/original media, else the exported file will be at proxy resolution. That final export will be no faster, but all the editing prior to it will be.
    However I'm not sure just adding 16GB more RAM is the solution. It sounds like a possible memory leak from a bug in either a plugin or FCPX itself. Pursuing that is a step-by-step process of elimination and repeated testing, e.g., eliminate all effects, then selectively add them back until the problem recurs.
    Starting with 10.3.x, there is a new feature to remove all effects from all clips. So you can duplicate the project, then remove all effects from the duplicate then add them back selectively: 
    https://support.apple.com/kb/PH12615?locale=en_US
  18. Like
    joema got a reaction from Axel in Don't count Apple (FCPX) out yet .........   
  19. Like
    joema got a reaction from markr041 in Variable ND filters for video?   
    You generally need some type of ND when shooting outdoors at wide aperture. For scripted shooting, a matte box and drop-in fixed filters may work, but for documentary, news, run-and-gun, etc. a variable ND is handy. This is why upper-end camcorders have long had built-in selectable ND filters.
    However with the move to large sensors, the entire optical path gets larger. It becomes much harder, both mechanically and optically, to fit multiple precision fixed ND filters inside the body. The surface area of an optical element increases as the square of its radius, so it becomes much harder (and more expensive) to make a perfectly flat multicoated filter. The Sony FS5 has an electronic variable ND, which shows how important this is for video.
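    To put numbers on that area scaling (illustrative thread sizes: 67mm is a common filter thread, 95mm is typical of large cine-style zooms):

```python
import math

def filter_area_mm2(diameter_mm: float) -> float:
    """Glass area of a circular filter element, in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

# Stepping up from a 67mm to a 95mm filter thread:
ratio = filter_area_mm2(95) / filter_area_mm2(67)
print(f"A 95mm filter has ~{ratio:.1f}x the glass area of a 67mm filter")
```

    That's about twice the area of glass that must be optically flat and evenly coated, which is where the cost goes.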
    It doesn't make sense to put a $20 filter on a $2500 lens. However filter price and quality are not necessarily directly related.
    In documentary video I've used many different variable ND filters, and here are a few things to look for:
    (1) If at all possible, get one that fits inside a lens hood. This is the most difficult requirement, since there are no specs or standards for it. You use a variable ND outside under bright (often sunny) conditions -- the very conditions where you need a lens hood -- yet most variable ND filters won't fit under most lens hoods. The ideal case is certain Sony A- or E-mount lenses whose hood has a cutout that allows turning the variable ND without removing the hood. However it's very difficult to find a combination that fits.
    (2) Get one with hard stops at each end of the range. Otherwise it's difficult to tell where you are on the attenuation scale, and working that out adds a few seconds, which can make you miss a shot.
    (3) Get one which does not exhibit "X" patterns or other artifacts at high attenuation. This typically happens with filters having more than 6 stops attenuation.
    (4) Get one which has the least attenuation on the low end, ideally 1 stop or less. This reduces the times you have to remove the filter when shooting inside. A filter which goes from 1-6 stops is likely more useful and less likely to have artifacts at high attenuation than one which goes from 2-8 stops.
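    These stop ranges map onto light transmission by powers of two (the standard photographic relationship; the 0.3-per-stop density figure is the usual rounded convention for log10(2) ≈ 0.301):

```python
# Convert ND attenuation in stops to filter factor, % transmission and density.
def stop_info(stops: float):
    factor = 2 ** stops           # each stop halves the light
    transmission = 100 / factor   # percent of light transmitted
    density = 0.3 * stops         # conventional rounded ND optical density
    return factor, transmission, density

for stops in (1, 2, 6, 8):
    factor, pct, dens = stop_info(stops)
    print(f"{stops} stops: factor {factor:g}x, {pct:.2f}% transmitted, ND {dens:.1f}")
```

    A filter with a 1-stop minimum still passes 50% of the light, which is why it can often stay on the lens indoors; a 2-stop minimum passes only 25%.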
  20. Like
    joema got a reaction from Hanriverprod in Variable ND filters for video?   
  21. Like
    joema got a reaction from Kisaha in Variable ND filters for video?   
    We've used Tiffen, Genustech, SLR Magic, NiSi and Heliopan. I didn't like the Tiffen because it made an "X" pattern at high attenuation. I use a 95mm NiSi on my Sony 28-135 f/4 cinema lens, and really like it because it fits under the lens hood, has hard stops and no artifacts: https://www.aliexpress.com/store/product/NiSi-95-mm-Slim-Fader-Variable-ND-Filter-ND4-to-ND500-Adjustable-Neutral-Density-for-Hasselblad/901623_32311172283.html
    However I also have other, smaller NiSi filters I don't like as much because their frames are thicker. Overall the optical quality of the Genustech and SLR Magic seems OK, but most filters impose some color cast that you must correct in post. I just got the Heliopan and haven't thoroughly tested it, but it fits under the lens hood of my Canon 70-200 2.8 IS II, which is a big plus.
    There are lots of variable ND filter "shootouts" on Youtube and other places. I suggest you watch those and buy from a retailer that has a good return policy.
  22. Like
    joema got a reaction from hansel in Variable ND filters for video?   
  23. Like
    joema got a reaction from Kisaha in Variable ND filters for video?   
  24. Like
    joema got a reaction from Ken Ross in New information regarding H.265 on the Panasonic GH5   
    That is a good question, but I don't think anyone knows the answer. Currently there seems to be little need for this, since few cameras use H265. There is a much greater need for updated and new camera formats such as MXF. Apple supports these either in FCPX or in the downloadable Pro Video Formats: https://support.apple.com/kb/DL1898?locale=en_US
    Like Adobe with Premiere Pro, Apple has a trial version of FCPX. If Apple added H265 support to FCPX, the licensing issues might force them to make a special "eval" version without H265. This confuses customers, who expect to evaluate the product against all codecs and formats. In fact Adobe claims the trial version of Premiere CC is absolutely full-featured, yet it lacks H265 support. There is little Adobe can do about that, since the H265 patent holders probably demand royalties on every copy, which conflicts with a free trial version.
    No software developer likes making special versions. Even though the source code change may be small, it still requires separate full-spectrum testing for function, performance, reliability and regressions. Adobe went ahead and did this for their trial version, but Apple may have decided it's not worth the expense and effort at this time.
    Also H265 is extremely compute-intensive to edit -- much more than H264. Except on a limited set of machines 4k H265 would likely require transcoding to provide good editing performance. The few people who need to edit H265 can already transcode it externally. That is not as convenient but until recently every Premiere user on earth had to externally transcode if they wanted proxy capability. Apple may think the few who need H265 support in FCPX can transcode externally for now.
  25. Like
    joema got a reaction from Vladimir in New information regarding H.265 on the Panasonic GH5   
    As I previously described, deployment of H265/HEVC has been slowed for non-technical reasons. There have been major disputes over licensing, royalties and intellectual property. At one point the patent holders were demanding a % of gross revenue from individual end users who encode H265 content. That is one reason Google developed the open source VP9 codec. The patent holders have recently retreated from their more egregious demands, but that negatively tainted H265 and has delayed deployment. 
    The licensing and royalty issue is why the evaluation version of Premiere Pro does not have H265.
    VP9 is replacing H264 on Youtube, and they will transition to VP9's successor AV1 soon. AV1 is also open source, not patent-encumbered, and significantly better than H265/HEVC: http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-AV1-111497.aspx
    Skylake's Quick Sync has partial hardware support for VP9 and Kaby Lake has full hardware support, but I don't know about AV1.