Everything posted by joema

  1. It's not mine, but it was shot with a GH4. My documentary crew uses the GH4, A6300, A7RII and Panasonic AG-DVX200, plus DSLRs like the D810 and 5D3. The A6300 produces very good-looking 4k material if equipped with the right lens, and if exposed and processed correctly. Of all those cameras, the 5D3 probably produces the best 1080p image out of camera, even if the resolution may not technically be true 1080 by actual measurement. However, the A7RII in 4k Super35 crop mode has somewhat better low-light ability, and 4k gives a lot more pixels to work with.

A lot of the "look" depends on the lens and its relationship to sensor size. A full-frame sensor with a high-quality f/2.8 telephoto lens can produce very nice-looking footage, whether that is 4k or 1080p. But since few full-frame 4k cameras with direct pixel readout exist, the real-world comparison is often between a 4k direct-readout crop-mode sensor like the A7RII and a full-frame 1080p sensor like the 5D3.

The GH2 and GH3 can be very good with the right lens and proper lighting. In the 2012 Zacuto shootout, Francis Ford Coppola and several others preferred the GH2 over more expensive cameras: http://www.eoshd.com/2012/07/zacuto-revenge-shootout-part-2-results-revealed-francis-ford-coppola-and-audience-majority-give-win-to-gh2/ Of course, cameras and sensors have progressed a lot since then, so it will be interesting to see how the GH5 performs.
  2. From an editing standpoint it is really nice to have 4k material -- especially if finishing in 1080p. Below is an example of locked-down GH4 footage that is manipulated in post. As you said, the 1080p of some newer 4k cameras is worse than that of the "old" 1080p-only cameras before them. Sadly, that is another reason to shoot in 4k -- because they made 1080p worse in some cameras.

In theory, 4k 8-bit 4:2:0 can be transcoded to 1080p 10-bit 4:4:4 (provided you don't crop or stabilize; see the sketch below). That is another advantage for 1080p delivery -- 4k can provide the bit depth and chroma sampling of an external 1080p HDMI recorder without the complexity. However, 4k makes the "data wrangling" task of post production a lot harder.
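Here is a minimal Python/numpy sketch of where the extra bit depth comes from, shown for the luma plane only and assuming a plain 2x2 box average (real downscalers use better filters and must handle the subsampled chroma planes separately):

```python
import numpy as np

# Simulated 8-bit UHD luma plane (3840x2160), values 0-255.
rng = np.random.default_rng(0)
uhd_8bit = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)

# Group the frame into 2x2 blocks and sum each block: four 8-bit
# samples (0..255 each) sum to 0..1020, which fits exactly in 10 bits.
# That sum is the source of the 10-bit precision in the 1080p result.
blocks = uhd_8bit.reshape(1080, 2, 1920, 2)
hd_10bit = blocks.sum(axis=(1, 3))

print(hd_10bit.shape, int(hd_10bit.max()))  # (1080, 1920), at most 1020
```

The same 4-into-1 averaging is why the 4:2:0 chroma, which is half-resolution in each axis at 4k, effectively becomes full-resolution (4:4:4) relative to the 1080p frame.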
  3. Out now: FCP X 10.3

    I edited with Premiere for years before switching (mostly) to FCPX. You can get the job done in either editor. Both are used to edit Hollywood feature films, although most of those are edited in Avid. Assuming "Premiere" means the entire Adobe suite, you have a wider array of tools. E.g., you can do spectral audio editing using Audition, whereas in FCPX you'd have to get an expensive external tool like RX5 for that. Premiere is available on both Windows and Mac, so you can build a very powerful Windows editing machine using the latest hardware, whereas FCPX is Mac-only, so you're limited to that hardware.

OTOH, FCPX is generally faster and more efficient. Running on my 2015 iMac 27, it transcodes and exports to H264 about 4x faster than Premiere CC -- on the same hardware.

A big advantage of FCPX is digital asset management. It is essentially a database merged with an editing program. Premiere, by contrast, has limited ability to catalog, tag and keyword content, and no ability to do this on ranges within clips. Working on a large project with 50 or more hours of material, it is easy to get bogged down just trying to find content. I worked on a large documentary using Premiere and that was a big problem. We evaluated CatDV (an external asset manager), but back then it was unsuitable, so we ended up having to write a complex Excel spreadsheet to keep track of all the content. By comparison, FCPX has a built-in asset manager and makes finding content easy -- including tagged and keyworded ranges within clips. The FCPX "skimmer" is vastly faster than any other editor's and facilitates rapid visual searches for content.

Many people find FCPX easier to use -- initially. However, IMO FCPX is harder to fully learn and exploit. E.g., Premiere (at least until recently) had no storage management features, so obviously there was nothing to learn. FCPX has both managed and unmanaged libraries, plus all kinds of side issues related to this -- consolidation, creating "lean" transfer libraries, etc.

For people coming from other track-based editors like Avid, Vegas, etc., Premiere is familiar and requires no fundamental reorientation. By contrast, using FCPX most efficiently requires adopting a different workflow -- using the metadata features, tagging and keywording content in the Event Browser *before* you start cutting on the timeline, etc. This is especially true regarding the magnetic timeline. E.g., making a "split edit", aka "J cut" or "L cut", in Premiere is intuitive and straightforward -- the audio and video tracks are separate and this visually reinforces what you're doing. In FCPX, making that same edit without detaching the audio is not as intuitive.

Up until the recent FCPX 10.3 release, Premiere had a major ease-of-use advantage in doing certain tasks on a multicam clip. E.g., you could easily apply stabilization, optical flow smoothing or color-correction tracking directly to the multicam clip. By contrast, FCPX required a complex workaround of looking up the timecode range in the base clips. As of 10.3 this has been improved, but I haven't fully tested it.

From a cost standpoint, Premiere (for the whole suite) is about $50 per month per person, and Adobe essentially discontinued any non-profit discount with CC. FCPX is a $299 one-time purchase, you can use it on all the computers that "you own or control", and updates thus far have been free. If you ever stop paying Adobe $50 per month, you lose access to your projects, although your rendered output will still be there. IOW, you are never "vested" in the software no matter how many years you pay. OTOH, $50 a month is a lot less immediate out-of-pocket expense than the previous one-time purchase of the Adobe suite, which was thousands of dollars, and for that monthly price you are getting a huge amount of diverse software which is continuously updated.
  4. Quality is very good -- not equal to the G3, but very good. It's just a lot bigger, so if you care about aesthetics it's harder to conceal. However, it is common for news-style interviews to have a hand-held mic in the frame, so it's no different than that. For windy outdoor conditions there is a foam wind muff for some of these, but we usually don't use it. Using one (depending on your viewpoint) makes the mic (a) even bigger or (b) less obvious by making it look less "technical". Another technique (especially easy with 4k) is to put the mic a little lower on the shirt, then crop it out in post.
  5. Well, you know your own needs, and if you're experienced with PP, just stay with that. The problem is H.265/HEVC is extremely compute-intensive. A new 4k TV can handle this because the manufacturer can add dedicated hardware support for H.265 decoding. Digital TV broadcasts currently use MPEG-2 (US ATSC) or H.264 (some other systems), and Blu-Ray uses H.264, but squeezing 4k into over-the-air channel bandwidth will require H.265. Testing is ongoing, and years in the future the upgraded ATSC 3.0 TV standard will support that. This will probably also be used by satellite and cable providers, but that is years away. UHD Blu-Ray will apparently use H.265/HEVC, but the decoding for that is currently only available in stand-alone hardware players. I don't think any PC or Mac can play a 4k UHD Blu-Ray disc.

The Quick Sync in Intel's Skylake (used in the 2015 iMac 27) supports H.265 hardware acceleration at 8 bits per color channel, so if playback and editing software supports that, it will be vastly faster. The upcoming Kaby Lake will support H.265 at 10 bits per color channel on-chip, but that will not be used for broadcast. FCPX has used Quick Sync for years, but unfortunately Adobe has not put this in PP for the Mac yet. They made some ambiguous statements at the last PP update which might imply they began using Quick Sync in PP on Windows.

nVidia has hardware support for H.264 and H.265 in certain graphics cards, via the NVENC API. Likewise, AMD has this in certain cards, accessed via the VCE API. However, software developers must write to those APIs, and there are various versions and many different cards out there. Note this fixed-function logic for video acceleration is separate from the GPU's shader cores, even though it lives on the same silicon. The software API fragmentation between NVENC and VCE, plus the multiple versions of each, discourages developers from using them. By contrast, most computers with a Sandy Bridge or later Intel CPU have Quick Sync (excepting Xeon), so it's a broader platform to target (see the probe sketch below).

The problem with Macs is you can't change the GPU card to obtain better performance or to harness new software which has recently added support for NVENC or VCE. So (hypothetically speaking) if Adobe chose to support nVidia's NVENC over Quick Sync, there would be nothing the typical Mac owner could do, since recent Macs use AMD GPUs.
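As a small illustration of that fragmentation, here is a hedged Python sketch (it assumes ffmpeg is installed and on the PATH) that asks a local ffmpeg build which HEVC encoders it exposes; the hardware-backed ones only appear if the build enabled that vendor's API:

```python
import subprocess

# List the HEVC encoders this ffmpeg build was compiled with.
# "libx265" is the pure-software encoder; "hevc_qsv" (Intel Quick Sync),
# "hevc_nvenc" (nVidia NVENC) and "hevc_amf" (AMD) only show up if the
# build -- and the machine -- support that particular hardware path.
out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True, check=True,
).stdout

hevc_encoders = [line.split()[1] for line in out.splitlines()
                 if "hevc" in line.lower()]
print(hevc_encoders)  # e.g. ['libx265', 'hevc_qsv'] on a Quick Sync build
```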
  6. FCPX exports to H264 at about 3.5x to 4x the performance of Premiere on Macs with Sandy Bridge or later CPUs (excepting Xeon on the Mac Pro). It is a huge performance difference. The CPU load during editing is also much lower in FCPX, maybe because Apple uses Quick Sync, which Premiere does not, at least on Mac.

That said, you're right that FCPX does not yet support H265/HEVC and Premiere does, but H265 is new and has limited support everywhere. To my knowledge the only camera which used it was the Samsung NX1, which was cancelled. If you give an H265 file to somebody, they might not be able to play it without specific help, and if their computer doesn't have specialized H265 hardware acceleration, it won't play smoothly. I've tested numerous 4k H265/HEVC files on my 2015 top-spec iMac 27, and several of them play sluggishly in every available player. The CPU load of H265 can be up to 10x that of H264, which is why hardware support for H265 encode/decode will be important -- whenever H265 becomes widely adopted.

Re Compressor, it costs $49 (one-time purchase), which is the same as Adobe's monthly rental fee for their suite including Premiere CC. If I had to edit a lot of H264 4k using Premiere CC, I would personally build a powerful Windows PC for that, not use a Mac. Premiere's recently-added proxy feature makes a huge improvement when editing H264 4k files.
  7. The problem is H264 4k is four times the data of 1080p (see the quick math below). It is an incredible load on any editing machine. Even FCPX can struggle with this on a top-spec 2015 iMac 27, and it uses hardware-accelerated Quick Sync on Sandy Bridge and later Intel CPUs (excluding Xeon). GPUs by themselves cannot meaningfully accelerate H264 encode/decode, so import, export and scrubbing the timeline are mostly CPU-bound tasks if no effects are used. Effects can often (but not always) be GPU-accelerated, but this does not remove the CPU load of H264 encode/decode -- it just adds another burden.

The bottom line is if you want fluid, responsive H264 4k editing, you generally need to use proxy files -- whether on Premiere CC or FCPX. A higher-end Mac Pro or powerful Windows workstation might be able to avoid that, but not an iMac. I edit lots of 4k every day on my 2015 top-spec iMac 27 using both FCPX and Premiere CC. It does fine on 1080p, but for my taste it's just not fast enough on 4k without using proxy files, except in limited situations with small single-camera clips. Other people might tolerate some sluggishness, but it gets irritating pretty quickly.

Since the iMac is about to be refreshed, I'd recommend waiting to see what that includes. For the first time in several years, new 14/16nm GPU technology is available which may provide a significant increase on the GPU side. Although the GPU is mostly only usable for effects, this is still an issue, so the more GPU horsepower the better. E.g., if plain editing seems slow on 4k, try applying a computationally intensive effect like Neat Video noise reduction. This and similar effects are incredibly slow on 4k, whether using GPU or CPU rendering. For effects using GPU rendering, at least there is the option of a faster GPU on machines where that is available.
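The "four times the data" claim is just pixel arithmetic; a quick Python back-of-envelope:

```python
# Pixels per frame: UHD vs 1080p.
uhd = 3840 * 2160   # 8,294,400 pixels
hd = 1920 * 1080    # 2,073,600 pixels
print(uhd / hd)     # 4.0 -- 4x the pixels per frame

# At 30 fps, the long-GOP decoder must reconstruct every one of these
# via motion compensation, largely on the CPU:
print(uhd * 30)     # 248,832,000 pixels per second
```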
  8. My documentary crew has many G3s and they have been pretty reliable. We've had a few interference issues over the years, but not many. However, we are usually not in a dense urban environment.

We also sometimes use the "lipstick"-shaped Canon and Sony Bluetooth wireless lavs. They are harder to conceal, but for informal walk-up interviews that is often OK. They are quicker to clip on than threading the G3 wire through the subject's clothing. We've never had interference issues with them, probably because they use 2.4 GHz and adaptive frequency hopping, whereas the G3s use a single frequency between 500 and 600 MHz.

The new Sony ECM-W1M receiver mounts directly to a Sony hot shoe, so that is nice when using Sony cameras: https://amzn.com/B00HPM086C
The ECM-W1M is similar to the Sony ECM-AW4, which uses a 1/8" audio out instead of a Sony hot shoe, so it works with any camera: https://amzn.com/B00JWU6WWO
The ECM-AW4 probably uses the same internals as the now-discontinued Canon WM-V1: https://www.bhphotovideo.com/c/product/751267-REG/Canon_5068B001_WM_V1_Wireless_Microphone.html

Re Fuzzynormal's point of what's the use of monitoring if you can't stop -- in most cases you *can* stop, you just don't want to. I think most of us in the doc community have shot lots of interviews both monitored and unmonitored. Unmonitored audio is really dangerous because what looks OK on a meter could have all kinds of issues, including clipping, background noise, clothing noise, etc. I have shot lots of unmonitored stuff, and also had to spend many hours trying to fix it in iZotope RX5 -- that is no fun.

That new Tascam DR-10L locally-recording lav looks pretty good, and I have already pre-ordered one for testing. However, despite the dual-level recording, it doesn't solve all the possible issues that require monitoring. But the dual levels cover some situations and the lack of wireless interference covers others, so it probably will be useful in some cases.
  9. If you do not purchase a subscription to Premiere CC (which includes AME), you do not get H265 support. You can install the evaluation version of Premiere/AME CC on other computers, which will work for 30 days, but it will not include H265 support.
  10. This can be a confusing area. H265 is a new and important codec, and it's obvious that potential buyers may want to test Premiere/AME's ability to handle this before buying the product: https://forums.adobe.com/message/3804375#3804375

Adobe recognized back with CS5 that a feature-limited version of the product prevented proper evaluation:

"CS5 and earlier lacked many of the most useful and popular codecs...This meant that people had a hard time evaluating the software for real-world use."

"The trial version of Adobe Premiere Pro CS5.5, and later includes all of the codecs that are included with the full version of Adobe Premiere Pro CS5.5. This means that you can import and export to all of the supported file formats using the trial version."

Unfortunately this is no longer the case with CC. If you want to evaluate Premiere/AME's ability to handle H265, you will have to buy the product via a subscription.
  11. No. You said "If you are willing to pay Comcast $80 a month for a highly compressed crap picture who is an idiot in this scenario NBC or you?....Dude... $50 antenna and problem solved". That is not an option for the majority of viewers today. It may not be an option for you in the future either, as the FCC plans on auctioning off the hugely valuable TV spectrum to wireless companies. They can do this because only about 7% of US households use antennas for OTA TV reception: http://www.tvtechnology.com/news/0002/cea-study-says-seven-percent-of-tv-households-use-antennas/220585

An indoor or tabletop antenna does not work for many users. Anyone interested in this can use the tools at http://www.antennaweb.org/ to examine their location and geography with respect to the antenna type, size and compass heading required to receive local stations. You often cannot stick a gigantic (highly directional) antenna in your attic, for several reasons: (1) insufficient turning radius, (2) interference from metallic HVAC or insulation. That said, a 4-bay or 8-bay UHF bow-tie antenna can work well in an attic if (a) you have an attic, (b) all the stations you need are within a narrow compass-heading range, (c) all the stations are on UHF (some HD channels are VHF), and (d) there is no major interfering metallic ducting or foil insulation. I have a 4-bay UHF bow-tie antenna and mast-mounted preamp in my attic and it works fairly well, although all the stations I need are within a narrow azimuth range (hence no rotator required), and they are fairly close. Many common factors thus make it impractical to use an indoor or attic antenna.

HOAs increasingly restrict outdoor antennas; however, the 1996 FCC OTA reception rule says these restrictions can usually be challenged. Unfortunately, most users are not aware of this: https://www.fcc.gov/media/over-air-reception-devices-rule

So hopefully you can see that people who pay Comcast $80 a month are not idiots, and the problem is often far more difficult than "Dude... $50 antenna and problem solved". Besides being a professional documentary filmmaker, I have the highest class of ham radio license and have built many antennas by hand, including UHF, VHF and HF. I regularly teach classes on RF techniques, signals and modulation. I have installed many large TV antenna, rotator and low-noise mast-mounted preamp systems.

It's important to give the OP the right advice. The advice to "buy a C100 mk II" does not work for the OP, since that camera is not permitted under his 100 megabit/sec criterion. Although unstated in this case, networks which levy such requirements often require 10-bit 4:2:2 as well, which the C100 Mark II also does not do internally. My main point was that many networks have so little professionalism and commitment to quality that they allow the distribution chains handling their licensed content to grossly degrade the image, while hypocritically demanding standards like 100 megabit/sec for submitted material. I wanted to ensure everyone knows some networks disregard this at will, as shown in the links I posted above. But this doesn't mean shooting on an EOS M1 or M2 is the best approach, since they just aren't optimal from either a codec or an operational standpoint. If the OP literally must adhere to the delivery requirements (which likely include 100 megabit/sec and possibly 10-bit 4:2:2), he'll have to get a camera, or combination of camera and recorder, which supports those.

If transcoding is permissible, then 4k 8-bit 4:2:0 can be converted to 1080p 10-bit 4:4:4: http://www.provideocoalition.com/can-4k-4-2-0-8-bit-become-1080p-4-4-4-10-bit-does-it-matter/ In that case he could probably use a GH4, which is a great camera if equipped with the right lenses and accessories. If that is not permissible, then it will be very interesting to see how the networks react to the GH5, which apparently will hit every checkbox they have previously used to exclude "lesser" cameras. Will they raise the arbitrarily-enforced extreme delivery standards yet again? Or will they simply use approved and unapproved equipment lists and exclude the GH5 that way?
  12. The networks have power over distributors like Comcast -- they simply choose not to exercise that power because quality is not a priority. If Comcast decided to cut the bit rate to 200 kilobits/sec to free up bandwidth for local shopping channels, thereby reducing the main program to a pixelated slide show, they'd get a call from the networks very quickly, as advertisers would be irate when viewers bailed.

Re "who is an idiot" for not having an OTA antenna: a diminishing fraction of users have antennas, down to 7% by some estimates. The 93% you call "idiots" often have no choice and cannot practically use antennas. Since 1996, the FCC's Over The Air Reception Devices Rule says many HOA antenna prohibitions can be challenged, but most people are not aware of this and cannot afford the hassle anyway: https://www.fcc.gov/media/over-air-reception-devices-rule

Another issue with OTA TV is that the value of the occupied RF spectrum is huge, and many other players want that spectral real estate. You may call those not using OTA "idiots", but when that last OTA spectral real estate is grabbed for other purposes, you'll find yourself in that category.
http://www.tvnewscheck.com/article/91163/fate-of-ota-tv-hangs-in-the-balance-in-2016
http://variety.com/2013/biz/news/its-big-tv-vs-big-telecom-over-broadcast-spectrum-1200329490/

Re "do a pro job...just get a loan and buy a C100 mk II": that camera only does 8-bit 4:2:0 internally -- at only 24 megabits/sec. It would be rejected out of hand by the criteria the OP mentioned. Of course you can hang an HDMI ProRes recorder off it to achieve greater bit depth and chroma subsampling, but you didn't mention that. Despite these limits, the networks widely use the C100 and similar DSLRs (without any external recorders). The rules about bitrate and color depth are largely arbitrary and ignored whenever the networks so choose.

CNN using a variety of DSLRs and Canon C-series cameras: https://joema.smugmug.com/Photography/CNN-Moneyline-DSLR-Shoot/n-ffF2JW/
CNN using 5D Mark III: https://joema.smugmug.com/Photography/CNN-Using-5D-Mark-III/n-5JqGgB/
CNN field segment shot on C100: https://joema.smugmug.com/Photography/CNN-DSLR-Video/n-scsdxs/
ABC News shooting three-camera interview in front of White House: https://joema.smugmug.com/Photography/ABC-News-Using-DSLRs/n-BsScJC/
ABC Nightline using video DSLR: https://joema.smugmug.com/Photography/ABC-Nightline-Using-DSLR/n-HwH8hG/
2014 Super Bowl commercial for Gold's Gym shot using Canon DSLRs: https://joema.smugmug.com/Photography/DSLRs-shoot-Arnold-Golds-Gym/n-jzcNXR/
  13. It is ironic that networks require this, since the technical quality they deliver is often so poor. Note this frame grab of NBC footage from the Olympics: it is smeared, blurry, and full of artifacts. Their excuse would probably be "it's not us, it's Comcast". However, transmission of network content is a signal chain that's only as strong as its weakest link. If they permit gross degradation of image quality at any point in the chain, then being persnickety about technical matters at other points is simply lost in the noise. It implies they don't really care about image quality. The technical quality of NBC Olympic content delivered to end users was so bad that the footage below from 1894 was actually better. Imagine that -- some of the first film footage ever shot, and it's better than what NBC delivered. Despite having supercomputers on a chip, satellites in space, and optical fiber spanning the globe, the delivered quality was worse than an old piece of film.
  14. Go back to Kodachrome -- your life will definitely be easier since it's no longer available. However Kodachrome did make the world look like a "sunny day". That's because it was so slow you could only shoot on a sunny day.
  15. Re the point Axel and I made about the importance of CPU and GPU, and the limited importance of I/O beyond about 500 MB/sec, for H264 video editing: I should add that some highly experienced people feel otherwise. Larry Jordan, for example, says I/O is the most important, GPU next and CPU last, and that you can edit 4k on almost any computer. That is roughly the opposite of my experience as a professional video editor using both FCPX and Premiere CC, but I edit a lot of H264 and only transcode to ProRes or other lower-compression codecs when it's unavoidable. In general, I/O rates aren't that high when editing a long-GOP codec -- otherwise the puny little CPU in the camera could not write the data to the card fast enough (see the numbers below). When configuring a computer for editing, I/O is important, but buying more I/O than necessary usually results in short-changing yourself elsewhere. E.g., getting an external SSD array and then running out of space because you didn't realize how rapidly video editing consumes disk space.
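To put numbers on that, here is a quick Python back-of-envelope using 100 Mbps 4k XAVC-S as a concrete case:

```python
# Long-GOP camera codecs are specified in megabits/sec; disks are
# rated in megabytes/sec, so divide by 8.
xavc_s_4k_mbps = 100
stream_mb_per_sec = xavc_s_4k_mbps / 8        # ~12.5 MB/s per stream
three_cam_mb_per_sec = 3 * stream_mb_per_sec  # ~37.5 MB/s for 3-cam multicam

print(stream_mb_per_sec, three_cam_mb_per_sec)
# Even three simultaneous camera-native streams need well under 50 MB/s,
# a small fraction of the ~500 MB/sec figure above -- which is why the
# CPU, not the disk, is usually the bottleneck with H264.
```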
  16. I agree with all of these points except the last. I have a Pegasus R4 and several other larger Thunderbolt RAID-5 arrays on a top-spec 2015 iMac 27. The OP mentioned editing photos via Lightroom or video via Final Cut or maybe Premiere Pro, 1080p footage now and 4k later. Those have entirely different demands. Almost anything can edit H264 1080p. By contrast, significant amounts of H264 4k are really hard on almost any computer. FCPX is considerably faster than Premiere CC (I have both), but even FCPX can bog down on 4k, especially multicam. It generally requires transcoding to proxy for the smoothest performance on 4k, which takes lots of space: proxy is about 2x the space of the H264 camera files, and optimized ProRes is about 8x the space.

I have six Macs and one Windows machine and like OS X, but if I were editing mostly in Premiere I'd build or buy a high-end Windows machine for this. You have a lot more configuration options, and (as of today) the performance options on the Mac side are limited. This will probably change this fall with the new iMacs and hopefully a refreshed nMP.

Re Lightroom, it can definitely be sluggish even on a top-spec iMac 27 if editing lots of high-megapixel raw stills. It is unclear if this is a GPU limit due to the 5k screen or a CPU limit (say, from bit-block-transfer operations). If you do lots of production work -- e.g., an event photographer shooting over 1,000 38-50 megapixel raw stills a day -- a high-end Windows machine is probably better.

Your "fast enough" statement is correct and often misunderstood. For most video editing it generally doesn't help to have 1,000 MB/sec or more -- often obtained at great financial cost and by sacrificing capacity. Long before you need 1,000 MB/sec, you are bottlenecked on CPU or GPU. And as you said, having media on smaller super-fast storage means you often don't have space to transcode to more efficient codecs. This means the high-speed storage has actually made the performance problem *worse*, not better, since the most common limits are CPU and GPU, not I/O.

However, I don't agree that redundant arrays eliminate the need for extra backups. You can easily have a problem from user error, system software error, application software error, etc. which jeopardizes your data. RAID only helps with disk hardware problems. At a bare minimum I'd suggest Time Machine backup, and it's really best to also have a disconnected offline backup using Carbon Copy Cloner, etc. in addition to Time Machine. And for critical material you probably want additional backups beyond these.
  17. What is the source of this information? My understanding is Skylake already has full hardware support for 8-bit H.265/HEVC (such as output by the NX1). It was Haswell and Broadwell which had partial support. This was tested here: http://labs.divx.com/hevc-hwaccel-skylake Kaby Lake will have hardware support for 10-bit HEVC but this has nothing to do with whether Skylake has full hardware support for 8-bit HEVC. It does: http://www.fool.com/investing/general/2016/01/28/understanding-the-biggest-improvement-intel-corp-i.aspx
  18. I use both FCPX 10.2.3 and Premiere CC 2015.3.
  19. Camcorders

    My group has a G30 and an XA25. I will be shooting some instructional material with the G30 tomorrow, just because it's easy. We usually use larger-sensor cameras, but cameras like these are very nice for certain things. They are straightforward to use, relatively inexpensive, and have superb stabilization. Battery life is good, they don't have a 29-minute recording limit, and they don't overheat. An experienced operator can get good-looking content.

When you consider how much material has been shot with the AG-DVX100 tape-based DV camcorder (including Oscar-nominated documentaries) and how superior modern HD camcorders like the G40 are, you might wonder why anyone would want anything else. The answer is that despite the advantages, they don't have the lush cinematic look of a higher-end large-sensor camera, and they don't do well in low light. Unlike a decade ago, when DV was a common doc format, today even a well-operated entry-level DSLR can produce cinematic-looking material. Viewers have come to expect that, whether they can verbalize it or not.
  20. As an experienced former Premiere editor who moved to FCPX, I can say this can be a difficult transition. It's not like moving between other track-based editors such as Avid or Vegas. The paradigm is radically different, and for some users it entails a lengthy learning curve.

They are both good products. It's true FCPX is faster at various things on the same hardware, but whether this produces the end product any faster is more complex. Premiere users often depend on After Effects or other components of the Adobe suite, so it's often easier to stay with that. They may be part of a workgroup, so changing editing software is not an individual decision.

Re 4K XAVC-S, I have terabytes of this, and while FCPX on a top-spec iMac 27 can handle a single stream without transcoding to proxy, it still requires proxy for smooth editing of 4K H264 multicam. Premiere users on Windows can easily build or buy whatever hardware they need to obtain good performance. On Mac the options are more limited. However, since Premiere now has integrated proxy support, that will solve most performance problems, at the time and space cost of transcoding the files. The transcode is a background process, though, so you can continue to work while it runs.
  21. Almost any 7200 rpm 3.5" drive would work for this, but those are externally powered, hence not very convenient for portable use. For 1080p, there's no problem from a CPU or I/O standpoint. I edit a lot of 4k XAVC-S, and for camera-native files the data rate isn't that high. However, the CPU load is very high, especially for Premiere. This leads to transcoding to proxy (a CPU-bound operation), which takes time and increases the I/O load afterward, since the transcoded files are much less dense.

If you want portability, then staying with a bus-powered drive is nice, but most USB 3 bus-powered hard drives are too slow, IMO. The 4TB Seagate Backup Plus Fast is bus-powered, only about $185, and pretty fast (internally RAID-0): https://amzn.com/B00HXAV0X6 I have several of those and they work well.

Below are other bus-powered external SSD options I don't have personal experience with.
Lacie 1TB Thunderbolt bus-powered SSD ($900): https://eshop.macsales.com/item/Lacie/9000602/
Transcend 1TB Thunderbolt bus-powered SSD ($589): https://amzn.com/B00NV9LTFW
If USB 3 is OK, this 1TB bus-powered external SSD is about $400: https://eshop.macsales.com/item/OWC/ME6UM6PGT1.0/
  22. I have six Macs, three with Fusion Drive and three with SSD. While my media content is usually on external Thunderbolt arrays, I have done lots of testing with smaller projects on SSD. I don't see much performance difference attributable to I/O when editing H264. In hindsight this should be obvious -- if the I/O rate were that high, the puny CPU and I/O system in the camera could not write it to storage fast enough. Anyone who doubts this can simply monitor I/O rates while editing H264 content using Activity Monitor or Windows Performance Monitor -- they aren't that high.

SSD can make a difference if editing lower-compression codecs like ProRes. In that case the I/O rate can be 8x or 10x the camera-native rate -- for a single stream. For three-camera multicam it could be 30x the camera-native rate (see the arithmetic below). In that case you may really need the additional I/O performance, but SSD is often too small or too expensive for those cases.
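A quick Python sketch of those multipliers, using 100 Mbps camera-native H264 as the baseline (the 8x and 30x figures are the ones cited above, not measured values):

```python
# ProRes I/O using the 8x single-stream multiplier cited above.
camera_native = 100 / 8             # ~12.5 MB/s for 100 Mbps H264 4k
prores_stream = camera_native * 8   # ~100 MB/s for one ProRes stream
multicam_3 = prores_stream * 3      # ~300 MB/s for three-camera multicam

print(prores_stream, multicam_3)
# ~300 MB/s sustained is beyond a single spinning disk, which is where
# an SSD or RAID array starts to earn its keep -- if it's big enough.
```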
  23. There is no simple answer. Some systems can edit the camera-native files with good performance for one stream. Most systems cannot do this smoothly for 4k H264 multicam, and some type of transcoding is needed, whether externally before ingest or to proxy during/after ingest. Fortunately, Premiere now supports this and gives various resolution and codec options for proxy, including H264, Cineform and ProRes 422. FCPX always transcodes proxies to quarter-resolution ProRes 422 Proxy, e.g., 1080p from 4k. Also (as already mentioned), not all 4k H264 codecs are the same; some may edit more smoothly in certain software.

For documentary projects with a large shooting ratio, it is nice (in FCPX) to skim through the camera-native files without transcoding all of that material. For scripted narratives or other content with a lower shooting ratio, the workflow might favor transcoding everything up front, or possibly doing initial selects outside the editor before import. Some groups mandate ProRes recording off the camera, so all their cameras either do this internally or have external recorders. Others do the initial evaluation and selection using camera-native files. Still others transcode to a mezzanine codec before ingest. It depends on the equipment, preferences and workflow policies of the group.

My group can shoot a terabyte of 4k H264 per weekend, so we don't transcode to ProRes before or after ingest, since that would be at least 8 terabytes (see the math below). We selectively transcode to proxy after ingest if needed for 4k multicam.
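The storage budgeting behind that decision, as a quick Python sketch using the ratios mentioned in these posts (proxy about 2x camera native, optimized ProRes about 8x):

```python
# Storage per weekend shoot, using the ratios quoted above.
camera_native_tb = 1.0                 # 1 TB of 4k H264 per weekend
proxy_all_tb = camera_native_tb * 2    # ~2 TB if everything gets a proxy
prores_all_tb = camera_native_tb * 8   # ~8 TB if everything goes to ProRes

print(proxy_all_tb, prores_all_tb)
# Selectively proxying only the multicam material keeps the footprint
# near the 1 TB of camera-native files rather than the 8 TB worst case.
```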
  24. Editing 4k H264 is often sluggish with virtually any editing software on almost any computer. It is just inherently hard -- it's 4x the data per frame of HD, and it's stored in a compressed long-GOP format which must be decoded on playback. It is generally a CPU-bound task, not I/O- or GPU-limited. FCPX is much faster than Premiere at this, but even FCPX can struggle with 4k H264 multicam. Even on a high-end machine I would never edit 4k H264 multicam without transcoding to proxy -- in any editor.

On my 2015 top-spec iMac 27 (4 GHz i7-6700K, 32GB, 1TB SSD, M395X, 16TB Thunderbolt RAID-5), Premiere CC 2015.3 is borderline usable on a single stream of 4k H264. I have never seen *any* problem with pure 1x playback (at 1/4 res) of 4k H264 in Premiere CC on my iMacs or on a several-year-old Windows PC with a 4 GHz i7-875K CPU and GTX-660. The lag happens when scrubbing the timeline or using JKL commands to rapidly change from FF to REW -- not during 1x playback. If your system can't even do 1x playback, there may be something wrong, either in the configuration or the hardware. Make sure your Source and Program monitors are set to 1/4 resolution.

I could see editing small single-cam 4k projects without transcoding. For 4k H264 multicam, transcoding to proxy is essential whether using FCPX or Premiere. Fortunately, Premiere 2015.3 has added proxy support, which greatly speeds up H264 editing. I have tested this on 4k XAVC-S content from my A7RII and similar content from a Panasonic AG-DVX200. The downside is you must transcode, but at least Premiere now supports that internally; it can be done during or after ingest. In the limited testing I've done, Premiere is about twice as slow as FCPX at transcoding to proxy, but it gets the job done.

If you want fast, fluid 4k H264 editing in Premiere, you need either a custom-built machine (or equivalent) or transcoding to proxy. Even using FCPX on a top-spec 2015 iMac, you have to transcode to proxy for the fastest, smoothest editing performance on 4k H264.
  25. PC for NX1

    Most computers and editing software can struggle with 4k H264 or H265, depending on the specifics. 4k is 4x the data per frame of 1080p, and the encode/decode process is very CPU-intensive and cannot generally be GPU-accelerated. H265 is even worse on the decode side -- much more CPU-intensive than H264.

Adobe has made some major performance improvements in the latest 2015.3 (10.3) version of Premiere, including proxy media and apparently Quick Sync encode/decode (Windows only), although they have not described it that way. Your best bet is to upgrade to Premiere 2015.3 and use the proxy feature.

You could also consider upgrading the GPU. The new GTX-1070 and 1080 are much faster than your GTX-960. This generally won't help with encode/decode, but it will definitely help with effects. If Adobe is using nVidia's NVENC hardware-assisted encode/decode logic, it might help even that; however, to my knowledge Adobe has never described this in detail.