Posts posted by joema

  1. 1 hour ago, mercer said:

    ...It looks nice. Great job. What did you shoot that with?...I could argue that the 1080p from the GH3 is more filmic looking and looks better than the 4K from the GX85 or A6300....

    It's not mine, but it was shot with a GH4. My documentary crew uses the GH4, A6300, A7RII and Panasonic AG-DVX200, plus DSLRs like the D810 and 5D3. The A6300 produces very good-looking 4k material if equipped with the right lens and if exposed and processed correctly. Of all those cameras, the 5D3 probably produces the best 1080p image straight out of camera, even if the resolution may not technically measure as true 1080. However the A7RII in 4k Super35 crop mode has a little better low-light ability, and 4k gives a lot more pixels to work with. 

    A lot of the "look" depends on the lens and the relationship to the sensor size. A full frame sensor with a high-quality f/2.8 telephoto lens can produce very nice looking footage, whether that is 4k or 1080p. But since few full-frame 4k cameras exist with direct pixel readout, the real-world comparison is often between a 4k direct-readout crop-mode sensor like the A7RII vs a full-frame 1080p sensor like the 5D3. 

    The GH2 and GH3 can be very good with the right lens and proper lighting. In the 2012 Zacuto shootout, Francis Ford Coppola and several others preferred the GH2 over more expensive cameras: http://www.eoshd.com/2012/07/zacuto-revenge-shootout-part-2-results-revealed-francis-ford-coppola-and-audience-majority-give-win-to-gh2/

    Of course cameras and sensors have progressed a lot since then, so it will be interesting to see how the GH5 performs.

  2. 5 hours ago, Oliver Daniel said:

    ...I still use 1080p mostly on the FS5 and DJI X5R - the benefit of 4k is negligible if you are finishing in 1080p...The A7SII for example, has a worse 1080p mode than the original. 

    From an editing standpoint it is really nice to have 4k material -- especially if finishing in 1080p. Below is an example of locked-down GH4 footage that is manipulated in post.

    As you said, the 1080p of some newer 4k cameras is worse than that of the "old" 1080p-only cameras before them. Sadly that is another reason to shoot in 4k -- because they made 1080p worse in some cameras. In theory 4k 8-bit 4:2:0 can be transcoded to 1080p 10-bit 4:4:4 (provided you don't crop or stabilize). That is another advantage when delivering in 1080p -- 4k can provide the bit depth and chroma sampling of an external 1080p HDMI recorder, without the complexity. However 4k makes the "data wrangling" task of post-production a lot harder.
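
    To illustrate the 4:2:0-to-4:4:4 point above, here is a rough numpy sketch (my own illustration with made-up sample data, not what any NLE actually does internally): summing each 2x2 block of 8-bit luma samples during the UHD-to-1080p downscale gives values in a roughly 10-bit range, and since UHD 4:2:0 chroma is already stored at 1920x1080, every 1080p pixel ends up with its own chroma sample.

    import numpy as np

    # Fake 8-bit UHD planes for illustration: full-res luma (3840x2160) and
    # half-res chroma (1920x1080), i.e. 4:2:0 subsampling.
    rng = np.random.default_rng(0)
    y_uhd = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)
    cb = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint16)
    cr = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint16)

    # Downscale luma by summing each 2x2 block: four 8-bit samples sum to a
    # 0..1020 range, i.e. roughly 10 bits of precision per 1080p pixel.
    y_1080 = (y_uhd[0::2, 0::2] + y_uhd[0::2, 1::2] +
              y_uhd[1::2, 0::2] + y_uhd[1::2, 1::2])

    # The UHD 4:2:0 chroma planes are already 1920x1080, so at 1080p every
    # pixel now has its own chroma sample -- effectively 4:4:4.
    print(y_1080.shape, int(y_1080.max()) <= 1020, cb.shape, cr.shape)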

     

  3. 15 hours ago, Davey said:

    ....How much 'better' is Premiere as an editing suite over FCPX - for somebody with the same footage from the same camera and an expert with both, what can Premiere do that FCPX can't with regards to producing the exact same film?

    I edited with Premiere for years before switching (mostly) to FCPX. You can get the job done in either editor. Both are used to edit Hollywood feature films, although most of those are edited in Avid.

    Assuming "Premiere" means the entire Adobe suite, you have a wider array of tools. E.g., you can do spectral audio editing using Audition, whereas with FCPX you'd have to get an expensive external tool like RX5 for that.

    Premiere is available on both Windows and Mac, so you can build a very powerful Windows editing machine using the latest hardware, whereas FCPX is Mac-only so you're limited to that hardware.

    OTOH FCPX is generally faster and more efficient. Running on my 2015 iMac 27, it transcodes and exports to H264 about 4x faster than Premiere CC -- on the same hardware.

    A big advantage of FCPX is "digital asset management". It is essentially a database merged with an editing program. Premiere by contrast has limited ability to catalog, tag and keyword content, and no ability to do this on ranges within clips. Working on a large project with 50 or more hours of material, it is easy to get bogged down just trying to find content. I worked on a large documentary using Premiere and that was a big problem. We evaluated CatDV (an external asset manager) but back then it was unsuitable, so we ended up having to write a complex Excel spreadsheet to keep track of all the content.

    By comparison FCPX has a built-in asset manager and makes finding content easy -- including tagged and keyworded ranges within clips. The FCPX "skimmer" is vastly faster than scrubbing in any other editor and facilitates rapid visual searches for content.

    Many people find FCPX easier to use -- initially. However IMO FCPX is harder to fully learn and exploit. E.g., Premiere (at least until recently) had no storage management features, so obviously there was nothing to learn. FCPX has both managed and unmanaged libraries, plus all kinds of side issues related to this -- consolidation, creating "lean" transfer libraries, etc. 

    For people coming from other track-based editors like Avid, Vegas, etc., Premiere is familiar and requires no fundamental reorientation. By contrast, using FCPX most efficiently requires adopting a different workflow -- using the metadata features, tagging and keywording content in the Event Browser *before* you start cutting on the timeline, etc. This is especially true regarding the magnetic timeline. E.g., making a "split edit", aka "J cut" or "L cut", in Premiere is intuitive and straightforward -- the audio and video tracks are separate and this visually reinforces what you're doing. In FCPX, making that same edit without detaching the audio is not as intuitive.

    Up until the recent FCPX 10.3 release, Premiere had a major ease-of-use advantage in doing certain tasks on a multicam clip. E.g., you could easily apply stabilization, optical flow smoothing or color-correction tracking directly to the multicam clip. By contrast FCPX required a complex workaround of looking up the timecode range in the base clips. As of 10.3 this has been improved but I haven't fully tested it.

    From a cost standpoint, Premiere (for the whole suite) is about $50 per month per person, and Adobe essentially discontinued any non-profit discount with CC. FCPX is $299 for a one-time purchase and you can use it on all the computers that "you own or control", and updates thus far have been free. If you ever stop paying Adobe $50 per month, you lose access to your projects, although your rendered output will still be there. IOW you are never "vested" in the software no matter how many years you pay.

    OTOH $50 a month is a lot less immediate out-of-pocket expense than the previous one-time-purchase of the Adobe suite, which was thousands of dollars. For that monthly price you are getting a huge amount of diverse software which is continuously updated.

  4. 8 hours ago, The Chris said:

    How's the quality with the Bluetooth mics? I'm going to be doing some traveling, but it would be nice to grab some sound from people I meet when shooting. This is much smaller than my G3 setup. Thanks. 

    Quality is very good -- not equal to the G3, but very good. The mic itself is just a lot bigger, so if you care about aesthetics it's harder to conceal. However it is common for news-style interviews to have a hand-held mic in the frame, so it's no different than that.

    For windy outdoor conditions there is a foam wind muff for some of these, but we usually don't use it. Using one (depending on your viewpoint) makes the mic (a) even bigger or (b) less obvious by making it look less "technical". 

    Another technique (especially easy with 4k) is to put the mic a little lower on the shirt, then crop it out in post. 

  5. 1 hour ago, Mat Mayer said:

    ...I MUST export H.265 files and H.264 files in an MP4 wrapper at SPECIFIC bitrates...I have very simple timelines (just one clip to grade) and am not using PP more than once a week, so slow performance is cool. I have PP templates to use too, so want to stay with that. Hopefully the new Macs at the end of this month will be a nice step up in speed. My mediocre Windows laptop is good enough for editing, so I just want the best bang for my buck with an iMac screen as they are so much nicer to use.

    My experience is that more new 4K SMART TVs play H.265 than H.264...I don't know why computers can't play them, but they generally have trouble with H.264 in 4k too in my experience... The H.265 files are far smaller, which is why I assume the TVs want to play them -- in readiness for streaming smaller files. I think H.265 will be the norm next year and am hoping that cameras start filming in it too, starting with the GH5.

    Well, you know your own needs, and if you're experienced with PP just stay with that. The problem is H.265/HEVC is extremely compute-intensive. A new 4k TV can handle this since the manufacturers can add hardware support for H.265 decoding. Current digital TV broadcasts use MPEG-2 or H.264 (depending on the standard), as does Blu-Ray, but squeezing 4k into over-the-air channel bandwidth will require H.265. Testing is ongoing, and the upgraded ATSC 3.0 TV standard, still years in the future, will support that. This will also probably be used by satellite and cable providers, but that too is years away. UHD Blu-Ray will apparently use H.265/HEVC, but the decoding for that is currently only available in stand-alone hardware players. I don't think any PC or Mac can play a 4k UHD Blu-Ray disc.
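
    To put rough numbers on the bandwidth point (every figure below is my own ballpark assumption for illustration, not a value from any spec):

    # Back-of-the-envelope only; all bitrates are assumptions for illustration.
    atsc_channel_mbps = 19.4   # approximate payload of one current ATSC channel
    uhd_h264_mbps = 35.0       # plausible 2160p H.264 broadcast-quality rate (assumed)
    hevc_vs_h264 = 0.5         # H.265 is often cited at roughly half the H.264 bitrate (assumed)

    uhd_hevc_mbps = uhd_h264_mbps * hevc_vs_h264
    print(f"4k H.264 ~{uhd_h264_mbps:.0f} Mbit/s vs a ~{atsc_channel_mbps} Mbit/s channel: doesn't fit")
    print(f"4k H.265 ~{uhd_hevc_mbps:.0f} Mbit/s: plausible within one channel")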

    The Quick Sync in Intel's Skylake (used in the 2015 iMac 27) supports H.265 hardware acceleration at 8 bits per color channel, so if playback and editing software supports that it will be vastly faster. The upcoming Kaby Lake will support H.265 at 10 bits per color channel on-chip, but that will not be used for broadcast. FCPX has used Quick Sync for years, but unfortunately Adobe has not put this in PP for the Mac yet. They made some ambiguous statements at the last PP update which might imply they began using Quick Sync in PP on Windows.

    nVidia has hardware support for H.264 and H.265 in certain graphics cards, via the NVENC API. Likewise AMD has this in certain cards, accessed via the VCE API. However software developers must write to those APIs, and there are various versions and many different cards out there. Note this fixed-function video encode/decode logic is separate from the GPU's general-purpose compute units, even though it is bundled on the same graphics card. The API fragmentation between NVENC and VCE, plus the multiple versions of each, discourages developers from using them. By contrast, most computers with a Sandy Bridge or later Intel CPU have Quick Sync (excepting Xeon), so it's a broader platform to target.
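
    To give a feel for that fragmentation, here is a hypothetical sketch (using ffmpeg's encoder names purely as an example -- this is not Adobe's code or any shipping implementation) of the kind of per-vendor branching a developer ends up writing:

    import subprocess

    def available_encoders() -> set:
        """Ask this ffmpeg build which video encoders it exposes."""
        out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                             capture_output=True, text=True).stdout
        return {line.split()[1] for line in out.splitlines() if line.strip().startswith("V")}

    def pick_h264_encoder() -> str:
        """Prefer a fixed-function hardware block, fall back to software x264."""
        have = available_encoders()
        for enc in ("h264_qsv",           # Intel Quick Sync
                    "h264_nvenc",         # nVidia NVENC
                    "h264_amf",           # AMD VCE (exposed via AMF)
                    "h264_videotoolbox",  # Apple VideoToolbox on macOS
                    "libx264"):           # software fallback
            if enc in have:
                return enc
        raise RuntimeError("no usable H.264 encoder found")

    print(pick_h264_encoder())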

    The problem with Macs is you can't change the GPU card to obtain better performance or to harness new software which has recently added support for NVENC or VCE. So (hypothetically speaking) if Adobe chose to support nVidia's NVENC over Quick Sync, there would be nothing the typical Mac owner could do, since recent Macs use AMD GPUs.

  6. 3 hours ago, Mat Mayer said:

    ....just learned this doesn't even export H.265 yet. And I need to be able to choose the export bitrate. This jamoke software lets you choose things like "Best" instead of a specific value. Looks like you have to use a separate software just for that (Compressor). Sticking with the devil I know PP: drag and drop, edit color, then export. Slow but it works. Looks like 28th October is when new iMacs might be announced....

    FCPX exports to H264 at about 3.5x to 4x the performance of Premiere on Macs with Sandy Bridge or later CPUs (excepting Xeon on the Mac Pro). It is a huge performance difference. The CPU load during editing is much lower on FCPX, maybe because Apple uses Quick Sync which Premiere does not, at least on Mac.

    That said, you're right that FCPX does not yet support H265/HEVC and Premiere does, but H265 is new and has limited support everywhere. To my knowledge the only camera which recorded in it was the Samsung NX1, which was discontinued. If you give an H265 file to somebody they might not be able to play it without specific help, and if their computer doesn't have specialized H265 hardware acceleration it won't play smoothly. I've tested numerous 4k H265/HEVC files on my 2015 top-spec iMac 27 and several of them play sluggishly in every available player. 

    The computational load of H265 on the CPU can be up to 10x that of H264, which is why hardware support for H265 encode/decode will be important -- whenever H265 becomes widely adopted.

    Re Compressor, it costs $49 (one-time purchase), which is about the same as Adobe's monthly rental fee for their suite including Premiere CC.

    If I had to edit a lot of H264 4k using Premiere CC, I would personally build a powerful Windows PC for that, not use a Mac.

    Premiere's recently-added proxy feature makes a huge improvement when editing H264 4k files.

  7. 5 hours ago, Mat Mayer said:

    Naaaa, I like the simplicity of sticking with camera files, but it's nice to know the option is there for proxies...Plus I want that new 5K screen after using a cheap monitor for 2 years and an average laptop....Just been to look and torn between the one in 1st post (£1600) and upgrading to i7 and the next step up graphics card (M395 with 2GB) for £500 extra.

    The problem is that H264 4k is four times the data of 1080p. It is an incredible load on any editing machine. Even FCPX can struggle with this on a top-spec 2015 iMac 27, and it uses hardware-accelerated Quick Sync on Sandy Bridge and later Intel CPUs (excluding Xeon). 

    GPUs by themselves cannot meaningfully accelerate H264 encode/decode, so import, export and scrubbing the timeline is mostly a CPU-oriented task if no effects are used. Effects can often (but not always) be GPU-accelerated, but this does not remove the CPU load from H264 encode/decode -- it just adds another burden.

    The bottom line is if you want fluid, responsive H264 4k editing you generally need to use proxy files -- whether on Premiere CC or FCPX. A higher-end Mac Pro or powerful Windows workstation might be able to avoid that but not an iMac. I edit lots of 4k every day on my 2015 top-spec iMac 27 using both FCPX and Premiere CC. It does fine on 1080p, but for my taste it's just not fast enough on 4k without using proxy files, except in limited situations for small single-camera clips. Other people might tolerate some sluggishness but it gets irritating pretty quickly.

    Since the iMac is about to be refreshed I'd recommend waiting to see what that includes. For the first time in several years, new 14/16nm GPU technology is available, which may provide a significant increase on the GPU side. Although the GPU mostly helps only with effects, those can still be a bottleneck, so the more GPU horsepower the better.

    E.g., if just editing seems slow on 4k, try applying a computationally-intensive effect like Neat Video noise reduction. This and similar effects are incredibly slow to run on 4k, whether using GPU or CPU rendering. For effects using GPU rendering, at least there is the option of a faster GPU on machines where this is available.

  8. On 10/4/2016 at 11:50 AM, M Carter said:

    I've been on shoots with cheap wireless and lots of problems. Even the Sennheiser G3s can get the occasional interference.

    My personal rule is "wireless only if you absolutely need wireless"...

    My documentary crew has many G3s and they have been pretty reliable. We've had a few interference issues over the years but not many. However we are usually not in a dense urban environment.

    We also sometimes use the "lipstick"-shaped Canon and Sony Bluetooth wireless lavs. They are harder to conceal, but for informal walk-up interviews that is often OK. They are quicker to clip on than plumbing the G3 wire through the subject's clothing. We've never had interference issues with them, probably because they use 2.4 GHz and adaptive frequency hopping, whereas the G3s use a single frequency in the 500-600 MHz range. The new Sony ECM-W1M receiver mounts directly to a Sony hot shoe, so that is nice when using Sony cameras: 

    https://amzn.com/B00HPM086C

    The ECM-W1M is similar to the Sony ECM-AW4 which uses a 1/8" audio out instead of a Sony hot shoe, so it works with any camera: https://amzn.com/B00JWU6WWO

    The ECM-AW4 probably uses the same internals as the now-discontinued Canon WM-V1: https://www.bhphotovideo.com/c/product/751267-REG/Canon_5068B001_WM_V1_Wireless_Microphone.html

    Re Fuzzynormal's point of what's the use of monitoring if you can't stop -- in most cases you *can* stop, you just don't want to. I think most of us in the doc community have shot lots of interviews both monitored and unmonitored. Unmonitored audio is really dangerous because what looks OK on a meter could have all kinds of issues, including clipping, background noise, clothing noise, etc. I have shot lots of unmonitored stuff, and also had to spend many hours trying to fix it in iZotope RX5 -- that is no fun.

    That new Tascam DR-10L locally-recording lav looks pretty good and I already pre-ordered one for testing. However, despite the dual-level recording, it doesn't solve all the possible issues that require monitoring. But the dual levels cover some situations and the lack of wireless interference covers others, so it will probably be useful in some cases.

  9. 1 hour ago, 瞿盛龙 said:

    My friend did not buy Adobe Media Encoder CC 2015.3 but can use it normally, yet when it is installed on other computers it cannot be used -- who knows the mystery?

     

    If you do not purchase a subscription to Premiere CC (which includes AME), you do not get H265 support. You can install the evaluation version of Premiere/AME CC on other computers, which will work for 30 days, but that will not include H265 support. 

  10. 8 hours ago, 瞿盛龙 said:

    Who knows how to solve?

    This can be a confusing area. H265 is a new and important codec and it's obvious that potential buyers may want to test Premiere/AME's ability to handle this before buying the product: https://forums.adobe.com/message/3804375#3804375

    Adobe recognized back with CS5 that a feature-limited version of the product prevented proper evaluation: "CS5 and earlier lacked many of the most useful and popular codecs...This meant that people had a hard time evaluating the software for real-world use."

    "The trial version of Adobe Premiere Pro CS5.5, and later includes all of the  codecs that are included with the full version of Adobe Premiere Pro  CS5.5. This means that you can import and export to all of the supported  file formats using the trial version."

    Unfortunately this is no longer the case with CC. If you want to evaluate Premiere/AME's ability to handle H265, you will have to buy the product via a subscription.

  11. 18 hours ago, Damphousse said:

    ...I asked you who is an idiot NBC or the person who chooses to pay for crappier picture quality AND complain while ignoring the free option the big bad corporation is beaming into their house.  Most people aren't complaining so they aren't idiots.  I don't usually sit around bitching about problems I can solve for free.

    No. You said "If you are willing to pay Comcast $80 a month for a highly compressed crap picture who is an idiot in this scenario NBC or you?....Dude... $50 antenna and problem solved".

    That is not an option for the majority of viewers today. It may not be an option for you in the future, as the FCC plans on auctioning off the hugely valuable TV spectrum to wireless companies. They can do this because only about 7% of US households use antennas for OTA TV reception: http://www.tvtechnology.com/news/0002/cea-study-says-seven-percent-of-tv-households-use-antennas/220585

    Quote

    And I don't know what HOAs have to do with anything.  My antenna is about two feet long and sits on a table.  Why would my HOA care about it?  You can stick a gigantic antenna inside your attic and the HOA wouldn't even know about it nor care.

    An indoor or tabletop antenna does not work for many users. Anyone interested in this can use the tools at http://www.antennaweb.org/ to examine their location and geography with respect to the antenna type, size and compass heading required to receive local stations. You often cannot stick a gigantic (highly directional) antenna in your attic for several reasons: (1) insufficient turning radius, and (2) interference from metallic HVAC ducting or insulation.

    That said, a 4-bay or 8-bay UHF bow-tie antenna can work well in an attic if (a) you have an attic, (b) all the stations you need are within a narrow compass heading range, (c) all the stations are on UHF (some HD channels are VHF), and (d) there is no major interfering metallic ducting or foil insulation. I have a 4-bay UHF bow-tie antenna and mast-mounted preamp in my attic and it works fairly well, although all the stations I need are within a narrow azimuth range (hence no rotator required), and they are fairly close.

    So, many common factors often make it impractical to use an indoor or attic antenna. HOAs increasingly restrict outdoor antennas; however, the FCC's 1996 OTA reception rule says these restrictions can usually be challenged. Unfortunately most users are not aware of this: https://www.fcc.gov/media/over-air-reception-devices-rule

    So hopefully you can see that people who pay Comcast $80 a month are not idiots, and the problem is often far more difficult than "Dude... $50 antenna and problem solved".

    Quote

    Honestly you should read up on electromagnetic waves and digital broadcasting before posting misinformation.  Everybody that comes to my house and sees my setup is actually surprised.  The impediment is ignorance not a big bad corporation.

    Besides being a professional documentary filmmaker, I have the highest class ham radio license and have built many antennas by hand, including UHF, VHF and HF. I regularly teach classes on RF techniques, signals and modulation. I have installed many large TV antenna, rotator and low-noise mast-mounted preamp systems.

    Quote

    Bottom line it's this guy's job.  Get proper gear, shoot the documentary, and then sell it when done.  Not worth thinking about nor arguing about.  Spend your energy on the creative stuff.

    It's important to give the OP the right advice. The advice to "buy a C100 mk II" does not work for the OP, since that camera is not permitted under the 100 megabit/sec criterion he mentioned. Although unstated in this case, networks which levy such requirements often also require 10-bit 4:2:2, which the C100 Mark II does not do internally either. 

    My main point was that many networks have so little professionalism and commitment to quality that they allow the distribution chains handling their licensed content to grossly degrade the image, while hypocritically demanding standards like 100 megabit/sec for submitted material. 

    I wanted to ensure everyone knows some networks widely disregard this at will, as shown in the links I posted above. But this doesn't mean shooting on an EOS M1 or M2 is the best approach, since they just aren't optimal from either a codec or an operational standpoint.

    If the OP literally must adhere to the delivery requirements (which likely include 100 megabit/sec and possibly 10-bit 4:2:2) he'll have to get a camera or combination of camera and recorder which support those. 

    If transcoding is permissible, then 4k 8-bit 4:2:0 can be converted to 1080p 10-bit 4:4:4: http://www.provideocoalition.com/can-4k-4-2-0-8-bit-become-1080p-4-4-4-10-bit-does-it-matter/ In that case he could probably use a GH4, which is a great camera if equipped with the right lenses and accessories.

    If that is not permissible, then it will be very interesting to see how the networks react to the GH5, which apparently will hit every check box they have previously used to exclude "lesser" cameras. Will they raise the arbitrarily-enforced extreme delivery standards yet again? Or will they simply use approved and unapproved equipment lists and exclude the GH5 this way?

  12. 6 hours ago, Damphousse said:

    Oh, brother.  If you are willing to pay Comcast $80 a month for a highly compressed crap picture who is an idiot in this scenario NBC or you?

    My advice is use 1930s technology called an antenna and watch NBC for free with a much sharper picture.  Does NBC have to come to your house and spoon feed you?...

    Using Comcast as an excuse to do a pro job with an M1 is weak....If someone is thinking of doing this as a job I would probably just get a loan and buy a C100 mk II.  Use it for the project and sell it....

     

    The networks have power over distributors like Comcast -- they simply choose not to exercise that power because quality is not a priority. If Comcast decided to cut the bit rate to 200 kilobits/sec to free up bandwidth for local shopping channels, thereby reducing the main program to a pixelated slide show, they'd get a call from the networks very quickly, as advertisers would be irate when viewers bailed. 

    Re "who is an idiot" for not having an OTA antenna, a diminishing fraction of users have antennas, down to 7% by some estimates. The 93% of those you call "idiots" often have no choice and cannot practically use antennas. Since 1996 the FCC's Over The Air Reception Devices Rule says many HOAs can be challenged regarding antenna prohibitions but most people are not aware of this and cannot afford the hassle anyway: https://www.fcc.gov/media/over-air-reception-devices-rule

    Another issue with OTA TV is that the occupied RF spectrum is hugely valuable, and many other players want that spectral real estate. You may call those not using OTA "idiots", but when that last OTA spectrum is grabbed for other purposes, you'll find yourself in the same category.

    http://www.tvnewscheck.com/article/91163/fate-of-ota-tv-hangs-in-the-balance-in-2016

    http://variety.com/2013/biz/news/its-big-tv-vs-big-telecom-over-broadcast-spectrum-1200329490/

    Re "do a pro job...just get a loan and buy a C100 mk II", that camera only does 8-bit 4:2:0 internally -- at only 24 megabits/sec. It would be rejected out of hand by the criteria the OP mentioned. Of course you can hang an HDMI ProRes recorder off it to achieve greater bit depth and chroma sub-sampling, but you didn't mention that.

    Despite these limits, the networks widely use the C100 and various DSLRs (without any external recorders). The rules about bitrate and color depth are largely arbitrary and are ignored whenever the networks so choose.

    CNN using a variety of DSLRs and Canon C-series cameras: https://joema.smugmug.com/Photography/CNN-Moneyline-DSLR-Shoot/n-ffF2JW/

    CNN using 5D Mark III: https://joema.smugmug.com/Photography/CNN-Using-5D-Mark-III/n-5JqGgB/

    CNN field segment shot on C100: https://joema.smugmug.com/Photography/CNN-DSLR-Video/n-scsdxs/

    ABC News shooting three-camera interview in front of White House: https://joema.smugmug.com/Photography/ABC-News-Using-DSLRs/n-BsScJC/

    ABC Nightline using video DSLR: https://joema.smugmug.com/Photography/ABC-Nightline-Using-DSLR/n-HwH8hG/

    2014 Super Bowl commercial for Gold's Gym shot using Canon DSLRs: https://joema.smugmug.com/Photography/DSLRs-shoot-Arnold-Golds-Gym/n-jzcNXR/

     

     

  13. On September 17, 2016 at 4:33 AM, Frank5 said:

    ...A TV channel has offered to broadcast my latest work for which I am in preproduction. Their tech specs require footage to be recorded with at least 100Mbit/s on a "professional video camera" and require a special permission to use DSLR footage....

    It is ironic that networks require this, since the technical quality they deliver is often so poor. Note this frame grab of NBC footage from the Olympics. It is smeared, blurry, and full of artifacts. Their excuse would probably be "it's not us, it's Comcast". However, transmission of network content is a signal chain that's only as strong as its weakest link. If they permit gross degradation of image quality at any point in the chain, then being persnickety about technical matters at other points is simply lost in the noise. It implies they don't really care about image quality.

    [Frame grab: NBC Olympics broadcast]

     

    The technical quality of NBC Olympic content delivered to end users was so bad that the footage below, shot in 1894, was actually better. Imagine that -- some of the first film footage ever shot, and it's better than what NBC delivered. Despite having supercomputers on a chip, satellites in space, and optical fiber spanning the globe, the delivered quality was worse than an old piece of film.

     

  14. 12 hours ago, Bioskop.Inc said:

    ...Seriously thinking of going back to shooting on film, for pictures or movies/docs - will make life much easier.

    Go back to Kodachrome -- your life will definitely be easier since it's no longer available. However Kodachrome did make the world look like a "sunny day". That's because it was so slow you could only shoot on a sunny day. 

    Re the point Axel and I made about the importance of CPU and GPU, and the limited importance of I/O beyond about 500 MB/sec for H264 video editing: I should add that some highly experienced people feel otherwise. Larry Jordan, for example, says I/O is the most important, GPU next and CPU last, and that you can edit 4k on almost any computer. That is roughly the opposite of my experience as a professional video editor using both FCPX and Premiere CC, but I edit a lot of H264 and only transcode to ProRes or other lower-compression codecs when it's unavoidable. In general, I/O rates aren't that high when editing a long-GOP codec -- otherwise the puny little CPU in the camera could not write the data to the card fast enough.
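
    To put rough numbers on that (the bitrates below are my own approximations, for illustration only):

    # Rough arithmetic: why camera-native long-GOP H.264 rarely stresses a
    # 500 MB/s drive, while transcoded ProRes can. Bitrates are assumptions.
    def mbit_to_mbyte(mbit_per_s: float) -> float:
        return mbit_per_s / 8.0

    xavcs_4k = 100       # Mbit/s, typical 4k XAVC-S camera file (assumed)
    prores_422_4k = 500  # Mbit/s, ballpark for ProRes 422 at UHD (assumed)
    streams = 3          # e.g. a three-camera multicam edit

    print(f"H.264 multicam: ~{mbit_to_mbyte(xavcs_4k) * streams:.0f} MB/s")
    print(f"ProRes multicam: ~{mbit_to_mbyte(prores_422_4k) * streams:.0f} MB/s")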

    When configuring a computer for editing, I/O is important, but buying more I/O than necessary usually means short-changing yourself elsewhere. E.g., getting an external SSD array and then running out of space because you didn't realize how rapidly video editing consumes disk space.

  16. On 8/19/2016 at 1:25 PM, Axel said:

    ...With 4k, you need more storage. My Pegasus is no bottleneck, though it only reaches 500 MB/s read speed....You can set free a lot of CPU performance with optimized media. Since those eat more space, you need it external. Big and fast enough TB drives are the way to go. With redundant arrays you need no extra backups.

    I agree with all of these points except the last. I have a Pegasus R4 and several other larger Thunderbolt RAID-5 arrays on a top-spec 2015 iMac 27. 

    The OP mentioned editing photos via Lightroom or video via Final Cut (or maybe Premiere Pro), 1080p footage now and 4k later. Those have entirely different demands. Almost anything can edit H264 1080p. By contrast, any significant amount of H264 4k is really hard on almost any computer. FCPX is considerably faster than Premiere CC (I have both), but even FCPX can bog down on 4k, especially multicam. It generally requires transcoding to proxy for the smoothest performance on 4k, which takes lots of space. Proxy is about 2x the space of the H264 camera files and optimized ProRes is about 8x.

    I have six Macs and one Windows machine and I like OS X, but if I were editing mostly in Premiere I'd build or buy a high-end Windows machine for this. You have a lot more configuration options, and (as of today) the performance options on the Mac side are limited. This will probably change this fall with the new iMacs and, hopefully, a refreshed nMP.

    Re Lightroom, it can definitely be sluggish even on a top-spec iMac 27 if editing lots of high-megapixel raw stills. It is unclear whether this is a GPU limit due to the 5k screen or a CPU limit (say, from bit-block-transfer operations). If you do lots of production work, e.g., an event photographer shooting over 1,000 38-50 megapixel raw stills a day, a high-end Windows machine is probably better.

    Your "fast enough" statement is correct and often misunderstood. For most video editing it generally doesn't help to have 1,000 or higher MB/sec -- often obtained at great cost financially and sacrificing larger size. Long before you need 1,000 MB/sec you are bottlenecked on CPU or GPU.

    And as you said, having media on smaller super-fast storage means you often don't have space to transcode to more efficient codecs. This means the high-speed storage has actually made the performance problem *worse* not better, since the most common limits are CPU and GPU not I/O.

    However I don't agree that redundant arrays eliminate the need for extra backups. You can easily have a problem from user error, system software error, application software error, etc. which jeopardizes your data. RAID only helps with disk hardware problems. At a bare minimum I'd suggest Time Machine, and it's really best to have a disconnected offline backup using Carbon Copy Cloner, etc. in addition to Time Machine. And for critical material you probably want additional backups beyond these.

  17. On 8/19/2016 at 10:13 PM, jonpais said:

    Sky Lake has hybrid hardware/software support for HEVC, whereas Kaby Lake is supposed to be fully hardware supported, which should bring improvements, particularly where content is greater than HD (eg 4K)

    What is the source of this information? My understanding is Skylake already has full hardware support for 8-bit H.265/HEVC (such as output by the NX1). It was Haswell and Broadwell which had partial support. This was tested here: http://labs.divx.com/hevc-hwaccel-skylake

    Kaby Lake will have hardware support for 10-bit HEVC but this has nothing to do with whether Skylake has full hardware support for 8-bit HEVC. It does:

    http://www.fool.com/investing/general/2016/01/28/understanding-the-biggest-improvement-intel-corp-i.aspx

  18. 8 hours ago, mercer said:

    Even though it maxes out at 1080p, the G40 has full manual controls, with zebras, peaking and focus assist. It shoots in 24p and 60p. And has a fast 1.8 to 2.8 lens with a 35mm equivalent of 28mm on the wide end and 576mm on the long end...Obviously, the small sensor is the major downside of this camcorder and makes the camera have zero interest to most readers and members of this forum. But it is pretty interesting to see high end features filter down the chain.

    My group has a G30 and XA25. I will be shooting some instructional material with the G30 tomorrow, just because it's easy. We usually use larger-sensor cameras but cameras like these are very nice for certain things. They are straightforward to use, relatively inexpensive, and have superb stabilization. Battery life is good, they don't have a 29 min. recording limit and they don't overheat. An experienced operator can get good looking content.

    When you consider how much material has been shot with the AG-DVX100 tape-based DV camcorder (including Oscar-nominated documentaries), and how superior modern HD camcorders like the G40 are, you might wonder why anyone would want anything else.

    The answer is that, despite those advantages, it doesn't have the lush cinematic look of a higher-end large-sensor camera, and it doesn't do well in low light. Unlike a decade ago, when DV was a common doc format, today even a well-operated entry-level DSLR can produce cinematic-looking material. Viewers have come to expect that, whether they can verbalize it or not.

  19. 14 hours ago, Asmundma said:

    Why not use FCPX? There's no problem editing full-resolution 4K XAVC-S files. It's also faster to use for editing (after some learning). Renders faster according to many tests.

    If you're on Mac, it's a no-brainer. 

    No problem here with 4K A7s2 files....and FCPX 

    Speaking as an experienced former Premiere editor who moved to FCPX: this can be a difficult transition. It's not like moving between other track-based editors such as Avid or Vegas. The paradigm is radically different, and for some users it entails a lengthy learning curve. They are both good products. It's true FCPX is faster at various things on the same hardware, but whether this gets the end product done any faster is more complex. Premiere users often depend on After Effects or other components of the Adobe suite, so it's often easier to stay with that. They may be part of a workgroup, so changing editing software is not an individual decision.

    Re 4K XAVC-S, I have terabytes of this, and while FCPX on a top-spec iMac 27 can handle a single stream without transcoding to proxy, it still requires proxy for smooth editing of 4K H264 multicam.

    Premiere users on Windows can easily build or buy whatever hardware they need to obtain good performance. On the Mac the options are more limited. However, since Premiere now has integrated proxy support, that will solve most performance problems, at the time and space cost of transcoding the files. The transcode is a background process, though, so you can continue to work while it runs.

  20. 7 hours ago, dafreaking said:

    So my main objective is probably using a SSD as a media drive for onsite edits for 70% XAVC-S 1080p (50M) and 30% XAVC 4K-S (100M). As these are time sensitive (things like same day edits and next day edits) would an SSD be worth it? or should I get something like a 1TB WD Black (Mobile) and stuff it in an enclosure? The editing will be performed on a decently specced Macbook Pro using Premiere.

    Almost any 7200 rpm 3.5" drive would work for this, but those are externally powered, hence not very convenient for portable use. For 1080p it's no problem from a CPU or I/O standpoint. I edit a lot of 4k XAVC-S, and for camera-native files the data rate isn't that high. However the CPU load is very high, especially in Premiere. This leads to transcoding to proxy (a CPU-bound operation), which takes time and increases the I/O load once complete, since the proxy files are much less dense.

    If you want portability, then staying with a bus-powered drive is nice but most USB 3 bus-powered drives are too slow, IMO. The 4TB Seagate Backup Plus Fast is bus-powered, only about $185, and it's pretty fast (internally RAID-0): https://amzn.com/B00HXAV0X6 I have several of those and they work well. Below are other bus-powered external SSD options I don't have personal experience with.

    Lacie 1TB Thunderbolt bus-powered SSD ($900): https://eshop.macsales.com/item/Lacie/9000602/

    Transcend 1TB Thunderbolt bus-powered SSD ($589): https://amzn.com/B00NV9LTFW

    If USB 3 is OK, this 1TB bus-powered external SSD is about $400: https://eshop.macsales.com/item/OWC/ME6UM6PGT1.0/

  21. 6 hours ago, dafreaking said:

    ...People with real world experiences. How much of a difference (if any) has putting media files on SSDs made for you? More specifically timeline performance, i.e. scrubbing through footage etc.

    I have six Macs, three with Fusion Drive and three with SSD. While my media content is usually on external Thunderbolt arrays, I have done lots of testing with smaller projects on SSD. I don't see much performance difference attributable to I/O when editing H264. In hindsight this should be obvious -- if the I/O rate were that high, the puny CPU and I/O system in the camera could not write the data to storage fast enough. Anyone who doubts this can simply monitor I/O rates while editing H264 content using Activity Monitor or Windows Performance Monitor -- they aren't that high.

    SSD can make a difference if editing lower-compression codecs like ProRes. In that case the I/O rate can be 8x or 10x the camera native rate -- for a single stream. For three-camera multicam it could be 30x the camera native rate. In that case you may really need the additional I/O performance, but SSD is often too small or too expensive in those cases. 

  22. 3 hours ago, dafreaking said:

    So what seems to be the final consensus on this? Convert 4K 100M files to ProRes LT  or HQ or Regular? 

    There is no simple answer. Some systems can edit the camera-native files with good performance for one stream. Most systems cannot do this smoothly for 4k H264 multicam, and some type of transcoding is needed, whether externally before ingest or to proxy during/after ingest. Fortunately Premiere now supports this internally and gives various resolution and codec options for proxy, including H264, Cineform, and ProRes 422. FCPX always transcodes proxies to 1/4-resolution ProRes 422 Proxy, e.g., 1080p from 4k.
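
    For those who do transcode externally before ingest, something like the following works -- a hypothetical sketch using ffmpeg's prores_ks encoder, where the exact settings are just one reasonable choice and not the preset any particular NLE uses:

    import pathlib
    import subprocess

    def make_proxy(src: pathlib.Path, dst_dir: pathlib.Path) -> pathlib.Path:
        """Transcode one 4k H.264 clip to a 1080p ProRes 422 Proxy .mov via ffmpeg."""
        dst_dir.mkdir(exist_ok=True)
        dst = dst_dir / (src.stem + "_proxy.mov")
        subprocess.run([
            "ffmpeg", "-i", str(src),
            "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes 422 Proxy
            "-vf", "scale=1920:-2",                  # downscale 4k to 1080p width
            "-c:a", "pcm_s16le",                     # simple uncompressed audio
            str(dst)
        ], check=True)
        return dst

    # Example: proxy everything in a folder of camera originals (hypothetical paths).
    for clip in pathlib.Path("camera_originals").glob("*.MP4"):
        make_proxy(clip, pathlib.Path("proxies"))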

    Also (as already mentioned) not all 4k H264 codecs are the same. Some may exhibit smoother editing on certain software.

    For documentary projects with a large shooting ratio, it is nice (in FCPX) to skim through the camera native files without transcoding all that. For scripted narratives or other content with a lower shooting ratio, the workflow might favor transcoding everything up front or possibly doing initial selects outside the editor before import.

    Some groups mandate ProRes recording off the camera, so all their cameras either do this internally or have external recorders. Others do the initial evaluation and selection using camera native files. Still others transcode to a mezzanine codec before ingest. It depends on the equipment, preferences and workflow policies of the group. My group can shoot a terabyte of 4k H264 per weekend so we don't transcode to ProRes before or after ingest since that would be at least 8 terabytes. We selectively transcode to proxy after ingest if needed for 4k multicam.

  23. On 7/16/2016 at 10:02 AM, Sackboydad said:

    ....struggling (badly) with PPro (currently the latest version 2015.3) and 4K XAVC-S files from my A7s-II.

    I get stuttering during playback on PPro timeline (or even just playing a clip off the timeline), even without effects.  But it's not constant.  It'll play then just start stuttering.  If I stop then start the timeline, it'll smooth out for a bit then start stuttering again...I have a GTX970 4GB GPU.  I know that's a decent video card.   My CPU is a little old... it's an Intel I7 3770K.  I have 32GB of RAM.  PPro and all it's mediacache files reside on two different SSDs.  The XAVC files are on a WD Black 6TB drives (but I tried putting all the XAVC stuff on SSD and the stuttering wasn't any better)...I am to the point where I HATE editing....I also do not want to transcode my XAVC into another format.  If I open task manager I can see my CPU is pegged at 100% when I playback.

    Editing 4k H264 is often sluggish with virtually any editing software on almost any computer. It is just inherently hard -- it's 4x the data per frame of HD, and it's stored in a compressed "long GOP" format which must be decoded on playback. It is generally a CPU-bound task, not I/O- or GPU-limited.

    FCPX is much faster than Premiere at this but even FCPX can struggle with 4k H264 multicam. Even on a high-end machine I would never edit 4k H264 multicam without transcoding to proxy -- using any editor.

    On my 2015 top-spec iMac 27 (4GHz i7-6700K, 32GB, 1TB SSD, M395X, 16TB Thunderbolt RAID-5), Premiere CC 2015.3 is borderline usable on a single stream of 4k H264. I have never seen *any* problem with pure 1x playback (at 1/4 res) of 4k H264 in Premiere CC on my iMacs or on a several-year-old Windows PC with a 4GHz i7-875K CPU and a GTX 660. The lag happens when scrubbing the timeline or using JKL commands to rapidly change from FF to REW -- not during 1x playback. If your system can't even do 1x playback there may be something wrong, either in the configuration or the hardware. Make sure your Source and Program monitors are set to 1/4 resolution.

    I could see editing small single-cam 4k projects without transcoding. For 4k H264 multicam, transcoding to proxy is essential whether using FCPX or Premiere.

    Fortunately Premiere 2015.3 has added proxy support which greatly speeds up H264 editing. I have tested this on 4k XAVC-S content from my A7RII and similar content from a Panasonic AG-DVX200. The downside is you must transcode, but at least Premiere now supports that internally. It can be done during or after ingest. In limited testing I've done, Premiere is about twice as slow as FCPX at transcoding to proxy, but it gets the job done.

    If you want fast, fluid 4k H264 editing on Premiere, this requires either a custom-built machine (or equivalent), or transcoding to proxy. Even using FCPX on a top-spec 2015 iMac, you have to transcode to proxy for fastest, smoothest editing performance on 4k H264.

  24. 51 minutes ago, ReinisK said:

    Hey!

    How are your computers dealing with NX1 footage? Mine struggles with 4k, but 1080p is alright.

    Since I wasn't satisfied with the performance of 4k editing in Premiere Pro CC 9.1...when I use lumetri color and visioncolor luts (those are quite heavy though, and take longer to process, than some others). If I have a 5-10 minute sequence with some effects (usually warp stabilizer, coloring, some titles) and an adjustment layer for coloring, the performance just drops. I would have to wait for 20 secs or so, just to see one adjustment...My PC is:...Win 10, i7 4771, 24gb ram, 4gb 960 gtx, 240gb ssd...Usually I edit from external usb 3 hdd, have tried moving the project and files to the ssd, but that also doesn't seem to give much improvements.

    Most computers and editing software can struggle with 4k H264 or H265, depending on the specifics. 4k is 4x the data per frame of 1080p, and the encode/decode process is very CPU-intensive and cannot generally be GPU-accelerated. H265 is even worse on the decode side -- much more CPU-intensive than H264. 

    Adobe has made some major performance improvements in the latest 2015.3 (10.3) version of Premiere, including proxy media and apparently Quick Sync encode/decode (Windows only), although they have not described it that way.

    Your best bet is to upgrade to Premiere 10.3 and use the proxy feature. You could also consider upgrading the GPU. The new GTX 1070 and 1080 are much faster than your GTX 960. This generally won't help with encode/decode but will definitely help with effects. If Adobe is using nVidia's NVENC hardware-assisted encode/decode logic, it might even help with that; however, to my knowledge Adobe has never described this in detail.
