Everything posted by joema

  1. Editing video is one of the hardest things for a computer to do. Your 2009 MBP uses a dual-core Intel "Penryn" CPU with a Geekbench 4 score of about 2000/3400 (single/multicore). By contrast, a two-year-old 2015 MBP does 4500/16000, so each of its cores is faster than both cores on the 2009 machine combined. The 9400M GPU is about 10 times slower than a more recent MBP's; even the integrated GPU in more recent machines is faster. Worst of all, the Penryn CPU does not have Quick Sync, which greatly accelerates H264 encode/decode. It's amazing your machine works as well as it does, which is probably only because FCPX is so efficient. Your best bet is getting a newer machine, not trying to somehow rehabilitate the old one. A brand new one isn't necessary: even an Apple refurbished 13" MacBook Air would be much faster than what you've got, and a refurbished 2015 15" MBP or, if you don't need portability, a refurbished 27" iMac are other possibilities. In the meantime you can improve performance by using proxy files. This is built into FCPX and works very well, although it takes some time and space to build them.
  2. I edit lots of H264 4k on a 2015 iMac 27 using FCPX. Using proxy and deferring Neat Video to the very last step is best; jcs also had excellent advice about limiting use of Neat Video. If you haven't used proxy before, it will produce huge performance gains during the edit phase. Before the final export you must switch back to optimized/original media, else the exported file will be at proxy resolution. That final export will be no faster, but all the editing prior to it will be. However, I'm not sure just adding 16GB more RAM is the solution. It sounds like a possible memory leak from a bug in either a plugin or FCPX itself. Pursuing that is a step-by-step process of elimination and repeated testing, e.g., eliminate all effects, then selectively add them back until the problem recurs. Starting with 10.3.x there is a new feature to remove all effects from all clips, so you can duplicate the project, remove all effects from the duplicate, then add them back selectively: https://support.apple.com/kb/PH12615?locale=en_US
  3. We've used Tiffen, Genustech, SLR Magic, NiSi and Heliopan. I didn't like the Tiffen because it made an "X" pattern at high attenuation. I use a 95mm NiSi on my Sony 28-135 f/4 cinema lens, and really like it because it fits under the lens hood, has hard stops and no artifacts: https://www.aliexpress.com/store/product/NiSi-95-mm-Slim-Fader-Variable-ND-Filter-ND4-to-ND500-Adjustable-Neutral-Density-for-Hasselblad/901623_32311172283.html However I also have other, smaller NiSi filters I don't like as much because the frame is thicker. Overall the optical quality of the Genustech and SLR Magic seems OK, but most filters will impose some color cast that you must correct in post. I just got the Heliopan and haven't thoroughly tested it, but it fits under the lens hood of my Canon 70-200 2.8 IS II, which is a big plus. There are lots of variable ND filter "shootouts" on YouTube and other places. I suggest you watch those and buy from a retailer with a good return policy.
  4. You generally need some type of ND when shooting outdoors at wide aperture. For scripted shooting, a matte box and drop-in fixed filters may work, but for documentary, news, run-and-gun, etc., a variable ND is handy. This is why upper-end camcorders have long had built-in selectable ND filters. However, with the move to large sensors the entire optical path gets larger, and it becomes much harder, both mechanically and optically, to fit multiple precision fixed ND filters inside. The surface area of an optical element increases as the square of the radius, so it becomes much harder (and more expensive) to make a perfectly flat multicoated filter. The Sony FS5 has an electronic variable ND, showing how important this is for video. It doesn't make sense to put a $20 filter on a $2500 lens; however, filter price and quality are not necessarily directly related. In documentary video I've used many different variable ND filters, and here are a few things to look for: (1) If at all possible, get one that fits inside a lens hood. This is the most difficult requirement since there are no specs or standards for this. You use a variable ND outside under bright (often sunny) conditions -- the very conditions where you need a lens hood -- yet most variable ND filters and most lenses are incompatible. The ideal case would be certain Sony A or E-mount lenses with a cutout in the lens hood which allows turning the variable ND filter without removing the hood, but it's very difficult to find one which fits. (2) Get one with hard stops at each end of the range. Otherwise it's difficult to tell where you are on the attenuation scale, and the seconds that adds can make you miss a shot. (3) Get one which does not exhibit "X" patterns or other artifacts at high attenuation. This typically happens with filters having more than 6 stops of attenuation. (4) Get one which has the least attenuation on the low end, ideally 1 stop or less. This reduces how often you have to remove the filter when shooting inside. A filter which goes from 1-6 stops is likely more useful, and less likely to have artifacts at high attenuation, than one which goes from 2-8 stops. (The sketch below shows how ND factors and density ratings convert to stops.)
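    For reference, ND factors, optical-density ratings and stops are all ways of stating the same attenuation. A minimal sketch of the conversions (plain math, no camera-specific assumptions):

        import math

        def nd_to_stops(factor: float) -> float:
            """Stops of attenuation for an ND factor (e.g. ND8 -> 3 stops)."""
            return math.log2(factor)

        def density_to_stops(density: float) -> float:
            """Stops for an optical-density rating (e.g. 0.9 -> ~3 stops)."""
            return density / math.log10(2)

        for nd in (2, 4, 8, 64, 400, 500):
            print(f"ND{nd}: {nd_to_stops(nd):.1f} stops")
        # ND2: 1.0, ND4: 2.0, ND8: 3.0, ND64: 6.0, ND400: 8.6, ND500: 9.0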
  5. The question was exclusively about editing H265 -- not watching it, or using H265 in a closed ecosystem like FaceTime. It's true the mobile phone vendors have been rolling out H265 *hardware* support, but that's because cellular data is so precious they need the extra compression -- typically for their own proprietary video use. It does no wider good if the other end (e.g., YouTube) isn't encoding and sending H265. This article explains: http://www.dtcreports.com/weeklyriff/2016/01/10/why-smartphones-the-initial-products-on-hevc-rollouts-timeline/ By the time a significant % of video editors need native H265 support and have computers that can edit 4k H265 with good performance, FCPX will probably support it. Until then they can do what Premiere editors who needed proxy did for the past 10 years -- transcode externally (a minimal sketch of that follows below).
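    As a concrete example of external transcoding, here is a minimal sketch using Python to drive ffmpeg (assuming ffmpeg is installed; the folder names are hypothetical). It converts H265 clips to ProRes 422, which any current NLE edits natively:

        import subprocess
        from pathlib import Path

        def transcode_to_prores(src: Path, dst_dir: Path) -> Path:
            """Convert a clip to ProRes 422 for smooth native editing."""
            dst = dst_dir / (src.stem + ".mov")
            subprocess.run([
                "ffmpeg", "-i", str(src),
                "-c:v", "prores_ks", "-profile:v", "2",  # 2 = ProRes 422
                "-c:a", "pcm_s16le",                     # uncompressed audio
                str(dst),
            ], check=True)
            return dst

        for clip in Path("h265_footage").glob("*.mp4"):  # hypothetical folder
            transcode_to_prores(clip, Path("transcoded"))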
  6. That is a good question, but I don't think anyone knows the answer. Currently there seems little need for this since few cameras use H265; there is much greater need for updated and new camera formats such as MXF. Apple supports these either in FCPX or via the downloadable Pro Video Formats: https://support.apple.com/kb/DL1898?locale=en_US Like Adobe with Premiere Pro, Apple has a trial version of FCPX. If Apple added H265 support to FCPX, the licensing issues might force them to make a special "eval" version without H265. This confuses customers, since they expect to evaluate the product against all codecs and formats. In fact Adobe claims the trial version of Premiere CC is absolutely full-featured, yet it does not have H265 support. There is little Adobe can do about that, since the H265 patent holders probably demand royalties on every copy, which conflicts with a free trial version. No software developer likes making special versions: even though the source code change may be small, it still requires separate full-spectrum testing for function, performance, reliability and regressions. Adobe went ahead and did this for their trial version, but Apple may have decided it's not worth the expense and effort at this time. Also, H265 is extremely compute-intensive to edit -- much more so than H264. Except on a limited set of machines, 4k H265 would likely require transcoding to provide good editing performance. The few people who need to edit H265 can already transcode it externally. That is not as convenient, but until recently every Premiere user on earth had to externally transcode if they wanted proxy capability. Apple may think the few who need H265 support in FCPX can transcode externally for now.
  7. As I previously described, deployment of H265/HEVC has been slowed for non-technical reasons. There have been major disputes over licensing, royalties and intellectual property; at one point the patent holders were demanding a % of gross revenue from individual end users who encode H265 content. That is one reason Google developed the open-source VP9 codec. The patent holders have recently retreated from their more egregious demands, but the episode tainted H265 and has delayed deployment. The licensing and royalty issue is why the evaluation version of Premiere Pro does not have H265. VP9 is replacing H264 on YouTube, and they will transition to VP9's successor AV1 soon. AV1 is also open source, not patent-encumbered, and significantly better than H265/HEVC: http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-AV1-111497.aspx Skylake's Quick Sync has partial hardware support for VP9 and Kaby Lake has full support, but I don't know about AV1.
  8. There isn't really an incompatibility per se between Quick Sync and Resolve or Premiere; the developers of those products just haven't figured out how to use Quick Sync. Since Apple has used it for years in FCPX, I don't know what the holdup is. The bottom line is that Resolve and Premiere don't use Quick Sync (or don't use it effectively), so for users of those products Ryzen looks great. Also, Intel doesn't put Quick Sync on any CPU over 4 cores, so for an 8-core i7-6900K vs an 8-core Ryzen 7 1800X, Quick Sync doesn't enter the picture. This will put pressure on Intel to drop prices, offer 8-core CPUs more broadly, and maybe even move Quick Sync up to 8-core products.
  9. While it's good to see AMD putting pressure on Intel, this thread is about video editing, not gaming or general use. The majority of video shot today uses a long-GOP codec such as H264. Intel's Quick Sync is a huge performance advantage in encoding or decoding these common codecs, and unfortunately Ryzen has nothing like it. Years ago, hardware-accelerated transcoding had a reputation for producing lower quality, but on FCPX I have extensively tested H264 exports both with and without Quick Sync, and it's hard to tell the difference -- except that Quick Sync is 4x faster and CPU levels are much lower when scrubbing the timeline (the sketch below shows one way to measure the gap on your own footage). Hopefully Ryzen will force Intel to broaden their i5 and i7 portfolio to include more 8-core products, and help make 8 cores the new standard for enthusiast machines.
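    A rough way to measure the hardware-vs-software gap yourself: a sketch assuming an ffmpeg build with VideoToolbox (which uses Quick Sync on Intel Macs); the clip name is hypothetical and the bitrate is illustrative:

        import subprocess, time

        def encode_secs(codec: str, src: str, dst: str) -> float:
            """Encode src with the given encoder and return elapsed seconds."""
            t0 = time.perf_counter()
            subprocess.run(
                ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-b:v", "20M", dst],
                check=True, capture_output=True)
            return time.perf_counter() - t0

        src = "test_4k_clip.mov"                                   # hypothetical clip
        sw = encode_secs("libx264", src, "out_sw.mp4")             # software encode
        hw = encode_secs("h264_videotoolbox", src, "out_hw.mp4")   # hardware path
        print(f"software {sw:.1f}s, hardware {hw:.1f}s, speedup {sw/hw:.1f}x")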
  10. There is merit to the "form over function" complaint about the 2016 MacBook Pro, esp. the removal of the SD card slot. Phil Schiller explained that camera wireless transfer was "very useful" vs the SD slot being "cumbersome". But for whom? An SD slot is definitely not cumbersome for the pro users it would benefit; he apparently means Apple's designers view it as *aesthetically* cumbersome. His words are a topsy-turvy definition: what's "cumbersome" is current camera wireless transfer techniques, while an SD card slot in an expensive Pro laptop is "useful". That said, the title of this thread is the Razer Blade laptop. A top-spec 2016 MacBook Pro costs 22% less and weighs roughly half as much as the Razer Blade Pro. According to battery tests by laptopmag.com, the 2016 15" MBP lasted 10 hr 32 min vs the Razer Blade Pro's 2 hr 45 min. So a GTX 1080 in a laptop looks really good until you have to provide power and use it in a real-world environment.
  11. Yes, it's possible the upward orientation picked up reflected sound from above and behind the subject. Similar things can happen in outdoor settings if wind starts blowing through trees behind the subject. The main pickup lobe of the mic doesn't just go deaf when it reaches the subject -- it will pick up anything behind him. With shotguns it's important to point the null (i.e., the broadside) toward the unwanted sound, not just point the mic toward the subject. Think in terms of what you *don't* want to hear and point the mic broadside in that direction. Some mics also have a rear pickup lobe, so you need to be aware of that too. It is hard to shoot the video while simultaneously wearing sound-isolating headphones and assessing audio, which is why a dedicated sound person is a good idea, if possible. Purpose-made sound blankets are not that expensive, and compared to the cost of blowing the entire shoot due to poor audio, or of spending hours with iZotope RX5 trying to fix it, they are downright cheap.
  12. Was the ME66 shotgun mic boom-operated or camera-mounted? If camera-mounted it may not help much, since the main advantage of a shotgun mic is pointing the broad side toward unwanted sound. It is not an "audio telescope". If this is an interview, you may need to frame tighter to get the mic closer. If it is simply voiceover dialog, you can get the mic as close as necessary. If this is a scripted narrative involving subject/actor movement, then you may need the mic further away, which could be pretty hard. A high-quality lav system like a Sennheiser G3, if placed well, usually does pretty well. Sennheiser also makes a cardioid mic for the G3 lavs called the ME-4N; this may have some advantages over the stock omni-pattern mic if placed correctly: http://a.co/dbDpkE4 If it's a boom-operated shotgun, you have many options for placement. In a large "echoey" sports gym, if you place the mic low and angled up at the speaker, it may pick up sound bouncing off the reflective ceiling above him. OTOH if you place the mic above him pointed down, it might block some of that out. You said you have no real options to change the room, but simply letting the subject stand on a sound blanket (or even a small carpet) and aiming the shotgun mic down might help. That way the blanket attenuates reflections from the floor and the mic's null attenuates sound reflected from the ceiling and walls (the sketch below shows where the nulls fall for common pickup patterns). Monitoring the audio with good headphones and testing various mic angles and positions at the specific venue and position -- before shooting -- can help. If possible it's best to have a dedicated sound person.
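    To make the "point the null at the noise" idea concrete, here is a sketch of the textbook first-order polar patterns (real shotgun/interference-tube mics deviate from these, especially at high frequencies, so treat it as an approximation):

        import math

        PATTERNS = {
            "cardioid":      lambda a: 0.5 + 0.5 * math.cos(a),    # null at 180 deg
            "supercardioid": lambda a: 0.37 + 0.63 * math.cos(a),  # null near 126 deg
            "hypercardioid": lambda a: 0.25 + 0.75 * math.cos(a),  # null near 110 deg
        }

        def sensitivity_db(pattern: str, degrees: float) -> float:
            """Relative sensitivity (dB) of a pattern at an off-axis angle."""
            r = abs(PATTERNS[pattern](math.radians(degrees)))
            return -120.0 if r < 1e-6 else 20 * math.log10(r)

        for deg in (0, 90, 110, 126, 180):
            row = "  ".join(f"{p} {sensitivity_db(p, deg):6.1f} dB" for p in PATTERNS)
            print(f"{deg:3d} deg  {row}")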
  13. If the "ugly mushy...horrible Canon 720p" is that bad, why were you unable to distinguish it from the FS7 or FS700 in hmcindie's reel? He even provided the download link for the original content. While my documentary team has moved on to 4k and Sony/Panasonic cameras, we used the 5D3 and D8x0 for years. I've edited hundreds of hours of material from both, and would never describe the 5D3's 1080p as "ugly mushy horrible Canon 720p". It is simply a different look from the D8x0 -- both are capable of excellent video imagery. After post processing I've never met a single person who could tell the difference -- just as you could not tell the difference in hmcindie's material. It is true that technology has moved on and Canon has lagged severely in video. But if the 1080p content from the 5D3 was "horrible" (much less the 5D2's), why have I never heard anyone describe Vincent Laforet's "Reverie" that way: https://vimeo.com/7151244 Likewise I have never heard anyone describe Shane Hurlbut's "The Last 3 Minutes" (shot with a 5D2) as "ugly mushy horrible": https://vimeo.com/10570139 And if 5D2 content is truly "horrible", why would Lucasfilm's Rick McCallum use it on a major feature film? http://cdn.collider.com/wp-content/uploads/rick-mccallum-1.jpg McCallum on set with Philip Bloom: http://philipbloom.net/blog/wp-content/uploads/2016/08/Screen-Shot-2011-08-03-at-18.15.33-670x520-670x520.png
  14. That is a good question; unfortunately most software vendors (incl. Apple) do not publish how they technically achieve sharpening, and I don't see any methodical reviews of video sharpening software. On FCPX I use the built-in sharpen effect, plus I have the BorisFX Magic Sharpen plugin and the FCPEffects Sharpen plugin, and there's a sharpening effect built into Neat Video. I also have Premiere CC with its two different sharpen effects, but I haven't done a thorough side-by-side evaluation of these. This is an important part of video workflow, esp. for cameras (like the 1DX, and the 5D3 before it) which are apparently designed to trade out-of-camera sharpness for reduced aliasing, on the theory that you can recover the sharpness in post. These would be good topics for some enterprising reviewer: (1) a comparative evaluation of various video sharpening effects, and (2) an evaluation of post-sharpened footage from various cameras vs their resistance or susceptibility to aliasing and moire.
  15. He did not mention what the in-camera sharpening settings were on either camera. If the answer is "the default", that isn't necessarily the optimal setting, nor an apples-to-apples comparison. The issue is how good the video image can be after post production, not out of the camera. I have the A7RII and am happy with it, but just because the Canon looks less detailed out of the camera doesn't mean it's inferior. It could be specifically designed to render that image to avoid aliasing, with the intention of sharpening in post. Using deconvolution sharpening in post can recover the image clarity, but you can't recover footage with aliasing and moire. Given that choice I'd rather have an initially less-detailed image I can sharpen in post vs one with aliasing and moire that I can't fix. For this reason I run my A7RII at minimal sharpening (called "detail" in the Sony documentation). In Premiere, the sharpen effect uses deconvolution, whereas the unsharp mask uses contrast edge enhancement; a sketch contrasting the two approaches follows below. The difference between them was once described on Topaz Labs' web site: "Deconvolution is the process of approximately reversing the process that caused an image to be blurred. While unsharp masking increases the perceived sharpness of an image, deconvolution increases the actual sharpness based on information which describes some of the likely origins of the distortions when capturing the image. With deconvolution, lost image detail may be approximately recovered."
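    A minimal sketch contrasting the two approaches, using scikit-image on a test image (the Gaussian PSF is an assumption -- the real blur kernel of any given camera is unknown, and vendors don't publish what their plugins do):

        import numpy as np
        from skimage import data, filters, restoration

        frame = data.camera() / 255.0   # stand-in for a video frame, floats in [0, 1]

        # Unsharp mask: boosts local contrast at edges (perceived sharpness only).
        usm = filters.unsharp_mask(frame, radius=1.5, amount=1.0)

        # Deconvolution: approximately inverts an assumed blur kernel (PSF).
        x = np.arange(-2, 3)
        xx, yy = np.meshgrid(x, x)
        psf = np.exp(-(xx**2 + yy**2) / 2.0)   # assumed small Gaussian PSF
        psf /= psf.sum()
        deconv = restoration.richardson_lucy(frame, psf, num_iter=30)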
  16. I'm not an expert in this area, so some of this could be wrong. Panasonic says the GH5 uses HEVC/H265 for 6k stills, which would imply it's theoretically usable for All-I (intraframe) video compression. However, there could be performance factors (even with hardware assist) that preclude this. H264 is sometimes described as usable for All-I, but I don't understand how that is very different from JPG compression of a still. If confined to a single frame or still image, there are major limitations to improving compression at the higher bit rates and file sizes typical of quality photos and video. It's true that newer intraframe algorithms like JPEG 2000 can obtain better image quality at low bit rates, but at higher bit rates and larger file sizes there is less difference. So it seems there's no magic algorithm that can vastly improve compression of a single frame at those bit rates. It may be that Panasonic is using H265 for 6k stills because there's some modest compression improvement over JPG and they've already got the hardware logic on the camera to handle the compute-intensive H265 algorithm. There are much greater compression gains available with interframe or GOP compression, which is how H264, H265, etc. are generally used (the sketch below shows the size difference on an actual encode). Re whether the GH5 will support H265 for 4k video: it may eventually do that, but these things are not easy. There are many factors in video encoding, compression and image quality; it's not like flipping a mode bit to enable it with no more work required. As can be seen in this Tom's Hardware article from 2011, merely assessing the various parameters and image quality factors of different hardware-accelerated video encoding methods is complex: http://www.tomshardware.com/reviews/video-transcoding-amd-app-nvidia-cuda-intel-quicksync,2839.html It's not Panasonic's responsibility to expend a lot of resources on a fringe feature when they are trying to launch a new camera and need to prioritize commonly-needed features, all of which require design and testing. It is unclear whether H265 will be the codec of the future. It has been encumbered with various legal conflicts over licensing and royalties, which is why the evaluation version of Premiere CC does not have H265 support. Likewise I don't think the free version of Resolve has it -- it costs you $1000 if you want to edit H265 content with Resolve. No matter what Panasonic does with the GH5, as long as those royalty and licensing issues exist there will be a major impediment to the widespread adoption of H265. It may be that AV1 will be the codec of the future; supposedly YouTube will be converting to AV1 as soon as possible: https://en.wikipedia.org/wiki/AOMedia_Video_1
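    You can see the intraframe-vs-interframe difference directly by encoding the same clip both ways and comparing file sizes -- a sketch assuming ffmpeg is installed; the clip name is hypothetical:

        import os, subprocess

        def encoded_size(src: str, dst: str, gop: int) -> int:
            """Encode with a given GOP length; return output size in bytes."""
            subprocess.run(
                ["ffmpeg", "-y", "-i", src, "-c:v", "libx264",
                 "-crf", "20", "-g", str(gop), dst],
                check=True, capture_output=True)
            return os.path.getsize(dst)

        src = "sample.mov"                             # hypothetical clip
        all_i = encoded_size(src, "all_i.mp4", 1)      # GOP of 1 = All-I
        gop = encoded_size(src, "long_gop.mp4", 250)   # typical long-GOP interval
        print(f"All-I {all_i/1e6:.1f} MB vs long-GOP {gop/1e6:.1f} MB")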
  17. Software support is obviously required, and this often lags hardware by years. E.g., Intel's Quick Sync hardware-assisted H264 encoder was introduced with Sandy Bridge in 2011, and to my knowledge Premiere Pro only recently started supporting it -- and for Windows only, not Mac. That was roughly a six-year gap. Skylake's Quick Sync has HEVC/H265 support at 8 bits per color channel, but Kaby Lake is required for HEVC at 10 bits per color channel. Hopefully it won't take Adobe six more years to add support for that. I think nVidia's NVENC has HEVC hardware support starting with Pascal, and AMD's VCE with Polaris, but the software development kits, APIs and drivers must be available and stable for application developers to use. So there is a difference between raw hardware availability (in silicon) vs being able to harness that from the application layer, which requires stable and tested SDK and driver support. Traditionally there has been concern over the image quality of hardware-assisted encoding, but FCPX has used Quick Sync for years (single pass only) and it looks OK to me. But I don't think it has H265 hardware support yet. Lots of people want H265 because the file sizes are smaller, but you don't get something for nothing: H265 requires vastly greater computational complexity, which means the CPU burden to encode/decode is much greater. In this paper, VP9 was 2,000x slower to encode than x264, and H265 was 3x slower than VP9 (or 6,000x slower than x264). So it took thousands of times more computation to save at most about 50% in size. This is just a single paper, and algorithms and efficiencies are improving, but it illustrates the basic principle (the timing sketch below shows how to run a crude version of this comparison yourself): iphome.hhi.de/marpe/.../Comp_LD_HEVC_VP9_X264_SPIE_2014-preprint.pdf If that computation is done in hardware (IOW you essentially get it for free) then it may be a worthwhile penalty, but if only software encode/decode is used for H265, it may be impractically slow. Also, if full and high-quality software support at the SDK level is not available, the fancy silicon doesn't help much. For the iPhone it is affordable for Apple to use H265 for FaceTime: they completely control both hardware and software, and economies of scale mean any design or fabrication cost is amortized over 50 million phones per year. If it costs a little more to add H265 logic to a corner of an SoC (System on a Chip) that already has 3 billion transistors, it's no problem. A software developer like Adobe must deal with three basic H265 hardware acceleration schemes -- NVENC, VCE and Quick Sync -- some of which have multiple versions, each with varying capabilities and features. So maybe that explains the delay on Quick Sync in Premiere Pro. H265/HEVC has also been hampered for years by disputes over royalties and intellectual property, which is one reason Google is pushing VP9, which has roughly similar capability but is open source and royalty free. However VP9 itself will probably be replaced by the similar but improved royalty-free AV1: http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-AV1-111497.aspx
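    A crude way to see the computational gap for yourself -- a sketch timing software x264 vs x265 on the same source (ffmpeg assumed installed; the clip name is hypothetical, and results vary enormously with preset and content):

        import subprocess, time

        def encode_time(codec: str, src: str, dst: str) -> float:
            """Return wall-clock seconds to encode src with the given codec."""
            t0 = time.perf_counter()
            subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", codec,
                            "-crf", "23", dst],
                           check=True, capture_output=True)
            return time.perf_counter() - t0

        src = "sample_4k.mov"                            # hypothetical clip
        t264 = encode_time("libx264", src, "out264.mp4")
        t265 = encode_time("libx265", src, "out265.mp4")
        print(f"x264 {t264:.0f}s, x265 {t265:.0f}s, ratio {t265/t264:.1f}x")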
  18. Thanks for providing all that information. Max Yuryev recently did something similar on a lower-end Hackintosh optimized for FCPX video editing, although you could also run Premiere on it. He built a $1k quiet machine that is faster than a 6-core D700 2013 Mac Pro. It used a heavily overclocked 4-core CPU and a sealed water cooling kit. His goal was minimum software driver tweaking so he used dual AMD 280X GPUs -- essentially identical to the D700 -- rather than more recent nVidia cards. FCPX is also optimized for those cards. For Premiere you'd obviously replace those with higher-end nVidia GPUs. Overall not really at the performance level of your build, but it was quiet, inexpensive, and plenty fast for 4k editing on FCPX. He provides many benchmarks in this video. It would be interesting if you could somehow run some of the same benchmarks he ran and compare them.
  19. I have both -- a 16TB ThunderBay 4 in RAID-5 and an 8TB Thunderbolt Mini in RAID-0 using 4 x 2TB Samsung EVO 850s. The TB4 is very good, but as it fills up the speed declines. Also, spinning RAID systems aren't good at small random I/Os, which unfortunately FCPX does a lot of when scrolling through the Event Browser of a big library. The SSD array maintains excellent sequential speed even at 90% full, and it is vastly faster at random I/O (the sketch below shows a crude way to measure that difference on your own volumes). You described it exactly right -- the TB4 has excellent bang for the buck, which is important at larger sizes. The big benefit of ThunderBay and SoftRAID is that you are not tied to a specific hardware enclosure. By contrast, if my Pegasus R4 chassis fails, I can only use those drives in another R4 since it uses proprietary hardware RAID (which is no faster than SoftRAID anyway).
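    A crude way to measure the sequential-vs-random gap on a given volume -- a sketch reading from a large pre-existing test file (the path is hypothetical); note the OS page cache will inflate results unless the file is much larger than RAM, and os.pread is Unix-only:

        import os, random, time

        def read_mb_per_sec(path: str, block: int, count: int, sequential: bool) -> float:
            """Time `count` reads of `block` bytes, sequential or random."""
            size = os.path.getsize(path)
            fd = os.open(path, os.O_RDONLY)
            offsets = (range(0, block * count, block) if sequential
                       else [random.randrange(0, size - block) for _ in range(count)])
            t0 = time.perf_counter()
            for off in offsets:
                os.pread(fd, block, off)
            elapsed = time.perf_counter() - t0
            os.close(fd)
            return block * count / elapsed / 1e6

        f = "/Volumes/ThunderBay/test_file"  # hypothetical big file on the array
        print(f"sequential 1MB reads: {read_mb_per_sec(f, 1 << 20, 500, True):.0f} MB/s")
        print(f"random 4KB reads:     {read_mb_per_sec(f, 4096, 5000, False):.0f} MB/s")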
  20. I use both Premiere CC and FCPX, although most of my work is on FCPX. There are plenty of bugs in both. I think the general consensus among unbiased people with considerable production experience in both is that FCPX is somewhat more reliable, but I have had FCPX crash and exhibit odd behavior *many* times. FCPX is generally faster than Premiere on the same Mac hardware, but with Premiere on Windows you can more easily compensate by just getting a faster machine. FCPX has superb data organization features, but IMO it's much harder for an experienced editor to change from Premiere to FCPX than from Premiere to some other track-oriented editor. With any software you can wander into an "island of instability", where some obscure set of characteristics keeps triggering slowdowns or crashes, while many other users of the same software still get production work done because their environment somehow isn't triggering the problem. The trick is identifying the problem parameters and trying to avoid them. That can unfortunately be a time-consuming treasure hunt requiring a lot of trial and error, but it can still be faster than switching to another editor.
  21. In this article Tony Northrup discusses how to manually use a proxy workflow in previous versions of Premiere: http://www.rangefinderonline.com/features/how-to/Getting-Acquainted-with-Offline-Video-Editing-to-Ease-You-Into-4K-8988.shtml As already stated, I'm not sure that would help with Lumetri performance, but maybe it would free up CPU cycles from constantly decoding H264. The new Premiere CC update with built-in proxy support is really nice -- it is vastly better than manually transcoding and linking up files.
  22. Avid's "proxy mode", when used with original-resolution media, is not proxy mode by the normal definition; they are simply using lower-res intermediate render files. You can do that in FCPX by dropping 4k footage into a 1080p timeline. It makes some modest improvement but is not remotely as useful as true proxy mode. As Axel said, Adobe has long claimed Premiere is so fast you don't need to transcode -- even at 4k. In fact, right now Adobe's Premiere overview video on their web site says exactly that: https://helpx.adobe.com/premiere-pro/how-to/what-is-premiere-pro-cc.html?set=premiere-pro--get-started--overview "....allows editors to work with 4k and beyond, without time-consuming transcoding....never needing to render until your work is complete" I used Premiere for years and still use CC occasionally. The Mercury Playback Engine was revolutionary and worked great up to 1080p, but it's just not fast enough (without proxy) for native H264 4k on most reasonable machines. E.g., I have a top-spec 2015 iMac 27 with a Thunderbolt SSD drive array and it's not fast enough. You might build a custom Windows editing machine that would work, but that's not what Adobe meant in the above statement. It was inevitable that Adobe would have to add proxy support, and it works really well; if any Premiere CC users have not tried it, please do. FCPX is a lot faster on the same hardware, but even FCPX is not fast enough to handle large amounts of H264 4k without proxy, at least on a top-spec iMac. So in both cases you often need to generate proxy files (a minimal sketch of manual proxy generation follows below). FCPX generates them considerably faster than Premiere, but that's not a big deal since it's typically an unattended batch operation.
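    For anyone still on a Premiere version without built-in proxy, here is a minimal sketch of the manual approach -- generating half-resolution ProRes Proxy copies with ffmpeg (assumed installed; folder names are hypothetical). You still have to relink by hand afterward, which is exactly the pain the new built-in workflow removes:

        import subprocess
        from pathlib import Path

        def make_proxy(src: Path, proxy_dir: Path) -> Path:
            """Create a half-resolution ProRes Proxy copy of a clip."""
            dst = proxy_dir / (src.stem + "_proxy.mov")
            subprocess.run([
                "ffmpeg", "-i", str(src),
                "-vf", "scale=iw/2:ih/2",                # half resolution
                "-c:v", "prores_ks", "-profile:v", "0",  # 0 = ProRes Proxy
                "-c:a", "copy",
                str(dst),
            ], check=True)
            return dst

        for clip in Path("4k_originals").glob("*.mp4"):  # hypothetical folder
            make_proxy(clip, Path("proxies"))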
  23. I use X-Rite ColorMunki. It seems to work well on my 2015 iMac 27: http://a.co/hQbm2VL
  24. Northrup's point is that the tortoise-like advance of dedicated cameras has been bypassed by smartphones, leaving cameras that feel clunky and archaic, esp. from a consumer UI and ease-of-use standpoint. However, he did NOT address that this was likely an unavoidable development; rather, he implied it was poor decision-making by the camera manufacturers. Apple is on track to spend $10 billion per year on R&D -- Nikon spends $550 million. Apple spends more on R&D than every camera company on earth combined (unless you consider Samsung a camera company, which spends even more than Apple -- $14 billion). For many people the cost of their smartphone is subsidized by the cellular carrier. This produces a mobile device that is unusually powerful, very refined and artificially cheap relative to a consumer camera. It is not limited to cameras: I have the highest-end automotive Garmin GPS. It is OK, but the UI is nowhere near as responsive as a 2016-model smartphone. Garmin's annual revenue is 1/100th that of Apple or Samsung -- they can't match those companies' R&D spending, and there's no cellular carrier to subsidize their product. Of all the companies making cameras, Samsung might have the highest annual revenues; it's further interesting that the NX1 was probably closer to Northrup's ideal camera than most others. My point is there is a lot more to implementing Northrup's vision than the *idea*. Implementing it at a polished level and a consumer-affordable price requires an *immense* and ongoing R&D investment. Admittedly there are also the corporate cultural issues of a camera/instrument hardware manufacturer in a software- and UI-centric world, but I question whether Canon and Nikon could have delivered Northrup's ideal consumer camera in a timeframe to make any difference -- even *IF* they had the vision. Like you, I am not sure it would have made much difference anyway. This is a tidal wave of change sweeping across the photographic landscape, and the idea that consumer cameras could somehow "carve out" a protected little enclave by adopting a few more consumer-friendly features is questionable.
  25. Re the article statement: "Canon IS not as effective as Panasonic’s sensor-based 5 axis image stabilisation and Dual IS" Having shot many hours of documentary video on both 5D Mark III using 70-200 f/2.8 IS II and the A7RII using 28-135 PZ cinema lens, that is not my experience (with Sony). In general I found the lens-based Canon OIS is overall better than the Sony 5-axis system, at least using the above lenses at similar focal lengths. The Canon system only stabilizes pitch and yaw, whereas Sony splits the burden between sensor which does roll+translation and the lens doing pitch and yaw. Despite this my informal video tests show I can hand-hold steadier video with the Canon. Maybe one factor is I'm usually using Super35 on the A7RII and maybe that somehow degrades stabilization. I have never read any article or review analyzing the effect of Super35 on Sony's 5-axis system, but it seems to not work as well in that case. The Canon with a Zacuto EVF was just a superb video machine since that big eye cup formed a 3rd contact point. Unfortunately it, the HDMI cable and brackets were unwieldy and delicate in the field. If the 1DX II and AF system can avoid this, that is great. I especially appreciate your points about needing stills and video from one camera. My documentary group is in a similar situation and we often use mirrorless or DSLRs for this reason. I also had the same experience regarding color. I have to work harder in post to get the look I want from the Sony, whereas the 5D3 content looked good out of the camera. Overall I'm still happier with the Sony since the total strengths outweigh the weaknesses (for me), but I often wish the color was like Canon.