joema

Members
  • Posts

    160
  • Joined

  • Last visited

Reputation Activity

  1. Like
    joema got a reaction from Juank, dellfonic and Mmmbeats in Canon Cinema EOS C70 - Ah that explains it then!
    There are several recent trends that make this difficult to assess. Past experience may not be a reliable indicator.
    Historically, most RAW video has used proprietary formats which were expensive and complicated in both acquisition and post. By contrast, both Blackmagic's BRAW and ProRes RAW are cheap to acquire and easy to handle in post. However, they are both fairly recent developments, especially the rapidly growing, inexpensive availability of ProRes RAW via HDMI from various mirrorless cameras. If a given technology has only been widely available for 1-2 years, you won't immediately see great penetration in any segment of the video production community. There is institutional inertia and lag.
    However, long before BRAW and ProRes RAW, we had regular ProRes acquisition, either internally or via external recorders. Lots of shops have used ProRes acquisition because it avoids time-consuming transcoding and gives good quality in post. BRAW and ProRes RAW are no more complex or difficult to use than ProRes. This implies that in the future, those RAW formats may grow and become somewhat more widely used, even in lower-end productions.
    Conflicting with this is the more widespread recent availability of good-quality internal 10-bit 4:2:2 codecs on mirrorless cameras. I recently did a color correction test comparing 12-bit ProRes RAW from a Sony FS5 via an Atomos Inferno to 10-bit 4:2:2 All-Intra from an A7SIII, and even when doing aggressive HSL masking, the A7SIII internal codec looked really good.
    So the idea that the C70 is somehow debilitated because it doesn't ship with RAW capability on day 1 is not accurate. OTOH Sony will face this same issue when the FX6 is shortly released. If it doesn't at least have ProRes RAW via HDMI to Atomos, that will be a perceptual problem because the A7SIII and S1H have it. It's also not just about RAW -- regular ProRes is widely used, e.g. various cameras including Blackmagic's record it internally or with an inexpensive external recorder. The S1, S1H and A7SIII can record regular 10-bit 4:2:2 ProRes to a Ninja V, the BMPCC4K can record it internally or via USB-C to a Samsung T5, etc.
    With a good-quality 10-bit internal codec you may have less need for either RAW or ProRes acquisition. OTOH I believe some camera manufacturers have an internal perceptual problem which is reflected externally in their products and marketing. E.g., I recently asked a senior Sony marketing guy what the strategy is for getting regular ProRes from the FX9. His response was to ask why I would want that rather than use the internal codecs. There is some kind of disconnect, worsened by the new mirrorless cameras. Maybe the C70's lack of RAW is another manifestation. This general issue is discussed in the FX9 review starting at 06:25. While about the FX9 specifically, in broader terms the same issue (to varying degrees) affects the C70 and other cameras:
     
  2. Like
    joema got a reaction from User in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    On Mac you can use Invisor which also enables spreadsheet-like side-by-side comparison of several codecs. You can also drag/drop additional files from Finder to the comparison window, or select a bunch of files to compare using right-click>Services>Analyze with Invisor. I think it internally uses MediaInfo to get the data. It cannot extract as much as ffprobe or ExifTool but it's much easier to use and usually sufficient.
    https://apps.apple.com/us/app/invisor-media-file-inspector/id442947586?mt=12
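    If you want to script this kind of side-by-side comparison on any platform, ffprobe can produce the same sort of data. A minimal Python sketch, assuming FFmpeg/ffprobe is installed and on your PATH; the clip names are placeholders:

```python
# Minimal sketch: pull key video-stream metadata from several clips with
# ffprobe (part of FFmpeg) for a quick side-by-side comparison.
import json
import subprocess

def probe_video_stream(path):
    """Return the first video stream's metadata as a dict."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    streams = json.loads(out)["streams"]
    return next(s for s in streams if s["codec_type"] == "video")

fields = ["codec_name", "profile", "pix_fmt", "width", "height", "bit_rate"]
for clip in ["clip_a.mov", "clip_b.mp4"]:   # placeholder file names
    v = probe_video_stream(clip)
    print(clip, {f: v.get(f) for f in fields})
```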
  3. Like
    joema got a reaction from User in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    Most commonly, H264 is 8-bit Long GOP, sometimes called IPB after its I, P and B frame types. This may date to the original H264 standard, but you can also have All-Intra H264 and/or 10-bit H264; it's just less common.
    I don't have the references at hand, but if you crank up the bit rate sufficiently, H264 10-bit can produce very good quality, I think even in the IPB variant. The problem is that by that point you're burning so much data that you may as well use ProRes.
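    For a rough sense of "burning so much data", here is a sketch of the gigabytes-per-hour arithmetic. The 400 Mbps H264 figure is a hypothetical cranked-up rate, and the ProRes figure is approximately Apple's published target rate for 422 HQ at UHD/29.97, so treat both as ballpark numbers:

```python
# Back-of-the-envelope storage math: bitrate (Mbps) -> gigabytes per hour.
def gb_per_hour(mbps):
    return mbps / 8 * 3600 / 1000   # Mbps -> MB/s -> MB/hour -> GB/hour

for label, mbps in [("Hypothetical 10-bit H264 @ 400 Mbps", 400),
                    ("ProRes 422 HQ, UHD/29.97 (~707 Mbps)", 707)]:
    print(f"{label}: {gb_per_hour(mbps):.0f} GB/hour")
```

    At those rates the H264 size advantage has largely evaporated, which is the point.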
    In post production there can be huge differences in hardware-accelerated decode and encode performance between various flavors of a given general type. E.g., the 300 mbps UHD 4k/29.97 10-bit 4:2:2 All-Intra material from a Canon XC15 was very fast and smooth in FCPX on a 2015 iMac 27 when I tested it, but similar material from a Panasonic GH5 or S1 was very sluggish. Even on a specific hardware and OS platform, a mere NLE update can make a big difference. E.g., Resolve has had some big improvements on certain "difficult" codecs, even within the past few months, at least on Mac.
    Since HEVC is a newer codec, it seems that 10-bit versions are more common than with H264 (especially as an NLE export format), but maybe that's only my impression. I think YouTube and Vimeo will accept and process 10-bit Long GOP HEVC OK; I tend to doubt they'd accept 10-bit All-Intra H264. There are some cameras that do 10-bit All-Intra HEVC, such as the Fuji X-T3. I think some of these clips include that format: https://www.imaging-resource.com/PRODS/fuji-x-t3/fuji-x-t3VIDEO.HTM
    But there's a lot more involved than just perceptual quality, data rate or file size. Once you expand post production beyond a very small group, you have an entire ecosystem that tends to be reliant on a certain codec; DNxHD or ProRes are good examples. It almost doesn't matter if another codec is a little smaller or very slightly different in perceptual quality on certain scene types, or can accommodate a few more transcode cycles with slightly less generational loss. Current codecs like DNxHD and ProRes work very well, are widely supported and not tied to any specific hardware manufacturer. 
    There's also ease of use in post production. Can the codec be played with common utilities or does it require a special player just to examine the material? If a camera codec, is it a tree-like hierarchical structure or is it a simple flat file with all needed metadata in the single file?
    Testing perceptual quality on codecs is a very laborious, complex process, so thanks for spending time on this and posting your results. Each codec variant may react differently to certain scene types. E.g., one might do well on trees but not water or fireworks. Below are some scenes used in academic research.
    https://media.xiph.org/video/derf/
    http://medialab.sjtu.edu.cn/web4k/index.html
    http://ultravideo.cs.tut.fi/#testsequences
  4. Like
    joema got a reaction from Danyyyel in Sony A7S III   
    There is no one version of H.265. Like H.264, it has many, many different adjustable internal parameters. AMD's GPUs have bundled on them totally separate hardware called UVD/VCE, which is similar to nVidia's NVDEC/NVENC. Over the years there have been multiple versions of *each* of those, just like Quick Sync has had multiple versions. Each version of each hardware accelerator has varying features and capability on varying flavors of H.264 and later H.265.
    Even if a particular version of a hardware video accelerator supports a certain flavor of one codec, application support is not automatic. Both application and system layer software must use development frameworks (often provided by the h/w vendor) to harness that support and expose it in the app.
    You can easily have a case where h/w acceleration support has existed for years and a certain app just doesn't use it. That was the case with Premiere Pro for years which did not use Intel's Quick Sync. There are some cases now where DaVinci Resolve supports some accelerators for some codec flavors which FCPX does not.
    There is also a difference between the decode side and encode side. It is common on many platforms that 10-bit HEVC decoding is hardware accelerated for some codec variants but not 10-bit HEVC *encoding*. It is a complex, bewildering, multi-dimensional matrix of hardware versions, software versions and codec versions, sprinkled with bugs, limitations and poor documentation.
    In general Intel's Quick Sync has been more widely supported because it is not dependent on a specific brand or version of the video acceleration logic bundled on a certain GPU. However Xeon does not have Quick Sync so workstation-class machines must use the accelerator on GPUs or else create their own. That is what Apple did with their iMac Pro and new Mac Pro - they integrated that acceleration logic into their T2 chip.
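    One quick way to see the decode side on your own machine is to ask FFmpeg which acceleration methods its build exposes. A minimal sketch; note this reflects only the FFmpeg build (e.g. videotoolbox, qsv, cuda, vaapi), not what FCPX, Premiere or Resolve actually harness:

```python
# Minimal sketch: list the hardware acceleration methods exposed by the
# installed FFmpeg build. Requires ffmpeg on the PATH.
import subprocess

out = subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"],
                     capture_output=True, text=True, check=True).stdout
print(out)
```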
  5. Like
    joema got a reaction from gethin and Vintage Jimothy in Canon 9th July "Reimagine" event for EOS R5 and R6 unveiling
    Those are good points and they help us documentary/event people see that perspective. OTOH I'm not sure we have complete info on the camera. Is the thermal issue *only* when rolling, or partially when it's powered up? Does it never happen when using an external recorder, or just not as quickly? I did a lot of narrative work last year and (like you) never rolled more than about 90 seconds. I see that point. But my old Sony A7R2 would partially heat up just from being powered on, which gave the heat buildup a "head start" when rolling commenced, making a shutdown happen faster. Is the R5 like that?
    Also unknown is the cumulative heat buildup based on shooting duty cycle. Even if you never shoot a 30 min. interview, if you do numerous 2-min b-roll shots in hot weather, will the R5 hit the thermal limit?
    More than the initial thermal time limit, the cooldown times seem troubling, esp. if hot ambient conditions amplify this.
  6. Like
    joema got a reaction from Emanuel in RED claim victory in Apple RAW patent battle   
    That seems to be the case.
    Other issue: despite the hardware-oriented camera/device aspect, it would seem the RED patent is more akin to a software patent. You can patent a highly specific software algorithmic implementation such as HEVC, but not the broad concept of high-efficiency Long GOP compression. E.g., the HEVC patent does not preclude Google from developing the functionally similar AV1 codec.
    However the RED patent seems to preclude any non-licensed use of the broad, fundamental concept of raw video compression, at least in regard to a camera and recorder. Hypothetically it would cover a future iPhone recording ProRes RAW, even if streaming it for recording via tethered 5G wireless link to a computer.
    In RF telecommunications there are now "software defined radios" where the entire signal path is implemented in general-purpose software. Similar to that, we are starting to see the term "software-defined camera". 
    It would seem RED would want their patent enforced whether the camera internally used discrete chips or a general-purpose CPU fast enough to execute the entire signal chain and data path. 
    If the RED patent can be interpreted as a software-type patent, this might be affected by recent legal rulings on software patents such as Alice Corp. v. CLS Bank: https://en.wikipedia.org/wiki/Alice_Corp._v._CLS_Bank_International
  7. Like
    joema got a reaction from Pedro in Panasonic S1H review / hands-on - a true 6K full frame cinema camera   
    I have shot lots of field documentary material and I basically agree. We use Ursas, rigged Sony Alphas, the DVX200, rigged BMPCC4Ks, etc.
    I am disappointed the video-centric S1H does not allow punch-in to check focus while recording. The BMPCC4K and even my old A7R2 did that. An external EVF or monitor/recorder can provide that on the S1H, but if the goal is retaining a highly functional minimal configuration, lack of focus punch-in while recording is unfortunate. Panasonic's Matt Frazer said this was a limitation of their current imaging pipeline and would likely not be fixable via firmware.
    The Blackmagic battery grip allows the BMPCC6K to run for 2 hrs. If Blackmagic produced a BMPCC6K "Pro" version which had IBIS, a brighter tilting screen, waveform, and maybe 4k 12-bit ProRes or 6k ProRes, priced at $4k, that would be compelling.
  8. Like
    joema got a reaction from Kisaha and EthanAlexander in hardware for video editing
    The X-T3 can use H264 or H265 video codecs, plus it can do H264 "All Intra" (IOW no interframe compression), which might be easier to edit, but the bitrate is higher.
    The key for all of those, except maybe All Intra, is that you need hardware-accelerated decode/encode, plus editing software that supports it. The most common and widely adopted version is Intel's Quick Sync. AMD CPUs do not have that. Premiere Pro started supporting Quick Sync relatively recently, so if you have an updated subscription that should help. Normal GPU acceleration doesn't help for this due to the sequential nature of the compression algorithm. It cannot be meaningfully parallelized to harness hundreds of lightweight GPU threads.
    In theory both nVidia and AMD GPUs have separate fixed-function video acceleration hardware similar to Quick Sync, bundled on the same die but functionally totally separate. However, each has had many versions, and each requires its own software frameworks for the developer to harness. For these reasons Quick Sync is much more widely used.
    The i7-2600 (Sandy Bridge) has Quick Sync but that was the first version and I'm not sure how well it worked. Starting with Kaby Lake it was greatly improved from a performance standpoint.
    In general, editing a 4k H264 or H265 acquisition codec is very CPU-bound due to the compute-intensive decode/encode operations. The I/O rate is not that high, e.g., 200 Mbps is only 25 megabytes per second.
    As previously stated you can transcode to proxies but that is a separate (possibly time consuming) step.
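    For what that separate step looks like outside an NLE, here is a minimal sketch of a batch transcode to ProRes Proxy using FFmpeg (prores_ks with profile 0 is the Proxy flavor; the folder names and file extension are placeholders, and every major NLE also has its own built-in proxy workflow):

```python
# Minimal sketch: batch-transcode camera originals to 1080p ProRes Proxy.
# Requires FFmpeg; folder names and the *.MP4 pattern are placeholders.
import pathlib
import subprocess

src = pathlib.Path("camera_originals")
dst = pathlib.Path("proxies")
dst.mkdir(exist_ok=True)

for clip in sorted(src.glob("*.MP4")):
    subprocess.run(
        ["ffmpeg", "-i", str(clip),
         "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes Proxy
         "-vf", "scale=-2:1080",                   # downscale to 1080p
         "-c:a", "pcm_s16le",                      # uncompressed audio
         str(dst / (clip.stem + ".mov"))],
        check=True)
```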
  9. Thanks
    joema got a reaction from kye and webrunner5 in Workflow for editing large projects?
    I worked on a collaborative team editing a large documentary consisting of 8,500 4k H264 clips, 220 camera hours, and 20 terabytes. It included about 120 multi-camera interviews. The final product was 22 min.
    In this case we used FCPX, which has extensive database features such as range-based (vs. clip-based) keywording and rating. Before touching a timeline, there was a heavy organizational phase where a consistent keyword dictionary and rating criteria were devised and proxy-only media was distributed among several geographically distributed assistant editors. All multicam material and content with external audio was first synchronized. FCPX was used to apply range-based keywords and ratings. The ratings included rejecting all unusable or low-quality material, which FCPX thereafter suppresses from display. We used XML files and the 3rd-party utility MergeX to interchange library metadata for the assigned media: http://www.merge.software
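    As a small illustration of what that XML metadata makes possible, here is a minimal sketch that tallies range-based keywords in an exported FCPXML file. It assumes the <keyword> elements carry a "value" attribute, as recent FCPXML versions do; the file name is a placeholder:

```python
# Minimal sketch: count how often each range-based keyword appears in an
# exported FCPXML file. The file name is a placeholder.
import collections
import xml.etree.ElementTree as ET

root = ET.parse("library_export.fcpxml").getroot()
counts = collections.Counter(kw.get("value") for kw in root.iter("keyword"))
for value, n in counts.most_common():
    print(f"{n:5d}  {value}")
```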
    Before timeline editing started, by these methods the material was culled down to a more manageable size with all content organized by a consistent keyword system. The material was shot at 12 different locations over two years so it was crucial to thoroughly organize the content before starting the timeline editing phase.
    Once the timeline phase began, a preliminary brief demo was produced to evaluate overall concept and feel. This worked out well, and the final version was a fleshed-out expansion of the demo.
    It is true that in documentary, the true story is often discovered during the editorial process. However during preproduction planning there should be some idea of possible story directions otherwise you can't shoot for proper coverage, and the production phase is inefficient. 
    Before using FCPX I edited large documentaries using Premiere Pro CS6, and used an Excel spreadsheet to keep track of clips and metadata. Editor Walter Murch has described using a FileMaker Pro database for this purpose. There are 3rd-party media asset managers such as CatDV: http://www.squarebox.com/products/desktop/ and KeyFlow Pro: http://www.keyflowpro.com
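    In the spirit of that spreadsheet approach, here is a minimal sketch that builds an Excel-importable clip log from a folder of media using ffprobe (requires FFmpeg; the folder name and *.mov pattern are placeholders):

```python
# Minimal sketch: write a CSV clip log (file, duration, bitrate) that can
# be opened in Excel. Requires FFmpeg's ffprobe on the PATH.
import csv
import json
import pathlib
import subprocess

rows = []
for clip in sorted(pathlib.Path("footage").rglob("*.mov")):  # placeholders
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-of", "json", str(clip)],
        capture_output=True, text=True, check=True).stdout
    fmt = json.loads(out)["format"]
    rows.append([clip.name, fmt.get("duration"), fmt.get("bit_rate")])

with open("clip_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file", "duration_s", "bit_rate"])
    writer.writerows(rows)
```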
    Kyno is a simpler screening and media management app which you could use as a front end, esp for NLEs that don't have good built-in organizing features: https://lesspain.software/kyno/
    However, it is not always necessary to use spreadsheets, databases or other tools. In the above-mentioned video about the "Process of a Pro Editor", the editor just uses Avid's bin system and a bunch of small timelines. That was an excellent video, thanks to BTM_Pix for posting that.
  10. Like
    joema got a reaction from markr041 in Variable ND filters for video?   
    You generally need some type of ND when shooting outdoors at wide aperture. For scripted shooting, a matte box and drop-in fixed filters may work, but for documentary, news, run-and-gun, etc. a variable ND is handy. This is why upper-end camcorders have long had built-in selectable ND filters.
    However with the move to large sensors, the entire optical path gets larger. It becomes much harder both mechanically and optically to fit multiple precision fixed ND filters inside. The surface area of an optical element increases as the square of the radius, so it becomes much harder (and more expensive) to make a perfectly flat multicoated filter. The Sony FS5 has an electronic variable ND, showing how important this is for video.
    It doesn't make sense to put a $20 filter on a $2500 lens. However filter price and quality are not necessarily directly related.
    In documentary video I've used many different variable ND filters, and here are a few things to look for:
    (1) If at all possible get one that fits inside a lens hood. This is the most difficult requirement since there are no specs or standards for this. You use a variable ND outside under bright (often sunny) conditions -- the very conditions where you need a lens hood. However, most variable ND filter and lens hood combinations are incompatible. The ideal case is certain Sony A or E-mount lenses with a cutout in the lens hood which allows turning the variable ND filter without removing the hood. However it's very difficult to find one which fits.
    (2) Get one with hard stops at the end of each range. Otherwise it's difficult to tell where you are on the attenuation scale, and this adds a few seconds which can make you miss a shot.
    (3) Get one which does not exhibit "X" patterns or other artifacts at high attenuation. This typically happens with filters having more than 6 stops attenuation.
    (4) Get one which has the least attenuation on the low end, ideally 1 stop or less. This reduces the times you have to remove the filter when shooting inside. A filter which goes from 1-6 stops is likely more useful, and less likely to have artifacts at high attenuation, than one which goes from 2-8 stops; the stop-to-transmission arithmetic is sketched below.
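    The stop arithmetic itself is simple: each stop halves the light, so n stops pass 1/2^n of it. A quick sketch of why the top end of an 8-stop range is so optically demanding:

```python
# Each ND stop halves the light: n stops transmit 1/2**n of it.
for stops in [1, 2, 6, 8]:
    t = 1 / 2 ** stops
    print(f"{stops} stops -> {t:.4f} of the light ({t * 100:.2f}%)")
```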
  11. Like
    joema got a reaction from salim in Tiffen Variable ND 82 $69. Is it any good?   
    I have several Tiffen NDs. The optical quality is OK but (as with most 8-stop variable NDs) they have polarization artifacts at high attenuation. Another problem is Tiffen filters have no hard stops at each end, so you can't tell by feel where you are.
    I have some NiSi variable NDs and I like them much better. They have hard stops plus don't have the "X" effect at high attenuation, OTOH they are limited to six stops: https://***URL removed***/news/8909959108/nisi-launches-variable-nd-filter-with-the-dreaded-x-effect
    My favorite filter is the Heliopan 77mm, which also has hard stops and avoids the "X" effect. Its minimum attenuation is only 1 stop and its max is 6 stops. It is expensive but it's an excellent filter. IMO it doesn't make sense to put a cheap filter on a $2500 lens, but if you test a cheaper filter and it works for you, go ahead and use it. https://www.bhphotovideo.com/c/product/829300-REG/Heliopan_708290_82mm_Variable_Gray_ND.html
    Although not commonly discussed, a major factor with variable NDs is whether they fit inside the lens hood. You typically use them when shooting outside, which often means the sun is out and you need a lens hood for best results if shooting within 180 degrees of the sun angle. There are various strap-on hoods, french flags, etc. but they can be cumbersome. Ironically, even some very expensive cameras like the RED Raven have no built-in ND, so you can end up using the same screw-on variable ND as somebody with a GH5.
    This is a very difficult area because neither lens manufacturers nor filter manufacturers have specs on filter/lens hood fitment. A big place like B&H can sometimes give advice, but not always. You basically need to take all your lenses to some place with a huge in-stock supply that would let you try them all; maybe B&H or the NAB show? If people would methodically post (maybe on a sticky thread) what filter fits inside the hood of what lens, that would help.
    I know from personal testing the Heliopan 77mm variable ND fits inside the lens hood of my Canon 70-200 2.8 IS II, and I can easily reach inside (with lens hood attached) and turn the filter. It will not fit inside the hood of the Sony 70-200 2.8 G-Master, and none of the NiSi, Tiffen or GenusTech 77mm variable NDs I've tried will fit.
    I have this 95mm filter which NiSi makes for Hasselblad, and it fits inside the lens hood of my Sony 28-135 f/4 cinema lens: https://www.aliexpress.com/item/NiSi-95-mm-Slim-Fader-Variable-ND-Filter-ND4-to-ND500-Adjustable-Neutral-Density-for-Hasselblad/32311172283.html
    Some of the longer Sony A-mount and FE-mount lenses actually have a cutout in the bottom of the lens hood where you can turn a variable filter -- provided it fits. 
    Dave Dugdale did a variable ND test here:
     
  12. Like
    joema got a reaction from Jimmy G in Which iMac 2017 for editing and grading?   
    Just keep in mind the 2017 iMac 27 i7 is twice as fast as a 2016 MBP or 2015 iMac 27 ONLY on *some* things -- specifically transcoding H264 to ProRes proxy. Considering that's a very time-consuming part of many H264 4k workflows, that's really useful. It's also limited to FCPX; the performance difference varies with each application.
    The 2017 iMac 27 i7 is also about 2x as fast (vs a 2015 iMac i7 or a 2016 MBP i7) on the GPU-oriented BruceX benchmark, but this is also a narrow task. On other GPU-heavy or mixed CPU/GPU tasks like Neat Video, it's usefully faster but not 2x faster.
    On a few H264 Long GOP 4k codecs I tested, the 2017 iMac 27 i7 seems marginally fast enough to edit single-camera 4k material without transcoding (in FCPX), which is a big improvement over the 2015 iMac 27 i7 or 2016 top-spec MBP. However, multicam still requires transcoding to proxy, and if you want to really blitz through the material, then proxy still helps.
    If you now or will ever use ProRes or DNxHD acquisition, this picture totally changes. It then becomes less CPU intensive but much more I/O intensive. You usually don't need to transcode in those cases but the data volume and I/O rates increase by 6x, 8x or more.
     
  13. Like
    joema got a reaction from Jimmy G in Which iMac 2017 for editing and grading?   
    I have a 2013 iMac 27 with 3TB FD, a 2015 top-spec iMac 27 with 1TB SSD, and 2015 and 2016 top-spec MBP 15s, and am testing a 12-core nMP with D700s. This is FCPX 4k documentary editing where the primary codecs are some variant of H264.
    Even though FCPX is very efficient, in general H264 4k requires transcoding to proxy for smooth, fluid editing and skimming -- even on a top-spec 12-core nMP. If you have a top-spec MBP, iMac or Mac Pro, smaller 4k H264 projects can be done using the camera-native codec, but multicam can be laggy and frustrating. The Mac Pro is especially handicapped on H264 since the Xeon CPU does not have Quick Sync. In my tests, transcoding 4k H264 to ProRes proxy on a 12-core Mac Pro takes nearly twice as long as on a 2015 top-spec iMac 27. For short projects with lower shooting ratios it's not an issue, but for larger projects with high shooting ratios it's a major problem.
    We've got ProRes HDMI recorders but strapping on a bunch of 4k recorders is expensive and operationally more complex in a field documentary situation. That would eliminate the transcoding and editing performance problems but would exacerbate the data wrangling task by about 8x. This is especially difficult for multi-day field shoots where the data must be offloaded and backed up.
    However in part the viability of editing camera-native 4k depends on your preferences. If you do mainly single-cam work, and use modest shooting ratios so you don't need to skim and keyword a ton of material, and don't mind a bit of lag during edit, a top-spec iMac 27 is probably OK for H264 4k. 
    Re effects, those can either be CPU-bound or GPU-bound, or a combination of both. Some like Neat Video allow you to configure the split between CPU and GPU. But in general effects use a lot of GPU, and like encode/decode, are slowed down by 4k since it's 4x the data per frame as 1080p. 
    Re Fusion Drive vs SSD, for a while I had both 2013 and 2015 iMac 27s on my desk, one with 3TB FD and the other 1TB SSD. I tested a few small cases with all media on the boot drive, and really couldn't see much FCPX real-world performance difference. You are usually waiting on CPU or GPU. However if you transcode to ProRes, I/O rates skyrocket, making it more likely to hit an I/O constraint.
    Fusion Drive is pretty good but ideally you don't want media on the boot drive. SSD is fast enough to put media there but it's not big enough. Fusion Drive is big enough but may not be fast enough, thus the dilemma. A 3TB FD is actually a pretty good solution for small scale 1080p video editing, but 4k (even H264) chews through space rapidly. Also, performance will degrade on any spinning drive (even FD) as it fills up. Thus you don't really have 3TB at full performance, but need to maintain considerable slack space. In general we often under-estimate our storage needs, so end up using external storage even for "smaller" projects. If this is your likely destiny, why not use an SSD iMac which is at least a bit faster at a few things like booting? Just don't spend your entire budget on an SSD machine then use a slow, cheap bus-powered USB drive.
    If I were getting a 2017 iMac 27 for H264 4k editing, it would be a high-spec version, e.g., 4.2GHz i7, 32GB RAM, 580 GPU, and probably 1TB SSD. Re the iMac Pro, what little we know indicates the base model will be considerably faster than the top-spec iMac 27 -- it has double the cores (albeit at a slower clock rate) and roughly double the GPU performance. However, unless Apple pulls a miracle out of their hat and upgrades FCPX to use AMD's VCE coding engine, the iMac Pro will not have Quick Sync, so it will be handicapped just like the current Mac Pro for that workflow. Apple is limited by what Intel provides, but this is an increasingly critical situation for productions using H264 or H265 acquisition codecs and high shooting ratios.
  14. Like
    joema got a reaction from jonpais and Gregormannschaft in Editing 4K on a Macbook Pro
    I have done extensive documentary editing using 4K XAVC-S and GH5 files using FCPX on 2015 and 2017 iMac 27 and 2014, 2015 and 2016 MacBook Pro 15. I used Premiere extensively from CS4 through CS6 and have a Premiere CC subscription but mainly use it for testing.
    Obtaining smooth editing performance on 4K H264 is difficult on almost any hardware or software. Unlike Premiere, FCPX uses Intel's Quick Sync acceleration for H264 and is much faster on the same Mac hardware -- yet even FCPX can be sluggish without proxies. Using 1080p proxies, FCPX is lightning fast at 4K on any recent Mac, even a 2013 MacBook Air. However, compute-intensive effects such as Neat Video or Imagenomic Portraiture can slow down anything, no matter what the hardware or editing software.
    Editing 4K H264 using Premiere on a Mac tends to be CPU-bound, not I/O or GPU bound. You can see this yourself by watching the CPU and I/O with Activity Monitor. iStat Menus ver. 6 also allows monitoring the GPU. The I/O data rate for 4K H264 is not very high, and using proxies it's even lower. Using I/O optimizations like SSD, RAID, etc. tends not to help because you're already bottlenecked on the CPU. This is a generalization -- if you are editing four-angle multicam off a 5400 rpm USB bus-powered portable drive, then you could be I/O bound.
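    If you want the same check outside Activity Monitor, here is a minimal cross-platform sketch using the third-party psutil package (pip install psutil); run it while you scrub or export and watch whether CPU or disk moves:

```python
# Minimal sketch: sample overall CPU load and cumulative disk reads once a
# second, to see whether an editing task is CPU-bound or I/O-bound.
import psutil

for _ in range(30):                         # ~30 one-second samples
    cpu = psutil.cpu_percent(interval=1)    # percent across all cores
    disk = psutil.disk_io_counters()
    print(f"CPU {cpu:5.1f}%   disk reads {disk.read_bytes / 1e6:,.0f} MB total")
```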
    I have done a lot of back-to-back testing of a 2014 vs 2016 top-spec MBP when editing 4K H264 XAVC-S and GH5 material using FCPX. The 2016 is much faster, although I'm not sure how representative this would be for Premiere. My 2017 iMac 27 is about 2x faster than the 2015 iMac (both top spec) when transcoding or exporting H264 from FCPX. I think this is due to the improved Kaby Lake Quick Sync, but am not sure.
    A top-spec 2017 MBP might be considerably faster than your 2014, but this depends a lot on the software. Comparing top-spec configurations, the GPU is about 2x faster but the CPU only modestly faster. It might be enough to compensate while staying on Premiere, especially if your problem was the GPU. But I'm suspicious about why it's so slow if you're using 720p proxies. In my testing Premiere was very fast on 4K H264 if using proxies. This makes me think it's Warp Stabilizer or some other effect slowing it down. Can you reproduce the slowdown without any effects? Without effects, does the extreme sluggishness only diminish or does it go away entirely?
    Resolve performance has been greatly improved in the latest version and in some benchmarks it's as fast as FCPX. You might want to consider that. FCPX is very good but it's a bigger transition from a conceptual standpoint, whereas Resolve is track-oriented like Premiere is.
  15. Like
    joema got a reaction from tellure, EthanAlexander and Don Kotlos in Sony A7R III announced with 4K HDR
    The *only* viable option? I have shot hundreds of hours of documentary video on the A7RII and even *it* works very well. We also use the GH5 and do two-camera interviews with it and the A7RII. The GH5 is excellent but in the real world each has pros and cons. Interestingly both A7RII and GH5 share a feature the A7SII (and probably A7SIII) don't have: ability to shoot in a crop mode that gives 1.5x on the A7R series and 1.4x on the GH5. That is really handy because it's like a flawless tele-converter without changing lenses.
    From actual hands-on field documentary work, the biggest A7RII issues are not the 8-bit 100 mbps codec or the lack of 4k/60. They are things like this:
    - Inability to rapidly switch between Super35 and full frame mode
    - Slow boot up to fully operational status
    - Intermittently laggy control input
    - Cumbersome menu system with limited customization
    - Poor button ergonomics and poor tactile feedback
    - Poor battery life (although the battery grip on the A7RII fixes much of that)
    - No 1080p/120
    - Focus peaking could be better
    For stills the biggest issue is the incredibly slow write rate to the SD card and almost non-existent multi-tasking during those long periods.
    Most or all of these are addressed in the A7RIII. So I don't see the GH5 as "the only viable option", even though my doc team uses one.
    I would much rather have Eye AF in video mode than a 10-bit codec. That is the difference between real-world use and comparing specs.
    If you want to see informed, experienced commentary about the A7RIII and video, check out Max Yuryev's latest commentary. This is the difference between someone who owns and uses both GH5 and A7RII vs someone who looks at specs: 
     
  16. Like
    joema got a reaction from Don Kotlos in Sony A7R III announced with 4K HDR   
    It shows the difference when you shoot 8-bit log then push the colors hard in post. Of course 8-bit will degrade faster if it was captured in a flat profile. The question is how the comparison would look if the 8-bit side was *not* captured flat, then both sides were graded as best possible. It would likely look different, but the 8-bit side would not have artifacts and banding. The 10-bit side might have more dynamic range due to the flat acquisition. But in that case it would be two different "looks", not one side with significant artifacts and one without.
  17. Like
    joema got a reaction from jonpais in Why Shooting 4K?   
    This is correct, and (once again) the OP equated 4k solely with distribution resolution. There are several reasons to shoot 4k:
    (1) Allows reframing in post
    (2) May allow better stabilization in post, provided the shot is framed a little loose. OTOH digital stabilization often induces artifacts, so the goal is to not need it.
    (3) Each frame is an 8 megapixel still so frame grabs are a lot better.
    (4) Shooting in 4k may give better "shelf life" for the material, similar to how shooting color for TV did in the 1960s. Even though initially few people had color TVs, eventually everyone would, so the additional cost of color film was often worthwhile.
    (5) Large 4k productions impose a major new post production load. It is vastly harder than 1080p due to the volume and possible transcoding and workflow changes. When my doc group shot H264 1080p we could just distribute that in the post-production pipeline without a thought. With H264 4k, it must be transcoded to proxy, collaborative editing often requires a proxy-only workflow which then can expose complications for re-sync, etc. The reason this is an argument *for* 4k is it takes a long time to figure out the post production issues. Computers won't be much faster next year, so if you're *ever* going to transition to 4k and are shooting multicam and high shooting ratio material, you may as well start the learning curve now.
    Arguments against 4k: it may not look much (or any) better than good-quality 1080p when viewed on typical playback devices, so why take the huge hit in post production? It can actually look worse than 1080p, depending on what cameras are used.
    Even though #5 above was listed as a 4k advantage, this is also one of the strongest arguments *against* 4k: the huge post production load. Whether you shoot in ProRes, DNxHD, H264, etc. it can be a huge burden. Worst of all is H264, since few computers are fast enough to smoothly edit it. It therefore generally requires transcoding, proxies, and various knock-on effects regarding media management. It's not that bad if you're playing around with 4k or shooting a little commercial, but for larger productions I'd roughly estimate it's over 10x (not 4x) as difficult as 1080p from an IT and data-wrangling standpoint.