joema

Members
  • Posts: 160
  • Joined
  • Last visited
Reputation Activity

  1. Like
    joema got a reaction from Vladimir in New information regarding H.265 on the Panasonic GH5   
    As I previously described, deployment of H.265/HEVC has been slowed for non-technical reasons. There have been major disputes over licensing, royalties and intellectual property. At one point the patent holders were demanding a percentage of gross revenue from individual end users who encode H.265 content. That is one reason Google developed the open-source VP9 codec. The patent holders have recently retreated from their more egregious demands, but the episode tainted H.265 and has delayed deployment. 
    The licensing and royalty issue is why the evaluation version of Premiere Pro does not include H.265.
    VP9 is replacing H.264 on YouTube, and they will transition to VP9's successor, AV1, soon. AV1 is also open source, not patent-encumbered, and significantly better than H.265/HEVC: http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-AV1-111497.aspx
    Skylake's Quick Sync has partial hardware support for VP9 and Kaby Lake has full hardware support, but I don't know about AV1.
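    Whether you can actually use that silicon depends on the software layer seeing it. As a rough, hedged way to check what an installed ffmpeg build exposes for these codecs (a sketch only; it assumes ffmpeg is on the PATH, and the _qsv codec names only appear in builds compiled with Intel's Quick Sync/libmfx support):
```python
import subprocess

def ffmpeg_list(flag: str) -> str:
    """Return ffmpeg's codec listing; "-encoders" and "-decoders" are standard flags."""
    return subprocess.run(["ffmpeg", "-hide_banner", flag],
                          capture_output=True, text=True).stdout

encoders = ffmpeg_list("-encoders")
decoders = ffmpeg_list("-decoders")

# Quick Sync codec names as exposed by ffmpeg builds that include libmfx.
for name in ("h264_qsv", "hevc_qsv", "vp9_qsv"):
    print(name,
          "encode:", "yes" if name in encoders else "no",
          "decode:", "yes" if name in decoders else "no")
```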
  2. Like
    joema got a reaction from Stanley and Kisaha in Suitable Microphones for large indoor spaces   
    Was the ME66 shotgun mic boom operated or camera mounted? If camera mounted, it may not help much, since the main advantage of a shotgun mic is rejecting sound from the sides -- you point its broad side toward unwanted sound. It is not an "audio telescope".
    If this is an interview, you may need to frame it tighter to get the mic closer. If it is simply voiceover dialog, you can get the mic as close as necessary. If this is a scripted narrative involving subject/actor movement, then you may need the mic further away which could be pretty hard. 
    A high-quality lav like a Sennheiser G3, if placed well, usually does pretty well. Sennheiser also makes a cardioid mic for the G3 lavs called the ME-4N. This may have some advantages over the stock omni-pattern mic if placed correctly: http://a.co/dbDpkE4
    If the shotgun is boom operated, you have many placement options. In a large "echoey" sports gym, if you place the mic low and angled up at the speaker, it may pick up sound bouncing off the reflective ceiling above him. OTOH if you place the mic above him and pointed down, it might block some of that out. 
    You said you have no real options to change the room, but simply letting the subject stand on a sound blanket (or even a small carpet) and aiming the shotgun mic down might help. That way the blanket attenuates reflections from the floor and the mic null pattern attenuates sound reflected from the ceiling and walls.
    Monitoring the audio with good headphones and testing various mic angles and positions at the specific venue and position -- before shooting -- can help. If possible it's best to have a dedicated sound person.
  3. Like
    joema got a reaction from Jn- and Eric Calabros in New information regarding H.265 on the Panasonic GH5   
    Software support is obviously required, and it often lags hardware by years. E.g., Intel's Quick Sync hardware-assisted H.264 encoder was introduced with Sandy Bridge in 2011. To my knowledge Premiere Pro only recently started supporting it -- and for Windows only, not Mac. That was roughly a six-year gap.
    Skylake's Quick Sync has HEVC/H.265 support at 8 bits per color channel, but Kaby Lake will be required for HEVC at 10 bits per color channel. Hopefully it won't take Adobe six more years to add support for that.
    I think nVidia's NVENC has HEVC hardware support starting with Pascal, and AMD's VCE with Polaris, but the software development kits, APIs and drivers must be available and stable for application developers to use. So there is a difference between raw hardware availability (in silicon) vs. being able to harness it from the application layer, which requires stable and tested SDK and driver support. (A minimal sketch of how an application might pick a hardware encoder and fall back to software is at the end of this post.)
    Traditionally there has been concern over the image quality of hardware-assisted encoding, but FCPX has used Quick Sync for years (single pass only) and it looks OK to me. But I don't think it has H.265 hardware support yet.
    Lots of people want H.265 because the file sizes are smaller, but you don't get something for nothing. H.265 requires vastly greater computational complexity, which means the CPU burden to encode/decode is much greater. In the paper below, VP9 was 2,000x slower to encode than x264, and H.265 was 3x slower than VP9 (or 6,000x slower than x264). So it took thousands of times more computation to save at most about 50% in size. This is just a single paper, and algorithms and efficiencies are improving, but it illustrates the basic principle.
    iphome.hhi.de/marpe/.../Comp_LD_HEVC_VP9_X264_SPIE_2014-preprint.pdf
    If that computation is done in hardware (IOW you essentially get it for free), then it may be a worthwhile penalty. But if only software encode/decode is available for H.265, it may be impractically slow. Also, if full and high-quality software support at the SDK level is not available, the fancy silicon doesn't help much.
    For the iPhone it is affordable for Apple to use H.265 for FaceTime. They completely control both hardware and software, and economies of scale mean any design or fabrication cost is amortized over 50 million phones per year. If it costs a little more to add H.265 logic to a corner of a SoC (System on a Chip) that already has 3 billion transistors, it's no problem.
    A software developer like Adobe must deal with three basic H.265 hardware acceleration schemes -- NVENC, VCE and Quick Sync -- some of which have multiple versions, each with varying capabilities and features. So maybe that explains the delay on Quick Sync in Premiere Pro.
    H.265/HEVC has also been hampered for years by disputes over royalties and intellectual property, which is one reason Google is pushing VP9, which has roughly similar capability but is open source and royalty free. However, VP9 itself will probably be replaced by the similar but improved royalty-free AV1: http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-AV1-111497.aspx
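    As a concrete illustration of the SDK/API point above, here is a minimal sketch (not Adobe's implementation, just an assumption-laden example built on ffmpeg) of how an application might prefer a hardware HEVC encoder and fall back to software x265 when the hardware path is unavailable. The file names and bitrate are placeholders.
```python
import subprocess

# Try hardware HEVC encoders first (Quick Sync, then NVENC), then fall back to
# the libx265 software encoder. Which of these actually work depends on how the
# local ffmpeg build was compiled and on the installed hardware and drivers.
CANDIDATES = ["hevc_qsv", "hevc_nvenc", "libx265"]

def encode_hevc(src: str, dst: str) -> str:
    for enc in CANDIDATES:
        cmd = ["ffmpeg", "-y", "-i", src,
               "-c:v", enc, "-b:v", "20M",   # illustrative bitrate only
               "-c:a", "copy", dst]
        if subprocess.run(cmd).returncode == 0:
            return enc                       # report which encoder succeeded
    raise RuntimeError("no working HEVC encoder found")

# Hypothetical usage:
# print("encoded with", encode_hevc("clip_4k.mp4", "clip_4k_hevc.mp4"))
```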
  4. Like
    joema got a reaction from Dustin in Transcoding?   
    In this article Tony Northrup discusses how to manually use a proxy workflow in previous versions of Premiere: http://www.rangefinderonline.com/features/how-to/Getting-Acquainted-with-Offline-Video-Editing-to-Ease-You-Into-4K-8988.shtml
    As already stated, I'm not sure that would help with Lumetri performance, but maybe it would free up CPU cycles from constantly decoding H264.
    The new Premiere CC update with built-in proxy support is really nice -- it is vastly better than manually transcoding and linking up files.
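    For anyone still on a version without built-in proxies, here is a rough sketch of the manual step: batch-transcoding camera originals into quarter-size ProRes 422 Proxy files with matching base names, so they can be cut in and later relinked to the originals. It assumes ffmpeg is installed; the folder names are hypothetical.
```python
import pathlib
import subprocess

SRC = pathlib.Path("camera_originals")   # hypothetical folder of 4k clips
DST = pathlib.Path("proxies")
DST.mkdir(exist_ok=True)

# Half-dimension (quarter-pixel-count) ProRes 422 Proxy files with the same
# base names as the originals, so the NLE can swap between the two sets.
for clip in sorted(SRC.glob("*.MP4")):
    out = DST / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(clip),
        "-vf", "scale=iw/2:ih/2",                # e.g. UHD 4k -> 1080p
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes 422 Proxy
        "-c:a", "pcm_s16le",
        str(out),
    ], check=True)
```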
  5. Like
    joema got a reaction from Don Kotlos in Premiere CC 2017 proxy workflow is amaaaazing   
    Avid's "proxy mode" when used with original resolution media is not proxy mode by the normal definition. They are simply using lower-res intermediate render files. You can do that in FCPX by dropping 4k footage into a 1080p timeline. It makes some modest improvement but is not remotely as useful as true proxy mode.
    As Axel said, Adobe has long said Premiere is so fast you don't need to transcode -- even at 4k. In fact right now Adobe's Premiere overview video on their web site says that:
    https://helpx.adobe.com/premiere-pro/how-to/what-is-premiere-pro-cc.html?set=premiere-pro--get-started--overview
    "....allows editors to work with 4k and beyond, without time-consuming transcoding....never needing to render until you work is complete"
    I used Premiere for years and still use CC occasionally. The Mercury Playback Engine was revolutionary and it worked great up to 1080p, but it's just not fast enough (without proxies) for native H264 4k on most reasonable machines. E.g., I have a top-spec 2015 iMac 27 with a Thunderbolt SSD drive array and it's not fast enough. You might build a custom Windows editing machine that would work, but that's not what Adobe meant in the above statement.
    It was inevitable Adobe would have to add proxy support, and it works really well. If any Premiere CC users have not tried this, please do so.
    FCPX is a lot faster on the same hardware but even FCPX is not fast enough to handle large amounts of H264 4k without proxy, at least on a top-spec iMac. So in both cases you often need to generate proxy files. FCPX generates them considerably faster than Premiere but that's not a big deal since it's typically an unattended batch operation.
  6. Like
    joema got a reaction from Dimitris Stasinos in IMac Monitor calibration (need suggestions)   
    I use X-Rite ColorMunki. It seems to work well on my 2015 iMac 27: http://a.co/hQbm2VL
  7. Like
    joema got a reaction from IronFilm and norliss in How to save the consumer camera: DON'T!   
    Northrup's point is that the tortoise-like advance of dedicated cameras has allowed them to be bypassed by smartphones, leaving cameras that feel clunky and archaic, especially from a consumer UI and ease-of-use standpoint.
    However, he did NOT address that this was likely an unavoidable development; rather, he implied it was poor decision making by the camera manufacturers. 
    Apple is on track to spend $10 billion per year on R&D -- Nikon spends $550 million. Apple spends more on R&D than every camera company on earth combined (unless you consider Samsung a camera company; it spends even more than Apple -- $14 billion). For many people the cost of their smartphone is subsidized by the cellular carrier. This produces a mobile device that is unusually powerful, very refined and artificially cheap relative to a consumer camera.
    It is not just limited to cameras. I have the highest-end automotive Garmin GPS. It is OK but the UI is nowhere near as responsive as a 2016-model smartphone. Garmin's annual revenue is 1/100th that of Apple or Samsung -- they can't spend the R&D of those companies, and there's no cellular carrier to subsidize their product.
    Of all the companies making cameras, Samsung might have the highest annual revenues. It's further interesting that the NX1 was probably closer to Northrup's ideal camera than most others.
    My point is there is a lot more to implementing Northrup's vision than the *idea*. Implementing that at a polished level and a consumer-affordable price requires an *immense* and ongoing R&D investment. Admittedly there are also the corporate cultural issues of a camera/instrument hardware manufacturer in a software, UI-centric world, but I question whether Canon and Nikon could have delivered Northrup's ideal consumer camera in a timeframe to make any difference -- even *IF* they had the vision.
    Like you, I am not sure if it would have made much difference anyway. This is a tidal wave of change sweeping across the photographic landscape. The idea that consumer cameras could somehow "carve out" a protected little enclave by adopting a  few more consumer-friendly features is questionable.
  8. Like
    joema got a reaction from Cinegain, Davey and mercer in So what ever happened to 1080p?   
    From an editing standpoint it is really nice to have 4k material -- especially if finishing in 1080p. Below is an example of locked-down GH4 footage that is manipulated in post.
    As you said, the 1080p from some newer 4k cameras is worse than that of the "old" 1080p-only cameras before them. Sadly that is another reason to shoot in 4k -- because they made 1080p worse in some cameras. In theory 4k 8-bit 4:2:0 can be transcoded to 1080p 10-bit 4:4:4, provided you don't crop or stabilize (a toy sketch of the idea is below). That is another advantage for 1080p delivery -- 4k can provide bit depth and chroma sampling comparable to an external 1080p HDMI recorder, without the complexity. However 4k makes the "data wrangling" task of post production a lot harder.
    [embedded video: locked-down GH4 footage manipulated in post]
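    A toy sketch of why that works, under simplifying assumptions (straight 2x2 averaging, luma only; the 4k 4:2:0 chroma planes are already 1920x1080, so each 1080p pixel gets its own chroma sample):
```python
import numpy as np

# Toy illustration of the luma half of the "4k 8-bit 4:2:0 -> 1080p 10-bit"
# argument: summing each 2x2 block of 8-bit samples (instead of rounding the
# average back to 8 bits) yields values on a finer, roughly 10-bit scale.
rng = np.random.default_rng(0)
luma_4k = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)  # 8-bit values

blocks = luma_4k.reshape(1080, 2, 1920, 2)
luma_1080 = blocks.sum(axis=(1, 3))      # range 0..1020, ~10-bit scale

print(luma_1080.shape)                   # (1080, 1920)
print(int(luma_1080.max()) <= 1020)      # True
```
    This is only meant to show where the extra precision and full-resolution chroma come from; real downscalers do not necessarily work exactly this way.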
  9. Like
    joema got a reaction from Davey in Out now: FCP X 10.3   
    I edited with Premiere for years before switching (mostly) to FCPX. You can get the job done in either editor. Both are used to edit Hollywood feature films, although most of those are edited in Avid.
    Assuming "Premiere" means the entire Adobe suite, you have a wider array of tools. E.g., you can do spectral audio editing in Audition, whereas in FCPX you'd have to get an expensive external tool like RX5 for that.
    Premiere is available on both Windows and Mac, so you can build a very powerful Windows editing machine using the latest hardware, whereas FCPX is Mac-only so you're limited to that hardware.
    OTOH FCPX is generally faster and more efficient. Running on my 2015 iMac 27, it transcodes and exports to H264 about 4x faster than Premiere CC -- on the same hardware.
    A big advantage of FCPX is "digital asset management". It is essentially a database merged with an editing program. Premiere by contrast has limited ability to catalog, tag and keyword content, and no ability to do this on ranges within clips. Working on a large project with 50 or more hours of material, it is easy to get bogged down just trying to find content. I worked on a large documentary using Premiere and that was a big problem. We evaluated CatDV (an external asset manager), but back then it was unsuitable, so we ended up writing a complex Excel spreadsheet to keep track of all the content.
    By comparison FCPX has a built-in asset manager and makes finding content easy -- including tagged and keyworded ranges within clips. The FCPX "skimmer" is vastly faster than any other editor and facilitates rapid visual searches for content.
    Many people find FCPX easier to use -- initially. However, IMO FCPX is harder to fully learn and exploit. E.g., Premiere (at least until recently) had no storage management features, so obviously there was nothing to learn. FCPX has both managed and unmanaged libraries, plus all kinds of side issues related to this -- consolidation, creating "lean" transfer libraries, etc. 
    For people coming from other track-based editors like Avid, Vegas, etc., Premiere is familiar and requires no fundamental reorientation. By contrast, using FCPX most efficiently requires adopting a different workflow -- using the metadata features, tagging and keywording content in the Event Browser *before* you start cutting on the timeline, etc. This is especially true regarding the magnetic timeline. E.g., making a "split edit", aka "J cut" or "L cut", in Premiere is intuitive and straightforward -- the audio and video tracks are separate and this visually reinforces what you're doing. In FCPX, making that same edit without detaching the audio is not as intuitive.
    Up until the recent FCPX 10.3 release, Premiere had a major ease-of-use advantage in doing certain tasks on a multicam clip. E.g., you could easily apply stabilization, optical flow smoothing or color-correction tracking directly to the multicam clip. By contrast, FCPX required a complex workaround of looking up the timecode range in the base clips. As of 10.3 this has been improved, but I haven't fully tested it.
    From a cost standpoint, Premiere (for the whole suite) is about $50 per month per person, and Adobe essentially discontinued any non-profit discount with CC. FCPX is $299 for a one-time purchase and you can use it on all the computers that "you own or control", and updates thus far have been free. If you ever stop paying Adobe $50 per month, you lose access to your projects, although your rendered output will still be there. IOW you are never "vested" in the software no matter how many years you pay.
    OTOH $50 a month is a lot less immediate out-of-pocket expense than the previous one-time-purchase of the Adobe suite, which was thousands of dollars. For that monthly price you are getting a huge amount of diverse software which is continuously updated.
  10. Like
    joema got a reaction from Mat Mayer in Will this iMac be good enough for 4K video editing?   
    Well, you know your own needs, and if you're experienced with PP just stay with that. The problem is H.265/HEVC is extremely compute-intensive. A new 4k TV can handle it since the manufacturers can add hardware support for H.265 decoding. Current digital TV broadcasts use MPEG-2 or H.264 depending on the standard, as does Blu-ray (H.264), but squeezing 4k into over-the-air channel bandwidth will require H.265. Testing is ongoing, and years in the future the upgraded ATSC 3.0 TV standard will support that. It will also probably be used by satellite and cable providers, but that is years away. UHD Blu-ray will apparently use H.265/HEVC, but the decoding for that is currently only available in stand-alone hardware players. I don't think any PC or Mac can play a 4k UHD Blu-ray disc.
    The Quick Sync in Intel's Skylake (used in the 2015 iMac 27) supports H.265 hardware acceleration at 8 bits per color channel, so if playback and editing software supports that it will be vastly faster (a crude way to compare software vs. hardware decode is sketched at the end of this post). The upcoming Kaby Lake will support H.265 at 10 bits per color channel, but that will not be used for broadcast. FCPX has used Quick Sync for years, but unfortunately Adobe has not put this in PP for the Mac yet. They made some ambiguous statements at the last PP update which might imply they began using Quick Sync in PP on Windows.
    nVidia has hardware support for H.264 and H.265 in certain graphics cards, via the NVENC API. Likewise AMD has this in certain cards, accessed via the VCE API. However software developers must write to those APIs, and there are various versions and many different cards out there. Note this fixed-function logic for video acceleration is separate from the GPU's shader cores, although it is bundled on the same card. The software API fragmentation between NVENC and VCE, plus the multiple versions of those, discourages developers from using them. By contrast most computers with an Intel CPU from Sandy Bridge onward have Quick Sync (excepting most Xeons), so it's a broader platform to target.
    The problem with Macs is you can't change the GPU card to obtain better performance or to harness new software which has recently added support for NVENC or VCE. So (hypothetically speaking) if Adobe chose to support nVidia's NVENC over Quick Sync, there would be nothing the typical Mac owner could do, since recent Macs use AMD GPUs.
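    As a crude, hedged way to see whether hardware decode is actually helping on a given machine, you can decode a clip to null output with and without ffmpeg's -hwaccel option and compare the elapsed time (a sketch only; it assumes ffmpeg is installed, and the file name is a placeholder):
```python
import subprocess
import time

def decode_seconds(extra_args, src="clip_4k_hevc.mp4"):  # hypothetical file name
    """Decode to null output and return elapsed wall-clock seconds."""
    t0 = time.time()
    subprocess.run(["ffmpeg", "-v", "error", *extra_args,
                    "-i", src, "-f", "null", "-"], check=True)
    return time.time() - t0

sw = decode_seconds([])                    # pure software decode
hw = decode_seconds(["-hwaccel", "auto"])  # let ffmpeg pick Quick Sync, VideoToolbox, etc.
print(f"software: {sw:.1f}s   hardware-assisted: {hw:.1f}s")
```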
  11. Like
    joema got a reaction from Mat Mayer in Will this iMac be good enough for 4K video editing?   
    The problem is H264 4k is four times the data of 1080p (see the arithmetic sketched at the end of this post). It is an incredible load on any editing machine. Even FCPX can struggle with this on a top-spec 2015 iMac 27, and it uses hardware-accelerated Quick Sync on Sandy Bridge and later Intel CPUs (excluding Xeon). 
    GPUs by themselves cannot meaningfully accelerate H264 encode/decode, so import, export and scrubbing the timeline is mostly a CPU-oriented task if no effects are used. Effects can often (but not always) be GPU-accelerated, but this does not remove the CPU load from H264 encode/decode -- it just adds another burden.
    The bottom line is if you want fluid, responsive H264 4k editing you generally need to use proxy files -- whether on Premiere CC or FCPX. A higher-end Mac Pro or powerful Windows workstation might be able to avoid that but not an iMac. I edit lots of 4k every day on my 2015 top-spec iMac 27 using both FCPX and Premiere CC. It does fine on 1080p, but for my taste it's just not fast enough on 4k without using proxy files, except in limited situations for small single-camera clips. Other people might tolerate some sluggishness but it gets irritating pretty quickly.
    Since the iMac is about to be refreshed I'd recommend waiting to see what that includes. For the first time in several years, new 14/16nm GPU technology is available which may provide a significant increase on the GPU side. Although the GPU is mostly only usable for effects, this is still an issue so the more GPU horsepower the better.
    E.g., if plain editing seems slow on 4k, try applying a computationally intensive effect like Neat Video noise reduction. This and similar effects are incredibly slow to run on 4k, whether using GPU or CPU rendering. For effects using GPU rendering, at least there is the option of a faster GPU on machines where that is available.
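    The "four times" figure is just pixel arithmetic; a quick sketch (30 fps chosen only for illustration):
```python
# Why UHD 4k is roughly a 4x decode/render burden compared with 1080p.
uhd = 3840 * 2160            # 8,294,400 pixels per frame
hd = 1920 * 1080             # 2,073,600 pixels per frame
fps = 30                     # illustrative frame rate

print(uhd / hd)                          # 4.0
print(uhd * fps / 1e6, "Mpixel/s UHD")   # ~248.8
print(hd * fps / 1e6, "Mpixel/s HD")     # ~62.2
```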
  12. Like
    joema got a reaction from Frank5 in Documentary for TV broadcast with EOS M1   
    No. You said "If you are willing to pay Comcast $80 a month for a highly compressed crap picture who is an idiot in this scenario NBC or you?....Dude... $50 antenna and problem solved".
    That is not an option for the majority of viewers today. It may not be an option for you in the future, as the FCC plans on auctioning off the hugely valuable TV spectrum to wireless companies. They can do this because only about 7% of US households use antennas for OTA TV reception: http://www.tvtechnology.com/news/0002/cea-study-says-seven-percent-of-tv-households-use-antennas/220585
    An indoor or tabletop antenna does not work for many users. Anyone interested in this can use the tools at http://www.antennaweb.org/ to examine their location and geography with respect to the antenna type, size and compass heading required to receive local stations. You often cannot stick a gigantic (highly directional) antenna in your attic, for reasons including (1) insufficient turning radius and (2) interference from metallic HVAC ducting or foil insulation.
    That said, a 4-bay or 8-bay UHF bow tie antenna can work well in an attic if (a) You have an attic (b) If all the stations you need are within a narrow compass heading range (c) All the stations are on UHF (some HD channels are VHF), and (d) There is no major interfering metallic ducting or foil insulation. I have a 4-bay UHF bow-tie antenna and mast-mounted preamp in my attic and it works fairly well, although all the stations I need are within a narrow azimuth range (hence no rotator required), and they are fairly close.
    So many common factors often make it impractical to use an indoor or attic antenna. HOAs increasingly restrict outdoor antennas, however the 1996 FCC OTA reception rule says these can usually be challenged. Unfortunately most users are not aware of this: https://www.fcc.gov/media/over-air-reception-devices-rule
    So hopefully you can see that people who pay Comcast $80 a month are not idiots, and the problem is often far more difficult than "Dude... $50 antenna and problem solved".
    Besides being a professional documentary filmmaker, I have the highest-class ham radio license and have built many antennas by hand, including UHF, VHF and HF. I regularly teach classes on RF techniques, signals and modulation. I have installed many large TV antenna, rotator and low-noise mast-mounted preamp systems.
    It's important to give the OP the right advice. The advice about "buy a C100 mk II" does not work for the OP, since that is not a permitted camera from the standpoint of his 100 megabit/sec criteria. Although unstated in this case, networks which levy such requirements also often require 10-bit 4:2:2, which the C100 Mark II also does not do internally. 
    My main point was many networks have such little professionalism and commitment to quality they allow the distribution chains handling their licensed content to grossly degrade the image, while hypocritically demanding standards like 100 megabit/sec for submitted material. 
    I wanted to ensure everyone knows some networks widely disregard this at will, as shown in the above links I posted. But this doesn't mean shooting on an EOS M1 or M2 is the best approach, since they just aren't optimal from either codec or operational standpoint.
    If the OP literally must adhere to the delivery requirements (which likely include 100 megabit/sec and possibly 10-bit 4:2:2) he'll have to get a camera or combination of camera and recorder which support those. 
    If transcoding is permissible then 4k 8-bit 4:2:0 can be converted to 1080p 10-bit 4:4:4: http://www.provideocoalition.com/can-4k-4-2-0-8-bit-become-1080p-4-4-4-10-bit-does-it-matter/ In that case he could probably use a GH4 which is a great camera if equipped with the right lenses and accessories.
    If that is not permissible, then it will be very interesting to see how the networks react to the GH5, which apparently will hit every check box they have previously used to exclude "lesser" cameras. Will they raise the arbitrarily-enforced extreme delivery standards yet again? Or will they simply use approved and unapproved equipment lists and exclude the GH5 this way?
  13. Like
    joema got a reaction from Frank5 in Documentary for TV broadcast with EOS M1   
    The networks have power over distributors like Comcast -- they simply choose not to exercise that power because quality is not a priority. If Comcast decided to cut the bit rate to 200 kilobits/sec to free up bandwidth for local shopping channels, thereby reducing the main program to a pixelated slide show, they'd get a call from the networks very quickly, as advertisers would be irate when viewers bailed. 
    Re "who is an idiot" for not having an OTA antenna, a diminishing fraction of users have antennas, down to 7% by some estimates. The 93% of those you call "idiots" often have no choice and cannot practically use antennas. Since 1996 the FCC's Over The Air Reception Devices Rule says many HOAs can be challenged regarding antenna prohibitions but most people are not aware of this and cannot afford the hassle anyway: https://www.fcc.gov/media/over-air-reception-devices-rule
    Another issue with OTA TV is the value of the occupied RF spectrum is huge, and many other players want that spectral real estate. You may call those not using OTA "idiots" but when that last OTA spectral real estate is grabbed for other purposes, you'll find yourself in that category.
    http://www.tvnewscheck.com/article/91163/fate-of-ota-tv-hangs-in-the-balance-in-2016
    http://variety.com/2013/biz/news/its-big-tv-vs-big-telecom-over-broadcast-spectrum-1200329490/
    Re "do a pro job...just get a loan and buy a C100 mk II", that camera only does 8-bit 4:2:0 internally -- at only 24 megabits/sec. It would be rejected out of hand by the criteria the OP mentioned. Of course you can hang an HDMI ProRes recorder off it to achieve greater bit depth and chroma sub-sampling, but you didn't mention that.
    Despite these limits the networks widely use the C100 and various DSLRs (without any external recorders). The rules about bitrate and color depth are largely arbitrary and ignored whenever the networks so choose.
    CNN using a variety of DSLRs and Canon C-series cameras: https://joema.smugmug.com/Photography/CNN-Moneyline-DSLR-Shoot/n-ffF2JW/
    CNN using 5D Mark III: https://joema.smugmug.com/Photography/CNN-Using-5D-Mark-III/n-5JqGgB/
    CNN field segment shot on C100: https://joema.smugmug.com/Photography/CNN-DSLR-Video/n-scsdxs/
    ABC News shooting three-camera interview in front of White House: https://joema.smugmug.com/Photography/ABC-News-Using-DSLRs/n-BsScJC/
    ABC Nightline using video DSLR: https://joema.smugmug.com/Photography/ABC-Nightline-Using-DSLR/n-HwH8hG/
    2014 Super Bowl commercial for Gold's Gym shot using Canon DSLRs: https://joema.smugmug.com/Photography/DSLRs-shoot-Arnold-Golds-Gym/n-jzcNXR/
     
     
  14. Like
    joema got a reaction from Xavier Plagaro Mussard and Frank5 in Documentary for TV broadcast with EOS M1   
    It is ironic that networks require this, since the technical quality they deliver is often so poor. Note this frame grab of NBC footage from the Olympics: it is smeared, blurry and full of artifacts. Their excuse would probably be "it's not us, it's Comcast". However, transmission of network content is a signal chain that's only as strong as its weakest link. If they permit gross degradation of image quality at any point in the chain, then being persnickety about technical matters at other points is simply lost in the noise. It implies they don't really care about image quality.
    [embedded image: frame grab of NBC Olympic broadcast footage]
    The technical quality of the NBC Olympic content delivered to end users was so bad that the footage below, from 1894, was actually better. Imagine that -- some of the first film footage ever shot, and it's better than what NBC delivered. Despite having supercomputers on a chip, satellites in space, and optical fiber spanning the globe, the delivered quality was worse than an old piece of film.
    [embedded video: 1894 film footage]
  15. Like
    joema got a reaction from sandro in 5DIV full spec and full image leak   
    What is the source of this information? My understanding is Skylake already has full hardware support for 8-bit H.265/HEVC (such as output by the NX1). It was Haswell and Broadwell which had partial support. This was tested here: http://labs.divx.com/hevc-hwaccel-skylake
    Kaby Lake will have hardware support for 10-bit HEVC but this has nothing to do with whether Skylake has full hardware support for 8-bit HEVC. It does:
    http://www.fool.com/investing/general/2016/01/28/understanding-the-biggest-improvement-intel-corp-i.aspx
  16. Like
    joema got a reaction from IronFilm in Camcorders   
    My group has a G30 and XA25. I will be shooting some instructional material with the G30 tomorrow, just because it's easy. We usually use larger-sensor cameras but cameras like these are very nice for certain things. They are straightforward to use, relatively inexpensive, and have superb stabilization. Battery life is good, they don't have a 29 min. recording limit and they don't overheat. An experienced operator can get good looking content.
    When you consider how much material has been shot with the AG-DVX100 tape-based DV camcorder (including Oscar-nominated documentaries), and how superior modern HD camcorders like the G40 are, you might wonder why anyone would want anything else.
    The answer is that, despite the advantages, they don't have the lush cinematic look of a higher-end large-sensor camera and don't do well in low light. Unlike a decade ago, when DV was a common doc format, today even a well-operated entry-level DSLR can produce cinematic-looking material. Viewers have come to expect that, whether they can verbalize it or not.
  17. Like
    joema got a reaction from dafreaking in A story about 4K XAVC-S, Premiere and transcoding   
    Almost any 7200 rpm 3.5" drive would work for this, but they are externally powered, hence not very convenient for portable use. For 1080p, it's no problem from a CPU or I/O standpoint. I edit a lot of 4k XAVC-S, and for camera-native files the data rate isn't that high (see the arithmetic sketched at the end of this post). However the CPU load is very high, especially for Premiere. This leads to transcoding to proxy (a CPU-bound operation), which takes time and increases I/O load when completed, since the resulting files are much less dense.
    If you want portability, then staying with a bus-powered drive is nice but most USB 3 bus-powered drives are too slow, IMO. The 4TB Seagate Backup Plus Fast is bus-powered, only about $185, and it's pretty fast (internally RAID-0): https://amzn.com/B00HXAV0X6 I have several of those and they work well. Below are other bus-powered external SSD options I don't have personal experience with.
    Lacie 1TB Thunderbolt bus-powered SSD ($900): https://eshop.macsales.com/item/Lacie/9000602/
    Transcend 1TB Thunderbolt bus-powered SSD ($589): https://amzn.com/B00NV9LTFW
    If USB 3 is OK, this 1TB bus-powered external SSD is about $400: https://eshop.macsales.com/item/OWC/ME6UM6PGT1.0/
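    A rough sketch of the data-rate arithmetic behind the comments above (the 100 Mbit/s figure is the nominal 4k XAVC-S bitrate and should be treated as an assumption):
```python
# Convert a nominal camera bitrate into drive throughput and storage numbers.
mbps = 100                                # nominal 4k XAVC-S bitrate, Mbit/s
mb_per_s = mbps / 8                       # 12.5 MB/s per stream
gb_per_hour = mbps * 3600 / 8 / 1000      # 45 GB per camera-hour

streams = 4                               # e.g. a 4-camera multicam edit
print(mb_per_s, "MB/s per stream;", mb_per_s * streams, "MB/s for", streams, "streams")
print(gb_per_hour, "GB per hour of footage")
```
    Even four simultaneous streams come to roughly 50 MB/s, which is why I/O is usually the lesser problem compared with the CPU cost of decoding.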
  18. Like
    joema got a reaction from dafreaking in A story about 4K XAVC-S, Premiere and transcoding   
    There is no simple answer. Some systems can edit the camera-native files with good performance for one stream. Most systems cannot do this smoothly for 4k H264 multicam, and some type of transcoding is needed, whether externally before ingest or to proxy during/after ingest. Fortunately Premiere now supports this and gives various resolution and codec options for proxies, including H264, Cineform and ProRes 422. FCPX always transcodes to 1/4-res ProRes 422, e.g., 1080p from 4k.
    Also (as already mentioned) not all 4k H264 codecs are the same. Some may exhibit smoother editing on certain software.
    For documentary projects with a large shooting ratio, it is nice (in FCPX) to skim through the camera native files without transcoding all that. For scripted narratives or other content with a lower shooting ratio, the workflow might favor transcoding everything up front or possibly doing initial selects outside the editor before import.
    Some groups mandate ProRes recording off the camera, so all their cameras either do this internally or have external recorders. Others do the initial evaluation and selection using camera native files. Still others transcode to a mezzanine codec before ingest. It depends on the equipment, preferences and workflow policies of the group. My group can shoot a terabyte of 4k H264 per weekend so we don't transcode to ProRes before or after ingest since that would be at least 8 terabytes. We selectively transcode to proxy after ingest if needed for 4k multicam.
  19. Like
    joema got a reaction from tellure in A story about 4K XAVC-S, Premiere and transcoding   
    What Don said is correct. A GTX 980 or 1070 will probably only help scrubbing if the stuttering is caused by effects. H264 decoding is mostly a CPU task, and there are only two ways to meaningfully accelerate it: (1) Quick Sync, or (2) dedicated codec hardware such as nVidia's NVENC or AMD's VCE. In either case the software must use those APIs. Currently Premiere CC does not use Quick Sync, although Adobe made some ambiguous statements at NAB about Iris Pro graphics which might imply this is planned for a future version, but they also said Windows only for now. Note that most Xeon CPUs do not have Quick Sync hardware. 
    I don't know if Premiere CC uses NVENC or to what extent. There are several versions of NVENC, each with varying capabilities. Apart from Quick Sync, NVENC or VCE, I don't think GPU-accelerated XAVC-S (which is H264) transcoding is possible. Some utilities may advertise this, but the fine print usually says "only for effects". So if your timeline has lots of unrendered effects and you transcode it to an output file, the GPU can accelerate the effects rendering but not the encoding.
    Note NVENC/VCE are bundled into GPU cards but they are architecturally not part of the GPU. They are a separate block of ASIC logic (Application-Specific Integrated Circuit): https://en.wikipedia.org/wiki/Nvidia_NVENC
    The good news for Premiere users is Adobe knows this is a problem area and they will be adding significant performance improvements to Premiere, including integrated proxy media, Metal API for better effects performance (Mac only), and possibly Quick Sync (Windows only for now). 
    Until those improvements are available, your best bet for smooth H264 4K editing is a manual proxy workflow. Tony Northrup describes the procedure here:
    http://www.rangefinderonline.com/features/how-to/Getting-Acquainted-with-Offline-Video-Editing-to-Ease-You-Into-4K-8988.shtml
    Re smooth scrubbing in Premiere CC on 4K XAVC-S using a 6700K or 5820K: my 2015 iMac has a 4GHz 6700K, and it is definitely not smooth, nor is my 4GHz Windows PC, but that has an older i7-875K and a GTX 660. I will be testing a GTX 1070 pretty soon and will report any differences. Unless Adobe uses NVENC, I really don't see how the GTX 1070/1080 by itself will help sluggish 4K H264 scrubbing, since that is a decode problem, not an effects problem. However most edited content has some effects, so a faster GPU is useful for that.
    As bad as these issues are for H264, it will get even worse in the future if H265 or VP9 become more widely used, since those are even more CPU intensive. The Quick Sync on Skylake and later CPUs can accelerate H265/VP9 and I think nVidia's most recent version of NVENC can do this. The issue is software must take advantage of these features.