Posts posted by theSUBVERSIVE

  1. On 7/14/2020 at 5:29 AM, Andrew Reid said:

    I completely agree.

    Canon thought the 8K and specs would outweigh the drawbacks of overheating problems.

    We'll see however if this harms the Canon brand image for reliability, and how many of the target pro customers (EOS R5 in particular) return the camera within a month because they can't get through shoots.

    It might end up being a non-issue, or it might end up hurting sales.

    By the sounds of McKinnon though he is having a lot of problems.

    We'll see.

    And yes at least they were transparent about it and made the recording time figures available.

    I don't think this will hurt their reputation, because what could be worse than Canon's complete lack of innovation in video on photo/hybrid cameras ever since they entered the Cinema market? Sony took more than two generations to get overheating completely under control - spoiler alert: making the cameras bigger helped, who would have guessed? - and people kept buying Sony anyway. If people are still willing to overlook Canon's past decade and get on board again, at least Canon is bringing something new, and compared to the last 5D it's a big jump.

    The bad part is that it might overheat even in 4K. A compromise in 8K, something most people don't need, is more acceptable than one in 4K, which is already the standard. But even with the overheating, at least they are pushing technology and competition forward, so despite the "catch" I see this as an overall positive for Canon - although their bar was set pretty low to begin with... hahahaha... Their latest Cinema cameras were well received too; the jump people expected from the previous generation in some models finally came and reviews have been very positive. But we'll see...

    Sony must be relieved: at least they won't look that bad for not bringing more than 4K with the so-much-rumored-and-awaited A7SIII - since the S1H's 6K still can't get AF right and Canon's 8K overheats.

  2. 9 hours ago, rawshooter said:

    The likely reason is that Canon design and manufactures its own chips/electronics, and that its chip technology is old and not on the level of contemporary chip manufacturing of 14nm or less.

    Bigger structures/more nanometers mean: less power efficiency, more heat production. (That's why smartphones, using the latest and smallest chip technology, are so incredibly powerful at low electricity consumption and low heat dissipation.)

    Panasonic, as much bigger electronics manufacturer, has access to more modern chip technology. They likely also manufacture the electronics of the Sigma fp (since they also manufacture the electronics of Leica cameras, and since the fp uses a Panasonic battery).

    This is also the reason why Canon's self-manufactured sensor chip are behind in performance to Sony's.

    Canon is a big company with plenty of R&D - much more than Panasonic's Lumix + Varicam divisions and Sigma combined - so I'm positive they could get their hands on whatever processing node they want and put together a camera that doesn't overheat if they wanted to. In the same way, Sony could have avoided overheating issues if they hadn't wanted to shrink the camera down so much in the beginning - they knew about the problem, since even the earlier NEX cameras had it.

    The positive is that Canon was more transparent about it from the get-go. I'm also pretty sure that if Canon had wanted to, they could have chosen either the Sigma or the Panasonic solution, or even both combined, but they chose not to.

    When the specs came out, everybody was asking themselves what the catch was. Would it be a crop factor? Maybe a paid firmware? Maybe there was no catch at all - but then there was.

    The rumors about the A7SIII seem to point to a 4K camera, so no 6K or 8K, but possibly 4K120p. I wonder what Panasonic will do with the GH6 - they should get their AF shit together once and for all.

  3. When these cameras were not out yet, a few years back, I thought Panasonic should pack video features into the G9 just like the upcoming GH5.

    At the time I thought the GH5 would be something like the GH5s but with IBIS, so it would offer better low light like the A7S line-up, plus better rolling shutter and readout speed, while the G9 could still offer compatible V-Log L capabilities so it could work as a B-cam for video. That way you could have higher-res pictures with the G9 and better video with the GH5, and they would complement each other pretty nicely, in a similar way to what Sony does.

    In the end the G9 didn't get the video features - well, until now - the G9 and GH5 ended up sharing the same 20MP sensor, and there was a separate GH5s with better low light, a multi-aspect sensor and no IBIS.

    Panasonic has backed down from the early promise of 8K by 2020 for the GH6. They showed their 8K camera with an organic sensor, but from what I've read it's a power-hungry sensor, so it doesn't look like the technology is quite ready for consumer cameras. Will the GH6 or G10 come with ToF AF?

     

    G10:

    • higher MP count
    • IBIS
    • reliable continuous AF for sports
    • high-speed and high-capacity buffer
    • maybe dual-gain circuit if they can
    • 4K 10-bit

    GH6: 

    • new 20-ish MP sensor
    • dual-gain
    • IBIS
    • 240+fps slow-mo
    • maybe 4K90p or 120p?
    • 4K60p 10-bit internally 
    • reliable AF without hunting/pulsing in video
    • S1H's hinge/fully articulated screen
    • at least the same RAW recording as the S1H

    It seems like a reasonable list; the major doubts are the new sensors and ToF AF or PDAF, but if Panasonic can pull that off, the two will make a pretty good pair for photo, video and hybrid shooters - enough to keep Micro 4/3 interesting without disrupting their S series.

  4. I think it's an interesting compromise.

    It's not like Leica was known for video, or that people had that kind of expectation anyway. When it was launched, Panasonic was not yet part of the L-Mount Alliance, but it still packed pretty much everything they could put in it in terms of video-oriented features - it was the only FF with 10-bit. Most people will still use it primarily for stills rather than video, but the features are there, and in better shape than, say, the Sony A9 II - which was expected to pack everything for stills AND video. The irony is that even Canon went all-in with the 1DX Mark III while Sony kind of turned into Canon... hahahaha...

    For Panasonic and Leica it's also a good deal: Panasonic saved its video features for the S1 and, of course, the S1H, with the S1R having more limited video capabilities, while the SL2 comes as a Leica version of the S1R but with all the video features the S1R doesn't have. In that sense the two cameras don't overlap, it's some sort of win-win for both. Leica is still able to offer video features as a bonus, and despite the SL2 and the S1R being very similar internally, they are different.

     

  5. If they do it right, this camera could help Panasonic gain a lot of traction in the higher-end market, and the timing is quite good since it doesn't look like Sony or Canon will be announcing anything soon. But despite their recent announcements, they might also be aiming at Tokyo 2020 for their 8K cameras, who knows.

     

    Edit: oh, after watching another interview, it's mainly for broadcast, so I don't know when they might use this sensor for a Varicam/Cinema solution

  6. On 8/28/2019 at 12:57 AM, newfoundmass said:

    Sony is far from focusing solely on FF. They've released like 7 APSC cameras in the last 4 years it feels like. 

    That's not really how you measure "focusing".

    Is Sony trying to grow and improve their APS-C cameras as they are doing with their FF? No, not even close.

    Did Sony release new weather-sealed versions of their old lenses, or high-end cameras to pair with the expensive lenses? No, people are still waiting for the NEX-7's successor - plenty of people are asking for an A7000 out there... hahaha...

    Is Sony's APS-C system development comparable to Fuji's APS-C over the same timeline? Not really. Even with Fuji releasing a larger-format system, they still have a pretty good APS-C line-up, the X-T30 being the latest. Fuji just needs to find a way to implement IBIS in more cameras and solve the heat problem so the cameras are not limited to 10 or 15 minutes in 4K.

    I think it's pretty fair to say Sony is NOT focusing on APS-C the way they are on FF. Like I said, they are just making low-maintenance releases to show they are still there - and because it's still selling well, I suppose. But it's not worth the R&D for a higher-end tier when they have FF for those people.

    With the rumors of the FX cameras coming soon, I doubt Sony will push new video features into the A9 II or the A7SIII. The original A9 was already weirdly crippled in video; historically Sony has done that, saving video features for their video cameras, so I don't see why they would change that all of a sudden. It would be a nice surprise, but it just seems unlikely. Despite the A7SIII being a more video-oriented camera, with the FX cameras coming it doesn't feel like Sony will share their video features with it, and so far there is no sign that the A7SIII is coming anytime soon.

  7. 7 hours ago, BTM_Pix said:

    Each lens is profiled to calibrate real distance and focus position.

    The camera's internal AF is bypassed completely and the lens is operated as though it were under manual control which allows instant switch or ramped changes as it is tracking.

    In terms of manufacturers making it better, then Panasonic et al will implement this in a different way and will definitely improve on it by virtue of having far, far smarter people than me involved in its development and having far more of them. In terms of speculating how they will do it internally whether its incorporated into the sensor itself or as part of the image processing pipeline through AI then they could go either way really. 

    I've mentioned a couple of times in this thread that what I've shown thus far is very much a jumping off point for this and by the time the commercial product version of it becomes available in early 2020 it won't have changed conceptually in terms of it being an outboard system that can be added to different cameras but how that is achieved will and, without giving too much away at this stage, already is changing.

    I'll drip a couple of things into this thread between now and then.

    @BTM_Pix Thank you very much for your reply.

    I wasn't even sure if an in-sensor approach was possible in an interchangeable lens system.

    From what I was able to understand, there are a lot of different ways to implement LIDAR technology, and one of the things I've read is that one LIDAR unit can interfere with another. Is that true for all implementations or just for some? It would be an issue to have interference when two or more cameras are being used.

    From what I've read, you can use one device that emits the laser and then use a CMOS sensor as the receiver/detector; if so, I wonder whether that would be a good option for an interchangeable lens system.

    Does anyone know the difference between hybrid CDAF/PDAF and Dual Pixel AF? Does Dual Pixel AF suffer from the same degradation/banding as hybrid CD/PD AF?

  8. @BTM_Pix so how does it work for all the different focal lengths? Do you cross the data the camera knows about its lens and focal length with the data the LIDAR sensor gets? So as long as it's calibrated, the LIDAR knows what the camera is looking at?

    Also, how does it work for continuous AF? Does it use the camera's own algorithm to track?

    If you were to speculate about how Panasonic could implement ToF AF in a future camera, what do you think would be the best way for them to do it?

    Is it even possible to give the normal camera sensor some in-sensor LIDAR capability, or does it have to be a completely separate sensor? If it has to be separate, it would need to sit somewhere that won't be easily obstructed, probably near the EVF - like mounting on the hot shoe. But I wonder if it's possible to do it in a way that lets the LIDAR sensor see the same thing as the main sensor/lens.

    Sorry for all the questions, I'm just trying to understand how this works and what a manufacturer could do to make it even better. If they cross the LIDAR data with the lens data - like DFD does - plus the data they get from AI about identifying a subject and prioritizing eye tracking - like Sony does - and combine all of that, they should get a pretty good AF.
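    Just to make the calibration idea concrete for myself, here is a minimal, purely hypothetical sketch of how a profiled lens could map a ToF/LIDAR distance reading to a focus motor position - the profile values, names and numbers are all made up, and a real system would also profile per focal length and handle tracking.

        from bisect import bisect_left

        # Hypothetical calibration profile for one lens at one focal length:
        # (subject distance in metres, focus motor position in steps).
        # A real profile would be measured per lens and per focal length.
        LENS_PROFILE = [
            (0.5, 120),
            (1.0, 310),
            (2.0, 540),
            (5.0, 760),
            (10.0, 880),
            (float("inf"), 1000),  # infinity focus
        ]

        def focus_position_from_distance(distance_m):
            """Map a ToF/LIDAR distance reading to a focus motor position
            by linear interpolation between calibrated points."""
            distances = [d for d, _ in LENS_PROFILE]
            positions = [p for _, p in LENS_PROFILE]
            if distance_m <= distances[0]:
                return positions[0]
            if distance_m >= distances[-2]:  # beyond the last finite point
                return positions[-1]
            i = bisect_left(distances, distance_m)
            d0, d1 = distances[i - 1], distances[i]
            p0, p1 = positions[i - 1], positions[i]
            t = (distance_m - d0) / (d1 - d0)
            return round(p0 + t * (p1 - p0))

        # Example: a LIDAR reading of 3.2 m
        print(focus_position_from_distance(3.2))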

  9.  

    2 minutes ago, crevice said:

    I think what it comes down to is actually even more simple than that - which camera will have the better image? At this point in the race, Sony is close to dead last, if not dead last when it comes to how the footage actually looks - so it's hard to actually say how much improvement we will see with the new A7S. Their colors aren't great and the footage lacks the cinema look that I am seeing from these S1H videos today. Obviously these video today were shot with super expensive lenses and by professionals, but they still look very impressive with great dynamic range, great motion cadence, and great color - with a subtle softness and good highlight roll off. To me that gives a very pleasing cinematic look that I do not have confidence in Sony to nail.

    I am curious, who makes the sensor in the S1H? Is it Sony or Panasonic?

    I agree with what you are saying, but I don't think it's that simple, because that's not how the majority of people will look at it when deciding whether or not to buy.

    Different people have different uses, and Sony appeals to a wider public than Panasonic does. Videographers know about Panasonic, but until Panasonic solves AF, it will be hard for them to have the best-selling hybrid camera. Even though DFD isn't that bad as a photo AF, it still gets a bad rep. Even among videographers, Panasonic has become a "niche" product, because a lot of people now rely on video AF - all the other manufacturers offer at least a decent solution - without the micro back-and-forth DFD has, and those systems are much smoother.

    With Sony basically focusing solely on FF, people who can't afford FF or want a more budget-friendly solution might go Panasonic, but now even Fuji offers a compelling camera for this market as well.

    Panasonic has indeed got much better at color - away from that yellow-orange-ish rendition they used to have - and now delivers pretty natural and accurate color right off the bat. It has become easier to grade, and the lack of 10-bit kills it for Sony - for those that need it. Regarding the image, Panasonic has improved so much and offers much more than Sony does now. But that's not really making Panasonic outsell Sony, and I'm pretty sure Panasonic didn't go FF just to keep their niche; they will need to solve AF if they want to sustain two mounts and expand their brand like they say.

     

  10. Regarding the A7SIII, there are a few things that I think it's highly unlikely Sony will put on an Alpha camera to top the S1H. It feels like Sony has stepped back and gone to the drawing board for the A7SIII at least a couple of times because of Panasonic - it should have been out already.

    I'm pretty sure Sony is still debating whether or not to include 10-bit 422. Can it record for unlimited time? Will it overheat? Can Sony's IBIS match what Panasonic offers? Will it have a fully articulated screen? Can it output full-size HDMI? Will it record RAW? Maybe Sony will finally use a 10-bit codec, but I don't think they will offer unlimited recording time and better IBIS; a fully articulated screen? I'm not sure either.

    I don't think the A7SIII and the S1H will be aimed at the same market. Sony will probably beat the S1H in a few areas like AF, probably full-sensor readout speed and rolling shutter, maybe high ISO, slow motion and some other sensor-related tech. The A7SIII will probably be a better A7SII with all the advancements Sony can put into it, but I highly doubt it will pack the same kind of features Panasonic is putting into the S1H.

    There was a rumor about Panasonic working on time-of-flight distance AF, which is clearly Panasonic's major weakness compared to all other cameras when it comes to video. Sure, not everybody needs AF and a lot of people still prefer MF, but it wouldn't hurt Panasonic to provide a comparable AF, at least one that's there if you need it. If Panasonic really delivers a better AF and more L-mount lenses, it will be much harder for Sony to keep up if they don't step up in some other areas.

    So I don't think Sony can compete with the S1H on its strengths, the same way the S1H won't appeal to the same crowd as the A7SIII. Even though the S1 is a pretty well-balanced all-rounder hybrid camera, until Panasonic figures AF out, a chunk of this market will still be Sony's.

    The full V-Log implementation is a very welcome addition, I really like the Varicam's color.

  11. When Sony bought a stake in Olympus, didn't they become partners at some level, with Olympus helping Sony develop lenses or something?

    Could that have played a role in why Olympus didn't commit to the L-Mount Alliance?

    I don't think they intend to remain a Micro 4/3-exclusive manufacturer - and I don't even know if they could sustain that - but I don't think Sony would be open to Olympus developing their own E-mount camera, so the L-Mount Alliance would be the better route, and much better than developing their own system with a new mount.

    Sigma is offering mount swaps for their FF lenses, which is great for people who are already invested in them, plus they will sell their lenses in L-mount. Not only will they benefit from that, consumers will benefit even more.

    The lack of a fully articulated screen and of PDAF are big question marks for the Panasonic S cameras. The articulated screen might be down to the cable connections and the hinge, but the lack of PDAF can't really be explained. Regardless of how invested they are in DFD, I'm pretty sure they could manage to use PDAF. In the meantime, since Dual Pixel showed up, every other manufacturer has caught up with better PD-based AF; Panasonic is the odd one out in the market.

    Sure, they may eventually come up with reliable AF tracking for video with the AI-assisted DFD, but during the GH5 development announcement they talked as if the AF was just as good as anyone else's, and that didn't pan out, so it's harder to buy whatever Panasonic says about the new AF.

  12. If this is really a thing - and I think a lot of us have wondered before why they don't do what Sony does with the A7S and A7R: low light and more pixels - I think a new and more powerful processor is indeed important, since tracking AF performed differently when just outputting the signal through HDMI compared to tracking while recording in camera. That points to a bit of a bottleneck, because there shouldn't be any difference in performance.

    Canon might have Dual Pixel AF, but so does Samsung, and even Google used it in the Pixel 2. I don't know if Google has a patent of their own, but I'm pretty sure Samsung's is not the same as Canon's, so it's possible for other companies to have similar tech. It wouldn't surprise me - even more so if people will HAVE to pay extra - if Sony/Panasonic either developed something similar or are indeed paying for it.

    Low light was one of the areas where people said the GH5 couldn't keep up with the competition, but video AF was something even more people complained about. So I think it would make sense for them to step up their game, be it with a combination of PDAF and CDAF or something like Dual Pixel.

  13. A couple of years ago I described what Sony could do to make a real PRO FF mirrorless: a 24MP FF E-mount A9 with high-frame-rate burst, a bigger form factor, no overheating and a bigger battery, though I also thought Sony would add an articulated touch screen. That's what made sense to go after Canikon, and it seems that's what they did.

    "One that could have around 24MP with fast readout that could do both 4K in Full Frame and 1:1 pixel 4K in APS-C/Super 35 mode. Not to mention great slow motion and burst frame rate."

    "The A9 will pack everything Sony can, the A7 family was just a study case and preparation because the plan was to make the A9 go against the big boys, Canon 1DX and Nikon D5 but doing so in a Sony fashion, full of big technological advancements."

    "I wonder how much will this A9 cost, maybe with the grip it will be at least $5000. Another interesting point is if Sony will pack advanced video feats in this camera."

    "There are two ways to look at the A9, one is that the A9 is a natural evolution of the A7 series but for me, I think that the A7 cameras existed so the A9 could exist. It’s like they used the A7 to fund the R&D of the A9, but they needed time and enough technological advancement to pull that off after the A-mount experience failed and also to make it worth announcing against Canikon’s big boys.

    I also think that this is why they haven’t made a bigger form factor for the A7 and I’ve said before, when I say bigger, I’m not saying DSLR-like, simply a bit bigger would do. Enough so overheating is no longer an issue and the camera doesn’t have any form factor compromise, for me, overheating has become a way to separate and justify bigger and more expensive cameras."

  14. There is no option to bake a LUT into the footage - for now. But hopefully, if enough people ask for it, there will be. I've asked around, and this is something some of the Lumix ambassadors were also discussing with Panasonic.

    As far as I know, for now all you can do is convert a LUT to the Varicam .vlt format and use it with the function called V-Log L View Assist, which only works in V-Log L. It would be very interesting if Panasonic could open that option up for other picture profiles, but I doubt they will.

    But the ability to select, preview and bake a LUT in-camera would be very useful and welcome.
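    In the meantime, "baking" a look is easy enough in post. As a rough illustration of what that means (nothing to do with Panasonic's firmware), here is a simplified sketch that applies a 3D .cube LUT to an RGB frame with numpy, using nearest-neighbour lookup instead of proper trilinear interpolation; the LUT file name is made up.

        import numpy as np

        def load_cube_lut(path):
            """Parse a basic 3D .cube LUT. Returns (size, table), where table is
            shaped (size, size, size, 3) and the red index varies fastest in the file."""
            size, rows = 0, []
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    if line.upper().startswith("LUT_3D_SIZE"):
                        size = int(line.split()[-1])
                    elif line[0].isdigit() or line[0] == "-":
                        rows.append([float(v) for v in line.split()[:3]])
            table = np.array(rows).reshape(size, size, size, 3)  # indexed [b, g, r]
            return size, table

        def apply_lut(frame_rgb8, size, table):
            """Apply the LUT to an 8-bit RGB frame with nearest-neighbour lookup."""
            idx = np.clip(np.rint(frame_rgb8 / 255.0 * (size - 1)).astype(int), 0, size - 1)
            out = table[idx[..., 2], idx[..., 1], idx[..., 0]]  # table order is [b, g, r]
            return (out * 255).astype(np.uint8)

        # Hypothetical usage with a made-up LUT file and a random test frame:
        size, table = load_cube_lut("vlog_l_to_rec709.cube")
        frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
        graded = apply_lut(frame, size, table)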

  15. Quote

    2. The H.265 codec and 5K recording has plenty of potential on the GH5. With the current firmware it is under-exploited. The ‘6K photo mode’ is locked to 30fps files in 4:3 or 3:2 ratio with no option to change this to 24fps 16:9 if you would like to record short 5K videos. Perhaps there are heat and power consumption issues to be managed but it would be nice to have a 5K video mode. Besides I don’t know why they call it “6K” Photo Mode when the maximum horizontal resolution is 5K. Weird marketing Panasonic!

    I asked Sean Robinson about it, but his reply was a bunch of marketing mumbo jumbo that didn't even make sense; it seems they simply wanted the 4K, 6K and 8K progression, even though nobody actually cares about that.

    I even asked why they don't release a 5K video mode then, instead of just the open-gate hi-res anamorphic; whether the hi-res anamorphic will be 10-bit 422, since 6K Photo is 10-bit 420; and, since H.264 doesn't support HLG HDR, whether the HLG firmware will also bring H.265 4K 10-bit 422 files. But he didn't answer any of that.

    Quote

    6. The EVF was sharp, but nowhere near as good as the Leica SL for manual focus with the focus aids switched off

    It seems the EVF is basically the same as the Leica Q's. The only other manufacturer that uses a 3.2" 1620k-dot LCD is Canon, so it may be the same panel, who knows?

  16. 5 hours ago, Andrew Reid said:

    What Cinema5D are seeing is not a serious “problem with bit-depth” as they put it, it’s just compression and a very poor grade. Perfectly normal and the Sony FS5 initially had macro-blocking in 10bit far worse than what I see with the Panasonic GH5.

    Is 10-bit a typo? I remember the FS5 having macroblocking issues when recording 4K internally, but the FS5 doesn't record 4K 10-bit 422 - it's XAVC-L 8-bit 420.

  17. Yeah, those were almost exactly my comments there. C5D makes technical mistakes like this far too often - it happens in basically every article. They look more knowledgeable than they actually are, and I've seen this in so many articles that I still follow their news but take any technical report with a grain of salt.

    They are confusing compression issues with color depth issues, two very distinct things, and propagating terrible misinformation to those who know even less than they do.

    To clear this up, they should have done an external recording to rule out compression and codec issues before jumping to "10-bit is no good". Plus, I'm not really into this level of pixel peeping anyway. They also missed some other observations about the other cameras' artifacts, and even some of the GH5's advantages.

    In the end, they are spreading a lot of misinformation, which will only further confuse people who already don't understand what color depth and chroma subsampling are. This is already a complicated topic to discuss, there are a lot of poorly done tests about it out there, and this isn't doing it any favors.
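    For anyone still mixing the two up, a tiny numbers-only sketch of why bit depth and chroma subsampling are different axes: bit depth is how many tonal levels each channel can hold, subsampling is how much spatial resolution the color channels keep.

        # Bit depth: number of code values per channel.
        for bits in (8, 10, 12):
            print(f"{bits}-bit -> {2 ** bits} levels per channel")

        # Chroma subsampling: chroma samples kept per 4x2 block of pixels
        # (luma always keeps all 8). 4:2:2 halves the color resolution
        # horizontally; 4:2:0 halves it horizontally and vertically.
        chroma_samples_per_4x2 = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}
        for scheme, samples in chroma_samples_per_4x2.items():
            print(f"{scheme} -> {samples} chroma samples per 4x2 block")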

     

  18. Sure, they could offer that, but I doubt they will. I mean, marketing 1080p RAW is not as appealing as anything 4K, so they probably don't think it's worth it. But it didn't need to happen only now: if Blackmagic is using SD cards to record 4K ProRes HQ with their 4K recorder, that's more than what is needed for their 1080p CinemaDNG RAW, so this argument could have been made quite a bit earlier.
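    As a back-of-the-envelope check on that (my own rough numbers - the ProRes figure is the commonly quoted approximate rate, not an exact spec): uncompressed 1080p 12-bit raw lands at roughly the same data rate as, or below, UHD ProRes HQ, so a card that handles the latter should handle the former.

        # Rough data-rate comparison: uncompressed 1080p 12-bit raw vs 4K ProRes 422 HQ.
        def raw_rate_mb_s(width, height, bits_per_photosite, fps):
            """Uncompressed Bayer raw data rate in MB/s (1 MB = 10^6 bytes)."""
            return width * height * bits_per_photosite / 8 * fps / 1e6

        cdng_1080p24 = raw_rate_mb_s(1920, 1080, 12, 24)
        prores_hq_uhd24_approx = 88  # ~707 Mbit/s is the commonly quoted UHD 23.98p figure

        print(f"1080p24 12-bit uncompressed raw ~ {cdng_1080p24:.0f} MB/s")
        print(f"UHD 24p ProRes 422 HQ           ~ {prores_hq_uhd24_approx} MB/s (approx.)")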

  19. From my understanding, Leica wanted video shooters to use Super 35 lenses on the Leica SL, right? I'm not sure how many would do it, but a Leica SL with an external recorder outputting 10-bit 422 should be interesting - how is the rolling shutter? The new M10 has a better sensor with less overall noise, which makes me wonder who made it. The Leica Q and SL sensors are made by TowerJazz (with Panasonic); is this one from somebody else?

    This also makes me think about the next Leica of this kind. Panasonic put a lot of good things into the SL - external 10-bit 422, L-Log, 120fps, etc. - quite similar to the GH4. Will the next Leica have internal 10-bit 422 and 4K60p like the GH5? I wonder if more people would consider Leica for video if it delivered that. The GH4 has an extra crop and so does the SL; would the next one have a full sensor readout without a crop like the GH5 as well? It would be like a FF GH5 with Leica colour science - and price! hehe...

  20. Oh, there is another interesting bit regarding the GH5 and H.265.

    The GH5 will support HLG, which is an HDR standard, right? It has to be 10-bit 422 - people already know that, and that's why the GH5 will support it - but so far H.264 doesn't support HLG, and unless that gets added or the GH5 starts to use VP9, the only other codec that supports HLG is H.265. If so, this hints that, either way, at least the HLG content will have to be output through HEVC.

  21. 6 hours ago, Don Kotlos said:

    Since it appears they are going to use the H264 for the 400 option, then the codec is already developed :

    http://pro-av.panasonic.net/en/sales_o/p2/AVC-ULTRAoverview.pdf

    But if it needs some modification for GH5, I am not sure how you are so confident that it is the case, or they need 4 extra months to do so. 

    Yes, but had they released the 400 option along with the GH5 without any V60 cards actually available, then people would try to find which cards work which don't and that would just create a very bad experience. Look at what happened with E-M1ii and the card incompatibility. 

    My guess is H265 still somewhat troublesome to edit with the majority of systems, and it might be the case they will release an AVC-ULTRA v2 based on H265 sometime in the future which can come as a firmware update. 

    I don't quite understand why you concluded that I'm oh so confident it's going to be this or that. I'm not betting, I'm just speculating. If AVC Ultra also has a 400Mbps mode and HEVC would take less than half of that bitrate, concluding that it's more likely based on AVC Ultra is not a matter of confidence, just logic. It might be HEVC, but it doesn't look like it, since both are 400Mbps. Also, you can't base a HEVC codec on AVC Ultra: to begin with it would be AVC, since that means H.264, and H.265 works very differently; it can't be based on a different codec, it would simply be a new codec. Maybe HEVC Ultra? hahahaha...

    That card excuse is already BS. How long has the GH5 been in development? Didn't they know it would need V60 cards for a 400Mbps codec? Sure they did. I'll quote what I already wrote.

    "Anyway, Black Magic Design is already using some UHS-II cards for their 4K recorder up to around 120MB/s, so indeed there are already cards that could sustain 400Mbps. Panasonic saying it's because there is no V60 cards is just BS, the truth is that THEY don't have the codec ready yet. And as I said, for some reason manufacturers haven't done the tests or I don't know why they don't label the V60 and V90 capable cards with it."

    If Panasonic had the codec ready and needed manufacturers to put the V60 label on cards, it would simply be a matter of letting manufacturers know, and since there are already cards that can sustain more than the V90 minimum sequential write speed, it would just be a matter of testing them and putting the label on. It would certainly take less than six months to do so. So obviously it's not the cards that aren't ready, but of course that's what the marketing is going to say.
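    The arithmetic behind that is trivial, just as a sanity check: 400 Mbit/s is only 50 MB/s, which is already under the 60 MB/s minimum sustained write that the V60 rating guarantees, let alone V90.

        # Is a V60/V90 card enough for a 400 Mbit/s codec?
        required_mb_s = 400 / 8  # 400 Mbit/s = 50 MB/s

        # Minimum sustained sequential write speed guaranteed by each Video Speed Class.
        video_speed_class = {"V30": 30, "V60": 60, "V90": 90}

        for label, min_write in video_speed_class.items():
            verdict = "enough" if min_write >= required_mb_s else "not enough"
            print(f"{label}: {min_write} MB/s minimum write -> {verdict} for {required_mb_s:.0f} MB/s")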

    But I see no problem in delivering it via FW, it's better than having to wait longer for the camera.

  22. 8 hours ago, sudopera said:

    You forget that Panny makes SD cards also, maybe they are waiting for the release of their new fast cards. I am not saying that this is the case but it is a possibility, especially if you look how they thrown their proprietary P2 cards into Varicam LT instead of widely accepted CFast 2.0.

    No, I didn't forget, but at the same time it's not like Panasonic relies on SD cards for their income. Even if they were late with fast SD cards, that's not something that would take this long to solve or anything that would prevent them from adopting the 400Mbps codec; most certainly it's because they haven't finalized the codec yet.

    More importantly, it was said in a couple of GH5 interviews that Panasonic had already announced V60 and V90 cards at CES 2017.

  23. 1 minute ago, marcuswolschon said:

    Comparing H.264 IPB (what people see every day) to H.264 All-I is hardly comparing apples to oranges.

    I'm not saying that, really. I said that using the word "efficiency" in that example is like comparing two different things.

    If I were to use the word efficiency, I would compare one codec to another, or maybe All-I to All-I and IPB to IPB, because that's the adjective you use to measure quality; otherwise I would just talk about compression rate.

    Using "efficiency" the way you did makes it sound like All-I is bad compared to IPB, when they simply have different applications. As far as I understood, you just wanted to say that All-I has a lower compression rate than IPB, not that one is better or worse than the other. But when you used the word "inefficient", that's what it sounded like.

    Anyway, this is irrelevant, I was just explaining why people thought you were complaining about All-I.

  24. @joema Yeah, after asking I went and read a bit more about it. My doubt came from Andrew saying that HEVC was by its nature Long GOP, but I read a comment by Vitaliy Kiselev saying that's not correct: not only does it have All-I, it's significantly more efficient as an intra codec than AVC was. So I'll take his word on that. I understand it's not so simple, but I hope Panasonic brings HEVC to other modes if they can; DJI has already adopted it and offers all sorts of choices, from RAW and ProRes to H.264 and H.265, so Panasonic could at least offer AVC and HEVC.

    @marcuswolschon the word "efficient" was not used in the best way, and that's why people read it as a complaint. It's hard to use "efficient" when comparing apples and oranges; maybe you should have talked about compression rate instead of efficiency.

    Anyway, Black Magic Design is already using some UHS-II cards for their 4K recorder up to around 120MB/s, so indeed there are already cards that could sustain 400Mbps. Panasonic saying it's because there is no V60 cards is just BS, the truth is that THEY don't have the codec ready yet. And as I said, for some reason manufacturers haven't done the tests or I don't know why they don't label the V60 and V90 capable cards with it.

    Just like HEVC, All-I is a great option to have, and for those who can and will take advantage of it, having options is always good. The lack of hardware acceleration for All-I 10-bit 422 is interesting, and it explains why some people have complained that they didn't see any gain in terms of system load from using All-I AVC footage. Most people will probably keep using IPB. An All-I HEVC would be very interesting too.
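    If anyone wants to see the All-I vs Long GOP trade-off on their own footage, here is a rough sketch that re-encodes the same clip both ways with ffmpeg from Python and compares file sizes at the same quality setting - it assumes ffmpeg is installed, "clip.mov" is a made-up file name, and the CRF/GOP values are arbitrary.

        import os
        import subprocess

        SRC = "clip.mov"  # any short test clip

        def encode(dst, gop_size):
            """Re-encode SRC with libx264 at a fixed quality (CRF) and a given GOP length.
            gop_size=1 makes every frame an I-frame (All-Intra); a larger value gives
            a Long GOP (IPB) encode."""
            subprocess.run(
                ["ffmpeg", "-y", "-i", SRC,
                 "-c:v", "libx264", "-crf", "18", "-g", str(gop_size),
                 "-an", dst],
                check=True,
            )
            return os.path.getsize(dst) / 1e6  # size in MB

        all_intra_mb = encode("all_intra.mp4", 1)
        long_gop_mb = encode("long_gop.mp4", 48)

        print(f"All-Intra: {all_intra_mb:.1f} MB")
        print(f"Long GOP:  {long_gop_mb:.1f} MB")
        print(f"All-Intra is ~{all_intra_mb / long_gop_mb:.1f}x larger at the same CRF")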

  25. 4 hours ago, joema said:

    Software support is obviously required and this often lags hardware by years. E.g, Intel's Quick Sync hardware-assisted H264 encoder was introduced with Sandy Bridge in 2011. To my knowledge Premiere Pro only recently started supporting that -- and for Windows only, not Mac. That was roughly a six-year gap.

    Skylake's Quick Sync has HEVC/H265 support for 8-bits per color channel but Kaby Lake will be required for HEVC at 10-bits per color channel. Hopefully it won't take Adobe six more years to add support for that.

    I think nVidia's NVENC has HEVC hardware support starting with Pascal and AMD's VCE with Polaris, but the software development kits, APIs and drivers must be available and stable for application developers to use. So there is a difference between raw hardware availability (in silicon) vs being able to harness that from the application layer, which can require stable and tested SDK and driver support. 

    Traditionally there has been concern over image quality of hardware-assisted encoding, but FCPX has used Quick Sync for for years (single pass only) and it looks OK to me. But I don't think it has H265 hardware support yet.

    Lots of people want H265 because the file sizes are smaller, but you don't get something for nothing. H265 requires vastly greater computational complexity which means the CPU burden to encode/decode is much greater. In this paper, VP9 was 2,000x slower to encode than x264, and H265 was 3x slower than VP9 (or 6,000x slower than x264). So it took thousands of times more computation to save at most about 50% in size. This is just a single paper and algorithms and efficiencies are improving but it illustrates the basic principle.

    iphome.hhi.de/marpe/.../Comp_LD_HEVC_VP9_X264_SPIE_2014-preprint.pdf

    If that computation is done in hardware (IOW you essentially get it for free) then it may be a worthwhile penalty. But if only software encode/decode is used for H265, it may be impractically slow. Also if full and high quality software support at the SDK level is not available, the fancy silicon doesn't help much.

    For the iPhone it is affordable for Apple to use H265 for Facetime. They completely control both hardware and software, and quantities of scale mean any design or fabrication cost is amortized over 50 million phones per year. If it costs a little more to add H265 logic to a corner of a SoC (System on a Chip) that already has 3 billion transistors, it's no problem.

    For a software developer like Adobe, they must deal with three basic H265 hardware acceleration schemes, NVENC, VCE and Quick Sync, some of which have multiple versions, each having varying capability and features. So maybe that explains the delay on Quick Sync in Premiere Pro.

    H265/HEVC has also been hampered for years by disputes over royalties and intellectual property, which is one reason Google is pushing VP9 which has roughly similar capability but is open source and royalty free. However VP9 itself will probably be replaced by the similar but improved royalty-free AV1: http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-AV1-111497.aspx

    Interesting. I've been looking for someone with enough knowledge of HEVC to ask a few questions, so if you have the time, I would appreciate it. But before that: from what I've read, VP9 wasn't really as efficient as HEVC, being closer to AVC than to HEVC, but I don't know how much it has improved since then.

    Does HEVC have All-I encoding or just IPB? If it has All-I, how much less taxing would it be compared to the usual IPB? Are there advantages to having All-I H.265 encoding?

    I fully understand that NLEs and computers haven't caught up with HEVC yet, but since the Panasonic GH5 is already capable of encoding it, I don't see why it shouldn't have H.265 for 4K as well, even if RIGHT NOW most people wouldn't be able to take full advantage of it.

    Well, simply because some people would, and as time passes more and more people would. H.265 is, after all, the codec of the future, and having a camera like the GH5, which is already making a splash, use HEVC would by itself help the industry move forward faster, since it would help create demand. Otherwise we'll be stuck waiting for manufacturers and software to start supporting it whenever they feel like it - and since there is not much demand, why would that be a priority for them?

    So even if I can't personally take full advantage of HEVC now, I would like Panasonic to think ahead and implement it. If 6K Photo already uses it, and if the hi-res anamorphic mode will also use it, I can't see why they couldn't implement it for 4K or even create a 5K video mode. 16:9 5K is 4800px wide and 5K DCI would be 5120px - which is why this is much more like 5K Photo than 6K Photo; what unnecessary marketing BS.

    Anyway, I'm all for pushing technology forward. C'mon Panasonic, just give people the option to record in H.265 10-bit 422 - since 6K Photo seems to be 10-bit 420 for now.

     

     
