Everything posted by kye
-
One of the challenges with spatial NR is that it softens edges and fine detail, so if you only NR the red channel you might get rid of most of the noise while only smearing a third of the edge definition - probably a good compromise. I have no idea how the NR features in the software actually work - maybe they're doing this already. I'm not sure about temporal NR on ALL-I vs Long-GOP, but you may find that Long-GOP has finer, less compressed noise: the keyframe paints the scene and the predicted frames in between only have to deal with what changes, which would be mostly noise unless there was heavy movement in the scene. All else being equal, Long-GOP would have much more bandwidth allocated to the noise, so the noise would be encoded at higher quality - and perhaps be eliminated more easily, as it would be closer to truly random. Certainly the comparisons I made showed that ALL-I was lower quality than Long-GOP when they both had the same bitrate.
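To illustrate what channel-selective spatial NR could look like (I'm not claiming any plugin actually works this way), here's a rough Python/OpenCV sketch that denoises only the red channel - the file names are made up:

```python
import cv2

# Load a frame (hypothetical path) as BGR.
frame = cv2.imread("noisy_frame.png")

# Split into channels; OpenCV orders them B, G, R.
b, g, r = cv2.split(frame)

# Spatially denoise ONLY the red channel. The h parameter controls
# strength: higher h removes more noise but smears more edge detail.
r_denoised = cv2.fastNlMeansDenoising(r, None, h=10,
                                      templateWindowSize=7,
                                      searchWindowSize=21)

# Recombine: green and blue keep their full edge definition,
# so only roughly a third of the detail gets softened.
result = cv2.merge([b, g, r_denoised])
cv2.imwrite("denoised_red_only.png", result)
```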
-
Yeah, today is the best predictor of tomorrow. ..or to put it another way, things change, but not as fast as you'd like.
-
No worries. You may find differences between how Canon and the external recorder encode things - not all electronics and algorithms are equal, and even something like having more CPU power to throw at the job might mean one device can encode at a higher quality.

Ultimately, when you push your footage you're pushing everything, not just the bit-depth. 14-bit footage will still break if you've got lots of noise, or if it's heavily compressed with lots of jagged edges. I came to the conclusion it's all about latitude too. I then extrapolated that to the idea that the less I have to stress an image the better, so now I shoot in a modified Cine-D profile, but in 10-bit, rather than shooting HLG 10-bit and then having to push it around to get a 709-type image. To put it another way, I get the camera to push the exposure to 709 before the conversion from RAW to 10-bit, instead of me doing it afterwards in post.

Recording something flat and then expanding it to full contrast in post is really just stretching the bits further apart. If you recorded a LOG profile whose full exposure range spanned 30IRE to 80IRE in 10-bit, and then expanded that range to 0-100IRE, you've multiplied the signal by two, effectively giving you a 9-bit image. If that LOG image was 8-bit to begin with, you're dealing with a 7-bit image by the time it gets to 709. Cinematographers talk about things like the Alexa only having a few stops of latitude - and they're shooting RAW! If we're shooting 8-bit or even 10-bit then we have almost no latitude at all, so that's the philosophy I've taken.
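For anyone wanting to check that arithmetic, here's a tiny sketch of the idea - it assumes the expansion is a simple uniform stretch, which real grades aren't, but the principle holds:

```python
import math

def effective_bits(recorded_bits, ire_low, ire_high):
    """Effective bit depth after expanding a flat signal to 0-100 IRE.

    Stretching the recorded IRE range to full scale doesn't create
    new code values, so the usable levels shrink proportionally.
    """
    used_fraction = (ire_high - ire_low) / 100.0
    levels = (2 ** recorded_bits) * used_fraction
    return math.log2(levels)

print(effective_bits(10, 30, 80))  # 10-bit LOG spanning 30-80 IRE -> 9.0
print(effective_bits(8, 30, 80))   # the same range in 8-bit -> 7.0
```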
-
Here's a flat scene filmed in 8-bit C-Log on the XC10. Ungraded: With a conservative grade: Cropped in: Here's the vectorscope that shows the 8-bit quantisation, which is made worse by the low-contrast lighting conditions and the low-contrast codec (C-Log): I managed to "fix" the noise through various NR techniques, which also fixed the 8-bit quantisation:

Yes, this is a real-world situation. Is it a disaster? Not for me, an amateur shooting my own personal projects. Would it be a disaster for someone else? That's probably a matter of taste and personal preference. I have other clips where I really struggled to grade the footage, and although the 8-bit codec wasn't the cause, it added to the difficulty. I now shoot 10-bit with the GH5 and don't remember seeing a 'broken' vectorscope plot like the one shown above.

When I tested bit depths with Magic Lantern RAW, I personally saw a difference going from 8 to 10-bit, a very small difference going from 10 to 12-bit, and no difference I could see going from 12 to 14-bit. Others said they could see differences, and maybe they could - a couple swore that 14-bit was their minimum. I've also seen YT videos where people test 8-bit vs 10-bit: some found they could break the 8-bit image under normal circumstances, others couldn't break it even under ridiculously heavy-handed grading that you'd never do in real life. Here's me trying to break a 10-bit file from the GH5... Ungraded 4K 150Mbps HLG: Unbroken despite horrific grading:

It's probably a matter of tolerance. In my experience 8-bit is good enough in most situations and 10-bit is basically bullet-proof, but others have different tolerances and shoot in different situations. Also, different cameras perform differently.
-
I was aware of that when I said it, so I totally agree. I guess this raises an interesting question about BM's business model - will they ever stabilise their lineup, or will they just keep building new cameras with whatever seems like a good idea at the time? From a business perspective it's way cheaper to take an existing model and make some upgrades than to re-do everything. I don't know what the insides of the P4K and P6K look like - maybe they re-used the same circuit board layouts, maybe not. I suspect you already know this, but even taking a fully designed circuit and working out the physical board layout (where all the little traces go, and how they get routed so they don't cross or interfere with each other) is a long and complicated process in itself! If you can keep the same layout, you save money by not having to reconfigure all the machines that build them, etc etc etc.

Maybe BM will be famous for being the camera company that just builds new cameras, with the sensor, sensor size, resolutions, storage mechanisms, battery type, lens mount, etc etc, being different on almost every camera they ever make. It certainly means they can be super agile! If so, I vote for them making a camera that's smaller, mirrorless, has an articulating screen, IBIS, and very long battery life. They could make it FF to compete with the latest trends. Then I'd record in 1080 Prores internally (downsampled from the full sensor) and be a happy camper! If they put Prores 4444 or HQ in there then we'd have 264Mbps / 396Mbps in 12-bit and it would be practically like RAW. I'd really enjoy their updated colour science and the likely inclusion of dual-native ISO.

Yeah, who knows. They're so unpredictable and seem to want to grab headlines, which I understand makes sense for commercial purposes, so I guess you can't rule anything out. It would make sense for them to release a box-style camera though, probably based on the P6K, as it would make a great crash / drone camera - something Hollywood / film-makers would use, and they seem to be BM's target market. Although having said that, I wouldn't be surprised if the real market is secretly people like Monica Church, a YouTuber with ~1.4M subs who uses the P6K for little doc-style pieces as well as vlogs, and may well have picked it for the punch-in ability in the single-camera interview setups she uses from time to time. I suspect she has two of them, but I'm not sure. There must be hundreds or even thousands of folks like Monica who use the P6K for stuff like this. Her previous camera was a GH5, which she loved: Yeah, crazy times!
-
Interesting combo - BMMCC and A7S3. Will you be grading the Sony to match the Micro, or the Micro to match the Sony? I'd be very curious to see some test shots if you get a chance to shoot something you're allowed to share!
-
I wonder if this is the WB settings not being consistent? If you do a custom WB, do you still get the same colour temperature differences?

Begin tiny rant... I see heaps of A vs B camera tests on YT where the WB is different between the cameras, and all I can see is that the WB is wrong, so the test is basically useless. People always say "identical settings and the same lens" (they normally film one after the other so they can re-use the same lens), but to me, if the WB doesn't match then they can't be the same settings - WB is a setting! End tiny rant...

If you did a custom WB and they still didn't match, I'd say that's a significant issue, considering that specific colour temperatures aren't arbitrary - they're defined in some physics text somewhere.
-
That's kind of what I was thinking, but then people kept saying "a new camera from BM", and the P4K is over two years old, so it's potentially due for a refresh (depending on how long the product cycle is) - or BM could introduce a new model (they've been pumping them out in the last couple of years!). Then I remembered the GH6. Then the C50. Etc etc etc. So many manufacturers with so many camera lines. Sony has 5 lines in FF MILC hybrids alone (A7, A7S, A7R, A7C, A9), and there are manufacturers like BM with lines across multiple sensor sizes, etc etc. If we tallied them all up, there would be quite a few that were "overdue" for an update.
-
Compression done after the fact will always have the advantage, because the whole file can be analysed and the processing can run slower than real-time, whereas cameras can't see into the future and have to compress in real-time. At the bitrates we operate at, i.e. >50Mbps for 4K, we're in the diminishing-returns part of the quality vs bitrate curve, so large decreases in bitrate don't impact quality nearly as much.
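A concrete example of that after-the-fact advantage is two-pass encoding, where the whole file is analysed before a single output frame is written - something a camera can never do. A minimal sketch driving ffmpeg from Python (hypothetical file names, and '/dev/null' assumes a POSIX system):

```python
import subprocess

SRC = "input.mov"  # hypothetical source clip

# Pass 1: analyse the whole file; the stats go to an ffmpeg log file.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx264", "-b:v", "20M", "-preset", "veryslow",
    "-pass", "1", "-an", "-f", "null", "/dev/null",
], check=True)

# Pass 2: encode using those stats, spending bits where they're needed.
subprocess.run([
    "ffmpeg", "-y", "-i", SRC,
    "-c:v", "libx264", "-b:v", "20M", "-preset", "veryslow",
    "-pass", "2", "output.mp4",
], check=True)
```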
-
Whatever works! I remember comparing the cost of Neat Video with Resolve when I bought my Resolve license a few years ago, and Resolve was cheaper - plus I got an entire NLE and colour suite! Neat Video is cheaper now, but if you've already got it then that's awesome. I've never compared the two of them.

Thanks, I saw that video in my feed but hadn't watched it yet. I heard the same thing about NR being applied to all footage. It was in the context of modern grading, so it probably wasn't a comment on film, which it makes sense not to denoise: the decision to shoot on film, coupled with the choice of film stocks, likely indicates that the grain is desirable for that project.
-
There are two ways to look at bitrate. The first is constant bitrate per pixel, which is how Prores works, for example: 4X the pixels gets 4X the bitrate in order to maintain the same quality per pixel. The second is constant bitrate for the overall image - when 4K TVs were released they weren't twice as wide and twice as tall as 1080p TVs, so each pixel occupies less of your view. On top of this, my experiments have shown that a lower resolution looks worse than a higher resolution at an identical bitrate, so there's an argument that keeping the same bitrate per pixel isn't actually required. Another factor is that image quality goes up with improvements in codecs: h264 isn't as good as h265 (by a factor of about 2), and h266 is about twice as efficient as h265. Yet another factor is that displays are getting larger over time, and video quality expectations are rising too. I take all of this and land somewhere in the middle: I expect a higher bitrate for a higher resolution, but I don't expect the same bitrate per pixel to be maintained. I also try to be pragmatic and don't get fussed as long as the bitrate is 'enough'. Obviously that differs between applications and situations, so it's up to everyone to determine for themselves.
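To put numbers on that, here's a quick bits-per-pixel helper - the frame rates are my own assumptions, just for illustration:

```python
def bits_per_pixel(mbps, width, height, fps):
    """Average coded bits available per pixel per frame."""
    return (mbps * 1_000_000) / (width * height * fps)

# Constant bitrate-per-pixel logic (the Prores approach): 4x the pixels
# needs 4x the bitrate to hold this number steady.
print(bits_per_pixel(150, 3840, 2160, 24))  # 4K at 150Mbps:  ~0.75
print(bits_per_pixel(200, 1920, 1080, 24))  # 1080p at 200Mbps: ~4.0
print(bits_per_pixel(80, 7680, 4320, 30))   # 8K at 80Mbps:   ~0.08
```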
-
It seems like we got given a dizzying array of new cameras, between the A7S3, Canon C70, and others, but there's probably more to come this year. A few people have mentioned the idea that BM might release a new camera this year. There's still a mythical GH6 in the wind that could arrive at any time. The C50 is rumoured. Etc. Etc. Etc....

What cameras do you think will arrive in 2021? What are you hoping for?

Personally I'm watching the drama of it all with anticipation, both hoping for the perfect camera to be released, and also hoping it won't be, so I can use what I have and focus on the craft.
-
That's 80Mbps, which is nothing. At 80Mbps, the extra detail in 8K footage over 4K or even 2K would probably be more than half noise and compression artefacts. Potatojet did some good videos comparing 8K smartphone video with normal-resolution video, and the 8K image out of these things would still look like complete sh*t even if it was downscaled to 2K. Watch the YT video at 1080p or 720p and you can still tell the difference. TLDR; don't bother. It's like a 65MP jpeg compressed to 24Kb: lots of pixels, but they're all crap.

I did some tests yesterday on the GH5 modes and got a pleasant surprise. Obviously one of the great features of the GH5 is that the image is downsampled from the 5.2K sensor, but I confirmed that the 2X digital crop is also downsampled, the 1080p 60p mode is nicely downsampled, and the 1080p 60p 2X digital crop mode is also downsampled! No Canon Cripple Hammer going on there!

Also, the better I get at grading, the more I find that the limitation of the camera is me - when people criticise their cameras, they're often just criticising their own limitations and trying to buy their way out. This was a quick test shot for highlight rolloff - graded in under 2 minutes so not perfect, but every time I look at the footage I forget about newer cameras...
-
And get rid of our smartphones and any 'smart' devices with a microphone. They're all listening...
-
GH5 with Canon FD 70-210/4 and 2X FD TC, sitting on a packet of oats for lens support and stabilisation. I shoot a lot with long lenses for sports. Here's some random pics...

I haven't been that enamoured with the FD as it's a bit soft at the longest end, although I'm being super picky: I'm recording on MFT, using a 2X TC, and sometimes using the digital 2X zoom or ETC mode to crop in further. That's equivalent to expecting the lens to resolve ~250Mpixels, which is obviously ridiculous for a vintage zoom! I was contemplating getting a better L-series zoom from the same era, but have since re-evaluated and now I'm not so sure.

My latest "long lens" setup is my Voigtlander 42.5mm / f0.95, which on MFT is an 85/1.9 equivalent. That's not a long lens for most people, but I've discovered the 2X digital zoom on the GH5, which downsamples a ~2.5K sensor area to a 1080p image - keeping all the downsampling goodness - and turns the lens into a 170mm / f3.8 equivalent, which probably is a "long lens". Of course, I bought the 42.5 in late 2019 in anticipation of the many trips we had planned in 2020, so naturally it's basically only done test shots in the backyard. It's both an old purchase that I've practically forgotten about and a completely new lens I haven't really used yet!
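For anyone wanting to check the equivalence maths, a quick sketch (the function is mine; it assumes the standard MFT crop factor of 2.0, with the GH5's 2X digital zoom as an extra crop):

```python
def ff_equivalent(focal_mm, f_stop, crop_factor, digital_crop=1.0):
    """Full-frame equivalent focal length and aperture.

    Both the sensor crop and any extra digital crop multiply the
    field-of-view equivalence, and depth-of-field equivalence scales
    the f-stop by the same total factor.
    """
    total = crop_factor * digital_crop
    return focal_mm * total, f_stop * total

print(ff_equivalent(42.5, 0.95, 2.0))       # (85.0, 1.9)
print(ff_equivalent(42.5, 0.95, 2.0, 2.0))  # (170.0, 3.8)
```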
-
I'm sticking with the GH5 and concentrating on ironing out my colour grading and image processing workflow. Beyond that I'm thinking more about editing style and how to get better at that. I watched the videos below recently and found them interesting, so they might be useful for those who aren't sure about what's good for which applications. The first rates cameras against a few different types of productions, covering the A7S3, FX6, C70, and Komodo, with the P6K thrown in too. The second rates cameras on a different scale and includes more cameras:
-
That makes sense - it really comes down to how good the sensors are. The GH5 sensor must be getting old now... In terms of colour science, if you're good enough then anything can be made to look like anything, but the problem is that that level of skill is pretty rare, even amongst professional colourists! I'd also suggest that the pursuit of IQ requires either deep pockets or a willingness to be radically inconvenienced - there are some older cinema cameras that look great (the F3 and BMMCC, for example) but compared to a modern DSLR/MILC they're a royal PITA.
-
I guess we're getting into tricky territory here, but I have the rather unpopular opinion of only caring about the user experience and the image. My passion for the size of the sensor is about the same as my passion for the number of letters in the part number of the audio processing chip - both have virtually no impact on the end result!
-
Yeah, this is a major issue. The environment for colour management is basically a huge mess, which is why BM still sells and recommends their own hardware monitoring - it bypasses all the OS and UI colour tomfoolery. I read in one thread about applying an extra node in Resolve to compensate - I think it was a gamma conversion of some kind - and the idea is that you grade as normal, apply the extra node, export, then disable the node again. I've found it makes a pretty close match, including all the way to viewing your video on YT; certainly close enough considering the woeful variation in colour performance of the various devices out there.
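I can't vouch for exactly what that node was, but if it is a simple gamma trim, the maths would be something like this sketch - it assumes the only mismatch is a grade targeting gamma 2.4 being viewed on something assuming roughly 2.2, which is a big simplification of real OS colour management:

```python
import numpy as np

def gamma_compensate(rgb, grade_gamma=2.4, viewer_gamma=2.2):
    """Re-encode an image graded for one display gamma so it looks
    the same on a viewer that assumes a different gamma.

    Solving for the intended light output:
        out ** viewer_gamma == in ** grade_gamma
        out = in ** (grade_gamma / viewer_gamma)
    """
    rgb = np.clip(rgb, 0.0, 1.0)
    return rgb ** (grade_gamma / viewer_gamma)

pixel = np.array([0.5])
print(gamma_compensate(pixel))  # ~0.47: a slightly darker code value
```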
-
I just did a little test with ffmpeg, and encoding the ALL-I file was twice as fast as encoding the Long-GOP file. So ALL-I should generate half the heat - so that's not it.
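The test was along these lines - a reconstruction rather than my exact command, with a hypothetical file name. With libx264, -g 1 forces a keyframe every frame (ALL-I), while -g 250 gives a long GOP:

```python
import subprocess
import time

SRC = "test_clip.mov"  # hypothetical source clip

def encode(gop_size, out):
    """Encode SRC with the given GOP length and return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC, "-c:v", "libx264",
        "-b:v", "50M", "-g", str(gop_size), "-an", out,
    ], check=True)
    return time.perf_counter() - start

all_i = encode(1, "all_i.mp4")          # every frame is a keyframe
long_gop = encode(250, "long_gop.mp4")  # x264's default GOP length
print(f"ALL-I: {all_i:.1f}s, Long-GOP: {long_gop:.1f}s")
```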
-
I feel like learning about NR is one of the things that really helped me get good results from my footage. It was also one of the things I understood the least. Certainly the internet is full of people complaining about their cameras' ISO performance, and I'm sure Sony sold a lot of A7S2s because people didn't really understand NR. In reality, the cinema cameras Hollywood uses are some of the noisiest cameras still in use - most consumer cameras have better noise performance!

Here's a great tutorial about how to do it in Resolve, and as Waqas says, this might be the biggest reason to buy Resolve. It certainly was when I bought it. I'd previously struggled with my own footage, especially when I did a bunch of things wrong because I didn't know what I was doing early on! But I was able to start with this C-Log 8-bit 4K image: Grade it to this: and then turn this level of noise: Into this:

Perhaps the greatest challenge with the Canon footage is that the noise comes in large blocks, rather than the single-pixel noise that's easier to get rid of. My Canon 700D was the same, even in RAW, so maybe it's a Canon thing. As the combination of temporal and spatial NR combines adjacent pixels (which you can then sharpen back up again, as Waqas shows above), it also helped turn this obviously 8-bit and heavily expanded / broken colour space: Into a much smoother colour rendition: It's not magic, but it's definitely a much better result than just living with the noise, or binning shots that had a bit of noise in them for whatever reason.

The thing that surprised me the most is that, due to the noise from older cine cameras, basically every production shot on a cine camera will have NR applied like this. Once I learned that, it seemed odd that consumers expect cameras with no noise when the high-end cameras don't perform like that!
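Resolve's NR is its own closed implementation, but the temporal-plus-spatial idea can be sketched with OpenCV's non-local-means variant that also looks at neighbouring frames (the clip name and strength values here are just placeholders):

```python
import cv2

cap = cv2.VideoCapture("noisy_clip.mp4")  # hypothetical clip
frames = []
while len(frames) < 5:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Temporal + spatial NR: denoise the middle frame using a window of
# 5 frames, so static noise averages out across time as well as space.
denoised = cv2.fastNlMeansDenoisingColoredMulti(
    frames,
    imgToDenoiseIndex=2,   # denoise the centre frame
    temporalWindowSize=5,  # use all 5 frames (must be odd)
    h=6, hColor=6,         # luma / chroma strength
    templateWindowSize=7, searchWindowSize=21,
)
cv2.imwrite("denoised_frame.png", denoised)
```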
-
@Mark Romero 2 @SteveV4D @herein2020 Good conversation, and I think it's actually clarified things for me. I don't have any issue with people shooting 4K, or 6K, or freaking 12K if they really want, but the issue I have is that people do so without really knowing what choices they're making. I'm speaking more broadly - you guys may be well aware of the pros and cons, but many people reading this aren't posting or even logged in, so I'm conscious of those people rather than just you guys 🙂

I think the issue is how we talk about these things. People talk like 4K has no downsides. When the downsides get brought up, they get countered individually, and the net result seems to be that every argument against 4K has a valid counterargument - but that's a false impression. It's like....

"4K takes more processing power to edit" - "yeah, but modern computers are fine, so that's not enough reason to swap" .... 4K: 1 point, 1080p: zero!
"4K lets you crop in post" - "1080 is pretty good though" - "but not as much as 4K, so that's not a reason to swap" .... 4K: 2 points, 1080p: still zero!
"4K is what some clients ask for" - "that doesn't mean 4K is always the right choice" - "yeah but some do" .... 4K: 3 points, 1080p: still zero!
etc.

It looks like 1080p scored zero points, but that's just not true - 1080p won a few and came a close runner-up in most of them. If the scores were percentages rather than winner-takes-all, the overall result might be something like 4K 70% and 1080p 55%. But when you talk about 1080p, people don't say "I've weighed up the pros and cons" - they look at you like you're suggesting they film with the lens cap on. Unfortunately that creates a situation where new people don't understand there's a choice to be made, and that depending on what you value, the winner can be flipped to 1080p without much effort, because it's actually a much closer argument than people think.

That makes sense, and is very sad actually. It means Panasonic will have taken the video features of the GH5 and essentially doubled their price. Also, they've smashed the S5 with the Canon Cripple Hammer in order to protect their own line of 'cine' cameras. Panasonic used to be the underdog who gave what they could, rather than the big corporate who took all they could. Assuming it's true, it's not a good look.
-
I just think it's sad that clients are asking for 4K. In reality most of them couldn't tell if you shot 1080, sharpened it, and exported at 4K - but obviously doing that on commercial jobs isn't a good move. It's kind of funny that clients who want an ad for their business to put on social media, or a promo to play on in-store TVs, demand 4K when 8 and 9-figure Hollywood films are being delivered at 2K. Do they think more resolution is better? If so, why do they think they need more resolution than a Hollywood film that will be projected on cinema screens a couple of storeys tall? If they don't think more resolution is better, then why ask for it? It just doesn't make any logical sense.

In terms of Potatojet and the Alexa footage - I wasn't comparing the Alexa to the M50 outright (it might have sounded like that though), I was saying that the Alexa just looks spectacular and it's really obvious. Watch that footage at any resolution you like on YT and you'll see that the resolution/compression settings of YT don't take away that magic. The magic of the Alexa isn't in the resolution. That's why taking cameras without the magic of the Alexa and adding resolution is kind of stupid - it's fixing the part of the camera that wasn't broken.

Cropping heavily in post is an argument that seems to make sense, but it has its limits. Have you done a comparison between cropping into 150Mbps 4K and 200Mbps 1080p? Or are you just blindly assuming that more is better? You seem to be latching onto the comparison between Long-GOP and ALL-I as a comparison between 150Mbps 4K and 400Mbps 4K, but that's not the comparison I'm making. I'm comparing 150Mbps 4K and 200Mbps 1080p. You're starting from the assumption that you need 4K, but until you've tried the alternative you can't really comment with much certainty.

AF is a great point - so let me phrase it like this... if you had ALL-I on the GH5 and then the S5 didn't have it, that would be an additional factor, on top of the AF, steering you towards a different brand. One thing that really makes the GH5 stand out is the codecs.

I didn't limit myself to 1080p. I upgraded to ALL-I, upgraded from 150Mbps to 200Mbps, and upgraded to the resolution that many Hollywood films are shot in. You wouldn't know it from these 4K fanboy discussions, but people with budgets of hundreds of millions of dollars, who win awards for making glorious images, actively choose <3K resolutions. Think about that for a second... when they could have anything they want, they choose 2K. Do you think there might be a reason for that? 4K video in the kind of codecs we're talking about is closer to 2MP in effective resolution, so why quibble about 8MP vs 12MP 🙂

I love 4K. In fact, 4K is enough for me. I shoot 5K, and I shoot 5K RAW and then downsample it to 2K - I just do it all in-camera. In terms of reframing - try it for yourself. I've done the tests and realised you can push the image much further than I thought. You'll find me on these forums talking passionately about 4K and reframing, you'll find that I did the tests, and you'll find that I changed my mind.

I must admit the Panasonic colour science has gotten a lot better since the GH5, that's for sure. It would just be great if they put the good codecs into the S5. For me, the S5 isn't a GH5 replacement.
-
I used to shoot 150Mbps Long-GOP 4K and really liked its texture, which I found quite analog. But when I compared the IQ to the 200Mbps 1080p ALL-I mode and learned more about IQ in general, I realised the difference wasn't that much. ...then when I realised I can shoot 200Mbps 10-bit 422 ALL-I 1080p in both 24p and 60p, I saw an enormous opportunity to match the 60p clips I shoot to the 24p clips. Getting 10-bit 422 ALL-I 60p is a pretty rare thing, even in the latest cameras. ...then when I realised I can edit the 200Mbps 1080p from my archive spinning disks without having to render proxies - well, game over!

Nah, YT compression shouldn't be blamed. Go watch the Potatojet vlog where he buys the Alexa. He's vlogging the whole thing, and when they take the Alexa out and get a shot of the train, the moment they cut from his vlogging camera to the Alexa the image goes from 4/10 to 10/10 instantly. Then watch that transition in YT 1080p. Then 720p. Then 480p. Then 360p. You'll see that YT compression does not erase the magic of the Alexa colour science and DR. I switched to 1080p partly because I realised that what matters in an image isn't resolution - it's DR, colour science, etc etc.

I compared them, and I can say there's a HUGE difference between 150Mbps and 400Mbps. The cost of the SD card!!

The lack of 10-bit internal 60p is only a problem for those who still think 4K is worth the effort. A good comparison to make is 10-bit h265 4K 60p vs the 1080p 10-bit 422 ALL-I 60p mode. I think the difference would be absolutely enormous in terms of the editing computer required, and basically negligible in terms of image quality. I mean this sincerely: actually do some testing where you take the 1080p footage and apply various types of sharpening and processing to try and match the 4K (there's a rough sketch of this test at the end of this post). Of course, you have to do these comparisons on a timeline where both the 1080p and the 4K have been graded, glow and grain added, and the video compressed to its final delivery format.

I've spent the last few months taking screen-grabs of my 4K display playing TV shows and movies (set a hotkey for screen-grab and it only takes a second) and you end up with a library of reference stills. Analysing those, I've discovered that the actual sharpness of content is very low, partly because of the 180-degree shutter (I try to grab frames with as little motion as possible) and partly because of the streaming compression. The images look glorious, being from some of the best artists in the business, but 12MP RAW stills they are NOT.

Before spending many thousands on a camera that can do 10-bit 4K 60, and thousands more on a machine that can edit 10-bit h265 clips, actually confirm that the difference is noticeable to the people watching your footage. If you're making corporate videos or whatever, then sure, maybe the client wants something that looks super modern (and maaaaaaaybe they can tell the difference between 4K and 1080p sharpened a little and upressed to 4K). But if you're going for a theatrical aesthetic, or delivering via YT, then actually do the comparisons instead of just buying into the hype. 4K is more of a brand name or status symbol than a resolution these days.

Yes and no. If I was upgrading from the GH5 and wanted the ALL-I modes, then maybe I'd get the S1H instead of the S5. But maybe I'd realise I can get the A7S3 for the same price as the S1H and go with that instead. Or maybe I can't afford either and I just don't upgrade.
The S5 isn't only competing with the S1H - it's competing with the S1H's competitors, and other options too. If people have to re-buy all their lenses anyway, that's the point at which your ecosystem has to be attractive enough to keep them.
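As promised above, here's a rough sketch of the 1080p-vs-4K matching test - the file names are hypothetical and the sharpening weights are just one recipe, so treat it as a starting point rather than a definitive method:

```python
import cv2

# Hypothetical matched stills exported from the same graded timeline.
native_4k = cv2.imread("native_4k_frame.png")  # assumed 3840x2160
hd = cv2.imread("hd_frame.png")                # assumed 1920x1080

# Upscale the 1080p frame to UHD.
up = cv2.resize(hd, (3840, 2160), interpolation=cv2.INTER_LANCZOS4)

# Unsharp mask: subtract a blurred copy to boost edge contrast.
blur = cv2.GaussianBlur(up, (0, 0), sigmaX=2.0)
sharpened = cv2.addWeighted(up, 1.5, blur, -0.5, 0)

# PSNR as a crude similarity score; judging by eye on a real
# display matters far more than this number.
print("PSNR vs native 4K:", cv2.PSNR(sharpened, native_4k))
cv2.imwrite("hd_upres_sharpened.png", sharpened)
```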
-
That makes sense. In that case, I don't know why people are saying the S5 is a FF GH5. 100Mbps Long-GOP < 400Mbps ALL-I, regardless of how big the sensor is. In fact, I shoot 200Mbps 1080p because I value the 10-bit 422 ALL-I more than I value 4K.