Everything posted by kye

  1. I think we're getting off track here.

     1) The first job is getting the signal converted to digital. I pulled up a few random spec sheets on Sony sensors of different sizes and they all had the ADC built into the sensor, and they specifically state they're outputting digital from the sensor. This effectively takes analog interference out of the design process, unless you're designing the sensor, which almost no-one is.

     2) I am talking specifically about error correction: sending digital data in such a way that even when errors occur in transmission, they can be detected and corrected (see the sketch below). The ability to send digital data between chips without errors creeping in is fundamental, and if you can't do it then the product is just as likely to get errors in the most-significant bits as the least-significant bits, which would effectively add white noise to the image at full signal strength.

     3) I was saying that the deliberate manipulation of the digital signal before writing it to the card as non-de-bayered data (ie, RAW) is the area that I didn't know existed and might be where some of the non-tangible differences are. Certainly it's something to be looked into more.

     4) Going back to the idea of spending money on training, I stand by that. Here's the problem: there is good info available for free online, but it's mixed in with ten tonnes of garbage and half-truths. How do you tell the difference? You can't, unless you already know the answer. So how do you get past this impasse? You pay money to a reputable organisation for training... To be frank, the majority of the benefit I got from the various training courses I've purchased has probably been from un-learning the vast quantity of crap that I swallowed from people online who looked like they knew what they were doing but really had just enough knowledge to be dangerous.
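To make point 2 concrete, here's a minimal sketch of single-bit error correction using the textbook Hamming(7,4) code. Real inter-chip links typically use stronger machinery (CRCs, ECC, retransmission), but the detect-and-correct principle is the same; this is illustrative, not any camera's actual scheme:

```python
# Hamming(7,4): 4 data bits are padded with 3 parity bits so any single
# flipped bit in the 7-bit codeword can be located and repaired.

def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # parity over bit positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over bit positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over bit positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Locate and fix a single flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean; else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

# Flip any single bit in transit and the decoder still recovers the data.
sent = hamming74_encode(1, 0, 1, 1)
received = list(sent)
received[5] ^= 1                      # simulate interference on the wire
assert hamming74_decode(received) == [1, 0, 1, 1]
```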
  2. That's not a brutal business perspective. This is a brutal business perspective - clients will absolutely be able to tell the difference between buying the better camera and buying the cheapest one and spending the extra on training. Training on lighting, composition, movement, etc.

     In terms of what is "better"... in that much-referenced blind test with the GH4 in it, many of the Hollywood pros preferred the GH4 over high-end cinema cameras. Talking about the difference between two RED sensors is fine, but trying to apply that difference to the real world is preposterous.

     Well, it's one of the following:

     • Processing in the analog domain before the ADC occurs
     • Noise reduction or other operations that occur in the digital domain that DO change the digital output
     • Noise reduction or other operations that occur in the digital domain that DO NOT change the digital output

     The last one is simple error correction, and will be invisible. If it's the first one, then it makes sense to do noise reduction, but (to be perfectly honest), if you have 23 crossings in between your sensor and the ADC chip then you deserve to be fired from the entire industry, and probably would be because the image would look like ISO 10 billion. This leaves the middle one, which is explicitly what ARRI are doing, so that's what I think we should be talking about.
  3. That caught my eye too. The thing that made me really curious was that he said that on the BMPCC "that image signal crosses other things 23 times on that one board, and in those 23 crossings every time it crosses it gets a noise reduction afterwards because otherwise it would be a completely unrecognisable image afterwards". This makes no sense to me, from my understanding and experience in designing and building digital circuit boards, so according to my understanding the statement is completely wrong. However, I trust that the statement is correct, because the source is infinitely more knowledgeable than I am. There is obviously something there that I don't understand (likely a great many things!) and this is something that isn't talked about anywhere.

     It does make me wonder, though, if this might be the reason that some cameras look more analog than others, even when they're recording RAW. The caveat is that the differences we're used to seeing between sensors, for example between the OG BMPCC and the BMPCC 4K, might simply be due to subtle differences in colour science and colour grading. Sadly, the level of knowledge of these things applied to most camera tests is practically zero, and it might be that with sufficient skill any RAW camera can be matched to any other RAW camera and that there is no difference at all, beyond differences that occurred downstream from the camera.

     Another data point for RAW not really being RAW is that when ARRI released the Alexa 35, they talked about the processing that happens after the data is read from the sensor and before it is debayered. Source: https://www.fdtimes.com/pdfs/free/115FDTimes-June2022-2.04-150.pdf Page 52. Not only is there some stuff that gets done in between these two steps, there's a whole department for it! RAW isn't so RAW after all...
  4. One thing I find a real challenge these days is the pace of real learning. Once you've watched a few dozen 5-15 minute videos that explain random pieces of a subject (likely sprinkled with misunderstandings and half-truths), going to a source that is genuinely knowledgeable is often very difficult to watch, because not only do they start at the beginning and go slowly, they also repeat a bunch of stuff you've heard before. The pay-off is that they are actually a reliable source of information, and that not only will they likely fill in gaps and correct misinformation, but they might actually change the way you think about a whole topic. I had this with a masterclass I did with Walter Volpatto - it completely changed my entire concept of colour grading. It was a revelation. It's actually a bit of a strange thing, because I can see that lots of people think the way I used to, but there's no easy way to get them to flick that switch in their head, and when you try to summarise it, the statements sound sort-of vague and obvious and irrelevant.
  5. The other thing to mention, beyond what @IronFilm said, is that the thing that matters most for editing is whether the codec is ALL-I or IPB. To drastically oversimplify, ALL-I codecs store each frame individually, while IPB codecs define most frames as incremental changes to the previous frame. So if you want to know what a frame looks like in an ALL-I codec you just decode that frame, but in an IPB one you have to decode previous frames and then apply the changes to them (see the toy model below). In some cases, the software might have to decode dozens or hundreds of previous frames just to render one frame, so the simple task of cutting to frame X in the next clip is a challenge, and playing footage backwards becomes a nightmare. Prores is always ALL-I; h264 and h265 codecs CAN be ALL-I too, but it's not very common. This is probably the cause of almost all of the performance differences between them.
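A toy model of that decode dependency - not a real codec, frames here are just numbers and deltas, purely to show the shape of the work:

```python
# ALL-I: every frame is self-contained, so showing frame N costs one decode.
# IPB: P-frames store only a delta, so showing frame N means walking back to
# the nearest I-frame (keyframe) and re-applying every delta since.

def decode_frame_ipb(frames, n):
    """Find the last I-frame at or before n, then apply deltas forward."""
    k = n
    while frames[k]["type"] != "I":
        k -= 1                               # walk back to the keyframe
    image = frames[k]["data"]
    for i in range(k + 1, n + 1):
        image = image + frames[i]["delta"]   # re-apply each incremental change
    return image

# One I-frame followed by 119 P-frames (a 120-frame group): landing a cut on
# the last frame means decoding the keyframe plus 119 deltas just to draw one
# picture - and playing backwards repeats that walk for every frame shown.
gop = [{"type": "I", "data": 10}] + [{"type": "P", "delta": 1} for _ in range(119)]
assert decode_frame_ipb(gop, 119) == 129
```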
  6. Thinking about this more, one of the things I liked about Prores over h264/5 was the aesthetic of the compression artefacts - the Prores artefacts were more organic and the h26x ones were more sharp/digital.

     I measured the relative quality of Prores vs h264 vs h265 back in 2020 and found that h265 > h264 > Prores... here's the table sorted by SSIM (a mathematical image quality comparison measurement - I compared every image with the uncompressed version to determine that score). These are all based on the outputs from Resolve, which is known for not having a very good h264/5 encoder, so this is likely wrong, but indicative.

     The things that kept me a fan of Prores at that time were:

     • Prores implementations were always relatively unprocessed compared to h264/5
     • Prores errors were more analog in feel rather than sharp/digital

     However, Apple showed with the iPhone 14 that Prores could be implemented and still have horrifically over-processed images, which clearly showed that the processing isn't a codec thing, it's a processing thing. And I've discovered that un-sharpening footage is easy in post: the differences in image frequency response (MTF) between film, RAW/unprocessed digital, and over-sharpened digital footage are all fixed by just un-sharpening the footage, with the over-sharpened footage simply requiring more un-sharpening than the normal digital footage.

     Given these factors, I see no advantage to Prores over h264/5. In fact, now that manufacturers have insisted that all cameras are megapixel-steroid-freaks, and given that Prores has a constant bitrate-per-pixel, the compressed RAW formats have lower bitrates than the Prores flavours, when shooting at higher resolutions anyway. This matters, as having decent downsampling in-camera isn't a given, and you're perhaps more likely to get a better compressed RAW implementation than a nice downsampling implementation.
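For anyone wanting to reproduce that kind of scoring, here's a sketch of the measurement: export the same frame from each codec plus an uncompressed reference, then compare each export against the reference. The file names are hypothetical, and it assumes scikit-image (0.19+) is installed:

```python
from skimage.io import imread
from skimage.metrics import structural_similarity

reference = imread("frame_uncompressed.png")   # hypothetical reference export

for name in ["frame_prores.png", "frame_h264.png", "frame_h265.png"]:
    candidate = imread(name)
    score = structural_similarity(reference, candidate, channel_axis=-1)
    print(f"{name}: SSIM = {score:.4f}")       # 1.0 would be a perfect match
```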
  7. I'd assume there will be a bunch of sample footage available in due course... both nicely shot stuff from the brand ambassadors as well as stuff shot by the mere mortals who don't have a $2000 per-day film set to point the thing at.

     The way I view cameras is this:

     • Under ideal conditions, the pros will show you the potential - the most that the camera is capable of
     • Under real-world conditions, the rest of the shooters will show you what is possible with a range of scenarios and levels of skill

     The ideal results may be extremely difficult to re-create, or may be very robust and very easy to recreate - seeing what the camera does under ideal conditions does not tell you this. The rest of the shooters will give you a sense of how well the camera operates under real-world conditions and when there is not an entire production team behind the project. The mix of good results vs awful results will give you a sense of how easy it is to work with. Unfortunately, this can be a reflection of how good the camera is, or of the level of skill of the majority of shooters - not every camera will be used by people with the same distribution of skill levels.

     My recommendation is to find a few reviewers / shooters who you have followed for a while and who have similar shooting circumstances and similar skill-levels to you, and see what they are able to do with it, and how that rates against the other equipment they have used.
  8. I'm sorry to hear that, and glad you've recovered and are doing well. These things definitely give clarity on what is important in life! Sounds like you definitely deserve a new camera - the S5iiX looks like a killer offering. I've been playing "if I won lotto, what would I buy" recently and the S5iiX is high on that list. Enjoy!! My only piece of advice is to shoot a bunch of test shots and play with it to really understand how far you can push it. Especially shoot highly saturated, high-DR scenes, and do a series of under / over exposures, then try to pull them all back to a normal exposure again (sketched below). If you can get everything set up to where you can pull footage down / up to a normal exposure and have it look correct, then you're in a good place with normal WB / exposure / saturation / etc adjustments.
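A minimal sketch of the pull-back step, assuming you're working on linear-light pixel values (on log or gamma-encoded footage a plain multiply is wrong - convert to linear first, or use exposure controls that do it for you); the numbers are hypothetical:

```python
import numpy as np

def pull_exposure(linear_pixels, stops):
    """Shift exposure by `stops` in linear light: +1 doubles, -1 halves."""
    return linear_pixels * (2.0 ** stops)

# A shot exposed 2 stops over has 4x the normal linear values;
# pulling it back down by 2 stops should land it on the reference.
shot_2_over = np.array([0.04, 0.18, 0.72])   # hypothetical linear RGB values
print(pull_exposure(shot_2_over, -2))        # -> [0.01  0.045 0.18]
```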
  9. All those arguments could be made about shooting 1080p. Good luck trying to hold back "technological progress"!! TBH, my preferences are now 10-bit LOG ALL-I h26x files (due to their manageable file sizes and the advantages of down-sampling in-camera) or compressed RAW (for the complete lack of processing). I think Prores is a weaker alternative to those other two options, with the only advantage being wide-spread support. Sure, some of the negatives of Prores are diminishing, but if the advantages of the alternatives overtake it, then it makes sense to switch over at some point. The primary purpose of 4K over 1080p was its ability to sell televisions, but hot damn did Hollywood go to 4K anyway. Compressed RAW has all sorts of benefits over Prores, and they're even relevant to film-making, not just to selling consumer products...
  10. Panasonic G9 mk2

     I've seen people saying that the G9ii is the best MFT camera ever made by Panasonic, so the comparison is being made. Of course, this leaves lots of room for improvement: the GH6 has advantages over the G9ii, so an updated GH-line with a full-sized body could exceed both, and a smaller-bodied GX-line with some trickle-down features would also be a significant improvement to that line.
  11. Maybe they're thinking of it like it's a great B or C camera on Venice / Burano shoots? If so, having a demo video with A9iii shots mixed in would make sense. The "lesser" folks who aren't shooting on Venice / Burano are more than capable of convincing themselves that "if it's good enough for them as a C-cam then it's an excellent choice as my A-cam!!"
  12. No worries! A couple of suggestions beyond learning more about colour management (so that you've got it clear in your head)...

     Learn to colour manage using nodes, rather than using the Resolve settings. The reason I suggest this is that there are all sorts of cool little tricks you can do in Resolve by transforming to a different colour space, doing something, and then transforming back (see the sketch below). This means that you'll be doing some colour management in the node graph anyway. However, if you're doing that and also using Resolve's colour management menus, then your colour management configuration is split between nodes and menus, and it can become confusing because you can't see the whole pipeline in one place. This might be a good start for getting your head around colour management - it's from Cullen Kelly, possibly the most experienced colourist on YT, and this talk is a complete introduction (ie, it's not just a fragment) to the topic.

     The other thing is to rate the camera as an acquisition device, not from the footage. What I mean by this is that each camera you're looking at has greater image quality than you will be able to capture, because they're all high-end cameras, so the limitation will be the footage you are able to capture with it. This comes down to workflow. You need to understand how the camera works and how to get the optimal results from it, and how to set the camera up optimally; you'll need to get it prepped, get it to the location, carry it, identify good locations while carrying it, set it up including mounting lenses and filters, turn everything on, prep the shot by composing / exposing / focusing, and only then hit record. This all sounds obvious, but (to make my point a bit more obvious) if the camera was 100kg/lb then you wouldn't be mobile enough to get to the best compositions, if you are tired you wouldn't even see the best compositions, and if there's a great composition that is time-sensitive (like, something is moving - animals in a field, people walking, etc) but it takes a minute to set up the shot, then the moment will be gone. You can only grade the footage you actually record...
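The "transform, do something, transform back" pattern, shown outside Resolve as a tiny Python analogue: some operations are only simple in the right space (here a saturation change done in HSV rather than RGB; the specific space and numbers are just for illustration):

```python
import colorsys

def adjust_saturation(rgb, factor):
    h, s, v = colorsys.rgb_to_hsv(*rgb)   # transform into a space where
    s = min(s * factor, 1.0)              # the operation is one number,
    return colorsys.hsv_to_rgb(h, s, v)   # then transform back out

print(adjust_saturation((0.8, 0.4, 0.2), 0.5))   # gently desaturated orange
```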
  13. I understand your logic, but think about it this way... When will a codec from 2007 stop being the industry standard? It has to some time, right? People aren't going to be using it in the year 2150... Not a lot of technologies from 2007 are still around!

     Prores is a great codec and we have championed its use for good reasons, but those reasons aren't what they used to be:

     • It was high-quality, unlike the h26x codecs on budget cameras
     • It was 10+ bit and ALL-I, unlike h26x codecs
     • It was high-quality but had much more modest / manageable file sizes compared to RAW

     These are all still true, but the h26x codecs are now (mostly) 10+ bit and ALL-I (and we have sufficient computing power to edit IPB in practical resolutions). Compressed RAW makes swift work of the file sizes too.
  14. You'd think it would be, but there are still elements of it that I can't work out. 30p and an overly compressed and sharpened image is a pretty solid start though! The thing I don't like about the digital look is that it looks like real life, in the sense that it makes the scene look like people in a room saying things to each other, rather than there being a sense of scale... that "larger than life" thing people talk about.
  15. Thanks for sharing your impressions. Your description is pretty much how I was expecting it to perform. I think, for me, the take-aways are:

     • you own one and have real-world experience in uncontrolled conditions
     • it's not as good as a real camera
     • it's good enough to use on professional gigs, even if only in a limited capacity
  16. Yeah, next to the support for a BM "box camera" there's a lot of people wanting a P4K Pro with NDs and a tilting screen etc.
  17. Well, this is a bit embarrassing! I did a detailed comparison between them when I was grading, but I must have picked an area that was slightly out of focus or something, because looking at the images I posted they're not even close!! Here are some updated images with a couple of blurs added. Just for fun I added a slightly emphasised halation-like one, so I've tipped the balance slightly in the opposite direction now...
  18. The other thing people don't consider is that there are a relatively small number of compositions that are required, and almost all shooting situations have various constraints or factors that influence where you position the camera, so for any given scenario these factors will often result in preferences for specific focal lengths over others. Sure, it depends on your preferences, but there is definitely common ground. Not a lot of folks would look at hand-held footage shot on a rectilinear ultra-wide (16mm FF-equivalent or wider) lens from above head-height with tonnes of micro-jitters and say it's just as cinematic as a locked-off 85mm close-up shot from eye-level. If they did, then everything would be equal, and there would be no language of cinema at all.
  19. I used to shoot 90% of the time on a 35mm-equivalent prime, but have since realised that one part of the "cinematic look" is using longer focal lengths, so I'm switching to zooms for this purpose. Also, you might be interested to know that loads of the old 16mm zoom lenses were around 35mm at their widest end. When I first started looking at them I couldn't understand it, especially considering that we're drowning in 16-35mm FF lenses at this point, but I realised that most productions don't shoot with anything that wide. If you look at those "list of movies shot on one prime lens" posts, you'll notice that basically none were shot on anything wider than 35mm - but that's 35mm on S35, so more like 50mm equivalent (the arithmetic is below). Of course, I'm not saying anyone is wrong to shoot on anything wider than a 35mm equivalent, and of course I have been talking about narrative content, not event coverage, but I find it's useful to keep this in mind when thinking about what tools create what looks.
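The equivalence arithmetic behind that point, as a quick check (crop factors are approximate - Super 35 is usually quoted around 1.4-1.5x vs full frame):

```python
def ff_equivalent(focal_mm, crop_factor):
    """Full-frame-equivalent field of view for a lens on a cropped sensor."""
    return focal_mm * crop_factor

print(ff_equivalent(35, 1.5))   # 35mm on S35 -> ~52.5mm FF equivalent
print(ff_equivalent(35, 2.0))   # 35mm on MFT -> 70mm FF equivalent
```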
  20. Is it a fee per unit manufactured? Per product model? For the manufacturer?
  21. Of course it could be, but there's a question about whether they'll do it or not. This isn't the "Pro" version of the camera - it aligns with the P6K rather than the P4K Pro, which added NDs and other cool stuff. I'd imagine a Pro version will come out, maybe in a year, that adds internal NDs and other pro features.

     The other rationale might be that this is a step in the 'ecosystem war' they seem to have started. Conspicuously, Resolve does not support Prores RAW - which says to me that they're essentially holding Resolve hostage and saying 'if you want to grade RAW from consumer cameras then use our RAW recorders or cameras'. They're not about to drop support for REDCODE RAW or anything in professional circles, but at the "you can shoot RAW with our camera, you just need to assimilate into the Borg collective first" level, they're fighting for control. I don't think this is the first example of them having a conspicuous lack of support, but I can't recall the specifics off the top of my head right now.

     Incidentally, the more I learn about what it's like to grade in non-colour-managed environments, the more I realise that Resolve is the killer app for colour grading. While other apps do seem to offer colour management, I think that by the time someone wraps their head around the topic sufficiently to get the most from it, they'll be familiar with a bunch of advanced techniques (like subtractive colour models etc) and will want to colour grade in Resolve rather than in the other tools. And once you're colour grading in Resolve, you can just edit and mix in it too, which eliminates the round-tripping; and if you then switch to BM cameras or recorders, you can eliminate the converter for your Prores RAW setup, and bam, now you're fully in the BM ecosystem. Ironically, Instagram (which has sub-1080p resolution) has taught everyone that colour grading is king, and that might have positioned Resolve as the middle of the post-production workflow. I mean, if colour wasn't important then the internet wouldn't be full of people selling matrices of numbers (LUTs).
  22. Yeah, that's sort-of the same as not using colour management, as (unless you use the HDR palette to grade with) you're not getting any of the advantages of colour management. Your workflow is basically the same as applying a LUT and then trying to grade in 709, which is essentially ignoring 95% of the power of the tools at your disposal.

     Without teaching you how to do colour management, as there are much better sources for that than me typing, what you want your pipeline to look like is this (a schematic sketch follows below):

     1) A transformation from the camera's colour space / gamma into a working colour space / gamma
     2) All your adjustments, made in the working colour space / gamma
     3) A transformation from the working colour space / gamma into a display colour space / gamma (e.g. 709)

     What you have told Resolve to do is to transform the camera's colour space / gamma into Rec709, so any adjustments you make will happen in Rec709. Some of the tools are designed for Rec709 but some are designed for log, and most of them won't really work that well in 709. Only the "old timers" who learned to grade in 709 and won't change still do it this way; everyone else has switched to grading in a log space - some still grade in ARRI LogC because that's how they learned, but everyone else grades in ACES or DaVinci Intermediate, as these spaces were designed for this.

     I transform everything into DaVinci Intermediate / DaVinci Wide Gamut and grade in that as my timeline / working space (these mean the same thing). I can pull in Rec709 footage, LOG footage, RAW footage, etc, and grade them all on the same timeline right next to each other - for example iPhone, GX85 (709), GH5, BMPCC, etc. I can raise and lower exposure, change WB, and do all the cool stuff to all the different clips, and they all respond the same and just do what you want. If you've ever graded RAW footage and experienced how you can push and pull the footage however you want and it just does it, then it's like that, but with all footage from all cameras. Before I adopted this method, colour grading just felt like fighting with the footage; now everything grades like butter.
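A schematic sketch of those three stages, with deliberately simplified stand-in transforms. For simplicity it uses linear light as the working space (where DaVinci Intermediate or ACEScct would be log encodings); the curve and pixel values are hypothetical, and a real camera log curve and display transform are more involved - this only shows the shape: in, grade in the middle, out.

```python
import numpy as np

def camera_log_to_linear(x):
    """Stand-in for a camera log curve: a hypothetical 14-stop encoding."""
    return np.exp2(x * 14.0 - 8.0)

def linear_to_display(x):
    """Stand-in for a 709-ish display transform (simple 2.4 gamma)."""
    return np.clip(x, 0.0, 1.0) ** (1.0 / 2.4)

camera_log = np.array([0.2, 0.4, 0.6])    # hypothetical log-encoded pixels

linear = camera_log_to_linear(camera_log) # 1) camera space -> working space
graded = linear * 2.0 ** 0.5              # 2) grade in working space (+0.5 stop)
display = linear_to_display(graded)       # 3) working space -> display
print(display)
```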
  23. The order of each pair is the same, cam A then cam B. Were your observations consistent? I wouldn't be surprised if I didn't perfectly match the softness, considering the difference in lenses and resolutions. When I was done I noticed that in some shots the greens were more blue and in others more yellow, but I also noticed that it wasn't consistent which way around it was. I went back and checked, and it wasn't consistent, so it was just my grading. For my purposes, I concluded that it was irrelevant, even if I was using them both on the same shoot. If someone watches your video and their comment is "the hues of the trees didn't match", then that's a comment on your ability to tell an engaging story, not on your colour grading!!
  24. Are you using colour management of some kind? ACES / RCM? Once you've learned that, almost all footage is the same. I can't believe that people still grade without doing this - I was never able to get anything to look good until I worked this out. Sadly, it took me far longer to realise this than it should have! I think this is why there are 50,000 YouTube videos from amateurs on "How I grade S-Log 2" and "How to grade J-Log 9", and why every second video from professional colourists (the few that share content online for free) is about colour management. It used to drive me crazy hearing the same thing over and over again, until I realised that it's the key that unlocks everything, and that getting really good results is incredibly simple once you've figured it out (getting great results becomes super complex again, because colour grading is an infinitely deep rabbit hole, but that's well beyond these discussions).

     The other variable is what the individual shots were. It's not uncommon for me to go shoot something, with one camera and one lens, and then some shots just drop into place and look great while other ones are fiddly and I can never get them how I want. This is a hidden trap when downloading footage other people have shot.