Everything posted by kye
-
Yeah, high resolution is great if you want high resolution. Not so good if you are interested in making a meaningful final film.
-
The image quality side-by-sides aren't even close.
-
170Mbps... not terrible. Let's see what the implementation is like, especially the sharpening, NR, and auto-awesome AI. I like the idea that there are two of them, with the "Pro" one having a larger sensor and more resolution etc. I feel like most action products only ever get the generic model, and somehow the higher-end models never get released.
-
Yes, I think the appropriate reaction is the 😂 face... we're happy for you, we are thankful for the idea, but we also know ourselves and thus, the tears!
-
In addition to the above, one of the preceding wipes they apply is often the show LUT, which also reveals a bunch of stuff about the colour grade. Things like:
- cooling shadows / warming highlights
- highlight rolloff
- subtractive saturation effects
- shifts to skin tones
- etc
Normally the VFX house will only have the show LUT, and the "final" shot in the VFX breakdown isn't the same as the final shot in the film, because the final VFX shot will be exported without the show LUT and coloured by the colourist in the finishing process. It will be in the same overall direction, though, and since colour breakdowns are often not available for those movies, it's good info to have.
-
I've had some contact with ILM, discussing how they emulate lenses in post. They do very detailed and meticulous work, and overall it's very impressive. So much so that I actually find the VFX breakdowns a very instructive tool for understanding the look development process. They show the scene starting with a mesh, and then wipe after wipe shows each stage of the VFX process, often with the final wipe being the one that just adds that cinematic magic. In analysing what that final wipe does, I normally see:
- vignetting
- un-sharpening overall, and often more in the corners
- bokeh and defocusing
- glow / halation / mist
If you take the average nice-but-digital looking shot from a modern camera and apply similar effects, the flavour of image quickly goes from video to cinema.
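For the curious, the "final wipe" effects above can be roughed out in a few lines of image maths. This is a minimal sketch of the idea only - the function name and all parameter values are illustrative, not ILM's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cinematic_wipe(img, vignette=0.35, soften=1.5, halation=0.25):
    """Rough 'final wipe': vignette, un-sharpen, halation glow.

    img: float array, shape (H, W, 3), values in [0, 1].
    All parameter names and values are illustrative guesses.
    """
    h, w = img.shape[:2]

    # Vignette: radial falloff, darker toward the corners
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    mask = 1.0 - vignette * np.clip(r / np.sqrt(2), 0, 1) ** 2
    out = img * mask[..., None]

    # Un-sharpen: blend toward a blurred copy of the image
    blurred = gaussian_filter(out, sigma=(soften, soften, 0))
    out = 0.7 * out + 0.3 * blurred

    # Halation / glow: blur only the bright regions and add them back
    highlights = np.clip(out - 0.7, 0, None)
    glow = gaussian_filter(highlights, sigma=(8, 8, 0))
    return np.clip(out + halation * glow, 0, 1)
```

Each stage is a standard compositing primitive; the craft is in tuning them per-shot so they read as photography rather than as an effect.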
-
Yeah, this is one of the other hidden costs of higher resolutions - the inability to keep bitrates reasonable as resolution climbs. Obviously RAW scales with the resolution of the image, but so do the bitrates of the ProRes codecs... and here's the issue - the screens don't get bigger! And your vision doesn't get better either!
You: I'd like to buy a higher resolution camera please.
Shopkeeper: Sure. Here is a stack of larger hard drives, here is a huge new computer, here are the blazing media cards, here is ........
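The scaling is easy to put numbers on. A back-of-the-envelope sketch, assuming bitrate grows roughly linearly with pixel count and taking ~220 Mbps as a reference figure for ProRes 422 HQ at 1080p30 (a published Apple target rate; actual rates vary with frame rate):

```python
def scaled_bitrate(base_mbps, base_res, new_res):
    """Estimate codec bitrate by scaling with pixel count.

    Rough model only: assumes bitrate is proportional to resolution.
    base_mbps is an assumed reference figure, not a measured one.
    """
    bw, bh = base_res
    nw, nh = new_res
    return base_mbps * (nw * nh) / (bw * bh)

# Reference: ~220 Mbps for ProRes 422 HQ at 1920x1080, 30p
for name, res in [("UHD", (3840, 2160)), ("8K", (7680, 4320))]:
    mbps = scaled_bitrate(220, (1920, 1080), res)
    gb_per_hour = mbps * 3600 / 8 / 1000
    print(f"{name}: ~{mbps:.0f} Mbps, ~{gb_per_hour:.0f} GB/hour")
```

Under those assumptions UHD lands around 880 Mbps (~400 GB/hour) and 8K around 3,500 Mbps (~1.6 TB/hour) - hence the stack of hard drives.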
-
Quick - you've got just over a week to stock up!!! Seriously though, this is good. Anything that helps you focus on your goals more and be less distracted is a good move 🙂 While there are always things I'm curious about, sadly, there isn't a huge list of things I'd like to buy - they just don't make the things I really want!
-
Just un-sharpen in post. I remember a great thread on a colourist forum some months ago asking about sharpening, and the responses were that mostly people don't sharpen, and many reduce the sharpness of the image to avoid a digital look. I think that unsharpening might be one of those hidden things that camera fondlers would consider heresy but is widely done by the pros for high end work. They were talking about cameras that shoot RAW too, not compressed codecs from consumer cameras.
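For anyone wanting to try it, "un-sharpening" is just the classic unsharp-mask formula run with a negative amount - the image gets nudged toward a blurred copy of itself instead of away from it. A tiny sketch (the amount and kernel here are illustrative, not values from the colourist thread):

```python
import numpy as np

def unsharp_mask(img, blurred, amount):
    """Classic unsharp mask: out = img + amount * (img - blurred).

    Positive amount sharpens; a NEGATIVE amount 'un-sharpens',
    softening edges to take the digital edge off an image.
    """
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# 1-D demonstration: a hard edge softened with a 3-tap box blur
row = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])
blur = np.convolve(row, np.ones(3) / 3, mode="same")
softened = unsharp_mask(row, blur, amount=-0.5)
```

The edge transition shrinks from 0.2 → 0.8 to roughly 0.3 → 0.7, which is exactly the reduced edge contrast a softer, less "video" image has.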
-
This is the internet... such things are irrelevant when arguing about technical matters!
-
Yep... it's the darnedest thing - on a film set they're thimble-sized and in danger of being lost, but if you pick one up and walk off the film set it starts to grow... as you walk through crowded tourist hotspots it becomes quite large, perhaps the size of a toddler's head, but as you walk away from the crowds it rapidly inflates to the size of a watermelon, with passers-by stopping and staring at you... by the time you leave the areas with moderate foot traffic it has become the size of a dozen adult-themed helium balloons and gathers about the same amount of attention.
-
Hot damn! You mean we can shrink the camera bodies by just lopping bits off? Where's my hacksaw!!
-
Sony a9 III global shutter high ISO / dynamic range tests
kye replied to Andrew Reid's topic in Cameras
Most sensors do a full readout at their highest bit-depth at 24/25/30p, but typically reduce the bit-depth of the readout at higher frame rates. Assuming this was done to save on data rates and processing, it means the manufacturers have been making progress - they just spent it all on resolution instead of bit-depth. Yet another hidden cost of this preposterous resolution pissing contest that the entire industry is running, with consumers cheering all the way down.
-
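The readout trade-off in the post above is simple arithmetic: raw sensor data rate is width × height × bit-depth × frame rate. A sketch with hypothetical numbers (a UHD-resolution readout; these are not the a9 III's actual figures):

```python
def readout_gbps(width, height, bit_depth, fps):
    """Raw sensor readout rate in gigabits per second (illustrative)."""
    return width * height * bit_depth * fps / 1e9

# Hypothetical UHD sensor readout:
full = readout_gbps(3840, 2160, 14, 30)   # 14-bit at 30p: ~3.5 Gbps
fast = readout_gbps(3840, 2160, 12, 120)  # 12-bit at 120p: ~11.9 Gbps
print(full, fast)
```

Dropping two bits only claws back ~14% while quadrupling the frame rate quadruples everything else, which is why bit-depth is the first thing sacrificed as frame rates and resolutions climb.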
If only they weren't so large!
-
Availableism - nice! It reminds me of approaches like Dogme95 etc, which integrate that sort of element and go a lot further with it as well.

Sadly, I'm not surprised about the grant decision. I've been to enough film festivals to know that the thinking is often enormously traditional / blinkered, and also motivated by who-you-know and all that crap too. One student film festival I went to had a film in the documentary category that was old people talking about their sex lives - it was very entertaining, the old folks were all very cute, and it definitely deserved to win an award for concept / direction / producing, which it did. However, it was shot terribly: there were booms in shot on half-a-dozen occasions, and the camera wasn't held steady at times and was bumped significantly and obviously a couple of times, yet it also won best cinematography and best sound, which was completely ridiculous.

One of the things that dominates the overall architecture of how traditional films are made is that many of the people involved are not critical thinkers; they learned how to perform their role but they don't understand the other roles, the overall process, or even how to make a film. There are often territorial disputes as people defend their patch, etc. The film-making process is a factory production line, and most workers in a factory don't understand that it's possible to redesign a factory and make it work better, let alone accept it when someone suggests it.

Yeah, that's a big drawcard about the FX3 that makes it stand out. I don't understand why there aren't more cameras with the native ISOs further apart. Most cameras have lower base ISOs and a much smaller interval between the lower and higher native ISOs, which combines to give the FX3 a huge advantage in low light.
-
Never mind that colourists working on high-end material still regard noise reduction as a critical tool for every shoot, including the ones where all the material was exposed properly in-camera and recorded at native ISO! It's fascinating to download cinema camera footage for the first time and see that it has more noise in it than a mirrorless low-light test. I got a bit of a shock when I saw that for the first time.
-
I agree, but would go further and say that not only will AI radically reduce the cost to make a film you could make now, it will also make films possible that really aren't possible (or aren't practically possible) now. So in that sense it doesn't just reduce costs, it expands the possibilities to be practically infinite.
-
I agree - it wasn't that different. I've spoken at length to a few in PMs about this and I find it a fascinating subject, but I see modern film-making as having three pivotal points. There are probably others, but these are the ones I'm aware of.

1) The French New Wave
This was (how I see it anyway) an exploration of new possibilities of 16mm film that weren't possible with 35mm film. In many ways they took the traditional "coverage" of Hollywood and radically expanded it to include almost all the techniques used in modern film-making.

2) The DSLR revolution
This is pretty much what this forum is for, and what we talk about. The tricky thing is that it didn't deliver what people thought it would. They thought it would mean that anyone could film a movie with a camera and no money at all, and that there would be another revolution in cinema - the FNW part 2 - but this didn't happen. What we got instead, and this is my impression, was TikTok, YouTube, influencers, live streaming / Twitch, etc, which are all new forms of film-making (however creative or worthwhile you might think they are) because they all involve video recordings made into final products and distributed to an audience. The promise was real though, and people like Noam Kroll have mapped out a path for up-ending a lot of the traditional processes.

One particular process he's put forward that I really like, and WOULD change film-making, is:
- Come up with a concept for the film, restricted to things you already have access to (cast, locations, etc)
- Cast the film
- Work out half-a-dozen or so sections with major plot points for the film, even in very high-level terms
- Workshop the characters with the actors, develop a concept for the first section
- Shoot the first section, involving lots of improvisation from the cast, potentially not even writing a script beforehand
- Edit the first section, see what worked and what didn't, concentrating on the performances
- Develop/update the concept for the second section
- Shoot and edit that one, once again with improvisation and a focus on performances
IIRC Noam shot a film like this and ended up giving the two main actors writing credits too. He mentioned that he made a lot of adjustments to later sections based on what worked and didn't in earlier ones, and I have a vague memory that, inspired by some improvisation the actors did on location, he even changed some major plot points to replace them with more interesting ones. This is the kind of thing that would change the future of film-making. Not having a new camera body.

3) AI
The third pivotal point I foresee is AI. Anyone who has watched any (decent) anime will know that anime writers have been enjoying the freedom to create worlds without any practical limitation for over a century now. Until the last few decades no-one could do that with realistic motion pictures, and right now it's mostly limited to those with huge budgets or with incredible skill and huge amounts of free time. AI will change that. The ability to shoot anything you like, however flawed, and have AI add, remove, bend, and change what you shot into whatever your imagination can come up with will be groundbreaking - just like the freedom anime artists have enjoyed until recently. When AI can create Inception from your iPhone footage, things will be unleashed. However, much like we saw with the DSLR revolution, other forms of film-making will be invented too: deep-fakes, alternate histories, and who-knows-what else.

No. I watch a lot of YT. Far more than streaming sites. I do try to include lots of film-making stuff, as well as a great many other interesting and niche things. The world is a fascinating place!!
-
Sure - bigger is better. Easy. The real test would be how they evaluated a large, complicated looking, but crap setup vs a small one that had a much better image. Also, in this imaginary test, you can't pick something old looking vs something that looks much more modern, as that's another giveaway. Size is so correlated with image quality that it's difficult to think of counter-examples. Even the big but really old cinema / ENG cameras are either still actually really good (and 1080p), or they are horrifically dated and wouldn't fool many (old betamax cameras for example). I'm not saying there are no counter examples, but they're a very very small percentage of the possible comparisons. Also, people pretty much know that the longer the lens the more zoomed in it is.
-
Absolutely. I do all sorts of side-by-side tests and nit-pick minor differences in colour grading techniques too, but it's important to always make final judgements on the final result, rather than on some artificial situation. This is why I encourage people to grade the footage, export it, and upload it to whatever platform they'll use for delivery. It doesn't matter how the image looks in your NLE; it's how the viewers will see it that matters (and if it's a paid gig then obviously the producer/director/client needs to be happy too). The only reasons I do these nit-picky tests are:
- to stay familiar with shooting in between my trips (which is what I shoot)
- to eke out the best results I can when shooting in difficult conditions with modest cameras
- to eke out the nicest colour grading I can (spectacular colour is the culmination of a bunch of small tweaks, not a few big ones)
- to test out new techniques and keep learning and improving my skill levels
- to understand and optimise the many trade-offs that we're forced to make
I also do all these things with my cinematography, editing, sound design, and delivery. The whole pipeline matters.
-
I'm pretty sure there are only 4 types of cameras:
- Movie cameras: these are what they use to make movies
- Big cameras, which are used by "professional photographers" or rich tourists
- Cameras, which are used by normal tourists to take photos
- Phones, which are used by adults to take photos of the family, or by millennials when they're awake and have left the house
There is another type of camera though. When you take any camera and point it at yourself in public, you instantly turn into a narcissist, and the size of the camera no longer fits into the above categories but is an indicator of how much better you are than everyone else.
-
Glad to hear you're enjoying it! Some cameras record "super-whites", which are values above 100%. They can be recovered if you pull down the exposure so they come back into the "legal" range of 0-100%. IIRC my Canon XC10 does it, and my Panasonic GX85 definitely does it. Yet another reason to use colour space transforms rather than LUTs: LUTs clip any values below 0% or above 100%, whereas colour space transforms can access and use that data, so it stays available downstream. When I'm getting familiar with a camera, or grading shots with a lot of dynamic range, I'll often pull the exposure down to see what's in the highlights and up to see what's in the shadows, so that if there's anything relevant I can grade appropriately.
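The clipping difference is easy to show with toy numbers. A sketch, assuming a hypothetical 1D LUT applied by interpolation versus a simple gain standing in for a colour space transform (none of these values come from any specific camera):

```python
import numpy as np

# Code values, normalised so 1.0 == 100%; the last two are "super-whites"
signal = np.array([0.2, 0.8, 1.05, 1.09])

# A LUT is only defined on [0, 1], so inputs outside that range clip
# to the LUT's edge values.
lut_in = np.linspace(0.0, 1.0, 33)
lut_out = lut_in ** 0.9                           # hypothetical curve
lut_applied = np.interp(signal, lut_in, lut_out)  # 1.05 and 1.09 -> 1.0

# A mathematical transform (here a 0.9 gain, as a stand-in) has no
# lookup range, so the super-whites survive and can be pulled back
# into legal range downstream.
transform_applied = signal * 0.9
```

After the LUT, both super-white values are identical (flattened to 1.0), while after the transform they remain distinct values below 1.0 that a grade can still shape.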
-
Hidden camera footage of how people react to a small lens being used... the judgement is real.
-
Interestingly, I watched this video, which talks about how "No CGI" normally means "a shitload of invisible CGI". TL;DW: one example shows Tom Cruise repeatedly saying they shot the fighter plane scenes with "no CGI", but the final images only contained planes that were either added entirely in post or added in post to replace planes that were actually flown. None of the planes flown during the shoot were visible in the final image. "No CGI" indeed. Lots of other examples...