Everything posted by kye
-
I haven't been following him for very long so haven't seen a change, but I can tell you that I'd much prefer to hear the thoughts of someone as qualified and experienced as he is versus the gaggle / horde / confusion / seething-mass of "film-making" YouTubers that only know how to shoot a product review YT video! One thing I find in severe scarcity is people that have worked in professional settings and know how the real pros do things, that also know what it's like to make videos from idea to upload by themselves, and can also communicate it in a way that is clear and concise and doesn't have some sort of off-putting trait (like being an arrogant asshat and talking to the audience like they're morons - yes I am thinking of someone specifically).

I also like the fact that he's selling courses. The alternatives are that they do YT for a while but get no return and stop doing it, or they go full shill, or they somehow keep the channel going but you have no idea where their revenue or the equipment to run it comes from, and so trust and credibility just erode over time. I wish more people from the industry would create a YT channel where they share their knowledge for free as advertising for their own courses. Imagine if Deakins etc had a YT channel where they did a 5-minute piece to camera per week!
-
Saying "even the a6700 can look good" is sort-of like saying "even the cheapest Ferrari can go fast".. the a6700 is a very modern camera and high-spec camera. I can understand why you would say something like this though - you've been watching too much "camera YT" and have fallen prey to the two biggest hidden problems: Older cameras are invisible on YT, despite being the majority of what is used People that talk about cameras, or even mention them in the video or description so they're searchable, are using the most recent cameras, or relatively recent cameras. The reason for this is simple - if you shot a video with the Sony a4000 then you're obviously not into the "tech" so it's not something you're thinking about , and putting that in the description isn't going to benefit you because no-one is searching for a4000 anymore. However, the people making videos about anything else other than cameras might be using the a4000, the a3000 or their phone from 5 years ago. I recently discovered a woodworking / renovation channel I like shoots with a C100, which records 24Mbps 1080p but his YT uploads are in 4K and the image is basically flawless. It's over a decade old and you can get entire setups with lenses batteries etc for $500 or so if you snag a deal. The camera body is the most discussed film-making item, but is the least important Go watch almost any video that talks about camera equipment in a balanced way and they'll tell you that the camera body is less important than the lenses or tripods etc. Watch and video about film-making equipment in a balanced way and they'll tell you that the camera rig is less important than lighting or cinematography etc. Watch and video about technical film-making in a balanced way and they'll tell you that equipment is less important than location choice, production design, hair & makeup, etc.. Watch and video about creative film-making in a balanced way and they'll tell you that the technical stuff is important to get right, but is far less important than writing, casting, acting and directing, etc. So... the camera body is the least important item in the least important sub-category of the least important sub-category of film-making.
-
@fuzzynormal I couldn't agree more! Disclaimer: I'm also an old. I have moved from shooting GH5 and manual F0.95 primes to GX85 and variable aperture zoom as my default setup, despite owning complete setups for GH5, OG BMPCC, OG BMMCC, XC10, and Canon 700D with ML, etc. I much prefer the GX85 to the GH5, but for some reason I cannot fathom, I am drawn to the GF3 whenever I get close to my equipment. I also edit in 1080p (but upres this to 4K for upload to YT). I prefer to downsample in-camera if it's not a significant quality bottleneck. I focus my time and energy on colour grading, editing, sound design and composition, as is evidenced by my many threads on these subjects.
-
I saw that one in my feed but haven't watched it yet. I highly recommend watching the YT film-makers that actually do real work - they have a balanced perspective and speak from experience. Like Luc Forsyth, who has shot major network TV shows: Finding good people on YT is quite challenging now, because they tend to just use their own names and they aren't talking about brands etc all the time, so they're hard to search for.
-
Yeah, I assume that too. I've always been conscious of this. The only potentially impartial reviews are when the person buys it anonymously like any member of the public would, gets it at the same time it ships to everyone else, and then puts it through its paces. The other issue with "reviews" that aren't long-term reviews is that the person hasn't had the product for long enough to really test it. People like Gerald might know what shortcomings to look for and actively go looking for them, but no-one can test reliability in less than a week. I find the same issues with product reviews on Amazon etc - they are essentially first-impression reviews.
-
If you're fighting the sun then yeah, serious horsepower definitely comes into the equation!
-
In my casual YT DOP channel viewing I rarely see anyone with anything larger than a 600W, even for blasting through windows to simulate daylight. I guess a lot of people have access to cameras that have good ISO performance above base and also shoot with fast lenses too, and aren't trying to light Hollywood-sized sets, so maybe 600W is enough? Mostly the discussion seems to be about placement and modifiers, not the overall power levels. But, this is just what I've seen, maybe the algorithm is hiding things from me 🙂
-
That all makes sense, and like most things, it might be a while before all the tech is handled correctly so it just works and you don't need to troubleshoot the process at each stage.

I think that 180 VR video is likely to be a winning format. In 360 video people don't really know where to look, and some of the experiments I've seen with it were really hit and miss if there was a narrative arc that you were meant to be experiencing.

Also, all these discussions were had when surround sound first came out and people didn't know what to do with it. People were genuinely talking about mixing live concerts in such a way that put the listener in the middle of the band on the stage, or floating above the band looking down on them like you were in a box seat, only it was a live outdoor concert and there was no seating, etc. Eventually people settled down and realised that for the most part people don't really want those things, but they do want an "if I had been there" sort of experience. So surround audio is mixed like you've got a great seat in the concert, and if there's picture too then the audio is mostly oriented around the point-of-view of that.

So, if it's a choice between supplying 180 video and a stereo mix, or 360 video and a full ambisonic mix, just to be able to look behind you at nothing of any significance whatsoever, I think many will opt for the first one!

Having a wider spread seems sensible - if someone else hasn't done a bunch of tests then it would be relatively easy to do once you have the equipment.
-
After posting the previous post I went back and compared the looks a few times and realised I was a bit harsh on the ARRI LUT, considering that it was very flattering on my battered skin tone but basically didn't screw up the strong colours too much, whereas the film look is much stronger without being that much more flattering. Inspired by the ARRI LUT, I created this custom grade from scratch.

SOOC (for reference):

New Custom Look:

ARRI LUT (for reference):

I'm actually really happy with that look - I went a bit further in evening out the skin tones and brightening them up a bit and it didn't seem to come at the expense of anything else. I think I could easily build a look around this, and will experiment further.
-
I'm still lost down this rabbit hole, but these are an interesting reference. This is what happens if you put the GX85 through a "look".

I put the GX85 test image through a bunch of output LUTs to see which (if any) I liked the flavour of. In order to compare them equally, I adjusted after the LUT to match the black and white points, exposure, and contrast. This way we're not just comparing the different contrast curves, but the other aspects of the LUTs like colour rendering etc.

The node structure was this:

1. Slightly lower gain to bring the GX85 image into range (it records super-whites)
2. CST from 709/2.4 to Davinci Intermediate (my preferred working colour space)
3. (my grade would go here, but for this test no adjustments were made)
4. CST to whatever colour space the LUT expects
5. The LUT (these all convert to 709/2.4)
6. A curve to adjust the overall levels from the LUT to roughly approximate the GX85 image

The round-trip from 709/2.4 to DWG to 709/2.4 is almost transparent if you compensate for the gamut and saturation compression in the conversion at the end, so I didn't bother to grab it.

Results:

The famous ARRI K1S1 LUT (the ARRI factory LUT):

One of the 5000 BMD LUTs that come with Resolve, which I tried just for fun:

The Kodak 2383 PFE (Print Film Emulation) LUT. The D55 one seemed the closest match to the WB of the image for some reason, but everyone always uses the D65 ones, so I've included both here for comparison. The D65 one:

The Kodak 2393 PFE. It doesn't come with Resolve but it's free online from a bunch of places. I like it because it doesn't tint the shadows as blue, so the image isn't as muddy / drab:

The FujiFilm 3513 PFE:

I find the ARRI LUT a bit weak - it helps but not as much as I'd like. The comparison above is flattering to the LUT because it has a bit more contrast compared to the SOOC so looks a bit better. The skintones are a little more flattering on it though, which might be enough if you want a more neutral look.

All the PFE looks are very strong, and aren't really meant to be used on their own. The film manufacturers designed the colour science to look good when used with a negative film like the Kodak 250D or 500T stocks, so it's "wrong" to use it unless you're grading a film scan, but I think people use it like this anyway 🙂

Some time ago I purchased a power-grade from Juan Melara that emulated both the 250D and the 2393 PFE, which looks like this:

To me it looks much more normal than just the 2393 PFE on its own, but it's definitely a stronger look. The powergrade is split into nodes that emulate the 250D separately to the 2393, and the 2393 nodes are almost indistinguishable from the LUT, so I'd imagine this is probably a good emulation.

Anyway, lots of flexibility in these 8-bit files!
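If anyone's curious about the mechanics of step 5 in that chain, here's a minimal sketch of what "the LUT" actually does to the pixels - this is just my own illustration, not Resolve's implementation (in Resolve all of this is just nodes). It assumes a standard 3D .cube file and an image that's already been converted into the space the LUT expects and normalised to 0-1; the `load_cube` / `apply_lut` helpers are mine, written for this example.

```python
# Minimal sketch: load a 3D .cube LUT and apply it with trilinear interpolation.
# Assumes the image is already in the colour space the LUT expects, scaled 0-1.
import numpy as np

def load_cube(path):
    """Parse a 3D .cube file into an (N, N, N, 3) array (blue, green, red axis order)."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3:
                try:
                    rows.append([float(p) for p in parts])
                except ValueError:
                    pass  # skip TITLE, DOMAIN_MIN/MAX, etc.
    return np.array(rows).reshape(size, size, size, 3)  # red varies fastest in .cube data

def apply_lut(img, table):
    """Apply the 3D LUT to an HxWx3 float image in [0, 1] using trilinear interpolation."""
    n = table.shape[0]
    idx = np.clip(img, 0.0, 1.0) * (n - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = idx - lo
    out = np.zeros(img.shape)
    # Blend the 8 surrounding lattice points, weighted by distance to each.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                r = hi[..., 0] if dr else lo[..., 0]
                g = hi[..., 1] if dg else lo[..., 1]
                b = hi[..., 2] if db else lo[..., 2]
                w = ((frac[..., 0] if dr else 1 - frac[..., 0])
                     * (frac[..., 1] if dg else 1 - frac[..., 1])
                     * (frac[..., 2] if db else 1 - frac[..., 2]))
                out += w[..., None] * table[b, g, r]  # .cube tables index as [blue, green, red]
    return out
```

Feeding the same frame through several tables this way is essentially what the node tree above is doing, minus the colour management either side of the LUT.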
-
I'd agree with you, except that about half the movies in the box office aren't much better.. add in all the religious propaganda movies that thrive in more religious areas of the world and sadly, I think there's a huge market 😞
-
One question that stood out to me immediately was how much you want the audio to move around when the person in VR turns their head. For example, if you record stereo and the viewer turns their head, the visuals will all move but the audio won't change at all - I would imagine that to be unnerving and potentially ruin the immersion, wouldn't it? Does VR have a standard format you can record to, which the headset then decodes to match where the viewer is looking?
-
Maybe Canon are thinking that they'll keep their exclusivity on the super-sharp, super-expensive lenses, but let the third parties develop lower-cost, less technically perfect lenses? It would make sense, and it would make the system a lot more accessible and attract a lot of new customers that wouldn't want to spend top dollar on pristine lenses. Their success with the EF line and how ubiquitous it was must have been a critical factor in their earnings over the decades, so making RF a new default standard is very much in their interests. You might be right about the split between FF and APS-C lenses though - that's still a strategy in a similar direction.
-
This might be a blessing-in-disguise, as a lot of cameras actually lose the last few seconds of footage before you hit stop. So it might be compensating for that and making sure no clips are cut short.
-
I understand that a person can look at a larger quantity of footage and notice similarities and themes, but there are still a great number of unaccounted-for variables that can always bite you in the ass if you were to actually get that camera.

The general look that cameras have online is likely to be the default look, partly because most people don't know the first thing about colour grading and mostly because the people who are posting videos and specifying the model number of the camera are likely in the shallow end of the skills pool, so to speak. The exception is cinematographers doing camera tests, but these have their own issues.

The challenge comes when you try to change the image in post. Try to add a bit more contrast and you might find that the image doesn't keep the things you liked about the look. In fact, the nicer the image looks SOOC or with the default LUT on it, the more fragile the image might be, because it will already be pushed further. The most flexible images are the most neutral, and our brain doesn't like neutral images - it wants ones with the right herbs and spices already added.

There really is no substitute for actually shooting with the camera the way that you shoot, in the situations you shoot in, and then grading it the way you grade it, trying to get the look you want, with your level of skill.

TBH, most of the videos I see that have the name of the camera in them, and that are graded with a "look", actually look pretty awful and amateurish to me. Either it's a lack of skill as a colourist and they weren't able to get the look they wanted, or they did get the look they wanted and the look is just awful - it's not a promising picture either way. I wonder how many of them are using colour management. If a camera shoots 10-bit LOG with a decent bitrate then it is one CST away from being almost indistinguishable from any other camera. Skin tones are a challenge of course, but when well-shot on capable equipment these are pretty straight-forward.

There are a few principles I think are at play here:

1. What I hear from high-level colourists is that if a project is well shot on capable equipment (without a "we'll fix it in post" mindset) then you can get your colour management set up, put a look in place, and 80% of the shots just fall into place. Then the time can be spent refining the overall look, adding a specific look to certain scenes (night scenes, dream sequences, etc), fixing any problem shots, and then doing a fine-tune pass on all shots with very minor adjustments. If it's not well shot to get it mostly right in-camera then you're in all sorts of trouble for post.

2. The client might be inexperienced and not know what they want, or want something that is very different to how they shot the project. It's very easy to see colour grading make big changes (e.g. shooting day for night) or see the amazing VFX work done by Hollywood etc, and assume that anyone with a grading panel and a calibrated reference monitor can do anything with any footage.

3. The client might be a diva, or somehow mentally unbalanced. Film-making is difficult enough to make almost anyone mentally unbalanced by the time they get to post-production, when they're sitting with the colourist and every mistake made at any point on the project is becoming clearly visible on the huge TV in their studio. Throwing a fit at this point is perhaps a predictable human reaction!
One colourist I heard interviewed said that when they were colour grading rap videos in the '80s they had to tell one client, who had about 20 people in the colour grading suite, that the strippers, cocaine, and machine guns had to go back into the limo otherwise they wouldn't be able to colour grade the project.

Of course, none of this is the fault of the camera. I'd even theorise that the brand of camera might be a predictor of how much the colour grading process was set up to fail - if something was shot on a Sony rather than a Canon you might find the owner is more likely to be a clueless and self-entitled influencer etc. God help the colourists that are going to face a barrage of projects over the next few years shot on the FX3 where the person thinks the colourist can duplicate The Creator in post for a few thousand dollars! Also, the stronger the look you apply in post, the more those small colour science differences get lost in the wash.

It's also worth asking: do you think the colourists on reddit are the ones who are fully booked with more professional clients who have realistic expectations, or the ones out there dealing with the stressed masses and going online to learn and vent? My experience on the colourist forums is that the most experienced folks burn out from answering the same questions over and over again, and arguing with people who don't want to learn or put in the work, so the people who are there are mostly those early in their journeys.

Only you can know this, because what you love will be different to what anyone else loves. But don't ask random strangers online - actually try it....

https://sonycine.com/testfootage/

https://zsyst.com/sony-4k-camera-page/sony-f55-sample-footage-downloadable-samples/

🙂
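To make the "one CST away" point above a bit more concrete, here's a rough sketch of what a colour space transform actually is: undo the camera's log curve back to scene-linear, re-map the gamut with a 3x3 matrix, then re-encode into the working space. This is my own toy illustration, not Resolve's CST node or any manufacturer's published maths - the log curve and matrix below are placeholders, and in practice you'd just use the CST node or the real published transforms.

```python
# Hedged sketch of a CST: linearise -> 3x3 gamut matrix -> re-encode.
# The curve and matrix are PLACEHOLDERS for illustration, not any camera's real values.
import numpy as np

MID_GREY = 0.18

def placeholder_log_encode(linear):
    """Toy log curve: maps scene-linear so mid-grey lands at 0.5 (not a real camera curve)."""
    return np.log2(np.maximum(linear, 1e-6) / MID_GREY) / 12.0 + 0.5

def placeholder_log_decode(code):
    """Inverse of the toy log curve above."""
    return MID_GREY * 2.0 ** ((code - 0.5) * 12.0)

# Placeholder camera-gamut -> working-gamut matrix (identity = gamuts assumed equal)
GAMUT_MATRIX = np.eye(3)

def cst(log_rgb):
    """One CST: camera log -> scene-linear -> working gamut -> working-space log."""
    linear = placeholder_log_decode(log_rgb)   # undo the camera's transfer curve
    linear = linear @ GAMUT_MATRIX.T           # re-map primaries / white point
    return placeholder_log_encode(linear)      # encode into the working space
```

The point being that once two well-shot 10-bit LOG files have been through their respective transforms into the same working space, most of the "brand difference" that survives is small enough to disappear under any reasonably strong look.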
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
Absolutely. It works even for people who are genuine as well. If someone is learning the subject then they'll be gradually exploring all the many aspects of it, but it's only once they've explored many / most of these things that they'll start to connect things together and get clear on how they all relate to each other and how they all relate to the desired outcomes etc. It requires that the person go through all the detail in order to integrate it into a single summary that can be explained to the average grandmother. You can skip various bits of the picture, but the outcome of that is that your understanding will be skewed towards or away from certain things rather than being balanced.

I've personally found that film-making is a very complex topic because it involves the full gamut of topics... light, optics, sound, analog and digital electronics, digital signal processing, the human visual system, psychoacoustics, editing which involves spatial and temporal dimensions, colour theory and emotional response to visual stimulus, sound design and mixing and mastering and emotional response to auditory stimulus, storytelling, logistics and planning, and depending on how you do it, it might include business and marketing and accounting etc, or recruiting and managing a multi-disciplinary team to perform a complex one-off project, etc.

It's no wonder that it takes people a good while to muddle through it, that at any given time the vast majority are in the middle somewhere, and that many get lost and never make it back out into the daylight again. Add into that the fragility of the ego, vested interests, social media dopamine addiction, cultural differences, limited empathy and sympathy, etc and it's a big old mess 🙂
-
Excellent point about the compatibility - I'm so used to MFT, where almost everything is interchangeable, that I'm not used to even thinking about these things!

In terms of it being a prop, I would have thought that it would have been easier to grab whatever was the cheapest / most common / not-rented item from their camera rental house. I mean, if you're shooting a feature film then you're renting a bunch of stuff anyway, so renting an extra 16mm setup to use as a prop wouldn't be hard at all. They could have rented it from a production design rental house along with all the other props etc, but most things in a place like that would be non-working cameras that got turned into props when they stopped working. In this sense, it's very unlikely to have been a camera / lens combination that wasn't compatible - someone would have had to glue the lens onto the body or something, and that extra effort wouldn't be needed considering that enough of those cameras and lenses wore out or got dropped into a river etc over the years that they'd be worthless and ubiquitous.
-
I don't think so.. all the photos I found showed that the Angenieux has the writing on the outside, not visible from the front. Filters don't tend to have writing on them like that - that pattern looks like lens info anyway. None of the ones on here have writing that looks similar either: https://www.oldfastglass.com/cooke-10860mm-t3

It seems to have one of those boxes that controls the lens and provides a rocker switch for zooming etc - maybe that narrows it down? Maybe it's an ENG lens rather than a cinema lens?
-
I've heard that the 12K files are very usable in terms of performance, but it will likely depend on what mode you're shooting in. Most people aren't using the 12K at 12K - they're using it at 4K or 8K. Regardless, Resolve has an incredible array of functionality to improve performance and enable real-time editing and even colour correction on lesser hardware. This is a good overview:
-
When you say "like they are emitting light themselves" you have absolutely nailed the main problem of the video look. I don't know if you are aware of this, so maybe you're already way ahead of the discussion here, but here's a link to something that explains it way better than I ever could (linked to timestamp): This is why implementing subtractive saturation of some kind in post is a very effective way to reduce the "video look". I have recently been doing a lot of experimenting and a recent experiment I did showed that reducing the brightness of the saturated areas, combined with reducing the saturation of the higher brightness areas (desaturating the highlights) really shifted the image towards a more natural look. For those of us that aren't chasing a strong look, you have to be careful with how much of these you apply because it's very easy to go too far and it starts to seem like you're applying a "look" to the footage. I'm yet to complete my experiments, but I think this might be something I would adjust on a per-shot basis. You'd have to see if you can adjust the Sony to be how you wanted, I'd imagine it would just do a gain adjustment on the linear reading off the sensor and then put it through the same colour profile, so maybe you can compensate for it and maybe not. TBH it's pretty much impossible to evaluate colour science online. This is because: If you look at a bunch of videos online and they all look the same, is this because the camera can only create this look? or is this the default look and no-one knows how to change it? or is this the current trend? If you find a single video and you like it, you can't know if it was just that particular location and time and lighting conditions where the colours were like this, or if the person is a very skilled colourist, or if it involved great looking skin-tones then maybe the person had great skin or great skill in applying makeup, or even if they somehow screwed up the lighting and it actually worked out brilliantly just by accident (in an infinite group of monkeys with typewriters one will eventually type Shakespeare) and the internet is very very much like an infinite group of monkeys with typewriters! The camera might be being used on an incredible number of amazing looking projects, but these people aren't posting to YT. Think about it - there could be 10,000 reality TV shows shot with whatever camera you're looking at and you'd never know that they were shot on that camera because these people aren't all over YT talking about their equipment - they're at work creating solid images and then going home to spend whatever spare time they have with family and friends. The only time we hear about what equipment is being used is if the person is a camera YouTuber, if they're an amateur who is taking 5 years to shoot their film, if they're a professional who doesn't have enough work on to keep them busy, or if the project is so high-level that the crew get interviewed and these questions get asked. There are literally millions of moderately successful TV shows, movies, YouTube channels that look great and there is no information available about what equipment they use. Let's imagine that you find a camera that is capable of great results - this doesn't tell you what kind of results YOU will get with it. Some cameras are just incredibly forgiving and it's easy to get great images from, and there are other cameras that are absolute PIGS to work with, and only the worlds best are able to really make the most of them. 
For the people in the middle (i.e. not a noob and not a god), the forgiving ones will create much nicer images than the pigs, but in the hands of the world's best, the pig camera might even have more potential.

It's hard to tell, but it looks like it might even be 1/2. You have to change the amount when you change the focal length, but I suspect Riza isn't doing that because of how she spoke about the gear. It's also possible to add diffusion in post, and lifting the shadows with a softer contrast curve can have a similar effect.
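In case it helps anyone, here's a rough sketch of the two adjustments I described above (darken the saturated areas, desaturate the highlights) in code form. This is just an illustration of the idea - the weighting curves and strengths are arbitrary placeholder choices, not the settings from my grade - and in Resolve you'd normally do the same thing with the Lum vs Sat and Sat vs Lum curves.

```python
# Sketch of two "subtractive-feeling" adjustments on an HxWx3 float RGB image:
#   1. reduce the brightness of highly saturated areas
#   2. reduce the saturation of the brightest areas (desaturate highlights)
# Strengths and ramps below are placeholder values for illustration only.
import numpy as np

def luma(rgb):
    """Rec.709 luma weights."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def saturation(rgb):
    """Crude per-pixel saturation estimate: max(R,G,B) - min(R,G,B)."""
    return rgb.max(axis=-1) - rgb.min(axis=-1)

def darken_saturated(rgb, strength=0.15):
    """Pull the brightness of saturated pixels down in proportion to their saturation."""
    gain = 1.0 - strength * np.clip(saturation(rgb), 0.0, 1.0)
    return rgb * gain[..., None]

def desaturate_highlights(rgb, start=0.7, strength=0.5):
    """Blend bright pixels toward their own luma, ramping in above `start`."""
    y = luma(rgb)[..., None]
    ramp = np.clip((luma(rgb) - start) / (1.0 - start), 0.0, 1.0)[..., None]
    return rgb + (y - rgb) * ramp * strength

def less_video_looking(rgb):
    """Apply both adjustments; dial the strengths by eye, per shot."""
    return desaturate_highlights(darken_saturated(rgb))
```

Applied gently, the bright saturated "self-emitting" areas stop glowing quite so much; applied heavily, it starts to read as a look in its own right, which is why I think I'll end up adjusting it per shot.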
-
Panasonic S5 II (What does Panasonic have up their sleeve?)
kye replied to newfoundmass's topic in Cameras
I think that if you can possibly manage it, it's best to provide the simplification yourself rather than through external means. This gives you flexibility in the odd case you need it, and doesn't lock you in over time.

The basic principle I recommend is to separate R&D activities from production. Specifically, I would recommend doing a test of the various ways you can do something or tackle some problem, and the options for your workflow, evaluating the experience and results, then picking one and treating it like that's your limitation. I'm about to do one of those cycles again, where I've had a bunch of new information and now need to consolidate it into a workflow that I can just use and get on with it. Similarly, I also recommend doing that with the shooting modes, as has happened here:

I find that simple answers come when you understand a topic fully. If your answers to simple questions aren't simple answers then you don't understand things well enough. I call it "the simplicity on the other side of complexity" because you have to work through the complexity to get to the simplicity.

In terms of my shooting modes, I shoot 8-bit 4K IPB 709 because that's the best mode the GX85 has, and camera size is more important to me than the codec or colour space. If I could choose any mode I wanted I'd be shooting 10-bit (or 12-bit!) 3K ALL-I HLG 200Mbps h264. This is because:

- 10-bit or 12-bit gives lots of room in post for stretching things around etc and it just "feels nice"
- 3K because I only edit on a 1080p timeline, but having 3K would downscale some of the compression artefacts in post rather than have all the downscaling happening in-camera (and if I zoom in post it gives a bit more extension - mind you, you can zoom to about 150% invisibly if you add appropriate levels of sharpening)
- ALL-I because I want the editing experience to be like butter
- HLG because I want a LOG profile that is (mostly) supported by colour management, so I can easily change exposure and WB in post photometrically without strange tints appearing, and not just a straight LOG profile because I want the shadows and saturation to be stronger in the SOOC files so there is a stronger signal to compression noise ratio
- 200Mbps h264 because ALL-I files need about double the bitrate compared to IPB, and also I'd prefer h264 because it's easier on the hardware at the moment, but h265 would be fine too (remembering that 3K has about half the total pixels of 4K - see the rough arithmetic below)

The philosophy here is basically that capturing the best content comes first, the best editing experience comes next, then the easiest colour grading experience, then the best image quality after that. This is because the quality of the final edit is impacted by these factors in that order of importance.
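As a back-of-envelope check on that last bullet - my own arithmetic, with 3K taken as 2880x1620 purely for illustration (the exact frame size would depend on the camera):

```python
# Rough arithmetic behind the 3K ALL-I 200Mbps choice (frame sizes are illustrative).
pixels_4k = 3840 * 2160    # ~8.3 MP per frame
pixels_3k = 2880 * 1620    # ~4.7 MP per frame, a bit over half of 4K

ipb_4k_bitrate = 100       # Mbps, a typical consumer 4K IPB rate (e.g. the GX85's 100Mbps)
alli_penalty = 2.0         # ALL-I needs roughly double the IPB bitrate for similar quality

# Scale by pixel count and the ALL-I penalty to estimate an equivalent 3K ALL-I bitrate
equiv_3k_alli = ipb_4k_bitrate * (pixels_3k / pixels_4k) * alli_penalty
print(equiv_3k_alli)       # ~112 Mbps, so a 200Mbps mode would leave comfortable headroom
```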