kye

  1. I'm still lost down this rabbit hole, but these are an interesting reference. This is what happens if you put the GX85 through a "look". I put the GX85 test image through a bunch of output LUTs to see which (if any) I liked the flavour of. In order to compare them equally, I adjusted after the LUT to match the black and white points, exposure, and contrast. This way we're not just comparing the different contrast curves, but the other aspects of the LUTs like colour rendering etc.

The node structure was this:
  • Slightly lower gain to bring the GX85 image into range (it records super-whites)
  • CST from 709/2.4 to DaVinci Intermediate (my preferred working colour space)
  • (my grade would go here, but for this test no adjustments were made)
  • CST to whatever colour space the LUT expects
  • The LUT (these all convert to 709/2.4)
  • A curve to adjust the overall levels from the LUT to roughly approximate the GX85 image

The round-trip from 709/2.4 to DWG to 709/2.4 is almost transparent if you compensate for the gamut and saturation compression in the conversion at the end, so I didn't bother to grab it.

Results:
  • The famous ARRI K1S1 LUT (the ARRI factory LUT):
  • One of the 5000 BMD LUTs that come with Resolve, which I tried just for fun:
  • The Kodak 2383 PFE (Print Film Emulation) LUT. The D55 one seemed the closest match to the WB of the image for some reason, but everyone always uses the D65 ones, so I've included both here for comparison. The D65 one:
  • The Kodak 2393 PFE. It doesn't come with Resolve but it's free online from a bunch of places. I like it because it doesn't tint the shadows as blue, so the image isn't as muddy / drab:
  • The FujiFilm 3513 PFE:

I find the ARRI LUT a bit weak - it helps, but not as much as I'd like. The comparison above is flattering to the LUT because it has a bit more contrast compared to the SOOC, so it looks a bit better. The skintones are a little more flattering on it though, which might be enough if you want a more neutral look.

All the PFE looks are very strong, and aren't really meant to be used on their own. The film manufacturers designed the colour science to look good when used with a negative film like the Kodak 250D or 500T stocks, so it's "wrong" to use one unless you're grading a film scan, but I think people use them like this anyway 🙂

Some time ago I purchased a powergrade from Juan Melara that emulates both the 250D and the 2393 PFE, which looks like this:

To me it looks much more normal than just the 2393 PFE on its own, but it's definitely a stronger look. The powergrade is split into nodes that emulate the 250D separately to the 2393, and the 2393 nodes are almost indistinguishable from the LUT, so I'd imagine this is probably a good emulation.

Anyway, lots of flexibility in these 8-bit files!
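For anyone who wants to play with the ordering outside Resolve, here's a minimal sketch of that chain in Python. Everything in it is a toy stand-in - the curves are NOT the real DaVinci Intermediate or 2383 maths, and this isn't a Resolve API - the order of operations is the only thing it's meant to demonstrate:

```python
import numpy as np

def cst(img, decode, encode):
    # Stand-in Colour Space Transform: linearise with the source curve,
    # then encode with the destination curve.
    return encode(decode(img))

rec709_to_lin = lambda v: np.clip(v, 0.0, None) ** 2.4        # simplistic gamma 2.4
lin_to_rec709 = lambda v: np.clip(v, 0.0, None) ** (1 / 2.4)
lin_to_log    = lambda v: np.log2(v * 16 + 1) / 8             # toy log "working space"
log_to_lin    = lambda v: (2 ** (v * 8) - 1) / 16

def toy_print_lut(img):
    # Stand-in for a PFE-style output LUT (just adds a little contrast).
    return np.clip(img * 1.2 - 0.1, 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)      # dummy GX85 frame

x = frame * 0.95                          # 1. gain down (super-whites into range)
x = cst(x, rec709_to_lin, lin_to_log)     # 2. 709/2.4 -> working space
                                          # 3. (the grade would go here)
x = cst(x, log_to_lin, lin_to_rec709)     # 4. -> whatever space the LUT expects
x = toy_print_lut(x)                      # 5. the LUT itself (outputs 709/2.4)
lo, hi = x.min(), x.max()                 # 6. match levels for a fair comparison
x = (x - lo) / (hi - lo)
```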
  2. I'd agree with you, except that about half the movies at the box office aren't much better. Add in all the religious propaganda movies that thrive in more religious areas of the world and, sadly, I think there's a huge market 😞
  3. One question that stood out to me immediately was how much you want the audio to move around when the person in VR turns their head. For example, if you record stereo and the VR viewer turns their head, the visuals will all move but the audio won't change at all - I would imagine that to be unnerving and potentially ruin the immersion, wouldn't it? Does VR have a standard format you can record to, where the headset will then decode the audio to match where the viewer is looking?
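As far as I know, the usual answer here is ambisonics - you record the full spherical sound field and the player rotates it against the head orientation before decoding to the headphones, so the audio stays locked to the world rather than the head. A rough first-order sketch in Python/numpy below; the channel ordering and rotation signs vary between conventions, so treat it as illustrative only:

```python
import numpy as np

# First-order ambisonic (B-format) signal rotated to follow head yaw.
# W (omni) and Z (height) are unaffected by yaw; X/Y get a 2D rotation.
def rotate_yaw(w, x, y, z, yaw_rad):
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    x_r = c * x + s * y    # signs depend on the convention in use
    y_r = -s * x + c * y
    return w, x_r, y_r, z

sr = 48000                 # 1 second of a dummy 4-channel recording at 48 kHz
w, x, y, z = (np.random.randn(sr).astype(np.float32) for _ in range(4))
w, x, y, z = rotate_yaw(w, x, y, z, np.deg2rad(30))  # viewer turned 30 degrees
```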
  4. Maybe Canon are thinking that they'll keep their exclusivity on the super-sharp, super-expensive lenses, but let the third parties develop lower-cost, less technically perfect lenses? It would make sense: it would make the system a lot more accessible and attract a lot of new customers who wouldn't want to spend top dollar on pristine glass. Their success with the EF line and how ubiquitous it was must have been a critical factor in their earnings over the decades, so making RF a new default standard is very much in their interests. You might be right about the split between FF and APS-C lenses though - that's still a strategy in a similar direction.
  5. This might be a blessing in disguise, as a lot of cameras actually lose the last few seconds of footage before you hit stop. So it might be compensating for that and making sure no clips are cut short.
  6. It's worth pointing out that thermals might be the dominant factor here, considering that laptops will throttle their performance to manage overheating, so a few extra fans in the laptop can make more difference than which model of CPU / GPU you buy!
  7. Ha! Look at Tiffen putting all their info on there for a lens shade!! Talk about padding your part. How funny! Could the lens be a 15-150mm T3.1?
  8. I understand that a person can look at a larger quantity of footage and notice similarities and themes, but there are still a great number of unaccounted-for variables that can bite you in the ass if you were to actually get that camera.

The general look that cameras have online is likely to be the default look, partly because most people don't know the first thing about colour grading, and mostly because the people who post videos and specify the model number of the camera are likely in the shallow end of the skills pool, so to speak. The exception is cinematographers doing camera tests, but these have their own issues.

The challenge comes when you try to change the image in post. Add a bit more contrast and you might find that the image doesn't keep the things you liked about the look. In fact, the nicer the image looks SOOC or with the default LUT on it, the more fragile it might be, because the more pushed it will be. The most flexible images are the most neutral, and our brain doesn't like neutral images - it wants ones with the right herbs and spices already added.

There really is no substitute for actually shooting with the camera the way that you shoot, in the situations you shoot in, and then grading it the way you grade, trying to get the look you want, with your level of skill.

TBH, most of the videos I see that have the name of the camera in them and are graded with a "look" actually look pretty awful and amateurish to me. Either their lack of skill as a colourist kept them from the look they wanted, or they did get the look they wanted and the look is just awful, but it's not a promising picture either way. I wonder how many of them are using colour management. If a camera shoots 10-bit LOG with a decent bitrate, then it is one CST away from being almost indistinguishable from any other camera (there's a sketch of that idea after this post). Skin tones are a challenge of course, but when well shot on capable equipment these are pretty straightforward.

There are a few principles I think are at play here:
  • What I hear from high-level colourists is that if a project is well shot on capable equipment (without a "we'll fix it in post" mindset), then you can get your colour management set up, put a look in place, and 80% of the shots just fall into place. Then the time can be spent refining the overall look, adding a specific look to certain scenes (night scenes, dream sequences, etc), fixing any problem shots, and then doing a fine-tune pass on all shots with very minor adjustments.
  • If it's not well shot to get it mostly right in-camera, then you're in all sorts of trouble for post.
  • If the client is inexperienced and doesn't know what they want, or they want something very different to how they shot the project. It's very easy to see colour grading make big changes (e.g. shooting day for night), or see the amazing VFX work done by Hollywood etc, and assume that anyone with a grading panel and a calibrated reference monitor can do anything with any footage.
  • If the client is a diva, or is somehow mentally unbalanced. Film-making is difficult enough to make almost anyone mentally unbalanced by the time they get to post-production, where they're sitting with the colourist and every mistake made at any point on the project is becoming clearly visible on the huge TV in the studio. Throwing a fit at this point is perhaps a predictable human reaction!
One colourist I heard interviewed said that when they were colour grading rap videos in the 80's, they had to tell one client, who had about 20 people in the grading suite, that the strippers, cocaine, and machine guns had to go back into the limo or they wouldn't be able to colour grade the project.

Of course, none of this is the fault of the camera. I'd even theorise that the brand of camera might be a predictor of how much the colour grading process was set up to fail - if people shot something on a Sony rather than a Canon, you might find they're more likely to be a clueless and self-entitled influencer etc. God help the colourists who are going to face a barrage of projects over the next few years shot on the FX3, where the person thinks the colourist can duplicate The Creator in post for a few thousand dollars!

Also, the stronger the look you apply in post, the more those small colour science differences get lost in the wash.

It's also worth asking: do you think the colourists on reddit are the ones who are fully booked with professional clients who have realistic expectations, or the ones dealing with the stressed masses and going online to learn and vent? My experience on the colourist forums is that the most experienced folks burn out from answering the same questions over and over again, and from arguing with people who don't want to learn or put in the work, so the people who remain are mostly those early in their journeys.

Only you can know this, because what you love will be different to what anyone else loves. But don't ask random strangers online - actually try it....
https://sonycine.com/testfootage/
https://zsyst.com/sony-4k-camera-page/sony-f55-sample-footage-downloadable-samples/
🙂
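To illustrate the "one CST away" point: conceptually a colour space transform is just decode, matrix, encode. This sketch uses toy log curves and a made-up 3x3 matrix - real curves and matrices come from the manufacturers' published specs - so it shows the shape of the maths, not real colour science:

```python
import numpy as np

# Decode camera A's log curve to scene linear, convert gamut with a 3x3
# matrix, then encode into camera B's log curve. All values below are TOY
# stand-ins for illustration only.
A_LOG_GAIN, B_LOG_GAIN = 8.0, 10.0

def a_log_decode(v):                 # toy stand-in for camera A's log curve
    return (2 ** (v * A_LOG_GAIN) - 1) / 100.0

def b_log_encode(lin):               # toy stand-in for camera B's log curve
    return np.log2(lin * 100.0 + 1) / B_LOG_GAIN

# Hypothetical gamut A -> gamut B matrix (rows sum to 1 so white is preserved).
M = np.array([[ 1.02, -0.01, -0.01],
              [-0.02,  1.05, -0.03],
              [ 0.00, -0.04,  1.04]])

def cst(img):                        # img: HxWx3 in camera A's log encoding
    lin = a_log_decode(img)
    lin = lin @ M.T                  # per-pixel 3x3 gamut conversion
    return b_log_encode(np.clip(lin, 0.0, None))

out = cst(np.random.rand(4, 4, 3))   # dummy frame
```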
  9. Absolutely. It works even for people who are genuine. If someone is learning a subject, they'll gradually explore its many aspects, but it's only once they've explored most of these that they start to connect things together and get clear on how they all relate to each other and to the desired outcomes. It requires that the person go through all the detail in order to integrate it into a single summary that could be explained to the average grandmother. You can skip various bits of the picture, but the outcome is that your understanding will be skewed towards or away from a more balanced one.

I've personally found that film-making is a very complex topic because it involves the full gamut of subjects: light, optics, sound, analog and digital electronics, digital signal processing, the human visual system, psychoacoustics, editing (which involves spatial and temporal dimensions), colour theory and the emotional response to visual stimulus, sound design, mixing and mastering and the emotional response to auditory stimulus, storytelling, logistics and planning, and, depending on how you do it, business and marketing and accounting, or recruiting and managing a multi-disciplinary team to perform a complex one-off project, etc.

It's no wonder it takes people a good while to muddle through it, that at any given time the vast majority are in the middle somewhere, and that many get lost and never make it back out into the daylight again. Add in the fragility of the ego, vested interests, social media dopamine addiction, cultural differences, limited empathy and sympathy, etc, and it's a big old mess 🙂
  10. Excellent point about the compatibility - I'm so used to MFT, where almost everything is interchangeable, that I'm not used to even thinking about these things! In terms of it being a prop, I would have thought it would be easier to grab whatever was the cheapest / most common / not-rented item from their camera rental house. I mean, if you're shooting a feature film then you're renting a bunch of stuff anyway, so renting an extra 16mm setup to use as a prop wouldn't be hard at all. They could have rented it from a production design rental house along with all the other props, but anything in such a place would be non-working, having likely been turned into a prop when it stopped working. Either way, it's very unlikely to have been an incompatible camera / lens combination, as someone would have had to glue the lens onto the body or something, and that extra effort wouldn't be needed: so many of those cameras and lenses wore out or got dropped into rivers over the years that worthless-but-matching examples would be ubiquitous.
  11. I don't think so.. all the photos I found showed the Angenieux with the writing on the outside of the barrel, not visible from the front.

Filters don't tend to have writing on them like that - and that pattern looks like lens info anyway. None of the ones on here have writing that looks similar either: https://www.oldfastglass.com/cooke-10860mm-t3

It seems to have one of those boxes that controls the lens and provides a rocker switch for zooming etc - maybe that narrows it down? Maybe it's an ENG lens rather than a cinema lens?
  12. I've heard that the 12K files are very usable in terms of performance, but it will likely depend on what mode you're shooting in. Most people aren't using the 12K at 12K - they're using it at 4K or 8K. Regardless, Resolve has an incredible array of functionality to improve performance and enable real-time editing and even colour correction on lesser hardware. This is a good overview:
  13. When you say "like they are emitting light themselves" you have absolutely nailed the main problem of the video look. I don't know if you're aware of this, so maybe you're already way ahead of the discussion here, but here's a link to something that explains it way better than I ever could (linked to timestamp):

This is why implementing subtractive saturation of some kind in post is a very effective way to reduce the "video look". I've recently been doing a lot of experimenting, and a recent experiment showed that reducing the brightness of the saturated areas, combined with reducing the saturation of the brighter areas (desaturating the highlights), really shifted the image towards a more natural look (there's a rough sketch of both adjustments in code at the end of this post). For those of us who aren't chasing a strong look, you have to be careful with how much of these you apply, because it's very easy to go too far and it starts to seem like you're applying a "look" to the footage. I'm yet to complete my experiments, but I think this might be something I'd adjust on a per-shot basis.

You'd have to see if you can adjust the Sony to be how you wanted. I'd imagine it just does a gain adjustment on the linear reading off the sensor and then puts it through the same colour profile, so maybe you can compensate for it and maybe not.

TBH it's pretty much impossible to evaluate colour science online. This is because:
  • If you look at a bunch of videos online and they all look the same, is that because the camera can only create this look? Or is this the default look and no-one knows how to change it? Or is this the current trend?
  • If you find a single video and you like it, you can't know if it was just that particular location, time, and lighting where the colours were like this, or if the person is a very skilled colourist, or (if it involved great-looking skin tones) maybe the person had great skin or great skill in applying makeup, or even if they somehow screwed up the lighting and it worked out brilliantly by accident (in an infinite group of monkeys with typewriters, one will eventually type Shakespeare - and the internet is very, very much like an infinite group of monkeys with typewriters!)
  • The camera might be being used on an incredible number of amazing-looking projects, but these people aren't posting to YT. Think about it - there could be 10,000 reality TV shows shot with whatever camera you're looking at and you'd never know, because these people aren't all over YT talking about their equipment - they're at work creating solid images and then going home to spend whatever spare time they have with family and friends. The only time we hear about what equipment is being used is if the person is a camera YouTuber, if they're an amateur taking 5 years to shoot their film, if they're a professional without enough work to keep them busy, or if the project is so high-level that the crew get interviewed and these questions get asked. There are literally millions of moderately successful TV shows, movies, and YouTube channels that look great, and there is no information available about what equipment they use.
  • Let's imagine you find a camera that is capable of great results - this doesn't tell you what kind of results YOU will get with it. Some cameras are incredibly forgiving and it's easy to get great images from them, while other cameras are absolute PIGS to work with, and only the world's best are able to really make the most of them.
For the people in the middle (i.e. not a noob and not a god), the forgiving ones will create much nicer images than the pigs, but in the hands of the world's best, the pig camera might even have more potential.

It's hard to tell, but it looks like it might even be 1/2. You have to change the amount when you change the focal length, but I suspect Riza isn't doing that, judging by how she spoke about the gear. It's also possible to add diffusion in post. Also, lifting the shadows with a softer contrast curve can have a similar effect.
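For anyone who wants to try those two adjustments outside a grading app, here's a rough numpy sketch. The saturation measure, the weighting curves, and the default amounts are all arbitrary choices of mine - real tools like Resolve's Lum vs Sat and Sat vs Lum curves give far finer control:

```python
import numpy as np

def video_look_reducer(img, darken_amount=0.25, desat_amount=0.5):
    """img: HxWx3 float RGB in 0..1. Darkens saturated areas, then
    desaturates progressively as brightness rises."""
    # 1. "Lum vs Sat": pull brightness down where saturation is high.
    sat = img.max(axis=-1) - img.min(axis=-1)            # crude saturation measure
    out = img * (1.0 - darken_amount * sat)[..., None]
    # 2. "Sat vs Lum": blend towards grey as brightness rises.
    luma = out @ np.array([0.2126, 0.7152, 0.0722])      # Rec.709 luma weights
    w = (desat_amount * np.clip(luma, 0, 1) ** 2)[..., None]  # biased to highlights
    return out * (1 - w) + luma[..., None] * w

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # dummy frame
graded = video_look_reducer(frame)
```

The squared luma term is just one way to bias the desaturation towards the highlights; a proper curve control would let you shape that falloff by hand.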
  14. I think that if you can possibly manage it, it's best to provide the simplification yourself rather than through external means. This gives you flexibility in the odd case you need it, and doesn't lock you in over time.

The basic principle I recommend is to separate R&D activities from production. Specifically, I'd recommend testing the various ways you can do something or tackle some problem, and the options for your workflow; evaluate the experience and the results, then pick one and treat it like that's your limitation. I'm about to do one of those cycles again, where I've had a bunch of new information and now need to consolidate it into a workflow that I can just use and get on with it. Similarly, I also recommend doing that with the shooting modes, as has happened here:

I find that simple answers come when you understand a topic fully. If your answers to simple questions aren't simple, then you don't understand things well enough. I call it "the simplicity on the other side of complexity", because you have to work through the complexity to get to the simplicity.

In terms of my shooting modes, I shoot 8-bit 4K IPB 709 because that's the best mode the GX85 has, and camera size is more important to me than the codec or colour space. If I could choose any mode I wanted, I'd shoot 10-bit (or 12-bit!) 3K ALL-I HLG 200Mbps h264. This is because:
  • 10-bit or 12-bit gives lots of room in post for stretching things around etc, and it just "feels nice"
  • 3K because I only edit on a 1080p timeline, but having 3K would downscale some of the compression artefacts in post rather than have all the downscaling happen in-camera (and if I zoom in post it gives a bit more extension - mind you, you can zoom to about 150% invisibly if you add appropriate levels of sharpening)
  • ALL-I because I want the editing experience to be like butter
  • HLG because I want a LOG profile that is (mostly) supported by colour management, so I can easily change exposure and WB in post photometrically without strange tints appearing - and not just a straight LOG profile, because I want the shadows and saturation to be stronger in the SOOC files so there is a stronger signal-to-compression-noise ratio
  • 200Mbps h264 because ALL-I files need about double the bitrate of IPB, and I'd prefer h264 because it's easier on the hardware at the moment, though h265 would be fine too (remembering that 3K has about half the total pixels of 4K - some back-of-envelope numbers below)

The philosophy here is basically that capturing the best content comes first, the best editing experience comes next, then the easiest colour grading experience, then the best image quality after that. This is because the quality of the final edit is impacted by these factors in that order of importance.
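Some back-of-envelope numbers behind those bullet points. The exact 3K frame size is an assumption for illustration (3K-ish modes vary between cameras), and the 100Mbps 4K IPB figure is just a typical consumer-camera reference point, not a measured one:

```python
# Pixel counts: 3K is a bit over half of UHD, so a 3K stream can spend
# roughly twice the bits per pixel at the same total bitrate.
uhd_px = 3840 * 2160                     # ~8.29 Mpx
k3_px  = 2880 * 1620                     # ~4.67 Mpx (assumed 3K frame size)
print(f"3K/4K pixel ratio: {k3_px / uhd_px:.2f}")   # ~0.56

# Bits per pixel per frame at 25p for two hypothetical modes. The gap
# comfortably covers the "ALL-I needs ~2x the bitrate of IPB" rule of thumb.
for label, mbps, px in [("4K IPB 100Mbps", 100, uhd_px),
                        ("3K ALL-I 200Mbps", 200, k3_px)]:
    bpp = mbps * 1e6 / (25 * px)
    print(f"{label}: {bpp:.1f} bits/pixel/frame")   # ~0.5 vs ~1.7
```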