Everything posted by kye

  1. Yeah, I assume that too, and I've always been conscious of it. The only potentially impartial reviews are when the person buys the product anonymously like any member of the public would, gets it at the same time it ships to everyone else, then puts it through its paces. The other issue with "reviews" that aren't long-term reviews is that the person hasn't had the product long enough to really test it. People like Gerald might know what shortcomings to look for and actively go looking for them, but no-one can test reliability in less than a week. I find the same issue with product reviews on Amazon etc - they are essentially first-impression reviews.
  2. If you're fighting the sun then yeah, serious horsepower definitely comes into the equation!
  3. In my casual YT DOP channel viewing I rarely see anyone with anything larger than a 600W, even for blasting through windows to simulate daylight. I guess a lot of people have access to cameras that have good ISO performance above base and also shoot with fast lenses too, and aren't trying to light Hollywood-sized sets, so maybe 600W is enough? Mostly the discussion seems to be about placement and modifiers, not the overall power levels. But, this is just what I've seen, maybe the algorithm is hiding things from me 🙂
  4. That all makes sense, and like most things, it might be a while before all the tech is handled correctly so it just works and you don't need to troubleshoot the process at each stage. I think that 180 VR video is likely to be a winning format. In 360 video people don't really know where to look, and some of the experiments I've seen with it were really hit and miss if there was a narrative arc that you were meant to be experiencing. Also, all these discussions were had when surround sound first came out and people didn't know what to do with it. People were genuinely talking about mixing live concerts in such a way that put the listener in the middle of the band on the stage, or floating above the band looking down on them like you were in a box seat, only it was a live outdoor concert and there was no seating, etc. Eventually people settled down and realised that for the most part people don't really want those things, but they do want a "if I had been there" sort of experience. So surround audio is mixed like you've got a great seat in the concert, and if there's picture too then the audio is mostly oriented around the point-of-view of that. So, if it's a choice between supplying 180 video and a stereo mix, or 360 video and full ambisonic mix, just to be able to look behind you at nothing of any significance whatsoever, I think many will opt for the first one! Having a wider spread seems sensible - if someone else hasn't done a bunch of tests then it would be relatively easy to do once you have the equipment.
  5. After posting the previous post I went back and compared the looks a few times and realised I was a bit harsh on the ARRI LUT, considering that it was very flattering on my battered skin tone and basically didn't screw up the strong colours too much, whereas the film look is much stronger without being that much more flattering. Inspired by the ARRI LUT, I created this custom grade from scratch. SOOC (for reference): New Custom Look: ARRI LUT (for reference): I'm actually really happy with that look - I went a bit further in evening out the skin tones and brightening them up a bit, and it didn't seem to come at the expense of anything else. I think I could easily build a look around this, and will experiment further.
  6. I'm still lost down this rabbit hole, but these are an interesting reference. This is what happens if you put the GX85 through a "look". I put the GX85 test image through a bunch of output LUTs to see which (if any) I liked the flavour of. In order to compare them equally, I adjusted after the LUT to match the black and white points, exposure, and contrast. This way we're not just comparing the different contrast curves, but the other aspects of the LUTs like colour rendering etc. The node structure was this:
  • slightly lower gain to bring the GX85 image into range (it records super-whites)
  • CST from 709/2.4 to DaVinci Intermediate (my preferred working colour space)
  • (my grade would go here, but for this test no adjustments were made)
  • CST to whatever colour space the LUT expects
  • the LUT (these all convert to 709/2.4)
  • a curve to adjust the overall levels from the LUT to roughly approximate the GX85 image
The round-trip from 709/2.4 to DWG to 709/2.4 is almost transparent, if you compensate for the gamut and saturation compression in the conversion at the end, so I didn't bother to grab it. Results: The famous ARRI K1S1 LUT (the ARRI factory LUT): One of the 5000 BMD LUTs that come with Resolve, which I tried just for fun: The Kodak 2383 PFE (Print Film Emulation) LUT. The D55 one seemed the closest match to the WB of the image for some reason, but everyone always uses the D65 ones, so I've included both here for comparison. The D65 one: The Kodak 2393 PFE. It doesn't come with Resolve but it's free online from a bunch of places. I like it because it doesn't tint the shadows as blue, so the image isn't as muddy / drab. The FujiFilm 3513 PFE: I find the ARRI LUT a bit weak - it helps but not as much as I'd like. The comparison above is flattering to the LUT because it has a bit more contrast compared to the SOOC so it looks a bit better. The skintones are a little more flattering on it though, which might be enough if you want a more neutral look. 
All the PFE looks are very strong, and aren't really meant to be used on their own. The film manufacturers designed the colour science to look good when used with a negative film like the Kodak 250D or 500T stocks, so it's "wrong" to use one unless you're grading a film scan - though people use them like this anyway 🙂 Some time ago I purchased a powergrade from Juan Melara that emulated both the 250D and the 2393 PFE, which looks like this: To me it looks much more normal than just the 2393 PFE on its own, but it's definitely a stronger look. The powergrade is split into nodes that emulate the 250D separately to the 2393, and the 2393 nodes are almost indistinguishable from the LUT, so I'd imagine this is probably a good emulation. Anyway, lots of flexibility in these 8-bit files!
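As an aside on what these files actually are: the LUTs discussed above are ordinary 3D lookup tables (.cube files). Here's a minimal sketch of the lookup a grading app performs when it applies one - nearest-neighbour on an identity LUT for brevity (real software uses trilinear or tetrahedral interpolation between grid points, and the function name here is made up):

```python
import numpy as np

def apply_lut_nearest(pixel, lut, size):
    """Look up one RGB pixel (values 0-1) in a 3D LUT of shape
    (size, size, size, 3). Nearest-neighbour for brevity; grading
    software interpolates between the grid points instead."""
    r, g, b = pixel
    # scale 0-1 channel values to grid indices
    i = int(round(r * (size - 1)))
    j = int(round(g * (size - 1)))
    k = int(round(b * (size - 1)))
    return lut[i, j, k]

# Identity LUT for demonstration: output equals input at every grid point.
size = 33  # a common .cube grid size
grid = np.linspace(0.0, 1.0, size)
lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

out = apply_lut_nearest((0.5, 0.25, 1.0), lut, size)
```

A "look" LUT is just this table with a non-identity mapping baked in, which is why it can only be as flexible as the 33x33x33 grid allows.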
  7. I'd agree with you, except that about half the movies in the box office aren't much better... add in all the religious propaganda movies that thrive in more religious areas of the world and sadly, I think there's a huge market 😞
  8. One question that stood out to me immediately was how much you want the audio to move around when the person in the VR turns their head. For example, if you record stereo and the VR person turns their head the visuals will all move but the audio won't change at all - I would imagine that to be unnerving and potentially ruin the immersion wouldn't it? Does VR have a standard where you can record to that and then the headset will decode it to match where the viewer looks?
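On the standard the question above asks about: VR video commonly carries first-order ambisonics (B-format, channels W, X, Y, Z), and the player rotates the sound field to match head orientation before decoding to the ears. A head turn (yaw) leaves W (omni) and Z (height) unchanged and only mixes X and Y, like a 2D rotation. A minimal sketch - the function name and the sign convention are my own assumptions:

```python
import math

def rotate_bformat_yaw(w, x, y, z, yaw_rad):
    """Rotate one first-order ambisonic (B-format) sample about the
    vertical axis. W (omni) and Z (up-down) are unaffected by yaw;
    X (front-back) and Y (left-right) mix like a 2D rotation.
    Assumed convention: positive yaw rotates the sound field
    counter-clockwise seen from above."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    x_r = c * x - s * y
    y_r = s * x + c * y
    return w, x_r, y_r, z
```

For example, a source directly in front (all energy in X) rotated 90 degrees ends up entirely in Y - which is exactly the "audio follows the head" behaviour the post is asking about.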
  9. Maybe Canon are thinking that they'll keep their exclusivity on the super-sharp super-expensive lenses, but let the third-parties develop lower cost less technically perfect lenses? It would make sense and make the system a lot more accessible and attract a lot of new customers that wouldn't want to spend top dollar on pristine lenses. Their success on the EF line and how ubiquitous it was must have been a critical factor in their earnings over the decades, so making RF a new default standard is very much in their interests. You might be right about the split between FF and APS-C lenses though - that's still a strategy in a similar direction.
  10. This might be a blessing-in-disguise, as a lot of cameras actually lose the last few seconds of footage before you hit stop. So it might be compensating for that and making sure no clips are cut short.
  11. It's worth pointing out that the thermals might be the dominant factor here, considering that laptops will throttle down on their performance in order to manage overheating, so a few extra fans in the laptop can make more difference than which model of CPU / GPU you buy!
  12. Ha! Look at Tiffen putting all their info on there for a lens shade!! Talk about padding your part. How funny! Could the lens be a 15-150mm T3.1?
  13. I understand that a person can look at a larger quantity of footage and notice similarities and themes, but there are still a great number of unaccounted-for variables that can always bite you in the ass if you were to actually get that camera. The general look that cameras have online is likely to be the default look, partly because most people don't know the first thing about colour grading and mostly because the people who are posting videos and specifying the model number of the camera are likely in the shallow end of the skills pool, so to speak. The exception is cinematographers doing camera tests, but these have their own issues. The challenge comes in when you try and change the image in post. Try to add a bit more contrast and you might find that the image doesn't keep the things you liked about the look. In fact, the nicer the image looks SOOC or with the default LUT on it, the more fragile the image might be, because the more pushed it will be. The most flexible images are the most neutral, and our brain doesn't like neutral images - it wants ones with the right herbs and spices already added. There really is no substitute for actually shooting with the camera the way that you shoot, in the situations you shoot in, and then grading it the way you grade, trying to get the look you want, with your level of skill. TBH, most of the videos I see that have the name of the camera in them, that are graded with a "look", actually look pretty awful and amateurish to me. Either this is their lack of skill as a colourist, not being able to get the look they wanted, or they did get the look they wanted and the look is just awful - but it's not a promising picture either way. I wonder how many of them are using colour management. If a camera shoots 10-bit LOG with a decent bitrate then it is one CST away from being almost indistinguishable from any other camera. 
Skin tones are a challenge of course, but when well-shot on capable equipment these are pretty straight-forward. There are a few principles I think are at play here:
  • What I hear from high-level colourists is that if a project is well shot on capable equipment (without a "we'll fix it in post" mindset) then you can get your colour management set up, put a look in place, and 80% of the shots just fall into place. Then the time can be spent refining the overall look, adding a specific look to certain scenes (night scenes, dream sequences, etc), fixing any problem shots, and then you'd do a fine-tune pass on all shots with very minor adjustments. If it's not well shot to get it mostly right in-camera then you're in all sorts of trouble for post.
  • It's a struggle if the client is inexperienced and doesn't know what they want, or wants something that is very different to how they shot the project. It's very easy to see colour grading make big changes (e.g. shooting day for night) or see the amazing VFX work done by Hollywood etc, and assume that anyone with a grading panel and calibrated reference monitor can do anything with any footage.
  • It's also a struggle if the client is a diva, or is somehow mentally unbalanced. Film-making is difficult enough to make almost anyone mentally unbalanced by the time they get to post-production and they're sitting with the colourist and every mistake made at any point on the project is becoming clearly visible on the huge TV in their studio. Throwing a fit at this point is perhaps a predictable human reaction! One colourist I heard interviewed said that when they were colour grading rap videos in the 80's they had to tell one client who had about 20 people in the colour grading suite that the strippers, cocaine, and machine guns had to go back into the limo otherwise they wouldn't be able to colour grade the project.
Of course, none of this is the fault of the camera. 
I'd even theorise that the brand of camera might be a predictor of how much the colour grading process was set up to fail - if people shot something on a Sony rather than a Canon you might find they're more likely to be a clueless and self-entitled influencer etc. God help the colourists who are going to face a barrage of projects over the next few years shot on the FX3, where the person thinks the colourist can duplicate The Creator in post for a few thousand dollars! Also, the stronger the look you apply in post, the more those small colour science differences get lost in the wash. It's also worth asking: do you think the colourists on reddit are the ones who are fully-booked with more professional clients who have realistic expectations, or the ones out there dealing with the stressed masses and going online to learn and vent? My experience on the colourist forums is that the most experienced folks burn out from answering the same questions over and over again, and arguing with people who don't want to learn or put in the work, so the people who are there are mostly those early in their journeys. Only you can know this, because what you love will be different to what anyone else loves. But don't ask random strangers online - actually try it: https://sonycine.com/testfootage/ https://zsyst.com/sony-4k-camera-page/sony-f55-sample-footage-downloadable-samples/ 🙂
  14. Absolutely. It works even for people who are genuine as well. If someone is learning the subject then they'll be gradually exploring all the many aspects of it, but it's only once they've explored many / most of these things that they'll be starting to connect things together and getting clear on how they all relate to each other and how they all relate to the desired outcomes etc. It requires that the person go through all the detail in order to integrate it into a single summary that can be explained to the average grandmother. You can skip various bits of the picture, but the outcome of that is that your understanding will potentially be skewed in a certain way towards or away from a more balanced understanding. I've personally found that film-making is a very complex topic because it involves the full gamut of topics... light, optics, sound, analog and digital electronics, digital signal processing, the human visual system, psychoacoustics, editing which involves spatial and temporal dimensions, colour theory and emotional response to visual stimulus, sound design and mixing and mastering and emotional response to auditory stimulus, storytelling, logistics and planning, and depending on how you do it, it might include business and marketing and accounting etc, or recruiting and managing a multi-disciplinary team to perform a complex one-off project, etc. It's no wonder that it takes people a good while to muddle through it and that at any given time the vast majority are in the middle somewhere and that many get lost and never make it back out into the daylight again. Add into that the fragility of the ego, vested interests, social media dopamine addiction, cultural differences, limited empathy and sympathy, etc and it's a big old mess 🙂
  15. Excellent point about the compatibility - I'm so used to MFT and almost everything being interchangeable that I'm not used to even thinking about these things! In terms of it being a prop, I would have thought it would have been easier to grab whatever was the cheapest / most common / not-rented item from their camera rental house. I mean, if you're shooting a feature film then you're renting a bunch of stuff anyway, so renting an extra 16mm setup to use as a prop wouldn't be hard at all. They could have rented it from a production design rental house along with all the other props etc, but anything in that place would be non-working and likely turned into a prop when it stopped working. In this sense, it's very unlikely to have been a camera / lens combination that wasn't compatible, as someone would have had to glue the lens onto the body or something, which takes extra effort that wouldn't be needed - so many of those cameras and lenses wore out or got dropped into a river etc that working ones would be worthless and ubiquitous.
  16. I don't think so... all the photos I found show that the Angenieux has the writing on the outside, not visible from the front. Filters don't tend to have writing on them like that - that pattern looks like lens info anyway. None of the ones on here have writing that looks similar either: https://www.oldfastglass.com/cooke-10860mm-t3 It seems to have one of those boxes that controls the lens and provides a rocker switch for zooming etc - maybe that narrows it down? Maybe it's an ENG lens rather than a cinema lens?
  17. I've heard that the 12K files are very usable in terms of performance, but it will likely depend on what mode you're shooting in. Most people aren't using the 12K at 12K - they're using it at 4K or 8K. Regardless, Resolve has an incredible array of functionality to improve performance and enable real-time editing and even colour correction on lesser hardware. This is a good overview:
  18. When you say "like they are emitting light themselves" you have absolutely nailed the main problem of the video look. I don't know if you are aware of this, so maybe you're already way ahead of the discussion here, but here's a link to something that explains it way better than I ever could (linked to timestamp): This is why implementing subtractive saturation of some kind in post is a very effective way to reduce the "video look". I have recently been doing a lot of experimenting, and a recent experiment showed that reducing the brightness of the saturated areas, combined with reducing the saturation of the higher brightness areas (desaturating the highlights), really shifted the image towards a more natural look. For those of us that aren't chasing a strong look, you have to be careful with how much of these you apply, because it's very easy to go too far and it starts to seem like you're applying a "look" to the footage. I'm yet to complete my experiments, but I think this might be something I would adjust on a per-shot basis. You'd have to see if you can adjust the Sony to be how you want it - I'd imagine it would just do a gain adjustment on the linear reading off the sensor and then put it through the same colour profile, so maybe you can compensate for it and maybe not. TBH it's pretty much impossible to evaluate colour science online. This is because:
  • If you look at a bunch of videos online and they all look the same, is this because the camera can only create this look? Or is it the default look and no-one knows how to change it? Or is it just the current trend?
  • If you find a single video and you like it, you can't know if it was just that particular location, time, and lighting conditions where the colours were like this, or if the person is a very skilled colourist, or - if it involved great looking skin-tones - maybe the person had great skin or great skill in applying makeup, or they somehow screwed up the lighting and it actually worked out brilliantly just by accident (in an infinite group of monkeys with typewriters one will eventually type Shakespeare, and the internet is very very much like an infinite group of monkeys with typewriters!).
  • The camera might be being used on an incredible number of amazing looking projects, but these people aren't posting to YT. Think about it - there could be 10,000 reality TV shows shot with whatever camera you're looking at and you'd never know, because these people aren't all over YT talking about their equipment - they're at work creating solid images and then going home to spend whatever spare time they have with family and friends. The only time we hear about what equipment is being used is if the person is a camera YouTuber, if they're an amateur who is taking 5 years to shoot their film, if they're a professional who doesn't have enough work on to keep them busy, or if the project is so high-level that the crew get interviewed and these questions get asked. There are literally millions of moderately successful TV shows, movies, and YouTube channels that look great, and there is no information available about what equipment they use.
  • Let's imagine that you find a camera that is capable of great results - this doesn't tell you what kind of results YOU will get with it. Some cameras are just incredibly forgiving and easy to get great images from, and there are other cameras that are absolute PIGS to work with, and only the world's best are able to really make the most of them. For the people in the middle (i.e. not a noob and not a god) the forgiving ones will create much nicer images than the pigs, but in the hands of the world's best, the pig camera might even have more potential.
It's hard to tell, but it looks like it might even be a 1/2. You have to change the amount when you change the focal length, but I suspect Riza isn't doing that because of how she spoke about the gear. It's also possible to add diffusion in post. Also, lifting the shadows with a softer contrast curve can have a similar effect.
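The two adjustments described in this post (darkening saturated areas, desaturating highlights) are easy to prototype outside a grading app. A minimal sketch on RGB values in 0-1 - the function name, the crude max-minus-min saturation estimate, and both default strengths are my own illustrative assumptions, not anyone's published method:

```python
import numpy as np

def reduce_video_look(rgb, sat_darken=0.15, high_desat=0.3):
    """Rough sketch of two "subtractive-saturation-style" moves:
    1) darken strongly saturated pixels a little;
    2) pull the brightest pixels toward their luma (desaturate highlights).
    Both strengths are illustrative, not calibrated values."""
    rgb = np.asarray(rgb, dtype=float)
    weights = np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma weights
    luma = (rgb * weights).sum(axis=-1, keepdims=True)
    # crude saturation estimate: spread between max and min channel
    sat = rgb.max(axis=-1, keepdims=True) - rgb.min(axis=-1, keepdims=True)
    # 1) darken saturated areas in proportion to their saturation
    out = rgb * (1.0 - sat_darken * sat)
    # 2) blend bright pixels toward their own luma
    t = high_desat * luma
    out = out * (1.0 - t) + luma * t
    return np.clip(out, 0.0, 1.0)
```

Neutral greys pass through untouched (zero saturation, and blending toward luma changes nothing), while a pure saturated primary comes out darker and slightly less saturated - the direction of the shift the post describes.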
  19. I think that if you can possibly manage it, it's best to provide the simplification yourself rather than through external means. This gives you flexibility in the odd example where you need it, and doesn't lock you in over time. The basic principle I recommend is to separate R&D activities from production. Specifically, I would recommend doing a test on the various ways you can do something, or tackle some problem, and the options for your workflow, evaluating the experience and results, then picking one and treating it like that's your limitation. I'm about to do one of those cycles again, where I've had a bunch of new information and now need to consolidate it into a workflow that I can just use and get on with it. Similarly, I also recommend doing that with the shooting modes, as has happened here: I find that simple answers come when you understand a topic fully. If your answers to simple questions aren't simple answers then you don't understand things well enough. I call it "the simplicity on the other side of complexity" because you have to work through the complexity to get to the simplicity. In terms of my shooting modes I shoot 8-bit 4K IPB 709 because that's the best mode the GX85 has, and camera size is more important to me than the codec or colour space. If I could choose any mode I wanted I'd be shooting 10-bit (or 12-bit!) 3K ALL-I HLG 200Mbps h264. This is because:
  • 10-bit or 12-bit gives lots of room in post for stretching things around etc and it just "feels nice"
  • 3K because I only edit on a 1080p timeline, but having 3K would downscale some of the compression artefacts in post rather than have all the downscaling happening in-camera (and if I zoom in post it gives a bit more extension - mind you, you can zoom to about 150% invisibly if you add appropriate levels of sharpening)
  • ALL-I because I want the editing experience to be like butter
  • HLG because I want a LOG profile that is (mostly) supported by colour management, so I can easily change exposure and WB in post photometrically without strange tints appearing, and not just a straight LOG profile because I want the shadows and saturation to be stronger in the SOOC files so there is a stronger signal-to-compression-noise ratio
  • 200Mbps h264 because ALL-I files need about double the bitrate compared to IPB, and I'd prefer h264 because it's easier on the hardware at the moment, but h265 would be fine too (remembering that 3K has about half the total pixels of 4K)
The philosophy here is basically that capturing the best content comes first, and the best editing experience comes next, then followed by the easiest colour grading experience, then the best image quality after that. This is because the quality of the final edit is impacted by these factors in that order of importance.
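The pixel-count and bitrate arithmetic behind that last bullet checks out, assuming "3K" means 2880x1620 (16:9) and taking the GX85's 100Mbps 4K IPB mode as the reference point:

```python
# "3K" is assumed here to mean 2880x1620 (16:9); 4K is 3840x2160 UHD.
pixels_4k = 3840 * 2160   # pixels per frame
pixels_3k = 2880 * 1620   # pixels per frame
ratio = pixels_3k / pixels_4k  # ~0.56, i.e. roughly half the pixels

# Rule of thumb from the post: ALL-I wants ~2x the bitrate of IPB for
# similar quality. Scaling from a 100Mbps 4K IPB reference:
all_i_same_size = 100 * 2        # ~200 Mbps ALL-I at the same frame size
all_i_3k = 100 * 2 * ratio       # lower still if you also drop to 3K
```

So 200Mbps is a conservative figure: it covers the 2x ALL-I penalty at full 4K, and the smaller 3K frame only adds headroom on top of that.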
  20. In case you haven't seen it, Riza did a video on how she shoots. TLDR; she has the most basic standard equipment, but creates the look in production design and in post.
  21. Can you dial in the amount of it? It might be good if used at a lower strength perhaps. One look that I quite like is when a GoPro or action camera is mounted to an off-road vehicle but has the stabilisation on, and the image ends up being neither locked to the vehicle nor locked to the scene but somewhere in-between. I like the look because it is sort-of like how you experience very bumpy rides - you stabilise with your body and head, but not perfectly. The fact it's moving and responding to the bumps against the vehicle also makes it look like there's a good human camera op, which makes it look less artificial than if it was locked onto something in the shot.
  22. My impression was that people like the Alexa image for everything. The DR only matters if you're filming something that is high DR, and even then if you're watching it on a 709 display then the Alexa images have the same DR as every other camera. I also find that Alexas look green in camera tests, but probably the main reason for loving the Alexa look is that when used on big productions or by people that know how to use it, the images look great. However, the Alexa is just a very high quality RAW camera - the files that come out of it are as neutral as you can imagine. It's the paradox of modern camera discussions. The best looking images come from the most expensive cameras because the people who know that production design and colour grading is what makes great looking images are going to all that trouble anyway and so may as well rent an Alexa (or RED). Would the production have looked as good if they shot it on a P4K or S1H? I'd say maybe 95% as good - maybe 100%. But, because the people using the P4K or S1H aren't using them in situations where they've put as much effort into production design or colour grading, those images don't look as good. Not really my tastes. Riza does a lot of work to light herself really well, but the diffusion and colour grading aren't to my tastes. The image is too diffused and too cream and pastel green/brown for me. Most of that is in her production design, considering that the blue and yellow in this shot looks relatively normal: It's the "aesthetic" look that is trendy right now on YT, but Riza takes it to a whole new level. In a way it's similar to this palette: But compare the two shots above and notice that the bottom one is a lot crisper - Riza uses a huge amount of diffusion so everything looks hazy. Maybe it's just my associations.. 
when I grew up the interiors that were the right age to be old and shitty were the Mission Brown ones, and compared to the colour schemes of the time it just looked drab and dull, which, combined with the fact it was old and falling apart, really didn't endear it to me! I suspect that for the cultural references of the people making this content right now, this probably balances out the previous aesthetic choices in a way that makes them feel better about themselves and about life etc. In times of change people get pretty stressed, and going for a soft brown and green palette is probably unconsciously evoking nature and naturalness in some way - which makes sense if you think about the existential threats of climate change and AI that we are currently facing. People of this age are experiencing climate anxiety in a big way, so it's a real thing in their world.
  23. Ha - what a workflow! Considering that normally you'd want to grade in between the two emulations (negative and print films) that's not exactly a good setup!
  24. Here's a few images from the GF3 - not exactly the best video camera in the world, but even it has some nice colour. These are all shot with the Mir 37mm f2.8 with speed booster and wide open, and all shots are SOOC: Obviously these are very challenging conditions with mixed colour temps and low light, so the ISO probably wasn't at its native setting either, but not bad. These all look a bit flat to me, even from such an old camera with a low DR compared to now, but literally my first thought is to increase contrast and then evaluate the saturation. I've analysed GX85 colour before in this thread: https://www.eoshd.com/comments/topic/59121-gx85-alexa-conversion-and-colour-profile-investigations/ The default profiles are like most modern profiles, and bear a resemblance to some of the best colour in the business... GX85 Natural Profile: Alexa: To get a sense of how similar these are: if they were technically correct, those lines would go straight to the middle of the reference boxes on the overlay. Obviously they're way off, but in a relatively similar way. Obviously this is a dramatic simplification of the whole colour science, but it gives a sense of it. My experience of the GH5 is that it is a real work-horse and that everything has been thought through so that it quietly does the job and stays out of your way. The image was practically indestructible, even if you tried. I've posted these previously, but here's what happens if you try to break the image... Here's the flattest image I could find - SOOC HLG: With the most extreme amount of contrast you can make with the curves tool (literally a vertical line): I think that was the 150Mbps IPB codec too - the 400Mbps ALL-I might be better again. 
When I had the XC10 and was shooting 8-bit C-Log I was trying and failing to get good colour from it and trying to learn colour grading and colour management etc, and I was watching all these colour grading tutorials of people grading RAW Alexa and RED and BM footage and there was this smoothness and elegance in how it all worked - they adjusted this control and that control and the footage just glided around like it had infinite subtlety and richness in the files, but the XC10 footage was just the opposite. Then I bought the GH5 and the files felt exactly how all those colour grading tutorials looked - the files were just like velvet. Of course, it's not quite as good as the high-end cine cameras, but the footage is seriously malleable and if you know what you're doing then you can really extract great images from it. All modern cameras are like hypercars and most film-making uses only a tiny fraction of their potential.