
Leaderboard

Popular Content

Showing content with the highest reputation on 04/25/2024 in Posts

  1. I think that if you can possibly manage it, it's best to provide the simplification yourself rather than through external means. This gives you flexibility in the odd case where you need it, and doesn't lock you in over time. The basic principle I recommend is to separate R&D activities from production. Specifically, I would recommend testing the various ways you can do something or tackle some problem, and the options for your workflow, evaluating the experience and results, then picking one and treating it like that's your limitation. I'm about to do one of those cycles again, where I've had a bunch of new information and now need to consolidate it into a workflow that I can just use and get on with it.

     Similarly, I also recommend doing that with the shooting modes, as has happened here: I find that simple answers come when you understand a topic fully. If your answers to simple questions aren't simple answers then you don't understand things well enough. I call it "the simplicity on the other side of complexity" because you have to work through the complexity to get to the simplicity.

     In terms of my shooting modes, I shoot 8-bit 4K IPB 709 because that's the best mode the GX85 has, and camera size is more important to me than the codec or colour space. If I could choose any mode I wanted, I'd be shooting 10-bit (or 12-bit!) 3K ALL-I HLG 200Mbps h264, because:

     - 10-bit or 12-bit gives lots of room in post for stretching things around etc, and it just "feels nice"
     - 3K because I only edit on a 1080p timeline, but having 3K would downscale some of the compression artefacts in post rather than have all the downscaling happening in-camera (and if I zoom in post it gives a bit more extension - mind you, you can zoom to about 150% invisibly if you add appropriate levels of sharpening)
     - ALL-I because I want the editing experience to be like butter
     - HLG because I want a LOG profile that is (mostly) supported by colour management, so I can easily change exposure and WB in post photometrically without strange tints appearing, and not just a straight LOG profile because I want the shadows and saturation to be stronger in the SOOC files so there is a stronger signal-to-compression-noise ratio
     - 200Mbps h264 because ALL-I files need about double the bitrate compared to IPB, and I'd prefer h264 because it's easier on the hardware at the moment, but h265 would be fine too (remembering that 3K has about half the total pixels of 4K - a rough pixel-count check is sketched below this post)

     The philosophy here is basically that capturing the best content comes first, the best editing experience comes next, then the easiest colour grading experience, then the best image quality after that. This is because the quality of the final edit is impacted by these factors in that order of importance.
    2 points
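
     A rough sanity check of the pixel and bitrate arithmetic above, in Python - the 2880x1620 "3K" frame size and the halved IPB bitrate are illustrative assumptions, not modes named in the post:

         # Pixel-count comparison behind "3K has about half the total pixels of 4K"
         uhd_pixels = 3840 * 2160      # UHD 4K frame: 8,294,400 px
         k3_pixels = 2880 * 1620       # assumed 16:9 "3K" frame: 4,665,600 px
         print(f"3K / 4K pixel ratio: {k3_pixels / uhd_pixels:.2f}")   # ~0.56

         # "ALL-I files need about double the bitrate compared to IPB", so a
         # 200 Mbps ALL-I mode is roughly comparable to a ~100 Mbps IPB mode.
         alli_bitrate_mbps = 200
         print(f"Comparable IPB bitrate: {alli_bitrate_mbps / 2:.0f} Mbps")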
  2. It's definitely an Angenieux lens. Here are more shots and there's a Tiffen lens shade with writing on it. Only Angenieux has zooms machined like that. I found some better shots: I'm fairly sure it's a "version" of the Angenieux 12-120mm, but I cannot guarantee it.
    1 point
  3. If it was a real working 16mm film camera, I don't think it would be an ENG (Electronic News Gathering) lens, as they are designed for professional portable video cameras (which in the late 1970s would have been triple vacuum tube image sensor cameras using a dichroic colour splitting prism, thus having a long flange-to-sensor optical path). But of course in the movie it's basically a prop, so doesn't have to be a working camera.
    1 point
  4. Tamron is also putting out an RF-S (APS-C) zoom lens (11-20mm F2.8). Makes sense, as Canon have barely developed any RF-S lenses (not a single prime or fixed-aperture lens), so this will surely help sell crop bodies. So yeah, while this is great news for APS-C Canon owners, I really don't see it as a solid indicator that Canon will open up the FF RF mount to third-party lenses, although one can hope!
    1 point
  5. 1 point
  6. Well, that is pretty decent. In that case, I might look even harder at the system next year… The R3 is, for my needs, the best body currently on the market. A Nikon Z6iii with an additional battery grip is probably going to beat it… maybe… and for me, as someone now 50% invested in Nikon, that's almost certainly where I will be going. But at the end of every season I review my needs, and Canon could be an option, especially now that Sigma have joined the party, but they would need to have brought out a pretty extensive FF line by next spring. I think Canon really do need to lighten up on their lens stance though, or it will bite them in the arse. If it is not already nibbling…
    1 point
  7. When you say "like they are emitting light themselves" you have absolutely nailed the main problem of the video look. I don't know if you are aware of this, so maybe you're already way ahead of the discussion here, but here's a link to something that explains it way better than I ever could (linked to timestamp): This is why implementing subtractive saturation of some kind in post is a very effective way to reduce the "video look".

     I have recently been doing a lot of experimenting, and a recent experiment showed that reducing the brightness of the saturated areas, combined with reducing the saturation of the higher brightness areas (desaturating the highlights), really shifted the image towards a more natural look (a rough sketch of these two adjustments follows below this post). For those of us who aren't chasing a strong look, you have to be careful with how much of these you apply, because it's very easy to go too far and it starts to seem like you're applying a "look" to the footage. I'm yet to complete my experiments, but I think this might be something I would adjust on a per-shot basis.

     You'd have to see if you can adjust the Sony to be how you want. I'd imagine it would just do a gain adjustment on the linear reading off the sensor and then put it through the same colour profile, so maybe you can compensate for it and maybe not.

     TBH it's pretty much impossible to evaluate colour science online. This is because:

     - If you look at a bunch of videos online and they all look the same, is this because the camera can only create this look? Or is this the default look and no-one knows how to change it? Or is this the current trend?
     - If you find a single video and you like it, you can't know if it was just that particular location, time and lighting conditions where the colours were like this, or if the person is a very skilled colourist, or - if it involved great looking skin tones - maybe the person had great skin or great skill in applying makeup, or even if they somehow screwed up the lighting and it worked out brilliantly just by accident (in an infinite group of monkeys with typewriters, one will eventually type Shakespeare, and the internet is very, very much like an infinite group of monkeys with typewriters!).
     - The camera might be being used on an incredible number of amazing looking projects, but these people aren't posting to YT. Think about it - there could be 10,000 reality TV shows shot with whatever camera you're looking at and you'd never know they were shot on that camera, because these people aren't all over YT talking about their equipment - they're at work creating solid images and then going home to spend whatever spare time they have with family and friends. The only time we hear about what equipment is being used is if the person is a camera YouTuber, if they're an amateur who is taking 5 years to shoot their film, if they're a professional who doesn't have enough work on to keep them busy, or if the project is so high-level that the crew get interviewed and these questions get asked. There are literally millions of moderately successful TV shows, movies and YouTube channels that look great, and there is no information available about what equipment they use.
     - Let's imagine you find a camera that is capable of great results - this doesn't tell you what kind of results YOU will get with it. Some cameras are just incredibly forgiving and easy to get great images from, and there are other cameras that are absolute PIGS to work with, and only the world's best are able to really make the most of them. For the people in the middle (i.e. not a noob and not a god) the forgiving ones will create much nicer images than the pigs, but in the hands of the world's best, the pig camera might even have more potential.

     It's hard to tell, but it looks like it might even be 1/2. You have to change the amount when you change the focal length, but I suspect Riza isn't doing that because of how she spoke about the gear. It's also possible to add diffusion in post. Also, lifting the shadows with a softer contrast curve can have a similar effect.
    1 point
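
     A minimal sketch of the two adjustments described above (darken saturated areas, desaturate highlights), in Python/NumPy - the weighting curves and strengths are invented for illustration and are not the poster's actual grade:

         import numpy as np

         def tame_video_look(rgb, sat_darken=0.15, highlight_desat=0.3):
             """rgb: float array of shape (H, W, 3), values in [0, 1]."""
             # Rough per-pixel saturation (spread between max and min channels).
             maxc = rgb.max(axis=-1)
             minc = rgb.min(axis=-1)
             sat = (maxc - minc) / np.maximum(maxc, 1e-6)

             # 1) Reduce the brightness of the saturated areas.
             out = rgb * (1.0 - sat_darken * sat)[..., None]

             # 2) Desaturate the highlights: blend towards grey where luma is high.
             luma = out @ np.array([0.2126, 0.7152, 0.0722])   # Rec.709 weights
             grey = luma[..., None].repeat(3, axis=-1)
             t = (highlight_desat * luma ** 2)[..., None]      # stronger near white
             out = out * (1.0 - t) + grey * t
             return np.clip(out, 0.0, 1.0)

     As the post says, both strengths are easy to overdo and would likely need tweaking per shot.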
  8. He’s the only man who doesn’t hate the sound of his own voice 😉
    1 point
  9. Actually, what am I blathering about?! The only time I need the EIS is for tracking the couple, outdoors, once, maybe twice. Doh. Absolute non-issue. I'll simply put it on a function button! I'd normally be tracking at '42mm' and it will be '58mm'. I'll just increase the distance between myself and the subject by a couple of feet et voilà. Move on, nothing to see here 🤪
    1 point
  10. It's only "Standard" with a 1.09 crop or "High" with a 1.43 crop. There's no adjustment for the behavior, thank god. There are already so many things to adjust on this camera that trying to figure out the perfect stabilization to use on a given shot would be like trying to figure out which picture profile setting on a Sony camera works best - paralysis by analysis. I'm trying to simplify these days, as it shouldn't be that hard... I might even go back to iMovie, seriously!
    1 point
  11. Al Dolega

    Shure MoveMic

    PicoMic is still smaller and lighter, and the newer Pro version can use a lav mic. The receiver is bigger, but it's also the storage/charging dock for the transmitters, which is handy when you're going from mic'd to not mic'd, changing speakers often, etc. They just need internal recording and they'd be perfect IMO.
    1 point