Everything posted by kye

  1. I'm guessing you have found people that you can socialise with during the week? I sometimes have breaks between contracts and found that basically everyone I knew was busy working Monday to Friday before 6pm, and tired after that. I always suspected there was a parallel world of people who worked evenings, nights, and weekends, and they all just hung out with each other while the business-hours folks were all at work.
  2. I saw that link, but (and I know this might make me sound a little crazy here....) I didn't think that manufacturers announced alliances by wearing t-shirts and posting on rumour sites. Yeah, I know, crazy talk. I guess if @MrSMW says "Panasonic and Leica made a joint announcement that they are 'furthering' their collaboration." then it must be true - regardless of the source!
  3. This is an excellent question. I have been delving into things and I have come to three significant observations:
     • Camera sensors are linear measurement devices (see the sketch below)
     • Normal cameras do not look anything like an Alexa - even when shooting RAW still images that aren't debayered, and even well within their best operating area (low DR, base ISO, etc.)
     • The OLPF and spectral sensitivity of various cameras don't change much
     The problem with these three statements is that they can't all be correct without something else going on. The only way I know how to resolve the contradiction is if there is processing going on before the debayering in the ARRI cameras. I don't think the layout presents an enormous challenge for them to be able to put multiple sensors together like the LF does, but you might be right about it being a long wait. The 35 is named to match the 65, and the introduction of the 35 means that they now have cameras over 4K in all sensor sizes. Obviously the new 35 has greater dynamic range and colour depth than the ALEV3 cameras, but considering how much those already had, you could almost treat the extra DR on the 35 like a special-application camera (like super slow motion) where a production would just use one for those shots.
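To make the first point concrete, here's a minimal numpy sketch (the raw values are hypothetical, not from any real camera) of what "linear measurement device" implies: an exposure change is just a multiply in linear space, so compensating for it in post recovers the original values exactly, apart from noise and clipping.

```python
import numpy as np

# Hypothetical raw values - a linear sensor records values proportional
# to the light captured, so one stop more exposure is exactly a doubling.
raw_base = np.array([0.10, 0.25, 0.40])
raw_plus_one_stop = raw_base * 2

# Compensating in post is just the inverse multiply in linear space:
compensated = raw_plus_one_stop / 2
print(np.allclose(compensated, raw_base))  # True - identical image
```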
  4. This seems like a good overview, and includes a few little sections to make sure social-media-bros know some of the basics...
  5. Yeah, I've seen glimpses of that life and it doesn't look easy by any stretch of the imagination!
  6. I'm not sure that this applies for wedding videographers.... 🙂
  7. I'd suggest that you also take into account the resolution of the various platforms and what resolutions you'll want to deliver in. For example, it seems that Snapchat might be the most extreme vertically, but if it only requires a very low resolution then you can get away with a wider sensor read-out and still oversample. Also take into account sensor resolution: an 8K 16:9 sensor would give more resolution than a 4K 3:2 one (see the arithmetic below). As is almost always the best advice - clearly define what it is that you want to end up with and then work backwards from there.
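A quick back-of-envelope version of that comparison, assuming 7680x4320 for the 8K 16:9 sensor and 4096x2730 for the 4K 3:2 one:

```python
def vertical_crop(width, height):
    """Largest 9:16 vertical crop that fits inside a width x height sensor."""
    crop_width = min(width, round(height * 9 / 16))
    return crop_width, height

# 8K 16:9 sensor vs 4K 3:2 sensor (assumed pixel counts):
print(vertical_crop(7680, 4320))  # (2430, 4320) - ~2.4K-wide vertical video
print(vertical_crop(4096, 2730))  # (1536, 2730) - ~1.5K-wide vertical video
```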
  8. ...and more chance of extra equipment getting stolen from set while everyone is busy doing things. I see posts on FB groups every so often listing serial numbers of items that got stolen so the group members can keep an eye out if they pop up for sale. Usually someone asks how they got stolen, and it's mostly out of the boot/trunk of locked cars, but the odd item disappears from set, unfortunately.
  9. I must admit that I haven't kept up with your discussions on this, but I got the impression that you can't use the False Colour mode on the FP to accurately monitor things across all the ISOs - is that correct? The way I would use this camera would be either exposing manually or using auto-ISO with exposure compensation, but in either mode I would be using the false colours to tell me what was clipped and where the middle was (see the sketch below). I'd be happy to adjust levels shot-by-shot in post (unlike professional workflows when working with a team) and know how to do that in Resolve, so I'd be comfortable raising or lowering the exposure based on what was clipping and what I wanted to retain in post. If the false colour doesn't tell me those things then it would sort of defeat its entire purpose.
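For anyone following along, a false-colour display is just a mapping from luminance bands to indicator colours. A minimal sketch of the idea (the band thresholds here are my own assumptions for illustration, not Sigma's actual values):

```python
import numpy as np

def false_colour(luma):
    """Map a 0..1 luma image to indicator colours (bands are assumed)."""
    out = np.empty(luma.shape + (3,))
    out[...] = (0.5, 0.5, 0.5)                               # grey: everything else
    out[luma < 0.03] = (0.5, 0.0, 0.5)                       # purple: crushed blacks
    out[(luma >= 0.38) & (luma <= 0.46)] = (0.0, 0.7, 0.0)   # green: middle grey
    out[luma > 0.97] = (1.0, 0.0, 0.0)                       # red: clipped highlights
    return out

print(false_colour(np.array([0.01, 0.42, 0.99])))
```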
  10. "HLG" isn't a standard as-such, more like a concept. There are multiple standards within HLG (rec2020 and rec2100, maybe more) but the HLG from the camera may not be an implementation of one of those. I've tried this on the GH5 and found it wasn't exact, but was a "good enough" match to the rec2100, but I'm not doing this at any professional level. It's easily testable though - just do a set of over/under exposures on a reference scene and see if matching the levels in post results in a match or not. You can try converting from various standards and see if any of them get the shots to match. Its very easy in Resolve to put a reference exposure over the top and set the blending mode to "Difference" and you can clearly see the errors. IIRC on the GH5 the errors were strange colour shifts in the shadows and highlights and some of the more saturated colours, but was a pretty good match in the mids and lower saturation. I tried compensating with curves etc but it was messy.
  11. Good post and good points made, and it's nice to see appetite for a more nuanced discussion around it. I see a number of elements in the overall set of functionality:
     1. being able to reliably focus on a thing (this is where PDAF has the clear advantage)
     2. being able to recognise objects that might be good to focus on (face detect, eye detect, animals, etc.), including candidates that are quite out-of-focus
     3. being able to choose the right subject from the faces / animals detected
     4. being able to know what to do when object tracking is lost (focus on someone else, the background, or hold focus?)
     5. being able to adjust the focus racking speed to be context-sensitive
     6. being able to anticipate focus on an object before that object is in frame
     The PDAF vs CDAF debate only applies to #1 - literally none of the other elements are related to it at all. Obviously PDAF is much better at #1 than a non-dedicated focus puller, and this is what CDAF systems get criticised for (either not doing it at all, going the wrong way, or pulsing while tracking). Modern cameras are all getting much better at #2, with face detect being pretty reliable and ubiquitous at this point. MILC/cinema cameras seem to have zero capability to do #3, which is why you have outlined the use of the joystick, touchscreen, and other techniques, and I agree with you that if someone practices a bit and gets familiar with their camera then this is probably a good enough way to control the AF. #4 is only just being added to new cameras, so it is probably "cutting edge"; apart from the "face AF only" mode not being on all cameras, the ability to change the focus mode on the fly is probably absent or tedious (maybe I'm wrong here, but I doubt that it's as easy as the controls for #3). #5 is adjustable deep in the menus of most cameras, but changing it on the fly is likely to be cumbersome / prohibitive, and the camera automatically doing it is completely absent from all cameras and will be a long wait. This is particularly relevant because there's always a balance between the camera not getting jumpy when something moves through the shot and not having a huge lag when the thing to focus on actually does need to change. #6 isn't available in any real capacity from cameras. Anyone who focuses manually will be worse at #1 than PDAF systems, and have full and complete capability on 2-6. Mostly the conversation only talks about #1, with sideways references to 2-3 and no acknowledgement that the others exist. In terms of aesthetics, I greatly prefer to have an imperfect #1 if the rest are on-point. In fact, as someone who spends most of the time behind the camera on my own family videos, the way I use the camera (and edit) has proven to be a significant presence in these videos, and focus pulling is a significant contributor to that aesthetic.
  12. If only it were the case that something simply being true meant that no-one needed to be convinced of it! At this point, I've read so much BS online that I require quite a high amount of evidence that something is true before I repeat it to others, which was the point of me phrasing it like that in my original comment 🙂 It really seems like the FP is a great FF cinema camera that's just lacking a good post-processing workflow and support in the NLE space. I really hope they rectify this in future firmware updates - the sensor has soooo much potential!
  13. The thing never discussed in these debates is "who". Who should it focus on? Very early on in the AF game you could program your camera with photos of your family and then it would look for those faces specifically and focus on them, not whatever face happened to be most convenient at the time. I've never heard features like this mentioned in these debates. Sure, AF from Canon / Sony is phase detect, which means that it can focus on the object it chooses for you. Sure, AF from Canon / Sony is face / eye detect, which means that it can focus on the face or eye it chooses for you. I've used some of the worst AF systems ever made and had many, many shots ruined by them. As often as not, they were ruined because the camera focussed on the wrong thing or the wrong person. AF is great if you're trying to keep a talking head in focus at 100mm and F1.2, so I guess it's good enough for professional videographers, but as someone who shoots home and travel content, it's nowhere near sophisticated enough for my needs.
  14. I have spoken about this at length on other forums and been convinced that the Alexa is simply a linear capture and all the processing is in the LUT. Here is what happens when you under/over expose the Alexa and compensate for that exposure under the LUT: https://imgur.com/a/OGbI2To Result: identical looking images (apart from noise and clipping limits). Of course, the Alexa is a very high DR linear capture device, so I'm not criticising it at all. However, the FP is also a high-quality linear capture device, and the fact that the ARRI colour magic is in the LUT means that we can all use it on our own footage if we can convert that footage back to Linear / LOG from whatever is recorded in-camera. The test is sketched below.
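Here's the logic of that test as a minimal sketch - the gamma curve below is a stand-in, not the actual ARRI LUT, but the principle holds for any fixed LUT applied after linear compensation:

```python
import numpy as np

def lut(x):
    """Stand-in for the real ARRI LUT - any fixed 1D curve works for the test."""
    return x ** (1 / 2.4)

linear = np.linspace(0.01, 0.5, 5)      # hypothetical linear scene values
over_one_stop = linear * 2              # the same scene shot one stop brighter

graded_reference = lut(linear)
graded_compensated = lut(over_one_stop / 2)  # compensate in linear, then grade
print(np.allclose(graded_reference, graded_compensated))  # True
```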
  15. Sounds like you need to calibrate your monitor! Let's have an argument about what specification your monitor is, whether it has a dedicated video output or is being polluted by your operating system's colour management, what calibration device you use and what software you paired it with, what options you chose for calibration, what the ambient lighting conditions are in your studio and what CRI and calibration the globes have, what colour the walls of your studio are - I mean, did you even buy the special neutral colour paint???? No wonder it's all gone topsy-turvy for you!!! (In all seriousness, those things do matter, but making a film that isn't boring is still way more important....)
  16. I've tried correcting skintones on hundreds? thousands? of shots of GH5 10-bit footage. I haven't tried breaking them or really pushing the grade specifically on them though. As I said, I'm not sure if 10-bit is enough or not, maybe not. I know that some of the ML users here have reported seeing not only differences between 10-bit and 12-bit RAW, but also between the 12 and 14 bit RAW, so that's more data to factor in. I guess mostly, I'd like to be given the option! ......and without having to build a whole rig around an external recorder 🙂
  17. Agreed. Something that I think doesn't get talked about much and isn't well understood is the relationship between resolution, bit-depth, and bitrate. To some extent they are linked and can, under some circumstances, offset the weaknesses in the others (some rough numbers are below). I'd still like Prores 444 though, because it would give me downsampled 1080p with low levels of sharpening, high-enough but not unmanageable bit-rates, and RAW-like bit-depth 🙂
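Some rough numbers to illustrate the link (my own back-of-envelope arithmetic, not any codec's specification):

```python
def raw_mbps(width, height, fps, bits_per_channel, channels=3):
    """Uncompressed video data rate in megabits per second."""
    return width * height * fps * bits_per_channel * channels / 1e6

print(raw_mbps(3840, 2160, 25, 10))  # UHD 10-bit: ~6,221 Mb/s uncompressed
print(raw_mbps(1920, 1080, 25, 12))  # HD 12-bit:  ~1,866 Mb/s uncompressed
# The codec bitrate is the budget spread across those raw bits, so at a fixed
# bitrate, lower resolution leaves more bits per pixel for extra depth.
```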
  18. I so wish that companies would put Prores 444 into their cameras as a 12-bit standard that is well supported... well, that they'd put Prores in their cameras at all, then 444 after that 🙂 Mind you, I have tried to break the 10-bit files from my GH5 and they've held up, so maybe there isn't much need for it? Not entirely sure on that one.
  19. You're probably thinking of Prores RAW, which Resolve doesn't support. Normal prores has full support though. I've noticed that people online often say Prores when they mean Prores RAW and it's quite confusing as they're definitely not the same.
  20. I think all crews will always be stretched.
     • The crew might be stretched because they're all trying to do their own thing well, and any spare time/resources is quickly put to doing something better or something more that wasn't originally included.
     • The crew might be stretched because they're all trying to do as little as possible and therefore the task expands to fill the available time.
     • The crew might be stretched because they're a smooth-running operation who are coordinated and efficient through experience and discipline (which is no easy feat to implement).
     The way you can tell the difference is that in the first instance the results are greater than expected from the time and resources, in the second the results aren't better than expected, and in the last things seem relaxed and smooth and there are time and/or resources spare at the end.
     Absolutely. People concentrate on image image image and it's so short-sighted. I talk a lot about dynamic range and stabilisation etc. in my work and share nice images I've captured, and people respond with the old "the GH5 is enough", but what they're not seeing is clip after clip after clip where I couldn't get my shit together in time and missed the moment, where the DR wasn't enough, where the stabilisation made it unusable, etc. etc. Every usable clip on the card is better than every clip you can't use or didn't capture, and camera choice is huge in that equation.
  21. I thought Gerald was the detailed spec guy? Like, he was the one who would find and mention all the bizarre and nonsensical cripple-hammer combinations, because he was the only one that had used the camera for more than a day before posting a review about it. But would you want his recommendations and conclusions? Definitely not - he's a technician, not an artist. Film-making isn't about technical perfection, it's about the aesthetic. Gerald's (and most internet technologists') favourite aesthetic is beige filters on a beige lens on a beige sensor with enormous resolution to reproduce the mind-numbing dullness of the whole affair. If you only read the internet, then this would be a delicious delicious image:
  22. It's a good question why they don't add those modes. Looking at the sensor specification sheets that have been shared over the years, I've noticed that Sony (it's always Sony) will often list a full-resolution readout of the sensor up to a certain frame rate (eg, 30p) and then a lower resolution at a higher rate (eg, 60p or 120p), but won't list any lower/faster modes than that. For example, they might list 6K60 and 4K120, but not 1080p240 - which is actually half the number of pixels per second of 4K120 (see the arithmetic below) - let alone faster readouts for 720p or 480p. Obviously the tiny sensors for smartphones and the 1" ones like the RX100 (and GoPro?) etc. do have those fast/lower-res modes, but maybe they just don't include them on larger sensors - not sure why.
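The pixel-throughput arithmetic behind that claim, using the readout modes mentioned above:

```python
def megapixels_per_second(width, height, fps):
    """Readout throughput: total pixels per second, in megapixels."""
    return width * height * fps / 1e6

print(megapixels_per_second(6144, 3456, 60))   # 6K60:     ~1274 Mpx/s
print(megapixels_per_second(3840, 2160, 120))  # 4K120:    ~995 Mpx/s
print(megapixels_per_second(1920, 1080, 240))  # 1080p240: ~498 Mpx/s - half of 4K120
```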