Everything posted by kye

  1. Did you convert your footage into LogC and the right colour space before putting it through the LUT? (There's a rough sketch of that encoding step further down.) I haven't played with any X-T3 footage, but my GH5 10-bit footage is really difficult to break, so I think it might be a processing problem.
  2. It depends on what you value. A fixed ND will mean you need to adjust the camera settings to get perfect exposure - e.g. vary the ISO, shutter, or aperture. A variable ND allows you to shoot with your preferred settings and adjust exposure with the ND. My preference was to get a high-quality ND without the dreaded X pattern for a reasonable amount of money, and I don't mind using shutter to adjust exposure. Your preferences are probably different, but these are the things you are prioritising when you decide.
  3. Let us know if you have questions. I'd encourage everyone to just start using it and jump in. Once you're familiar with how to use it, it's good to learn the tools you don't know yet, but in the beginning it's good to concentrate on how you like to work. Pumping out little 10-60s videos is fun and gets you familiar with the whole workflow.
  4. @BenEricson is correct that you should try it in 4K mode. @webrunner5 is wrong here - the XC10 in 4K 305Mbps mode has the edge on the C100. It's a closely run race though, which is a tribute to the C100 because it's doing it with about 10% of the file size. You may also want to play with sharpening and see what you prefer there. I like the less sharpened look but everyone is different.
  5. In general, if you're shooting with 8-bit codecs it's best to get the exposure and colour as close as you can in-camera so you're not trying to push/pull the image too much in post. 10-bit is another story, and of course bitrate comes into it too. In terms of the dull lighting from the fog/smog, it depends on what you're making and what is in the frame, but at a certain point you have to accept that flat lighting creates a flat image regardless of what you do. What kind of end result are you hoping for?
  6. I've watched a few of his videos but don't remember them either way, so I'd have to have another look.
  7. Wow @kaylee you should tell us what you really think!! To be fair, those images are screen grabs so won't be accurate in an absolute sense, but they are useful as a comparison. Androidlad was partly right and partly wrong in the other thread. He was right about the pictures I posted being screengrabs, but he was wrong about the vectorscopes, because they were taken from a downloaded copy of the video that hadn't gone through a display correction of any kind. I blocked him partly because he assumes that other people are wrong without cause, partly because he didn't try to clarify or have a civil conversation, and partly because I already have enough immature bickering in my life from my two children. However, having said all that, it is pretty difficult to get past the colour of 14-bit RAW!
  8. Yes, lighting... I forget that other people have control of that, lol. If we add in other things like acting, sound design, music, grading, VFX, art department, etc., the camera percentage approaches zero!
  9. I agree. It's definitely hard though when you have the best in the business making great looking edits and using the best equipment - the logic would then suggest that part of the output is the operator and part is the equipment, so if you want better results quickly then buying the part of that picture that you can buy kind of makes sense. Unfortunately, the thing that isn't obvious is that it's more like 50% operator, 30% lenses, and only 20% camera, but it's the camera that people fixate on.
  10. So anyway, if you were to download those videos and pull them into Resolve then those are the vectorscopes you'd get (there's a rough sketch of what the vectorscope is actually plotting further down). It's worth looking at the GHAlexa LUTs from @Sage to see what they do to footage too. My top impressions are the knee in the highlights, the overall saturation, and the skin tones. I suspect that if you watch a bunch of test videos shot on Alexa (e.g. lens tests or something that's not too heavily graded) then you'll start to see similarities between them and train your eye.
  11. As you know I make travel and home videos for my family, and looking back on my finished projects it's the ones that are full of shots of us that are the nicest. The kids are in "that phase" where they don't like me pointing a camera at them (or seeing themselves appear in the final product - my daughter says "ew!" every time she sees herself...), but I'm persevering for those moments I can get a nice shot. I've recently started getting over-the-shoulder shots of us looking at big buildings or grand views, or doing the mid-shot and then panning to the thing. No idea if these will work but I'm trying. Trees and stuff are nice, but unless you're the cinema equivalent of Ansel Adams it can all start to look a bit like clip-art at some point.
  12. I remember watching an episode of Top Gear where Richard Hammond was driving some crazy car around the track and said "ooh, this car is so much better than me!", and I've always remembered that. I'm on my second trip with the GH5 and I'm starting to get a feel for it now, and when I take the images into Resolve at the end of the day and have a little play with them I am reminded that it is indeed so much better than me. I actually think this "benchmark" is a good one for buying equipment - if all the features that you use on your camera are better than you, then there's no point upgrading.
  13. Thanks - that's useful. I did a fake multi-cam of my wife giving a speech by digitally punching in to make the other two angles, and I understand what you say about precision and pacing being really important. I can't claim to be any good at it, but I did notice that when I tried a few different cuts on certain sections the message of what she was saying really changed, which was interesting to see.
  14. I'd imagine they'll fix that pretty quickly, as it's a major feature and lots of people probably use it. Just out of curiosity, how do you use it? I had a three-angle setup that I tried to edit once, and after watching a few tutorials on it I found it to be more fussing around than just putting the clips on individual tracks and cutting them up by hand. I was cutting around people walking in the foreground though, and I've never cut video live before, so maybe those were the stumbling blocks? Do you just hit go, change angles live, then tweak a bit and render that out?
  15. On the subject of proxies, Resolve has a built-in function to render them so you don't even need to manage them, and I'm sure the other editors have one-click solutions too (there's a rough sketch of the equivalent idea with ffmpeg further down). I edit 4K 10-bit 150Mbps GH5 footage like butter on my 13-inch 2016 MBP laptop with ProRes proxies. It takes time to render them obviously, but it means you're not carrying around a $5k or 5kg computer.
  16. This is very interesting to me. What kind of performance do you get with and without it enabled? I would imagine there would be a performance hit when running it (as I imagine the BM hardware solutions also impose).
  17. Yeah, I wanted extra low light and almost went with an A7III partly for that reason. I ended up with the GH5 and am really happy with what I am getting, precisely because of the Voigtländer 17.5mm f/0.95 prime on it. The night city portraits are spectacular and I'm super happy with the combination, despite the GH5 not really being a low-light star.
  18. A lot of the first footage from the camera was hand-held and had that shaky look. I'm not a fan of it (which is why I got the GH5) but it's an aesthetic that some people like, I guess. It also depends on how shaky you are; if you practice a lot then you can get gimbal-like results from lens IS, but you'd have to work at it. I think working out what aesthetic elements you like and don't like is a big part of film-making. I've worked out that I don't like non-stabilised hand-held, but I do like wide apertures and wide DR, and the inaccuracies involved with MF are also kind of charming in a human/imperfection kind of way. It's an art after all, not a science.
  19. Have a nice life, dickhead. Blocked.
  20. This is an interesting question. I did some googling and I think it said that HLG is Rec.2100, but maybe I didn't understand that correctly. Does anyone know for sure?
  21. Who said screen grab? You seem very quick to point fingers at other people here, so before doing it maybe you should learn to pay more attention. You might learn something.
  22. It's definitely a 'look', but as @Deadcode says it's quite achievable with the hue vs hue curve to push the skintones together a bit and lessen the hue spread (there's a rough sketch of that hue compression further down). Also worth paying attention to is the saturation, as that's quite controlled too. It's worth pulling a still into Resolve and, with a tiny window, having a look around the face to see which bits make up the overall vectorscopes above. Personally, when I first saw them I was quite surprised at how yellow and processed they looked, and while I've gotten used to the look since then it's still quite a strong look for my eyes. Apart from lighting, I think the hue can vary depending on the person too. Sick or not, there's lots of variation.
  23. Yep, that's right - sorry, I should have said. Specifically the line in Resolve. There's a bit of a debate about whether it's the skin colour line or just an indicator, but the point is that skin tones actually vary wildly in hue depending on lighting conditions. The idea that skin tones should be on the line is shot down pretty quickly on the grading forums. The top row are from the ARRI video, and the bottom row are from other videos that got good reception. Here are a bunch of vectorscopes from the ARRI video - notice the highly, highly, highly controlled skin tones! These are a couple from the C200 demo video (IIRC) - note the more spread-out trace, which sits between the indicator line and the red reference box, even going slightly beyond the red in certain areas.
  24. Some time ago I compared the skintones from the ARRI LF and C200 demo videos: the ARRI skintones were on the line or to the yellow side, and the Canon ones were spread between the line and the magenta reference point.
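A minimal sketch of the LogC encoding step mentioned in post 1, assuming the LUT expects ARRI LogC3 (EI 800) input. The gamut conversion (e.g. into ARRI Wide Gamut) is a separate matrix step that isn't shown here, and the constants are the commonly published LogC3 EI 800 values - check them against ARRI's own documentation before relying on them.

```python
# Encode linear scene data to ARRI LogC3 (EI 800) before applying an
# Alexa-targeted LUT. Sketch only - gamut conversion not included.
import numpy as np

def linear_to_logc_ei800(x):
    cut, a, b, c, d, e, f = (0.010591, 5.555556, 0.052272,
                             0.247190, 0.385537, 5.367655, 0.092809)
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > cut,
                    c * np.log10(a * x + b) + d,
                    e * x + f)

# Mid grey (18% reflectance) should land at roughly 0.39 in LogC.
print(linear_to_logc_ei800([0.0, 0.18, 1.0]))
```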
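For posts 10 and 23: a small sketch of what a vectorscope is actually plotting - each pixel's chroma as a Cb/Cr point. BT.709 coefficients are assumed, and scope scaling conventions vary, so treat the numbers as relative only.

```python
# Convert RGB to Cb/Cr chroma (BT.709), i.e. the coordinates a vectorscope plots.
import numpy as np

def rgb_to_cbcr(rgb):
    """rgb: array-like of shape (..., 3), values in 0-1. Returns (Cb, Cr)."""
    r, g, b = np.moveaxis(np.asarray(rgb, dtype=float), -1, 0)
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return cb, cr

# A skin-ish tone gives positive Cr and negative Cb (the red/yellow quadrant);
# the tighter those points cluster across a face, the more "controlled" the
# skin tones look on the scope.
print(rgb_to_cbcr([0.80, 0.55, 0.45]))
```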
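For post 15: Resolve's proxy generation is built in, but as a rough sketch of the same idea outside Resolve, here's one way to batch ProRes Proxy files with ffmpeg from Python. It assumes ffmpeg is on the PATH, and the folder names and clip extension are made-up examples.

```python
# Batch-transcode camera originals into ProRes 422 Proxy files with ffmpeg.
# Sketch only - folder names and extension are placeholders.
import subprocess
from pathlib import Path

SOURCE = Path("footage")      # hypothetical folder of camera originals
PROXIES = SOURCE / "proxy"    # hypothetical proxy folder
PROXIES.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE.glob("*.MP4")):     # adjust the extension to your clips
    out = PROXIES / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "0",   # profile 0 = ProRes 422 Proxy
        "-vf", "scale=1920:-2",                   # downscale to make editing even lighter
        "-c:a", "copy",
        str(out),
    ], check=True)
```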
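For post 22: a rough numpy sketch of what a hue-vs-hue adjustment does to skin tones - pulling nearby hues toward one target hue so the trace tightens on the vectorscope. The target hue, window, and strength are arbitrary example values, not anyone's actual grade.

```python
# Compress hues near a target hue toward it, a crude stand-in for a
# hue-vs-hue curve that narrows the spread of skin tones.
import numpy as np
import colorsys

def compress_skin_hues(rgb, target_hue=30/360, window=25/360, strength=0.5):
    """rgb: array-like of shape (..., 3), values in 0-1."""
    rgb = np.asarray(rgb, dtype=float)
    result = []
    for r, g, b in rgb.reshape(-1, 3):
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        d = (h - target_hue + 0.5) % 1.0 - 0.5   # signed hue distance (hue wraps at 1.0)
        if abs(d) < window:
            h = (target_hue + d * (1.0 - strength)) % 1.0
        result.append(colorsys.hsv_to_rgb(h, s, v))
    return np.array(result).reshape(rgb.shape)

# Two slightly different skin-ish colours end up closer together in hue.
print(compress_skin_hues([[0.80, 0.55, 0.45], [0.80, 0.60, 0.40]]))
```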