Everything posted by kye

  1. Absolutely. There are some situations where the physical size of a GoPro is required, but the majority would be fine with any of those cameras. It's not like GoPros are really that much cheaper these days either! I think vlogging really benefits from a very wide-angle lens, as it makes it look like the person is a sensible distance away from the camera, rather than in that uncomfortable spot where they might be talking to you but might also be leaning in hoping for a quick kiss. In that sense I think the Osmo Pocket (at 26mm equivalent) and the RX0 (similar FOV) missed the mark. A GoPro is a more flexible tool IMHO, in that you can crop in for a tighter FOV at a lower resolution, but it's pretty hard to crop out and get wider than the lens (there's a quick sketch of that trade-off after this list)! Especially considering that almost everyone with either of those cameras will also have a phone with a 24-28mm lens. Yeah, the second-hand market is basically a graveyard for older models. That's why I don't really feel bad about doing this to my Hero3, even though it didn't survive the process: In terms of vlogging with an F3, sure you can... just hit the gym first!!
  2. I agree. In fact, now that I think about it, if I had to choose between 4K 8-bit and 1080p 10-bit I'd be torn, but I might go 1080p 10-bit just because of the flexibility for doing radical grades in post, which I do quite a bit when I'm in less-than-ideal available-light conditions. I see GoPros (or at least action cameras) quite a bit on Netflix shows. They're always used for shots mounted on the outside of a car, or other difficult placements, but what gives them away is the over-sharpened, over-contrasty image, not the FOV or the image quality more generally. I don't know why people don't just apply a blur filter over the top to match them better; even if it's not about matching, just making them not look like the image has been embossed onto something would be a great step forwards! If I vlogged I'd be pretty tempted by an action camera. Wide FOV, portable, decent stabilisation on the more recent ones, some have 100Mbps files, etc. Not bad for that particular use-case. There aren't many other use-cases where they'd be a good fit, but for some I'd imagine they're a reasonable choice - it just wouldn't be that big a segment of the market.
  3. Yeah, I was contemplating the better IBIS of Olympus but the 10-bit internal of the GH5 won out. I was emailing with one of the Olympus ambassadors about options, and when I said I had gone with the GH5 for the 10-bit, even they conceded that 10-bit internal was a hard thing to pass up.
  4. Ah, I misread your comment and thought you were calling BS in the other direction lol. Yes, if you're putting a priority on accuracy then the IDTs don't work. I tried both and confirmed it for myself some time ago: I took two clips in HLG exposed one stop apart, brought them into Resolve, then took one of them, did an X-to-Linear conversion, scaled up the brightness, and converted Linear back to X again. I tried every combination of X I could find (2100, 2020, etc) with every way of raising the luma (Offset, curves, Gain, etc) and no combination was a perfect match (there's a small sketch of the maths behind this test after this list). In the end I realised that many of the conversions were similar, hence my earlier comments about log profiles being similar. When I finally wrapped my head around the idea that grading isn't about accuracy, it's about what looks nice, I stopped worrying about profiles. Many professional / high-end colourists don't bother with CST / ACES / RCM; they just take the log files, adjust colour balance with LGG, adjust contrast and primaries, and are done, often taking only 10-20s per shot for the bulk of the work before starting the polishing and 'look' adjustments. Instead of doing CST -> adjustment -> CST, can you just use a single node and set the colour space of the node (it's in the menu you get by right-clicking the node), and would that do the same thing? My understanding is that the CST is simply a user interface to the same RCM functionality that applies to clips in the media pool, timelines, etc. I've been meaning to try it but never got around to it; it would save a lot of time and simplify the node structure somewhat.
  5. As people are posting their whole rigs, mine is the GH5 and the Sony X3000 action camera. GH5 for "proper" shots, and the X3000 for random travel sequences between locations, or if I suddenly need a super-wide angle. For the GH5 I have the FF equivalents of 15mm F4, 35mm F2, and 85mm F2. I'll also take a longer lens on a trip if there are animals or sports, but it's not a core lens in my kit.
  6. Are you using the Panasonic IDT and finding that it does a really good job? If so, I wouldn't be surprised. Anyone who has spent any time grading (or colour matching) different gamma/colour spaces will know how similar they really are to each other. Both Log and Linear are mathematical terms used to describe certain functions. Sensors 'see' in linear and then encode to log (assuming that's how you're recording). The camera curves do differ in subtle ways, both in deviating from a pure mathematical log curve and in doing 'nice' things to the colours, but I'd recommend people get some log footage, convert it to 709 using different input profiles, and see what differences there actually are (there's a small curve-comparison sketch after this list). Many of the settings are so similar that when you click on the new one there is so little change you wonder if you hit the right buttons... For the GH5 I've been playing with both the HLG.2020 and HLG.2100 profiles and getting a decent look from both in various circumstances.
  7. Also, if you're going to test the cards by recording with the camera, make sure to record something with intense movement, not just a still scene. The bitrates can be a lot lower in reality if nothing much is moving. Trees blowing in the wind, fountains (zoomed in to just the water spray), or design your own. I took five completely different still images, put them into a 1080p60 timeline, and exported them as a ProRes file. Then play the video on loop so it just sits there and flickers as it repeats, point the camera at it, set a smaller aperture to get everything in focus, and record it at 24/25p for a few minutes. I analysed the footage I took to confirm that this test works: even with the shutter open between frames, no frame is anything like the previous one (there's a small frame-difference check after this list).
  8. Agreed. There is a huge debate about log in 8-bit vs 10-bit; there are instances where people do grades and show minor issues, and there are tests where people deliberately film difficult situations, try to break the 8-bit footage, and can't do it. There are many more complexities involved, as @KnightsFan mentioned, and compression is the biggest one. Trying to have a serious debate about 10-bit vs 12-bit is fine, but just don't try to push the angle that 10-bit is anything less than 99% as usable in 99% of situations. If you're after ultimate quality then sure, shoot RAW, that's totally fine, and yes it does give ultimate freedom, but that's like comparing a Ferrari and a Lamborghini - you can compare them and one might be faster than the other, but you can't seriously claim that the slower one isn't fast enough for everything except the smallest of situations. See my above shot of the HK harbour and tell me how 10-bit would somehow have been better, when there aren't really serious artefacts even in a grade that no-one would ever do in any real-world situation. In terms of WB, yes, the colour science makes it trickier, but if you know what you're doing and have a half-decent software tool at your disposal then the only thing stopping you from getting an excellent balance is skill. And I would know - I shoot available light, often mixed lighting, all the time, and the thing that limits me is my ability to adjust WB in post. Check out these images... An ungraded frame from a shoot I did, note the horrendous green/magenta lighting: and the two grades I got back from the (very gracious and much more experienced) members over at LiftGammaGain.com. The first is from Szilard Totszegi: and the second from Cary Knoop: and the thread is here: https://liftgammagain.com/forum/index.php?threads/advice-for-grading-mixed-green-magenta-light-sources.12727/ This was after I'd battled with the video for hours and gotten nowhere near what they managed to do. Even after seeing what was possible and outright trying to copy their efforts to learn from their examples, I still didn't get it as good.
  9. Indeed it is. We went on a special trip to a beach where you can see it over the ocean, and all we saw was clouds. Admit defeat, run back to the bus, get back to the port, and the thing is in full view with the sun setting over it. Life is funny like that sometimes!
  10. That's how I shoot all the time, grabbing a quick shot then running to catch up. Or getting shots of the family while they're doing their thing. For me the strengths of the GH5 really help with that. IBIS for walking while shooting (trying to simultaneously do the ninja walk and also not walk into a pole or anything) or for standing and grabbing a faux tripod shot. The 10-bit for being able to significantly push the image, since I have no control of the lighting and little to no time to work the scene. Etc... My other approach is just to get volume, as not only does it mean you get lucky more often, but the practice increases your hit rate too. Here are some random GH5 shots from filming at the speed of life...
  11. I wasn't. That makes sense. I guess it just depends on what lenses you have on it. I have a GF3, and when paired with the 14mm f2.5 pancake it's (just) pocketable, but put the kit lens on it, or anything with a longer focal length or wider aperture, and the size advantage disappears pretty quickly. Whatever works for you, though. I thought for a second there that your reference to people who aren't so 'patient' might be to people in fast cars - maybe a street racing reference, illegal street racing, small camera doesn't attract attention, etc, for street videos - or maybe undercover law enforcement. Or alternatively, some kind of meme I'm unaware of, or humour that I wasn't getting, or..... or.... ???
  12. I'm confused. It was a genuine question... like, either I learn something about the GX9, I learn something about the GH5, or I can offer advice.
  13. I've posted this before, but here's a clip I tried to push to breaking point, and it seemed pretty much unbreakable. This is the GH5 150Mbps 10-bit mode, so not even the 400Mbps All-I. When I was playing with ML raw I compared the 10-bit to the 12- and 14-bit modes, and although I could see a slight difference between 10 and 12 if you pushed it hard, I concluded that 10 bits was enough for me. In terms of the GH5, dual ISO is one feature that would be great to have, but the IBIS more than outweighs it, and in comparison to the S1H, the smaller size and cost more than outweigh it for me. Literally, if I became a billionaire tomorrow, the GH5 would still be the best camera available for what I do.
  14. Considering the GH5 is currently winning, it seems that the answer to your question is "3". Or, put another way, 0.000821/day. ??? What is it about the GX9 that makes it 'faster'? I never really thought of the GH5 as slow...
  15. ......and I could keep shooting on auto ISO / auto SS and it would no longer look like I left my ND filters at home!
  16. Maybe, but it would have been strange for me to say it all those years ago, when I only just now looked it up and learned what it meant!
  17. Not sure about the quality, but Resolve comes with a Media Management engine built in. It's kind of hidden, but it's super simple to use, and it has all the cool codecs. https://***URL not allowed***/davinci-resolve-media-manager/
  18. Meh... Kids want to be <insert the occupation of the tallest, most impressive person they saw in the last 2 years here> when they grow up. All the kids who wanted to be astronauts when they grew up became accountants anyway, and once these kids make a few videos and meet the algorithm they'll grow up to become accountants as well. Same with anyone deemed 'successful'. I once had a very intelligent relative (a high-level manager in a small-medium sized business) tell me I should 'write an app and make a comfortable living off it', and I thought he was making a joke, but he wasn't. When I showed him the millions of apps on the App Store and told him that writing an app is like starting a business he changed his mind, but he genuinely didn't know. For every movie or TV show about a dot-com billionaire, no-one makes the million TV shows or movies about people losing their savings. Of course it looks easier than it is.
  19. You can, but adjusting it would be difficult and it's a subtle effect, so you'd have to do lots of tests and dial things in. I'd suggest just doing it in post. That way you can adjust it shot-to-shot if you need to. Most lenses are softer wide open and sharpen up as you stop down, so having an adjustable approach (especially in post, where you're already set up to evaluate and match shots) shouldn't add that much to a workflow. Although the counter-argument is to go with a Pro Mist or equivalent - any effect you deliberately add will make the footage much more uniform, and differences will be much harder to spot. But you can't adjust the effect afterwards, so buy wisely.
  20. Excellent question. I've been trying to peel the onion and get to the heart of things in my cinematic lenses thread, but I still have many more layers to go, I think. When I did my big lens comparison I tried looking at how 3D each lens was. I put the lenses up on the screen, closed one eye, and looked at the image through a short PVC pipe so that I couldn't see the edges of the image, and I asked myself how convincing a 3D image each one created. Looking at it with mono vision should be a reasonable approximation of what the eye actually sees, and I made sure to evaluate the scene by only looking at the object that was in focus in the image - otherwise, if you look around, the eye doesn't have to re-focus and the 'illusion' is broken. There were subtle differences between the lenses, but the one overwhelming factor in this test was that if two lenses were at the same aperture (eg 2.8, or 5.6) then the one with the slightly longer focal length had the advantage (eg 58mm over 55mm, etc). This, of course, means that the aperture was slightly larger and the background was slightly more out of focus. I believe that there are cues that are genuine, and that the people on here (and also real DPs doing real lens tests) who talk about it are genuinely seeing something. Unfortunately I'm not seeing it, so it's hard to do tests and get more insight. I'm very interested to see if there's anything new we can uncover.....
  21. No one has used the phrase "out of focus areas", which I thought was quite common. Yup, wide aperture = faster SS
  22. I've had success matching softer lenses with sharper lenses (or matching a softer wide-open image with a sharper stopped-down image from the same lens). I'd recommend taking a blurred copy of the image and adding it on top at a low opacity; it will take some tweaking to get it perfect, but that should get you into the ballpark (there's a minimal sketch of this after this list).
  23. It would be interesting to see how many videos are sponsored and not declared. I've seen some (I don't think they were from Potato Jet though), but mostly they do this awkward "sponsor time" kind of insert, which I just skip if it goes for more than a few seconds. I suspect there are lots of people who don't like the inserts, because many channels now put up a countdown timer or a progress bar, which I think helps with the psychology of people sticking around (I find it makes it easier to skip lol). In terms of advertising, I'd prefer to watch these guys get paid for creativity and have cameras advertised to me (or website companies, or music streaming services, or power tool companies) rather than be watching a nature documentary on TV and have some person who sounds unhinged yelling at me that their furniture store is going bankrupt (again, for the 10th year in a row) and there are crazy bargains. In the end, someone has to pay. One of my favourite YouTubers, Alex (a French guy who makes cooking and recipe videos), just started a series on making meatballs, and decided that instead of his normal process of learning the classic recipes, breaking them down, then perfecting them, the first video in the series ends with him getting on a plane to the US. I can't recall if that video was sponsored, but I don't think the ad revenue he gets from his videos alone pays for flying to another continent for a week or two to meet other chefs and get glimpses into their kitchens. He did a series on pizza where he went to Italy and filmed inside the kitchens of the best pizza places there. To me, the quality of the content makes the ad inserts worth it: it's better than the person who has to work a day job and can't devote as much time and travel to it, and it's better than the content being interrupted for minutes at a time to be screamed at by crazy people about their bargains, or having barely clothed people try to FOMO me into buying their slightly-better-but-100-times-more-expensive goods. The best solution would be a subscription network partly owned by the creators, where the revenue model is membership and it's ad-free; we're starting to see these things with Makers Mob and Nebula, or on the free platforms with Patreon.
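A quick sketch of the crop-in trade-off mentioned in post 1: cropping a wide action-camera frame narrows the equivalent focal length by the crop factor, but costs resolution, and there's no equivalent trick for going wider than the lens. The 16mm-equivalent lens and UHD frame below are assumed round numbers for illustration, not measured GoPro specs.

```python
import math

def horizontal_fov_deg(equiv_focal_mm, ff_width_mm=36.0):
    # Horizontal field of view for a full-frame-equivalent focal length.
    return math.degrees(2 * math.atan(ff_width_mm / (2 * equiv_focal_mm)))

def crop_in(equiv_focal_mm, src_width_px, dst_width_px):
    # Cropping the frame multiplies the equivalent focal length by the crop factor.
    return equiv_focal_mm * (src_width_px / dst_width_px)

wide = 16.0                            # assumed native equivalent focal length
cropped = crop_in(wide, 3840, 1920)    # crop a UHD frame down to a 1920px-wide window
print(f"native:  {wide:.0f}mm-e, {horizontal_fov_deg(wide):.0f} degrees horizontal")
print(f"cropped: {cropped:.0f}mm-e, {horizontal_fov_deg(cropped):.0f} degrees horizontal")
# Going the other way (wider than the lens) isn't possible in post - there are
# no pixels outside the recorded frame to crop out to.
```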
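For the exposure-matching test described in post 4, here's a minimal numerical sketch, assuming the published BT.2100 HLG OETF and ignoring everything a real camera does around it (noise, colour processing, clipping). It shows why a one-stop change in scene-linear light is not a constant offset or gain in the encoded HLG signal, and why the decode -> gain -> re-encode round trip is the mathematically correct way to match the two exposures, even if real clips never line up perfectly.

```python
import numpy as np

# BT.2100 HLG OETF constants
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e):
    """Scene-linear [0, 1] -> HLG signal [0, 1]."""
    e = np.asarray(e, dtype=float)
    log_part = A * np.log(np.maximum(12 * e - B, 1e-12)) + C
    return np.where(e <= 1 / 12, np.sqrt(3 * e), log_part)

def hlg_oetf_inverse(v):
    """HLG signal [0, 1] -> scene-linear [0, 1]."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.5, v ** 2 / 3, (np.exp((v - C) / A) + B) / 12)

linear = np.linspace(0.01, 0.5, 6)       # a few scene-linear sample values
base = hlg_oetf(linear)                  # the "darker" clip
one_stop_up = hlg_oetf(linear * 2)       # the same scene exposed one stop brighter

# The code-value shift is different at every level, so no single
# offset/gain move in the encoded domain matches it exactly.
print("code-value shift per sample:", np.round(one_stop_up - base, 4))

# Matching the darker clip the way the post describes: decode to linear,
# double the light, re-encode. Mathematically this lands exactly on target.
matched = hlg_oetf(hlg_oetf_inverse(base) * 2)
print("max mismatch after round trip:", float(np.max(np.abs(matched - one_stop_up))))
```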
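On post 6's point that camera log-style curves roughly follow, but deviate from, a pure mathematical log curve: a small sketch comparing the BT.2100 HLG OETF against a plain normalised log2 curve over an assumed 8-stop scene range. The stop range and sample count are arbitrary choices for illustration.

```python
import numpy as np

# BT.2100 HLG OETF (same published constants as the sketch above)
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e):
    e = np.asarray(e, dtype=float)
    log_part = A * np.log(np.maximum(12 * e - B, 1e-12)) + C
    return np.where(e <= 1 / 12, np.sqrt(3 * e), log_part)

stops_below_white = 8                                   # assumed range to examine
linear = np.logspace(-stops_below_white, 0, 9, base=2)  # one sample per stop

hlg = hlg_oetf(linear)
# A "pure" log curve: log2 of the linear value, normalised so the chosen
# range maps to 0..1.
pure_log = (np.log2(linear) + stops_below_white) / stops_below_white

for lin, h, p in zip(linear, hlg, pure_log):
    print(f"linear {lin:7.4f}  HLG {h:5.3f}  pure log {p:5.3f}  diff {h - p:+.3f}")
# The two track each other through the mid-tones and highlights but diverge
# near black, where HLG switches to a square-root segment rather than running
# off towards -infinity as a true log curve would.
```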
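A minimal sketch of the analysis step in post 7, checking that the recorded card-test clip really does change completely from frame to frame, so the encoder never gets an easy, low-bitrate stretch. It assumes OpenCV is installed; the file name is a placeholder.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("card_test_recording.mp4")  # placeholder file name
ok, prev = cap.read()
diffs = []
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # Mean absolute difference between consecutive frames, on a 0-255 scale.
    diffs.append(float(np.mean(cv2.absdiff(frame, prev))))
    prev = frame
cap.release()

if diffs:
    print(f"frames compared: {len(diffs)}")
    print(f"average frame-to-frame difference: {np.mean(diffs):.1f} / 255")
    print(f"smallest difference seen: {np.min(diffs):.1f} / 255")
# If even the smallest difference is large, no frame resembles the one before
# it, and the bitrate the camera sustains here reflects a worst case.
```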
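And for the softening trick in post 22, a minimal sketch, assuming OpenCV: blend a blurred copy of the sharper shot back over itself at low opacity. The blur radius, opacity, and file names are placeholder starting points to tweak per shot; in practice you'd do the same thing with a blur layer and an opacity control in your grading or editing app.

```python
import cv2

def soften(image, blur_sigma=3.0, opacity=0.25):
    # Gaussian-blur a copy, then mix: (1 - opacity) * original + opacity * blurred.
    blurred = cv2.GaussianBlur(image, (0, 0), blur_sigma)
    return cv2.addWeighted(image, 1.0 - opacity, blurred, opacity, 0)

sharp = cv2.imread("sharp_lens_frame.png")             # placeholder file name
matched = soften(sharp, blur_sigma=3.0, opacity=0.25)
cv2.imwrite("sharp_lens_frame_softened.png", matched)
```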