
kye

Members
  • Posts

    8,102
Reputation Activity

  1. Haha
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Formatting is being super weird right now, ignore this post
  2. Haha
    kye reacted to Snowfun in Panasonic G9 Mark II. I was wrong   
    Would it be worth quoting Genesis 1:3 as this would solve the high ISO need…
    (with apologies)
  3. Like
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Definitely. It’s super versatile. 
  4. Like
    kye got a reaction from Thpriest in Panasonic G9 Mark II. I was wrong   
    Digital zoom is definitely an underrated feature of these higher resolution cameras.
    On my GH5 I used the 2x punch-in on my 17.5mm F0.95 to get 35mm and 70mm FOVs, and on my GX85 I used it with the 14mm F2.5 pancake lens to get 31mm and 62mm FOVs in a pocketable form-factor.
    The crop function on the GH7 is different and a bit more restrictive.  You get continuous zooming, but only to the point where the resolution you've chosen is at/near a 1:1 crop into the sensor.  So, if you've got the 14mm lens on there and you're shooting in C4K, you enable the feature and it pops up a box on the screen saying "14mm", and you can zoom in more and more by pushing or holding a button as it goes 14 - 15 - 16 - 17mm, but it won't let you go further.  If you're in 1080p mode then it goes from 14mm to 38mm.
    Conveniently, if you disable the mode then it goes back to 14mm but if you re-enable it then it goes back to whatever zoom you were at previously, so it's easy to set a zoom level you like and then jump in and out of that FOV.
    My testing didn't indicate any IQ issues with it, in 24p mode anyway, so I think it's probably downscaling from a full sensor read-out.
    Not only is it really good for getting more FOVs from primes, it's also great for extending the long end of your zooms.
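    A rough sanity check on where that limit sits (a minimal sketch in Python; the ~5.8K readout width is an assumption based on published GH7 specs, and the camera clearly reserves some margin, since it stops at 17mm rather than the theoretical ceiling):
        # Sketch: estimate the longest crop-zoom focal length before the
        # recording resolution would exceed a 1:1 pixel crop of the sensor.
        SENSOR_WIDTH_PX = 5776  # ASSUMPTION: ~5.8K-wide readout

        def max_crop_zoom(base_focal_mm, output_width_px):
            crop = SENSOR_WIDTH_PX / output_width_px
            return base_focal_mm * crop

        print(max_crop_zoom(14, 4096))  # C4K:   ~19.7mm theoretical ceiling
        print(max_crop_zoom(14, 1920))  # 1080p: ~42.1mm theoretical ceiling
        # The camera reports 17mm and 38mm, i.e. it stops a little short of
        # 1:1 - consistent with the "at/near 1:1" behaviour described above.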
  5. Like
    kye got a reaction from FHDcrew in Panasonic G9 Mark II. I was wrong   
  6. Like
    kye reacted to Alt Shoo in Panasonic G9 Mark II. I was wrong   
    I think a lot of people wrote Micro Four Thirds off before really paying attention to what changed.  Once you start working with the newer Panasonic bodies as a system, not just a sensor, the color, IBIS, and real-time LUT workflow start to make a lot of sense, especially for documentary and run-and-gun work.
    I've been baking looks in camera more and more instead of relying on heavy grading later, and it's honestly sped everything up.  I just put out a short video showing how I'm using that approach on the GH7 if anyone's interested: 
    My YouTube is more of a testing and experimenting space rather than where I post my “serious” work, but a lot of these ideas end up feeding directly into professional gigs.
     
     
  7. Like
    kye got a reaction from Jahleh in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
    My advice is to forget about "accuracy".  I've been down the rabbit-hole of calibration and discovered it's actually a mine-field, not a rabbit-hole, and there's a reason that there are professionals who do this full-time - the tools are structured in a way that deliberately prevents people from being able to do it themselves.
    But, even more importantly, it doesn't matter.  You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable.  Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client, and then only accept notes when viewed on the device the colourist shipped them.
    It's not even that the devices out there aren't calibrated, or that manufacturers now ship things with motion smoothing and other hijinks on by default - it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't tagged and interpreted correctly.
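    You can see the tagging problem for yourself (a minimal sketch, assuming you have FFmpeg's ffprobe installed; "clip.mp4" is a hypothetical file name):
        # Sketch: inspect what colour metadata a video file actually carries.
        # Fields reported as "unknown" leave downstream players guessing.
        import subprocess

        def colour_tags(path):
            out = subprocess.run(
                ["ffprobe", "-v", "error", "-select_streams", "v:0",
                 "-show_entries",
                 "stream=color_space,color_transfer,color_primaries",
                 "-of", "default=noprint_wrappers=1", path],
                capture_output=True, text=True, check=True)
            return out.stdout

        print(colour_tags("clip.mp4"))  # hypothetical file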
    Here's an experiment for you.
    Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile.  Use the default 709 colour profile without any modifications.
    Then in post, take the LOG shots and try to match each one to its respective 709 image manually, using only normal grading tools (not plugins or LUTs).
    Then try to grade each of the LOG shots to just look nice, using only normal tools.
    If your high-DR scene involves actually having the sun in-frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, CST into other camera spaces using their manufacturers' LUTs, etc.
    Gotcha.  I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and move them further away.  The inverse-square law is what's giving you the DR issues.
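    To put numbers on the inverse-square point (a minimal sketch; the distances are invented for illustration):
        # Sketch: exposure difference, in stops, between two subjects lit by
        # the same point source at different distances (inverse-square law).
        import math

        def stops_difference(near_m, far_m):
            # Illuminance falls off as 1/d^2, so stops = log2((far/near)^2).
            return math.log2((far_m / near_m) ** 2)

        print(stops_difference(1, 2))  # 2.0 stops - subject twice as far away
        print(stops_difference(1, 4))  # 4.0 stops - most of a 709 master's range
        print(stops_difference(5, 8))  # ~1.36 stops - same gap, source pulled back
        # Brighter sources placed further away flatten the falloff across the
        # scene, which is the "turn it up and move it back" advice above.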
    That's like comparing two cars, but one is stuck in first gear.  Compare N-RAW with ProRes RAW (or at least ProRes HQ) on the GH7.
    I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar so your grading techniques will be applicable to both and be less of a variable in the equation.
    People interested in technology are not interested in human perception.  
    Almost everyone interested in "accuracy" will either avoid such a book out of principle, or will die of shock while reading it.  The impression that I was left with after I read it was that it's amazing that we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc) is so far away from how we see that asking "how many megapixels is the human eye" is sort-of like asking "What does loud purple smell like?".
    Did you get to the chapter about HDR?  I thought it was more towards the end, but could be wrong.
    Yes - the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car.
    This is all for completely predictable and explainable reasons, which are all in the colour book.
    I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.
    I don't know how much you know about the Apple Gamma Shift issue (you spoke about it before, but I don't know if you've dug into it deeply), but I watched a great ~1hr walk-through of the issue, and in the end the conclusion is that because the device doesn't know enough about the viewing conditions under which the video is being watched, the idea of displaying an image with any degree of fidelity is impossible - the gamma shift issue is a product of that problem.
    Happy to dig up that video if you're curious.  Every other video I've seen on the subject covered less than half of the information involved.
  8. Like
    kye got a reaction from j_one in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
  9. Like
    kye got a reaction from Video Hummus in Panasonic G9 Mark II. I was wrong   
    Welcome back!

    Can you tell me your name?  Where are we?  What year is it?
    Good, good...
    You've been in a DOF-induced coma for the last 7 years.  We'll contact your family and let them know you've woken up - they'll be very happy to see you!
  10. Like
    kye got a reaction from Jahleh in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
    I'm seeing a lot of connected things here.
    To put it bluntly, if your HDR grades are better than your SDR grades, that's just a limitation in your grading skill.  I say this as someone who took an embarrassing amount of time to learn to colour grade myself, and even now I still feel like I'm not getting the results I'd like.
    But this just reinforces my original point - that one of the hardest challenges of colour grading is squeezing the camera's DR into the display space's DR.  The less squeezing required, the less flexibility you have in grading, but the easier it is to get something that looks good.  The average quality of colour grading dropped significantly when people went from shooting 709 and publishing 709 to shooting LOG and publishing 709.
    Shooting with headlamps in situations where there is essentially no ambient light is definitely tough though, and you're definitely pushing the limits of what the current cameras can do, and it's definitely more than they were designed for!
    Perhaps a practical step might be to mount a small light to the hot-shoe of the camera, just to fill-in the shadows a bit.  Obviously it wouldn't be perfect, and would have the same proximity issues where things that are too close to the light are too bright and things too far away are too dark, but as the light is aligned with the direction the camera is pointing it will probably be a net benefit (and also not disturb whatever you're doing too much).
    In terms of noticing the difference between SDR and HDR, sure, it'll definitely be noticeable, I'd just question if it's desirable.  I've heard a number of professionals speak about it and it's a surprisingly complicated topic.  Like a lot of things, the depth of knowledge and discussion online is embarrassingly shallow, and more reminiscent of toddlers eating crayons than educated people discussing the pros and cons of the subject.  
    If you're curious, the best free resource I'd recommend is "The Colour Book" from FilmLight.  It's a free PDF download (no registration required) from here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php
    In case you're unaware, FilmLight are the makers of Baselight, which is the alternative to Resolve, except it costs as much as a house.  
    The problem with the book is that when you download it, the first thing you'll notice is that it's 12 chapters and 300 pages.  Here's the uncomfortable truth though: to actually understand what is going on, you need a solid understanding of the human visual system (our eyes, our brains, what we can see, what we can't see, how our vision responds to the various situations we encounter, etc).  This explanation legitimately requires hundreds of pages because it's an enormously complex system - much more so than any reasonable person would ever guess.
    This is the reason that most discussions of HDR vs SDR are so comically rudimentary in comparison.  If camera forums had the same level of knowledge about cameras that they have about the human visual system, half the forum would be discussing how to navigate a menu, and the most fervent arguments would be about topics like whether cameras need lenses at all.
  11. Like
    kye reacted to Jahleh in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
    I also shoot mostly with available light, and after the sun has set, in the light of dim headlamps.  So being able to push and pull shadows and highlights is extremely important.  In that regard the GH7 is no slouch, but it is not quite the same as the Z6III, the ZR, or even the S5II.
    If you have a good HDR-capable display (and I don't mean your tiny phone, laptop or medium-sized displays, but a 65" or bigger OLED with infinite contrast, or a JVC projector with good contrast and inky blacks), you would need a wooden eye not to notice the difference between SDR and HDR masters. 
    At least with my grading skills, the 6 stops of DR in SDR always look worse than what I can get from HDR.
  12. Like
    kye reacted to eatstoomuchjam in "Left-handed Girl" anyone?   
    Probably.  I just found it really overbearing.
    I personally don't bother with diffusion filters at all.  The short, light-on-detail reason is that I'll just use a vintage lens if I want a vintage look.
    And yes, your observations align with mine about using diffusion filters.  On low-budget sets they also add headaches on controlled shots, as the DP ends up complaining that the lights are interacting with their diffusion filter in a bad way, causing time lost to coddling the darn thing.
  13. Like
    kye got a reaction from Jahleh in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
    I shoot in uncontrolled conditions, using only available light, and shoot what is happening with no directing and no do-overs.  This means I'm frequently pointing the camera in the wrong direction, shooting people backlit against the sunset, or shooting urban stuff in midday-sun with deep shadows in the shade in the same frame as direct sun hitting pure-white objects.
    This was a regular headache on the GH5 with its 9.7/10.8 stops.  The OG BMPCC with 11.2/12.5 stops was MUCH better but still not perfect, and while I haven't used my GH7 in every possible scenario, so far its 11.9/13.2 stops are more than enough.
    The only reason you need DR is if you want to heavily manipulate the shot in post by pulling the highlights down for some reason, or lifting the shadows up for some reason.
    Beyond the DR of the GH7 I can't think of many uses other than bragging rights.  When the Alexa 35 came out and DPs were talking about its extended DR, it was only in very specific situations that it really mattered.  
    Rec709 only has about 6 stops of DR, so unless you're mastering for HDR (and if you are, umm - why?), adding more DR into the scene only gives you more headaches in post when you have to compress and throw away the majority of the DR in the image.
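    As a toy illustration of that squeeze (a sketch only, not any camera's actual transform; the display figure is the ~6 stops quoted above, and the 4 protected mid-tone stops are an assumption):
        # Sketch: if you keep ~4 stops around middle grey untouched, how hard
        # do the remaining scene stops have to be compressed to fit a ~6-stop
        # 709 display?  Purely illustrative numbers.
        def squeeze_ratio(camera_stops, display_stops=6.0, protected_mids=4.0):
            spare_display = display_stops - protected_mids
            spare_scene = camera_stops - protected_mids
            return spare_scene / spare_display

        print(squeeze_ratio(13.2))  # GH7-class log: ~4.6x on shadows/highlights
        print(squeeze_ratio(10.8))  # GH5-class:     ~3.4x
        # More captured DR means more aggressive compression somewhere in the
        # grade - the "headache in post" referred to above.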
  14. Like
    kye got a reaction from eatstoomuchjam in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
  15. Like
    kye got a reaction from Emanuel in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
  16. Like
    kye reacted to maxJ4380 in Adapters are BACK.. and better than ever!   
    3 reasons I suspect:
    1) There's a dollar in it.
    2) Pretty much every influencer has been selling these and other products - and their souls as well.
    3) Taking a slightly less cynical view: digital cameras being "digital", these lenses may take that edge off, and they give you a different field of view as well as some unique character you may or may not like.  But you know all of this already.
    Personally, I'm pretty happy with my Sirui 24mm on MFT mount.  It's taken a while to get used to it and I think I'm about 80% there.  I do like the flares - I don't think they're overdone.  The bokeh is kinda meh, but it's not a 2x squeeze so that's to be expected, and I could insert an oval shape if it bothered me more, though there are trade-offs there as well.  Mostly I just like the convenience, I think.  Attach lens to camera and shoot.  A single-focus anamorphic front adapter sounds simple but would add complexity, I think - something I'm not keen on at the moment. 
    I could be wrong, but wasn't 1.33x tailored towards MFT, and the bigger squeezes to S35 and full frame sensors?  That's the impression I have about the different sensor sizes and squeeze ratios.
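    The aspect-ratio arithmetic is easy to sanity-check (a minimal sketch; the sensor aspect ratios are nominal, and which squeeze "suits" which sensor really depends on the readout mode):
        # Sketch: desqueezed aspect ratio = sensor aspect ratio * squeeze.
        def desqueezed(sensor_aspect, squeeze):
            return sensor_aspect * squeeze

        print(desqueezed(16 / 9, 1.33))  # 16:9 readout + 1.33x -> ~2.36:1 (scope-ish)
        print(desqueezed(4 / 3, 1.33))   # MFT 4:3 open gate + 1.33x -> ~1.77 (16:9)
        print(desqueezed(4 / 3, 2.0))    # 4:3 region + 2x -> ~2.67:1
        print(desqueezed(3 / 2, 1.33))   # FF 3:2 + 1.33x -> ~2.0:1
        # A 1.33x front lands near "scope" on a 16:9 readout, while 2x squeezes
        # want a 4:3 (or 6:5) region to end up in the same neighbourhood.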
     
  17. Like
    kye got a reaction from Emanuel in RAW Video on a Smartphone   
    This is the first smartphone video I've seen that didn't shout (or whisper) that it was shot on a smartphone.
    I do get flavours of it being shot on something small and mirrorless because of the movement of the camera (if it was a heavy rig it would have moved differently).
    I'm pretty stoked actually, and can't wait to get a vND solution for my iPhone 17 Pro.  I suspect that I might end up shooting a lot of street stuff on it just because the form-factor is so small and people are far less curious/suspicious of smartphones than real cameras.
  18. Like
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Thank you.  I've found it's very possible.  I hear claims online, and even in person, of people saying "you just can't replace a gimbal."  I mostly disagree.  If you have a great shooting technique down, are smart about maximizing your camera's best aids to stabilize the shot, and are then smart about your post-stabilization if needed, it's absolutely possible.  I was able to get a lot done with my less-than-ideal Nikon Z6 setup.  IBIS on that camera is pretty lens-dependent with F-mount glass.  It was never as good as the G9II or S9 with e-stab high.  But it was still possible. 
     
    You have caught on fast to my preferred way to shoot and WHY I like to shoot that way.  I've used gimbals before and have worked with others who use gimbals a lot.  To me it just feels like a heavy nuisance that gets in my way.  I feel I can get shots faster handheld - I can quickly get vastly different angles, movements, compositions, focal lengths, etc.  I still think the gimbal gives the ultimate smoothness, particularly at long focal lengths, where it is easier to pull off.  But you can get handheld very close, and gain a massive weight and agility advantage. 
     
    Sometimes I watch a popular camera-review YouTuber, or shill-tuber, and take a minute to actually analyze the shots they are getting.  Some are talented.  With others, I look at the actual footage and think: this is fine, I could get that easily.  Sometimes I like the stuff I shoot more than the stuff they get from a camera.  Sometimes I even feel I've gotten better shots with my 8-bit Z6 than some of their stuff on the latest Sony FX2 or whatever.  That's not always the case, but it has happened.  These days I care about cameras that make my life easier and speed up my workflow while giving me a shooting experience that helps me be more creative.  For me, the G9II certainly does that. 
  19. Like
    kye got a reaction from MrSMW in Panasonic G9 Mark II. I was wrong   
    Seems similar to my priorities.  Since getting the GH7 I haven't been jealous of FF except for my foray into 'night street cinema' where having crazy shallow DOF would be awesome, but it's not worth swapping over and thanks to adapters there are options for me.
    Yeah, everyone would be better off if they shot more.  When you do it becomes obvious that our wants/needs are highly situational and you become more understanding when someone describes their wants/needs as being different to your own.
    Definitely agree with @MrSMW about just assessing the cameras on their current capabilities.  TBH, the GH7 is more capable right now than most flagship cameras, when it comes to things that actually matter.
    There's a funny thing that happens when you write down what is important to you, then assess cameras against that.  Things like 8K60 internal RAW somehow magically don't make the camera better at many/any of your actual criteria, but having 4K Prores HQ internally might.  The list above from @FHDcrew is a good example of that.
    Then if you write down a list of everything you own that wouldn't work with the alternative camera, and think about selling it all and then re-buying it again, you realise that getting a different camera with half-a-stop more DR isn't worth the thousands of dollars and weeks of work you'd lose: selling and re-buying all your lenses, batteries and power management, cage and accessories; trips to the post office or courier; waiting for things to be shipped; assembling and troubleshooting everything when it arrives; learning new firmware and menus; doing a battery of tests to understand the sensor, colour science and codecs and how to treat them in post; then getting familiar with the rig to the point you can think about what you're pointing it at instead of which settings are assigned to which button.
    In terms of the G9ii / GH7 vs the S5 series, I wouldn't count on the MFT options being that much lighter, and it will probably come down to the lenses rather than bodies, which are pretty chunky.
  20. Like
    kye got a reaction from FHDcrew in Panasonic G9 Mark II. I was wrong   
    @FHDcrew Cool clips.  Doing smooth moves that are meant to be like slider / crane work while walking and using a small setup with just camera/lens/filters and no big/heavy rig etc is definitely at the edge of stabilisation tech, and your results look pretty good.
    The way I think about your tradeoffs is:
    • When you get the 12-35 2.8 you'll get Dual IS, where the IBIS links with the OIS and you get an additional level of stabilisation, so that will up your results by another notch, or make it a lot less fiddly in post to get the same results you're getting now.
    • The 12-35 will definitely have a deeper DOF wide-open than the 18-35 setup wide-open, but none of the shots in those videos really struck me as being too shallow for the 12-35.  They might be, but while you might lose some DOF you're gaining stabilisation, so you might be able to get the same amount of stabilisation from a longer focal length on the 12-35 than you did on the 18-35, which pushes towards a shallower DOF because of the constant aperture.  Lots of things are competing with each other here, and 'all-else-being-equal' comparisons are misleading.
    • Another trade-off people don't seem to talk about: shallower DOF might make individual images look a bit nicer, but if your deeper-DOF setup allows you to shoot faster then you'll get more shots in the same situation, which means more shots to draw from in the edit, and the best ones that get used will be better simply because there were more of them.  So it's your edit having shots with shallower DOF vs your edit having shots with deeper DOF that are better composed, include nicer moments from people, etc.
    • Continuing on from the previous point, if you shoot more shots then not only will each individual shot in the edit potentially be nicer, you'll also have a lot more options in the edit.  This is a huge dynamic in what I do (shooting travel) because a greater variety of shots means I can pull things off in the edit that I couldn't otherwise do.  If you imagine having to make a 1-minute edit with only 1 minute of footage, you have almost no options whatsoever.  There are diminishing returns the more footage you have, but sometimes you'll want to use one moment, and to do so you need another shot that fits a very specific purpose - if you don't have it, you can't use the good moment (or it will be clunky).  In this sense a really dull or even badly-shot clip can make your edit better, by letting you use a great shot, have a more interesting structure, or line things up with the music for a nicer ebb-and-flow.
    • Elaborating still: the more stable your shots are in-camera, the less time you'll spend tinkering with them in post, and the more time you can devote to finding a better audio track, doing more sound design, doing a couple more versions and smoothing over the rough edges, or pulling off a colour grading look that needs a bit of time to work out.
    People think MFT = deep DOF = less cinematic, when in realistic terms it can also be MFT = better stabilisation = shoot faster = more shots and more variety of shots = less work in the edit = more options in the edit = a better edit overall.  If camera-bro YT has taught us one thing, it's that shallow DOF doesn't make an edit great.  Some of the best edits I've ever seen were made exclusively of non-spectacular shots.
    It's a bit of a blind spot, because less-skilled people online don't yet understand how things in pre impact production, or how things in pre and production impact post, and the professionals who work on film sets, where people work in departments, often don't understand the downstream implications because they just follow the guidelines of their department without understanding them.
  21. Like
    kye reacted to FHDcrew in RAW Video on a Smartphone   
    Looks so good.  Love how far the iPhone has come.  Outside of low light, I could probably shoot all of my client work on the 17 Pro if I needed to... it looks very good in talented hands.  Deep DOF is a vibe these days.  All the elements we need are there.  Good color, good codec, good DR... boom.
  22. Like
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Definitely; that's where the 12-35 2.8 and DJI 15mm 1.7 will show their strengths for me.  The lenses haven't arrived yet, but the camera came with the 12-60mm 3.5-5.6 kit lens, so it's definitely in the ballpark for size and weight.  The thing is so darn light; it actually feels extremely comfortable and balanced attached to the larger body style.  I could be fine with a Sigma 18-35; the funny thing is even that feels somewhat light to me, as I am used to having to carry around my Atomos Ninja V for anything 10-bit.
     
    Yes, yes, yes.  The only reason I now have a great idea of what matters to me is that at this point I've shot a lot of projects, and have done so on multiple different camera systems, so along the way I've learned what matters to me.  I'm not going to notice a 1/2-stop DR difference, and some grainy footage (as long as the grain isn't ugly) doesn't bother me terribly - sometimes I like it.  But I do value great stabilization, as the way I shoot I end up spending a good chunk of my time finessing post-stabilization to achieve the type of camera movement I want while keeping things completely handheld.  I have a solid handheld technique down, but on most cameras it is still not perfect.  I always hold the camera loosely, usually shoot wide, and do a careful heel-toe walk or body sway.  But I always need to post-stabilize.  I try the stabilization options in DaVinci Resolve first; if that doesn't work, I render the clip out to ProRes and import it into After Effects just to apply Warp Stabilizer, as it is a bit better in my experience.  Once I get the result I like, I export.  The beauty of the G9II is that when you combine the fantastic IBIS with e-stabilization high, I get quite close to the result of all that post-stabilization process... but straight out of the camera.  It saves me a lot of time.  And I can even add a drop of Warp Stabilizer on top to make it perfectly smooth.
     
    Another big advantage for me: it's effectively doing what Gyroflow does, but all internally and paired with the best IBIS ever.  I've tried Gyroflow - I've used it on some FX3 footage I shot for my buddy's wedding film company, I've rigged an iPhone to my Nikon Z6, and I've used the Senseflow A1.  It's a nice solution, and I figured I'd love it; it's the same concept as normal post-stabilization (which I use ALL THE TIME), except it uses true camera motion data, so the results should be perfect, right?  Well, yes, but you need to shoot at a high shutter speed.  And I found that even on the Sony FX3, where Gyroflow can work with IBIS on, the crop was often still fairly large.  And the workflow is lengthy.  With the G9II, I have a minimal 1.255x crop with e-stab on high, and because it's working fully in tandem with the phenomenal physical IBIS system, it's very stable AND I can zoom the lens mid-shot and it works fine (you can't do this with Gyroflow on most setups).  AND I can keep my shutter at 1/50, because the physical IBIS system is doing 80% of the work here.
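    For anyone weighing that crop, the field-of-view cost is simple arithmetic (a minimal sketch; the 1.255x figure is the one quoted above, and the focal lengths are just examples):
        # Sketch: effective focal length after an electronic-stabilisation crop.
        def effective_focal(focal_mm, estab_crop=1.255):
            # The crop makes the lens frame like a longer focal length.
            return focal_mm * estab_crop

        for f in (12, 15, 35):
            print(f, "mm frames like", round(effective_focal(f), 1), "mm")
        # 12mm -> ~15.1mm, 15mm -> ~18.8mm, 35mm -> ~43.9mm (MFT focal lengths),
        # versus the larger crops the post reports from Gyroflow workflows.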
     
    But yeah.  The moral of the story is that shooting lots and lots of stuff has made me realize what matters most to me, and the G9II seems to really hit that.  Again, I used the Nikon Z6 for 4 years.  I've also filmed weddings on a Sony FX3 with nice Sony G Master glass.  I filmed very extensively for one organization with a Canon R5 and EF glass.  This past summer I bought a Canon R7, then a Panasonic S9, then sold both.  So I've tried enough cameras, and shot enough, to know what works well for me.  I'd fully agree that a lot of what camera YouTubers claim are the big differences are not always as important as they seem; for me, the wonderful IBIS of the G9II and the minor crop in e-stabilization are way more useful than a full stop of DR improvement when you already had great DR in the first place.  Etc, etc.
    This is a concert I filmed and edited this summer on the Panasonic S9.  I haven't had a chance to film anything substantial on my G9II... but this is close.  It's a super weird setup I sort of wound up with over the summer: the Lumix S9 with the Sigma 18-35, in the Super 35 crop mode WITH e-stab on high - so basically at a 2x, MFT-level crop at that point.  But I still found the image to be very nice.  More importantly, with some careful walking, I got the images to be this stable, and a lot of these shots have NO post-stab applied.  Colors were very rich.  The G9II is even better, because again the crop is lessened in e-stab high and the physical IBIS is better.  And the build quality smokes the S9; that was something I did not appreciate about that camera.
     
    A short clip from a concert I filmed, with the aforementioned Lumix S9 setup.  Again, no post-stabilization.  It is just so smooth.  Makes all the difference with how I like to film.
     
    More handheld with the Lumix S9 setup.  This has a bit fewer "gimbal-push-in" shots and a bit more regular handheld shots.  With e-stabilization high, it strikes a perfect balance, where you can walk and move the camera such that it looks like a steadicam, or you can just handhold it for regular stuff and it looks as stable as a weighted-down cine cam.
     
    This wedding trailer was with my old Nikon Z6 setup: a combo of DaVinci post-stab and Warp Stabilizer.  Outside, I cranked my shutter speed very high to help, and I used RSMB to add motion blur in post.  While this worked, I had to spend extra time stabilizing in post and tweaking things when it was not perfect.  That's all but eliminated now with the G9II.  Also, half of this video was shot on a Nikon F-mount 24-85mm 3.5-4.5, entirely at f/4.5.  I reckon that looks pretty close to what the Panasonic 12-35 2.8 will look like; @kye let me know if I am wrong, since you've used that very lens I think?  But anyway, it's enough DOF for me.  That being said, if you like more, totally get it.  Nothing wrong with that.  End ramble, haha.  
  23. Like
    kye got a reaction from FHDcrew in RAW Video on a Smartphone   
  24. Like
    kye got a reaction from Ty Harper in Adapters are BACK.. and better than ever!   
    Nice.
    In a sense, the fact that lots of modes on FF cameras have an APS-C crop is a bit of a blessing in disguise.  Not only do you get a RAW file without insane resolution / bitrate, but it also means that there are speed boosters for FF mounts.
    I think @Andrew - EOSHD has investigated speedboosting Medium Format lenses onto FF sensors, but my impression was that it's probably a difficult architecture to find combinations of equipment that won't vignette heavily or perform poorly at fast apertures.  
    This is where the front anamorphic adapters can be useful, as I'd imagine there would be far more usable configurations from fitting a front anamorphic adapter to a FF camera + FF lens combo.  The front adapters don't care about your camera mount / flange distances / lens mount / lens rear protrusion / etc, so in a way they're more like PL glass which you'd be able to keep and use regardless of what cameras you got in the future.
  25. Like
    kye reacted to Ty Harper in Adapters are BACK.. and better than ever!   
    Right now I'm mostly using Canon's 0.71x speed booster on my R50V - but before that I was using it with my R5C for years as a great way to access the cam's 2-5K RAW features (which are only available in S35 mode) while basically making it FF.  It's just one of the many reasons the R5C remains one of the most versatile releases in Canon history. 
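    The rough arithmetic behind "basically making it FF" (a minimal sketch; the exact crop factor of the R5C's S35 mode is an assumption - a nominal 1.6x APS-C and a cine-style 1.46x are both shown):
        # Sketch: net crop factor of a sensor crop mode plus a focal reducer.
        def net_crop(sensor_crop, booster=0.71):
            return sensor_crop * booster

        print(net_crop(1.6))   # ~1.14x - close to full-frame FOV (and ~1 stop faster)
        print(net_crop(1.46))  # ~1.04x - if the cine-style S35 crop applies
        # Either way the combination lands near 1.0x, i.e. near-FF behaviour,
        # while keeping the crop-only RAW modes available.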