kye

Reputation Activity

  1. Like
    kye reacted to Django in 2026 Camera Pick (C50/R6 mk3, FX3/FX2, ZR)   
    @kye
    Thanks for the thoughtful take, two solid points.
    On the first one: I don’t really have emotional attachment to camera bodies anymore. They’re just tools that either help me get the shot or get in the way. Lenses are the emotional part for me (the rendering, the character, the way they feel when I look through them), but the body is basically a computer with a mount and some buttons.
    That said, ergonomics and UI matter hugely. If I’m constantly fighting menus, fumbling controls under pressure, or the grip feels wrong after 20 minutes, my mood tanks and it bleeds into the set. I’ve shot with cameras that technically should be fine but never clicked with my hands or brain. The day always feels harder and the results flatter. So if the C50’s cine OS with shutter angle, proper exposure tools and XLR top handle let me stay in flow instead of menu-diving or second-guessing, that’s worth a lot more than specs on paper.
    Reliability is primal too. A body that fails on set (AF hunting in low light, overheating mid-interview, battery dying unexpectedly, corrupted file, flicker issues, or weird grading artifacts) is a disaster, especially solo. I’ve had shoots go sideways because of exactly that. So even if a camera is technically capable, if it can’t be trusted in the field for hours, it’s not a tool, it’s a liability.
    On stabilization: I’m with you. I’m not chasing perfectly locked-down gimbal shots or overcooked EIS. I actually like natural camera movement, it feels alive and human. The stuff that kills the vibe for me is the micro-jitters and tiny breathing shakes on small-body cameras. Those little floating tremors look nervous and amateurish. Big intentional camera motion (shoulder rig sway, handheld energy) can be beautiful and add to the scene, but those small unintentional artifacts from inadequate stabilization are just distracting.
    That’s why Gyroflow plus shooting with EIS off (or Standard only when needed) feels like the sweet spot. I get to keep the organic handheld character I like, but I can surgically remove the annoying micro-shake in post without turning everything into a locked-down special effect. If a shot is so dynamic that even that isn’t enough, I’ll reach for a gimbal or shoulder rig anyway. But for 80 to 90 percent of the lifestyle, interview and observational stuff I’m shooting, I’ll be on sticks with handheld B-roll. 
    Appreciate the nudge. It’s always good to be reminded that mood, flow and reliability matter more than specs.
  2. Like
    kye got a reaction from ArashM and Django in 2026 Camera Pick (C50/R6 mk3, FX3/FX2, ZR)
    Two thoughts from me.
    If you close your eyes and imagine each scenario, how do each of them make you feel?
    What is never really talked about is that if you feel like you're having to argue with or strong-arm your equipment, you'll be in a bad mood, which isn't conducive to a happy set, getting good creative output, or just enjoying your life.  I think people dismiss this, but if you're directing talent it can really matter - people can tell if you're in a good mood or distracted or frustrated, and people tend to take things personally, so your frustrations with the rig can easily be interpreted by others as you not being happy with their efforts.
    The odd little technical niggle in the image here or there won't make nearly as much difference as enjoying what you do vs not.
    When it comes to IBIS vs Gyroflow vs EIS etc, it's worth questioning whether more stabilisation is actually better.  For the "very dynamic handheld shots", having a bit more camera motion might even be a good thing if it's the right kind of motion.  Big-budget productions have chosen to run with large shoulder-mounted camera rigs because the camera shake was pleasing and added to the energy of the scene.  Small amounts of camera shake can be aesthetically awful if they're the artefacts of inadequate OIS + IBIS + EIS stabilisation, whereas much larger amounts of camera shake can be aesthetically benign if they come from a heavier rig without IBIS or OIS.
    If more stabilisation is better, maybe it would be better overall to have a physical solution that can be used for those shots?
    Even if there aren't good options for those things, maybe the results would be better if those shots were just avoided somehow?  In today's age of social media and shorts etc, having large camera moves that are completely stable is basically a special effect, and maybe there are other special effects that can be done in post that are just as effective but much easier to shoot?
  3. Like
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Sadly sold the G9II. I realized I need good high ISO performance, and it seems PDAF is disabled at really high ISOs. I scored a Canon R6 for $929 and, other than the overheating, it's great. And I can live with the overheating for how I shoot. IBIS can genuinely compete with LUMIX by having DIS on Standard and using Adobe's Warp Stabilizer - somehow Warp Stabilizer absolutely thrives at stabilizing the kind of leftover shakiness and artifacts from Canon's IBIS + DIS combo. Very consistently stable without warpy artifacts. Great high ISO performance. Lovely image in CLOG3.
     
    I did like the G9II. Its image was great. Best IBIS I've ever used. Very comfortable. I just realized I need better high ISO performance. Yes, I could have gotten a super-fast zoom like the Sigma 18-35mm f/1.8 with a speedbooster, or the Panasonic 10-25mm f/1.7. But sometimes I REALLY need to push things at weddings or concerts, shooting at ISOs like 25,600. That's beyond what the GH7/G9II can handle.
  4. Like
    kye reacted to Mattias Burling in Where did Mattias Burling go? Youtube channel is gone.   
    Hello, I hope everyone is well!
    Even though I’m not really active on camera forums anymore, I frequently read the EOSHD blog and every now and then the forum, so I saw the thread and thought I would respond.
    It wasn't "poof gone" - it was announced on the channel over a year ago and mentioned in the last three videos.
    Before going into why: I'm super flattered that this thread exists. I mean that.
    So here are some thoughts on the matter and why I took it down.
    Hobby vs Work
    YouTube was never my job, just a hobby. So were video making and photography, in the beginning.
    I started the channel while working as a producer, after a couple of years as a radio/TV reporter, to keep my practical skills fresh and to keep up with the development, which was huge at the time: the DSLR revolution, Blackmagic, cheaper editors etc.
    Fast forward a couple of years and I started making more videos at work again. At the same time I pretty much lost all interest in doing it as a hobby. And actually canceled the channel.
    Winston Churchill was definitely right in saying that work and hobbies should not be too similar. 
    But what I had discovered was a passion for still photography, which I had pretty much no experience with. So I started making videos again.
    That’s why my videos became very repetitive and short. I didn’t care about that part, I just wanted to display my stills work and get feedback, talk to the community, experiment with cameras and develop.
    After a few years I became a good enough photographer that my new employer noticed and just like that I was shooting stills professionally all the time. And I still do (I work in marketing and PR). It’s a huge bonus in my field and if you are good at it you will never be out of work.
    So photography also became less and less of a hobby.
    Instead I found other hobbies. They were things that, for example, got me out into nature, so photography tagged along for a while as a secondary activity. But eventually it faded. It was also nice to do things and not share them with people. I know I could probably have a very successful channel making videos about my current hobbies, and even make some money. But I never really wanted a channel for the sake of a channel. And I always had a full-time job.
    The fact is that at no point would I have been able to live off my channel, not even at the peak. Even with sponsors it was never more than a regular salary (in my field and country). But as long as it was a hobby and I was glad to do it, it was a welcome way to finance camera gear.
     
    Time
    At the same time as my channel started to feel less fun and other hobbies started taking my time, I started a family. So... you get the idea: full-time job + family + 2-3 hobbies = no YouTube.
    Upkeep 
    So why take it down, why not leave it up for the community? I did... at first.
    Like some of you pointed out, the YouTube crowd in the photography/video space is generally nice and positive. That is my experience as well.
    Early on I learned that a good way of keeping the trolls away was to be present. Respond and engage. Trolls are usually idiots or cowards, so they don’t like getting push back.
    But once I stopped making videos, views and comments obviously went down, and the trolls started coming back. Not so much after me - I don't care about that - but against the community. The people commenting started being nasty towards each other.
    I felt a responsibility to moderate, which was annoying. That’s when the thought about simply removing it started to grow.
    It wasn't an impulse. It was an internal debate that went on for months. And the issue grew much, much larger than a couple of trolls.
    I started thinking about five years ahead, ten years, thirty years...
    This post is already way too long so I won’t go into all of it. But I think you get the idea when I say:
    Privacy, or when the content no longer reflects the creator. Digital minimalism, control over one's narrative, inactive or outdated content. The risk of misuse of content due to me not keeping up with terms-of-service updates. Closure.
     
    So there is a looong ramble :)
     
    To keep in the spirit of the forum, I can share my current gear for pro work :)
    For the longest time I used the EOS-R for 75% of all my work and the R5 (rental) for the rest. It wasn't mine, but my employer told me to buy whatever I wanted. I paired it with a 28, 35 and 70-200. 70/30 stills/video.
     
    The R5 is peak camera imo.
     
    Today is a little different. I started working for a new company about a year ago and was again told to buy what I needed. I would have bought the R5 without hesitation if it wasn't for the Sigma 35-150/2-2.8... I just had to have it. So I ordered the Nikon Z6iii. It's not as good overall as the R5 for me and what I like in a tool camera, but it's 90% there. And coupled with that lens it becomes on par.
     
    //MB
  5. Haha
    kye got a reaction from John Matthews in Panasonic G9 Mark II. I was wrong   
    MFT has been dead for decades now - everyone who hasn't been living under a rock for the last 10 years knows this.
    What people don't know is that due to a quirk in quantum physics and the way that time works, MFT was actually dead before it was invented.
    This means that my GH7 and GX85 and OG BMPCC and BMMCC never existed, don't exist, and when MFT finally "dies" somehow will disappear from my house.
    I bet you even think the earth is round... some people are just too much!
  6. Like
    kye reacted to Andrew - EOSHD in Where did Mattias Burling go? Youtube channel is gone.   
    Really sad if he has indeed given up on YouTube.
    He's still on Instagram https://www.instagram.com/mattiasburling
    Maybe we should ask him if he's alright?
  7. Like
    kye got a reaction from John Matthews and FHDcrew in Panasonic G9 Mark II. I was wrong
    Nice!
    The other thing to consider when testing ISO and noise in the final image is the delivery part of the pipeline.  If I shot in two different modes and then processed them differently in my NLE, I might be able to tell the difference between them in my NLE.
    But no-one except you is watching your footage in your NLE, so you'll be exporting it, probably to h264 or h265, and you might not be able to tell the difference between them at this point.
    If you're going to be uploading them to a streaming service, then that service will decompress, process (NR, sharpening, who knows what else) and then brutally re-compress it.
    Lots of things are visible in the NLE and are completely gone or mangled beyond recognition in the final export or stream.
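    If you want to preview that last step yourself, here's a rough sketch of the idea (assuming you have ffmpeg installed - the CRF and audio settings are my guesses at "streaming-like" compression, not any service's actual recipe):

```python
# Hypothetical sketch: re-compress a graded master roughly the way a
# streaming service might, so you can A/B it against what you see in
# the NLE. Filenames are placeholders; requires ffmpeg on your PATH.
import subprocess

def simulate_stream(master: str, out: str, crf: int = 28) -> None:
    """Transcode to 8-bit 4:2:0 H.264 at a streaming-ish quality level."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", master,             # the NLE's master export
            "-c:v", "libx264",
            "-crf", str(crf),         # higher CRF = heavier compression
            "-preset", "fast",
            "-pix_fmt", "yuv420p",    # 8-bit 4:2:0, like most delivery
            "-c:a", "aac", "-b:a", "128k",
            out,
        ],
        check=True,
    )

simulate_stream("graded_master.mov", "streaming_preview.mp4")
```

    Compare the preview against the master on the same display and see whether your ISO/noise differences actually survive.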
  8. Like
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Definitely. I did a brief comparison of a VLOG frame and shooting the same in flat with ISO lowered, I used the same node tree I did with my Nikon z6 8 bit flat and was able to get color and highlight rolloff real close. 
  9. Like
    kye got a reaction from ac6000cw and FHDcrew in Panasonic G9 Mark II. I was wrong
    Just a note to say that it would probably be worth doing some tests ahead of the event.
    Situations like this involve many variables and most often people don't consider all of them because they don't do any methodical tests.  You are assuming that the AF will work differently between different picture profiles, but I would suggest the AF would be operating on the image before the picture profile is applied, so it shouldn't matter...  but, once again, you should test this to confirm.
    Another thing to consider is if you can push the shutter angle to 270 degrees or even 360 degrees.  If it's a worship setting then making the footage seem a bit more surreal might be appropriate, and you can get another half or full-stop of exposure this way.
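    To put numbers on that half or full stop (a quick sketch of the maths, nothing camera-specific):

```python
# Exposure time scales linearly with shutter angle, so the gain in
# stops relative to the usual 180 degrees is log2(angle / 180).
from math import log2

def stops_gained(angle_deg: float, reference_deg: float = 180.0) -> float:
    return log2(angle_deg / reference_deg)

print(f"270 degrees: +{stops_gained(270):.2f} stops")  # about +0.58
print(f"360 degrees: +{stops_gained(360):.2f} stops")  # exactly +1.00
```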
    You should also test NR in post - it's not ideal, but it might give a better result overall considering none of your scenarios are within the camera's ideal operating range.  I've done a lot of shooting with cameras at/beyond their capabilities, and when you're pushing things you're trading off the drawbacks of each strategy.
  10. Like
    kye got a reaction from FHDcrew in Panasonic G9 Mark II. I was wrong   
    Just make sure you're testing the options in the full image pipeline, so comparing finished 709 grades.  So many people only test one part of the pipeline and ignore the rest.
    I haven't really experimented much with the AF on the GH7 as I'm used to the AF on the GX85 etc, and I tend to use manual lenses in lower light.  
    AF is very difficult to test, and @Davide DB has posted before about how lens-dependent it can be.
    Maybe there are AF tests online?  Playing peekaboo with your camera seems a popular camera reviewer pastime!
  11. Like
    kye got a reaction from FHDcrew in Panasonic G9 Mark II. I was wrong   
    I'm not sure how this would translate, but my GH7 does far better when I raise the ISO to get a proper exposure in-camera vs shooting underexposed and raising the exposure in post.  For some reason the shadows are quite noisy, even at native ISOs.  This is shooting in C4K ProRes, so it's not a codec issue.
  12. Haha
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    I’m dead 🤣🤣
  13. Haha
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Formatting is being super weird right now ignore post
  14. Haha
    kye reacted to Snowfun in Panasonic G9 Mark II. I was wrong   
    Would it be worth quoting Genesis 1:3 as this would solve the high ISO need…
    (with apologies)
  15. Like
    kye reacted to FHDcrew in Panasonic G9 Mark II. I was wrong   
    Definitely. It’s super versatile. 
  16. Like
    kye got a reaction from Thpriest and FHDcrew in Panasonic G9 Mark II. I was wrong
    Digital zoom is definitely an underrated feature of these higher resolution cameras.
    On my GH5 I used the 2x punch-in on my 17.5mm F0.95 to get 35mm and 70mm FOVs, and on my GX85 I used it with the 14mm F2.5 pancake lens to get 31mm and 62mm FOVs in a pocketable form-factor.
    The crop function on the GH7 is different and a bit more restrictive.  You get continuous zooming, but only to the point where your chosen recording resolution is at/near a 1:1 crop of the sensor.  So, if you've got the 14mm lens on there and you're shooting in C4K, you enable the feature and it pops up a box on the screen saying "14mm", and you can zoom in more and more by pushing or holding a button as it goes 14 - 15 - 16 - 17mm, but it won't let you go further.  If you're in 1080p mode then it goes from 14mm to 38mm.
    Conveniently, if you disable the mode then it goes back to 14mm but if you re-enable it then it goes back to whatever zoom you were at previously, so it's easy to set a zoom level you like and then jump in and out of that FOV.
    My testing didn't indicate any IQ issues with it, in 24p mode anyway, so I think it's probably downscaling from a full sensor read-out.
    Not only is it really good for getting more FOVs from primes, it's also great for extending the long end of your zooms.
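    If you want to estimate the limit for your own lens/resolution combination, here's a sketch of the maths as I understand it - the sensor width is an assumed number for illustration, not a published GH7 spec:

```python
# The punch-in appears to stop once the recording resolution is a
# roughly 1:1 window on the sensor, so the maximum zoom factor is
# sensor width / recording width.
SENSOR_WIDTH_PX = 5184  # assumed full-width readout, for illustration

def max_punch_in_mm(record_width_px: int, base_focal_mm: float) -> float:
    """Longest equivalent focal length before the crop passes 1:1."""
    zoom = SENSOR_WIDTH_PX / record_width_px
    return base_focal_mm * zoom

print(f"C4K:   {max_punch_in_mm(4096, 14.0):.1f}mm")  # ~17.7mm from a 14mm
print(f"1080p: {max_punch_in_mm(1920, 14.0):.1f}mm")  # ~37.8mm from a 14mm
```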
  17. Like
    kye reacted to Alt Shoo in Panasonic G9 Mark II. I was wrong   
    I think a lot of people wrote Micro Four Thirds off before really paying attention to what changed. Once you start working with the newer Panasonic bodies as a system, not just a sensor, the color, IBIS, and real-time LUT workflow start to make a lot of sense, especially for documentary and run-and-gun work.
    I've been baking looks in camera more and more instead of relying on heavy grading later, and it's honestly sped everything up. I just put out a short video showing how I'm using that approach on the GH7 if anyone's interested:
    My YouTube is more of a testing and experimenting space rather than where I post my “serious” work, but a lot of these ideas end up feeding directly into professional gigs.
  18. Like
    kye got a reaction from Jahleh and j_one in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?
    My advice is to forget about "accuracy".  I've been down the rabbit-hole of calibration and discovered it's actually a minefield, not a rabbit hole, and there's a reason there are professionals who do this full-time - the tools are structured in a way that deliberately prevents people from being able to do it themselves.
    But, even more importantly, it doesn't matter.  You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable.  Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client and then only accept notes viewed on the device the colourist shipped them.
    It's not even that the devices out there aren't calibrated, or that manufacturers now ship things with motion smoothing and other hijinks on by default - it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't tagged and interpreted correctly.
    Here's an experiment for you.
    Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile.  Use the default 709 colour profile without any modifications.
    Then in post, take the LOG shots and try to match each one to its respective 709 image manually, using only normal grading tools (not plugins or LUTs).
    Then try grading each of the LOG shots to simply look nice, using only normal tools.
    If your high-DR scene involves actually having the sun in-frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, a CST into other camera spaces followed by those manufacturers' LUTs, etc.
    Gotcha.  I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and have them further away.  The inverse-square law is what is giving you the DR issues.
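    To put rough numbers on the falloff (a quick sketch of the maths):

```python
# Illuminance falls off as 1/d^2, so the brightness difference between
# two subject distances is 2 * log2(far / near) stops.
from math import log2

def falloff_stops(near_m: float, far_m: float) -> float:
    return 2 * log2(far_m / near_m)

print(falloff_stops(1, 2))    # 2.0 stops between 1m and 2m
print(falloff_stops(1, 4))    # 4.0 stops between 1m and 4m
print(falloff_stops(0.5, 8))  # 8.0 stops between 0.5m and 8m
```

    That's why moving the lights further back (and turning them up) flattens the scene: the near/far ratio shrinks.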
    That's like comparing two cars where one is stuck in first gear.  Compare N-RAW with ProRes RAW (or at least ProRes HQ) on the GH7.
    I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar so your grading techniques will be applicable to both and be less of a variable in the equation.
    People interested in technology are not interested in human perception.  
    Almost everyone interested in "accuracy" will either avoid such a book on principle, or will die of shock while reading it.  The impression I was left with after reading it was that it's amazing we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc) is so far from how we actually see that asking "how many megapixels is the human eye?" is sort of like asking "what does loud purple smell like?".
    Did you get to the chapter about HDR?  I thought it was more towards the end, but could be wrong.
    Yes - the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car.
    This is all for completely predictable and explainable reasons... which are all in the colour book.
    I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.
    I don't know how much you know about the Apple gamma shift issue (you've spoken about it before, but I don't know how deeply you've dug into it), but I watched a great ~1hr walk-through of the issue, and the conclusion was that because the device doesn't know enough about the viewing conditions under which the video is being watched, displaying an image with any real degree of fidelity is impossible - the gamma shift issue is a product of that problem.
    Happy to dig up that video if you're curious.  Every other video I've seen on the subject covered less than half of the information involved.
  19. Like
    kye got a reaction from Video Hummus in Panasonic G9 Mark II. I was wrong   
    Welcome back!

    Can you tell me your name?  Where are we?  What year is it?
    Good, good...
    You've been in a DOF-induced coma for the last 7 years.  We'll contact your families and let them know you've woken up - they'll be very happy to see you!
  20. Like
    kye got a reaction from Jahleh in What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?   
    I'm seeing a lot of connected things here.
    To put it bluntly, if your HDR grades are better than your SDR grades, that's just a limitation of your grading skills.  I say this as someone who took an embarrassing amount of time to learn to colour grade, and even now I still feel like I'm not getting the results I'd like.
    But this just goes to reinforce my original point - that one of the hardest challenges of colour grading is squeezing the camera's DR into the display space's DR.  The less squeezing required, the less flexibility you have in grading, but the easier it is to get something that looks good.  The average quality of colour grading dropped significantly when people went from shooting 709 and publishing 709 to shooting LOG and publishing 709.
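    As a toy illustration of that squeeze (made-up knee/shoulder numbers, not any real transform):

```python
# Map scene stops above middle grey into display stops: pass the mids
# through untouched, then compress everything above the knee.
def compress_highlights(scene_stops: float, knee: float = 1.0,
                        shoulder: float = 0.25) -> float:
    if scene_stops <= knee:
        return scene_stops
    return knee + (scene_stops - knee) * shoulder

for s in (0.5, 1.0, 3.0, 6.0):
    print(f"scene +{s:.1f} stops -> display +{compress_highlights(s):.2f} stops")
# Six stops of scene highlights land in ~2.25 stops of display range -
# deciding how hard to squeeze is where most of the grading judgement lives.
```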
    Shooting with headlamps in situations where there is essentially no ambient light is definitely tough though - you're pushing the limits of what current cameras can do, and it's more than they were designed for!
    Perhaps a practical step might be to mount a small light to the hot-shoe of the camera, just to fill-in the shadows a bit.  Obviously it wouldn't be perfect, and would have the same proximity issues where things that are too close to the light are too bright and things too far away are too dark, but as the light is aligned with the direction the camera is pointing it will probably be a net benefit (and also not disturb whatever you're doing too much).
    In terms of noticing the difference between SDR and HDR, sure, it'll definitely be noticeable, I'd just question if it's desirable.  I've heard a number of professionals speak about it and it's a surprisingly complicated topic.  Like a lot of things, the depth of knowledge and discussion online is embarrassingly shallow, and more reminiscent of toddlers eating crayons than educated people discussing the pros and cons of the subject.  
    If you're curious, the best free resource I'd recommend is "The Colour Book" from FilmLight.  It's a free PDF download (no registration required) from here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php
    In case you're unaware, FilmLight are the makers of BaseLight, which is the alternative to Resolve except it costs as much as a house.  
    The problem with the book is that when you download it, the first thing you'll notice is that it's 12 chapters and 300 pages.  Here's the uncomfortable truth though: to actually understand what is going on you need a solid understanding of the human visual system (our eyes, our brains, what we can see, what we can't see, how our vision responds to the various situations we encounter, etc).  This explanation legitimately requires hundreds of pages because it's an enormously complex system, much more so than any reasonable person would ever guess.
    This is the reason most discussions of HDR vs SDR are so comically rudimentary by comparison.  If camera forums had the same level of knowledge about cameras that they do about the human visual system, half the forum would be discussing how to navigate a menu, and the most fervent arguments would be about topics like whether cameras need lenses at all.