Subforums

  1. The EOSHD YouTube Channel   (22,457 visits to this link)

    Follow Andrew Reid on YouTube

17,207 topics in this forum

  1. Lenses

    • 5.8k replies
    • 1.7m views
  2. Panasonic GH6

    • 1.8k replies
    • 692.1k views
  3. DJI banned in US

    • 18 replies
    • 970 views
  4. The Aesthetic (part 2)

    • 66 replies
    • 32.9k views
  5. Nikon Zr is coming

    • 462 replies
    • 110k views
  6. The D-Mount project

    • 63 replies
    • 32.4k views
  7. GH series in 2025

    • 8 replies
    • 675 views
  8. Fujifilm Repair Service

    • 0 replies
    • 387 views
  • Posts

    • How about using Dolby Vision? On supported devices, streaming services, and suitably prepared videos it adjusts the image to the device's capabilities automatically, and can even do this scene by scene. I have not tried to export my own videos for Dolby Vision yet, but it seems to work very nicely on my Sony xr48a90k TV: the TV adjusts itself to ambient light, and Dolby Vision adjusts the video content to the capabilities of the device. It also seems to be supported on my Lenovo X1 Carbon G13 laptop.

    High-dynamic-range scenes are quite common, for example whenever the sun is in the frame, or at night after the sky has gone completely dark, if you don't want blown-out lamps or very noisy shadows in dark places. In landscape photography, people sometimes bracket up to 11 stops to avoid blowing out the sun, and it takes quite a bit of artistry to map that beautifully onto SDR displays or paper. That kind of bracketing is unrealistic for video, so the camera's native dynamic range becomes important.

    For me it is usually more important to have reasonably good SNR in the main subject in low light than dynamic range, since in video you can't use very slow shutter speeds or flash. From this point of view I can understand why Canon went for three native ISOs in the latest C80/C400 instead of the dynamic-range-optimized DGO technology of the C70/C300 III. For documentary videos with limited lighting options (one-person shoots), high-ISO image quality is probably a higher priority than dynamic range at the lowest base ISO, given how good that already is on many cameras.

    That said, I'd take more dynamic range any day if it came without making the camera larger or much more expensive. Not because I want to produce HDR content, but because the scenes are what they are, and for what I do, lighting is usually not an option.
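    The "stops" figures in discussions like this are just base-2 logarithms of luminance ratios (one stop = a doubling of light). A minimal sketch of the arithmetic; the luminance values below are illustrative round numbers, not measurements:

    ```python
    import math

    def dynamic_range_stops(max_luminance_nits, min_luminance_nits):
        """Dynamic range of a scene (or display) in photographic stops.

        One stop is a doubling of luminance, so the range in stops is the
        base-2 log of the luminance ratio between the brightest and
        darkest parts of the scene.
        """
        return math.log2(max_luminance_nits / min_luminance_nits)

    # A sunlit scene with deep shadows: very roughly 100,000 nits of glare
    # against 0.1-nit shadow detail (illustrative numbers).
    print(round(dynamic_range_stops(100_000, 0.1), 1))  # ~19.9 stops
    ```

    Which is why a scene with the sun in frame overwhelms even a camera with 14-15 stops of native dynamic range.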
    • It seems we have been through the same rabbit holes already 🙂 Back in the day with MacBook Airs I tried to calibrate their displays for WB, but it was hit and miss, and since then I have not bothered. By accuracy I just meant that the HDR timeline looks the same in Resolve as the exported file looks on my MacBook Pro screen, or on my projector or OLED connected over HDMI, or when I watch the video on Vimeo.

    Like you said, the real pain is other people's displays. You have no way of knowing what they are capable of, or how they interpret the video and its gamma tags, even if set properly, with or without HDR10 tags. The quick way to check this mess is to take your video to your phone. If it looks the same, start an Instagram etc. post. After the first step it usually messes up the gamma if the tags are wrong, or Meta just can't understand them. After 30 days it degrades the HDR to SDR anyway; the highlights are butchered and the quality is gone. IG Stories, for example, just don't seem to understand HDR at all. Also my iPad has only an SDR display, and watching my HDR videos on it is not pretty.

    Usually I don't use LUTs, as I prefer Resolve's color-managed pipeline, but I have also tried non-color-managed workflows with LUTs and CSTs. I've tried the non-LOG SDR profiles on the Z6iii and on various Panasonic cameras, and LOG with a LUT baked in, and did not like them: they clipped earlier than LOG, though they were cleaner in the shadows if you needed to raise them. I usually grade my footage to SDR first, because I want to take screen captures from it as images. Then I duplicate the timeline, set it to HDR, adjust the grade, and compare both. I've spent quite a lot of time getting both to look good, but HDR almost always looks better, as it should: it has 10x more brightness, 1000 nits vs 100 nits, and a wider color space.

    The auto correction the iPhone applies to SDR video usually takes it closer to my HDR grade, so clearly my grading skills are just lacking when pushing 11-12 stops of DR into SDR 😆 With the S5ii in the same low-light situation with headlamps you could say I had problems, as the image always looked quite bad. Now with the ZR, Z6iii and GH7 the results are much better in that regard. Dimmer lights are always better than too-bright ones, or ones placed too close to the subject.

    The first thing I did after getting the GH7 was to shoot ProRes RAW HQ, ProRes RAW, ProRes and H.265 on it and compare them. Recently I also shot R3D, N-RAW and ProRes RAW on the ZR and just did not like the ProRes RAW: its raw panel controls were limited, and it looked worse or needed more adjusting. On the GH7, ProRes RAW was maybe slightly better than its H.265, but the file sizes were bigger than 6K50p R3D on the ZR 🙄

    I have made PowerGrades for every Panasonic camera I've had, for Z6iii N-Log and N-RAW, for ZR R3D, and also for iPhone ProRes, so the pipeline is pretty similar no matter what footage I grade. I also have a node set to match the specific camera's color space and gamma, for easier exposure and WB changes when they can't be done in the raw panel. The best option at the moment, in my opinion, is N-RAW: its file size is half that of R3D, and trimming and saving the trimmed N-RAW files in Resolve works too. R3D is slightly better in low light, but as long as saving only the trimmed parts does not work you need to save everything, and that sucks, big time.

    The HDR section was right in the middle of the book, and the last thing I read. If someone prefers a projected image over TVs and monitors, I would think they also prefer how the image is experienced, how it feels to look at it (after you get over the hi-fi-nerd phase of adjusting the image). At least I do, even though my OLED's specs are superior to my projector's. So the specs are not everything, even though they are important.

    Yes, when delivering HDR to social media or directly to other people's displays, you never know how their display will interpret and show the image. Like you said, it is an even bigger mess than with SDR, but to me it is worth exploring, as my delivery is mostly to my own displays, and delivery to Vimeo somewhat works. The Apple gamma shift issue should be fixed by now for SDR. I watched some YouTube video about it, linked from the Resolve forum, saying that Rec.709 (Scene) should fix everything, but it is not that straightforward. The brightness of your grading environment has an effect too: if you grade in a dark room and another person watches the end result in bright daylight, it very likely does not look as intended. With projectors it is even worse; your room will mess up your image very easily, no matter how good your projector is. Appreciate all the input, not trying to argue with you. I would say these are more matters of opinion. And the deeper you dig, the more you realize it is a mess, even more so with HDR.
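    For what it's worth, the "10x more brightness" of a 1000-nit HDR grade over a 100-nit SDR reference works out to a bit over 3 stops of extra highlight headroom, since a stop is a doubling of luminance. A quick sanity check:

    ```python
    import math

    # Extra highlight headroom, in stops, of a 1000-nit HDR grade over a
    # 100-nit SDR reference: stops = log2(luminance ratio).
    extra_stops = math.log2(1000 / 100)
    print(round(extra_stops, 2))  # ~3.32 stops
    ```

    So the HDR grade isn't "10 stops brighter"; it buys roughly three and a third stops of headroom above SDR reference white.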
    • My advice is to forget about "accuracy". I've been down the rabbit hole of calibration and discovered it's actually a minefield, not a rabbit hole, and there's a reason there are professionals who do this full-time: the tools are structured in a way that deliberately prevents people from doing it themselves. But, even more importantly, it doesn't matter. You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable. Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes made on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client, then only accept notes viewed on the device they shipped.

    It's not even that the devices out there aren't calibrated, or that manufacturers now ship things with motion smoothing and other hijinks on by default; it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't even tagged and interpreted correctly.

    Here's an experiment for you. Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile. Use the default 709 colour profile without any modifications. Then in post, take each LOG shot and try to match it to its respective 709 image manually, using only normal grading tools (not plugins or LUTs). Then try to grade each of the LOG shots to just look nice, using only normal tools. If your high-DR scene involves actually having the sun in frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, CST into other camera spaces with their manufacturers' LUTs, etc. Gotcha.

    I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and move them further away. The inverse-square law is what is giving you the DR issues.

    That's like comparing two cars where one is stuck in first gear. Compare N-RAW with ProRes RAW (or at least ProRes HQ) on the GH7. I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar, so your grading techniques will apply to both and be less of a variable in the equation.

    People interested in technology are not interested in human perception. Almost everyone interested in "accuracy" will either avoid such a book on principle, or will die of shock while reading it. The impression I was left with after reading it was that it's amazing we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc.) is so far from how we see that asking "how many megapixels is the human eye?" is sort of like asking "what does loud purple smell like?". Did you get to the chapter about HDR? I thought it was more towards the end, but I could be wrong.

    Yes, the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car. This is all for completely predictable and explainable reasons, which are all in the colour book. I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is), you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.

    I don't know how much you know about the Apple gamma shift issue (you spoke about it before, but I don't know if you understand it deeply), but I watched a great ~1 hr walkthrough of it, and the conclusion in the end is that because the device doesn't know enough about the viewing conditions under which the video is being watched, displaying an image with any real fidelity is impossible, and the gamma shift issue is a product of that problem. Happy to dig up that video if you're curious. Every other video I've seen on the subject covered less than half of the information involved.
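    The inverse-square point can be made concrete. A small sketch (the distances are made-up numbers, not from anyone's setup) of why pulling the light back, rather than dimming it at close range, compresses the near/far contrast:

    ```python
    import math

    def illuminance_ratio_stops(near_m, far_m):
        """Stops of brightness difference between two subjects at distances
        near_m and far_m from the same point light source.

        Illuminance from a point source falls off as 1/d^2 (inverse-square
        law), so the ratio between the two subjects is (far/near)^2.
        """
        return math.log2((far_m / near_m) ** 2)

    # Lamp 1 m from the closest subject, background subject 4 m away:
    print(illuminance_ratio_stops(1, 4))  # 4.0 stops apart

    # Pull the lamp back so the distances become 4 m and 7 m:
    print(round(illuminance_ratio_stops(4, 7), 1))  # only ~1.6 stops
    ```

    The total light drops, so the lamp has to be brighter (or the exposure raised), but the scene's internal contrast shrinks, which is exactly the "dimmer lights, further away" advice.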
    • I remember the discussions about shooting scenes of people sitting around a fire, and the benefit was that it turned something that was a logistical nightmare for the grip crew into something basically like any other setup, potentially cutting days from a shoot schedule and easily justifying the premium on camera rental costs.

    The way I see it, any camera advancement does a few things:

    • makes something previously routine much easier / faster / cheaper
    • makes something previously possible but really difficult into something that can be done with far less fuss, so the quality of everything else can go up substantially
    • makes something previously impossible become possible

    ...but the more advanced the edge of possible/impossible becomes, the fewer situations and circumstances are impacted.

    Another recent example might be filming in a "volume", where the VFX background is on a wall around the character. Having the surroundings there on set instead of added in post means camera angles and sight-lines etc. can be worked out on the spot instead of operating blind, so acting and camera work can improve.
    • Happy New Year, fellow creatives! To kick off 2026, I've created a couple of brand new music tracks to share with you. As always, they're 100% free to use in your projects with attribution, just like my thousands of other tracks:

    On my Funny 8 page: "A WILD PARTY IN DOCTOR STRANGEVOLT’S CASTLE" (Looping) https://soundimage.org/funny-8/

    On my Chiptunes 5 page: "PIXELTOWN HEROES" (Looping) https://soundimage.org/chiptunes-5/

    OTHER USEFUL LINKS:
    My Ogg Game Music Mega Pack (Over 1400 Tracks and Growing): https://soundimage.org/ogg-game-music-mega-pack/
    My Ogg Genre Music Packs: https://soundimage.org/ogg-music-packs-2/
    Custom Music: https://soundimage.org/custom-work/
    Attribution Information: https://soundimage.org/attribution-info/

    Enjoy, please stay safe and keep being creative. 🙂