All Activity

  1. Past hour
  2. I suggest you do your own tests, take a contrasty scene, expose both to the right and compare shadow detail in post. I didn't see a difference in DR. Noise was similar.
  3. I think it could've been a lot shorter. In fact it probably could've started at the 4:00 mark - not because it doesn't hold your attention, but because there's nothing we see before that which tells us anything we need to know. All it does is raise confusing questions like "why does a college-age girl live alone on what appears to be an industrial-level farm?" and "why does this person keep a baseball bat right next to the front door" (especially as they seem to live in rural Europe?), and "what type of monster uses single-use water bottles in 2019?".
  4. Today
  5. Thank you for the detailed reply. I’ve used F-Log and loved the results. I can’t imagine what the footage will look like with the extra stop of DR. I’ll give HLG a try.
  6. Quite impressed by the RAW samples too. Interpreted them as BMD Film and used my Soft Film LUT to convert to Rec709, but made it even darker with way more green; also added some grain and gaussian blur. Really liking the result.
  7. Answers to your questions: There is about a 2/3 to 1 stop improvement in dynamic range with HLG over F-Log. I am unable to tell if there is a difference in highlight rolloff. Difference in noise? I usually shoot at ISOs above base, so I am unable to comment on whether there is a difference in noise. One other thing about shooting HLG on the X-T3: I am less prone to getting unusual magenta casts in highlights with HLG. The disadvantage of HLG is that you must record H.265, and one has to transcode the footage to improve editability. Don Barar
  8. Yesterday
  9. Good points. For the test I did for myself I shot 4K h264, scaled some shots, exported in h264 (at a higher bitrate) and then uploaded to YT in 4K, as that is my workflow. Obviously if you're shooting RAW and delivering in Prores HQ then your thresholds for what is perceptible will be different. I was simply doing it to see if it mattered to me, and how far I could push things in how I shoot, hoping that it would matter less and I could use digital zoom to space my primes further apart and cover more zoom range with the same number of lenses. Fun stuff.
  10. Modified Sigma's smaller grip to allow it to hold a T5; haven't printed it yet, so I'm not sure how comfortable it is.
  11. It rather looks like an AA filter that's too weak to me (lots of moiré). Can this be fixed in firmware?
  12. It is usually evident in anything that has vegetation in it, since leaves are approaching the limits of resolution and anything that results in local degradation reduces them to an amorphous mass. If your subject matter is a face on the other hand, it is far from the limits of resolution, so for something like that you might not notice the difference. That is the issue I have with a lot of these comparative "tests", usually the person doing them chooses subject matter that reinforces whatever claim they are making. So, someone who claims that resolution does not matter will typically shoot a bunch of talking heads or buildings to make their point, and sure, for those things resolution is less important but the claim that resolution is not important is still wrong. They are just focusing on the wrong thing. It could be that they simply don't understand, or it may be that they do understand and are doing that on purpose.
  13. If we are talking fidelity, not subjective quality, then no, you can't get information about the original scene that was never recorded in the first place. I'm not sure what you mean by interpolation in this case. Simply going 8 bit to 12 bit would just remap values. I assume you mean using neighboring pixels to guess the degree to which the value was originally rounded/truncated to an 8 bit value? It is possible to make an image look better with content-aware algorithms, AI reconstruction, CGI etc. But these are all essentially informed guesses; the information added is not directly based on photons in the real world. It would be like having a book with a few words missing. You can make a very educated guess about what those words are, and most likely improve the readability, but the information you added isn't strictly from the original author. Edit: And I guess just to bring it back to what my post was about, that wouldn't give an advantage to either format, Raw or YUV.
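The remapping point above is easy to demonstrate; here is a minimal Python sketch (the function name is hypothetical, purely for illustration) showing that scaling 8-bit code values into a 12-bit container changes the numbers but not the number of distinct tonal levels.

```python
# Hypothetical sketch: promote 8-bit code values (0-255) into a
# 12-bit range (0-4095) by pure rescaling.
def promote_8_to_12(values_8bit):
    return [round(v * 4095 / 255) for v in values_8bit]

all_8bit = list(range(256))           # every possible 8-bit value
in_12bit = promote_8_to_12(all_8bit)

# The container allows 4096 levels, but only the original 256 appear;
# no information about the scene has been added.
assert len(set(in_12bit)) == 256
assert min(in_12bit) == 0 and max(in_12bit) == 4095
```

Anything beyond this pure remap (content-aware fills, AI reconstruction) falls into the "informed guess" category the post describes.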
  14. This would help explain the support for 1080p120 https://camerajabber.com/olympus-om-d-e-m5-mark-iii-review/
  15. Color saturation is defined not by the variety of values of any single (R, G, B) channel across the image, but by the difference between channels in a given pixel. Thus color fidelity is defined by the number of steps into which this difference is digitised. If you have an image with RGB values of (200, 195, 205) at one point, (25, 20, 30) at another, and (75, 80, 70) at a third, that doesn't mean the image's color range is 25-200; it's 10, and the image is lacking color badly.
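That per-pixel channel difference is easy to check numerically; a minimal Python sketch (the helper name is hypothetical) using the three example pixels from the post:

```python
def channel_spread(pixel):
    """Max difference between the R, G and B values of one pixel."""
    return max(pixel) - min(pixel)

# The three example points: bright, dark and mid-grey.
pixels = [(200, 195, 205), (25, 20, 30), (75, 80, 70)]

# Luma spans almost the full 8-bit range, yet every pixel is
# near-neutral: the within-pixel channel spread is only 10.
spreads = [channel_spread(p) for p in pixels]
assert spreads == [10, 10, 10]
```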
  16. Even if we are including the interpolated data in this statement? If you throw 8-bits into a 12-bit container, couldn't the results (including the newly created data) be an improvement over the original if the interpolation is really good?
  17. I am still really loving the sound, or lack thereof, of this EQ. If you don’t already own it and want a great (not just for mastering) EQ, grab it while it is still free here: https://harrisonconsoles.com/site/free-plugin.html Still free as of 10/22/2019 @ 8:40 am PST...
  18. Thanks Chris, I have been rocking the GH2, GH3, GH4 and now the GH5. It is the end of the line for me with the GH line. I really learned a lot and I have lots of footage I love. There are a few exciting cameras out and coming out. The GH5 with 10-bit was a big plus over 8-bit. I want something with great color science like the Blackmagic URSA Mini Pro 4.6K G2, but also with a small form factor. The Z CAM E-2 4K looks nice. For me the dream specs would be good color science, 10-bit, 4K 120 fps, small form factor, global shutter, ProRes RAW or HQ, with MFT mount. I have invested in and built up a collection of vintage glass in MFT, so I'm rocking this format for a while. With all that said, I still think my best footage came off my hacked GH2.
  19. No, they can't; they can't output 10-bit.
  20. I wasn't talking about 12 bit files. In 8 bit 4:2:0, for each set of 4 pixels, you have 4x 8 bit Y, 1x 8 bit U and 1x 8 bit V. That averages out to 12 bits/pixel, a simplification to show the variation per pixel compared to RGGB raw. It seems your argument is essentially that debayering removes information, therefore the Raw file has more tonality. That's true, in a sense. But in that sense, even a 1 bit Raw file has information that the 8 bit debayered version won't have, yet I wouldn't say it has "more tonality." I don't believe this is true. You always want to operate at the highest precision possible, because it minimizes loss during the operation, but you never get out more information than you put in. It's also possible we're arguing different points. In essence, what I am saying is that lossless 8 bit 4:2:0 debayered from 12 bit Raw in camera has the potential* to be a superior format to 8 bit Raw from that same camera. *and the reason I say potential is that the processing has to be reasonably close to what we'd do in post, no stupidly strong sharpening etc. About this specific example from the fp... I didn't have that impression, to be honest. It seems close to any 8 bit uncompressed image.
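The bits-per-pixel arithmetic in that post works out as follows; a small Python sketch (function and parameter names are hypothetical) for one 2x2 block of a planar YUV frame:

```python
def avg_bits_per_pixel(bit_depth=8, y_samples=4, u_samples=1, v_samples=1):
    """Average storage per pixel for one 2x2 block of a planar YUV frame."""
    total_bits = bit_depth * (y_samples + u_samples + v_samples)
    return total_bits / y_samples  # one Y sample per pixel in the block

# 8-bit 4:2:0: four Y samples share one U and one V sample.
assert avg_bits_per_pixel() == 12.0
# 8-bit 4:2:2 for comparison: two U and two V samples per block.
assert avg_bits_per_pixel(u_samples=2, v_samples=2) == 16.0
```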
  21. kye

    Evaluating Cameras

    I think satisfaction levels are about comparing what you have to what you want. I think everyone probably always wants more, but it's about priorities. Do I want a camera the size of a GoPro Hero 5 Session, lenses that perform like Zeiss Master Primes, output files that look like an ARRI 65, and the whole thing to cost $100 with free shipping? Yes. But the point is that all that happens in the context of all the rest of what we're doing when we shoot. I think most real shooters are concerned with the total package of what they deliver, and if the camera isn't in the top 5 issues that are holding us back then we're not focused on it, and if asked we'll say we're satisfied. Everyone wants more, we just differ by how much we want it.
  22. A couple of screen-grabs from a recent Sofar Concert I shot. Shot with the 70-200 2.8, all in 4k 10bit HLG, graded with Filmconvert Nitrate in a couple of minutes.
  23. Otago

    Evaluating Cameras

    Interesting replies! I note that there are compromises for some people, and some people are content with what they are currently shooting with. If you are satisfied, then how did you come to that conclusion? I am curious about how you know that you don't need any more, because I know what I am happy with but I am finding it difficult to express how I came to that conclusion.
  24. Thanks for all the great insights. I am out working on a mission in a Mexican border town and distance has given me a bit of perspective on the Maxx/Nomad choice. Seems like the future with Zaxcom will really be Nova and their ecosystem if wireless is your priority. For me it may be a future need but at present is not so I may be best served by a Mixpre 10 II or Zoom F8n ... enough inputs for add on wireless as that need grows. Davinci Resolve Studio has been a bit hit or miss with TC sync and the present TC bugs in the new Mixpre II series does not help but I assume that they will sort it soon enough. Thanks again!
  25. Great stuff! I can see some cool lens stuff going on. These lenses work really nicely at night!
  26. Just to follow up on this. Most of these formats encode in YUV, which is where the subsampling comes from. An 8 bit YUV file is awful for tonality. The colour difference channels are so lacking in detail that it's really just the luma part that 'holds' the image. But, a bit like raw, we don't see YUV or RAW files; both are decoded into the RGB space we see on our monitors. Whenever the 'decode' operation takes place at a higher bit depth, the resulting bit depth can be higher. If you have any applications that let you see inside a YUV H.264 file then you really should take a good explore. When you see the data that is used to store images, be it YUV or RAW, you can see where the quality differences come from. The nice thing about DNGs is no compression; it makes a wonderful difference to the image! You can get a 12 bit YUV file but most are 8 bit. That's 8 bit for luma and 8 bit for each of the colour difference channels, but because of the nature of how colour difference works, most of those 8 bit containers are empty. The colour difference channels are subsampled, so if the luma is 1920x1080 then in 4:2:2 each of those colour difference planes is 960x1080 (and 960x540 in 4:2:0). But you can decode an 8 bit YUV into 12 bit space if you want... cheers Paul
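The chroma plane geometry described there can be sketched in Python (hypothetical helper, assuming a 1920x1080 luma plane). One detail worth noting: 4:2:2 halves chroma horizontally only, giving 960x1080 chroma planes; it is 4:2:0 that halves both dimensions to 960x540.

```python
def chroma_plane_size(width, height, subsampling):
    """Chroma (U or V) plane dimensions for common YUV subsampling modes."""
    if subsampling == "4:4:4":
        return width, height              # no subsampling
    if subsampling == "4:2:2":
        return width // 2, height         # half horizontal resolution only
    if subsampling == "4:2:0":
        return width // 2, height // 2    # halved in both dimensions
    raise ValueError(f"unknown subsampling: {subsampling}")

assert chroma_plane_size(1920, 1080, "4:2:2") == (960, 1080)
assert chroma_plane_size(1920, 1080, "4:2:0") == (960, 540)
```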
  27. Well, considering 31st October this year is supposed to be Brexit day..... • what r u gonna be Poorer and with the right to live and work in 27 other countries ended. • past costumes (post pics) Bread queue chic from our glory days of the 1970s. • fav candies These ones that we had to use to do our homework with during that same 3 day week era. • halloween movies??? Can't think of one containing a more scary bunch of clowns than this.