
Thomas S


About Thomas S

Profile Information

  • My cameras and kit
    GH4, Canon M6 MK2, P4k


  1. That's why I got a Pocket 4K. I got tired of farting around with compressed formats. External recorders are one way around that, but they're also a hassle. Now I can shoot 3:1 BRAW to a cheap SSD directly on the camera, or at least 5:1 to an internal SD card. Now that some DSLRs are getting external raw support I still don't really care: an external recorder is an added cost and a lot more hassle than what I have now.

I'm also kind of a freak for 4:4:4. I know it doesn't actually make a huge visual difference; I studied VFX in college and can pull damn good keys from 4:2:0. To me it's more about why 4:2:2 at all. It's a leftover from the analogue days and we don't really need it anymore. Media is fast and cheap enough now that we shouldn't have to settle for 4:2:2. It's an old broadcast video standard with no real place in today's digital world. h264 and h265 are perfectly capable of 4:4:4, yet we're barely getting cameras to add 4:2:2 and 10-bit, let alone 4:4:4.

So BRAW on the P4K represents something I've been chasing ever since I started with S-VHS: something better than video standards. It's not just that it's raw. To me it's that it's RGB, 4:4:4, and color-space agnostic. No more butchered Rec.709, no more unnecessary 4:2:2. I know I could probably get visually similar results from a lesser format, but to me it's about starting clean and going from there. It's what I always dreamed of being able to do with video.

Oh, and it's 12-bit, which is even harder to argue for than 10-bit over 8-bit. But it's there and doesn't hurt, so why not. Fun fact: 12-bit has 4096 code values, and DCI 4K is 4096 pixels wide. That's exactly one value per pixel for a subtle full-width gradient, in other words the perfect bit depth for 4K. Not sure anyone could ever tell versus one value every four pixels like 10-bit has, but there it is. Basically, posterization should be physically impossible on the P4K shooting BRAW.
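To put numbers on the bit-depth point above, here is a tiny sketch (the helper name is my own, for illustration) that works out how many horizontal pixels share each code value in a full-width, full-range gradient:

```python
# How many horizontal pixels share one code value in a full-width
# 0-to-max gradient, for the bit depths discussed above.
# (Helper name is made up for illustration.)

def pixels_per_code_value(width_px: int, bit_depth: int) -> float:
    """Pixels covered by each unique code value in a full-width ramp."""
    return width_px / (2 ** bit_depth)

for depth in (8, 10, 12):
    step = pixels_per_code_value(4096, depth)  # DCI 4K width
    print(f"{depth}-bit: one value every {step:g} px")
```

For a 4096-pixel-wide frame this gives one value every 16 px at 8-bit, every 4 px at 10-bit, and exactly 1 px at 12-bit, matching the "one sample per pixel" observation above.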
  2. Kind of depends. ProRes is, in my opinion, one of the best formats the industry has ever had. That said, it's not a very smart format: it throws the same number of bits at every frame no matter what the content. The beauty of formats like h264 is that they're smarter; they look at each frame and try to figure out how many bits it really needs. Bitrate still matters, but h264 is efficient enough to get away with far fewer bits than a dumb format like ProRes. When ProRes drops down to LT or Proxy, it sacrifices quality across the whole frame regardless of the visual impact.

h264 breaks the image into blocks. mpeg2 did the same thing, but its blocks were all a fixed 8x8 pixels. h264 is more sophisticated: its 16x16 macroblocks can be partitioned into smaller blocks, down to 4x4 pixels. So in fine-detail areas the blocks can be much smaller than the fixed 8x8 mpeg2 used, which means we don't see macroblocking as much and visually we get a very solid picture. The encoder saves bits in the flat areas so they can be spent on the more detailed ones.

h264 also spreads bits across many frames. It uses the differences within a group of frames to determine what has actually changed. If the camera is locked down and only a small red ball rolls across the screen, each following frame only needs bits for the blocks covering that ball. The more the frames change, the more bits each following frame needs. The problem is that some h264 encoders sometimes don't get enough bits. Most of the time 100 Mbps is enough, but a shot with a lot of randomly moving fine detail, like tree leaves blowing in heavy wind, can make 100 Mbps fall apart.

ProRes HQ is dumb, but it has the advantage of looking good no matter what the situation. A locked-down blue sky compresses as well as those complex moving tree leaves; it's just that both take the same roughly 700 Mbps no matter how simple or complex the content. h264, on the other hand, can get by with much less. Giving h264 the same 700 Mbps would be a complete, epic waste; it would not need it at all. In the example above, that small red ball just doesn't need that much data to be stored 100% perfectly. I'm not sure there's a magic number for bitrate. It really depends on the scene, but for the most part 100 Mbps has been pretty solid on many 4K cameras, and 150 Mbps for 10-bit on some cameras has been even more solid.

There's another thing to factor in: ProRes HQ is 10-bit 4:2:2, so it's not really fair to compare it directly to 8-bit 4:2:0 h264 formats in terms of bitrate. Again, it's a dumb format; even if you feed ProRes an 8-bit 4:2:0 camera source, it still encodes it as if it were 10-bit 4:2:2. The 150 Mbps 10-bit 4:2:2 h264 formats are a better comparison, and visually they hold up very well against ProRes HQ.
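To get a feel for what that constant bitrate costs in practice, here is a quick back-of-the-envelope sketch using the bitrates mentioned above (the function name is my own):

```python
# Approximate storage for a 10-minute clip at the bitrates discussed
# above. Pure arithmetic; 1 GB = 1e9 bytes.

def clip_size_gb(bitrate_mbps: float, minutes: float) -> float:
    """Convert a bitrate and duration into an approximate file size in GB."""
    return bitrate_mbps * 1e6 * minutes * 60 / 8 / 1e9

for name, mbps in (("ProRes HQ", 700), ("h264 8-bit", 100), ("h264 10-bit", 150)):
    print(f"{name}: ~{clip_size_gb(mbps, 10):.1f} GB per 10 minutes")
```

Ten minutes at 700 Mbps is about 52.5 GB versus 7.5 GB at 100 Mbps, which is why "dumb but safe" ProRes comes with a real storage cost.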
  3. Agreed. All the photos we look at online are 8-bit, and 8-bit is rarely an issue. For normal Rec.709 video profiles, 10-bit is a tad overkill, although there are some extremely rare cases where it can help. The bigger plague of 8-bit is h264 encoders that are too aggressive about assuming nearby color samples won't be noticed as different and merging them into one block. You can take a frame from an 8-bit h264 video and an uncompressed 8-bit PNG of the same image and get more banding from the h264.

Encoders try to figure out what's safe to compress together: the stuff we can't see with the naked eye. If we can't see it, there's no point wasting bits on it. So an 8x8 pixel block of very subtle green values may get turned into an 8x8 block of one green color. This can cause banding where 8-bit would normally have none. Panasonic suffered from this on the GH4 when they added log. The log was so flat that the encoder assumed it could merge areas of color into big blocks because the differences wouldn't be noticeable. If the shot stayed as log, they were right, but because the log was flatter than other log profiles, areas of flat color like walls really struggled once graded. Sony's encoder did better at not grouping similar colors, at least up to S-Log2; on older Sony cameras, S-Log3 could suffer from the same issues as V-Log on the GH4. The GH5 had an improved encoder that wasn't as aggressive with 8-bit and areas of similar color.
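A toy illustration of the merging described above (this is not a real encoder, just the arithmetic of collapsing a near-flat block to its average):

```python
# Toy model of an over-aggressive encoder flattening a block of
# near-identical 8-bit green values into a single value, which is the
# kind of merging that produces banding in flat log footage.
# (Real encoders work on transformed coefficients; this only shows
# the effect on sample values.)

def flatten_block(block):
    """Replace every sample with the block average, rounded to 8-bit."""
    avg = round(sum(block) / len(block))
    return [avg] * len(block)

subtle_ramp = [100, 100, 101, 101, 102, 102, 103, 103]  # gentle gradient
print(flatten_block(subtle_ramp))  # the four distinct values collapse to one
```

Before grading, the flattened block looks identical to the original; push the contrast in a grade and the lost in-between values show up as a visible band edge.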
  4. A lot of this is due to the 32-bit float color pipeline in NLEs. As long as the 8-bit source has no posterization, 32-bit float processing will likely fill in any gaps as the image is pushed hard. Filling gaps with math is much easier in grading than in, say, upscaling. When upscaling, new pixels can be averaged, but averaging doesn't work for fine detail like a single hair. In grading, we're trying to prevent posterization, which comes down to keeping gradients smooth, and sometimes averaging surrounding values works perfectly. For example, if you have a value of 200 next to a value of 250, it's easy for the grade to average an in-between value of 225, which still creates a nice smooth gradient.

Where 10-bit is important is making sure the shot is captured well the first time. Once you have posterization, it will always be there, and no amount of 32-bit float processing can magically make it go away. If the shot visually has no posterization, then no matter how hard it's pushed it likely never will, or pushing 10-bit that hard would show just as much. That's why 32-bit float was created. 10-bit is a lot like 32-bit audio, or 16 stops of DR graded down to 10: we record more so we have it and can manipulate it better. Most of the shots above would probably still have looked good at 6 bits. You need a very long and subtle gradient to break 8-bit. It can and does happen, but the more noise the camera has, the less it happens, because of dithering. I think this is partially why Sony always had such a high base ISO for log.

Finally, 10-bit never promised better color, more dynamic range, or fewer compression artifacts. That's not what bit depth does. It's purely about how many different code values can be used across the image, and the single real side effect of too few is posterization. Many computer monitors were at one point only 6-bit panels even when they claimed 8-bit, and most people never noticed unless they did something like drag the Photoshop gradient tool across a full 1920-wide image.

The clear blue sky in the article wasn't even a difficult gradient; most of the sky was a similar shade of blue. To break 8-bit, you need a gradient going from 0 blue to 255 blue across the full 3840 pixels of UHD video. A gradient like that has a unique blue value every 15 pixels. So your sky would need to go from black on one side of the screen to bright blue on the other. Not always realistic, but skies around dusk and dawn spread the values out a lot more than midday. By comparison, 10-bit has a unique blue value every 3.75 pixels across UHD.

It doesn't even have to be something that covers the full screen. A gradient over 100 pixels from 200 blue to 205 blue still means a new blue value only every 20 pixels, even though the screen area is very small. I develop mobile apps, and I run into the same problem when I add a subtle gradient to something like a button: the gradient needs enough range to cover the area of pixels or it will look steppy. 10-bit and higher is a safety net, close to a guarantee of never having any kind of posterization. In the professional world that's important, and nobody wants surprises after the shoot is done.
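The gradient arithmetic in the post above can be checked with a couple of lines (the helper name is my own; "steps" counts the distinct value changes across the span):

```python
# Pixels per visible step when a ramp of `steps` distinct value changes
# is stretched across `span_px` pixels, using the numbers from the post.

def pixels_per_step(span_px: float, steps: int) -> float:
    """Width in pixels of each band of identical values in a linear ramp."""
    return span_px / steps

print(pixels_per_step(3840, 256))   # full 0-255 ramp across UHD: 15.0 px
print(pixels_per_step(3840, 1024))  # same ramp in 10-bit: 3.75 px
print(pixels_per_step(100, 5))      # 200-205 blue over 100 px: 20.0 px
```

The wider each band, the more visible the step edge, which is why the small 200-to-205 button gradient can band just as badly as a full-screen sky.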