
Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4


Andrew Reid

The color depth of a given image can never be increased -- not without introducing something artificial.

 

I would think that you could increase the color depth of an image if you reduce its viewing size without changing its resolution, by taking a step backwards, for instance.

 

If I understand correctly, a 2K image downscaled from 4K via the method described in this thread will have greater color depth than either (1) a 2K image downscaled via a less clever method, or (2) a 2K crop of the original 4K image, but I'm not entirely sure I understand how color depth is quantified.


Good points, though keep in mind that the algorithm Panasonic uses to go from 10-bit to 8-bit could in fact ensure that the four pixels average out to one 10-bit pixel. There are also variations in between that they could implement, so accuracy would range from 25-100% depending on the algorithm.

 

Ah, that's interesting tosvus. If the GH4 is like the GH3, it will use a 12bit A/D converter, and if its video processing is anything like most cameras', the algorithm would take 12bit values from the sensor and then "round" them to 8bit values.

 

The question then is: what is the "round up" threshold in Panasonic's algorithm?  To do what you suggest would require Panasonic's engineers to use a variable threshold; (**WARNING TECHNICAL RANT AHEAD**) first they would have to break the pixels into 2x2 blocks, then round the 12bit value of (at least) every "top left pixel" to 10bit before they can set the "round up" threshold for the rest of the pixels (every "top right pixel" and the two "bottom left and right pixels") relative to the 10bit value of the "top left pixel" of each 2x2 block, and only then round everything to 8bit. (**/RANT**) Basically that means they would have to do a lot of extra work when programming the algorithm, specifically so that 8bit could be reconstructed to 10bit when downsizing 4K to HD in post.

 

I think it's far more likely that they just set the "round up" threshold to a static 0.5 or higher, which would mean we're back to the 25-100% chance of 10bit accuracy. Though, by the statistical law of large numbers, we can expect a 50% average accuracy, which could be considered similar to 9bit -- still twice the accuracy of 8bit.
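To make the block-summing idea concrete, here is a rough Python/NumPy sketch (my own illustration of the principle, not Panasonic's actual pipeline): simulated 12bit sensor values are rounded to 8bit "in camera", then each 2x2 block of the "4K" frame is summed into one "1080p" super pixel, whose values land on a 0-1020 (roughly 10bit) scale.

```python
import numpy as np

# Simulated 12bit sensor values for one channel of a "4K" (UHD) frame.
rng = np.random.default_rng(0)
sensor_12bit = rng.integers(0, 4096, size=(2160, 3840))

# In-camera quantization to 8bit with a static 0.5 "round up" threshold.
frame_8bit = np.clip(np.round(sensor_12bit / 16), 0, 255)

# In post: sum each 2x2 block into one super pixel.  Four 8bit values
# (0..255 each) sum to 0..1020, i.e. a roughly 10bit scale.
blocks = frame_8bit.reshape(1080, 2, 1920, 2)
hd_10bit = blocks.sum(axis=(1, 3))

print(hd_10bit.max())               # up to 1020
print(len(np.unique(hd_10bit)))     # far more than the 256 levels of 8bit
```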

 

...Again, BIT DEPTH ≠ COLOR DEPTH.  Bit depth determines the number of possible shades per color channel in a digital image.  Color depth is a much broader characteristic, as it majorly involves resolution, and it also applies to analog mediums (film, non-digital video, printing, etc.).

 

tupp, are you talking about "perceived" color depth, which would factor in viewing distance and resolution?

 

I agree that the perceived color depth would have broad characteristics, but from a technical standpoint Bit Depth is bits per channel and Color Depth is bits per pixel.  For example I would say an image's bit depth is 8bits per color, which would make it a 24bit color depth image, because the 8bits for each color (RGB) work out to 8bits x 3 channels for each pixel; however, from this technical standpoint Bit Depth will not always equal Color Depth; for example, you could have an image with the same 8bit Bit Depth but a 32bit Color Depth when using an Alpha Channel, because then the equation changes to 8bits x 4 channels (RGBA).

 

Either way, the bottom line is how many possible shades the image can contain; by that definition, anything capable of 1,024 shades per channel would be considered a 10bit image (even if it isn't using the same precision as a true 10bit source), so 8bit 4K would "technically" scale to 10bit HD, but it's unlikely it would reproduce colors as accurately as a proper 10bit image.


I would think that you could increase the color depth of an image if you reduce its viewing size without changing its resolution, by taking a step backwards, for instance.

 

If you reduce the image size while maintaining the pixel count of the image, you effectively increase the resolution -- the pixels are smaller per degree of the field of view.  So, you are correct in that you have increased the color depth per degree of the field of view.

 

However, the actual total color depth of the image has not changed at all.  You are merely squeezing the total color depth of the image into a smaller area, and, of course, you are sacrificing discernibility and image size.


tupp, are you talking about "perceived" color depth, which would factor in viewing distance and resolution?

 

I am talking about actual color depth, of which viewing distance can be a factor.  As I have already said, resolution is a major factor in color depth.

 

However, in most cases, viewing distance can be ignored, as color depth can be determined by simply taking the bit depth and pixel count on a given area (percentage) of the frame.

 

 

I agree that the perceived color depth would have broad characteristics, but from a technical standpoint Bit Depth is bits per channel and Color Depth is bits per pixel.


 

 

Taking "color depth per pixel" is actually just considering the number of colors that results from the bit depth in a single RGB pixel cell.  Color depth also majorly involves resolution -- pixels per given area of frame (or pixel groups/cells per given area of frame).

 

 

For example I would say an image's bit depth is 8bits per color, which would make it a 24bit color depth image, because the 8bits for each color (RGB) work out to 8bits x 3 channels for each pixel;

 

That equation merely gives the number of possible colors per RGB pixel group.  It doesn't give the color depth, because it does not account for how many pixel groups fit into a given area of the frame.

 

 

however, from this technical standpoint Bit Depth will not always equal Color Depth; for example, you could have an image with the same 8bit Bit Depth but a 32bit Color Depth when using an Alpha Channel, because then the equation changes to 8bits x 4 channels (RGBA).

 

Adding an alpha channel complicates the equation considerably, in that the background color/shade can affect how many new colors the alpha channel adds to those of the RGB cell.  However, the total number of possible colors that an RGBA cell can generate will never exceed the number of possible RGB colors multiplied by the number of possible alpha shades.

 

 

Either way, the bottom line is how many possible shades the image can contain; by that definition, anything capable of 1,024 shades per channel would be considered a 10bit image (even if it isn't using the same precision as a true 10bit source), so 8bit 4K would "technically" scale to 10bit HD, but it's unlikely it would reproduce colors as accurately as a proper 10bit image.

 

8bit 4k can "practically" scale to 10bit HD with "accurate" colors.  The efficiency/quality of the conversion determines how many colors are lost in translation.


Would the opposite be true (i.e. deterioration of bit depth and color space) when stretching anamorphic footage horizontally?

 

Nah. An anamorphic de-squeeze actually adds pixels rather than replacing or resampling them, and since it's essentially duplicating the already recorded pixels over to the right, it would still have the same bit depth/color depth, color space, and dynamic range/latitude.  Someone might make the argument of a perceived difference, but the numbers are the numbers and nothing "new" is created in the process.  Actually, instead of simply duplicating, you could interpolate the "new" pixels with something like a Mitchell filter (which samples the surrounding 16 "source pixels" to make one new pixel); "technically speaking", an 8bit source would then have a chance of achieving UP TO "14bit per channel accuracy" (more likely 11bit) for every other horizontal pixel LOL; but hey, it could make a difference to the perceived bit depth and dynamic range :D
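For illustration, a toy NumPy sketch of that "duplicate the pixels over to the right" de-squeeze (my simplification; real players may interpolate instead):

```python
import numpy as np

# 2x horizontal de-squeeze by duplicating each column (nearest neighbour).
# No new values are invented, so bit depth and color depth are unchanged.
squeezed = np.array([[10, 20, 30],
                     [40, 50, 60]], dtype=np.uint8)   # stand-in 2x3 frame
desqueezed = np.repeat(squeezed, 2, axis=1)           # now 2x6, same values

print(desqueezed)
```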


Nah, an anamorphic de-squeeze actually adds pixels rather than replacing or resampling them and since it's essentially duplicating the already recorded pixels over to the right it would still have the same bit depth, color depth, color space, dynamic range and latitude.

 

I can see how that would be the case for a de-squeeze, but how about stretching the image (i.e. enlarging it horizontally)? In the past, anamorphic shooters have surmised that image quality doesn't really take a hit when doing this, but we weren't really looking at it through the lens of this new theory.


I can see how that would be the case for a de-squeeze, but how about stretching the image (i.e. enlarging it horizontally)? In the past, anamorphic shooters have surmised that image quality doesn't really take a hit when doing this, but we weren't really looking at it through the lens of this new theory.

 

Right, well a proper de-squeeze is stretching the image horizontally, which "duplicates" pixels (1920x1080 to 3840x1080).  However, you could also just "squash" the image vertically (to 1920x540), which would "resample" the vertical pixels.  The "squashing" technique should "technically" result in a 4:2:2 9bit image; that is, if you're averaging the pixels down rather than just deleting them, so use a sinc filter like Lanczos.
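A minimal sketch of the "squashing" arithmetic (assuming a plain average of vertical pairs; a real Lanczos kernel weighs more than two rows): two 8bit values sum to a 0-510 range, which is a 9bit scale.

```python
import numpy as np

# "Squash" a 1920x1080 channel to 1920x540 by summing vertical pairs.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(1080, 1920))

squashed_9bit = frame.reshape(540, 2, 1920).sum(axis=1)
print(squashed_9bit.max() <= 510)   # True: two 8bit samples span a 9bit scale
```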


Would the opposite be true (i.e. deterioration of bit depth and color space) when stretching anamorphic footage horizontally?

 

Essentially, digital anamorphic systems create pseudo rectangular pixels, which are usually oblong on the horizontal axis.

 

In such scenarios, the color depth is reduced on the horizontal dimension, while color depth remains the same on the vertical dimension.  However, the total color depth of the image never changes from capture to display.

 

Furthermore, in digital anamorphic systems the bit depth never changes on either axis (barring any intentional bit depth reduction) -- the pixels just become oblong.


Would this still work If I were to crop/reframe a 4k image and then convert it to 1080p?

 

If you crop "into" a frame, you reduce the pixel count; therefore, you reduce the total color depth of the image.

 

However, if you do not resize the cropped portion, its color depth per "view-degree" doesn't change from before it was cropped.


Would this still work If I were to crop/reframe a 4k image and then convert it to 1080p?

 

No, unfortunately when you crop an image you're straight up throwing away pixels and cutting down resolution.  Once you crop the image below 3840x2160, you lose that clean 2:1 pixel scaling ratio to 1920x1080, which negates any gains found in resampling "4K"/UHD to HD.

 

You should resample the UHD to HD with Lanczos first (make sure to render that in 10bit, since you want to keep the improved image), then crop that down to 1280x720.  That would give you perfect 720p footage with the ability to reframe.

 

Then, if you're feeling lucky, you can try to up-res that back to HD with a Mitchell interpolation algorithm (render this in 14bit, if you can); at least that would keep the artifacts to a minimum, but I'd be happier with the reframed 720p image.
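A rough sketch of the first two steps of that workflow in Python with OpenCV (my tooling choice, not the poster's; the file name and crop offsets are placeholders): resample UHD to HD with a Lanczos kernel in floating point, store on a 10bit scale, then crop to 1280x720.

```python
import cv2
import numpy as np

uhd = cv2.imread("uhd_frame.png")          # placeholder 3840x2160 8bit frame

# Lanczos resample to HD in floating point, so the averaged values are not
# immediately re-rounded back to 8bit.
hd = cv2.resize(uhd.astype(np.float32), (1920, 1080),
                interpolation=cv2.INTER_LANCZOS4)

# Store on a 10bit scale (0..1023) before cropping.
hd_10bit = np.clip(np.round(hd * 4.0), 0, 1023).astype(np.uint16)

# Reframe: crop a 1280x720 window (offsets chosen freely within the frame).
x, y = 320, 180
reframed_720p = hd_10bit[y:y + 720, x:x + 1280]
```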


"The way this works is quite technical but it essentially amounts to using the extra pixels present in the 4K files to rebuild the lost colour information. Neighbouring pixels are summed to create a super pixel with a greater bit depth and better sampling. It requires a lot of processing power and a finishing codec good enough to store the extra colour data (10bit 4:4:4, with CineForm and ProRes being suitable choices) which is why the camera doesn’t do it internally. It does require a transcoding step in post but the increase in quality over the 8bit internal 1080p codec will be marked. The message is clear. Even if you’re working in 1080p, shoot 4K on the GH4 if you want the best quality."
 

That's pure fakery. What matters to me is the ORIGINAL RAW CLUT. It's not 10bit 4:4:4. So they are trying to convince you that by widening the color space via dithering (AKA interpolation), 'magically' a wider gamut has been recovered. That's incorrect, because the original space off the sensor was 8bit 4:2:0.
The 4K>2K hat trick is just that: A TRICK. A trick because pixel density has NOTHING at all to do with capture color gamut. Sorry to be so harsh. You will discover this when someone runs empirical chroma key tests.
Panasonic: please stop this nonsense!


I did some tests in Nuke (compositing software), and couldn't see much improvement downscaling a 4K 4:4:4 8-bit image to a 2K 4:4:4 32-bit one (Nuke operates exclusively at floating-point colour depth).

I created a large, gentle ramp and saved it as an 8-bit image. The image was reloaded and downscaled to 50%. The only difference is a few more pixels where the banding transition occurs, and adding some noise (error diffusion) doesn't alleviate the problem either. We have an image with more information, but it doesn't translate into smoother tonality or better colours.
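For reference, that ramp test is easy to approximate outside Nuke; a quick Python version (my approximation, not the original Nuke setup) shows why the clean ramp gains so little: where all four neighbouring pixels hold the identical 8bit code, their average is just that same code, so new in-between levels appear only near the banding transitions.

```python
import numpy as np

# A gentle horizontal ramp quantized to 8bit at "4K" width.
ramp_8bit = np.round(np.linspace(0.0, 0.25, 3840) * 255)   # only ~65 codes used
frame = np.tile(ramp_8bit, (2160, 1))

# Downscale 50% by averaging 2x2 blocks in floating point (as Nuke would).
half = frame.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

# Fractional in-between levels appear only at the banding transitions;
# flat regions average back to the same old code.
print(len(np.unique(frame)), "levels before,", len(np.unique(half)), "after")
```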

It is not interpolation or dithering. Pretty much everyone agrees that you do get true 4:4:4 (i.e. less color compression). The 8-bit vs 10-bit question is more difficult to agree on. It is clear that even with the worst possible method of downconverting to 8-bit, you can retrieve on average at least 25% of the difference between 8 and 10 bit. The best scenario can yield the full 10-bit original (no dithering). However, I doubt Panasonic implemented it this way. I will put together a simple spreadsheet. This is pretty basic math...
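As a stand-in for that spreadsheet, here is the same arithmetic as a short Python sketch (the per-pixel offsets and the 2x2-block framing are my assumptions about what a "best scenario" encoder could do):

```python
import numpy as np

v = np.arange(1021)                            # 10bit values (top end left off
                                               # so no 8bit code exceeds 255)
offsets = np.array([0.0, 0.25, 0.5, 0.75])

# Best case: each pixel of a 2x2 block is quantized with a different offset
# (ordered dither), so the block sum restores the 10bit value exactly.
dithered = np.floor(v[:, None] / 4 + offsets)
print((dithered.sum(axis=1) == v).mean())      # 1.0 -> 100% exact

# Worst case: flat truncation of all four pixels; the block sum is exact
# only when the 10bit value happens to be a multiple of 4.
truncated = np.floor(v[:, None] / 4) * 4
print((truncated[:, 0] == v).mean())           # ~0.25 -> 25% exact
```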

 

I'm not sure why you seem agitated with Panasonic either. AFAIK, they have never said this. Guys who understand the math brought it up, and it was validated by the gentleman referenced in the article. No offense, but I don't know your credentials, whereas his are pretty impressive. Google his blog and you can see he knows more about color space, dithering, dynamic range, etc. than you or I ever will...

 

 

 

 



So they are trying to convince you that by widening the color space via dithering (AKA interpolation), 'magically' a wider gamut has been recovered.

 

As tosvus has pointed out, they are not dithering.  They are merely swapping resolution for bit depth -- reducing resolution while increasing bit depth.  The color depth remains the same or decreases (due to inefficiencies in the conversion, or due to the color limitations of the final resolution/bit-depth combination).

 

With dithering, there is no change in the bit depth nor in the resolution.

 

 

... pixel density has NOTHING at all to do with capture color gamut.

 

Pixel density (resolution) is a major factor in color depth/gamut of digital systems.

 

Here is the relationship between color depth, bit depth and resolution in a digital RGB system:

COLOR DEPTH = (BIT DEPTH × RESOLUTION)³

That's basically the way it works, barring practical and perceptual variables.
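Read literally (assuming "bit depth" means levels per channel and "resolution" means total pixel count, which is my reading of the formula), it makes the thread's central claim exact: 8bit UHD and 10bit HD come out with identical total color depth, because 4x the pixels offsets 1/4 the levels.

```python
# tupp's formula, under my reading: (levels per channel x pixel count)^3.
uhd_8bit = (2**8 * 3840 * 2160) ** 3    # 8bit UHD
hd_10bit = (2**10 * 1920 * 1080) ** 3   # 10bit HD
print(uhd_8bit == hd_10bit)             # True: 4x pixels offsets 1/4 the levels
```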

