
Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4


Andrew Reid


Thanks tupp and P337.

 

Which one of these do you guys think will give better results for output to 1080p?

 

4K converted to 1080p, reframed a bit, and then resized back to 1080p, or

 

4K reframed and then converted to 1080p.

 

TIA

 

Ok, good question, so let's figure this out.

 

Say you've resampled the 4K/UHD to HD, then cropped to 720p, and then resampled back up to HD.

 

Scaling from 1280x720 to 1920x1080 needs to add 1 new pixel after every 2 pixels (both horizontally and vertically), which means every 3rd pixel is interpolated in post rather than "recorded" (but remember, every pixel from a Bayer sensor is interpolated at some point; what matters is how accurate the interpolation method is).  The good news is that with a decent scaling algorithm this will still be a 4:4:4 image, since you aren't simply copying pixels but sampling from a group of 4:4:4 source pixels, which gives them a very high chance of accuracy.  I'm not saying there will be no quality loss vs source 4:4:4 HD footage, because this still requires every 3rd pixel in the final image to be estimated, which means it can get things wrong from time to time, but any artifacts or softness would be minimal.
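The pixel arithmetic here can be sanity-checked with a short sketch (Python with NumPy; a toy linear interpolation, not how any camera or NLE actually scales):

```python
import numpy as np

# 1280 -> 1920 samples per row: 640 outputs have no recorded counterpart,
# i.e. one of every three output pixels must be synthesised.
src_w, dst_w = 1280, 1920
new_samples = dst_w - src_w
print(new_samples, new_samples / dst_w)      # 640 new samples = 1/3 of the output

# A toy linear interpolation: every synthesised sample stays bounded by its
# recorded neighbours, which is why the estimation error stays small.
row = np.arange(src_w, dtype=np.float64)     # stand-in for one row of recorded pixels
out = np.interp(np.linspace(0, src_w - 1, dst_w), np.arange(src_w), row)
print(out.min() >= row.min() and out.max() <= row.max())   # True
```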

 

Compare this method to how a "recorded" pixel is created from a Bayer-pattern sensor: in the camera, a 3x3 block (9 pixels) of red, green, or blue "pixels" (sensels, really) is sampled to interpolate one pixel.  The pixels being created here in post are instead interpolated by sampling a 4x4 block (16 pixels) of full RGB pixels.

 

 

 

Now instead let's say you've cropped 4K/UHD to 2560x1440 (aka 2.5K, two-thirds of UHD's linear resolution) and then resampled that to HD.

 

(First off, if you do any reframing in 2.5K before resampling to HD, try to move in multiples of 4 pixels (4, 8, 12, 16, 20, etc.) from the center, because you're in a 4:2:0 environment that you plan to scale down; if not, your results could be slightly worse, though probably not noticeably.)

 

So cropping UHD to 2.5K hasn't changed the chroma subsampling or bit depth yet, since all you've done is delete the outer pixels.  But when you resample 2.5K to HD, you are removing every 3rd pixel (both horizontally and vertically).  There isn't enough data here to sample up to a 4:2:2 subsampling pattern, so the chroma will be resampled back to 4:2:0 and will likely pick up extra aliasing.  The same goes for the bit depth: it might allow up to 9-bit precision for every other pixel, but that isn't enough to call the entire image 9-bit... though I guess you could call it 8.5bit if you like :)

 

In the end what you're talking about here is an "8.5bit" 4:2:0 1080p image (with extra aliasing) vs a slightly inaccurate 10bit 4:4:4 1080p image.
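Where the extra bit depth in that 10-bit figure comes from can be sketched in a few lines (NumPy; this assumes an idealised 2x2 box downsample, whereas real scalers use larger kernels):

```python
import numpy as np

# Each 1080p pixel is built from a 2x2 block of UHD pixels. The sum of four
# 8-bit values spans 0..1020, which takes 10 bits to store, so the downscaled
# frame can carry finer tonal steps than any single 8-bit source pixel,
# provided the four source pixels actually differ (noise and detail see to that).
rng = np.random.default_rng(0)
uhd = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)   # 8-bit luma codes

hd_10bit = uhd.reshape(1080, 2, 1920, 2).sum(axis=(1, 3))        # 10-bit codes

print(hd_10bit.shape)                        # (1080, 1920)
print(int(hd_10bit.max()) <= 1020)           # True: fits in 10 bits
print(bool((hd_10bit % 4 != 0).any()))       # True: tones no 8-bit pixel could hold
```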


We have an image with more information, but it doesn't translate into smoother tonality or better colours.

 

No.  There was no increase in information.  There could not have been such an increase in a captured image, unless something artificial was added.

 

The best possible result would be that the color depth would remain identical, with no loss in the conversion.


First we must consider FSAIQ: http://www.clarkvision.com/articles/digital.sensor.performance.summary/


 

Once that has been determined, we can figure out how many gigabecquerels of radiation from Fukushima must be shielded to prevent excess sensor noise depending on exact GPS position on the Earth to maximize image quality before sub-quantum supersampling in the complex spectral domain needed to yield one perfectly constructed pixel after full image reconstruction in the real spatial domain.

 

On a serious note, there's nothing significant to be gained from 4K => 1080p resampling in terms of color space / bit depth. Anyone can up- or down-size an image in an image editing tool to test this (Lanczos, Mitchell-Netravali, Catmull-Rom, bicubic, etc. won't matter). In terms of aliasing/artifacts and detail preservation, this is helpful: http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html#__section002__
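To run that experiment yourself, here's a minimal sketch using Pillow and NumPy; the random frame is only a stand-in to exercise the pipeline, so substitute your own 4K still for a meaningful comparison:

```python
import numpy as np
from PIL import Image

# Stand-in 4K frame; replace with Image.open("your_4k_frame.png") on real footage
rng = np.random.default_rng(1)
frame = Image.fromarray(rng.integers(0, 256, (2160, 3840, 3), dtype=np.uint8))

# Downscale to 1080p with several common kernels
filters = {"lanczos": Image.LANCZOS, "bicubic": Image.BICUBIC, "bilinear": Image.BILINEAR}
small = {name: np.asarray(frame.resize((1920, 1080), f), dtype=np.int16)
         for name, f in filters.items()}

# Pairwise mean absolute difference, in 8-bit code values
mad = np.abs(small["lanczos"] - small["bicubic"]).mean()
print(f"lanczos vs bicubic mean absolute difference: {mad:.2f} codes")
```

A/B-ing the outputs (and their difference images) on a real frame is the quickest way to check jcs's claim that the kernel choice barely moves the tonal result.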


there's nothing significant to be gained from 4K => 1080p resampling in terms of color space / bit depth.

 

Certainly, one can never gain color depth in an image that is already captured.  However, one can increase the bit depth while reducing resolution, and still maintain the same color depth.

 

 

In terms of aliasing/artifacts and detail preservation, this is helpful: http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html#__section002__

 

Aliasing/artifacting is a whole other deal.


Certainly, one can never gain color depth in an image that is already captured.  However, one can increase the bit depth while reducing resolution, and still maintain the same color depth.

 

Example images? You could post a 4K and a 1080p* image for a single-blind trial  B)

 

 

 

*1080p and 540p would also work.


I'm sure when the GH4 starts hitting stores there will be people showing that it is possible to pull out 4:4:4 and somewhere between 8 and 10 bits in an image downconverted to 1080p from 4K 4:2:0 8-bit. The exact performance will depend on the algorithms that Panasonic utilized.

 

 

Example images? You could post a 4K and a 1080p* image for a single-blind trial  B)

 

 

 

*1080p and 540p would also work.


The compression algorithm isn't important here. I did some synthetic tests (post #118) simulating a best-case scenario (a full 4K 4:4:4 8-bit image to a 2K 4:4:4 32-bit one). I've tried different scaling algorithms, and even adding noise doesn't give us any extra colour information.

 
We technically get an image with increased bit depth (since we're blending some pixels), but it doesn't translate into smoother tonality or better colours.

 

 

I'm sure when the GH4 starts hitting stores there will be people showing that it is possible to pull out 4:4:4 and somewhere between 8 and 10 bits in an image downconverted to 1080p from 4K 4:2:0 8-bit. The exact performance will depend on the algorithms that Panasonic utilized.


I haven't followed the topic, but I was curious if I could simulate the theory.

On Fred Miranda I found this topic that explains how Photoshop saves jpgs in 4:2:0 or 4:4:4

I used a full resolution JPG from the GH2 and took the following steps:

 

Cropped the image to 3840x2160 and saved as JPG quality 6. The result is a 4K 4:2:0 still.

gallery_20742_64_369576.jpg

 

Then I resized this image to 1920x1080 and saved it as quality 10 to make a 4:4:4 image.

gallery_20742_64_41459.jpg

 

I also resized the 4K 4:2:0 still to 1920x1080 and saved it at quality 6 to make a 1080p 4:2:0 image.

gallery_20742_64_16625.jpg

 

200% crops of the above images in same order:

 

4K 4:2:0

gallery_20742_64_31679.jpg

 

1080p 4:4:4

gallery_20742_64_60368.jpg

 

1080p 4:2:0

gallery_20742_64_22088.jpg

 

Not sure if this test is correct, but to me it looks like you can gain color resolution from downsampling the 4K 4:2:0 file to 1080p. Correct me if I'm wrong!
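Julian's Photoshop steps can also be scripted; a sketch with Pillow, which exposes the JPEG chroma subsampling directly (0 = 4:4:4, 2 = 4:2:0), using a synthetic frame in place of the GH2 still:

```python
import numpy as np
from PIL import Image

# Stand-in for the full-resolution GH2 JPG (substitute Image.open on a real still)
rng = np.random.default_rng(2)
src = Image.fromarray(rng.integers(0, 256, (2160, 3840, 3), dtype=np.uint8))

# Step 1: the "4K 4:2:0 still"
src.save("uhd_420.jpg", quality=60, subsampling=2)

# Step 2: resize to 1080p and save with full chroma -> the "1080p 4:4:4" candidate
hd = Image.open("uhd_420.jpg").resize((1920, 1080), Image.LANCZOS)
hd.save("hd_444.jpg", quality=90, subsampling=0)

# Step 3: the "1080p 4:2:0" control
hd.save("hd_420.jpg", quality=60, subsampling=2)
```

Note that Photoshop's "quality 6"/"quality 10" only map loosely onto Pillow's 0-100 quality scale; the subsampling flag is the part of the test that matters.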


The compression algorithm isn't important here. I did some synthetic tests (post #118) simulating a best-case scenario (a full 4K 4:4:4 8-bit image to a 2K 4:4:4 32-bit one). I've tried different scaling algorithms, and even adding noise doesn't give us any extra colour information.

 
We technically get an image with increased bit depth (since we're blending some pixels), but it doesn't translate into smoother tonality or better colours.


not THAT algorithm... the one that works on the 10-bit 4K to make it an 8-bit 4K...

Example images? You could post a 4K and a 1080p* image for a single-blind trial  B)

 

There are plenty of examples in which a higher-resolution image has been binned to a lower resolution with a higher bit depth.  The technique is mainly used to reduce noise, increase camera sensitivity, and maintain rich colors in post-production transfers.

 

Again, using such a process on a captured image can never increase color depth.  So, barring any inefficiencies/inequities in the conversion process or the display devices, the resulting images should look the same (except that one has more resolution).

 

In addition, it is useless to try to compare such results unless there are two display devices of identical size that can be set to correspond to the bit depth and resolution inherent in each image.  Displaying a 32-bit image on a 16-bit monitor would be a waste of time when assessing the results of binning to a lower resolution at a higher bit depth.


There are plenty of examples in which a higher-resolution image has been binned to a lower resolution with a higher bit depth.  The technique is mainly used to reduce noise, increase camera sensitivity, and maintain rich colors in post-production transfers.

 

Again, using such a process on a captured image can never increase color depth.  So, barring any inefficiencies/inequities in the conversion process or the display devices, the resulting images should look the same (except that one has more resolution).

 

In addition, it is useless to try to compare such results unless there are two display devices of identical size that can be set to correspond to the bit depth and resolution inherent in each image.  Displaying a 32-bit image on a 16-bit monitor would be a waste of time when assessing the results of binning to a lower resolution at a higher bit depth.

 

But the difference between 8-bit and 16-bit is pretty obvious and more in the ballpark of this test.

However, since most of us don't have 4K monitors maybe the test won't be very evident.

And from what I understand (and most here agree), bit depth can't be improved, can it? It's only the color sampling that actually improves.


But the difference between 8-bit and 16-bit is pretty obvious and more in the ballpark of this test.

 

The difference between 8-bit and 16-bit might be apparent when comparing two images of the same resolution (on monitors of the same resolution and bit depth to match each image).  Such a difference would become obvious if the scene contained a gradation subtle enough to cause banding in the 8-bit image but not in the 16-bit image.

 

However, in such a scenario, if you could continually increase the resolution of the camera sensor and monitor of the 8-bit system, you would find that at some point the banding would disappear from the 8-bit image.  By increasing the resolution of the 8-bit system, you are also increasing its color depth -- yet its bit depth always remains 8-bit.

 

One can easily observe a similar phenomenon.  Find a digital image that exhibits slight banding when you are directly in front of it, then move away from it.  The banding will disappear at some point.  By moving away from the image, you are effectively increasing the resolution, making the pixels smaller in your field of view.  However, the bit depth is always the same, regardless of your viewing distance.
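The resolution-for-tonality trade described here can be sketched numerically, under the assumption that sensor noise acts as a natural dither (NumPy):

```python
import numpy as np

# A scene tone that falls between two 8-bit codes, captured with ~1 code of noise
rng = np.random.default_rng(3)
true_level = 100.3
captured = np.clip(np.round(true_level + rng.normal(0.0, 1.0, (256, 256))), 0, 255)

# Each pixel on its own is stuck on an integer code near 100...
print(np.unique(captured))
# ...but averaging many pixels (i.e. spending resolution) recovers the
# in-between tone to much better than 8-bit precision:
print(captured.mean())   # close to 100.3
```

This is the same mechanism as stepping back from a banded image: pooling more, smaller samples per perceived area smooths the quantisation steps even though every sample stays 8-bit.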

 

 

However, since most of us don't have 4K monitors maybe the test won't be very evident.

 

Such a test wouldn't be conclusive unless each monitor matches the resolution and bit depth of the image it displays.

 

 

And from what I understand (and most here agree), bit depth can't be improved, can it? It's only the color sampling that actually improves.

 

Most image makers are not aware of the fact that bit depth and color depth are two different properties.  In digital imaging, bit depth is a major factor of color depth, but resolution is an equally major factor of color depth (in both digital and analog imaging).

 

Therefore, one can sacrifice resolution while increasing bit depth, yet the color depth remains the same (or is decreased).  In other words, swapping resolution for more bit depth does not result in an increase in color depth.


 

 

Not sure if this test is correct, but to me it looks like you can gain color resolution from downsampling the 4K 4:2:0 file to 1080p. Correct me if I'm wrong!

 

Completely correct. Not sure why jcs is all against it because he is wrong. There are significant advantages to scaling 4k 4:2:0 to 4:4:4 HD.

 

One thing does come to mind: when we are talking about 8-bit, the h264 compression algorithms actually do tend to drop the actual bit depth to 7 or even 6. So sometimes we actually start with 7-bit 4:2:0 footage with all kinds of compression artifacts.


...when we are talking about 8-bit, the h264 compression algorithms actually do tend to drop the actual bit depth to 7 or even 6. So sometimes we actually start with 7-bit 4:2:0 footage with all kinds of compression artifacts.

 

That's interesting, I didn't realize how destructive h.264 could be to color depth.  Sorry to put you on the spot, but I'd like to see some links or examples that back that claim (I'll also google it). Thanks hmcindie!


Julian's images: saving the 4K example at quality 6 creates DCT macroblock artifacts that don't show up in the 444 example at quality 10. All the images posted are 420: that's JPG. To compare the 1080p 444 example to the 4K 420 example, bicubic-scale the 1080p image up to match exactly the same image region as the 4K image (the examples posted are different regions and scales). The 1080p image will be slightly softer but should have less noise and fewer artifacts. Combining both images as layers in an image editor, then computing the difference and scaling the brightness/gamma up so the changes are clearly visible, will show exactly what has happened numerically; helpful if the differences aren't obvious on visual inspection.

 

We agree that 420 4K scaled to 1080p 444 will look better than 1080p captured at 420 (you'd need to shoot a scene with the camera on a tripod and compare A/B to really see the benefits clearly). 444 has full color sampling per pixel vs 420 having 1/4 the color sampling (1/2 vertical and 1/2 horizontal). My point is that we're not really getting the significant color-element bit-depth improvement, with the post-grading latitude it brings, that a native 10-bit capture provides (at best there's ~8.5-9 bits of information encoded after this process, and it will be hard to see much difference when viewed normally vs. via analysis). Another thing to keep in mind is that all >8-bit (24-bit) images, e.g. 10-bit (30-bit), need a 10-bit graphics card and monitor to view. Very few folks have 10-bit systems (I have a 10-bit graphics card in one of my machines, but am using 8-bit displays). >8-bit images need to be dithered and/or tone-mapped down to 8-bit to take advantage of the >8-bit information. Everything currently viewable on the internet is 8-bit (24-bit) and almost all 420 (JPG and H.264).
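The difference-and-boost inspection described above is easy to script; a hedged sketch in NumPy (the gamma of 4 is an arbitrary choice, just strong enough to lift single-code errors into view):

```python
import numpy as np

def boosted_difference(a, b, gamma=4.0):
    """Absolute difference of two uint8 image arrays, lifted with a strong
    gamma so near-invisible errors become obvious on an 8-bit display."""
    d = np.abs(a.astype(np.float64) - b.astype(np.float64)) / 255.0
    return (d ** (1.0 / gamma) * 255.0).astype(np.uint8)

# A single-code difference (invisible on its own) lifts to a clearly visible grey:
a = np.full((4, 4), 128, dtype=np.uint8)
b = np.full((4, 4), 129, dtype=np.uint8)
print(boosted_difference(a, b)[0, 0])   # 63: easy to spot against black
```

Feed it two decoded frames (e.g. via `np.asarray(Image.open(...))`) instead of the toy arrays to reproduce the layer-difference trick outside Photoshop.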

 

re: H.264 being less than 8 bits - it's effectively a lot less than 8 bits, not only from the initial DCT quantization and compression (for the macroblocks), but also from the motion vector estimation, motion compensation, and macroblock reconstruction (which includes fixing the macroblock edges on higher-quality decoders).


Julian's images: saving the 4K example at quality 6 creates DCT macroblock artifacts that don't show up in the 444 example at quality 10. All the images posted are 420: that's JPG. 

 

You can easily create 4:4:4 JPEGs. They are not forced to be 4:2:0.

 

Those images are 200% crops, they don't have to be 4:4:4.


That's true - JPEG supports 444, 422, 420, lossless, etc. However, when comparing images the compression level needs to be the same (most importantly, the DCT quantization). The rescaled web images are likely 420, while the downloads could still be originals. The point about 420 is that web-facing content is typically 420 (with rare exceptions, given the desire to conserve bandwidth and save costs).

 

If at 100% A/B cycling we can't see any difference, then there's nothing gained since the results aren't visible (except when blown up or via difference analysis, etc.).

 

Scaling an image down (with e.g. bicubic) then scaling it back up acts like a low-pass filter (it smoothes out macroblock edges and reduces noise). I downloaded the 4K 420 image, scaled it down then back up, and A/B-compared in Photoshop - I couldn't see a difference other than the reduction of macroblock edges (low-pass filtering) etc. Adding a sharpen filter got both images looking very similar. A difference compare was black (not visible); however, with the gamma cranked up, the differences from the effective low-pass filtering are clear.
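That round-trip observation can be mimicked with a toy version (NumPy; a 2x2 box downsample and nearest-neighbour upsample rather than bicubic, but the low-pass behaviour is the same):

```python
import numpy as np

# Noisy 256x256 plane standing in for a frame with grain and macroblock edges
rng = np.random.default_rng(4)
plane = rng.normal(0.0, 1.0, (256, 256))

down = plane.reshape(128, 2, 128, 2).mean(axis=(1, 3))     # scale down (box filter)
up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)      # scale back up

# High-frequency content is attenuated: the round trip behaves as a
# low-pass filter, which is why noise and macroblock edges soften.
print(plane.var(), up.var())   # variance drops to roughly a quarter
```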

