
Mac app to resample GH4 8 bit 4:2:0 to 10 bit 4:4:4


Thomas Worth

Recommended Posts

Would this method also work on the Canon XC10?

Works on most any digital imaging file.  All it does is retain the color depth of the original, higher-res file by swapping resolution for bit-depth.

 

By the way, contrary to some of the more recent comments, the method discussed in this thread is very different from simply scaling down an image. A simple down-scaling throws away information and does not retain the original color depth of the image.


Do you have evidence to back the claim that scaling in other programs throws away color information? I have tested this in many different apps and have never seen that behavior in any of them. Resolve, for example, operates in a 32-bit RGBY colorspace and will not throw away any color information during scaling.


I did not make any claims regarding the scaling method used by any programs.

 

By "simple down-scaling," I mean reducing the resolution of the image without any summing nor averaging the adjacent pixels in the original image and without increasing the bit-depth in the final image.  Such a simple conversion throws away the information of the unused pixels from the original image.

 

The fact that Resolve generally operates at 32-bit depth in an RGBY color space is irrelevant if it doesn't increase the bit-depth when down-scaling.  Again, to down-scale a digital image and retain all of the color depth of the original image, the adjacent pixels in the original image must somehow be summed or averaged, and the bit-depth of the final image must be increased.  If your program/transcoder is not doing both of those things, then you are losing color depth information.
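
Here is a minimal sketch of what I mean, in Python with NumPy (the frame size and variable names are just illustrative):

```python
import numpy as np

# Illustrative 8-bit UHD-sized luma plane (2160 x 3840), values 0-255.
src = np.random.randint(0, 256, (2160, 3840), dtype=np.uint8)

# Sum each 2x2 block of adjacent pixels.  Four 8-bit values (0-255 each)
# sum to a 0-1020 range, so the 10-bit depth of the result is implicit
# in the summing itself -- nothing is rounded or discarded.
blocks = src.astype(np.uint16).reshape(1080, 2, 1920, 2)
dst = blocks.sum(axis=(1, 3))  # 1080 x 1920, values 0-1020

print(dst.shape, dst.max())
```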


The XC10 is already 4:2:2, so maybe there's not as much to gain.

A down-scale from UHD to full HD is a 4:1 ratio.  So, regardless of the resolution per color channel, about 3/4 of the information per color channel would be thrown away when down-scaling from UHD to full HD without summing or averaging the adjacent pixels in the original digital image (and without increasing the bit-depth of the final image -- assuming the final chroma sub-sample is identical to the original).
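
For the record, the 4:1 figure is just pixel counting:

3840 × 2160 = 8,294,400 pixels (UHD)
1920 × 1080 = 2,073,600 pixels (full HD)
8,294,400 ÷ 2,073,600 = 4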


I did not make any claims regarding the scaling method used by any programs.

 

By "simple down-scaling," I mean reducing the resolution of the image without any summing nor averaging the adjacent pixels in the original image and without increasing the bit-depth in the final image.  Such a simple conversion throws away the information of the unused pixels from the original image.

 

The fact that Resolve generally operates at 32-bit depth in an RGBY color space is irrelevant if it doesn't increase the bit-depth when down-scaling.  Again, to down-scale a digital image and retain all of the color depth of the original image, the adjacent pixels in the original image must somehow be summed or averaged, and the bit-depth of the final image must be increased.  If your program/transcoder is not doing both of those things, then you are losing color depth information.

Yes, and every single NLE, color grading, and encoding program uses some kind of averaging when down-scaling. It is practically impossible to find one that even gives you the option not to; Premiere, Photoshop, After Effects, Nuke, Resolve, Media Encoder, and Final Cut all do. Resolve converts everything to 32-bit float and processes it at that bit depth, which is the highest precision in common use (unless you count doubles, which no software I am aware of uses).
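
A quick sketch of the no-rounding point, using NumPy's float32 as a stand-in for a 32-bit float pipeline (the pixel values are made up):

```python
import numpy as np

# Four adjacent 8-bit code values.
pixels = np.array([10, 11, 11, 10], dtype=np.uint8)

# Averaging in 32-bit float keeps the fractional part instead of
# rounding back to an 8-bit integer.
avg = pixels.astype(np.float32).mean()
print(avg)  # 10.5 -- exactly representable in float32
```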


Yes, and every single NLE, color grading, and encoding program uses some kind of averaging when down-scaling. It is practically impossible to find one that even gives you the option not to; Premiere, Photoshop, After Effects, Nuke, Resolve, Media Encoder, and Final Cut all do.

I'll have to take your word that those programs average (without rounding) when they downscale.

 

However, summing is more accurate, and increased bit-depth is implicit with summing.
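
A rough sketch of the difference in plain Python (the pixel values are invented, and the rounded-average branch stands in for a naive 8-bit pipeline, not any particular program):

```python
pixels = [57, 58, 58, 58]  # four adjacent 8-bit values

# Summing needs 10 bits to hold the result, and nothing is discarded.
summed = sum(pixels)               # 231

# Averaging and rounding back to 8 bits discards the low two bits.
avg_8bit = round(sum(pixels) / 4)  # 58

# Scaling the rounded average back up no longer matches the sum.
print(summed, avg_8bit * 4)        # 231 232
```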

 

 

Resolve converts everything to 32-bit float and processes it at that bit depth, which is the highest precision in common use (unless you count doubles, which no software I am aware of uses).

That's fine, but more importantly, does Resolve yield greater bit-depth in the final down-scaled file, without rounding the average?


Yes, Resolve works in 32-bit float, hence it has decimal precision and never rounds to integers. I know what you are thinking with the summing of the four pixels increasing bit depth; it is logical, and I tested this theory just like you did, by writing a program to do it. Yes, if you down-scale 4K to HD you can get 10-bit 4:4:4. However, all you have done is increase the numeric precision from 8-bit to 10-bit; you have not increased the color accuracy. The same is true in any program, including Resolve. For example, let's say you have four pixels whose luminance, if recorded natively in 10-bit (range 0-1023), would have a value of 801. When recorded in 8-bit (range 0-255), those pixels would each have a value of 200. Sum those four pixels and you have a 10-bit value of 800, not 801.

So you have increased the bit depth, but the accuracy of the data is still the same as it was at 8-bit. There is no benefit to the increase in bit depth, as the final pixel will have the same potential error as before as far as grading is concerned. In fact, if I am not mistaken, there is the same amount of potential error as if you had just converted the 4K to 10-bit without down-scaling. There is an increase in chroma resolution, though, as you now have chroma data for every pixel, hence no sub-sampling. Basically, the increased bit depth is artificial and not the same as a 10-bit capture.
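
That example, worked through in Python (I am assuming simple truncation for the 10-bit to 8-bit quantization):

```python
# True scene luminance on a 10-bit scale (0-1023).
true_10bit = 801

# An 8-bit capture quantizes to a 0-255 scale; with truncation,
# 801 >> 2 lands on code value 200 for all four pixels.
eight_bit = true_10bit >> 2         # 200

# Summing four identical 8-bit pixels yields a 10-bit number...
summed = 4 * eight_bit              # 800

# ...but the quantization error from capture is still baked in.
print(summed, true_10bit - summed)  # 800 1
```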

Any program that has the option of working in 32-bit will not round. All the programs I listed either have the option to or do so by default.


I know what you are thinking with the summing of the four pixels increasing bit depth; it is logical, and I tested this theory just like you did, by writing a program to do it.

I never tested it.  I merely have a little knowledge of how high-end down-conversions (and up-conversions) have worked since the early days of DV.  Plus, the math and theory are straightforward and "dumb-simple."

 

 

Yes, if you down-scale 4K to HD you can get 10-bit 4:4:4. However, all you have done is increase the numeric precision from 8-bit to 10-bit; you have not increased the color accuracy.

I think we basically agree here.  As I have maintained throughout this thread, summing in a down-conversion is merely swapping resolution for bit depth -- not increasing color depth.  One can sacrifice resolution for greater bit-depth, but one can never increase the color depth of a digital image (not without introducing something artificial).

 

However, I am not sure whether the "color accuracy" can be increased during a down-conversion.

 

 

For example, let's say you have four pixels whose luminance, if recorded natively in 10-bit (range 0-1023), would have a value of 801. When recorded in 8-bit (range 0-255), those pixels would each have a value of 200. Sum those four pixels and you have a 10-bit value of 800, not 801.

A lot depends on what is depicted by those four pixels.  One of the four summed pixels might be 199, another might be 202, and the remaining two might be 200 -- hence, a sum of 801.
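
In Python, with the values I suggested:

```python
# Four adjacent pixels whose true luminance averages 801/4 = 200.25
# on a 10-bit scale, captured in 8-bit with natural variation.
pixels = [199, 202, 200, 200]

# The sum lands on the true 10-bit value: the variation between
# neighboring pixels carries the sub-8-bit information.
print(sum(pixels))  # 801
```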

 

Obviously, smoother surfaces/areas (such as the example you gave) can cause minute "color accuracy" discrepancies.  The main instance in which such minute discrepancies become apparent is banding in smooth areas.  Banding has been discussed in this thread, and, nevertheless, the color depth of the original image is maintained in a down-conversion when the pixels are summed -- even with banding.

 

 

So you have increased the bit depth, but the accuracy of the data is still the same as it was at 8-bit. There is no benefit to the increase in bit depth, as the final pixel will have the same potential error as before as far as grading is concerned.

No, there is definitely a benefit in increasing the bit-depth during a down-conversion.

 

It is important to understand that there is a world of difference between color depth and what you call "color accuracy."   If you do not sum and also increase the bit depth in a down-conversion, you throw away valuable color depth -- even if the "color accuracy" remains "8-bit" on some smooth sections of the image.  Such a sacrifice in color depth will be apparent in the more complex, "cluttered" sections of the image that have zillions of complex transitions between color tones.

 

If you don't sum the pixels and don't increase the bit-depth, you may or may not have occasional banding (8-bit accuracy) but you will certainly have reduced color depth (apparent in the more complex areas of the image).  If you do sum and do increase the bit-depth, you likewise may or may not have occasional banding, but you will have nonetheless maintained the color depth of the original image (no reduced color depth).

 

 

In fact, if I am not mistaken, there is the same amount of potential error as if you had just converted the 4K to 10-bit without down-scaling. There is an increase in chroma resolution, though, as you now have chroma data for every pixel, hence no sub-sampling. Basically, the increased bit depth is artificial and not the same as a 10-bit capture.

The increased bit-depth is not artificial -- it is merely sacrificing resolution for bit-depth to maintain color depth.

 

Most people don't realize that resolution is a major factor in color depth, and that fact is usually the misunderstood point in the aforementioned down-conversions.  In fact, you could have a system with a bit depth of "1" and, with enough resolution, have the same degree of color depth as a 12-bit, 4:4:4 system.
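
A toy sketch of that idea in Python with NumPy (a made-up 1-bit system, not a claim about any real device):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target grey level on a 12-bit scale (0-4095).
target = 2731

# A 64x64 patch of 1-bit pixels (4096 of them) can represent any
# 12-bit level: switch on exactly `target` of the 4096 pixels.
patch = np.zeros(4096, dtype=np.uint8)
patch[rng.choice(4096, size=target, replace=False)] = 1

# Summing the 1-bit pixels recovers the 12-bit value exactly --
# resolution traded for bit-depth.
print(patch.sum())  # 2731
```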

 

Actually, there exist countless images that have absolutely no bit-depth, yet every one of those images has color depth equal to or greater than that of 12-bit, 4:4:4 images.

 

By the way, the mathematical relationship between color depth, resolution and bit depth is very simple in digital RGB imaging systems:

COLOR DEPTH = (RESOLUTION × BIT DEPTH)³

