
Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4


Andrew Reid



Exactly.  The same goes for 4K scaled to 1080.  I did tests on my own 1D C (8-bit 4:2:2) months ago.  First I converted the 4K MJPEG to a 16-bit TIFF.  Then I tried Photoshop and its different scaling algorithms, Premiere Pro, After Effects, Resolve 10, Capture One and others to do the scale.  I viewed the results on a calibrated and profiled wide-gamut NEC PA271, a much larger color space than Rec.709/sRGB (not that it makes much, if any, difference as long as the results are viewed on the same screen).  It was very difficult to tell the images apart, if at all, other than in sharpness and noise.  My opinion, and what I've read, is that After Effects does the best scale, as it has the newest algorithms according to Todd Kopriva at Adobe.
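(For anyone who wants to run a similar filter comparison themselves, here is a minimal sketch using OpenCV; the filename "frame_4k.tif" is a placeholder for a 16-bit TIFF exported from the footage, and OpenCV's filters are stand-ins, not the exact algorithms used by Photoshop, After Effects or Resolve.)

```python
import cv2

src = cv2.imread("frame_4k.tif", cv2.IMREAD_UNCHANGED)   # keeps the 16-bit depth
half = (src.shape[1] // 2, src.shape[0] // 2)             # 4K -> 1080-class size

filters = {
    "nearest": cv2.INTER_NEAREST,
    "bilinear": cv2.INTER_LINEAR,
    "bicubic": cv2.INTER_CUBIC,
    "area": cv2.INTER_AREA,        # plain box average
    "lanczos": cv2.INTER_LANCZOS4,
}
for name, flag in filters.items():
    out = cv2.resize(src, half, interpolation=flag)
    cv2.imwrite(f"frame_1080_{name}.tif", out)             # compare these side by side
```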

 

 

 

Off topic: the GH4 is interesting; after the crop for 4K, the sensor area used is barely larger than 1 inch.  That fact, combined with the 100 Mbps compression and 8-bit 4:2:0 recording (unless you go GH4 + YAGH + Odyssey 7Q + codec license + proprietary SSD, which adds up to about $7k from the info we currently have), means other options may be a better fit unless resolution (which does not necessarily mean sharper, just more pixels, unless downscaled) is your primary goal.  Look at this test done at http://***URL not allowed***/?p=21457 of the Sony FDR-AX1, with a 1/2.3" sensor and 100 Mbps compression, for around $4500.  Interesting!


It's easy to see that if we add four 8-bit values together, along with random sensor noise, we can cover 0..1023 (10 bits). One issue is that by adding the samples instead of averaging them (which would reduce noise), we also add their noise. Since dynamic range runs from the noise floor to saturation, raising the noise floor isn't helpful (although the added noise may actually help reduce banding).
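(A tiny numpy sketch of that summing idea, with made-up numbers: four 8-bit samples of the same level, each nudged by a little noise, summed into one code. The sum spans 0..1020, roughly the 10-bit range described above, and it is the noise that lets the in-between codes appear at all.)

```python
import numpy as np

rng = np.random.default_rng(0)
true_level = 100.6                                   # underlying scene level, in 8-bit units
noise = rng.normal(0.0, 0.5, size=4)                 # small random sensor noise per sample
samples = np.clip(np.round(true_level + noise), 0, 255).astype(int)

summed = samples.sum()                               # one code in 0..1020 (~10-bit range)
print(samples, "->", summed)                         # e.g. [101 100 101 101] -> 403
```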

 

Scaling 4K to 1080 in an NLE is effectively an averaging operation (a low-pass filter), which results in noise reduction. Averaging four values in floating point also allows more possible tonal values than the 256 values of a single 8-bit sample.

 

A way to test what an NLE might do would be to take a 1080p frame and resize to 540p and see how it does with fragile sky imagery.
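(A rough sketch of that 1080p-to-540p test, assuming OpenCV and a frame exported as "sky_1080.png" with a banding-prone sky; converting to float before the resize is what lets the averaged output hold in-between tonal values.)

```python
import cv2
import numpy as np

frame = cv2.imread("sky_1080.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
small = cv2.resize(frame, (960, 540), interpolation=cv2.INTER_AREA)   # box average

# The 8-bit source only holds whole codes; the averaged float result can hold
# quarter-steps, which is the extra tonal resolution being discussed.
print("distinct levels before:", len(np.unique(frame)))
print("distinct levels after: ", len(np.unique(small)))
```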


Just to double-check my results from months earlier, I found a shot from two weeks ago with some blue sky showing banding (4K 4:2:2 8-bit).  I saved one copy as a 16-bit TIFF.  I saved another exactly the same but downscaled by 50% using bicubic in Photoshop.  I then opened both in Photoshop and toggled between them, one viewed at 200% and the other at 400%, so they would be the same enlarged size on screen.  The banding looked exactly the same in both, with only the slightest change in contrast, which makes it appear a hair sharper, but it is really almost indiscernible.  I then applied the same amount of saturation to each image to see if I could notice a difference in tonality.  Zero.  One thing I must admit I forgot this time around is making sure the scaling was done in linear gamma; I can't remember if Photoshop has this as a default.
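(For reference, a scale in linear gamma can be approximated outside Photoshop. A minimal sketch, assuming an 8-bit export named "banding_test.png" and using the simple 2.2 power curve rather than the exact piecewise sRGB transfer function; a 16-bit file would be divided by 65535 instead of 255.)

```python
import cv2
import numpy as np

src = cv2.imread("banding_test.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0
h, w = src.shape[:2]

linear = src ** 2.2                                                    # decode gamma (approx.)
small = cv2.resize(linear, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
out = np.clip(small, 0.0, 1.0) ** (1.0 / 2.2)                          # re-encode gamma

cv2.imwrite("banding_test_half_linear.png", np.round(out * 255.0).astype(np.uint8))
```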


Most do not understand the color depth advantage of higher resolution; they only see the "sharpness" advantage.

 

There are common misconceptions about this scenario, so here are some basic facts.

 

First of all, BIT DEPTH ≠ COLOR DEPTH.  This is the hardest concept for most to grasp, but bit depth and color depth are not the same thing.  Basically, bit depth is a major factor in color depth, with resolution being the other major factor.

 

A fairly accurate formula for the relationship of color depth, bit depth and resolution is:

COLOR DEPTH = (BIT DEPTH × RESOLUTION)^3

 

This mathematical relationship means that a small increase in resolution can yield a many-fold increase in color depth.

 

The above formula is true for linear-response, RGB-pixel-group sensors.  When considering non-RGB sensors (e.g. Bayer, X-Trans, etc.) and sensors with a non-linear response, the formula gets more complex.  In addition, the formula does not take into account human perception of differing resolutions, nor does it account for binning efficiency when converting images from a higher resolution to a lower one.

 

More detailed info can be found on this page.

 

 


I'm sorry tupp, but I think what you wrote may be misleading.  I believe you're looking at a lot of video compression math, and not camera sensor (RAW) math, which is the prime driver of image quality.

 

First, I've tried creating images straight from the Bayer mosaic (not de-Bayered), hoping the eye would blend the red, green and blue pixels on its own.  I could not get it to work.  You must de-Bayer the pixel values to create full-color pixels.  Here is a video I created where you can see how the eye cannot do this on its own.

 

 

In the other article you link to, you mention that 10-bit color is 2^10, or 1,024.  So if you have a resolution of 2 megapixels (around 1080p), that gives you a color depth of about 2 billion by your equation above.

 

HOWEVER, most camera sensors do not sample at 10 bits but at more like 14 bits.  So that gives you about 16,384 x 2 million, or roughly 33 billion, color depth.

 

By your calculation, you could increase the resolution 4 times (like 4K), so 8 million times 1,024 is around 8 billion "color depth" by your equation.  

 

If you multiplied 1080 (2 million pixels) by 11 bits (2,048) you'd get 4 billion.  By 12 bits you get 8 billion, similar to 4K in your equation.
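(The arithmetic above, written out. This uses the formula as it is applied in this post, i.e. 2^bits multiplied by a rounded pixel count.)

```python
cases = [
    ("1080p @ 10-bit", 10, 2_000_000),
    ("1080p @ 11-bit", 11, 2_000_000),
    ("1080p @ 12-bit", 12, 2_000_000),
    ("1080p @ 14-bit", 14, 2_000_000),
    ("4K    @ 10-bit", 10, 8_000_000),
]
for name, bits, pixels in cases:
    print(f"{name}: {(2 ** bits) * pixels:,}")
# 1080p at 10-bit ~2 billion, at 12-bit ~8 billion, 4K at 10-bit ~8 billion,
# and 1080p at 14-bit ~33 billion, i.e. the figures quoted above.
```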

 

Beyond that point, the camera's 14-bit data is already past what 4K gives you.

 

How do I explain this?  The number of bits that represent a color has two aspects:

 

1. The larger the bit value, the GREATER accuracy you can have in representing the color.

2. The larger the bit value, the greater RANGE you can have between the same color in two neighboring pixels, say.

 

The "dynamic range" aspect of color depth is what is missing from your thinking.  

 

Higher resolution does not create higher dynamic range.   Dynamic range is a function of bit depth at the pixel level.  This has been pointed out by many on this forum, though it's difficult to explain to people who have only worked with compressed video, which almost always assumes an 8-bit-per-channel color space that can cover all visible colors.  Once you work with RAW video you get it, or at least I did.


Well, you may give people the impression that 8-bit means only 256 color choices per pixel, and 10-bit means only 1024 color choices per pixel. This is of course not correct.

 

To take your ruler example, say the ruler is 10 centimeters long. That means you have 10 values in 8-bit and 40 values in 10-bit. (This example is a bit unfair, as the low numbers may give the impression that the colors are way more off than they would be in reality, but that is another topic.)

 

Say your 10-bit ruler has a value of 39.

 

The 8-bit rulers may, on the other hand, have values of 9, 10, 10, 10. In this case the value would be correct, though of course there are several scenarios where it would end up wrong (but not likely as wrong as just one 8-bit value).

 

As far as I can see, the resulting 10-bit value will be less accurate than a true 10-bit source, but most likely well better than 8-bit. I believe the down-converting process could actually try to hit the "39" and thus be very close to 10-bit in practice, but whether the GH4 pipeline actually works hard at achieving this is unknown to me.

 

 

You don't have 4 boxes of 256 colors versus 1 box of 1024. You have four rulers that measure in cm increments versus one that measures in 1/4 cm increments.

 

 


@maxotics

Thank you for your post.

By the way, on the strength of your work with the EOS-M, I just got one with a Fujian 35mm. I can't wait to start shooting with it!

In regards to my color depth post, the math that I quoted (from the page that I linked) has nothing to do with video compression. In fact, that formula only applies to raw, unadulterated image information. Introducing compression variables would make the math more complex.

However, introducing compression can never increase the color depth capabilities inherent in a given image capture or image viewing system.

I am not sure what the point is with the Bayer images, but the color depth formula probably applies to raw Bayer images, with a slight adjustment. One chooses pixel groups in multiples of four (two green, one blue, one red), and I think the only change to the formula is that one merely sums the bit depth of the two green pixels and then multiplies that sum against the bit depths of the other two pixels.

 

Keep in mind that one is calculating the color depth of a raw image that normally (but not necessarily) has a predominant green cast. Also, be mindful of the fact that there are no Bayer viewing systems (just Bayer sensors).

On the other hand, there are several non-Bayer sensors (even RGB sensors, e.g. the Panavision Genesis), and almost all digital color viewing systems are RGB.

I do not follow the point on the formula discrepancy, but note that for the formula to work, one must choose a percentage of an image frame, and one must consistently use that same percentage for all image frames to assess their relative color depth. One can choose for the area to be the entire image, but then one is essentially taking the entire frame as one blended pixel group.

If you are consistently utilizing the same image percentage throughout your example, please simplify your point for my benefit. I do not understand your conclusion, with the statement, "The number of bits that represent a color have 2 aspects."

I think that I agree with the statement: "The larger the bit value the GREATER accuracy you can have in representing the color." I am not sure if "accuracy" is the appropriate term. Certainly, the larger the bit depth, the greater the number of possible colors/shades.

I am not sure this statement was what you meant: "The large the bit value, the greater RANGE you can have between the same color in two neighboring pixels, say." There is a situation in which the color/shade range would be exactly the same regardless of bit depth. In addition, a greater bit depth can actually reduce the dynamic range between two pixel values. I am happy to give examples on request.

Speaking of dynamic range, it really is a property that is independent from bit depth and color depth. Dynamic range involves the possible high and low value extremes relative to the noise level. The bit depth determines the number of available increments within those extremes. There are plenty of examples of systems having high dynamic range with a low bit depth (and vice versa).

I agree with this statement: "Higher resolution does not create higher dynamic range." Resolution and dynamic range are completely independent. However, higher resolution definitely increases color depth.

I disagree with this statement: "Dynamic range is a function of bit-depth at the pixel level." Again, bit depth and dynamic range are two different characteristics. A system can have: great bit depth and low dynamic range or low bit depth and great dynamic range -- or any other combination of the two.

Thanks!

 

[edit -- corrected formula]


@tupp, this is all very complicated stuff.  It sounds like you know what you're talking about.  A lot of people, however, make the leap from 4K to better dynamic range (from end-to-end) and I just wanted to clarify that a bit.  

 

It's true that Bayer sensors borrow colors and don't have true color depth in the way many think of it, but again, that is one of those things that trips everyone up (including me at one point).

 

Hope you enjoy the Fujian.  Best value in show business :)


Does this mean we could also convert 8-bit to 9-bit by downscaling 1080 footage to 720?

 

If so, would we gain anything by delaying the color grading process until after scaling down to the resolution that the footage will be displayed at?
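(Under the simple code-counting view used in this thread, the potential gain scales with how many source pixels are averaged into each output pixel. A quick sketch, ignoring noise, gamma and chroma subsampling:)

```python
import math

def extra_bits(src_w, src_h, dst_w, dst_h):
    """Source pixels per output pixel, expressed as extra bits of code values."""
    return math.log2((src_w * src_h) / (dst_w * dst_h))

print(extra_bits(3840, 2160, 1920, 1080))   # 4K    -> 1080p: 2.0 extra bits
print(extra_bits(1920, 1080, 1280, 720))    # 1080p -> 720p:  ~1.17 extra bits
```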


Speaking of dynamic range, it really is a property that is independent from bit depth and color depth. Dynamic range involves the possible high and low value extremes relative to the noise level. The bit depth determines the number of available increments within those extremes. There are plenty of examples of systems having high dynamic range with a low bit depth (and vice versa).

I agree with this statement: "Higher resolution does not create higher dynamic range." Resolution and dynamic range are completely independent. However, higher resolution definitely increases color depth.

 

I agree.  It is too bad we don't speak of dynamic range in dB as the audio people do.  I think this would be a lot clearer to people.  The bit depth can set a limit to a camera's dynamic range, but it doesn't specify the camera's dynamic range.
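(For anyone who wants the audio-style figure, the usual conversions are 20·log10 of the max-signal-to-noise ratio for dB and log2 of the same ratio for stops; the 4096:1 ratio below is just an example measurement, not a figure from any particular camera.)

```python
import math

def dynamic_range(ratio):
    """Convert a max-signal / noise-floor ratio into dB and photographic stops."""
    return 20 * math.log10(ratio), math.log2(ratio)

db, stops = dynamic_range(4096)              # example: a 4096:1 measured ratio
print(f"{db:.1f} dB, {stops:.1f} stops")     # 72.2 dB, 12.0 stops
```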

 

Michael


I like the analogy of bit depth being like rulers for this argument.  I see a "10-bit HD" source as one person having to produce a measurement in quarter centimeters, while "8-bit 4K to HD" is like four different people measuring in full centimeters and then averaging their measurements.

 

For example, let's say they have to measure a subject that is 3.25 cm long.

 

The guy measuring in 1/4 centimeters easily and accurately produces the measurement of 3 1/4cm.

 

The four guys measuring in full centimeters would each have to choose between 3 cm and 4 cm.  If three of the four measured it as 3 cm and one measured it as 4 cm, their averaged result would be accurate at 3 1/4 cm.  But if they ended up with a different set of initial measurements, they could produce an inaccurate result of 3 1/2 cm or 3 3/4 cm.
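(A small simulation of that scenario: each "measurer" rounds a slightly jittered reading of the same 3.25 cm subject to whole centimeters, with the jitter standing in for sensor noise. Without the jitter, all four would round identically and averaging would gain nothing.)

```python
import random

random.seed(1)
SUBJECT = 3.25   # true length in cm

def four_person_average():
    # Each person reads the subject with up to half a centimeter of noise,
    # then rounds to a whole centimeter; the four readings are averaged.
    readings = [round(SUBJECT + random.uniform(-0.5, 0.5)) for _ in range(4)]
    return sum(readings) / 4.0

trials = [four_person_average() for _ in range(10_000)]
print("average over many trials:", sum(trials) / len(trials))   # close to 3.25
print("single trials:", trials[:5])   # e.g. 3.25, 3.5, 3.0 ... as described above
```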

 

So it is possible to downsample 8-bit 4K and gain color depth similar to 10-bit HD, but you will be introducing an opportunity for slightly inaccurate results compared to an actual 10-bit source; in fact, each pixel has only a 1-in-4 chance of being accurately sampled to 10-bit, and this would be noticed on the edges of colored objects and in gradients.  I don't think it's worth it as a 10-bit replacement, but it's definitely worth it for 4:4:4, and hey, it's better than 8-bit, so if you're stuck with it I would downsample.

 

Bit-rate compression is another story, as 100 Mbps is not enough for 4:4:4 HD.  Luckily we are expanding to 4:4:4 after recording, so we are able to choose a new bit rate with adequate room for the extra information to keep the detail of the original.  I would suggest 400 Mbps at least, up to 1.6 Gbps if you really think it's 10-bit.
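(As a sanity check on those numbers, fully uncompressed 10-bit 4:4:4 1080p comes out near the top figure; 24 fps is assumed here.)

```python
width, height, channels, bits, fps = 1920, 1080, 3, 10, 24
uncompressed_bps = width * height * channels * bits * fps
print(f"{uncompressed_bps / 1e9:.2f} Gbps")   # ~1.49 Gbps, close to the 1.6 Gbps suggested above
```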

 

You don't have 4 boxes of 256 colors versus 1 box of 1024. You have four rulers that measure in cm increments versus one that measures in 1/4 cm increments.

 


Good points, though keep in mind that the algorithm Panasonic uses to go from 10-bit to 8-bit could in fact make sure the four pixels average out to one 10-bit pixel. There are also in-between variations they could use, so accuracy would range from 25% to 100% depending on the algorithm.


So it is possible to down sample 8bit 4k and gain more accurate color depth very similar to 10bit HD...

 

The color depth of a given image can never be increased -- not without introducing something artificial.

 

Increasing bit depth in a digital image while reducing the resolution will, at best, maintain the same color depth as the original.  I think that this established theory/technique is what has been recently "discovered."
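(A worked micro-example of "at best maintain", counted with the 2^bits × pixels accounting applied earlier in the thread rather than the exact formula above: the four 8-bit pixels and the single 10-bit pixel they become count out to the same number of codes per unit of picture area.)

```python
before = 4 * (2 ** 8)    # four 8-bit pixels: 4 x 256 codes per area
after = 1 * (2 ** 10)    # one 10-bit pixel: 1024 codes
print(before, after, before == after)   # 1024 1024 True
```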

 

Again, BIT DEPTH ≠ COLOR DEPTH.  Bit depth determines the number of possible shades per color channel in a digital image.  Color depth is a much broader characteristic, as it majorly involves resolution and as it also applies to analog mediums (film, non-digital video, printing, etc.).

