Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4


Andrew Reid


Imagine you're doing a chroma key with the GH4, and the final result has to be 1080p.

What would you do?

Key with the native 4K 8-bit 4:2:0 (and convert to 1080p afterwards)?

OR with the converted 1080p 10-bit 4:4:4?

I understand that the first option would be better, because you work with more data, but perhaps the second one has other advantages...

Simon.


 

I think the Cineform Studio free version (now called GoPro Studio 2.0) can do this. In step 1, convert your file to CFHD (at the same resolution); it is inflated to a 10-bit file. In step 2, select the file. In step 3, export with custom settings: choose the MOV container, scale down to 1080p, and you can choose Film Scan 2, which is the highest quality and can preserve the detail of 4:4:4 footage.

 

Update: Just tried this with my Galaxy Note 3, which shoots 4K 4:2:0 8-bit at 50Mbps AVC. It looks a million times better than native 1080p. I also tried dropping the same 4K file into After Effects CC working in 16bpc, created a sequence and exported to DNxHD 350x or 440x 4:4:4 10-bit. It looks different from the Cineform (CFHD) workflow above; not sure which is better, to be honest.


Thanks Andrew for laying this out. For those who doubt it theoretically, this is already done in-camera by the C300 and the Alexa. The C300 takes an 8-bit 4K image, downsamples it to 10-bit 4:2:2 1080p, and then re-packs it to 8-bit (to fit in 50Mbps). The Alexa takes an approx. 3K image, downsamples it to 4:2:2 10-bit 1080p, then upsamples it to 4:4:4 ProRes.

 

But what I'd really like to know is the exact recommended workflow. Is it 4K 100Mbps into After Effects in 10-bit RGB, then down to 1080p? Help.


Minor nitpicking, but I see this "4:2:0 as 4:4:4 major discovery" as a bit off, in that it suggests 4K 4:2:0, interpolated and averaged down to 1080p, is going to be equivalent to 4:4:4 1080p shot in camera.

 

It's actually going to give more, as all the HD cams are 4:2:2 max anyway. And with the Blackmagic's 2.5K resolution you can't even get 1080p 4:4:4, because the sensor doesn't have enough luma after debayering to fill HD.
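For anyone checking the subsampling arithmetic, here's a trivial sketch of why a UHD 4:2:0 frame carries a full-resolution chroma plane once downscaled to 1080p (plain Python, just the plane sizes, nothing camera-specific):

```python
# 4:2:0 stores one chroma sample per 2x2 block of luma samples, so the
# chroma planes of a 3840x2160 frame are 1920x1080 -- exactly the size
# of the luma plane after a 2x downscale to 1080p.
uhd_luma = (3840, 2160)
uhd_chroma = (uhd_luma[0] // 2, uhd_luma[1] // 2)   # 4:2:0 chroma plane
hd_luma = (uhd_luma[0] // 2, uhd_luma[1] // 2)      # 1080p after 2x downscale
print(uhd_chroma, hd_luma, uhd_chroma == hd_luma)   # every 1080p pixel keeps its own chroma sample
```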


 

But what I'd really like to know is the exact recommended workflow. Is it 4K 100Mbps into After Effects in 10-bit RGB, then down to 1080p? Help.

 

You need to work in software that runs at 16-bit or in 32-bit float. Just down-converting in 8-bit will not do anything; the trick happens when the 8-bit image is interpolated inside a 16/32-bit container.

I'd say After Effects (or any modern compositing software) can do that if used in 16-bit mode. Editing software is trickier. Maybe Premiere could do it with the "maximum bit depth" option enabled, but I'm not sure without testing.
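For what it's worth, here's a minimal NumPy sketch (my own illustration, not the internals of After Effects or any NLE) of what "interpolating the 8-bit image inside a higher-precision container" means:

```python
import numpy as np

def downscale_2x_float(frame_8bit):
    """Average each 2x2 block of an 8-bit plane in float32.

    The block means land on quarter-steps between adjacent 8-bit
    codes, which is why the result needs a 10-bit (or deeper)
    container to survive export.
    """
    f = frame_8bit.astype(np.float32)          # leave 8-bit integer land
    h, w = f.shape
    blocks = f.reshape(h // 2, 2, w // 2, 2)   # group pixels into 2x2 blocks
    return blocks.mean(axis=(1, 3))            # one float value per block

# A gentle gradient sampled in 8-bit: three pixels read 101, one reads 100.
patch = np.array([[100, 101],
                  [101, 101]], dtype=np.uint8)
print(downscale_2x_float(patch))   # [[100.75]] -> code 403 in 10-bit
```

Truncate that 100.75 back to an 8-bit integer and the extra precision is gone again, which is exactly the point about staying in a 16/32-bit mode until the final export.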


I have to believe that David is talking about "in practice," b/c the math is not perfect.

 

Here's a simple example that illustrates what I'm saying:

 

Take an image with a perfectly uniform shade of grey that falls between two 8-bit values, but closer to the higher value. Let's say between 100 and 101, closer to 101. Well, it's going to be encoded as 101.

 

But let's say you take the same perfectly uniform shade of grey and sample it with 10-bit precision. So it falls between 400 and 404, but lines up perfectly with 403.

 

There is no way that four 8-bit values of 101 are going to mathematically sum to 403. They are going to sum to 404. And 404 <> 403.

 

While I'm sure the down-rezzing helps the bit depth considerably, the math is not perfectly reversible.



 

We've said this in the thread a bunch now. I am sure this is true without error diffusion, but with error diffusion I think there might be a way to store the information in 8-bit.
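For what it's worth, a toy NumPy sketch of a simpler cousin of error diffusion (random dither, with made-up numbers of my own): if noise of about half an 8-bit step is added before rounding, the sub-LSB value survives as the *average* of many 8-bit samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_8bit_samples(value_10bit, n):
    """Quantize one 10-bit value to n 8-bit codes with uniform dither."""
    noise = rng.uniform(-0.5, 0.5, n)       # +/- half an 8-bit step
    return np.round(value_10bit / 4.0 + noise)

true_value = 403                            # sits between 8-bit codes 100 and 101
codes = dithered_8bit_samples(true_value, 10000)
print(codes.mean() * 4)                     # ~403, not the rounded-off 404
```

About 75% of the samples come out as 101 and 25% as 100, so the average recovers 100.75, i.e. 403 in 10-bit codes. Undithered rounding would have stored 101 every time.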


Photons captured by the sensor have a random component (noise): four sensor samples added together get us two more bits of signal, with an effective low-pass filter (similar to the C300's dual green sampling). Keep the random sampling component in mind when thinking about specific cases.



A rounding error of slightly less than half a least significant bit on a uniform scene, as per your example, is not a significant or noticeable issue. To see the advantage of four 8-bit samples -> one 10-bit sample, you need something like a very gentle gradient that would otherwise generate noticeable banding. The four samples will then, when averaged, allow three extra values between 100 and 101 (100.25, 100.5 and 100.75), depending on how many of the four samples are 100 and how many are 101. Of course, they won't actually take a decimal value: we are now working in a space with 10-bit resolution, not 8-bit, so the actual integer value is times 4, and you can in fact resolve any integer value between 400 and 404.

This is in fact perfect math. But, as you point out, if the absolute value of a uniform scene is subject to a quantization error due to 8-bit resolution, the correct absolute value cannot be reconstructed.

Wikipedia has some nice info on oversampling and how you can trade back and forth between sampling rate and bit depth, with lots of material on audio recording as well, if my explanation is not adequate.
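The integer arithmetic in that averaging step is easy to verify with a few lines of Python (a trivial sketch, nothing camera-specific):

```python
# Sum four 8-bit samples straight into 10-bit code values: each mix of
# 100s and 101s lands on a distinct code between 400 and 404.
for ones in range(5):                    # how many of the 4 samples read 101
    samples = [101] * ones + [100] * (4 - ones)
    print(samples, "->", sum(samples))   # 400, 401, 402, 403, 404
```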

So let's start with your four 8-bit pixels example and compare it to a single 10-bit observation. Imagine our stored 8-bit values are all max readings:

(255)  (255)

(255)  (255)

Now suppose we want to approximate a single, native 10-bit observation in the middle of all four of them ("X"):

(255)   (255)

        (X)

(255)   (255)

What 10-bit value do we assign? 1020 (255 x 4) seems logical, right? But what if the true value of the native 10-bit observation is 1023? (Because 1020, 1021, 1022 and 1023 all map to 255 in the 8-bit observations.) Then we have made a mistake in our mapping and underestimated the value. Similarly, we cannot be sure about assigning any other value in that range, because any of them might be the true value of the native 10-bit observation!

 

In this case I used an example where the dynamic range is truncated in the 8-bit from above, but it doesn't matter where this lack of precision exists.

All that being said, error diffusion complicates the matter: it may be possible to diffuse the errors (the remainders when going from the native bit depth to 8-bit), but that takes it beyond a simple proof.


Been playing with this all day. I want to share another way to explain this, which may have already been posted.

 

4:2:0 to 4:4:4...it works, I tested it, it looks great when compared to other 4:4:4 vs 4:2:0 examples on the web.

 

As for 8-bit to 10-bit, and looking to gain any dynamic range from combining 4 x 8-bit data: it is not possible.

 

It's not that we're collecting actual light per pixel and combining light from 4 sources into 1; we can't add these values together as if we're adding photons. We only have readings of 0-255 (8-bit) for each subpixel (R, G, B) of each pixel, at one ISO sensitivity. The camera knows nothing outside those bounds for shadow or highlight recovery. Having this near-identical data repeated 4 times won't help expand the dynamic range with any kind of formula or workflow. You can't extrapolate detail from a value of 0 in the blacks, and the same goes for the whites: when all the RGB values go to 255, you can only extrapolate shades of gray and any color is lost.

 

All we can do is average the 4 pixels into each 1 pixel for better color accuracy.

 

If Panasonic had made the GH4 so that you could flip a switch and each group of 4 sensels collected data at 4 different sensitivities, then we'd have the data to work with. But then the 4:2:0 to 4:4:4 conversion wouldn't be as effective, or effective at all, because we'd be purposely overexposing or underexposing the sensels for pixels that would normally be collecting useful information.

 

With that said, you should still convert to 4:4:4 10-bit (not 4:4:4 8-bit, if that's even an option). That gives the color grading software more room to work with. I learned a lot from this thread. Now I want a 4:4:4 12-bit camera more than ever. :) Maybe in 1-2 years we'll have it on the GH5 or the GH6.

