
Calling all colourists - Grade Panasonic GH4 4K ProRes next to Arri Alexa 2K ProRes


Andrew Reid

  • Administrators

I don't think that video is relevant?

 

On DSLRs shooting 1080p, the sensor gives a weak signal to the image processor. That signal is heavily chopped and sliced before it even reaches the encoder and is turned into 8-bit.

 

On the GH4 the sensor gives a very strong signal to the image processor. It is a 1:1 pixel readout, debayered to 10bit 4:2:2... That is a lot of data for the encoder to work with. It is also a LOT more data than 1080p, around 4x more.

 

So working with 4x more data in your grade is going to be a bit different to working with the usual 8bit 1080p from DSLRs.

 

That it is still 8-bit is kind of irrelevant - it is still 4x the data of 1080p, with the stronger sensor signal behind it. The whole internal imaging pipeline in the GH4 is 10-bit 4:2:2.

 

When you pack 4x the data into a 1080p file using your computer, you are throwing a lot of processing power at it.

 

I can already see for myself how well the 4K data grades, and how nice the 2K oversampled from it looks, coming from 4x the data you usually have!

 

I must admit I do not understand the maths that well; I am not a mathematician but a filmmaker, so others may or may not be right on the maths... in the end I don't care... I have a pair of eyes and that is what counts :)


Hey Andrew,

 

Basically if you take a grayscale gradient and you capture, resample, whatever.. from 10-bit to 8-bit, you wind up with 256 shades of gray (wow, just thought of a great title for a book).. and if you push the crap out of that, you will see banding, no matter what the resolution is.
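A quick sketch of that banding effect (hypothetical numbers, assuming NumPy is available): quantize a gradient to 8 bits, push the contrast hard, and count how many distinct levels survive.

```python
import numpy as np

# A grayscale gradient quantized to 8 bits (0..255).
# The sample count stands in for resolution; it won't change the outcome.
gradient8 = np.round(np.linspace(0, 255, 4096))

# "Push the crap out of it": a steep 4x contrast curve around mid-gray,
# clipped back to legal range, as an aggressive grade would do.
pushed = np.clip((gradient8 - 128) * 4 + 128, 0, 255)

# Only 65 distinct levels survive, so visible banding appears
# no matter how many pixels (how much resolution) we started with.
print(len(np.unique(pushed)))  # 65
```

Starting from a higher bit depth before the push is what avoids this, not starting from more pixels.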

 

But with a camera, there is Bayer pattern grain, algorithms and lots of interesting IP going on that can make it look much better.. conceal some of the obvious artifacts.. but you're still stuck with the math: 256 levels per channel at 8-bit. And lower quality when you factor in chroma subsampling.


The files for me read as 4:2:2 10bit, but I thought it might be a Linux thing. Anyway, I've seen enough to know the footage looks great. Unless something unexpected comes out at NAB, I'm getting one... but will need an HDMI capture solution.


  • Administrators

Hey Andrew,

 

Basically if you take a grayscale gradient and you capture, resample, whatever.. from 10-bit to 8-bit, you wind up with 256 shades of gray (wow, just thought of a great title for a book).. and if you push the crap out of that, you will see banding, no matter what the resolution is. In my experience, bit depth is more important than most of the other factors.

 

But with a camera, there is Bayer pattern grain, algorithms and lots of interesting IP going on that can make it look much better.. but you're still stuck with the math: 256 levels per channel at 8-bit.

 

Indeed, which is why when you downscale 4K to 2K and pack it into a file that can handle greater precision, you get smoother gradients with finer steps in between, which respond better when you push them around in the grade.


That's on the original files, Thomas.

 

A 4:2:0 artefact, I think.

 

This is the kind of thing you avoid when:

 

A - You record via the 10bit 4:2:2 HDMI output

or

B - You downsample to 2K 4444 ProRes

 

Let's test this to be absolutely certain. If it is indeed an artifact of the color sampling, the artifact will not be present in the Y channel. This can be verified by transcoding the original with 5DtoRGB using the "None" setting for the decoding matrix, which maps Y, Cb and Cr to R, G and B in the output file.

 

Any way you can post originals? I'd like to do some experimenting over here.  :)


Indeed, which is why when you downscale 4K to 2K and pack it into a file that can handle greater precision, you get smoother gradients with finer steps in between, which respond better when you push them around in the grade.

 

Sorry Andrew, the 4K 4:2:0 => 2K 4:4:4 math yields 8.67 bits per pixel (not 10 bits), which doesn't support any significant extra color depth. Here's how you can prove it to yourself: grade something in 4K that starts to break up due to limited color depth, then downsample to 2K. See any improvement? That said, 2K 4:4:4 at 8.67 bits is still very nice at this price point.

 

Since the camera supports 10-bit output, perhaps a future firmware upgrade will add 10-bit H.264 recording (supported by the spec and used in Sony's XAVC). It might not happen for a while because of higher-end cameras such as the S35 Varicam, but it should be possible.


Sorry Andrew, the 4K 4:2:0 => 2K 4:4:4 math yields 8.67 bits per pixel (not 10 bits), which doesn't support any significant extra color depth. Here's how you can prove it to yourself: grade something in 4K that starts to break up due to limited color depth, then downsample to 2K. See any improvement? That said, 2K 4:4:4 at 8.67 bits is still very nice at this price point.

 

Since the camera supports 10-bit output, perhaps a future firmware upgrade will add 10-bit H.264 recording (supported by the spec and used in Sony's XAVC). It might not happen for a while because of higher-end cameras such as the S35 Varicam, but it should be possible.

 

The 10-bit figure is achieved by summing the values of four 8-bit pixels, which automatically downsamples to 1/4 the resolution as a result. This requires image processing designed for this exact purpose, which is most likely not what Compressor and similar tools are doing.


Let's review the math:

 

10 bits is 2 more bits than 8, and 2^2 = 4, so adding four 8-bit values together gives a 10-bit result. If we add up the four values and then divide by 4 to average them, the extra information survives in the fraction. So if we average the values together in floating point, we've achieved the 'effect'. This is effectively what an NLE does when rescaling, so no special extra processing is needed in an NLE that works in 32-bit float.
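A minimal sketch of that averaging step (hypothetical sample values, assuming NumPy):

```python
import numpy as np

# Four neighbouring 8-bit samples of a level whose true value is 100.25;
# noise/dither pushes one sample over the quantization boundary.
samples = np.array([100, 100, 100, 101], dtype=np.uint8)

# Summing four 8-bit values gives a 10-bit range (0..1020):
total = int(samples.astype(np.uint16).sum())
print(total)  # 401

# Averaging in floating point keeps the extra information
# as a fraction between the 8-bit steps:
avg = samples.astype(np.float64).mean()
print(avg)  # 100.25
```

A 32-bit float NLE pipeline performs the same averaging implicitly whenever it rescales 4K to 2K.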

 

4:2:0 AVCHD (H.264) is YUV. If we scale 4K YUV 4:2:0 to 2K 4:4:4 YUV, only Y is at full resolution, so only Y gets the benefit of the four-value summing and the additional fractional bit depth. Luminance is more important than chrominance, so that's not so bad. Best case, we have 10 bits of information in Y and 8 bits each in U and V: (10 + 8 + 8) / 3 = 8.67 bits per pixel. If the NLE does the scaling after converting YUV to RGB, it's still 8.67 bits of information per pixel at best, since no new information is added during the transformation.
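The per-channel arithmetic can be sketched like this (plane sizes are illustrative, assuming a 3840x2160 4:2:0 source):

```python
import math

# In 4K 4:2:0, luma is full resolution (e.g. 3840x2160) but chroma is
# already quarter resolution (1920x1080). Downscaling to 2K therefore
# averages 4 luma samples per output pixel, but maps chroma roughly 1:1.
luma_samples_per_output = 4
chroma_samples_per_output = 1

luma_bits = 8 + math.log2(luma_samples_per_output)      # 8 + 2 = 10
chroma_bits = 8 + math.log2(chroma_samples_per_output)  # 8 + 0 = 8

effective = (luma_bits + 2 * chroma_bits) / 3
print(luma_bits, chroma_bits, round(effective, 2))  # 10.0 8.0 8.67
```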

 

This is why we won't see a significant improvement in color depth. Here's a challenge: create an example that shows the '10-bit' effect is significant (I agree it's there, but at 8.67 actual bits it will be hard to see in most cases).


The 4K ProRes files are 10-bit 4:2:2 after transcoding from the original 8-bit 4:2:0. However, there's only the original 4:2:0 8-bit color information in those 4:2:2 10-bit files.

 

That's correct, which is why the results so far aren't showing any benefit. Testing them should be put on hold until we have the camera originals and can process them specifically for ~10 bit output.


The ProRes files are 10-bit 4:2:2 after transcoding from the original 8-bit 4:2:0. However, there's only (at best) 8.67 bits of color information in those 10-bit files.

 

Right, I follow. 

 

The ffprobe output on Andrew's files:

 
Video: prores (apcs / 0x73637061), yuv422p10le, 4096x2160, 339440 kb/s, SAR 4096:4096 DAR 256:135, 24 fps, 24 tbr, 24 tbn, 24 tbc

Anyone have an idea whether the following workflow would degrade the footage:

 

first use 5DtoRGB to convert the 4K footage to 2K ProRes 4444

then start editing and grading

 

This workflow would probably be the easiest way for people who are outputting 2K anyway and are using an older editing system.

 

 


Anyone have an idea whether the following workflow would degrade the footage:

 

first use 5DtoRGB to convert the 4K footage to 2K ProRes 4444

then start editing and grading

 

This workflow would probably be the easiest way for people who are outputting 2K anyway and are using an older editing system.

 

5DtoRGB isn't set up to do this at the moment, but I'm looking into adding this capability.

