Andrew Reid

Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4

Recommended Posts

I emailed someone at CineForm who explained how to downsample the 4K 8-bit 4:2:0 video to 1080p 10-bit 4:4:4 in Sony Vegas, in case anyone else uses it.

 

You can do it without After Effects. First, import the 4K video into Sony Vegas Pro 12 (with 32 bit floating point (full video) set in the project settings), and render it out to CineForm with these settings.

 

[Attachment: render settings screenshot]

 

Then render it back out from GoPro Studio Premium (CineForm).

 

http://forums.creativecow.net/readpost/24/976346

[Attachment: settings screenshot]


Is all of this conversion really worth it?  The sensor isn't even 4:4:4, and the HDMI port will output 10bit 4:2:2, even in 4K.  I say put a recorder on there, and call it a day.

 

Michael


It's quite worth it for me. I won't be outputting anything in 4k for the foreseeable future, but that increased latitude in color grading is something I've wanted for a long time, along with other characteristics that the camera offers.

 

Plus, it's from footage shot to an internal SD card, so I won't have to buy an external recorder and deal with that setup.


Only luma will be 10-bit-ish; the chroma channels stay 8-bit. So (10+8+8)/3 ≈ 8.67 bits. This is less about latitude and more about getting a true 1080p-resolution image.


So I am not as gifted as some of you, and I am still trying to understand this workflow, regardless of whether it actually increases bit depth.

 

Is it as simple as importing the 4K into After Effects and downsampling to 1080p? Or are there other steps involved?

 

Thanks.


Right, simply add the 4K footage to a 1080p timeline/sequence. That's it. You'll get a detailed 1080p image, though likely no significant visible additional colour depth (about 8.67 bits vs. 8).
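As a rough numerical sketch (illustrative values, not from any particular clip), this is all the averaging buys you: four 8-bit luma samples from a 2x2 block land on quarter-steps when averaged, which is exactly the 10-bit grid.

```python
def downsample_block(samples):
    """Average four 8-bit samples from one 2x2 block of the 4K frame."""
    assert len(samples) == 4
    return sum(samples) / 4.0

# Slightly varying 8-bit neighbours average to values *between*
# the original 8-bit codes, on quarter steps (a 10-bit grid):
print(downsample_block([200, 201, 201, 202]))  # 201.0
print(downsample_block([200, 200, 201, 201]))  # 200.5

# The sum of four 8-bit values spans 0..1020, which is why the result
# maps naturally onto a 10-bit (0..1023) scale.
print(sum([255, 255, 255, 255]))  # 1020
```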


I'm kinda new to the science of luma / chroma bits. These are my concerns / questions:

1- Averaging 4 pixels into 1 can perhaps create better gradation, but what about recognizing shades of white or black? Does 4K-to-2K downsampling create more whites or more blacks? If the sensor sees something as white but in reality it is just a tiny bit grey, wouldn't it fall below the sensor's sensitivity threshold anyway?

 

2- How does the end result (whether we call it 4:4:4 or not) give a decent dynamic range of 12 stops or more? Can we translate colour space and depth into dynamic range? I mean not only the math, but what we actually see in the image.

 

3- Let's point the GH4 at a scene with lots of intense lights and shadows, and do the same with a matched lens (spec, f-stop, FOV, etc.) on a BMPC. Do we end up with the same quality of colour, and especially dynamic range (after matched grading of the raw image), between the two? If I get the end result of BMPC footage, I'm still going to be happy.

 

4- What is the quality difference between raw 4:2:2 and compressed 4:4:4? I mean, once we say NOT RAW, does real 4:4:4 still exist? And if it is altered, is there a way to create a systematic approach to categorizing the quality differences? I'm interested in a systematic approach so I can better judge strengths and weaknesses for different shooting scenarios in pre-production.


So we can welcome the first sub-$2k consumer camera that delivers cinema quality.

 

Doesn't the Blackmagic Pocket Cinema Camera deliver 12-bit CinemaDNG (RAW) or 10-bit 4:2:2? It's less than $1K.


Is all of this conversion really worth it?  The sensor isn't even 4:4:4, and the HDMI port will output 10bit 4:2:2, even in 4K.  I say put a recorder on there, and call it a day.

 

Michael

 

Right, simply add the 4K footage to a 1080p timeline/sequence. That's it. You'll get a detailed 1080p image, though likely no significant visible additional colour depth (about 8.67 bits vs. 8).

 

I'm with both of you guys on this one. I would be curious to see the difference between a clip rendered after dropping the 4K footage on a 1080p timeline and one that's been through the "conversion". Probably not noticeable.


All the discussion on theory is interesting, BUT I still haven't seen a detailed response on exactly how to convert GH4 4K 4:2:0 files to 1080p 4:4:4 (or 4:2:2) using AE and/or Premiere. Does anyone know specifically how to do this?


I am a long (long long) time reader, first time poster here.
 

As a recent owner of a GH4 (coming from a GH2 and skipping the GH3), I have marveled at the idea of downsampling the 4K footage to 10-bit 4:4:4 1080p.

I have gone through this entire thread. Lots of great tech info here, but a question has been raised without really being answered. For those of us not really caring about the science and just wanting to get down to the brass tacks of the matter:

Does ANYONE out there have a brief, concise, step-by-step guide on how to actually perform the downsample from 4K to 1080p using either After Effects or Premiere? (I am talking about the transcode here, not simply scaling the footage.)

The science behind it all is great and wonderful, but a simple, bullet-point guide on how to perform the actual downsample in After Effects or Premiere would really be the ticket here.

 

Can anyone post on this? I think it would be really helpful for a lot of us.

 

Cheers.


I don't understand the maths. When you downsample 4:2:0 8-bit by a factor of 2 (4 pixels to 1), you get 4:4:4, but with 10 bits only for Y; you still get only 8 bits for U and V. So it is 4:4:4 at 10:8:8 bits, no?
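Yes, that's the accounting. A small sketch of one 2x2 block (hypothetical sample values; real scalers apply filtering rather than a plain average, but the precision argument is the same):

```python
# One 2x2 block of 4:2:0 video carries 4 luma (Y) samples but only
# 1 Cb and 1 Cr sample. After a 2x downscale, each output pixel has
# its own Y, Cb and Cr, so the result is 4:4:4 by sampling structure,
# but the channels carry different amounts of real precision.
block = {
    "Y":  [120, 121, 119, 122],  # four 8-bit luma samples
    "Cb": [128],                 # one shared 8-bit chroma sample
    "Cr": [130],                 # one shared 8-bit chroma sample
}

# Summing the four Y samples gives a 0..1020 value: genuine ~10-bit luma.
y10 = sum(block["Y"])

# Chroma has nothing to average; shifting left 2 bits just rescales the
# same 8-bit information onto a 10-bit range, adding no new precision.
cb10 = block["Cb"][0] << 2
cr10 = block["Cr"][0] << 2

print(y10, cb10, cr10)  # 482 512 520
```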


This is a myth: you have to record at 10 bits to get 10 bits of light information from the scene, or latitude in post. It's another case of being fooled by the math. Say you had 2 bits, which is 4 colours, at 100k resolution; you start to get the idea. You could never recreate all the detail of the scene if the information has been truncated to 4 colours, no matter how many pixels you recorded and resampled in post. (You are correct, arya44, even though you say you are new to this.) It is incorrect to think that there is an inverse relationship between bit depth and pixel count with respect to depth information from the scene.

 

Since a camera is not just a pixel calculator, but is more importantly a light recording device, the ability to map 4 pixels to 1 and increase the bit depth per pixel is not that relevant if you still can't see the scene any clearer.

 

It also will not be that much more gradeable, and this I just know from experience. You will get anti-aliasing, which is not always what you want, and the noise will look finer because you just shrunk the picture, but artifacts will still appear within a couple of stops of grading, unlike working in true 10-bit log that has been converted into linear space. You've got to capture at 10 bits to get that latitude in post.
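To put a toy number on the latitude point (this models capture bit depth only, not the downsample itself, and all values are made up): quantize the same deep-shadow gradient at 8 and at 10 bits, push it two stops, and count the distinct levels left.

```python
# Sketch: why capture bit depth matters for pushing shadows in post.
def capture(values, bits):
    """Quantize normalized 0..1 values as an integer capture."""
    top = (1 << bits) - 1
    return [round(v * top) for v in values]

shadow = [i / 1000 * 0.02 for i in range(1001)]   # bottom 2% of the range

push8  = {q * 4 for q in capture(shadow, 8)}      # 2-stop push (x4 gain)
push10 = {q * 4 for q in capture(shadow, 10)}

print(len(push8))    # 6  distinct levels -> visible banding
print(len(push10))   # 21 distinct levels -> much smoother gradient
```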


I'm really interested in whether shooting 4K for HD output actually gives more colour information than shooting HD in the first place. There's been lots of mathematical theorising about this, some of which I kind of understand, lots of which I don't.

 

So to get a real world answer, I just pointed the GH4 at the sky, and this is what happened.

 

 

First off, 100Mbps at 25p. Then the same shot a few seconds later, with no changes other than switching to 4K. Then import into FCPX, optimizing the footage to ProRes. Drop both shots onto a 1080p timeline, add a Hue/Saturation filter, and crank the saturation up to 2.0. Export as 1080p.

 

Obviously nobody would actually add that much saturation in real life, but it exaggerates the colours in a grey Newcastle sky.

 

I think the difference really stands out.

 

Settings: Cinelike V tweaked to Mr AR's recommendations, cloudy WB, shutter speed 1/50, ISO 200, f/3.5. Not sure what causes the colour balance difference; I changed nothing on the camera between shots other than switching between HD and 4K.


This all sounds rad, but what's the easiest way to accomplish it? Is it as simple as converting your C4K footage to 1080p ProRes 444 in a program like MPEG Streamclip?

 

What's the difference in bit rate between C4K (4096x2160) and UHD (3840x2160)? Does the UHD 59.94 Hz system frequency play a role here? I'm new to this camera and its specs, trying to learn how to get the most out of it.

 

Thanks!


I kept wondering whether the resampling would also work in the temporal domain. The GH4 allows 1080p at 4x frame rates. Downsampling in the temporal domain, i.e. averaging 4 frames to generate 1, should in principle also allow going from 8-bit to 10-bit luma information per channel, if the sensor gain is well calibrated and the noise is about 1 ADU (or at least single-digit ADUs).

 

Of course it would only work in static parts of the image, creating smear in the moving ones, like a moving car or train. But maybe that would create an interesting look of its own?
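A toy sketch of the idea, with fixed dither values standing in for the ~1 ADU sensor noise (so the numbers are illustrative, not measured):

```python
# Temporal averaging: four 8-bit frames of a static pixel, with small
# frame-to-frame noise acting as dither, average to a value finer than
# one 8-bit step.
true_level = 100.3                       # actual scene luminance in ADU
dither = [-0.4, -0.1, 0.1, 0.4]          # stand-in for per-frame noise

frames = [round(true_level + d) for d in dither]   # 8-bit captures
print(frames)                            # [100, 100, 100, 101]

avg = sum(frames) / 4.0
print(avg)                               # 100.25: within 0.05 of the truth

# A single frame can do no better than the nearest integer code:
print(round(true_level))                 # 100, off by 0.3
```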

 

  m.


This is a myth: you have to record at 10 bits to get 10 bits of light information from the scene, or latitude in post. It's another case of being fooled by the math. Say you had 2 bits, which is 4 colours, at 100k resolution; you start to get the idea. You could never recreate all the detail of the scene if the information has been truncated to 4 colours, no matter how many pixels you recorded and resampled in post. (You are correct, arya44, even though you say you are new to this.) It is incorrect to think that there is an inverse relationship between bit depth and pixel count with respect to depth information from the scene.

 

 

 

What you're saying is also not strictly true. You would be correct if you took an image of an area with strictly the same colour and intensity everywhere, and the sensor were totally free of noise. But imagine a colour ramp: downsampling and averaging neighbouring pixels *does* yield intermediate colour values, and you do get more than the initially available 4 values out!

 

It's an interpolation, true, and the accuracy of the result depends on the spatial frequency of the object itself and the resolution of the camera and sensor system. But if that's not too far off, the result of the interpolation should come pretty close to a sensor recording 10 bits right away!

 

Maybe a tedious calculation would reveal not the full 10 bits of information content but 9.x, and a slightly reduced spatial resolution, but it can still be well worthwhile!
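The colour-ramp case is easy to check numerically; here is a toy version (a 2-bit capture of a smooth ramp, averaged 4-to-1):

```python
# Quantize a smooth ramp to 2 bits (4 levels), then downsample by
# averaging groups of four. Intermediate values reappear that the
# 2-bit capture could not store.
ramp = [i / 15 for i in range(16)]              # smooth 0..1 gradient

quantized = [round(3 * v) for v in ramp]        # 2-bit capture: codes 0..3
print(quantized)    # [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3]

downsampled = [sum(quantized[i:i + 4]) / 4.0 for i in range(0, 16, 4)]
print(downsampled)  # [0.25, 1.0, 2.0, 2.75]

# 0.25 and 2.75 are codes *between* the original 4 levels: interpolation
# has recovered gradation, though only where the scene itself varies
# within the averaged block (a flat patch would gain nothing).
```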

