Color Grade My GH3 Footage!!




Re: using the Fast Color Corrector or any other effect in PP with a '32 bit' icon next to it. With the NEX-5N's low-bitrate 24Mbps footage, I find that desaturating in camera and then, if required, bringing the colour back with a saturation boost in one of the 32 bit effects is just the same as shooting at normal saturation and lowering it later - the data is still there. If anything, when you know you'll be doing any kind of grading, I find it better to get the white balance right and then shoot flat (lowest contrast and saturation). Monitoring is then easier, since you're dealing with less information and can focus on framing, exposure etc. without an inaccurate representation of the final colour getting in the way. It's also a nice surprise to take your flat footage into post and have a flat canvas to manipulate.

 

The arguments for and against shooting saturated or desaturated in camera seem split 50/50 from what I've read. I certainly don't think my captured footage is limited in colour information by desaturating in camera - in 90% of situations I feel my headroom, in both exposure and RGB colour handling, is actually increased by using a low-saturation profile.



 

EDIT: All of what I've written here is to the best of my knowledge, but should also be verified.  As a computer science university student, I feel comfortable with the claims I have made.

 

@rich101 Indeed, there are some key advantages to both paths.  But for the sake of making decisions, we should understand all the implications of both options.  Unfortunately, I think you've made a mistake in saying that desaturating the footage in-camera and bringing it back in post will deliver the same result as having the saturation high in-camera, even if you work in a 32 bit float colour space.  Let me explain with an example.

 

Suppose you have a smooth gradient in one of the channels from numerical value 0 to 5.  In 32 bit float, the data might look like this:

[0.0 - 0.3 - 0.6 - 1.0 - 1.3 - 1.6 - 2.0 - 2.3 - 2.6 - 3.0 - 3.3 - 3.6 - 4.0 - 4.3 - 4.6 - 5.0]

where each number is the value of one of the chroma channels of a pixel, i.e. each number in the sequence is a different pixel.

If we increased the saturation of this data, say doubling the original values, it would look like this:

[0.0 - 0.6 - 1.2 - 2.0 - 2.6 - 3.2 - 4.0 - 4.6 - 5.2 - 6.0 - 6.6 - 7.2 - 8.0 - 8.6 - 9.2 - 10.0]

 

Now suppose that same gradient were captured using an 8 bit integer codec. It would look like this:

[0 - 0 - 1 - 1 - 1 - 2 - 2 - 2 - 3 - 3 - 3 - 4 - 4 - 4 - 5 - 5]

8 bit encoding does not allow for decimal places, so the numbers are rounded to their nearest integer.  If we convert these to 32 bit float values after the fact, then we get this:

[0.0 - 0.0 - 1.0 - 1.0 - 1.0 - 2.0 - 2.0 - 2.0 - 3.0 - 3.0 - 3.0 - 4.0 - 4.0 - 4.0 - 5.0 - 5.0]

As you can see, converting to 32 bit float does not bring back any information that was lost in the 8 bit encoding.  So, if we apply the same amount of saturation on these clips, i.e. doubling it, we get this:

[0.0 - 0.0 - 2.0 - 2.0 - 2.0 - 4.0 - 4.0 - 4.0 - 6.0 - 6.0 - 6.0 - 8.0 - 8.0 - 8.0 - 10.0 - 10.0]

 

As you can see, even though we've converted the 8 bit footage into 32 bit float, we have not gained any information.  Therefore, increasing saturation will still cause banding.  If we set the saturation high in-camera, the colour is affected before being encoded into 8 bit values.  We are still left with 8 bit footage, but the values would look more like this:

[0 - 1 - 1 - 2 - 3 - 3 - 4 - 5 - 5 - 6 - 7 - 7 - 8 - 9 - 9 - 10]

 

By starting with greater saturation values in-camera, we maintain a greater degree of colour precision.  And since modern codecs encode in a Y'CbCr colour space, this not only affects the saturation, but also the hue, which IMO is even more important.  Depending on the shot, it might not have a huge impact on the colour accuracy.  But I have definitely experienced some shots that have suffered from this, notably the test shots I did in my DVXuser post.
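To make the arithmetic above concrete, here's a minimal Python sketch (using NumPy, with round-to-nearest standing in for the camera's quantiser) comparing the two orders of operations:

```python
import numpy as np

# A smooth chroma gradient from 0 to 5, as in the example above.
gradient = np.linspace(0.0, 5.0, 16)

# Path A: quantise to 8-bit-style integers first, then double saturation in post.
quantized_first = np.round(gradient)          # what the camera records
boosted_in_post = quantized_first * 2.0       # 32 bit float boost in the NLE

# Path B: double saturation in-camera, then quantise.
boosted_in_camera = np.round(gradient * 2.0)  # gain applied before encoding

print(len(np.unique(boosted_in_post)))    # 6 distinct levels -> visible banding
print(len(np.unique(boosted_in_camera)))  # 11 distinct levels -> smoother gradient
```

Boosting after quantisation leaves only 6 distinct levels across the same range, while boosting before quantisation preserves 11 - which is the banding difference the lists above illustrate.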

 

Does this make sense?


@JamesH - thanks for the info, as I always thought that was the case for saturation.

One question: what about contrast? Is it the same principle, or does it work slightly differently? I've found that if you dial it all the way down you help the shadows but end up sacrificing the highlights - this was really evident in the GH3 footage I graded.


One question: what about contrast? Is it the same principle, or does it work slightly differently? I've found that if you dial it all the way down you help the shadows but end up sacrificing the highlights - this was really evident in the GH3 footage I graded.

In theory, the same should apply to the luma channel. So if you have a very low contrast scene and also dial down the contrast in-camera, you'll introduce banding into the shot when you increase the contrast in post. Again in theory, lowering the contrast should actually help both the shadow and highlight areas, but often at the expense of the mid tones.

 

As for sacrificing shadows/highlights, it really depends on the camera.  I've never used a Panasonic GH camera.  The only one I've actually taken the time to tinker with is the Sony NEX-5N.  In this camera, lowering the contrast reduced the S-Curve applied to the luma channel, while maintaining the same black and white levels.  In a 32 bit codec, this would have virtually no permanent effect.  But since it's an 8 bit codec, it can be the difference between nice smooth gradients and ugly jarring banding.
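For illustration, here's a rough sketch of that idea using a hypothetical smoothstep-based S-curve (not the 5N's actual curve) that keeps the black and white points fixed while a strength parameter controls contrast:

```python
import numpy as np

# Hypothetical S-curve: blends linear with smoothstep, keeping 0 and 1 fixed.
# 'strength' stands in for the camera's contrast setting.
def s_curve(x, strength):
    smooth = x * x * (3.0 - 2.0 * x)              # classic smoothstep
    return (1.0 - strength) * x + strength * smooth

x = np.linspace(0.0, 1.0, 256)
flat   = np.round(s_curve(x, 0.2) * 255)          # low-contrast profile, 8 bit
punchy = np.round(s_curve(x, 0.8) * 255)          # high-contrast profile, 8 bit

# Both hit the same black and white points, but the flatter profile keeps
# more distinct 8 bit code values through the shadows and highlights.
print(flat[0], flat[-1], punchy[0], punchy[-1])   # 0.0 255.0 0.0 255.0
print(len(np.unique(flat)), len(np.unique(punchy)))
```

In a 32 bit float pipeline the two curves would be losslessly invertible; quantised to 8 bit, the steeper curve merges shadow and highlight codes that can never be separated again in post.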


Hi james_H, just a quick query to confirm,

 

With the FCC applied, the values above and below 100 & 0 are still maintained in the waveform.  And, just as you predicted, the values mapped perfectly to 16 - 235.  These results are expected since the FCC is a 32-bit float effect.

 

There were two files in the zip, both with full-range levels in h264, but flagon.mp4 has the VUI option 'fullrange' switched on, so the decompressing codec will squeeze the levels into 16 - 235 for a standard, to-specification conversion to RGB with no clipping, whereas flagoff.mp4 has the full-range flag left switched off.

 

Could you just confirm that flagoff.mp4 was the file you tested the FCC & levels with? Sorry to fuss - I wasn't very clear before.

 

Still have to look at your thread on dvxuser but have to register so...

 

I'd be interested to know what FCP X makes of the test files too. It's 32 bit, but all the same it is Apple, and previously QT has made the 16 & 255 text appear not to clip when the text didn't actually have RGB values of 16 & 255, skewing the results - due to the stupid gamma issues with colour management in OSX and QT, prior to Mountain Lion that is.


Could you just confirm that flagoff.mp4 was the file you tested the FCC & levels with? Sorry to fuss - I wasn't very clear before.

 

Still have to look at your thread on dvxuser but have to register so...

 

I'd be interested to know what FCP X makes of the test files too. It's 32 bit, but all the same it is Apple, and previously QT has made the 16 & 255 text appear not to clip when the text didn't actually have RGB values of 16 & 255, skewing the results - due to the stupid gamma issues with colour management in OSX and QT, prior to Mountain Lion that is.

 

I did indeed use the flagoff.mp4 file.  But I didn't use PP's "Levels" effect because it clips.  It must be an old effect that hasn't been updated to use 32 bit colour.  The FCC, on the other hand, does not clip and clearly shows values above and below 100 & 0.  

 

I just tried throwing the files into a FCP X timeline and they seemed to work just fine. On the waveform, I could clearly see the values beyond 0 & 100 for flagoff.mp4. And for flagon.mp4, the values were mapped properly to 0 - 100. This is on FCP X (10.0.4) running on OS X (10.8.2) Mountain Lion.

 

One thing that I find bizarre is that PP effects that are only 8 bit clip the levels above and below 100 & 0, even on 8 bit footage.  From what I understand, these values are stored in an 8 bit stream, but the effects can't handle it for some reason.  I've begun learning about making plugins for Premiere and After Effects and I've noticed that when I disable 32 bit, my effect will clip the values.  But when I enable 32 bit, it handles it just fine.  Am I missing something?  I find it quite weird.

 

EDIT: I think the clipping of 8 bit effects doesn't have to do with bit depth at all. I have a feeling it has to do with the fact that these effects use the RGB colour space. It seems that working in Y'CbCr space holds on to the high and low values, even when using only 8 bit values.

James, thanks for the confirmation. Regarding 8 bit and also 16 bit, I think the clipping is due to both modes being integer, not float? And the standard conversion of 16 - 235 YCC to 0 - 255 for 8 bit, and pro rata for 16 bit (65536 levels).

*EDIT*

No, thinking about it, precision shouldn't have any effect on that levels mapping.

Yes, if you stay in YCbCr then there's no clipping; we just risk making even more invalid RGB values, so that when the conversion to 8 bit RGB does happen, such as at playback on RGB devices, the clipping occurs.

This is my beef with past advice about dialing down saturation: in 8 bit and probably 16 bit workflows, when the YCC to RGB conversion happens somewhere in the workflow, channels get clipped, leading to a real, unretrievable loss of data.

32 bit workflows are more recent, and more demanding if not done on the GPU, which is probably why OpenGL GLSL shaders have been used previously - but I think they clip. I think things like Magic Bullet Looks and possibly Colorista use GLSL shaders.

Regarding clipping chroma: cameras like Canon and Nikon DSLRs use the JFIF specification rather than typical YCbCr levels, so chroma is normalized to fit within the full range just as luma is; the result is that chroma won't clip however much you saturate.

Regarding flagon.mp4: the reason it fits 0 - 100 is that although the levels are full range, h264 carries VUI Options metadata in the stream, one of which is a fullrange flag. With it set on, the decompressing codec will scale the levels into 16 - 235 for a typical 16 - 235 YCC to 0 - 255 RGB conversion.
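As a sketch of that scaling (the exact decoder arithmetic may differ; this is just the linear mapping being described):

```python
# Sketch of the level squeeze a decoder applies when the h264 VUI
# 'fullrange' flag is set: full-range 0-255 luma is scaled into the
# studio range 16-235 before the usual 16-235 YCC -> 0-255 RGB conversion.
def scale_full_to_studio(y):
    """Map a full-range 8-bit luma value into the 16-235 studio range."""
    return 16 + y * (235 - 16) / 255

print(scale_full_to_studio(0))    # 16.0  -> legal black
print(scale_full_to_studio(255))  # 235.0 -> legal white
print(scale_full_to_studio(16))   # ~29.7 -> a profile's black at 16 gets raised
```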

For picture styles like Cinestyle that raise black to 16 YCC in the profile, black then gets raised again towards 32, and 255 comes down to 235, because the fullrange flag scaling happens before the conversion to RGB.

For NEX-5N AVCHD, I don't think VUI Options or an equivalent fullrange flag for its 16 - 255 levels exist.

It turns out the clipping has to do with the colour space, not the bit depth. The old Premiere effects use RGB colour to perform their operations. When I create an effect that stays in Y'CbCr, the values don't get clipped.
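A small sketch of why that happens, using the standard studio-range BT.601 conversion (coefficients from the spec; the pixel values are made up for illustration):

```python
# Sketch of a studio-range BT.601 Y'CbCr -> 8-bit RGB conversion, to show
# how 'illegal' chroma survives in YCC but clips once converted to RGB.
def ycbcr_to_rgb(y, cb, cr):
    r = 1.164 * (y - 16) + 1.596 * (cr - 128)
    g = 1.164 * (y - 16) - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = 1.164 * (y - 16) + 2.017 * (cb - 128)
    clip = lambda v: max(0, min(255, round(v)))
    return clip(r), clip(g), clip(b)

# Perfectly storable YCC bytes, but the implied red exceeds 255: the
# conversion clips it, and that excess is gone for good in 8 bit RGB.
print(ycbcr_to_rgb(180, 128, 240))  # (255, 100, 191) - red channel clipped
```

An effect that stays in Y'CbCr never performs this conversion, so the out-of-range values ride along untouched until something downstream finally goes to RGB.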

 

On another note, I've created an effect that tries to simulate the "Vibrance" effect from Lightroom.  Essentially it boosts the saturation for pixels that are desaturated, but doesn't affect the pixels that are already saturated.  I will test out how this affects footage with high and low saturation and I'll post my results.  Hopefully I'll be able to test it today.
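For what it's worth, a rough Python sketch of the idea (the falloff curve is my own guess, not Lightroom's actual formula, and `vibrance` is a hypothetical helper):

```python
import numpy as np

# "Vibrance"-style saturation control working in Y'CbCr: boost saturation
# strongly for near-neutral pixels, leave already-vivid ones alone.
def vibrance(cb, cr, amount):
    """cb/cr are chroma values centred on 0 (raw 8-bit value minus 128)."""
    sat = np.sqrt(cb ** 2 + cr ** 2) / 128.0          # 0 = neutral, ~1 = vivid
    gain = 1.0 + amount * np.maximum(0.0, 1.0 - sat)  # no boost once vivid
    return cb * gain, cr * gain

# A near-neutral pixel gets a strong boost; an already-vivid one is untouched.
cb, cr = vibrance(np.array([5.0, 100.0]), np.array([5.0, 100.0]), 0.5)
print(cb)  # first value boosted ~1.47x, second left at 100.0
```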

Yes, they were purely RGB based - even the YCC filters were based on the clipped RGB converted back to YCC. That's why I made the distinction in earlier posts about post-CS5: any colour space conversions in versions prior to the Mercury Engine were suspect.

Yeah, re Vibrance: adding contrast and/or saturation masks to adjust sunset saturation and raise levels to brighten the rocks and grass sufficiently. And it can be done in YCC without any RGB processing. :-)

Bit like the old printer lights days, with saturation and contrast controlled by using interpositives or internegatives.

btw added a bit more to the previous post.

Yes, if you stay in YCbCr then there's no clipping; we just risk making even more invalid RGB values, so that when the conversion to 8 bit RGB does happen, such as at playback on RGB devices, the clipping occurs.

This is my beef with past advice about dialing down saturation: in 8 bit and probably 16 bit workflows, when the YCC to RGB conversion happens somewhere in the workflow, channels get clipped, leading to a real, unretrievable loss of data.

 

So, if I understand this properly, then the best workflow to use (in terms of keeping as much information and precision as possible) would be to shoot with relatively high saturation in-camera, but then tone it down using something like the FCC once it's imported.  That way we keep the precision of shooting high sat, but prevent invalid RGB values by toning it down in post.


I think the conclusions I have come to are:

  1. Make sure the camera's codec is capturing enough colour information, i.e. don't set the saturation too low in-camera;
  2. Make sure your footage isn't creating invalid RGB values, i.e. keep the saturation within reasonable limits throughout each step of the grade;
  3. Always export using 32 bit processing.

For my setup, that means setting the saturation to 0 (default) in my 5N and reducing saturation if needed using the FCC in Premiere Pro.  I think this discussion has been very useful, thanks to everyone who shared their input on the matter.



@ James.

 

Thanks for this. I am taking this comment on board and am currently playing with profiles using more saturation, to determine how much impact your suggestions have on my footage. I appreciate your effort with this comment and your logic.


Thanks for this. I am taking this comment on board and am currently playing with profiles using more saturation, to determine how much impact your suggestions have on my footage. I appreciate your effort with this comment and your logic.

I'm glad to help. I've played around with saturation values a lot lately, and it seems to me that only really low saturation levels (i.e. -3), and only in some bad scenes, have a noticeable impact. I shot stuff at -1 yesterday and today and it all came out pretty nice. I'd love to hear the outcome of your tests.

Link to post
Share on other sites
