Mac AVCHD gamma issues - the fix


Andrew Reid

Recommended Posts

[quote name='AdR' timestamp='1343884351' post='14816']
That's part of the problem, especially with h.264 footage, because many apps scale the luma when they see the h.264 flag in the meta data.

The two methods I've used are:

(1) Open the files in QT7 (which I have heard ignores scaling flags), then take a screengrab and open in photoshop. Use Info Window and Levels histogram to determine luma scale.

(2) Set up AE CS6 with color space of Adobe RGB(1998), and import .mts file, then use Synthetic Aperture for scopes and histograms. (I believe using Adobe RGB (1998) causes AE to import the .mts file without any luma scaling.)

If anyone has better methods, please post them.
[/quote]

Couple of methods I use:

1 - Rewrap the .mts clip to QuickTime using ClipWrap, Media Converter, FFmpeg, etc. Bring the .mov clip into DaVinci Resolve, open the "Clip Attributes" and choose to interpret the color levels as "0 - 1023".

2 - Bring the .mts clips into SCRATCH, go to the Process Module and, in the FxCtrl tab, set the YCbCr > RGB mode to "Full".

Both these methods work in OS X and Windows.

Resolve

[img]http://www.eoshd.com/comments/uploads/inline/18737/501a48b24a929_resolve.png[/img]

SCRATCH

[img]http://www.eoshd.com/comments/uploads/inline/18737/501a48dd7e034_scratch.png[/img]

On a side note, these workflows also work very well for creating offline editorial media and then reconforming for color grading.
Link to post
Share on other sites

[quote name='EOSHD' timestamp='1343347937' post='14566']
This Rec.709 portion of a 601 space (16-235 instead of the full 0-255 the FS100 shoots in) is incorrectly remapped to 0-255 by Quicktime.[/quote]

There are a couple of ways rec709 & 601 are discussed:

1. Color primaries. BT601 color primaries differ from BT709, but unless shooting DV (i.e. BT601), HD video is BT709 color primaries. This has nothing to do with levels ranges, only color gamut.

2. Color matrix (luma coefficients). The color matrix only comes into play when converting a YCbCr (YCC for short) source file to RGB. Getting the wrong color matrix means a slight contrast change (not as drastic as 16 - 235 <> 0 - 255) and a slight color shift: reds to orange, blues to green and vice versa. Playback in a media player, with or without hardware acceleration, can even screw this up, not only the NLE, leading the viewer to wrong assumptions about one camera's performance vs another.
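To illustrate the color matrix point, here's a small Python sketch. The Kr/Kg/Kb weights are the published BT.601 and BT.709 luma coefficients; the red-patch example is mine, chosen to show why reds drift when the wrong matrix is used:

```python
# Luma coefficients differ between BT.601 and BT.709, so decoding
# YCbCr with the wrong matrix shifts luma/color slightly.
BT601 = (0.299, 0.587, 0.114)    # Kr, Kg, Kb
BT709 = (0.2126, 0.7152, 0.0722)

def luma(rgb, coeffs):
    """Weighted sum of R, G, B giving Y' for the given matrix."""
    return sum(c * v for c, v in zip(coeffs, rgb))

# A saturated red patch: the two matrices disagree on its luma,
# which is part of why reds skew toward orange with the wrong matrix.
red = (1.0, 0.0, 0.0)
print(round(luma(red, BT601), 4))  # 0.299
print(round(luma(red, BT709), 4))  # 0.2126
```

Neutral grays are unaffected (the coefficients sum to 1 in both cases), which is why the error shows up as a hue shift rather than an overall level change.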

[quote]Therefore apps that use Quicktime at their core like Premiere, trip up.[/quote]

Premiere uses its own MediaCore, and with regard to h264 it's a MainConcept decoder, not QT.


[quote name='sfrancis928' timestamp='1343356784' post='14586']
I think I figured it out in FCPX. I just used the built-in color corrector.

The exposure levels in FCPX aren't numbered 0-255, they're percentages (0-100).[/quote]

There's some test files here if you want to see how QT handles full luma:

[url="http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip"]http://www.yellowspa...llrangetest.zip[/url]

And a discussion here:

[url="http://***URL not allowed***/viewtopic.php?f=56&t=30317&&start=340#wrap"]http://www.cinema5d....&start=340#wrap[/url]

If you're interested in testing, then use the fullrange-flag-off files for your test to emulate 16 - 255 and 0 - 255 shooting cameras like the NEX-5n and FS100 respectively.

[quote name='AdR' timestamp='1343383178' post='14604']
I think it's better to convert using 5DtoRGB.[/quote]

Depends on camera source and NLE.

[quote]Our goal is to keep as much of the information your camera captured. Let's look at the process.

When you bring in an .mts file from your GH2, it has data from 0-255.[/quote]

GH2 is 16-235 luma in YCC according to the files I've seen.

[quote]Your NLE assumes the .mts file should be broadcast safe, so it limits the values to 16-235.

The data from 0-15 and 236-255 are G-O-N-E. Not hidden. Discarded by the NLE.[/quote]

Depends on the NLE, depends on 8bit or 32bit workflow, depends on the decompressing codec. You're suggesting every NLE clips the source, which is not true. With modern NLEs and a 32bit workflow all the levels can be passed through to the NLE untouched, nothing lost.

What you see in an RGB histogram or other RGB scope is the result of whatever YCC to RGB conversion the NLE has done. This has nothing to do with what the camera actually shoots, but what the NLE gives you as a result. Your 0 - 255 is RGB levels. At 8bit, the YCC to RGB conversion maps 16 - 235 luma and 16 - 240 chroma onto 0 - 255 RGB.
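The 8bit mapping described above can be sketched in Python. This is a minimal illustration of the standard studio-range conversion (BT.709 matrix assumed), not any particular NLE's pipeline:

```python
def ycbcr_to_rgb709(y, cb, cr):
    """Studio-range 8-bit Y'CbCr (BT.709 matrix) to 8-bit full-range RGB.
    Luma 16-235 and chroma 16-240 map onto RGB 0-255."""
    kr, kb = 0.2126, 0.0722
    yf = (y - 16) / 219.0                    # normalize 16-235 -> 0.0-1.0
    cbf = (cb - 128) / 224.0 * 2 * (1 - kb)  # chroma 16-240, centered on 128
    crf = (cr - 128) / 224.0 * 2 * (1 - kr)
    r = yf + crf
    b = yf + cbf
    g = (yf - kr * r - kb * b) / (1 - kr - kb)
    clamp = lambda v: max(0, min(255, round(v * 255)))
    return clamp(r), clamp(g), clamp(b)

print(ycbcr_to_rgb709(16, 128, 128))   # (0, 0, 0): video black -> RGB 0
print(ycbcr_to_rgb709(235, 128, 128))  # (255, 255, 255): video white -> RGB 255
```

Note the clamp at the end: anything the formula pushes outside 0.0-1.0 is gone in 8bit RGB, which is the crux of the whole thread.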

[quote]When you use a color corrector to expand the color space back to 0-255, you're just stretching the 16-235 values across the full range. That makes gaps in the transitions, which makes for banding and abrupt tranistions when you color correct the footage.[/quote]

You're getting confused, I think, between YCC and RGB levels. 16 - 235 YCC luma is correct. What exactly is a 709-601 stretch? Both BT601 & BT709 can have luma outside of 16 - 235, and a 709-601 color matrix change has nothing to do with 16 - 235 to 0 - 255 or vice versa.

[quote]With 8-bit footage, every bit of quality and latitude counts. The 5DtoRGB process results in the full 0-255 data being imported, which gives you more flexibility in color correction.[/quote]

5DtoRGB squeezes the full 8bit range into 16 - 235, i.e. fewer levels, then re-encodes it, a second generation, and for many camera source + NLE combinations a pointless process imho.
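The "fewer levels" point is easy to demonstrate. This is my own pro-rata squeeze sketch, not 5DtoRGB's actual code:

```python
# Squeezing full-range 8-bit (256 codes) into 16-235 (220 codes)
# pro rata: distinct input values collapse together, i.e. fewer levels.
def squeeze(v):
    return round(16 + v * (235 - 16) / 255)

out = {squeeze(v) for v in range(256)}
print(len(out))  # 220 distinct output levels remain from 256 inputs
```

So 36 of the 256 original codes are lost to rounding collisions before the file even reaches the NLE; a 32bit-float workflow avoids the squeeze entirely.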

[quote]Hard drives are getting cheaper. Buy a bigger one. ;-)[/quote]

They are more expensive now and personally I'd suggest putting the cash towards a better NLE that will handle source files correctly. :-)
Link to post
Share on other sites
[quote name='AdR' timestamp='1343404757' post='14630']Also, doing a 709 to 601 stretch in 32-bit float will help a little, but the underlying facts are the same. You're still missing 14% of your color data, and the discontinuities degrade your image.[/quote]

32bit is not just about precision. YCC to 8bit RGB is lossy; only about 25% of the color values possible in YCC can be held / displayed in 8bit RGB. However, 32bit RGB can hold the full YCC color space of the source file.

[quote]The 5DtoRGB conversion is quick and easy, and it keeps the entire 0-255 color space. That's what I'll be doing.
[/quote]

5DtoRGB transcode is not lossless; YCC to 32bit RGB is. What exactly is a 0-255 color space?

[quote name='AdR' timestamp='1343419043' post='14644']
@alexander Um, no. Let's try to keep this friendly and polite, 'k?

It's really simple. The problem comes from the Quicktime APIs. From the 5DtoRGB website:[/quote]

In the media player, maybe, but not when just decompressing into the NLE, and there's a difference. You mention the API, but the dither option is just that, an option. Dither at playback is generally a good thing when done right. Dither at decompression is not done. The difference is whether you're viewing in a media player or an NLE. From the bit plane extractions I've done previously, I've not seen any sign of dithering or noise added by default by QT decompression. Have you?

Many NLEs don't even use QT to decode. Premiere uses MainConcept in its own MediaCore module, for example.

[quote name='dreams2movies' timestamp='1343718044' post='14742']
So for those with FCPX like myself, How can one recover the grey in between white and black for the DR, when Quicktime screw our GH2 footage.. I like the Input and Output tool wit the range, reminds me of editing photos with Aperature or iPhoto.. But FCPX doesn't have this kind of editing of exposure or I haven't found it yet..
[/quote]

You really shouldn't be having problems with GH2 source files as they are 16-235.

[quote name='AdR' timestamp='1343884351' post='14816']
That's part of the problem, especially with h.264 footage, because many apps scale the luma when they see the h.264 flag in the meta data.[/quote]

The fullrange flag is in the container rather than the h264 stream. From what I've seen only mp4 and mov containers have a fullrange flag option.

[quote]The two methods I've used are:

(1) Open the files in QT7 (which I have heard ignores scaling flags), then take a screengrab and open in photoshop. Use Info Window and Levels histogram to determine luma scale.[/quote]

But how do you know what is happening in the YCC to RGB conversion? For example, if you convert full luma YCC to 8bit RGB and the full luma is actually used in the conversion, then almost certainly the channels will be quite heavily clipped.
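A rough Python sketch of that clipping, assuming a decoder applies the studio-range (16-235) luma formula to full-range (0-255) material; the values are illustrative, not taken from any specific decoder:

```python
# The studio-range formula expects luma 16-235. Feed it full-range
# luma (0-255) and values outside 16-235 land outside 0.0-1.0,
# then get clipped when quantized to 8-bit RGB.
def studio_norm(y):
    return (y - 16) / 219.0

for y in (0, 16, 235, 255):
    yf = studio_norm(y)
    rgb8 = max(0, min(255, round(yf * 255)))
    print(y, round(yf, 3), rgb8)
# y=0 normalizes below 0.0 and y=255 above 1.0: both clip in 8-bit RGB
```

Which is why "determining the luma scale" from an RGB screengrab is tricky: the clipping has already happened by the time you see RGB pixels.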

[quote](2) Set up AE CS6 with color space of Adobe RGB(1998), and import .mts file, then use Synthetic Aperture for scopes and histograms. (I believe using Adobe RGB (1998) causes AE to import the .mts file without any luma scaling.)

If anyone has better methods, please post them.
[/quote]

Interpreting or assuming the source is a wider color space such as AdobeRGB will give you more room to accommodate the RGB with reduced channel clipping, but a side effect will be a skew in color hues due to the differing color primaries between sRGB and AdobeRGB. Re: luma scaling not happening, I don't see why the AdobeRGB option would mean no luma scaling.
Link to post
Share on other sites
[quote name='yellow' timestamp='1343918022' post='14845']
There are a couple of ways rec709 & 601 are discussed:

1. Color primaries. BT601 color primaries differ from BT709, but unless shooting DV (i.e. BT601), HD video is BT709 color primaries. This has nothing to do with levels ranges, only color gamut.

2. Color matrix (luma coefficients). The color matrix only comes into play when converting a YCbCr (YCC for short) source file to RGB. Getting the wrong color matrix means a slight contrast change (not as drastic as 16 - 235 <> 0 - 255) and a slight color shift: reds to orange, blues to green and vice versa. Playback in a media player, with or without hardware acceleration, can even screw this up, not only the NLE, leading the viewer to wrong assumptions about one camera's performance vs another.
[/quote]

SCRATCH has two separate settings for handling color gamut and luma.

[img]http://www.eoshd.com/comments/uploads/inline/18737/501aa411bb8a0_cs.png[/img]
Link to post
Share on other sites
Hi milanesa, those settings look like how to interpret the source levels, either 16 - 235 (rec709) or 0 - 255 (RGB). Not a gamut control, just levels interpretation?

I don't know SCRATCH or how it handles various camera source files, demo'd it a few years ago, but the screen grab above just looks like the usual levels terminology confusion, describing 0 - 255 levels in 8bit YCbCr as RGB levels.

If that's the case it would appear a bit unhelpful, because the source isn't the RGB color model regardless of 16-235 or 0-255 levels, it's the YCbCr color model, and the fact is YCbCr can have 8bit luma levels across 0 - 255, nothing to do with RGB at all. :-)

It could be that the YCbCr RGB Full refers to JPEG/JFIF rather than BT709. JPEG/JFIF and BT709 handle chroma encoding and chroma placement differently.

I think the problem is, it doesn't matter how we tell SCRATCH or Resolve or any other application to interpret the levels, it's not a given that the decompressing codec will hand that to the application anyway to make a valid choice. Hence your ClipWrap process first, I guess.

However, you mention FFmpeg also. FFmpeg will not pass full levels through from a source like Canon or Nikon h264, even though they have them, because of the fullrange flag in the MOV container. It squeezes into 16-235, so the first step in the process, before getting to SCRATCH or Resolve or whatever, can screw up the source if you're not careful.
Link to post
Share on other sites
Some of this stuff definitely goes over my head.
I do see a chroma (hue) shift (especially in the reds and warm colors) when I change from 709 to 601... that's why I assumed it was a gamut-related setting. Obviously, in the case of my GH1 clips, 709 looks more natural and pleasant.
Link to post
Share on other sites
  • 3 weeks later...
I still use FCP7 to edit because I like using Apple Color. My work flow goes like this:
-MTS converted to Prores HQ 422 via Media Encoder
-New file imported to FCP7
-Edited sequence to Apple Color.

Now my question is, should I be worried about this clipping issue? If so, what is the FCP equivalent of Premiere Pro's Fast Color Corrector?

Please help for I am now paranoid! thank you!
Link to post
Share on other sites
[quote name='stangarcia' timestamp='1345723562' post='16278']
I still use FCP7 to edit because I like using Apple Color. My work flow goes like this:
-MTS converted to Prores HQ 422 via Media Encoder
-New file imported to FCP7
-Edited sequence to Apple Color.

Now my question is, should I be worried about this clipping issue? If so, what is the FCP equivalent of Premiere Pro's Fast Color Corrector?

Please help for I am now paranoid! thank you!
[/quote]

You need not worry. Many complained about the Color workflow, but actually, if you understand the whole color affair as post in the strictest sense (that means exporting the FCP sequence and grading as the final step of image processing, even after FX), it is really fast and comfortable.

But best of all: you have the best scopes and the finest control sensitivity of the tools. If something falls off the range between 0 and 100 (percent, that is, in the luma waveform monitor), you draw it back with the greatest precision. That's all. Clipping only occurs on your monitor, 'cause you might not see the values. Once you start using the scopes, you see them and can bring them back.

The only thing I wonder about is why on earth you use Media Encoder for the conversion to ProRes. If you've got FCP 7, you should use what's the program's greatest advantage over the competitors: you should [u]log & capture/transfer[/u]. The organization of assets is FCP's strength.

And it continues to be. The top of all editing software today is FCP X for its unmatched access to all your media through tagging, showing clips as mini-timeline thumbnails and a lightning-fast skimmer to identify every frame in hours of footage within seconds. Without cumbersome [i]folders[/i] that need to be [i]double-clicked[/i] before you can even [i]scroll[/i] through old-fashioned [i]lists.[/i]

[i]BUT- [/i]best of both worlds is obviously FCP X as editor for FC-Studio. Possible through the $ 50 software [url="http://assistedediting.intelligentassistance.com/Xto7/"]Xto7[/url].
Link to post
Share on other sites
Hi, could someone provide a definitive workflow for Final Cut Pro X without 5DtoRGB?

Just importing AVCHD files straight into FCP X, transcoding them with the internal engine (optimized ProRes)? Then what?

sfrancis928 mentioned in a previous post that he lifts black levels in order to match the footage to how 5DtoRGB transcodes it. But is that all there is to it, or is it necessary at all? Is just lifting shadows enough? So why would I even use 5DtoRGB then? I won't even worry about it; I'll bring the footage into any NLE and then, according to my taste, I will either lift the shadows a bit and lower the highlights or I won't... Am I missing something?

Thanks
Link to post
Share on other sites
[quote name='sfrancis928' timestamp='1343356784' post='14586']
I think I figured it out in FCPX. I just used the built-in color corrector.

The exposure levels in FCPX aren't numbered 0-255, they're percentages (0-100).

So I figure bringing it up by 15/255 at the low end equates to 5.88%, so bring the blacks up by 6%.
The high end is to be lowered by 20/255, which equates to 7.84%, so bring the highlights down by 8%.

The resultant waveform looked similar to the waveform of the footage transcoded in 5DtoRGB. Not exact, but pretty much the same.

[/quote]


So, all that 5DtoRGB does is lift shadows and bring highlights down? Why do we even worry about transcoding then? If I want to bring up the shadows I can do so at any time in any NLE... How is it different from "recovering detail" which isn't even there in the first place...?
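As a side note, the percentage arithmetic in the quoted post checks out; a quick Python sketch of the conversion from 8bit level offsets to FCPX's 0-100% scale:

```python
# FCPX exposure controls are percentages, so convert the 8-bit
# offsets (lift blacks by 15, lower highlights by 20) to percent.
low = 15 / 255 * 100    # lift blacks by roughly this much
high = 20 / 255 * 100   # pull highlights down by roughly this much
print(round(low, 2), round(high, 2))  # 5.88 7.84
```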
Link to post
Share on other sites
Yes, it brings luma levels outside of 16 - 235 into that zone by squeezing the 8bit levels range pro rata, and then a typical 8bit media player displays it all. But shadows that were black become less black, whites become less white, so detail seems to magically appear. Better to just do it at 32bit float in a decent NLE than waste time transcoding, if the only reason to transcode is to see more detail in some media player.

The levels adjustment must be done in a 32bit mode, though, if the adjustments are being made in RGB, to access the under and over brights.

Basically, full range luma, or even 16-255 8bit YCbCr, doesn't fit into 8bit RGB without clipping or compression of the luma outside 16-235, and therefore, if in RGB, clipping of the color channels. So a levels adjust at 8bit in RGB will just move the clipped, crushed extents, whereas 32bit RGB can hold the whole 8bit YCbCr and slide overs and unders in and out of the 8bit 16-235 zone with no risk of clipping.

Also pointless for GH2 as the levels are 16-235 to start with.
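A minimal Python sketch of the 8bit-vs-32bit-float difference described above. The values are illustrative, not any particular NLE's pipeline:

```python
# 8-bit RGB hard-clips anything outside 0-255, while float keeps
# over-brights so a later levels adjustment can bring them back.
def to_rgb8(v):
    """Quantize to 8-bit with a hard clip at the boundaries."""
    return max(0, min(255, round(v)))

def to_float(v):
    """Float just carries the value through, even past 1.0."""
    return v / 255.0

over = 270.0                   # an "over-bright" after a YCC -> RGB conversion
print(to_rgb8(over))           # 255: the extra detail is gone in 8-bit
f = to_float(over)             # ~1.059: preserved in float
print(to_rgb8(f * 0.9 * 255))  # 243: pulled back into range without clipping
```

The same applies in reverse for under-blacks: float holds negative values that 8bit would have crushed to 0.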
Link to post
Share on other sites
  • 4 months later...

Hi all, I'm just trying to make sense of what this all means for a NEX-5n and FCP X user. Really simply: when I open the waveform monitor, I see my highlights going above 100. If I lower the highlights to 100% via the native color corrector, I seem to get a little detail back.

The blacks already go to 0, so I assume there's no need to lift the shadows.

Is that it?

Link to post
Share on other sites
No, RunGunShoots' highlights are above 100, so as he says he needs to pull them down so the strongest highlights are at or just below 100.

At the shadows end, the levels don't need adjusting; they are at the lowest 'legal' level already. Adjusted up maybe, depending on the look RGS is after.
Link to post
Share on other sites
Correct, 5DtoRGB isn't required for adjusting levels before import into a 32bit workspace.

I'd suggest the only thing 5DtoRGB is good for is transcoding to ProRes, for the sole reason of playback performance, but then anything that goes to ProRes would do.

No gain using 5DtoRGB imho if 32bit on the GPU is available, OpenCL or CUDA.
Link to post
Share on other sites
