5DtoRGB VS GH2 banding issue


QuickHitRecord

My crusade against color banding on the GH2 continues. The Flowmotion hack (v2.02) minimizes banding as well as or better than any hack I have tried, but I still get unusable shots from time to time. Today, I used FCP7's log and transfer to import a single clip with banding issues. Then I used 5DtoRGB to process the same problem clip and exported every possible combination of settings to see if there was a magical setting that would stop color banding altogether. There isn't. But somewhat unsurprisingly, ProRes 4444 / Full Range / ITU-R BT.601 seems to be a significant improvement. Here are the settings:
[img]http://www.eoshd.com/comments/uploads/gallery/album_15/gallery_18451_15_2014.png[/img]
Andrew has posted about using these settings (with ProRes 422, at least) to preserve color information, but they seem to have an effect on banding as well. To see for yourself, you can go to my photo album on this site ( [url="http://www.eoshd.com/comments/gallery/album/15-5dtorgb-vs-gh2-banding-problem/"]http://www.eoshd.com...anding-problem/[/url] ) and download the screen grabs. For best results, I recommend viewing them in rapid succession in a program like Preview, like this:

[img]http://www.eoshd.com/comments/uploads/gallery/album_15/gallery_18451_15_497834.png[/img]
The only downsides are that the files are about twice as large as good old ProRes 422, and the conversion takes MUCH longer than FCP7's log and transfer window.

Hi

Even though you feel the settings shown provide good results, the settings don't make sense with regard to luma range and matrix. It's AVCHD off a GH2, so BT.709 is the matrix, and the GH2 encodes into 16-235 luma / 16-240 chroma: limited range, not full range.

QuickTime Player screen shots are not really reliable: QT auto-scales the levels, upsamples to 4:2:2 and then converts to an RGB screen grab, so don't treat the grabs as an accurate record of what's in the file.

Also, this method of providing grabs is painful. Is it not possible to just provide a zip download, and better still the sample GH2 clip you're testing, so others can actually make a comparison if anyone is interested? :-)

[quote name='yellow' timestamp='1350424361' post='19826']
Even though you feel the settings shown provide good results, the settings don't make sense with regard to luma range and matrix. ...
[/quote]

Good points. What is a better way to go about posting accurate screen grabs?

Here is a composite of the QuickTime screen grabs for the time being:

[img]http://www.eoshd.com/comments/uploads/gallery/album_15/gallery_18451_15_471134.png[/img]

It looked different in Photoshop; the FCP side actually looked cleaner there. But once the composite was created for posting here, I'm back to an image that reinforces my initial observation. I am sure that someone more knowledgeable than I am can shed some light on this.

I had a banding problem with my 60D, just on some shots, & it really started to annoy me.
(As yellow said, you must keep to the right decoding matrix & luminance range.)
I started to use the DPX file output option in 5DtoRGB: huge files, but amazing quality. I thought that this had solved the banding problem, but it was still noticeable on some shots, mostly white or off-white backgrounds.

For screen grabs use MPEG Streamclip (it's free & widely used by editors), & that goes double for converting any footage that comes out of FCP (just use the simple export setting, NOT export to QT). QT is a nice toy, but that's all it is!

[u]The Solution:[/u] I am trying out the free 30-day trial of FCPX & guess what? No banding at all! I tried all the shots that had banding in FCP7, & in FCPX the banding was not present at all, just lovely clear footage like I knew I'd shot!
I'm still playing with FCPX, but the more I do, the more I like it & will probably dump FCP7 for DSLR footage completely.

Hope that helps a little.

[quote name='QuickHitRecord' timestamp='1350428666' post='19828']
Good points. What is a better way to go about posting accurate screen grabs?
[/quote]

Not sure what's available for Mac, but VLC should do the job, or Media Player Classic.

Here's a test file for making sure VLC is set up for levels handling:

[url="http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip"]http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip[/url]

It's more important for Canon and Nikon H.264 sources, really.

Banding can occur due to poor encoding (as on the GH2) and/or wrong levels handling in playback, which makes it look worse. Banding from Canon sources should be far less likely.

For in-camera banding, no combination of settings in 5DToRGB is going to help; a very gentle denoise will, though. Does your NLE have a denoiser, preferably a temporal one working on YCC data rather than RGB?

Converting to DPX or other image formats isn't going to solve the problem; working in a 16-bit or preferably 32-bit project will, to an extent. Denoise plus a higher-precision working space is a way forward.

Take care with denoising, though, as it can make banding appear more prominent if overdone or done at a lower bit depth. In that case, fight it with fine amounts of noise, grain or dithering, even a debanding plugin if necessary. It's about finding a happy medium: mushing pixel values at high bit depth with a denoiser, balancing detail retention against smoothing, and adding the right amount and type of noise as the last operation, after any sharpening.
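
To make the dither idea concrete, here's a minimal numpy sketch (my own illustration, not anyone's actual pipeline): a synthetic banded gradient, a tiny box blur standing in for the gentle denoise, and fine noise of roughly half an 8-bit step added before rounding back to 8-bit. A real temporal, YCC-aware denoiser is far more involved than this.

[code]
import numpy as np

def smooth_and_dither(y_8bit):
    y = y_8bit.astype(np.float32) / 255.0              # work above 8-bit precision
    # stand-in for a gentle denoise: 3-tap box blur, horizontal then vertical
    y = (np.roll(y, 1, 1) + y + np.roll(y, -1, 1)) / 3.0
    y = (np.roll(y, 1, 0) + y + np.roll(y, -1, 0)) / 3.0
    # fine noise, about half an 8-bit step, breaks up the remaining contours
    y += np.random.uniform(-0.5 / 255.0, 0.5 / 255.0, y.shape)
    return np.clip(np.rint(y * 255.0), 0, 255).astype(np.uint8)

# synthetic banded gradient: only 8 distinct levels across the frame
grad = np.tile(np.linspace(0, 255, 8).repeat(90), (480, 1)).astype(np.uint8)
out = smooth_and_dither(grad)
print(len(np.unique(grad)), 'levels in,', len(np.unique(out)), 'levels out')
[/code]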

I've seen so many arguments against transcoding GH2 footage, like it's a waste of time. I use Premiere and do the gamma adjustment, then I transcode to ProRes 4444, BT.601, at gamma 1.8. It pretty much puts the footage in "log" space. Then I grade it in AE at 16-bit. Sounds like crazy overkill, but honestly I do see a difference. I have no banding issues once I take those files and edit them in 16-bit. I tried 4:2:2 in 5DtoRGB, and 4:4:4 was far better. It might be a bigger pain in the ass, but I've never "lost" anything in doing it. It cleaned up the bullshit and let me add my LUT and worry about grading. So lest my eyes have deceived me, using 4:4:4 in 5DtoRGB at full range shows significant results for me... I think you just have to do the gamma fix in Premiere first. Let the chastising begin, lol.

A troll's paradise! It doesn't help when the head of the heap is a mean and nasty ogre, too.

Anyway, @Germy: what do you mean by the "gamma fix" in Premiere? I haven't heard about this.

About transcoding and all that: I don't really get it. Last night I spent hours poring over footage I have with banding. If I drop the .mts files into Premiere, they seem to work fine and look like I expect. If I run a clip through 5DtoRGB with BT.709 and broadcast range selected, and then into Premiere, there is only a slight difference in contrast; it basically looks exactly the same. I realize that since I'm using Premiere there really isn't any reason to use 5DtoRGB, since the whole point was to transcode into a format that your editor can use, and Premiere seems to be fine with AVCHD.

Anyway, forgoing that part of the argument: in 5DtoRGB, if I choose BT.709 and full range, back in Premiere it looks washed out (as I was expecting), but I noticed that once it's graded back to something that resembles the original, there seems to be less banding. It's still there, but it seems to be pushed into ranges that don't show it as much. With the 'broadcast range' files there are clear lines of banding, but with the 'full range' files it looks more like codec crud than banding. And it seems as though there is a bit more latitude with full range! Maybe I'm just fooling myself, but it seems like it's 'full range' that fixes the banding to some extent. Am I totally off base here?

Not at all, man. Andrew posted a thread a while back on the AVCHD Mac gamma fix. You load your .mts files in Premiere and pull up the Fast Color Corrector, then adjust the incoming gamma settings to the 15-235 range; the output is full range. Export it. Then I ProRes it in 5DtoRGB with 4:4:4, BT.601, full range, at gamma 1.8. I know it sounds silly, but if you're going to grade, you've just brought the footage in, adjusted it to its native gamma range, then put it in a log space at ProRes bit rates with a 4:4:4 play area. (No, it wasn't there in the first place, but grade both with and without all that transcoding and you'll see the difference.) I think switching from 8-bit to 16-bit in AE is crucial, especially in the gradients.

So that's if you're grading in AE, but what if you just want to grade in Premiere? I just discovered Colorista recently and I think I like it. :-) Has anyone tried setting the sequence to "maximum bit depth"? Does anyone know if this is the same as setting a comp to 16-bit in AE? You might be able to get around the whole transcoding-to-a-higher-bit-depth-codec thing if that's the case. But also, I don't really understand doing that in the first place. Aren't most grading platforms 32-bit internally anyway?

Premiere CS5 onwards is 32-bit processing by default. AE 7 onwards, I think, offers 32-bit and a linear blending workspace.

The BT.601 or BT.709 difference in 5DToRGB is only applicable in the conversion to RGB, and the only difference seen is a slight contrast change, with pinks sliding towards orange and blues sliding towards green a bit; unless you have a correctly converted video-to-image reference, you'd probably never know the difference.

All the 4:4:4 and gamma nonsense is pointless; really, just import into a higher bit depth workspace, denoise a bit and go from there. 32-bit Premiere is going to do a limited-range conversion of the GH2 source to RGB straight away, and the same goes for AE CS5 onwards.

There is absolutely no point adding a 15-235 filter to GH2 footage, as that's what it is already; the filter was for 16-255 sources like the FS100 and NEX, not the GH2.

All the 4:4:4 stuff is just interpolation of the 4:2:0 chroma, which is done as soon as you import into CS5 AE or Premiere.

Using AE or Premiere prior to CS5 means different handling and a screwed-up conversion to RGB if you're not careful. Results will differ between versions; what works in CS5 is not necessarily going to look the same in earlier versions.

[quote name='galenb' timestamp='1350621051' post='19972']
So that's if you're grading in AE, but what if you just want to grade in Premiere? ...
[/quote]

On paper: numbers, specs, technicalities, etc. Your computer programs run at 32-bit unless you specify 64-bit or whatever. There are 8-bit, 16-bit and 32-bit floating point options in AE. I don't grade in Premiere, so I can't tell you what's up there. But plus one on Colorista! I just started using the Ellipse feature in there, and I love it.

All I know is what I've seen by doing it the way I do vs. without it, and there is a difference, whether the specs say otherwise or it sounds pointless because of what somebody says. The 16-235 into 5DtoRGB actually produced much better results for me than just dropping the footage in the timeline and grading, so that's all that matters to me.

If you believe (and feel reassured by the results) that transcoding before grading is better, do so and trust your eyes. I have my own theory and follow it somewhat stubbornly, following my own advice, which is: no original gets any better when you transcode it. There are hundreds of thousands of guys who use Premiere on a PC, take advantage of its ability to edit the native codec, and even export directly to MPEG-4. And their results are good too.

With 32-bit render accuracy (that is, if you check 'maximum bit depth', of course), you can grade an image without degrading it. There are, however, a few reasons why someone would want to transcode before that:
● Grading can change quite a few parameters of your original. Since grading is very much a WYSIWYG affair, and since the quality depends on the correct order of operations, you want an exact preview, not one reduced for playback reasons. With every change you cut off original values, and each next step is performed on top of an image that's computed on the fly. Your hardware really should be fast enough, or else a less compressed intermediate would serve you better.
● This is particularly important for compositing. Everybody reduces the preview quality in After Effects because, while it is no realtime application, waiting [i]too[/i] long for a fluid preview is a pain in the ass. That tempts you to make judgements on a very rounded-off preview, a fantasy in which keyframes get swallowed, something you often only notice in the final output (which renders much faster in a less compressed format).
● I made tests with different export codecs from AME. While there is visually no difference between a high-bitrate MPEG-4 and ProRes HQ, the latter can be encoded to a considerably lower-bitrate MPEG-4 in a second step. If it is true that modern graphics cards, RAM and multicore processors make the classic intermediates obsolete for playback, it is also true that the additional HD space for ProRes has become ridiculously cheap. These things will change as soon as more all-I MPEG-4 codecs are implemented.

FCP X doesn't [i]need[/i] ProRes either. It just improves performance and stability to a point where one has to admit: at least for MPEG-4, it [i]does[/i] need ProRes. And again, hundreds of thousands edit with FCP (7 or X) using ProRes, and if there were a problem with the quality, we would have heard about it.

[quote name='Axel' timestamp='1350656320' post='19989']
If you believe (and feel reassured by the results) that transcoding before grading is better, do so and trust your eyes. ...
[/quote]

Truth be told, it is a pain in the ass waiting on a solid preview. I don't run a Skynet mainframe, lol. My flavor is grading a "log"-style image, so I try to get the footage to that point. Until I get a camera that doesn't stairstep gradients from shadows to highlights the further I push it, the transcoding and the 16-bit option in AE have significantly reduced that. Just another option. I don't foresee myself doing it with my BMC footage once that baby arrives.

Not sure if this helps, but I used Andrew's suggestion to transcode and grade this footage: [url="https://vimeo.com/50229298"]https://vimeo.com/50229298[/url]

And I wrote about the experience I had between FCP & 5DtoRGB with a video example of the two here: [url="http://www.jbpribanic.com/#d2c/tumblr"]http://www.jbpribanic.com/#d2c/tumblr[/url]

I'm interested to try 4444 / full range / BT.601 on the next project... but I'm not sure if the extra space for 4444 would be too much for a feature-length doc...

The extra detail and brightness 'gained' comes down to a simple reason: the mapping your NLE or media player does between the YCbCr (YCC for short) color model of the GH2 video and the RGB preview and color processing done by the NLE / media player.

The 'correct' way is to take the 8-bit 16-235 luma range and 16-240 chroma range and convert to 0-255 RGB. So 16 YCC (black) and 235 YCC (white) are mapped to 0 RGB (black) and 255 RGB (white).
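
For anyone who wants to see that mapping as numbers, here's a small Python/numpy sketch of the standard limited-range BT.709 conversion (my own illustration; the scale factors and matrix coefficients are the standard ones, but an NLE's real pipeline adds chroma interpolation and precision handling on top):

[code]
import numpy as np

def bt709_limited_to_rgb(y, cb, cr):
    y  = (np.asarray(y,  np.float32) - 16.0) * (255.0 / 219.0)   # luma: 16-235 -> 0-255
    cb = (np.asarray(cb, np.float32) - 128.0) * (255.0 / 224.0)  # chroma: 16-240 -> full
    cr = (np.asarray(cr, np.float32) - 128.0) * (255.0 / 224.0)
    r = y + 1.5748 * cr                     # standard BT.709 coefficients
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    return r, g, b                          # float; clip/round only for display

# 16 YCC (black) -> 0 RGB, 235 YCC (white) -> 255 RGB, as described above:
print(bt709_limited_to_rgb(16, 128, 128))   # (~0, ~0, ~0)
print(bt709_limited_to_rgb(235, 128, 128))  # (~255, ~255, ~255)
[/code]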

YCC video has a luma channel (a kind of greyscale) and two chroma channels, Cb & Cr, chroma blue and chroma red. They are all kept separate and combined using a 'standard' calculation to give color, saturation and contrast; on computers, that's an RGB preview.

RGB color space, on the other hand, entombs the brightness value in every pixel value rather than keeping it separate in a luma channel. So YCC gives us the advantage of choosing how we combine luma and chroma to create RGB, but the conversion needs to be done to recreate the RGB data the camera started out with before encoding to AVCHD.

Checking the histogram, which is an RGB tool, can help establish whether the conversion is correct: an image might visually look good, but if combing or gaps show in the histogram, that indicates a bad conversion, which becomes evident when trying to grade, certainly at lower bit depths. A luma waveform can highlight problems too. Whether something 'looks' good also depends on the calibration of the monitor; it might look great on one person's and bad on another's. Histograms can lie, but they are a better illustration of the state of an image.
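
A quick way to see the combing for yourself, as a numpy sketch (a hypothetical example, not any specific NLE): stretch a limited-range luma ramp to full range in 8-bit integer maths and count how many of the 256 output codes are actually used. The unused codes are the gaps, the "comb teeth", in the histogram; a 32-bit float stretch leaves no such gaps.

[code]
import numpy as np

y = np.arange(16, 236, dtype=np.uint8)               # limited-range luma ramp, 16-235
stretched_8bit = np.clip(np.rint((y.astype(np.float32) - 16) * 255.0 / 219.0),
                         0, 255).astype(np.uint8)    # rounded back to 8-bit codes

used = np.zeros(256, bool)
used[stretched_8bit] = True
print('output codes used:', used.sum(), 'of 256')    # < 256: the missing codes are gaps
[/code]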

The key point, though, is the weighted calculation done to generate the RGB values that you see; those are the values on which you judge the quality of the image, and therefore the performance of the camera. Importantly, a YCC color space can generate a wider range of values than can be held or displayed in 8-bit RGB, so 32-bit float RGB processing is offered to negate this.

But what you perceive to be the 'quality' of the camera file depends on how the NLE interprets the YCC and how the RGB values are calculated and stored in the memory of the computer, as well as on the interpolation of YCC values into RGB pixels with regard to edges, stepping, etc.

The algorithm your NLE uses in the conversion determines the perceived smoothness of edges, i.e. whether it does nearest neighbour, bilinear, bicubic, etc. 5DToRGB offers a custom algorithm, I believe. But to concentrate on levels handling...

It's important to understand that the RGB display you see is not necessarily the extent of what is actually in the YCC; it's just how the NLE is previewing it by default. We need to separate what is displayed from what is really in the file, and a simple levels filter can make detail appear. This really needs to be done at 32-bit processing, though, as clipping may well occur otherwise, as mentioned above.

Our monitors are 8-bit RGB displays; they can't display 32-bit as 32-bit, or a wider dynamic range. But that's not to say the RGB values calculated in the YCC-to-RGB conversion by the NLE or media player don't contain values greater than can be displayed at 8-bit, including negative RGB values that would have been clipped in an 8-bit YCC-to-RGB conversion.

Being able to store and manipulate negative values helps black levels and shadows, and whites greater than a value of 1 can be held and manipulated above the 8-bit white clipping level. (0-255 is described as 0-1 in 32-bit terms.)

So, back to YCC-to-RGB conversion. Your GH2 captures luma in the 16-235 range. It's not full range, it's limited, but it's the 'correct' range for YCC-to-RGB conversion: 16-235 mapped to 0-255 RGB based on the standard calculation. This is all in 8-bit terms, with 8-bit considerations like clipping any values generated that are negative or greater than 1.

What 5DToRGB offers is for you to say: 'nah, I don't want the standard YCC-to-RGB mapping based on 16-235 luma; calculate RGB values on the assumption that the luma levels in my files are full range, 0-255.' Doing that means that instead of 16 YCC being treated as 0 RGB (black), it's treated as 16 RGB (grey), and 235 YCC is treated as 235 RGB (not quite white). The result is that the levels of the original file are left unstretched; the image looks washed out, or not so contrasty, and you can see more detail as a result. That's all the 8-bit world, and if your NLE is 8-bit then you may have to resort to that sort of workflow.
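
As a sketch of the two interpretations (luma only, my own toy numbers, ignoring chroma):

[code]
import numpy as np

y = np.array([16, 64, 128, 235], dtype=np.float32)      # sample GH2 luma codes

limited = np.clip((y - 16.0) * 255.0 / 219.0, 0, 255)   # correct: 16->0, 235->255
full    = y                                             # full-range assumption: 16->16

print('limited:', np.rint(limited))   # [  0.  56. 130. 255.]
print('full   :', np.rint(full))      # [ 16.  64. 128. 235.]  washed out, no true black/white
[/code]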

10-bit and 16-bit just provide finer gradients and precision; they do not provide the ability to store negative RGB values or values over 1. A 32-bit float workspace is required for that: 32-bit is not all about compositing and higher precision.

Apps such as Premiere CS5 onwards and AE (version 7 onwards, I think) with a 32-bit float workspace work differently from 8-bit and 16-bit. 16-bit just spreads 8-bit over a wider range of levels pro rata, so 8-bit black level 0 hits the bottom of the 16-bit range and 255 hits the top of it.

Adobe's 32-bit float workspace, on the other hand, places the 8-bit range within a much wider range of levels, so you have room to go negative as well, blacker than black; 8-bit black doesn't hit the bottom of the 32-bit range.

The importance of this is that where your shadows would have been clipped to black and detail lost or buried, 32-bit allows those values to be generated and held onto. Our 8-bit display still can't show them, but they are there, safe in memory, and the same goes for brighter-than-8-bit white. We can be reassured that we can shift levels about freely until what we see in our 8-bit preview is what we want. Those details you miraculously see appear by magic with 5DToRGB come from what is really just a remapping of levels from YCC to RGB.

In 32-bit, the default GH2 import levels and detail will appear and disappear depending on your grading, but they are not lost; they slide in and out of the 8-bit preview window into the 32-bit world. No need to transcode.

This makes the 5DToRGB process pointless with regard to levels handling and gamma when you have a 32-bit float workspace; 16-bit doesn't offer this. Just import the GH2 source into 32-bit and grade.
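
Here's a toy numpy illustration of why the float headroom matters (an assumed example, not Adobe's actual internals): crush some shadow values and lift them back, once with out-of-range values kept in float and once with 8-bit-style clipping at black.

[code]
import numpy as np

shadows = np.array([0.02, 0.05, 0.10], dtype=np.float32)  # near-black detail, 0-1 terms

# crush the blacks, then lift them again
crushed_float  = shadows - 0.06                # float: values go negative but are kept
restored_float = crushed_float + 0.06

crushed_8bit  = np.clip(shadows - 0.06, 0, 1)  # 8-bit-style: clipped at 0
restored_8bit = crushed_8bit + 0.06

print(restored_float)  # ~[0.02 0.05 0.1 ]  shadow detail intact
print(restored_8bit)   # ~[0.06 0.06 0.1 ]  two shadow steps merged for good
[/code]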

You can see whether your NLE handles this by importing the fullrangetest files linked in one of my posts above. Try grading the unflagged file: it's full-range luma, so initially the display will show black and white horizontal bars. This is the default mapping for the 8-bit RGB preview I mentioned above; relate that to your default contrasty, crushed-blacks, lost-detail preview in the NLE.

Put a levels filter on it, or grade, and pull the luma levels up and down. If you see the 16 & 235 text appear, your NLE has not clipped the values converting from YCC to RGB. You'll be safe in the knowledge that you haven't been shafted by Apple and lost detail, and that 5DToRGB isn't doing magic.

If you don't see any 16 & 235 text, and conclude that your NLE is doing an 8-bit conversion from YCC to RGB, then options like 5DToRGB transcoding may be the only way forward.

I'm not sure why anyone would want to transcode to 4444; the last '4' is the alpha channel.

The conversion from 4:2:0 YCC (i.e. subsampled chroma) to 444 ProRes is a big interpolation process, creating 'fake' chroma values by interpolating up the half-size chroma captured in camera.

It's possible that some dithering or noise is added to help with that. So 444 is manufactured from very little; again, it's interpolation via an algorithm like bilinear or bicubic, smoothing the chroma a bit, but it does nothing to levels or gamma. It just manufactures a bit of extra color.

444 is similar to RGB, and the process of generating RGB in the NLE for preview on our RGB monitors works very similarly: as soon as you import the YCC sources into the NLE, preferably at higher precision than 8-bit, the chroma is interpolated, mixed with the non-subsampled luma, and RGB values are created. The higher the bit depth, the better the gradients and edges created.
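
As an illustration of how little 'extra' there is in that manufactured chroma, here's a nearest-neighbour upsample in numpy (a hypothetical sketch; bilinear or bicubic just smooth the same guesses):

[code]
import numpy as np

def upsample_chroma_nn(chroma):
    # 4:2:0 chroma is half width and half height; repeat each sample 2x2
    return chroma.repeat(2, axis=0).repeat(2, axis=1)

cb = np.array([[100, 120],
               [140, 160]], dtype=np.uint8)   # 2x2 chroma plane for a 4x4 frame
print(upsample_chroma_nn(cb))
# [[100 100 120 120]
#  [100 100 120 120]
#  [140 140 160 160]
#  [140 140 160 160]]   no new colour information, only repetition
[/code]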

I don't think transcoding to 444, or even 4444, for import into a 32-bit NLE or grading package is worthwhile.

All this comes back to a simple workflow. I suggest using a 32-bit-enabled NLE / grading tool and a slight, gentle denoise to handle the interpolation of values at higher bit depth, rather than interpolating to 444 with algorithms that can oversharpen edges and, if overdone or pushed in the grade, create black, white or colored halos and fringes at edges depending on pixel values and bit depth.

A gentle denoise will give other benefits too, obviously. But I suggest doing your own tests; whatever suits an individual is the way to go, of course.

I posted this elsewhere in simplified form: stick your 8-bit 4:2:0 footage into After Effects, set the project to 32-bit, and it's treated as non-subsampled individual frames anyway, in a floating-point space. Transcoding is pointless hassle, really. You can't gain detail, and the finishing software is doing the same thing for you.

God bless the floating point!
