
see ya

Reputation Activity

  1. Like
    see ya got a reaction from John Twigt in 5DtoRGB VS GH2 banding issue   
    The extra detail and brightness 'gained' comes down to a simple reason: the mapping your NLE or media player does between the YCbCr (YCC for short) color model of the GH2 video and the RGB preview and color processing done by the NLE / media player.

    The 'correct' way is to take the 8bit 16 - 235 luma range and 16 - 240 chroma range and convert to 0 - 255 RGB. So 16 YCC (black) and 235 YCC (white) are mapped to 0 RGB (black) and 255 RGB (white).
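As a sketch of that mapping (not 5DToRGB's or any NLE's actual code), the limited-to-full range luma scale is a simple linear stretch; the 255/219 factor follows from the ranges quoted above, everything else is illustrative:

```python
def limited_to_full_luma(y):
    """Map 8-bit limited-range luma (16-235) to full-range 0-255.

    16 -> 0 (black), 235 -> 255 (white); values outside 16-235 are
    clipped here, which is exactly the 8-bit loss discussed below.
    """
    scaled = (y - 16) * 255.0 / 219.0  # 219 = 235 - 16
    return max(0, min(255, round(scaled)))

print(limited_to_full_luma(16))   # 0
print(limited_to_full_luma(235))  # 255
```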

    YCC video has a luma channel (a kind of greyscale) and two chroma channels, Cb & Cr, chroma blue and chroma red. They are all kept separate and combined using a 'standard' calculation to give color, saturation and contrast, and on computers that's an RGB preview.

    RGB color space, on the other hand, bakes the brightness into every pixel value rather than keeping it separate in a luma channel. So YCC gives us flexibility in how we combine luma and chroma to create RGB, but that combination needs to be done correctly to recreate the RGB data the camera started out with before encoding to AVCHD.
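A minimal sketch of that 'standard' calculation, using the commonly published BT.601 limited-range coefficients (the weights are from the standard; the function itself is illustrative, not what any particular NLE does):

```python
def ycc601_to_rgb(y, cb, cr):
    """Limited-range BT.601 YCbCr -> 8-bit RGB.

    Luma 16-235 and chroma 16-240 are combined with the standard
    weights; results outside 0-255 are clipped, as an 8-bit
    pipeline would clip them.
    """
    yp = 1.164 * (y - 16)                              # 1.164 = 255/219
    r = yp + 1.596 * (cr - 128)
    g = yp - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = yp + 2.017 * (cb - 128)
    clip = lambda v: max(0, min(255, round(v)))
    return clip(r), clip(g), clip(b)

# Neutral black and white round-trip to the RGB extremes:
print(ycc601_to_rgb(16, 128, 128))   # (0, 0, 0)
print(ycc601_to_rgb(235, 128, 128))  # (255, 255, 255)
```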

    Checking the histogram, which is an RGB tool, can help establish a correct conversion: something might look good visually, but if combing or gaps show in the histogram it indicates a bad conversion, which becomes evident when trying to grade, certainly at lower bit depths. A luma waveform can highlight problems too. Saying something 'looks' good also depends on the calibration of a monitor; it might look great on one person's and bad on another's. Histograms can lie, but they are a better illustration of the state of an image.
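One way to picture that 'combing' (a hypothetical helper, not a real NLE tool): after a bad 8-bit stretch, some levels are never used, leaving zero-count gaps between populated histogram bins:

```python
def histogram_gaps(values, levels=256):
    """Count empty bins that sit between two populated bins.

    A stretched 8-bit image leaves regularly spaced unused levels;
    a clean conversion has few or no such interior gaps.
    """
    counts = [0] * levels
    for v in values:
        counts[v] += 1
    used = [i for i, c in enumerate(counts) if c > 0]
    if len(used) < 2:
        return 0
    lo, hi = used[0], used[-1]
    return sum(1 for i in range(lo, hi + 1) if counts[i] == 0)

# Stretching 0-127 out by doubling leaves every odd level empty:
stretched = [v * 2 for v in range(128)]
print(histogram_gaps(stretched))  # 127
```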

    The key point, though, is the weighted calculation done to generate the RGB values that you see; those are the values on which you judge the quality of the image, and therefore the performance of the camera. Importantly, a YCC color space can generate a wider range of values than can be held or displayed by '8bit' RGB, so 32bit float RGB processing is offered to negate this.
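To illustrate that YCC can encode more than 8-bit RGB can hold, here is a sketch using the same BT.601 weights but without clipping, the way a float workspace can keep the values (the sample triplet is made up for illustration):

```python
def ycc601_to_rgb_float(y, cb, cr):
    """BT.601 limited-range YCbCr -> RGB, *without* clipping,
    the way a 32-bit float workspace can keep the values."""
    yp = 1.164 * (y - 16)
    r = yp + 1.596 * (cr - 128)
    g = yp - 0.392 * (cb - 128) - 0.813 * (cr - 128)
    b = yp + 2.017 * (cb - 128)
    return r, g, b

# A legal dark, strongly blue YCC sample: red lands well below zero,
# a value an 8-bit conversion would have clipped away for good.
r, g, b = ycc601_to_rgb_float(30, 240, 16)
print(round(r), round(g), round(b))
```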

    But what you perceive to be the 'quality' of the camera file depends on how the NLE interprets the YCC and how the RGB values are calculated and stored in the memory of the computer, as well as on the interpolation of YCC values into RGB pixels with regard to edges, stepping etc.

    The algorithm your NLE uses in the conversion determines the perceived smoothness of edges, i.e. whether it does nearest neighbour, bilinear, bicubic etc. 5DToRGB offers a custom algorithm, I believe. But to concentrate on levels handling...

    It's important to understand that the RGB display you see is not necessarily the extent of what is actually in the YCC, just how the NLE is previewing it by default. We need to separate what is displayed from what is really in the file; a simple levels filter can make detail appear, though this really needs to be done at 32bit processing, as clipping may well occur otherwise, as mentioned above.

    Our monitors are 8bit RGB displays; they can't display 32bit as 32bit or a wider dynamic range. But that's not to say the RGB values calculated in the YCC to RGB conversion by the NLE or media player don't contain values greater than can be displayed at 8bit, and that includes negative RGB values that would have been clipped in an 8bit YCC to RGB conversion.

    Being able to store and manipulate negative values helps black levels / shadows, and whites greater than a value of 1 can be held and manipulated above the 8bit white clipping level. 0 - 255 is described as 0 - 1 in 32bit terms.

    So back to YCC to RGB conversion. Your GH2 captures luma in the 16 - 235 range; it's not full range, it's limited, but it's the 'correct' range for YCC to RGB conversion: 16 - 235 mapped to 0 - 255 RGB based on the standard calculation. This is all in 8bit terms, with 8bit considerations like clipping any generated values that are negative or greater than white.

    What 5DToRGB offers is for you to say: 'nah, I don't want the standard YCC to RGB mapping based on 16 - 235 luma. I want you to calculate RGB values based on the assumption that luma levels in my files are full range 0 - 255.' Doing that means that instead of 16 YCC being treated as 0 RGB (black), it's treated as 16 RGB (grey), and 235 YCC is treated as 235 RGB rather than 255. The result is that the levels of the original file are no longer stretched to the full range, the image looks washed out, or not so contrasty, and you can see more detail as a result. That's all the 8bit world, and if your NLE is 8bit then you may have to resort to that sort of workflow.
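The two interpretations can be put side by side (illustrative function names, not 5DToRGB's actual code): the limited-range mapping sends 16 to black, while the full-range assumption leaves 16 as a dark grey, which is exactly the 'washed out' look described above:

```python
def as_limited(y):
    """Treat luma as 16-235 and stretch to 0-255 (the standard mapping)."""
    return max(0.0, min(255.0, (y - 16) * 255.0 / 219.0))

def as_full(y):
    """Treat luma as already full range 0-255 (the alternative mode):
    no stretch, so 16 stays a dark grey instead of becoming black."""
    return float(y)

print(as_limited(16), as_full(16))    # 0.0 16.0
print(as_limited(235), as_full(235))  # 255.0 235.0
```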

    10bit and 16bit just provide finer gradients and precision; they do not, however, provide the ability to store negative RGB values or RGB values over 1. A 32bit float workspace is required for that; 32bit is not all about compositing and higher precision.

    Apps such as Premiere CS5 onwards and AE 7 onwards with a 32bit float workspace work differently to 8bit and 16bit. 16bit just spreads 8bit over a wider range of levels pro rata, so 8bit black level 0 hits the bottom of the 16bit range and 255 hits the top of the 16bit range.

    Whereas Adobe's 32bit float workspace centres the 0 of the 8bit range within the wider levels range, so you have room to go negative as well, blacker than black, so 8bit black doesn't hit 32bit 0 black.
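A toy sketch of the difference: in a clipping 8-bit style pipeline, pulling levels down and then back up destroys near-black detail, while a float pipeline round-trips it. The lift value and sample levels here are made up purely for illustration:

```python
def shift(values, offset, clip_8bit):
    """Add an offset to every level; optionally clip to 0-255 the way
    an integer 8-bit pipeline would."""
    out = [v + offset for v in values]
    if clip_8bit:
        out = [max(0.0, min(255.0, v)) for v in out]
    return out

shadows = [2.0, 5.0, 10.0]          # near-black detail
down_up_8bit = shift(shift(shadows, -20, True), 20, True)
down_up_float = shift(shift(shadows, -20, False), 20, False)
print(down_up_8bit)   # [20.0, 20.0, 20.0] -- detail crushed to one level
print(down_up_float)  # [2.0, 5.0, 10.0]  -- detail preserved
```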

    The importance of this is that where your shadows would have been clipped to black and detail lost or hidden deep, 32bit allows the values to be generated and held onto. Our 8bit display still can't show them, but they are there, safe in memory, and the same goes for brighter than 8bit white. We can be reassured that we can shift levels about freely until what we see in our 8bit preview is what we want; those details you miraculously see appear 'by magic' with 5DToRGB are really just a remapping of levels from YCC to RGB.

    In 32bit, with the default GH2 import, levels and detail will appear and disappear depending on your grading, but they are not lost; they slide in and out of the 8bit preview window into the 32bit world. No need to transcode.

    This makes the 5DToRGB process pointless with regard to levels handling and gamma when you have a 32bit float workspace. 16bit doesn't offer this. Just import the GH2 source into 32bit and grade.

    You can see whether your NLE handles this by importing the fullrangetest files in one of my posts above. Try grading the unflagged file; it's full range luma, so initially the display will show black and white horizontal bars. This is the default mapping for 8bit RGB preview I mentioned above; relate that to your default contrasty, crushed-blacks, lost-detail preview in the NLE.

    Put a levels filter on it, or grade, and pull the luma levels up and down. If you see the 16 & 235 text appear, your NLE has not clipped the values converting from YCC to RGB. You'll be safe in the knowledge that you haven't been shafted by Apple and lost detail, and that 5DToRGB isn't doing magic.

    If you don't see any 16 & 235 text and conclude that your NLE is doing an 8bit conversion from YCC to RGB, then options like 5DToRGB transcoding etc. may be the only options.

    Not sure why anyone would want to transcode to 4444, the last '4' is for alpha channel.

    The conversion from 4:2:0 YCC, i.e. subsampled chroma, to 444 ProRes is a big interpolation process, creating 'fake' chroma values by interpolating the half-size chroma captured in camera.

    It's possible that some dithering or noise is added to help with that. So 444 is manufactured from very little; again it's interpolation via an algorithm like bilinear or bicubic, smoothing chroma a bit, but it does nothing to levels and gamma. It just manufactures a bit of extra color.
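That chroma 'manufacturing' is just interpolation. A minimal 1-D sketch using linear interpolation (real decoders use fancier filters and handle 2-D chroma siting, so this is only the shape of the idea):

```python
def upsample_chroma_1d(chroma):
    """Double a 1-D chroma row by linear interpolation, the way
    subsampled chroma is stretched back out to full resolution."""
    out = []
    for i, c in enumerate(chroma):
        out.append(float(c))
        if i + 1 < len(chroma):
            out.append((c + chroma[i + 1]) / 2.0)  # invented midpoint
        else:
            out.append(float(c))  # edge: repeat the last sample
    return out

print(upsample_chroma_1d([100, 120, 140]))
# [100.0, 110.0, 120.0, 130.0, 140.0, 140.0]
```

Half the output samples never existed in the file; they are made up from their neighbours, which is why transcoding to 444 adds no real information.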

    444 is similar to RGB, and the process of generating RGB in the NLE for preview on our RGB monitors is done very similarly: as soon as you import the YCC sources into the NLE, preferably at higher precision than 8bit, chroma is interpolated and mixed with the non-subsampled luma, and RGB values are created. The higher the bit depth, the better the gradients and edges created.

    I don't think transcoding to 444 or even 4444 for import into a 32bit NLE or grading package is worthwhile.

    All this comes back to a simple workflow: I suggest using a 32bit enabled NLE / grading tool and a slight, gentle denoise to handle the interpolation of values at higher bit depth, rather than interpolating to 444 using algorithms that, if overdone or pushed in the grade, can over-sharpen edges and create black, white or colored halos and fringes at edges, depending on pixel values and lower bit depth.

    Gentle denoise will give other benefits too, obviously. But I suggest doing your own tests; whatever suits an individual is the way to go, of course.
  2. Like
    see ya got a reaction from Sean Cunningham in 5DtoRGB VS GH2 banding issue   
    Premiere CS5 onwards is 32bit processing by default. AE 7 onwards, I think, offers 32bit and linear blending / workspace.

    BT601 or 709 differences in 5DToRGB are only applicable in the conversion to RGB, and then the only difference seen is a slight contrast change, with pinks sliding to orange and blues sliding towards green a bit. But unless you have a correctly converted video or image as reference, you'd probably never know the difference.
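The 601 vs 709 difference is just different luma weights in the conversion matrix. The coefficients below are the published ones; the comparison pixel is made up for illustration:

```python
# Published luma weights: Y = Kr*R + Kg*G + Kb*B
BT601 = (0.299, 0.587, 0.114)
BT709 = (0.2126, 0.7152, 0.0722)

def luma(rgb, coeffs):
    """Weighted sum of R, G, B using one standard's coefficients."""
    return sum(c * k for c, k in zip(rgb, coeffs))

# Same RGB pixel, slightly different luma under each standard --
# decode with the wrong matrix and colours shift subtly, as described.
pixel = (200.0, 100.0, 50.0)
print(round(luma(pixel, BT601), 1))
print(round(luma(pixel, BT709), 1))
```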

    All the 4:4:4 and gamma nonsense is pointless, really just import into a higher bit depth workspace, denoise a bit and go from there. 32bit Premiere is going to do a limited range conversion of the GH2 source to RGB straight away, same for AE CS5 onwards.

    Absolutely no point adding a 16-235 filter to GH2 as that's what it is already; the filter was for 16-255 sources like the FS100 and NEX, not the GH2.

    All the 4:4:4 stuff is just interpolation of the 4:2:0 chroma which is done as soon as import into CS5 AE or Premiere.

    Using AE or Premiere prior to CS5 means different handling and a screwed-up conversion to RGB if you're not careful. Results will differ between versions; what works in CS5 is not necessarily going to look the same in earlier versions.
  3. Like
    see ya reacted to galenb in First DNG files from BMCC now on line...   
    Oh and new Blackmagic forum too!
    http://forum.blackmagicdesign.com
  4. Like
    see ya got a reaction from Andrew Reid in How Mac OSX still *screws* your GH2 / FS100 / NEX footage - A must read!!   
    Well, 'harm' may be too strong a description, depending on how 'precious' we feel the source files are. For me it's more about gaining awareness of what is happening, to avoid 'processing' that is unnecessary or unhelpful.

    By OSX I guess you refer to QT? Premiere doesn't use QT even on Mac, but FCPX and FCP do, of course. I'm not aware of how QT handles GH2 source, but for Canon DSLRs it takes a very similar approach to 5DToRGB with regard to levels, that is, it scales them into the restricted range. Whenever I've used QT to decompress, it always gives 4:2:2 even from 4:2:0 sources, upsampling chroma; not sure what interpolation it uses for that though, as I try to avoid QT for anything.

    With Canon DSLR sources, including the 7D and even the prototype 1D C, the MOV container has that fullrange flag metadata set 'on', so many decompressing codecs will scale luma to 16 - 235 as per a 5DToRGB transcode.

    Regarding improper presentation of original footage: yes, it's just about being aware of why and how, so that when things don't look right we stand a better chance of fixing it.