
see ya

Members

  • Posts: 215
  • Joined
  • Last visited
Everything posted by see ya

  1. Yes, the Nex5n captures into 16 - 255, which is roughly 0 - 109 on that scale; you need to be within 0 - 100 before encoding. 0 - 100 equals about 16 - 235 luma, 16 - 240 chroma.
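To make the arithmetic concrete, here's a minimal sketch (my own illustration, not from the original post) of how 8-bit luma code values map onto the 0 - 100 waveform scale, where code 16 sits at 0 and code 235 at 100:

```python
def code_to_scale(y: int) -> float:
    """Map an 8-bit luma code value onto the 0-100 waveform scale (16 -> 0, 235 -> 100)."""
    return (y - 16) / (235 - 16) * 100.0

print(code_to_scale(16))   # 0.0   (black)
print(code_to_scale(235))  # 100.0 (reference white)
print(code_to_scale(255))  # ~109  -- the "super white" a Nex5n can record
```

Anything the camera records above code 235 lands above 100 on the waveform, which is exactly what the post says to pull back before encoding.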
  2. Depends on the camera, NLE, grading app and workflow. Do your own tests to establish the best combination.   But sounds good to me. :-)
  3. Yes, they were purely RGB based; even the YCC filters were based on the clipped RGB converted back to YCC. That's why I made the distinction in earlier posts about post-CS5: any color space conversions in versions prior to the Mercury Engine were suspect. Yeah, re Vibrance: adding Contrast and/or Saturation masks to adjust sunset saturation and raise levels to brighten the rocks and grass sufficiently. And it can be done in YCC without any RGB processing. :-) A bit like the old printer-lights days of saturation and contrast control using interpositives or internegatives. BTW, added a bit more to the previous post.
  4. James, thanks for the confirmation regarding 8bit and also 16bit. I think the clipping is due to both modes being integer, not float? And the standard conversion of 16 - 235 YCC to 0 - 255 8bit, and pro rata for 16bit (65,536 levels). *EDIT* No, thinking about it, precision shouldn't have any effect on that levels mapping. Yes, if you stay YCbCr then there's no clipping; we just risk making even more invalid RGB values, so that when the conversion to 8bit does happen, such as at playback on RGB devices, the clipping happens. This is the beef about past advice on dialing down saturation: in 8bit and probably 16bit workflows, when the YCC to RGB conversion happens in the workflow, channels get clipped, leading to real, unretrievable loss of data. 32bit workflows are more recent and more demanding if not done on the GPU, which is probably why OpenGL GLSL shaders have been used previously, but I think they clip; I think things like Magic Bullet Looks and possibly Colorista use GLSL shaders. Regarding clipping chroma: cameras like Canon and Nikon DSLRs use the JFIF specification rather than typical YCbCr, so chroma is normalized to fit within the full levels as with luma; the result is that it won't clip chroma however much you saturate. Regarding flagon.mp4, the reason it fits 0 - 100 is that although the levels are full range, h264 has VUI Options metadata in the stream, one of them a full-range flag; when set, the decompressing codec will scale levels into 16 - 235 for a typical 16 - 235 YCC to 0 - 255 RGB conversion. For picture styles like Cinestyle that raise black to 16 YCC in the profile, black then gets raised again to 32 and 255 pulled down to 235 because of the full-range flag scaling before the conversion to RGB. For Nex5n AVCHD I don't think VUI Options or an equivalent full-range flag for its 16 - 255 levels exist.
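The integer-pipeline clipping described above can be sketched like this (my grayscale-only illustration, assuming the standard 16 - 235 → 0 - 255 expansion and no chroma):

```python
import numpy as np

# Studio-range Y' (16-235) expanded to full-range RGB (0-255).
# In an integer pipeline, "super white" codes above 235 land above 255
# and are clipped for good; a float pipeline keeps them.
y = np.array([16, 128, 235, 255], dtype=np.float64)  # 255 = super white
rgb_float = (y - 16.0) * 255.0 / 219.0               # float: 255 -> ~278.3, retained
rgb_uint8 = np.clip(rgb_float, 0, 255).astype(np.uint8)  # integer: clipped to 255

print(rgb_float)  # approx [0, 130.4, 255, 278.3]
print(rgb_uint8)  # [0, 130, 255, 255]  <- super white irretrievably lost
```

Once the 8-bit clip has happened, nothing downstream can recover the difference between code 235 and code 255; that is the "unretrievable" loss the post refers to.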
  5. Hi james_H, just a quick query to confirm.     There were two files in the zip, both full-range levels in h264, but one, flagon.mp4, has the VUI Option 'fullrange' switched on, so the decompressing codec will squeeze levels into 16 - 235 for a standard, to-specification conversion to RGB with no clipping, whereas flagoff.mp4 has the full-range flag left switched off.   Could you just confirm that flagoff.mp4 was the file you tested the FCC & levels with? Sorry to fuss, I wasn't very clear before.   Still have to look at your thread on dvxuser but have to register so...   I'd be interested to know what FCP X makes of the test files too; it's 32bit, but all the same it is Apple, and previously QT has made the 16 & 255 text appear not to clip when the text didn't actually have RGB values of 16 & 255, skewing results, due to the stupid gamma issues with color management in OSX and QT, prior to Mountain Lion that is.
  6. @James, will do. I guess all filters in PP CS5 onwards are 32bit? Regarding YCC vs RGB, if you have time it would be interesting to see if the FCC clips or not; I guess it doesn't, but all the same it's quick to test. I don't have PP; here's a link to a full-range file similar to the NEX5n: https://www.dropbox.com/s/g10sawbxva70luq/fullrangetest.zip If you use the full-range mp4 in the zip, when added to the timeline it should appear as black and white horizontal bars, but the Y waveform should show levels above and below 0 - 100. Dropping a levels filter on and remapping the typical 0 - 255 levels into 16 - 235 should make the 16 and 255 text appear. Disable the levels filter and add a Fast Color Corrector between the levels filter and the clip, make a slight adjustment to the FCC and then re-enable the levels filter: do you still see the 16 and 255 text? If you don't, then the FCC is clipping.
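The levels-filter step in that test procedure amounts to the following mapping (a sketch under the assumption of a simple linear squeeze with rounding; the actual PP filter may round differently):

```python
import numpy as np

# Squeeze full-range 0-255 into 16-235 so out-of-range "super" values
# become visible on an ordinary RGB display instead of being clipped.
def squeeze_levels(rgb: np.ndarray) -> np.ndarray:
    return np.round(rgb * (235.0 - 16.0) / 255.0 + 16.0).astype(np.uint8)

frame = np.array([0, 16, 235, 255], dtype=np.float64)
print(squeeze_levels(frame))  # [16, 30, 218, 235] -- nothing clipped any more
```

After the squeeze, the former codes 16 and 255 sit at 30 and 235, safely inside the visible range, which is why the hidden text should appear.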
  7. @James, that's great, I'll check it out later. Are your tests in Premiere Pro, CS5 onwards, and in a 32bit workspace?
  8. The care needs to be taken not with whether 'chroma clips', as chroma is a YCbCr concept, but with the resultant RGB values generated when doing probably 90% of the color processing in an NLE or grading app: once the luma and chroma planes are combined to generate RGB, that's when 'channels' get clipped. This is compounded by using a typical 8 or 16bit mode to do the color conversion: invalid RGB values are created, leading to abnormalities in color, gamma and the overall 'quality' of the resultant image. Chroma is nothing without the addition of luma. So 'shooting flat' is a consideration to make. Doing your own tests helps you decide.
  9. Here's a question to all GH2 and GH3 owners familiar with close inspection of the source files: the GH2 shot AVCHD only; the GH3 offers both AVCHD and h264 AVC. The h264 out of the GH3 appears to have 32x32 or maybe 64x64 macroblocks, or is that a problem with my using ffmpeg to decompress? If it is genuinely larger than the 16x16 macroblocks (with partitioning therein) that are standard for h264, then maybe the GH3 is using HEVC-hybrid h264 encoding, supposedly more efficient and more with the future of higher resolutions in mind. If that's the case, the simplistic comparison of bitrate and GOPs becomes even less of a worthwhile measure of 'quality'? At the moment, from the GH3 h264 I've seen, I strongly dislike the coarseness of the image, which appears to need a heavier hand denoising etc.
  10.   The guy's not scaled the levels into rec709; they're full range, so shadows get crushed and highlights compressed to white. Not just the video posted earlier but a lot of it on his Vimeo page.   http://dl.dropbox.com/u/74780302/4852.png   That goes some way to the nasty look, on top of camera settings. But there's no excuse not to do a simple levels check before encoding.   I do wonder whether, with the increase in raw YCC capture (Codex, Atomos and all that) and 12/14bit raw workflows in general, this failure is going to become even more prevalent; it's very common already, you even see it here on EOSHD.   At least with full-range DSLR h264 it gets scaled into strict rec709 levels as soon as it enters the NLE or media player, or when transcoding, because of the metadata in the stream, so unless doing something odd with 5DToRGB or similar the operator can't really go wrong.   Whereas with all these alternative capture and different workflows, the onus is even more on the editor / grader to know the nature of the source files.
  11. The comment about Nikon and CLIPPING: meh, whatever.   From the few native C300 samples I've looked at, it appears able to capture luma into 16 - 255, like many cameras. Certainly many Canon mpeg2 video camera sources I've seen are 16 - 255 luma, but assume 16 - 240 chroma. Rec709 primaries, transfer and luma coeffs. But I don't think they use the mpeg2 sequence display extension to signal full levels, so unless some sort of luma scaling into 16 - 235 is done at decompression, a 32bit float conversion to RGB will almost certainly be required to prevent software clipping of luma, all depending on camera use and exposure choice. Even then the preview will look clipped, but the data is safe to be 'graded' into the 16 - 235 range for encoding out.   Nikon and Canon DSLRs, & the GH3 encoding into MOV, all use full 8bit luma BUT also normalize chroma over the full range, not staying within 16 - 240. But they use the h264 VUI Option 'fullrange' in the metadata of the h264 stream to signal a well-behaved decompressing codec to scale levels into 16 - 235 before conversion to RGB.   VLC with hardware acceleration on ignores the fullrange flag, so the preview looks more contrasty, with levels beyond 16 - 235 being crushed to black and compressed to white accordingly. NLE previews are the same even at 32bit float, as it's all display-referred material, but with a 32bit float workspace the out-of-range data is held onto, not lost.   Most NLEs these days respect the fullrange flag and avert any problem with software-induced crushing and clipping.   It is possible to simply turn off the fullrange flag in the stream, which is something I personally do in order to avoid the scaling of levels into reduced range at import into the NLE, so you have access to super white and black; then work at 32bit float, which holds onto RGB values beyond the 0.0 to 1.0 range (but this needs care), then scale into strict rec709 for encoding out.
Why Nikon uses a BT470 transfer curve I'm unsure, but I guess it's to do with noise or hiding it, like they hide noise, or at least have done previously, by skewing the color channels.   Difficult, and ultimately pointless I guess, to try to compare C300 mpeg2 with DSLR h264 with regard to any benefit of super whites.
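For anyone wanting to try the flag-stripping approach, one way I believe works without re-encoding is FFmpeg's h264_metadata bitstream filter (the filenames here are placeholders, and this assumes a build recent enough to include that filter):

```shell
# Clear the VUI full-range flag via stream copy (no re-encode, no quality loss).
# in.mp4 / out.mp4 are placeholder names.
ffmpeg -i in.mp4 -c copy -bsf:v h264_metadata=video_full_range_flag=0 out.mp4
```

Because it is a stream copy, the picture data is untouched; only the VUI signalling changes, so the NLE no longer squeezes levels at import.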
  12. @jgharding, why are you talking about RGB 4:4:4 and then showing a YCbCr color model diagram?   "Odd shades that represent the reality of how the computer deals with it": it's not how the computer deals with it, it's how the image is represented and the nature of the encoding by the camera; it's the color model. Computer displays are RGB based, so a conversion from YCC to RGB has to be done to give us back the RGB.
  13. A few additional comments, some specific to Nikon and Canon DSLRs. Raw is 'single channel' 12 or 14bit data in linear light values and has no color gamut (ie: rec709, ProPhoto, ACES) defined. So no skewing of values by gamma encoding, and no restricted color gamut other than the limitations of the camera hardware. Both those choices become user choices in the raw development process. Canon's Digic processor handles this: it takes in the raw, does the raw development process like demosaic and line skipping (in pre-MkIII models), applies various processing steps including the camera response curve and picture style, and outputs 4:2:2 YCC (let's leave analog YUV in the gutter). Not RGB. The 4:2:2 is aimed at the LCD display, which takes YCC input. Canon added an h264 encoder chip to its line of cameras, tapped into that 4:2:2 and sent a feed to the h264 encoder and jpg image writer. The 4:2:2 YCC has been calculated slightly differently to the rec709 mentioned above: for many models of Nikon, Canon and GH3 MOVs, the luma/chroma mix is based on BT601 luma coeffs (ie: the color matrix), uses a rec709 transfer curve to go from linear light to gamma-encoded YCC, and declares rec709 color primaries in order to define the color gamut. Nikon uses a BT470 transfer curve, not rec709. The result is not rec709 4:2:2 in camera but JFIF (think jpg): chroma is normalized over the full 8bit range, mixed with full-range luma. That normalized 4:2:2 gets fed to the LCD screen and h264 encoder, and soon, for the 5D MkIII, hdmi out, though to rec709 no doubt. YCC 4:4:4 and RGB are not interchangeable in discussion; they belong to two different color models and need handling correctly accordingly, especially getting the luma coeffs and luma range correct in the conversion to RGB for display, otherwise undue clipping of channels will occur, artifacts will be created and colors will be wrong: pinks skewed toward orange, blues to green. Great info CD.
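The luma-coefficient mismatch mentioned at the end can be shown numerically (my illustration; the pink triplet is arbitrary):

```python
# Y' is a weighted sum of gamma-encoded R'G'B', but the weights differ
# per standard, so decoding with the wrong matrix skews luma and hue.
BT601 = (0.299, 0.587, 0.114)       # SD / JFIF-style matrix
BT709 = (0.2126, 0.7152, 0.0722)    # HD matrix

def luma(rgb, coeffs):
    r, g, b = rgb
    kr, kg, kb = coeffs
    return kr * r + kg * g + kb * b

pink = (1.0, 0.4, 0.6)  # an arbitrary pinkish colour, normalised 0-1
print(luma(pink, BT601))  # ~0.602
print(luma(pink, BT709))  # ~0.542
# Encoding with one matrix and decoding with the other shifts the
# luma/chroma balance -- hence pinks drifting toward orange, blues to green.
```

The same R'G'B' triplet yields noticeably different Y' under the two matrices, which is exactly the error a player makes when it assumes BT709 for a BT601-encoded source.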
  14.   Maybe it's a patent or licensing issue? There are very similar output characteristics between the Canon & Nikon DSLRs, and the GH3 for that matter; it's not unreasonable to consider that there may be patented or licensed tech in the pipeline from source frames in camera to h264 encoder.
  15. Regarding the query as to whether Canon will provide 4:2:2 out: it's already possible to get 4:2:2 out of a Canon with Magic Lantern's Silent Pic function; ML writes the 4:2:2 to the card, it's just not possible to sustain the frame rate, and the resolution varies between cameras.   I don't think even the 5D MkIII is full 1920x1080. The encoder feed is 1720x974 (550D) and 1904x1072 (5D3); for the 600D, sizes are 1728x972 (crop) and 1680x945 (non-crop).   But 4:2:2 out in April shouldn't be a big deal; I think the pre-release 5D MkIIIs had 4:2:2 hdmi out enabled by accident and switched off at release?
  16. Yeah, it wasn't so much whether he used the 'DOF adaptor' in this particular test, more that it was mentioned at all: that a special adaptor ring bracket even exists for the Letus BMCC cage, and that 3 images in his write-up referenced the idea of putting a 'DOF adaptor' in front.   Not suggesting one way or the other whether it's right or wrong, there's enough of the black-or-white, right-or-wrong BS about as it is; just imagined another bit of kit in people's attics getting dusted off to use once again, when perhaps many had thought those days of DOF adaptors were generally over. :-)
  17. Interesting that he's also suggesting making use of a ground glass adaptor. Bring on the usual BS about overuse of shallow DOF.
  18. With 8bit we work within 0 - 255; with 16bit we work within 0 - 65535, but in both instances 0 - 255 8bit values are remapped into those ranges. The problem is that our video sources are often full range, or at least 16 - 255, but the conversion from YCbCr to RGB is based on 16 - 235 YCC to 0 - 255 RGB, so those top 20 levels get clipped to white. With 32bit, 0.0 to 1.0 is 0 - 255 RGB, ie: 16 - 235 YCC; those twenty levels don't get clipped but instead are mapped above 1.0. Same for shadows below 16: they get mapped below 0.0, not crushed. When you read that 16bit is enough, it may be enough precision, but it doesn't solve crush and clip with full-range video sources; 32bit is essential. After Effects CS5.5 onwards works a little differently: the 0 - 255 range is mapped with 0 RGB at 0.5 rather than 0.0, which I think is because blending and color processing are done in the linear domain, so all levels get shifted up to avoid working too much in the lower end of the range. I believe that any CUDA or OpenCL operations are always 32bit and in the linear domain, so even 16bit filters become historic.
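The 32bit float mapping described above can be sketched as follows (my grayscale-only illustration):

```python
import numpy as np

# In a 32bit float workspace, 16-235 Y' maps to 0.0-1.0, so codes above 235
# land above 1.0 and codes below 16 land below 0.0 -- outside display range
# but still present, ready to be graded back in.
y = np.array([8, 16, 235, 255], dtype=np.float32)
f = (y - 16.0) / 219.0
print(f)  # approx [-0.0365, 0.0, 1.0, 1.0913] -- shadows and super whites preserved

# An 8bit (or clamped) working space has nowhere to put those values:
clipped = np.clip(f, 0.0, 1.0)
print(clipped)  # [0.0, 0.0, 1.0, 1.0] -- crushed and clipped
```

The float representation keeps the out-of-range data recoverable; the clamped one throws it away, which is the crush-and-clip the post warns about.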
  19. AFAIK 4:2:0 is interpolated to interleaved 4:2:2 with QT. One of the plugin writers for Avisynth, who created QTInput based on the QT SDK, simply couldn't get 4:2:0 out of it however hard he tried. Adl, can you post links to the three native files? Personally I don't trust these sorts of side-by-side comparisons via an NLE. Just to mention, regarding Canon DSLRs, which also use the JFIF spec like the Nikon DSLRs: internally within Canon cameras it is raw 4:2:2 that is sent to the h264 encoder. Magic Lantern have confirmed this, and raw 4:2:2 frames can be saved to the memory card.
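The 4:2:0 → 4:2:2 interpolation mentioned can be sketched as a simple vertical chroma upsample (my nearest-neighbour illustration; QT's actual interpolation may be smarter than row duplication):

```python
import numpy as np

# 4:2:0 stores one chroma row per two luma rows; duplicating each chroma
# row vertically yields 4:2:2 (one chroma row per luma row, still 2:1
# horizontal subsampling). Values here are arbitrary Cb samples.
chroma_420 = np.array([[100, 110],
                       [120, 130]], dtype=np.uint8)  # 2 chroma rows for 4 luma rows
chroma_422 = np.repeat(chroma_420, 2, axis=0)        # nearest-neighbour upsample
print(chroma_422.shape)  # (4, 2)
```

No new chroma detail is created by the upsample, which is why a decoder that only exposes 4:2:2 output isn't actually delivering more colour resolution than the 4:2:0 source contained.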
  20. A couple of additional factors with Nikon DSLR h264, very similar to Canon: chroma is normalised over the full 8bit range (±128, JFIF); the h264 stream is flagged full range, so the decompressing codec must honour the flag and scale levels into the 16 - 235 / 16 - 240 range for any conversion to RGB; and it also uses BT601 luma coeffs rather than BT709. Chroma placement is centered rather than to the left. Nikon also uses a BT470 transfer curve rather than BT709. So VLC, for example, with HW acceleration on, ignores the full-range flag and clips; QT ignores the BT601 luma coeffs, so pinks go to orange and blues to green. All add to the mix with camera source comparisons and pixel peeping. And when adding a BT601 luma coeff source like the D5200 alongside a BT709 luma coeff source like the GH3, encoded out to typical BT709 h264, does the NLE convert the BT601 matrix to BT709? Or are the resultant frame-grab comparisons just done as if both were BT709 luma coeffs, skewing the color comparisons?
  21. In this case the LUT is a 1D Look-Up Table: for LOG, basically an S-shaped curve affecting the R, G and B channels. The premise is that someone more technical than ourselves has created a mathematically accurate curve to affect the LOG appearance and pull it into what is generally referred to as a 'rec709' appearance, and supposedly be more 'correct'. This might be useful if you want to accurately reproduce some real-world color in the shot, like a logo, and a premade 1D LUT can save time. But you could just use your color tools, like curves, and do it yourself to suit your eye.
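A 1D LUT of this kind can be sketched in a few lines (the S-curve knot values here are made up purely for illustration, not taken from any real LOG-to-rec709 LUT):

```python
import numpy as np

# A 1D LUT is just a curve sampled at a few points, applied identically
# to each channel with interpolation between the samples.
knots_in  = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
knots_out = np.array([0.00, 0.15, 0.50, 0.85, 1.00])  # gentle S-curve

def apply_1d_lut(channel: np.ndarray) -> np.ndarray:
    """Apply the curve to one channel (values normalised 0-1)."""
    return np.interp(channel, knots_in, knots_out)

flat = np.array([0.1, 0.5, 0.9])  # a flat/LOG-looking ramp
print(apply_1d_lut(flat))  # shadows pushed down, highlights up: ~[0.06, 0.5, 0.94]
```

Because a 1D LUT operates on each channel independently, it can adjust contrast and per-channel response but not hue cross-talk; that would need a 3D LUT, which is consistent with the post's point that a curves tool can do the same job by eye.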
  22. Was the GH3 in this test using AVCHD .MTS or h264 .MOV?
  23. x264 is an excellent h264 implementation. Pre-built shared .dlls and static binaries are available from here:   http://x264.nl/   Profile support includes 10bit, lossless and 4:4:4 as well as all the usual.   A 'decent' tool for those slumming it with badly designed open source software:   http://www.avidemux.org/admWiki/doku.php?id=tutorial:h.264   If open source 'engineers' had spent all their time creating fancy interfaces rather than great-quality, flexible libraries and tools for others to wrap the simple job of a GUI around, then x264 wouldn't be where it is now. I think they got their priorities right: in the bit of spare time they have, generally unpaid, they've produced an h264 encoder that puts Apple's to shame.
  24. It depends on the native camera files and the NLE's handling which route is 'best', so the 'best' approach is to test what works 'best' with the combination you're working with and satisfy yourself what looks 'best'. As Axel says, no right or wrong way.
  25. I'd suggest trying Media Player Classic. As an aside, VLC configured with GPU acceleration on will not preview Nikon, Canon or GH3 MOV output correctly; it ignores the full-range flag, so all output looks overly contrasty.