Everything posted by see ya

  1. I've just run Shot 1 through AE CS6 on Windows and viewed the output in VLC, QT Player 7.2.2 and in a color-managed player I trust. :-) All three are identical; AE is the odd one out. AE is the culprit as far as I can see. The preview I'd say is right, ie: the redder dress, but probably screwed-up color management with ICC profiles and confusion over sRGB IEC 61966 and rec709 gamma curve encoding to h264. The h264 out of AE is flagged rec709 primaries, transfer & matrix, and luma levels are limited range, all correct. I assume AE is not transferring the color matrix to the h264, or has stuffed up linear raw to gamma rec709. Not sure what ACR bakes into the files by way of colorspace etc. or leaves that to the color management in AE, who knows. The AE window looks the same red in the dress as your ProRes 444 but more orange in the h264, typical of using the wrong color matrix, BT601 vs BT709.
  2. Yes, it brings luma levels outside of 16 - 235 into that zone by squeezing the 8bit levels range pro rata, and then a typical 8bit media player displays it all, but shadows that were black become less black and whites become less white, so detail seems to magically appear. It's better just to do it at 32bit float in a decent NLE than waste time transcoding, if the only reason to transcode is to see more detail in some media player. The levels adjustment must be done in a 32bit mode though, if the adjustments are being made in RGB, to access the under and over brights. Basically, full-range luma, or even 16 - 255, 8bit YCbCr doesn't fit into 8bit RGB without clipping or compression of the luma outside 16 - 235, and therefore, if in RGB, clipping of the color channels. So a levels adjust at 8bit in RGB will just move the clipped, crushed extents, whereas 32bit RGB can hold the whole 8bit YCbCr and slide the overs and unders in and out of the 8bit 16 - 235 zone with no risk of clipping. Also pointless for the GH2, as its levels are 16 - 235 to start with.
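To make the squeeze concrete, here's a minimal Python sketch of the paths described above: the pro-rata 8bit squeeze, a clipped 8bit RGB conversion, and the unclipped float conversion. Function names are my own, purely illustrative, not from any NLE or transcoder:

```python
def squeeze_full_to_limited(y):
    """Pro-rata map of 8bit full-range luma (0-255) into 16-235."""
    return round(16 + y * 219 / 255)

def luma_to_8bit_rgb_gray(y):
    """Limited-range luma to an 8bit RGB gray value: overs/unders clip."""
    v = round((y - 16) * 255 / 219)
    return min(255, max(0, v))

def luma_to_float_rgb_gray(y):
    """Same conversion at float precision: overs/unders survive
    as values below 0.0 or above 1.0, recoverable in the grade."""
    return (y - 16) / 219
```

At 8bit, luma 250 and 255 both land on RGB 255 and the detail is gone; at float they stay distinct values above 1.0, which is exactly the headroom a 32bit NLE can grade back into range.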
  3. [quote name='Shian Storm' timestamp='1345754629' post='16324'] [img]http://www.eoshd.com/comments/uploads/inline/20670/503695e4cb254_ScreenShot20120823at14142PM.png[/img] choose one image and make sure these boxes are checked, this will import all the frames as a clip. [/quote] Anybody who needs an alternative, or possibly better, solution to Adobe Camera Raw (I'm not sure of the output ACR gives) may find this of interest: using dcraw, probably the best raw tool available, it's possible to batch convert (yes, an intermediate step) to linear-space 16bit TIFFs with no white balance or color space restrictions applied by the raw 'development' process. http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm
  4. hi milanesa, those settings look like a way to interpret the source levels, either 16 - 235 (rec709) or 0 - 255 (RGB). Not a gamut control, just levels interpretation? I don't know SCRATCH or how it handles various camera source files (demo'd it a few years ago), but the screen grab above just looks like the usual levels terminology confusion, describing 0 - 255 levels in 8bit YCbCr as RGB levels. If that's the case it would appear a bit unhelpful, because the source isn't the RGB color model regardless of 16 - 235 or 0 - 255 levels; it's the YCbCr color model, and the fact is YCbCr can have 8bit luma levels across 0 - 255, nothing to do with RGB at all. :-) It could be that the YCbCr RGB Full refers to JPEG/JFIF rather than BT709. JPEG/JFIF and BT709 handle chroma encoding and chroma placement differently. I think the problem is, it doesn't matter how we ask SCRATCH or Resolve or any other application to interpret the levels; it's not a given that the decompressing codec will hand that to the application anyway to make a valid choice. Hence your Clip Unwrap process first, I guess. However, you mention FFmpeg also. FFmpeg will not pass full levels through from a source like Canon or Nikon h264, even though they have them, because of the fullrange flag in the MOV container. It squeezes into 16 - 235, so the first step in the process, before getting to SCRATCH or Resolve or whatever, can screw up the source if not careful.
  5. [quote name='AdR' timestamp='1343404757' post='14630']Also, doing a 709 to 601 stretch in 32-bit float will help a little, but the underlying facts are the same. You're still missing 14% of your color data, and the discontinuities degrade your image.[/quote] 32bit is not just about precision. YCC to 8bit RGB is lossy: only about 25% of the color values possible in the YCC can be held / displayed in 8bit RGB. However, 32bit RGB can hold the full YCC color space of the source file. [quote]The 5DtoRGB conversion is quick and easy, and it keeps the entire 0-255 color space. That's what I'll be doing. [/quote] A 5DToRGB transcode is not lossless. YCC to 32bit RGB is. What exactly is a 0-255 color space? [quote name='AdR' timestamp='1343419043' post='14644'] @alexander Um, no. Let's try to keep this friendly and polite, 'k? It's really simple. The problem comes from the Quicktime APIs. From the 5DtoRGB website:[/quote] In the media player that may be so, but not when just decompressing into the NLE, and there's a difference. You mention the API, but the dither option is just that, an option. Dither at playback is generally a good thing, when done right. Dither at decompression is not done. The difference is whether you're viewing in a media player or an NLE. From the bit plane extractions I've done previously, I've not seen any sign of dithering or noise added by default by QT decompression. Have you? Many NLEs don't even use QT to decode. Premiere uses MainConcept in its own MediaCore module, for example. [quote name='dreams2movies' timestamp='1343718044' post='14742'] So for those with FCPX like myself, How can one recover the grey in between white and black for the DR, when Quicktime screw our GH2 footage.. I like the Input and Output tool wit the range, reminds me of editing photos with Aperature or iPhoto.. But FCPX doesn't have this kind of editing of exposure or I haven't found it yet.. [/quote] You really shouldn't be having problems with GH2 source files as they are 16-235.
[quote name='AdR' timestamp='1343884351' post='14816'] That's part of the problem, especially with h.264 footage, because many apps scale the luma when they see the h.264 flag in the meta data.[/quote] The fullrange flag is in the container rather than the h264 stream. From what I've seen, only mp4 and mov containers have a fullrange flag option. [quote]The two methods I've used are: (1) Open the files in QT7 (which I have heard ignores scaling flags), then take a screengrab and open in photoshop. Use Info Window and Levels histogram to determine luma scale.[/quote] But how do you know what is happening in the YCC to RGB conversion? For example, if you convert full-luma YCC to 8bit RGB and the full luma is actually used in the conversion, then almost certainly the channels will be quite heavily clipped. [quote](2) Set up AE CS6 with color space of Adobe RGB(1998), and import .mts file, then use Synthetic Aperture for scopes and histograms. (I believe using Adobe RGB (1998) causes AE to import the .mts file without any luma scaling.) If anyone has better methods, please post them. [/quote] Interpreting or assuming the source is a wider color space such as AdobeRGB will give you more room to accommodate the RGB with reduced channel clipping, but a side effect will be a skew in color hues due to the differing color primaries between sRGB and AdobeRGB. Re: luma scaling not happening, I don't see why the AdobeRGB option would mean no luma scaling.
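As a rough illustration of that channel clipping, here's a sketch using the standard BT709 limited-range YCbCr to RGB equations (coefficients rounded to three decimals; the function name and pixel values are mine, for illustration only):

```python
def ycbcr709_to_rgb(y, cb, cr):
    """BT709 limited-range YCbCr -> RGB, returned unclipped as floats."""
    yv = 1.164 * (y - 16)
    r = yv + 1.793 * (cr - 128)
    g = yv - 0.213 * (cb - 128) - 0.533 * (cr - 128)
    b = yv + 2.112 * (cb - 128)
    return (r, g, b)

# A full-range highlight (luma 250, neutral chroma) overshoots 8bit RGB:
r, g, b = ycbcr709_to_rgb(250, 128, 128)
clipped = tuple(min(255, max(0, round(v))) for v in (r, g, b))
```

Here r, g and b all come out around 272, so an 8bit conversion flattens them to 255 and the highlight detail is clipped; a 32bit conversion simply keeps the out-of-range values for the grade.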
  6. [quote name='EOSHD' timestamp='1343347937' post='14566'] This Rec.709 portion of a 601 space (16-235 instead of the full 0-255 the FS100 shoots in) is incorrectly remapped to 0-255 by Quicktime.[/quote] There are a couple of ways rec709 & 601 get discussed: 1. Color primaries. BT601 color primaries differ from BT709, but unless shooting DV (ie BT601), HD video is BT709 color primaries. This has nothing to do with levels ranges, only color gamut. 2. Color matrix (luma coeffs). The color matrix only comes into play when converting a YCbCr (YCC for short) source file to RGB. Getting the wrong color matrix means a slight contrast change (not as drastic as 16 - 235 <> 0 - 255) and a slight color shift, reds to orange, blues to green and vice versa. Playback in a media player, with or without hardware acceleration, can even screw this up, not only the NLE, leading the viewer to wrong assumptions about one camera's performance vs another. [quote]Therefore apps that use Quicktime at their core like Premiere, trip up.[/quote] Premiere uses its own MediaCore, and with regard to h264 it's a MainConcept decoder, not QT. [quote name='sfrancis928' timestamp='1343356784' post='14586'] I think I figured it out in FCPX. I just used the built-in color corrector. The exposure levels in FCPX aren't numbered 0-255, they're percentages (0-100).[/quote] There's some test files here if you want to see how QT handles full luma: [url="http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip"]http://www.yellowspa...llrangetest.zip[/url] And a discussion here: [url="http://***URL not allowed***/viewtopic.php?f=56&t=30317&&start=340#wrap"]http://www.cinema5d....&start=340#wrap[/url] If you're interested in testing, then use the fullrange-flag-off files to emulate 16 - 255 and 0 - 255 shooting cameras like the Nex5n and FS100 respectively.
[quote name='AdR' timestamp='1343383178' post='14604'] I think it's better to convert using 5DtoRGB.[/quote] Depends on camera source and NLE. [quote]Our goal is to keep as much of the information your camera captured. Let's look at the process. When you bring in an .mts file from your GH2, it has data from 0-255.[/quote] The GH2 is 16-235 luma in YCC, according to the files I've seen. [quote]Your NLE assumes the .mts file should be broadcast safe, so it limits the values to 16-235. The data from 0-15 and 236-255 are G-O-N-E. Not hidden. Discarded by the NLE.[/quote] Depends on the NLE, depends on 8bit or 32bit workflow, depends on the decompressing codec. You're suggesting every NLE clips the source, which is not true. With modern NLEs and a 32bit workflow all the levels can be passed through to the NLE untouched, nothing lost. What you see in an RGB histogram or other RGB scope is the result of whatever YCC to RGB conversion the NLE has done. This has nothing to do with what the camera actually shoots, only with what the NLE gives you as a result. Your 0 - 255 is RGB levels. A YCC to RGB conversion at 8bit maps 16 - 235 luma and 16 - 240 chroma to 0 - 255 RGB. [quote]When you use a color corrector to expand the color space back to 0-255, you're just stretching the 16-235 values across the full range. That makes gaps in the transitions, which makes for banding and abrupt tranistions when you color correct the footage.[/quote] You're getting confused, I think, between YCC and RGB levels. 16 - 235 YCC luma is correct. What exactly is a 709-601 stretch? Both BT601 & BT709 can have luma outside of 16 - 235, and a 709-601 color matrix change has nothing to do with 16 - 235 to 0 - 255 or vice versa. [quote]With 8-bit footage, every bit of quality and latitude counts. 
The 5DtoRGB process results in the full 0-255 data being imported, which gives you more flexibility in color correction.[/quote] 5DToRGB squeezes the full 8bit range into 16 - 235, ie: fewer levels, then re-encodes it, a second generation, and for many camera source + NLE combinations a pointless process imho. [quote]Hard drives are getting cheaper. Buy a bigger one. ;-)[/quote] They are more expensive now, and personally I'd suggest putting the cash towards a better NLE that will handle the source files correctly. :-)
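The 601 vs 709 matrix confusion that keeps coming up in this thread can be shown numerically. A sketch, assuming the common rounded matrix coefficients (the function name and the example pixel values are my own):

```python
def ycbcr_to_rgb(y, cb, cr, matrix="BT709"):
    """Limited-range YCbCr -> unclipped float RGB with a chosen matrix."""
    yv = 1.164 * (y - 16)
    if matrix == "BT709":
        r = yv + 1.793 * (cr - 128)
        g = yv - 0.213 * (cb - 128) - 0.533 * (cr - 128)
        b = yv + 2.112 * (cb - 128)
    else:  # BT601
        r = yv + 1.596 * (cr - 128)
        g = yv - 0.392 * (cb - 128) - 0.813 * (cr - 128)
        b = yv + 2.017 * (cb - 128)
    return (r, g, b)

# The same encoded red decoded with the right and the wrong matrix:
red_709 = ycbcr_to_rgb(80, 90, 220, "BT709")
red_601 = ycbcr_to_rgb(80, 90, 220, "BT601")
```

Decoding BT709-encoded material with a BT601 matrix drops the red channel by roughly 18 levels in this example, shifting the hue and contrast slightly: exactly the subtle red-to-orange drift described above, far less drastic than a 16 - 235 <> 0 - 255 mistake but visible side by side.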
  7. [quote name='EOSHD' timestamp='1343243008' post='14497'] Bringing this topic back up again. Thanks for your contribution guys. Is there a conclusion you can give us? This would be helpful, and reduce need for people to read the whole 5 pages. It needs summarising. This thread went very into the details. What is the consensus on a fix? Is 5DtoRGB transcode the only solution? I'm using 5DToRGB but need to avoid having so much ProRes, the disk space it uses is insane. [/quote] My personal opinion: if you're using a QT-based import method into an NLE with a camera source whose luma levels sit outside 16 - 235, then the levels squeeze happens in QT's decompress, regardless of any 'fullrange' flag setting, so there's no chance of getting full levels into the NLE for 32bit ops / grade / levels adjust. You get 16 - 235. If using Premiere CS5 onwards with its default MainConcept h264 decoder, then as long as the 'fullrange' flag is set off in containers like mp4 & mov (the MTS container I don't think supports the flag anyway), full levels pass straight through. You'll see luma levels over 100% on a YCbCr or luma waveform, as they should be, because 100% on a typical NLE luma waveform is 235, not 255. If using a 32bit NLE or grading suite and a decompressing codec that doesn't screw with the levels, then there are various approaches to massage the levels into the 16 - 235 range to suit the 'look' you're after: an arbitrary pro-rata input-range-to-output-range mapping filter is one option, but the 3-way and other more flexible tools probably give more options on how you move those levels outside of 16 - 235 into the restricted range. If you've shot in camera the way you want it to look, then yes, a 0 - 255 to 16 - 235 mapping would suffice. But if shooting a flat / custom profile nowhere near the final 'look', then perhaps just do the levels adjustment in the grade via a 3-way or some such tool. 
BUT if you are doing this in RGB, as many NLEs do, then 32bit mode really is required, not 8bit. If doing it with a YCbCr (YCC for short) filter, or what many NLEs incorrectly describe as a 'YUV' filter, and the NLE is actually working on the native YCC, rather than the native YCC source -> 8bit RGB in NLE -> YCC in NLE mangling, then 8bit is OK, 32bit preferred. One of the reasons for 32bit mode is not only precision and rounding errors but the fact that the original camera 8bit YCC source is a different color model to RGB and a wider colorspace than 8bit RGB. It's said that only 25% of the color values in the YCC source can be contained in 8bit RGB with sRGB gamut, hence terms like invalid RGB. However, when converting YCC into 32bit RGB, the full 100% of color values generated can be held; not displayed, just held in memory for working on; we still have to restrict that color range on output of the NLE for typical 8bit playback. It's also possible to shift the color primaries to, say, AdobeRGB or a custom color space in order to hold much more of the gamut generated by a typical 8bit YCC to RGB conversion, but that will also shift colors. Not a problem for a flat / desaturated / custom shooting profile source, as the object of the game is to capture as much as possible, not to maintain the exact shade of red in the scene, for example. It may also be possible to create a custom picture profile to represent the custom color space color primaries. If using 8bit mode in any NLE or grading suite then you'll really need 16 - 235 luma going in. Same for final playback in a typical media player: 16 - 235 luma is required for correct representation of your files. Not sure if mac-based Premiere uses QT, but windows-based certainly doesn't; it uses MainConcept.
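The 'arbitrary pro-rata input range to output range' mapping mentioned above is, at float precision, just a linear remap with no clamp. A sketch with naming of my own:

```python
def remap_levels(v, in_low, in_high, out_low, out_high):
    """Linear input->output range mapping at float precision.
    No clamping, so overs and unders survive for later grading."""
    return out_low + (v - in_low) * (out_high - out_low) / (in_high - in_low)

# Map full-range 0-255 luma into the broadcast-safe 16-235 zone:
mapped = [remap_levels(v, 0, 255, 16, 235) for v in (0, 128, 255)]
```

Done at 8bit, the same mapping has to round every result to an integer, permanently merging levels; at 32bit float nothing is lost and the operation is reversible.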
  8. [quote name='Axel' timestamp='1342067085' post='13776'] Notice that it must read "709", not "701". Also, I know only now, thanks to yellow, that I should have chosen 601 and [i]not[/i] full range, nevertheless there are no blatant problems after cc, let alone blocky reds, which would have been fatal in this red, dark scene ... [/quote] Axel, I'm not sure where I suggested that. GH2 would be 709 and not full range. 7D would be 601 and full range. That's if the whole 601/709 thing in 5DToRGB is about the color matrix. I assume this only matters if going to RGB like DPX, or maybe 5DToRGB transfers the matrix from 601 to 709 in transcodes as well, to avoid potential mishandling of HD sources by certain media players oblivious to the color matrix declaration in the header of the file.
  9. [quote name='Thomas Worth' timestamp='1342003046' post='13737']This is a non-issue. I'll explain. There's nothing Premiere is doing that 5DtoRGB isn't. They both take 4:2:0 H.264 and re-create an RGB (or 4:2:2, in the case of ProRes) representation of the original data.[/quote] Except that Premiere CS5 onwards, after decompressing the source files, recreates RGB data in a 32bit levels range based on the native levels range within the camera source, transferring the YCbCr across to 32bit RGB losslessly, allowing YCbCr <> RGB color space operations to be swapped between in the NLE without loss, and holding a fuller color gamut than can be achieved with an 8bit conversion using the 16 - 235 luma range from a transcode. This is not possible with 8bit color space conversions, and it's one of the reasons that camera source files with levels exceeding 16 - 235 have the luma scaled: to reduce the amount of 'invalid' RGB, gamut errors and resultant artifacts that can be created in the color space conversion to RGB from source files with levels outside of the 16 - 235 range. This was mentioned previously on the thread, re: the 10bit comments and the advice to export 10bit DPX image sequences from 5DToRGB at 10MB+ a frame as the solution? So, folders of images, versus 32bit data within the NLE. I do understand that there are reasons to use image sequences for VFX and whatever, but that's specific to the task, not a way to solve the problem of maintaining color and processing precision when transcoding. My understanding, rightly or wrongly, is that 5DToRGB scales the luma to 16 - 235? This is different from Premiere; they don't do the same thing in that instance. But not so with a QT-based NLE like FCP, or even FCPX I believe, which autoscale. [quote]Actually, you should really be referring to the MainConcept decoder, which is responsible for supplying Premiere with decompressed H.264 output. 
At that point, all Premiere is doing is reconstructing 4:2:2 or RGB from the decompressed source.[/quote] And that decompressing codec does just that, decompresses? Or are you aware of it doing the 4:2:0 -> 4:2:2 chroma upsample, and the algo used? Or is that done by Premiere's own code? For QT I find I can only get 4:2:2 out, even feeding it 4:2:0. [quote]Remember that the "original" data is useless without a bunch of additional processing. H.264 video stored as 4:20 (full res luma, 1/4 res red and 1/4 res blue) must, in 100% of cases, be rebuilt by the NLE or transcoding program into something that can be viewed on a monitor. [/quote] However, we must separate 'display' from 'processing'? I work on many occasions YCbCr right through, from deblock, denoise, upsample chroma, grade, deband, luma sharpen and encode, without a single RGB operation. Of course this is limited in scope compared to RGB operations for VFX, motion graphics etc, but going to RGB is not 100% required with regard to processing. Display: do we really need to upsample edges with some interpolation routine in the transcode so it looks better on display? Media players give options for that at playback: bicubic, lanczos etc. Premiere gives options for preview quality; this doesn't have to be 'baked in' at transcode though? Interpolating edges has side effects, halos, ghosting and over-sharpening; is this not better done by a plugin within a 32bit workflow as part of the grade, with other 'improvements', rather than some edge 'fix' at transcode? Sure there are improvements to be made, deblock, denoise, edge interpolation, but surely at 32bit precision in the NLE? Processing: I understand your assertion regarding chroma interpolation from 4:2:0 to 4:2:2 to RGB, but we're talking the difference between a couple of interpolation methods and correct chroma placement, that's all, with negligible differences, perhaps visible at 400% zoom. 
Bearing in mind, does being that particular about chroma interpolation, after first scaling luma and requantising in the transcode, really do no harm? [quote]It's this "rebuilding" process that sets one NLE apart from another. FCP7/QuickTime does a pathetic job rebuilding H.264. Premiere is better. 5DtoRGB, of course, is best.[/quote] :-)
  10. Well, 'harm' may be too strong a description, depending on how 'precious' we feel the source files are. For me it's more about gaining awareness of what is happening, to avoid 'processing' that is unnecessary or unhelpful. By OSX I guess you refer to QT? Premiere doesn't use QT even on mac, but FCPX and FCP do, of course. I'm not aware of how QT handles GH2 source, but for Canon DSLRs it takes a very similar approach to 5DToRGB with regard to levels, that is, it scales them into restricted range. Whenever I've used QT to decompress, it always gives 4:2:2, even from 4:2:0 sources, upsampling chroma; not sure what interpolation it uses for that though, as I try to avoid QT for anything. With Canon DSLR sources, including the 7D and even the prototype 1D C, the MOV container has that fullrange flag metadata set 'on', so many decompressing codecs will scale luma to 16 - 235, as per a 5DToRGB transcode. Regarding improper presentation of original footage, yes, it's just about being aware of why and how, so that when things don't look right we stand a better chance of fixing it.
  11. [quote name='Axel' timestamp='1341859454' post='13636'] I transcoded a GH2 clip using 5D2RGB "as" s.th. 709 Full Range, 601 Full Range and I exported it as the same ProRes from FCP X. I laid the original and the three ProRes versions into the same story line and compared them with the RGB scopes. Now since I color correct everything anyway, I balanced every of the graphs until they looked the same - the graphs as well as the output video clips. To me this seems to be much ado about nothing. I am glad I only downloaded the "lite" version and did not yet pay 39,99 € for the Batch-5D.[/quote] The decision to use valid tools like 5DToRGB can be made for numerous reasons. But with regard to clipping / crushing / getting all the data etc, I feel it's pointless using such a tool on GH2 files, which are 16 - 235 luma anyway, ie: NOT full range, so using 5DToRGB's fullrange option on the GH2 is incorrect. A 16 - 235 luma levels range is what a media player / HDTV / NLE conversion to RGB at 8bit expects and requires, and it's what the GH2 delivers. I haven't tested 5DToRGB on GH2 sources, but I'd assume that if you feed a limited-range 16 - 235 GH2 file through 5DToRGB and tell it that the source is full range, ie: 0 - 255 or 16 - 255 luma levels, it will squeeze the GH2's already limited luma range into even fewer 8bit levels: 16 - 235 assumed to be 0 - 255 results in roughly 30 - 218. The full range option is for sources like Canons and Nikons that have 0 - 255 luma (assuming the NLE ignores the fullrange flag, which generally it doesn't, instead honoring it) and for 16 - 255 luma shooting cameras like the Nex5 / FS100, because the decompressing codec passes 0 - 255 or 16 - 255 through to the NLE, but as NLEs expect 16 - 235 they treat the source as 16 - 235 and crush shadows and highlights when stretching what they think is a 16 - 235 range into 0 - 255 RGB for preview / playback. 
For many NLEs, particularly those working at 32bit precision, that's fine, it's just the preview; look at the original source files in the NLE's waveform and it'll show above 100% IRE, ie: levels greater than 235. Same for the lower end. The data is there, it just needs grading into place, and in a decent NLE that will be at 32bit precision, rather than an 8bit-precision transcode squeezing levels outside of 16 - 235 into that legal range. Pointless, and detrimental in many cases, to transcode solely to appear to be getting all the data; however, for playback reasons transcoding may well be required. With regard to Canon & Nikon files this can be a bit more tricky, as discussed previously. But media players will show the crushed shadows and highlights by default for native 0 - 255 and 16 - 255 source files, so all looks bad. Of course a squeezed source from a transcode will 'look' much better, not crushed, and that's the point about making sure, when encoding to the delivery codec, that a 16 - 255 or 0 - 255 source has been graded into 16 - 235 for proper display of levels by default in a typical non-color-managed media player.
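A quick numeric sketch of the double squeeze described above, assuming the fullrange option does a simple pro-rata 0 - 255 to 16 - 235 mapping (an assumption about 5DToRGB's behavior, not a confirmed implementation detail):

```python
def full_to_limited(y):
    """0-255 assumed full-range luma, squeezed pro rata into 16-235."""
    return round(16 + y * 219 / 255)

# Feed it a GH2-style source that is already limited to 16-235:
black, white = full_to_limited(16), full_to_limited(235)
```

The already-limited 16 - 235 range ends up at roughly 30 - 218: milky blacks, dull whites, and fewer code values than the camera actually recorded.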
  12. http://***URL not allowed***/viewtopic.php?f=29&t=42452 http://forums.adobe.com/message/3105124 http://helpx.adobe.com/premiere-pro/using/exporting-dvd-or-blu-ray.html
  13. py, missed your post too, a result of browsing a long thread with large embedded images on an old-gen iPhone and trying to reach the bottom of the thread rapidly :-) I think you have it right there, and the DPX route, which I'm assuming will be 10bit, should be enough, whereas going to 8bit RGB is insufficient to hold the full YCbCr data in the original source files. I'd hazard a guess that, with regard to interpolation of edges, the lesser refinement in 5DToRGB is perhaps due to a non-RGB transform at 8bit versus a 32bit conversion in the NLE? I've not had a chance to look properly at your images and may be misunderstanding your observations.
  14. Alexander, I'd missed your post, and py's earlier one on Resolve for that matter. From the few Nex samples I've seen I was aware it was not restricted 'broadcast' range like the GH2; same for the FS100, with the additional factor of its shooting modes, black level tweaks etc. But again, it captures beyond the broadcast 16-235 zone of levels, which is really the point: media player and NLE handling of the levels outside of 16-235 and methods of dealing with it. So whether 16-255 or 0-255 is a little academic, but certainly good to know. Thanks.
  15. [quote author=Pyriphlegethon link=topic=726.msg6765#msg6765 date=1341327905] And a quick note on 5dtoRGB "scaling" full range data down on export: Thomas said this is due to the ProRes Codec spec itself being compliant with broadcast range. DPX files can be exported to maintain the full luma range if desired. [/quote] Interesting, the tests I did with 5DToRGB were DNxHD exports and they were luma scaled too.
  16. hi Pyriphlegethon. 1. The GH2 will be BT709 color matrix, BT709 color primaries. If the color matrix is not specifically stated in the stream from the camera encoder, then assume BT709 for HD sources. This is not to say that a media player will not do its own thing and use BT601. The color matrix only matters when a YCbCr to RGB conversion is to be done, for display on a computer monitor or for creating an image snapshot or extraction from the video source. Transcoding can lose a color matrix declaration in the stream though; really only a concern for Canon & Nikon DSLRs, which use BT601 for HD. 2. From the sample files I've seen, the GH2 is restricted-range luma, 16 - 235, from the camera. Yes, sure, it's easy to get 0 - 255 by mishandling the source file, doing uncontrolled color space conversions to and from RGB, but it's 16 - 235 from source I think. This would appear to be the reason, for example, that an FS100 & GH2 clip together in an NLE give a different appearance. The FS100 is full luma, and the suggestion is that a tool like 5DToRGB solves a problem, which it doesn't really, as it just scales full luma into 16 - 235, from the limited tests I've done anyway. 3. If the range is 'scaled', scaled would suggest that somewhere the codec has done fullrange to restricted at decompression, or worse, restricted -> full. Do you mean 'restricted', ie: 16 - 235 from source, when you say scaled? This is where it can get messy depending on codec handling, but generally if a source file has been encoded with full luma it should be treated that way, and vice versa. Best to test codec handling in the NLE. From tests I've done with 5DToRGB, it scales into 16 - 235, so it appears you're getting all the data, which you are, but it's been squeezed into fewer levels than originally encoded and quantized over, ie: 0 - 255 into 16 - 235, and it gives the same appearance as a comparably shot / exposed GH2 clip, which is already 16-235. 
But as mentioned previously, if using a 32bit float NLE then there's really no need to transcode and squeeze levels; just grade in the NLE or apply a levels mapping. All the data will be there outside of the 16 - 235 range, just the preview will appear crushed and clipped, as the 'proper' range for 8bit playback is 16 - 235; at capture, though, we want as much as we can get. 4. Yes, because the GH2 is 16 - 235, and Canon and Nikon DSLR sources, which both use the fullrange flag, get luma scaled at decompression into 16 - 235, so both GH2 and Canon/Nikon comparable shots should look very similar in levels. This is the purpose of the fullrange flag, to get a 'proper' preview in an 8bit media player; with 32bit processing in an NLE there's no need for this. But the flag does avoid the confusion for playback that the FS100 appears to suffer from: because it shoots full luma and doesn't flag it as such, at playback it looks crushed and clipped, and the decompressing codec or media player choice is blamed for screwing it up. Again, 16 - 235 is the 'proper' range for playback, but we want as much as possible at capture. 5. 4:4:4 is not chroma subsampled, 4:4:4:4 is no subsampling + alpha, and 4:2:2 is subsampled chroma. But I think this is more about interpolation methods from the original 4:2:0 source (GH2/Canon/Nikon DSLRs) and where this occurs, and at what bit depth and precision it's done, etc. It's more useful for compositing etc, as the output is almost certainly going to be 4:2:0, and there's no reason this interpolation can't just be done at playback in the media player, depending on resources. 6. I don't think the MTS container has the flag as an option; unfortunately the patched build of MP4Box doesn't recognize MTS as a container, I've recently discovered. :-( If this is for the GH2 then it's probably not important, as the source is 16 - 235.
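For point 5, the subsampling schemes differ simply in how many chroma samples are stored per frame. A small sketch of the bookkeeping (1080p numbers; the function name is mine, for illustration):

```python
def samples_per_frame(width, height, scheme):
    """Total luma + chroma samples per frame for common subsampling."""
    luma = width * height
    divisor = {"4:4:4": 1, "4:2:2": 2, "4:2:0": 4}[scheme]
    chroma = 2 * (luma // divisor)  # Cb and Cr planes combined
    return luma + chroma

frame_420 = samples_per_frame(1920, 1080, "4:2:0")
frame_422 = samples_per_frame(1920, 1080, "4:2:2")
frame_444 = samples_per_frame(1920, 1080, "4:4:4")
```

So 4:2:0 carries half the data of 4:4:4 (1.5 vs 3 samples per pixel); upsampling 4:2:0 to 4:2:2 or 4:4:4 only interpolates the missing chroma samples, it can't restore them.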
  17. [quote author=BurnetRhoades link=topic=920.msg6704#msg6704 date=1341158480] That's awesome, thanks for posting the links to the Boyle podcasts! The thing I really like about a lot of these techniques is the use of the image and information in the image to enhance itself rather than deforming its values based on a static "handle" or "dial", affecting the image in a broad way.  It seems counter-intuitive to me to follow the practically universal advice that electronic sharpening should be turned down or off in-camera, down or off in-monitor/projection but then okay to apply these same techniques with their same limitations and artifacts through slower software applications.  Image-based techniques actually take more horsepower but the proof is in the pudding...or rather, absence of easily spotted artifacts. [/quote] No problem, hope you find the podcasts useful. Regarding using the image itself for information: as this is video, with motion involved, rather than affecting a static image in Photoshop we're working on many image frames per second, so decent techniques require motion analysis, jumping forward and back through what can be hundreds of frames, analysing the image data, building automated masks and then denoising, sharpening etc through those masks. The Lowry process is much about analysing each frame and getting a consistent appearance with regard to motion, noise levels etc between what could be numerous camera sources and shooting conditions.
  18. These are great techniques; I too work this way, fwiw, especially avoiding typical sharpening methods, aka USM, and using local contrast enhancement (a kind of tonemapping) instead. Tonemapping proper is really an HDR to LDR process, whereas LCE is about refining edges and tightening gradients to increase acutance, perceived sharpness. And I only work on the native YCbCr, ie: luma and chroma separately, the whole route. What inspired me years ago was listening to Peter Doyle talking about grading on Harry Potter. The podcasts are here: http://www.fxguide.com/fxpodcasts/color_in_harry_potter/ and http://www.fxguide.com/fxpodcasts/the-art-of-grading-harry-potter-peter-doyle/ for anyone interested; the first of the two, from 2009, I found most inspiring, talking about tonemapping, luma sharpening etc. And his quip about '1D look-up table jockeys' still cracks me up today, that is, the general assumption that 'color grading' means Lift Gamma Gain, a 3-wheel color corrector and some preset 'Look' :-) They have their uses, and I'm not knocking them, but listening to Peter Doyle's techniques opens up the thinking. Also the great work done by Lowry Digital, now Reliance MediaWorks, on movies like Zodiac http://www.theasc.com/ac_magazine/April2007/Zodiac/page1.php & BB http://www.moviemaker.com/producing/article/lowry_digital_the_curious_case_of_benjamin_button_brad_pitt_20090203/ processing the Thomson Viper source files. Not owning AE, and for anyone interested in a free route: I use these techniques a lot, on DSLR h264 mainly, but also mpeg2 HDV, and even uprezzing / deinterlacing / luma chroma processing DV, via Avisynth and the following plugins: MCTDmod [Motion Compensated Temporal Denoise] :- http://forum.doom9.org/showthread.php?t=139766 MCTDmod combines various tools in one function / plugin, in no particular order. Deblock: a compression macroblock deblocker that reinterpolates pixel values within the macroblocks to smash them. 
http://avisynth.org/mediawiki/Deblock_QED (alternative methods: http://forum.doom9.org/showthread.php?t=164800 and https://sites.google.com/site/jconklin754smoothd2/).

Denoising: various methods with control over strength and areas; the choice to denoise luma only, or luma + chroma in separate passes; denoising through masks created by the integral motion-analysis plugin MVTools2 http://avisynth.org.ru/mvtools/mvtools2.html plus MaskTools http://avisynth.org/mediawiki/MaskTools2.

Sharpening: various methods, again motion compensated, temporal and/or spatial via MaskTools-generated masks; sharpen only edges if required, luma only or luma + chroma, USM http://avisynth.org/mediawiki/LSFmod.

Also: reduction of star and bright-point 'tings'; anti-aliasing edges, edge clean, dehalo and deringing; temporal stabilizing of flat areas within the frame to avoid shimmer and nervousness.

Debanding: enhance flat areas to remove / reduce banding and blocking http://avisynth.org/mediawiki/GradFun2DBmod.

Grain: adding controlled grain to bright, midtone and dark areas differently depending on the scene, controlling size and texture. This is a more intelligent method than just overlaying a grain scan.

Dither tools to work on a 16bit resampled version of the 8bit source: http://forum.doom9.org/showthread.php?p=1386559#post1386559

For LCE, Local Contrast Enhancement, aka tonemapping, an alternative to Unsharp Mask: http://forum.doom9.org/showthread.php?t=161986

Other sharpening methods based on brights, midtones, darks: http://forum.doom9.org/showthread.php?t=165187

SmoothAdjust: http://forum.doom9.org/showthread.php?t=154971 works on a 16bit version of the 8bit source created via Dither tools, and allows adjustments with interpolation of 'missing' data to keep smooth gradients at 16bit, with encoding options to 16bit image sequences, 10bit lossless h264, or back to 8bit codecs (including lossless) via numerous dither / noise / grain methods. It includes a 32-point 'S' curve for Cinestyle. 
Not suggesting for one minute that this is as user friendly as some sliders and plugins in AE, but Avisynth + AVSPmod + plugins provides a powerful, free, open-source option. Yes, it's more manual, although AVSPmod does offer sliders for plugins :-) and http://forum.doom9.org has a wealth of users willing to help. Once a script is created it can be used in batch, preprocessing many clips in an automated way rather than manually: as a preprocess such as deblock -> denoise -> 8bit to 10 or 16bit gradients -> deband -> encode to intermediate, and/or as a post process after editing / color correction, including resizing to target delivery -> sharpening -> add grain/noise -> levels adjustment to 16 - 235 -> encode to delivery codec.
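For anyone curious what LCE boils down to underneath these plugins, here's a minimal 1D numpy sketch of the core idea (my own illustration, not LSFmod's or any plugin's actual algorithm): an unsharp mask with a very large radius, applied to luma only, boosts local contrast and tightens gradients rather than ringing individual edges the way small-radius USM does.

```python
# Minimal 1D sketch of local contrast enhancement (LCE): subtract a
# large-radius blur of the luma and push the difference back in.
# Illustration only, not any plugin's actual code.
import numpy as np

def box_blur_1d(luma, radius):
    # Box blur stands in for the large-radius gaussian of real LCE.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(luma.astype(np.float64), radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

def lce(luma, radius=50, amount=0.5):
    low = box_blur_1d(luma, radius)      # large-radius "base" layer
    return luma + amount * (luma - low)  # re-inject local detail

flat = np.full(300, 100.0)
step = np.concatenate([np.full(150, 90.0), np.full(150, 110.0)])
print(lce(flat).max() - lce(flat).min())  # 0.0 -> flat areas untouched
print(step.max(), lce(step).max())        # contrast around the step increases
```

The same subtraction with a tiny radius is classic USM edge sharpening; pushing the radius up is what turns it into the tonemapping-like local contrast tool described above.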
  19. Well, apart from the usual f--k up with transcoding Canon MOVs, the other thing noticeable is that the 5D MK III is more contrasty, the skin tones more saturated, with a 'softer' look especially to the hair. But hasn't it already been stated that the in-camera sharpening has been reduced in the MK III, assumed to make it better to grade? So is it any real surprise, really? For those not wanting to sharpen in post, just turn up the in-camera sharpening. The window detail to the right looks more detailed in the 1D X, but because the 5D MK III is more contrasty far more of the window area is clipped, losing detail, so it's not a like-for-like comparison. Pulling a selection on the 5D III vs the 1D X image and looking at the histogram shows levels are stretched in the 5D MK III compared to the 1D X, so there's inconsistency there. Very subjective, inconsistency between source files, and with all these tests the original files are never provided.
  20. Sorry Alexander, I've seen your post, just too much on at the moment to look at it.
  21. You'll notice in your non-32bit-filter screenshot that it's only the RGB representation of your YCbCr source files that is crushed. There is no clipping, btw; the YCbCr luma waveform is not affected. The reason your RGB scopes show crushed peaks is the difference between YCbCr and RGB levels, i.e. 16 YCbCr is equal to 0 RGB and 235 YCbCr is equal to 255 RGB; that's the correct mapping and conversion. As many cameras capture the full 0 - 255 YCbCr luma range, sources won't match in contrast and brightness, so either crushing or scaling has to happen. The decision then is: do I want the NLE codec to do that arbitrarily and pro rata at decompression, either stretch-and-crush or scale luma into 16-235 so the RGB scopes look right, aka Vegas? Or do I want full control, with my sources left well alone, grading where I want in order to get luma into 16-235 for correct RGB preview and playback on all the devices, including the web, that expect a 16-235 luma signal? CS6 Premiere offers adjustment layers for applying across all clips? If cameras didn't shoot full luma and were strictly so-called broadcast legal, the issue wouldn't exist. But not all output is destined to be broadcast legal, and a wider gamut is possible with 32bit processing over 8bit.
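A quick numpy sketch of the mapping being described (the standard limited-range luma conversion, written by me as an illustration, not any NLE's internal code): at 8 bits the unders and overs have nowhere to go and get clipped, while in float they survive as values below 0.0 / above 1.0 that a later levels adjustment can bring back into range.

```python
# Why full-range 8-bit luma clips in 8-bit RGB but survives a
# 32-bit float pipeline. Standard mapping: Y 16..235 -> RGB 0..255.
import numpy as np

def ycbcr_luma_to_rgb_8bit(y):
    rgb = (y.astype(np.float64) - 16.0) * 255.0 / 219.0
    return np.clip(np.round(rgb), 0, 255).astype(np.uint8)  # 8-bit: must clip

def ycbcr_luma_to_rgb_float(y):
    # Same mapping in float: out-of-range values are kept, not clipped.
    return (y.astype(np.float64) - 16.0) / 219.0

y = np.array([0, 16, 235, 255])      # full-range source luma samples
print(ycbcr_luma_to_rgb_8bit(y))     # [0 0 255 255] -> unders/overs gone
print(ycbcr_luma_to_rgb_float(y))    # ~[-0.073  0.  1.  1.091] -> retained
```

This is the whole argument for grading the levels yourself in a 32bit float project instead of letting the decoder crush or stretch at import.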
  22. Hi Alexander, remuxing, or as a last resort transcoding, was specific to MTS file support in certain NLEs such as FCPX. With regard to Canon & Nikon it's to remove the full range flag. Further up the thread I think I gave a link to Canon MOVs with the flag set on and off; it would be interesting to see how CS 5.5 & 6 handle them. Why 32bit vs 8bit? This is due to the RGB conversion all NLEs do internally for color processing. The video off our cameras is in YCbCr color space, and at 8bit not all the information in the original file can be converted to RGB: the conversion creates negative RGB values and values over 1.0 in the unity cube, i.e. invalid RGB, so artifacts due to gamut errors can become visible, particularly with close-to-clipping exposures and saturated colors. Working at 32bit float allows the whole YCbCr color space to be held in RGB without loss. Vegas does this internally by default, I think, with recent versions.
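To illustrate those negative / over-range RGB values, here's a rough Python sketch using the standard BT.709 limited-range conversion coefficients (an illustration of the math only, not any NLE's actual implementation): perfectly legal YCbCr triplets land outside 0..255 RGB, and an 8bit pipeline has no choice but to clip them.

```python
# BT.709 limited-range YCbCr -> RGB, kept in float so the out-of-range
# results stay visible instead of being clipped as in an 8-bit pipeline.
def ycbcr709_to_rgb(y, cb, cr):
    yl = 1.1643 * (y - 16)
    r = yl + 1.7927 * (cr - 128)
    g = yl - 0.2132 * (cb - 128) - 0.5329 * (cr - 128)
    b = yl + 2.1124 * (cb - 128)
    return r, g, b

# Dark, heavily saturated pixel: legal YCbCr, but G goes negative.
r, g, b = ycbcr709_to_rgb(16, 128, 240)
print(r, g, b)        # ~200.8, ~-59.7, 0.0 -> 8-bit RGB would clip G

# Bright saturated pixel: B lands far above 255.
r2, g2, b2 = ycbcr709_to_rgb(235, 240, 128)
print(r2, g2, b2)     # B ~491 -> clipped hard at 8 bits
```

In a 32bit float project those intermediate values survive the round trip, which is exactly the "whole YCbCr color space held in RGB without loss" point above.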
  23. The bottom line here is the difference between APS-C vs FF and equivalent field of view: if you don't go for an EF-S mount you've got to go even wider in an EF mount, and that has implications for quality and price. Something to weigh up is which lens on which mount, at your price point, performs best at the equivalent fields of view. Also consider the difference between constant-aperture lenses and cheaper variable-aperture ones. Personally, for wide angle I'd go for an EF-S mount and then, when you upgrade to FF, keep your APS-C camera as a B cam and continue to use the WA. Or if you were to sell it, the lens should hold its value better, as everyone on a budget would be in the same predicament.
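The field-of-view equivalence being weighed up here is just the crop factor, roughly 1.6x for Canon APS-C; a one-liner makes the "go even wider" point concrete. The 1.6 figure and the example focal lengths are my own illustration, not from the post.

```python
CANON_APSC_CROP = 1.6   # approximate Canon APS-C crop factor

def apsc_focal_for_ff_fov(ff_focal_mm: float) -> float:
    """Focal length on APS-C giving roughly the same field of view
    as ff_focal_mm gives on full frame."""
    return ff_focal_mm / CANON_APSC_CROP

# A full-frame 16mm wide angle needs roughly a 10mm lens on APS-C:
print(apsc_focal_for_ff_fov(16))
```

Ultra-wide APS-C glass in that range is exactly where EF-S options tend to be cheaper and better corrected than an equivalently wide EF lens.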
  24. [quote author=Axel link=topic=786.msg5698#msg5698 date=1338187368] So there [i]is[/i] this problem, and it's not (only) caused by QT, the GPU or a badly calibrated monitor. If you detect during filming, you can change the lighting or the framing and avoid it.[/quote] Axel, I was not for one minute suggesting that the banding seen was caused by the above and not the camera. You had already said it has a well-known problem, and I'm no GH2 owner, so I'd already deferred to your more experienced opinion. The reason for posting was to suggest that there are additional factors that may or may not play a part in banding at playback: accentuating it or inducing it, 'it' being a 'weakness' in the source video waiting for a bit of 'mishandling' by the NLE to make things worse. [quote]Sounds very good. I didn't know about the possibility to "deband" in post. So there is another reason to use Bootcamp, not just the Ptool. Do you recommend Avanti as GUI?[/quote] I use AVSPmod [url=http://forum.doom9.org/showthread.php?t=153248]http://forum.doom9.org/showthread.php?t=153248[/url] as a GUI, and used to use VirtualDub. With this tool, http://forum.doom9.org/showthread.php?p=1386559#post1386559 and/or this http://forum.doom9.org/showthread.php?t=154971 and this http://forum.doom9.org/showthread.php?p=1559222 sometimes using the GrainFactory3 16bit mod [15th March in this thread http://blog.niiyan.net/post/19344965089/avisynth-news-on-mar-15-2012] for 'intelligent' grain addition instead of noise.
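For anyone wondering what debanding actually does underneath, here's a tiny numpy sketch of the core idea behind dither-based debanding (my illustration, not GradFun2DBmod's actual algorithm): quantizing a shallow gradient straight to integer levels leaves a handful of hard band edges, while adding sub-LSB noise before quantizing scatters those transitions so the eye averages them into a smooth ramp.

```python
# Banding vs dithering on a shallow gradient quantized to 8-bit levels.
import numpy as np

rng = np.random.default_rng(0)
grad = np.linspace(100.0, 104.0, 10000)   # shallow "16-bit domain" gradient

hard = np.round(grad).astype(np.uint8)    # straight quantize -> visible bands
dithered = np.round(grad + rng.uniform(-0.5, 0.5, grad.size)).astype(np.uint8)

hard_changes = np.count_nonzero(np.diff(hard.astype(int)))
dith_changes = np.count_nonzero(np.diff(dithered.astype(int)))
print(hard_changes, dith_changes)  # 4 hard band edges vs thousands of scattered transitions
```

The real plugins are smarter about it, masking edges so only flat areas get treated and shaping the noise to taste, but the sub-LSB dither before requantizing is the mechanism.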