see ya

Everything posted by see ya

  1. Is there any chance of just providing the native camera files for this?
  2. Practice, Patience, Perseverance, follow focus. :-)
  3. [quote name='Julian' timestamp='1351028145' post='20192'] And the YouTube compression doesn't help. Here's a screengrab (view at full resolution): [/quote] Julian, hope you don't mind me saying, but that's a terrible screen grab, something's not right with it. How did you do it? Check the histogram for the image in Photoshop or something and see how combed it is.
  4. [quote name='Julian' timestamp='1351028145' post='20192'] @yellow: your clip shows black and white here. Isn't it more likely that it is because of the way YouTube processes it? They re compress the files... [/quote] Well I see different results in different browsers: one browser shows just black and white bars, another something a bit closer to the original levels. But YT has probably done what happens to all Canon MOV's, and now GH3 MOV's, when they are transcoded or imported into an NLE: the full range 8bit luma in the original camera files is squeezed into 16 - 235, which is how the GH3 MTS files are encoded.
  5. [quote name='galenb' timestamp='1351017438' post='20181'] I'm on a Mac running Mountain Lion and I see everything fine in QuicktimeX, Quicktime 7 and Premiere Pro 6. I just tested it out and everything looks fine. Apparently they fixed the gamma bug in Mountain Lion? [/quote] Good to know, one less problem with gamma shift hopefully. For anyone interested in comparing shadow / highlight levels handling in their favorite NLE between an 8bit and 32bit project, try importing the 16-235_flagged off.mp4 file in the zip download into an 8bit and a 32bit project. Then adjust levels 0 - 255 to 16 - 235 to try and get the text to show. The flagged 'off' file is the exact same full range h264 stream as the flagged 'on' file, just with the metadata changed so it doesn't signal the decompressing codec to squeeze luma. Black and white horizontal bars will be seen. In an 8bit project, applying a levels adjust to 16 - 235 will probably still fail to show the text, but in a 32bit project the levels adjust should bring the text into view, illustrating that no data is lost. It all depends on where a particular NLE does the YCC to RGB conversion as to whether crushed blacks and compressed highlights are lost or can be recovered in a grade.
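The 8bit versus 32bit recovery described in the post above can be sketched numerically. This is a minimal illustration of the arithmetic only; the mappings are the standard limited/full range ones and the sample luma value of 8 is my own assumption, not taken from the test files:

```python
def expand_limited_to_full(y):
    """Default YCC -> RGB luma expansion a player applies: 16 -> 0, 235 -> 255."""
    return (y - 16) * 255.0 / 219.0

def levels_255_to_235(v):
    """A levels adjust mapping 0-255 output back into the 16-235 range."""
    return 16 + v * 219.0 / 255.0

y = 8  # shadow detail below 16 in a full range encoded file

# 8bit pipeline: the negative intermediate is clipped, detail gone for good
clipped = min(255, max(0, round(expand_limited_to_full(y))))
after_levels_8bit = levels_255_to_235(clipped)   # stuck at 16, text lost

# 32bit float pipeline: the negative value is held in memory, and the
# levels adjust slides it back into the visible range
after_levels_float = levels_255_to_235(expand_limited_to_full(y))  # ~8, detail back
```

The point is exactly the one in the post: the same levels filter recovers the detail in float but not in 8bit, because the 8bit path already threw the value away at the clip.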
  6. [quote name='Julian' timestamp='1351011180' post='20177'] Media Player Classic shows black and white, I can just make out the numbers in the white part very faintly. Vegas and Resolve handle it fine, it shows exactly the same as the PNG's. I noticed the video's have a lot more shadow detail when watching them in Vegas, good to know I can't judge Media Player.[/quote] You can if you tweak Media Player Classic's color management options. Same for VLC, which by default doesn't appear to show the text. [quote]What happens if I upload a mov out of the camera straight away to Youtube? Will it be the crushed version? [/quote] Will depend on the browser I'd imagine, sorry it's not a very long clip: [media]http://youtu.be/Ttn4t8Up9XU[/media] I never trust viewing in a browser personally and always prefer to download native files and view in a tested media player.
  7. You can test whether Vegas and Media Player Classic handle the GH3 / Canon MOV's correctly with the test files in this zip: [url="http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip"]http://www.yellowspa...llrangetest.zip[/url] If you import the 16-235_flagon.mp4 into Vegas and/or Media Player Classic, then as long as you see the 16 & 235 text rather than just white and black horizontal bars, the GH3 / Canon MOV's are getting handled correctly and contrast should be correct: shadows will appear less crushed and highlights less blown, depending on decent exposure at the time of shooting. I've created MOV's and mp4's which both contain identical h264 streams, but found the MOV's can crash some apps so created mp4's as well. Confusingly, the 16 - 235 in the file names relates to the text, not the luma levels within the files; they are encoded full luma range. The 16 text should be RGB value 16 and the 235 text should be RGB value 235 when checked with a pipette / color sampling tool.
  8. No doubt this is something you're already aware of with the GH3: the GH3 MOV files are h264/AVC encoded full range luma and flagged so, just like Canon MOV's, whereas the GH3 MTS files are AVCHD and encoded limited range luma. So when considering blown highlights in the MOV's from an NLE or media player preview, you may not actually be seeing what was shot in camera if Vegas or whatever other media player ignores the full range flag set on in the MOV.
  9. I'm not on a mac and unfamiliar with iMovie, but the only thing I can think of is that a different codec is assigned to handle the different containers, for example Apple QT for a MOV container and maybe MainConcept for the MTS, and that the performance of those may vary. Again I'm not familiar with mac; perhaps the Quickwrap people can assist if no one here has a clue.
  10. [quote name='galenb' timestamp='1350771074' post='20053'] Well, it's obviously pretty subjective. I first saw this latitude trick on youtube done by a guy named Drew. [/quote] So how does this differ from using 5DToRGB and picking full range for the GH2, which it isn't, and therefore stretching levels, which in turn alters gamma and brightens the image? Rhetorical question I think, as it's exactly the same. All he's doing is compensating for the initial interpretation by the NLE going YCC to RGB for the RGB preview, the usual colloquialisms, crushed blacks etc, due to bad handling by the NLE or media player first off. Notice he's using a 32bit project so he can pull the levels around and bring detail up without it getting clipped and lost. I'd be interested to see if he gets the same results at 8 or 16bit; I've asked him and await his reply. And I'd also suggest that this has nothing to do with increased DR of a GH2, it's just the same as any player mishandling the video files, and Canon MOV's are no different, they can also be mishandled by an NLE or media player to appear crushed. Again here's a test file, [url="http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip"]http://www.yellowspa...lrangetest.zip[/url] to show a simple mishandling of levels going to RGB. Stick the unflagged file in a 32bit project like he did and see it shows black and white bars, which is the same as his blown highlights to the water and supposedly underexposed trees. Adjust levels and miraculously see detail appear in the shape of the 16 & 235 text. Use a pipette tool and sample the 16 grey (16 RGB) and the 235 grey (235 RGB). Honestly don't think there's any more to it than that. If using QT for this on a mac your mileage may vary.
  11. Rewrapping is just remultiplexing; there's no real processing going on. It just takes the video and audio streams stored within the MTS container, takes them out and puts them in a MOV container, plus writes a bit of metadata about the file, like fps, matrix, bitrate etc, so you should be able to remux into another container without loss. Perhaps the stuttering is due to a different codec handling the MTS compared to the MOV's?
  12. The extra detail and brightness 'gained' is due to a simple reason: the mapping your NLE or media player does between the YCbCr (YCC for short) color model GH2 video and the RGB preview and color processing done by the NLE / media player. The 'correct' way is to take the 8bit 16 - 235 luma range and 16 - 240 chroma range and convert to 0 - 255 RGB. So 16 YCC (black) and 235 YCC (white) are mapped to 0 RGB (black) and 255 RGB (white). YCC video has a luma channel (kind of a greyscale) and two chroma channels, Cb & Cr, chroma blue and red. They are all kept separate and combined using a 'standard' calculation to give color, saturation and contrast, and on computers that's an RGB preview. RGB color space on the other hand embeds the brightness value into every pixel value, not kept separate in a luma channel. So YCC gives us flexibility in how we combine luma and chroma to create RGB, but the conversion needs to be done to recreate the RGB data the camera started out with before encoding to AVCHD. Checking the histogram, which is an RGB tool, can help establish correct conversion: where visually it might look good, if combing or gaps are shown in the histogram it illustrates a bad conversion, which becomes evident when trying to grade, certainly at lower bit depth. A luma waveform can highlight problems too. Saying something 'looks' good also depends on the calibration of a monitor; it might look great on one person's and bad on another's. Histograms can lie, but they are a better illustration of the state of an image. The key point though is the weighted calculation done to generate the RGB values that you see, and those are the values on which you judge the quality of the image and therefore the performance of the camera. Important is the fact that a YCC color space can generate a wider range of values than can be held or displayed by '8bit' RGB, so 32bit float RGB processing is offered to negate this. 
But what you perceive to be the 'quality' of the camera file depends on how the NLE interprets the YCC and how the RGB values are calculated and stored in the memory of the computer, as well as an interpolation of YCC values into RGB pixels with regard to edges, stepping etc. The algorithm your NLE uses in the conversion will determine the perceived smoothness of edges, i.e. does it do nearest neighbour, bilinear, bicubic etc. 5DToRGB offers a custom algorithm I believe. But to concentrate on levels handling... It's important to understand that the RGB display you see is not necessarily the extent of what is actually in the YCC, just how the NLE is previewing it by default. We need to separate what is displayed from what is really in the file; a simple levels filter can make detail appear, though this really needs to be done at 32bit processing as clipping may well occur otherwise, as mentioned above. Our monitors are 8bit RGB displays; they can't display 32bit as 32bit or a wider dynamic range, but that's not to say that the RGB values calculated in the YCC to RGB conversion by the NLE or media player don't contain values greater than can be displayed at 8bit, and that includes negative RGB values that would have been clipped in an 8bit YCC to RGB conversion. Being able to store and manipulate negative values helps black levels / shadows, and whites greater than a value of 1 can be held and manipulated above the 8bit clipping level white. 0 - 255 is described as 0 - 1 in 32bit terms. So back to YCC to RGB conversion. Your GH2 captures luma in the 16 - 235 range; it's not full range, it's limited, but it's the 'correct' range for YCC to RGB conversion: 16 - 235 mapped to 0 - 255 RGB based on the standard calculation. This is all in 8bit terms, with 8bit considerations like clipping values generated that are negative or greater than 1. What 5DToRGB offers is for you to say 'nah, I don't want to use the standard YCC to RGB mapping based on 16 - 235 luma. 
I want you to calculate RGB values based on the assumption that luma levels in my files are full range 0 - 255.' So doing that means that instead of 16 YCC being treated as 0 RGB (black), it's treated as 16 RGB (grey), and 235 YCC is treated as 235 RGB rather than 255. The result is that the levels of the original file get stretched out, the image looks washed out or not so contrasty, and you can see more detail as a result. That's all 8bit world, and if your NLE is 8bit then you may have to resort to that sort of workflow. 10bit and 16bit just provide finer gradients and precision; they do not however provide the ability to store negative RGB and RGB values over 1. A 32bit float workspace is required for that; 32bit is not all about compositing and higher precision. Apps such as Premiere CS5 and AE 7 onwards with a 32bit float workspace work differently to 8bit and 16bit. 16bit just spreads 8bit over a wider range of levels pro rata, so 8bit black level 0 hits the bottom of the 16bit range and 255 hits the top of the 16bit range. Whereas Adobe's 32bit float workspace centres the 8bit range within the wider levels range so you have room to go negative as well, blacker than black, so 8bit black doesn't hit 32bit 0 black. The importance of this is that where your shadows would have been clipped to black and detail lost or hidden deep, 32bit allows the values to be generated and held onto. Our 8bit display still can't display them but they are there safe in memory, and the same goes for brighter than 8bit white. We can be reassured that we can shift levels about freely until what we see in our 8bit preview is what we want. Those details you miraculously see appear by magic with 5DToRGB are really just a remapping of levels YCC to RGB. In 32bit, the default GH2 import levels and detail will appear and disappear depending on your grading, but they are not lost; they slide in and out of the 8bit preview window into the 32bit world, no need to transcode. 
This makes the 5DToRGB process pointless with regard to levels handling and gamma when you have a 32bit float workspace; 16bit doesn't offer this. Just import GH2 source into 32bit and grade. You can see whether your NLE handles this by importing the fullrangetest files in one of my posts above. Try grading the unflagged file: it's full range luma, so initially the display will show black and white horizontal bars. This is the default mapping for the 8bit RGB preview I mentioned above; relate that to your default contrasty, crushed blacks, lost detail preview in the NLE. Put a levels filter on it or grade and pull the luma levels up and down. If you see the 16 & 235 text appear, your NLE has not clipped the values converting from YCC to RGB, and you'll be safe in the knowledge you haven't been shafted by Apple and lost detail, and that 5DToRGB isn't doing magic. If you don't see any 16 & 235 text and conclude that your NLE is doing an 8bit conversion going YCC to RGB, then options like 5DToRGB transcoding etc may be the only options. Not sure why anyone would want to transcode to 4444; the last '4' is for the alpha channel. The conversion from 4:2:0 YCC, i.e. subsampled chroma, to 444 ProRes is a big interpolation process, creating 'fake' chroma values by interpolating the half size chroma captured in camera. It's possible that some dithering or noise is added to help with that, so 444 is manufactured from very little. Again it's interpolation via an algorithm like bilinear or bicubic, smoothing chroma a bit, but it does nothing to levels and gamma, just manufactures a bit of extra color. 444 is similar to RGB, and the process of generating RGB in the NLE for preview on our RGB monitors is done very similarly: as soon as you import the YCC sources into the NLE, preferably at higher precision than 8bit, chroma is interpolated and mixed with the non subsampled luma and RGB values are created. The higher the bit depth the better the gradients and edges created. 
I don't think transcoding to 444 or even 4444 for import into a 32bit NLE or grading package is worthwhile. All this goes back to a simple workflow: suggest using a 32bit enabled NLE / grading tool and a slight gentle denoise to handle the interpolation of values at higher bit depth, rather than interpolating to 444 using algorithms that can over sharpen edges and create black, white or colored halos and fringes at edges, depending on pixel values and lower bit depth, if overdone or pushed in the grade. Gentle denoise will give other benefits too, obviously. But suggest doing your own tests; whatever suits an individual is the way to go of course.
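The 'standard calculation' referred to in the post above is the Rec.709 YCbCr to RGB matrix. A minimal unclipped sketch of it (32bit-float-style, using the textbook BT.709 coefficients; any real NLE may round, dither or clamp differently):

```python
def ycbcr709_to_rgb(y, cb, cr):
    """Limited range BT.709 YCbCr (luma 16-235, chroma 16-240) to full range
    RGB, left as unclipped floats the way a 32bit project would hold them."""
    yf = (y - 16) / 219.0       # luma 16-235 -> 0..1
    cbf = (cb - 128) / 224.0    # chroma centred on 128 -> -0.5..0.5
    crf = (cr - 128) / 224.0
    r = yf + 1.5748 * crf
    g = yf - 0.1873 * cbf - 0.4681 * crf
    b = yf + 1.8556 * cbf
    return (r * 255, g * 255, b * 255)

# YCC 16 (black) and 235 (white) map to RGB 0 and 255.
# A luma of 8, below legal black, comes out as negative RGB: clipped and
# lost in an 8bit project, but preserved as a negative float in 32bit.
```

This is where the crushed-blacks argument lives: the matrix happily produces values below 0 and above 255, and only the project's bit depth decides whether they survive.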
  13. Premiere CS5 onwards is 32bit processing by default. AE 7 onwards I think offers 32bit and linear blending / workspace. BT601 or 709 differences in 5DToRGB are only applicable in the conversion to RGB, and then the only difference seen is a slight contrast change and pinks sliding to orange, blues sliding towards green a bit, but unless you have a correctly converted video or image as reference you'd probably never know the difference. All the 4:4:4 and gamma nonsense is pointless; really, just import into a higher bit depth workspace, denoise a bit and go from there. 32bit Premiere is going to do a limited range conversion of the GH2 source to RGB straight away, same for AE CS5 onwards. Absolutely no point adding a 16-235 filter to GH2 as that's what it is already; the filter was for 16-255 sources like the FS100 and NEX, not the GH2. All the 4:4:4 stuff is just interpolation of the 4:2:0 chroma, which is done as soon as you import into CS5 AE or Premiere. Using AE or Premiere prior to CS5 is different handling and a screwed up conversion to RGB if not careful. Results will differ between those two versions; what works in CS5 is not necessarily going to look the same in earlier versions.
  14. [quote name='QuickHitRecord' timestamp='1350428666' post='19828'] Good points. What is a better way to go about posting accurate screen grabs? [/quote] Not sure what's available for mac but VLC should do the job, or Media Player Classic. Here's a test file for making sure VLC is set up for levels handling: [url="http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip"]http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip[/url] More important for Canon and Nikon h264 sources really. Banding can occur due to poor encoding, like the GH2's, and/or wrong levels handling in playback making it look worse. With Canon sources banding should be far less likely. For in camera banding no combination of settings in 5DToRGB is going to help, but a very gentle denoise will. Does your NLE have a denoiser, preferably temporal and working off YCC data not RGB? Converting to DPX or other image formats isn't going to solve the problem; working in a 16bit or preferably 32bit project will to an extent. Denoise plus a higher precision workspace is a way forward. Take care with denoising though, as it can make banding appear more prominent if done too much or at lower bit depth. Then add fine amounts of noise, grain or dithering, even a debanding plugin if necessary, to fight that: find a happy medium between mushing pixel values at high bit depth with a denoiser, balancing holding detail vs smoothing, and adding the right amount and type of noise as a last operation after any sharpening.
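The 'fine amounts of noise or dithering' point above can be shown with a toy example: hard-quantizing a smooth ramp produces distinct bands, while adding a little noise before quantizing randomizes where each band edge falls. The step size and noise amplitude here are arbitrary, purely for illustration:

```python
import random

def quantize(v, step):
    """Crude requantization, like dropping to a lower effective bit depth."""
    return round(v / step) * step

random.seed(0)
ramp = [i * 255.0 / 999 for i in range(1000)]   # a smooth 0-255 gradient

# hard quantize to coarse steps: long flat runs with visible band edges
banded = [quantize(v, 8) for v in ramp]

# dither: small noise before quantizing breaks up the band edges, trading
# banding for fine noise that averages out visually
dithered = [quantize(v + random.uniform(-4, 4), 8) for v in ramp]

def transitions(xs):
    """Count how often adjacent samples differ."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a != b)

# the dithered ramp changes value far more often than the banded one,
# i.e. the hard band edges have been dissolved into noise
```

It also shows the post's warning in reverse: the dither only helps if added at sufficient precision and as a late operation, otherwise you are just quantizing the noise too.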
  15. Hi. Even though you feel the settings shown provide good results, the settings don't make sense with regard to luma range and matrix. It's AVCHD off a GH2, so BT709 is the matrix, and the GH2 encodes into 16 - 235 luma, 16 - 240 chroma, so limited range not full range. QT Player screen shots are not really reliable: QT auto scales the levels, upsamples to 4:2:2 and then converts to an RGB screen grab, so I don't think the screen grabs are reliable. Also the method of providing grabs is so painful; is it not possible to just provide a zip download, and better still the sample GH2 clip you're testing, so others can actually make a comparison if anyone is interested? :-)
  16. [quote name='Bruno' timestamp='1349703707' post='19483'] 600 lines seems to be an exaggeration. The 5D3's video capture resolution is 1904x1072, vs 1720x974 for the 5D2/7D/550D. [/quote] And personally, from the visual comparison between the T2i 4:2:2 raw silent pic in non record operation (1056x704) versus the 4:2:2 raw silent pic at 1720x974 that we get whilst recording, I feel the latter is softer, suggesting an upscale of 1056x704, although A1ex suggests otherwise, and he's the one to know I guess. But a long time back I remember Canon suggesting they added video recording functionality by rerouting the liveview feed from the LCD and added a h264 encoder chip to the hardware of the 5D Mk whatever.
  17. A public library has books for free, you can get more than one at a time. :-) Here's an excellent site btw: [url="http://www.cambridgeincolour.com/"]http://www.cambridgeincolour.com/[/url]
  18. If it's the same as for Canons, then it's all about the mount, not the make; you can have the same manufacturer of a lens but a different mount. btw fwiw I read somewhere on this thread that Canon lens choice is restricted; in fact a great many lens mounts fit Canon EOS, including good oldies like EXAKTA.
  19. [quote name='EOSHD' timestamp='1348309262' post='18887'] [img]http://www.eoshd.com/comments/uploads/inline/1/505d90ded65c3_gh2levels.jpg[/img] For reference this is the levels problem I am still having with the GH2 after all these years, and now GH3. Why does my Mac not recognise that GH2 footage should be 16-235 not 0-255? Quicktime treats it as 0-255 as well, crushed blacks and blown highlights. Full resolution screen grab: [url="http://www.eoshd.com/uploads/gh2-levels.jpg"]http://www.eoshd.com.../gh2-levels.jpg[/url] [/quote] Because you're looking at the RGB interpretation, which will be 0 - 255 from YCC 16 - 235. What does the YC Waveform show?
  20. The GH3 encodes luma over the full 0 - 255 range (based on that native file Rich pointed to in the other thread) and the stream is flagged 'fullrange' on, just like Canon and Nikon. This is a departure from the GH2, which was 16 - 235 luma. The full range flag is there to tell the decompressing codec to squeeze luma levels to 16 - 235 for playback, which is the correct luma range for conversion to 0 - 255 RGB in the media player. If you are seeing 16 - 235 in a luma waveform it's because the flag is working; Premiere and AE on Windows respect the flag. VLC, depending on config, with hardware YUV->RGB conversion active may not respect the full range flag, so it may just crush blacks and compress whites, giving increased contrast if not set up right. Many other media players will do the same. Quicktime Player (on Windows at least) does its own thing and f--ks it up totally. Perhaps test playback with ffplay from ffmpeg, or better still Media Player Classic, which will honour the flag. Here's a couple of test files; they are both from the same 0 - 255 YCbCr source file, one encoded with the stream flagged full range on, as GH3, Canon, Nikon MOVs, the other flagged full range off. [url="http://dl.dropbox.com/u/74780302/fullrangetest.zip"]http://dl.dropbox.co...llrangetest.zip[/url] If your media player / NLE / whatever doesn't display the 16 & 235 text then you won't be viewing GH3 or Nikon or Canon MOV's correctly; it will crush shadows and compress upper levels to white. If you do see the text then you can check the levels are correct and not stretched the other way by doing a frame grab and checking in an image editor, using the sampler to check the 0, 16, 235 & 255 RGB values correspond.
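A numeric sketch of what honouring versus ignoring the fullrange flag does to the '16' text in those test files. The two mappings are the standard squeeze and expansion the post describes; the clipping models an 8bit player:

```python
def rgb_flag_ignored(y):
    """Player assumes limited range: expand 16-235 to 0-255 and clip to 8bit.
    Full range shadows below 16 go negative and are crushed to black."""
    v = (y - 16) * 255.0 / 219.0
    return min(255, max(0, round(v)))

def rgb_flag_honoured(y):
    """Honour the fullrange flag: squeeze 0-255 into 16-235 first, then
    apply the same expansion. The net effect is close to an identity map."""
    squeezed = 16 + y * 219.0 / 255.0
    v = (squeezed - 16) * 255.0 / 219.0
    return min(255, max(0, round(v)))

# the '16' text (luma 16) against the black bar (luma 0):
# flag ignored  -> both become RGB 0, text invisible
# flag honoured -> text stays at RGB 16, black bar at 0, text readable
# likewise the '235' text merges into the white bar when the flag is ignored
```

Which is exactly the pass/fail criterion in the post: seeing the 16 & 235 text means your player is on the right-hand path.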
  21. Lack of full manual control in video mode being one of them, at least that's the case on the lower and older end.
  22. [quote name='jcs' timestamp='1346972892' post='17475'] If someone mentioned 780 as vertical pixel resolution for the 5D3, where is the chart to support this? If it is really horizontal line pairs, that would be 1560x878 pixels- very close to 1600x900. [/quote] If you're running Magic Lantern you could take a silent pic in movie mode without recording, and again whilst recording. On a T2i/550D I get a sharp uncompressed 4:2:2 raw YCC image at a resolution of 1056x704 when not recording. Whilst recording I get a silent pic of approx 1700x900 4:2:2 (not got one in front of me for the exact resolution) but it's as soft as s--t compared to the 1056x704. Clear to see what appears to be an uprez of the 1056x704 4:2:2. Then when I compare the MOV recorded vs the 1056x704 4:2:2 uncompressed YCC raw vs the 1700 YCC raw, it's clear to see the MOV is a lot softer than the 1056x704. No doubt there are differences between how the T2i, 5D Mk II and 5D Mk III do this, so it'd be interesting to see a comparison from a 5D Mk II or III. The ML devs consider the raw uncompressed YCC to be the feed into the h264 encoder.
  23. Yes, it has to be remembered that at no point is anyone actually seeing raw, just a software interpretation of raw from the ACR or other chosen defaults in the raw development process: bayer interpolation, RGB multiplication factors that, if chosen wrongly, will produce clipped channels in RGB that weren't there in the raw file, leading to magenta color casts, shiny skin tones etc, black handling, white balance, and color space choice from the more limited sRGB to Wide Gamut and ACES. The advantage of raw development outside of camera is control: setting the RGB multipliers correctly for any given shot to avoid clipped channels and color casts, and choice of bayer interpolation method. Whether that is ACR in AE or Premiere, Resolve, or a free raw development tool like dcraw (used in numerous commercial apps), each gives a lot of control. dcraw will even provide low level output before bayer interpolation for analysis if required.
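The clipped-channels point above can be shown with a toy white balance step. The pixel values and multipliers here are made up for illustration, not taken from any real raw file or camera profile:

```python
def apply_wb(raw_rgb, multipliers):
    """Apply per-channel white balance multipliers to a linear raw RGB
    triple (0..1 scale) and clip at 1.0, as a raw developer does."""
    return tuple(min(1.0, c * m) for c, m in zip(raw_rgb, multipliers))

# a bright pixel, comfortably below clipping in the raw data
pixel = (0.55, 0.92, 0.60)

# daylight-ish multipliers boost red and blue relative to green
balanced = apply_wb(pixel, (2.0, 1.0, 1.6))

# red is pushed to 1.1 and clipped back to 1.0: a clipped channel that
# was not clipped in the raw file. With red now at maximum and green the
# lowest channel, the near-white pixel picks up a magenta-ish cast.
```

The same arithmetic run with better-chosen multipliers for the shot leaves all three channels below 1.0, which is the control the post is talking about.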
  24. Does AE even use QT for encoding? Looks like it's MainConcept in Adobe's MediaCore to me; certainly on Windows it is, and I have QT installed. I've had another go at it via AE and I really don't know where such a saturated result as the ProRes has come from, unless it was graded that way. AE settings were a 32bit linear project, sRGB working space on this occasion (although as these are raw and assume no colorspace baked in, could use wider than that in theory), comp 24p 1080 and output simulation as rec709. Encoding out to h264 using AE's own encoder (x264 is possible with Premiere I think) shows the same preview in all players, that is the more orange dress, not saturated red. I don't even see saturated red in any other raw developing tool I've tried. Interesting article here: [url="http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm"]http://www.guillermo...aw/index_en.htm[/url] with reference to the multitude of assumptions a raw converter will make, many out of reach of the user. Really, I think the problem here is comparing against the ProRes and how that was derived, not QT playback of h264, dragging up three year old 'bugs' and workarounds. Although I don't use mac, so defer to those with more experience of the OS. Yes, this could be different handling vs Windows; I also see the same results using Linux and Linux based raw tools, fwiw. **EDIT** Playing around with some of the wider gamut choices via dcraw for raw developing to 16bit linear TIFFs, outputting to the ProPhoto colorspace rather than sRGB, I get the ProRes saturated look and deep red to the dress, all with the other developing choices the same as for sRGB, such as default dcraw RGB channel multipliers and automatic white balance assumptions, and then output simulation to sRGB. Still, this may have nothing to do with it, but... btw regarding h264, there are many profiles available; it's not all 4:2:0 c--p vs ProRes. The h264 specification, including the x264 implementation, offers 4:4:4, also 8 and 10bit lossless. 
There's also support in h264 for xvYCC, a wider gamut than rec709 / sRGB. It's quite easy to have 300mbps lossless h264. :-) Not that that's to suggest bitrate determines quality.
  25. I don't know mac at all, but I believe OSX has a go at color managing QT Player using whatever display profile is configured. VLC I don't think is color managed at all. But certainly AE on Windows doesn't encode what is previewed so far. Can you post your ProRes 444 and h264 from AE on mac for comparison? I'm not surprised AE is suspect; it's not even possible to put a Canon MOV into a linear light 32bit project and do the right degamma and regamma for encoding, because the sRGB and rec709 curves differ. But that's a separate problem. :-)