
Posts posted by blackrat

  1. I didn't get to watch the videos yet, but looking at the screen caps, the A7S+Shogun appears to capture a heck of a lot more detail. The Canon 1DC seems to have the same waxy, squishy video that the 5D3 does in 1080p (if not using ML RAW).

    And from the sample footage I've seen posted, the A7S stuff all looks natural, like you are there, while the 1DC stuff all looks waxy and artificial. Maybe it's just down to the Canon users using poor settings or compression before uploading to Vimeo and YouTube, but....


  2. For an AE-only workflow it should be possible to do it (referring to my message two above) if you can find an .icc profile that has sRGB/REC709 primaries combined with gamma 2.2 instead of the sRGB TRC. I didn't find one with a quick web search, but I'm sure one has to be out there; I think I will just make my own. Then you could just use that in AE and it should be good (at worst, if you used it as the output conversion profile, that should force it to work out).

  3. Hi Andrew,

    I see rec raw but nothing happen I hit the start button but no recording

    5d mark iii with 32 gig lexar 1000x


    Maybe you are running an older version of ML RAW, one where you need to start and stop RAW recording within the ML menu system? The newer builds should start it up with the normal video record button. (And just in case: after telling ML to load the raw_rec module, make sure you also went into the ML video menu and set RAW Video recording to enabled, since it is disabled by default.)

  4. ML RAW Video Tone Curve Photoshop/ACR Workflow FIX

    Basically, when you use the Photoshop/ACR workflow to process the RAW DNG folder, you have to set the ACR working space to sRGB 16-bit, which is all fine. BUT most people calibrate their monitors and TVs to something like gamma 2.2, while you were editing in sRGB. As soon as you take the footage out into anything not completely color-managed, which includes almost all video playback software, the video file's sRGB tone response curve never gets converted for the gamma 2.2 display: the contrast and saturation get a trace boosted, and the shadows and lower mid-tones become too dark.
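To put a number on that mismatch, here is a minimal Python sketch (my own illustration, not part of the original workflow) that encodes a linear value with the sRGB TRC and then shows it the way a pure gamma 2.2 display would:

```python
def srgb_encode(u):
    """Linear light -> sRGB-encoded value (IEC 61966-2-1 TRC)."""
    if u <= 0.0031308:
        return 12.92 * u
    return 1.055 * u ** (1 / 2.4) - 0.055

def shown_on_g22_display(linear):
    """Encode with the sRGB TRC, then decode with a pure gamma 2.2 display,
    i.e. the un-color-managed playback case described above."""
    return srgb_encode(linear) ** 2.2

for lin in (0.01, 0.05, 0.18, 0.50):
    print(f"intended {lin:.3f} -> displayed {shown_on_g22_display(lin):.4f}")
```

The two curves cross near middle gray, so midtones barely move, but deep shadows come out noticeably darker (a linear 0.01 displays at roughly 0.006), which matches the "shadows and lower mid-tones become too dark" symptom.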


    First, of course, ACR should be set to the sRGB working space and 16 bits when using it for ML RAW video (for stills, ProPhoto RGB 16-bit makes the most sense).


    The fix is to add a step right before you save out as TIFF in your batch action. Use "Edit->Convert To Profile->Custom RGB", rename it to "REC709 Primaries With Gamma 2.2" (or whatever) and hit OK. It should already have selected REC709 primaries and gamma 2.2 for you automatically; if not, make sure gamma 2.2 and the REC709/sRGB primaries are set. This stores each TIFF in gamma 2.2 with sRGB/REC709 primaries instead of in the sRGB TRC with sRGB/REC709 primaries, so your videos should look the same played back on your sRGB/REC709-primaries, gamma 2.2-calibrated display as they did when you edited the initial frame in ACR/Photoshop.
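For reference, the per-pixel math behind that conversion looks like the NumPy sketch below (my own illustration of a TRC swap under the assumption of float pixel values in 0..1; Photoshop's actual ICC conversion engine may differ in rounding details):

```python
import numpy as np

def srgb_to_linear(v):
    """Decode sRGB-TRC encoded values (0..1) back to linear light."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_trc_to_gamma22(v):
    """Re-encode with a pure 2.2 gamma; the linear light (and the
    sRGB/REC709 primaries) are left untouched."""
    return srgb_to_linear(v) ** (1.0 / 2.2)
```

Applied to 16-bit TIFF data you would divide by 65535 first and scale back after; 0..1 floats keep the sketch simple.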


    But that is not all. That only makes the TIFFs get stored as gamma 2.2; AE will still convert them back into the sRGB TRC instead of leaving them at gamma 2.2 unless you set "Preserve RGB" as one of the options for the output codec in the render queue, and I believe you also need to change the AE working space preference to "None" to turn off its color management engine.
    That does the trick. It's actually simple: you turn off the AE color management once and save those prefs, then add the convert-to-gamma-2.2 profile step to your RAW batch action in Photoshop once, and you are good to go with nothing more needed each time.


    Then when you import into Premiere Pro it looks the same as it did in Photoshop/ACR (assuming your monitor is internally calibrated to the sRGB/REC709 gamut and gamma 2.2 D65; if not, there may be slight variations due to the primaries sitting in different locations, although if you at least calibrated it through software, the gamma/WB ramp loaded into your video card should still more or less match things up).


    It really makes a considerably noticeable difference: your video won't end up overly saturated, contrasty, and dark in the shadows-to-midtones compared to what you thought you had prepared in Photoshop/ACR. If you were fine-tuning in Premiere Pro anyway I suppose it doesn't matter, but it saves you from having to re-tune to make up for the sRGB vs. gamma 2.2 difference, which is hard to do exactly by hand, and it means less pushing bits around once you are possibly no longer in a full-bit-depth format.

  5. Again I can't imagine any appeal to having to work with a completely different mount and camera system. If you have two Canons already just stick with Canon. They are the leader for a good reason, and glass is a major Canon strength.


    BTW, the internal NDs on the C100 just beat out Formatt and Schneider 4x5.65 filters in a shootout:



    Saves you even more dough (not to mention the world-class built-in IR filter that also seals the sensor area from dust), and no flattening polarization effect or color bias from a Vari-ND.


    Really given we have two excellent indie-affordable options in the C100/Ninja2 (best lowlight) and FS700/Speedbooster (overcrank) it's heroic but puzzling of you guys to stick with DSLRs. The RAW hack is definitely interesting to make use of the 5D3 as a super-B cam instead of just stills. But even so kitting it out is more expensive than the C100 option if you are only interested in motion, and not as great an image as charts reveal.

    What if you also do a lot of stills? Then you need the 5D3 anyway, which changes the cost comparison entirely. The 5D3 gets it all in one small package. Great for hiking out to film cool spots/nature. No need to always lug two systems, etc.


    C100 still doesn't grade quite as well.


    I mean each have their uses though of course.

  6. Interesting...the GH2 is very close to the 5D raw in these tests and it doesn't need all this post processing and storage power.  I understand the gravity of ML and their work, especially if you're an owner of a 5D or have a lot of Canon glass.  However, for those of us on the fence between the GH3 and 5D3, what is the allure of raw?  Considering how intensive the workflow is, it would only be used for special shots, and the majority of footage would be h.264, so wouldn't the no-brainer be to upgrade to a GH3?

    GH2 doesn't have the DR or color of a hacked 5D3. It's more than just the resolution that gets improved with the 5D3 hack.

  7. Thanks for doing these tests! Awesome to hear our mushy 5D3 image now has the ability to shoot a more detailed and vibrant image than with the stock compression. Even more awesome to know it compares closely with the Cinema line. 


    Some of your shots from the 5D3 are still showing some of the fixed vertical pattern noise. This is my biggest concern after shooting raw with my 5D3. Having to push the luminance sliders past 50 (up to 75 for some shots) is definitely making the image muddier. Sharpening helps bring back some detail but it's a shame so much noise removal is necessary to get a clean image with no vertical lines. Maybe this will be worked out eventually? You've mentioned you like to shoot a few notches under-exposed to retain better highlight detail, but it seems more people are recommending the opposite, over-exposing then bringing down the highlights, in order to help with the vertical pattern issue. I'll have to do some more comparisons to see if this bears out, but it's typically the way I shoot photos with my 5D3.

    You don't want to try to remove vertical banding with standard spatial NR! As you say, that destroys regular detail just to remove some of the fixed-pattern banding. You need NR specially tuned for banding.
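As an illustration of what "banding-tuned" means, here is a minimal NumPy sketch (my own, not ML's or any plugin's actual algorithm) that estimates a fixed per-column offset from the column medians and subtracts it, instead of blurring detail spatially:

```python
import numpy as np

def remove_vertical_banding(img, strength=1.0):
    """Estimate the fixed per-column stripe pattern from column medians,
    high-pass it against a smoothed version (so real horizontal gradients
    in the scene are preserved), and subtract it from every row."""
    col = np.median(img, axis=0)                  # per-column level
    k = 31                                        # low-pass window, columns
    smooth = np.convolve(np.pad(col, k // 2, mode="edge"),
                         np.ones(k) / k, mode="valid")
    stripe = col - smooth                         # the banding itself
    return img - strength * stripe[np.newaxis, :]
```

On a synthetic flat frame with alternating ±2 ADU column offsets, this flattens the columns while leaving the rest of the image untouched; real detail is never averaged away, which is the whole point versus generic spatial NR.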

  8. Film with a purely photochemical process after a one-light has less saturation and contrast than a lot of the RAW videos posted. From the last project i did that's what my eye tells me. Film is not super contrasty and saturated, i don't know why you're arguing that it is? The neg is not designed to create high contrast/saturation, that doesn't make sense. The Neg gives you a nice range of contrast to retain as much info as possible within the boundaries of what film can achieve. 


    Ok the MKII video posted is less saturated than film processed photochemically, but it's waaayy less saturated and contrasty than some of the videos being posted, and far more in the direction of what film looks like.


    I don't understand what you mean by this digital LUT problem. You shoot on your camera with LUT applied to the viewing monitor, then in post you apply the LUT, and there you go it's the same as when you shot it. If you mean people are trying to grade footage to look like the LOG profile of an Alexa before grade/LUT applied, yes that's weird.


    Anyway, rather than arguing about it, i guess i should just buy a fast CF card and test it for myself huh?

    If you ever shot some Fuji Velvia 50 and such that was pretty contrasty and wildly saturated.

  9. Cinema5D was claiming their 128gb 1000x Komputerbay cards were much faster than what Andrew and I have been getting. I'm wondering if I should just order 5-6 cards at once and only keep the fastest ones? Thoughts?

    That is strange though since I've not heard of anyone else having any luck at all with the Komputerbay 128GB cards. I looked at a couple and both also topped out at barely over 70MB/s which is no good. Sounds like Andrew hit the same speed and that's all I've read everywhere else too (other than at Cinema5D). Maybe they sent Cinema5D some special batch that has nothing to do with normal copies? (Even from Lexar I noticed that many speed tests had their 128GB cards rated slower than their 64GB and 32GB cards, although not as slow as the Komputerbay cards seem to be).

  10. Oscar, your analogy doesn't work. That massive RAW live view image is now written onto newly developed cards. So for that much information to get written onto the card, it must create heat. Maybe not as much as the line skipping/binning or whatever the 5D does, but maybe more.

    But so far so good. 

    What about sports guys firing off 18MP 12fps bursts all the time? Reading from the card to transfer to a computer at maximum speed? I think these cards were designed to be able to work at max rate for extended periods of time.

  11. Excellent summary and update. I do share Vince's skepticism with regard to heat damage over a long period of time. Heat & Dust are the arch enemies of Digital Video and we know that 4K processing in a small container is going to result in parts decay eventually. Then again, this update is really for the installed base. I wouldn't run out and buy this camera just for 4K ( knowing that in the horse race, first out the gate, only sets the pace). No doubt, Nikon ( last out the gate) and the others are looking closely.
    I'm still hopeful for that 4K RAW 10-bit in a smaller box but NOT a DSLR. Hell may freeze over before I get that wish fulfilled.

    But once again, what extra heat? It is simply pulling the live view feed, the same feed the camera is spitting out ANY time you use live view or movie mode, and then dumping it to CF cards quickly. All-I mode also dumps to CF cards quickly, as does someone holding the shutter down a lot in stills mode. Only in this case they are not running the h.264 compression chip as well, and are probably using the DIGIC less too.

  12. I've never seen so much test footage of trees, shrubs and flowers in my life with all of these 5D3 raw clips being uploaded online. I appreciate seeing the quality of the footage but let's try to shoot some landscape wides of a lake, highway, city line, or something besides closeups of the bush in your backyard.... lol

    hah, OK, well how about some wildlife footage now appearing:


  13. Old CRT's actually give you that "window feeling" WAY better than LCD screens. Especially if that CRT is an HD model. They really look outstanding. No LCD I've ever seen has ever given that "out a window"-feeling and they won't until they are OLED. Nothing to do with resolution.


    Resolution is a red herring. I'd much rather have an old 720p plasma than a new 1080p lcd. Contrast, blacks...That's where it's at.


    At home I have one of the last plasmas Pioneer did before they sold the business to Panasonic, and it's gorgeous. It really is. Man, those blacks really do their stuff, especially when watching films in the dark.

    Well, looking at a 4K demo on an LCD sure gives me more of a looking-out-the-window feeling than looking at a 1080p set. And the same went for most others walking up and giving it a look.

  14. Hmm. No they didn't. The increase to 1080p was one of the biggest things ever especially coming from NTSC. 720x480 interlaced?


    How can anyone say the difference was small? But now, moving from 1080p to 4K is not as big a deal. People's eyesight ain't that good. That's why many movie theaters are content to play 2K. It doesn't matter.

    Yeah eyes are that good. Look at a 24" print and the same image on a 24" screen from a typical viewing distance. Hella difference.

    Look at the old iPad and a retina iPad; that's sort of like what 1080p vs. 4K from a GOOD viewing distance on an HDTV can be like.
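A quick way to sanity-check that comparison is pixels per degree of visual angle; the screen widths and distances below are my own illustrative assumptions, not measurements from the thread:

```python
import math

def pixels_per_degree(h_pixels, screen_width_in, distance_in):
    """Pixels covered by one degree of visual angle (small-angle approximation)."""
    ppi = h_pixels / screen_width_in
    return ppi * distance_in * math.tan(math.radians(1.0))

W55 = 47.9                                     # width of a 55" 16:9 panel, inches
print(pixels_per_degree(1920, W55, 8 * 12))    # 1080p 55" at 8 ft: ~67
print(pixels_per_degree(3840, W55, 8 * 12))    # 4K 55" at 8 ft: ~134
print(pixels_per_degree(2048, 7.76, 15))       # retina iPad at 15": ~69
```

Around 60 ppd corresponds to the 1-arcminute 20/20 acuity figure, so at 8 ft a 55" 1080p set sits right at that limit (much like a retina iPad at arm's length), while 4K leaves headroom for sitting closer.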

  15. Correct me if I am wrong, but I thought each photosite has the same dynamic range for what it can measure in terms of signal. And if you are imputing DR by using multiple photosites, more photosites would mean more DR, not less, as your statement implies. 

    Max signal stays the same, but averaging many photosites down to a few means the noise floor goes down; plug that in and DR goes up, trading spatial resolution away in the process.
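A quick NumPy simulation of that trade (illustrative numbers, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)
full_well = 1.0                  # max signal, the same for every photosite
read_noise = 0.01                # per-photosite noise floor, fraction of max

# flat mid-gray frame with per-photosite read noise
frame = 0.5 + rng.normal(0.0, read_noise, size=(1024, 1024))

# 2x2 bin: average blocks of 4 photosites into one output pixel
binned = frame.reshape(512, 2, 512, 2).mean(axis=(1, 3))

dr_stops = lambda noise: np.log2(full_well / noise)
print(dr_stops(frame.std()))     # native pixels
print(dr_stops(binned.std()))    # ~1 stop more DR, at 1/4 the resolution
```

Averaging 4 photosites halves the noise (sqrt(4)), so the max-signal-to-noise ratio doubles: one extra stop of DR, exactly the "more photosites averaged = more DR per output pixel" point above.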

  16. I'm curious, how do you mesure DR using black raw frames? whats the process.

    Shoot something that blows out all channels and measure the signal recorded (WP = white point), then shoot a frame with the body cap on and measure the noise of that (RN = read noise) and the black point (BP = average value of the black frame). Then:

    DR in stops = log2((WP - BP) / RN)

    You also might want to normalize. You just apply a scale factor to RN: say you had a 21MP cam and measured 6 ADU RN; to compare on an 8MP basis you do sqrt(8/21) x 6 and get about 3.7 ADU RN, and then you just use 3.7 above instead of 6.

  17. And again, these other "real world" examples are fabricated based on what someone sees in a typical edited down fashion.


    I can see it now: besides actors in any form, not just long form, and all those moments lost because you either weren't rolling or just had the camera cut out on you, you will have even more lost moments when trying to capture wildlife, or anything live and unpredictable.


    This rationalization over 40sec being useful will all likely be moot when ML applies the trivial fix for spanning but it does reveal much.

    It's 49, not 40, seconds. And it's working out pretty marvelously as-is for nature/scenic stuff for me already, but of course that stuff doesn't count because it's merely some silly natural-world video. (Of course there are times you would like more than 49 seconds for that stuff too, but as has been said, it's just a short matter of time before they more or less get past the 4GB thing anyway, and you can get plenty of awesome stuff in the meantime.)

  18. You're watching that screen at likely less than 2' from the display.  Please take the time to understand what's being discussed.  You're adding nothing to the discussion here.

    Not everyone sits 10'-20' away from their 55" HDTV set! That is way past the THX recommendation. Even with a retina iPad at normal iPad distance I can still see some pixels, because even that isn't quite high enough density for the viewing distance.


    4K TVs make every bit of sense in the world.
