blackrat

Members | Posts: 57 | Joined | Last visited

Everything posted by blackrat

  1. I didn't get to watch the videos yet, but looking at the screen caps, the A7S+Shogun appears to capture a heck of a lot more detail. The Canon 1DC seems to have the same waxy, squishy video that the 5D3 does in 1080p (if not using ML RAW). And from the sample footage I've seen posted, the A7S stuff all looks natural, like you are there, while the 1DC stuff all looks waxy and artificial. Maybe it's just down to the Canon users using poor settings or compression before uploading to Vimeo and YouTube, but...
  2. For an AE-only workflow it should be possible to do this (referring to my message two above) if you can find an .icc profile that has sRGB/REC709 primaries combined with gamma 2.2 instead of the sRGB TRC. I didn't find one with a quick web search, but I'm sure one has to be out there; I may just make my own. Then you could simply use that in AE and it should be good (at worst, if you used it as the output conversion profile, that should force it to work out).
  3. Maybe you are running an older build of ML RAW, one where you need to start and stop RAW recording from within the ML menu system? The newer builds start it with the normal video record button. (And just in case: after telling ML to load the raw_rec module, make sure you also went to the ML video menu and set RAW video recording to enabled, since it is disabled by default.)
  4. ML RAW Video Tone Curve Photoshop/ACR Workflow FIX
     Basically, when you use the Photoshop/ACR workflow to process the RAW DNG folder, you set the ACR working space to sRGB 16-bit, which is fine. But most people calibrate their monitors and TVs to something like gamma 2.2, while you were editing in sRGB. As soon as the footage leaves anything that isn't completely color-managed (which includes almost all video playback software), the sRGB file's sRGB tone response curve never gets converted for display on a gamma 2.2 screen: contrast and saturation get boosted a trace, and the shadows and lower mid-tones end up too dark.
     First, of course, ACR should be set to the sRGB working space and 16 bits when processing ML RAW video (for stills, ProPhoto RGB 16-bit makes the most sense).
     The fix is to add a step right before you save out as TIFF in your batch action. Use "Edit->Convert To Profile->Custom RGB", rename it to "REC709 Primaries With Gamma 2.2" (or whatever), and hit OK (it should already have selected REC709 primaries and gamma 2.2 for you automatically; if not, make sure gamma 2.2 and the REC709/sRGB primaries are set). This stores each TIFF with gamma 2.2 and sRGB/REC709 primaries instead of the sRGB TRC and sRGB/REC709 primaries, so your video should look the same played back on a display calibrated to sRGB/REC709 primaries and gamma 2.2 as it did when you graded the initial frame in ACR/Photoshop.
     But that is not all. That only stores the TIFFs as gamma 2.2; AE will still convert them back to the sRGB TRC instead of leaving them at gamma 2.2 unless you set "Preserve RGB" as one of the options for the output codec in the render queue, and I believe you also need to set the working space to "None" in AE preferences to turn off its color management engine. That does the trick. It's actually simple: turn off AE color management once and save those prefs, then add the convert-to-profile gamma 2.2 step to your RAW batch action in Photoshop once, and you are good to go with nothing more to do each time.
     Then when you import into Premiere Pro it looks the same as it did in Photoshop/ACR (assuming your monitor is internally calibrated to the sRGB/REC709 gamut, gamma 2.2 and D65; if not, there may be slight variations due to primaries sitting in different locations and such, although if you at least calibrated through software, the gamma/WB ramp should still be loaded in your video card and things should still match up more or less).
     It makes a considerably noticeable difference. Your video won't end up overly saturated/contrasty or too dark in the shadows and midtones compared to what you thought you had prepared in Photoshop/ACR. If you were fine-tuning in Premiere Pro anyway, I suppose it doesn't matter, but it saves you from having to re-tune to compensate for the sRGB vs gamma 2.2 difference, which is hard to do exactly by hand, and it means less pushing of bits around once you are possibly no longer in a full bit-depth format.
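     If you'd rather script the curve swap than rely on Photoshop's Convert To Profile step, here is a minimal numpy sketch of the same math, assuming the frames were exported as 16-bit TIFFs: decode the standard sRGB transfer curve back to linear light, then re-encode with a pure 2.2 power curve. The tifffile dependency and the file names are my own illustrative assumptions, not part of the workflow above.

         import numpy as np
         import tifffile  # assumption: frames saved as 16-bit TIFFs

         def srgb_to_gamma22(frame_u16):
             """Re-encode a 16-bit sRGB-TRC frame with a pure 2.2 gamma curve.
             Primaries are untouched; only the tone response curve changes."""
             x = frame_u16.astype(np.float64) / 65535.0
             # standard sRGB decode (piecewise linear/power) back to linear light
             linear = np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)
             # re-encode with a plain 2.2 power curve for gamma 2.2 displays
             g22 = np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)
             return np.round(g22 * 65535.0).astype(np.uint16)

         frame = tifffile.imread("frame_000001.tif")                       # hypothetical file name
         tifffile.imwrite("frame_000001_g22.tif", srgb_to_gamma22(frame))

     Near black the sRGB curve stores lower code values than a pure 2.2 curve, so an sRGB-encoded file shown un-managed on a gamma 2.2 monitor gets its shadows crushed; re-encoding with gamma 2.2, whether in Photoshop or as above, is what avoids that.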
  5. I think NIK Dfine has some options for this, and it's said to be among the best at removing banding. I have a bad feeling it won't batch properly, though, which would make it a no-go for video. I need to check.
  6. What if you also do a lot of stills? Then you need the 5D3 anyway, which makes the cost comparison even more lopsided. The 5D3 gets it all in one small package: great for hiking out to film cool spots and nature, no need to always lug two systems, etc.   The C100 still doesn't grade quite as well.   Each has its uses, though, of course.
  7. GH2 doesn't have the DR or color of a hacked 5D3. It's more than just the resolution that gets improved with the 5D3 hack.
  8. Not possible for a number of reasons including that the Nikon LV feed is aliased and choppy.
  9. You don't want to try to remove vertical banding with standard spatial NR! As you say, that destroys regular detail just to remove some fixed-pattern banding. You need NR specifically tuned for banding.
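     To illustrate the difference, a banding-tuned pass works on whole columns (or rows) rather than on local pixel neighborhoods, so ordinary detail is mostly left alone. A rough numpy/scipy sketch of one common column-offset approach follows; the function name and smoothing width are my own assumptions, not how Dfine or any particular plugin does it.

         import numpy as np
         from scipy.ndimage import uniform_filter1d

         def remove_vertical_banding(channel, smooth=31):
             """Remove fixed-pattern vertical banding from one image channel
             by subtracting slowly varying per-column offsets."""
             img = channel.astype(np.float64)
             col_means = img.mean(axis=0)                    # one value per column
             # the smoothed trend is real image content; the residual is the banding
             trend = uniform_filter1d(col_means, size=smooth, mode="nearest")
             banding = col_means - trend
             return img - banding[np.newaxis, :]             # run per channel for RGB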
  10. If you ever shot Fuji Velvia 50 and the like, it was pretty contrasty and wildly saturated.
  11. Should be somewhere in the ISO 100-200 range for most DSLRs. I think ISO 200 for some Nikons.
  12. That is strange, though, since I've not heard of anyone else having any luck at all with the Komputerbay 128GB cards. I looked at a couple and both topped out at barely over 70MB/s, which is no good. Sounds like Andrew hit the same speed, and that's all I've read everywhere else too (other than at Cinema5D). Maybe they sent Cinema5D some special batch that has nothing to do with normal copies? (Even from Lexar, I noticed many speed tests rated their 128GB cards slower than their 64GB and 32GB cards, although not as slow as the Komputerbay cards seem to be.)
  13. Komputerbay 128GB cards seem to only do 70MB/s, so they don't cut it. Perhaps their 64GB cards do better? It might be Lexar 1000x 32/64GB or nothing, though.
  14. What about the sports guys firing off 18MP 12fps bursts all the time, or reading from the card to transfer to a computer at maximum speed? I think these cards were designed to work at their maximum rate for extended periods of time.
  15. But once again, what extra heat? It is simply pulling the liveview feed, the same feed the camera is spitting out ANY time you use liveview or movie mode, and then dumping it to CF cards quickly. All-I mode also dumps to CF cards quickly, as does someone holding the shutter down a lot in stills mode. Only in this case they are not also running the h.264 compression chip, and are probably using the Digic less as well.
  16. hah, OK, well how about some wildlife footage now appearing: ISO3200
  17. Well, looking at a 4K demo on an LCD sure gives me more of a looking-out-the-window feeling than looking at a 1080p set. And the same went for most others who walked up and gave it a look.
  18. Yeah, eyes are that good. Look at a 24" print and the same image on a 24" screen from a typical viewing distance. Hella difference. Look at the old iPad and a Retina iPad; that's sort of what 1080p vs 4K from a GOOD viewing distance on an HDTV can be like.
  19. Max signal stays the same, but averaging many photosites down to a few lowers the noise floor (random noise drops by roughly the square root of the number of pixels averaged), so plug that into the DR formula and DR goes up. You trade spatial resolution away in the process.
  20. Shoot something that blows out all channels and measure the signal recorded (WP = white point), then shoot a frame with the body cap on and measure the noise of that (RN = read noise) and find the black point (BP = average value of the black frame): DR = log2((WP - BP) / RN).   You also might want to normalize. You can just apply the scaling factor to RN: for instance, say you had a 21MP camera and got 6 ADU RN; then to compare on an 8MP basis you do sqrt(8/21) * 6 and get about 3.7 ADU RN, and you use 3.7 above instead of 6.
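      In code, the same measurement with the normalization applied to the read noise looks roughly like this; the function names are mine, and the white/black point values in the example call are placeholders just to show the call (the 21MP, 8MP and 6 ADU figures are the ones from the post).

          import math

          def dynamic_range_stops(white_point, black_point, read_noise):
              """Engineering DR in stops: log2 of full-scale signal over read noise."""
              return math.log2((white_point - black_point) / read_noise)

          def normalize_read_noise(read_noise, native_mp, target_mp):
              """Averaging down to target_mp reduces random noise by sqrt(native/target),
              i.e. RN scales by sqrt(target/native)."""
              return read_noise * math.sqrt(target_mp / native_mp)

          rn_8mp = normalize_read_noise(6.0, native_mp=21, target_mp=8)      # ~3.7 ADU
          print(dynamic_range_stops(white_point=15000, black_point=2048,     # placeholder WP/BP
                                    read_noise=rn_8mp))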
  21. "Vincent: " – and can the body TRULY survive doing this over time (heat/damage?)""   1. I bet it runs a trace cooler if anything. 2. Even if it did eventually do something (and I don't see how it would anymore than using Liveview a lot would) look at the price of it!
  22. It's 49, not 40, seconds. And it's working out pretty marvelously as-is for nature/scenic stuff for me already, but of course that stuff doesn't count because it's merely some silly natural-world video. (Of course, there are times you would like more than 49 seconds for that stuff too, but as has been said, it's just a short matter of time before they more or less get past the 4GB limit anyway, and you can get plenty of awesome stuff in the meantime.)
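      For what it's worth, the 49 seconds falls straight out of the 4GB file limit. A back-of-envelope check, assuming full 1920x1080, 14-bit, 24fps raw (the exact ML recording size is my assumption and varies slightly by build):

          bytes_per_frame = 1920 * 1080 * 14 // 8      # 14-bit raw, ~3.46 MiB per frame
          bytes_per_second = bytes_per_frame * 24      # ~83 MiB/s at 24 fps
          seconds = (4 * 1024**3) / bytes_per_second   # 4 GiB file-size cap
          print(round(seconds, 1))                     # ~49.3 s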
  23. No, more like 3', and the grain and pixels are easy to see. Sitting a nice distance from a 4K HDTV blows the heck out of sitting stupidly far from a 1080p set.
  24. Not everyone sits 10'-20' away from their 55" HDTV! That's way past the THX recommendation. Even with a Retina iPad at normal iPad distance I can still see some pixels, because even that isn't quite high enough density for the viewing distance. 4K TVs make every bit of sense in the world.
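      A rough back-of-envelope for that, assuming about 1 arcminute of visual acuity (roughly 20/20 vision); the function name and the acuity figure are illustrative assumptions, not a THX spec:

          import math

          def max_useful_distance_m(diagonal_in, horizontal_px, aspect=16 / 9):
              """Distance beyond which a single pixel subtends less than ~1 arcminute,
              i.e. where extra resolution stops being visible."""
              width_m = (diagonal_in * 0.0254) * aspect / math.hypot(aspect, 1)
              pixel_m = width_m / horizontal_px
              return pixel_m / math.tan(math.radians(1 / 60))

          print(max_useful_distance_m(55, 1920))   # ~2.2 m (about 7')   for a 1080p 55"
          print(max_useful_distance_m(55, 3840))   # ~1.1 m (about 3.6') for a 4K 55"

      So from 10'-20' even 1080p out-resolves the eye on a 55" set, but at closer, more typical seating distances 1080p pixels are visible while 4K's are not.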