Everything posted by cpc

  1. Resolve, Scratch, Nuke Studio, Premiere, After Effects, Edius (slow), and the latest Lightworks release all work with DNG footage.
  2. This is probably for historical reasons and is related to interlaced video. With interlaced, each of the two fields is subsampled separately (because the fields represent different moments in time, and subsampling them the same way as a progressive image would introduce motion-related chroma artifacts). Now, because each field is subsampled separately, using one of the subsampling schemes with only half the vertical chroma resolution, for example 4:2:0 or 4:4:0, results in gaps of two consecutive lines with no chroma samples. Here is how a column of 4 neighboring pixels looks in this case:

     Field 1 top row (chroma sample)
     Field 2 top row (chroma sample)
     Field 1 bottom row (no chroma sample)
     Field 2 bottom row (no chroma sample)

     But if you use full vertical chroma sampling, as in 4:2:2, there is no such issue: you only get 1-sample gaps horizontally, and no gaps vertically, when applying 4:2:2 to interlaced video.
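     A minimal sketch of the idea in Python (a hypothetical helper, just enumerating which picture lines carry chroma when each field is subsampled on its own):

     [code]
     # For each picture line, decide whether it carries a chroma sample
     # when each interlaced field is chroma-subsampled separately.
     def has_chroma(line, scheme):
         row_in_field = line // 2          # fields occupy alternating lines
         if scheme == "4:2:2":
             return True                   # full vertical chroma resolution
         if scheme == "4:2:0":
             return row_in_field % 2 == 0  # every other line *of the field*

     for line in range(4):
         print(line, has_chroma(line, "4:2:0"))
     # -> lines 0 and 1 have chroma, lines 2 and 3 do not: a 2-line gap
     [/code]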
  3. Well, since you've been renting lights, you should know best what works for you? I'm not sure how the 1x1 lights can be enough for lighting a normal-scale set/location from scratch. Are you only shooting closeups, or using the LEDs to augment daylight?
  4. 1) Convert your ML raw/mlv files to DNG sequences using one of the many available GUI batch tools. This is essentially a simple and fast rewrapping job.
     2) Delete the raw/mlv files. Then raw process, edit, color and finish in Resolve Lite, non-destructively, using the DNG files. No generations, proxies or whatever. Full original quality at each step. That's what I've used.
     Now, if you need compositing, you'll have to export to a compositing app using whatever format suits you. I've used TIFF sequences for that. (I have an extra step, using my compressor http://www.slimraw.com/ to losslessly compress the DNGs (usually 50+% size reduction while retaining full original quality), but this is optional. /end of plug)
     And apparently there are now ways to mount the ML raw files directly so that they are visible as DNG sequences in video apps. You should consult the Magic Lantern forum. Tons of workflows there. I haven't tried CC 2015 yet, but it looks promising. CC 2014 was way too slow for my liking when working with raw.
  5. Yeah, they put global shutter in there, then forgot to mention it in all the marketing blurb. Not really.
  6. Well, I wrote this last year: http://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/ Downscaling 4:2:0 8-bit video from 4K to 2K will give you 4:4:4 video with 10-bit luma and 8-bit chroma. But keep in mind that these are the theoretical limits. In practice, what you gain depends on pixel variation and the compression used on the source image. The more detailed the 4K image, the more true color precision you gain in the downscaled image. In any case, downscaling 4K in post delivers the best looking 2K/1080p from the current-gen 4K cameras.
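     A minimal sketch of where the extra luma precision comes from, assuming a plain 2x2 box average for the downscale:

     [code]
     # 2x2 block of 8-bit luma samples from the 4K image.
     block = [200, 201, 201, 202]

     # The sum of four 0..255 samples spans 0..1020: 10 bits of precision.
     # Keeping the sum (instead of the rounded average) preserves the two
     # extra bits in the 2K output pixel.
     luma_10bit = sum(block)      # 804 on a 0..1020 scale
     luma_8bit = luma_10bit // 4  # 201, the two extra bits discarded

     # Chroma: 4:2:0 carries one chroma sample per 2x2 block, so after
     # the downscale every 2K pixel has its own sample (4:4:4), still 8-bit.
     [/code]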
  7. Looks good. I would probably have him turned a little more to the left, and also move the camera a bit left to have the window offset and his head against the blue columns. The light would probably be slightly better on him that way, too. The changing sun is messing a bit with the brightness of these interview shots; they get progressively darker. Not a big deal, but you may want to match them.
  8. You'd better ask here: http://www.magiclantern.fm/forum/index.php I have tons of ML footage, but it is all converted to DNG. No reason to keep the original raw files since they aren't standard.
  9. You are downscaling 4K to 1080p, then upscaling the 1080p back up to compare it to the 4K. Downscaling is not a reversible operation. Why? Because different groupings of pixels can result in the same output pixel once downscaled. When upscaling back, the software can't create the original pixels out of thin air, because there are many possibilities for these pixels. In the end, you are comparing a 1080p image to a 4K image, and of course the 4K image will have more resolution. So a proper downscale loses you resolution and gains you bitdepth/color precision at the lower resolution, in the sense that each new pixel is essentially "quantized" at a higher bitdepth.
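     A toy illustration of the irreversibility (hypothetical values, plain box averaging): two different source blocks collapse into the same output pixel, so no upscaler can tell them apart.

     [code]
     # Two different 2x2 source blocks...
     block_a = [100, 100, 100, 100]
     block_b = [ 99, 101,  99, 101]

     # ...produce exactly the same downscaled pixel.
     print(sum(block_a) // 4)  # 100
     print(sum(block_b) // 4)  # 100

     # Given only the value 100, an upscaler cannot know which of the
     # many possible source blocks produced it.
     [/code]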
  10. What leads you to believe that 4K video is a center crop and not a sensor downscale?
  11. But there is usually some variance, more so with multiple samples per output pixel. Yes, noise provides this, but so does detail, and even gradients can have it (the steeper, the better), especially without compression on top. The thing is, in blacks and dark grays, where precision is most lacking, noise is at its strongest. So in practice you gain the most exactly where you need the gain. (On a remotely related note: been playing with some Kinefinity footage lately; the noise is nice and the image scales down to 2K beautifully.) @Axel: It's not necessarily the case that HDMI out is subsampled the same as the in-camera recorded video. Lots of cameras record 4:2:0 but output 4:2:2 over HDMI. And some go 4:4:4 (over SDI); the F3 comes to mind.
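     A small sketch of the noise-as-dither effect (assuming Gaussian noise and a plain box downscale): a level sitting between two 8-bit codes is lost when quantized clean, but averages back out of the pooled samples once noise is present before quantization.

     [code]
     import random
     random.seed(1)

     true_level = 100.4   # sits between 8-bit codes 100 and 101
     n = 10000            # samples pooled into one output pixel

     # Clean quantization: every sample rounds to 100, the .4 is lost.
     clean = [round(true_level) for _ in range(n)]
     print(sum(clean) / n)   # 100.0

     # With noise, samples cross the rounding threshold in proportion
     # to the fractional level, so the pooled average recovers it.
     noisy = [round(true_level + random.gauss(0, 1)) for _ in range(n)]
     print(sum(noisy) / n)   # ~100.4
     [/code]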
  12. Not sure about that, John. The native fine noise in the 4K image is probably mostly killed by compression, as much as the downsampled grain in the original 2K image is killed by compression. I don't think NLEs dither; Resolve doesn't seem to do it. But the scaling algorithms are surely involved, and this affects output because quite a lot of source pixels are sampled per output pixel (a generic cubic spline filter would sample 16 source pixels). @sunyata: it is not true 10-bit, and in your example chroma is still 8-bit (with no subsampling, though), but it surely is better than a 2K 8-bit subsampled source from camera, if not as good as a true 10-bit source.
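     For illustration, a minimal Catmull-Rom kernel (one common choice of cubic spline; it is nonzero over 4 taps per axis, hence 4x4 = 16 source pixels per output pixel in 2D):

     [code]
     # Catmull-Rom cubic kernel: nonzero for |x| < 2, so 4 taps per axis.
     def catmull_rom(x):
         x = abs(x)
         if x < 1:
             return 1.5 * x**3 - 2.5 * x**2 + 1
         if x < 2:
             return -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2
         return 0.0

     # Weights for an output sample falling 0.3 of the way between two
     # source pixels; 4 horizontal x 4 vertical taps = 16 source pixels.
     taps = [catmull_rom(tap - 0.3) for tap in (-1, 0, 1, 2)]
     print(taps, sum(taps))  # the four weights sum to ~1.0
     [/code]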
  13. The numbers above should read 20-25 and 60-65 tonal values per stop.
  14. Thanks to John for pointing me here, it is an interesting discussion. :) As one of the people who think there is a free tonal precision lunch to be had in the 4K to 2K downscale, and the one who wrote the ShutterAngle article linked on the previous page, I think the 8-bit display argument is a bit beside the point. The whole idea of shooting 4K for 2K (for me, at least) is in using a flat profile such as S-Log2 on the A7S, then working it in post to the appropriate contrast. As my idea of a good looking image is generally inspired by film and includes strong fat mids and nice contrast, this means the source image is gonna take quite the beating before getting to the place I want it. And here is where the increased precision is going to help. To simplify it a bit: when you stretch an 8-bit image on an 8-bit display, you are effectively looking at, say, a 6-7-bit image on an 8-bit display, depending on how flat the source image is. That's why starting with more precision is helpful. Starting with 20-25 values in the mids (which is the case with 8-bit S-Log2) is just not gonna handle it when you are aiming at, say, 60-65 values there in delivery. Compression and dirty quantization to begin with surely affect the result and limit precision gains. But they don't entirely cancel them, and the better the codec you use on the HDMI feed, the cleaner the downscale.
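     A quick way to see the effective bit-depth loss from stretching, using a hypothetical 3x contrast expansion of a flat 8-bit ramp:

     [code]
     # A flat 8-bit source occupies only part of the code range; grading
     # it to full contrast spreads those codes apart and leaves gaps.
     flat = range(90, 176)   # flat image: 86 distinct codes
     stretched = sorted({max(0, min(255, v * 3 - 270)) for v in flat})

     print(len(stretched))   # still 86 distinct values...
     print(stretched[:5])    # ...but 3 codes apart: [0, 3, 6, 9, 12]

     # 86 levels over a 0..255 range sits between 6-bit (64 levels)
     # and 7-bit (128 levels): visible banding on an 8-bit display.
     [/code]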
  15. My guess is the FC process goes like this:
     1) Linearize source data. Source can be any transfer curve, and with varying colorimetry. The tonal curve is linearized, and color is transformed (corrected) to some reference color space. This step equalizes the input.
     2) An idealized "printed" film transfer function (with the film negative gamma/contrast index expanded to 1, hence "printed") is applied, which pushes colors around, possibly also tweaking contrast (based on the film negative contrast index values).
     3) The result is gamma encoded for display.
     In theory, 1) and 2) (well, and 3, for that matter) can be done in a single composite step (a composite LUT for each possible source type and each possible target film, for example).
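     A schematic sketch of that guessed pipeline; every curve below is a placeholder, not FilmConvert's actual math:

     [code]
     # Guessed three-stage film emulation; all curves are stand-ins.
     def linearize(v, source_gamma=2.2):
         return v ** source_gamma          # 1) undo the source curve

     def film_response(v):
         return v * v * (3 - 2 * v)        # 2) stand-in "printed film" S-curve

     def encode(v, display_gamma=2.2):
         return v ** (1 / display_gamma)   # 3) re-encode for display

     # The three steps collapse into one composite 1D LUT per
     # source type / target film pair:
     lut = [encode(film_response(linearize(i / 255))) for i in range(256)]
     print(round(lut[128], 3))
     [/code]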
  16. Yes, but that's not what I am asking. Any negative film is extremely low contrast. We are talking film gammas like 0.5-0.6. The image is never meant to be used at this contrast level. The audience never sees the negative. And printing on paper or on release film restores contrast. Even higher contrast release stocks help battle theater projection issues (stray light, low theater screen luminance, lateral eye adaptation) which tend to make darks appear brighter. The gamma of the whole system, from scene to projection, is higher than 1, often higher than 1.5, depending on the release stock (for example, a 0.6 negative printed onto a roughly 2.6 gamma print stock gives a system gamma around 1.5). Hence, the question about FC. There is an abundance of FC footage floating around the internets which looks unnaturally flat (apparently, for many people lifted blacks = filmlike), and definitely not the way a specific stock really looks when printed. I doubt that FC defaults to digital cinema contrast - digital cinema gamma curves actually have higher contrast than computer displays (sRGB, 2.2), exactly due to the projection issues mentioned above.
  17. Looks very neat. I have a question about FilmConvert. Do you select a print (release) stock in combination with the negative film stock? In reality, there is NO usable look to a film negative until you print it. And a negative can look one way when printed on a low contrast release stock, another way when printed on a higher contrast release stock, etc.
  18. There are some tests on BMCuser that indicate (at least some) s16 lenses may actually work pretty well in terms of coverage. A wide lens diameter on the mount side may prevent mounting, though.
  19. [quote name='HurtinMinorKey' timestamp='1344440244' post='15147'] cpc, don't you think that lighting(on the subject) is more important than DOF for making an image appear less flat? [/quote] Surely light is important. I am only discussing the characteristics of the image inherent to the camera (in this case, the sensor).
  20. Well, a few points: 1) The image we see is video compressed for web, which is quite heavily compressed. We can only judge compression once John is allowed to post DNG files. 2) The size of the light capturing area does affect SNR. Larger pixels will generally give crisper images due to lower noise. 3) Somewhat related: in terms of DOF, there is some weird notion on the web that shooting shallow focus gives more depth to the image. This is so full of bull. People should pay more attention to the semantics of the words "deep" and "shallow". A [i]slight[/i] defocusing can lead the eye to important image subjects and add some perceptual depth to an image. Very shallow focus, on the other hand, only produces flat images. There is plenty of "slight defocusing" (pun intended) with a sensor of this size.
  21. The thing with resolving power and contrast is that you rarely need the very high spatial frequencies. For video, small-to-medium prints and web viewing, at least. Yet most lens reviews obsess over pointless measurements of the extremes, as if they are going to print posters... Resolving power and microcontrast are different things. You need not simply resolving power, but good microcontrast at the frequencies that actually affect viewing. Pixel count can define the maximum frequency that needs to be taken into consideration (unlike film, which needs both print size and viewing distance to give anything meaningful as a quantity). Note that most decent lenses will resolve above 0% MTF up to very high frequencies, so technically they have lots of resolving power even if lacking in microcontrast.
     It is important to know that the fewer pixels you need, the more critical the [i]low[/i] spatial frequencies are. For example, a lens with [i]great[/i] microcontrast at low spatial frequencies and weak results at high frequencies will deliver a more brilliant image than a lens with just [i]good[/i] results at both low and high frequencies, if you actually only care about a 1280x720 image and not tens of megapixels. This is why some Leica and Zeiss lenses produce exceptional images for web viewing/small print purposes even on APS-C sensors. They simply have exceptional results at 5, 10 and 20 lp/mm.
     And yes, if you keep the number of pixels but decrease the image field size (or sensor size), you effectively increase the maximum spatial frequency influencing the image, proportionally to the crop factor. As pretty much all lenses have an MTF that decreases with spatial frequency, it is easy to see why the smaller pixels will lose brilliance.
     There is of course more that goes into a poppy image: consistently good microcontrast across the whole image field and well corrected aberrations are also important, because it is the exceptionally crisp edges above anything else that lead to an image with 3d pop. A subject in complete focus also helps a lot.
     There is a great article on Zeiss's site about MTF charts and what they mean:
     Part 1: [url="http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf"]http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf[/url]
     Part 2: [url="http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_31_MTF_en/$File/CLN_MTF_Kurven_2_en.pdf"]http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_31_MTF_en/$File/CLN_MTF_Kurven_2_en.pdf[/url]
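     As an aside, a back-of-the-envelope sketch of how pixel count (or delivery size) pins down the highest frequency that matters, for two assumed sensor widths:

     [code]
     # Delivery Nyquist limit projected onto the sensor, in lp/mm.
     def max_relevant_lp_mm(delivery_width_px, sensor_width_mm):
         return delivery_width_px / (2 * sensor_width_mm)

     print(max_relevant_lp_mm(1280, 36.0))  # ~17.8 lp/mm on full frame
     print(max_relevant_lp_mm(1280, 15.6))  # ~41.0 lp/mm on a ~2.3x crop
     [/code]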
  22. [quote name='jgharding' timestamp='1344418324' post='15116'] Still I see very few final products that come from small sensors that don't feel slightly... flat... in a depth sense. Something I don't see as often with 135 full frame regardless of detail or resolution. Does anyone else get what I mean? Or am I off my swede? Is it actually just the lighting or the lack of subtle post in many productions shot with small-sensor cams? [/quote] This is normal and is related to how lens microcontrast interacts with sensor pixel density. At the same image resolution, subject "pop" will be more noticeable in an image made with a big sensor/film size. This is usually immediately noticeable with still images but is generally masked by video compression issues in moving pictures. Without going into much detail: in order to achieve the same pop and brilliance with a smaller sensor, you would need a lens with the same MTF result as the lens used with the bigger sensor, but at a [i]significantly higher[/i] spatial frequency. For example, if you have a 95% MTF result at 10 lp/mm, you need a lens with 95% at 23 lp/mm for a 2.3x crop camera to achieve the same brilliance and pop. And again, video compression artefacts generally screw the image enough that this does not matter.
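     The arithmetic from the example above, for completeness: the required frequency simply scales with the crop factor.

     [code]
     # Same framing, same delivery size: the spatial frequency a lens
     # must render on the sensor scales with the crop factor.
     def equivalent_lp_mm(full_frame_lp_mm, crop_factor):
         return full_frame_lp_mm * crop_factor

     print(equivalent_lp_mm(10, 2.3))  # 23.0 lp/mm on the 2.3x crop sensor
     [/code]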
  23. [quote author=Andrew Reid - EOSHD link=topic=701.msg5080#msg5080 date=1336520564] [quote author=riogrande100 link=topic=701.msg5076#msg5076 date=1336514572] What about PCs with ATI graphics cards? [/quote] As long as the card supports OpenCL, and your drivers do, should be fine. [/quote] Anyone tested this? My understanding was OpenCL is specific to MacOS. Probably limited by some shady agreement with nVidia or stuff. :)
  24. Dynamic Range

    Axel has given a nice overview. Raw dynamic range, baked dynamic range and usable dynamic range should be distinguished. Raw dynamic range is the range of the raw quantized signal. Baked dynamic range is the range when the raw data is further quantized according to the output bit-depth. For example, Canon DSLR raw is 14-bit and the output movies (and JPEGs) are 8-bit. 8-bit images in general do not exceed 9 stops of DR because they are meant to be shown on consumer 8-bit displays. Add more DR and the image starts looking HDR and lacks contrast. 8.3-8.5 stops is typical. Usable dynamic range is the DR that contains recognizable detail - detail that can be played with in post. This is around 6 stops for a typical gamma encoded 8-bit video. The darkest stops have some tonal gradation, but not any real detail. Here is a more detailed overview on [url=http://www.shutterangle.com/2012/canon-picture-styles-shooting-flat-or-not/][u]picture styles and dynamic range[/u][/url] that I've written recently.

    Highlights can't really be saved on a DSLR camera (and most digital cameras, for that matter) because the baked DR is usually mapped to the upper limit of the raw DR. This is because the higher stops of raw DR have the cleanest signal, due to the high signal-to-noise ratio there. Blacks, on the contrary, are always noisy and thin because the SNR is low. This is also the reason noise gets excessive when blacks (underexposed areas) are pushed up in raw processing.
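    To put rough numbers on "the darkest stops have some tonal gradation but not any real detail", here is a small sketch counting 8-bit codes per stop under an assumed pure 2.2 gamma encode (real camera curves differ, but the shape holds):

    [code]
    import math

    # Count how many 8-bit codes land in each stop below clipping,
    # assuming scene-linear values encoded with a pure 2.2 gamma.
    codes_per_stop = {}
    for code in range(1, 256):
        linear = (code / 255) ** 2.2          # decode back to scene-linear
        stop = math.floor(math.log2(linear))  # 0 = white point, -1 = top stop
        codes_per_stop[stop] = codes_per_stop.get(stop, 0) + 1

    for stop in sorted(codes_per_stop, reverse=True):
        print(f"stop {stop:3d}: {codes_per_stop[stop]} codes")
    # The top stops get dozens of codes each; the deep shadows only a
    # handful, which is why pushed blacks fall apart so quickly.
    [/code]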
  25. I actually made a simple experiment. I showed both clips to 4 different people (all in their 30s), all moviegoers, but none of them technically proficient about film, and asked them which one appears more "cinematic" to them (without briefing them about the difference). Guess what... Two of them said the 48 fps version is more cinematic. Two of them said the 24 fps version is more cinematic. Nothing scientific about this, obviously, but it still offers some food for thought. Apparently the average moviegoer doesn't care about frame rates. Or at least doesn't care as much as we do.