Everything posted by cantsin

  1. Here's one sample DNG frame - the full 168 MB sequence doesn't make a real difference since it's a static image. Blackmagic Cinema Camera_1_2017-06-21_2318_C0000_000021.dng
  2. Okay people, I produced an 8bit and a 10bit reference video file for you to test & settle your dispute. They were produced with a high-quality imaging pipeline (2.5K BM Cinema Camera raw -> Resolve -> Adobe Media Encoder) but encoded in such a way that they represent the difference between 8bit and 10bit video recording in a camera like the GH5 under ideal conditions (i.e. supposing that the video signal processing and internal codecs of the GH5 are as good as the pipeline CinemaDNG -> Resolve -> AME).

     What I did:
     - Pointed a fresnel spotlight at a white wall so that it created a high-contrast gradient;
     - shot it with the Blackmagic Cinema Camera 2.5K in CinemaDNG raw, exposed to the right (i.e. with no clipping of the highlights, using the sensor's maximum dynamic range);
     - imported it into a 1920x1080 project in DaVinci Resolve and used Resolve's color space transformation tool to transform the color space to Panasonic V-Log;
     - rendered it out as 16bit uncompressed TIFFs for maximum quality and no loss of color depth;
     - imported the TIFF sequence into Adobe Media Encoder and rendered it out twice: as 10bit h264 at 200 Mbit/s, and as 8bit h264 4:2:0 at 200 Mbit/s. (Note: Unfortunately, I could only use 4:2:0 color sampling for the 10bit file because h264's High 10 profile does not support 4:2:2. I also tried ffmpeg/x264, but it doesn't support 10bit 4:2:2 either. Panasonic's own implementation of 10bit 4:2:2 h264 recording in the GH5 is non-standard, which caused the well-known NLE compatibility problems when the camera came out.)

     Perhaps surprisingly, the 10bit file is much smaller than the 8bit file; I "blame" more efficient compression in the newer h264 High 10 profile for this difference. (A rough ffmpeg equivalent of the encoding step is sketched below.)

     How you can play with the footage:
     - Import it into your NLE;
     - apply a V-Log-to-Rec709 LUT or color space transformation;
     - compare the banding of the gradient.

     I have, btw., not done the latter myself yet because I want to be impartial and let myself be surprised by the outcome. gradient_test-8bit_vs_10bit.zip
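     For reference, here is roughly how the two encodes could be reproduced with ffmpeg instead of Adobe Media Encoder - a minimal sketch, not the exact pipeline I used: the filenames and the 25fps frame rate are assumptions, and the 10bit variant may require an x264 build with 10bit support (older ffmpeg builds shipped 8bit-only x264):

         # 10bit 4:2:0 h264 at 200 Mbit/s from the 16bit TIFF sequence
         ffmpeg -framerate 25 -i gradient_%06d.tif -c:v libx264 -pix_fmt yuv420p10le -b:v 200M gradient_10bit.mp4

         # 8bit 4:2:0 h264 at the same bitrate
         ffmpeg -framerate 25 -i gradient_%06d.tif -c:v libx264 -pix_fmt yuv420p -b:v 200M gradient_8bit.mp4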
  3. I use an EOS-M100 with the 22mm/f2 as my go-to, always-in-my-bag camera - not for video, only for stills. I wouldn't have understood the popularity of the system either if I hadn't gathered practical experience with it. My gateway drug was a second-hand EOS-M bought at a ridiculous price ($100 including the 22mm) for experimenting with Magic Lantern and Super 8 lenses (see the parallel discussion thread).

     The system is popular, IMHO, because it is very well-rounded. You get bodies that are as small as MFT cameras. The M100 is even smaller and lighter than the Panasonic GX80 (108 x 67 x 35 mm/300g vs. 122 x 71 x 44 mm/380g). Still, it sports Canon's newest-generation APS-C sensor with about twice the usable ISO (and half a bit more color depth) of the GX80. The 22mm is optically superb, as small and light as Panasonic's pancake lenses but better because it's optically corrected, with almost zero distortion and no need for software geometry correction. The user interface is, typically Canon, very well thought out and manages to combine point & shoot with full manual controls through a clever and practical combination of dials and touchscreen interface. Touchscreen focus+shutter in combination with Dual Pixel AF is simply superb.

     With the 22mm lens, the M100 is an excellent street photography camera, one that you can pull out of your bag, blindly dial from muscle memory to either full auto mode or a manual/program setting, hit focus and nail the shot - all in only a few seconds. Much better, in almost every respect, than the Panasonic GM1 + 20mm pancake combo that I had used before.

     Yes, the camera has fewer features than you get from other manufacturers. There's no level gauge, no silent/electronic shutter, sucky/nearly useless mush-o-vision video quality, and the sensor performance, while good, is still not as good as that of current-generation Sony APS-C sensors. The features the camera does have, however, are really, really well implemented. There's no other APS-C mirrorless body/lens combo that competes for quick street shooting IMHO: the Fuji X100 series is bulkier despite having a fixed lens, less robust because of its EVF, more expensive and made for slower shooting; the Sony A5100 - with about the same body size and weight as the M100 - is dated, has an inefficient, convoluted interface and, most significantly, lacks an equivalent of the 22mm/f2 lens.
  4. Most c-mount lenses will not cover the sensor, not even in the sensor crop mode for 4K recording. Some math: the EOS-M50's APS-C sensor measures 22.3mm x 14.9mm. A 1.6x crop yields an effective sensor size of about 14mm x 9.3mm in 4K video mode, so the required image circle is still about 12% bigger than for Super 16 (12.5mm x 7.4mm) - see the quick check below. There are only a few c-mount lenses that officially cover S16/1": the Zeiss Jena Tevidons, the Canon TV16s and Vxx zooms, plus a few select Pentax/Cosmicar lenses. Others cover S16 unofficially, but only above 25mm focal length (portrait tele and beyond) and often with blurry corners. With the EOS-M50, there's a risk that even lenses designed for 1" sensors won't properly cover the sensor.
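     A quick sanity check of those numbers on the command line (using bc; 12.52mm is the standard Super 16 frame width):

         # Effective sensor width in the M50's 4K crop mode
         echo "scale=2; 22.3/1.6" | bc       # -> 13.93 (mm)
         # Ratio to the Super 16 frame width
         echo "scale=2; 13.93/12.52" | bc    # -> 1.11, i.e. roughly 12% wider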
  5. Made a test with the Angenieux 98-64mm/1.9 c-mount, just by taking stills, opening them in a graphics program and cropping out the center 1408x1030 pixels (this can also be scripted, see below). The lens definitely covers this resolution/sensor crop at different zoom and aperture settings. The corners look good, too.
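     The cropping step can also be done with ImageMagick instead of a graphics program - a one-liner sketch, with a hypothetical filename:

         # Cut the central 1408x1030 pixels out of a still
         convert still.jpg -gravity center -crop 1408x1030+0+0 +repage center.png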
  6. For the Schneider 6-66, I use a Novoflex Leica M-to-EOS M adapter. It's expensive, but worth the price because of its exact fit and precise machining.
  7. I don't even know how to activate 1408x1030 in ML on the EOS-M. Do you need the experimental build with 10bit/12bit and mlv_lite for that? (I've skipped this so far because the module doesn't record sound.)
  8. It matters for issues like banding in gradients (especially in log footage converted to Rec709 and graded), which is a systemic limitation of 8bit rather than an "artifact that some 8bit cameras produce". Attila's sample images, recorded from a Fuji X-T2 as high-quality 8bit 4:2:2 ProRes with an external recorder, provide excellent examples.
  9. Two conclusions: 1: It's the successor to the EOS-M5, but gains an additional digit in the model number. In Canon's model naming scheme, that means it has been relegated from a top-tier to a second-tier camera in its product group. So there will be a new Canon mirrorless camera above the EOS-M50: likely a new full-frame mirrorless camera. 2: Since 4K video is a feature of the new DIGIC chip, it will trickle down to all other Canon APS-C cameras: the DSLRs and the rest of the EOS-M line. By the end of 2019, even the Rebel DSLRs and the successor to the EOS-M100 will likely have 4K video. If Canon delivers good-enough 4K (at least something that can be downscaled to a first-class HD image), this could be a satisfying video solution for many if not most people, given the whole package of Canon colors and Dual Pixel AF.
  10. It's not actually the sensor, but the camera's signal processing, where you can't completely dial out artificial sharpening. This has been the issue of the GH series since its beginnings. Back in the good old GH2 days, people experimented a lot with c-mount lenses and Russian lenses to work against the camera's over-sharpness. Your post proves that this is, in principle, still good thinking (although one should indeed use better lenses than c-mounts).
  11. Thanks @Attila Bakos, that makes new tests of my own unnecessary.
  12. Same for me, I started with Super 8 and DV camcorders, always wanted to make moving images, and became mostly a photographer - gradually and by accident - after DSLR video became a thing almost a decade ago. Especially if you shoot documentary and on the street, it's much easier (and much more rewarding) to catch and frame small details and moments, and get really great images out of them. Video, in the end, is first of all about the flow and rhythm of images, much less about the image as such.
  13. Recorded with an external recorder, I guess? (So 8bit 4:2:2 ProRes without transcoding?)
  14. I will make a better test with CinemaDNG source material from a Blackmagic Cinema Camera, downconvert it to a high-quality 8bit codec (DNxHR HQ), upsample it to 10bit with Neat, and see how well it solves banding when grading the material. Give me some time, though - this might take me a week. But it is a fact that temporal NR spreads 8bit colors into 10bit by greatly increasing the number of unique colors (you can verify the basic effect yourself, see the sketch below).
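     Plain frame averaging already demonstrates the principle, without any noise filter. A minimal ImageMagick sketch - the frame filenames are hypothetical (e.g. consecutive frames extracted from a noisy 8bit clip):

         # Average five consecutive 8bit frames into one 16bit image
         convert frame1.png frame2.png frame3.png frame4.png frame5.png -evaluate-sequence mean -depth 16 averaged.tif

         # Compare unique color counts before and after
         identify -format %k frame1.png
         identify -format %k averaged.tif

     On noisy footage, the averaged values fall between the original 8bit steps, so the unique-color count of averaged.tif goes up - the same mechanism that temporal NR exploits, minus the motion compensation.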
  15. @Don & @hyalinejim - my post was actually not about compression artifacts and macroblocks (I could have used Neat's spatial denoising to tackle those), but only about how best to upsample 8bit to 10bit. Maybe the test footage wasn't wisely chosen; I should rather have picked 8bit footage without these artifacts.
  16. To illustrate, here are two extreme grades of the same frame - one with temporal noise reduction applied, the other without. You mostly see macroblocking from the codec (which the noise filter also somewhat reduces). But if you pay attention to color gradients, such as on the underside of the stair or on the surface of the kitchen drawer, you see where the 8bit color breaks apart and how the noise filter smoothes it out. [View in 1:1 pixel size.]
  17. I already did, see the download link in my original post. The difference is only contained in the 16bit TIFFs and not really visible to the human eye (since these are ungraded log images). It wouldn't make sense to post the images here in the forum, since they would be downconverted to highly compressed 8bit JPEGs....
  18. Just tested temporal denoising with the built-in noise filter of Resolve Studio: this yields as many as 1.6 million unique colors, but with rather aggressive filter settings. (Neat produces higher image quality, in my opinion, so it's not worth upgrading to Studio just for the noise filter - the Neat plugin also works in the free version.) You apply the noise filter to nodes in the Color page, no matter whether you use Neat Video Pro as an OFX effect or Resolve Studio's built-in denoiser. You get the best results if you set the filter parameters for each clip individually. But you can also define a group node, use one filter setting for all footage, and put that group node at the beginning of your color correction chain. (You should always denoise the ungraded footage first and apply all other corrections afterwards.) The group-node method also makes it easier to turn noise reduction on and off for all clips with a single click - for example, when you need realtime playback while editing.
  19. Ikan VH8, $99

     Avoid external monitors with the BMPCC. Its HDMI port is soldered directly onto the mainboard and breaks easily if the cable is pulled too hard. When it breaks, your camera is toast. (Using cages with lock screws for the HDMI port helps, but doesn't completely solve the issue. Most broken BMPCCs died because of HDMI; it's the camera's known Achilles heel, discussed on forums again and again. If you want external monitoring, better use the Blackmagic Micro Cinema Camera, where this issue has been solved.)
  20. Here's an approach to high-quality upsampling of 8bit to 10bit video material that I haven't seen mentioned before. We've had several discussions on this forum about whether 4K 8bit footage can be converted to 2K footage with higher color depth, but my approach is different and doesn't result in a loss of pixel resolution. I first brought it up as a hypothesis on the German Slashcam forum, tested it, and voilà, it worked even better than I had expected: High-quality video noise filters (such as Neat Video Pro and the built-in noise filter of Resolve Studio) can do temporal denoising by comparing several neighboring video frames and averaging pixels between them. If these filters internally work with 10bit or more color depth, they should average/interpolate these color values with more subtle gradations than are available in 8bit, producing a picture with more than 8bit color depth. (The logic is pretty much the same as with super-resolution algorithms that compute a higher image resolution from adjacent frames - only that this method affects color resolution instead of spatial resolution.)

     So I did the following:
     - Downloaded Luke Neumann's out-of-the-camera test files of the GH5 from here, the ones that were shot at 180fps in 8bit 4:2:0 in V-Log;
     - imported the clip "Pete Fire Hand.MP4" into Resolve and dropped it into the timeline without applying any corrections;
     - exported one frame as a 16bit uncompressed TIFF;
     - ran ImageMagick's "identify" command on it to determine the number of its unique colors. It counted 17,210. [command syntax: "identify -format %k xy.tif"]
     - applied Neat Video Pro (OFX plugin) to the clip, using only temporal noise reduction, with 5 reference frames and the highest quality setting;
     - exported the same frame as a 16bit uncompressed TIFF;
     - ran ImageMagick's "identify" on the denoised frame; it now counted 1,194,446 unique colors.

     Both TIFF files can be downloaded from here: http://data.pleintekst.nl/chroma-upsampling.zip (13 MB)

     1.19 million colors is still much less than the roughly 16.7 million colors (2^24) available at 8bit color depth. So, to check whether the temporal denoising had really interpolated the colors into a higher bit depth, I exported an uncompressed 8bit TIFF of the denoised frame from Resolve and, in addition, used ImageMagick to convert the denoised 16bit TIFF into a second 8bit TIFF. The two images contained 39,471 and 39,473 unique colors, respectively. This is proof that most of those 1.19 million unique colors of the denoised 16bit TIFF were outside the 8bit color/gradation range. (This can already happen with a monochrome gradient of more than 256 gradations - theoretically, an image with only 257 unique colors can exceed 8bit color resolution.)

     1.19 million is about 69 times 17,210. When jumping from 8bit to 10bit, there are two more bits = 2^2 = 4 times as many values per color channel; in total, this amounts to 4^3 = 64 times as many color values. So the jump from the original image to the denoised image is even slightly bigger than from 8bit to 10bit. If we instead take the noise-filtered 8bit image with its 39,473 unique colors as the reference, the increase is still by a factor of 30 ≈ 2^4.9, i.e. almost 5 extra bits in total, or about 9.6 bits per color channel. In other words, it's fair to call this a high-quality 8bit-to-10bit upsampling. As opposed to dithering into higher bit depth, noise isn't added but actually removed! (A small command-line sketch for reproducing the color counting follows below.)
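     For anyone who wants to reproduce the color counting without Resolve, a minimal sketch with ffmpeg and ImageMagick - the frame position and filenames are assumptions; only the "identify -format %k" syntax is from my test above:

         # Extract one frame as a 16bit TIFF (rgb48le = 16 bits per channel)
         ffmpeg -ss 4 -i "Pete Fire Hand.MP4" -frames:v 1 -pix_fmt rgb48le frame16.tif

         # Count its unique colors
         identify -format %k frame16.tif

         # Requantize to 8bit and count again - colors that vanish sat
         # between the 8bit gradation steps
         convert frame16.tif -depth 8 frame8.tif
         identify -format %k frame8.tif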
  21. Or hacked Samsung NX... @Benjamin Hilton, isn't it time to unravel the mystery? We're pretty much through all possible options...
  22. Lenses

     Yes, this was shot with a TV16 c-mount 50mm/f1.8 (closeups) and a Carl Zeiss Jena Tevidon 10mm/f2 (wide shots) on the BM Pocket:
  23. Lenses

     To add to the above - this video was shot exclusively with the BM Pocket and a Tarcus 16-160mm/f1.8 c-mount zoom for 1" video cameras, a 2.3kg/5lbs beast of a lens that absolutely requires a rig and lens support. Shooting with the BM Pocket for four and a half years now has completely cured me of any camera upgrade mania.
  24. Could also be a first-generation Canon C300/C100 (which produced a beautiful image despite its 8bit limitation). Hard to tell: either this is a low-budget cam with raw or an older professional/semi-professional camera with great internal image processing. We're a DIY no-budget forum here, so the first guess would always be some sub-$2000 camera, either hacked or Blackmagic. (But it's striking that there's zero moiré in the images, which suggests a hacked 5D MkIII or a Cx00.)
  25. I'd also say that this was shot raw. No compression artifacts, very fine tonality and color gradations, no in-camera artificial sharpening, no in-camera noise reduction over-smoothing textures. My guess would be 5D MkII/III MagicLantern raw, or Blackmagic Pocket/Cinema Camera 2.5K raw.