Everything posted by jcs

  1. I used Vegas to edit stereoscopic 3D footage a few years ago (versions 9, 10, and 11). For simple edits it was OK, but for anything more than that it was quite buggy and unstable. It was also a bit slow. I figured both issues were related to the use of MS C# and all the dynamic dependencies and updates. For $30 a month you can now get all of Adobe's tools. Premiere CC is pretty solid and very fast on both Windows and Mac. You could also use ffmpeg and/or something like Handbrake to batch convert ProRes HQ to DNxHD 10-bit 422 (a sketch of such a batch conversion is below).
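    A minimal batch-conversion sketch, assuming ffmpeg is on the PATH and the sources are 1080p23.976 ProRes HQ; the folder names are placeholders:

    ```python
    # Hedged sketch: batch-convert ProRes HQ to DNxHD 175 Mbps 10-bit 4:2:2
    # via ffmpeg. The 175M bitrate assumes 1080p23.976 sources (the dnxhd
    # encoder requires matching size/rate/bitrate combinations).
    import pathlib
    import subprocess

    src = pathlib.Path("prores_in")    # placeholder input folder
    dst = pathlib.Path("dnxhd_out")    # placeholder output folder
    dst.mkdir(exist_ok=True)

    for clip in sorted(src.glob("*.mov")):
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "dnxhd", "-b:v", "175M",   # DNxHD 175x profile
            "-pix_fmt", "yuv422p10le",         # 10-bit 4:2:2
            "-c:a", "pcm_s16le",               # uncompressed PCM audio
            str(dst / clip.name),
        ], check=True)
    ```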
  2. 5D3 + ML = definitely a good value! As the RAW workflow improves, the value goes up. I A/B compared AMaZE against ACR (which has sharpening at 25 on by default) and a small amount of ACR NR. AMaZE looks very similar, and high-contrast edges weren't as bad as first thought for ACR. Since ACR is one of the top deBayer tools, that's good news. After further testing, this tool could use some help with WB and colorspace processing. ACR on default settings (along with correct WB) produces more accurate colors and saturation levels. This MLV tool outputs very low-saturation color- almost sepia-like in my lowish-light tests (indoor, natural tungsten lighting, no added studio lighting). Processing time with AE set to render multiple frames (4-core MBP) was about the same vs. the ML RAW tool (which doesn't use all CPU cores). The ML RAW tool is great for watching RAW clips in real-time (still some bugs with audio cutting short). Future updates should address these issues.
  3. That's true- JPEG supports 444, 422, 420, lossless, etc. However, when comparing images the compression level needs to be the same (most importantly, the DCT quantization). The rescaled web images are likely 420, while the downloads could still possibly be originals. The point about 420 is that web-facing content is typically 420 (with rare exceptions, due to the desire to conserve bandwidth and save costs). If at 100% A/B cycling we can't see any difference, then there's nothing gained, since the results aren't visible (except when blown up or via difference analysis, etc.). Scaling an image down (with e.g. bicubic) then scaling it back up acts like a low-pass filter (smooths out macroblock edges and reduces noise). I downloaded the 4K 420 image, scaled it down then back up, and A/B-compared in Photoshop- couldn't see a difference other than the reduction of macroblock edges (low-pass filtering), etc. Adding a sharpen filter got both images looking very similar. A difference compare was black (not visible); however, with gamma cranked up, the differences from the effective low-pass filtering are clear (a sketch of this test is below).
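    A minimal sketch of that down/up-scale-and-difference test, assuming Pillow and NumPy are installed; "4k_420.jpg" is a placeholder filename, and a linear brightness boost stands in for cranking the gamma:

    ```python
    # Scale down 2x, scale back up (acts like a low-pass filter), then
    # compute and boost the difference so it becomes visible.
    import numpy as np
    from PIL import Image

    img = Image.open("4k_420.jpg").convert("RGB")
    w, h = img.size
    filtered = img.resize((w // 2, h // 2), Image.BICUBIC).resize((w, h), Image.BICUBIC)

    # Absolute difference: near-black until the brightness is boosted.
    a = np.asarray(img, dtype=np.int16)
    b = np.asarray(filtered, dtype=np.int16)
    diff = np.abs(a - b).astype(np.float32)
    boosted = np.clip(diff * 16.0, 0, 255).astype(np.uint8)
    Image.fromarray(boosted).save("difference_boosted.png")
    ```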
  4. It is surprisingly clean in testing so far. It also looks decent without having to adjust white balance or colors- looks very similar to what LiveView showed during recording. mlvbrowsesharp had issues with the >4GB option enabled (exFAT-formatted CF cards). Testing a 23.6GB file right now (4m30s). In the viewer, the audio cut out around frame 1500; however, the extracted .wav file is 100% OK. Converting to ProRes now: it seems a bit slower than the initial test with a smaller file. On this quad-core laptop (late 2013 MBP (fastest)), ffmpeg is using ~143% CPU and 97% for the host process. Ideally it would sum to 400% (100% CPU utilization); running two copies at once should work... Took about 50 minutes to process 4m30s; a little slower than half the speed of my 12-core Mac Pro using ACR+AE (which runs at about 1/4 real-time). Audio in the MOV stopped at 1:50. The initial, fully extracted .wav file was OK (full 4:30), so I was able to line it up and test in PPro (it was fine). Will try pressing 'E' with the viewer stopped at the first frame (last time I pressed 'E' a second or so after playback started). Reading his TODO list, it looks like vertical stripe removal is on the list. That said, so far the shadow noise in my tests doesn't look too bad: it actually looks worse in ACR before using NR. Interestingly, on super-high-contrast edges, the AMaZE deBayer looks much better than ACR (it doesn't have an unnatural hard edge with color artifacts). Tested so far on OSX (latest). I didn't have to install anything else (such as python or ffmpeg)- using the DMG download, everything worked as is.
  5. Drop an MLV on the application and you can view the MLV with sound in real-time. Press E to encode to ProRes HQ (10-bit 422): http://www.magiclantern.fm/forum/index.php?topic=9560.0 (no GUI- all keyboard commands). I've been using ACR and AE with other workflows (mlvbrowsesharp etc.), resulting in a 175Mbit/s 10-bit 422 DNxHD file for editing. The final quality has been excellent; however, the workflow was cumbersome. mlvrawviewer uses the AMaZE algorithm to deBayer, with excellent results (when paused and when exporting to ProRes; live playback with bilinear on the GPU is also decent). While there are no Bayer sharpening or denoising options in this workflow vs. ACR or Resolve 10, I am impressed with the quality and the time to process files directly to ProRes HQ (tested so far on my MBP laptop- will work well for on-set processing). As the 14-bit ML RAW tools improve, the 5D3 is indeed becoming a baby Alexa B). Great job everyone involved!
  6. Julian's images: saving the 4K example at quality 6 creates DCT macroblock artifacts that don't show up in the 444 example at quality 10. All the images posted are 420: that's JPG. To compare the 1080p 444 example to the 4K 420 example: bicubic-scale up the 1080p image to match exactly the same image region as the 4K image (the examples posted are different regions and scales). The 1080p image will be slightly softer but should have less noise and fewer artifacts. Combining both images as layers in an image editor, then computing the difference (and scaling the brightness/gamma up so the changes are clearly visible), will help show exactly what has happened numerically; helpful if the differences aren't very obvious on visual inspection. We agree that 420 4K scaled to 1080p 444 will look better than 1080p captured at 420 (need to shoot a scene with the camera on a tripod and compare A/B to really see the benefits clearly). 444 has full color sampling per pixel, vs. 420 having 1/4 the color sampling (1/2 vertical and 1/2 horizontal). My point is that we're not really getting any significant color-element bit-depth improvement of the kind that allows significant post-grading latitude as provided by a native 10-bit capture (at best there's ~8.5-9 bits of information encoded after this process: it will be hard to see much difference when viewed normally (vs. via analysis)). Another thing to keep in mind is that all >8-bit images, e.g. 10-bit (30-bit RGB), need a 10-bit graphics card and monitor to view. Very few folks have 10-bit systems (I have a 10-bit graphics card in one of my machines, but am using 8-bit displays). On 8-bit systems, >8-bit images need to be dithered and/or tone-mapped to 8-bit to take advantage of the >8-bit information. Everything currently viewable on the internet is 8-bit (24-bit) and almost all 420 (JPG and H.264). Re: H.264 being less than 8 bits- it's effectively a lot less than 8 bits, not only from the initial DCT quantization and compression (for the macroblocks), but also from the motion vector estimation, motion compensation, and macroblock reconstruction (which includes fixing the macroblock edges on higher-quality decoders).
  7. iMovie is a decent basic editor for iPhone and iPad. Haven't tried editing DSLR footage without transcoding; however, since 1080p is supported it should work (the issue might be space on the device). The iPhone 5S has a fairly powerful 64-bit CPU and a decent GPU (in our app, Twrrl, we capture 1080p video, decode an H.264 video (with a synthesized alpha channel), composite the videos together along with GPU video and CPU audio effects, then write to a compressor at 1080p on the 5S (in real-time)).
  8. Example images? Can post a 4K and 1080p* image for a single-blind trial B) *1080p and 540p would also work.
  9. First we must consider FSAIQ: http://www.clarkvision.com/articles/digital.sensor.performance.summary/ Once that has been determined, we can figure out how many gigabecquerels of radiation from Fukushima must be shielded to prevent excess sensor noise, depending on exact GPS position on the Earth, to maximize image quality before sub-quantum supersampling in the complex spectral domain needed to yield one perfectly constructed pixel after full image reconstruction in the real spatial domain. On a serious note, there's nothing significant to be gained from 4K => 1080p resampling in terms of color space / bit depth. Anyone can up- or down-size an image in an image editing tool to test this (Lanczos, Mitchell-Netravali, Catmull-Rom, bicubic, etc. won't matter). In terms of aliasing/artifacts and detail preservation, this is helpful: http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html#__section002__
  10. If the camera has manual level control for the mic, a JuicedLink* or any other high-quality preamp will sound better than just about all inexpensive external recorders (Zoom, Tascam, etc.). For example, I have a DR100Mk2, and a Sound Devices preamp into the 5D3 sounds much better and has far less noise. Dual audio is also more work. Some low-cost options that appear to work really well: http://cheesycam.com/inexpensive-dual-xlr-microphone-preamp/ http://www.dslrfilmnoob.com/2012/11/25/irig-pre-hack-cheap-xlr-phantom-power-preamp-dslr/ * While the JuicedLink provides clean gain, it doesn't sound as good as other preamps- perhaps thinner, less detailed, less rich/warm, etc. Sound Devices preamps are expensive; however, they sound awesome and can't be clipped (excellent limiter design).
  11. It's clear that if we add four 8-bit values together, along with random sensor noise, we'll be able to cover 0..1023 (10 bits). One issue is that by adding the samples instead of averaging them (averaging reduces noise), we will increase the noise floor. Since dynamic range runs from the noise floor through saturation, increasing the noise floor isn't helpful (though it may actually help reduce banding). Scaling 4K to 1080 in NLEs will effectively be an averaging operation (and a low-pass filter), which will result in noise reduction. Averaging 4 floating-point values allows more tonal values vs. the 256 values from a single sample (see the simulation sketch below). A way to test what an NLE might do would be to take a 1080p frame, resize it to 540p, and see how it does with banding-prone sky imagery.
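    A toy simulation of the averaging argument, assuming NumPy; the noise level and test value are arbitrary choices for illustration:

    ```python
    # Four noisy 8-bit samples of the same "true" level, averaged in
    # floating point, vs. a single 8-bit sample per pixel.
    import numpy as np

    rng = np.random.default_rng(0)
    true_level = 100.25                         # sits between two 8-bit codes
    noisy = true_level + rng.normal(0.0, 2.0, size=(1_000_000, 4))
    samples = np.clip(np.round(noisy), 0, 255)  # quantize to 8-bit codes

    single = samples[:, 0]                      # one sample per pixel
    averaged = samples.mean(axis=1)             # 2x2 average (4K -> 1080p)

    print(single.std(), averaged.std())         # averaging roughly halves noise
    print(len(np.unique(single)), len(np.unique(averaged)))  # more tonal steps
    ```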
  12. Photons captured by the sensor have a random component (noise)- 4 sensor samples added together get us 2 more bits of signal with an effective low-pass filter (similar to the C300's dual green sampling). Keep the random sampling component in mind when thinking about specific cases.
  13. Premiere should work- it converts everything to 32-bit float (with GPU acceleration when available, so it will be very fast).
  14. Any workflow that can downsample 4K to 1080p at 10 bits or more should be able to produce 9-bit 444 RGB. It won't be a full 10 bits, as the chroma was only 8-bit; however, the luma mixed in with the chroma when converting to RGB will be 10-bit, so really more like 9 bits at the end of the day (the back-of-the-envelope arithmetic is sketched below).
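    A hedged back-of-the-envelope sketch of that estimate; the halfway split at the end is a rough heuristic, not an exact result:

    ```python
    # 4K 4:2:0 -> 1080p 4:4:4: each 1080p pixel averages a 2x2 block of 4K
    # pixels. That block holds four 8-bit Y samples but only one 8-bit
    # Cb/Cr pair, so averaging adds code values to luma while chroma
    # gains nothing.
    import math

    luma_samples = 4     # Y samples per 2x2 block
    chroma_samples = 1   # Cb/Cr pairs per 2x2 block in 4:2:0

    luma_bits = 8 + math.log2(luma_samples)      # 10.0 (quarter-step codes)
    chroma_bits = 8 + math.log2(chroma_samples)  # 8.0 (unchanged)
    print((luma_bits + chroma_bits) / 2)         # ~9.0 effective bits in RGB
    ```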
  15. @yellow- while YUV (et al.) <=> RGB is technically lossy ( http://en.wikipedia.org/wiki/YUV ), it's not nearly as lossy as the DCT, quantization, and motion interpolation (when interframe compressed). I would be surprised if one could see any significant difference after converting to/from YUV and RGB. My guess is they compute: 420 to 444 YUV, then transform YUV to RGB, then downsample to 1080 from 4K (a sketch of that guessed pipeline is below). Quality will be excellent.
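    A minimal sketch of that guessed pipeline, assuming NumPy, full-range BT.709 with chroma centered at 0, y of shape HxW, and cb/cr of shape (H/2)x(W/2); real tools would interpolate the chroma rather than replicate it:

    ```python
    import numpy as np

    def yuv420_to_rgb(y, cb, cr):
        # 1) 420 -> 444: replicate each chroma sample over its 2x2 block
        cb444 = cb.repeat(2, axis=0).repeat(2, axis=1)
        cr444 = cr.repeat(2, axis=0).repeat(2, axis=1)
        # 2) YUV -> RGB (BT.709 full-range coefficients)
        r = y + 1.5748 * cr444
        g = y - 0.1873 * cb444 - 0.4681 * cr444
        b = y + 1.8556 * cb444
        return np.stack([r, g, b], axis=-1)

    def downsample_2x(rgb):
        # 3) 4K -> 1080p: average each 2x2 block (a simple box filter)
        h, w, c = rgb.shape
        return rgb.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    ```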
  16. Many of the effects work in RGB- I would expect everything to be converted to RGB after loading (and I don't see any reason why there would be any noticeable quality loss when everything is computed in 32-bit float). Ringing and halos come from the negative lobes of the filter (see the link, which describes how the filter works); however, Lanczos 2 is supposed to provide the best balance of quality vs. ringing/halos (the kernel is sketched below). The goal is to downsample (in this case) while preserving as much detail as possible and preventing aliasing. The real-time versions perform a Gaussian blur before downsampling with bilinear (the result will be a softer image). All pixels are processed the same (edges don't get special treatment).
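    A small sketch of the Lanczos-2 kernel itself, assuming NumPy; the values that dip below zero are the negative lobes responsible for ringing/halos:

    ```python
    import numpy as np

    def lanczos2(x):
        # L(x) = sinc(x) * sinc(x/2) for |x| < 2, else 0
        # (np.sinc is the normalized sinc: sin(pi*x) / (pi*x))
        x = np.asarray(x, dtype=np.float64)
        return np.where(np.abs(x) < 2.0, np.sinc(x) * np.sinc(x / 2.0), 0.0)

    xs = np.linspace(-2.0, 2.0, 9)
    print(np.round(lanczos2(xs), 4))  # negative values near |x| = 1.5
    ```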
  17. Adobe describes scaling via a Lanczos 2 low-pass combined with bicubic sampling here: http://blogs.adobe.com/premierepro/2010/10/scaling-in-premiere-pro-cs5.html
  18. Scaling down to 1080 in Premiere will be very high quality (Lanczos+bicubic) and will easily run in real-time with CUDA/OpenCL: http://blogs.adobe.com/premierepro/2010/10/scaling-in-premiere-pro-cs5.html (Max Quality is always on for scaling with GPU accel). 420 QuadHD and higher will indeed scale down to 444 1080p (with the additional vertical averaging acting as a low-pass filter to help reduce aliasing). Depending on the compression quality, shooting in 420 QuadHD can produce higher quality vs. 1080p. If, for example, AVCHD 1080p ultimately does a better job due to bitrate relative to frame size, it might look better in some scenes (>HD resolutions appear to be limited by card write speeds).
  19. ProRes render is available on OSX. Get the Avid DNxHD codecs (free from Avid). I use DNxHD 175Mbps 10-bit on Windows.
  20. NVIDIA upgrade: via an external Thunderbolt 2 PCIe chassis.
  21. Audio Software: For both Protools and Reaper you'll want to get a decent set of plugins. Native Instruments Komplete is a good start. On a low budget, there are lots of free VST and AU plugins. Logic is a great deal- plenty of content included for a fair price.
  22. 2010 Mac Pro 2.93GHz 12-core, 24GB RAM, GT120 and Quadro 5000, Mercury SSDs, 7200 RPM drives- 5TB (+ external 5TB SATA cage, and a 4-port SATA drive cartridge system (stick bare drives in it for archival copying and offline storage)). Running in Win7 most of the time- faster, more stable (also boot into OSX). Runs Resolve well, plus PPro, AE, etc. (CS6 and CC).
  23. MJPEG-style intraframe codecs (ProRes, DNxHD) at 10-bit 422 are plenty, along with a decent-quality in-camera de-Bayer. If using a C100-C500 sensor, then no de-Bayer is needed. Cards are plenty fast... They won't even provide 422 10-bit H.264; that level of quality is reserved for the pro cameras and is not likely to be available from Canon for consumer or prosumer cameras for some time. RAW has its place; however, RAW is a huge time and space burner and a stop-gap until cameras get more powerful processors (or stop using Bayer-pattern sensors).
  24. FS700: Sony has reserved internal 4K recording via XAVC H.264 for the F55. 2K/HD 10-bit 422 XAVC (after the 12-bit RAW upgrade) would be nice, but it's not clear whether this is possible (it would also step on the F5's market).