Everything posted by jcs

  1. One of the videos shows them typing in code. Perhaps join the free beta?
  2. Yeah, the external GPU boxes are used by high-end colorists etc. I didn't debug that GPU site, lol :) On the question regarding blur: box, triangle, Gaussian- they are all the same filter, just applied more times :) Box is an averaging filter- let's call it box(). Then:

     Box:      box();
     Triangle: box(); box();
     Gaussian: int passes = 9; // more passes = more accurate
               for (int i = 0; i < passes; i++) box();

     That's not the only way to do triangle and Gaussian- it just shows they are all the same mathematically, just more expensive for more quality.
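     A runnable sketch of the same idea (boxBlur1D is my name; 3-tap box with clamped edges)- push an impulse through repeated box passes and it converges toward a Gaussian bell curve:

         #include <cstdio>
         #include <vector>

         // One 3-tap box (averaging) pass over a 1D signal; edges clamped.
         static std::vector<float> boxBlur1D(const std::vector<float>& in) {
             std::vector<float> out(in.size());
             for (size_t i = 0; i < in.size(); ++i) {
                 size_t l = (i == 0) ? 0 : i - 1;
                 size_t r = (i + 1 < in.size()) ? i + 1 : in.size() - 1;
                 out[i] = (in[l] + in[i] + in[r]) / 3.0f;
             }
             return out;
         }

         int main() {
             std::vector<float> signal = {0, 0, 0, 0, 1, 0, 0, 0, 0}; // impulse
             // 1 pass = box, 2 passes = triangle, many passes -> ~Gaussian
             for (int pass = 0; pass < 9; ++pass) signal = boxBlur1D(signal);
             for (float v : signal) std::printf("%.4f ", v); // bell-shaped kernel
             std::printf("\n");
             return 0;
         }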
  3. While x264 is the best H.264 encoder, the GPU-accelerated encoder in PPro is very good (and fast). Here's the Premiere Pro 16-40Mbps 1080p 23.976 2-pass preset I use for uploads: http://www.brightland.com/t/HD%201080p%2023.976%2016-40.epr.zip
  4. On a low-end system not doing any effects, 8-bit could be faster than float due to memory bandwidth (the only way to know is to test). Vegas is a particularly slow app in general compared to other NLEs (large parts are written in C#, which is ~2x slower (sometimes much slower) than C++), and it doesn't have a whole lot of GPU acceleration (I last used Vegas 11 when editing stereo 3D footage). H.264 compression is now GPU accelerated in Premiere and likely FCPX & other apps. MacVidCards has modified the GTX 780 to run on internal Mac Pro power ($750). He's down the street in Hollywood- I think he might be doing his card-conversion business full time now (the only local source I know of for recent NVidia GPUs for Macs). Lots of Hollywood/LA folks are using Macs for Resolve, etc. (which heavily uses the GPU). I'm still using a Quadro 5000- will probably upgrade soon (current consumer cards are much faster: the Q5000 has 352 cores, the GTX 780 has 2304). This is a cool site for comparing GPU performance- while the Q5000 is slightly better than the GTX 750M, the GTX 780 is "massively better" than the Q5000 (except for power consumption): http://www.game-debate.com/gpu/index.php?gid=880&gid2=626&compare=geforce-gtx-780-vs-quadro-5000
  5. Sounds like a GH3/GH4 will give you the detail and quality you are looking for straight from the camera. I currently shoot with a 5D3 (mostly RAW lately) and an FS700 + Speedbooster (when I need autofocus, pro audio, or slow motion). RAW looks great, but it's a lot of work, time, and disk space. The 24 Mbit/s files from the FS700 are a bit overcompressed (I have a Nanoflash external recorder, but that's extra weight and complexity). The GH3 has a decent bitrate, and the GH4 does as well (including 4K support, which will make very detailed 1080p in post). A GH3/GH4 with a Speedbooster and the Sigma 18-35 F1.8 would make a nice combo for what you are shooting.
  6. Adobe used to have Pixel Bender (but never for Premiere); now Red Giant is releasing what could be a very cool technology: a fast and relatively easy way for developers (and crafty end users) to write GPU-accelerated effects: http://www.redgiant.com/store/universe/ Maybe this will motivate Adobe to open up a GLSL/OpenCL API for Premiere ;)
  7. 32-bit floating point has been faster than fixed-point integer since at least 2000 on CPUs (that's when I stopped using fixed point for my flight and driving simulators). Around that time integer could still be faster for certain isolated operations, but when benchmarking a full application with complex operations, float had passed fixed-point integer (even when the integer version was written in assembly).

     Floating point also makes the code much simpler- we can defer clamping, for example, until the very last moment (when converting back to 8-bit). Otherwise, with fixed point, we have to check for overflow much more frequently. The weird FS700 black pixels on strong highlight edges look like the familiar integer-overflow type of bug (where for performance reasons they're not clamping at the right time). A sketch of the difference follows this post.

     GPUs are crazy fast- designed for complex real-time games- so 4K isn't a big deal for GPUs these days. NLEs, on the other hand, are mostly based on old tech (including Vegas, Avid, Premiere, and especially After Effects (no real GPU acceleration)). FCPX is modern, though not any faster than PPro on the same hardware (and in many cases slower when I tested it). DaVinci Resolve has an excellent GPU-based engine- the fastest NLE-ish tool I have used so far. NLE developers aren't game developers, though; if top game developers were to build an NLE, we'd see a lot more real-time performance.

     Mac Pros (before the new Trashcan) can use many of the latest NVidia cards after being modified with the proper BIOS and hardware changes. I purchased a few cards from MacVidCards and they work great: http://www.ebay.com/sch/macvidcards/m.html . As PCs are slightly faster than Macs (same hardware running Boot Camp, etc.) and much lower cost, anyone on a budget wanting max performance for the dollar will want to build a custom PC with choice parts. Hackintoshes are pretty popular, with lots of info online on how to set one up- they look fast and reliable, so folks needing/wanting OSX apps can have the best of both worlds. I haven't looked at PC laptops in a while; however, Mac laptops are fast and pretty solid. I've been running VMware Fusion on my latest MacBook Pros and GPU acceleration is pretty solid for PC apps- no need for Boot Camp (though native will be faster- I haven't benchmarked it).
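     A tiny sketch of the deferred-clamping point (made-up gain values; the exact FS700 pipeline is unknown to me): in float an intermediate can exceed 1.0 harmlessly and we clamp once at the end, while a naive 8-bit path wraps around- exactly the black-pixels-on-highlights class of bug.

         #include <algorithm>
         #include <cstdint>
         #include <cstdio>

         int main() {
             // Two chained operations: gain 2.0, then gain 0.5 (net identity).
             uint8_t src = 200;

             // Float path: intermediate exceeds 1.0; clamp once at the end.
             float f = src / 255.0f;
             f = f * 2.0f;   // ~1.57, perfectly fine as float
             f = f * 0.5f;   // back to ~0.78
             uint8_t outF = (uint8_t)(std::min(std::max(f, 0.0f), 1.0f) * 255.0f + 0.5f);

             // Naive 8-bit path: the first gain overflows and wraps around.
             uint8_t i = src;
             i = (uint8_t)(i * 2);  // 400 wraps to 144 - garbage from here on
             i = (uint8_t)(i / 2);  // 72, wrong (should be 200)
             std::printf("float: %u  naive int: %u\n", (unsigned)outF, (unsigned)i);
             return 0;
         }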
  8. That's right- all the online services recompress to guarantee the videos being streamed are in valid formats (and perhaps won't do anything malicious by exploiting a browser/Flash bug). x264 is part of the ffmpeg command line, MPEG Streamclip, Handbrake, and many other tools (nothing extra to install). At lower bitrates, H.264 will remove a significant amount of high-frequency detail, especially grain or noise. Car/racecar POV shots are especially challenging due to the way the image data changes through time. The less detail present in the motion areas, the less H.264 has to steal from the static/low-motion areas to maintain the target bitrate. I did some noise-sweep tests where noise is progressively added, sweeping across the frame (a quick way to generate such a test frame is sketched below). It's quite a challenge to get decent film grain, for example, to survive intact after H.264 compression for web streaming. Using larger 'grains' (lower frequency) helps.
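     For anyone wanting to repeat that kind of test, a throwaway sketch of generating a noise-sweep frame (my own quick-and-dirty approach, not a standard chart): noise amplitude ramps from zero on the left to full strength on the right, so after encoding you can see where the grain stops surviving.

         #include <cstdint>
         #include <cstdio>
         #include <cstdlib>
         #include <vector>

         int main() {
             const int W = 1920, H = 1080;
             std::vector<uint8_t> gray(W * H);
             for (int y = 0; y < H; ++y) {
                 for (int x = 0; x < W; ++x) {
                     float amp = (float)x / (W - 1);             // 0..1 sweep, left to right
                     float n = (float)rand() / RAND_MAX - 0.5f;  // white noise, -0.5..0.5
                     float v = 0.5f + amp * n * 0.5f;            // mid gray + ramped noise
                     gray[y * W + x] = (uint8_t)(v * 255.0f);
                 }
             }
             // Raw 8-bit grayscale PGM; feed frames like this to the encoder under test.
             FILE* f = std::fopen("noise_sweep.pgm", "wb");
             if (!f) return 1;
             std::fprintf(f, "P5\n%d %d\n255\n", W, H);
             std::fwrite(gray.data(), 1, gray.size(), f);
             std::fclose(f);
             return 0;
         }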
  9. A quick test could be to blur the sides in Resolve.
  10. There was a time when integer/fixed-point math was faster than floating point; today, floating point is much faster. GPU-accelerated apps such as Premiere, Resolve, and (presumably) FCPX always operate in floating point. The only area where 8-bit can be faster is on slower systems where memory bandwidth is the bottleneck (or in really old/legacy codebases, perhaps After Effects).
  11. Last I checked, both YouTube and Vimeo use customized ffmpeg (with x264) to transcode. x264 has been the best H.264 encoder for a while now; thus, if you want the most efficient upload, you can use any tool which uses a recent version of ffmpeg (rendering out ProRes/DNxHD and then using Handbrake is a decent way to go- an example two-pass ffmpeg command is sketched below). The challenge with your footage is high detail plus fast motion. Adding grain or more detail (by itself) can make it worse. To help H.264 compress more efficiently in this case, you need less detail in the high-motion areas. You can achieve this by first shooting with a slower shutter speed (1/48 or even slower if possible). Next, use a tool in post which allows you to add motion blur. In this example you could cheat: use tools to mask off the skateboarder and car, then Gaussian blur everything else in motion (mostly the sides, but not so much the center/background). You could also apply Neat Video to remove noise and high-frequency detail (in the moving regions only) and skip the additional motion blur, since blur affects the energy/tension of the shot (though adding more motion blur will help the most). Once you have effectively lowered detail in the high-motion areas (however achieved), H.264 will be able to better preserve detail in the lower-motion areas- the skateboard, car, and distant background.
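      For example, a basic two-pass x264 encode via the ffmpeg command line could look something like this (file names and the 16M target are placeholders- tune the bitrate and preset to taste):

          ffmpeg -y -i master.mov -c:v libx264 -preset slow -b:v 16M -pass 1 -an -f mp4 /dev/null
          ffmpeg -i master.mov -c:v libx264 -preset slow -b:v 16M -pass 2 -c:a aac -b:a 192k upload.mp4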
  12. I'm not using CS5x anymore- you might consider updating to CS6 or CC. You can pick a good frame using Photoshop and adjust with ACR, then save those settings as a preset (and save in the same dir / near your sequence). Then in AE after the sequence is selected and ACR is open, you can load the preset and get the same look you had in Photoshop.
  13. Thanks for testing- I'll examine my FS700 MTS footage more closely next time I edit. Perhaps someday Adobe will support custom pixel shaders for Premiere- that would allow a fast real-time solution (ideally supporting multipass shaders in one effect). Last time I checked, their plugin SDK didn't provide a means to hook into the Mercury Engine pipeline for real-time processing (you'd need access to the OpenGL textures, etc.).
  14. Scaling chroma is the same math as scaling RGB (each channel is scaled independently). Agreed we can't make assumptions; however, that's how I'd write the code (using Lanczos, bicubic Catmull-Rom, or similar). A fast Catmull-Rom bicubic shader: http://vec3.ca/bicubic-filtering-in-fewer-taps/ http://pastebin.com/raw.php?i=YLLSBRFq The excellent GPUImage (which runs on iOS- the same shaders can run on desktop GPUs) has a fast Lanczos shader: https://github.com/BradLarson/GPUImage Adobe's code is likely a bit different, though they're implementing the same types of sinc and bicubic shaders for scaling when, apparently, any PPro RGB effect is used. The other options are nearest neighbor (your example) and bilinear (fast, but it doesn't look like they're using it). A bare-bones version of the Catmull-Rom math is sketched below.
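      For reference, a minimal sketch of the underlying Catmull-Rom interpolation in 1D (function names are mine; the shader in the links is a separable, tap-optimized version of the same math):

          #include <cstdio>

          // Catmull-Rom interpolation of four neighboring samples p0..p3,
          // evaluated at t in [0,1] between p1 and p2.
          static float catmullRom(float p0, float p1, float p2, float p3, float t) {
              return 0.5f * ((2.0f * p1) +
                             (-p0 + p2) * t +
                             (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t * t +
                             (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t * t * t);
          }

          int main() {
              // A 2x upsample evaluates the new in-between sample at t = 0.5.
              float up = catmullRom(0.2f, 0.4f, 0.8f, 0.6f, 0.5f);
              std::printf("interpolated: %f\n", up); // 0.625, vs. 0.6 for a linear blend
              return 0;
          }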
  15. Not sure we're talking about the same thing- the link I referenced discusses scaling; I'm referring to scaling UV/CbCr up 2x (to match Y). Thus the 960x540 chroma gets scaled up to 1920x1080 via a filtered interpolator (Lanczos 2 + bicubic). It's not clear what Adobe means by using both Lanczos and bicubic, unless they use one to scale up and the other down; in any case they are both high-quality interpolators. 32-bit integer has more precision than 32-bit float; float runs faster and is easier to code. My understanding is that everything in PPro's Mercury Engine is 32-bit float. When using 32-bit float it won't really matter whether it's linear or log, etc.- single-precision float's 23 bits of fraction are plenty (until someone makes a camera with 24-bit integer per color element ;) ). Per yellow, Adobe doesn't bother converting 420 to 444 unless an RGB filter has been added to the sequence- makes sense, I suppose, for performance and minimal changes to the image. (A plumbing-level sketch of the 2x upsample is below.)
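      A bare-bones sketch of the 2x chroma-plane upsample itself, using bilinear weights just to show the plumbing (a real implementation would swap in the Lanczos/bicubic taps discussed above):

          #include <cstdint>
          #include <vector>

          // Upsample one chroma plane 2x per axis (e.g. 960x540 -> 1920x1080).
          std::vector<uint8_t> upsample2x(const std::vector<uint8_t>& src, int w, int h) {
              std::vector<uint8_t> dst(w * 2 * h * 2);
              for (int y = 0; y < h * 2; ++y) {
                  for (int x = 0; x < w * 2; ++x) {
                      // Map the destination pixel back into source coordinates.
                      float sx = (x + 0.5f) / 2.0f - 0.5f;
                      float sy = (y + 0.5f) / 2.0f - 0.5f;
                      int x0 = sx < 0 ? 0 : (int)sx, y0 = sy < 0 ? 0 : (int)sy;
                      int x1 = x0 + 1 < w ? x0 + 1 : w - 1;
                      int y1 = y0 + 1 < h ? y0 + 1 : h - 1;
                      float fx = sx - x0, fy = sy - y0;
                      if (fx < 0) fx = 0;
                      if (fy < 0) fy = 0;
                      float top = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
                      float bot = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
                      dst[y * w * 2 + x] = (uint8_t)(top * (1 - fy) + bot * fy + 0.5f);
                  }
              }
              return dst;
          }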
  16. Looks like Premiere does decent scaling (Lanczos 2 + bicubic): http://blogs.adobe.com/premierepro/2010/10/scaling-in-premiere-pro-cs5.html . Not clear if they're doing the same thing in CS6 or CC. Would expect them to use their top quality when scaling up chroma. It appears you can test it by adding a GPU RGB filter of some type (e.g. RGB Curves)- per yellow this triggers the chroma interpolation.
  17. If that's the case, then there's no advantage to transcoding first; if doing cuts only, transcoding first is actually a disadvantage. Unless there's a performance issue, transcoding isn't needed with Premiere.
  18. The MTS image appears to show that chroma is not being interpolated from 420 to 444. While it's possible to throw a low-pass/Gaussian filter on chroma in post, ideally the chroma would use a high-quality interpolator to make the final result as sharp as possible without ringing (e.g. bicubic sharper, Lanczos 2, etc.). I did tests a while ago with 5D Mark III MOVs and it worked as expected- chroma looked properly upsampled to 444 (as others may have done comparing PPro to 5DtoRGB with Canon MOVs). Perhaps that MTS (or something about that particular MTS) told Premiere not to do interpolation (or the MTS importer path doesn't do it). A test might be to use ffmpeg or another tool to rewrap the MTS file as MOV and see if Premiere then does proper chroma interpolation (see the command below).
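      The rewrap is just a stream copy (no re-encode), so it's fast and lossless; with ffmpeg it would be something like this (file names are placeholders):

          ffmpeg -i clip.mts -c copy clip.mov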
  19. Going from 420 to 422 is only going to (possibly) improve the horizontal chroma; if the tool is improving quality beyond what the NLE does, ideally you'd go to 444 (both horizontal and vertical). One way to see what's going on is to bring both clips (transcoded and original) into the NLE and place them on the timeline/sequence, one above the other. Then set the top track's blend mode to Subtract, add a gamma filter/effect to the top clip, and crank it up until you can start to see differences. Another test is to bring the clips in and just A/B (track toggle) the two clips. If you can't see any difference visually, it's not worth it to transcode (even if you see a difference with the Subtract+gamma test). Pascal's two clips differ by a major gamma/contrast shift. (The same difference test can be done in code- see the sketch below.)
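      A sketch of the Subtract-and-boost idea outside an NLE, assuming two equal-sized 8-bit frames already decoded into memory (names are mine):

          #include <cstdint>
          #include <cstdlib>
          #include <vector>

          // Emulates Subtract blend + cranked gain: any nonzero output pixels
          // are real differences between the transcoded and original clips.
          std::vector<uint8_t> diffBoost(const std::vector<uint8_t>& a,
                                         const std::vector<uint8_t>& b,
                                         int gain) { // e.g. 16 to exaggerate
              std::vector<uint8_t> out(a.size());
              for (size_t i = 0; i < a.size(); ++i) {
                  int d = std::abs((int)a[i] - (int)b[i]) * gain;
                  out[i] = (uint8_t)(d > 255 ? 255 : d);
              }
              return out;
          }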
  20. I've been experimenting with Resolve and ACR, trying to find the best workflow for RAW- by best, I mean the highest quality possible in the shortest amount of time. ACR+AE runs a few frames/sec on my 4-core MBP and about 6 fps on my 12-core Mac Pro (both cases running with AE multiframe processing on). Resolve runs near real-time on the laptop, and slightly faster on the Mac Pro (the Quadro 5000 is getting long in the tooth- a newer consumer card with a hacked Mac ROM would be faster). While I have been able to get Resolve looking closer to ACR, I have not been able to match it. http://www.dvxuser.com/V6/showthread.php?320727-Results-of-testing-various-5D3-RAW-workflows

      The first goal was to get Resolve to show an accurate image of what was shot. All the experiments with BMD Film color space and LUTs weren't able to do this (including the EOSHD and Hunters "Alexa" LUTs). I also experimented with creating 3D LUTs from scratch. The 5D3 RAW footage is pretty much linear RGB; it appears it can be transformed to BT.709/Rec709 with a simple matrix (linear transform)- a sketch of that transform follows this post. Using the Rec 709 color space and gamma in Resolve, I can get much closer to the correct colors for the scene shot.

      Unless we are going from 14-bit to 10-bit (for example), it doesn't make sense to convert to a log color space, since we've already got everything captured in linear. From there we can grade for style, then save out the final render in a Rec 709 compliant color space (it's not clear what Resolve does with out-of-gamut colors- clip, etc.). Log makes sense in a camera for capturing and preserving dynamic range in a file format which can't otherwise store the full range sufficiently. Log can also make sense if one wishes to use a 3D LUT that needs log as input. However, there are many different log formats; ARRI, Canon, Sony, etc. all make different versions. At the end of the day, we need to get back to something like Rec 709 for broadcast (of course internet test videos are excluded).

      After comparing debayering in ACR, Resolve 10, and AMaZE (used by mlrawviewer: http://www.magiclantern.fm/forum/index.php?topic=9560.0, which plays MLVs in real-time and can output ProRes HQ (a little faster than ACR, but still slow; AMaZE is only used for export)), they are all very close. ACR is doing something with color and microcontrast that puts it above everything else right now. It looks very accurate and also 'pops' in a way that I haven't been able to get from Resolve (granted, I'm just now really learning it).

      I'm planning for a ~90-minute final product, which means lots of footage. Even shooting Robert Rodriguez style (limited takes- edit as much as possible in-camera), there will be terabytes of footage. For now I'm thinking of using Resolve to make ProRes/DNxHD proxies quickly, then going back and processing only the necessary clips with ACR+AE for the final cut (until or unless I can figure out how to make Resolve look as good as ACR). Or batch convert with ACR+AE overnight and replace proxies as I go (deleting the massive RAW files). 5D3 RAW is amazing, but requires a lot of planning, work, and most significantly, drive space.
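      A hedged sketch of the matrix + gamma step (the 3x3 here is an identity placeholder- the real camera-to-Rec.709 matrix would come from profiling the 5D3 or from the DNG metadata; the transfer function is the standard Rec.709 OETF):

          #include <cmath>

          // Rec.709 opto-electronic transfer function (linear -> gamma encoded).
          static float rec709Oetf(float L) {
              return (L < 0.018f) ? 4.5f * L
                                  : 1.099f * std::pow(L, 0.45f) - 0.099f;
          }

          // Placeholder: identity. Replace with the measured camera->709 matrix.
          static const float M[3][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};

          static void linearToRec709(const float in[3], float out[3]) {
              for (int r = 0; r < 3; ++r) {
                  float v = M[r][0] * in[0] + M[r][1] * in[1] + M[r][2] * in[2];
                  v = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); // clamp out-of-gamut
                  out[r] = rec709Oetf(v);
              }
          }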
  21. Interesting to see how these lists compare a year later. Resolution isn't everything- I would put the Alexa on top for overall image quality; they designed their sensor after learning from years of experience building film scanners.

      We used to argue about whether digital effects for audio would ever match or surpass analog gear. Finally, a few years ago, even the most stalwart musician friend agreed digital matched analog with the Axe Fx: http://www.fractalaudio.com Initial ADCs and DACs sounded harsh and brittle when CDs were first released (folks still dig records and tapes); now digital audio is fantastic (and most folks listen to highly compressed audio on an iPod/iPhone/Droid or cheap computer speakers!).

      I just watched MiB II again and was blown away by some of the shots in terms of skin tone and color (Eastman EXR 100T 5248). Is it possible to convert my 5D3 (RAW) or FS700 (RAW) to look like that with something like FilmConvert (two cameras I use)? No, not yet- I don't think anyone has accurately modeled the film process well enough to emulate what film does with light. Digital cameras are still very much like early digital audio: matching the pleasing look of film has not yet been achieved with digital; so far ARRI has gotten the closest, and FilmConvert has made a good start. Accurately modeling and simulating film with digital cameras will happen someday. Why bother? Because it looks better, especially for narrative (where unreality and dreaminess are helpful in storytelling). Dynamic range, resolution, lack of aliasing, and accurate color processing are all very important; the last piece of the puzzle is (optionally) being able to get highlights, skin tones, etc. to look like film.
  22. Hair could be an issue; however, with a decent optical flow tracker it should, worst case, revert to cross blending in areas that can't be tracked (a minimal cross-blend sketch follows this post). Rotating tires are also tricky, however folks may not notice: Random patterns such as water spray are also difficult, but it can work reasonably well: Both were shot with a high shutter speed on consumer camcorders. Retiming someone walking behind fence posts is another tricky case.
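      The cross-blend fallback itself is just a weighted average of the two surrounding frames at the new frame's time t (names are mine; a real retimer would only do this where flow confidence is low):

          #include <cstdint>
          #include <vector>

          // Synthesize an in-between frame at time t (0..1) between frames a and b
          // by cross blending - the fallback where optical flow can't track.
          std::vector<uint8_t> crossBlend(const std::vector<uint8_t>& a,
                                          const std::vector<uint8_t>& b, float t) {
              std::vector<uint8_t> out(a.size());
              for (size_t i = 0; i < a.size(); ++i)
                  out[i] = (uint8_t)((1.0f - t) * a[i] + t * b[i] + 0.5f);
              return out;
          }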
  23. If the sensor can handle it (low noise etc.), one could shoot everything with a high shutter speed to ensure every frame is sharp. Stills could be pulled at will, and motion blur can be added in post with AE, Resolve, etc. (a crude frame-averaging sketch is below). Lots of blur = laid back, dreamy, calm; no blur at all = high energy, tense... Retiming quality (frame interpolation) is also better without baked-in motion blur.
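      A crude sketch of adding motion blur in post by temporal averaging, assuming decoded 8-bit frames (real tools blur along motion vectors; this is just the simplest version):

          #include <cstdint>
          #include <vector>

          // Average each frame with its neighbors; radius 1 = 3 frames.
          // Bigger radius = dreamier, calmer motion; radius 0 = untouched.
          std::vector<std::vector<uint8_t>> temporalBlur(
                  const std::vector<std::vector<uint8_t>>& frames, int radius) {
              std::vector<std::vector<uint8_t>> out(frames.size());
              for (int f = 0; f < (int)frames.size(); ++f) {
                  std::vector<uint32_t> acc(frames[f].size(), 0);
                  int n = 0;
                  for (int k = f - radius; k <= f + radius; ++k) {
                      if (k < 0 || k >= (int)frames.size()) continue;
                      for (size_t i = 0; i < acc.size(); ++i) acc[i] += frames[k][i];
                      ++n;
                  }
                  out[f].resize(acc.size());
                  for (size_t i = 0; i < acc.size(); ++i)
                      out[f][i] = (uint8_t)(acc[i] / n);
              }
              return out;
          }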