Everything posted by jcs

  1. Perhaps try resetting the camera to the factory state: http://www.manuals365.com/swf/canon/eosc300-mk2-im-en.html?page=190, then try these built-in presets: Canon Log 3 + Rec709 or Wide DR + Rec709. You shouldn't need any grading to look nice if lit with good lights and shot with proper WB and exposure. The live view should also look great; if it doesn't, something is wrong.
  2. Hey, I think your best bet is getting quality lights (Aputure or better with high CRI/TLCI), setting white balance correctly, and exposing properly using Canon Log 3 + Rec709 or Wide DR + Rec709 if you don't want to mess around with grading. For our green screen shoots we use a bunch of Aputure lights (LS1, LS1/2), Canon Log 2, Production Matrix (for ARRI LUTs), and Cinema Gamut. Then we just apply an ARRI LUT in PP CC and it's 99% done (just minor tweaks to saturation etc. in Lumetri). FCPX applies a LUT automatically (I could also make it work with just the Color Board and no LUTs). I never have to do any other color tweaks. I recently started looking at the other settings (not using Canon Log 2) to get 100% WYSIWYG and allow for no post at all (e.g. for live streaming). I can deep dive into the esoteric settings, however IMO it's a waste of time unless you have a very unusual situation, such as needing perfect camera matching for a live feed. If you still want to deep dive into those settings, perhaps post questions here: http://www.dvxuser.com/V6/forumdisplay.php?246-Cinema-EOS-Cameras
  3. Brought the 10-bit file into FCPX and rendered to ProRes, then brought into PP CC and quickly graded with Lumetri. With a little post looks reasonably organic.
  4. You need to use the FS700 RAW output, which the Shogun Flame will debayer and save as ProRes; RAW has nothing to do with the final file sizes. What you'd probably like is something like XF-AVC on the C300 II, which does 410Mbps 422 10-bit 4K (ALL-I). The FS7 has a 240Mbps 4K option. Perhaps ProRes LT 4K/24p at 328Mbps and 148GB/hr will have sufficient quality?
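    Those storage figures are straightforward to sanity-check; a quick sketch (decimal GB, container overhead ignored):

```python
def gb_per_hour(mbps):
    """Convert a video bitrate in megabits per second to gigabytes per hour."""
    bits_per_hour = mbps * 1e6 * 3600   # total bits recorded in one hour
    return bits_per_hour / 8 / 1e9      # bits -> bytes -> decimal GB

# ProRes LT 4K/24p at 328 Mbps:
print(round(gb_per_hour(328), 1))  # 147.6, matching the ~148GB/hr above
```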
  5. Or cut back and forth and use a green screen when on screen at same time https://vimeo.com/cosmicflow/delta#t=426s (A7S I (on location + background plates), GH4, greenscreen).
  6. A few years ago Vegas was the only game in town for natively editing stereoscopic 3D (there was a Cineform3D plugin for Premiere that was too buggy to use), so I used it briefly (example from the Sony TD10 camera: https://www.youtube.com/watch?v=4JzUt9V-2KU ). While Vegas was indeed very easy to use, it was also very slow and buggy. A big part of the speed issue is that the main app is written in .NET, which is much slower than the natively compiled C/C++/Obj-C that all the other NLEs are written in. .NET also allows less experienced developers to do basic work much faster, which is fine for simple business apps, however it's a major issue for complex, high-performance apps (especially those that have to run in real time). Outside of a game development company or military contractor, there are vanishingly few developers who can write reliable high-performance real-time graphics code. FCPX currently has the best real-time engine design, and has been the most reliable. From the times I've been onsite at the Apple campus, it has the most game-development-like vibe, second only to the Microsoft Xbox and Sony PlayStation divisions, who don't create NLEs (I visited an Adobe office in Seattle- very biz-app focused). Resolve is getting better, however its real-time engine is still behind FCPX for general editing (even behind PP CC). It's still specialized for GPU node-based color editing/grading, where it excels when sufficient GPU resources are available. The Resolve GUI also looks like it was designed by engineers (same with PP CC) vs. product designers (FCPX). All the current NLEs have issues, so there's still room for more competition. It's also a good reason to learn more than one, so you always have options in case of issues.
  7. You gave away a very important clue lol. However I could tell just from the image
  8. There's a bunch of Tiffen Pro Mist videos on Vimeo, here's one example: These types of filters typically perform 3 main functions: soften image, reduce contrast / boost shadows, provide glow / halation around bright elements. One of the secrets to providing the film look.
  9. This should work, from: https://forums.creativecow.net/thread/3/913091 "You can mimic the Photo filter effect in Premiere by making a colour matte with the same colour of the photo filter, super impose, set Blending modes to Hue and drop opacity to 25%."
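    As a rough illustration of what that recipe does per pixel, here's a sketch in Python using an HLS approximation (Premiere's actual Hue blend math will differ in detail, and the warm orange matte color here is just a hypothetical photo-filter color):

```python
import colorsys

def photo_filter(base, matte, opacity=0.25):
    """Approximate the photo-filter trick: hue-blend a solid color matte
    over a base pixel, then mix the result back in at the given opacity.
    base and matte are (r, g, b) tuples with channels in 0..1."""
    bh, bl, bs = colorsys.rgb_to_hls(*base)
    mh, _, _ = colorsys.rgb_to_hls(*matte)
    hued = colorsys.hls_to_rgb(mh, bl, bs)   # matte's hue, base's lightness/sat
    return tuple((1 - opacity) * b + opacity * h for b, h in zip(base, hued))

warm_orange = (1.0, 0.6, 0.2)   # hypothetical warming-filter matte color
print(photo_filter((0.4, 0.45, 0.5), warm_orange))  # nudges a cool gray warmer
```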
  10. Shot from just past 2:42, skin tones look a bit off: Adjusted by reducing color temp and magenta (Adobe Camera Raw as a filter), then adding a warming filter (not perfect, but looks more natural to me than the original): Are you guys shooting full metal jacket indoors (to keep lead out of the air)?
  11. http://necsel.com/application/remote-source-lighting-1 More efficient and much brighter. In addition to using lasers for traditional lighting (and projectors), also imagine what's possible with an array of very fast scanning lasers (similar to light shows). So fast that a camera can't pick up the flicker/scan rate. Now you can draw light in any shape, and with any diffusion pattern you desire, like a Fresnel on steroids, but can be relatively small and lightweight, without needing a big heavy optical lens. Any shape means any pattern (and any color at any point too!), like a movie projector used as a light source (which of course has been done before, http://jakehicksphotography.com/latest/2016/1/9/using-a-projector-in-your-photoshoot).
  12. @jonpais Neat Video is an amazing product, the best NR on the market (Dark Energy was very cool too, too bad it was only for AE). However I don't really use it anymore, for two reasons: the A7S II, C300 II and 1DX II don't need it in general, and we're using lights. The slowdown in editing isn't worth it. I now only use it for short clips such as slomo where I didn't get enough light and didn't realize it was so noisy while shooting (in other words, a mistake). Also note that by the time people see your material online it's gone through two levels of 'free' NR: once when you render to H.264 to upload, and once more after it's transcoded on YouTube/Vimeo (you can minimize bit crushing by uploading an ALL-I compressed video at a high bitrate in 4K (upscale 1080p content)). All content is now transcoded to multiple sizes and bitrates for adaptive streaming (we used to ask for no transcoding on the back end to preserve quality and allow for instant playback (after validating the bitstream)).
  13. They don't make 'em like they used to haha (except for maybe the creators of Rick & Morty!)
  14. Adobe Rant

    The latest version of CC 2017 is working OK for our projects in both OSX Sierra and Windows 10 Pro with Nvidia GPUs (980ti 6GB and 1080 8GB). Your R9 should work OK performance-wise (might be a little short on VRAM at 4GB, depending on your projects). Perhaps give the free trial a spin? Otherwise you can always cancel the subscription. The latest version of Resolve isn't quite ready, though if you have very basic editing needs (e.g. little or no audio tweaking), it might work for you as is (perhaps try it to see what to look forward to once out of beta). The only thing I'd be careful about is the CC 2017 bug with the media cache: if it's set to the root of a drive, clearing that cache will delete all media files for the entire drive: https://forums.adobe.com/thread/2306406 (looks like the bug deletes all media files recursively from wherever it's been set). Reported on this forum here:
  15. Example of backlighting with musicians moving in/out of light (really just an excuse to post this awesomely bizarre video from yesteryear): And the making of:
  16. You need background lights pointing into the camera to get cool flares and glows, especially fun when the singer moves back and forth in front of the light(s). Dust/smoke enhances the effect, as does using a diffusion filter on the camera.
  17. Get the 32GB (I've seen PP CC use well over 16GB, and that's not GH5-specific). Looks like AMD doesn't support Quad Channel, though four 8GB sticks are usually cheaper than two 16GB (only a minor performance difference on Intel boards).
  18. https://www.microsoft.com/en-us/hololens/why-hololens Imagine editing quickly and freely with both hands (standing up and moving around!) with GUIs as seen in Minority Report. Practically unlimited real estate for controls, grabbing and placing clips quickly with your hands instead of the mouse/keyboard.
  19. Tested on the Win10Pro 6950X 64GB GTX 1080 machine: easily ran real-time with both C300 II and 1DX II 4K footage, along with transitions and some titles. Added 4 OpenFX effects to a C300 II 4K clip: Light Rays, Glow, Gaussian Blur, and Prism Blur (in addition to an ARRI 3D LUT and basic curves/grading)- still ran easily in real-time (as one would expect with efficiently implemented GPU acceleration; CPU was at 23%). Rendering speed: a 1080p render from a mixed 1080/4K timeline 1m:40s long finished in 38s, about 2.6x real-time, which is pretty good. A similar test on OSX with the 12-Core and GTX 980ti in PP CC was about 2x real-time for 1080p output. Would be nice to know how to replicate Lumetri effects in Resolve (it's really fast and easy to quickly grade footage using Lumetri). Audio isn't quite done: as noted by someone else in this thread, I had to set my default/system audio device sample rate to 48KHz, otherwise there was no audio output (including, strangely, in the rendered output). Where are all the audio effects hidden? Poked around briefly and only found EQ and Dynamics (which doesn't appear to actually work (correctly) yet). Also no way to select the audio device (checking/unchecking Use System Audio didn't do anything). Tested again on the 2010 MacPro 12-Core (3GHz) GTX 980ti in Boot Camp/Win10Pro. It can handle C300 II 4K 10-bit up to about 3 tracks at once (2 50%-opacity tracks on top of a base track). Only 2 tracks for real-time, so transitions are good. At 3 tracks it's not real-time and audio starts to sound shabby. Thus it's clear in this case that the OSX version is slower in software/drivers, since it runs faster in Windows on the same hardware. Overall excellent progress!
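    For reference, the 2.6x figure above is just timeline duration divided by render time:

```python
def realtime_factor(timeline_seconds, render_seconds):
    """How many times faster than real time a render finished."""
    return timeline_seconds / render_seconds

# 1m:40s (100 s) timeline rendered in 38 s:
print(round(realtime_factor(100, 38), 1))  # 2.6
```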
  20. Adobe Rant

    Historically, CUDA drivers with PP CC have been more stable than OpenCL on OSX. When the CUDA drivers go bad, the worst case is black video, garbage, flipped images, etc. When OpenCL goes bad, it can cause a kernel panic and instant reboot (losing any unsaved edits in open apps). Sounds like you are on Windows running an AMD GPU, with no CUDA option? If so, I suggest getting a recent Nvidia GPU (GTX 1080 if possible; some of the much less expensive versions run just about as fast, within seconds for most tasks: see the GTX 1070 vs. top-of-the-line GPUs here: https://www.pugetsystems.com/labs/articles/Premiere-Pro-CC-2017-GeForce-GTX-1080-Ti-11GB-Performance-912/). Some 2015.x releases were practically unusable due to bugs and crashes: lots of wasted time working around bugs, including switching between OSX and Windows depending on whichever version was less buggy, and refactoring edits just to make progress and finish projects. Win10Pro, 6950X, GTX 1080 and OSX Sierra, 12-Core MacPro GTX 980ti have been reasonably stable so far with CUDA and even OpenCL on OSX* (haven't cut anything major on the Win10Pro box since the latest release came out, just tests). * In one simple render benchmark, CUDA was a few seconds faster.
  21. This is pretty cool: the OFX plugin architecture used by Resolve (and other high-end tools) is relatively simple and clean (unlike Adobe AE/PP, which is unfortunately a mess (at this point, disaster is perhaps a more accurate description)). Third-party developers can have empathy for Adobe's in-house developers who have to work on the core product: it's no wonder there are so many bugs and development progress is slow. For years I asked Adobe for HW acceleration examples showing a basic plugin which can directly access GPU memory, run GPU code, then store the result in GPU memory. All that was ever provided was a pointer to ancient/obsolete code which used OpenGL and required copying the video frame from CPU RAM to GPU RAM and back (which makes the whole thing just about worthless). In 5 minutes of googling I found an example OFX plugin which does everything I had asked for: it processes everything in GPU RAM (no copies to CPU RAM and back), has complete examples for both OpenGL and CUDA, and the overall code is clean and relatively simple. Very cool! https://github.com/baldavenger/SoftClipV2.1/blob/master/SoftClipPluginV2.1.cpp . https://github.com/baldavenger/SoftClipV2.1. This is exactly what I'd found for plugin development for AE/PP: https://www.eehelp.com/question/gpu-accelerated-the-development-of-ae-premiere-pro-plugin/ (bold emphasis mine): This is a joke, and explains why third-party plugins for Premiere never ran anywhere near as fast as Adobe's internal plugins/effects. Premiere and After Effects are at least 10 years past due for a complete rewrite. It's kind of impressive they've kept them clunking along all these years, however they'd be better off (if they haven't already started) doing a from-scratch rewrite, just like Apple did with FCPX. Maybe even switch over to OFX for plugins/extensions. Keep the clunky version of AE/PP around while the new version is beta released and developers port their code over to OFX (unless it's somehow possible to make a compatibility layer and load old plugins (probably not worth it given the ancient, convoluted design)).
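    For context on what a plugin like the linked SoftClip computes per pixel, here's a generic tanh-style soft-clip curve sketched in Python (the repo's actual CUDA/OpenGL kernel math may well differ; this just shows the general idea of rolling off highlights instead of hard-clipping):

```python
import math

def soft_clip(x, knee=0.8):
    """Pass values below the knee through unchanged, then roll off
    smoothly toward 1.0 instead of hard-clipping (one channel, 0..1+)."""
    if x <= knee:
        return x
    # compress everything above the knee into the remaining headroom
    return knee + (1.0 - knee) * math.tanh((x - knee) / (1.0 - knee))

print(soft_clip(0.5))  # below the knee: unchanged
print(soft_clip(1.4))  # superwhite rolled off to just under 1.0
```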
  22. The free beta- does the pro version have a faster/better decoder for C300 II?
  23. There's no such thing as a 'cinema camera', other than as a marketing term (and it's quite useful for internet forum debates). Any camera that can be used to shoot a movie and be shown in the cinema is a cinema camera. What does that exclude? Nothing. All cameras are cinema cameras. When everything is something, then there is no difference (e.g. what is light without shadow?). From a marketing perspective, cameras which are primarily suited for making traditional Hollywood-style movies, such as anything from ARRI or Panavision, the Sony F35/F65, and (to a lesser extent) RED, are classic cameras used when people make films primarily aimed at cinema release. If one looks at the cameras used for most of the Oscars in recent years (the so-called pinnacle of cinema), it's almost 100% ARRI (film and digital). 'Cinematic' also implies looking like film (mostly highlights, lack of digital artifacts, and color response), and here again ARRI excels (especially when shooting on actual film!). Canon markets their Cinema line as cinema cameras (I use a C300 II), and while they can certainly produce very film-like looks (and amazing skin tones), in the classic sense they are not in the same league as ARRI/F65/Panavision (and aren't used nearly as often for major motion pictures, except as B and C cams etc., just like RED, Sony, BM, Panasonic). The same can be said about all other prosumer cameras (including Sony up to and including the F55). Perhaps it's fair to say classic cinema cameras aren't purchased very often by individuals: all (current?) Panavision and ARRI 65 cameras can only be rented. Future cell phones with computational cameras will exceed the capabilities of the latest large-sensor ARRI 65...
  24. I think it's important to learn multiple tools. There will be times when your primary tool has issues (bugs, or performance issues etc.) which another tool won't have, so you have options to move forward and aren't stuck. Resolve 14 plays the 1DX II 4K 8-bit 420 files OK, however still struggles with C300 II 4K 10-bit 422 files (both OpenCL and CUDA on OSX; will try on Win10Pro at some point). Hopefully they'll get performance sorted by the time beta is complete. I'll purchase the $300 version, however I'd prefer to have online activation vs. a USB dongle as I switch frequently between OSX and Windows (and multiple machines + laptops).