Everything posted by kye
-
Resolve has quite a number of things that you can keyframe, but a trick for keyframing anything is to duplicate the clip, put it on a layer on top of itself, grade it slightly differently, then use the opacity to crossfade between the two grades. I've used this trick a few times when panning/tilting in mixed lighting situations where I needed to change between grades, and it worked really well.
-
My solution is about $10 🙂 Of course, my colour grading setup is hugely more expensive, but that's because it has to drive Resolve's proprietary hardware interface, and it controls things that aren't keyboard-mappable either, so it's a different proposition.

In terms of spending lots of money on something you'll initially be slower with, if you get something that works then the payback is huge. Imagine that you have a controller that saves 1s on a basic operation. If we do that operation twice on every clip, have an average clip length of 3s, and edit a 45-minute show, then that's 30 minutes saved. However, that only counts the shots that made the final edit; we do lots of editing on clips that don't make the final edit, and the shots that do make it get adjusted multiple times, so let's conservatively multiply that by 5. This doesn't count versions with a client, where we create something that is finished and then have to move stuff around again for the next version. This gives us 2.5 hours on one project from saving 1s on a single operation. Multiply that by your hourly rate and you can see how this starts to add up. In reality a good controller will save time on many operations, but will also cut down on distractions while editing, saving the little moments of having to re-orient yourself, potentially making edits less stressful and potentially making you a better editor.
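To make that arithmetic concrete, here's the same calculation as a quick Python sketch (all the numbers are the assumptions from the paragraph above, not measurements):

```python
# Back-of-the-envelope: time saved by a controller that shaves 1 second
# off a basic operation. Assumptions: 2 operations per clip, 3s average
# clip length, a 45-minute show, and a 5x multiplier for discarded
# clips, repeated adjustments, and client versions.
seconds_saved_per_op = 1
ops_per_clip = 2
avg_clip_length_s = 3
show_length_s = 45 * 60

clips_in_show = show_length_s / avg_clip_length_s  # 900 clips
final_edit_saving_s = clips_in_show * ops_per_clip * seconds_saved_per_op  # 1800s = 30 min

total_saving_hours = final_edit_saving_s * 5 / 3600
print(f"~{total_saving_hours:.1f} hours saved per project")  # ~2.5 hours
```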
-
Exactly. My Dad was a computer engineer for a large educational institution before he retired. They once bought a new top-of-the-line computer from their supplier to act as a replacement for their primary server, custom-built with the top motherboard, CPU, RAID controller, and HDDs. A week in, he worked out that the problems he was having were an interaction between the motherboard, the RAID controller, and the drivers for one of the chips on the motherboard. Two weeks in, he'd found the half-dozen threads online of people complaining about that exact hardware combination not working, worked out which one had the most intelligent people in it, and started working with them to hassle the manufacturers. Four weeks in, the group had gotten official replies from the motherboard manufacturer, the chipset manufacturer (who wrote the drivers), and the RAID controller manufacturer, each blaming one of the other two. Two months in, he declared defeat and leaned on his supplier to take the whole machine back for a full credit, which he could only do because they bought hundreds of PCs from them per year. With Apple, when something doesn't work someone gets told to fix it, and so they get everyone in a room and fix it. That's the difference between closed and open systems.
-
I've read many times that an investment in a control surface repays itself many times over in increased efficiency, so I think it's a good way to go. I wasn't suggesting the Resolve keyboards, quite the opposite in fact. The link I provided in the other thread (and repeated below) is what made me think the "DIY" option might be the best. From the article: I haven't tried it yet, but what I interpret this to mean is that you can program your extra keyboard to have different hotkeys than your normal keyboard, potentially doubling your controls, or more if you use more than one additional keyboard (not sure if that's possible?). Combined with the keyboard shortcuts within your NLE, I'm imagining this should make a setup as flexible as you like (there's a rough sketch of how this remapping can work in software at the end of this post). I watched a few reviews of various controllers and the downside was never the hardware, it was always the limitations on customising things, which I thought the above would get around.

In terms of my reference to two hands, I wasn't suggesting that you spend time drinking beer or doing some other task with the other hand, more that you could use the other hand to access more controls while editing. In practical terms, you get speed by having one-key access to a function, but even more speed when you don't have to look. There's a limit to how many keys you can reliably hit without looking, probably something like 12 or 16 per hand. I don't know about your editing hotkeys, but mine are typically JKL for back/stop/forwards and IOP for MarkIn/MarkOut/Insert, leaving only a few remaining keys for other operations such as ripple trimming etc. If you have a jog wheel then you'll probably have a hard time operating it and reliably hitting a few extra buttons with the same hand without looking. However, if you have two hands active at once, you can use one for the basic navigation and operations you'll need to do in bulk, and the other hand can have access to another dozen more sophisticated editing commands, or alternate methods of navigation like next and previous clips, navigating between markers or flags, etc.

If you're anything like me you will have very little muscle memory in your left hand, so your injury has kind of forced you to work through the frustration of learning to navigate and do basic operations with it, making it likely that by the end you'll be more fluent with it than your current dominant hand, especially if you get a control surface of some kind, which your dominant hand won't have experience with either. The end-game, if you go down this route, is to be able to edit a project from start to finish without looking away from the monitor basically at all. Depending on how you work, you may even want to map some basic colour adjustments to your right hand, like WB or exposure, so you can kind of correct as you go.

As Resolve is so nicely integrated and I use it for my whole workflow, I tend to bounce back and forwards between the Edit and Colour pages, as I find that colour impacts how I edit to a certain extent. For example, I might make a selects pass by eliminating all the shots that are crap, but then I would do a colour pass adjusting WB and levels and conforming to rec709 so I can see the shots (instead of them being in LOG, for example). Then I would go back and make an assembly with more decisions based on how lovely the shots look. Then I'd do a colour pass really working with the clips, especially the 'hero' shots.
Then, adding music and doing the timing of the edit, I would be looking at how great each shot looks from the colour grading to determine how many 'beats' to keep on a particular shot. Sometimes a shot really comes alive in grading, so I might linger on it longer, or maybe even slow it down slightly, etc. These grading things all contribute to the edit, but I don't want to colour grade every clip before I start editing as that would be a huge waste of time. Anyway, food for thought about keyboard shortcuts.

The other thing to think about is your overall workflow. I've seen that there are really two methods for editing. The first is to review all the clips and make selects, then make another pass eliminating more clips and refining timing, etc etc, until you have a final edit. This means that once you eliminate a clip you shouldn't need to look at it again, but it has the downside that you end up looking at lots of clips several times that won't make the final edit. The second is to log footage properly, and then just make a timeline by pulling the best clips in. This is more efficient if you have higher shooting ratios and are organised, but if you have poor organisation skills and a poor memory then you could end up spending minutes/hours looking for each clip that you pull onto the timeline, which would be less efficient overall than the first approach. Essentially, the first approach is that you start with everything and delete clips until you have the edit, and the second is that you start with nothing and add clips until you have the edit. Most people use a hybrid of these approaches, so it's whatever works for you, but I'd suggest that getting this sorted would contribute more to your overall efficiency than a control surface would. Anyway, food for thought.
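As promised above, here's a sketch of the extra-keyboard remapping idea. I haven't tried this myself, so treat it purely as an illustration of the principle: on Linux, the python-evdev library can grab one specific keyboard and remap its keys while leaving your main keyboard untouched (on macOS, tools like Karabiner-Elements do per-device remapping instead). The device path and the key mapping below are placeholder assumptions:

```python
# Sketch: turn a second USB keyboard into a bank of spare hotkeys on
# Linux using python-evdev. The device path and mapping are assumptions.
from evdev import InputDevice, UInput, ecodes as e

SECOND_KEYBOARD = "/dev/input/event5"  # find your device with `evtest`

# Emit F13-F15 instead of J/K/L: most NLEs let you bind F13+ directly,
# so the extra keyboard gains its own independent shortcuts.
REMAP = {
    e.KEY_J: e.KEY_F13,
    e.KEY_K: e.KEY_F14,
    e.KEY_L: e.KEY_F15,
}

dev = InputDevice(SECOND_KEYBOARD)
dev.grab()     # exclusive access: these keys no longer type normally
ui = UInput()  # virtual keyboard that emits the remapped events

for event in dev.read_loop():
    if event.type != e.EV_KEY:
        continue  # only forward key events; ui.syn() handles syncing
    ui.write(e.EV_KEY, REMAP.get(event.code, event.code), event.value)
    ui.syn()
```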
-
I've watched a few videos running down the new features, and I must say that I'm pretty excited about quite a few of them.

The new Colour Wheels look awesome and I think I'll likely use them a lot. I have a control surface, so the Lift/Gamma/Gain controls are great for exposure corrections, but they are very 'macro' controls, not really having enough control over things like shadows vs blacks etc, especially for the naturally lit uncontrolled situations I shoot. I can use curves, but they're a PITA to use with the control surface, so having the new colour wheels will be great, giving enough 'resolution' but still being fast enough to use quickly.

The Colour Warper (spiderweb) and luma/chroma warper will be quite handy too. I often find myself wanting to quickly change the saturation, hue, and luma of certain colours (eg, foliage), and currently you have to bounce back and forwards between Hue v Hue, Hue v Sat, and Hue v Lum to do that.

Also, one of the things I have played with in the past is a saturation limiter. The idea is that I want to up the saturation on clips, but when I do that, sometimes there's a splash of colour in the background that goes nuclear in OTT saturation. What you want is a curve that increases the saturation of colours under a certain threshold, but once a colour gets to a certain level of saturation it should encounter a 'knee' and the saturation boost should slow down from that point, limiting the most saturated elements of the image. That's also easy with this tool. I had previously used a Sat v Sat curve, but the risk with that is that you end up with overlaps where colours that started off more saturated end up less saturated than colours that started off less saturated, so I had to generate some test images to ensure I designed it correctly.

The new Sat vs Lum curve looks great too. One of the things I learned in investigating film is that emulating a subtractive colour model requires darkening more saturated colours, which currently has to be done in a relatively customised way, whereas this gives a nice curve.

I'm also curious about the new Temp and Tint controls. I use Temp and Tint all the time to correct WB, and apparently they've been redesigned to work in XYZ colour space (assuming I understood that correctly), which means they will be perceptually better, which is cool.
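For the saturation limiter described above, here's a minimal sketch of the kind of knee curve I mean, with saturation normalised to [0, 1]. The gain and knee values are purely illustrative assumptions, and the assertion at the end is the same monotonicity check I had to do with test images:

```python
# Saturation limiter sketch: boost saturation linearly below a knee,
# then roll off so already-saturated colours don't go nuclear.
import numpy as np

def limited_sat(sat, gain=1.4, knee=0.6):
    """Boost saturation by `gain` below `knee`, easing off above it."""
    sat = np.asarray(sat, dtype=float)
    boosted = sat * gain
    # Above the knee, blend towards a 1.0 ceiling so the curve stays
    # monotonic (no overlaps where less-saturated colours overtake
    # more-saturated ones, the Sat v Sat curve risk mentioned above).
    over = np.clip((sat - knee) / (1.0 - knee), 0.0, 1.0)
    ceiling = knee * gain + (1.0 - knee * gain) * over
    return np.where(sat < knee, boosted, np.minimum(boosted, ceiling))

# Quick sanity check, standing in for the test images:
s = np.linspace(0.0, 1.0, 11)
out = limited_sat(s)
assert np.all(np.diff(out) >= 0), "curve must not reorder saturations"
```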
-
I have a GH5 and only shoot 1080p - although I do shoot 120p for sports. I don't have a problem though. I mean, I can stop at any time.
-
I'm still looking into this, and started a thread some time ago: My little keypad thingy arrived but I haven't had a chance to look at it yet. My initial impressions are that the only differences between dedicated editing controllers and normal keyboards are that 1) dedicated controllers have a jog wheel for accurately scrubbing forwards and backwards, and 2) dedicated controllers often have a specific layout and colour-coding or labelling of the keys. Beyond those things they're pretty much just keyboards.

Zooming back out a little: sorry to hear about your injury, but great to hear you're trying to work around it and potentially take it as an opportunity to improve your setup. I would go one step further and suggest that this is an opportunity to get an edit controller, learn to use it with your left hand, and maybe think of this as a permanent way forwards. You'll be building muscle memory, and (assuming you're right-handed) when your hand heals you will have your dominant hand free to do other things while you're editing. In contrast, I use my dominant hand for editing control and my non-dominant hand is pretty useless, as I'm not as coordinated with it and don't have any muscle memory for it either.
-
We all know about face-tracking auto-focus (AF). Presumably, face tracking also helps with controlling exposure (AE) because faces are the priority in most shots. I've just worked out that my GH5 has face-tracking AF and AE, but when you turn off AF, it also turns off the face-tracking AE, and then exposes for the whole frame instead of the person, even if the face is clearly in focus and visible. Obviously this is ridiculous as there are lots of situations where you want AE and not AF, but is this a common thing in other cameras? Also, WTF Panasonic....
-
After using my old one for 4 years, I replaced my 13" MBP with a new one only a few months ago. I did this deliberately, knowing that the new architecture was coming, because I didn't want to be a beta tester. I haven't read in detail what was in the announcement, but assuming it was anything close to what was predicted, it's an interesting thing. The things I think are most interesting about transitioning all the Apple hardware to ARM:
- they can optimise the hell out of everything, as they'll have control over the whole software/hardware stack
- it essentially merges the hardware platforms of phones/tablets/computers, meaning they could go to one App Store and app developers would only have to write one version of their apps instead of two
- all iPhone/tablet apps could be run natively on the computers
- potentially all the computer apps would be able to run on iPhones/tablets

The reason I waited is that it also means they'll have to re-write OSX from the ground up, potentially putting into place a huge backlog of fundamental architecture changes that have been accumulating since OSX went to a unix platform, which is going to be a huge and very messy process. It also means that every OSX program will have to be re-written, or will have to run in an emulator. That's not something I want to beta test on something as performance-sensitive as video editing.

The end-game of this technology choice is that your phone becomes your computer. I've said this before, but imagine coming home, taking your phone out of your pocket and docking it, which provides power and also connects it to your monitors / peripherals / eGPU / storage arrays, and it goes into 'desktop' mode and becomes a 'PC'. This might sound like science fiction, but I saw someone actually do this years ago running linux on an Android device - it had a tablet mode and a desktop mode, similar to how modern laptops with touchscreens have a tablet mode and a PC mode. Modern processors are good at being efficient while they sit almost idling in the background, but turn into screaming power-thirsty race-horses when asked to do something huge (anyone whose phone has crashed will know it can drain the battery before you even notice anything is wrong). When docked, full power becomes available and only cooling may be a limiting factor.

The other aspect that supports this is the external processing architecture that has been worked out. OSX supports having many eGPUs and Resolve will happily use more than one, although currently you don't get much improvement beyond a few of them. It's not inconceivable that in future an eGPU will be available that appears to the OS as a small cluster of eGPUs, and the computer simply becomes a controller. When I studied computer science we programmed software for a couple of toroidal clusters, one of which had 2048 CPUs (IIRC). The architecture is getting there, and video processing and graphics is a perfect application for it, as you can just divide up the screen, or separate tasks such as decompressing video / rendering the UI / scaling the video / doing VFX processing / colour grading / etc.
-
It's literally the first beta version! Kids these days 🙄🙄🙄 If history is any guide, there will be a new beta out very soon. I don't have the data, but with previous versions they'd occasionally put out a beta that had a big issue for a lot of people, and they'd fix it within days in a subsequent release. One thing people don't really know about software development is that companies like Amazon make releases multiple times per day. They're just very small and in some feature you probably don't use, so it doesn't seem like that's what's happening. Bug fixing is obviously a different thing, as it takes time to diagnose and then fix without breaking other stuff, but that's the life of a beta tester. I gave up on betas a few versions ago - unless you actually want to be a beta tester, it's better to just wait for the full release.
-
I think a lot of folks out there don't really mind whether something is a hybrid or MILC or even a DSLR; they just want a small(ish) package that can be hand-held or put on a gimbal, and have 10K for a complete setup. In this sense there are things like the Komodo, BGH1, and C70, as well as the DSLR form factor with things like the S1H, Canon 1DX line, etc. Lots of options to choose from.
-
The new editing keyboard looks very interesting... Of course, this paves the way for a Resolve Nano Surface: the size of a normal keyboard, with the editing controls / jog wheel on one side and three trackballs / control knobs on the other, for $199. I'd preorder one of those right now - maybe more than one if I was bouncing between locations.
-
I'm curious about v17, but if it's like any other version we won't get anything but betas for some months, so if you're allergic to instability then it won't be out for quite some time....
-
Resolve has been stable for me since v14. Sometimes something seems to stop working or doesn't do what I'd expect, so I restart and that normally fixes it, but that's rare.
-
Yes, full-sized HDMI port. It's funny - I have four cameras and (IIRC) all or most of them have full-sized HDMI connectors: Canon 700D, Canon XC10, GH5, and BMMCC. Of course, this is pure coincidence.

Yes, I'm told it has heaps of Mojo, but I'm yet to really get to the bottom of that.

Same for me - it's crazy that the BMMCC is so much smaller than the GH5 and yet rigged up it's considerably larger. The C70 is an odd one. I suspect that most people who invest that amount and shoot with something that big would add a cage / monitor / audio something anyway, so it's kind of modular by use rather than by design. Technically you're right, although a C70 + lens + mic would be as large as my BMMCC rig!

Setup times are important, absolutely. I saw a great video of some folks who went to Antarctica with a top-of-the-line cinema setup (something like a C700 or Venice with a huge all-in-one cine zoom), and they didn't want to have to rig up while standing on a beach with the penguins, etc. Their solution (not new) was to get a hard equipment case that could be modified to fit the rig in its fully assembled state. They used the case for accessories during the travel legs, but once they were on the ship they reconfigured it to house the assembled rig (IIRC minus the matte box) and then used that for going ashore. Then they just put the matte box on the front, put it on the tripod, and they were off to the races.

For my purposes, I keep my setup (GH5 + Rode VMP + wrist-strap) in one hand the whole time, and then I can turn it on and hit record during the motion of raising it up to my face, while the other hand does manual focus and I get the shot. I've included a number of shots in my final edits that were time-stretched out to a couple of seconds because the time between my acquiring focus and the moment ending was under 10 frames, and sometimes only a couple of frames.

The Komodo does look pretty cool. Z-Cam is interesting. I've watched a number of reviews and sample footage from Sidney Baker Green on YT (https://www.youtube.com/user/sidneybakergreen), who has a Z-Cam and is a pretty good colourist. His take was that the image is pretty good, but the product isn't that refined yet and has small foibles, and the user experience isn't that great, with things like the manuals not being kept up-to-date and support really being via user groups. I've really liked the images I've seen from them. Assuming they continue and get more refined over time, they look like a pretty good brand.
-
Seems like lots of new affordable modular cinema cameras have come out, with some cool specs like 4K120 or 4K60 10-bit, etc... or even just getting FF video without a crop, overheating, or crippled codecs. So, who is going modular? How are you finding the transition? My most recent acquisition was a BMMCC and my first modular setup, and I found that getting used to having a separate monitor, needing multiple batteries, cables and cable management, as well as having to rely on a rig just for ergonomics, were all a bit of a PITA actually. Plus, despite the BMMCC being really small and paired with a tiny monitor, the whole rig gets large pretty quickly. And considering that we can't talk about modular cameras without showing awesome pictures of rigs, here's the BMMCC in my sunset configuration:
-
Lenses can be matched in post if you're willing to put in a bit of effort. Here's an example where I match a Samyang to a Lomo... unedited: and matched: Putting a look over the whole film will also help to even things out.
-
Sounds like you know what you want. I'm no expert on what the alternatives to the Irix are, but I know that when you're going as wide as this there aren't that many options to choose from.
-
That link is banned, and also not useful because it links to the original source, which has since been taken down. Luckily, the internet never forgets: https://web.archive.org/web/20180407123236if_/http://juanmelara.com.au/blog/re-creating-the-600-brim-linny-lut-in-resolve https://static1.squarespace.com/static/580ef820e4fcb54017eba692/t/5a560498ec212d5f4a972d25/1515589920801/The_Brim_Linny_Rebuild.zip https://web.archive.org/web/20190227062207/http://juanmelara.com.au/s/The_Brim_Linny_Rebuild.zip
-
What focal length do you actually need? ie, what precise focal length, rather than just 'wider than 18mm'. I'm a huge fan of super-wide-angle lenses, and my second most used lens is a 15mm FF equivalent, but you want to hit the sweet spot. If you go wider than your sweet spot, then you will either crop and throw away pixels, or you will get closer and get more wide-angle distortion. If you go narrower than your sweet spot, then you will have to go further back to get the framing you want, and if that's not possible then you won't get everything in frame that you want to.

When I was in the market for a super-wide, I got my GoPro, worked out what FOV it was equivalent to, and then did a bunch of test shooting with it to emulate the kinds of shots I wanted to get. I then experimented with cropping to simulate various longer focal lengths (easy to do - doubling the focal length halves the width/height of the frame) and worked out where the sweet spot was for me.

If you're unsure but want to buy something now, get the Tokina 11-16mm second-hand, shoot a bunch with it, work out what focal lengths you want, then sell it and buy what you need.
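If you want to play with those numbers, here's a small sketch of the crop arithmetic (assuming a rectilinear lens and a full-frame 36mm sensor width; the base focal length and crop factors are just examples):

```python
# Simulate longer focal lengths by cropping: an N-times crop behaves
# like an N-times longer focal length on the same sensor.
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal FOV of a rectilinear lens (full-frame width assumed)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

base_focal = 15.0  # e.g. a 15mm FF-equivalent super-wide
for crop in (1.0, 1.5, 2.0):
    simulated = base_focal * crop
    print(f"crop {crop:.1f}x -> {simulated:.0f}mm equivalent, "
          f"{horizontal_fov_deg(simulated):.0f} deg horizontal FOV")
```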
-
What focal lengths and sensor size are you looking for? and what 'look' do you want, ie, the modern look with lots of contrast and sharpness, or a softer rendering?
-
No, no makeup team for me! The people I'm filming have much more variability - more like in this image: That's also a useful image for testing LUTs and seeing what they do to skintones, BTW. Nice looking images. The DB certainly had a cult following. Interesting. There were also some great sample shots from the UMP with skintones; I should look through them for good examples. I agree that thickness can happen with low and high-key images, and with saturated and not-so-saturated images too.

A few things keep coming up, and I think I'm starting to fit a few of them together. One is the ability to render subtle variations in tone, and yet we're looking at all these test images in 8-bit, and some in less than 8-bit, yet this doesn't seem to be a limiting factor. I wonder if maybe we're thinking about colour subtlety and DR and bit-depth the wrong way. I mean literally that we think we want more of these things, but actually maybe we want less. Take this image for example: This image is contrasty and saturated. In fact, it's very contrasty. If you were looking at this scene in real life, these people wouldn't have so much variation in luminance and saturation in their skintones - that baby would have to have been sunbathing for hours, but only on the sides of his face and not on the front. In that sense, film and its high contrast is actually expanding and amplifying subtle luma differences, and since increasing contrast increases saturation too, it's also amplifying those subtle hue variations.

One thing I've noticed about film vs digital skintones is that digital seems to render people's skin either on the yellow side, on the pink side, or in the middle and not that saturated. Film will show people with all those tones at once. This guy is another example of a decent variation of hues in his skin:
-
I'd suggest using multiple layers of stabilisation. As has been said before, gimbals are great but lose the horizon over time and through cornering, and the super-duper-stabilisation modes on the GoPro and Osmo Action etc will also lose the horizon if kept off level for some time (which is inevitable considering that roads are sloped to drain water away and corner nicely). Due to these factors, I'd suggest a very solid mount (to eliminate wind buffeting) combined with a very short shutter speed (to eliminate motion blur and RS wobbles from bumps) combined with the in-camera floaty-smooth modes combined with stabilisation in post.
-
My images were all GH5, with the second one using the SLR Magic 8mm f4 and the others using the Voigtlander 17.5mm f0.95. I don't think anyone would suggest the GH5 was designed for low light, and the boat image is actually significantly brighter than it was in real life. The floodlights in the background were the only lighting and we were perhaps 75-100m offshore, so the actual light levels were very low. I was gobsmacked at how good the images turned out considering the situation, but they're definitely not the controlled lighting situation they're being compared to.

The scopes are very interesting, and the idea that the good ones transition smoothly is fascinating, and very different to the GH5 images. That is a spectacular image, and looks pretty thick to me! It clearly has some diffusion applied (I'd say heavily) and I wonder how much that plays into the thickness of the image. Diffusion is very common in controlled shooting.

Just for experimentation's sake, I wonder if adding Glow helps us thicken things up a bit? Original image posted above: With a heap of Glow applied (to accentuate the effect): Thicker? It makes it look like there was fog on the water 🙂 I can't match the brightness of the comparison image though, as in most well-lit and low-key narrative scenes the skin tones are amongst the brightest objects in frame, whereas that's not how my scene was lit.

No examples offhand, just more anecdotal impressions I guess. I must agree that the bright punchy colours in that image don't look the best. The colours in these two don't look the best to me either. I've been watching a lot of TV lately, and the skin tones I'm really liking seem to have a very strong look, but I'm yet to find what that maps to in concrete terms. I suspect the hues are very well controlled between the yellow and pink ends of the spectrum without going too far in either direction, and the saturation also seems very well controlled, with lots of skin area being quite saturated but the saturation being limited - it goes quickly up to a certain point but doesn't go much beyond that. The skintones I'm used to dealing with in my own footage are all over the place, often having areas too far towards yellow and also too pink, and with far too much saturation, but if you pull the saturation back on all the skin, then by the time the most saturated areas come under control, the rest of the tones are completely washed out.

I am focusing a lot on skin tones, but that's one half of the very common teal/orange look, so the scene will be skin tones, maybe some warmer colours, and then the rest will be mostly cool in temp. I've been taking screen grabs whenever I see a nice shot and plan on pulling a bunch of them into Resolve and studying the scopes to see if I can work out what's going on, and whether I can learn from it.
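For anyone who wants to try the same Glow experiment outside Resolve, here's a rough sketch of the usual recipe: lift the highlights, blur them heavily, and screen-blend the result back over the image. The threshold/radius/strength parameters are illustrative assumptions, not Resolve's actual Glow controls:

```python
# Simple glow sketch: thresholded highlights, Gaussian blur, screen blend.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_glow(img, threshold=0.7, radius=25, strength=0.5):
    """img: float RGB array, values in [0, 1]."""
    # Isolate and normalise everything above the threshold.
    highlights = np.clip(img - threshold, 0.0, None) / (1.0 - threshold)
    # Blur spatially only (not across the colour channels).
    blurred = gaussian_filter(highlights, sigma=(radius, radius, 0))
    glow = blurred * strength
    # Screen blend: brightens without hard clipping.
    return 1.0 - (1.0 - img) * (1.0 - glow)
```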
-
Indeed. Reminds me of this article: https://www.provideocoalition.com/film-look-two/ Art Adams again. Long story short, film desaturates both the highlights and the shadows, because on negative film the shadows are the highlights! (pretty sure that's the right way around..) There's a rough sketch of this idea at the end of this post.

I definitely think it's in the processing. I view it as three factors:
1) things that simply aren't captured by a cheaper camera (eg, clipping)
2) things that are captured and can be used in post without degrading the image below a certain threshold (ie, whatever image standards you or your client have)
3) things that aren't captured well enough to be used in post (eg, noisy shadows beyond redemption, parts of the DR that break if pushed around too much)

Obviously, if your skin tones are exposed in a range that is either completely lost (eg, clipped) or can't be recovered without exposing too much noise or breaking the image, then there's nothing you can do. What I'm interested in is the middle part, where a properly exposed image will put the important things, for example skin tones. Anything in this range should be able to be converted into something that looks great. Let's take skin tones - imagine they're well captured but don't look amazing, and that the adjustment to make them look amazing won't break the image. In that case, the only thing preventing the OK skin tones from looking great is the skill in knowing what transformations to make. Yes, if the skin tones are from a bad codec and there is very little hue variation (ie, plastic skin tones), then that's not something that can be recovered from, but if the hues are all there and just aren't nice, then they should be able to be made to look great. This is where it's about skill, and why movies with professional colourists involved often look great. Of course, ARRI has built a lot of that stuff into their colour science too, so in a sense everything shot with an ARRI camera has a first pass from some of the world's best colour scientists, and is already that much further ahead than other offerings. Many others aren't far behind on colour science, but in the affordable cameras it's rare to get the best colour science combined with a good enough sensor and codec.

That was something I had been thinking too, but thickness is present in brighter-lit images too, isn't it? Maybe to rephrase it: higher-key images taken on thin digital cameras still don't match higher-key images taken on film. Maybe cheap cameras are better at higher-key images than low-key images, but I'd suggest there's still a difference.

Interesting images, and despite the age and lack of resolution and DR, there is definitely some thickness to them. I wonder if maybe there is a contrast and saturation softness to them - not in the sense of them being low contrast or low saturation, but more that there is a softness to the transitions of luma and chroma within the image?

In other news... I've been messing with some image processing and made some test images. Curious to hear if these appear thick or not. They're all a bit darker, so maybe they fall into the exposure range that people are thinking tends to be thicker.
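As flagged above, here's a minimal sketch of the Art Adams point as a curve: saturation rolls off towards both the shadows and the highlights, peaking in the mids. The curve shape and constants are my own illustrative assumptions, not taken from the article:

```python
# Film-style saturation weighting: full saturation in the mids,
# desaturating towards both black and white.
import numpy as np

def film_style_sat_weight(luma):
    """Saturation multiplier for luma in [0, 1]."""
    # A smooth bump that peaks at 1.0 around middle values and falls
    # to 0 at the extremes; the 0.5 exponent widens the mid plateau.
    return np.clip(4.0 * luma * (1.0 - luma), 0.0, 1.0) ** 0.5

# Usage: given per-pixel luma and saturation channels in [0, 1]:
# new_sat = sat * film_style_sat_weight(luma)
```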