Everything posted by kye

  1. I suspect that it's the grade, or some part of the image pipeline. Apparently the first one was shot on a Red Epic, which according to Wikipedia: "In 2010, Red released the Red Epic which was used to shoot The Amazing Spider-Man, The Hobbit, Prometheus, Pirates of the Caribbean: On Stranger Tides and The Great Gatsby as well as many other feature films." I'm assuming that it was the earlier Epic rather than a later one, but even then, I didn't think that those other films looked particularly thin. I'd imagine that shooting with a Red you'd either shoot in RAW or Prores, so it would have been at least 10-bit. For me, there was quite a difference between the two in terms of image thickness. In terms of the creative effect, the subject matter of the first one is very digital/cold whereas the second one is more analog and human, so 'thin' and 'brittle' are relevant and appropriate creative choices for the subject matter.
  2. I think if you applied it in the Timeline node tree then it should work? It's worth testing, although applying presets to the Timeline graph may or may not work - I've had trouble doing that in the past. If it doesn't work, you can append it to a single clip, copy it, apply it to the timeline graph, remove it from the clip, take the screenshots you want, then just delete it. The alternative (if it works - I haven't tried it) is to highlight multiple clips and then append the node to all of them. This should work for extracting stills, but removing it might have to be done manually, which is a PITA. Yet another way, which is a different approach, is to set up your grade with a shared node as the last node and apply the adjustment in it, but set the strength of the effect to zero using the Key Output in the Key tab. Then when you want to enable it, just raise the Key Output, take the shots, then set it back to zero. If you have an existing project then you'd have to copy/paste that node onto every clip. I'm not sure if there's a bulk way to do that, but once you did it you could do anything you liked with it. If you were going to do that then I would suggest copying half-a-dozen shared nodes so that if you need them then they're already set up.
  3. Thoughts on how thin / thick these two trailers are?
  4. You have a good point, but I don't think this is what matters. What does matter is the difference between what a camera is capable of and what most people get out of it. If someone is getting half of the potential of a P4K then giving them an Alexa isn't going to give much of an advantage, because the same limitations that prevented them from getting even close to the potential of the P4K will also prevent them from getting the most out of the Alexa. There's a saying about continuity - "if people notice continuity problems then your film is crap". I think colour is kind of the same in many ways. As much as I love it, a great film with BM colour, GH5 colour, or even Sony colour, is still a great film. There have been many reports of people who don't know what they're doing using an Alexa, and the results are reported to look like a home video.
  5. I think the hidden 'hack' for good IQ is to go back to 1080p. The GH5 has great 1080p modes, but even if your camera doesn't, shooting in the typical 4K modes and putting the footage on a 1080p timeline makes it significantly better due to downsampling. I look at all the latest camera releases and don't really see much that I would want over what I have, from a 1080p perspective. If you're doing commercial gigs where people are paying for the spec, or recording stock footage or whatever, then sure, go for it. But any time you see a great image come up on your phone, you're not looking at something that's great because of the resolution; it's the composition, lighting, colour, etc. That's where I'm spending my time and energy now.
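
    To put a rough number on the downsampling benefit, here's a minimal sketch in Python (my own illustration, not something from the original posts): assuming a simple box-filter downsample of synthetic, uncorrelated Gaussian noise, averaging each 2x2 block of a 4K frame into one 1080p pixel halves the noise standard deviation, which is roughly a one-stop improvement in SNR.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "4K" frame: flat mid-grey signal plus Gaussian sensor noise.
      h, w = 2160, 3840
      frame_4k = 0.5 + rng.normal(0, 0.05, size=(h, w))

      # Downsample to "1080p" by averaging each 2x2 block of pixels.
      frame_1080 = frame_4k.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

      print("4K noise std:   ", round(frame_4k.std(), 4))    # ~0.05
      print("1080p noise std:", round(frame_1080.std(), 4))  # ~0.025

      # Averaging 4 uncorrelated samples divides the noise std by sqrt(4) = 2,
      # i.e. about 6 dB better SNR, before even considering the detail gained
      # from oversampling.
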
  6. Sounds like you're talking about situations where everyone is paid and everything is at or above standards, and of course, at that point it's well worth spending money on equipment as it pays for itself by avoiding lost time. Having a dozen people or more plus equipment on set is expensive. In terms of the whole of film-making though, lots of people are making content where they are time rich but cash poor, which is more what I was talking about. I think it's easy to forget how broad a range of film-making there is going on - everything from YouTubers with a phone and (maybe) an LED light and a lav, people making features by working part-time and maxing out a couple of credit cards, folks doing weddings or corporates, people on low-budget but industry-rate sets, through to productions where there are people above the line and the daily rental on the trucks alone would make the credit-card film-maker cry. I try to keep my comments generic and in the context of everyone. Plus, these forums seem to be more frequented by people at the lower end of the scale than at the higher end.
  7. I'm actually not so sure about DR anymore. I compared the HLG mode vs the Cine-D (709 equivalent) modes on my GH5 and the HLG has a couple of stops more DR (IIRC) than the Cine-D, but in real life the differences weren't that big, even in extreme situations. The only time I missed some DR in my comparison shots was when the sun was in the shot, but even then it wasn't much. I used to shoot and think about DR in terms of making sure nothing was clipped, and then grading it to control everything. Now I realise that I don't care about things with that much DR. If I'm shooting inside and the outside is blown out then I can choose which thing I expose on, and if I care about the relationship between both then typically I can get a silhouette, and at times when the face of the person inside is important (maybe they're looking out), their face will be much better exposed and I can get both the outside and their face. I'm not saying there are no situations where extra DR matters, but I'm saying that with, say, the 9-10 stops of DR that most cameras have now in 709 modes, that's enough for most situations. Also, in the situations where 12 stops isn't enough, you might find that the 15 stops of high-end cameras is also not enough. Common high-DR things like fire, welding, the sun, or any night scene where there is no ambient artificial light (eg, moonlight with torches, or moonlight with headlights) will be more than 15 stops of DR, so there's no point lusting after an Alexa in those situations either. In terms of lighting, I think maybe you're underestimating how much skill is involved in getting the highest quality shots that top-end shows and movies have. Give someone who is clueless an unlimited lighting budget (and an Alexa and great cine glass) and you'll still get something that looks awful. Lighting budgets only make a difference after you have someone with the skill to know what to do with them.
  8. The way I tested it was to block one eye and look through a tube at the image and test how convincing it was that I was looking at a real scene instead of a flat image. I positioned the tube so that I could only see the image through that eye, so that the border of the image didn't ruin the illusion. You can then flick back and forwards between images from different lenses taken from the same location and directly compare. Of course, going back to my original point, if you compare lenses of different focal lengths then you have to account for the different DOF because that has a huge impact on how 3D a lens looks. I'd suggest that it's so large that much of the perceived difference between two lenses could simply be that one of them is slightly wider open than the other and that's what you're seeing. Of course, other types of distortions can also impact the impression of dimensionality, like the famous CZ 28/2 "Hollywood" lens and its distortions.
  9. Me too. The situation I really want is for the camera to control exposure with auto-ISO and auto-ND and then let me control SS and Aperture manually. I'd also like it to do face-recognition and exposure for that, even if I've got MF enabled. That way, the camera is doing the technical operations, and I am doing the creative operations.
  10. In terms of combining the bits together, you would only add the bit depths if they didn't overlap in DR. For example, if I took two 14-bit readouts, one a stop below the other, then 13 of the 14 bits from each read-out would be duplicate data, and my effective bit-depth is really only 15-bits. So in order to understand the total bit-depth you'll need to know the overlap range in the sensor. That would also be further complicated if they weren't offset by a whole number of stops, which would place the values of one readout between the values of the other, providing more bit-depth but less increase in DR. If you really want to understand this, try modelling things in Excel and graphing them. That should give you a more intuitive sense of what is going on. In terms of there being a certain quality in the older sensors, this guy has done lots of tests comparing the older BM cameras to the P4K and Ursa. https://www.youtube.com/user/joelduarte27/videos My overall impression is that most people don't utilise anything like the potential of their cameras, and that the difference between what images most people get and the images that you see from an Alexa or RED is more down to user skill (in terms of lighting, composition, camera operating, and the complete image pipeline in post) than it is about any camera limitations that might exist.
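
    As a quick alternative to Excel, here's the same overlap argument as a small Python sketch (the numbers are illustrative, not the specs of any real sensor): treat each linear readout's usable range as roughly its bit depth in stops, and see how little the total grows when the two readouts overlap.

      def combined_range_stops(bits_a, bits_b, offset_stops):
          """Approximate total range (in stops) of two overlapping linear readouts."""
          # Readout A spans [0, bits_a] stops; readout B spans [offset, offset + bits_b].
          top = max(bits_a, offset_stops + bits_b)
          bottom = min(0, offset_stops)
          return top - bottom

      # Two 14-bit readouts offset by one stop share ~13 stops of range,
      # so the combination covers ~15 stops, nowhere near 14 + 14.
      print(combined_range_stops(14, 14, 1))    # 15

      # A half-stop offset interleaves one readout's code values between the
      # other's: finer tonal steps in the overlap, but even less extra range.
      print(combined_range_stops(14, 14, 0.5))  # 14.5
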
  11. One thing to note, for anyone reading this thread, is that small variations in DOF create a very large perceptual difference of how 3D an image seems. I've previously compared a ton of lenses under test conditions and a 55mm lens seems much more 3D than a 50mm lens, when both are at the same aperture. I know this is different to compression, but it's likely to get mixed up in this conversation at some point considering that all these things are interrelated.
  12. That seems like the sensible option to me. It's a standard hardware configuration and makes the setup 100% Apple, which bodes well for optimisation and support in future updates and FCPX optimisations. Keep us informed about performance when you get it all setup 🙂
  13. camera movement

    This all looks suitable for hand-held to me. I don't know what they were saying, but from the various clips and acting, I can see that the characters (and therefore the narrative) are raw, unpredictable, unstable, violent, and high-energy. The editing reflects this as well. By contrast, think about the aesthetic of viewing something violent, like an armed robbery perhaps, through footage from a security camera. The security footage is completely stationary, and is wide-angle, making any movement in the frame much smaller than a tighter lens would make it. This experience is very detached and impartial, and makes the action seem small, despite how intense it might be. Another alternative to having a static angle or hand-held footage would be to have smooth and steady camera movement like a pan or tilt. This would obviously not be a good aesthetic choice either, as smooth controlled movement is just that, smooth and controlled. This type of movement is often associated with beauty (like panning over a grand vista) or scale (tilting up to see a tall building). Both of these situations are controlled - the horizon is always level and the building is always vertical. The alternative to that is a steadicam or crane shot, where the movement is steady but is not controlled in the same way that a pan or tilt is. The aesthetic of a gimbal is that it is floating, and although it is closer to the action and maybe even affected by it (for example a shot where the camera follows the action or revolves around characters), it still has a feel of detachment. It has the same lack of reaction as a security camera - it doesn't jump the way a human would. Crane movement is slightly different in that it's normally more geometric, and as it often moves vertically more than a gimbal shot it's also more detached from a human point of view, considering that humans typically experience the world from eye height. I think those are the choices for camera mounting and the various aesthetics of movement. The trick is to choose the one that most represents the emotional experience you want the viewer to have.
  14. camera movement

    Depending on what films or TV you are watching, it could well be a choice made due to laziness or fashion rather than an optimum aesthetic choice. There are plenty of film-makers who have a style that other people do not appreciate and are critical of. Michael Bay is a great example of someone that makes creative choices that many are critical of, but others are supportive, so it's all a matter of taste.
  15. Another test - including 4K, 5K, 6K, and 8K RED raw files.
  16. camera movement

    Any camera movement should be a deliberate artistic choice, designed to support the narrative and aesthetic of the film. I'm far from an expert, but there are lots of articles around if you google 'camera movement'. There is also the element of motivated vs unmotivated camera movement that you can read about as well. In terms of camera shake, the aesthetic is that it's a bit more 'real', because that's how amateurs take video of real life, so it can give a more authentic feel to a shot. It can also make things more exciting, which is why they use camera shake in action sequences. Topics like this are so deep that you can never learn everything about them, even if you studied them for the rest of your life. However, these are the skills that will drastically improve your film-making. Camera movement, composition, lighting, editing, dialogue, sound design, etc etc etc..... the beauty of film-making is that you can learn a little and get a big improvement in your work, you can learn more and get even better, but you can study it for the rest of your life and never run out of things to discover and there is no limit to the degree that you can improve your work.
  17. @KnightsFan I'm far from an expert but the approach that I have taken (and I'm pretty sure it aligns with what I was advised to do) was to calibrate the monitors in the OS so that all apps get the calibration applied, but I'm on Mac so it could well be quite different for you. I take a slightly different approach of having a preset node in Resolve that I append to the node tree, take the screen grab, then remove the node again. I've taken multiple shots and then brought them up in the UI and compared them to the Resolve window and found the match to be acceptable. @Oliver Daniel The other thing I forgot to mention about colour accuracy is to get a bias light that will provide a neutral reference. I purchased one from FSI https://www.shopfsi.com/BiasLights-s/69.htm which is just an LED strip that gets stuck to the back of your monitor and shines a neutral light onto the wall behind your monitor. The wall behind my monitor has black sound panels so I just pinned up some A4 paper for the light to reflect off. I figured you can always rely on reflex! There's all sorts of other tricks to getting a neutral environment, but probably the most important aspects are to calibrate your monitor to 100 nits and 6500K, to get a bias light, to match the ambient lighting to the bias light (which is an accurate reference), and to ensure that the ambient light isn't falling onto the monitor and that it's quite dark compared to the brightness of your monitor. I used my camera as a light meter to measure how bright the ambient light was, but if you calibrate your monitor, pull up an 18% grey, and match your ambient light to that, you should be good. Professional colourists talk about getting paint that is exactly neutral grey (which is available and very expensive) but once again, it's diminishing returns.
  18. To further add to the Mac v PC conversation, I think people underestimate the fact that film-making is a creative pursuit focusing on a visual and auditory experience. Every creative pursuit requires that the creator be comfortable to create, which is a function of building an environment that suits the tastes and preferences of the creator. The fact that film-making is a visual and auditory craft means that the kind of creators that it attracts are people that care about visual and auditory aesthetics, so it makes sense that the visual and auditory experience of the creative environment will directly impact the quality and quantity of the creative work, and how good the creative process is to experience for the creator. Considering that video editing is done on computers, this directly maps to the choice of a computer. Only considering the price that you pay for every million floating point operations per second is a valid perspective if it is your perspective. Others have their own perspectives, and they will always be different, either subtly or radically, and that is part of the beauty of people and creativity. I would hate it if we were all the same and every movie or TV show I watched was created the way that I would have created it - how dull and predictable such a thing would be.
  19. I asked the question on the colourist forums about calibration on a normal monitor and although the strict answer is that you need to have BM hardware some people said that their GUI matches their reference monitor almost exactly, obviously after calibrating, so it's really about what level of certainty and accuracy you want. If a pro colourist says the match is almost perfect then I figure that's good enough for me!
  20. Swapping AF and low light for unlimited recording and colour matching seems like a good trade-off if you're doing interviews and weddings, especially for a B-camera. Also WRT low light, the MFT cameras can get very passable low-light performance if paired with a fast lens. I have above average night vision (I can easily go off-road mountain bike riding with no lights if there's a full moon) but the GH5 and f0.95 lens combo can see in the dark better than I can. The other low-light advantage of MFT is that on an f0.95 lens it gets the exposure value of T0.95 but a DoF equivalent to a FF at F1.9, so better exposure without the razor-thin DoF. I'd suggest having a think about the various interview configurations that you might use and then working out what the minimum requirements would be for those setups, in order to avoid buying gear you don't end up using.
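
    For anyone who wants the arithmetic behind that f0.95 point, here's a tiny Python sketch of the standard crop-factor equivalence (my own illustration, using the 17.5/0.95 lens mentioned elsewhere in these posts): the actual f-stop/T-stop still sets the exposure, while the full-frame-equivalent focal length and depth-of-field aperture both scale by the crop factor.

      def ff_equivalents(focal_mm, f_stop, crop_factor):
          """Return (FF-equivalent focal length, FF-equivalent aperture for DoF)."""
          return focal_mm * crop_factor, f_stop * crop_factor

      # MFT has a crop factor of ~2, so a 17.5mm f/0.95 lens frames and renders
      # depth of field like a 35mm f/1.9 on full frame, while still exposing
      # at roughly T0.95.
      eq_focal, eq_aperture = ff_equivalents(17.5, 0.95, 2.0)
      print(eq_focal, eq_aperture)  # 35.0 1.9
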
  21. In terms of 8-bit, concentrate on getting it as right in-camera as you can. All the 10-bit vs 8-bit tests that identified differences (many didn't) only had issues when the image was pushed by a decent amount, so the better you get it in-camera the less you have to push it around and the better the image will be. Matching colour is a matter of:
      • choosing the same colour profiles on all cameras
      • setting the WB and exposures properly
      • using a colour checker once per setup
    You can do your homework and record a colour checker on all cameras in controlled conditions, then make an adjustment to match the lesser cameras to the better one, and make it a preset so you can quickly apply it. Then when you're working on actual projects all you have to do is tweak slightly, if anything at all. Yes, I realise that this means you will be shooting a rec709 type profile on your best camera, but this is intentional to match colour profiles, and it will also help you get a good quality image in post. I use the GH5 as my A-cam and shoot the 1080p All-I 10-bit 422 modes, and just recently I swapped from shooting HLG to shooting in Cine-D. Extrapolating from the principle of "the less you mess with it, the further you are away from the image breaking", I no longer take my 10-bit image and radically push/pull the bits around in the HLG->rec709 conversion, so now when I get my 10-bit Cine-D images out of camera the bits are nice and thick and almost exactly where I'll end up putting them in the final grade. @IronFilm makes a good point about using the kit lens on a second / third angle, as kit lenses are nice and sharp when stopped down a bit, and can also be pretty good at their longest focal length wide open, especially if it's an angle you're not going to go to for long or many times in the edit.
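
    One way to turn that colour-checker homework into a reusable correction, sketched in Python (this is my rough illustration with placeholder patch values and an assumed linear RGB working space, not a description of any specific tool's workflow): solve a least-squares 3x3 matrix that maps the lesser camera's patch RGBs onto the better camera's, then bake an equivalent correction into a preset in your NLE.

      import numpy as np

      # Placeholder colour-checker patch values (linear RGB), sampled from both
      # cameras shooting the same chart under the same light.
      patches_a_cam = np.array([[0.42, 0.31, 0.25],   # reference / better camera
                                [0.76, 0.58, 0.49],
                                [0.35, 0.42, 0.61],
                                [0.62, 0.73, 0.24]])
      patches_b_cam = np.array([[0.45, 0.30, 0.22],   # camera to be matched
                                [0.80, 0.55, 0.44],
                                [0.33, 0.40, 0.65],
                                [0.60, 0.75, 0.20]])

      # Solve patches_b_cam @ M ~= patches_a_cam for a 3x3 matrix M (least squares).
      M, *_ = np.linalg.lstsq(patches_b_cam, patches_a_cam, rcond=None)

      def match_to_a(rgb_from_b):
          """Apply the derived matrix so B-cam pixels land near A-cam colour."""
          return np.clip(rgb_from_b @ M, 0.0, 1.0)

      print(match_to_a(patches_b_cam))  # should come out close to patches_a_cam
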
  22. I second everything that @IronFilm said. Now is a good time to consolidate to one lens mount as you're selling lenses anyway. This folds into a lens strategy where you should be able to use the same lenses across both cameras, but without having duplicates, which typically means having the same sensor size. For example, if you want a variety of shots in your Multicam setup then you want a different FOV from the two cameras. Typically that would mean having a mid-shot on the A-cam and either a wider shot or a tight shot on the B-cam, which is often done with a 24-70 / 16-35 or 24-70 / 70-200. If your B-cam was an MFT then you'd either have to have a second 24-70 to get a tighter shot with the crop sensor, or you'd have to use a specialist MFT wide lens like a 7-14mm to get something that was actually wide, but then that's a lens you can't use on your A-camera. The other advantage of having cameras of the same sensor size and common lens mount is that everything is a backup for everything else. If a body dies then you have a second body that can use any of your lenses. If a lens dies then you have access to every other lens on that camera. In the spirit of "two is one and one is none" you might consider buying a very cheap third body in case one body dies, which would still leave you with a dual-camera setup. I don't know how you shoot and edit, so maybe you could cut up the interview with B-roll if you only had one camera for the interview, but it's worth considering. In terms of a third body, something with good 1080p would probably be the way to go, which typically means an older and smaller sensor. In that situation you'd want to get something that required as few extra peripherals as possible, for example, using compatible power solutions, lenses, etc. That might be where a G7 or something comes into play, where you can use the 70-200 on your main camera and the 24-70 on the backup camera as a mid shot, perhaps positioned further away, or go to 70mm where, without a speed booster, it would be a tight shot. I'm employing this strategy in my work. My main setup for travel film-making is a GH5 combined with a Sony X3000 action camera, and next trip I will probably buy a GH3 as a second / backup body. It will take the same lenses, can be used as a second angle if required (either for real-time or time lapses), and if my whole GH5 setup goes in the drink (for example) then I can easily replace it with the GH3, and the only overheads of the GH3 will be carrying an extra 3.5mm on-camera mic, a USB battery charger and a couple of batteries, and a 14/2.5 lens as a replacement for my 17.5/0.95 main lens. If my X3000 dies then I can use my 7.5/2 lens on the GH5 or on the GH3 to replace the FOV. I would lose the waterproof abilities of the X3000 but that's the price of a camera dying on a trip.
  23. I suggest:
      • if you don't have a monitor calibration device, buy that as top priority and then buy the monitor with what is left - a cheap calibrated monitor will kill a more expensive uncalibrated monitor
      • read lots of reviews - when I was monitor shopping I read a bunch of them and there is all the information you need out there
      • think about size and resolution; for example, the larger the monitor the larger the resolution typically is, which will end up determining how many of your controls you can view in your NLE at a time and also what resolution your preview window will be, which isn't so critical for editing but is for things like sharpening and other resolution-related tasks
      • also, in combination with the above, think about aspect ratio, as those super-wide monitors have room for more UI in some NLEs
  24. If anyone is using Resolve and looking at the new Macs, this shows very good results: Note that there's a special download link to the new version for the M1 Macs. TLDR: the cheapest Mac mini plays 4 x 4K files simultaneously with 2 nodes of grading on each clip.
  25. I agree with the above sentiments about getting one and trying it out. In terms of RAM and Apple vs PC, I've found that Apple handles RAM better than PCs, but it all falls apart if you run out of space on the SSD, so I'd suggest finding some SSD benchmarks if they're around yet, and buying the largest one that has good speeds. I haven't kept up with the tech, but it used to be that there was a sweet spot in size where below and above that size the performance suffered. I recently bought a new 13" MBP but seriously considered the Mac Mini as you sure get a lot of performance for your money compared to other Macs. One thing I've wondered about with the benchmarks is the T2 chip, which supposedly accelerates encoding and decoding, but it doesn't appear in things like the Activity Monitor, and I struggled to find the detail about which types of files it works with. For example, maybe it works with h265, but is that only 420 or 422, and what about 10-bit, etc etc.