Everything posted by kye

  1. Smuggle in your own popcorn, and put your mobile phone on silent, but don't accept the defeat of cinema! RISE UP!
  2. Exposure tip

    Good question, and thanks for raising it. I've contemplated this option in the past and decided against it because, at the time, being able to see colour was more important to me for framing and composition. I shoot personal videos of my friends and family in uncontrolled situations, so it's useful to see the colour so that I can include it in the shot or exclude it, depending on what I want. Having said that, with the freedom of colour grading I can 'fix' any undesirable colours in post, so even if I don't see them during filming it's not a problem for the final footage. In a sense I'd like a setting that's half-way, where saturation is reduced to perhaps a third and the highlights are shown in a fully-saturated colour. It would be great to be able to apply a display LUT in-camera (the GH5 doesn't have that feature): I could design one that partly desaturates the image and also increases contrast, so that it exaggerates exposure and makes it easier to get the exposure of things like skintones correct. I'd also make it so that pure white was 100% red and pure black was 100% blue, so you could tell what you were clipping.
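    For anyone who wants to experiment with the idea, here's a rough Python sketch of how such a monitoring LUT could be generated as a .cube file (the GH5 can't load it in-camera, but an external monitor could). The LUT size, saturation amount, contrast and clipping thresholds are made-up illustration values, not a tested LUT:

```python
# A rough sketch (not a tested monitoring LUT): writes a .cube 3D LUT that
# partially desaturates, adds contrast, and paints near-clipped whites red
# and crushed blacks blue. All constants are arbitrary illustration values.

SIZE = 33            # 33x33x33 is a common 3D LUT resolution
SAT = 1.0 / 3.0      # keep about a third of the original saturation
CONTRAST = 1.5       # simple contrast boost around middle grey
CLIP_HI = 0.95       # treat anything above this as clipped white
CLIP_LO = 0.05       # treat anything below this as crushed black

def transform(r, g, b):
    y = (r + g + b) / 3.0                      # simple average as desaturation target
    if min(r, g, b) >= CLIP_HI:                # all channels near clip -> red warning
        return (1.0, 0.0, 0.0)
    if max(r, g, b) <= CLIP_LO:                # all channels near black -> blue warning
        return (0.0, 0.0, 1.0)
    out = []
    for c in (r, g, b):
        c = y + (c - y) * SAT                  # partial desaturation
        c = 0.5 + (c - 0.5) * CONTRAST         # contrast around 50%
        out.append(min(max(c, 0.0), 1.0))      # clamp to 0..1
    return tuple(out)

with open("exposure_assist.cube", "w") as f:
    f.write('TITLE "Exposure assist sketch"\n')
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    # .cube convention: red index varies fastest, blue slowest
    for bi in range(SIZE):
        for gi in range(SIZE):
            for ri in range(SIZE):
                r, g, b = ri / (SIZE - 1), gi / (SIZE - 1), bi / (SIZE - 1)
                f.write("%.6f %.6f %.6f\n" % transform(r, g, b))
```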
  3. Q: When will digital catch up to film? A: When you learn to colour grade properly.

    With a few notable exceptions (you know who you are), the colour grading skill level of the average film-maker talking about this topic online is terrible. Worse still, people don't even know enough to realise how little they actually know. I have been studying colour grading for years at this point, and I will be the first to admit that I have barely scratched the surface.

    Here's another question - do you want your footage to look like a Super-8 home video from the 60s? I suspect not. That's not what people are actually looking for. Most people who want digital to look like film actually don't. Sure, there are a few people on a few projects who want to shoot digital and have the results look like they were shot on film in order to emulate old footage, but mostly the question is a proxy for wanting nice images. Mostly they want to get results like Hollywood does.

    Hollywood gets its high production value from spending money on production design: location choice, set design, costume / hair / makeup, lighting design, blocking, haze, camera movement, and other things like that. If you point a film camera at a crappy-looking scene then you will get a crappy-looking scene. There's a reason that student films are mostly so cringe and so cheap-looking: they spent no money on production design because they had no money. Do you think big-budget films would spend so much money if it didn't contribute to the final images?

    I suggest this:
    • Think about how much money you'd be willing to spend on a camera that created gorgeous images for you, and how much you'd spend on re-buying all the lenses, cages, monitors, and other kit you would need.
    • Think about how much time you would be willing to invest: doing all the research to work out which camera that was, selling your existing equipment, working out what to buy for the new setup, learning how to use it, and learning to process the footage.
    • Take that money and spend half of it on training courses, and put the other half into shooting some test projects that you can learn from, so you can level-up your abilities.
    • Take the time you would have spent and use it to do those courses and film those projects.

    People love camera tests, but they're mostly a waste of time. Stop thinking about camera tests and start thinking about production value tests. Take a room in your house, get one or two actors - hire them if you have to (you have a budget for this, remember) - and get them to do a simple scene, perhaps only 3-6 lines of dialogue per actor. It should be super-short because you're going to dissect it dozens of times, maybe hundreds. Now experiment with lighting design and haze. Play with set design and set dressing. Do blocking and camera movement tests. Do focal length tests (not lens tests). Now do costume design, hair and makeup tests. Take this progression into post, line them up, and compare. See which elements added the most production value.

    But you're not done yet - you've created a great-looking scene, but it's probably still dull. Now you have to play with the relationship between things like focal length / blocking / camera movement and the dramatic content of the scene.
    Most people know that we go closer to show important details and when the drama is highest, but what about the moments between those peaks? Film the whole scene from every angle, every angle you can even think of, essentially getting 100% coverage.

    Now your journey into editing begins. Start with continuity editing (if you don't know what that is, start by looking it up). You now have the ability to work with shot selection, and you should be using it to emphasise the dramatic content of the scene. Create at least a dozen edits, trying to make each one as different as possible. You can play with shot length, everything from the whole scene as one wide shot to a cut every 1s. You can cut between close-ups for the whole scene, or go between wides and close-ups. Go from wide to mid to close, and then go straight from wide to close without the mid shots in between. What did you learn about the feel of these choices? What about choosing between the person talking and the person listening? What does an edit look like where you only see the person talking, or only the person listening? Which lines land better when you see the reaction shot? Play with L and J cuts.

    Now we play with time. You have every angle, so you can add reverse-angles to extend moments (like reality TV does), you can do L and J cuts, and you can play with cutting to the reaction shot from some other line. What about changing the sequence of the dialogue? Can you tell a different story with your existing footage? How many stories can you tell? Try to make a film with the least dialogue possible - how much of the dialogue can you remove? What about no dialogue at all - can you tell a story with just reaction shots? Can you make a silent film that still tells a story - showing people talking but without being able to hear them? Play with dialogue screens like the old silent films - now you can have the actors "say" whatever you like - what stories can you tell with your footage?

    Then sound design.... Then coaching of actors.... Now you've learned how to shoot a scene. What about combining two scenes? Think of how many combinations are now available - you can combine scenes with different locations, actors, times of day, seasons, scenarios, etc. Now three scenes. Now acts and story structure....

    Great, now you're a good film-maker. You haven't gotten paid yet, though, so next comes career development, navigating the industry, business decisions and commercial acumen. Do you know which films are saleable and which aren't? Have you worked out why Michael Bay is successful despite most film-makers being very critical of him and his film-making approach and style? There's a saying about continuity - "people only notice continuity errors if your film is crap". Does it matter? Sure, but it's not the main critical success factor. Camera choice is the same.
  4. I'm going to disagree with all the sentiments in this thread and recommend something different. Go rent an Alexa. For practical purposes, maybe an Alexa Mini. Talk to your local rental houses and see if there's a timeframe where you can rent one at a big discount - rental houses are often happy to discount if you're renting the camera when no-one else would be, so have a chat with them.

    Shoot with it a lot. Shoot as much as you can and in as many situations as you can. Just get one lens with it, then take it out and shoot. Shoot in the various modes it has, shoot into the sun and away from it. Shoot indoors. Shoot high-key and shoot low-key. Then take the camera back and grade the footage.

    I suspect you won't do this. It's expensive, and a cinema camera like an Alexa is a PITA unless you have used one before. So I'll skip to the end with what I think you'll find. The footage won't look great. The footage will remind you of footage from lesser cameras. You will wonder what happened and whether you're processing the footage correctly. I have never shot with an Alexa, but I am told by many pros that if you don't know what you're doing, Alexa footage will look just as much like a home video as footage from almost any other camera.

    'Cinematic' is a word that doesn't really have any meaning in this context. It just means 'of the cinema', and there have probably been enough films shot on iPhones and shown in cinemas that an iPhone now technically qualifies as 'cinematic'. Yes, I'm being slightly tongue-in-cheek here, but the point remains that the word doesn't have any useful meaning. Yes, images shown in the cinema typically look spectacular. Most of that is location choice, set design, hair, costume, makeup, lighting, haze, blocking, and the many other things that go into creating the light that goes through the lens and into the camera.

    That doesn't mean the camera doesn't matter. We all have tastes, looks we like and looks we don't; it's just that the word 'cinematic' is about as useful as the word 'lovely' - we all know it when we see it, but we don't all agree on when that is. Far more useful is to work out which aspects of image quality you are looking for:
    • Do you like the look of film? If so, which film stocks?
    • What resolution? Some people suggest that 1080p is the most cinematic, whereas others argue that film was much higher resolution than 4K or even 8K.
    • What about colour? The Alexa has spectacular colour, and so does RED. But neither one will give you good colour easily, and neither will give you great colour - great colour requires great production design, great lighting, great camera colour science, and great colour grading. By the way, Canon also has great colour, so does Nikon, and other brands too. You don't hear photographers wishing their 5D or D800 had colour science like in the movies.
    • What lenses do you like? Sharp? Softer? High-contrast? Low-contrast? What about chromatic aberration? And what about the corners - do you like a bit of vignetting or softness or field curvature? Bokeh shape? Dare I mention anamorphics?

    But there is an alternative - it doesn't require learning what you like and how to get it, it doesn't require the careful weighting of priorities, and it's a safer option. Buy an ARRI Alexa LF and a full set of Zeiss Master Primes. That way you will know that you have the most cinematic camera money can buy, and no-one could argue based on their preferences.
You still wouldn't get the images you're after, because the cinematic look requires an enormous team and hundreds of thousands of dollars (think about it - why would productions pay for all those people if they could get the same images without them?), but there will be no doubt that you have the most cinematic camera that money can buy. I'd suggest Panavision, but they're the best cameras that money can't buy.
  5. There was a hack to do it that I came across. From memory: you save the project file (very important!), then select all the clips on the timeline and cut them; now that the timeline is empty you can change the frame rate, then paste everything back again. I remember trying it and it working, so if the above doesn't work then let me know and I'll see if I wrote it down anywhere.
  6. Great to hear you're upping your colour grading game and getting better results! I watched that video a long time ago and found it quite useful at the time. The workflow you describe above is a pretty standard workflow in colour grading circles. In terms of how I believe it's normally discussed:
    • Skintone exposure is typically set on location through a combination of camera and lighting / lighting modifiers
    • Somehow the image gets converted to a 709 space (this can be done via a great many methods, depending on your preferred workflow, but a conversion or PFE LUT is fairly common)
    • The adjustments made to the whole image are referred to as primary adjustments, or "primaries"
    • The adjustments made to parts of the image (for example via power windows or a key) are secondary adjustments, or "secondaries"

    Colour grading can seem like a bit of a dark art, and in many ways it is, but it's definitely a case of the 80/20 rule, where you can put in a little effort and get a big reward in return. Here are a few videos that I've found useful that cover the basics but go a little further than Avery does in the above video... enjoy!

    Great video from Wandering DP (excuse the clickbait title - it was done as a joke!). I can't speak highly enough of Wandering DP - his channel is full of cinematography breakdown videos where he talks about lighting and composition, and they are tremendously useful if you shoot your own content. Ironically, the above video of him talking about colour grading is better than most YT colour grading videos, despite the fact that he isn't a colourist, doesn't claim to be one, and this is the only video on his channel that talks about it! I have gone through a kind of mental 180-degree shift in how I think about shooting and colour grading over the last 6 months or so, and I think his videos have played a significant part in that transformation. My understanding of how to go about shooting and grading is now far simpler and clearer, I'm getting radically better results, and I'm not sure how I didn't understand this years ago or how there is any other way to think about it at all!

    And to take things up a notch, here's a video from Waqas, who is a professional colourist and obviously enormously talented. This video shows his approach, how he might make a commercial grade, and how he might make a cinematic grade. I can recommend his other videos too: although he has a standard approach that he likes to use (as all colourists tend to have), most of his videos have little details and tips that you can pick up new stuff from, even if you've watched his other videos already. You'll also notice if you look at his channel that he recently did long interviews with the DP and the colourist of Joker.

    The common theme between these two YT channels is that both of them are industry professionals, not internet professionals, so their frame of reference is how things are done on set, rather than the typical YT / Vlog / buy-my-LUT / links-in-the-description folks that are all over YT pretending to know what they're doing. Good luck!
  7. I suspect that it's the grade, or some other part of the image pipeline. Apparently the first one was shot on a Red Epic, which according to Wikipedia: "In 2010, Red released the Red Epic which was used to shoot The Amazing Spider-Man, The Hobbit, Prometheus, Pirates of the Caribbean: On Stranger Tides and The Great Gatsby as well as many other feature films." I'm assuming it was the earlier Epic rather than the later ones, but even then, I didn't think those other films looked particularly thin. I'd imagine that shooting with a Red you'd either shoot RAW or ProRes, so it would have been at least 10-bit. For me, there was quite a difference between the two in terms of image thickness. In terms of the creative effect, the subject matter of the first one is very digital/cold whereas the second one is more analog and human, so 'thin' and 'brittle' is a relevant and appropriate creative choice for the subject matter.
  8. I think if you applied it in the Timeline node tree then it should work? It's worth testing, although applying presets to the Timeline graph may or may not work - I've had trouble doing that in the past. If it doesn't work, you can append it to a single clip, copy it from there, apply it to the timeline graph, remove it from the clip, take the screenshots you want, then just delete it. The alternative (if it works - I haven't tried it) is to highlight multiple clips and then append the node to all of them. This should work for extracting stills, but removing it might have to be done manually, which is a PITA. The other way, which is a different approach, is to set up your grade with a shared node as the last node and apply the adjustment in it, but with the strength of the effect set to zero using the Key Output in the Key tab. Then when you want to enable it, just raise the Key Output, take the shots, then set it back to zero. If you have an existing project then you'd have to copy/paste that node onto every clip. I'm not sure if there's a bulk way to do that, but once you did it you could do anything you liked with it. If you were going to do that then I would suggest copying in half-a-dozen shared nodes so that if you ever need them they're already set up.
  9. Thoughts on how thin / thick these two trailers are?
  10. You have a good point, but I don't think this is what matters. What does matter is the difference between what a camera is capable of and what most people get out of it. If someone is getting half of the potential of a P4K then giving them an Alexa isn't going to give much of an advantage, because the same limitations that prevented them from getting even close to the potential of the P4K will also prevent them from getting the most out of the Alexa. There's a saying about continuity - "people only notice continuity problems if your film is crap". I think colour is kind of the same in many ways. As much as I love it, a great film with BM colour, GH5 colour, or even Sony colour, is still a great film. There have been many reports of people that don't know what they're doing using an Alexa, and the results are reported to look like a home video.
  11. I think the hidden 'hack' for good IQ is to go back to 1080p. The GH5 has great 1080p modes, but even if your camera doesn't, if you shoot in the typical 4K modes and put the footage on a 1080p timeline it's significantly better due to downsampling. I look at all the latest camera releases and don't really see much that I would want over what I have, from a 1080p perspective. If you're doing commercial gigs where people are paying for the spec, or recording stock footage or whatever, then sure, go for it. But any time you see a great image come up on your phone, you're not looking at something that's great because of the resolution - it will be the composition, lighting, colour, etc. That's where I'm spending my time and energy now.
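    If you want to see why the downsampling helps, here's a toy numpy sketch: averaging each 2x2 block of a noisy 4K frame roughly halves the random noise, which is a big part of why 4K on a 1080p timeline looks cleaner than native 1080p. The noise level below is an arbitrary number chosen purely for illustration:

```python
import numpy as np

# Toy illustration of 4K -> 1080p downsampling: averaging 2x2 blocks of pixels
# roughly halves random noise (standard deviation drops by ~sqrt(4) = 2).
rng = np.random.default_rng(0)

flat_grey = 0.5                                    # a flat mid-grey "scene"
noise_sigma = 0.05                                 # arbitrary sensor noise level
uhd = flat_grey + rng.normal(0, noise_sigma, (2160, 3840))

# Average each 2x2 block to get the 1080p image
hd = uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print("4K noise   :", uhd.std())   # ~0.050
print("1080p noise:", hd.std())    # ~0.025, i.e. about half the noise
```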
  12. Sounds like you're talking about situations where everyone is paid and everything is at or above industry standards, and of course, at that point it's well worth spending money on equipment, as it pays for itself by avoiding lost time - having a dozen or more people plus equipment on set is expensive. In terms of the whole of film-making though, lots of people are making content where they are time-rich but cash-poor, which is more what I was talking about. I think it's easy to forget how broad a range of film-making is going on - everything from YouTubers with a phone and (maybe) an LED light and a lav, people making features by working part-time and maxing out a couple of credit cards, folks doing weddings or corporates, people on low-budget but industry-rate sets, through to productions where there are people above the line and the daily rental on the trucks alone would make the credit-card film-maker cry. I try to keep my comments generic and in the context of everyone. Plus, these forums seem to be more frequented by people at the lower end of that scale than at the higher end.
  13. I'm actually not so sure about DR anymore. I compared the HLG mode vs the Cine-D (709 equivalent) mode on my GH5 and the HLG has a couple of stops more DR (IIRC) than the Cine-D, but in real life the differences weren't that big, even in extreme situations. The only time I missed some DR in my comparison shots was when the sun was in the shot, and even then it wasn't much. I used to shoot and think about DR in terms of making sure nothing was clipped, and then grading to control everything. Now I realise that I don't care about scenes with that much DR. If I'm shooting inside and the outside is blown out then I can choose which one I expose for, and if I care about the relationship between both then typically I can get a silhouette; and in the moments when the face of the person inside matters (maybe they're looking out), their face will usually be much better exposed, so I can hold both the outside and their face. I'm not saying there are no situations where extra DR matters, but with, say, the 9-10 stops of DR that most cameras now have in their 709 modes, that's enough for most situations. Also, in the situations where 12 stops isn't enough, you may find that the 15 stops of a high-end camera isn't enough either. Common high-DR subjects like fire, welding, the sun, or any night scene with no ambient artificial light (e.g. moonlight with torches, or moonlight with headlights) will span more than 15 stops, so there's no point lusting after an Alexa in those situations either.

    In terms of lighting, I think maybe you're underestimating how much skill is involved in getting the highest-quality shots that top-end shows and movies have. Give someone who is clueless an unlimited lighting budget (and an Alexa and great cine glass) and you'll still get something that looks awful. Lighting budgets only make a difference once you have someone with the skill to know what to do with them.
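    For anyone who hasn't done the sums, 'stops of DR' is just log2 of a contrast ratio, so these kinds of scenes blow past 15 stops very quickly. A quick sketch - the luminance figures are rough made-up examples, not measurements:

```python
from math import log2

# Stops of scene contrast = log2(brightest / darkest luminance).
# The nit values below are rough, made-up examples purely for illustration.
scenes = {
    "interior with sunlit window view": (10_000, 5),
    "night street, headlights in shot": (50_000, 0.5),
    "flame / welding in frame": (1_000_000, 1),
}
for name, (brightest, darkest) in scenes.items():
    print(f"{name}: {log2(brightest / darkest):.1f} stops")
```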
  14. The way I tested it was to block one eye and look through a tube at the image, and test how convincing it was that I was looking at a real scene rather than a flat image. I positioned the tube so that I could only see the image through that eye, so that the border of the image didn't ruin the illusion. You can then flick back and forth between images from different lenses taken from the same location and directly compare. Of course, going back to my original point, if you compare lenses of different focal lengths then you have to account for the different DOF, because that has a huge impact on how 3D a lens looks. I'd suggest the effect is so large that much of the perceived difference between two lenses could simply be that one of them is slightly wider open than the other. Of course, other types of distortion can also affect the impression of dimensionality, like the famous CZ 28/2 "Hollywood" lens and its distortions.
  15. Me too. The situation I really want is for the camera to control exposure with auto-ISO and auto-ND and then let me control SS and Aperture manually. I'd also like it to do face-recognition and exposure for that, even if I've got MF enabled. That way, the camera is doing the technical operations, and I am doing the creative operations.
  16. In terms of combining the bits together, you would only add the bit depths if they didn't overlap in DR. For example, if I took two 14-bit readouts, one a stop below the other, then 13 of the 14 bits from each readout would be duplicate data, and my effective bit-depth is really only 15 bits. So in order to understand the total bit-depth you'd need to know the overlap range in the sensor. That gets further complicated if the readouts aren't offset by a whole number of stops, which would place the values of one readout between the values of the other, providing more bit-depth but less of an increase in DR. If you really want to understand this, try modelling it in Excel and graphing the results - that should give you a more intuitive sense of what is going on.

    In terms of there being a certain quality in the older sensors, this guy has done lots of tests comparing the older BM cameras to the P4K and Ursa: https://www.youtube.com/user/joelduarte27/videos

    My overall impression is that most people don't utilise anything like the potential of their cameras, and that the difference between the images most people get and the images you see from an Alexa or RED is more down to user skill (in terms of lighting, composition, camera operating, and the complete image pipeline in post) than it is about any camera limitations that might exist.
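    Since I suggested modelling it, here's a minimal Python sketch of the idea (in place of Excel): two simulated 14-bit readouts offset by exactly one stop, combined into a single value. It ignores noise and assumes a perfect one-stop gain difference, but it shows why the combined range is about 15 bits rather than 28:

```python
# Simplified model of combining two 14-bit readouts offset by exactly one stop.
# No noise, perfect gains - just to show how the ranges overlap.

BITS = 14
FULL = 2**BITS - 1          # 16383, clipping point of each readout

def readouts(scene_linear):
    """Return (high_gain, low_gain) ADC codes for a linear scene value."""
    high = min(round(scene_linear), FULL)        # clips one stop earlier
    low = min(round(scene_linear / 2), FULL)     # half the gain = one extra stop of range
    return high, low

def combine(high, low):
    """Use the high-gain readout until it clips, then switch to low gain."""
    return high if high < FULL else low * 2

# Sweep the scene from black up to the clip point of the low-gain readout
for scene in (0, 100, 16000, 16383, 20000, 32766):
    h, l = readouts(scene)
    print(f"scene={scene:6d}  high={h:5d}  low={l:5d}  combined={combine(h, l):6d}")

# The combined signal spans 0..32766 - about 15 bits of range, not 28,
# because below the clip point the two readouts are largely duplicate data.
```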
  17. One thing to note, for anyone reading this thread, is that small variations in DOF create a very large perceptual difference in how 3D an image seems. I've previously compared a ton of lenses under test conditions, and a 55mm lens seems much more 3D than a 50mm lens when both are at the same aperture. I know this is different to compression, but it's likely to get mixed up in this conversation at some point, considering that all these things are interrelated.
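    To put a number on how sensitive DOF is to a small focal length change, here's a quick depth-of-field calculation using the standard thin-lens DOF formulas - the 0.03mm circle of confusion, f/2 aperture and 2m subject distance are just example values:

```python
# Quick depth-of-field comparison: 50mm vs 55mm at the same aperture.
# Standard thin-lens DOF formulas; the circle of confusion and subject
# distance are example values only (0.03mm CoC, subject at 2m).

def dof_mm(f, N, s, c=0.03):
    """Total depth of field in mm for focal length f (mm), aperture N,
    subject distance s (mm), circle of confusion c (mm)."""
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return far - near

for f in (50, 55):
    print(f"{f}mm f/2 at 2m: total DOF ~ {dof_mm(f, 2.0, 2000):.0f} mm")
```

    At those settings the 55mm gives roughly 15-20% less total DOF than the 50mm, which is in the same ballpark as opening the 50mm up by about half a stop - so a small focal length or aperture difference can easily masquerade as 'lens character'.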
  18. That seems like the sensible option to me. It's a standard hardware configuration and makes the setup 100% Apple, which bodes well for support and FCPX optimisation in future updates. Keep us informed about performance when you get it all set up 🙂
  19. camera movement

    This all looks suitable for hand-held to me. I don't know what they were saying, but from the various clips and acting, I can see that the characters (and therefore the narrative) are raw, unpredictable, unstable, violent, and high-energy. The editing reflects this as well.

    By contrast, think about the aesthetic of viewing something violent, like an armed robbery perhaps, through footage from a security camera. The security footage is completely stationary and wide-angle, making any movement in the frame much smaller than a tighter lens would make it. This experience is very detached and impartial, and makes the action seem small, despite how intense it might be.

    Another alternative to a static angle or hand-held footage would be smooth and steady camera movement like a pan or tilt. This would obviously not be a good aesthetic choice here either, as smooth, controlled movement is just that - smooth and controlled. This type of movement is often associated with beauty (like panning over a grand vista) or scale (tilting up to see a tall building). Both of these situations are controlled - the horizon is always level and the building is always vertical.

    The alternative to that is a steadicam or crane shot, where the movement is steady but not controlled in the same way that a pan or tilt is. The aesthetic of a gimbal is that it is floating, and although it is closer to the action and maybe even affected by it (for example a shot where the camera follows the action or revolves around characters), it still has a feel of detachment. Like the security camera, it lacks a human reaction - it doesn't jump the way a human would. Crane movement is slightly different in that it's normally more geometric, and as it often moves vertically more than a gimbal shot does, it's also more detached from a human point of view, considering that humans typically experience the world from eye-height.

    I think those are the main choices for camera mounting and the various aesthetics of movement. The trick is to choose the one that best represents the emotional experience you want the viewer to have.
  20. camera movement

    Depending on what films or TV you are watching, it could well be a choice made due to laziness or fashion rather than an optimum aesthetic choice. There are plenty of film-makers who have a style that other people do not appreciate and are critical of. Michael Bay is a great example of someone that makes creative choices that many are critical of, but others are supportive, so it's all a matter of taste.
  21. Another test - including 4K, 5K, 6K, and 8K RED raw files.
  22. camera movement

    Any camera movement should be a deliberate artistic choice, designed to support the narrative and aesthetic of the film. I'm far from an expert, but there are lots of articles around if you google 'camera movement'. There is also the element of motivated vs unmotivated camera movement that you can read about as well. In terms of camera shake, the aesthetic of it is that it's a bit more 'real', because that's how amateurs take video of real life, so it can give a more authentic feel to a shot. It can also make things more exciting, which is why camera shake gets used in action sequences. Topics like this are so deep that you can never learn everything about them, even if you studied them for the rest of your life. However, these are the skills that will drastically improve your film-making. Camera movement, composition, lighting, editing, dialogue, sound design, etc etc etc..... the beauty of film-making is that you can learn a little and get a big improvement in your work, you can learn more and get even better, but you can study it for the rest of your life and never run out of things to discover, and there is no limit to the degree that you can improve your work.
  23. @KnightsFan I'm far from an expert, but the approach I have taken (and I'm pretty sure it aligns with what I was advised to do) was to calibrate the monitors in the OS so that all apps get the calibration applied - but I'm on Mac, so it could well be quite different for you. I take a slightly different approach of having a preset node in Resolve that I append to the node tree, take the screen grab, then remove the node again. I've taken multiple shots, brought them up in the UI, compared them to the Resolve window, and found the match to be acceptable.

    @Oliver Daniel The other thing I forgot to mention about colour accuracy is to get a bias light that will provide a neutral reference. I purchased one from FSI https://www.shopfsi.com/BiasLights-s/69.htm which is just an LED strip that sticks to the back of your monitor and shines a neutral light onto the wall behind it. The wall behind my monitor has black sound panels, so I just pinned up some A4 paper for the light to reflect off - I figured you can always rely on Reflex! There are all sorts of other tricks to getting a neutral environment, but probably the most important aspects are to calibrate your monitor to 100 nits and 6500K, to get a bias light, to match the ambient lighting to the bias light (which is an accurate reference), and to ensure that the ambient light isn't falling onto the monitor and is quite dark compared to the brightness of your monitor. I used my camera as a light meter to measure how bright the ambient light was, but if you calibrate your monitor, pull up an 18% grey, and match your ambient light to that, then you should be good. Professional colourists talk about getting paint that is exactly neutral grey (which is available and very expensive), but once again, it's diminishing returns.
  24. To further add to the Mac v PC conversation, I think people underestimate the fact that film-making is a creative pursuit focused on a visual and auditory experience. Every creative pursuit requires that the creator be comfortable creating, which is a function of building an environment that suits the creator's tastes and preferences. Because film-making is a visual and auditory craft, the kind of creators it attracts are people who care about visual and auditory aesthetics, so it makes sense that the visual and auditory experience of the creative environment will directly impact the quality and quantity of the creative work, and how enjoyable the creative process is for the creator. Considering that video editing is done on computers, this maps directly to the choice of a computer. Considering only the price you pay for every million floating point operations per second is a valid perspective if it is your perspective. Others have their own perspectives, and they will always be different, either subtly or radically, and that is part of the beauty of people and creativity. I would hate it if we were all the same and every movie or TV show I watched was created the way that I would have created it - how dull and predictable that would be.
  25. I asked the question on the colourist forums about calibration on a normal monitor, and although the strict answer is that you need BM hardware, some people said that their GUI matches their reference monitor almost exactly (after calibrating, obviously), so it's really about what level of certainty and accuracy you want. If a pro colourist says the match is almost perfect then I figure that's good enough for me!