Everything posted by kye

  1. You can agree or disagree or bump whatever threads you want - I showed you examples from the real world. If bumping threads somehow changed reality, I'd bump as many threads as were necessary to make it so that I could get Hollywood-level colour from my phone without any work in post. Hell, I'd create all the accounts and make all the posts myself if it would actually make it so. I am so vocal about this because what you are asking about is what I desperately want, but that just isn't how it works. To answer your question directly, no, I don't say that the camera doesn't matter at all, but the camera doesn't matter in terms of getting great shots straight-out-of-camera without any work in post, because none of them can do it. Have you ever seen ungraded Alexa footage? It looks just like ungraded footage from any other capable camera (S1H, BMPCC, etc). In fact, you know what... here's your answer. The BMPCC 4K. Here is a comparison between an Alexa and a BMPCC 4K without any grading done in post except the ARRI LUT (and of course the BMPCC 4K has had a conversion put onto it to make it match the Alexa). Juan is a professional film-maker / colourist and this is the real deal, not a YT LUT bro product. If you think that any camera is capable of what you want then the Alexa must be, and as you can see, with that conversion the BMPCC 4K can do it too. This is the product page. https://juanmelara.com.au/products/bmpcc-4k-to-alexa-powergrade-and-luts Knock yourself out!
  2. More examples of bad lighting. This was a 709 shot from my GF3, which obviously couldn't auto-WB far enough to compensate (yes, this looked white in person): My best attempt at grading in post also couldn't compensate well enough: But the real demonstration is on a project. Here's a camera test I shot. These are the images after grading: They all look pretty straight-forward, but it took a lot of work to get to that. Here are the shots SOOC: Note that adjacent shots have considerably different looks - SOOC: After: Obviously I've let the flaring lower the contrast on the middle images to a certain extent because otherwise it would look too forced, but the tint of the first and second images needed to be evened out, as one had the sun in it and the other didn't. I've shot these tests by the beach many times, using many different cameras (OG BMPCC, BMMCC, GH5, GX85, XC10, GF3, iPhone, GoPro, etc), shooting manually and in auto, in RAW / LOG / 709, etc etc. All required decent amounts of work in post to even them out and make them look normal. It's like anything - the natural look takes the most work and is, in reality, the least natural. You keep saying you want nice looking images without doing any/much work, but I've been working super hard at this for quite some years now and it's just not possible. You either get nice looking images with work, or you wave the camera around and you get out what you put in - a film that looks like it was shot by a dad with a handycam. The myth that you can buy it was created by equipment manufacturers trying to sell you cameras and by LUT bros on YT trying to sell their LUT packs.
  3. ....and if that image doesn't illustrate the problem with lighting, how about this one. SOOC HLG: With a basic grade: These lights all looked white in person! I sat down, pulled out my camera (GH5 again), looked through the viewfinder and was stunned at the green/magenta mess the camera saw.
  4. Sure. You just have to hope that the world is perfect and doesn't give you mixed lighting. That every shot you take has the same lighting ratio. That your camera doesn't have metameric failure as the WB changes to match the lighting changes. Etc, etc. Even on completely controlled film sets, colourists still tweak each shot to even them up between angles etc, so even if the world were perfect your results still wouldn't be.
  5. The image I posted with the bad lighting was from the GH5, shooting LOG, with a standard conversion to 709. The GH5 is a hugely capable camera; the issue was the lighting, not the camera.
  6. I watched it at 144p and it looked almost like a normal (shitty) romcom. I know it will get better, but at the moment it's really only putting the emotional-drivel, straight-out-of-film-school, first-time directors out of business, and they weren't making money before either. It's when they start making awful action movies that it will shake things up. Emotional drivel doesn't make money, but the action equivalent makes billions....
  7. True. Especially if you wanted to have good control over it independently across the 6 main colours. There's a lot of talk about how these two plugins are going to put DCTL writers out of business (DCTLs are essentially plugin scripts). I don't think they will, because DCTLs can be made to do a lot more things than they are currently used for (there's a rough sketch of the kind of per-pixel maths they wrap up after this post), but it's an interesting observation that the DCTL writers were using them to do things like this because they weren't so easy from the UI. In the same way that a new camera gets everyone on here excited and then arguing with each other, discussing film emulation has the same effect on the colourist groups. I've found two things: 1) The more I read these discussions the more I realise I don't know. 2) The more I read these discussions the more I realise I don't care! Seriously, the focus for the colourists who are arguing seems to be how accurate the emulations are. What's interesting, though, is that, just like here with cameras, the people arguing seem to care a lot about tiny little things, while the people out in the world actually doing the work don't care about the accuracy of those tiny little things - they just see the emulations as useful for getting real work done. I suspect that the niche for Dehancer is likely to be either that it's more accurate, at least for certain film stocks, or that it is more useful in some feature or other. These are the kinds of things that colourists seem to care about. The fact you tested it with a stills film stock rather than a motion picture stock might also be significant, as the colourists likely don't care too much about those.
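For anyone curious what a DCTL actually is: it's a little per-pixel program that Resolve compiles and runs on the GPU. Below is a rough Python sketch of the kind of maths one might wrap up - a simple contrast curve around a pivot. The function and its numbers are invented purely for illustration; they're not any particular plugin's or film stock's actual maths.

```python
# Illustration only: the kind of per-pixel maths a DCTL typically wraps up.
# A real DCTL is C-like code compiled by Resolve for the GPU; this curve and
# its constants are invented just to show the shape of the idea.

def pivot_contrast(x, pivot=0.18, contrast=1.4):
    """Power-based contrast around a pivot - one building block of a
    film-style look (not any specific plugin's actual formula)."""
    if x <= 0.0:
        return 0.0
    return pivot * (x / pivot) ** contrast

def grade_pixel(r, g, b):
    # Apply the same curve per channel, then clamp to the 0-1 display range
    return tuple(min(1.0, pivot_contrast(c)) for c in (r, g, b))

print(grade_pixel(0.18, 0.18, 0.18))  # mid-grey sits at the pivot, unchanged
print(grade_pixel(0.05, 0.05, 0.05))  # shadows get pushed down
print(grade_pixel(0.60, 0.60, 0.60))  # brighter tones get pushed up
```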
  8. This is a fundamental split in the camera communities - those who like the look of cinema and those who like the look of video. Yes, the video that @mercer posted was low-contrast / desaturated / greenish. That's the look. It's also the look of a great many feature films and high budget TV shows. Notice the similar colour palette? This one even includes it 🙂 Some consider it THE look of cinema. You might be thinking that you're interested in a neutral look because you aren't making action / thriller / horror movies, and that's fair, so where is the look with the natural skin tones? To that, I ask: which look with natural skin tones are you talking about? But you meant a neutral look! Sorry. You must mean an image that has only been technically converted to a correct image... like these. But these don't look the same either ...and yet they are shot with neutral lighting by professionals and even have test charts in them to ensure that the image is correctly exposed and balanced etc - these are literally test images! If these images don't look the same then how the hell can any image be correct? This is what I'm saying. There is no neutral image. The lighting angle changes the look. The lighting ratio impacts the look. Time of day impacts the look. High key vs low key. Just kidding! Lenses impact the look. Filters etc etc. This is why I emphasise working in post. Take the above image for example. A little work in post, and voila - now you have a "more accurate" match! Or what if you over-exposed your camera drastically? No problem if you know what to do in post. (Source: https://cinematography.net/alexa-over/alexa-skin-over.html) This is why trying to get the look you want by only looking at the camera and ignoring the other aspects, when those aspects can completely override the differences between cameras, is misguided. Hell, with a simple transform you can turn one capable camera into another anyway: (credit: Miguel Santana ILM) - there's a rough sketch of the shape of that kind of transform after this post. I truly do understand the temptation of the camera body. It has the most buttons, it's the thing that everything connects to, it's the complicated thing that everyone talks about, it costs lots of money etc. They're also super cool, absolutely. But they're not the defining object when creating the look - even if you literally buy one with a lens attached and only shoot in a 709 profile, you're still shooting things that look different based on the lighting and composition and how you expose etc etc. But what if you just want to shoot what is there and have it look nice? Absolutely. This is literally what I do. I shoot travel with my GX85 and mostly a single lens, and it only shoots in rec709 profiles. However, because of all the above factors, the footage will vary from shot to shot. So in order to make it more uniform in the edit I learned to colour grade. Let alone the absolutely horrific lighting that is around... Take this image I've shared previously: Look at the sleeve of the jumper - it is a single colour - the lights are just very low-cost LEDs and although they looked white in person they are very different hues to the camera's eye. Notice how that yellow is bleeding into his hand near his wrist and also into the lower part of the lady's face? If you want good skin-tones - oh boy, you'd better hope that you get good lighting! Otherwise, colour grading is there to fix what your camera did, rather than ruin it. Getting a neutral 709 video-style look with lighting like this would require a huge amount of work in post.
That's why these "which camera to buy to get good skin tones" questions always have a silent assumption in front of them that says "assuming the world doesn't exist, or if it does exist then assume it's perfect", which obviously isn't a very useful assumption.
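A quick aside on the camera-matching transform mentioned above: matches like that usually boil down to something of the shape below - a 3x3 matrix (normally plus a tone curve) applied to roughly linear RGB. The matrix values here are made-up placeholders just to show the shape of the operation, not Miguel Santana's or anyone else's actual transform.

```python
# Shape of a camera-to-camera transform: a 3x3 matrix applied to (roughly)
# linear RGB. The values below are invented placeholders, NOT a published
# camera match - real ones are derived from charts or spectral data and
# usually ship with a tone curve as well.

CAMERA_A_TO_B = [
    [0.95, 0.04, 0.01],
    [0.02, 0.96, 0.02],
    [0.01, 0.05, 0.94],
]  # each row sums to 1.0 so neutral grey stays neutral

def apply_matrix(rgb, m=CAMERA_A_TO_B):
    r, g, b = rgb
    return (
        m[0][0] * r + m[0][1] * g + m[0][2] * b,
        m[1][0] * r + m[1][1] * g + m[1][2] * b,
        m[2][0] * r + m[2][1] * g + m[2][2] * b,
    )

print(apply_matrix((0.5, 0.3, 0.2)))  # camera A's colour nudged toward camera B's response
```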
  9. No-one is going to hack the BM hardware, they will have put a huge amount of effort into the security because they essentially use Resolve to drive hardware sales, so it's critical to their business model. In terms of buying a panel, here's my advice. There are two main benefits you would get from buying a panel. The first is speed. If you have to grade a 90 minute feature film in 5 days, you will have over 2000 shots to grade and be approaching needing to grade 1 shot or more per minute. If grading a shot involves doing 6 things, then that's 12000+ actions - PER WEEK! With these kinds of numbers, if you can save 1s per action, that's 3.3 hours per week saved (the arithmetic is sketched after this post). A panel pays for itself in this type of scenario. The second reason is pushing things against each other. For example, if you want a bit of teal/orange with the LGG wheels, you will push the Lift wheel towards blue and at the same time you push the Gamma wheel towards orange. You will be balancing one against the other, and you are adjusting how much of each you push. This is almost impossible to do if you can't adjust both wheels at the same time, so it's extremely tedious with a mouse for example. You might also push Gamma against Gain for a highlight rolloff. Saturation against Colour Boost for saturation compression, or skintones, etc. Lots of things where you push one thing against another. Only official BM controllers can do these kinds of simultaneous adjustments. I suggest you take a careful look at what you're doing when you grade a project and look to see if the panel will actually help you do it. I mean look in detail, confirming which controls in Resolve you use and confirming they're on the panel. Resolve has 1000 controls and only a very few are on the cheaper panels. It's also worth checking how the panel works with things like the HDR palette. The HDR palette has a bunch of wheels but the controller does not. IIRC the controller moves the three wheels that are visible on the UI, but if you want to adjust three that aren't next to each other then I think you have to use the UI to navigate back and forth, which isn't that smooth a workflow, and you can't push two against each other unless you can control them at the same time, etc. For example, if you are going to use the Film Look plugin, which has exposure and WB etc, to grade each shot, then you will need a panel that allows control of OFX plugins, which I think is the Mini or Advanced panels, but they're NOT cheap! The other reason why you might want a panel is to teach yourself the old ways of grading, like LGG or Offset/Contrast/Pivot, and in that case the limitation makes sense. This is why I bought mine, but I now find those controls to be too limiting and, to a certain extent, they're kind of legacy now, although still popular because lots of colourists are old school and learned on those tools. Anyway, the panels are very good for what they do, but they don't do that much. In today's world where software is super flexible and UIs are great, when using a panel you will be quickly reminded that BM is a hardware company lol.
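To make the panel time-saving concrete, here are the back-of-envelope sums from above written out. The 2000 shots, 6 actions per shot and 1 second saved per action are the assumptions already stated in the post, nothing more.

```python
# Back-of-envelope numbers from the post: a 90-minute feature graded in a week.
shots_per_week = 2000          # "over 2000 shots to grade"
actions_per_shot = 6           # "doing 6 things" per shot
seconds_saved_per_action = 1   # the assumed speed-up from using a panel

actions_per_week = shots_per_week * actions_per_shot
hours_saved = actions_per_week * seconds_saved_per_action / 3600

print(actions_per_week)        # 12000 actions per week
print(round(hours_saved, 1))   # ~3.3 hours saved per week
```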
  10. I've got mixed feelings about the new panel. On the plus side, it's got more controls and buttons than the previous one, and includes functions like adding new nodes etc, which you couldn't do with the previous one. On the negative side, it's still going to be a very limited product, the way that all BM hardware tools seem to be. If you look at the forums, there are simple requests with hundreds of people showing support that BM has been ignoring version after version of Resolve and, based on their track record, just won't ever implement. No one knows how they choose what to implement and what they don't, but it sure isn't from the forums. I bought a panel and taught myself to grade using it, and it is very intuitive, but it's very very limited in terms of what you can actually do with it - it's really a tool for people who have to do very basic grading on thousands of shots, week after week. If you want to do anything other than the controls on the panel then it's no good. Lots of colourists have a tablet or KB/mouse as the first things on the desk. The fact that the panel is small is a great feature, but in terms of it being a step towards what I'd want as a small control surface for mobile grading, it's a small step towards a far-off destination.
  11. Doh! I guess it could be worse - you might have just bought FilmConvert or Dehancer or one of those things - they're MUCH more expensive than that! In terms of subtractive saturation, there are tonnes of ways to do it in Resolve without buying anything (one rough sketch of the idea is after this post), but you have to do a bit of research and learn a little bit about colour grading. This is why I keep banging on about it to people.
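On the subtractive saturation point, here's one very rough sketch of the idea, purely as an illustration and not any specific Resolve tool or plugin: instead of scaling saturation around luma (which brightens saturated colours), pivot it around the brightest channel, so pushing saturation pulls the other channels down and the colour gets slightly darker - closer to how dye density behaves.

```python
# One simple way to get a subtractive "feel": scale saturation around the
# brightest channel instead of around luma. The max channel stays put and the
# other channels are pulled down, so more saturation = slightly darker, which
# is closer to dye-density behaviour than plain additive saturation.
# Illustration only - not a specific Resolve tool or anyone's plugin.

def subtractive_style_saturation(r, g, b, amount=1.3):
    pivot = max(r, g, b)
    def adjust(c):
        return max(0.0, min(1.0, pivot + amount * (c - pivot)))
    return adjust(r), adjust(g), adjust(b)

def additive_saturation(r, g, b, amount=1.3):
    # Conventional approach for comparison: pivot around Rec.709 luma
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    def adjust(c):
        return max(0.0, min(1.0, y + amount * (c - y)))
    return adjust(r), adjust(g), adjust(b)

skin = (0.60, 0.45, 0.35)
print(subtractive_style_saturation(*skin))  # red stays at 0.60, green/blue drop
print(additive_saturation(*skin))           # red rises above 0.60 instead
```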
  12. Yeah, the Film Look Creator and ColorSlice look pretty good, but a few things that caught my eye were the AI-based Dialogue Separator, which gives separate volume controls for the voice, the background, and the ambience of the voice (ie reverb) so you can remove room sound as well as the background, and the Music Remixer, which gives separate volume controls for voice, drums, bass, guitar, and "other". The Music Remixer will allow much easier and more powerful editing of music to fit your edit, rather than the other way around. I frequently chop up a song and cross-fade between clips to extend the same section or glue the ending onto the middle to end it sooner etc. TBH my main challenge is working with skin tones and Resolve doesn't have the most intuitive feature-set for what I want, so I'm in the awkward space of contemplating writing my own DCTL scripts or trying to massage its existing features into use for what I want to do. The Film Look Creator and ColorSlice look good but they seem to lack a couple of really basic (and quite obvious) features - neither has the ability to compress skintone hues, which is a major feature of film stocks (a rough sketch of what I mean by that is after this post), and the Film Look Creator also doesn't have a slider directly for desaturating the highlights, which is another odd thing because that's a standard thing that film does too. Still, better than nothing for sure! Also, IIRC they tend to focus on new features in the early betas, and then switch gears to optimising performance as they move to a stable release, so the fact it's a bit slow now is to be expected.
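For anyone wondering what "compressing skintone hues" means in practice, here's a rough Python sketch of the idea: hues falling inside a window around a chosen skin hue get pulled toward the centre, so skin ends up more consistent. The hue centre, window and strength below are invented for illustration; a real version would be tuned against scopes and would more likely live in a DCTL or the HSV / Hue vs Hue tools.

```python
# Rough sketch of skintone hue compression: hues inside a window around a
# chosen "skin" hue get pulled toward the centre. The centre, window width
# and strength are invented for illustration, not calibrated values.

import colorsys

SKIN_HUE = 30 / 360      # roughly orange, expressed as a 0-1 hue
WINDOW = 25 / 360        # only affect hues within +/- 25 degrees
STRENGTH = 0.5           # 0 = no change, 1 = fully pinned to the centre

def compress_skin_hue(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Shortest signed distance from the skin hue, handling the 0/1 wrap-around
    d = (h - SKIN_HUE + 0.5) % 1.0 - 0.5
    if abs(d) < WINDOW:
        h = (SKIN_HUE + d * (1.0 - STRENGTH)) % 1.0
    return colorsys.hsv_to_rgb(h, s, v)

print(compress_skin_hue(0.70, 0.50, 0.40))  # reddish skin nudged toward orange
print(compress_skin_hue(0.20, 0.40, 0.80))  # a blue is outside the window, untouched
```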
  13. That sounds great! Having two people instead of one would be an incredible advantage and time saver. I've tried to film myself for camera tests enough to know that it's almost impossible to frame and focus and expose correctly when you're the one in the frame!! Also, what is it about Resolve 19 that you're excited about?
  14. Yeah, if you have to walk backwards and it has to be professional-level results then that's absolutely a time when you need to pull out all the stops. I haven't really seriously tested the EIS on the GH5, but I do remember playing with it and it was more stable than the IBIS alone, which makes sense. If you can get gimbal-like or gimbal-near performance then it sounds like that will be a real advantage for those key moments. I bought one of those phone gimbals and I think I used it about 3 times ever. It was such a PITA to have to run the app to connect to the gimbal, and the stupid thing lost the horizon gradually as you were panning. When you need to shoot fast to capture things happening in real-time that faffing is the last thing you want!
  15. What situations do you need that level of stabilisation for? In terms of stabilisation, MILCs really have a long way to go (but a lot of potential!) compared to the crazy level of stabilisation that GoPro etc showed was possible. Obviously there are many factors involved, but there's no reason we couldn't get very good stabilisation from MILCs. One thing I do see, however, is that it's possible to have too much stabilisation if the camera is moving in 3D space, because if you stabilise too hard then you get that gimbal effect where the camera is locked onto a direction but is floating around in space like a drone trying to hover. If the stabilisation isn't quite as good and leaves a little shake in the frame then the floating blends in with the shaking and it just looks like hand-holding and doesn't look so odd.
  16. What software are you grading in? and what's your pipeline? I'm in Resolve, so tend to use Colour Space Transforms, which gives a whole other way to look at things because everything is on the table... the ARRI LUTs, Print Film Emulations, etc etc. Lots of discussions about equipment occur here, but it's basically impossible to discuss cameras without doing it in the context that they're for a particular thing. Otherwise you could be talking about how the GH1 is better than the Z8 because it makes a better paperweight, or how an AE-1 is the best camera ever because when you mount it on a pole in your cornfield the sun glints off the mirror and keeps the crows away. Without context there is no basis to discuss anything.
  17. I'm not familiar with the GH5v2 but Panasonic was (at that time) updating cameras based on user feedback, and the things you describe were certainly what the community was asking for. I definitely agree that one of the main challenges is taking a clip that was shot in LOG and has 10-14 stops of DR in it, and somehow stuffing that into Rec709, which has just over 5 stops of DR. This obviously manifests in having to crush or severely compress various areas of the luminance range, but it also means that the source material can have colours that are dramatically more saturated than Rec709 can contain, and you'll need to work out how to contain those too (a toy sketch of the highlight-squeezing side of this is after this post). Once you have enough DR to shoot the scenes you need to shoot, having more is actually a liability rather than a feature. I co-produced a 5-min short with my sister a long time ago, and we estimated that all up it had 10,000 person-hours in it. But enough of this blasphemous film-making talk - we should go back to talking about camera colour profiles like film-making doesn't exist!
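To make the "stuffing 10-14 stops into Rec709" point concrete, here's a toy sketch of the highlight side of that squeeze: values below a knee pass through untouched, and everything above is progressively compressed toward display white. The knee position and the shoulder curve are invented for illustration; they're not any camera's or Resolve's actual tone mapping.

```python
# Toy illustration of squeezing a wide-DR scene signal into a display range:
# linear below a knee, then a compressive shoulder above it. The knee and
# shoulder here are invented numbers, just to show the shape of the problem.

import math

KNEE = 0.7        # display level where compression starts
MAX_OUT = 1.0     # display white

def tone_map(x):
    """x is scene-referred linear light, where 1.0 = nominal scene white."""
    if x <= KNEE:
        return x
    # Exponential shoulder: approaches MAX_OUT but never hard-clips
    return MAX_OUT - (MAX_OUT - KNEE) * math.exp(-(x - KNEE) / (MAX_OUT - KNEE))

for stops_over in range(0, 7):
    scene = 1.0 * (2 ** stops_over)    # 0 to 6 stops above nominal white
    print(stops_over, round(tone_map(scene), 3))   # everything lands just under 1.0
```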
  18. I highly recommend going to the source if you're interested in going deeper, here are a couple of interviews with Jill on Joker that I found to be fascinating and thought-provoking: ...and this interview with Jill on John Wick and other films where they even talk about specific shots etc: Lots of info out there if you search and go looking for it 🙂
  19. I'm just wondering why you were posting a random YT video. If I posted a new thread with every video I liked then the place would just look like my YT viewing history and not a forum where discussions can happen and people can share knowledge etc. I don't know what experiences you had in high school, but I've been an adult a long time and I've become fond of talking to people. I really don't know what you were trying to say? What point were you trying to make? Maybe if you actually typed something then there might be some communication....? There are all sorts of interesting things in that video, so the discussion could be a good one. Let's all leave high school behind and try and discuss things like adults 🙂
  20. Here's my recommendation for SOOC shooting - Sony AX100. As Dave says "Sony AX100 looks better than your camera". Just look at the nice contrast, saturation, and above all... skintones! and in mixed lighting no less! Good luck getting that with a "better" camera - they all have far too much DR to give you a punchy image from their 709 LUTs or profiles.
  21. Which is it? Minimal processing / grading? or SOOC? They're VERY VERY different! Just for argument's sake, let's define "minimal processing/grading" of an image as being limited to: (1) 5 minutes of adjustments applied to the whole project, (2) 30s of adjustments per clip, and (3) only using basic operations that can be done in PP/FCPX/Resolve using the standard tools. Under this definition, if you were to make a 5 minute edit with an average shot length of 3 seconds, that's less than an hour of colour grading for the whole project (the sums are after this post). The differences that that effort can make will completely overwhelm any minor differences that different cameras have. If you were to say that the best SOOC colour was 5/10 and the average was 3.5/10, then the graded images you can do in that time with those rules will easily be 9/10. You might think that an hour sounds like a long time, but it's nothing compared to how long it will take you to edit something anyway. Casey Neistat did his daily vlogs, which were usually between 5-10+ minutes each, and took 5-9 HOURS to edit. This might sound like a lot, but he was an experienced editor even before he did his 800+ daily vlogs, and he also mostly knew what the film was about etc, so he wasn't filming without a plan. I've heard other YT film-makers (where the result is a film and not a vlog or whatever) say that they spent 30 HOURS just colour grading their 20 minute film! The other thing that you might not be considering is that shooting for great looking images SOOC will require you either to accept boring, flat and lifeless looking images, or to crank up the saturation and contrast in-camera (as someone mentioned earlier in the thread), but this requires you to shoot really really carefully to ensure that all images are shot with exactly the right exposure and the right light levels and contrast levels. One thing I find in colour grading is that different images require very different levels of contrast etc to look coherent together - you might have one shot with something bright or dark in the background and then the next doesn't have it - so in order to look coherent you need to adjust the contrast between the two images slightly. Black and shadow levels are another thing that you want to try and get relatively consistent between the shots in the edit. Shooting meticulously like this will take a lot more time during shooting than just moving a few sliders in post - you have to set up each shot, ideally adjusting the level of contrast and saturation between each setup to get a coherent look, watching your levels and histograms etc. This would add 30s or more to each shot before you hit record. If you're recording anything except in a controlled environment where everyone is waiting for the camera to be ready then you'll miss all the good stuff. It's fun to talk about technical things, sure, but these things aren't independent of the rest of the process. Your question may as well begin with "Let's imagine we're in a parallel universe where instead of cameras being for making videos, they're really....."
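Written out, the time budget from the definition above looks like this. The numbers are the ones already stated in the post.

```python
# The time budget from the "minimal grading" definition above.
edit_length_s = 5 * 60        # a 5-minute edit
avg_shot_length_s = 3         # average shot length of 3 seconds
per_clip_s = 30               # 30s of adjustments per clip
whole_project_min = 5         # 5 minutes of project-wide adjustments

shots = edit_length_s // avg_shot_length_s
total_min = shots * per_clip_s / 60 + whole_project_min

print(shots)       # 100 shots
print(total_min)   # 55.0 minutes, i.e. under an hour for the whole project
```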
  22. If you stare at them for hours and hours then you start to notice differences and they start to look normal. That's how all the YT "cinematic" content now looks nothing like cinema. I just watched Kill Bill 1 again, and yeah, it might as well be a different universe... When the people who can create any image they like with virtually unlimited budget create images like these - contrasty and punchy and not sharp in the slightest - then the people who are pixel peeping the 6K cameras aren't even playing the same game.
  23. Interesting comparisons and definitely a step up in quality. When used zoomed-out like this the image is quite good I think - it's very "action camera" but has nice clean colours etc. The DR wasn't bad either, I thought. Maybe they'll be able to apply whatever improvements they've made to the X4 to the 1" model and it will be better again! It was interesting looking at the stats for the Insta360 Pro 2, which uses "6 x MicroSD cards + 1 x Full SD card", so rather than have an expensive media setup they just went for lots in parallel. The bitrate per lens is "Up to 120Mbps", so in practice the aggregate is around 600-700Mbps (there would be some overlap between lenses, so not all of that bitrate would go straight into the final image) - the rough sums are after this post. Maybe a "Pro" model could have dual media slots, with 200-300Mbps being written to each card. That would be a good compromise. The fact that the image would be spread across multiple files shouldn't be too difficult to manage considering that these files need specific support to process the image anyway. You might be right about a renewed push to 360 VR and 3D 180 VR, and I have a vague recollection of camera models that could fold one camera around, so it could be a 360 VR or a 3D 180 VR camera depending on how you configured the lenses. Having two card slots with one per camera makes sense in that way too. The fact that the ONE RS was modular also indicates they might be open to such a thing.
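Here are the rough bitrate sums behind that. The per-lens figure is the quoted spec; the 15% overlap allowance and the 250Mbps-per-card figure are my own rough assumptions, just to show how the numbers fall out.

```python
# Rough bitrate sums for the Insta360 Pro 2 style setup discussed above.
lenses = 6
mbps_per_lens = 120                  # quoted spec: "Bitrate per lens: Up to 120Mbps"
overlap_fraction = 0.15              # assumed overlap between lenses (my guess)

aggregate = lenses * mbps_per_lens              # 720 Mbps written across the cards
useful = aggregate * (1 - overlap_fraction)     # ~612 Mbps contributing to the final image

# The hypothetical dual-slot "Pro" model floated above: 200-300 Mbps per card
per_card = 250                                  # mid-point of that range
dual_card_total = 2 * per_card                  # 500 Mbps across two ordinary cards

print(aggregate, round(useful), dual_card_total)
```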
  24. Yes @Emanuel, but look at how you're talking about them... these words from you explain it well: "they actually shine for what they are", "I usually divide between usable and unusable only for internal consumption aka no professional use at all", "these toys must be seen as complementary tools", "They don't replace anything", etc. But let's look at these comments more deeply. Theme 1: they're toys. If they had a serious codec then would they still be toys? I mean, in good light a small sensor camera is more than capable of creating professional images - lots of ENG cameras had VERY small sensors! Even if they released a PRO version with a CFExpress or SD UHS-III slot and could record 8K at 1,000Mbps... what would be left that would make it a toy and not a professional tool? Theme 2: they are just to add to the existing tools. Let's imagine you're recording out in the real world and don't know where to point the camera because you don't know what will happen - what is the alternative to one of these? Option 1: Insta360 Pro 2. You can't realistically ask someone to carry a 1.5kg / 3.3lbs camera on a stick while they're actually doing things. So that's out. Option 2: ask the person to carry a "normal" camera. You can't realistically ask someone to be a cinematographer and aim a camera at what they're doing while they're actually doing something - plus you can't record the world and also their reaction to it at the same time. So that's out. Option 3: use multiple camera operators to walk in front of the person. This completely ruins any spontaneity of someone actually doing something out in the world; your fly-on-the-wall reality content just turned into a film set. So that's out. What else????? There aren't any other options to record content in the way that you can when you get someone to carry a 360 camera in one hand or strap it to their backpack or to their bike/scooter/etc and then just tell them to ignore it. By getting someone to hold the camera on a stick at about eye level and a few feet in front of them, we can capture the situation the person is in, and also the person reacting to that situation, simultaneously. It's the perfect way to have a "fly-on-the-wall" perspective, only it's floating in between the subject and the world, all in the form of a sausage on a stick which few people pay much attention to. If we had an invisible camera that could fly it wouldn't do a much better job than these things do. I genuinely believe these things are just a better codec away from being the main camera to record in some situations. They are an entirely new product category, and yet they're turned into toys because of a stupid design decision that limits any decent use in order to make them more usable for morons.