Everything posted by kye

  1. Maybe. It would be a bit silly though - the brand recognition was part of the value of the company. BM is having a difficult time breaking into the cinema ecosystem despite having brand recognition for Resolve, so Nikon buying one of the "big three" brands actively used by Hollywood and then putting it in a drawer seems a bit counterproductive.
  2. Agreed! That's why I said "MILC cameras with serious video features (like the Z9) all the way down to some select "creator" cameras with fixed lenses (like the G7X or Osmo Pocket 3 etc)". 😁 Your examples make sense, but I'd suggest they don't go far enough into the lowest end, which is why I specifically mentioned fixed-lens models like the G7X and Osmo Pocket.
I think you might still have too "industry" a view on what is happening out there. On social media you can be a "professional" in the sense that you're making serious money, but this doesn't automatically mean you upgrade all your equipment to large/heavy/complicated/manual/technical bricks that have already been assimilated by the Borg. That's what happens on sets, but most social media happens out in the real world, not on a set. The cameras I see being used most in vlogging land are the iPhone, the G7X, and - the recent trend - retro cameras. Anything larger than a P&S is referred to as "my big camera".
Check out this vlogger who was so excited to buy a Canon Vixia Mini X... This isn't even the first time I've seen a vlogger talk about this camera recently, because it went viral after K-Pop artists were using them for vlogging. Why? It's barely the size of a battery! Also, you can film yourself with the wide angle while also having the screen pointing towards yourself, it doesn't look like you're secretly filming other people, the sound is good, etc etc.
All manufacturers except Apple are completely oblivious to the idea that there's a need for physically small cameras that create a professional-grade image. I mean, do you think I'd be shooting with the GX85 and its 8-bit 709 files if there was something that shot LOG in the same form factor?? Be serious! Half the reason I learned to colour grade so well was to overcome the weaknesses in the image from the small cameras.
  3. Hmm.. 10G ethernet isn't that fast I guess, but it's all relative. For example, if you're shooting the highest quality setting ("12K - 12,288 x 8040 Blackmagic RAW 3:1 - 1,194 MB/s") then you'll only be able to pull it off the camera in realtime, but that's not a situation most people are likely to be in. If you were shooting 8K (Blackmagic RAW 3:1 - 533 MB/s) you can copy it off at roughly double speed, and at 8K 12:1 or 4K 3:1 (~133 MB/s) you're copying at around 9x realtime. Of course, most productions are rolling for a lot less time than they're stopped between takes, so the DIT can keep pace with offloading files from the camera throughout the shoot without having to stop the production to swap out media. It's a pretty low-cost way to protect yourself against camera/media failures where you'd lose the contents of the media. It's like that old backup saying.. Two is One and One is None!
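The arithmetic above can be sketched out as follows, assuming 10GbE carries roughly 1,250 MB/s of payload in theory (an assumption - real-world throughput will come in somewhat lower):

```python
# Sketch of the offload maths above. ETHERNET_MBPS is the theoretical
# 10GbE payload rate in MB/s (an assumption; real throughput is lower).
ETHERNET_MBPS = 1250.0

# Data rates quoted in the post, in MB/s
data_rates = {
    "12K BRAW 3:1": 1194,
    "8K BRAW 3:1": 533,
    "8K 12:1 / 4K 3:1": 133,
}

for name, rate in data_rates.items():
    multiple = ETHERNET_MBPS / rate
    print(f"{name}: offload at ~{multiple:.1f}x realtime")
```

Which gives roughly 1x, 2.3x, and 9.4x realtime respectively - i.e. anything below the very top setting offloads comfortably faster than it records.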
  4. I think that @Danyyyel makes a good point about the reputation and brand recognition of RED, so I suspect they will keep that intact. However, a Nikon N-Line is completely possible. The growth of TikTok etc as video-centric platforms is absolutely real, but I don't think that the cameras RED currently offers are really where the action is. If you wanted to make a line of cameras designed for shooting for social media, I'd imagine this new N-Line would have:
- MILC cameras with serious video features (like the Z9) all the way down to some select "creator" cameras with fixed lenses (like the G7X or Osmo Pocket 3 etc)
- Log / RAW shooting - potentially with REDCODE RAW, because why the hell not?
That would mean Nikon would overhaul their own product lines to have better video (assisted by the techs and technology from RED), add a new N-Line with video-centric and ease-of-workflow features, and keep RED as the professional cinema line for working professionals. Currently the market seems to have two kinds of cameras: hybrids for amateurs and cine cameras for working pros. The gap is that the market also contains a huge number of "professional content creators" who are working professionals but have completely different needs to the amateurs and also to the video professionals, which is why we lament the poor / limited / crippled functionality of the hybrid cameras and also lament the lack of flexibility in the cinema camera product lines.
  5. As you were the one that asked for skin tones, was there a specific situation you were thinking of when you asked? I thought your question might be related to your adventures with the GX850 and shooting out in the real-world?
  6. Yeah, my impression was that lots of these things were aimed at high-end professional users. There was one bit where (if I understood him correctly) he said you can plug an ethernet cable into the camera and start editing the footage on the card while it's still in the camera. There was another part where (if I understood correctly) he said that Resolve can even access clips that are still being recorded, which would enable an editor to get started on a multi-camera show with 10 cameras all rolling for hours at a time. There is a whole world of situations outside the I-shoot-then-download-then-edit-then-colour-then-audio-then-export workflows.
  7. What a crazy presentation with so many new and big features, but what I took away from it is this: 1) Resolve 19 is awesome. 2) When it comes to cameras, Grant knows what's up:
  8. Apparently Soderbergh shot his latest film on the A9 III.. in this article he talks about a "Sony DSLR" but there are tweets etc elsewhere that say it was the A9 III. Maybe the global sensor was a deciding factor. Anyway, good to see that when moving away from cinema cameras the choice isn't completely ridiculous and something sensible was chosen. It's nice when film-makers don't just treat the world of consumer cameras as a novelty but actually as something that has genuine potential and capability. https://filmmakermagazine.com/124668-interview-steven-soderbergh-presence-sundance-2024/
  9. Firstly, it's awesome that you're actually shooting something! And the fact that you're on-screen is a level above that even, so in my books you're already successful 🙂 I'm not sure what you're shooting, and therefore what making it good / interesting really means in your context, but here's a few random thoughts:
- I normally screw things up the first (few/many) times I do something, so just chalk it up to a practice shoot and keep going.
- If you're able to get a technically competent capture then you can really change the look in post, so I'd suggest just covering the fundamentals.
- The success or failure of a film depends 97% on what is in the frame (with the remaining 3% being sound at 2% and image at 1%), so that's where your attention should be going.
- Is there a way you can separate the various tasks in your mind while you're shooting? For example, maybe you put in a full battery and an empty memory card, then completely forget the camera exists and just roll as you do 20 takes of the scene from that angle, focusing on your performance and the emotional aspects.
- If there are small errors in continuity or performance, there is always the option to just include them and make a more stylised final film. For example, if things cut funny or jump around a bit and you lean into it in the edit, the impression might be that the character isn't quite in control of all their faculties, or is drunk or on drugs, etc. Obviously I have no idea what the context is, so this might not fit your vision, but it might give you options where previously you didn't see a way forward or just couldn't get excited about the material.
- If you haven't storyboarded or done detailed planning, one thing you could do is pre-shoot the whole thing without lighting or performances, edit it together in the NLE, and treat that as a moving storyboard. This would have the advantage of letting you anticipate issues with camera angles, get a feel for the pacing, and even decide to cut an angle or part of the film and skip filming it altogether.
Best of luck and keep us updated - we are definitely interested in hearing how you get on!
  10. Interesting comments about the ASIC vs FPGA, and the trade-offs of not being able to add new features via updates. I guess everything comes with benefits and drawbacks. The sad thing is that the more people lower their expectations about stuff like this, the more the manufacturers will take advantage of it. In economics the phrase is "what the market will bear". Dictionary definition: https://idioms.thefreedictionary.com/charge+what+the+market+will+bare Quote from HBR: https://hbr.org/2012/09/how-to-find-out-what-customers-will-pay The HBR article goes on to talk about various strategies for setting pricing, but one element is common - it's about the customers' perceptions and their willingness to pay. The more we normalise products overheating, having crippled functionality, and endlessly increasing resolution instead of fixing the existing pain points in their product lines, the more they will do it.
  11. I suggest you start with the finished edit and work backwards. Your end goal is to create a certain type of content with a certain type of look. This will best be achieved using a certain type of shooting and a certain type of equipment that makes this easier and faster. Then look at options for lenses across the crop-factors, then choose your format/sensor-size, then the camera body.
  12. I guess to reply directly to your comments: yes, the DR testing algorithms seem to be quite gullible and "hackable", and I'd agree that Apple has likely done this specifically for headlines. None of the measurements in the charts really map directly to how usable I think the image would be in real projects. I haven't read the technical documents from Imatest, but if I was going to look into it further that would be a good starting point, so you'd know what is actually being measured.
  13. The more I read about DR, the more I realise how little I understand it. I mean, the idea is pretty simple - how much brighter is the brightest change it can detect compared to the darkest change it can detect - but that carries a lot of assumptions when you want to apply it to the real world. I have essentially given up on DR figures. Firstly, my camera choice has moved away from being based on image quality and towards the quality of the final edit I can make with it, but even if I was comparing the stats I'd be looking at latitude. Specifically, I'd be looking at how many stops under still look broadly usable, and I'd also be looking at the tail of the histogram and comparing the lowest stops:
- how many are separate from the noise floor (the noise in that stop doesn't touch the noise floor)
- how many are visible above the noise floor (the noise in that stop touches the noise floor)
- how many are visible within the noise floor (the noise from the stop is visible above the noise floor)
- how high up the noise floor is
In the real world you will be adding a shadow rolloff, so any noise will be pretty dramatically compressed. It really comes down to the overall level of the noise floor (which tells you how much quantisation the file has SOOC and how much you can push it around) and how much noise there is in the lowest regions, which gives a feel for what the shadows look like. You can always apply subtle NR to bring it under control, and if it's chroma noise then it doesn't matter much because you're likely going to desaturate the shadows and highlights anyway. The only time you will really see those lowest stops is if you're pulling up some dark objects in the image to be visible, but this is a rare case, and with more than 12 or 13 stops you're most likely still pushing the last couple of stops down into a shadow rolloff anyway, so it's just down to the tint of the overall image.
Think about the latitude tests and how most cameras are fine 3 stops down, and some are good further than that - how often are you going to be pulling something out of the shadows by three whole stops?? That's pretty radical! Most likely you're just grading so that a very contrasty scene can be balanced to fit the higher DR within the 709 output, but you'd be matching highlight and shadow rolloffs to match the shots to the grade on the other shots, so your last few stops would still be in the shadows and can be heavily processed if required.
  14. The entire answer is that it has a fan, despite being tiny. It doesn't need to act as a heatsink because it has a fan, and if the BMMCC can have a fan, then any small camera can have a fan. Do you think that adding a screen and EVF and a handle to the BMMCC would take it from shooting for 24 hours straight in 120F / 48C to overheating in air-conditioning in under an hour? Of course not, because.. it has a fan! It was also considerably cheaper than many/all of the cameras being discussed here. But it's not even just about having a fan.. even without one, there are cameras with much better thermal controls around. I routinely shoot in very hot conditions (35-40+C / 95-105+F) and have overheated my iPhone, but have never overheated my XC10, my GH5, my OG BMPCC, my BMMCC, or my GX85. All this "it's tiny so it overheats in air-conditioning" just sounds like Stockholm Syndrome to me.
  15. 11.2 stops is pretty close to the bottom of the list of DR specs that I keep. Of course, DR specs are a minor aspect of film-making, and not even really indicative of the actual usable dynamic range of the camera (which is better represented by latitude testing). For example, the iPhone 15 tests as having 13.4 stops of DR but the latitude test shows it only has 5 stops of latitude, whereas the a6700 has 8 stops of latitude yet only tests as 11.4 stops of DR. If you're going to do a deep dive into the specifications, you really need to understand what they mean when you're actually using the camera to make images. The reason you want high DR is so that you can use those extreme ends of the exposure range - if you can't use them then there's no point in having them, and a big number on a spec sheet is just that: a number on a piece of paper.
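To make the point concrete, here are those two examples laid out side by side (figures as quoted above - illustrative only, not a benchmark):

```python
# DR spec vs latitude, using the figures quoted in the post.
# Note the camera with the higher DR spec has *less* usable latitude,
# which is why the spec sheet alone doesn't tell you much.
cameras = {
    "iPhone 15": {"dr_spec_stops": 13.4, "latitude_stops": 5},
    "a6700": {"dr_spec_stops": 11.4, "latitude_stops": 8},
}

for name, c in cameras.items():
    print(f"{name}: {c['dr_spec_stops']} stops measured DR, "
          f"{c['latitude_stops']} stops of usable latitude")
```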
  16. I guess once it's released we'll see a new round of strange AF testing from the YT camera gaggle!
  17. The BMMCC that recorded for 24 hours straight in 120F / 48C was smaller than almost any other MILC. There are no excuses.
  18. New firmware update just announced: https://www.panasonic.com/global/consumer/lumix/firmware_update.html
  19. Shot a bunch of test shots, this time with skintones. I shot the usual shots, gradually under-exposing, but instead of changing the WB in-camera manually to create the off-WB shots, I bought a pack of flash gels and held them in front of the single LED light. Also, as @John Matthews enquired about skintones, I have made a first attempt at grading for the skintones rather than the chart. I looked at the chart of course, but considering that people are what the audience is looking at in the image, skintones are vastly more important, so I didn't try to get the greyscale on the test chart perfect, or correct the highlights or shadows perfectly, etc.
The first image is the reference. Those following are each one stop darker than the previous one. Then come the gels. The gels are so strong they're essentially a stress test - not how anyone would / should / will be shooting - but it's nice to see where the limits are.
The first set uses only the basic grading controls and no colour management: The results aren't perfect, but if you're shooting this horrifically then you don't deserve good images anyway! What is interesting is that some images fell straight into line and I felt like I was fighting with the controls on others, but this had no correlation with how large a correction was required. I suspect it's my inexperience - getting lucky on some shots and unlucky on others. Sometimes I'd create a new version, reset, have another go, and get significantly better (or worse) results than the previous attempt. No single technique or approach seemed reliable between images.
The next set had colour management involved and grading was done in a LOG space: I also felt this set was very hit and miss in grading. This next set used a different set of tools again, and I added the Kodak 2383 Print Film Emulation LUT at the end.
I did this because any serious colour grading work done in post is likely to be through a 'look' of some kind, and the heavier the look is, the more it obscures small differences in the source images being fed into it. The next set used a different set of tools, keeping the 2383 LUT in the grade: ...and the final set added a film-like grade on top of the Kodak 2383 LUT. I mean, no self-respecting colourist would use a PFE LUT as the only element of their final look... right?
All in all, I am pretty impressed with how much latitude is in the files. Although lots of the above results aren't the nicest images ever made, if you're shooting anywhere close to correctly then the artefacts after correction are going to be incredibly minor, and if you actually shot something 2 stops under, lit by a single candle, using a daylight WB, then you'd be pretty freaking happy with these results, because you would have been thinking your film was completely ruined. This is even more true if your shots were all shot badly but with the same wrong settings, as the corrections you apply to neutralise the images will be consistent between shots and so will have a common look.
I'm also quite surprised at how the skintones survive much greater abuse than the colour chart - lots of the images are missing one whole side of the colour wheel, and yet the skintones just look like you decided on a bleach-bypass look for your film and actually know what you're doing. The fact that half the colour chart is missing likely won't matter much, because the patches on the chart are very strongly saturated and real-life scenes mostly don't contain anything close to those colours.
I'm still working on this (and in fact am programming my own set of plugins), so there will be more to come, especially now that I have a good set of test images.
I think I should find a series of real-world shots that only require a small adjustment so I can demo the small changes required on real projects.
  20. To complement Gerald's tech-but-not-filmmaker review, here's a filmmaker-but-not-tech review. TL;DR: he's also not so impressed once you consider the price.
  21. Tony Northrup did a video, probably over a decade ago now, comparing ISO between cameras and found that in the same exact situation different cameras give different exposures at the same ISO, up to almost a stop. IIRC his test was meticulous using the same lens etc and he was shooting RAW, so there was no in-camera processing etc. His conclusion? It doesn't matter... because of all the reasons people have posted above.
  22. The pro cameras are pro because they have XLR inputs, ND filters, dual recording, SDI out, etc. It's like saying "I really like my Toyota hatchback. It beats all of my so-called super-cars and hyper-cars... if you don't care about acceleration, top speed, braking, cornering, cool factor, etc." 🙂
  23. (in "24p is outdated") Yeah. Considering those cameras both have quite high-end images, it's not surprising there wasn't a significant improvement. Perhaps focus on something other than the SOOC image? What films do you make?
  24. Also, and I cannot emphasise this enough.... The image SOOC is like looking at negative film SOOC. It's not fit for direct display, it's not meant for direct display, and no-one really cares about differences in the SOOC image unless they appear in the final image. Think of the post-production process as "developing" the image captured by the camera.
  25. Sure. But we've known for a long time that different cameras have differences even with the same settings. If one is brighter and the other is darker, but are basically identical if you adjust the settings to match the image, then does it really matter? Like, if ISO200 on one is the same as ISO250 on the other, how does this impact you making a better film?