Posts posted by kye

  1. 5 minutes ago, SRV1981 said:

    What does that have to do with the original questions? 

    You seem to care about matching cameras....

    49 minutes ago, SRV1981 said:

    Sure, even still there seems to be differences between cameras even when trying to match color 

     

  2. 4 minutes ago, SRV1981 said:

    If the goal is to match many cameras yes - I was commenting that even if you matched color to your best ability the images will still look different and some will prefer skin tones and the overall look differently.  

    If you're cutting between two cameras that were used to shoot the same scene but from different angles then you'd be surprised at how different they can be without the viewer noticing.

    This is because:

    • The lighting will be different from the different camera angles
    • The contents of the frame will be different, either subtly or significantly
    • The viewer might (and hold onto your hat here....) be watching the film and not comparing skintones

    Besides, I said to learn colour grading.  If you can't edit two cameras together with skin-tones that are similar enough not to bother viewers, then you haven't learned it enough yet.

  3. 1 hour ago, ac6000cw said:

    A film-making version of the 'No-one ever got fired for buying IBM' (computers) situation 🙂

    Absolutely.

    Of course, ARRI is like the old IBM where you got what you paid for, unlike the modern IBM, which charges premium prices and just delivers the lowest-cost service they can get away with! 

    1 hour ago, ac6000cw said:

    I do wonder if some of that is driven by fear of jobs disappearing (despite the availability of much lower cost tools - e.g. cameras and editing software - having hugely expanded the overall 'moving image' production market).

    I agree.  I've seen it time and time again in many different industries: the people who are already successful say that new people shouldn't be allowed in, that equipment should be expensive and hard to get, etc.

    In the case of the colourist I was referring to, I suspect they got sick of new people coming into the scene wanting to get great images and not wanting to get a job fetching coffee and then spend the next 30 years working their way up through the ranks to get to being a well-regarded colourist.

  4. 2 minutes ago, SRV1981 said:

    Sure, even still there seems to be differences between cameras even when trying to match color 

    The answer is either putting in the time to learn colour grading, or putting in the time to earn money to buy multiples of the same camera so the image is identical.

  5. 51 minutes ago, JulioD said:

    10Ge surely is faster than a CFe card over usb-c ?

    10G ethernet is only 10Gbit/s, which is 1.25GB/s, but CFExpress cards can exceed this.

    This test from Petapixel shows speeds up to 2.8GB/s, and most of the models tested exceeded 1.25GB/s.

    [Chart: average and peak transfer speeds of the tested CFExpress cards]

    If BM put dual CFExpress cards in the camera then in theory this could double that throughput too.

    The ethernet standard is designed for maximum throughput with cables up to 100 meters long, and the 10G standard for copper was announced in 2006, so it's hardly a new standard - and stuffing data through long cables is an entirely different challenge to transmitting it an inch or two!
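
    To put rough numbers on it, here's a quick back-of-the-envelope sketch using the figures above (nominal ceilings only - real-world transfers carry overhead and will be slower):

    ```python
    # Nominal throughput: 10G ethernet vs the fastest CFExpress card tested.
    ethernet_gbit_per_s = 10
    ethernet_gb_per_s = ethernet_gbit_per_s / 8        # 10 Gbit/s = 1.25 GB/s

    cfexpress_peak_gb_per_s = 2.8                      # top result in the Petapixel test

    print(f"10GbE ceiling:  {ethernet_gb_per_s:.2f} GB/s")
    print(f"CFExpress peak: {cfexpress_peak_gb_per_s:.2f} GB/s "
          f"({cfexpress_peak_gb_per_s / ethernet_gb_per_s:.1f}x the ethernet ceiling)")
    ```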

  6. 5 hours ago, SRV1981 said:

    Is it fair to say some cameras produce normative or more pleasing color to most but if using log, you can get similar color/image from most cameras equally?  

    1. if you wanted a personal camera with fast turn around what brands are you usually happy with color wise?

    2. when deciding for more professional or bigger projects, how do you decide what system/log system to get?

    seems canon and then Fuji reign when SOOC is discussed and it’s more nuanced for the latter. 

    In my experience, the internet has a very skewed view of which brands offer the best colour science.

    Millions of folks on the internet will tell you that Canon has the best colour, and recently Fuji is in the game with their film emulation presets, but I think this is just confirmation bias in action.

    All manufacturers have very high quality colour.

    Even Sony, whose colour used to be the most "accurate" and looked very unappealing, have turned it around and now have pretty nice colour.

    The other great myth is that great colour comes from the camera.  It doesn't.

    Great colour comes from production design, lighting, and colour grading.

    Here is a thread where I show that it's the work in post that makes the images pop.

    After reading your recent posts and threads about one aspect of cameras or another, I have some bad news for you...  you can't buy good images.

    Good images come from skill, not equipment.  Great images come from skill and large amounts of hard work.

  7. 11 hours ago, IronFilm said:

    Well I guess Sony is already doing that with Sony RX10, RX100 and even the RX0 have slog!!

    So yes, it would be good if Nikon does this too, make it very easy to mix and match together the full range of cameras in a professional workflow. 

     

    Yep.  I think it's a matter of friction across an entire lineup.  If you start with a fixed-lens camera and want to "upgrade", going to a MILC might seem a huge jump, but if your fixed-lens camera also has LOG then there's an "upgrade" right in front of you.  You switch to it, learn about LUTs and colour grading, and get used to having flexibility of the image in post.  Then, when you want to upgrade your home setup (where size doesn't matter), the fact that you're already shooting N-LOG would be a factor in keeping you in the Nikon ecosystem.

    6 hours ago, EduPortas said:

    Guess only time will tell, friend.

    I agree that Red has brand recognition, but only with a very specific subset of the imaging crowd.

    Nikon has A LOT more recognition from almost everybody, from the absolute noob to the hard-core pro.

    And, let's be honest, Nikon was already hitting home runs with their new lenses and video features with pro-photogs. Now they WILL go full-hog with the video guys. That's the new slice of the imaging pie.

    Integrate, fortify the brand (Nikon) and capitalize on a new growing market.

    Hence, my original snarky comment about Red's Dead with no sight of Redemption.

    Yes, we'll see what happens (or doesn't) in time.

    I guess my thinking was this:

    • RED is a huge amount more than just a patent, therefore
    • When they bought it, they were buying something valuable that they didn't have
    • RED has a bunch of knowledgeable people, a bunch of IP, and recognition and a track record in Hollywood

    If Nikon keep the RED brand active then they could do a "best-of" situation, where the RED engineers and technology get implemented across Nikon's existing product lines, and the RED brand gets the benefit of Nikon's extensive support network and R&D and manufacturing capabilities.  This would grow the RED brand in Hollywood and in the cinema camera market, which Nikon currently has zero penetration of, and would help the Nikon brand in its more premium products.

    If Nikon let the RED brand die, then the Nikon line of products can still get the benefits of the RED tech, but any new Nikon products that target the cinema market will essentially be untrusted / untested / unknown, and apart from "it's got REDRAW in it" they will be a completely new player in this market.  

    One thing I think that might not be well known is that a lot of folks in the "industry" have complete contempt and even hatred of the consumer brands and the entire DSLR revolution.  There's a very famous colourist who openly says that putting video into stills cameras was a mistake and they should take it out (yes, he maintains that the manufacturers should all REMOVE the video functions of all these consumer cameras!).

    There's a thing where at the first production meeting of a movie there's a moment when someone asks what they're shooting on, and if the answer is ARRI / Alexa then everyone in the room relaxes.  Yes, this means that if they say RED then people don't relax, but at least someone can say "X, Y, Z were all shot on RED".  If someone said "Nikon" at that moment, the reaction might be "the photo people????".  

    When you have industry people actively hating the fact that people are shooting music videos on GH5s, someone like Nikon is likely viewed as being from a parallel universe!

  8. 2 hours ago, John Matthews said:

    If I need to, I'd like to adjust just a little in post. I think the most important is to get the first shot right.

    Thinking about this more, I think there are three different approaches.

    The first is to shoot manually and get it perfect every time.  Not even the pros manage this, even with completely controlled sets.  Colourists say that they're always making small changes to WB on a shot-by-shot basis, even on big budget productions, so I only mention this approach to make sure we understand that we will always be dealing with small changes in post.

    The second is to shoot with a manual WB.
    This will mean that you're going to get errors in the WB, potentially quite noticeable ones, but they're likely to be mostly in the warm/cool Temp direction.

    The third is to shoot on auto-WB.
    I've found that, on my Panasonic cameras at least, the WB errors are pretty minor, and the WB is pretty close - even if the lighting is quite variable and I'm taking shots from different angles and in different locations etc.  
    This means that you'll be making only very small corrections, but they could be in the magenta/green Tint axis as well as the warm/cool Temp direction.  We're quite sensitive to Tint errors, so this means that adjusting these is a bit more fiddly, and can take some practice, but is perfectly possible.

    I know that when I shoot I am very likely to completely forget a manually set WB, and will end up shooting a whole evening at 6500K and it'll be so warm it'll look like I shot it through a jar of honey, so I shoot auto-WB and therefore inevitably have to make minor corrections in post but never have to make large ones.

    Going back to the minor curves that are part of the Look, and how we can't un-do them in post because we don't have a complete profile of that camera/look combination: shooting on auto-WB will mean that these get applied to the footage in a place that is likely only a very small distance from where they would have been if the shot had perfect WB.  

    Obviously this still depends on your camera, the profile, your colour management pipeline, the tools in your software, your skill in applying them, and the weather and position of the stars etc...  so this is also something that you would be best testing for yourself too.

  9. 1 hour ago, John Matthews said:

    Yes. The question I've asked myself many times has been: "If I make a mistake in WB, is it better to error on warm side or the cool side when considering skin tones in 8 bit?" From your results, I think it's more on the warm side. Do you concur?

    When I shoot, I'd really prefer to just choose ONE WB for the entire time. If I need to, I'd like to adjust just a little in post. I think the most important is to get the first shot right.

    Good question.

    I think the fundamental challenge of making corrections in post is having the tool operate in a colour space that matches the footage as closely as possible.  

    For example, if your footage is in Linear, and you have a node in Linear, and you adjust the Gain wheel (which literally applies gain by doing a simple multiplication), then they match exactly and the result will be a perfect exposure or WB change, just as if it had been done in camera.  If you get your colour management pipeline correct then you can get this practically perfect adjustment for LOG footage too.
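
    As a minimal sketch of why that works (assuming footage decoded to scene-linear RGB in a float array - the function name and gain values here are just illustrative):

    ```python
    import numpy as np

    def adjust_linear(linear_rgb, exposure_stops=0.0, wb_gains=(1.0, 1.0, 1.0)):
        """In linear light, exposure and WB changes are plain multiplications,
        which is why they behave just like changes made in camera."""
        exposure_gain = 2.0 ** exposure_stops          # +1 stop doubles the light
        return linear_rgb * exposure_gain * np.asarray(wb_gains, dtype=float)

    # Illustrative use: half a stop brighter, slightly warmer (more R, less B).
    frame = np.random.rand(4, 4, 3)                    # stand-in for a decoded frame
    corrected = adjust_linear(frame, exposure_stops=0.5, wb_gains=(1.1, 1.0, 0.95))
    ```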

    The challenge comes when the camera records in 709.  This is mostly because cameras don't just do a CST from Linear to 709 - they apply all sorts of small "make it look lovely" tweaks.  When we record at the wrong exposure or WB, these tweaks get applied in the wrong places.  For example, the profile might compress the skintones, and do so by expanding the reds and yellows on either side.  If you shoot a clip where the skintones are too yellow then your skintones might get expanded rather than compressed.  No CST will un-do all these small tweaks, so you're left with an image that's curved in all the wrong places rather than all the right ones.

    So, what happens in practice is it comes down to the individual profile you choose (which will have its own unique set of tiny curves that make that look) and your own ability to manipulate it using the right combination of tools to get the most pleasing result.  My results vary mostly based on the luck that I had when correcting each individual test image - your results will likely suffer the same variance unless you're a far better colourist than I am.

    I'd suggest you do your own tests.  Either find a spot in the shade on a sunny day or, even better, do it on a cloudy day.  Do a manual WB against a grey card (or a piece of white copy paper if you don't have a grey card), then just shoot a clip of yourself (or a volunteer model if you can get one 🙂 ).  Then shoot a range of test clips setting the Colour Temp manually.  Then pull them all into post and see which tools work best for you, and which give you the more pleasing looks.

    One thing I did notice was that I had trouble getting the blacks and shadows to be right when the skintones were dialled in, with them tending to be the opposite of the original tint of the image (i.e. if the image was warm then the correction ended up with cooler shadows), so with everything else being equal that might be a reason to go warmer and get a bit of colour separation in the final images.

  10. 5 hours ago, EduPortas said:

    Red's dead, bro. I highly doubt they will ever release ANY new camera.

    Japanese companies don't operate that way.

    Maybe not now, but soon enough they will start to slim the American arm until everything is 100% integrated in the new and profitable Nikon N-Line.

    Maybe.

    It would be a bit silly though - the brand recognition was part of the value of the company.  BM is having a difficult time breaking into the cinema ecosystem despite having brand recognition for Resolve, so Nikon buying one of the "big three" brands that was being actively used by Hollywood, and then putting it in a drawer, seems a bit counterproductive.

  11. 5 hours ago, IronFilm said:

    I'd suggest Nikon needs something a heck of a lot cheaper than the Z9 to capture the TikTok / IG / YT crowd. 

    Agreed!

    That's why I said "MILC cameras with serious video features (like the Z9) all the way down to some select "creator" cameras with fixed lenses (like the G7X or Osmo Pocket 3 etc)".

    😁

    5 hours ago, IronFilm said:

    Sony ZV-E10 => FX30 => FX6 => FX9 => VENICE 

    Canon T2i => 60D => 5Dmk3 => C100 => C300

    Canon R100 => R10 => R6 => R5 C => C70

    Your examples make sense, but I'd suggest they don't go far enough into the lowest end, which is why I specifically mentioned fixed-lens models like the G7X and Osmo Pocket.  

    I think you might still have too "industry" a view on what is happening out there.  On social media, you can be a "professional" in the sense that you're making serious money, but this doesn't automatically mean you upgrade all your equipment to large/heavy/complicated/manual/technical bricks that have already been assimilated by the Borg.  This is what happens on sets, but most social media happens out in the real world, not on a set.

    The cameras I see out there being used most in vlogging land are the iPhone, the G7X, and the recent trend is for retro cameras.  Anything larger than a P&S is referred to as "my big camera".

    Check out this vlogger who was so excited to buy a Canon Vixia Mini X... 

     

    This isn't even the first time I've seen a vlogger talk about this camera recently, because it went viral after K-Pop artists were using it for vlogging.


    Why? It's barely the size of a battery!

    [Image: the Canon Vixia Mini X]

    Also, you can film yourself with the wide angle while also having the screen pointing towards yourself, and it doesn't look like you're secretly filming other people, the sound is good, etc etc.

    All manufacturers except Apple are completely oblivious to the idea that there's a need for physically small cameras that create a professional-grade image.  I mean, do you think I'd be shooting with the GX85 and its 8-bit 709 files if there was something that shot LOG in the same form factor??

    Be serious!  Half of the reason I learned to colour grade so well was to overcome the weaknesses in the image from the small cameras.

  12. 4 hours ago, eatstoomuchjam said:

    Unless their offload medium can write crazy fast, they won't appreciate it a lot more than CF Express or similar.  I'd also say that if the media could be configured with redundancy, having 4T-6T of local storage in the camera might result in fewer productions choosing to bring on a DIT.  🙂 

    I think that the Resolve bit was working with files that were being uploaded in real time to their network attached storage device over IP 2110.  However, yes, I think he did say that it would be possible to fetch the files from the camera over 10gE without the need for a DIT.  Just keep in mind that offload over 10gE can use, at most/under ideal conditions, 1.25 gigabytes per second...  🙂 

    Hmm..  10G ethernet isn't that fast I guess, but it's all relative.

    For example, if you're shooting the highest quality setting "12K - 12,288 x 8040 Blackmagic RAW 3:1 - 1,194 MB/s" then you'll only be able to pull it off the camera in real-time, but that's not likely to be a situation that most people would be in.

    If you were shooting in 8K (Blackmagic RAW 3:1 - 533 MB/s) then you can copy it off at double speed, and at 8K 12:1 or 4K 3:1 (~133MB/s) you're copying at around 9x realtime.  Of course, most productions are going to be rolling for a lot less time than they're stopped in-between takes, so the DIT can keep pace with offloading the files from the camera throughout the shoot without having to stop the production to swap out media.  
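
    Here's that arithmetic as a quick sketch, using Blackmagic's published data rates and a 1,250 MB/s link ceiling (actual sustained throughput will be somewhat lower):

    ```python
    # Offload speed relative to realtime over a 10GbE link (ceiling ~1,250 MB/s).
    link_mb_per_s = 1250

    data_rates_mb_per_s = {
        "12K BRAW 3:1": 1194,
        "8K BRAW 3:1": 533,
        "8K BRAW 12:1 / 4K 3:1": 133,
    }

    for setting, rate in data_rates_mb_per_s.items():
        print(f"{setting}: offload at {link_mb_per_s / rate:.1f}x realtime")
    ```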

    It is a pretty low-cost way to protect yourself against camera/media failures where you'd lose the contents of the media.  

    It's like that old backup saying..  Two is One and One is NONE!

  13. 6 hours ago, EduPortas said:

    In about a year we'll see the first Nikon N-Line camera.

    I think that @Danyyyel makes a good point about the reputation and brand recognition of RED, so I suspect they will keep that intact.

    However, a Nikon N-Line is completely possible.  The growth of TikTok etc as video-centric platforms is absolutely real, but I don't think that the cameras RED currently offers are really where the action is.  If you want to make a line of cameras that are designed for shooting for social media, I'd imagine that this new N-Line would be:

    • MILC cameras with serious video features (like the Z9) all the way down to some select "creator" cameras with fixed lenses (like the G7X or Osmo Pocket 3 etc)
    • Log // RAW shooting - potentially with REDRAW because why the hell not?

    That would mean that Nikon would overhaul their own product lines to have better video (assisted by the techs and technology from RED), a new N-Line with video-centric and ease-of-workflow features, and keep RED as the professional cinema line for working professionals.

    Currently the market seems to have two kinds of cameras - hybrids for amateurs and cine cameras for working pros.  The gap is that the market also contains a huge number of "professional content creators" who are working professionals but have completely different needs to the amateurs and also the video professionals, which is why we lament the poor / limited / crippled functionality of the hybrid cameras and also lament the lack of flexibility in the cinema camera product lines.

  14. 13 hours ago, John Matthews said:

    Thank you so much for this hard work. I'm going to look further into this during my holiday.

    Yes, some of the images with wrong WB and underexposed would be expected to be trash, but it's nice to know there are some editing techniques that save it a little. In all honesty, I'd probably just go for monochrome or tint if this were to happen to me.

    As you were the one who asked about skin tones, was there a specific situation you were thinking of when you asked?

    I thought your question might be related to your adventures with the GX850 and shooting out in the real world?

  15. 1 hour ago, JulioD said:

    Well I’m assuming a DIT on a three camera shoot will appreciate it.  
     

    pretty sure they are doing a CFe dual card magazine too. 

    Yeah, my impression was that lots of these things were aimed at high-end professional users.

    There was one bit where (if I understood him correctly) he said you can plug an ethernet cable into the camera and start editing the footage on the card while it's still in the camera.  There was another part where (if I understood correctly) he said that Resolve can even access clips that are still being recorded, which would enable an editor to get started on a multi-camera show with 10 cameras all rolling for hours at a time.

    There is a whole world of situations outside the I-shoot-then-download-then-edit-then-colour-then-audio-then-export workflows.

  16. Apparently Soderbergh shot his latest film on the A9 III..  in this article he talks about a "Sony DSLR" but there are tweets etc elsewhere that say it was the A9 III.  Maybe the global shutter sensor was a deciding factor.  

    Anyway, good to see that when moving away from cinema cameras the choice isn't completely ridiculous and something sensible was chosen.  It's nice when film-makers don't just treat the world of consumer cameras as a novelty but actually as something that has genuine potential and capability.

    https://filmmakermagazine.com/124668-interview-steven-soderbergh-presence-sundance-2024/

  17. 14 hours ago, Tim Sewell said:

    I was so focussed on getting everything 'right' that I didn't pay enough attention to getting it good or interesting.

    So I'm going to reshoot and one of the things I'm going to do is lean in to the limitations I have - first among which is that I'm doing this almost completely alone. I am both crew and talent! The biggest limitation caused by that is that camera movement while I'm on screen is not going to happen, which means that to create interest I need to make my angles and composition more interesting.

    Firstly, it's awesome that you're actually shooting something!  And the fact that you're on-screen is a level above even that, so in my books you're already successful 🙂 

    I'm not sure what you're shooting, and therefore what making it good / interesting really means in your context, but here are a few random thoughts..

    • I normally screw things up the first (few/many) times I do something, so just chalk it up to a practice shoot and keep going
    • If you're able to get a technically competent capture then you can really change the look in post, so I'd suggest just covering the fundamentals
    • The success or failure of a film depends 97% on what is in the frame (with the remaining 2% being sound, and 1% image) so that's where your attention should be going
    • Is there a way you can separate the various tasks in your mind while you're shooting?  For example, maybe you put in a full battery, an empty memory card, and then completely forget the camera exists and just roll as you do 20 takes of the scene from that angle, focusing on your performance and emotional aspects while you're doing this?
    • If there are small errors in continuity or performance there is always the option to just include them and make it a more stylised final film.  For example, if things cut funny or jump around a bit etc, and you lean into it in the edit, then the impression might be that the character might not quite be in control of all their faculties, or might be drunk or on drugs, etc.  Obviously I have no idea what the context is, so this might not fit your vision, but it might give you options where previously you didn't see a way forward or just couldn't get excited about the material
    • If you haven't storyboarded or done detailed planning, one thing you could do is to pre-shoot the whole thing without lighting or performances etc, and then just edit it together in the NLE, and treat that as a moving storyboard.  This would have the advantage of being able to anticipate any issues with any camera angles, and also to get a feel for the pacing and even things like if you decide to cut an angle or part of the film then you can skip filming it altogether.

    Best of luck and keep us updated - we are definitely interested in hearing how you get on!

  18. 8 hours ago, eatstoomuchjam said:

    Another factor for overheating can be the processor in the camera.  The Z Cam E2 series are known for running very well even in fairly extreme conditions (though other components such as memory cards, SSD's, monitors, etc overheat).  This is partly because of proper cooling/lack of IBIS, but also because a lot of image functions in the camera are done with ASIC vs the FPGA that some of the overheating cameras use.  It has the drawback that Z Cam can't implement some changes (at least not without releasing a whole new controller board), but has big advantages for cooling and power use (can run for hours on an NP-F550).

    I guess they added a cooling fan to the new F6 Pro - so maybe they changed processors for it or they decided to cool the memory card, etc.

    Interesting comments about the ASIC vs FPGA, and the trade-offs of not being able to add new features via updates.  I guess everything comes with benefits and drawbacks.

    4 hours ago, EduPortas said:

    You're totally right, friend. That makes all the diff in the world for such a small device.

    The sad thing is that the more that people lower their expectations about stuff like this, the more the manufacturers will take advantage of it.  In economics the phrase is "what the market will bear".

    Dictionary definition: https://idioms.thefreedictionary.com/charge+what+the+market+will+bare 

    Quote

    You should charge as much money for a product or service as customers are willing to pay.

    Quote from HBR: https://hbr.org/2012/09/how-to-find-out-what-customers-will-pay

    Quote

    The right answer to that question is a company should charge “what the market will bear” — in other words, the highest price that customers will pay.

    The HBR article goes on to talk about various strategies for setting pricing, but one element is common - it's about the customers' perceptions and their willingness to pay.  

    The more we normalise products that overheat, have crippled functionality, and endlessly increase resolution without fixing the existing pain points in the product lines, the more the manufacturers will do it.

  19. 9 hours ago, SRV1981 said:

    If you’re struggling to comprehend DR I’m hopeless! 😂 

     

    so it seems I’ll rely on some other metrics as I’ll be baking images in with LUT in camera and not pushing in post. That may open up options for me as to what body I can use. Portability and compact is a prime start for me. 

    I suggest you start with the finished edit and work backwards.  Your end goal is to create a certain type of content with a certain type of look.  This will best be achieved using a certain type of shooting and a certain type of equipment that makes this easier and faster.  Then look at options for lenses across the crop-factors, then choose your format/sensor-size, then the camera body.

  20. 53 minutes ago, sanveer said:

    There may be a small series of glitches with the dynamic range test chart. It cannot make out the difference between overbaked images and actual dynamic range very clearly. All smartphone images are way too processed. And the excessive noise reduction and over-sharpening seems to make the image very limited for post work. Apple has clearly figured out how to fake results on the test chart. Much like some smartphone companies having higher results on the SoC testing apps.

    The difference between total visible stops at SNR 1 and usable ones at SNR 2 seems to suggest good headroom, especially when the codec is high bitrate and with good bit depth (at least 10-bit 4:2:2?). When SNR 1 and SNR 2 are similar it's difficult to see whether the image is way too baked in to recover any more than the visible image shows. 

    I guess to reply directly to your comments: yes, the DR testing algorithms seem to be quite gullible and "hackable", which I'd agree Apple has likely exploited specifically for headlines.

    None of the measurements in the charts really map directly to how usable I think the image would be in real projects, but I haven't read the technical documents by ImaTest - if I was going to look into it more, reading those would be a good first step so you'd know what is actually being measured. 

  21. 51 minutes ago, sanveer said:

    There may be a small series of glitches with the dynamic range test chart. It cannot make out the difference between overbaked images and actual dynamic range very clearly. All smartphone images are way too processed. And the excessive noise reduction and over-sharpening seems to make the image very limited for post work. Apple has clearly figured out how to fake results on the test chart. Much like some smartphone companies having higher results on the SoC testing apps.

    The difference between total visible stops at SNR 1 and usable ones at SNR 2 seems to suggest good headroom, especially when the codec is high bitrate and with good bit depth (at least 10-bit 4:2:2?). When SNR 1 and SNR 2 are similar it's difficult to see whether the image is way too baked in to recover any more than the visible image shows. 

    The more I read about DR, the more I realise how little I understand it.

    I mean, the idea is pretty simple - how much brighter is the brightest change it can detect compared to the darkest change it can detect - but that has a lot of assumptions in it when you want to apply it to the real world.

    I have essentially given up on DR figures.  Firstly, that's because my camera choice has moved away from being based on image quality and towards the quality of the final edit I can make with it, but even if I was comparing the stats I'd be looking at latitude.  

    Specifically, I'd be looking at how many stops under still look broadly usable, and I'd also be looking at the tail of the histogram and comparing the lowest stops:

    • how many are separate to the noise floor (where the noise in that stop doesn't touch the noise floor)
    • how many are visible above the noise floor (but the noise in that stop touches the noise floor)
    • how many are visible within the noise floor (the stop itself sits within the noise, but its signal is still visible)
    • how high up is the noise floor

    In the real world you will be adding a shadow rolloff, so any noise will be pretty dramatically compressed, and it really comes down to the overall level of the noise floor (which will tell you how much quantisation the file will have SOOC and how much you can push it around) and how much noise there is in the lowest regions, which will give you a feel for what the shadows look like.  You can always apply subtle NR to bring it under control, and if it's chroma noise then it doesn't matter much because you're likely going to desaturate the shadows and highlights anyway.

    The only time you will really see those lowest stops is if you're pulling up some dark objects in the image to be visible, but this is a rare case and with more than 12 or 13 stops you're most likely still pushing down the last couple of stops into a shadow rolloff anyway, so it's just down to the tint of the overall image.  Think about the latitude tests and how most cameras are fine 3-stops down, some are good further than that - how often are you going to be pulling something out of the shadows by three whole stops??  That's pretty radical!  Most likely you're just grading so that a very contrasty scene can be balanced so that the higher DR fits within the 709 output, but you'd be matching highlight and shadow rolloffs in order to match the shots to the grade on the other shots, so your last few stops would still be in the shadows and can be heavily processed if required.
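
    If you want to see how the SNR 1 / SNR 2 style figures fall out of a chart test, here's a toy sketch - the per-stop SNR readings are completely made up, just to show the counting:

    ```python
    # Toy chart-style DR counting: each entry is the measured SNR of one stop,
    # brightest to darkest. Values are invented purely for illustration.
    snr_per_stop = [180, 90, 45, 22, 11, 5.5, 2.8, 1.4, 0.7]

    visible_stops = sum(snr >= 1 for snr in snr_per_stop)   # the "SNR 1" figure
    usable_stops = sum(snr >= 2 for snr in snr_per_stop)    # the "SNR 2" figure

    print(f"Stops at SNR >= 1 (visible): {visible_stops}")  # 8
    print(f"Stops at SNR >= 2 (usable):  {usable_stops}")   # 7
    ```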

  22. 3 hours ago, EduPortas said:

    Hmmm. Not really comparable since the BMMCC has no LCD or EVF.

    It's a cube that functions as a heatsink and makes huge ergonomic sacrifices vs any decent MILC or DSLR.

    The entire answer is that it has a fan, despite being tiny.  It doesn't need to act as a heatsink because it has a fan, and if the BMMCC can have a fan, then any small camera can have a fan.

    Do you think that adding a screen and EVF and a handle to the BMMCC would take it from shooting for 24 hours straight in 120F / 48C to overheating in air-conditioning in under an hour?  Of course not, because..  it has a fan!

    It was also considerably cheaper than many/all of the cameras being discussed here.

    But it's not even just about having a fan..  even without one, there are cameras with much better thermal controls around.  
    I routinely shoot in very hot conditions (35-40+C / 95-105+F) and have overheated my iPhone, but have never overheated my XC10, my GH5, my OG BMPCC, my BMMCC, or my GX85.  All this "it's tiny so it overheats in air-conditioning" just sounds like Stockholm Syndrome to me.

  23. 2 hours ago, SRV1981 said:

    Sarcasm?

    11.2 stops is pretty close to the bottom of the list of DR specs that I keep.

    Of course, DR specs are a minor aspect of film-making, and not even really indicative of the actual usable dynamic range of the camera (which is better represented by the latitude testing).  

    For example, the iPhone 15 tests as having 13.4 stops of DR but the latitude test shows that it only has 5 stops of latitude, whereas the a6700 has 8 stops of latitude yet only tests as 11.4 stops of DR.

    If you're going to do a dive into the specifications, you really need to understand what they mean when you're actually using the camera for making images.  The reason you want high DR is so that you can use those extreme ends of the exposure range - if you can't use them then there's no point in having them and so a big number on a spec sheet is just a meaningless number on a piece of paper. 
