
Posts posted by kye

  1. 39 minutes ago, MrSMW said:

    Better this than being the old lady front and centre capturing everything with her iPad.

    Everyone is complaining about their EVF and LCD screens, but she's not!

  2. 57 minutes ago, Jedi Master said:

    It never occurred to me that some people purposely don’t want their films to look realistic. That’s a completely foreign concept to me.

    I also find this whole dreamlike concept puzzling. When I watch a movie, I imagine myself looking through a perfectly clear window, not some kind of dreamy view. 

    The whole idea of movie stars when I was growing up was that they were "larger than life".

    I think, once you start looking, you'll find that practically nothing about cinema is even remotely realistic / real-looking.

    Look at the visual design / colour grading for a start....







    I mean, these projects all had the budget, had highly skilled people, and had every opportunity to make things look lifelike, but none of these things look remotely like reality.

    Even the camera angles and compositions and focal lengths - none of these make me feel like I'm looking at reality or "I am there".

    Bottom line: studios want to make money, creative people want to make "art", and neither of these is better if things look like reality.  I'm in reality every moment of every day, so why would I want movies to look the same way?  It's called "escapism", not "teleport me to somewhere else that also looks like real-life...  ism".

  3. 4 minutes ago, zlfan said:

    no i am not him. but i follow him. he seems having a lot of radical ideas.

    He does.

    I applaud Markus for the work he does and the passion that he brings, trying to beat back the horde of Youtube-Bros who promote camera worship and the followers who segue this into the idea of camera specs above all else.

    But there's a progression that occurs:

    • At first, people see great work and the cool tools and assume that the tools make the great work - TOOLS ARE EVERYTHING
    • Then, people get some good tools and the work doesn't magically get better.  They are disillusioned - TOOLS DON'T MATTER
    • Then they develop their skills, hone their craft, and gradually understand that both matter, and that the picture is a nuanced one.  TOOLS DON'T MATTER (BUT STILL DO)

    This is the same for specs - they are everything, they are nothing, then they matter a bit but aren't everything.

    By the time you get to the third phase, you start to see a few things:

    • Some things matter a LOT, but only in some situations, and don't matter at all in others
    • Some things matter a bit, in most situations
    • Some things matter a lot to some people, but less to others, depending on their taste

    Film-making is an enormously subtle art.  Try replicating a particular look from a specific film/show/scene and you'll find that getting the major things right will get you part of the way, but to close the gap you will need to work on dozens of things, hundreds maybe.  

    3 minutes ago, zlfan said:

    you see, now a lot of practical reasons show up, not only about aesthetics any more. 

    The purpose of any finished work is to communicate something to the audience, and for this the aesthetic always matters.  Even if the content is purely informational: if you shoot a college-looking bro delivering the lines sitting on a couch drinking a brew, filmed with a phone by someone lying on the floor, it's not going to seem like reliable or trustworthy information - unless it's about how many beers were had at the party last night (and even then...).  The exact same words delivered by someone in a suit, sitting at a desk, shot with a longer lens on a tripod with nice lighting, will usually elicit a very different response (sometimes one of trust, sometimes a reaction of mistrust, but different all the same).  A person wearing glasses and a lab coat standing in front of science-ish stuff in a lab is different again.

    Humans are emotional animals, and we feel first and think second.  There isn't any form of video content that isn't impacted by the aesthetic choices made in the production of the video.

    Some might be so small that they don't seem relevant, but they'll still be there in the mix.

Even if budget were no object, sadly it's the GX85 for me.  That's a very compromised option in almost every way, but it's the right balance of trade-offs.

    In a way I'm lucky that my best camera isn't ridiculously expensive, but also, it means there is nothing I can do to work towards a better setup.

  5. 47 minutes ago, IronFilm said:

    I think I'm floating dangerously in the middle between those two groups, straddling both worlds. 

    ...beard flowing down majestically into the unbridgeable abyss...

  6. 43 minutes ago, IronFilm said:

    Can't see why a rental house would be interested, when you consider the up front costs + overheads vs how often they could rent this out, then there isn't any sensible price they could rent this out at per day where they could make a profit on it. 

    Maybe some niche rental house in LA?  Even just if folks making music videos wanted to rent it?

  7. 8 minutes ago, zlfan said:

    with widely available 4k 60p acquisition tools like phones and action cams, and with ubiquitous youtube 4k 60p viewing mode, the new generation of viewers' eyes and brains are trained very differently. and the rest is just history. 

    Time will tell if it's just familiarity or if it's actually something innate.

    I wouldn't be so quick to assume that the human visual system has nothing to do with how we feel about what we see, and that it's all equal and is just what we're used to.

  8. 4 hours ago, Jedi Master said:

    My four friends who saw the movie with me were all engineers and they knew I wasn’t asking costumes or anything like that. After they said they didn’t notice anything different, I told them the movie was shot and projected at 48 FPS, and they still said they didn’t notice that—they all said it seemed just like another movie in that regard.

    Yes, this wasn’t a scientific test by any means, but it did indicate to me that 48 FPS didn’t stick out like a sore thumb to my very technically-minded friends. The gold standard in audio testing is double-blind testing with accurately matched levels, etc. Something similar could probably be arranged with projection video or film. One difference would be that people participating in a test like this would be looking for differences and probably more attuned to them, whereas my friends were not.

    I'd be curious to see proper research on the topic.

    My predictions would be that a certain percentage would identify that it "looks different" in some way, and would be able to identify the effect in an A/B/X style test.  I suspect that a further percentage would say it looks the same, but that it somehow feels different, essentially anthropomorphising it to be sadder or more surreal or something.
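
    For anyone wanting to run such a test, a one-sided binomial test is a simple way to check whether an A/B/X result beats chance.  A minimal sketch (pure Python, nothing camera-specific; `abx_p_value` is a made-up helper name):

```python
from math import comb

def abx_p_value(n_trials: int, n_correct: int) -> float:
    """One-sided binomial test: probability of getting at least
    n_correct out of n_trials purely by guessing (chance = 0.5)."""
    return sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1)) / 2 ** n_trials

# 12/16 correct is unlikely to be guessing (p ~ 0.038, below 0.05),
# while 5/10 correct is entirely consistent with guessing (p ~ 0.62).
```

A result below p = 0.05 is the conventional (if arbitrary) threshold for saying a listener or viewer genuinely perceived a difference.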

    I've done this test with a few people in a controlled environment: I recorded a tree in my backyard moving in a light breeze with my phone in 24p, 30p and 60p, put each clip on its own timeline in Resolve, and played each one back through the UltraStudio 3G, which switched the monitor to the correct frame rate for each clip.  It wasn't perfect, but all the clips were recorded with very similar settings, so it wasn't a bad test.

    Perhaps the best test would be to render a 3D environment in the three frame rates but have each render start with the same random variables and so the motion of the scene would be exactly the same.  

    Of course, if you were going to do it, I'd put in a few other things too, like varying the shutter angle.  It would be a huge amount of work and would require a pretty large sample group to get meaningful results.
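
    The shutter-angle arithmetic behind such a test is easy to sketch.  This uses the standard definitions only (the function names are made up); the 450° figure falls straight out of the maths:

```python
def shutter_seconds(fps: float, shutter_angle: float) -> float:
    # A 360-degree shutter is open for the entire frame period.
    return (shutter_angle / 360.0) / fps

def angle_for_blur(fps: float, blur_seconds: float) -> float:
    # Shutter angle needed at a given frame rate to reproduce a given blur time.
    return blur_seconds * 360.0 * fps

# 180 degrees at 24p gives a 1/48 s exposure per frame.
blur_24 = shutter_seconds(24, 180)

# Matching that blur at 60p would require a 450-degree shutter, which is
# physically impossible - one reason 60p motion can never look identical
# to 24p at 180 degrees, no matter how you set the camera.
angle_at_60 = angle_for_blur(60, blur_24)
```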

    4 hours ago, Ty Harper said:

    Honestly I'd rather hear people discuss the pros/cons etc of 24p vs 60p vs others than the hyberbolic end of days talk. 

    What kinds of storytelling, visual messaging, etc, benefit from these combinations of frame rates, lightings, resolutions, formats, color grades, audio soundscapes - y'know?

    Good question.

    I've noticed the gulf widening between "video should look like reality so technology advancement is awesome" and "video should be the highest quality to democratise high-end cinema and advance the state-of-the-art".  There are also the "technology is always good, why are you talking about a story?" folks, but they're best ignored 🙂 

    The challenge in this debate is that if we're not even trying to achieve the same goals, then what's the point of discussing the tools?

    3 hours ago, zlfan said:

    it is interesting to see how people are locked in the stereotypic ideas. 

    24p for cinema, 30p for tv broadcast, 60p for sports. 

    right now, holly wood block busters are typically action packed, green screen vfx focused. so high res high fr are actually very suitable for hollywood features, these are more sports like than classic drama dialogue center like. 

    the classic drama which emphasizes on dialogue is moving into the broadcast or streaming service. so if 24p is critical for this kind of drama, actually 24p should be used for soap opera on tv or netflix. 

    in summary, the paradigm has shifted.

    60p should be for hollywood feature cinema

    24p should be for tv soap opera, netfix

    30p is still for news. 

    Interesting concept!

    I think you might be overstating the takeover by the heavy-VFX component of the industry, and perhaps even the nature of those segments themselves.

    Certainly the majority of Hollywood income might come from VFX blockbusters, but the world is a lot larger than Hollywood.  The majority of films made likely weren't VFX-heavy, and the majority of films that people actually cared about definitely weren't.

    If you asked me if I'd seen <insert blockbuster here> then I probably couldn't answer, because truthfully they're mostly forgettable.  On many occasions I've been pressured into watching a movie that one of my kids chose, and the experience was mostly the same - famous actors / bursts of action / regular laughs / the USA wins in the end - and a few days later I'd remember that I watched the film but genuinely couldn't recall the plot.  This is counter to something like Roma, where years later I remember not just certain scenes but how it felt in critical moments, and how different my life is to theirs.

  9. 23 hours ago, zlfan said:

    24p is hailed as cinematic. it was just a frugal approach at the time of film days. 

    5k 60p of gp 12 is so smooth. seeing is believing. 

    Computer displays are a long way from being superior to human vision, so it's all about compromises and the various aesthetics of each choice.

    I would encourage you to learn more about how human vision works; it can be very helpful when developing an aesthetic.

    A few things that might be of interest:

    Video is an area where technology is improving rapidly and a lot of the time the newer things are better, but that's not always the case.

    The other thing to keep in mind is that there are different goals - some people want to create something that looks lifelike, while others want to create things that don't look real.  Many of the tools and techniques in cinema and high-end TV production exist deliberately to make things look not real, but surreal or 'larger than life' etc.

    6 hours ago, Jedi Master said:

    I went with four friends to see The Hobbit in a theater that showed it at 48 FPS. I knew it was at 48 FPS, but my friends didn’t. I asked them after the movie if they noticed anything different and none of them did.

    I've been doing video and high-end audio for quite some time, and have put many people in front of high-end systems or shown them controlled back-to-back tests.  Often people do notice differences but don't mention them - because they don't know what you're asking, or don't have the language to describe what they're seeing or hearing and don't want to sound dumb, or simply don't care and don't want to get into a long discussion.

    Asking people who have just seen a movie for the first time whether they noticed "anything different" is a very strange approach - since they hadn't seen the film before, everything about it was different.  Literally thousands of things: the costumes, the lighting, the seats, how loud it was, how this cinema smelled compared to the last one, etc.

    Better would be to sit people in front of a controlled test and show them two images with as few variables changed as possible.  Even then it can be challenging.  When I first started out I couldn't tell the difference between 24p and 60p; now I hate the way 60p looks and quite dislike 30p as well.  Lots of people also know what the 'soap opera effect' is without being camera nerds.

  10. 2 hours ago, MrSMW said:

    Focal ranges are quite personal/dependant on individual needs but for me these are:

    Indoor, 20-70 and outdoor 35-150.


    As a one lens to do it all though, I’d struggle to see past the 12-40mm f2.8, the most recent version.


    It’s the one lens that along with Fujis 16-55 (24-83 equivalent) has FF beat IMO, because the ‘traditional’ 24-70mm f2.8 is just a bit short at the long end.


    That OM-1 paired with the 12-40mm f2.8 and the 75mm f1.7, for me would be the ultimate travel camera combo.

    I'm really interested in how long the longest end needs to be.  

    You've basically said that 70mm is a bit short, that 75 or 80 are better, but that you use a 35-150mm outdoors.

    How do you feel about the 100-150mm range?  Do you use it a lot?  If so, what specific types of compositions and situations do you use it for?

    How do you feel about the 150-200mm range?  Gathering from the above and other posts, you seem to have traded it away for other considerations, but is the focal length useful at all?  Or is it too long for what you shoot?  If you were given a weightless 28-200mm F2.8 lens, how many of your compositions would be above 150mm, and what would they be?  What about a 28-300mm?

    I guess what I'm looking for is feedback on the creative elements - e.g. that anything above 150mm is too compressed, that it's only useful in certain situations, that it feels too distant and out-of-context in an edit, or that it's not useful at all, etc.

  11. I'm keen to get some feedback on focal lengths.

    As many know, I shoot travel and want to be able to work super-quickly to get environmental shots / environmental portraits / macro / detail shots, and have narrowed down to three options:

    1. GX85 + 12-35mm F2.8
    2. GX85 + 12-60mm F2.8-4.0
    3. GX85 + 14-140mm F3.5-5.6

    The GX85 also has the 2x digital punch-in which is quite usable.

    GX85 + 12-35mm F2.8
    This is essentially a 24-70mm equivalent (48-140mm with the 2x punch-in) with a constant aperture, and is the best for low-light.  The question is whether this is long enough for getting all the portrait shots.

    GX85 + 12-60mm F2.8-4.0
    Same as above but slower and longer.  I'm still not sure if this is long enough.

    GX85 + 14-140mm F3.5-5.6
    Slower but way longer.  This would be great for everything, and also zoos / safaris too, but isn't as wide at the wide end, which is only a slight difference but is still unfortunate.

    I'm keen to hear people's thoughts on the practical implications of these options - specifically, what final shots each will provide to the edit.  My experience has been that the more variety of shots you can get when working a scene, the better the final edit.  I'm not that bothered about the relative DoF considerations, but aperture matters for low-light of course.  I'm shooting auto-SS so am not fiddling with NDs, so a constant aperture doesn't matter in that regard.
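
    For anyone cross-checking the equivalence figures above, here is a tiny sketch (assuming the usual 2.0x Micro Four Thirds crop factor; `ff_equivalent` is a made-up helper name):

```python
MFT_CROP = 2.0  # Micro Four Thirds crop factor relative to full frame

def ff_equivalent(focal_mm: float, crop: float = MFT_CROP, punch_in: float = 1.0) -> float:
    """Full-frame-equivalent focal length for field of view,
    optionally including a digital punch-in (extra crop)."""
    return focal_mm * crop * punch_in

# 12-35mm F2.8  -> 24-70mm equivalent, or 48-140mm with the GX85's 2x punch-in
# 12-60mm       -> 24-120mm equivalent (48-240mm with punch-in)
# 14-140mm      -> 28-280mm equivalent before any punch-in
```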

    There are obvious parallels to shooting events, weddings, sports, and other genres - keen to hear from @BTM_Pix @MrSMW and others who shoot in similar available-light / uncontrolled / run-n-gun / guerrilla situations.

  12. 9 hours ago, Jedi Master said:

    I sync more than two without much hassle (I have seven desktop computers scattered around the house). It’s easily done with the right software (although mine are all Windows and FreeBSD boxes, but I’m sure there’re similar tools for MacOS).

    When I made the move from Windows to Mac, the primary reason was that I wanted to use a computer without having a part-time job as a systems administrator, which is what Windows forces on you just to use the machine.

    I was stunned at the time by how much time I had to spend on the computer not doing the things I wanted to do, but doing technical things to enable those things to be done.  TBH I'm over that - while I'm perfectly capable of managing the IT of a medium-sized business, I'd rather just use the computer for what I want to use it for, and not have to troubleshoot an array of file and network management infrastructure.

  13. 22 minutes ago, eatstoomuchjam said:

    Some of this is also being done at the sensor level instead of in the camera.  The GFX 100 series have a full sensor width 10-bit 4K mode (12-bit in raw).  They're not reading the entire sensor and downscaling.  They're using a 4K readout mode that's built into the sensor and when set for raw HDMI output, feeding that to the recorder.  It's "raw" in the sense that it's the exact data that the SOC receives from the sensor.  I assume that the implementation is similar in at least some other cameras.

    No links to share just now - just stuff I've read over the years as well as applying logic that if two cameras have the same sensor, but non-matching raw output, they must be doing some processing.  Not all are confirmed, but Sigma fp/Sony A7 III, Sigma FP-L/Sony A7R IV, GFX 100/H2D, Z Cam E2/Panasonic GH5S/BMPCC4K, etc etc etc.  If zero processing is applied, any of those cameras should look exactly the same when shooting raw...  unless the colors are determined by the raw decoder software - and if that's the case, it should be possible for the vendors to easily transform any of those cameras to any of the others.  🙂

    Yes, I understand the logic.

    One element of colour science that's often forgotten is the choice of RGB filters in the colour filter array.  I'm not sure whether those would also be the same between these cameras, or whether they're supplied by Sony and therefore identical between brands.  In theory different filters would create differences near the corners of the gamut, but once transformed to XYZ, the differences between two different filter sets on the same sensor would be rounding errors at most.

    The best discussion I've seen on digital processing is the discussion of the Alexa 35 - page 52 onwards.


    As you say, much of what is going on might well be things that occur on the sensor.

    The other component worth examining between RAW sources is the de-Bayer algorithm.  AFAIK you can't choose which de-Bayer algorithm gets used on RAW footage, so differences between the manufacturers' algorithms might contribute to the differences we see in real life.
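
    To make the de-Bayer point concrete, here is a toy bilinear demosaic - not any manufacturer's actual algorithm, just the simplest classical approach.  Every choice in it (neighbourhood size, weighting, edge handling) is a place where real implementations diverge, which is where visible differences between cameras can creep in:

```python
def bayer_color(y: int, x: int) -> str:
    # RGGB layout: R at even/even, B at odd/odd, G elsewhere.
    if y % 2 == 0 and x % 2 == 0:
        return "R"
    if y % 2 == 1 and x % 2 == 1:
        return "B"
    return "G"

def demosaic_bilinear(mosaic):
    """For each pixel, fill each RGB channel with the average of that
    colour's samples in the (edge-clamped) 3x3 neighbourhood."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = {"R": [0.0, 0], "G": [0.0, 0], "B": [0.0, 0]}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        c = bayer_color(ny, nx)
                        acc[c][0] += mosaic[ny][nx]
                        acc[c][1] += 1
            # Every clamped 3x3 window of an RGGB mosaic contains all
            # three colours, so each channel has at least one sample.
            row.append(tuple(acc[c][0] / acc[c][1] for c in "RGB"))
        out.append(row)
    return out
```

On a flat grey patch this reconstructs neutral grey exactly; it's on edges and fine detail that different algorithms produce visibly different results.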

  14. 19 hours ago, bonzcruzez said:

    I might see the odd 60+ year old guest fumbling with an old DSLR or the occasional hipster with something Fuji flavoured, but otherwise nada, it's wall to wall phones and if they are not taking snaps with them, they are gazing at them for whatever enlightenment they individually receive from these devices.

    Don't be so skeptical...  there is much enlightenment available!


  15. 2 hours ago, markr041 said:

    "Don't confuse the bit-depth with DR, they're independent variables." Do you seriously think that I do not know this? Do you understand how condescending and insulting that statement is? 

    Do you also seriously think that I just defend GoPro no matter what?

    Is that how you justify thinking you actually know what you are talking about when you talk about GoPro's or the many other other cameras you have not used - that pushback to what you say is based on prejudice? Attack the person exposing you?

    Let me make it very clear - I have access to any camera, any. You might have noticed that I have posted videos from 10's of different cameras of all types. I mostly use the ones I think are best in their category at the time, and I care about size (so I am ignoring the medium-frame cameras for now) and ability to shoot handheld (so you won't see me packing an ARRI or a Venice, though I did work with the Sony FS cameras).

    I thus do not defend based on name brand or "ownership", I only care about facts (this does NOT mean I do not make mistakes). You made some ignorant statements about the latest GoPro's, made some ridiculous statements about bit depth, and got caught, so you try to make the person who pushed back look like an idiot - both ignorant and prejudiced. You can dis on GoPro's or Nikons or Canons or Sonys or BlackMagic's or Z-cams or Panasonics or Sigma's (all of which I have used and posted videos from), but it better be based on facts (eg, forget shooting with stabilization in low light with a GoPro - yes; Panasonic AF sucks, until only recently - yes; Sony IBIS is actually weakest - yes; BMPCC and Sigma images are more pleasing for some reason than from other cameras - yes; the Hero 12 is not worth buying over the 11 - yes; the Nikon Z8 is bulky and its AF is difficult to use compared to other cameras - yes).

    Do you actually think you have a sense of humor? 

    What exactly is your question? I am happy to answer it.  Even if it is, as is likely, an insulting one.

    I didn't realise you had access to every camera.  I realise now that this makes every statement you make correct!

    I'll learn my place eventually....


  16. 10 hours ago, zlfan said:

    GP12 can record unprocessed raw audio separately simultaneously and save into the same folder. Is there any workflow (prefer free instead of costly addons) to process this raw audio in Resolve to get good sound? Something like AI  sound noise reduction in iphone? Thx. 

    You might benefit from the Voice Isolation feature in Resolve.  I'm not sure if it's a paid feature or included in the free version, but it seems to work pretty well in some situations.

    In general though, having something on the microphones to block them from wind is essential.  You can't side-step the laws of physics 🙂 

    4 hours ago, zlfan said:

    You might wonder how to color grade or color correct GP-Log. The problem is that no matter how good you are at color grading, the exposure needs to be right in order to get good results. 

    That's true of almost every camera.  The closer you get it right in-camera, the better the output will be.

  17. Gotta love a demo video that starts with a world-famous model / actress...

    All else being equal, products like this probably help to keep film alive.  There's a chance the rich might adopt things like this, and perhaps burn through film like they're rich (because they are).  For the rest of us, if you wanted to shoot on 8mm film then get yourself a second-hand real film camera and then benefit from the film consumption of the rich to give economies of scale.

    Personally, I think that the iPhone / smartphones have finally gotten here.  The compression artefacts (and even the crushing over-processing) are made invisible by the time you add enough blur and grain to get a semi-decent match, and they've finally caught up in terms of dynamic range, so you can have contrasty mids with strong saturation but gentle and extended rolloffs.

    Also, the more these images trend on social media, the more my OG BMPCC and BMMCC go up in value!

  18. On 11/29/2023 at 11:53 PM, TomTheDP said:

    Not a scientific test either but maybe more informative than the first one haha. 

    I think each camera's processing is just as important as the codec itself. That is what RAW theoretically gets around, the processing. Though with BRAW it's not really getting around that as much as it isn't a true RAW format. 

    Agreed that it's often about the processing.

    I think there's a bunch of stuff going on and often people don't understand the variables, or aren't taking into account all the relevant ones.  Also, people forget that the main goal of any codec is getting nuanced skin tones in the mids of whatever display space you'll be outputting to.

    In that context:

    • RAW in 12-bit is linear, which is only about the same quality as 10-bit LOG (linear encoding spends most of its code values on the highlights and starves the shadows)
    • LOG in 10-bit isn't significantly better than 709 in 8-bit when exposed well
    • It's commonly believed that only 12+ bit RAW lets you adjust exposure and WB in post, and that 10-bit LOG is required for professional work, but thanks to colour management we can adjust exposure and WB in any of these.  (Obviously 12-bit RAW > 10-bit LOG > 8-bit 709 from most cameras in real life and for big changes, but if you're only making small changes the errors are often negligible.)
    • 8-bit LOG is much worse than 8-bit 709 unless you're delivering to HDR (because converting LOG to 709 pulls the bits in the mids apart significantly, which is absolutely visible)
    • HLG is sometimes better than LOG.  Camera 'HLG' profiles vary between manufacturers (although the HLG transfer function itself is standardised in ARIB STD-B67 and rec2100), but they typically follow a rec709-like curve up to the mids, then apply a more aggressive highlight rolloff above that to keep the whole DR of the sensor, combined with a rec709 level of saturation.  This is brilliant because you get the benefits of a 10-bit image with 709 levels of saturation and the full DR of the sensor.  It's superior to a 10-bit LOG profile from the same camera (unless you clip a colour channel) because there is greater bit-density in the mids for skin tones (roughly equivalent to 12-bit LOG) and you keep the whole DR.  You can also change exposure and WB in post with proper colour management.  The GH5 and iPhone implementations of HLG are like this.
    • The pros require greater quality than consumers because they have to keep clients happy when viewing the images without any compression.  Much of the subtlety gets crunched by streaming compression, and unless you're shooting in controlled conditions you can't really expect to keep the last levels of nuance through to final delivery.
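
    The bit-density claims above can be sanity-checked with rough arithmetic.  This sketch uses simplified curves - a pure log spread over an assumed 14 stops, and the Rec.709 OETF - so the numbers are ballpark figures, not measurements of any specific camera:

```python
def codes_per_stop_linear(bits: int, stop_bottom: float) -> float:
    # Code values between a linear level and the level one stop above it:
    # (2*lo - lo) * full_scale = lo * full_scale.
    return stop_bottom * (2 ** bits - 1)

def codes_per_stop_log(bits: int, total_stops: float) -> float:
    # A pure log curve spreads code values evenly across every stop.
    return (2 ** bits - 1) / total_stops

def bt709_oetf(v: float) -> float:
    # Rec.709 OETF (piecewise linear / power).
    return 4.5 * v if v < 0.018 else 1.099 * v ** 0.45 - 0.099

def codes_per_stop_709(bits: int, stop_bottom: float) -> float:
    full = 2 ** bits - 1
    return (bt709_oetf(2 * stop_bottom) - bt709_oetf(stop_bottom)) * full

MID_GREY = 0.18
# 12-bit linear: ~737 codes in the stop above middle grey,
# but only ~46 codes in the stop starting 4 stops below it.
# 10-bit log over 14 stops: ~73 codes in *every* stop.
# 8-bit Rec.709: ~47 codes in the stop above middle grey.
```

So 12-bit linear is lavish in the mids but poorer than 10-bit log in the shadows, while 8-bit 709 and 10-bit log are surprisingly close around middle grey - consistent with the points above.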
    On 11/30/2023 at 4:17 AM, eatstoomuchjam said:

    Keep in mind that even a number of cameras still do some processing even on so-called "raw" formats.  If they didn't, any two cameras using the same sensor would look exactly the same, but they don't.  There's a reason that raw looks different from multiple manufacturers despite that each one is using Sony sensors (for those who do - Sony, Fuji, Z Cam, Panasonic (I think?), and Black Magic (also I think?)).

    I'm keen to learn more about what happens at this stage of processing.  If you have resources on this, please share!

    On 11/30/2023 at 4:17 AM, eatstoomuchjam said:

    As far as whether raw is worth it, it depends on a combination of how much work you want to do with it and how flexible you want the footage to be.  For what you're doing, if you have lots of time to set up and prefer a slower pace of shooting, raw is probably not too important.  You have plenty of time to make sure to "get it right" in camera.  On the other hand, if storage space isn't a concern, there's very little reason _not_ to shoot raw.  

    The other aspect to consider is that RAW and uncompressed aren't the same thing.

    The megapixel race has obscured the fact that very close to 100% of material is now shot at resolutions above the delivery resolution.  This is fine, and oversampling is great for image quality, but if you want to shoot RAW (and get the benefits of having no processing in-camera) without dealing with 5K / 6K / 8K source files when delivering 4K or even 1080p, then you have to take a sensor crop and have your whole lens package change because of it.

    The alternative to this, and what I think is the big advantage of ProRes, is to have the full sensor readout downscaled in-camera, but unfortunately this typically means some loss of flexibility in the source image (through compression, bit-depth reduction, or even outright over-processing like the iPhone 14 did).  A better option would be to downscale in-camera and then save the images uncompressed.

    There is nothing stopping manufacturers from implementing this.  The GH5 downscaled 5K in-camera and many cameras record Cinema DNG sequences in-camera - just combine the two and we're there.
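
    The data-rate argument is easy to put numbers on.  A rough sketch (hypothetical resolutions chosen for illustration; 12-bit Bayer data with one sample per photosite, ignoring container overhead):

```python
def uncompressed_rate_MBps(width: int, height: int, bit_depth: int, fps: float) -> float:
    """Data rate of an uncompressed RAW stream in megabytes per second.
    Assumes one sample per photosite (Bayer data) and no container overhead."""
    return width * height * bit_depth * fps / 8 / 1e6

# Hypothetical figures for illustration:
rate_5k = uncompressed_rate_MBps(5120, 2700, 12, 24)    # ~498 MB/s
rate_1080 = uncompressed_rate_MBps(1920, 1080, 12, 24)  # ~75 MB/s
# Downscaling to 1080p in-camera before writing uncompressed frames
# cuts the data rate by a factor of ~6.7.
```

At ~75 MB/s, downscaled uncompressed 1080p sits comfortably within what ordinary cards and SSDs can sustain, which is the point: the bottleneck is the resolution, not the lack of compression.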

  19. On 11/30/2023 at 12:28 AM, Emanuel said:

    A dream come true of the Italian neorealism... think about it : X

    To those who are used : D to complain on crappy YT entries, here's something... (Marty @PannySVHS, this time, starts with my beloved Rome found now right after when arrived to that earlier Lisbon clip of before ; )



    Just got around to watching this - interesting stuff.

    One thing described as a key element of the French New Wave was the availability of affordable and portable 16mm film cameras, which allowed the available-light, hand-held shooting style.  I see from Wikipedia that Italian Neorealism was a precursor to the FNW, in full swing from ~1945-1955, with the FNW not really getting started until the late 1950s.

    I couldn't find a good timeline of when 16mm film became affordable, but I did note that Wikipedia says "The format was used extensively during World War II, and there was a huge expansion of 16 mm professional filmmaking in the post-war years."  So maybe Italian Neorealism was the first movement to really benefit from this technological advancement?

    I was also under the impression that the FNW was the innovative movement that took the new tech and developed new techniques that fully utilised it, but maybe that's not the case?

  20. 1 hour ago, zlfan said:

    I think the major critical advancement of the last decade of the camera development is being able to put 4k or 6k into one tiny body, together with hfr to 60p or 120p, ibis, video af, etc. Maybe in another 5 years, raw codec on a u43 or s35 or 1-inch sensor will come to these small bodies, then a red epic in a true pocket cinema camera body is possible. Any manufacturer does this first will scoop a lot of money, like what Red experienced around 2010. 

    While not action-camera sized, the BMMCC had RAW (compressed and uncompressed) from a S16 sensor (only a tiny bit smaller than a 1-inch sensor) almost a decade ago.  Of course, that was at 1080p - the mindless pursuit of ever-higher resolution means that what was once possible now generates data rates that are too high, too much heat from processing, etc.

    Realistically, there isn't a huge overlap between the people who want RAW and the people who demand cameras smaller than a RED Komodo.  I wish there was, as I think it would be great, but I just don't think it's true!

    26 minutes ago, markr041 said:

    I am not confusing anything. You are just choosing to believe that only you know it all, and others cannot possibly know as much as you on any topic. You have knowledge, cut the condescending... (which is not just to me). And post some of your work instead of lecturing. 

    I cannot argue with what you hate.

    And unlike you, I do not assume that what I like is what everyone else should like. I have pushed 60 fps because of the bias that WAS in this forum that somehow 24 fps is the correct way to shoot. I think now that has gone away, because the holiness of film has diminished here at last.

    In fact, stop your personal lecturing . I get it, you are not used to pushback because you believe in your own head you know it all. Your prejudices are now revealed, as well as your attitude.

    I argue against what you say when it is based on ignorance of cameras you have never used, which has been exposed often. Your experience with color grading does not carry over to camera capabilities that you read on the internet.

    You didn't answer my question.

    I'll just write lines until you tell me to stop....


  21. On 11/29/2023 at 2:33 PM, MrSMW said:

    I ‘traded’ (off-loaded to the wife) my M1 14” at the beginning of this year for an M2 Max with 64GB ram 16” because the old one was barely faster than my 7 year old gaming desktop PC.

    A good test was batch editing a typical wedding of say 700 raw files through DXO PureRaw tidying up noise and detail.

    Overnight on my PC, around 8-10 hours?

    Same thing failed every time I tried it on the M1 chip MacBook.

    With this year’s M2, 40 minutes maybe?

    It has transformed my workflow and as someone who spends a lot of time working away from home Apr-Sep, it’s been brilliant.

    I use it with 3 external SSD.

    One for photo, one for video, one (armoured) backup for both.

    My only wishes were it was slightly less laggy for video editing because yup, there is some (but that may be me not having it set up right?) and I wish the screen was bigger than 16”, but as an all in one, photo and editing machine, at home at the dining room table, or in the office or in the motorhome, it’s a great/essential tool for sure.

    Interesting!  You must have really been utilising something that moved from software to hardware in the upgrade - every benchmark I've seen showed only incremental gains between the two.  Unless your M1 wasn't an M1 Max?

    On 11/29/2023 at 2:59 PM, Jedi Master said:

    The biggest issue I have with laptops, besides the speed issues, is screen size. My desktop PC, where I do editing, has three 34" 4K monitors. That much screen real estate really makes it easier to edit video and photoshop stills and I'm not willing to give it up just to get portability.

    I use a 13" MBP as my only computer, but connect it to my 32" UHD panel / hifi audio setup for normal uses, and then when working in Resolve I connect the UI to a 27" FHD panel to the side of my view, and the BM UltraStudio to my UHD panel as a clean 1080p feed of the timeline.

    I also sit about 1.5m/yards from the screens and operate with wireless kb / trackpad / BM Speed Editor / BM Micro Panel, which is why I have a large 1080p panel for the UI.

    On 11/30/2023 at 4:24 AM, eatstoomuchjam said:

    at some point, performance is "enough" for what one is trying to do. 

    Absolutely.  My current 2020 Intel MBP is enough for editing 4K 10-bit IPB h264/h265 on a 1080p timeline with very basic colour applied.  Then I can apply heavier colour grading and effects before exporting.  At the moment it's not fast enough to edit the footage with the heavy colour grading applied, but I make do.

    On 11/30/2023 at 5:15 AM, Jedi Master said:

    If you do that, you might as well have a desktop machine with its better performance. That's not a reasonable justification for a laptop over a desktop.

    I solve the issue of portability by having both a desktop and a laptop. For me, that's a better solution that expecting a laptop to do double duty. Jack of all trades, master of none.

    When the M1 Mac Mini came out I contemplated getting one as a fast editing machine, but there were just too many hurdles to overcome with the need to sync two computers.

    The fundamental logic is this:

    1. If you need to be portable then you need a laptop
    2. If you want a desktop as well then you either need to separate your uses or you need to sync between them

    I didn't want to separate my uses, so that was it, game over: I needed a relatively powerful laptop.  It would have been cheaper to get a desktop-only setup, but it just didn't work for me.

  22. 4 hours ago, zlfan said:

    I think that GP may go 1 inch or larger, but only under the extreme pressure of Insta360 and DJI and iphone, etc. The major concern is that the af is much harder than 1/1.9 inch. 

    I highly hope that either Panny or Oly comes out with a small form factor u3/4 cam like Sony RX0 ii, but can do 6k to 8k 10 bit log open gate, great ibis and video af, like s5 iix or g9 ii or om1.  

    I thought that GoPro models were fixed-focus, and in focus between about 30cm and infinity?  Going to a larger sensor might mean they'd have to include AF, which would be a significant change from the fixed-focus approach.

    Still, it would be good to get a larger sensor, as that would improve low-light performance etc.  The size of the camera is a variable to watch though.

    I thought the RX0 was a really interesting camera, and despite Sony telling everyone it wasn't an action camera and not to review it as one, most reviewers ignored that advice, so it was largely misunderstood and never really used to its potential.  Unfortunately it had many of the usual Sony weaknesses, including overheating.

    It would be really interesting to see what Panny or Oly would do with a similar form-factor, but based on the poor understanding of the RX0 I wouldn't think they'd make an attempt...

  23. 8 hours ago, markr041 said:

    We all agree that REC709 has very limited dynamic range on any camera, yes?

    Not all.  The Canon XC10 included the whole sensor DR in its standard profile.  I really wish that other camera companies would adopt this practice, but unfortunately they don't.


    8 hours ago, markr041 said:

    We all agree, that for the same sensor, a log gamma with 10bit color permits a higher dynamic range than REC709, yes?

    Don't confuse bit-depth with DR; they're independent variables.  You could take an ARRI Alexa image and quantise it to 4-bit, and that wouldn't radically reduce the DR - it would just give you fewer tonal steps across the same range.
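    A toy sketch of why these are independent (the 14-stop scene and simple log encoding are assumptions for illustration, not any real camera's transfer function): quantising to fewer bits reduces the number of tonal levels, but the span from darkest to brightest encoded value survives.

    ```python
    import numpy as np

    # Toy model: bit-depth sets how many tonal steps you get, not the captured
    # dynamic range. Encode a 14-stop scene logarithmically, then quantise the
    # code values to 10-bit and 4-bit and see how much range survives.
    scene = np.logspace(0, 14, 1000, base=2.0)   # linear light spanning 14 stops
    log_encoded = np.log2(scene) / 14.0          # normalised log encoding, 0..1

    def quantise(signal, bits):
        """Round a 0..1 signal to the nearest of 2**bits code values."""
        levels = 2 ** bits - 1
        return np.round(signal * levels) / levels

    for bits in (10, 4):
        q = quantise(log_encoded, bits)
        decoded = 2.0 ** (q * 14.0)              # back to linear light
        stops = np.log2(decoded.max() / decoded.min())
        print(f"{bits}-bit: {np.unique(q).size} tonal levels, "
              f"{stops:.1f} stops of range preserved")
    ```

    Both bit depths report the full 14 stops; the 4-bit version just crushes it into 16 levels, which looks terrible but is a precision problem, not a dynamic-range one.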

    8 hours ago, markr041 said:

    Now, as to "action": I maintain, to which we all do NOT agree for some reason, that 24 fps is an inappropriate frame rate for any movement, including walking with the camera. So, 8K 24p is not useful for action, although in a pinch it is not terrible (As I showed with my cell phone video, with plenty of action if anyone actually viewed it, including drone flights). Any full body motion is action. 24p emulates film, and we all put up with the odd motion it displays as part of the separation from reality that film gives us (so I am not dissing 24p in general).

    You're confusing a bunch of things.

    I personally absolutely HATE the look of 60p.  I am so sensitive to this look now that 30p also looks pretty awful to me.  It looks far less like reality than 24p does, 60p/30p look like hyper-reality, like how reality would look if I was part robot living in the year 3000 after I'd taken my smart drugs for the day.  I don't know why they look like this, but they do.

    If you analyse all the variables involved then you become aware that 24p and 60p are both significantly different to how reality looks to our eyes, they are just making different trade-offs, and neither one is more 'correct' than the other.  The errors of 24p are just less offensive than those of 60p to me, and many others too.

    I can understand that watching sports in 60p is useful because there are quick motions of the ball etc, so it's great for making things obvious, but it doesn't look real.  Oddly, when playing video games, 60p / 120p / 240p don't have the same look to me as video.  Maybe it's to do with interacting vs passively watching, I'm not sure.  Maybe it's that games are artificial and so the artificial look of those frame rates doesn't clash with the aesthetic.

    I understand that you like the look of 60p and you want to make things look like reality, but that's not what everyone wants.
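    One concrete trade-off behind the 24p vs 60p difference: at a matched 180-degree shutter, the frame rate fixes the per-frame exposure time and therefore the motion blur in each frame. A minimal sketch, assuming the standard rotary-shutter model:

    ```python
    # Per-frame exposure time at a given shutter angle - one reason 24p and 60p
    # render motion differently. Purely illustrative arithmetic.
    def exposure_time_s(fps, shutter_angle_deg=180.0):
        """Exposure time per frame under a rotary-shutter model."""
        return (shutter_angle_deg / 360.0) / fps

    for fps in (24, 30, 60):
        t = exposure_time_s(fps)
        print(f"{fps}p @ 180 deg: 1/{round(1 / t)} s exposure per frame")
    # 24p -> 1/48 s (more blur per frame, fewer temporal samples)
    # 60p -> 1/120 s (less blur per frame, more temporal samples)
    ```

    So neither is "correct": 24p smears each frame more but samples motion less often, 60p does the reverse, and which error you find objectionable is the aesthetic choice.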

    9 hours ago, markr041 said:

    So, an "action" cam stuck in 8bit REC709 is not for my colleagues.


    9 hours ago, markr041 said:

    Any comparisons of "image quality" based on posted videos is meaningless - all the non REC709 Hero 12 videos are color graded, so if you do not like the images, that is on the user.

    If we promise to never say a bad word against the mighty and perfect GoPro will you promise not to come and tell us off for blaspheming? 

    Seriously though, the sooner you accept that not everyone wants what you want, shoots what you shoot, shoots how you shoot, the sooner you'll be able to discuss things rather than always argue with people.
