
Michael S

Reputation Activity

  1. Like
    Michael S reacted to newfoundmass in EOSHD YouTube Thread   
    I'm loving the videos @Andrew Reid has been posting on YouTube lately. They're cracking me up, because he's saying a lot of the things I'm thinking, but doing it with the British humor and wit that I lack. The Matt Granger one did me in this morning! If you haven't watched them, you should! Keep it up, Andrew!
  2. Like
    Michael S reacted to Amro Othman in "Creator" replaces "filmmaker"   
    On a side note, I never liked the term "creator" from the start. It's pretentious. 
    No matter how awesome a video I make is, I didn't cReAte anything. I captured something that God created, and edited it into something cool. I produced something, but I didn't create shit.
    Imagine an architect looking at a building he/she designed and declaring: "I created this". 
    But it makes people who make a basic-ass video or reel feel good and important, so that's one of the reasons it's so popular IMO.
    /rant.
  3. Like
    Michael S got a reaction from PannySVHS in "Creator" replaces "filmmaker"   
    All "social media" platforms are eventually always turned into marketing platforms by their owners, and as content-creators are a species which lives exclusively on such platforms I would like to call them outsourced-marketeers.
    Why employ your own staff to drive a taxi when you can run a platform like Uber and have all these individual drivers compete for rides?
    Why have your own staff to deliver packages when you can contract all these individuals to deliver packages and have them compete with each other?
    Why have your own marketing department when you can have all these content-creators compete with each other to peddle your message?
  4. Thanks
    Michael S got a reaction from Chxfgb in Noob question about video type on TVs   
    What is supported by a TV depends on the model (obviously) and how the file gets ingested, i.e. whether it is read from an inserted thumb drive, through a DLNA server, etc. All three things you mentioned can prevent a TV from playing back the footage. The more you stick with bog-standard formats, the more likely it is to play. So something like 8-bit 4:2:0 at around 6 Mb/s, in 1920x1080 or UHD, should work. I think following DVD bitrates for HD TVs and Blu-ray bitrates for UHD TVs should be a safe bet (e.g. DVD has a max bitrate of about 10 Mb/s if I remember correctly, and discs in practice average around 6 Mb/s).
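    If it helps, here is a minimal sketch of how I would batch-convert a clip into that kind of bog-standard file (assuming ffmpeg is installed and on the PATH; the H.264/AAC/MP4 combination and the bitrate figures are my own guess at a safe target, not a spec):

    # Hypothetical helper: turn a clip into something most TVs will play from a USB stick:
    # 1920x1080, 8-bit 4:2:0 H.264 at roughly 6 Mb/s, with AAC audio in an MP4 container.
    import subprocess

    def make_tv_friendly(src: str, dst: str = "tv_friendly.mp4") -> None:
        subprocess.run([
            "ffmpeg", "-i", src,
            "-vf", "scale=1920:1080",                  # plain 1080p
            "-c:v", "libx264", "-pix_fmt", "yuv420p",  # H.264, 8-bit 4:2:0
            "-b:v", "6M", "-maxrate", "8M", "-bufsize", "12M",
            "-c:a", "aac", "-b:a", "192k",
            dst,
        ], check=True)

    make_tv_friendly("clip_from_camera.mov")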
  5. Like
    Michael S got a reaction from PannySVHS in How are you converting V-Log to "normal" colour?   
    When people say you must overexpose V-Log by 2 stops, they mean the exposure meter must read +2. The reason is that the exposure meter assumes a standard gamma curve and not a log curve. As mid gray on a log curve sits two stops higher than mid gray on a normal gamma curve, the exposure meter must read +2. In my opinion this is a user interface design error from Panasonic which only leads to confusion. So the proper answer is that you must expose correctly, and therefore ignore the exposure meter when shooting in V-Log. Use the spot meter (which switches to EI in V-Log) or the waveform: the spot meter should read 0 EI and mid gray sits at 42% on the waveform.
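    As a sanity check on that 42% figure, here is a small sketch using the V-Log encoding curve from Panasonic's V-Log/V-Gamut reference document (constants quoted from that document as I recall them; treat it as an illustration, not gospel):

    import math

    # Panasonic V-Log encoding curve (per the V-Log/V-Gamut reference):
    #   y = 5.6*x + 0.125                             for x < 0.01
    #   y = 0.241514 * log10(x + 0.00873) + 0.598206  otherwise
    def vlog_encode(x: float) -> float:
        cut, b, c, d = 0.01, 0.00873, 0.241514, 0.598206
        if x < cut:
            return 5.6 * x + 0.125
        return c * math.log10(x + b) + d

    mid_gray = 0.18                               # 18% reflectance
    print(round(vlog_encode(mid_gray) * 100, 1))  # ~42.3, i.e. mid gray lands at ~42% on the waveform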
  6. Like
    Michael S got a reaction from hyalinejim in How are you converting V-Log to "normal" colour?   
    I would have sworn I had seen the exposure meter on my S5 jump up and down when switching between V-Log and the standard profile, but when I checked recently it now stays fixed on 0. I wonder if it is one of those things they silently fixed during one of the firmware updates, or whether the fix accidentally got included because it is part of a common code base shared between all models. Anyway, it behaves properly now, at least on the S5.
  7. Like
    Michael S got a reaction from IronFilm in Panasonic S5 II (What does Panasonic have up their sleeve?)   
    Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism in the hope of gaining some traffic. He's not the only one using that technique.
  8. Like
    Michael S reacted to markr041 in Can't make decent HDR grade from Panasonic S1 V-Log. What am I doing wrong?   
    The discussion is a confused mess, with the usual tropes about 4K resolution, compression artifacts, etc. Almost nothing on the key advantage of HDR: extended dynamic range with 10-bit color. That advantage can be seen at 480p, FullHD or 4K. It can be seen on most high-end phones, on M1 MacBook Pros and on most new TVs, even at the most common internet bitrates.
    Arguing for limited dynamic range is really stupid.
  9. Haha
    Michael S got a reaction from SMGJohn in Panasonic S5 II (What does Panasonic have up their sleeve?)   
    Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism in the hope of gaining some traffic. He's not the only one using that technique.
  10. Haha
    Michael S got a reaction from ntblowz in Panasonic S5 II (What does Panasonic have up their sleeve?)   
    Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism in the hope of gaining some traffic. He's not the only one using that technique.
  11. Like
    Michael S got a reaction from MrSMW in Panasonic S5 II (What does Panasonic have up their sleeve?)   
    Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism in the hope of gaining some traffic. He's not the only one using that technique.
  12. Like
    Michael S got a reaction from FHDcrew in Panasonic S5 II (What does Panasonic have up their sleeve?)   
    Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism in the hope of gaining some traffic. He's not the only one using that technique.
  13. Haha
    Michael S got a reaction from Davide DB in Panasonic S5 II (What does Panasonic have up their sleeve?)   
    Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism in the hope of gaining some traffic. He's not the only one using that technique.
  14. Like
    Michael S got a reaction from IronFilm in Panasonic GH6   
    Online stores have essentially become like auctions. A computer sees what all customers and competing online stores are doing in real time and adjusts its prices according to fancy algorithms trying to maximize profits. Most shops have the decency to only adjust prices during the night, but Amazon and decency...
    In physical stores they are gradually switching over to digital price tags. It is only a matter of time until these start adjusting automatically during the day as well, if local laws allow for it. If only it were legal to set the price based on user profile data...
  15. Like
    Michael S got a reaction from John Matthews in Panasonic GH6   
    Online stores have essentially become like auctions. A computer sees what all customers and competing online stores are doing in real time and adjusts its prices according to fancy algorithms trying to maximize profits. Most shops have the decency to only adjust prices during the night, but Amazon and decency...
    In physical stores they are gradually switching over to digital price tags. It is only a matter of time until these start adjusting automatically during the day as well, if local laws allow for it. If only it were legal to set the price based on user profile data...
  16. Like
    Michael S got a reaction from Emanuel in Panasonic GH6   
    Online stores have essentially become like auctions. A computer sees what all customers and competing online stores are doing in real time and adjusts its prices according to fancy algorithms trying to maximize profits. Most shops have the decency to only adjust prices during the night, but Amazon and decency...
    In physical stores they are gradually switching over to digital price tags. It is only a matter of time until these start adjusting automatically during the day as well, if local laws allow for it. If only it were legal to set the price based on user profile data...
  17. Like
    Michael S got a reaction from kye in How fragile are OIS and IBIS mechanisms?   
    1) If the vibrations are of a high enough frequency, you will still get the jello effect, even with the tiny sensors. I would not expect any damage though, as there are no moving parts in the iPhone.
    2) All IBIS units have hard limits on their range of movement, and I'm quite sure that if you yank them to their limit hard enough and often enough, something will break eventually. These systems were not designed for violent movements; there is no mechanical damping or absorption when they reach the limits of their range, just loud clicks. I would not risk my camera on it unless I would be OK with breaking it anyway.
    Maybe you could get creative with something like this: https://www.proaim.be/collections/shock-absorbing-systems like this photographer: https://radpowerbikes.eu/blogs/the-scenic-route/a-rad-setup-for-ebiking-photographers
    If you happen to know any farmers, they are usually also quite good at creatively putting together some mechanized contraptions.
     
     
  18. Like
    Michael S reacted to kye in Adventures with a colour grading panel and grading "manually"   
    There has been a revolution in colour grading over the last 15 or so years with the advent of colour-managed workflows. These enable automatic conversion of footage between various colour spaces, and things like colour matching between cameras.
    Prior to this, all colour grading was based either on manufacturer-provided LUTs (or other LUTs, like print film emulation LUTs), or on manually grading the camera files to create the desired output (typically grading log into Rec.709). However, colour management doesn't negate the need to manually adjust the image to get a desired look.
    I've been working with colour management and colour grading for years now, but decided to up my game by getting a control surface and learning to do things manually, no colour management or LUTs - just full manual ruthlessness.  
    Enter the BlackMagic Micro Panel!

    which isn't actually that micro in real life....

    After shipping delays (8 weeks!!!) it has arrived and I've put in maybe 6 hours over two sessions.  As anticipated, my skill level is "disappointing", but my plan is simply to put on some music and put in the hours, like building any other skill.
    My first grading session was actually a bit of a revelation.
    I started off grading C-Log footage from the XC10, using only the Lift/Gamma/Gain controls. My second session was grading HLG footage from the GH5, including Contrast/Saturation/Offset as well as a bit of Lift/Gamma/Gain.
    The three trackballs adjust the hue offset, and the three rings/wheels adjust the luminance. At first I thought the wheels were very insensitive: large rotations seemed to make small changes in the image, especially the Gamma wheel. However, the more I used them, a funny thing happened. I found that there were all these little "niches" where suddenly a particular thing emerged. Go a little bit one way or the other and you adjust the feel, but go a bit too far and the look dissolves. These are so fragile that the whole niche might only be 1-2mm of adjustment on one of these wheels. So when you find one of these, all of a sudden the control feels like it's very sensitive; not too sensitive, but you definitely don't want it to be any faster.
    These things are "looks" related to a colour balance, but can also be "textures" related to shadow levels and shadow contrast, or to do with highlight rolloffs.  They can be broader too, like "warm sunset glow" where the balance of the colour matches the contrast, or when I was grading some Thai temples there's a way to make the gold-gilding on the buildings and statues really glow.  These looks really seem to be based on combinations of various things in the image.
    Here are my initial take-aways:
    These controls are enormously powerful
    There are dozens / hundreds / more? of looks that you can do with only the LGG controls - throw in the Contrast/Pivot/Saturation/Offset controls and it's almost limitless.
      Just using a surface is a revelation
    I've used all the individual controls (LGG, Contrast/Pivot/Offset/Saturation, etc) literally thousands of times over the years, but I'm learning new things by the hour that I never noticed or never understood.  I genuinely have no idea why having a control surface has made this difference, but it really has.  Maybe it's being forced to concentrate on only one or two controls at once.  Maybe it's the tactile nature of it.
      Moving multiple controls at the same time is game-changing
    Moving two controls at the same time and in opposite directions is game-changing, and simply isn't possible without a control surface. This is where the plethora of looks comes from: as you adjust multiple controls against each other, the overall image doesn't change much (assuming you're balancing the adjustments) but the ratio between the two does, and you can gradually dial in different looks by navigating up and down this balance point. There's no way you can do this with a mouse, because by the time you adjust one control (which throws off the whole look of the image) and then adjust the second control (to almost completely eliminate the impact of the first), you've forgotten what it looked like before, so you can't possibly dial in the subtle changes required to find these tiny niches in any reliable way.
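    As a rough numerical illustration of that balancing act, here is a sketch using ASC CDL-style slope/offset/power maths (only an approximation; it is not necessarily what Resolve's Lift/Gamma/Gain wheels do internally):

    # CDL-style transform: out = (in * slope + offset) ** power
    # Easing gain (slope) down while nudging lift (offset) up keeps mid grey roughly
    # where it was, but shifts the shadow/highlight balance - the "niche" you feel out.
    def cdl(x: float, slope: float = 1.0, offset: float = 0.0, power: float = 1.0) -> float:
        return max(x * slope + offset, 0.0) ** power

    shadow, mid, highlight = 0.05, 0.18, 0.80

    neutral  = [round(cdl(v), 3) for v in (shadow, mid, highlight)]
    balanced = [round(cdl(v, slope=0.92, offset=0.015), 3) for v in (shadow, mid, highlight)]

    print("neutral ", neutral)    # [0.05, 0.18, 0.8]
    print("balanced", balanced)   # [0.061, 0.181, 0.751] - mid grey barely moves,
                                  # shadows lift and highlights ease off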
      Muscle memory developed really early
    This surprised me, but it developed really fast. The surface feels familiar even after a few hours. I'm told that pros grade without looking down, maybe at all, and that's part of their efficiency.
      You can grade full-screen
    This is perhaps a Resolve-specific thing (I don't know how panels work in other NLEs), but if you're adjusting things with the mouse you can't do it on a full-screen image, because the controls are hidden from the cursor. I have an external reference monitor, and the panel means I can put scopes on my UI monitor over the controls and still adjust things even though those controls are under the scopes. Very useful.
      It's teaching me to see
    I've spotted a few things happening in the footage (which I had seen previously) but because I was adjusting something at the time they emerged, I was able to play with the controls and see what caused them. Now, I recognise that thing and know what is causing it. I've learned what causes things I've been seeing for years. Once you've found a look it's interesting to adjust each control individually to see how that control impacts the look. That can help to dial in the look too - you adjust each control to optimise the look and after a few 'rounds' of tweaking each control you'll have nailed it. You'll also learn very quickly which controls matter to the look, and which ones the look is more sensitive to.
    Would I recommend this?
    Yes and no.
    Yes, but only if you're willing to put the time in.  If not then you're probably going to have a very bad time.  I tried grading some iPhone footage, with its auto-WB and highly processed 709 image, and I was half-way to rage-quitting within about 15 minutes.  I still had that sour taste in my mouth the next day, and it took me a few days to get over!  I've now realised that all practice is good practice and so I may as well grade more forgiving footage and leave the iPhone until my skills are significantly more developed.
    I don't know what my long-term plans will be, maybe I will learn to grade well enough that I don't need to use a panel but will be able to use the knowledge I've gathered.  Maybe I'll always want one.  I will definitely grade real projects using colour management and LUTs, but having these skills will complement that.
    At the moment, it's a learning tool, and damn - I'm learning a lot.
  19. Like
    Michael S got a reaction from kye in The Thread for Good Deals and Discounts   
    For our European visitors: in my country the Lumix S 24mm F1.8 is 900 euros in every shop (well, 899 actually), but on Amazon Italy it is 750 for reasons unknown to me, and has been for quite a few days now. I ordered one and it was an excellent copy. https://www.amazon.it/s?k=lumix+s+24mm&i=electronics
  20. Haha
    Michael S reacted to MrSMW in Panasonic GH6   
    Shooting video at f22 is gonna be a thing.
    Yup, that shallow DOF is sooooooo 2021.
  21. Like
    Michael S reacted to aaa123jc in Crunch Time For Panasonic Autofocus   
    After using the Sony A7S3 on one occasion, I quickly realized how reliable and convenient modern AF systems have become. It makes me question why I have decided to stick with my FS7 and not upgrade to an A7S3 or FX3 (well, because I have no budget 😆). The new AF system just saves so much time.
    Why Panasonic still doesn't offer a good AF system is beyond me. Almost all the improvements to the AF system are for stills. For stills, that system is great: very accurate and fast. Somehow the AF in video mode is just bad, and in my experience worse than even some older contrast-detection AF systems. I suspect the problem is not entirely the DFD system.
    Panasonic cameras are always very close to being the perfect camera. IMO, they just have to fix the AF system and they can easily outsell other brands.
  22. Haha
    Michael S reacted to Tim Sewell in The Aesthetic   
    Not just those considerations - also actors who can stay on their mark, who can repeat the scene multiple times without screwing up. I forget the film and the director, but I heard a tale about Bette Davis where the director told her 'we're going to track up the stairs and along the corridor in a continuous shot, then enter the room and dolly to a close up on your face, where a tear is just starting to form.' 'Which eye do you want the tear in?' asked Davis.
  23. Like
    Michael S got a reaction from kye in Analysing other people's edits   
    I generally also use just straight cuts or the occasional dissolve, but even with straight cuts I find that doing J-cuts, L-cuts, inserts and, in general, timing shots and audio together the way I want can get quite complicated, because of the dependencies in timing between audio and video across shots, and some editors make all this a lot easier than others. This is one of the reasons why I haven't transitioned to Resolve yet. Maybe I just never learned Resolve properly, but I find its interface for editing, combining and timing in- and out-points between shots and audio not particularly convenient. E.g. if you learn the keyboard shortcuts in Vegas Pro, which I currently use, you can do basically anything you want without having to lift your fingers off the keyboard, and slip and slide all event edges or clip content any way you like. (Too bad I don't use it often enough to become really proficient in it :-/) I haven't discovered such convenience yet with Resolve.
  24. Like
    Michael S reacted to kye in The Aesthetic   
    Your points are all true, but what I'm getting at is that the balance is off.  
    If I got a family car and made it sportier, it would be good and appeal to a wider audience. If I made it faster still, it would appeal to a narrower audience, and most people would want other improvements rather than speed, for example safety or comfort. Making it faster and faster and faster leaves behind most people, because they'll never need the speed but would really prefer to pay for extra safety and comfort and a better stereo instead of paying for speed they won't use.
    Cameras are like that now. The only people for whom 8K is actually better than 6K in any meaningful way (when actually looking at the end result) are people doing VFX of some kind (crazy stabilisation, severe cropping, VFX), but they're specialists. So 8K is really a feature for specialists that is implemented in every camera, and we all have to pay for this feature that we won't really benefit from. But it's worse than that, because all the energy being put into that feature is investment that could have been put into the other things that would have been of more use to a wider audience.
    Take the OG BMPCC, for example. It was 1080p RAW internal, but had terrible battery life. You'd think that in a decade they'd have a camera that would take care of the battery life, because that was one of the camera's leading issues. Not so: the R5C can record 8K RAW, but not on battery. They've under-improved one feature and over-improved another.
    It's like Canon announcing "Last year we announced our 25K flagship camera which required external power to record, and that wasn't the ideal camera for everyone, so we're proud to announce that our new camera is 50K and still requires external power to record!" and people are sitting back and thinking "WTF - you worked on the wrong thing!".
    Similarly, think about the reaction when Panasonic keep releasing camera after camera with more and more resolution, larger sensors, but the AF is still the Achilles heel of the whole thing - "WTF - you worked on the wrong thing!".
    That's what I'm doing now.  I'm sitting here looking at my OG BMPCC, my BMMCC, my GH5, my GX85, and thinking that all those cameras had weaknesses that would be great if they were fixed, but the current crop of cameras has been improved in ways I didn't want (and very few people actually benefit from) and most of the current cameras still have the same issues as before.
    They're working on the wrong things, diverting money from the right things.
    Hahaha. I love the old "I found one example in the entire history of mankind - therefore your argument has no merit at all so go home and let the rest of us forget you ever existed" logic 🙂 
    There's an interesting error of logic that people seem to be making around colour science.
    I keep saying that I wanted better colour science, and people keep saying that now 10-bit and RAW are more affordable, so there I have it, but this misses the point. Colour grading RAW is very difficult, and manufacturers are much better at doing it than we are (otherwise, why are people so enamoured with Canon colour if anyone can do it?). So actually, the lower the cost of the camera, the better I want the colour science to be, because the worse the owner will be at colour grading and the less money they can devote to it.
    I think you're hitting the nail on the head here: The Aesthetic is about getting the right look. It's a "right amount" mindset rather than a "more is always better" mindset.
    If you concentrate on the right amount, then you're interested in getting the right amount of resolution from the sensor, the right amount of resolution and sharpness from the lens, the right amount of distortion, etc.
    The challenge is that, for everyone except specialist VFX applications, the right amount of camera resolution has already been reached, and now they're just piling on more and more, while we still haven't got the right amount of other things, like functionality or reliability.
    The R5 was a classic... it can record way more pixels than you need, for way less time than you needed. It doesn't average out!
    The trip to film and back is a perfect example of an artistic treatment rather than a 'fidelity' treatment.  Essentially it degraded the process in every way possible, when viewed from a technical perspective.  Lower sharpness, lower resolution, altered colours, and cost both time and money.  Worse technically, but better aesthetically.
    If I make two versions of a camera, one with a lower resolution sensor and one with a higher resolution sensor, the higher resolution sensor one will:
    • drain the battery faster, or require a larger, heavier, more expensive battery (the camera has to process more pixels)
    • fill the memory card faster, or require a larger, more expensive card
    • have worse low-light and noise performance
    • have worse colour (think about how colour goes to shit in low light)
    • cost more to manufacture
    • require a faster computer to edit, or require time to render proxies
    To a certain extent these costs are hidden, because technology is getting cheaper, so the cost of getting a memory card that can record an hour of footage doesn't go up from year to year. However, if I already own a large enough SD card for a given resolution, and they don't increase the resolution of the camera, the cost of an SD card for that camera drops to zero, because what I own now is fine and I don't have to buy anything.
    This is a point that most people don't realise.
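    To put a rough number on that hidden-costs point (a back-of-the-envelope sketch; the 100 Mb/s figure is just a hypothetical codec setting, not any particular camera):

    # Pixel counts scale the per-frame cost, all else (bit depth, compression ratio) equal.
    pixels_4k = 3840 * 2160          # ~8.3 megapixels
    pixels_8k = 7680 * 4320          # ~33.2 megapixels
    ratio = pixels_8k / pixels_4k    # exactly 4.0

    bitrate_4k_mbps = 100            # hypothetical 4K codec setting
    print(f"{ratio:.0f}x the pixels -> roughly {ratio * bitrate_4k_mbps:.0f} Mb/s at the same settings")
    # Card capacity, battery drain and edit/render time move in the same direction.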
    I've seen it.  Great film, really really enjoyed it.
    For anyone in this thread using this as an example of higher resolutions being useful, absolutely.  Anyone shooting a VFX film with a budget more than $100,000,000 - please understand that I'm not talking to you! 🙂 
    LUT support in camera would be great.  Guess why they don't include it in lots of cameras?  It takes processing power.   ............processing power that would be spare if the camera wasn't processing so many pixels!!
    This desire for "authenticity" through the smartphone look has been around for years but is interesting that it has permeated this far.  I remember reading about it years ago when I wasn't even into video yet.
    IIRC people don't trust the more polished, longer focal length, shallower DoF image because they became so associated with big corporates (the only ones who could afford that look) making highly polished videos that lied about crimes against humanity and the oppression of the poor and working classes etc (you know - business as usual).
    Yet, the vloggers still seem to want shallow DoF, and go to extraordinary lengths to get it, so who knows if that aesthetic will somehow gradually be redeemed due to the huge volume of honest authentic people vlogging with a shallow DoF out there.  
    Conversely, I wonder if we're in for a spate of heavily misleading content with the smartphone look somehow tarnishing the 'authenticity' that this look currently enjoys.  If you get enough people spouting anti-social crap through it then that would do it, but of course that would require the people watching to realise that the content is anti-social, rather than radical free speech.  Seems we're losing the ability to tell fact from lunatic in the current climate!  Still, it's a genuine thing since the authenticity came from the content of that 'look' being more honest than the previous 'look'.
  25. Thanks
    Michael S got a reaction from kye in The Aesthetic   
    I believe the point the OP is trying to make is: "A part of the people who are shooting video have specific ideas about what a good image is, but I think they are wrong. These people often have their origin in photography more so than cinematography, which would explain their preference for specific visual attributes. Cinematographers, however, have very different criteria for judging an image and should not take their cues from these people."
    I do think that photography and cinema each have their own language. Being a good photographer doesn't make you a good cinematographer, or vice versa. An image which works as a photo might not work as part of a narrative sequence, and a great scene from a movie might very well fall flat as a still. However, I think this distinction has nothing to do with a particular aesthetic. A good photographer may just as well "dirty up" the image as part of his work. The significant distinction is intent. Professional photographers and cinematographers first think about what they want to achieve with their images and then use anything in their toolbox to achieve that, be it softening, sharpening, fish-eye distortion, rectilinear (distortion), vintage aberrations, etc. The not-so-professional doesn't think it through that much and uses what he has, or simply uses what he saw others using because it worked really well or looked cool, without thinking about how appropriate it is for what he is trying to do.
    The starting point should be intent, why do I shoot this image? Everything else should follow from that.
    And then there is the distinction between those who want to lock a look in camera (so it becomes harder to mess with your intent during post-production) and those who prefer to capture it all as neutral and pristine as possible to allow for maximum flexibility in post (so you can change your intent I guess?).
     