
kye

Members
  • Posts

    7,846
  • Joined

  • Last visited

Everything posted by kye

  1. kye

    Stabilisation in post

    That's interesting and makes sense - although it's something that EIS could also compensate for.
  2. kye

    Stabilisation in post

    I thought that gyro would allow the camera to eliminate the warping, as it would know the focal length and the camera's direction, but it seems not to be so. That was literally the only advantage of gyro stabilisation over EIS. I'd conclude that gyro is worth trying in post to see if it does better than EIS, but it doesn't seem to have any inherent advantage. Both are last resorts compared to IBIS / OIS, or physically controlling the camera with a gimbal, tripod, monopod, slider, crane, jib, etc.

    (Disclaimer... the below might sound harsh, but it's directed at the theories you're presenting, not at you! Hopefully my comments are useful and informative and correct some of the staggering misinformation floating around.)

    That's nonsense. The data coming off the sensor is RAW - it's whatever the resolution is x the bit-depth x 3 (RGB channels). I think that Osmos and GoPros have the best stabilisation based on two factors: 1) they have fixed lenses and can tune their algorithms based on that, and 2) the entire success or failure of those products rests on how well they implement this feature.

    That's also partly nonsense. Man, the internet really doesn't understand WTF is going on with stabilisation. I realise that manufacturers measure stabilisation in stops. This is complete marketing crap - it's correct but irrelevant.

    Think of the suspension system on a car. The tyre follows every tiny bump in the road, and the body of the car doesn't want to feel any of those bumps. The goal of the suspension system is to connect the two without transmitting the shake from the road to the car. It's not a perfect parallel, as there are differences between these examples, but it's good enough for our purposes. The car's suspension can be viewed in two scenarios: 1) how well it smooths small bumps, and 2) the maximum bump size it can handle.

    In the first scenario, you're driving down a road and there's a small pothole. You drive over it, you hear a thump from the tyres, but feel almost nothing in your seat. This is a reduction in the vibration, and in cameras this is what's measured in stops. It is the ratio of how much vibration goes in vs how much gets through the mechanism.

    In the second scenario, you drive up a large kerb. If you're in a small city car, the tyre flexes, the shock compresses all the way, the wheel hits the end of the shock's travel, and an enormous thump is sent up into the car, sending you and the contents of your car flying. If you were in a huge off-road 4WD, you would hear a thump, but the tyres and shocks would have enough vertical travel to absorb it. You would still feel it to some extent, but it wouldn't be a disaster.

    The second scenario is what you're seeing in your OIS/IBIS mechanism when the footage still has shake. This is what separates small sensors from larger sensors - it's the distance they can travel, not the "stops" of IS. The sensor simply runs out of travel and can't move far enough.

    The math is very clear. Take 5 stops for example - that's a reduction of vibration by a factor of 32x. So you move the camera by 32 pixels but the image only moves by 1 pixel. You move the camera 50% to one side, and the image moves by 1/64 of the frame - in 4K that's about 60 pixels when you moved the camera almost 2000 pixels... and that's only 5 stops. This is why the stops don't matter. The issue is how far the camera can move the sensor. Larger sensors probably don't have as much room to move as smaller ones. This is one reason for making an MFT camera the size of a FF MILC - to accommodate this mechanism.

    The other big challenge is the wobble of IS (both OIS and IBIS) on wide-angle lenses. This is a problem because spherical lenses are, well, spherical, and sensors are flat. In terms of EIS, it's essentially a complete failure on the manufacturers' behalf to compensate for this, as GoPro and DJI have shown by doing it properly.

    I looked at a package that corrects lens distortion in post (and also does things like RS correction and flicker elimination) but didn't buy it, as it was closer to $1000 than I would have liked - but it's possible. I could even tell you the math, but I haven't worked out how to implement it yet, unfortunately. Resolve's EIS pipeline isn't designed in a way that can do it.
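    The stops arithmetic above can be put into a few lines of Python. A minimal sketch, assuming "stops" means a 2^stops reduction in residual image motion (consistent with the 5 stops = 32x example above); the function name is mine, not from any camera SDK:

    ```python
    # Hypothetical helper illustrating the "n stops = 2^n reduction" model.

    def residual_motion_px(camera_motion_px: float, stops: float) -> float:
        """Residual image motion after stabilisation rated at `stops` stops."""
        reduction = 2 ** stops          # 5 stops -> 32x reduction
        return camera_motion_px / reduction

    # Move the camera half a 4K frame (~1920 px) with 5-stop stabilisation:
    print(residual_motion_px(1920, 5))  # -> 60.0 px, i.e. 1/64 of the frame
    ```

    Which is exactly why the rated stops matter less than the physical travel limit: the reduction only holds while the sensor stays within its range of movement.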
  3. Or in summer in any country where it doesn't snow in winter.
  4. kye

    Stabilisation in post

    My hand-held shooting is mostly of static shots now, so of course they stabilise quite well with IS or EIS, but I do the odd follow-shot when walking with family, which is where the Sony X3000 comes in handy - OIS in an action camera. I just did a quick search for samples of walking shots with it and found a few that seemed nothing special, but I know when I do those shots I am always surprised when I see the footage, as it looks gimbal-like almost every time. I try to do the ninja walk and hold my hand floating in space, but I haven't practiced it and wouldn't say I'm particularly talented at it, but who knows. Perhaps the best EIS is from the 360 cameras, where the EIS has an infinite crop factor into the lens and the lens distortions are all cancelled out completely. Of course, if you're filming a normal shot then you'll crop into the image so much the IQ will be unusable, so that's the downside. I'm not sure what the sensor size and stops of light have to do with IBIS?
  5. I'm no expert, as I only know Resolve, but from my understanding it's normally worth investing in things like shortcuts as they'll pay off in the long term. In terms of learning the Premiere shortcuts vs customising them, I'd suggest reviewing the Premiere ones and seeing if they suit your workflow. The problem in editing that no-one talks about is workflow, and how some people's workflows are very, very different. Some examples:

Do you add all your clips into a timeline, then make selects by removing the material that is no good (bad takes etc), or do you review the material from the bins and add the good stuff? Fundamentally different shortcuts, and even mindset.

For larger non-linear projects, do you sort your selects into timelines (eg, b-roll by location, interviews, events, etc) or do you do this filtering via tagging and metadata and not via timelines at all? My understanding is that Premiere and FCPX have much more power in their media management (like tagging etc) than Resolve, which is why editors think Resolve isn't really "ready" for big projects yet.

Do you build your final edit in a subtractive or an additive way? ie, do you pick all the good bits and pull them into a timeline, then cull, re-arrange and tighten in passes until you're done, or do you only pull in the absolute best bits and add and re-arrange things until you're done? If it's the former then you'll spend a lot of time looking at clips that will eventually get cut (if a clip makes it through many passes and gets cut at the last minute), but once you've cut something you're unlikely to ever look at it again. Alternatively, if you add the good stuff then you'll potentially be reviewing your entire collection of clips every time you want to find the next clip to add. If your media is well curated (metadata, tagging, etc) then this can be an efficient process, but if not you could get lost forever, as there's no guarantee that you're making progress.

When you're making an edit, do you want to move the timing of the edit point to suit the content of the clips, or do you want to change the content of the clips around a fixed edit point? If you were editing dialogue you'd do the first, but if you were editing to music you'd do the second - fundamentally different shortcuts and thinking.

When you're editing, are you concentrating on the edit point, or the clip? For example, if you're looking at the edit point, then Clip End will refer to one clip and Clip Start to the next clip - so the editor is editing two clips at once. In a clip-focussed approach, Clip Start and Clip End refer to the same clip. I hit this issue in Resolve, as I edit in a clip-centric way and edit to music, but some of the Speed Editor controls (very handy ones, I might add) work in an edit-point-centric way, and there's no way to change this. etc...

The other thing to realise is that there are many small tasks that must be done in an NLE, and different editors may have different ways of doing them. It's easy to compare shortcuts, but it might be that accomplishing the same outcome in one NLE can be done in (potentially) a fraction of the time by using one mindset over another. This is the hidden aspect of NLEs - they are designed to edit in a particular way, and may be less efficient when used in another way (or may not really support that other approach at all). Obviously this is very personal to what you edit, how you edit, how large the projects are, etc, so the answer can ultimately only be determined by you - but don't only learn how Premiere can do what Resolve can do, try to learn what Premiere can do that Resolve can't.
  6. Yeah, I think you're right. I have a small 1080p action camera that would be pretty good so that's my current best plan. I could either mount it to the scooter or maybe get a mount for my helmet, which would be stable and more likely to be remotely level too. One thing I really like about the hypersmooth stabilisation on the latest action cameras is that when you mount them to a vehicle or something they aren't locked to the vehicle, so you see it moving around in response to the terrain (smoothly of course) but it's a nice effect I think. Can you turn the stabilisation on your cameras down to a lower setting perhaps? Not sure if that's something that's available?
  7. Just kidding... 🙂 My understanding is that Premiere is probably still a better editor than Resolve (although Resolve is closing the gap) and that the main issue with Premiere is that it crashes all the time. Save early, save often. In terms of colour though, we all love to think that the 5,984 controls that Resolve has are required for good colour, but it's not true - if you get the basics dialled in then you can get great-looking images with the tools that basically any NLE has.
  8. kye

    Stabilisation in post

    Yeah, I suspect that it's often under the threshold of what is perceptible. I also have a theory that this threshold is getting higher over time as people slowly get used to cameras that expose with SS. Your comment about compression from online platforms is an interesting one, as YT in 4K has more resolving power than basically any affordable camera had a decade ago - so that's actually gone through the roof, but people's perception has dulled more than enough to compensate. I've actually gone the other way in my work - I used to shoot quite dynamic shots and stabilise in post a lot, whereas now my shots are much more static and I basically don't stabilise in post at all. This forum used to be full of people talking about motion cadence, which, despite never really getting a good definition, was a pretty subtle effect at the best of times - and yet now people seem to be comfortable with the blur not matching the camera's movement, which I would imagine is an effect at least one or two orders of magnitude more significant than motion cadence. I also find it amazing that people have adjusted to 4K being cinematic, when even now many cinemas are 2K, and every movie (apart from those on 70mm) basically had 2K resolution by the time you saw it in a theatre. How perception changes over time!
  9. kye

    Fuji X-H2S

    I'd be a bit careful about interpreting this type of information - I remember statements from the launch of the Alexa 35 that indicated otherwise. Most likely everyone was telling the truth (there's unlikely to be a scandal here!) but everyone was using carefully chosen words.
  10. kye

    Stabilisation in post

    If you have 180 shutter and shake the camera then your images will have shake and motion blur. This will look normal because the blur will match the shake - if you shake / move left the blur will be horizontal and the size of the blur will match the shake / motion in the shot. If you stabilise in post, you remove the shake but not the blur. If you stabilised in post completely so that the shot had no shake then it would look like a tripod shot because the camera movement would be gone, but all the blur would remain, so a stationary shot would blur in random directions at random times for no conceivable reason. This is a test I did some time ago comparing OIS / IBIS vs EIS (stabilisation in post is a form of EIS). The shot at 25s on the right "Digital Stabilisation Only" shows this motion blur without the associated camera shake. The IBIS + Digital Stabilisation combo was much better and is essentially the same as OIS + Digital Stabilisation. The issue here is that people using IBIS or OIS often have all the stabilisation they need from that, so the gyro stabilisation is aimed at people who have neither. This "blur doesn't match shake" also happens in all action and 360 cameras when they shoot in low-light and their auto-SS adjusts to have shutter speeds that include blur (which is why I bought an action camera with OIS rather than EIS).
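    The shake-without-blur effect described above can be put into rough numbers. A minimal sketch, assuming blur length is simply the image motion during the exposure window (function names and figures are illustrative, not from any measurement):

    ```python
    # Illustrative model: intra-frame blur is the motion that happens
    # while the shutter is open; post stabilisation can't remove it.

    def blur_length_px(pan_speed_px_per_s: float, fps: float,
                       shutter_deg: float = 180.0) -> float:
        """Blur smear in pixels for a given pan speed, frame rate and shutter angle."""
        exposure_s = (shutter_deg / 360.0) / fps   # 180 deg at 25 fps -> 1/50 s
        return pan_speed_px_per_s * exposure_s

    # Shaking/panning at 1000 px/s, 25 fps, 180-degree shutter:
    smear = blur_length_px(1000, 25)   # 20 px of blur inside each frame
    shift = 1000 / 25                  # 40 px of frame-to-frame shake
    # Post stabilisation can cancel the 40 px shift between frames, but the
    # 20 px smear is baked into every frame - so the "locked-off" shot
    # still blurs in the direction the camera was moving.
    print(smear, shift)
    ```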
  11. With Sony and BM now offering gyro stabilisation, what are your thoughts about stabilisation in post as it relates to the 180 degree shutter rule?
  12. Interesting tiny cameras. What are you using them for?
  13. No personal experience, but some years ago when I was looking for portable backup options for the XC10, I found that some of the portable HDD backup units would backup from a USB drive, and I read that those allowed you to plug in a card reader and backup from that. As I never confirmed it personally, don't take my word for it, but it might be a way to side-step the issues of which portable units have CF card readers and which don't.
  14. kye

    Shooting Open Gate

    When it comes to TikTok, the overwhelming factor in how I create and consume content is called "opportunity cost". Yes, it can be very useful if you're shooting for aspect ratios less wide than 16:9. Noam Kroll is a fan of alternative aspect ratios: https://noamkroll.com/aspect-ratios-in-filmmaking-are-officially-no-longer-standardized-the-creative-possibilities-are-endless/ https://noamkroll.com/the-magic-of-the-1-661-aspect-ratio-how-i-plan-to-use-it-on-my-feature-film/ https://noamkroll.com/playing-against-filmmaking-trends-on-our-feature-with-arri-alexa-classic-2k-prores-hq-43-aspect-ratio/ That's just a few - he talks about it much more on his blog if anyone is curious.
  15. ML with Canon truly produces some special images. If you can get past the lower DR, poorer low-light, and lack of stabilisation, then the images can be truly excellent. I found the grain from it on my 700D to be spectacular as well - very natural feeling. The other camera that seems to get forgotten is the 700D. When I last checked, it was at the cutting edge of ML development, as one of the most active developers had one, so it got every new feature basically immediately. The limitation of the 700D compared to the 5D was that the 700D only had ~1700 pixels across the sensor (IIRC that was every third pixel), and if you wanted more horizontal resolution than that then you'd have to crop pretty severely into the sensor - but 1.7K RAW upscaled to 1080p didn't look too bad, especially if you were going for a slightly more organic look. The image from the 5D is nicer of course, but I'm yet to see images from the EOS-M that the 700D couldn't produce.
  16. @leslie @BTM_Pix I don't care about getting stabilised shots - I care about not shaking the camera literally to death, to the point that the OIS / IBIS mechanism stops working and the camera becomes unusable. The severity of the vibrations is unlike anything I can think of, except perhaps amusement park rides, and even then this would be a pretty harsh one.
  17. kye

    Shooting Open Gate

    Ha, an influencer making videos about how to be an influencer. He said "What do you struggle the most with as a content creator?" and I was thinking - obviously it's editing and sound design. Then he talks about cropping to post on TikTok. Umm, no. I mean, I get it. If you're an influencer then you want to shoot in 8K RAW so you can record everything, and if something wonderful happens then you can crop and post on all the socials and make an edit where you crop from a wide to a mid to a long to a nasal close-up, etc. But I'm just thinking - at what point would concentrating on writing and ideas be more beneficial?
  18. My recent hobby is riding my electric scooter around the place for Little Camera Tests, and I've contemplated mounting a camera to the scooter to get some zoom-zoom shots, but also just to get a smartphone mount to be able to have navigation or whatever open. The challenge is that the scooter has zero suspension, and footpaths / sidewalks are not even remotely smooth. The bumps ripple up the handlebars with almost infinite transmission and actually make my wrists sore from the severity of the bumps. I'm forced to drop down curbs etc occasionally too, so that's a massive jolt as well. 1) is mounting a smartphone (iPhone) a reasonable idea? 2) what about an IBIS mechanism like the GX85?
  19. kye

    Olympus OM-1

    Interesting. Olympus has the reputation for superior stabilisation, and of course has PDAF, but has been let down by a lack of bitrates and other codec options (such as the open-gate functions of the GH line) - maybe they'll rectify that on this new body? If they took some inspiration from the GH6 and implemented ProRes then it could be a serious camera for those in the MFT ecosystem.
  20. kye

    Olympus OM-1

    Is the E-M1X their flagship camera? For some reason no-one seems to understand their range and how the model numbers work.
  21. It does depend on how good the compression algorithms are in the camera, but remember that action cameras and smartphones tend to be hand-held (increasing movement in the frame) and wider angle with deep depth of field (so all of the frame has loads of detail to compress) as well as high-res/low-bitrate.
  22. Use however much you need, but be aware that how much you need can vary radically depending on what you're filming. 50Mbps is tonnes if you're filming a talking-head with a blurry background, but point your camera at a tree while there's lots of wind, or during rain or snow, or at the ocean, or from a moving vehicle, and the 50Mbps you were loving before might make you cry. Also, if you're filming in higher frame rates and then conforming to normal speed to make things appear in slow motion, then your bitrate will get stretched accordingly - 50Mbps is effectively 25Mbps when viewed at 50% speed on a timeline, etc. You can't add bitrate in post!
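The bitrate-stretching point above can be sketched as a one-line calculation (the function name and figures are illustrative):

```python
# Conforming high-frame-rate footage spreads the captured bits over a
# longer playback duration, lowering the effective bitrate per second seen.

def conformed_bitrate_mbps(capture_bitrate_mbps: float,
                           capture_fps: float,
                           playback_fps: float) -> float:
    """Effective Mbps per second of playback after conforming."""
    return capture_bitrate_mbps * playback_fps / capture_fps

# 50 Mbps shot at 50 fps, conformed to a 25 fps timeline (50% speed):
print(conformed_bitrate_mbps(50, 50, 25))  # -> 25.0 Mbps effective
```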
  23. Thanks! That's really interesting - I wouldn't have run into that yet as my current Custom modes are all based on P mode and I normally use it with manual lenses and thus have control over the aperture and focus with the camera being auto-SS and auto-ISO.
  24. kye

    Lenses

    Thanks - I wasn't sure and hadn't gotten around to googling anything yet.