
kye

Members
  • Posts

    7,490
  • Joined

  • Last visited

Everything posted by kye

  1. @BTM_Pix can we be sure they're completely smooth? ie, if I put a camera on it, record a pan, analyse it and find jitter, how do I know if the jitter came from the camera or from the slider? I have the BMMCC so I could test things with that, but if I do the test above and get jitter then we won't know which is causing it. It would only be if we tested it with the Micro and got zero jitter that we'd know both were jitter-free. I'm thinking a more reliable method might be an analog movement that relies on physics. Freefall is an option, as there will be zero jitter, although I'm reminded of the phrase "it's not falling that kills you, it's landing" and that's not an attractive sentiment in this instance!! Maybe a non-motorised slider set at an angle so it will "fall" and pan as it does? Relying on friction is probably not a good idea as it could be patchy. The alternative is to simply stabilise the motion with large weights, but then that requires significantly stronger wheels and creates more friction etc.
  2. We may find that there are variations, or maybe not. Typically, electronics has a timing function where a quartz crystal oscillator is in the circuit to provide a reference, but they resonate REALLY fast - often 16 million times per second, and that will get used in a frequency divider circuit so that the output clock only gets triggered every X clock cycles from the crystal. In that sense, the clock speed should be very stable, however there are also temperature effects and other things that act over a much slower timeframe and might be in the realm of the frame rates we're talking about. Jitter is a big deal in audio reproduction, and lots of work has been done in that area to measure and reduce its effects. However, audio has sampling rates at 1/44100th of a second intervals so any variations in timing have many samples to be observed over, whereas 1/24th intervals have very few data points to be able to notice patterns in. I've spent way more time playing with high-end audio than I have playing with cameras, and in audio there are lots of arguments about what is audible vs what is measurable etc (if you think people arguing over cameras is savage, you would be in for a shock!). However, one theory I have developed that bridges the two camps is that human perception is much more acute than is generally believed, especially in regards to being able to perceive patterns within a signal with a lot of noise. In audio if a distortion is a lot quieter than the background noise then it is believed to be inaudible, however humans are capable of hearing things well below the levels of the noise, and I have found this to be true in practice. If we apply this principle to video then it may mean that humans are capable of detecting jitter even if other factors (such as semi-random hand-held motion) are large enough that it seems like they would obscure the jitter. 
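The divider arithmetic above can be sketched in a few lines. This is a toy illustration only, assuming a hypothetical 16 MHz crystal and a 25 fps target; real cameras choose crystal frequencies and divider chains to suit their sensors, so these exact numbers are mine:

```python
# Sketch: how a crystal oscillator plus an integer frequency divider
# sets frame timing. Hypothetical 16 MHz crystal, 25 fps target.
CRYSTAL_HZ = 16_000_000
TARGET_FPS = 25

# Integer divider: the frame clock fires once every `divider` crystal cycles.
divider = round(CRYSTAL_HZ / TARGET_FPS)           # 640000 cycles per frame
actual_fps = CRYSTAL_HZ / divider
frame_interval_ms = 1000 * divider / CRYSTAL_HZ    # 40.0 ms at 25p

# Because the divider is a fixed integer, any systematic timing error is a
# constant offset rather than frame-to-frame jitter; here 16 MHz divides
# evenly, so the error is zero.
error_ms = abs(frame_interval_ms - 1000 / TARGET_FPS)
```

This is why crystal-derived frame clocks should be very stable in the short term, leaving slower effects (temperature drift etc.) as the plausible jitter sources.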
In this sense, camera jitter may still be detectable even if there is a lot of other jitter from things like camera movement also in the footage.

LOL, maybe it is. I don't know, as I haven't had the ability to receive TV in my home for probably a decade now, maybe more. Everything I watch comes in through the internet. I shoot 25p as it's a frame rate that is also common across more of what I shoot with - smartphones, action cameras, etc - so I can more easily match frame rates. If I only shot with one camera then I'd change it without hesitation 🙂 For our tests I'm open to changing it. Maybe when I run a bunch of tests I'll record some really short clips, label them nicely, then send them your way for processing 🙂

Yeah, it would be interesting to see what effects it has, if any. The trick will be getting a setup that gives us repeatable camera movement. Any ideas?

We're getting into philosophical territory here, but I don't think we should consider film the holy grail. Film is awesome, but I think its strengths can be broken down into two categories: things that film does that are great because that's how human perception works, and things that film does that we like as part of the nostalgia we have for film. For example, film has almost infinite bit-depth, which is great, and modern digital cameras are nicer when they have more bit-depth; but film also had gate weave, which we only apply to footage in post when we want to be nostalgic, and no-one is building it into cameras to bake it into the footage natively. From this point of view, I think in all technical discussions about video we should work out what is going on technically with the equipment, work out what aesthetic impacts that has, and then work out how to use the technology in such a way that it creates the aesthetics that will support our artistic vision.
Ultimately, the tech lives to support the art, and we should bend the tech to that goal - and learning how to bend the tech to that goal is what we're talking about here. [Edit: and in terms of motion cadence, human perception is continuous and doesn't chop up our perception into frames, so motion cadence is the complete opposite of how we perceive the world; in this sense it might be something we would want to eliminate, as the aesthetic impact just pulls us out of the footage and reminds us we're watching a poor reproduction of something.]

Maybe and maybe not. We can always do tests to see if that's true or not, but the point of this thread is to test things and learn rather than just assuming things to be true.

One thing that's interesting is that we can synthesise video clips to test these things. For example, let's imagine I make a video clip of a white circle on a black background moving around using keyframes. The motion of that will be completely smooth and jitter-free. I can also introduce small random movements into that motion to create a certain amount of jitter. We can then run blind tests to see if people can pick which one has the jitter. Or have a few levels of jitter and see how much jitter is perceivable. Taking those, we can then apply varying amounts of motion blur and see if the threshold of perception changes. We can apply noise and see if it changes. Etc. etc. We can even do things in Resolve like film a clip hand-held for some camera shake, track that, then apply that tracking data to a stationary clip at whatever strength we want. If enough people are willing to watch the footage and answer an anonymous survey then we could get data on all these things. The tests aren't that hard to design.
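The synthetic-clip idea above can be prototyped before ever opening Resolve. A minimal sketch, assuming all we need is a list of per-frame x positions to keyframe onto the circle; the function name and parameters are made up for illustration:

```python
import random

def circle_positions(n_frames, speed=10.0, jitter_px=0.0, seed=0):
    """Per-frame x positions for a synthetic test clip: smooth linear
    motion plus optional uniform random jitter of +/- jitter_px pixels.
    A fixed seed keeps the 'jittery' clip reproducible for A/B tests."""
    rng = random.Random(seed)
    return [i * speed + rng.uniform(-jitter_px, jitter_px)
            for i in range(n_frames)]

smooth = circle_positions(48)                  # jitter-free reference clip
jittery = circle_positions(48, jitter_px=2.0)  # candidate for blind testing
```

Rendering pairs like these at several jitter levels, with and without motion blur and noise, would give exactly the perception-threshold data described above.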
  3. Wow. These lenses look like they measure very nicely too. https://www.lensrentals.com/blog/2019/05/just-the-cinema-lens-mtf-charts-xeen-and-schneider/ and the quality control looks to be excellent: https://www.lensrentals.com/blog/2017/08/cine-lens-comparison-35mm-full-frame-primes/ The drop-off in resolution towards the edges of the frame reminds me of the Zeiss CP.2.
  4. @BTM_Pix that motion mount is AWESOME! It also, unfortunately, probably means that there are patents galore in there to stop other camera companies from implementing such a thing. An eND that does global shutter with gradual onset of exposure is such a clever use of the tech. What is interesting is how much it obscures the edges when in soft shutter mode. [Images: square shutter mode vs soft shutter mode.] This would really make perception of jitter very difficult, as there are no harsh edges for the eye to lock onto, effectively masking jitter from the camera. And integrating ND into it as well is great. One day maybe we'll have an eND in every camera that can do global shutter, soft shutter, and combined with ISO will give us fully automatic control of exposure from starlight to direct sunlight, where we simply specify shutter angle and it takes care of the rest. The future's so bright, I have to wear eNDs.
  5. Welcome to the forums Jim! How is your editing going? I feel your pain. I used to also be like this, but what turned it around for me was two things.

The first was Resolve's new Cut page. I'm not sure if you've edited in Resolve, but the process to review footage was a bit painful previously. You had to double-click on a clip in the Media Pool to load it in the viewer, then JKL and IO to make a select, and I set P to insert the clip into the timeline. Then you had to navigate with the mouse to load the next clip. I could never find how to set keyboard shortcuts to get to the next clip; I suspect it might have required a numeric keypad, which my MBP doesn't have. Then Resolve created the Cut page. There's a view in the Cut page that puts all the clips in a folder end-to-end, like a tape viewer. Then you can just JKL and IO and P all the way through the whole footage - no using the mouse, or even having to take your fingers off those keys, and you can do it completely without looking. It sounds ridiculous, but those extra key presses were adding enough friction to really make an impact. Looking at my current project: if it took 5 seconds in total to take my hand from the JKL location to the mouse, navigate the cursor to the next clip, double-click, then put my hand back at JKL, and I had to do that 3024 times, then that's 4.2 hours just navigating to the next clip! Thinking about it like that, it doesn't seem such a small thing! My suggestion would be to try to optimise your setup to have as little friction as possible, as even little things will be adding up unconsciously.

The second thing that I had forgotten when I stalled in editing was how lovely it was to look at the footage. Not only did I get to re-live my holidays, but only the best bits of them at that (we don't film the awful bits - being cold / hot / tired / grumpy while things are smelly etc doesn't come through).
Also, I've found that through the sheer quantity of footage I take, the lovely shots are inevitable, and finding them is very rewarding. I do find frustrating things sometimes - like when I was in the boat in the wetlands and missed the shot of the eagle swooping down and pulling the fish out of the water because I was filming something else in some other direction, or when I get out of sync and record the bits in-between the shots and don't record the bits when I'm aiming the camera at something cool. That's frustrating!

The other thing to keep in mind is that for our lives, and family or friends, the footage actually gets more valuable as it ages, not less valuable as it does for commercial or theatrical footage. In that sense, keep shooting, because sometime later on you might pick it up and go through it. Or someone else might. I don't know about you, but if my grandparents or great-grandparents had vlogged, or recorded videos of holidays, or whatever, I'd be very interested in looking at that footage. In a sense, our own private footage is about history, not the latest trends.

Also, the longer it has been since you shot the footage, the more objective you will be in editing it. Street photographers often deliberately delayed developing their film, because the longer they delayed, the better they were at judging how good each shot was, rather than remembering the sentiment and context around it. Hope that helps!
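The editing-friction arithmetic from the Cut page anecdote above checks out:

```python
# 5 seconds of mouse navigation per clip, repeated for every clip in
# the project, adds up surprisingly fast.
clips = 3024
seconds_per_clip = 5
total_hours = clips * seconds_per_clip / 3600  # 4.2 hours of pure navigation
```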
  6. Great discussion! Yeah, if we are going to compare amounts of jitter then we'd need something repeatable. These tests were really to try to see if I could measure any in the GH5, which I did. The setup I was thinking about was the camera fixed on a tripod pointing at a setup where something could swing freely; if I dropped the swinging object from a consistent height then it would be repeatable.

If we want to measure jitter then we need to make it as obvious as possible, which is why I did short exposures. When we want to test how visible the jitter is, we will want to add things like a 180-degree shutter. One is about objective measurement, the other about perception.

Yes, that's what I was referring to. I agree that you'd have to do something very special in order to avoid a square wave, and that basically every camera we own isn't doing that. The Ed David video, shot with high-end cameras, supports this too, with motion blurs starting and stopping abruptly. One thing that was discussed in the thread was filming at high frame rates with a 360-degree shutter and then putting multiple frames together. That enables adjusting shutter angle and frame rate in post, but also means that you could fade the first/last frames to create a more gradual profile. That could be possible with the Motion Blur functions in post as well, although who knows if it's implemented.

Sure. I guess that'll make everything cinematic, right? (joking..)

Considering that it doesn't matter how fast we watch these things, it might be easier to find an option where we can just specify what frame rate to play the file back at - do you know of software that will let us specify this? That would also help people diagnose what plays best on their system.

That "syntheyes" program would save me a lot of effort! But it validates my results, so that's cool. I can look at turning IBIS off if we start tripod work. To a certain extent, the next question I care about is how visible this stuff is.
If the results are that no-one can tell below a certain amount, and IBIS sits below that amount, then it doesn't matter. In these tests we have to control our variables, but we also need to keep our eyes on the prize 🙂

One thing I noticed from Ed David's video (below) was that the hand-held motion is basically shaky. Try looking at the shots pointing at LA Renaissance Auto School frame-by-frame. In my pans it was easy to see that there was an inconsistent speed - ie, the pan would slow down for a few frames then speed up for a few frames. You can only tell that this is inconsistent because, for the few frames that are slower, you have frames on both sides to compare them to. You couldn't tell those were slow if the camera was stopped on either side; that would simply appear to be moving vs not moving. This is important because the above video has hand-held motion where the camera will go up for a few frames then stop, then go sideways for a frame, then stop, then.... I think that it's not possible to determine timing errors in such motion, because each movement from the hand-held doesn't last long enough to get an average to compare with.

I think this might be a fundamental limit of detecting jitter - if the motion has any variation in it from camera movement then any camera jitter will be obscured by the camera shake. Camera shake IS jitter. In that sense, hand-held motion means that your camera's jitter performance is irrelevant. I didn't realise we'd end up there, but it makes sense. Camera moves on a tripod or other system with smooth movement (such as a pan on a well-damped fluid head) will still be sensitive to jitter, and a fixed camera position will be most sensitive of all. That's something to contemplate!

Furthermore, all the discussion about noise and other factors obscuring jitter would also apply to camera shake. Adding noise may reduce perceived camera shake! Another thing to contemplate! Wow, cool discussion.
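The frame-stacking idea discussed earlier in this post (shoot at a high frame rate with a 360-degree shutter, then sum groups of frames in post, fading the first and last frame of each group to soften the shutter profile) can be sketched as follows. This treats frames as plain lists of pixel values; the function name and the `fade` parameter are mine, not from any real tool:

```python
def stack_frames(frames, group=4, fade=0.5):
    """Combine high-frame-rate, 360-degree-shutter frames into slower
    output frames. `fade` < 1 down-weights the first and last frame of
    each group, approximating a gradual shutter onset/offset rather
    than a square wave. `frames` is a list of equal-length pixel lists."""
    out = []
    weights = [fade] + [1.0] * (group - 2) + [fade]
    total = sum(weights)
    for start in range(0, len(frames) - group + 1, group):
        width = len(frames[0])
        combined = [
            sum(w * frames[start + k][px] for k, w in enumerate(weights)) / total
            for px in range(width)
        ]
        out.append(combined)
    return out
```

With `fade=1.0` this is an ordinary 360-degree sum; lowering `fade` trades effective shutter angle for softer blur edges, which is exactly the "gradual profile" idea.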
  7. Cool. What are the benefits of ToF vs PDAF? @BTM_Pix would know 🙂
  8. OK, here's all four combinations. The blue line is the movement per frame. The orange line is a trend line (6th-order polynomial) to compare the data to, to see if the data goes above then below then above, which would indicate jitter. Looking at it, there is some evidence that all of them have some jitter, possibly ringing from the IBIS. I got more enthusiastic with my pans in the latter tests so there are fewer data points; they're not directly comparable, but should give some idea. They appear similar to me, in terms of the up/down being about the same percentage-wise as the motion.
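The trend-line approach described above (fit a polynomial to the movement-per-frame data and look for the data oscillating above and below it) can be sketched with NumPy. The function name is mine, and the 6th-order default just mirrors the plots; the order is otherwise arbitrary:

```python
import numpy as np

def jitter_from_trend(per_frame_movement, order=6):
    """Fit a polynomial trend (the 'orange line') to per-frame movement
    and return the residuals. Residuals oscillating above and below
    zero are the signature of jitter or IBIS ringing."""
    y = np.asarray(per_frame_movement, dtype=float)
    x = np.arange(len(y))
    trend = np.polyval(np.polyfit(x, y, order), x)
    return y - trend
```

A smooth pan leaves residuals near zero, while alternating fast/slow frames leave residuals the low-order fit can't absorb.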
  9. I just made this clip in Resolve using a few keyframes. Not only will it have zero jitter, but it will also have no RS. Use it as a test for playback jitter. https://www.sugarsync.com/pf/D8480669_08693060_8819366
  10. APOLOGIES ALL. The test above was GH5 4K 150Mbps h264, not the 5K mode. I just shot a few other tests in other modes and came back in Resolve to look at the files and saw 3840x2160 next to the file I used for the above. Good news is that I now have 1080p ALL-I, 1080p Long-GOP, and 4K ALL-I clips to analyse, so we'll see if there are differences.
  11. I'm good at the technical, not so much with the creative. If you can, please download the clip and see if you can see the jitter. What is visible is something I need other people to help with. If there's enough appetite (especially amongst new camera fever season) then I might even generate some A/B tests and see what levels of jitter are visible.
  12. Here's the original 5K h265 file that I analysed above: https://we.tl/t-YgQIWv1Sps The frames I analysed above were the first pan from left-to-right. Let me know what you see if you watch it. Link expires in 7 days, so don't delay! Limited time offer!! *ahem*
  13. I've put the GH5 to the test. Setup was GH5 in 5K h265 mode (to stress the camera and get the most resolution), Voigtlander 42.5mm f0.95 lens focused at f0.95 then stopped down a couple of stops to sharpen up. This produced shutter speeds in the 1/10,000s range and shorter. First test was to pan and track a stationary object, in this case the corner of a bolt in the fence, which was a sharp edge with high contrast. I put a tiny box around it, I think about 3-4 pixels wide/tall, and tracked it on a 4K timeline at 300% zoom. Here are the results:

Observations:
- Test produced good data, with images being sharp even in mid-pan, and the margin of error was small (only a couple of pixels) compared to the large offsets (60-120 pixels)
- Test was hand-held, and between myself and the IBIS we did a spectacular job (I think it was all me, but... 😉 )
- As the offset was both horizontal and vertical, I reached deep into my high-school geometry to calculate the diagonal offset using both dimensions
- There is evidence of 'ringing' in the movement, shown in the fluctuations between ~110 and 120
- This ringing may well come from the IBIS mechanism, as ringing is a side-effect of high-frequency feedback loops (of which IBIS is a classic example)

Discussion: This 10px peak-to-peak of jitter is there in the footage, so the question is whether it would be visible under more normal circumstances. Let's start with motion blur. If I had shot this with a 180-degree shutter then the blur would be approx 60px long, making the jitter a 16% variation of the blur distance, which is small but isn't nothing. Also, if the shutter operated like a square wave, with each pixel going from not being exposed to being fully exposed instantaneously, then the edges of the blur would be sharp, although much lower in contrast. What about timing?
10px out of 115 pixels is 8.7%. If this jitter came from the timing of the frames rather than the direction of the camera, it would represent a change of about 3.6ms compared to 24p's 41.66ms cadence, so a given frame might be ahead by 1.8ms and two frames later be behind by 1.8ms. Would this be visible? I don't know.
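The arithmetic above, including the high-school geometry for the diagonal offset, is easy to reproduce:

```python
import math

# Diagonal offset from horizontal and vertical tracking offsets
# (Pythagoras); e.g. a 3px/4px offset is a 5px diagonal move.
def diagonal_offset(dx, dy):
    return math.hypot(dx, dy)

# 10px peak-to-peak jitter on a ~115px per-frame movement:
jitter_px, movement_px = 10, 115
jitter_fraction = jitter_px / movement_px        # ~8.7%

# Read as a timing error instead of a position error, against the
# 41.66ms cadence of 24p:
frame_ms = 1000 / 24
timing_error_ms = jitter_fraction * frame_ms     # ~3.6ms peak-to-peak
swing_ms = timing_error_ms / 2                   # ~1.8ms either side
```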
  14. OK, analysis of this first video: (thanks to Valvula Films for sharing this footage). In a sense, this video isn't well suited to an objective jitter test as the focus is pulled during the pan so everything is blurred for most of the pan. Regardless, the testing methodology was to create an overlay box and offset the underlying video, frame by frame, to the overlay box, and record the offset. Like this: I compared the pan that went to the chart vs the pan away from the chart and chose the one with the greater number of frames. For each camera I chose a range of frames between when the test chart became too blurred, and when the movement became too small for single pixel measurements. Where there wasn't a perfect whole-pixel offset I chose the closest one. Where the offset to the left seemed identical to the one on the right I chose the one on the right. Here are the results: The first column is the offset of each frame, and the second is the movement between this frame and the last. The pan was accelerating / decelerating so the speed went up/down. 
My impressions of this are:
- The numbers don't show any jitter
- My impression of which frames were bang-on vs somewhere in-between didn't seem to indicate that there are any big nasties not shown in this data
- These are all high-end cameras, so it is feasible that we didn't find any jitter because there isn't any to find
- I could have gone much more in-depth and tried to offset by fractions of a pixel (Resolve will do this), but on a 1080p image any jitter less than a single pixel is probably invisible

What I learned:
- High-end cameras probably don't have much jitter (not really surprising, but let's start from a known position)
- In a test like this, blurring things isn't a good idea, either from a focus pull or from motion blur, and the more frames something moves, the more precise a test can be
- A better test would be to shoot where the exposure time is very short and there are fine details to track - both from a lens focus perspective and simply from having details only a few pixels wide
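The two-column bookkeeping described above (tracked offset per frame, then the movement between consecutive frames) is just a first difference. A sketch, with made-up offsets standing in for an accelerating/decelerating pan:

```python
def per_frame_movement(offsets):
    """Movement between each frame and the previous one (the second
    column), given the cumulative tracked offsets (the first column)."""
    return [b - a for a, b in zip(offsets, offsets[1:])]

# Hypothetical pan that accelerates then decelerates: the deltas rise
# and fall smoothly, with no above/below oscillation, i.e. no jitter.
offsets = [0, 2, 6, 12, 18, 22, 24]
movement = per_frame_movement(offsets)  # [2, 4, 6, 6, 4, 2]
```

Jitter would show up as the deltas alternating above and below that smooth rise and fall.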
  15. Love that video by Ed David - the commentary was hilarious! Nice to see humility and honesty at the forefront.

So far, my thoughts are that there might be these components to motion cadence:
- variation in timing between exposures (ie, every frame in 25p should be 40ms apart, but maybe there are variations above and below) - the technical phrase for this is "jitter"
- rolling shutter, where if an object moves up and down within the frame it would appear to be going slower / faster than other objects

Things that can impact that appear to be a laundry list of factors, but they seem to fall into one of three categories:
- technical aspects that bake un-even movement into the files during capture
- technical aspects that can add un-even movement during playback
- perceptual factors that may make the baked-in issues and playback issues more or less visible upon viewing the footage

My issue appears to be that the playback issues are obscuring the issues added at capture. As much as I'd love to do an exhaustive analysis of this stuff, realistically this is a selfish exercise for me, as I want to 1) learn what this is and how to see it, and 2) test my cameras and learn how to use them better. If I work out that I can't see it, or it's too difficult to make my tech behave, then I likely won't care about the other stuff because I can't see it 🙂 First steps are to analyse what jitter might be included in various footage, and to have a play with my hardware.
  16. Both. Tried VLC and Quicktime on the file. I even tried to load it into Resolve to convert it to a different format but Resolve doesn't stoop to such lowly formats as mp4 🙂
  17. Thanks @BTM_Pix - I'll keep an eye out for unicorns on ebay. I think I've hit my first issue. I've watched the video they linked and I see judder in all of them. First attempt was my MBP driving my 4K display with the internal GPU; second was the MBP running only the laptop screen with the external display unplugged; third was with my eGPU with RX 470 plugged in. All attempts displayed significant judder, both when playing the video fullscreen and at a 100% window (720p). I guess if your computer sucks at displaying it, then there's not much point experimenting with capture. Any tips for improving things at my end?
  18. I have a few cameras that reportedly vary with how well they handle motion cadence. I say reportedly because it's not something I have learned to see, so I don't know what I'm looking for or what to pay attention to. I'm planning to do some side-by-side tests to compare motion cadence - can you please tell me: 1) what to film that will highlight the good (and bad) motion cadence of the various cameras, and 2) what to look for in the results that will allow me to 'spot' the differences between good and bad. Thanks. I'm happy to share the results when I do the test. I'll also be testing out if there's some way to improve the motion cadence of the bad cameras, and what things like the Motion Blur effect in Resolve does to motion cadence.
  19. kye

    Pro-Mist Filters

    We're agreeing. The article you linked says that "Clear Supermist is made of fine clear colorless particulate that is bonded between two pieces of Schott B270 Optical Glass which exceeds 4K resolution capability." and "Black Supermist is made of fine black color particulate that is bonded between two pieces of Schott B270 Optical Glass which exceeds 4K resolution capability." Particulate refers to something being a bunch of particles - ie, it's tiny lumps of stuff, not a continuous smooth layer of something. The optical effect appears to be that when you have a bunch of stuff (lumps, bumps, patterns, whatever) very near or inside the lens, that texture somehow becomes an in-focus mask on out-of-focus areas. This is why the texture of the meshes I linked in the article I shared also showed up, and those were attached between the camera and the lens. You can also use this for special effects. Once again, this is the shape of something close to or inside the lens creating an in-focus mask on the out-of-focus areas. Tiffen are just using a mask made of tiny particles that have optical diffraction effects rather than just casting a silhouette, although it's the same optical principle that makes the texture appear in-focus.
  20. kye

    Pro-Mist Filters

    My understanding is that the speckled patterns in bokeh are to do with any textures in / on the lens. You get them if you use a mesh for diffusion: https://www.provideocoalition.com/the-secret-life-of-behind-the-lens-nets/ I also thought that it can be if you've got dirt or something inside the lens or on the front filters?
  21. I bought the Studio edition of Resolve and logged a support question some time back, and they directed it to a support partner of some kind here in Australia (I think they were a dealer or something - they had a different business name), who then emailed me back and forth to diagnose and help me out. I was trying to work out how to optimise my MBP + eGPU setup, and at one point they checked with a BM engineer about something. It was a very positive experience overall, and I was kind of surprised, because with other brands I'm used to getting their forums, or it being impossible to even find a phone number - so customer service isn't something I'm used to even thinking is available, let alone getting help from really knowledgeable folks.
  22. kye

    Lenses

    More spectacular footage from the Laowa probe lens.. and here's the BTS: I watch James Hoffman because I'm interested in coffee, so this is a completely unexpected foray into camera tech and slow-motion loveliness, but if you like coffee then I can also recommend his channel. He's a Barista World Champion, has a ridiculously nerdy interest in coffee, and is hilarious when he tries coffee products that taste terrible or reviews cheap and bad coffee equipment. You're welcome!
  23. Think about it like this. FCPX, PP, Resolve, and other programs will all exist in some way over the next decade. In all likelihood they will have new versions, new features, new bugs, various frustrations and limitations, and various performance levels doing different tasks on various hardware platforms.

Over the course of the next decade, which do you feel would be the best total experience for you? ie, experiencing the ups and downs of PP for that whole time, going with an alternative for that whole time, or sticking with PP now but transitioning at some point in the future. Think about the hardware required - the whole package. If you feel you'd be better off with another package, then think about the best time to switch. Over the next decade you'll buy multiple new computers, cameras, and other equipment, so factor those in too.

Factoring in the next decade, switching platforms isn't such a big deal, especially if the grass is actually greener on the other side - you'd be eating better grass long after the scrapes from climbing the barbed-wire fence have healed and been forgotten. Changing software packages isn't a chore, it's an investment. You invest time, energy, money for the software, maybe even new hardware, but you would do so if the return was worth it over the longer term. Think about the cost of your frustration - how it impacts your quality of life and creativity, how demotivating it is. If it's worth making that investment, then choose the best time. The middle of a project with a deadline isn't the best time, and maybe there are other factors to consider too. But if it's worth switching, then schedule it in your life, then do it.
  24. I've only had one Sigma lens, the 18-35/1.8 but it was a great lens. I'd buy Sigma again in a heartbeat.
  25. Great stuff. +1 for GH6 info. Also, more of a comment than a question, but could they please consider adding video modes in wider aspect ratios, eg, 2:1, 2.35:1, 2.66:1, etc. I know you can film 16:9 and crop in post, but it wastes bitrate and storage. They currently have guides on the GH5 for these, but they're so faint they're very difficult to use.