Towd

Reputation Activity

  1. Like
    Towd reacted to kye in How can I compare motion cadence?   
    We may find that there are variations, or maybe not.  Typically, electronics derives its timing from a quartz crystal oscillator in the circuit, but crystals resonate REALLY fast - often 16 million times per second - so the crystal feeds a frequency divider circuit and the output clock only triggers once every X crystal cycles.
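    To put rough numbers on that, here's a minimal Python sketch; the 16 MHz crystal and the ±50 ppm tolerance are typical assumed values, not taken from any particular camera:
```python
# Hypothetical divider arithmetic: a 25 Hz frame clock derived from an
# assumed 16 MHz crystal with an assumed +/-50 ppm frequency tolerance.
F_XTAL = 16_000_000                          # crystal frequency, Hz (assumed)
DIVIDER = F_XTAL // 25                       # 640,000 crystal cycles per frame
frame_interval_ms = 1000 * DIVIDER / F_XTAL  # nominal 40.000 ms per frame
drift_us = 50e-6 * frame_interval_ms * 1e3   # worst-case error on one interval
print(f"{frame_interval_ms:.3f} ms per frame, +/-{drift_us:.0f} us worst case")
```
    That works out to roughly 2 µs of worst-case error on a 40 ms frame interval.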
    In that sense, the clock speed should be very stable, however there are also temperature effects and other things that act over a much slower timeframe and might be in the realm of the frame rates we're talking about.  Jitter is a big deal in audio reproduction, and lots of work has been done in that area to measure and reduce its effects.  However, audio has sampling rates at 1/44100th of a second intervals so any variations in timing have many samples to be observed over, whereas 1/24th intervals have very few data points to be able to notice patterns in.
    I've spent way more time playing with high-end audio than I have playing with cameras, and in audio there are lots of arguments about what is audible vs what is measurable etc (if you think people arguing over cameras is savage, you would be in for a shock!).  However, one theory I have developed that bridges the two camps is that human perception is much more acute than is generally believed, especially in regards to being able to perceive patterns within a signal with a lot of noise.  In audio if a distortion is a lot quieter than the background noise then it is believed to be inaudible, however humans are capable of hearing things well below the levels of the noise, and I have found this to be true in practice.  If we apply this principle to video then it may mean that humans are capable of detecting jitter even if other factors (such as semi-random hand-held motion) are large enough that it seems like they would obscure the jitter.  
    In this sense, camera jitter may still be detectable even if there is a lot of other jitter from things like camera-movement also in the footage.
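    The perception claim is hard to test directly, but the signal-processing version of it is easy to demonstrate. A toy numpy sketch (the tone frequency and amplitudes are arbitrary):
```python
import numpy as np

# A 1 kHz tone ten times weaker than the noise: invisible sample-by-sample,
# but it dominates its FFT bin once the pattern accumulates over many samples.
rng = np.random.default_rng(1)
fs, n = 44100, 44100
t = np.arange(n) / fs
tone = 0.1 * np.sin(2 * np.pi * 1000 * t)   # amplitude 0.1
noise = rng.normal(0.0, 1.0, n)             # noise ten times stronger
spectrum = np.abs(np.fft.rfft(tone + noise))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(f"strongest bin: {freqs[spectrum.argmax()]:.0f} Hz")  # prints 1000 Hz
```
    Sample by sample the tone is swamped; it's only across many samples that the pattern emerges, which is the sense in which a structured error like jitter can remain detectable underneath unstructured movement.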
    LOL, maybe it is.  I don't know as I haven't had the ability to receive TV in my home for probably a decade now, maybe more.  Everything I watch comes in through the internet.
    I shoot 25p as it's a frame rate that's also common across more of what I shoot with (smartphones, action cameras, etc.), so I can more easily match frame rates.  If I only shot with one camera then I'd change it without hesitation 🙂
    For our tests I'm open to changing it.
    Maybe when I run a bunch of tests I'll record some really short clips, label them nicely, then send them your way for processing 🙂
    Yeah, it would be interesting to see what effects it has, if any.  The trick will be getting a setup that gives us repeatable camera movement.  Any ideas?
    We're getting into philosophical territory here, but I don't think we should consider film as the holy grail.
    Film is awesome, but I think that its strengths can be broken down into two categories: things that film does that are great because that's how human perception works, and things that film does that we like as part of the nostalgia we have for film.  For example, film has almost infinite bit-depth, which is great and modern digital cameras are nicer when they have more bit-depth, but film also had gate weave, which we only apply to footage in post when we want to be nostalgic, and no-one is building it into cameras to bake it into the footage natively.
    From this point of view, I think in all technical discussions about video we should work out what is going on technically with the equipment, work out what aesthetic impacts that has, and then work out how to use the technology in such a way that it creates the aesthetics that will support our artistic vision.  Ultimately, the tech lives to support the art, and we should bend the tech to that goal, and learning how to bend the tech to that goal is what we're talking about here.
    [Edit: and in terms of motion cadence, human perception is continuous and doesn't chop up our perception into frames; motion cadence is the complete opposite of how we perceive the world, so in this sense it might be something we would want to eliminate, as the aesthetic impact just pulls us out of the footage and reminds us we're watching a poor reproduction of something]
    Maybe and maybe not.
    We can always do tests to see if that's true or not, but the point of this thread is to test things and learn rather than just assuming things to be true.
    One thing that's interesting is that we can synthesise video clips to test these things.  For example, let's imagine I make a video clip of a white circle on a black background moving around using keyframes.  The motion of that will be completely smooth and jitter-free.  I can also introduce small random movements into that motion to create a certain amount of jitter.  We can then run blind tests to see if people can pick which one has the jitter.  Or have a few levels of jitter and see how much jitter is perceivable.
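    Here's a minimal sketch of that synthetic clip, assuming OpenCV is available; the resolution, motion path and jitter amplitude are arbitrary choices:
```python
import numpy as np
import cv2

# White circle on black, sweeping left to right on a perfectly smooth path,
# with optional random displacement per frame; JITTER_PX = 0 gives the
# jitter-free reference clip for the blind test.
W, H, FPS, N = 1280, 720, 25, 200
JITTER_PX = 1.5  # std-dev of the random per-frame displacement, in pixels

rng = np.random.default_rng(0)
out = cv2.VideoWriter("jitter_test.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      FPS, (W, H))
for n in range(N):
    frame = np.zeros((H, W, 3), np.uint8)
    x = 100 + (W - 200) * n / (N - 1)        # ideal smooth position
    jx, jy = rng.normal(0.0, JITTER_PX, 2)   # timing/position error
    cv2.circle(frame, (round(x + jx), round(H / 2 + jy)), 40,
               (255, 255, 255), -1)
    out.write(frame)
out.release()
```
    Rendering one clip at JITTER_PX = 0 and several more at increasing values gives a graded set of stimuli for the blind tests.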
    Taking those we can then apply varying amounts of motion-blur and see if that threshold of perception changes.  We can apply noise and see if it changes.  etc. etc.
    We can even do things in Resolve like film a clip hand-held for some camera-shake, track that, then apply that tracking data to a stationary clip, and we can apply that at whatever strength we want.
    If enough people are willing to watch the footage and answer an anonymous survey then we could get data on all these things.  The tests aren't that hard to design.
  2. Like
    Towd reacted to kye in How can I compare motion cadence?   
    Great discussion!
    Yeah, if we are going to compare amounts of jitter then we'd need something repeatable.  These tests were really to try and see if I could measure any in the GH5, which I did.
    The setup I was thinking about was camera fixed on a tripod pointing at a setup where something could swing freely, and if I dropped the swinging object from a consistent height then it would be repeatable.
    If we want to measure jitter then we need to make it as obvious as possible, which is why I did short exposures.  When we want to test how visible the jitter is then we will want to add things like a 180 shutter.  One is about objective measurement, the other about perception.
    Yes, that's what I was referring to.  I agree that you'd have to do something very special in order to avoid a square wave, and that basically every camera we own isn't doing that.
    The Ed David video, shot with high-end cameras, supports this too, with motion blurs starting and stopping abruptly:

    One thing that was discussed in the thread was filming in high-framerates at 360 degree shutter and then putting multiple frames together.  That enables adjusting shutter angle and frame rates in post, but also means that you could fade the first/last frames to create a more gradual profile.  That could be possible with the Motion Blur functions in post as well, although who knows if it's implemented.
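    A sketch of that frame-stacking idea, assuming the high-frame-rate frames are already decoded as numpy arrays; the group size and fade amount are illustrative:
```python
import numpy as np

def stack_shutter(frames, group=4, fade=0.5):
    """Average `group` consecutive high-fps frames into one output frame.
    fade=0 keeps the hard square temporal window; larger values down-weight
    the first and last frame of each group for a more gradual profile."""
    w = np.ones(group)
    w[0] = w[-1] = 1.0 - fade
    w /= w.sum()
    out = []
    for i in range(0, len(frames) - group + 1, group):
        block = np.stack(frames[i:i + group]).astype(np.float32)
        out.append(np.tensordot(w, block, axes=1).astype(np.uint8))
    return out
```
    With 100 fps 360-degree source material, groups of four give 25p with an effective 360-degree shutter; dropping frames from each group instead would shorten the effective shutter angle in post.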
    Sure.  I guess that'll make everything cinematic right?  (joking..)
    Considering that it doesn't matter how fast we watch these things, it might be easier to find an option where we can just specify what frame rate to play the file back at - do you know of software that will let us specify this?  That would also help people to diagnose what plays best on their system.
    That "syntheyes" program would save me a lot of effort!  But it validates my results, so that's cool.
    I can look at turning IBIS off if we start tripod work.  To a certain extent the next question I care about is how visible this stuff is.  If the results are that no-one can tell below a certain amount and IBIS sits below that amount then it doesn't matter.  In these tests we have to control our variables, but we also need to keep our eyes on the prize 🙂
    One thing I noticed from Ed David's video (below) was that the hand-held motion is basically shaky.  Try and look at the shots pointing at LA Renaissance Auto School frame-by-frame:
    In my pans it was easy to see that there was an inconsistent speed - ie, the pan would slow down for a few frames then speed up for a few frames.  You can only tell that this is inconsistent because for the few frames that are slower, you have frames on both sides to compare those to.  You couldn't tell those were slow if the camera was stopped on either side, that would simply appear to be moving vs not moving.
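    Measuring that per-frame speed doesn't have to be done by eye. Here's a sketch using OpenCV's phase correlation to estimate global displacement between consecutive frames (the filename is hypothetical):
```python
import numpy as np
import cv2

# Estimate how far the image shifted between consecutive frames of a pan;
# the resulting list is the movement-per-frame data discussed in this thread.
cap = cv2.VideoCapture("pan_test.mp4")  # hypothetical clip
prev, speeds = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        (dx, dy), _ = cv2.phaseCorrelate(prev, gray)
        speeds.append(np.hypot(dx, dy))  # pixels moved since previous frame
    prev = gray
cap.release()
```
    A few consecutive low values followed by a few high ones in speeds is exactly the slow-down/speed-up pattern described above.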
    This is important because the above video has hand-held motion where the camera will go up for a few frames then stop, then go sideways for a frame, then stop, then....  I think that it's not possible to determine timing errors in such motion because each motion from the hand-held doesn't last long enough to get an average to compare with.  I think this might be a fundamental limit of detecting jitter - if the motion has any variation in it from camera movement then any camera jitter will be obscured by the camera shake.
    Camera shake IS jitter.
    In that sense, hand-held motion means that your camera's jitter performance is irrelevant.
    I didn't realise we'd end up there, but it makes sense.  Camera moves on a tripod or another system with smooth movement (such as a pan on a well-damped fluid head) will still be sensitive to jitter, and a fixed camera position will be most sensitive to it.
    That's something to contemplate!
    Furthermore, all the discussion about noise and other factors obscuring jitter would also apply to camera-shake.  Adding noise may reduce perceived camera-shake!  Another thing to contemplate!
    Wow, cool discussion.
  3. Like
    Towd reacted to BTM_Pix in How can I compare motion cadence?   
    While you are down this rabbit hole, you might want to take a look at the RED Motion Mount that they did for the EPIC.
    http://docs.red.com/955-0013/REDMOTIONMOUNTOperationGuide/Content/1_MM_Intro/1_Intro.htm
    As well as doing vari ND, it had a few tricks including simulating soft shutter and square shutter.
    There are links within that page to other explanatory documents about the process.
    What may be of use to you are the videos of it in action, such as this one, as it's a unique reference for how the look of the same camera can be changed with these simulations.
     
  4. Like
    Towd reacted to crevice in Canon EOS R5 has serious overheating issues – in both 4K and 8K   
    My thoughts:
     I don't really get how people are upset that 8K raw overheats... why are you looking at a small mirrorless camera if you need long captures of 8K raw to begin with? Buy a real cinema camera. I think everyone is forgetting what this camera is and what was asked of Canon. Everybody is getting mad at Canon for giving us what the majority of people asked for.
     
    People wanted a Canon mirrorless camera that was full frame with amazing IBIS, dual pixel autofocus, a high megapixel count for photos, and could shoot 4K 4:2:2 10-bit uncropped, with a swivel screen. So, they did that. They did exactly that and more. So then they thought, well, we could give them a full raw 8K readout of the sensor and have them record as long as they can before it overheats; better than holding it back. Also, we can throw in 4K 120 as well and have them go as long as they can before we max out. I mean, might as well give them everything the camera can do, instead of just not unlocking it and having the Magic Lantern folks do it for us, right? 
     
    So they threw in some crazy bonuses nobody was asking for and here we are, people bitching. The camera everyone wanted is here. The full frame autofocus beast with great IBIS, 10-bit, no crop, great at photos, true hybrid by Canon is here and we are upset. Are we complaining that they have the overheating numbers written down? Because there is no way it's a shock that 4K 60p 4:2:2 10-bit with dual pixel autofocus will overheat near the 40 minute mark... or are we surprised that 8K raw with IBIS engaged, dual pixel autofocus, and no crop in a small full frame mirrorless overheats? I mean, what are we mad about here? 
    This is the camera that everyone wanted. There are several frame rates and resolutions that Canon threw in that will be very useful within the limits of this specific camera, and yes, they have limits, but at least they are unlocked and not behind a Magic Lantern hack like the old 5D. As someone who shoots an equal mix of photography and video, this is a dream camera. I get a high megapixel camera and high quality video, with all of Canon's special sauce including their great autofocus and colors. I see no mention of limits for standard 4K 24p, and 35 minutes of continuous 4K 60 is more than what I need. If it's not for you, then that's fine - keep waiting. 
     
    There is so much negativity in the world right now, that I’m honestly not surprised that Canon finally doing something good gets shit on. I expect nothing less at this point. 
  5. Like
    Towd reacted to kye in How can I compare motion cadence?   
    OK, here's all four combinations.

    The blue line is the movement per frame.  The orange line is a trend line (6th order polynomial) to compare the data to, to see if the data goes above, then below, then above, which would indicate jitter.
    Looking at it, there is some evidence that all of them have some jitter, possibly ringing from the IBIS.  I got more enthusiastic with my pans in the latter tests so there are fewer data points; they're not really comparable directly, but should give some idea.
    They appear to be similar to me, in terms of the up/down being about the same percentage-wise as the motion.
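    For reference, that detrending step is only a couple of lines of numpy; the CSV export of the per-frame movement is hypothetical:
```python
import numpy as np

# Fit the 6th-order polynomial trend to the per-frame movement and inspect
# the residual: repeated above/below swings suggest jitter or IBIS ringing.
speed = np.loadtxt("movement_per_frame.csv")      # hypothetical export
t = np.arange(len(speed))
trend = np.polyval(np.polyfit(t, speed, 6), t)    # the orange trend line
residual = speed - trend
sign_changes = int(np.sum(np.diff(np.sign(residual)) != 0))
print(f"residual RMS: {residual.std():.3f} px, sign changes: {sign_changes}")
```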
     
  6. Like
    Towd reacted to Chrille in Panasonic interview - question suggestions   
    I think it would be great to ask them to implement a 2.8K mode on the GH5 or GH5S via firmware, at the maximum 4K bandwidth, which I guess might add to the quality of the picture. Given that this is one of the learnings that has come up recently, I believe that would be something they could do to make their customers happy...
  7. Like
    Towd reacted to Video Hummus in Panasonic interview - question suggestions   
    I'm not giving up on believing they can deliver something. It's just that the market has changed and having reliable AF available (doesn't mean you have to use it!) is becoming a must-have feature these days.
    Just having a really solid, rich 6K or 4K image and the ability to do ProRes RAW via HDMI isn't as appealing anymore when (pending more hands-on tests) the R5/R6/A7SIII gives us that and solid AF.
    Anyway, what I'm trying to say is, it's becoming harder and harder to look the other way on Panasonic's AF woes when the competition is starting to offer features that used to be pretty much the sole domain of the GH4/GH5 in the sub-$3K market.
    So they need to have something compelling if their AF is bottom tier, hence the questions I wrote in this thread for Andrew to potentially ask. What's their plan? Because I have fairly expensive MFT gear, and with Olympus out of the game, and the camera industry contracting like it is, I'm evaluating what system to spend my hard-earned cash on for the next 5+ years. It's important that they have a clear answer.
  8. Like
    Towd reacted to kye in How can I compare motion cadence?   
    Love that video by Ed David - the commentary was hilarious!  Nice to see humility and honesty at the forefront.
    So far, my thoughts are that there might be these components to motion cadence:
    - variation in timing between exposures (ie, every frame in 25p should be 40ms apart, but maybe there are variations above and below) - the technical phrase for this is "jitter"
    - rolling shutter, where if an object moves up and down within the frame it would appear to be going slower / faster than other objects
    Things that can impact those appear to be a laundry-list of factors, but seem to fall into one of three categories:
    - technical aspects that bake un-even movement into the files during capture
    - technical aspects that can add un-even movement during playback
    - perceptual factors that may make the baked-in issues and playback issues more or less visible upon viewing footage
    My issue appears to be that the playback issues are obscuring the issues added at capture.
    As much as I'd love to do an exhaustive analysis of this stuff, realistically this is a selfish exercise for me, as I want to 1) learn what this is and how to see it, and 2) test my cameras and learn how to use them better.  If I work out that I can't see it, or it's too difficult to make my tech behave then I likely won't care about the other stuff because I can't see it 🙂 
    First steps are to analyse what jitter might be included in various footage, and to have a play with my hardware.
  9. Like
    Towd reacted to Trek of Joy in Canon EOS R5 has serious overheating issues – in both 4K and 8K   
    This thread is insane. All this over a chart without any actual tests.
    I remember buying the a6300, a7S2 and a7R2 despite the internet overload about overheating, single card slots that no professional would ever use, 8-bit 4:2:0 video with blotchy zombie skin tones, a crop on everything that's not 24p, only 4K30p and so on. I live in Florida and never had any issues, despite using them on a gimbal all the time in the sun and shooting longer takes; it doesn't take much to swap batteries, turn the camera off when not in use, or not leave it sitting out in the sun all day. I took them to Africa on safari shooting thousands of frames and hours of video, and also took them to Dubai, Egypt and Israel in August when it was upwards of 125°F. Now we have a camera that shoots 8K raw and 4K 120fps with amazing AF and class-leading IBIS, and can even shoot 45MP stills in bursts of 130 raw images or infinite JPEGs if that's your thing, and it's trash because you can't shoot an entire wedding in one take?
    Wow.
    Can't wait for the shitstorm when Sony's cripple hammer hits the a7S3 and the pendulum swings back to the R5/6, with "I can work around the issues, it's worth it for raw and 120p." This reads like a Sony Alpha Rumors post.
    LOL!
  10. Like
    Towd reacted to wolf33d in My thoughts on the Canon EOS R5 8K monstrosity - 1TB footage per 50 minutes   
    You are too hard on them and I have a hard time understanding that much frustration. An 8K badge on the box? What about 4K with no crop? 10-bit 4:2:2? 4K60 and 4K120 with best in class AF? The best FF IBIS on the market? Top level color science and top level ergonomics. 
    You are omitting a lot of things. They did not trade reliability for a badge. They did what the world asked: best in class video and photo spec in a mirrorless body, period. Those are indeed more than best in class, and to achieve that some of the features have time limitations. 
    Canon has been a joke and frustration for now many years. I welcome with a lot of positivity the great options they bring to the table today. They deserve a round of applause. 
     
  11. Like
    Towd reacted to Video Hummus in Canon EOS R5 has serious overheating issues – in both 4K and 8K   
    While 20-minute overheating times in certain modes are a bummer and could be a problem for some people, I, for one, am glad they took the "let's put the feature in" approach versus saying "heat limitations wouldn't have allowed it" and releasing something less exciting or groundbreaking. They would have been lambasted for it either way.
     It's a lose-lose for them and they made the better decision here, in my humble opinion.
    Whether the headline is "R5 falls short: only offers cropped 4K60p" or "R5 has potential to overheat when recording 8K RAW(!), 4K120p(!), or oversampled 4K from 8K HQ mode(!)", the latter headline is forgivable!
  12. Like
    Towd reacted to BTM_Pix in How can I compare motion cadence?   
    This is what I've gleaned on the issue over the years...
    To test motion cadence effectively you need to mount the two cameras on a dual-mount tripod bar and ensure you have, if not the same lens, at least the same focal length on each.
    For a test subject, you need something that can move of its own accord so you will see motion within a static framing but also that has enough movement to exit the frame so you can pan to track it and see the effects on a moving framing.
    The ideal subject for this when it comes to testing motion cadence is, by common consent, a unicorn.
    The ideal speed for the unicorn to be moving is from a slow walk up to a gentle trot.
    Most unicorns are lazy but skittish, so a few party poppers will enable you to encourage this range of movement.
    On the camera itself, make sure that the power draw isn't affecting motion cadence by using a fully charged battery, and that you have mojo set to 10, or boost if your camera has it.
    Depending on the brand of camera that you are using, you should also ensure that you have fully topped up your Koolaid levels.
    😉
    On a more serious note, this is a decent thread with some informed discussion about it and also contains a link to a side by side video between real film, Alexa, Red and the Sony F65.
    https://cml.news/g/cml-raw-log-hdr/topic/16427778?p=Created,,,20,1,0,0::recentpostdate%2Fsticky,,,20,2,0,16427778
     
  13. Like
    Towd got a reaction from kye in How can I compare motion cadence?   
    I'm really interested in seeing the results of your test, and I've thought about this myself.  I think shooting something high speed that is repeatable would be interesting, to see if there is any unusual timing - like recording frames at 60 Hz but then dropping frames and only writing 24.  Maybe shooting an oscilloscope running a 240 or 120 Hz sine wave or something, so you could measure whether the image is consistent from frame to frame.
    Another option for checking if frames are recorded at consistent intervals would be to use a 120 Hz or higher monitor and record the burned-in time code from a 120 fps video.  This might be enough to see if frames are being recorded at a consistent 24 fps and not doing something strange with timing.  You'll probably have to play around with shutter angles to record just one frame... or you could add some animated object (like a spinning clock hand) so you could measure that you are recording 2.5 frames of the 120 fps animation, as would be correct for a 180 degree shutter recorded at 24 fps.  A 240 Hz monitor and 240 fps animated test video would be even better, as you could check for five animation frames per exposure.  Anyway, just throwing out some ideas.
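    A sketch of generating that burned-in counter clip with OpenCV; the frame rate, size and filename are arbitrary:
```python
import numpy as np
import cv2

# A 120 fps clip with its frame index burned in. Played on a 120 Hz monitor
# and filmed at 24p with a 180-degree shutter, each captured frame should
# span about 2.5 counter values, as described above.
FPS, SECONDS, W, H = 120, 10, 1280, 720
out = cv2.VideoWriter("counter_120fps.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      FPS, (W, H))
for n in range(FPS * SECONDS):
    frame = np.zeros((H, W, 3), np.uint8)
    cv2.putText(frame, f"{n:05d}", (W // 4, H // 2),
                cv2.FONT_HERSHEY_SIMPLEX, 4, (255, 255, 255), 8)
    out.write(frame)
out.release()
```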
    The only other things I can think of that could be characterized as bad motion cadence might be bad motion blur artifacts, bad compression artifacts, and rolling shutter.  These might show up with a "picket fence" type test, by panning or dollying past some kind of high contrast white and black stripes.  At the very least it would be interesting to see if the motion blur looked consistent between cameras with identical setups.
  14. Thanks
    Towd reacted to kye in How can I compare motion cadence?   
    I have a few cameras that reportedly vary with how well they handle motion cadence.  I say reportedly because it's not something I have learned to see, so I don't know what I'm looking for or what to pay attention to.
    I'm planning to do some side-by-side tests to compare motion cadence - can you please tell me:
    1) what to film that will highlight the good (and bad) motion cadence of the various cameras, and
    2) what to look for in the results that will allow me to 'spot' the differences between good and bad.
    Thanks.
    I'm happy to share the results when I do the test.  I'll also be testing out if there's some way to improve the motion cadence of the bad cameras, and what things like the Motion Blur effect in Resolve does to motion cadence.
  15. Like
    Towd got a reaction from IronFilm in Panasonic interview - question suggestions   
    Of course any information regarding Panasonic's thoughts regarding the future of Micro Four Thirds would be welcome.
    Maybe specifically, with Panasonic possibly becoming the sole standard bearer for the MFT format, have they considered extending the standard to cover an S35 image area?  Like an MFT+ spec.
    Or possibly, have they considered releasing a GH series camera with a multi aspect sensor that scales from the standard MFT area up to S35?  I believe JVC did something like this. 
    For still photography, a larger S35 sensor could also allow a larger 1:1 aspect ratio image area that would still fit inside the normal MFT image circle, as a side benefit for Instagram shooters and other photographers who like that format.
  16. Thanks
    Towd got a reaction from kye in Olympus sells Imaging Business   
    Whenever I read a long thread regarding the merits or shortcomings of the MFT format I'm reminded of a conversation I had with a DP quite a few years ago about Super 35 vs Academy ratio framing.
    For historical context every VFX heavy movie that was shot on film was typically shot Super 35 with spherical lenses even if its final projected format would be widescreen 2.39.  Of the twenty or so films I've worked on that originated from scanned film, only one was shot with Anamorphic lenses even though the vast majority were slated for widescreen delivery.
    Anyway, the big concern back then was minimizing film grain and had less to do with the depth of field and blurry backgrounds, so you'd typically want to use as much of the negative as possible to reduce the film grain before scanning it.  However, in talking with a DP one day, he told me that when a feature wasn't slated for a lot of VFX, they would often just frame the project in Academy Ratio with room on the negative for where the sound strip would go. In this way you would go Negative -> Interpositive -> Internegative -> Print without mixing in an optical pass to reduce the super 35 image into the Academy framing for the projector.
    The reason for this according to the DP was that the optical pass would introduce additional grain, so shooting S35 was kind of a wash in that you were trading the larger film area of S35 for framing directly for the Academy area on a projector and skipping the optical reduction that introduced more grain than just the steps in making a print.
    Of course now with cheap film scanning, Digital Intermediates, and of course digital cameras, everyone just shoots using the S35 area.  My understanding from the conversation is that if you were shooting something like "Star Wars" that would have a bunch of opticals anyway, you'd shoot S35 even before the days of digital film scanning, but if you were shooting some standard film it was common for the DP to just frame inside the Academy ratio area of the film strip and skip the optical reduction.
    Is there anyone with more experience than myself with shooting old films who can confirm this?  It really has me curious.  The reason is that the Academy ratio on a projector is defined at 21mm across, while a multi-aspect MFT sensor on something like the GH5S is 19.25mm across if you shoot DCI.  When you consider projector slop or the overscan on an old TV for the action-safe area, the difference in the exposed film or sensor area seen by an audience seems negligible-- and thus the perceived DOF, or lack thereof, between the two formats.
    Anyway, just wanted to share this, since I have never seen it discussed anywhere.  And maybe it's just me, but I'm really curious as to whether the vast bulk of films shot from the 1930s and into the 90s were actually using the Academy ratio area of the negative for framing.
    Also, I hope I'm not derailing the conversation regarding Olympus too much, but as we're discussing the MFT format of Olympus cameras as a possible reason for its lack of sales, this seemed apropos.
    https://en.wikipedia.org/wiki/Academy_ratio
    https://en.wikipedia.org/wiki/Super_35
  17. Like
    Towd reacted to IronFilm in COVID19 Kibosh   
    I think you'd need to be a little oblivious to the recent news not to be aware of what @sanveer is saying. (Not meaning to imply I agree with him, but I at least understand the perspective he is coming from.)

    For instance, here are the top two results I got for "China" when I searched Google News just now, major headline stories from big mainstream news sites:

    Trump angers Beijing with 'Chinese virus' tweet
    https://www.bbc.com/news/world-asia-india-51928011

    Trump sparks anger by calling coronavirus the 'Chinese virus'
    https://www.theguardian.com/world/2020/mar/17/trump-calls-covid-19-the-chinese-virus-as-rift-with-coronavirus-beijing-escalates

    There are countless more like this that have been published just in the past 24hrs or so:
    Trump tweets about coronavirus using term 'Chinese Virus'
    https://www.nbcnews.com/news/asian-america/trump-tweets-about-coronavirus-using-term-chinese-virus-n1161161
    Trump’s ‘Chinese Virus’ Tweet Adds Fuel to Fire With Beijing
    https://www.bloomberg.com/news/articles/2020-03-17/trump-s-chinese-virus-tweet-adds-fuel-to-fire-with-beijing
    China to Pompeo: Stop Calling It the ‘Wuhan Virus’
    https://www.thedailybeast.com/china-tells-pompeo-to-stop-calling-coronavirus-covid-19-wuhan-virus
    Trump Roundly Condemned for Divisive ‘Chinese Virus’ Tweet
    https://www.thedailybeast.com/trump-roundly-condemned-for-divisive-chinese-virus-tweet
    Trump called it the 'Wuhan coronavirus' for a legal — and commonsensical — reason
    https://thehill.com/opinion/white-house/487931-trump-called-it-the-wuhan-coronavirus-for-a-legal-and-commonsensical
     
  18. Like
    Towd reacted to Video Hummus in Panasonic GH6   
    To be honest, it sounds like the "lowly" GX85 punched above its weight.
    There is much more to photography and videography than the fucking sensor size. "Oh, but it's not full frame." "Oh, it doesn't have that FF look."
    M43 isn’t trying to be Full frame.
    It's a lovely system. It's very capable and can punch above its weight in many categories. It's a FUN system with lovely cameras. Ask anyone that goes out with their EM1 or G9. You get to pack a light pack and leave the tripod at home.
    The new EM1.3 can do 50MP handheld astro photography. Think about that for a moment. Is it as good as a FF camera with a tripod and some expensive lens? Probably not. I bet it was still fun as hell for the M43 user.
    And that's why people like Tony Northrup need to shut the fuck up when they don't know what they are talking about. Like somehow M43 is inferior? It's actually a pretty amazing and flexible system!
    Take a look at Chris Eyre-Walker or James Popsys. Think they care that they aren't using FF? Nah, they don't give a fuck. They see the virtues and trade-offs and then just create and capture some nice stuff, almost all of it handheld, with a small kit, in challenging conditions. They leverage the virtues of M43. But they aren't using professional equipment because it doesn't have a FF sensor in it? Phhftt.
    And as we can see from Tony Northrup, the full frame look can be highly overrated. Thanks, Tony, for the great memes. 
  19. Like
    Towd reacted to newfoundmass in Panasonic GH6   
    It's amazing to me how dismissive people are of the benefits of M43. To me the only "insurmountable" negative of the system, at least on the Panasonic side, is the auto focus but even that, in my opinion, is overblown. Everything else can be pretty much remedied. Poor in low light? Throw a little light on your subject/scene! Need shallow depth of field? Just use the right lens! 
    I understand why people love full frame. I really do! But you don't NEED full frame, you just WANT full frame. There's nothing wrong with that either but are you incapable of creating your vision using a smaller sensor? I doubt it. 
    Two weeks ago I filmed a pro wrestling event with my three-camera setup. My buddy did stills using the A7III with the Sony 70-200mm. I blew his mind when I showed him the GX85 with the 35-100mm f/2.8. Seeing the two side by side was comical. The cost of his setup, by the way? $4,600. I paid $250 for the GX85 and $400 for the 35-100mm on the used market. Even if I'd bought them new it'd have been $1,500. I know it's not the best comparison, but my buddy could've easily done his job with a G9 instead. Price of that new? $1,200.
    I'm rambling, sorry, ha ha, but hopefully some of that makes sense! 
  20. Like
    Towd reacted to Wild Ranger in Canon EOS R5 - What Panasonic, Sony and Fuji can do to fight THE 8K BEAST   
    Now with the recent adoption of AV1 by Netflix and YouTube, I think internal codecs are going the AV1 way; it's really efficient, supporting up to 12-bit and 4:4:4 color. Also, IT'S FREE OPEN SOURCE MAGIC!!!
    Implementing ProRes is harder to accept, in my opinion. It needs to be hardware encoded to be efficient in smaller cameras, and the licensing and chip development are quite a risk for mirrorless cameras. That's why Panasonic opted for 10-bit H.264/265: it's cheaper, smaller and, let's be honest, quality-wise it's not really far from ProRes HQ. The only benefit of Apple's codec is its ease of use and adoption in NLEs.
  21. Like
    Towd reacted to Video Hummus in Canon EOS R5 - What Panasonic, Sony and Fuji can do to fight THE 8K BEAST   
    I'm happy with the low-light performance of MFT with regards to the GH5S. Give us a GH6 with a dual-gain sensor that performs a bit better and it will be as good as an A7III or A7SII... and no one complained about their performance.
    I've shot candle lit scenes in a cabin in Norway with a f2.8 12-35 at ISO 6400 to 8000 on the GH5S. A few shots at 8000 needed some noise reduction that took about 5 mins of my time to process.
    I've shot video of Elk with a 50-200mm f/2.8-4 at dusk on the GH5S at 200mm f/4 ISO 3200-6400 with a monopod. Video came out great. I had other photographers and video people there too lugging around 400mm FF and crop lenses...they absolutely HATED to re-position. I wonder why?
    So I no longer fret about lowlight ability with the GH5S. IF the GH6 can take what they did with the GH5S and improve upon it, I am more than happy. Especially if I get glorious IBIS and full V-Log.
    I have never yet needed to shoot something wider than 18mm and faster than f1.7. When I do I guess I'll buy one of the new Laowa f0.95 ZeroD primes that are coming out this year.
    I prefer to have my human subjects' eyes and ears in focus in my talking-head shots. If you shoot f1.2 FF like Tony... well, god help you.
  22. Like
    Towd reacted to newfoundmass in Canon EOS R5 - What Panasonic, Sony and Fuji can do to fight THE 8K BEAST   
    Not if the Full Frame is significantly bigger! 
    M43 is about smaller bodies and lenses, not just cost. I can carry 3 bodies and 3 zoom lenses in a backpack and it will be nearly equal in weight to one full frame camera and zoom lens. 
    I mean the GH5 with 35-100 lens combined weighs less than the popular 70-200 f/2.8 full frame lenses alone! 
  23. Like
    Towd reacted to mkabi in Canon EOS R5 - What Panasonic, Sony and Fuji can do to fight THE 8K BEAST   
    I'm not trying to pick a fight here.
    But, I don't know... I felt the same, but after trying out MFT...
    I've got to call it as I see it... Most of the above is mostly B.S. from online pundits.
    1) <-- Very subjective, and it's content-based, not related to sensor size.
    2) <-- Well, watch this video and tell me that 3 feet is really a big deal (maybe switch out the lens for the 16-35mm):
    https://www.youtube.com/watch?v=IFH6bmesqVE
    3) <-- Plenty of bokeh on MFT here (by the way, that's my video with the stupid Yi camera):
    https://www.youtube.com/watch?v=pv0QRR3Nbh8
    4) <-- This I agree with - but manipulating light with reflectors, external soft boxes, LEDs, flags, etc., even time of day (the golden hours), and working within their limitations is half the fun.
    5) <-- This matters if you are not using any editing tools at all, but I've taken some amazing pictures with the 7D, and they looked great straight out of camera. Funny story: I was working with a friend on a wedding. I told him, man, it's just a 7D plus a couple of L lenses; he said it would be fine. He was shooting with a 5D Mark II, and the groom wanted all my pictures instead. After reviewing my photos, my friend was blown away by them too, mostly because I captured a lot of those "intimate" moments. 