Everything posted by kye

  1. kye

    Panasonic GH6

    I don't think we'd only get 8K. Any 8K sensor would mean an update to the colour science, which Panasonic has definitely improved in other cameras since the GH5, and which I think is one of the GH5's main weaknesses. 4K120 would be pretty good and would also be a bit headline-grabbing, which is cool too. Depending on the data rates and the imaging pipeline they put in there, I wonder if it could do 8K60? 8K downsampling would be cool. It also adds the possibility of a 6K mode that's also downsampled, which would be pretty newsworthy. I'm not sure how many of the 6K cameras are oversampling, and back in the day there was a significant IQ bump from oversampling rather than just doing a straight pixel readout.
At this point I'm not sure what it would take to be revolutionary. If you take the R5 and A7S3 and imagine something that would blow them out of the water and have people coming back to MFT, what would it have to be? Not only do I think it's impossible to deliver that, I'm not even sure it would connect with the market if it did. Imagine that it had a ridiculous configuration like 16K, 8K60, 4K240, 2K600, internal RAW, triple native ISO, eyelash AF for people, animals, insects and amoebas, 12-bit h266 444, and 8-hour battery life... who would sell their R5 or A7S3? It's still MFT in a FF hype world. Would people really abandon FF because "I need 16K" or "My clients require 4K240" or "I have to film at f11 at night under a new moon with no lights"?
My inspiration for the GH6 was for it to be a solid upgrade but keep its workhorse status. We're all about those specs here, but other GH5 forums I am part of online are much more focused on the work and how to get good results with it. Many shoot in 1080p (gasp!) because they never once had a client ask for 4K, etc etc. I suspect those people would see FF's sensor size as a more valuable advantage than 2K600 or whatever. Which I guess is a way of saying that beyond a certain point people don't care about specs when they're 'good enough', and I think that the pack is probably good enough for most, so going further doesn't really help much; what people will pay for is an upgrade that is meaningful, which leads me to....
My logic for that thread was that now we're about to get a $6K medium format camera that is 'good enough' for most people (either internally or with the 4K ProRes RAW), we can get a massive increase in sensor size, which is something more valuable to me than 4K120 or 8K. In a sense, my issues with the GH5 are minor enough that even if they fixed everything wrong with it, the upgrade probably wouldn't be worth it for me.
I've recently worked out that the difference between videographers and film-makers is that videographers have to please many future clients who may demand 4K or request radical changes in post, so the priority is to be as flexible as possible, thus buying a camera with high resolution and frame rates and sharp lenses. Film-makers only have to please one client at a time, and if there are particular requirements they can hire equipment without losing money, and if they have a good enough reputation or are independent then they only have to please themselves. As someone who only shoots my own projects, this means that my camera purchase only has to please me, and I'm not interested in 'what-if' questions about pleasing others. I seriously doubt that anyone is buying the A7S3 because they have their own needs for 600Mbps 4K ALL-I or 4K120 422 10-bit.
Rather, I think they will be buying these cameras because they don't know what future projects will require and want to cover their bases and never have to turn down work.
  2. kye

    Panasonic GH6

    I always thought that Panasonic had four options:
  • Treat the GH line like a pony and teach it more tricks, by going 8K / RAW / <spec goes here>.... this is the path of the R5
  • Treat the GH line like a 'workhorse' and improve the things that currently get in the way.... this is the path of the A7S3
  • Treat the GH line like it's dying and do a half-assed update that can be cheaply manufactured, hoping enough people will buy one
  • No GH6
Personally I don't mind if it goes 8K, because the GH5 is 5K and I shoot 1080, so going from 2.5x oversampling to 4x oversampling isn't a large difference. A new sensor means we'll also get new colour science, and that's one of the biggest weaknesses of the GH5 IMHO. A new sensor might mean a new AF technology, but who knows. Dual ISO could be on the table too, which would be fantastic.
Sadly, the Panasonic way seems to be releasing the camera with only crappy codecs and adding the good ones later through firmware. This means that the hyped high-bitrate and high-bit-depth modes aren't included at launch, and firmware updates don't get the hype, and I wouldn't buy it until it has what I need - unlike the S5, which might never get those upgrades. Still, it would be a shot in the arm for the MFT system and would protect people's investment in lenses and accessories for some time, which seems to have been a concern for some.
Does anyone have any info on the sensor? @androidlad perhaps?
  3. I'm not sure what the chances are for that - potentially lower than you might think. Consider the options:
  • The new models are impressive and expensive.... "it's the end of the world - $6K cameras have doomed civilisation"
  • The new models are impressive and affordable.... "today our company is closing its doors due to low unit profit of recent releases and the failure of the compact camera market"
  • The new models are modest upgrades and affordable.... "WTF that's not impressive - WORST CAMERA EVER"
  • The new models are modest upgrades and expensive.... "what the absolute @#$$"
Personally, I think the GH5 is actually very close to perfection, and so a modest upgrade of its weaknesses, even just some of them, would justify its existence. The hype train that seems to expect new cameras to be spectacular won't let the "we've taken a solid performer and given it a solid incremental update" message gain traction. These days things are either spectacular or despicable, and no logic or common sense can cut through the mania.
  4. Video will never be a niche. The demand for video is on the rise, but never seems to get mentioned when the units-shipped graphs come out each year. It's on the rise for people shooting with smartphones, but also for people who want better than what those can provide. The professional videographers continually banging on about why the latest equipment is the absolute bare minimum and the camera from last week is trash paint a picture of an unending supply of people who want 6K advertisements for their laundromat - the problem is that they don't want to pay enough for it.
I agree. Smartphones are eating the market for cameras that would take fewer than 10,000 images before being left in a drawer. Prices might not have come down in an absolute sense, but what you get for your money is growing steadily - likely in rough alignment with Moore's Law. If the obsession with 4K and 6K and 8K and 12K calmed even slightly then people would realise that the price of a camera that can deliver an image of X quality is dropping steadily.
  5. I find my exposures typically change from shot to shot, as unless I'm waiting for something to happen, I shoot for the edit and know what shot I'm aiming for and when I've got it.
When I bought the GH5 I had shortlisted an Olympus as it had better stabilisation, and also the A73 for the great AF. I was a huge fan of good autofocus, and knew that AF was the Achilles heel of the GH5, but decided to test myself and "prove" that manual focus wasn't for me. So I shot a few short test projects using MF and found that not only was it 10 times easier than I thought it was going to be, I liked the aesthetic. It's imperfect in a very human way, rather than with mistakes made by the tech that look robotic or plain awful. I also remembered, and started to not instantly forget, the times when AF does a great job of focusing on the wrong thing and ruins a shot. I don't think that AF is at a level yet where I would rely on it, because as imperfect as MF is, I NEVER EVER EVER find myself focusing on something that I didn't want to be focusing on 🙂 Yes, manually focusing isn't the fastest thing in the world, but it's usable in the edit. I regularly see people using manually focused shots that are far from perfect online, but it's rare for anyone to show a shot that the AF gets wrong, mostly because it's not nice aesthetically. I know cameras now have a speed control setting, but I always find that if I'm seeing an AF pull focus it's either too fast or too slow.
In terms of the GH5, I have it dialled in and I know it well now. I feel like I've finally learned it. I'm invested in lenses, and even bought two new lenses (Laowa 7.5/2 and Voigtlander 42.5/0.95) in late 2019 that I haven't even had the chance to use on a real trip yet! I don't need shallower DoF, because my lenses are fast enough, and I get high-bitrate 10-bit ALL-I 1080p. You're right that the colour science is a weak point, but it's passable and my grading skills are getting better, so I don't mind; however, I think this is the biggest thing that would tempt me to upgrade. It won't be any time soon, and it may very well be to a Medium Format camera, skipping these tiny "full frame" cameras lol.
I'm not sure what Canon FF camera you're talking about with "good results", but I'm not aware of anything that shoots 10-bit internally except perhaps the R5? One day Canon might make a FF camera to rival the GH5 at a similar price point, but we're not there yet! I've spent many hours of frustration grading the 8-bit footage from the XC10, and the 10-bit footage from the GH5 is absolutely spectacular in comparison, practically being from a different universe. I see grading videos of people grading Alexa footage and pushing UMP footage around and it moves smoothly without a hint of breaking, and the GH5 footage always gives me that same feeling when I'm grading it. I've tried to break it and failed. The Canon 8-bit footage I've shot and graded works fine in perfect conditions, but given any difficulties at all it broke before it even looked normal, let alone having latitude to push or pull the footage to adjust compositions in post.
  6. I love it when people claim to speak on behalf of an entire world-wide industry, but let's introduce some science... There's a standard table of the sample size required to achieve a certain margin of error (not reproduced here): to get 95% confidence that we're within 5% of the true figure, we need 384 samples. So, for those saying easyrigs are common, please provide photographic evidence of 384 films using an easyrig. For those saying they're not common, please provide a time-lapse video for 384 productions showing the DoP over the full shoot schedule not using an easyrig. I look forward to your responses.
On a separate note, I'm thinking of buying a new Canon cinema camera to replace my GH5 for my family's home videos, which I shoot handheld, and I was wondering if it will have IBIS or if I will need to buy an easyrig. Thanks.
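For anyone curious where that 384 comes from, it falls out of the usual sample-size calculation. A minimal sketch in Python, assuming Cochran's formula with the worst-case proportion p = 0.5 and a large population:

    def sample_size(z=1.96, margin=0.05, p=0.5):
        """Cochran's sample-size formula for a large population.

        z      - z-score for the confidence level (1.96 ~ 95%)
        margin - acceptable margin of error (0.05 = +/-5%)
        p      - assumed population proportion (0.5 is the worst case)
        """
        return z ** 2 * p * (1 - p) / margin ** 2

    print(round(sample_size()))  # ~384 samples for 95% confidence, +/-5% margin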
  7. Yeah, but can it shoot 8K without overheating? Getting the shot is overrated.... it's all about that sweet sweet megapixel count, everyone knows that.
  8. Yeah, I suppose. I guess from the outside it just looks like they're all fawning over whatever looks cool at any given moment, with the guiding principles of 1) MORE=more, 2) MOAAR!!!!, 3) See #1, and 4) How high do you think I can count? It seems odd to think that the hype would go against the more-is-more principle, but I guess the golden rule is the golden rule. Interesting, and makes sense. I wonder what a setup with the 2X anamorphic adapter looks like, assuming the optical path will work? I kind of liked the look of the ground glass... it had way too many issues across the whole frame, but the texture in the centre was lovely.
  9. I agree with your summary from the perspective of a stills-first shooter. You're right that FF lenses and the speed booster lend huge support to a new system, which is hugely important commercially. It will be interesting to see if FF ends up being the sweet spot. When you have something where bigger is better, it tends to only be better because everything you've seen so far is below the 'best' size, or it turns out that although the thing you're looking at keeps getting better with size, other things take over in importance. No-one is going to carry around an 8x10 sensor camera even if the image was beyond belief, except on crazier projects, like that movie where they hand-held the IMAX film camera(!). You're right that it will depend on the other manufacturers getting in on the act and collectively PR-ing people to death and making them upgrade. I can also understand Fuji using this as a strategy. My question is really "if I have to change from MFT and bigger is better, then why would I settle for FF if there are better options?" If there was a MF camera with the features of the A7S3 and lenses available, then I think the camera trendies would be plastering it all over YT.
  10. Cool technique, and I can see why you are able to use aperture to set exposure. Not a lot of folks on this board seem to understand that a lens closes down.
My overall philosophy is 1) get the shot, and 2) make it as 3D as possible. Making it as 3D as possible explains many / most of the things that are done on controlled big-budget sets, and as I don't do some of those things because of (1), I overdo the rest, but just by a little. I prefer to shoot with a DoF similar to my eyes, but just a little shallower in order to help with depth. Even on bright sunny days we still see blur in backgrounds, so that's how I use the aperture. In low light I will open up completely, and although that's a radical thing with fast lenses, it looks more natural because our eyes naturally open up in low-light conditions. I shoot in public in uncontrolled situations, so it's nice to be able to separate the subject from the environment to create focus. For example (excuse the colour grading - they are ungraded or only a quick job):
You can definitely over-do the shallow DoF thing. Philip Bloom's film shot in Greece with the GFX100 had DoF that was too shallow for my tastes, so I'm not a bokeh fiend or anything. Sometimes you want to take everything in (once again, basically ungraded):
My thoughts about DoF are that it's about the story you want to create. It's about controlling what the focus of the shot is. Sometimes situations are so crowded that frames would be chaos and it's difficult to tell who the subject is or know where you're meant to be looking. Obviously that's the main job of composition, and to an extent DoF is about how 'deep' your composition is. Very shallow DoF says "that other stuff back there isn't important" and deep DoF says "everything here is important". It's a creative tool, which is why I don't want to be forced to use it to control exposure.
Exposing with SS isn't my preferred aesthetic, but the number of times that I only just get a shot of the kids doing something funny means that shaving every second off the reaction time really counts. I've even taken the first two frames of a shot (or the first two in focus) and used Optical Flow to create a kind of moving snapshot of the moment, because frame #3 of the clip showed the smiles fading or the kids noticing the camera or something else coming into play, and the moment was clearly gone. Literally, I got the shot, but had I been 0.08s later I'd have missed it. This has happened more times than I can count, and often they're the best moments. I don't know about you, but I can't set a variable ND to correct exposure in under 0.04s.
Once a camera can use auto-ISO and auto-eND to set exposure, I will set the shutter angle to 270 and leave it there forever. I say 270 instead of 180 because I slightly overdo the things I can control in order to compensate for the things I have no control over, like lighting conditions, or sometimes even my filming location (like if I'm stuck in my seat on a tour bus / boat / etc).
  11. You're right that photographers were all about FF, but I think you have to remember that photographers lusted after Medium Format, but didn't buy it because it was far far too expensive, and was too slow, with slow AF and slow burst rates (or no burst rates at all) etc. The fact that we have a MF camera coming out that is HALF the price of previous MF models, and is usable in real-world conditions instead of just being a 'studio camera', well, that changes things. I think photographers have more lust for megapixels and sharper lenses than they do for FF, so if they had to choose between FF and MF I think they'd choose MF in a heartbeat. Of course, that involves changing systems, so it's not going to happen quickly, and the price may have to come down for the majority of the stills market to start thinking of it as an accessible option. So yeah, I think people didn't talk about MF because it was viewed as unattainable, but now that's changing, we might see the lust start to emerge.
They are slow to change, and rightly so - if the image was the only thing that mattered then they'd change quickly, but as you know, it's not the most important thing on a film set, and it takes time to understand the new lenses and the new colour science and all that stuff. Plus, it's not like shooting with an Alexa is a terrible place to be as your default option!
I agree that it will be quantifiable, but we haven't quantified it yet, and I've dug pretty deep into this stuff. I think people don't know. It's probably some specific combination of things, which of course makes it that much more difficult to zero in on. In the meantime we only have our aesthetic impressions to go from. There are quite a lot of subjective accounts from highly respected people that larger sensor sizes and certain lenses or lens designs have a certain X-factor, which is really one of the main attractions of MF and the purpose of this thread.
My entire philosophy of image has changed over the last few years as I've gone on this journey. I started out with the philosophy of getting a neutral high-resolution high-bitrate image shot with sharp lenses and then degrading it in post to give the aesthetic that is desired. What I learned along the way is that:
  • resolution does matter and I want less of it, not more, so now I shoot 1080p
  • DoF has a much larger impact on aesthetic than I thought, so now I shoot with wider-aperture lenses (sometimes but not always used at larger apertures)
  • bit-depth and bit-rate are highly important, so now I shoot 200Mbps 1080 at 10-bit in a 709 profile (not log)
  • halation, flares, and contrast can't be simulated very well in post, so now I try to have lenses with a less clinical presentation in these areas
In short, I worked out what is more important and what is less important, and I worked out what I can and cannot do convincingly in post. Therefore, I moved those things to be done right in-camera. If medium format becomes accessible for video (it's not in my price range yet!) then that's another thing I'll be able to shift from trying to do in post (but not knowing how, like your comment about not knowing what is going on outlines) to doing in-camera and having it baked-in to begin with.
  12. Absolutely, and that's why I am even tangentially interested in the format. My expectations of a camera are that the 'rig' is the camera body, an SD card, a lens, an on-camera mic and a wrist strap, and then I put a couple of spare batteries and a couple of other lenses in my bag and I'm off for an 18-hour day, during which I shoot anything and everything that piques my interest. In those circumstances my phone would take better images than a Phase One, because my phone would suit the conditions and the Phase One would be a PITA.
I'm looking at it from the perspective of video-only and also from the perspective of getting good-enough quality with a portable package. For me, shooting with an external monitor / recorder is a downside, as it means the rig is larger, heavier, requires more complexity in power solutions, has messy cables, and creates unwieldy file sizes. The A7S3 internal codecs are good enough for me (actually they're radically more than what I'd need, but luckily they have high-quality 1080p and ALL-I codecs). I just saw that the GFX100 can do 1080 at 400Mbps, so I guess that's fine for my purposes. In a sense, the more these cameras are built around external integration, the less usable they become on their own with things like focus peaking and exposure tools etc, so that's not a good thing. Anyway, it's good to have the option of external RAW, but keeping good internal quality should remain a high priority.
I meant that with a 100MP sensor it's odd that it can only do 4K. Considering the hype has moved to 6K and 8K, which are now settled as standards, you'd think that offering these would be a 'home turf' advantage of MF. If you think about MFT, doing 6K or 8K means having to work on new sensors and dealing with all kinds of new issues, but MF was already the king of high resolution, so you'd think that these things would be playing to the strengths it already has.
Ok, that makes sense. I guess I see the crop factor as being a strength and a weakness. It's great if it can use FF lenses, but that also means that the sensor isn't so much larger than FF. Given a hypothetical 6x4.5 camera as a competitor, it wouldn't be able to use FF lenses, but would have a huge sensor size advantage over FF, so it would be easily worth the trouble. I guess that brings us to....
For me, I see MFT as having the advantage of being what I already have lenses for. FF is the thing that is now good enough, has a larger sensor, and has heaps of lenses and overall support. MF represents going away from what I already have, and from where all the lenses are, but you'd do it for the mojo. Considering that the GFX is only a little bit larger than FF, but is the best you've ever seen that lens and is almost good enough to use the M-word, maybe a 645 sensor would be crazy good and worth all the 645 lens shenanigans that would be required. To me, a format that is only a little bit better than FF seems to be skimping on the thing that it really has going for it. Now, of course, there are limits - I'm not going to be lining up at the camera store to buy an 8x10 camera for shooting my travel films, but MF needs to offer something significant over FF to really make it worth the hassle of going through that transition.
@mercer is talking about a certain X-factor that can occur with larger sensor sizes. I've been trying to chase down what this might be, and you're right that it's not FOV or DoF, but it's important to know that the math doesn't explain everything that's going on with sensors and lenses.
I've tested a lot of lenses in controlled conditions and when you do these tests you start to see differences that there are no readily available explanations for. An example of this is the Takumar lenses, which render images that are noticeably flatter and less 3D-looking than other lenses, and this is under controlled conditions with everything else being equal. Same focal lengths, apertures, same lighting, camera position, etc etc. It's something that the Takumars are known for. The question is, if the background is the same level of blurriness, then how is the perception of 3D space different? I've been looking at this question for years and haven't come up with anything, except that I've seen it myself enough times to know that something is going on. Sensor size can have a similar effect, some things look more 3D than other things. Not sure why, it just does. This is one of the attractions of larger sensors. See @BTM_Pix comments above about the Contax lens being better than any other camera he's seen it on. Why would this be the case? Who knows. I've played with things like this and these effects hold up even if you decrease the resolution, bitrate, and even colour depth and even if you make the images B&W, so I can't readily find an explanation for it. FF only took a few years to 'catch up' to where MFT was, and the MF cameras we're talking about aren't that far away from FF in terms of sensor size. Certainly they're a lot closer to FF than FF was to MFT. If there is market demand, which is debatable considering FF still has a lot of hype and many haven't moved from MFT or S35 to it yet, it could be that MF 'arrives' in a few years.
  13. I don't see a difference between 24p and 30p. Or at least, 30p doesn't have the same look to me that 60p has.
I figure the only way to get the SS you want is by having an ND. You can have a variable ND or you can use fixed NDs and then vary your aperture (assuming it's declicked) to fine-tune it. The reason I say that is because there is very little tolerance for variations in SS if you want some motion blur in the frame. To put it in context, let's say you're aiming for a 180-degree shutter. Obviously a 170-degree shutter would still be fine, but where are the boundaries of the aesthetic? Some say that a 360 shutter gives too much blur, but let's say that we're ok with it. Steven Spielberg famously used a 45-degree shutter on Saving Private Ryan because he wanted the aesthetic to be jarring and he wanted the audience to be able to see the bits of people's bodies splattering everywhere when things exploded. Let's say you don't want to go this far, so a 90-degree shutter is our limit. That gives us a 90-360 degree range.
Take the sunny-16 rule. For outdoor exposures at ISO 100 you'd typically have a 1/100s exposure when set to f16. No-one contemplating not using an ND will want to shoot at f16, so let's say they're going to be shooting more like f4. That's 4 extra stops of light we have to get rid of in the SS, so that's a SS of 1/1600. That's a 180-degree shutter if we're shooting 800fps. This is absolutely nowhere near what we need for 24fps, 30fps, 60fps, or even 120fps, so changing from 24p to 30p because you don't want to use an ND doesn't work. Using a graduated ND won't take 4 stops off the brightest part of the image, so that doesn't work. Polarisers won't work either. Maybe you never shoot in bright sunlight, sure.
For me, I realised that I would need NDs that went from something like 6 stops to zero, and even then I'd still need to dial them in every time and miss a bunch of my shots. So I just abandoned using one. I used to use a fixed one on my XC10 because it couldn't do a short enough SS to expose during the day with the aperture wide open, as it was a cinema camera and not a hybrid. I will happily go back to using an ND when they implement a built-in eND that is controlled by the camera automatically, like the Sony cinema cameras have. That way I control the aperture because it's a creative tool, I control the SS because it's a creative tool, and the camera controls the ND and the ISO to get the full range of exposure values, because neither ND nor ISO is a creative tool.
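To make the arithmetic above concrete, here's a minimal Python sketch, assuming the sunny-16 rule and the usual shutter-angle relationship (angle = 360 × shutter time × frame rate):

    import math

    def stops_wider_than_f16(aperture):
        # Extra stops of light admitted relative to f/16 (f4 -> 4 stops)
        return 2 * math.log2(16 / aperture)

    def sunny16_shutter(iso=100, aperture=16):
        # Sunny-16: 1/ISO seconds at f/16, sped up by the extra stops let in
        return (1.0 / iso) / 2 ** stops_wider_than_f16(aperture)

    def shutter_angle(shutter_time, fps):
        # Equivalent shutter angle in degrees at a given frame rate
        return 360.0 * shutter_time * fps

    t = sunny16_shutter(iso=100, aperture=4)        # 1/1600 s
    for fps in (24, 30, 60, 120, 800):
        print(fps, "fps ->", round(shutter_angle(t, fps), 1), "degrees")
    # 24 fps -> 5.4, 60 fps -> 13.5, 120 fps -> 27.0, 800 fps -> 180.0

So at 24-120fps in bright sun at f4, nothing gets you back into a 90-360 degree shutter except an ND.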
  14. I shoot handheld and don't have problems with pans without a subject, but they're not really a big part of what I shoot, especially now that I have a 15mm-equivalent lens, so landscapes etc don't require that much panning. The issue with 24p panning is that a 180 shutter obscures the detail, but that doesn't bother me as I'm shooting with auto-SS for exposure, so normally very short shutter speeds. I'm aware it makes the video less cinematic, but not having to use an ND means I get about 20-30% more usable shots, considering that much of the time I see something happening and only just get the camera going and in focus, and in the edit I end up using the first 2-3 seconds of the clip - if I had to manually expose with an ND I would have missed the moment.
I chose the GH5 because of the 10-bit internal and IBIS. Even now, if I was re-buying my whole setup I would still consider the GH5 the best option due to the 200Mbps 422 ALL-I 1080p 24p and 60p modes downscaled from 5K, the IBIS, the fact that it's much lighter than things like the S1H, the fact that I manually focus so don't care about AF, and the MFT crop factor giving me a 2X zoom on long lenses, which I use for filming sports in the 120p mode. Is it the best camera available? No. Is 4K60 its only defining factor? No way. It is still considered a workhorse today, and the GH5 FB group I'm in has 37k+ members and a steady stream of people buying a GH5 for the first time and asking questions as they familiarise themselves with the camera, or asking advice about buying a GH5/GH5s as a second or third camera for their setups. The group is interesting in that it seems to be full of people who are shooting things, posting their work, and are still very excited about the benefits that it provides.
  15. Good info. Interesting to hear it's great for both stills and video. Are you saying that it's better than the Phase One and Hasselblad MF cameras? Or are they in a different category in your eyes? The addition of ProRes RAW certainly gives it a serious shot in the arm for video, although that puts it more in cine-camera territory in terms of form factor, compared to cameras like the P6K or A7S3 that can record at ferocious bitrates internally or to nicely compact USB-C drives. It also gives the camera a big price shift in the wrong direction. I'm also a bit surprised that with a 100MP sensor it's only 4K30 12-bit - a P6K can do 6K60 internally. Considering that MF is all about huge resolution it's a bit odd... I did appreciate the composition in the video at 1:44 though!
How often are you utilising the full resolving power of the sensor in your still images? I stopped taking stills when I got into video, but I recall the old "no-one needs more than X megapixels because the larger the print the further you view it from" adage, and although I disagreed with people asserting that the magic number was 2MP, 3MP or 5MP, my gut suggests that it's probably less than 100MP unless you're the FBI picking out terrorists from a crowd or whatever.
It's interesting that these are so close to FF; it means you can use FF lenses, but it also means that any "Tom the DP" size advantage that it gives you (or @TomTheDP ) would be relatively minimal. It's interesting you mention that, because in one of the other ProRes RAW videos I saw, I noticed a Canon TS lens pop up in there. Do they all cover larger sensors?
  16. I found this an interesting video:
There are a few things about this that struck me. First, it looks like an ad, which is odd. The other things are that they shot the whole thing on the camera hand-held, that the lenses seemed to cover the basics (but weren't especially fast), and that it didn't look fundamentally different to a well-shot video from <insert nice cine camera here>.
@elgabogomez I agree that you have to consider the whole package in terms of codecs, tech features, battery life, ergos, etc etc. I also agree about the relative newness of the format with lenses and other supporting factors, as @noone says, but think about where FF was 3 or 5 years ago, without the frame rates, stabilisation, etc that FF now offers and that are now considered requirements rather than notable features. Your Polaroid 600SE might appreciate in value proportionately to the newer MF systems, and you might be able to swap the Polaroid for a complete MF setup at some point in the future as more players enter the MF space and lens systems are built out etc.
I think it will be a very very long time before video can do fake DoF like you're suggesting @Video Hummus - as you say, the effect has to be consistent across many frames, and considering the push for higher and higher resolutions, the bar is continually being raised on what level of quality the fake effect has to achieve. I suspect that if you programmed the latest iPhones to take 24 Portrait-mode photos per second and downscaled the result to 640x480 the effect would be perfect, and maybe it would even hold up at higher resolutions than that, but 4K? Not a hope in hell. Sensor technology might have to have an oversampling factor too - you might need an optical and depth camera pair that operate at 5x the resolution of the output image for the result to be video-perfect - but as sensor resolution increases so does the expectation about distribution resolutions, so it might be an equation that won't be answered for some time.
  17. No offence taken! I've played with shutter angle and waving my hand in front of my face, and I've gotten a sense of how 24p is different from reality. The subjective experience for me is that 24p has a 'look' which is made to look the least unnatural by having a shutter angle somewhere in the 120-240 degree range, depending on your mood and whether it's dark etc. But the thing is that 60p doesn't look more neutral to me; it looks like it has about the same amount of a 'look' in comparison to reality that 24p does, but the aesthetic of that look is very different.
24p seems to have a kind of 'heightened sense' aesthetic, like reality can have in moments of strong emotion. Kind of like the visual component of "time slowed down", and in a sense it's an effect that increases the romance and emotion and depth and pain and very texture of experiencing the world as an emotional animal. 60p has an aesthetic that makes reality seem like every atom has been lubricated and everything is kind of slipping all over itself, kind of like everything is falling in slow motion except that it's doing it at the speed of reality, and perhaps a little bit too fast for comfort. It has an aesthetic like the love child of slipping over in the bath, being scammed by a con artist who was so good the only warning you got was that everything was happening slightly too easily, and what I imagine it would be like taking pills that make you smarter and give you superhero reflexes. In my mind, 24p has a more relatable aesthetic - it fits with things that I occasionally experience in my sober real life, but it's also familiar from watching movies and TV, so that's an advantage too. 60p has an aesthetic that I have never experienced in sober real life. 24p disappears, but 60p never seems to fade away into the background; it's like I've had my brain downloaded into a robot body and somehow they got the code wrong.
My answer to your question about what to film for a simulation ride was 60p, but not because it mimics reality - rather for two reasons: first, in motion simulations it's been shown that lower frame rates make people nauseous, and second, it doesn't look like reality or like 24p. So people would come out of the ride having kept their lunch and having had an experience that they'd describe as "wow, it really was an experience" rather than "I watched a movie and the seat moved". Talking about frame rate and shutter angle to mimic reality is like talking about drawing with crayons to mimic a moving sculpture - there's enough similarity to make it seem reasonable to ask the question, but only good enough to choose between fundamental challenges that cannot all be met.
  18. My first answer would be "find someone else to do this". My second answer would be "no really, I'm not the person for this task". My third answer would be 60p 360 shutter. But that's only because you said that the displays are limited to 60p. If I had access to a 24-240fps display that could do any frame rate in-between then I would test the 1080p VFR mode at 24/30/60/90/120 and 180fps. I would then downscale the whole lot to SD in order to eliminate the fact that the slower frame rates have more data per pixel than the higher frame rates. Based on that I would then look at the resolution/framerate/bitrate combinations that the camera could offer, and work out how to choose between them, shoot identical test materials, and do a blind test with an audience to determine which mode to use. Of course, it's a ridiculous question, along the lines of "if you needed to paint the ceiling but only had a drawer of cutlery to apply the paint, which utensil would you choose, and by the way you're limited to only a fork or spoon".
  19. Yeah, creativity comes first, not second, or ninth, or last... but a DSLR form factor operating at a high SS and frame rate would be great for sniping shots from very high-risk angles, like holding it up above the people in front of you, holding it out a window from a moving vehicle, or trying to track an athlete or vehicle as they go screaming past you on a track. The upgrade from 20MP 10fps stills to 50MP 60fps slightly compressed stills would be an enormous upgrade in terms of getting the shot.
I took my modified action camera out today to a nearby national park and filmed some stuff, and the shooting experience was completely different holding a camera the size of a matchbox compared to holding a full-size camera - people don't notice you in the same way, and you can throw it around in a different way too. My GH5 captures lovely images, but those images are different because the camera affects where you are, what you shoot and how the people you're pointing it at react to it being there. No IQ upgrade can make those changes, which is why your RED got put back in its box.
  20. Shoulda-woulda-coulda. I remember hearing about this funny thing called "bitcoin" that was a "virtual currency" and people were generating them with their home computers. I contemplated setting up my home PC to generate some. I was also contemplating SETI@home, but never got around to either of them. My thought was that if I did, I'd just run it all night and all day when I was sleeping and at work. I genuinely have no idea when that was, but I remember that when it hit the news that you could buy a pizza with bitcoin I had already heard of it, so it was even before the pizza-buying days. Had I done it, not lost the digital wallet, not got robbed, and not had an ego-centric-rockstar-complex-related breakdown from getting rich, I probably would have been a billionaire by now.
  21. This idea comes up every so often; the first time was when mere mortals could get their hands on 4K cameras. The results at the time were "sure", and (IIRC) Popular Electronics magazine featured a cover photo that was a still from a 4K camera, which obviously with the 8K and hyper-datarates of today's cameras would be far higher IQ. One of the interesting counter-arguments came from Peter Hurley, a headshot photographer who did a trial with a RAW-shooting cine camera: the benefit of always "getting the shot" by never missing the moment was more than offset by the work in post of having to find the frames you wanted, and all the media management that comes with the huge files.
Obviously that was the perspective of a stills photographer, so if you're shooting video and stills, then pulling the stills out of the video would be a no-brainer: even though it will take work to go through the footage and find the right frames, you're partly reviewing the footage during video editing anyway, you're colour correcting the footage anyway, you're managing your media anyway, plus you don't have to pay attention during the filming process to stills and video separately, and can capture both simultaneously using the one set of equipment. Plus it gives you the ability to reframe in post for the 4K(TM) that every client seems to want, despite not knowing WTF it even is, let alone knowing that resolution is the last thing that makes a video great.
I'd suggest that you'd still want the old three-camera setup for the ceremony, reception speeches and reception dancing, although the ability to shoot wider and reframe in post might mean you can get away with not having a second shooter but still "simulating" one in post with things like getting a close-up then a wide, and maybe even a pan/tilt if your editing style warrants it.
  22. Yeah, FF to MF is one "step"... just like MFT to APSC/S35 is 1.33x, APSC/S35 to FF is 1.5x, and FF to MF is either 1.27x or 1.56x depending on the sensor. Adding to @BTM_Pix's comments below, MFT -> FF is very similar to APSC -> MF, with 2x versus 1.89x or 2.3x. I just looked up 645 lenses and wow, I didn't realise that the 6x4.5 sensor has a crop factor of 0.58, which is even more extreme than the Phase One etc MF cameras.
You're right about anamorphics, essentially increasing the sensor size through optical compression/decompression techniques. Although with the 'lesser' size increases in the MF cameras on offer, maybe we should be using those open-gate with anamorphic adapters on 645 lenses... Why did I think of the Titanic when I read that? Anyway, moving on!
You're right about the wow factor and feeling like you should be getting something impressive after re-buying all your equipment. Do we know what the new GFX100S image is likely to look like? If MFT was abandoned and I switched systems, then a compact MF camera with a decent 1080p mode would certainly tempt me to just skip FF. Do you have any idea why MF should be fundamentally a nicer image than FF?
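For reference, those crop-factor steps can be sanity-checked from the sensor diagonals. A rough Python sketch, using approximate active-area sizes I've assumed here (exact dimensions vary by camera, so the results land close to, but not exactly on, the figures quoted above):

    import math

    # Approximate active-area dimensions in mm (assumed values; they vary by model)
    FORMATS = {
        "MFT":            (17.3, 13.0),
        "APS-C/S35":      (23.6, 15.7),
        "Full frame":     (36.0, 24.0),
        "MF 44x33 (GFX)": (43.8, 32.9),
        "MF 53.4x40":     (53.4, 40.0),
        "645 film":       (56.0, 41.5),
    }

    ff_diag = math.hypot(*FORMATS["Full frame"])

    for name, (w, h) in FORMATS.items():
        crop = ff_diag / math.hypot(w, h)   # below 1 means the format is larger than FF
        print(f"{name:>15}: crop factor vs FF = {crop:.2f}")
    # MFT ~2.0, APS-C ~1.53, 44x33 ~0.79, 53.4x40 ~0.65, 645 film ~0.62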
  23. From where I sit, as a GH5 owner noting the year-on-year absence of a GH6, combined with Hollywood types having their own FF fanboi crush on the new "LF" cinema cameras, the future looks like it might be larger sensor sizes. In anticipating a FF future, and thinking of my love of vintage lenses, I had a period of a few days where I contemplated buying some FF lenses just to keep in case I'm forced to abandon MFT some years in the future. During the tumble down that particular rabbit hole I wondered if I should jump straight to Medium Format lenses, as potentially they've escaped the FF-fad inflation factor, and it would also hedge me against a take-over by Medium Format cameras. I have since worked out that if that happens then I can just factor the lenses into my system swap, and that with the plethora of new cheap fast glass coming from China, there will be acceptable options available when I make the change. I also realised that the 'look' of vintage primes is mostly due to some combination of simpler optical recipes, lower manufacturing tolerances and less sophisticated coatings. The fact that cheaper lenses typically have simpler optical recipes and lower manufacturing tolerances covers off those angles, and the less sophisticated coatings can be emulated with filters, which are widely available.
This all made me curious... What is the current state of MF video? Resolutions, bitrates, bit-depths, codecs? Do you think that MF will overtake FF? Before you say it's not possible because of sensor read speeds and sensor size for IBIS motors, consider that the GH5 was miles ahead of the FF video offerings when it was released, but the current crop of FF cameras has made up most/all of that ground in the subsequent years. I expect this will continue, so having 6K120 and 5.5-stop IBIS on a Medium Format camera is simply a matter of time and market demand, not a problem with the laws of physics.
I have absolutely LOVED the video I have seen from medium format cameras in the past, and I'm not sure if that's the lenses (which are ridiculously high quality), or the larger sensor capturing more light, or the shallower depth of field, or the fact that the colour science of a 5-figure camera is potentially better than what CaNikon is famous for, but the images were moving in a way seldom seen from other cameras, and almost all MF video had that glorious feel. Thoughts?
  24. Human vision is fundamentally different from the way that video works, so there is no frame-rate & shutter-angle combination that makes sense. To expand on this, imagine you have a fan with only one fan blade, and imagine that it's spinning quite quickly. We would see the fan blade as a blur between (let's say) the 12 o'clock position and the 6 o'clock position. Then a tiny bit of time passes, and now we see the fan blade as a blur between 1 o'clock and 7 o'clock, etc. To put it into traditional video terms, the shutter angle is much, much more than a 360-degree shutter.
There have been attempts to actually simulate this. They filmed scenes at a very high frame rate using a 360 shutter, and then you can combine many frames together: let's say that output frame #1 has capture frames 1-100, then output frame #2 has capture frames 11-110, etc. In this way, you can have a shutter angle that is larger than 360 degrees. You could also do things like have the motion blur be a fade rather than all parts of the motion blur being equal. (There's a rough sketch of this frame-stacking idea after this post.) I think this might be what we're running into when we talk about 24p vs 60p. Maybe 24p has the right motion blur, but 60p has the right refresh rate, yet can't have a shutter angle of more than 360 degrees. I believe that computer games have worked out that the human eye can't detect anything above a certain frame rate, i.e. 120fps or 240fps or something, so in that instance there's no point rendering a game faster than that. So what we need is a frame rate at that pace, but with motion blur around 1/50th of a second (corresponding to 24p 180 shutter), which with current technology isn't possible. Thus, the 24p vs 60p debate will never be resolved, because the technology isn't designed the right way.
Actually, it's that 24p is a problem because people who do video use equipment designed for computer gaming, but don't know that that's what they're doing. Do you recall my earlier post where I said that film-making is deceptively simple and that people don't know what they don't know? This is one of the things I was talking about. There's no real effort required to get great 24p - just buy equipment designed for film-making and not for computer games. There are a huge number of external display adapters available for purchase, and they're very affordable too. BlackMagic sells a bunch of them here: https://www.blackmagicdesign.com/products including the Decklink, which is $145 for 1080p and $195 for the 4K version. I suspect these only work with Resolve, but there would be others that work with other NLEs. These will also give you support for 10-bit, HDR, and SDI if you have SDI monitoring equipment, and perhaps best of all, they are a completely managed colour pipeline, so the operating system and display drivers and all the crap can't stuff up your colour calibration, giving you a completely calibrated display to work from. Most monitors will happily display a 1080 or 4K signal at 24p if that's what the hardware is giving them, so all you need is one of these interfaces and all the problems you're facing will go away.
You could make the argument that this gives you a great 24p pipeline but doesn't solve it for everyone viewing your videos, and that's true. For them, it will be a mixture of watching on computers designed for gaming, phones, and smart TVs. People watching on computers probably aren't going to have good 24p playback, but as has already been mentioned, will they even notice?
I'm not sure about phones, but smart TVs may do this happily, considering they're designed for media consumption, not for gaming, though it might well be patchy. I remember setting up my media boxes to be PAL and not NTSC (before I had a completely smart TV), so they were definitely broadcast-focused rather than having a PC/gaming mentality. Also, you'd be surprised at how many people can spot the 50p "soap opera effect". I doubt that many would spot the difference between 24p and 30p, but you never know. If you ever sort out your equipment to give you proper 24p (or 25p) playback, you could test your friends and family and see if they can tell. You might be surprised.
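A minimal sketch of that sliding-window frame-stacking idea, assuming high-frame-rate capture shot with a ~360-degree shutter and loaded as a NumPy array (the function name and parameters here are just for illustration):

    import numpy as np

    def stack_frames(capture, capture_fps, output_fps, shutter_angle):
        """Average high-frame-rate capture into lower-rate output frames.

        capture:       array of shape (num_frames, H, W[, C]) shot with ~360-degree shutter
        shutter_angle: effective output shutter angle in degrees; values over 360
                       make consecutive output frames share part of their motion blur
        """
        step = capture_fps // output_fps                 # capture frames per output frame period
        window = int(step * shutter_angle / 360)         # capture frames averaged per output frame
        frames = []
        for start in range(0, len(capture) - window + 1, step):
            frames.append(capture[start:start + window].mean(axis=0))
        return np.stack(frames)

    # The example in the post (output frame #1 from capture frames 1-100, #2 from 11-110)
    # corresponds to step=10 and window=100, i.e. an effective shutter angle of 3600 degrees.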
  25. kye

    The D-Mount project

    As promised... Shot on the mighty SJ4000 with a replacement 8mm M12 lens. I brought the footage into Resolve and pounded it with a hammer until it no longer looked like a cheap modern camera, but reminded me of an expensive older camera. I might have been a little heavy-handed with the film grain though - I thought YT would compress it slightly more. Anyway, enjoy.