kye
Members · 8,027 posts
Everything posted by kye

  1. My advice is to forget about "accuracy". I've been down the rabbit-hole of calibration and discovered it's actually a minefield, not a rabbit hole, and there's a reason there are professionals who do this full-time - the tools are structured in a way that deliberately prevents people from being able to do it themselves. But, even more importantly, it doesn't matter. You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable. Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client and then only accept notes from the client when viewed on the device the colourist shipped them. It's not even that the devices out there aren't calibrated, or that manufacturers now ship things with motion smoothing and other hijinks on by default; it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't even tagged and interpreted correctly.
 Here's an experiment for you. Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile. Use the default 709 colour profile without any modifications. Then, in post, take the LOG shots and try to match them to their respective 709 images manually, using only normal grading tools (not plugins or LUTs). Then try to grade each of the LOG shots to just look nice, using only normal tools. If your high-DR scene involves actually having the sun in-frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, CST into other camera spaces using their manufacturers' LUTs, etc.
 Gotcha. I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and have them further away. The inverse-square law is what is giving you the DR issues (there's a quick sketch of the falloff maths at the end of this post).
 That's like comparing two cars, but one is stuck in first gear. Compare N-RAW with Prores RAW (or at least Prores HQ) on the GH7. I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar, so your grading techniques will be applicable to both and be less of a variable in the equation.
 People interested in technology are not interested in human perception. Almost everyone interested in "accuracy" will either avoid such a book out of principle, or will die of shock while reading it. The impression I was left with after I read it was that it's amazing that we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc) is so far away from how we see that asking "how many megapixels is the human eye?" is sort of like asking "what does loud purple smell like?".
 Did you get to the chapter about HDR? I thought it was more towards the end, but could be wrong. Yes - the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car. This is all for completely predictable and explainable reasons, which are all in the colour book. 
I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces. I don't know how much you know about the Apple Gamma Shift issue (you spoke about it before but I don't know if you actually understand it deeply enough) but I watched a great ~1hr walk-through of the issue and in the end the conclusion is that because the device doesn't know enough about the viewing conditions under which the video is being watched, the idea of displaying an image with any degree of fidelity is impossible, and the gamma shift issue is a product of that problem. Happy to dig up that video if you're curious. Every other video I've seen on the subject covered less than half of the information involved.
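 On the inverse-square point above, here's a quick sketch of the falloff maths - just illustrative Python with made-up distances, not measurements from any real setup:

```python
# Illuminance from a point source falls off with the square of distance,
# so the brightness difference between two subjects lit by the same lamp
# is log2((d_far / d_near)^2) stops.
import math

def stops_between(d_near_m: float, d_far_m: float) -> float:
    """Stops of brightness difference between two subject distances from one light."""
    return math.log2((d_far_m / d_near_m) ** 2)

# Two subjects 3.5m apart, lamp close to the first one: ~6 stops of difference,
# which already eats most of a display's range on its own.
print(stops_between(0.5, 4.0))  # ~6.0
# Same 3.5m spacing, but the lamp pulled back (and turned up to compensate):
print(stops_between(3.0, 6.5))  # ~2.2
```

 Pulling the light back and turning it up flattens the falloff across the scene, which is the whole trick.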
  2. I remember the discussions about shooting scenes of people sitting around a fire, and the benefit was that it turned something that was a logistical nightmare for the grip crew into something that was basically like any other setup, potentially cutting days from a shoot schedule and easily justifying the premium on camera rental costs. The way I see it, any camera advancement probably does a few things:
 - makes something previously routine much easier / faster / cheaper
 - makes something previously possible but really difficult into something that can be done with far less fuss, so the quality of everything else can go up substantially
 - makes something previously not possible become possible
 ..but the more advanced the edge of possible/impossible becomes, the fewer situations / circumstances are impacted. Another recent example might be filming in a "volume", where the VFX background is on a wall around the character. Having the surroundings there on set instead of added in post means camera angles and sight-lines etc can be done on the spot instead of operating blind, so acting and camera work can improve.
  3. I'm seeing a lot of connected things here. To put it bluntly, if your HDR grades are better than your SDR grades, that's just a limitation in your grading skill level. I say this as someone who took an embarrassing amount of time to learn to colour grade myself, and even now I still feel like I'm not getting the results I'd like. But this just goes to reinforce my original point - that one of the hardest challenges of colour grading is squeezing the camera's DR into the display space's DR. The less squeezing required, the less flexibility you have in grading, but the easier it is to get something that looks good. The average quality of colour grading dropped significantly when people went from shooting 709 and publishing 709 to shooting LOG and publishing 709.
 Shooting with headlamps in situations where there is essentially no ambient light is definitely tough though; you're definitely pushing the limits of what current cameras can do, and it's definitely more than they were designed for! Perhaps a practical step might be to mount a small light on the hot-shoe of the camera, just to fill in the shadows a bit. Obviously it wouldn't be perfect, and would have the same proximity issues where things too close to the light are too bright and things too far away are too dark, but as the light is aligned with the direction the camera is pointing it will probably be a net benefit (and also not disturb whatever you're doing too much).
 In terms of noticing the difference between SDR and HDR, sure, it'll definitely be noticeable, I'd just question if it's desirable. I've heard a number of professionals speak about it and it's a surprisingly complicated topic. Like a lot of things, the depth of knowledge and discussion online is embarrassingly shallow, more reminiscent of toddlers eating crayons than educated people discussing the pros and cons of the subject.
 If you're curious, the best free resource I'd recommend is "The Colour Book" from FilmLight. It's a free PDF download (no registration required) from here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php In case you're unaware, FilmLight are the makers of Baselight, which is the alternative to Resolve except it costs as much as a house. The problem with the book is that when you download it, the first thing you'll notice is that it's 12 chapters and 300 pages. Here's the uncomfortable truth though: to actually understand what is going on you need a solid understanding of the human visual system (our eyes, our brains, what we can see, what we can't see, how our vision responds to the various situations we encounter, etc). This explanation legitimately requires hundreds of pages because it's an enormously complex system, much more so than any reasonable person would ever guess. This is the reason most discussions of HDR vs SDR are so comically rudimentary in comparison. If camera forums had the same level of knowledge about cameras that they do about the human visual system, half the forum would be discussing how to navigate a menu, and the most fervent arguments would be about topics like whether cameras need lenses or not.
  4. I think this is the crux of what I'm trying to say: anamorphic adapters ARE horizontal-only speed boosters. Let's compare my 0.71x speed booster (SB) with my Sirui 1.25x anamorphic adapter (AA).
 Both widen the FOV:
 - If I take a 50mm lens and mount it with my SB, I will have the same horizontal FOV as mounting a (50 x 0.71 = 35.5) 35.5mm lens. This is why they're called "focal reducers" - they reduce the effective focal length of the lens.
 - If I take a 50mm lens and mount it with my 1.25x AA, I will have the same horizontal FOV as mounting a (50 / 1.25 = 40) 40mm lens.
 Both cause more light to hit the sensor:
 - If I add the SB to a lens, all the light that would have hit the sensor still hits the sensor (but is concentrated on a smaller part of it), and the parts of the sensor that no longer get that light are illuminated by extra light from outside the original FOV, so there is more light in general hitting the sensor and the image is brighter. This is why it's called a "speed booster" - it "boosts" the "speed" (aperture) of the lens.
 - Same for the AA.
 Where they differ is compatibility:
 - My speed booster has very limited compatibility, as it is an M42-mount to MFT-mount adapter, so it only works on MFT cameras and only lets you mount M42 lenses (or lenses that you adapt to M42, but that's not that many lenses).
 - My Sirui adapter can be mounted to ANY lens, but will potentially not make a quality image for lenses that are too wide / too tele or too fast, if the sensor is too large, if the front element of the lens is too large (although the Sirui adapter is pretty big), or just if the internal lens optics don't work well with it for some optical-design reason.
 The other advantage of anamorphic adapters is they can be combined with speed boosters:
 - I can mount a 50mm F1.4 M42 lens on my MFT camera with a dumb adapter (just a spacer essentially) and get the FF equivalent of mounting a 100mm F2.8 lens to a FF camera.
 - I can mount the same lens on my MFT camera with my SB and get the FF equivalent of mounting a 71mm F2.0 lens to a FF camera.
 - I can mount the same lens on my MFT camera with my AA and get the FF equivalent of mounting an 80mm F2.24 lens to a FF camera (but the vertical FOV will be the same as the 100mm lens).
 - I can mount the same lens on my MFT camera with both SB and AA and get the FF equivalent of mounting a 57mm F1.6 lens to a FF camera (but the vertical FOV will be the same as the 71mm lens).
 So you can mix and match them, and if you use both then the effects compound. In fact, you'll notice that the 50mm lens is only 57mm-equivalent on MFT, so the crop-factor of MFT is converted to be almost the same as FF. If instead of my 0.71x speed booster and 1.25x adapter we use the Metabones 0.64x speed booster and a 1.33x anamorphic adapter, that 50mm lens now has the same horizontal FOV as a 48mm lens, so we're actually WIDER than FF. (There's a sketch of the arithmetic at the end of this post.) 
 What this means:
 - On MFT you can use MFT lenses and get the FOV / DOF they get on MFT
 - On MFT you can use S35 lenses and get the FOV / DOF they get on S35 (*)
 - On MFT you can use FF lenses and get the FOV / DOF they get on FF (**)
 - On S35 you can use S35 lenses and get the FOV / DOF they get on S35
 - On S35 you can use FF lenses and get the FOV / DOF they get on FF (*)
 - On S35 you can use MF lenses and get the FOV / DOF they get on MF (**)
 - On FF you can use FF lenses and get the FOV / DOF they get on FF
 - On FF you can use MF+ lenses and get the FOV / DOF they get on MF (***)
 The items with (*) can be done with speed boosters now, but can also be done with anamorphic adapters, so the adapters give you more options. The items with (**) were mostly beyond reach with speed boosters, but if you combine speed boosters with anamorphic adapters you can get there and beyond, so this gives you abilities you couldn't get before. The item with (***) could be done with a speed booster, but there aren't a lot of speed boosters made for FF mirrorless mounts, so availability is patchy, and the ones that are available might have trouble with wide lenses.
 One example that stands out to me is that you can take an MFT camera, add a speed booster, and use all the S35 EF glass as it was designed (this is very common - the GH5 plus Metabones SB plus Sigma 18-35 was practically a meme), but if you add an AA to that setup you can use every EF full-frame lens as it was designed as well.
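 The arithmetic in those examples is mechanical enough to write down. Here's a minimal sketch (my own function name, the same numbers as above; horizontal axis only, since the AA doesn't touch the vertical FOV):

```python
def ff_horizontal_equiv(focal_mm, f_stop, sensor_crop, sb=1.0, squeeze=1.0):
    """FF horizontal equivalent of a lens behind a speed booster (sb, e.g. 0.71)
    and/or a front anamorphic adapter (squeeze, e.g. 1.25)."""
    factor = sensor_crop * sb / squeeze
    return round(focal_mm * factor, 1), round(f_stop * factor, 2)

# 50mm F1.4 on MFT (crop factor 2.0), reproducing the post's numbers:
print(ff_horizontal_equiv(50, 1.4, 2.0))                         # (100.0, 2.8)  dumb adapter
print(ff_horizontal_equiv(50, 1.4, 2.0, sb=0.71))                # (71.0, 1.99)  SB only
print(ff_horizontal_equiv(50, 1.4, 2.0, squeeze=1.25))           # (80.0, 2.24)  AA only
print(ff_horizontal_equiv(50, 1.4, 2.0, sb=0.71, squeeze=1.25))  # (56.8, 1.59)  SB + AA
print(ff_horizontal_equiv(50, 1.4, 2.0, sb=0.64, squeeze=1.33))  # (48.1, 1.35)  0.64x + 1.33x
```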
  5. I shoot in uncontrolled conditions, using only available light, and shoot what is happening with no directing and no do-overs. This means I'm frequently pointing the camera in the wrong direction, shooting people backlit against the sunset, or shooting urban stuff in midday sun with deep shadows in the shade in the same frame as direct sun hitting pure-white objects. This was a regular headache on the GH5 with its 9.7/10.8 stops. The OG BMPCC with 11.2/12.5 stops was MUCH better but still not perfect, and while I haven't used my GH7 in every possible scenario, so far its 11.9/13.2 stops are more than enough.
 The only reason you need more DR is if you want to heavily manipulate the shot in post by pulling the highlights down or lifting the shadows up for some reason. Beyond the DR of the GH7 I can't think of many uses other than bragging rights. When the Alexa 35 came out and DPs were talking about its extended DR, it was only in very specific situations that it really mattered. Rec709 only has about 6 stops of DR, so unless you're mastering for HDR (and if you are, umm - why?), adding more DR into the scene only gives you more headaches in post when you have to compress and throw away the majority of the DR in the image.
  6. In a lot of cultures these things are still very present. You only need to go online and listen to the children of immigrants talk about their struggles of living in their new culture while still respecting the wishes of their parents - and these are the parents who were open-minded enough to literally move their whole family overseas. People who live completely immersed in the culture they were born in can be far more traditional than that, especially in Asian countries where there are thousands of years of history and culture. The progressiveness of modern life in western liberal democracies is also quite deceiving, as there are a great many superstitious things embedded in those places too - not a lot of skyscrapers have a 13th floor, for example. We get used to the peculiarities of the culture we live in, and find the peculiarities of other cultures odd. A fun exercise is to watch the "culture shock" videos of people moving or travelling to where you live. If the person explains their perspective well, you can get a real sense of how strange some things are - not just different but actively backwards!
 I found the image from that GH7 film to look very video-ish, actually. It's odd, because when I paused the trailer and studied the image, they seemed to be doing almost everything right. The only thing I could think of was that they didn't use any diffusion, whereas the vast majority of movies or high-end TV shows that have shots with bokeh will reveal they're using netting as diffusion. Those that don't may well be using glass diffusion, which might not show in the bokeh. Maybe the difference is more than diffusion, but that's my current best guess for why it looked like that.
 I think there are different kinds of diffusion, with some looking overbearing at low strengths and others being fine at much greater strengths. The majority of movies and narrative TV will be using a decent amount of it. Maybe it's the type that you didn't care for? I've also found diffusion filters to be almost impossible to use in uncontrolled situations, even at 1/8, which is the lowest strength available - on some shots it'll be too weak, then you turn around and a light hits the filter and now the image is basically ruined because half the frame is washed out. Maybe because of the uncontrolled conditions they just had some shots that suffered.
 When I started in video I couldn't tell the difference between 24p and 60p; now I hate 30p almost as much as I hate 60p. I also couldn't tell the difference between a 180-degree shutter and very short shutters except on very strong movement. Now I see the odd shot in things like The Witcher which 'flips' in my head and looks like video, and I don't know why. Some people outside the industry/hobby will be able to see the difference between something shot on a phone and a cine camera, but I suspect most won't, and those that can probably don't care, because if they did they wouldn't be able to watch almost anything on social media, no home videos, nothing they record on their phone, etc. A surprising number of people will just think that a smartphone vlog looks "different" to Dune 2, rather than "worse".
  7. Actually, speaking of vertical video and open-gate, I know of at least one professional shooting low-squeeze-factor anamorphic with the anamorphic squeeze oriented vertically rather than horizontally, which gives them an image that is almost perfectly square. 4:3 is an aspect ratio of 1.33:1, so if you mount a 1.33x adapter vertically on a 4:3 sensor you get a 1:1 image and can crop horizontally and vertically with the same ease. Or, if you mount a 1.33x adapter vertically on a 16:9 sensor, you end up with a 4:3 image - essentially "adding open gate" to your camera. Imagine how many people would pay a few hundred dollars to do that! Yet another thing these adapters can do. Being able to wrap your head around the maths involved in combining sensor crop-factors, sensor readout aspect ratios, anamorphic squeeze factors, speed booster factors, lens focal length equivalence, and aperture equivalence really is a super-power in today's camera market.
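 To make the vertical-mount maths concrete, a tiny sketch (illustrative Python, my own naming):

```python
# Mounting the squeeze vertically means the desqueezed image gets taller,
# so the effective aspect ratio is (width / height) divided by the squeeze.
def desqueezed_ratio(width, height, vertical_squeeze):
    return (width / height) / vertical_squeeze

print(desqueezed_ratio(4, 3, 1.33))   # ~1.00 -> a 4:3 sensor becomes ~1:1
print(desqueezed_ratio(16, 9, 1.33))  # ~1.34 -> a 16:9 sensor becomes ~4:3
```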
  8. There's only a dollar in it if someone finds it useful or desirable in some way. If you think otherwise, please PM me about discount codes for my all-you-can-eat gravel and building rubble buffet!
 My experience with both my Sirui 1.25x adapter (and even my no-name-brand M42-MFT speed booster) is that there is no visible softening from them. This is my experience on MFT at least, and while I don't shoot with hospital lenses, I went pretty deep when testing these - shooting full-resolution RAW and doing direct A/B comparisons at all F-stop values of my sharpest lenses and pixel-peeping at large zoom-in-post levels.
 The Sirui 1.25x only flared a couple of times in the 6+ hours I spent shooting with it at night in the streets of Hong Kong and China, and the flares that did happen only happened when the headlights from the vehicles hit the lens directly at exactly the right angle: the vehicle is driving straight towards me with headlights in-frame and not flaring, then the vehicle turns slightly / hits a slight bump in the road, the angle hits exactly right and it flares for literally only a few frames, then afterwards the vehicle is still driving towards me, in frame, with the headlights still shining almost directly into the camera, but the flare is gone. I went frame-by-frame trying to find a good frame to grab to show off the horizontal streak, but it was barely perceptible in a still frame; I can't remember if I even bothered uploading it. Even the bokeh is almost imperceptibly stretched, which is to be expected as a 1.25x squeeze-factor is pretty low. There is much more bokeh stretching and deformation from other lens factors present in spherical glass (cat's-eyes or swirl etc).
 All of that adds up to me just thinking of it as a horizontal-only speed booster, and an economical one at that... The cheapest Metabones speed boosters are 0.71x and USD399 and are for a specific camera-mount/lens-mount combination; the Sirui is effectively a 0.8x horizontal speed booster that can be used on practically any lens of any mount with any camera, essentially forever.
 It does add complexity, but only during testing. Once you have tested it on your lens collection and worked out which lenses you want to pair it with, it just becomes another "lens". For example, the "lenses" that are of interest to me at the moment are:
 For daytime shooting:
 - 14-140mm zoom
 - 9mm F1.7 prime
 For night shooting:
 - 12-35mm F2.8 zoom
 - 17mm F1.4
 - Takumar 50mm F1.4 with speed booster
 - 42.5mm F0.95 with Sirui 1.25x adapter
 The Sirui now "lives" on the 42.5mm and the speed booster now "lives" on the 50/1.4, so in a sense they're now just lenses in my mind. Anyone with multiple camera bodies (or even those who only use one camera but rig it in different ways) will be familiar with having different configurations. This is the same.
 Squeeze factors and ratios are dead, but are also "back". You're right that 1.33x was designed for 16:9 cameras, because it turns 16:9 into 2.35:1, but considering the GH5 has had open gate for almost a decade now and current Sony FF models still don't have it, saying it was designed for MFT is completely ass-backwards. But that thinking is dead - almost no-one is shooting for 2.35:1 with a 16:9 sensor while being unable to crop the image to work around different squeeze factors. 
Most people have the creative freedom to shoot whatever aspect ratios they want (look at how many people shoot open-gate on social media now) and the people who do have specific ratios they have to provide for a client are more likely to be vertical than horizontal! It's even becoming common for projects to mix the aspect ratios within the same video. I've been experimenting with 2:1, which I think is a really nice look. My current project is a series of half-a-dozen videos and they all have different aspect ratios to fit with the vibe of each one.
  9. @FHDcrew Cool clips. Doing smooth moves that are meant to be like slider / crane work while walking, using a small setup with just camera/lens/filters and no big/heavy rig, is definitely at the edge of stabilisation tech, and your results look pretty good. The way I think about your tradeoffs is:
 - When you get the 12-35 2.8 you'll get Dual-IS, where the IBIS links with the OIS and you get an additional level of stabilisation, so that will up your results by another notch or make it a lot less fiddly in post to get the same results you're getting now.
 - The 12-35 will definitely have a deeper DOF wide-open than the 18-35 setup wide open, but none of the shots in those videos really struck me as being too shallow for the 12-35. They might be, but while you might lose some DOF, you're gaining stabilisation, so you might be able to get the same amount of stabilisation from a longer focal length on the 12-35 than you did on the 18-35, which will push towards a shallower DOF because of the constant aperture. There are lots of things competing with each other here, and 'all-else-being-equal' comparisons are misleading.
 - Another trade-off people don't seem to talk about is that having shallower DOF might make still images look a bit nicer, but if your deeper-DOF setup allows you to shoot faster then you'll shoot more shots in the same situation, which means more shots to draw from in the edit, and the best ones that get used will be better simply because there were more of them. So it's your edit having shots with shallower DOF vs your edit having shots with deeper DOF but being better composed, including nicer moments from people, etc.
 - Continuing on from the previous point, if you shoot more shots then not only will each individual shot in the edit potentially be nicer, you'll also have a lot more options in the edit. This is a huge dynamic in what I do (shooting travel) because it means I can get a greater variety of shots, which means I can pull things off in the edit that I couldn't otherwise do. This might be a bit non-obvious to some, but if you imagine you have to make a 1-minute edit with only 1 minute of footage, then you have almost no options whatsoever. And although there are diminishing returns the more footage you have, sometimes in the edit you'll want to use one moment, but in order to do so you need another shot that fits a very specific purpose, and if you don't have that then you can't use the good moment (or if you do it will be clunky). So in this sense a really dull or even badly-shot clip can make your edit better by letting you use a great shot, or have a more interesting structure, or line things up with the music better to have a nicer ebb-and-flow to the whole thing.
 - Elaborating still further, the more stable your shots are in-camera, the less time you'll spend tinkering with them in post and the more time you can devote to finding a better audio track, doing more sound design, doing a couple more versions and smoothing over the rough edges a bit more, or pulling off a colour grading look that needs a bit of time to work out.
 People think MFT = deep DOF = less cinematic, when in realistic terms it can also be MFT = better stabilisation = shoot faster = more shots and more variety of shots = less work in the edit = more options in the edit = a better edit overall. If camera-bro YT has taught us one thing, it's that shallow DOF doesn't make an edit great. 
 Some of the best edits I've ever seen were made of exclusively non-spectacular shots. It's a bit of a blind spot, because less-skilled people online don't yet understand how things in pre impact prod, or how things in pre and prod impact post, and the professionals who work on film sets where people work in departments often don't understand the downstream implications, because they just follow the guidelines of their department without understanding them.
  10. This is the first smartphone video I've seen that didn't shout (or whisper) that it was shot on a smartphone. I do get flavours of it being shot on something small and mirrorless because of the movement of the camera (if it was a heavy rig it would have moved differently). I'm pretty stoked actually, and can't wait to get a vND solution for my iPhone 17 Pro. I suspect that I might end up shooting a lot of street stuff on it just because the form-factor is so small and people are far less curious/suspicious of smartphones than real cameras.
  11. Nice. In a sense, the fact that lots of modes on FF cameras have an APSC crop is a bit of a blessing in disguise - not only do you get a RAW file without insane resolution / bitrate, but it also means there are speed boosters for FF mounts. I think @Andrew - EOSHD has investigated speedboosting Medium Format lenses onto FF sensors, but my impression was that it's probably a difficult architecture to find combinations of equipment that won't vignette heavily or perform poorly at fast apertures. This is where the front anamorphic adapters can be useful, as I'd imagine there would be far more usable configurations from fitting a front anamorphic adapter to a FF camera + FF lens combo. The front adapters don't care about your camera mount / flange distances / lens mount / lens rear protrusion / etc, so in a way they're more like PL glass, which you'd be able to keep and use regardless of what cameras you get in the future.
  12. Seems similar to my priorities. Since getting the GH7 I haven't been jealous of FF except for my foray into 'night street cinema', where having crazy-shallow DOF would be awesome, but it's not worth swapping over, and thanks to adapters there are options for me.
 Yeah, everyone would be better off if they shot more. When you do, it becomes obvious that our wants/needs are highly situational, and you become more understanding when someone describes their wants/needs as being different to your own.
 Definitely agree with @MrSMW about just assessing the cameras on their current capabilities. TBH, the GH7 is more capable right now than most flagship cameras when it comes to things that actually matter. There's a funny thing that happens when you write down what is important to you, then assess cameras against that. Things like 8K60 internal RAW somehow magically don't make the camera better at many/any of your actual criteria, but having 4K Prores HQ internally might. The list above from @FHDcrew is a good example of that.
 Then if you write down a list of everything you own that wouldn't work with the alternative camera, and think about selling it and then re-buying it again, you realise that getting a different camera with half a stop more DR isn't worth the thousands of dollars and weeks of work you'd lose: selling and re-buying all your lenses, batteries and power management, cage and accessories, trips to the post office or courier, waiting for things to be shipped, assembling and troubleshooting everything when it arrives, learning new firmware and menus, doing a battery of tests to understand the sensor and colour science and codecs and how to treat it in post, then getting familiar with the rig to the point you can think about what you're pointing it at instead of what settings are assigned to which button, etc.
 In terms of the G9ii / GH7 vs the S5 series, I wouldn't count on the MFT options being that much lighter; it will probably come down to the lenses rather than the bodies, which are pretty chunky.
  13. Adapters used to be a big deal... with camcorders there were wide-angle adapters. Then for smaller sensors there were speed boosters to try and get shallower DOF and make more use of vintage photo lenses. Anamorphic adapters were always an option / confusing dream. Then "everyone" went FF to get that shallow DOF, Chinese manufacturing started giving us good-quality affordable fast primes, and now there seems to be an avalanche of anamorphic lenses (with recent additions including anamorphic zooms, and some even have AF now!).
 Why are they "back"? Not only can adapters take MFT cameras to beyond FF, or take S35 to FF and beyond, but they can also take FF to Medium Format and beyond as well. The key to this is the emergence of high-quality single-focus anamorphic front adapters, and especially combining them with the new anamorphic glass or with speed boosters or with super-fast native lenses.
 I realised this when I shot with the GH7 + Voigtlander 42.5mm F0.95 + Sirui 1.25x combination and realised it's the horizontal FF equivalent of a 68mm F1.5. That's not impossible in FF terms, as there are 75mm F1.4 lenses and 85mm F1.2 lenses available, but this was only with the Sirui 1.25x adapter. There are 1.33x, 1.35x, 1.5x, 1.7x, and even 1.8x front adapters available. There is even a PL-to-PL 1.33x rear anamorphic adapter available. Combining the 1.33x PL-to-PL rear adapter with the 1.8x front adapter would be a 2.4x adapter. There's not a lot of anamorphic glass over 2x! The combinations are practically endless, and compatibility will definitely be an issue with some of them, but the thing about adapters is they multiply your lens collection... If you buy an extra lens then you have an extra lens, but if you buy an adapter you multiply your whole lens collection by a factor somewhere between 1-2x, depending on compatibility.
 Here are a couple of worked examples. Just to get the juices going, and as a completely manufacturer-supported Full Frame option, Sirui has the Venus anamorphic set, which are 1.6x anamorphics with T2.9 aperture from 35mm to 135mm. The 1.6x gives them a horizontal crop factor of 0.625, so they're the horizontal equivalent of 22mm to 84mm T1.8 lenses. Add the Sirui 1.25x anamorphic adapter, which is officially part of the set, and they become 2x anamorphics horizontally equivalent to 17.5mm to 68mm T1.45 lenses. This isn't completely beyond normal spherical FF glass, but it's an adapter that can be used across a range of lenses and quickly change the crop-factor of these and many other lenses.
 Let's go bigger... Rokinon / Samyang have a 1.7x front anamorphic adapter specifically designed for their Cine V-AF line, which includes 35mm to 100mm T1.9 Full Frame lenses (and a 24mm T1.9 APSC lens), but with the adapter they'll be 21mm to 59mm T1.1. Once again, these aren't impossible to find in spherical versions, but we're getting into more rarified territory. Also remember you now have two lens sets, not one lens set with an extra lens.
 Would the 1.7x adapter work on other lenses? Not easily - it seems to have a strange proprietary mount to attach to the lenses, which have a 58mm front thread diameter. The Sirui 1.25x adapter is huge and has an 82mm rear thread diameter, so it would work on lots more lenses. The Blazar Nero 1.5x has a smaller 52mm rear thread diameter, and the SLR Magic Anamorphot-40 1.33x has an even smaller 40mm rear thread. However, the SLR Magic Anamorphot-65 1.33x has an 82mm rear thread and the Anamorphot-50 1.33x has a 62mm rear thread. 
 I think the BLAZAR LENS 1.35x has a 77mm rear thread (not sure), and the Venus Optics 1.33x definitely has a 77mm rear thread. But if you have the funds and really want coverage, the Letus35 AnamorphX-PRO series (1.33x and 1.8x) seem to clamp to the outside of 114mm lenses. Of course, they're USD2500 and up!
 Let's go faster... The Sirui 1.25x adapter claims it's T2.8, but on my MFT 42.5mm F0.95 lens it didn't soften the lens even when shot at F0.95. Maybe it wouldn't be so good if paired with a lens faster than F2.8 on a FF sensor - not sure. If you took an 85mm F1.4 and attached the Sirui 1.25x adapter you'd get a 68mm T1.12 - and an 85mm F1.2 would become a 68mm F0.96! But take the 85mm F1.2 and attach the Anamorphot-65 1.33x instead and now we're down to a 64mm F0.90!! My impression was that the FF 50mm F0.95 lenses were pretty mushy at F0.95, but with a 1.33x attached they'd be 38mm F0.71 (and probably look like a drug-fuelled haze). You could find out what an F0.71 lens looks like for only USD800 - completely doable if you're crazy enough.
 If we ignore the compatibility issues and zoom out, then here's how I think of it - front anamorphic adapters are horizontal speed boosters you mount to the front of the lens:
 - 1.25x is a 0.8x horizontal speedbooster
 - 1.33x is a 0.75x horizontal speedbooster
 - 1.5x is a 0.67x horizontal speedbooster
 - 1.7x is a 0.59x horizontal speedbooster
 - 1.8x is a 0.56x horizontal speedbooster
 On FF, you can take a lens and use an adapter to boost your lenses from FF to Medium Format and beyond. The Alexa 65 has a crop factor of 0.67, which is within reach of these adapters. On S35, you can use a 1.5x anamorphic adapter to get you to FF, but you can combine that 1.5x with a 0.71x speed booster to boost non-mirrorless glass to a crop-factor of 0.71, which is about 6% smaller than the Alexa 65! On MFT, you can use the 1.33x adapter to get you to S35, or combine a 0.71x speed booster with the 1.5x adapter to get you to 0.95 - just bigger than FF! BUT maybe you can push harder than that. No idea! Any discussion that puts S35 closer to the Alexa 65 than FF, or MFT larger than FF, would have been unthinkable even a few years ago. (See the sketch of the crop-factor arithmetic at the end of this post.)
 Are there caveats? Sure. Compatibility, for one thing. The stronger squeeze-factor adapters likely have limits on how fast you can push the aperture, and perhaps on sensor size too. I suspect that my Sirui 1.25x T2.8 adapter might only be sharp with my F0.95 lenses wide open because they're MFT lenses on an MFT sensor. I could be wrong though. The smaller rear diameter of some of the other options might cause vignetting on larger sensors, and maybe softer corners at larger apertures. But lens sharpness and shallow DOF are only useful for impressing paying customers and bragging to your friends, and that's not what anamorphic is really for... so if you're willing to stop acting like you live in a hospital, these things can open up a whole new world.
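 Since all the format-hopping above is just multiplication, here's a rough sketch of it (illustrative Python, my own function name):

```python
# Effective horizontal crop factor of a sensor + speed booster + front
# anamorphic combo: sensor crop x booster factor / squeeze factor.
# Smaller numbers mean a "larger" equivalent format.
def horizontal_crop(sensor_crop, sb=1.0, squeeze=1.0):
    return round(sensor_crop * sb / squeeze, 2)

print(horizontal_crop(1.0, squeeze=1.5))           # 0.67 - FF + 1.5x, ~Alexa 65 territory
print(horizontal_crop(1.5, sb=0.71, squeeze=1.5))  # 0.71 - the S35 combo, near Alexa 65
print(horizontal_crop(2.0, sb=0.71, squeeze=1.5))  # 0.95 - the MFT combo, just past FF
```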
  14. These knock-off cameras seem a bit extra to me.
  15. Good choice. I think that film-making is about compromises, and the more you understand what you're trying to achieve, the more specific you can be with your equipment selection. The "I'll bring everything just in case" shooter does so because they don't know what they want and therefore can't make any decisions. The more I shoot, the more I understand what I want, how I work, what challenges I face, and what is available, and the clearer I get on what equipment I should use and why. It's sort of incredible that even with today's offerings of 8K video and 240fps slow-motion and 14+ stops of DR and RAW and all the lenses available, even if you had infinite money and infinite strength to carry and operate heavy equipment, there are still serious compromises that have to be made. The does-everything camera is still a wild fantasy, even for gym bros in the billionaire class.
  16. Welcome back! Can you tell me your name? Where are we? What year is it? Good, good... You've been in a DOF-induced coma for the last 7 years. We'll contact your family and let them know you've woken up - they'll be very happy to see you!
  17. It's sad to hear this - the S9 seems to suffer from the same cost-cutting as all smaller cameras. It's funny how consumer electronics mostly tend to charge a premium for smaller devices, and yet when it comes to cameras the industry seems to regard small as cheap and large as professional.
  18. This is the core challenge of film-making - compromises and trade-offs. Absolutely, open gate without dropping quality means many trade-offs: more rolling-shutter, more processing demands in-camera, greater heat generated in-camera, more power requirements and therefore shorter battery life or larger batteries required, greater write-speed requirements for the media and larger-capacity media, more processing power to edit, greater hard drive capacity, etc etc etc.
 His choice to use the Ronin 4D would have been like any other choice in film-making - a trade-off of various factors to try and optimise the outcome (video quality, customer satisfaction, profit on the job, or some other factor) - but if I had to guess, it might have been that the 4D has excellent stabilisation including a fourth axis, which would have been important if the operator was walking/running rather than having the shot on rails or using something that rolls (considering the shot was a horizontal move with parallax on what looked like a longer lens). So this situation ends up potentially being a trade-off between DOF, sharpness, size of area required for a shoot, and stabilisation (fourth-axis stabiliser / slider / rails / dolly / etc, and the associated setup and teardown times). The value of open gate is that it removes the requirement to trade some of these things off against each other.
 The key question being debated, though, is choice. No-one is suggesting that everyone be forced to record in open-gate modes, yet the people arguing against it are saying it shouldn't be in cameras and therefore no-one should have the option to use it.
  19. The Aesthetic (part 2)

 I've been procrastinating and thinking about bokeh shapes, and have now realised that there is a range of bokeh shapes seen from time to time in various shows:
 - Very bubble (not my favourite)
 - Oval with top and bottom chopped off pretty severely
 - Not sure if this is all the same lens, but all from the same episode - there's a variety of shapes..
 - Slightly chopped oval
 - Very stretched, with so little curvature they're almost rectangles
 - and then whatever on earth this shape is
 - Quite chopped-off tops/bottoms but not quite vertical
 Thinking about it more, I went back to why I started being interested in anamorphic bokeh to begin with, and I think it's the fact that it doesn't match how our vision works, so it looks surreal. So while I don't like distracting bokeh, once the bokeh isn't round then it can be whatever shape you want it to be. Anamorphics just happen to have it stretched vertically, which I think is probably one of the most pleasing non-circular varieties, but it isn't the only option, and the more rectangular it is the more stretched it will appear, so I'm thinking I might aim for something that looks like a slightly rounded vertical rectangle.
 The other thing I have been contemplating is whether I could make an insert that didn't have hard edges, but had graduated edges. The only way I could think of doing that is to get some sort of semi-translucent sheet material and cut it at an angle so the edges have a slightly translucent transition to being completely opaque, but I am not aware of such a material.
  20. I also posted Cam Mackey's video earlier in the thread, which shows vertical TVs showing video at 2m07s (linked to timestamp). Also at 3m55s he shows an ad for Dior that is vertical video too. I don't know what the delivery resolution was, and I don't think he mentioned it in the video (unless I missed it?), but he does mention the value of having extra resolution to provide sharper images to the client, so it's obviously a consideration for him and his business.
  21. The vertical resolution of deliverables is only part of the equation. As I mentioned earlier in the thread, another consideration is FOV and DOF. It's actually a crop-factor problem: just as it's harder to get shallow DOF on smaller sensors, a FF sensor shooting 16:9 has a vertical crop factor of 1.185 compared to the same sensor shooting in 3:2. Also, sometimes there isn't room to just back up.
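 For anyone wanting to check the 1.185 figure, it falls straight out of the sensor geometry - a quick sketch assuming a 36mm-wide FF sensor:

```python
# Vertical crop factor of a 16:9 readout vs the full 3:2 sensor height.
sensor_width = 36.0              # mm, full-frame
h_3x2 = sensor_width / (3 / 2)   # 24.0mm tall in 3:2
h_16x9 = sensor_width / (16 / 9) # 20.25mm tall in 16:9
print(round(h_3x2 / h_16x9, 3))  # 1.185
```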
  22. I'm pretty sure that in 1927, as photography became more accessible, all the great scientific and mathematical minds of the time converged in Paris and after a fortnight of rigorous collaboration they proved mathematically that one can, in fact, never have too many cameras.
  23. I don't really know what to say.... The post I was replying to said they hadn't seen 9:16 displays in public so I shared some images. The fact you haven't experienced the need to deliver 4K 9:16 videos doesn't mean other people haven't. Why is everyone so interested in being an expert about what people they've never met are required to deliver for companies they've never heard of in other countries they've never visited?
  24. In my latest review of my gear, I started taking note of things I want to use but haven't found a use for. Certain things have an X-factor and in creative endeavours it's useful to listen to that voice. Equipment can be inspirational sometimes.
  25. I suspect that most people will have different "categories" depending on what they're doing, but I absolutely like the thinking behind this. The more we can make sense of what we do and how we do it, the more clarity we can get, and the faster we can get a kit that works and then focus on using it.
 As I only shoot personal projects, I don't need a work camera, so my main category is my run-n-gun travel camera, the GH7, which is used exclusively hand-held. For daytime use it's the GH7 with the 14-140mm zoom, which has incredible stabilisation, and the zoom means I can capture almost anything I can see. It also has an integrated fan, great image quality, strong codecs, etc. For night use I can use the GH7 with the 12-35mm F2.8 and get great neutral images. For funky night cinema I can pair it with fast primes like the Voigtlander 17.5mm F0.95 or the Speedbooster with Takumar 50mm F1.4.
 The second camera is (of course) my phone, which I recently upgraded to the iPhone 17 Pro from the 12 Mini. The combination of Apple Log, internal Prores HQ, and the 0.5x / 1x / 2x / 4x / 8x cameras makes it incredible for travel. I'm waiting for a good vND solution to come out. Apart from the low-light, it's basically an all-in-one solution now. Some time ago smartphones replaced my waterproof camera category, which was previously GoPro / Sony X3000 action cameras.
 I used to have a fourth "category", which was a backup camera also used for time-lapses, but now the iPhone is good enough in the unlikely event of something happening to the GH7, and I don't really shoot time-lapses anymore, so I don't really need one. But I still have an "itch" for something else. Random thoughts:
 - It could be something very retro, like something with poor video quality that was nostalgic in some way, and graded to look either digital or analog-electronic or film. My OG BMPCC and BMMCC and GF3 all come to mind for this.
 - It could be something very stylised / attitude-driven, like being super fisheye or 360 or something.
 - It could be something very niche in how you'd use it, like being mounted on something for a unique perspective, or on a pole for strange perspectives... or even something like an action camera that you wear on your wrist and take a 10s clip every 15 minutes, or a pocketable camera that you record a clip with every so often.
 The whole point would be a tool that would make me use it differently to how I normally use / think about shooting, and therefore be a fun and creative addition.