Posts posted by EthanAlexander

  1. 11 minutes ago, androidlad said:

    But if that’s relevant to what we are discussing here, all films should be shot with just one super wide focal length and just crop in post for different shot sizes.

    I know you're being sarcastic, but you're making an invalid point - it's about perspective. The artistic choice comes from deciding how close or far you are from your subject. Roger Deakins, for example, loves 27mm and 21mm on S35 because it's wider than is traditional for a one-shot, which allows him to get closer to people and make the audience feel closer too. It doesn't have to do with the FOV, which you could match with a longer lens from far away; it has to do with perspective, which changes when you get closer. People know, even if only subconsciously, how close the camera is to the subject. So whether you're on S35 or FF or whatever, the feeling will be the same if the camera is in the same spot, and then to match the FOV you just use an equivalent focal length (so a 27mm on S35 or a 40.5mm on FF).
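
    The matching in that last parenthetical can be sketched numerically. This is my own illustration (the helper and crop-factor table aren't from the thread; S35 crop varies by camera, and ~1.5x is a common approximation):

```python
# Crop factors relative to full frame (36mm-wide sensor).
CROP = {"FF": 1.0, "S35": 1.5, "MFT": 2.0}

def equivalent_focal(focal_mm, from_format, to_format):
    """Focal length on to_format that matches the FOV of focal_mm on from_format."""
    return focal_mm * CROP[from_format] / CROP[to_format]

print(equivalent_focal(27, "S35", "FF"))   # 27mm on S35 -> 40.5mm on FF
print(equivalent_focal(24, "FF", "MFT"))   # 24mm on FF -> 12mm on MFT
```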

  2. 6 hours ago, tupp said:

    By the way, at what point was it declared that lenses for larger formats have more "imperfections" than those made for smaller formats?

    It wasn't. I meant imperfections in real world tests, not larger lenses.

     

    6 hours ago, tupp said:

    love M4/3, but there are general differences in the look of different formats that do not involve "imperfections." 

    This is why I said "real-world availability of manufacturing" - I do believe there is a general "look" that comes from larger sensors, but I think it has more to do with the lenses that are available. The lack of extreme wide angles with super shallow depth of field (such as a 12mm f/0.7) is a big reason FF is hard to match with smaller sensors like MFT. It also accounts for the fact that not every sensor is going to deliver the same results for things like dynamic range, etc. But in a computer, equivalency is accurate, and we can come close enough in the real world that only minor differences remain.

     

    6 hours ago, tupp said:

    To do the test properly, you have to use two different lenses -- one designed for a smaller format and one designed for a larger format.

    The only difference is the size of the image circle that comes out the back. The size of the sensor then determines the field of view. This is why if you're shooting on an APSC camera with a FF 24-70mm 2.8 lens or an APSC 18-55mm 2.8 (two common Canon zooms, for instance), you'll get the same image for the entire overlapped range of focal lengths of 24-55mm. I invite you to test this out yourself and see.
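
    A quick sketch of the geometry behind this, assuming a simple rectilinear lens (the function and the ~22.3mm Canon APS-C width are my approximations): the sensor width, not the lens's native format, determines the FOV.

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal angle of view of a rectilinear lens focused at infinity."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

APSC_WIDTH = 22.3  # mm, approximate Canon APS-C sensor width

# FF 24-70mm or APS-C 18-55mm set to 35mm on the same APS-C body: identical
# focal length on an identical sensor gives an identical FOV - the FF lens's
# larger image circle simply spills past the sensor.
print(horizontal_fov_deg(35, APSC_WIDTH))  # ~35.4 degrees with either lens
print(horizontal_fov_deg(35, 36.0))        # same focal length on FF sees wider
```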

  3. 5 minutes ago, kye said:

    Even the animated gif in the thumbnail thread (showing the Canon camera and shallow DoF) clearly shows that the camera position has changed, which is hopeless - you can't compare the size of the bokeh if you're moving the camera around.

    Thank you.

    From that other thread:

    On 2/22/2017 at 7:33 PM, jcs said:

    I did those tests- it's not really possible to do perfect equivalence with physical lenses unless all the settings can be exactly matched. In the first example, the only major difference was shadow detail which could be related to ISO... In the second example, maybe I made a mistake or it's still related to optics not really being equivalent. The 'normal' test matches almost perfectly.

    Brian Caldwell, the optical engineer and inventor of the Speed Booster, says the same thing regarding FF vs. MF. That's why he wasn't interested in making a MF to FF Speed Booster...

    In any case, the differences are minor and most people couldn't tell the difference. Someone posted computer graphics (ray traced?) examples that matched perfectly, as the math predicted.

    I'm about to get my first MFT camera, so I'm excited to do my own tests vs. FF, but it really seems to me that mathematically there's no difference between formats; it just comes down to real-world imperfections and real-world availability of manufacturing.

    For instance, one thing that I think stands out with larger formats is shallow depth of field at wider fields of view: 24mm f/1.4s are commonly made for FF cameras, whereas a 12mm f/0.7 doesn't exist to my knowledge, so you can't match that look on MFT. It's technically possible; it just doesn't get made because the cost wouldn't be worth it. (0.64x speed boosting a Sigma 20mm f/1.4 would be pretty close, though.)

     

  4. 8 minutes ago, kye said:

    That’s what I was thinking too, that it’s basically false.

    I didn’t want to say so straight out because of two reasons, the first is that I don’t think I understand this stuff well enough to say things like that (and i’ve been wrong before!) 

    I used to think that there was something distinct about lenses with longer focal lengths but after reading a lot of smart people's posts (smarter than me) on forums like this and then doing my own tests, I realized it really only has to do with perspective and equivalency. 

     

    10 minutes ago, kye said:

    although I can’t find any tangible reason that a larger sensor should be better I have seen enough videos shot with larger sensors (FF and also larger) that had some kind of X-Factor that I just couldn’t place, so they always left me wondering if there was something to these urban legends....

    I think this just has to do with the fact that the better-IQ cameras generally tend to have larger sensors, and the people with access to those kinds of cameras tend to have higher skills and, more importantly, better camera support, lighting, set design, etc.

    One thing that is KINDA true about larger sensor cameras, though, is that the extremes, like super SUPER shallow depth of field at wider angles, are really hard to achieve with smaller sensors. For instance, a 24mm at f/1.4 looks great and has that "mojo" on FF; matching it would require a 12mm f/0.7 on MFT, which to my knowledge doesn't exist, and even speed boosting the same 24mm FF lens with a 0.64x Metabones would only give you an f/1.8 equivalent.
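
    The Speed Booster arithmetic behind that f/1.8 figure, sketched under the usual assumptions (0.64x reducer, 2x MFT crop; the function is my own illustration):

```python
def speedboosted_equivalent(focal_mm, f_stop, booster=0.64, crop=2.0):
    """FF-equivalent focal length and f-stop of a FF lens on a focal reducer.

    The reducer multiplies focal length and f-number by its factor; the
    sensor's crop factor then scales both back into FF-equivalent terms.
    """
    return focal_mm * booster * crop, f_stop * booster * crop

focal_eq, stop_eq = speedboosted_equivalent(24, 1.4)
print(f"{focal_eq:.1f}mm f/{stop_eq:.2f} FF-equivalent")  # ~30.7mm f/1.79
```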

    19 minutes ago, User said:

    Oh great, now I feel like a bit of a bozo for posting this nonsense... ;)
    One thing for sure is that the skilled folks who do have access to these limited and expensive cameras, went ahead with them for a reason. Somehow I'm willing to bet it was more than just urban legends... now where is my snorkel?

    lol it's all good, the guy is writing on indiewire so you'd think it would be accurate

  5. The author of this article does not understand perspective. It's the kind of thing I used to think too, until I took the time to understand that a 50mm, for instance, has no magic power to change the way light works compared to a 25mm.

    "...specifically a shallower depth of field and more compressed rendering of space. In other words, the large format allows you to see wider, without going wider, as you can see in the example below."

    This is false. It's 100% false. There is no such thing as "lens compression," only perspective. If you're standing in the same place, perspective will be the same, and as @kye is pointing out, within reason, you can mimic the look of any size sensor by matching the FOV and using an equivalent aperture (and then compensating with ISO).

    Having said that, there are certain things that are hard to do, like super shallow depth of field on wider lenses on MFT, or mimicking the look of a 50mm 1.2 FF on MFT, etc.

    5 minutes ago, User said:

    - I'm really not the best person to comment on this, but if I had to take a stab, it's as simple as having the extra sensor size and the field of view it affords. Na?

    You can just get a wider lens, though... for instance, a 12mm on MFT has the same FOV as a 24mm on FF.

  6. Just purchased an SLR Magic hyperprime 25mm T0.95 and 10mm T2.1!

    Next purchase (maybe today) will be the Z-Cam E2. I have been wanting an internal ProRes cam for a while, and the 10 bit 4K 120 h265 looks great too. With the SLR Magics I should be able to fake a full frame look. Can't wait to test it all out - I'll post results.

  7. 16 hours ago, kye said:

    The people that do those "how to make this camera from 1987 look like a RED Epic" are always doing it with lighting.

    ...and adding a bunch of green tint in post

    But seriously, yeah, it's what you see in a lot of promotions too: any camera can look good, including an iPhone, if you've got the production design, makeup, location, etc. That really teaches a lesson if you think about it - the camera just needs to give you a clean image, and after that it's up to you to captivate an audience.

    I have to publicly apologize and retract my last post, though, because I'm already back to looking at cinema lenses. Thinking of picking up some SLR Magic hyperprimes to go with either a Z-Cam or MBPCC4K. Just a couple lenses I swear... 

  8. I think I'm in a "screw buying new lenses, I'm spending money on filters and light modifiers" phase. Layering diffusion, colored lighting and practicals, haze, actually caring about incidental bounce enough to use negative fill... aaaand FINALLY got a Black Pro-Mist! Should get to try it out for a talking head shoot on Friday, but so far I'm really liking even just boring test shots.

  9. 1 hour ago, Mokara said:

    It is not the resolution, it is the amount of oversampling a camera can handle. Technically footage that makes use of the full sensor size but line skips/pixel bins is undersampling.

    BRO, @Michi said this way back at the beginning and you've just been arguing for the sake of arguing. He was saying that on the EOS R and the 1DX2, the math suggests the 4K crops are 1:1 readouts of different-megapixel sensors, hence the different crop sizes. So he said if this continues, the Mk3 will have more crop than the Mk2 because of the higher megapixel count. Then he said at the end of his first post that this could change with any kind of oversampling. So WHY keep arguing unless you're just trolling?

    You're literally cluttering the EOSHD boards with your arguments and pro-Canon nonsense that's been proven wrong. Every once in a while you contribute something but it's only 10% of the time...

     

    Michi's original post:

    From my understanding the crop on this cameras is determined by the sensors total resolution.. That's why there's a 1.8 crop on the 31mp EOS R and a 1.3 crop on the 20mp 1DX II. Isn't it? So if the rumors are true that the 1DX III will have a 28mp sensor and Videos are still recorded from a 4K "cut out" of that sensor, the crop will be bigger than on the 1DX II... But only if there still is no full sensor read out of course...  
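
    Michi's arithmetic can be sketched like this (the pixel widths are my approximations from the published megapixel counts, assuming a 1:1 readout window):

```python
# If 4K video is a 1:1 pixel window cut from the sensor, the extra crop is
# just the sensor's horizontal pixel count over the recording width.
def readout_crop(sensor_width_px, record_width_px):
    return sensor_width_px / record_width_px

print(round(readout_crop(6720, 3840), 2))  # EOS R (~30MP), UHD 4K -> 1.75 ("~1.8")
print(round(readout_crop(5472, 4096), 2))  # 1DX II (~20MP), DCI 4K -> 1.34 ("~1.3")
```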

     

  10. 21 minutes ago, fuzzynormal said:

    Netflix has this camera list, but really...what's the list specifically for?  The programs they directly produce-for-hire?

    Yes - for anything where they're actually involved in the production (even if it's just money), the camera list is in effect, but they will purchase anything after the fact if they judge it to be good content that people will watch, regardless of the camera.

    It's pretty smart, really - they're pushing forward 4K HDR to future-proof their own content while also being flexible enough to pick up any kind of good content. 

  11. 5 hours ago, sanveer said:

    I am guessing if RED loses the compressed RAW suit, the chances of smartphones moving from the present 10-bit 422 to compressed RAW is also pretty high. 

    Maybe in a few select offerings, but 98% of the smartphone market wouldn't be interested - even I wouldn't, because there's just no point when you're making so many compromises, especially in low light, where you can see the image break down so fast even using Filmic Extreme bitrates (~130Mbps HEVC) on the latest iPhone.

    HEVC and HEIF are really quite good for best-of-both-worlds size vs quality on phones, especially considering cloud storage and data speeds - it's why I actually love that Canon is going to offer HEIF on the new 1DX MkIII.

    I agree with the rest of what you're saying though :) 

  12. At least I get to keep my money in my pocket on this one. I'd love the improvements but not worth the cost to upgrade by any stretch.

    8 hours ago, omega1978 said:

    optical variable lpf = e vnd ?

    No, these are not the same (an OLPF is for anti-aliasing, an ND is for cutting light). The variable part is a cool concept I've never heard of before, though. I never really had any problems with moire/aliasing on the A73 unless I was shooting high frame rate HD, so maybe this means it could be more aggressive for cleaner slow motion, but turn off when shooting 4K or stills for even more detail than before?

    I'm just hoping...

  13. 1 hour ago, Otago said:

    I think this is true if it is a linear 14 bit file but not if it is log 10 bit ( assuming each bit corresponds to an extra stop of dynamic range ) There are only 1024 values for the whole dynamic range, rather than each stop so the numbers about should be 50 values representing each stop rather than 100 - the concept is the same but the values are / were wrong.

    I appreciate your desire to dig into this further. In any linear recording, each stop brighter actually gets twice as many bit values as the previous, darker stop, starting with 1 for the darkest and 512 for the brightest in 10 bit. That means the brightest stop actually takes the "top half" of all the bit values (so in 10 bit, 513-1024 would be reserved for just one stop of light). If you want each stop of light to be represented by an equal number of values (for instance, ~100 as you are suggesting), it requires a log curve to map the input values that way. (How many and which values get used for the different stops is what makes the difference between log curves like SLog2 and 3, V-Log, N-Log, etc.)
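
    A sketch of that allocation for an idealized 10-bit capture of 10 stops (the numbers are illustrative, not from any specific camera):

```python
BIT_DEPTH = 10
TOTAL_VALUES = 2 ** BIT_DEPTH  # 1024
STOPS = 10

# Linear: each stop spans twice the light, and twice the code values, of the
# stop below it - 1 value for the darkest stop, 512 for the brightest.
linear_values_per_stop = [2 ** s for s in range(STOPS)]
print(linear_values_per_stop)       # [1, 2, 4, ..., 256, 512]
print(sum(linear_values_per_stop))  # 1023 - the top stop alone takes half

# Log: the curve remaps so every stop gets roughly the same share of values.
log_values_per_stop = [TOTAL_VALUES // STOPS] * STOPS
print(log_values_per_stop[0])       # ~102 values per stop
```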

     

    2 hours ago, Otago said:

    If you ETTR and put lots of information in the curve of the log then in the brightest values they will be sharing bits. Depending on what curve your camera uses you could end up with, say, your 2 brightest stops being compressed into one bit in the codec and so only have 512 values representing each stop rather than 1024 - whether that is noticeable is another question!

    They won't be sharing bits before compression, whether it's linear or log, but to your point, you're right, and this is a big reason why shooting log on a highly compressed camera is troublesome - the codec has to throw away information, and that means values that are close together will likely be compressed into one. This is why I said several times that highly compressed vs. raw recording is a big factor. But if we're talking raw recording with lossless or no compression, or even ProRes HQ frankly, then a 10 or 12 bit file mapped with a log curve will look practically the same as a linear 14 bit recording. Either way, you still have to decide where you want middle grey to land, which means you're deciding how many stops above and below you're going to get.

  14. There seems to be some confusion about what's happening with ETTR so hopefully this clears it up.

    2 things that will never change no matter how you're exposing V Log are the "shape" of the curve and the dynamic range (For this post we will assume 14 stops). V Log will always be V Log will always be V Log. So, technically, the highlight rolloff will always be the same. @Jonathan Bergqvist and @Mmmbeats are right in this regard.

    BUT - the useable part of the dynamic range (of the scene) is definitely shifting and it is definitely destructive, as @helium is pointing out. Once you lose something to overexposure, you'll never get it back. 

    If a scene only has, for instance, 7 stops of dynamic range, then you could easily argue that ETTR will offer a better image because of the high SNR, which will lead to low noise and a cleaner image. We're fitting 7 stops into a 14 stop container, so it's easy to make sure everything is captured. You could probably argue that when shooting raw or with super low compression, any scene with less than the full dynamic range should be ETTR'd by however many stops the scene falls short of the maximum allowed by the log curve.

    The complication comes in when dealing with scenes with high dynamic range. This is when you have to decide what to put into the 14 stop "container." When using ETTR, you're making a compromise - higher SNR for less useable dynamic range in the highlights. This is definitely a "destructive" choice in the sense that this can't be undone in post. You'll never get back those stops in the highlights that you chose to sacrifice. For many people and scenes, this is an acceptable tradeoff. You're also getting more stops dedicated to your shadows, which can be useful. 

    If your scene has a lot of stops above middle grey, then ETTR will definitely limit the amount of useable recorded dynamic range, and exposing according to "manufacturer guidelines" (or even lower) will indeed give you the better result if you're trying not to lose anything to overexposure.

    This image shows how the captured range of 14 stops never changes, but the number of stops above and below middle grey definitely does change. I think it's for the Alexa, but the concept applies to literally any curve, so just ignore the exact numbers of stops above and below 18% and look at how it always adds up to 14. These changes are 100% baked in no matter how you record (raw or highly compressed).

     

    [Attached image: log exposure index chart showing stops above and below middle grey at each EI]
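
    The chart's bookkeeping can be sketched like this (the 7.5-stop baseline and the function are made up for illustration; only the fixed 14-stop total matters):

```python
TOTAL_STOPS = 14

def split_around_middle_grey(ettr_offset_stops, base_above=7.5):
    """Stops above/below middle grey after exposing ettr_offset_stops to the right.

    Exposing brighter spends highlight headroom to buy shadow range; the
    total captured range never changes.
    """
    above = base_above - ettr_offset_stops
    return above, TOTAL_STOPS - above

for offset in (0, 1, 2):
    above, below = split_around_middle_grey(offset)
    print(f"ETTR +{offset}: {above} above, {below} below, total {above + below}")
```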

     

    There can be color shifts as well, but that's a totally different topic to dive into...

     

  15. 32 minutes ago, mkabi said:

    This is the problem. And, this is the problem that most people are worried about...

    From an art perspective, its great... but for the "some" who can't dissociate a movie from reality...

    It becomes a bigger problem right?

    Again, for you and me that don't associate with those "some that can be fooled all the time," this movie is going to be great, I'm definitely going to check it out sometime this weekend if time permits.

    But, those "some that can be fooled all the time" are going to go see this movie and think its okay to go around killing a bunch of people. However small this hypothetical situation is, all it really takes is 1 person to hurt the many - e.g. strap a bomb to his chest and run into corporate building (boom!).

    By that logic nobody should be allowed to do or say anything because it could be taken the wrong way. 

  16. This part is really cool though! 

    Quote

    Designed to improve the speed of news agencies' workflow, the Alpha 9 II features a new Voice Memo function that allows spoken information to be attached to images in the form of voice memos that can be replayed when the images are reviewed. The voice data can also be included with images sent to an editor, giving them important information needed for effective editing. Alternatively, a field photographer can also use the 'Transfer & Tagging add-on' "Imaging Edge™" application to transfer voice tags with the images to their mobile device and have the voice memos automatically converted to text and added to the JPEG images in the form of IPTC metadata. All of this can be done automatically or manually, selectable by the photographer.

    I'd love for this to come to video on their other cameras. Imagine how much faster piecing together edits would be!

  17. 1 hour ago, heart0less said:

    Matrix Trilogy.

    I've never really got a grip on the story (no surprise; when it was first released I was 8 years old, hahaha), so I decided to watch and understand it for the first time.
    ( :

    There's nothing to understand in the second and third ones... Personally, I think they're garbage.

    Love the first one though. I rewatch it every couple years.

  18. After watching this video and the one before it, it's easy to see that ALL of RED's claims should be reexamined.

    Their false use of "made in the USA" and false claims of designing their own sensors go well beyond ethical business practices. 

    I have little doubt that the industry will be better off once this patent is revoked. It's smoke and mirrors, and it's holding back fair competition.

  19. 16 hours ago, barefoot_dp said:

    It has a niche market among skate fillm-makers, though. Up until a few years ago even it's predecessor the VX1000 (from 1995!) was popular with skaters. I think it was a combination of skate films having a punk/grunge style where IQ was not terribly important, and 4:3 still being popular because it could allow better framing and coverage with a fisheye lens. And the built it top-handle allowed them to hold it low to the ground while riding. This market meant that for a long time the prices on for a VX1000/VX2000/VX2100/PD150/PD170 were generally higher than other cams from that era.

    I got my start 15 years ago on a VX2100. Skate videos and comedy short films. Those were the days...

  20. On 9/10/2019 at 9:53 PM, ntblowz said:

    Sharper than Sony's 1080P

    This is not true. I collaborated with someone and we had an A73 and EOS R - the colors were easy enough to match but the Sony footage was WAY more detailed and even after adding sharpening to the Canon footage it was noticeably softer. We shot 1080p60 mostly, with in camera sharpening turned down all the way.
