
tupp


Posts posted by tupp

  1. 22 hours ago, tupp said:

    Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?

    16 hours ago, kye said:

    From the first linked article:

    Quote

     

    What makes these monitors additive is the fact that those pure hues are blended back together to create the final colors that we see. Even though the base colors are created through a subtractive process, it’s their addition that counts in the end because that’s what reaches our eyes.

    Film is different in that there is no part of the process that is truly additive. The creation of the film negative, where dyes are deposited on a transparent substrate, is subtractive, and the projection process, where white light is projected through dyes, is also subtractive. (This section edited for clarity.)

     

    So, the first linked article echoed what I said (except I left out that the print itself is also "subtractive" when projected).

     

    Is that excerpt from the article (and what I said) what you mean when you refer to "additive" and "subtractive" color?

     

     

    Also from the first linked article:

    Quote

    The difference between subtractive color and additive color is key to differentiating between the classic “film” and “video” looks.

    I'm not so sure about this.  I think that this notion could contribute to the film look, but a lot of other things go into that look, such as progressive scan, no rolling shutter, grain actually forming the image, color depth, compressed highlight roll-off (as you mentioned), lower saturation in the brighter tones (which I think is mentioned in the second article that you linked), etc.

     

    Of all of the elements that give the film "thickness," I would say that color depth, highlight compression, and the lower saturation in the brighter areas would be the most significant.

     

    It might be possible to suggest the subtractive nature of a film print merely by separating the color channels and introducing a touch of subtractive overlay on the two appropriate color channels.  A plug-in could be made that does this automatically.  However, I don't know if such an effort will make a substantial difference.
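
    For what it's worth, here is a rough sketch of what such a plug-in might do (Python with numpy; untested, and the "strength" amount and the choice of which channels are "appropriate" are my guesses, not an established recipe):

        import numpy as np

        def subtractive_overlay(rgb, strength=0.1):
            # rgb: float array in 0..1.  Subtractive media combine by
            # multiplication (of transmittances), so nudge each channel
            # toward its product with a neighboring channel.
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            out = np.empty_like(rgb)
            out[..., 0] = r * ((1 - strength) + strength * g)
            out[..., 1] = g * ((1 - strength) + strength * b)
            out[..., 2] = b * ((1 - strength) + strength * r)
            return out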

     

    Thank you for posting the images without added grain/noise/dithering.  You only needed to post the 8-bit image and the "4.5-bit" image.

     

    Unfortunately, most of the pixel values of the "4.5-bit" image still fall in between the 22.6 value increments prescribed by "4.5-bits" (2^4.5 is roughly 22.6 levels per channel).  So, something is wrong somewhere in your imaging pipeline.
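
    Anyone can verify this outside of an NLE.  Here is a minimal sketch (Python with numpy and Pillow; the filename is hypothetical, and the exact level grid depends on how the plug-in rounds):

        import numpy as np
        from PIL import Image

        # 2**4.5 is roughly 22.6, so a true "4.5-bit" image can only
        # contain about 23 distinct values per channel, evenly spaced
        # across the 8-bit range.
        levels = np.round(np.arange(23) * 255 / 22.6).astype(np.uint8)

        img = np.asarray(Image.open("4point5bit.png").convert("RGB"))
        on_grid = np.isin(img, levels).mean()
        print(f"{on_grid:.1%} of channel values sit on a 4.5-bit level")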

     

     

    16 hours ago, kye said:

    In terms of your analysis vs mine, my screenshots are all taken prior to the image being compressed to 8-bit jpg, whereas yours was taken after it was compressed to 8-bit jpg.

    Your NLE's histogram is a single trace, rather than 256 separate columns.  Is there a histogram that shows those 256 columns instead of a single trace?  It's important, because your NLE histograms are showing 22 spikes with a substantial base that is difficult to discern with that single trace.
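
    If the NLE can't display discrete columns, the per-value counts are easy to get directly -- a sketch, again assuming Python with numpy/Pillow and a hypothetical filename:

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("screenshot.png").convert("RGB"))
        for i, name in enumerate("RGB"):
            counts = np.bincount(img[..., i].ravel(), minlength=256)
            # A true "4.5-bit" image would occupy only ~23 of the 256 bins.
            print(name, "occupies", np.count_nonzero(counts), "of 256 values")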

     

    Something might be going wrong during the "rounding" or at the "timeline" phase.

  2. 6 hours ago, kye said:

    Additive vs subtractive colours and mimicking subtractive colours with additive tools may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look.

    I am not sure what you mean.  Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?

     

    Keep in mind that there is nothing inherently "subtractive" about "subtractive colors."  Likewise, there is nothing inherently "additive" about "additive colors."

     

     

    6 hours ago, kye said:

    In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour.

    Please explain what you mean.

     

     

    6 hours ago, kye said:

    It makes sense that the scopes draw lines instead of points, that's also why the vector scope looks like triangles and not points.

    Yes, but the histograms are not drawing the expected lines for the "4.5-bit" image or for the "5-bit" image.  Those images are full 8-bit images.

     

    On the other hand, the "2.5-bit" image shows the histogram lines as expected.  Did you do something different when making the "2.5-bit" image?

     

    7 hours ago, kye said:

    I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during compression, not by adding the noise.

    If the culprit is compression, then why is the "2.5-bit" image showing the histogram lines as expected, while the "4.5-bit" and "5-bit" images do not show the histogram lines?

     

    Please just post the 8-bit image and the "4.5-bit" image without the noise/grain/dithering.

     

    Thanks!

  3. 6 hours ago, kye said:

    I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post.

    If you are referring to "additive" and "subtractive" colors in the typical imaging sense, I don't think that it applies here.

     

     

    6 hours ago, kye said:

     In my bit-depth reductions I added grain to introduce noise to get the effects of dithering:

    "Dither is an intentionally applied form of noise used to randomize quantization error,

    There are many different types of dithering.  "Noise" dithering (or "random" dithering) is probably the worst type.  One would think that a grain overlay that yields dithering would be random, but I am not sure that is what your grain filter is actually doing.

     

    Regardless, introducing the variable of grain/dithering is unnecessary for the comparison, and it is likely what skewed the results.
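
    For reference, proper random dithering is applied before the quantization, roughly like this minimal sketch (Python/numpy; not necessarily what your grain plug-in does):

        import numpy as np

        def quantize(img8, bits, dither=False):
            steps = 2 ** bits - 1            # e.g. 2**4.5 - 1, about 21.6 steps
            v = img8.astype(np.float64) / 255.0 * steps
            if dither:
                # Random dithering: up to half a quantization step of
                # noise added *before* rounding to decorrelate the error.
                v += np.random.uniform(-0.5, 0.5, v.shape)
            q = np.clip(np.round(v), 0, steps)
            return (q / steps * 255).round().astype(np.uint8)

    Note that the output still contains only the quantized levels.  If the histogram shows pixels in between those levels, then something after the quantization (a grain overlay, a lossy encode, a YUV conversion) reintroduced them.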

     

     

    6 hours ago, kye said:

    That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness as maybe that will help with thickness?

    Small film formats have a lot of resolution with normal stocks and normal processing.

     

    If you reduce the resolution, you reduce the color depth, so that is probably not wise to do.

     

     

    6 hours ago, kye said:

    Obviously it's possible that I made a mistake, but I don't think so.

    Here's the code:

    Too bad there's no mark-up/mark-down for <code> on this web forum.

     

    The noise/grain/dithering that was introduced is likely what caused the problem -- not the rounding code.  Also, I think that the images went through a YUV 4:2:0 pipeline at least once.

     

    I posted the histograms and waveforms that clearly show that the "4.5-bit" image is mostly an 8-bit image, but you can see for yourself.  Just take your "4.5-bit" image and put it in your NLE and look at the histogram.  Notice that there are spikes with bases that merge, instead of just vertical lines.  That means that the vast majority of the image's pixels fall in between the 22 "rounded 4.5-bit" increments.

     

     

    6 hours ago, kye said:

    Also, if I set it to 2.5bits, then this is what I get:

    [image attachment]

    which looks pretty much what you'd expect.

    Yes.  The histogram should show equally spaced vertical lines that represent the increments of the lower bit depth (2.5 bits) contained within a larger bit depth (8 bits).

     

    6 hours ago, kye said:

    I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data.

    The vertical lines in the waveforms merely show the locations where each scan line trace goes abruptly up and down to delineate a pool of a single color.  More gradual and more varied scan line slopes appear with images of a higher bit depth that do not contain large pools of a single color.

     

     

    7 hours ago, kye said:

    Also, maybe the image gets given new values when it's compressed?  Actually, that sounds like it's quite possible..  hmm.

    I checked the histogram of the "2.5-bit" image without the added noise/grain/dithering, and it shows the vertical histogram lines as expected.  So, the grain/dithering is the likely culprit.

     

     

    7 hours ago, kye said:

    I wasn't suggesting that a 4.5bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth.

    An unnecessary element (noise/grain/dithering) was added to the "4.5-bit" image that made it a dirty 8-bit image, so we can't really conclude anything from the comparison.  Post the "4.5-bit" image without grain/dithering, and we might get a good indication of how "4.5-bits" actually appears.

     

     

    7 hours ago, kye said:

    Essentially the test was to go way too far (4.5bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do.

    Using extremes to compare dramatically different outcomes is a good testing method, but you have to control your variables and not introduce any elements that skew the results.

     

    Please post the "4.5-bit" image without any added artificial elements.

     

    Thanks!

  4. 8 hours ago, kye said:

    The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'.

    I think that the "thickness" comes primarily from the emulsion's color depth and partially from the highlight compression that you mentioned in another thread, from the forgiving latitude of negative film and from film's texture (grain).

     

    Keep in mind that overlaying a "grain" screen on a digital image is not the same as the grain that is integral to forming an image on film emulsion.  Grain actually provides the detail and contrast and much of the color depth of a film image.

     

     

    8 hours ago, kye said:

    We know that high-end cinema cameras typically have really thick looking images, and that cheap cameras typically do not (although there are exceptions).

    Home movies shot on Super8 film often have "thick" looking images, if they haven't faded.

     

    8 hours ago, kye said:

    The fact I can take an image and output it at 8-bits and at 5-bits and for there not to be a night-and-day difference...

    You didn't create a 5-bit image or a "4.5-bit" image, nor did you keep all of the shades within 22.6 shade increments ("4.5-bits") of the 255 increments in the final 8-bit image.

    Here are scopes of both the "4.5-bit" image and the 8-bit image:

    [image: 4.5bit_scopes.png]

     

    [image: 8bit_scopes.png]

    If you had actually mapped the 22.6 tones-per-channel of a "4.5-bit" image onto 22 of the 255 tones-per-channel of the 8-bit image, then all of the image's pixels would appear inside 22 vertical lines on the RGB histograms (with the remaining 233 histogram lines showing zero pixels).

     

    So, even though the histogram of the "4.5-bit" image shows spikes (compared to that of the 8-bit image), the vast majority of the "4.5-bit" image's pixels fall in between the 22.6 tones that would be inherent in an actual "4.5-bit" image.

     

    To do this comparison properly, one should probably shoot an actual "4.5-bit" image, process it in a "4.5-bit" pipeline and display it on a "4.5-bit" monitor.

     

    By the way, there is a perceptible difference between the 8-bit image and the "4.5-bit" image.

  5. 15 hours ago, kye said:

    He says "The latitude of a typical motion picture negative film is 3.0 log exposure, or a scene contrast of 1000 to 1.  This corresponds to approximately 10 camera stops".  The highlights extend into a very graceful highlight compression curve.

    Most of us who shot film were working with a capture range of 7 1/2 to 8 stops, and that range was for normal stock with normal processing.

     

    If one "pulls" the processing (underdevelops) and overexposes the film, a significantly wider range of tones can be captured.  Overexposing and underdeveloping also reduce grain and decrease color saturation.  This practice was more common in still photography than in filmmaking, because a lot of light was already needed just to properly expose normal film stocks.

     

    Conversely, if one "pushes" the processing (overdevelops) while underexposing, a narrower range of tones is captured, and the image has more contrast.  Underexposing and overdeveloping also increase grain and boost color saturation.

     

    Because of film's "highlight compression curve" that you mentioned, one can expose for the shadows and midtones, and use less of the film's 7 1/2 to 8 stop capture range for rendering highlights.

     

    In contrast, one usually exposes for the highlights and bright tones with digital, dedicating more stops just to keep the highlights from clipping and looking crappy.

     

     

    15 hours ago, kye said:

    However, if you're talking about mimicking film then it was a very short period in history where you might shoot on film but process digitally, so you should also take into account the print film positive that would have been used to turn the negative into something you could actually watch.

    6 hours ago, BenEricson said:

    Fair enough. I didn't realize you were talking about reversal film or film prints. Reversal was extremely popular at that time. The Dark Knight was shot negative but done with a traditional film print work flow and saying it has poor dynamic range and is comparable to ML is a tough sell.

    I don't think @kye was referring to reversal film.

     

      

    6 hours ago, BenEricson said:

    Yeah, the OG pocket cam is S16 and also has a similarity in the highlight sensitivity.

    The BMPCC was already mentioned in this thread.

     

     

    6 hours ago, BenEricson said:

    Ergonomically though, beyond terrible,

    No.  Because of its tiny size, the BMPCC is more ergonomically versatile than an NPR.

     

    For instance, the BMPCC can be rigged to have the same weight and balance as an NPR, plus it can also have a shoulder pad and a monitor -- two things that the NPR didn't/couldn't have.

     

    In addition, the BMPCC can be rigged on a gimbal, or with two outboard side handles, or with the aforementioned "Cowboy Studio" shoulder mount.  It can also be rigged on a car dashboard.  The NPR cannot be rigged in any of these ways.

     

      

    7 hours ago, BenEricson said:

    any of those cameras mentioned are like Fisher Price toys compared to the Eclair.

    To exactly which cameras do you refer?  I have shot with many of them, including the NPR, and none of the cameras mentioned in this thread are as you describe.

  6. 11 hours ago, BenEricson said:

    I'm still curious how someone could think a EOSM is comparable in any way...Have you guys actually watched something shot on 16mm? Bizarre comparison.

    I have watched some things captured on 16mm film, and I have shot one or two projects with the Eclair NPR.  Additionally, I own an EOSM.

     

     

    The reasons why the EOSM is comparable to the NPR are:

    •   some of the ML crop modes for the EOSM allow the use of 16mm and S16 optics;
    •   the ML crop modes enable raw recording at a higher resolution and higher bit-depth.

     

     

    By the way, there have been a few relevant threads on the EOSM.  Here is a thread based on an EOSHD article about shooting 5k raw on the EOSM using one of the 16mm crop modes.

     

    Here is a thread that suggests the EOSM can make a good Super-8 camera.

     

    Certainly, there are other digital cameras with 16 and S16 crops, and most of them have been mentioned in this thread.  The Digital Bolex and the Ikonoskop A-cam dII are probably the closest digital cameras to a 16mm film camera, because not only do they shoot S16 raw/uncompressed with a higher bit depth, but they both utilize a CCD sensor.

     

     

    8 hours ago, BrooklynDan said:

    When you have a properly shoulder-mounted camera, you can press it into your shoulder and into the side of your head, which creates far more stability.  [snip]  Try that with your mirrorless camera hovering in front of you.

    Of course, one can rig any mirrorless camera set back and balanced on a weighted shoulder rig, in the same way as you show in the photo of your C100.  You could even add a padded "head-pressing plate!"

     

     

    7 hours ago, HockeyFan12 said:

    But I prefer the Aaton/Arri 416 form factor or the Amira over the Red/EVA1/Alexa Mini form factor, too, and it's an issue with prosumer cameras that the form factor really makes no sense ergonomically. Too big for IBIS to make sense, too small to be shoulder-mounted comfortably.

    Just build a balanced shoulder rig and keep it built during the entire shoot.

     

     

    7 hours ago, HockeyFan12 said:

    I keep going back to the $30 cowboy studio stabilizer, which somehow distributes weight evenly even with a front-heavy camera by clamping around your back. For 2-4 pound prosumer cameras and cameras without IBIS, I've found it preferable to a shoulder rig.

    I've always wanted to try one of those!


  7. 13 hours ago, tupp said:

    there is also that shoulder mount digital camera with the ergonomic thumb hold of which I can never remember the name.

    The name of this S16 digital camera is the Ikonoskop A-cam dII.

     

    Of course the BMPCC and the BMMCC would also be comparable to the NPR.

     

     

    13 hours ago, John Matthews said:

    Would I be wrong in saying there was a carefree nature of this camera, meaning you didn't have to think so much about setup, just find a moment and start shooting.

    Well, since the NPR is a film camera, of course one had to be way more deliberate and prepared compared to today's digital cameras.  If you had already loaded a film stock with the appropriate ISO and color temperature and if you had already taken your light meter readings and set your aperture, then you could start manually focusing and shooting.  Like many 16mm and S16 cameras of its size, the NPR could not shoot more than 10 minutes before you had to change magazines.  One usually had to check the gate for hairs/debris before removing a magazine after a take.

     

    Processing, color timing and printing (or transferring) were a whole other ordeal, much more involved and complex (and more expensive) than color grading digital images.

     

    On the other hand, the NPR did enable more "freedom" relative to its predecessors.  The camera was "self-blimped" and could use a crystal-sync motor, so it was much more compact than other cameras that could be used when recording sound.

     

    Also, it used coaxial magazines instead of displacement magazines, so its center of gravity never changed, and with the magazines mounted at the rear of the camera body, it balanced better on one's shoulder than previous cameras.  The magazines could also be changed instantly, with no threading through the camera body.

     

    In the video that you linked, that quick NPR turret switching trick was impressive, and it never occurred to me, as I was shooting narrative films with the NPR, mostly using a zoom on the turret's Cameflex mount.

     

    The NPR's film shutter/advancing movement was fairly solid for such a small camera, as well.

     

     

    14 hours ago, John Matthews said:

    Question: would you consider the modern-day version a camera with raw (big files) or 8 bit?

    In regards to the image coming out of a film camera, a lot depends on the film stock used, but the look from a medium-fast color negative 16mm stock is probably comparable to 2K raw on current cameras that can shoot an S16 crop (with S16 lenses).

     

    By the way, film doesn't really have a rolling shutter problem.

     

     

    14 hours ago, Anaconda_ said:

    As far as a point and shoot doc camera, I'd say the Ursa Mini Pro g2 is pretty close  [snip]  Next I'd say FS5/7 and Canon's Cx00 range depending on specific needs. Or if you have the budget, go for Red.

    It is important to use a digital camera that has a S16 (or standard 16) crop to approximate the image of the NPR, because the S16 optics are important to the look.

     

     

    14 hours ago, Anaconda_ said:

    Eosm is nice, but not grab and go,

    The EOSM is a bit more "grab and go" than an Eclair NPR.

  8. 52 minutes ago, noone said:

    For those interested, Google seems to work.

    Mind you I can not find anyone with a definition on how to work out colour depth, but plenty on working out file sizes.

    Makes sense really given that not all photos are the same size but can have varying colour depth for varying size sensors.

    https://www.google.com/search?q=color+depth+calculation&oq=co&aqs=chrome.0.69i59l3j69i57j0l4.2659j0j15&sourceid=chrome&ie=UTF-8

     

    Most of the results of your Google search echo the common misconception that bit depth is color depth, but resolution's effect on color depth is easily demonstrated (I have already given one example above).

  9. 9 hours ago, noone said:

    I cringe now when i look at some colour photos in old glossy magazines from the 70s and 80s taken with film.

    Yeah.  All of those photos by Richard Avedon, Irving Penn and Victor Skrebneski were terrible!

     

     

    18 hours ago, tupp said:

    Color depth in digital imaging is a product of resolution and bit depth (COLOR DEPTH = RESOLUTION x BIT DEPTH).

    9 hours ago, noone said:

    The best image quality metric that correlates with color depth is color sensitivity, which indicates to what degree of subtlety color nuances can be distinguished from one another (and often means a hit or a miss on a pantone palette).  Maximum color sensitivity reports in bits the number of colors that the sensor is able to distinguish.

    "Color sensitivity" applied to digital imaging just sounds like a misnomer for bit depth.  Bit depth is not color depth

     

    I have heard "color sensitivity" used in regards to human vision and perception, but I have never heard that term used in imaging.  After a quick scan of DXO's explanation, it seems that they have factored-in noise -- apparently, they are using the term "color sensitivity" as a term for the number of bit depth increments that live above the noise.

     

     

    9 hours ago, noone said:

    My lowly aging 12mp A7s still fairs very well for portrait colour depth.

    That's a great camera, but it would have even more color depth if it had more resolution (while keeping bit depth and all else the same).

     

     

    9 hours ago, scotchtape said:

    A good image starts with good lighting.

    That is largely true, but I am not sure if "good" lighting is applicable here.  Home movies shot on film with no controlled lighting have the "thickness" that OP seeks, while home movies shot on video usually don't have that thickness.

     

     

    19 hours ago, tupp said:

    Color depth in digital imaging is a product of resolution and bit depth (COLOR DEPTH = RESOLUTION x BIT DEPTH).

    2 hours ago, kye said:

    Interesting.  Downsampling should give a large advantage in this sense then.

    No.  There is no gain of color depth with down-sampling.  The color depth of an image can never be increased unless something artificial is introduced.

     

    On the other hand, resolution can be traded for bit depth.  So, properly down-sampling (sum/average binning adjacent pixels) can increase bit depth with no loss of color depth (and with no increase in color depth).
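
    To illustrate the trade, here is a minimal sketch of 2x2 sum-binning (Python/numpy, assuming even dimensions):

        import numpy as np

        def bin2x2(img8):
            # Four 8-bit samples (0..255 each) sum to a 0..1020 range:
            # two extra bits of tonal precision at one quarter the pixel
            # count, so the resolution-x-bit-depth product is preserved.
            x = img8.astype(np.uint16)
            return x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]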

     

     

    2 hours ago, kye said:

    I am also wondering if it might be to do with bad compression artefacts etc.

    Such artifacts should be avoided, regardless.  "Thick" film didn't have them.

     

     

    2 hours ago, kye said:

    Converting images to B&W definitely ups the perceived thickness

    There is no chroma sub-sampling in a B&W image.

     

    I really think color depth is the primary imaging property involved in what you seek as "thickness."  So, start with no chroma subsampling and with the highest bit depth and resolution.  Of course, noise, artifacts and improper exposure/contrast can take away from the apparent "thickness," so those must also be kept to a minimum.

  10. 4 hours ago, kye said:

    My question is - what aspect of the image shows thickness/thinness the most?

    As others have suggested, the term "density" has a specific meaning in regards to film emulsions.

     

    I think that the property of "thickness" that you seek is mostly derived from color depth (not bit depth).

     

    Color depth in digital imaging is a product of resolution and bit depth (COLOR DEPTH = RESOLUTION x BIT DEPTH).  The fact that resolution affects color depth in digital images becomes apparent when one considers chroma subsampling.  Chroma subsampling (4:2:2, 4:2:0, etc.) reduces the color resolution and makes the images look "thinner" and "brittle," as you described.
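
    One can see resolution's effect by crudely simulating 4:2:0 -- halving just the chroma planes and scaling them back up.  A sketch (Python with Pillow; the filenames are hypothetical, and a real encoder filters the chroma more carefully):

        from PIL import Image

        img = Image.open("frame.png").convert("YCbCr")
        y, cb, cr = img.split()
        half = (img.width // 2, img.height // 2)
        # Luma keeps full resolution; chroma is quartered, as in 4:2:0.
        cb = cb.resize(half, Image.BILINEAR).resize(img.size, Image.BILINEAR)
        cr = cr.resize(half, Image.BILINEAR).resize(img.size, Image.BILINEAR)
        Image.merge("YCbCr", (y, cb, cr)).convert("RGB").save("frame_420.png")

    Compare the two side by side and the color edges go "thin" exactly as described, even though the luma detail is untouched.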

     

    Film emulsions don't have chroma subsampling -- essentially film renders red, green and blue at equal resolutions.  Almost all color emulsions have separate layers sensitive to blue, green and red.  There is almost never a separate luminosity layer, unlike Bayer sensors or RGBW sensors which essentially have luminosity pixels.

     

    So, if you want to approximate the "thickness" of film, a good start would be to shoot 4:4:4 or raw, or shoot with a camera that uses an RGB striped sensor (some CCD cameras) or that utilizes a Foveon sensor.  You could also use an RGB, three-sensor camera.

     

     

  11. 5 hours ago, bwhitz said:

    2. Some people actually LIKE the boomer-protectionism of the 1980's technology markets. I.e. hacks can just say "I own X or Y expensive camera! You HAVE to hire me now!"

    No need for ignorant bigotry.

     

    The notion that camera people got work in the 1980s by owning cameras couldn't be further from the truth.  "Hiring for gear" didn't happen in a big way until digital cameras appeared, especially the over-hyped ones -- a lot of newbie kids got work from owning an early RED or Alexa.  To this day, clueless producers still demand RED.

     

    Back in the 1980s (and prior), the camera gear was almost always rented if it was a 16mm or 35mm shoot.  Sure, there were a few who owned a Bolex or a CP-16 or 16S, or even an NPR with decent glass, but it was not common.  Owning such a camera had little bearing on getting work, as the folks who originated productions back then were usually savvy pros who understood the value of hiring someone who actually knew what they were doing.  In addition, camera rentals were a standard line item in the budget.

     

    Of course, there was also video production, and Ikegami and Sony were the most sought-after brands by camera people in that decade.  Likewise, not too many individuals owned high-end video cameras, although a small production company might have one or two.

     

    Today, any idiot who talks a good game can get a digital camera and an NLE and succeed by making passable videos.  However, 99% of the digital shooters today couldn't reliably load a 100' daylight spool.

  12. 6 hours ago, BTM_Pix said:

    This means that you should be able to bring those out externally and power the adapter from an external source that you would then be able to engage and disengage with a switch as required.

    Why not just put a switch inline on the "hot" power lead of the adapter instead of powering with an external source?  That way, OP can enable and disable the electronics merely by flicking the switch.

     

    Incidentally, here is a small dip switch that might work:

    [image: dip_switch2.jpg]

  13. 1 hour ago, tupp said:

    This should read:  "Once again, I have repeatedly suggested that it is NOT the sensor size itself that produces general differences in format looks..."

    28 minutes ago, noone said:

    Great so you DO think the sensor size has nothing to do with any difference so we do agree!

    The paragraph reads:  "I have repeatedly suggested that it is not the sensor size itself that produces general differences in format looks -- it is the optics designed for a format size that produce general differences in format looks."

     

    Again, you somehow need to get that point through your head.

     

     

    28 minutes ago, noone said:

    Of course if you do not agree with that, you would be able to prove it with science since you cannot prove it with photos (as any differences in photos taken with systems not identically scaled can be explained by difference in the systems).  Now unless you CAN provide something (ANYTHING) showing how (often tiny) differences in photos could not even remotely be explained by differences in the equipment, I think we have gone several pages too far and I am out (Really really really this time).

    Perhaps you should merely address my points individually and give a reasonable counterargument to each one.  Unless, of course, you cannot give a reasonable counterargument.

  14. 58 minutes ago, tupp said:

    Once again, I have repeatedly suggested that it is the sensor size itself that produces general differences in format looks

    This should read:  "Once again, I have repeatedly suggested that it is NOT the sensor size itself that produces general differences in format looks..."

  15. 2 hours ago, noone said:

    I do not need to address each point as I disagree with YOUR (no one else its seems) theory that you have shown

    You certainly don't need to address each of my points, and, indeed, you have avoided almost all of them.

     

    In regards to your parenthetical insinuation, I would never claim that the number of individuals who agree/disagree with one's point has any bearing on the validity of that point.  However, please note how this poster unequivocally agrees with me on the problems inherent in your comparison test, saying, "I certainly can see what you're talking about in the areas you've highlighted. It's very clear."

     

     

    2 hours ago, noone said:

    NO, zero, nil, zilch nix, NOTHING in evidence to support  other than saying there are (often tiny) difference so it MUST be because of the sensor size difference.

    In regards to my not providing evidence, again, are you referring to evidence other than all the photos, video links, and references that I have already provided in this thread, which you have yet to directly address?

     

    Additionally, you have misconstrued (perhaps willfully) my point regarding sensor size.  I have continually maintained in this thread that it is the optics designed for a format size -- not the format size itself -- that produce general differences in format looks.

     

     

    3 hours ago, noone said:

    That article explains things pretty well to me and I can not understand how YOU can not understand that ANY difference in a system can explain very tiny differences in photos while at the same time you think those differences are explained by sensor size difference without a shred of evidence why ?

    The paper that you linked does address points made in this thread, but a lot of the paper discusses properties which are irrelevant to DOF equivalency, as I pointed out in my previous post.  Interestingly, the paper frequently suggests that larger format optics have capabilities lacking in optics for smaller formats, which is what I and others have asserted.  Not sure how you missed those statements in the paper that you referenced.

     

    Regardless, I have more than once linked Shane Hurlbut's example of an exact focus match between two different lenses made from two different manufacturers.  So, there should be no problem getting such a close DOF/focus match from other lenses with the proper methods.

     

    Once again, I have repeatedly suggested that it is the sensor size itself that produces general differences in format looks -- it is the optics designed for a format size that produce general differences in format looks.  Somehow, you need to get that point through your head.

     

     

    3 hours ago, noone said:

    The fact that this amounts to many many pages of yes, no, yes, no is reason enough to end it now.  This thread should be locked.

    Your thoughtful consideration and open-mindedness are admirable.

  16. 7 hours ago, noone said:

    I Disagree!

    Well, I certainly appreciate your thoroughly addressing each one of my points and your giving a reasonable explanation of why you disagree.

     

     

    7 hours ago, noone said:

    Got ANY shred of evidence to support your case?

    You mean, do I have any evidence other than all the photos, video links, and references that I have already provided in this thread, which you have largely avoided?

  17. 21 hours ago, noone said:

    It's not a good read on this at all, as most of the information given is irrelevant.

     

    Furthermore, many of the conclusions of this paper are dubious.

     

     

    21 hours ago, noone said:

    "Nevertheless, real world IQ differences (including total image noise) will inevitably occur in practice even when equivalent photos are taken. These will arise due to differences in the underlying camera and lens technology, such as:  • sensor quantum efficiency;

    How is "sensor quantum efficiency" relevant to optical equivalency?

     

     

    21 hours ago, noone said:

    • read noise;

    How is "read noise" relevant to optical equivalency?

     

     

    21 hours ago, noone said:

    • lens aberrations;

    Lens aberrations are absolutely relevant to optical equivalency and DOF/focus.

     

    According to Brian Caldwell, aberrations can affect DOF, and lenses for larger formats generally have fewer aberrations.  Hence, the refractive optics of larger formats generally influence DOF differently than lenses for smaller formats.

     

    Keep in mind that the DOF/equivalency formulas do not account for any effects of refractive optical elements, yet optical elements can affect DOF.

     

     

    21 hours ago, noone said:

    • JPEG tone curve; and

    Again, this property is not really relevant to optical equivalency and DOF/focus.

     

     

    21 hours ago, noone said:

    • image processing.

    This property is not really relevant to optical equivalency and DOF/focus.

     

     

    21 hours ago, noone said:

    In other words, since the total light received by each format is the same when equivalent photos are taken, it is factors such as those above that explain real-world cross-format IQ differences rather than format size. These factors will be discussed further in Sec. 4."

    Only one of these factors (aberrations) that you and the paper present is relevant to optical equivalency and DOF/focus.  So, why is this paper quoted/linked?

     

    On the other hand, here is a choice sentence from the paper that immediately follows your excerpt:

    Quote

    Although real-world IQ differences could favor any of the cameras being compared when equivalent photos are taken, the advantage of a larger format is that it offers extra photographic capability over a smaller format.

     

    There are other similar passages in that paper suggesting differences in image quality between different sized formats.

     

    If the intention of quoting/linking that paper was to assert that it is difficult to get an exact match between two different lenses made by two different manufacturers, I once again direct you to Shane Hurlbut's test in which he compared two different lenses made by two very different manufacturers (Panasonic and Voigtlander), that exactly matched in regards to the softness/bokeh of the background, with only a slight difference in exposure/color.

     

    So, a more exact match can be achieved than what we have seen so far in "equivalency tests."  In addition, we can compare the actual DOF, instead of seeing how closely one can match an arbitrarily soft background set at some arbitrary distance, while relying on lens aperture markings and inaccurate math entries.

  18. On 9/23/2020 at 11:30 AM, noone said:

    1) Equivalence theory HAS been tested and is accepted by the majority of photographers and scientists.

    It is doubtful that any of the equivalency tests presented so far would be accepted by "scientists" as a valid DOF/equivalency comparison.

     

     

    On 9/23/2020 at 11:30 AM, noone said:

    Most accept it even though no one has done an EXACT match (IE the photos LOOK very similar but someone will always point out a tiny difference) to the satisfaction of SOME but the deniers have never shown evidence that it is wrong either.

    In regards to your claim in this thread that it is impossible to exactly match the focus between two lenses of the same focal length made for the same format from different manufacturers, I have already linked a comparison conducted by Shane Hurlbut in which the focus matches precisely -- much more exactly than any equivalency comparison presented here.

     

    So, we probably can get a significantly closer match in a DOF equivalency test than what we have seen so far.

     

     

    On 9/23/2020 at 11:30 AM, noone said:

    The problem in getting an EXACT match is you would have to scale the equipment for an EXACT match and that would be near impossible.

    This is false, as exemplified by the Shane Hurlbut test mentioned above.

     

    On 9/23/2020 at 11:30 AM, noone said:

    To the point the EASIEST way might be to build from scratch very simple low element number formulas that test this (but may not be great images).

    That might work, especially if one likes to do things the hard way.  Not sure what the point is regarding low element numbers.

     

     

     

  19. On 9/23/2020 at 9:09 AM, Jay60p said:

    A few thoughts on this topic:

    1) I would have expected this equivalency theory would have been tested more reliably by still photographers at the numerous photography forums long ago.  They use a much wider range of format sizes than the video people here at EOSHD.  If not, it could be there is just too many variables to control, or no consensus on the methods to use.

    The reason why we don't have a proper test of DOF equivalency from still photographers (nor from cinematographers) likely doesn't involve the number of variables.  The true reason would probably require some philosophizing.

     

     

    On 9/23/2020 at 9:09 AM, Jay60p said:

    2)  I would suggest using a 4x5 sheet film camera (8x10 is at $15 a shot!) and limit the test to manual lenses.

    Mount all lenses on a 4x5 lens board and take a 4x5 shot for each, to be scanned for viewing.

    This way the camera does not change, the sensor does not change, no digital transformations are done in camera.

    The different lenses would have different size image circles in the 4x5s, so would be of different resolutions, but that should not affect the depth of field comparisons much.

    It probably would not be wise to shoot DOF/equivalency comparisons using the same emulsion for different formats.  The smaller format on the same emulsion could appear to have a lower resolution, more softness and more grain, which would invalidate the results.

     

    On the other hand, digital formats lend themselves perfectly to such a test, as they have standardized resolutions.  So, a Super16 full HD camera will have the same digital resolution as an 8"x10" full HD camera.

     

     

    On 9/23/2020 at 9:09 AM, Jay60p said:

    I did look at the SLR primes with the Turbo II speedbooster. It shrinks the first fringing seen, but it includes more of the edges of the image circle, with more CA, so overall the fringing looked the same.

    Yep.  Focal reducers tend to transfer the qualities of the larger format lens to the image of the smaller format.

     

     

    On 9/23/2020 at 9:09 AM, Jay60p said:

    Here is a review of my favorite Fuji lens that includes comments on the in-camera corrections (CA, vignetting, distortion) for anyone unfamiliar with this: https://opticallimits.com/fuji_x/887-fuji1024f4ois?start=1

    Thanks for the link!

     

    For the benefit of those who are unfamiliar with in-camera corrections for chromatic aberration, vignetting, barrel/pincushion distortion, etc., such features have been implemented in digital cameras for a long time, and these corrections are not unique to Fuji cameras.

  20. 54 minutes ago, ZEEK said:

    For the EOS M ML RAW Modes, Jip-hop a while back posted the calculations of how the modes compare relative to the width of a full frame sensor (The Crop Factor).
    1080 RAW Mode [1736x976] = 1.6x Crop Aps-c
    1080 RAW Mode + x3 Crop Mode Enabled 1800px wide = 4.61x Crop (1.6x * 2.88x) 
    2.5K RAW Mode @2.35:1 = 3.29x Crop (1.6 * 2.06) 
    2.8K RAW Mode @2.35:1 = 2.96x Crop (1.6 * 1.85) - (Closest mode to Super16 or the BMPCC FOV)

    Thanks!  Very helpful!

     

    So, the 2.5K raw mode @ 2.35:1 vignettes with 16mm lenses?   Does the 2.8K mode vignette with Super 16 lenses?
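
    If I have it right, those factors fall out of the EOSM's 5184-photosite sensor width for the 1:1 crop modes -- a quick sketch (the mode widths are back-solved from Jip-hop's numbers, and the 1080 mode is excluded because it samples the full sensor width):

        # Crop factor relative to full frame = APS-C crop (1.6x) times
        # the fraction of the sensor width actually read out.
        SENSOR_WIDTH_PX = 5184               # Canon EOSM 18MP sensor (5184 x 3456)

        def crop_factor(active_width_px):
            return 1.6 * SENSOR_WIDTH_PX / active_width_px

        print(round(crop_factor(1800), 2))   # x3 crop mode -> 4.61
        print(round(crop_factor(2520), 2))   # ~2.5K wide   -> 3.29
        print(round(crop_factor(2800), 2))   # ~2.8K wide   -> 2.96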

  21. 36 minutes ago, Anaconda_ said:

    Sorry to quote you again, but there's a new test build with complete realtime preview, non-cropped for 5k modes now. I just shot 2 minutes of 16:9 12bit raw without any dropped frames. I could see my framing perfectly as though I was using a native camera mode... almost. 

    What is the crop factor of these modes (what lens formats match the cropping)?
