Posts posted by tupp

  1. tupp, you didn't check the meaning of 'circle of confusion', did you? You take for granted that when you see a projection, there don't need to be reflective textures fine enough to define the individual picture elements you recognize?

    No.

    I do not wish to get into a semantics argument, but the definitions of depth-of-field and circle-of-confusion involve basic, well-established optical properties that apply to imaging. The resolution of the sensor/film (and projector/monitor) is not a consideration, as depth-of-field is a purely optical characteristic.

    The Wikipedia page on depth-of-field addresses this very point (note limitation #6).

     

    This is, excuse me, a rather naive way of understanding optical laws.

    No need to get personal, but you would be mistaken if you considered sensor/film resolution as an optical property.
     
     

    Softer? You mix up resolution and sharpness. Low resolution images may look out of focus when scaled to the same size as a high resolution image.

    The term "Softer" is generic. It can apply to the properties of both resolution and sharpness.
     
     

    I never wrote: CoC is the most important factor for DoF, but it is inseparable,

    I never stated that you wrote so.

     

    and therefore your statement 'a given depth-of-field optically remains constant, regardless of sensor/film resolution' is wrong, given that there always has to be a medium that receives the light coming through the lens - be it dust or smear on a glass pane, chalk grain on a wall, silver nitrate crystals, pixel circuits, your retina's rod cells.

    The focal plane/surface is part of the optical system, but the resolution of the medium at the focal plane/surface has nothing to do with optics.

     

    Instead of arguing, you could make a test of your own. Open the aperture, then film with your camera's highest ISO/gain. You will find a considerably bigger depth of field than with your lowest ISO.

    It's not proportional to what would change with closing the aperture, but nobody said so.

    Huh?

    I am not sure what you are proposing, but it appears that you are suggesting that the depth-of-field will change if I merely vary the ISO setting on my camera while the lens and its aperture remain the same.

    Is that what you are saying?

    If so, would you be interested in a little wager?
  2. Of course resolution affects DoF. If a pixel on the sensor is very big, it swallows a bigger circle of confusion, whereas if you have four times as many pixels on an identically sized sensor, it also needs a four times smaller CoC to render sharp outlines.


    No.

    Depth-of-field (and circles-of-confusion) is a purely optical property, regardless of what is receiving the image at the focal plane.

    Coarser resolution can make the sharply focused areas look softer (similar to the less sharply focused areas), but a given depth-of-field optically remains constant, regardless of sensor/film resolution.

     

    It's not the size of the sensor alone that affects DoF, it's the size of the sensor relative to its resolution. This video explains it (jump to about 4'30"):


    No.

    Depth-of-field is an optical property. The size and resolution of the film/sensor have nothing to do with the optical property of depth-of-field. The only exception to this rule occurs when the sensor corners creep into the optically inferior edge of the image circle.
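
    As an aside, the purely geometric nature of those blur circles is easy to show in a few lines of code.  Below is a minimal Python sketch of the thin-lens defocus calculation (the lens and distance figures are just illustrative assumptions on my part); notice that no sensor-resolution term appears anywhere in it:

    ```python
    # Thin-lens sketch: the defocus blur-spot diameter at the focal plane is set
    # entirely by the lens and the distances involved -- the sensor's pixel count
    # never enters the calculation.  Figures below are illustrative assumptions.

    def image_distance(f, subject):
        """Thin-lens image distance (metres) for a subject at 'subject' metres."""
        return f * subject / (subject - f)

    def blur_circle(f, f_number, focus_dist, obj_dist):
        """Diameter (metres) of the defocus blur spot at the sensor plane."""
        aperture = f / f_number                      # entrance-pupil diameter
        v_focus = image_distance(f, focus_dist)      # where the sensor sits
        v_obj = image_distance(f, obj_dist)          # where the object focuses
        return aperture * abs(v_obj - v_focus) / v_obj

    # Example: 50mm lens at f/1.4 focused at 3m; an object at 4m blurs to ~0.15mm.
    print(blur_circle(0.050, 1.4, 3.0, 4.0))
    ```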

     

    What is more: 
    One 50 mm f1.4 is not as sharp as any other lens with the same specs, meaning it may not be able to focus an equally small CoC.


    Yes.  Some lenses resolve more sharply than others.

  3. This won't work because it creates shadows. If you want to use something like this on an A7, I don't see the reason to use anything Nikon related. Just use the Mirex adapter from Mamiya 645 to Canon EF. You can adapt all MF glass to Mamiya 645, and EF to E adapters are cheap.

     

    It's definitely useful to know that there are adapters that convert most medium format mounts to Mamiya 645.

     

    However, what do you mean by "creates shadows"?  What causes such shadows?

     

    Another advantage of having the tilt mechanism closer to the film plane is that one might be able to utilize more of the image circle, which possibly enables the lens to be pushed past the extremes possible with a more distant, medium format tilt/shift adapter (eventually causing the aforementioned shadows).

  4. @araucaria, looks interesting.  I have enough problems without buying field-view equipment.

     

    Consider an E-mount to Nikkor lens tilt-shift adapter.  If there is no vignetting with this adapter, the tilt-shift features should work with medium format lenses, with an additional plain, cheap Nikkor-to-medium-format adapter.  Of course, it will certainly work as a non-tilt/non-shift adapter for the aforementioned Nikkors.

     

    There are Nikkor-to-medium-format tilt/shift adapters, but then you are locked in to a particular medium format mount, with no chance of tilting/shifting the Nikkors.

     

    In regards to the rumored USD$1,800 price of the A7s, I have always been a loud proponent of full frame sensors/cameras for video.  However, it is disappointing that the best 4K bit depth from the A7s is 8-bit -- and that is only possible through the HDMI cable.

     

    The GH4 can do 4K 8-bit to an SD card for the same price, plus 10-bit from the HDMI cable.  Furthermore, with a focal reducer, the GH4 can mimic a super 35mm frame with an extra stop of sensitivity.

  5. Good point on the G6's focus peaking!

     

    Another advantage of using the G6 instead of the Forza is that there is no confusion between the ACs and electricians if I say, "Set up the 18K!"

  6. The "One-Cam" looks like a typical machine vision camera with a recorder.  The 9 stops of DR is in line with other such cameras.
     
    Some rental houses offer the IO Industries cameras, and they have had booths at NAB and Cinegear for the past few years.
     
    However, one of the most impressive machine vision cinematography set-ups is this project using a Point Grey, 4K, 11.5-stop DR camera. The guy developed an encoder that recorded real-time to cDNG.  He even made a touch control interface.  The footage looks fairly clean, too, as evidenced by this screen cap and this screen cap.
     
    Also, the early Apertus cameras used open-source Elphel machine vision cameras.  Of course, the Apertus team developed a lot of stuff years ago, including encoding schemes and touch control interfaces.

  7. Can't beat physics. F4 = F4, no matter what sensor size. Just the field of view changes.

     

    A couple of points:

    - the same f-stop on every lens of the same focal length won't necessarily transmit the same amount of light to the focal plane, which is why we have T-stops -- they account for the light "Transmission" factor (see the quick sketch after these points);

    - the field of view doesn't change from one lens to the next if the (effective) focal length remains the same (barring any condensing optics behind the aperture, such as a speed booster).
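
    Here is the quick sketch mentioned above -- just the standard f-stop/T-stop relation, with a made-up transmittance figure for illustration:

    ```python
    import math

    def t_stop(f_number, transmittance):
        """T-stop = f-stop divided by the square root of the lens transmittance."""
        return f_number / math.sqrt(transmittance)

    # A hypothetical f/1.4 lens that only transmits 85% of the light acts like T1.5.
    print(round(t_stop(1.4, 0.85), 2))   # 1.52
    ```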

  8. But the difference between 8-bit and 16-bit is pretty obvious and more in the ballpark of this test.

     

    The difference between 8-bit and 16-bit might be apparent when comparing two images of the same resolution (on monitors of the same resolution and bit depth to match each image).  Such a difference would become obvious if the scene contained a gradation subtle enough to cause banding in the 8-bit image but not in the 16-bit image.

     

    However, in such a scenario, if you could continually increase the resolution of the camera sensor and monitor of the 8-bit system, you would find that the banding would disappear at some point in the 8-bit image.  By increasing the resolution of the 8-bit system, you are also increasing its color depth -- yet its bit depth always remains at 8-bit.
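
    To put a rough number on that, here is a back-of-the-envelope Python sketch (my own toy illustration, nothing from the test above): over the same patch of the image, a 2x2 group of 8-bit pixels can render far more distinct average shades than one 8-bit pixel can, even though every individual sample stays 8-bit.

    ```python
    # Counting sketch: shades distinguishable over one patch of the picture,
    # first with a single 8-bit pixel, then with a 2x2 block of 8-bit pixels
    # covering the same patch (i.e. four times the resolution, same bit depth).

    bit_depth = 8
    levels = 2 ** bit_depth                  # 256 shades from one 8-bit pixel

    pixels_per_patch = 4                     # a 2x2 block at 4x the resolution
    # The block's average can be any value k/4 with k from 0 to 4*(levels - 1),
    # so the patch as a whole distinguishes 4*255 + 1 shades:
    patch_shades = pixels_per_patch * (levels - 1) + 1

    print(levels, patch_shades)              # 256 vs 1021 (roughly 10-bit per patch)
    ```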

     

    One can easily observe a similar phenomenon.  Find a digital image that exhibits slight banding when you are directly in front of it, then move away from the image.  The banding will disappear at some point.  By moving away from the image, you are effectively increasing the resolution, as the pixels become smaller in your field of view.  However, the bit depth is always the same, regardless of your viewing distance.

     

     

    However, since most of us don't have 4K monitors, maybe the test won't be very evident.

     

    Such a test wouldn't be conclusive unless each monitor matches the resolution and bit depth of the image it displays.

     

     

    And from what I understand (and most here agree), bit depth can't be improved, can it? It's only the color sampling that actually improves.

     

    Most image makers are not aware of the fact that bit depth and color depth are two different properties.  In digital imaging, bit depth is a major factor of color depth, but resolution is an equally major factor of color depth (in both digital and analog imaging).

     

    Therefore, one can sacrifice resolution while increasing bit depth, yet the color depth remains the same (or is decreased).  In other words, swapping resolution for more bit depth does not result in an increase in color depth.

  9. Example images? Can post a 4K and 1080p* image for a single blind trial  B)

     

    There are plenty of examples in which a higher resolution image has been binned to a lower resolution with a higher bit depth.  The technique is mainly used to reduce noise, increase camera sensitivity, and maintain rich colors in post-production transfers.

     

    Again, using such a process on a captured image can never increase color depth.  So, barring any inefficiencies/inequities in the conversion process or the display devices, the resulting images should look the same (except that one has more resolution).
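
    For anyone who wants to see the mechanics, here is a minimal numpy sketch of that kind of binning (a generic illustration, not any particular camera's or program's pipeline): each 2x2 block of 8-bit samples is summed into one sample spanning 0-1020, which calls for a 10-bit container, and nothing is created that wasn't already captured.

    ```python
    import numpy as np

    # An 8-bit "4K" luma plane; uint16 so the 2x2 block sums cannot overflow.
    rng = np.random.default_rng(0)
    full_res = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)

    # Sum each 2x2 block: half the resolution per axis, values now span 0..1020.
    binned = full_res.reshape(1080, 2, 1920, 2).sum(axis=(1, 3))

    print(full_res.shape, binned.shape)   # (2160, 3840) -> (1080, 1920)
    print(binned.max() <= 4 * 255)        # True: only the captured information
    ```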

     

    In addition, it is useless to try to compare such results unless there are two display devices of identical size that can be set to correspond to the bit depth and resolution inherent in each image.  Displaying a 32-bit image on a 16-bit monitor would be a waste of time when assessing the results of binning to a lower resolution at a higher bit depth.

  10. there's nothing significant to be gained from 4K => 1080p resampling in terms of color space / bit depth.

     

    Certainly, one can never gain color depth in an image that is already captured.  However, one can increase the bit depth while reducing resolution, and still maintain the same color depth.

     

     

    In terms of aliasing/artifacts and detail preservation, this is helpful: 

    http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html#__section002__


     

    Aliasing/artifacting is a whole other deal.

  11. So they are trying to convince you that by widening the color space via dithering (AKA interpolation), 'magically' a wider gamut has been recovered.

     

    As tosvus has pointed out, they are not dithering.  They are merely swapping resolution for bit depth -- reducing resolution while increasing bit depth.  The color depth remains the same or decreases (due to inefficiencies in the conversion or due to the color limitations of the final resolution/bit-depth combination).

     

    With dithering, there is no change in the bit depth nor in the resolution.
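
    For contrast, here is a small Python sketch of dithering (my own toy example, using an assumed subtle gradient): noise is added before quantization, and the output keeps exactly the same resolution and the same 8-bit code range -- only the pattern of the quantization error changes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    width = 1024
    ramp = np.linspace(0.0, 0.05, width)           # a subtle gradient on a 0..1 scale

    plain = np.round(ramp * 255)                                      # banded 8-bit
    dithered = np.round(ramp * 255 + rng.uniform(-0.5, 0.5, width))   # noise first

    # Same pixel count and the same 8-bit code range in both versions; dithering
    # only scatters the quantization error so the bands are broken up spatially.
    print(plain.shape == dithered.shape)                   # True
    print(plain.max() <= 255, dithered.max() <= 255)       # True True
    ```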

     

     

    ... pixel density has NOTHING at all to do with capture color gamut.

     

    Pixel density (resolution) is a major factor in color depth/gamut of digital systems.

     

    Here is the relationship between color depth, bit depth and resolution in a digital RGB system:

    COLOR DEPTH = (BIT DEPTH X RESOLUTION)³

    That's basically the way it works, barring practical and perceptual variables.
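
    If it helps, here is that relationship written out as a couple of lines of Python (a literal transcription of the formula above; how one normalizes the bit depth and resolution figures is left to the reader):

    ```python
    def color_depth(bit_depth, resolution):
        """Color depth of a digital RGB system per the relation stated above;
        the cube accounts for the three color channels."""
        return (bit_depth * resolution) ** 3

    # Doubling the resolution at a fixed bit depth scales the result exactly as
    # much as doubling the bit depth at a fixed resolution would.
    print(color_depth(8, 2_000_000) == color_depth(16, 1_000_000))   # True
    ```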

  12. Would the opposite be true (i.e. deterioration of bit depth and color space) when stretching anamorphic footage horizontally?

     

    Essentially, digital anamorphic systems create pseudo-rectangular pixels, which are usually oblong on the horizontal axis.

     

    In such scenarios, the color depth is reduced on the horizontal dimension, while color depth remains the same on the vertical dimension.  However, the total color depth of the image never changes from capture to display.

     

    Furthermore, in digital anamorphic systems the bit depth never changes on either axis (barring any intentional bit depth reduction) -- the pixels just become oblong.
