
Posts posted by tupp

  1. I don't see a banded transition between bit-depth levels, nor do I see any "rainbowing."

     

    If it were an actual banded transition between bit-depth levels, the transition would appear sharper, and we wouldn't see all that gradation within the area in question -- there would just be more bands instead of gradation.

     

    It looks like a cast shadow from the subject onto the background. The "banding" that you might perceive is probably the umbra and penumbra of the cast shadow.

    It appears that you were using a large, softer and cooler source (possibly daylight balanced) and that there was warmer ambient light (possibly tungsten balanced) filling the cast shadow.

  2. @Inazuma  Thanks for the info!
     

    http://www.eoshd.com/content/12923/cure-banding-dslr-footage-gh4-4k-holds-key

     
    Thank you for linking the article.  One of the comments shows the dramatic advantage of "diffuse" dithering over merely applying noise.
     
    I have used noise and blurring with still photos to eliminate banding, but with stills it is easy to localize the effect within the image.  Has anyone used masks (or other selection tools) to keep blurring/noise/dithering to a local area in video?
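
    Here is a rough sketch of how that localized dithering might be done per frame -- this is just my own illustration (it assumes frames already decoded into numpy arrays and a hand-drawn, feathered mask; frame I/O and the masking tool itself are left out):

    import numpy as np

    def dither_region(frame, mask, strength=1.0, rng=None):
        """frame: HxWx3 uint8; mask: HxW float in 0..1 (1 = apply dither)."""
        rng = rng or np.random.default_rng()
        noise = rng.normal(0.0, strength, frame.shape)             # fine-grained gaussian noise
        out = frame.astype(np.float32) + noise * mask[..., None]   # noise added only where the mask is set
        return np.clip(out, 0, 255).astype(np.uint8)

    Running that over every frame (with a feathered mask so the edge of the treated area doesn't show) keeps the rest of the image untouched.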

     

  3. btw, you can calculate the relative exposure range (i.e. dynamic range) as:
     
    relative exposure = log2 ( luminance - reference luminance )


    Not sure on this point. Exposure and "exposure range" are two different things. Also, exposure range and dynamic range are two different things.

     

    an 8bit file has a max luma range of 256 steps, the binary log of which is 8 (stops).


    There is no universal correlation between bit-depth intervals and stops/luma/EV.

    Again, 8 bits worth of tones can be mapped to a system with a dynamic range of 25 stops, while 32 bits worth of tones can be mapped to a system with a dynamic range of only 5 stops. Indeed, cameras currently exist in which the user can select 8-bit, 12-bit, 16-bit and 24-bit depths, while the camera's dynamic range remains unchanged.

     

    So, the value of each bit interval is determined by the dynamic range (or, usually, the amplitude range) of the digital system, along with the number of bits used.

    Of course, this scenario is fairly simple on linear response systems. Systems with adjusted response curves (or with randomly mapped intervals) are much more complex.
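
    To put rough numbers on that last point, here is a minimal sketch (my own illustration, assuming a log-style encoding that spreads the code values evenly across the stops) of how the size of each interval falls out of the dynamic range and the bit count together:

    def stops_per_step(dynamic_range_stops, bit_depth):
        """Average exposure range covered by one code interval (log-uniform mapping assumed)."""
        intervals = 2 ** bit_depth - 1
        return dynamic_range_stops / intervals

    print(stops_per_step(25, 8))    # 8 bits spread over 25 stops  -> ~0.098 stop per interval
    print(stops_per_step(5, 32))    # 32 bits spread over 5 stops  -> ~0.0000000012 stop per interval

    Same arithmetic, wildly different pairings -- which is the point: the bit depth and the dynamic range are set independently.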

  4. If you want to retain the typical full dynamic range (often using film print density as a rough target) of a camera's log profile, you will need to record 10bit or higher. Most log profiles consider middle gray as 18%, not 50%, because our eyes see 50% as almost black, and require 10bits to cover the full range.

     

    The rest of your post is excellent, but these two lines are not true.

    Bit depth and dynamic range are two different and independent properties.  Bit depth is the number of tonal/amplitude steps in a digital system.  Dynamic range gives the usable amplitude range relative to noise (similar to signal-to-noise ratio), and dynamic range applies to both digital and analog systems.

    To illustrate how bit depth and dynamic range are independent, consider that 8 bits of tonal intervals can be mapped to a system with a dynamic range of 25 stops, but, by the same token, 32 bits of tonal intervals can be mapped to a system with a dynamic range of 5 stops.

    Furthermore, a film emulsion (or analog magnetic tape) can have a dynamic range of, say, 7 stops, but that emulsion (or tape) will have no bit depth -- it's analog.
     

  5. With a captured image, it is true that resolution can be sacrificed for increased bit depth -- folks have been doing it for years.   On the other hand, the color depth of a captured image can never be increased (unless something artificial is introduced).

    It is important to keep in mind that bit depth and color depth are not the same property.

    The relationship of bit depth, resolution and color depth is fundamental to digital imaging.  With all other variables remaining the same, the association between bit depth, resolution and color depth in an RGB system is:

    COLOR DEPTH = (BIT DEPTH x RESOLUTION)^3

    So, if two adjacent RGB pixel groups are cleanly summed/binned into one large RGB pixel group, the bit depth will double.   However, the color depth will remain the same or be slightly reduced (depending on the efficiency of the conversion).  The color depth can never increase.

    As others in this thread have suggested, there appears to be something else going on other than normal banding in the OP's first image.

    Even so, if banding exists in an image, increasing the bit depth at the cost of resolution will not eliminate the banding, unless the banding is only a few pixels wide (or unless the resolution becomes so coarse that the image is no longer discernible).  Again, when swapping resolution for more bit depth, the bit depth has been increased, but not the color depth.
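
    Taking the relationship above at face value, a quick arithmetic check (the numbers are made up) shows why the binning trade keeps the color depth flat at best:

    def color_depth(bit_depth, resolution):
        return (bit_depth * resolution) ** 3    # the RGB relationship stated above

    before = color_depth(bit_depth=8,  resolution=1000)   # hypothetical pixel-group count
    after  = color_depth(bit_depth=16, resolution=500)    # ideal 2:1 binning: half the groups, double the bits
    print(before == after)                                 # True -- bit depth rose, color depth did not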
     

  6. A standard value of CoC is often associated with each image format, but the most appropriate value depends on visual acuity, viewing conditions, and the amount of enlargement.

    Well, that quote says it all (except for the fact that circle-of-confusion and depth-of-field are actually two different {but related} properties).

    Nowhere in that quote is resolution mentioned.

    Image format is mentioned as "often associated" with circle-of-confusion values, but the quote states that the most appropriate basis of circle-of-confusion is visual acuity. Visual acuity is not resolution.

    If we were to base depth-of-field on resolution, we would encounter all kinds of silly, varying DOF ratings of the same optical system, which would change with each film stock/sensor and with the whim of any person viewing the captured image who decides to manipulate it long after it has been captured.
  7. So the argument is: Higher ISO -> more processing that lowers effective resolution -> larger acceptable circle of confusion -> deeper depth of field.


    No.

    Post-capture processing and the "acceptable circle of confusion" don't affect the depth-of-field.

    Depth-of-field is an optical property that exists regardless of whether or not the image is captured.
     
     

    What it does is make more of the scene be equally just-out-of-focus, to put it one way, because the lower resolution is inherently less sharp and so more of the image will be "sharp enough."


    I seem to recall reading a similar point somewhere in this thread.
  8. My point was, that there are influences on DoF in real world conditions that add up to the "Full Frame Look" in such a way as making the comparison of 5D (that's it, more or less) to cameras with smaller sensors futile.


    When discussing depth-of-field in the "real world," the term applies only to real world optical systems.
     
     

    Your Wiki link states: "Most DOF formulas, including those discussed in this article, employ several simplifications:"

    Among them demosaicing, sharpening and image noise reductions. Not mentioned was pixel binning, all methods that inherently reduce resolution.


    No.

    Re-read the "simplifications." All of those post-capture processes are ignored by the DOF formulas (a minimal sketch of such a formula appears at the end of this post).

    Depth-of-field is an optical property -- it has nothing to do with sharpening or softening an image after the image is received by the sensor/film.

    Furthermore, not all of the methods that you mentioned change the resolution.

     

    I'm not saying this applies only to full frame cameras, I just ask if it's unreasonable to assume that it affects their look to a different degree.


    Full-frame, half-frame, 16mm, IMAX, 4"x5", 11"x14" -- the size of the medium at the focal plane really has nothing to do with the depth-of-field.

    Depth-of-field and an image's "look" are two different things, but, certainly, DOF can affect the "look" of an image.

     

    "The resolutions of the imaging medium and the display medium are ignored. If the resolution of either medium is of the same order of magnitude as the optical resolution, the sharpness of the final image is reduced, and optical blurring is harder to detect."


    Well, this passage that you have quoted says it all: Resolution of the imaging medium is ignored.

    When the resolution of the capture medium is similar to (or coarser than) the optical resolution, the captured image will seem more blurred throughout, making it more difficult to discern the depth-of-field. Nevertheless, the depth-of-field (an optical property) remains the same.

     

    "The acceptable circle of confusion is influenced by visual acuity, viewing conditions, and the amount by which the image is enlarged (Ray 2000, 52–53)."

    The amount by which the image is enlarged. Exactly. Relative size is all about resolution.


    "Acceptable circle of confusion" and DOF are two different properties.

    Magnification of the image at the film plane affects resolution, but not depth-of-field.

    Cropping into and enlarging a captured digital image can lower the effective resolution, but manipulating a captured image has absolutely nothing to do with depth-of-field.

     

    When we had 16 mm analog film we just knew there would be a noticeable increase of DoF if we shot on Tri-X (ISO 400, as compared to ISO 100), even though we shot wide open.


    I doubt that you would notice any difference in DOF with the same f-stop on the same lens, by merely changing film stocks.

    Regardless, you weren't increasing the depth-of-field -- you might have been creating an image with a coarser resolution overall, thus making the overall image look softer and more uniformly "in focus" throughout.

     

    When we had the first HD camcorders that had the same sensor size of 1/3" as our SD camcorders, we didn't expect a difference as far as DoF was concerned. But there was, not dramatically, not enough to let us pass 35 mm adapters ('DoF machines', proof that the 'big sensor' factor mattered the most).


    Big sensors matter, but the size of the sensor doesn't change the DOF. Depth-of-field is an optical property.
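
    As promised above, here is a minimal sketch of the kind of DOF formula the Wikipedia article discusses (the standard near/far-limit approximation). Its inputs are purely optical plus a conventional "acceptable CoC" value -- sensor resolution, ISO and post-capture processing never appear in it:

    def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
        """Total DOF in mm from the standard near/far-limit approximations."""
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = hyperfocal * focus_dist_mm / (hyperfocal + (focus_dist_mm - focal_mm))
        if focus_dist_mm >= hyperfocal:
            return float("inf")               # focused at/past the hyperfocal distance: DOF runs to infinity
        far = hyperfocal * focus_dist_mm / (hyperfocal - (focus_dist_mm - focal_mm))
        return far - near

    # e.g. a 50 mm lens at f/2 focused at 3 m, with the common 0.03 mm CoC convention
    print(round(depth_of_field(50, 2.0, 3000)))   # roughly 430 mm of total DOF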
  9. tupp, you didn't check the meaning of 'circle of confusion', did you? You take for granted, that when you see a projection, there don't need to be reflective textures that are fine enough to define the individual picture element you recognize?

    No.

    I do not wish to get into a semantics argument, but the definitions of depth-of-field and circle-of-confusion involve basic, well-established optical properties that apply to imaging. The resolution of the sensor/film (and projector/monitor) is not a consideration, as depth-of-field is a purely optical characteristic.

    The Wikipedia page on depth-of-field addresses this very point (note limitation #6).

     

    This is, excuse me, a rather naive way of understanding optical laws.

    No need to get personal, but you would be mistaken if you considered sensor/film resolution as an optical property.
     
     

    Softer? You mix up resolution and sharpness. Low resolution images may look out of focus when scaled to the same size as a high resolution image.

    The term "Softer" is generic. It can apply to the properties of both resolution and sharpness.
     
     

    I never wrote: CoC is the most important factor for DoF, but it is inseparable,

    I never stated that you wrote so.

     

    and therefore your statement 'a given depth-of-field optically remains constant, regardless of sensor/film resolution.'  is wrong, given that there always has to be a medium that receives the light coming through the lens - be it dust or smear on a glass pane, chalk grain on a wall, silver nitrate crystals, pixel circuits, your retina's rod cells

    The focal plane/surface is part of the optical system, but the resolution of the medium at the focal plane/surface has nothing to do with optics.

     

    Instead of arguing, you could make a test of your own. Open the aperture, then film with your camera's highest ISO/gain. You will find a considerably bigger depth of field than with your lowest ISO.

    It's not proportional to what would change with closing the aperture, but nobody said so.

    Huh?

    I am not sure what you are proposing, but it appears that you are suggesting that the depth-of-field will change if I merely vary the ISO setting on my camera while the lens and its aperture remain the same.

    Is that what you are saying?

    If so, would you be interested in a little wager?
  10. Of course resolution affects DoF. If a pixel on the sensor is very big, it swallows a bigger circle of confusion, whereas if you have four times as many pixels on an identically sized sensor, it also needs a four times smaller CoC to render sharp outlines.


    No.

    Depth-of-field (and circles-of-confusion) is a purely optical property, regardless of what is receiving the image at the focal plane.

    Coarser resolution can make the sharply focused areas look softer (similar to the less sharply focused areas), but a given depth-of-field optically remains constant, regardless of sensor/film resolution.

     

    It's not the size of the sensor alone that affects DoF, it's the size of the sensor relative to its resolution. This video explains it (jump to about 4'30"):


    No.

    Depth-of-field is an optical property. The size and resolution of the film/sensor have nothing to do with the optical property of depth-of-field. The only exception to this rule occurs when the sensor corners creep into the optically inferior edge of the image circle.

     

    What is more: 
    One 50 mm f1.4 is not as sharp as any other lens with the same specs, meaning it may not be able to focus an equally small CoC.


    Yes.  Some lenses resolve more sharply than others.

  11. This won't work because it creates shadows. If you want to use something like this on a A7 I don't see the reason to use anything nikon related. Just use the mirex adapter from mamiya 645 to Canon EF. You can adapt all MF glass to mamiya 645, and EF to E adapters are cheap.

     

    It's definitely useful to know that there are adapters that convert most medium format mounts to Mamiya 645.

     

    However, what do you mean by "creates shadows?"  What causes such shadows?

     

    Another advantage of having the tilt mechanism closer to the film plane is that one might be able to utilize more of the image circle, which possibly enables the lens to be pushed past the extremes possible with a more distant, medium format tilt/shift adapter (eventually causing the aforementioned shadows).
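
    As a rough illustration of that image-circle headroom (the ~70 mm circle and the 36x24 mm frame are just example numbers), simple geometry gives the shift available before the frame corners leave the circle:

    import math

    def max_horizontal_shift(circle_dia_mm, sensor_w_mm, sensor_h_mm):
        """Largest horizontal shift that keeps the frame corners inside the image circle."""
        r = circle_dia_mm / 2.0
        half_chord = math.sqrt(r ** 2 - (sensor_h_mm / 2.0) ** 2)   # usable half-width at the frame's height
        return half_chord - sensor_w_mm / 2.0

    # e.g. a 645 lens with a ~70 mm image circle over a 36x24 mm full-frame sensor
    print(round(max_horizontal_shift(70, 36, 24), 1))   # roughly 15 mm of shift headroom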

  12. @araucaria, looks interesting.  I have enough problems without buying field-view equipment.

     

    Consider an E-mount to Nikkor lens tilt-shift adapter.  If there is no vignetting with this adapter, the tilt-shift features should work with medium format lenses, with an additional plain, cheap Nikkor-to-medium-format adapter.  Of course, it will certainly work as a non-tilt/non-shift adapter for the aforementioned Nikkors.

     

    There are Nikkor-to-medium-format tilt/shift adapters, but then you are locked in to a particular medium format mount, with no chance of tilting/shifting the Nikkors.

     

    In regards to the rumored USD$1,800 price of the A7s, I have always been a loud proponent of full frame sensors/cameras for video.  However, it is disappointing that the best 4K bit depth from the A7s is 8 bits -- and that is only possible through the HDMI cable.

     

    The GH4 can do 4K 8-bit to an SD card for the same price, plus 10-bit from the HDMI cable.  Furthermore, with a focal reducer, the GH4 can mimic a Super 35mm frame with an extra stop of sensitivity.

  13. Good point on the G6's focus peaking!

     

    Another advantage of using the G6 instead of the Forza is that there is no confusion between the ACs and electricians if I say, "Set up the 18K!"

  14. The "One-Cam"  looks like a typical machine vision camera with a recorder.  The 9 stops of DR is in line other such cameras.
     
    Some rental houses offer the IO Industries cameras, and they have had booths at NAB and Cinegear for the past few years.
     
    However, one of the most impressive machine vision cinematography set-ups is this project using a Point Grey, 4K, 11.5-stop DR camera. The guy developed an encoder that recorded real-time to cDNG.  He even made a touch control interface.  The footage looks fairly clean, too, as evidenced by this screen cap and this screen cap.
     
    Also, the early Apertus cameras used open-source Elphel machine vision cameras.  Of course, the Apertus team developed a lot of stuff years ago, including encoding schemes and touch control interfaces.

  15. Can't beat physics. F4 = F4, no matter what sensor size. Just the field of view changes.

     

    A couple of points:

    - the same f-stop on every lens of the same focal length won't necessarily transmit the same amount of light to the focal plane, which is why we have T-stops -- they account for the light "Transmission" factor;

    - the field of view doesn't change from one lens to the next if the (effective) focal length remains the same (barring any condensing optics behind the aperture, such as a speed booster).
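
    As a small illustration of the first point (the transmission figures here are made up), the T-stop is just the f-number corrected for the lens's actual transmission:

    import math

    def t_stop(f_number, transmittance):
        """transmittance: fraction of incident light the glass actually passes (0..1)."""
        return f_number / math.sqrt(transmittance)

    print(round(t_stop(1.4, 0.85), 2))   # a clean, multi-coated prime      -> roughly T1.5
    print(round(t_stop(1.4, 0.60), 2))   # a long zoom with many elements   -> roughly T1.8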
