Posts posted by tupp

  1. But the difference between 8-bit and 16-bit is pretty obvious and more in the ballpark of this test.

     

    The difference between 8-bit and 16-bit might be apparent when comparing two images of the same resolution (on monitors whose resolution and bit depth match each image).  Such a difference would become obvious if the scene contained a gradation subtle enough to cause banding in the 8-bit image but not in the 16-bit image.

     

    However, in such a scenario, if you could continually increase the resolution of the camera sensor and monitor of the 8-bit system, you would find that the banding would disappear at some point in the 8-bit image.  By increasing the resolution of the 8-bit system, you are also increasing its color depth -- yet its bit depth always remains at 8-bit.

     

    One can easily observe a similar phenomenon.  Find a digital image that exhibits slight banding when you are directly in front of it, then move away from the image.  The banding will disappear at some point.  By moving away from the image, you are effectively increasing the resolution, making the pixels smaller in your field of view.  However, the bit depth is always the same, regardless of your viewing distance.
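
    For anyone who wants to see this numerically, here is a quick Python/numpy sketch (a toy example of my own, not any camera's actual pipeline).  It quantizes a subtle gradient to 8-bit and then "steps back" by averaging neighboring pixels; the small noise term stands in for the grain/texture that any real capture contains:

        import numpy as np

        rng = np.random.default_rng(0)

        # A subtle gradient that spans only ~5 code values after 8-bit rounding,
        # so hard bands appear when each pixel is viewed individually.
        width = 4096
        scene = np.linspace(100.0, 104.0, width)

        # 8-bit capture, with a touch of sensor noise (real captures always
        # have some, and it lets fine pixels average out to in-between shades).
        captured = np.clip(np.round(scene + rng.normal(0.0, 0.5, width)), 0, 255)
        print("levels up close:", np.unique(captured).size)        # just a handful

        # Crude stand-in for stepping back: the eye blends groups of pixels.
        blended = captured.reshape(-1, 16).mean(axis=1)
        print("levels from a distance:", np.unique(blended).size)  # far more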

     

     

    However, since most of us don't have 4K monitors, maybe the test won't be very evident.

     

    Such a test wouldn't be conclusive unless each monitor matches the resolution and bit depth of the image it displays.

     

     

    And from what I understand (and most here agree), bit depth can't be improved, can it? It's only the color sampling that actually improves.

     

    Most image makers are not aware that bit depth and color depth are two different properties.  In digital imaging, bit depth is a major factor of color depth, but resolution is an equally major factor of color depth (in both digital and analog imaging).

     

    Therefore, one can sacrifice resolution while increasing bit depth, yet the color depth remains the same (or is decreased).  In other words, swapping resolution for more bit depth does not result in an increase in color depth.

  2. Example images? Can post a 4K and 1080p* image for a single blind trial  B)

     

    There are plenty of examples in which a higher resolution image has been binned to a lower resolution with a higher bit depth.  The technique is mainly used to reduce noise, increase camera sensitivity, and maintain rich colors in post-production transfers.
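
    As a rough illustration of what such binning does (a minimal numpy sketch under simple assumptions -- a single channel, plain 2x2 sums, not any particular camera's pipeline), summing each 2x2 block of 8-bit samples yields values that span a 10-bit range:

        import numpy as np

        # A 4K-sized single-channel frame of 8-bit samples (random stand-in data).
        rng = np.random.default_rng(0)
        frame_4k = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)

        # 2x2 binning: sum each 2x2 block.  Four 8-bit samples (0..255 each)
        # sum to 0..1020, which takes 10 bits to store.  Summing also averages
        # out noise, which is why binning increases effective sensitivity.
        binned = (frame_4k[0::2, 0::2] + frame_4k[0::2, 1::2]
                  + frame_4k[1::2, 0::2] + frame_4k[1::2, 1::2])

        print(binned.shape)               # (1080, 1920) -- HD resolution
        print(int(binned.max()) <= 1020)  # True: fits in 10 bits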

     

    Again, using such a process on a captured image can never increase color depth.  So, barring any inefficiencies/inequities in the conversion process or the display devices, the resulting images should look the same (except that one has more resolution).

     

    In addition, it is useless to try to compare such results, unless there are two display devices of identical size that can be set to correspond to the bit depth and resolution inherent in each image.  Displaying a 32-bit image on a 16-bit monitor would be a waste of time when assessing the results of binning to a lower resolution and a higher bit depth.

  3. there's nothing significant to be gained from 4K => 1080p resampling in terms of color space / bit depth.

     

    Certainly, one can never gain color depth in an image that is already captured.  However, one can increase the bit depth while reducing resolution, and still maintain the same color depth.

     

     

    In terms of aliasing/artifacts and detail preservation, this is helpful: 

    http://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html#__section002__


     

    Aliasing/artifacting is a whole other deal.

  4. So they are trying to convince you that by widening the color space via dithering (AKA interpolation), 'magically' a wider gamut has been recovered.

     

    As tosvus has pointed out, they are not dithering.  They are merely swapping resolution for bit depth -- reducing resolution while increasing bit depth.  The color depth remains the same or decreases (due to inefficiencies in the conversion or due to the color limitations of the final resolution/bit-depth combination).

     

    With dithering, there is no change in the bit depth nor in the resolution.

     

     

    ... pixel density has NOTHING at all to do with capture color gamut.

     

    Pixel density (resolution) is a major factor in the color depth/gamut of digital systems.

     

    Here is the relationship between color depth, bit depth and resolution in a digital RGB system:

    COLOR DEPTH = (BIT DEPTH × RESOLUTION)³

    That's basically the way it works, barring practical and perceptual variables.
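
    Taken at face value (a toy calculation of the formula exactly as written above, with "resolution" as the pixel count of the area being compared -- the absolute numbers are not meaningful by themselves, only the ratios), a short Python sketch shows how heavily resolution weighs in:

        def color_depth(bit_depth, resolution):
            """The formula as stated above: (bit depth x resolution)^3."""
            return (bit_depth * resolution) ** 3

        hd_8bit = color_depth(8, 1920 * 1080)
        hd_10bit = color_depth(10, 1920 * 1080)
        uhd_8bit = color_depth(8, 3840 * 2160)

        print(uhd_8bit / hd_8bit)   # 64.0 -- 4x the pixels gives 64x the color depth
        print(hd_10bit / hd_8bit)   # ~1.95 -- 10-bit vs 8-bit at the same resolution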

  5. Would the opposite be true (i.e. deterioration of bit depth and color space) when stretching anamorphic footage horizontally?

     

    Essentially, digital anamorphic systems create pseudo-rectangular pixels, which are usually oblong on the horizontal axis.

     

    In such scenarios, the color depth is reduced on the horizontal dimension, while color depth remains the same on the vertical dimension.  However, the total color depth of the image never changes from capture to display.

     

    Furthermore, in digital anamorphic systems the bit depth never changes on either axis (barring any intentional bit depth reduction) -- the pixels just become oblong.

  6. tupp, are you talking about "perceived" color depth?  Which would factor in viewing distance and resolution.

     

    I am talking about actual color depth, of which viewing distance can be a factor.  As I have already said, resolution is a major factor in color depth.

     

    However, in most cases, viewing distance can be ignored, as color depth can be determined by simply taking the bit depth and pixel count over a given area (percentage) of the frame.

     

     

    I agree that the perceived color depth would have broad characteristics, but from a technical standpoint, Bit Depth is bits per channel and Color Depth is bits per pixel.


     

     

    Taking "color depth per pixel" is actually just considering the number of colors that results from the bit depth in a single RGB pixel cell.  Color depth also majorly involves resolution -- pixels per given area of frame (or pixel groups/cells per given area of frame).

     

     

    For example, I would say an image's bit depth is 8 bits per color, which would be a 24-bit color depth image, because the 8 bits for each color (RGB) would be 8 bits x 3 channels for each pixel;

     

    That equation merely gives the number of possible colors per RGB pixel group.  It doesn't give the color depth, because it does not account for how many pixel groups fit into a given area of the frame.
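
    To put numbers on it: 8 bits x 3 channels allows 2^24 (about 16.8 million) possible colors for a single RGB pixel group, but a mere 2x2 block of such groups can take (2^24)^4 = 2^96 distinct states.  That spatial dimension is exactly what the per-pixel figure leaves out.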

     

     

    However, from this technical standpoint, Bit Depth will not always equal Color Depth; for example, you could have an image that has the same 8-bit Bit Depth but has a 32-bit Color Depth when using an Alpha Channel, because then the equation changes to 8 bits x 4 channels (RGBA).

     

    Adding an alpha channel complicates the equation considerably, in that the background color/shade can affect how many new colors the alpha channel adds to those of the RGB cell.  However, the total number of possible colors that an RGBA cell can generate will never exceed the number of possible RGB colors multiplied by the number of possible alpha shades.
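
    In numbers, for 8-bit channels: that ceiling is 2^24 x 2^8 = 2^32 (about 4.3 billion) possible results, and against any single fixed background the count collapses well below that, since many color/alpha pairs composite to the same displayed shade.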

     

     

    Either way, the bottom line is how many possible shades the image can contain; from that definition, anything capable of 1,024 shades per channel would be considered a 10-bit image (even if it isn't using the same precision as a true 10-bit source), so 8-bit 4K would "technically" scale to 10-bit HD, but it's unlikely it would reproduce colors as accurately as a proper 10-bit image.

     

    8bit 4k can "practically" scale to 10bit HD with "accurate" colors.  The efficiency/quality of the conversion determines how many colors are lost in translation.
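
    The arithmetic is the same as in the binning sketch above: four 8-bit samples (0..255 each) sum to a value from 0 to 1020, i.e. 1,021 possible levels per channel -- essentially a 10-bit scale.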

  7. I would think that you could increase the color depth of an image if you reduce its viewing size without changing its resolution, just taking a step backwards, for instance.

     

    If you reduce the image size while maintaining the pixel count of the image, you effectively increase the resolution -- the pixels are smaller per degree of the field of view.  So, you are correct in that you have increased the color depth per degree of the field of view.

     

    However, the actual total color depth of the image has not changed at all.  You are merely squeezing the total color depth of the image into a smaller area, and, of course, you are sacrificing discernibility and image size.

  8. So it is possible to down sample 8bit 4k and gain more accurate color depth very similar to 10bit HD...

     

    The color depth of a given image can never be increased -- not without introducing something artificial.

     

    Increasing bit depth in a digital image while reducing the resolution will, at best, maintain the same color depth as the original.  I think that this established theory/technique is what has been recently "discovered."

     

    Again, BIT DEPTH ≠ COLOR DEPTH.  Bit depth determines the number of possible shades per color channel in a digital image.  Color depth is a much broader characteristic, as it majorly involves resolution and as it also applies to analog mediums (film, non-digital video, printing, etc.).

  9. @maxotics

    Thank you for your post.

    By the way, on the strength of your work with the EOS-M, I just got one with a Fujian 35mm. I can't wait to start shooting with it!

    In regard to my color depth post, the math that I quoted (from the page that I linked) has nothing to do with video compression. In fact, that formula only applies to raw, unadulterated image information. Introducing compression variables would make the math more complex.

    However, introducing compression can never increase the color depth capabilities inherent in a given image capture or image viewing system.

    Not sure what the point is with the Bayer images, but the color depth formula probably applies to raw Bayer images, with a slight adjustment. One chooses pixel groups in multiples of four (two green, one blue, one red), and, I think, the only formula change is that one merely sums the bit depth of the two green pixels and then multiplies that sum against the bit depth of the other two pixels.

     

    Keep in mind that one is calculating the color depth of a raw image that normally (but not necessarily) has a predominant green cast. Also, be mindful of the fact that there are no Bayer viewing systems (just Bayer sensors).

    On the other hand, there are several non-Bayer sensors (even RGB sensors, e.g., the Panavision Genesis), and almost all digital color viewing systems are RGB.

    I do not follow the point on the formula discrepancy, but note that for the formula to work, one must choose a percentage of an image frame, and one must consistently use that same percentage for all image frames to assess their relative color depth. One can choose for the area to be the entire image, but then one is essentially taking the entire frame as one blended pixel group.

    If you are consistently utilizing the same image percentage throughout your example, please simplify your point for my benefit. I do not understand your conclusion, with the statement, "The number of bits that represent a color have 2 aspects."

    I think that I agree with the statement: "The larger the bit value the GREATER accuracy you can have in representing the color." I am not sure if "accuracy" is the appropriate term. Certainly, the larger the bit depth, the greater the number of possible colors/shades.

    I am not sure this statement was what you meant: "The large the bit value, the greater RANGE you can have between the same color in two neighboring pixels, say." There is a situation in which the color/shade range would be exactly the same regardless of bit depth. In addition, a greater bit depth can actually reduce the dynamic range between two pixel values. I am happy to give examples on request.

    Speaking of dynamic range, it really is a property that is independent of bit depth and color depth. Dynamic range involves the possible high and low value extremes relative to the noise level. The bit depth determines the number of available increments within those extremes. There are plenty of examples of systems having high dynamic range with a low bit depth (and vice versa).

    I agree with this statement: "Higher resolution does not create higher dynamic range." Resolution and dynamic range are completely independent. However, higher resolution definitely increases color depth.

    I disagree with this statement: "Dynamic range is a function of bit-depth at the pixel level." Again, bit depth and dynamic range are two different characteristics. A system can have: great bit depth and low dynamic range or low bit depth and great dynamic range -- or any other combination of the two.

    Thanks!

     

    [edit -- corrected formula]

  10. Most do not understand the color depth advantage of higher resolution and only see the "sharpness" advantage of higher resolution.

     

    There are common misconceptions about this scenario, so here are some basic facts.

     

    First of all, BIT DEPTH ≠ COLOR DEPTH.  This is the hardest concept for most to understand, but bit depth and color depth are not the same thing.  Basically, bit depth is a major factor in color depth, with resolution being the other major factor.

     

    A fairly accurate formula for the relationship of color depth, bit depth and resolution is:

    COLOR DEPTH = (BIT DEPTH × RESOLUTION)³.

     

    This mathematical relationship means that a small increase in resolution can yield a many-fold increase in color depth.
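
    For instance, under this formula, doubling the pixel count at the same bit depth multiplies the color depth by 2³ = 8, while going from 8-bit to 10-bit at the same pixel count multiplies it by only (10/8)³ ≈ 1.95.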

     

    The above formula is true for linear-response, RGB pixel-group sensors.  When considering non-RGB sensors (e.g., Bayer, X-Trans, etc.) and sensors with a non-linear response, the formula gets more complex.  In addition, the formula does not take into account factors of human perception of differing resolutions, nor does it account for binning efficiency when trying to convert images from a higher resolution to a lower resolution.

     

    More detailed info can be found on this page.

     

     
