kye Posted March 19, 2021

2 hours ago, tupp said:

Nope. Color depth is the number of different colors that can be produced in a given area. A given area has to be considered because imaging necessarily involves area, and area necessarily involves resolution. Obviously, if a 1-bit imaging system produces more differing colors as the resolution is increased, then resolution is an important factor in color depth -- it is not just bit depth that determines color depth. The common screen-printing example above is just such an imaging system: it produces a greater number of differing colors as the resolution increases, while the bit depth remains 1-bit.

The Wikipedia definition of color depth is severely flawed in at least two ways: it doesn't account for resolution, and it doesn't account for color depth in analog imaging systems -- which possess no bit depth and no pixels whatsoever.

Now, let us consider the wording of the Wikipedia definition of color depth that you quoted. That definition actually gives two image areas for consideration: "a single pixel," meaning an RGB pixel group; and "the number of bits used for each color component of a single pixel," meaning a single pixel site of one of the color channels.

For simplicity's sake, let's just work with Wikipedia's area #2 -- a single-channel pixel site that can render "N" distinct tones (two raised to its bit depth). We will call the area of that pixel site "A." If we double the resolution, the number of pixel sites in "A" increases to two, and suddenly we can produce more tones inside "A." In fact, area "A" can now produce "N²" tone combinations -- far more than "N" tones. Likewise, if we quadruple the resolution, "A" suddenly contains four times the pixel sites that it did originally, and the number of possible tone combinations within "A" increases to "N⁴."

Now, one might say, "that's not how it actually works in digital images -- two or four adjacent pixels are not designed to render a single tone."
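The resolution argument above can be sketched numerically (a minimal, hypothetical illustration, not from the original post): counting the distinct average tones that a block of 1-bit pixels can present over its area -- as in halftone screen printing -- shows tone depth growing with the number of pixel sites in that area.

```python
# Sketch: a 1-bit imaging system can render intermediate tones over an area.
# Each pixel is 0 or 1; a block of n such pixels can show n + 1 distinct
# average tones (out of 2**n raw on/off patterns).
from itertools import product

def distinct_block_tones(pixels_per_block: int) -> int:
    """Count the distinct average tones a block of 1-bit pixels can present."""
    tones = {sum(p) / pixels_per_block
             for p in product((0, 1), repeat=pixels_per_block)}
    return len(tones)

print(distinct_block_tones(1))  # 2 tones: 0 and 1
print(distinct_block_tones(4))  # 5 tones: 0, 0.25, 0.5, 0.75, 1
```

Viewed at a distance (or after a blur/resample), each block reads as one of those averaged tones, so raising resolution raises the tone count even though each pixel stays 1-bit.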
Well, the fact is that some sensors and monitors use more pixels within a pixel group than are found in the typical Bayer pixel group or in a striped RGB pixel group. Furthermore (and probably most importantly), image detail can feather off within one, two, or three pixel groups, and such tiny transitions might be where higher tone/color depth is most utilized.

By the way, I didn't come up with the idea that resolution is "half" of color depth. It is a fact that I learned when I studied color depth in analog photography in school -- back when there was no such thing as bit depth in imaging. In addition, experts have more recently shown that higher resolutions carry more color information (color depth), allowing conversions from 4K, 4:2:0, 8-bit to Full HD, 4:4:4, 10-bit -- using the full, true 10-bit range of tones. Here is Andrew Reid's article on the conversion, and here is the corresponding EOSHD thread.

You're making progress, but you haven't gotten there yet. Please explain how, in an 8K image with banding -- an area of dozens or hundreds of pixels that are all the same colour -- the downsampling process will produce anything other than a lower-resolution version of that same flat band of colour.
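Both halves of this exchange can be sketched with a toy 2x2 average (hypothetical code values, not taken from either post): averaging differing 8-bit neighbours produces tones between 8-bit steps, while four identical pixels average back to the same value, so an already-flat band stays flat.

```python
# Toy 2x2 downsample: average four neighbouring 8-bit code values into one
# higher-precision output value (illustrative numbers only).
def downsample_2x2(a: int, b: int, c: int, d: int) -> float:
    """Average four 8-bit samples; the result has quarter-step precision."""
    return (a + b + c + d) / 4

# Dithered/noisy detail: differing neighbours average to a tone that no
# single 8-bit pixel could store -- the basis of the 8-bit-to-10-bit claim.
print(downsample_2x2(100, 101, 100, 101))  # 100.5, between 8-bit steps

# A flat banded area: identical neighbours average to the same value, so
# downsampling alone cannot recover tones lost to banding -- kye's objection.
print(downsample_2x2(100, 100, 100, 100))  # 100.0
```

The design point: the extra precision comes only from variation (noise, dither, fine detail) within each averaged block; where the source is perfectly flat, there is no variation to average into new tones.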