
Image thickness / density - help me figure out what it is


kye

Recommended Posts

19 hours ago, kye said:

something that occurs in real footage, like when I did everything wrong and recorded this low-contrast scene (due to cloud cover) with C-Log in 8-bit.

[Attached image: CinquedeTerre_1.9.3.jpg]

With a bit more contrast the image would look "thicker," I think.  I wouldn't mind losing detail in those shadows.  What's it really buying you anyway?  You can also mask the sky to hold some detail there if you wanted to.

Still, it's a shot of partially backlit colorful buildings on an overcast midday... which isn't going to render wonderfully on any camera, digital or film.

[Attached image: COLORS.jpg]


2 hours ago, fuzzynormal said:

Not for me, because they look the same, or are so similar as to not matter to my eye.

Maybe I missed something more obvious?

That's what I thought - that there wasn't much visible difference between them and so no-one commented.

What's interesting is that I crunched some of those images absolutely brutally.

The source images were (clockwise starting top left) 5K 10-bit 4:2:0 HLG, 4K 10-bit 4:2:2 Cine-D, and 2x RAW 1080p images, which were then graded, put onto a 1080p timeline, and then degraded as below (a rough sketch of the rounding step follows the list):

  • First image had film grain applied then JPG export
  • Second had film grain then RGB values rounded to 5.5bit depth
  • Third had film grain then RGB values rounded to 5.0bit depth
  • Fourth had film grain then RGB values rounded to 4.5bit depth
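
For anyone who wants to replicate the rounding step, here's a rough Python equivalent (an illustration only - the actual degrade was done inside the NLE, and the grain amount here is arbitrary):

import numpy as np

def degrade(frame, bits, grain_sigma=0.01):
    """Add noise as a stand-in for grain, then round each RGB channel to 2**bits levels."""
    noisy = frame + np.random.normal(0.0, grain_sigma, frame.shape)   # grain_sigma is arbitrary
    steps = 2.0 ** bits                       # e.g. bits=4.5 gives ~22.6 steps per channel
    return np.clip(np.round(noisy * steps) / steps, 0.0, 1.0)

# 'frame' would be a float RGB array scaled 0-1, e.g. a still exported from the timeline
# degraded = degrade(frame, bits=4.5)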

The giveaway is the shadow near the eye on the bottom-right image, which at 4.5 bits is visibly banding.

What this means is that we're judging image thickness via an image pipeline that isn't that visibly degraded by having the output at 5 bits-per-pixel.

The banding is much more obvious without the film grain I applied, and for the sake of the experiment I tried to push it as far as I could.

Think about that for a second - almost no visible difference when taking an image down to 5 bits-per-pixel, from source footage at a data rate of around 200Mbps (each frame is over 1Mb).
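
At 25fps, for example, that works out to roughly:

# per-frame size at ~200 Mbps, using 25fps purely as an example frame rate
bitrate_mbit_per_s = 200
fps = 25
mbit_per_frame = bitrate_mbit_per_s / fps    # 8 Mbit per frame
mbyte_per_frame = mbit_per_frame / 8         # ~1 MB per frame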


6 hours ago, kye said:

...the shadow near the eye on the bottom-right...

Personally, I'm not going to worry.  There's the other 98% of things happening in production to think about before shadow banding becomes a priority. 

Like, should I go to bed early so I can get up at 4am and capture 2 hours of footage during sunrise?  What's for breakfast?  How can I frame my composition to take advantage of backlight?  Who has the coffee?  Where do I need to be to get the best exposure of that subject?  Can the actress doing the supporting role actually help carry this scene?  How long is it going to take to get to location?  Should I use the 50mm so the shots of the dogs look more flattering even though we're in a tight space?  etc.

If I'm ever on a production wherein the level of pixel scrutiny you're outlining here is part of the process, it sure won't be me thinking about it.  That rabbit hole doesn't interest me enough to crawl down it.  I'll take a peek into it every now and again, but that's for some other bunny to burrow into.

After all, without good footage to begin with, the rest isn't exactly worth thinking about.


19 hours ago, noone said:

While I think it is crazy to not light something when you CAN, a lot of the time shooting in available light is the goal, or even all you can do, and in that case having better DR, colour depth and tonality with a camera like an A7S beats a camera that starts off better but falls away a lot quicker.

I disagree that shooting in lower light levels makes an image "thinner" necessarily (though it CAN).

Nothing wrong with using intentional natural light. It can sometimes produce amazing results. 

But in your reference to dynamic range, I do think it matters how those stops are utilised. For example, on an overcast grey day - using LOG on some cameras may spread the exposure too thinly - whereas one might prefer to use a standard gamma to accomplish a “juicier” looking image. 

The image will most certainly look thinner in poor lighting conditions as there’s less information captured by each pixel. Not in all situations but it’s certainly a big factor.


1 hour ago, Oliver Daniel said:

 

The image will most certainly look thinner in poor lighting conditions as there’s less information captured by each pixel. Not in all situations but it’s certainly a big factor.

I agree if it is POOR lighting.     If it is LOW lighting and the particular camera can handle it, no.

Some cameras simply cannot handle low light, others handle it to varying degrees, but every camera does have a limit.

It is less of an issue with still photography because you can often use a longer shutter speed.


20 hours ago, Oliver Daniel said:

Nothing wrong with using intentional natural light. It can sometimes produce amazing results. 

But in your reference to dynamic range, I do think it matters how those stops are utilised. For example, on an overcast grey day - using LOG on some cameras may spread the exposure too thinly - whereas one might prefer to use a standard gamma to accomplish a “juicier” looking image. 

The image will most certainly look thinner in poor lighting conditions as there’s less information captured by each pixel. Not in all situations but it’s certainly a big factor.

Actually, the fact that an image can be reduced to 5bits and not be visibly ruined means that the bits aren't as important as we all seem to think.

A bit-depth of 5bits is equivalent to taking an 8-bit image and only using 1/8th of the DR, then expanding that out.  Or, shooting a 10-bit image and only exposing using 1/32 of that DR and expanding that out.
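
To put numbers on that:

levels_5bit = 2 ** 5     # 32 code values
levels_8bit = 2 ** 8     # 256 -> 32/256 = 1/8 of the 8-bit range
levels_10bit = 2 ** 10   # 1024 -> 32/1024 = 1/32 of the 10-bit range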

Obviously that's not something I'd recommend, and I should also note that I applied a lot of noise before doing the bit-depth reduction, but the idea that image thickness is related to bit-depth seems to be disproven.

I'm now re-thinking what to test next, but this was an obvious thing and it turned out to be wrong.


1 hour ago, kye said:

Actually, the fact that an image can be reduced to 5bits and not be visibly ruined means that the bits aren't as important as we all seem to think.

A bit-depth of 5bits is equivalent to taking an 8-bit image and only using 1/8th of the DR, then expanding that out.  Or, shooting a 10-bit image and only exposing using 1/32 of that DR and expanding that out.

Obviously that's not something I'd recommend, and I should also note that I applied a lot of noise before doing the bit-depth reduction, but the idea that image thickness is related to bit-depth seems to be disproven.

I'm now re-thinking what to test next, but this was an obvious thing and it turned out to be wrong.

But those images were not captured in 5-bit; that is an indicator for the final compression, NOT an indicator of its quality as a capture format. 


8 hours ago, Geoff CB said:

But those images were not captured in 5-bit; that is an indicator for the final compression, NOT an indicator of its quality as a capture format. 

The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'.

We know that high-end cinema cameras typically have really thick looking images, and that cheap cameras typically do not (although there are exceptions).  Therefore this quality of thickness is related to something that differs between these two scenarios.

Images from cheap cameras typically have a range of attributes in common, such as 8-bit, 420, highly compressed, cheaper lenses, less attention paid to lighting, and a range of other things.  However, despite all these limitations, the images from these cameras are very good in some senses.  A 4K file from a smartphone has a heap of resolution, reasonable colour science, etc, so it's not like we're comparing cinema cameras with a potato.

This means that the concept of image thickness must be fragile.  Otherwise consumer cameras would capture it just fine.

If something is fragile, and only just on the edge of being captured, then if we take a thick image and degrade it in the right ways, the thickness should evaporate with the slightest degradation.

Given that I can take an image and output it at 8-bits and at 5-bits without there being a night-and-day difference, I must assume one of three things:

  1. the image wasn't thick to begin with
  2. it is thick at both 8-bits and 5-bits and therefore bit-depth doesn't matter that much
  3. it is thick at 8-bits but not at 5-bits and people just didn't notice, in a thread specifically about this

I very much doubt that it's #3, because I've had PMs from folks who I trust saying it didn't look much different.  

Maybe it's #1, but I also doubt that, because we're routinely judging the thickness of images via stills from YT or Vimeo, which are likely to be 8-bit, 420, and highly compressed.  The images of the guy in the car that look great are 8-bit.  I don't know where they came from, but if they're screen grabs from a streaming service then they'll be pretty poor quality too.  Yet they still look great.

I'm starting to think that maybe image thickness is related to the distribution of tones within an HSL cube, and some areas being nicer than others, or there being synergies between various areas and not others.


8 hours ago, kye said:

The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'.

I think that the "thickness" comes primarily from emulsion's color depth and partially from the highlight compression that you mentioned in another thread, from the forgiving latitude of negative film and from film's texture (grain).

 

Keep in mind that overlaying a "grain" screen on a digital image is not the same as the grain that is integral to forming an image on film emulsion.  Grain actually provides the detail and contrast and much of the color depth of a film image.

 

 

8 hours ago, kye said:

We know that high-end cinema cameras typically have really thick looking images, and that cheap cameras typically do not (although there are exceptions).

Home movies shot on Super8 film often have "thick" looking images, if they haven't faded.

 

8 hours ago, kye said:

Given that I can take an image and output it at 8-bits and at 5-bits without there being a night-and-day difference...

You didn't create a 5-bit image nor a "4.5-bit" image, nor did you keep all of the shades within 22.6 shade increments ("4.5-bits") of the 255 increments in the final 8-bit image.

Here are scopes of both the "4.5-bit" image and the 8-bit image:

[Attached image: 4.5-bit scopes]

 

[Attached image: 8-bit scopes]

If you had actually mapped the 22.6 tones-per-channel from a "4.5-bit" image into 22.6 of the 256 tones-per-channel of the 8-bit image, then all of the image's pixels would appear inside about 23 vertical lines on the RGB histograms (with the remaining ~233 histogram lines showing zero pixels).
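
One way to check is simply to count how many of the 256 possible 8-bit values each channel actually occupies - for example, in Python (the filename is just a placeholder):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("4p5bit_frame.png").convert("RGB"))   # placeholder filename
for ch, name in zip(range(3), "RGB"):
    used = np.unique(img[..., ch]).size
    print(name, used, "of 256 possible 8-bit values occupied")
# A true "4.5-bit" image stored as 8-bit would occupy only ~23 values per channel;
# a much higher count means later steps put pixels back in between those tones.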

 

So, even though the histogram of the "4.5-bit" image shows spikes (compared to that of the 8-bit image), the vast majority of the "4.5-bit" image's pixels fall in between the 22.6 tones that would be inherent in an actual "4.5-bit" image.

 

To do this comparison properly, one should probably shoot an actual "4.5-bit" image, process it in a "4.5-bit" pipeline and display it on a "4.5-bit" monitor.

 

By the way, there is a perceptible difference between the 8-bit image and the "4.5-bit" image.


3 hours ago, tupp said:

I think that the "thickness" comes primarily from emulsion's color depth and partially from the highlight compression that you mentioned in another thread, from the forgiving latitude of negative film and from film's texture (grain).

It might be, that's interesting.  I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post.

3 hours ago, tupp said:

Keep in mind that overlaying a "grain" screen on a digital image is not the same as the grain that is integral to forming an image on film emulsion.  Grain actually provides the detail and contrast and much of the color depth of a film image.

Agreed.  In my bit-depth reductions I added grain to introduce noise to get the effects of dithering:

"Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD."

Thickness of an image might have something to do with film grain, but that's not what I was testing (or trying to test anyway).

3 hours ago, tupp said:

Home movies shot on Super8 film often have "thick" looking images, if they haven't faded.

Agreed.  That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness as maybe that will help with thickness?

3 hours ago, tupp said:

You didn't create a 5-bit image nor a "4.5-bit" image, nor did you keep all of the shades within 22.6 shade increments ("4.5-bits") of the 255 increments in the final 8-bit image.

Obviously it's possible that I made a mistake, but I don't think so.

Here's the code:

Quote

// Bits = target bit depth; a float slider, so fractional values like 4.5 can be entered
DEFINE_UI_PARAMS(Bits, Bits, DCTLUI_SLIDER_FLOAT, 14, 0, 14, 1)


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    // number of quantisation steps per channel: 2^Bits (e.g. Bits = 4.5 -> ~22.6 steps)
    const float Deg = pow(2,Bits);

    // round each channel to the nearest step, then scale back to 0-1
    const float r = round(p_R*Deg)/Deg;
    const float g = round(p_G*Deg)/Deg;
    const float b = round(p_B*Deg)/Deg;

    return make_float3(r, g, b);
}

Pretty straightforward.

Also, if I set it to 2.5bits, then this is what I get:

[Attached screenshot: result at 2.5 bits]

which looks pretty much like what you'd expect.

I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data.  If I set it to 1 bit then the image looks like it's not providing any values between the standard ones you'd expect (black, red, green, blue, yellow, cyan, magenta, white).

[Attached screenshot: result at 1 bit]

Happy to hear if you spot a bug.

Also, maybe the image gets given new values when it's compressed?  Actually, that sounds quite possible... hmm.
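
That's easy enough to sanity-check outside Resolve - quantise something, run it through a lossy codec, and count the values again. A rough sketch using JPEG as a stand-in for the delivery compression:

import io
import numpy as np
from PIL import Image

ramp = np.tile(np.linspace(0, 255, 256, dtype=np.float32), (256, 1))
quantised = np.clip(np.round(ramp / 255 * 22.6) / 22.6 * 255, 0, 255).astype(np.uint8)   # roughly "4.5-bit"

buf = io.BytesIO()
Image.fromarray(quantised).save(buf, format="JPEG", quality=85)
decoded = np.asarray(Image.open(buf))

print(np.unique(quantised).size, "distinct values before compression")      # ~24
print(np.unique(decoded).size, "distinct values after the JPEG round-trip")  # many more
# The lossy transform smears the hard steps, so in-between values come back.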

3 hours ago, tupp said:

To do this comparison properly, one should probably shoot an actual "4.5-bit" image, process it in a "4.5-bit" pipeline and display it on a "4.5-bit" monitor.

I wasn't suggesting that a 4.5bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth.

3 hours ago, tupp said:

By the way, there is a perceptible difference between the 8-bit image and the "4.5-bit" image.

Indeed there is, and I'd expect there to be!  I mean, I bought a GH5 based partly on the internal 10-bit!  I'm not regretting my decision, but I'm thinking that it's less important than I used to think it was, especially now that I'm not using a log profile like I used to.

Essentially the test was to go way too far (4.5bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do.  

If we start with the assumption that cheap cameras create images that are thin because of their 8-bit codecs, then by that logic a 5-bit image should be razor thin and completely objectionable, but it wasn't, so it's unlikely that the 8-bit property is the one robbing cheap cameras' images of their thickness.


12 hours ago, kye said:

I'm starting to think that maybe image thickness is related to the distribution of tones within an HSL cube

This harks back to deezid's point:

Quote

Super highly saturated shadow areas which many cameras nowadays desaturate to cover chroma noise

From my investigations, film does seem to have much more saturated shadows than a digital image offers. If you match the saturation of the midtones of digital to film, then the shadows will need a boost to also match... maybe by around 25-50% at the lowest parts. It's a shockingly huge saturation boost in the shadow areas (and the highlights would need to come down in saturation slightly). I'm not talking about log images here, I'm talking about contrasty Rec709.
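
If you want to experiment with that in post, the crudest version is a saturation gain that ramps up as luminance falls - a rough sketch (the gains are just ballpark figures, not measured values):

import numpy as np

def shadow_sat_boost(rgb, low_gain=1.4, high_gain=0.95):
    """Scale saturation around luma, with more gain in the shadows and slightly less in the highlights."""
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    gain = low_gain + (high_gain - low_gain) * luma        # 1.4x at black, 0.95x at white
    pivot = luma[..., None]                                 # saturate/desaturate around luma
    return np.clip(pivot + (rgb - pivot) * gain[..., None], 0.0, 1.0)

# 'rgb' is a float array scaled 0-1 (Rec709 primaries assumed for the luma weights)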

The digital capture is probably closer to being an accurate representation of the level of saturation in reality. But film is transformative. We want our images to look better than reality!

12 hours ago, kye said:

and some areas being nicer than others

If we talk about memory colours (sky, foliage and skin), the preferences of photographers and middle-American shoppers led to altered hue and saturation in Kodak film stocks. So it looks like we prefer skies that are more cyan than in reality, foliage that is cooler, and skin that is more uniform and tending towards tan (Fuji skin tends towards rosy pink).

3 minutes ago, kye said:

I bought a GH5 based partly on the internal 10-bit

With 10bit I can get decent, filmic colour out of V-Log! But 8 bit would fall apart.


30 minutes ago, zerocool22 said:

Canon 5D III ML RAW is 16-bit, Canon R5 is 12-bit RAW. I am surprised nobody has done a comparison between these 2 yet. The colours of the 5D III might still have the edge over the R5 (although resolution, dynamic range and framerates are better now).

I've compared 14-bit vs 12-bit vs 10-bit RAW using ML, and based on the results of my tests I don't feel compelled to even watch a YT video comparing them, let alone do one for myself, even if I had the cameras just sitting there waiting for it.

Have you played with various bit-depths?  12-bit and 14-bit are so similar that it takes some pretty hard pixel-peeping to be able to tell a difference.  There is one, of course, but it's so far into diminishing returns that the ROI line is practically horizontal, unless you were doing some spectacularly vicious processing in post.

I have nothing against people using those modes, but it's a very slight difference.


I always think of it as related to compression TBH, AKA how much you can push colour around and have things still look good.

The C100 with an external recorder, for example, looked way better than internal, cos the colours didn't go all "thin" and insipid if you pushed them about.

Trying to make white balance adjustments on over-compressed footage results in a sort of messy colour wash. I think of this as "thin", and the opposite as "thick".


When grading files in HDR, I can instantly tell when a file is of lower quality. Grading on an 8-bit timeline doesn't really show the difference, but on a 10 or 12-bit HDR timeline on an HDR panel it is night and day. 

So for me, a "thick" image is 10-bit 4:2:2 or better, with at least 14 stops of DR.


4 hours ago, kye said:

I've compared 14-bit vs 12-bit vs 10-bit RAW using ML, and based on the results of my tests I don't feel compelled to even watch a YT video comparing them, let alone do one for myself, even if I had the cameras just sitting there waiting for it.

Have you played with various bit-depths?  12-bit and 14-bit are so similar that it takes some pretty hard pixel-peeping to be able to tell a difference.  There is one, of course, but it's so far into diminishing returns that the ROI line is practically horizontal, unless you were doing some spectacularly vicious processing in post.

I have nothing against people using those modes, but it's a very slight difference.

Yeah, I have seen some comparisons between the 5D III ML RAW 16-bit and 10-bit back in the day, and the difference was indeed not that big. But is it possible that the quality might be better because the source is 16-bit? Not sure if the R5 is capable of 16-bit.

