
Image thickness / density - help me figure out what it is


kye

6 hours ago, kye said:

I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post.

If you are referring to "additive" and "subtractive" colors in the typical imaging sense, I don't think that it applies here.

 

 

6 hours ago, kye said:

 In my bit-depth reductions I added grain to introduce noise to get the effects of dithering:

"Dither is an intentionally applied form of noise used to randomize quantization error,

There are many different types of dithering.  "Noise" dithering (or "random" dithering) is probably the worst type.  One would think that a grain overlay that yields dithering would be random, but I am not sure that is what your grain filter is actually doing.

 

Regardless, introducing the variable of grain/dithering is unnecessary for the comparison, and it is likely what skewed the results.
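As a minimal sketch of the difference (Python, and not what your grain filter actually does), here is plain quantization versus random "noise" dithering on a smooth ramp. Both results land on the same coarse grid of levels; the dither only randomizes which neighbouring level each sample falls on, trading banding for noise.

import numpy as np

def quantize(x, bits):
    # Round values in [0, 1] to 2**bits evenly spaced levels.
    levels = 2.0 ** bits - 1
    return np.round(x * levels) / levels

rng = np.random.default_rng(0)
ramp = np.linspace(0.0, 1.0, 4096)            # stands in for a smooth image row

plain = quantize(ramp, 2.5)                   # hard banding
step = 1.0 / (2.0 ** 2.5 - 1)                 # width of one quantization step
dithered = quantize(ramp + rng.uniform(-0.5, 0.5, ramp.shape) * step, 2.5)

print("unique levels, plain:   ", len(np.unique(plain)))
print("unique levels, dithered:", len(np.unique(dithered)))
# Both counts are the same; dithering does not add in-between values,
# it just scrambles which of the coarse levels each sample gets.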

 

 

6 hours ago, kye said:

That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness as maybe that will help with thickness?

Small film formats have a lot of resolution with normal stocks and normal processing.

 

If you reduce the resolution, you reduce the color depth, so that is probably not wise to do.

 

 

6 hours ago, kye said:

Obviously it's possible that I made a mistake, but I don't think so.

Here's the code:

Too bad there's no mark-up/mark-down for <code> on this web forum.

 

The noise/grain/dithering that was introduced is likely what caused the problem -- not the rounding code.  Also, I think that the images went through a YUV 4:2:0 pipeline at least once.

 

I posted the histograms and waveforms that clearly show that the "4.5-bit" image is mostly an 8-bit image, but you can see for yourself.  Just take your "4.5-bit" image and put it in your NLE and look at the histogram.  Notice that there are spikes with bases that merge, instead of just vertical lines.  That means that the vast majority of the image's pixels fall in between the 22 "rounded 4.5-bit" increments.
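If it helps, here is that check sketched in Python ("4p5bit.jpg" is a hypothetical filename, and the exact level spacing is an assumption since I haven't seen the rounding code):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("4p5bit.jpg").convert("RGB"), dtype=np.float64)

levels = 2.0 ** 4.5 - 1                                 # ~21.6 steps across 0-255
k = np.arange(int(np.ceil(levels)) + 1)                 # possible step indices
grid = np.clip(np.round(k * 255.0 / levels), 0, 255)    # expected 8-bit code values

on_grid = np.isin(img, grid).mean()
print(f"fraction of samples on the 4.5-bit grid: {on_grid:.1%}")
# A cleanly quantized image should be close to 100% on-grid; a large off-grid
# fraction means something (grain, 4:2:0 chroma, JPEG) re-filled the gaps.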

 

 

6 hours ago, kye said:

Also, if I set it to 2.5bits, then this is what I get:

image.thumb.png.c45065a653193019069b696d5719a3d6.png

which looks pretty much what you'd expect.

Yes.  The histogram should show equally spaced vertical lines that represent the increments of the lower bit depth (2.5 bits) contained within a larger bit depth (8 bits).

 

6 hours ago, kye said:

I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data.

The vertical lines in the waveforms merely show the locations where each scan line trace goes abruptly up and down to delineate a pool of a single color.  More gradual and more varied scan line slopes appear with images of a higher bit depth that do not contain large pools of a single color.

 

 

7 hours ago, kye said:

Also, maybe the image gets given new values when it's compressed?  Actually, that sounds like it's quite possible..  hmm.

I checked the histogram of "2.5-bit" image without the added noise/grain/dithering, and it shows the vertical histogram lines as expected.  So, the grain/dithering is the likely culprit.

 

 

7 hours ago, kye said:

I wasn't suggesting that a 4.5bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth.

An unnecessary element (noise/grain/dithering) was added to the "4.5-bit" image that made it a dirty 8-bit image, so we can't really conclude anything from the comparison.  Post the "4.5-bit" image without grain/dithering, and we might get a good indication of how "4.5-bits" actually appears.

 

 

7 hours ago, kye said:

Essentially the test was to go way too far (4.5bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do.

Using extremes to compare dramatically different outcomes is a good testing method, but you have to control your variables and not introduce any elements that skew the results.

 

Please post the "4.5-bit" image without any added artificial elements.

 

Thanks!


9 hours ago, hyalinejim said:

This harks back to deezid's point:

From my investigations film does seem to have much more saturated shadows than what a digital image offers. If you match the saturation of the midtones of digital to film, then the shadows will need a boost to also match... maybe by around 25-50% at the lowest parts. It's a shockingly huge saturation boost in the shadow areas (and the highlights would need to come down in saturation slightly). I'm not talking about log images here, I'm talking contrasty Rec709.

The digital capture is probably closer to being an accurate representation of the level of saturation in reality. But film is transformative. We want our images to look better than reality!

If we talk about memory colours (sky, foliage and skin) the preferences of photographers and middle American shoppers led to altered hue and saturation in Kodak film stocks. So it looks like we prefer skies that are more cyan than in reality, foliage that is cooler and skin that is more uniform, and tending towards tan (Fuji skin tends towards rosy pink).

With 10bit I can get decent, filmic colour out of V-Log! But 8 bit would fall apart.

It does hark back to Deezid's point.  Lots more aspects to investigate here yet.

Interesting about the saturation of shadows - my impression was that film desaturated both shadows and highlights compared to digital, but maybe when people desaturate digital shadows and highlights they always do it overzealously?

We absolutely want our images to be better than reality - the image of the guy in the car doesn't look like reality at all!  One of the things that I see that makes an image 'cinematic' vs realistic is resolution and specifically the lack of it.  If you're shooting with a compressed codec then I think some kind of image softening in post is a good strategy.  I'm yet to systematically experiment with softening the image with blurs, but it's on my list.

8 hours ago, hyalinejim said:

Let me ask a question!

These are ColorChecker patches abstracted from -2, 0 and +2 exposures using film in one case and digital in the other (contrast has been matched). Which colour palette is nicer? Open each in a new tab and flick back and forth.

ONE:
01.jpg.a22929c299bd317a5b73f42bc4041d90.jpg

 

or TWO:
02.jpg.4bc366bcf79cfdc191c12159858261a3.jpg

I'll let others comment on this in order to prevent groupthink, but with what I've recently learned about film, which one is which is pretty obvious.

7 hours ago, jgharding said:

I always think of it as related to compression TBH, AKA how much you can push colour around and have things still look good.

C100 with external recorder for example looked way better than internal, cos the colours didn't go all "thin" and insipid if you pushed it about.

Trying to make white balance adjustments on over-compressed footage results in a sort of messy colour wash. I think of this as "thin", and the opposite as "thick".

When you say 'compression', what are you referring to specifically?  Bit-rate? bit-depth? codec? chroma sub-sampling?

6 hours ago, Geoff CB said:

When grading files in HDR, I can instantly tell when a file is of lower quality. Grading on an 8-bit timeline doesn't really show the difference, but on a 10 or 12-bit HDR timeline on an HDR panel it is night and day.

So for me, a "thick" image is 10-bit 4:2:2 or better with at least 14 stops of DR.

Have you noticed exceptions to your 10-bit 422 14-stops rule where something 'lesser' had unexpected thickness, or where things above that threshold didn't?  If so, do you have any ideas on what might have tipped the balance in those instances?

2 hours ago, tupp said:

If you are referring to "additive" and "subtractive" colors in the typical imaging sense, I don't think that it applies here. [...]

Please post the "4.5-bit" image without any added artificial elements.

Additive vs subtractive colours and mimicking subtractive colours with additive tools may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look.

I did a colour test of the GH5 and BMMCC and I took shots of my face and a colour checker with both cameras, including every colour profile on the GH5.  I then took the rec709 image from the GH5 and graded it to match the BMMCC as well as every other colour profile from the GH5.

In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour.

I highly encourage everyone to take their camera, point it at a colourful scene lit with natural light and take a RAW still image and then a short video clip in their favourite colour profile, and then try to match the RAW still to the colour profile.  We talk about "just doing a conversion to rec709" or "applying the LUT" like it's nothing - it's actually applying a dozen or more finely crafted adjustments created by professional colour scientists.  I have learned an incredible amount by reverse-engineering these things.

It makes sense that the scopes draw lines instead of points, that's also why the vector scope looks like triangles and not points.  One less mystery 🙂 

I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during compression, not by adding the noise.

52 minutes ago, ntblowz said:

This YouTube video is quite interesting on the topic of image thickness from a colourist's view; the general public's vs film makers' expectations are a bit different.

 

 

I saw that.  His comments about preferring what we're used to were interesting too.

Blind testing is a tool that has its uses, and we don't use it nearly enough.


6 hours ago, kye said:

Additive vs subtractive colours and mimicking subtractive colours with additive tools may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look.

I am not sure what you mean.  Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?

 

Keep in mind that there is nothing inherently "subtractive" with "subtractive colors."  Likewise, there is nothing inherently "additive" with "additive colors."

 

 

6 hours ago, kye said:

In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour.

Please explain what you mean.

 

 

6 hours ago, kye said:

It makes sense that the scopes draw lines instead of points, that's also why the vector scope looks like triangles and not points.

Yes, but the histograms are not drawing the expected lines for the "4.5-bit" image nor for the "5-bit" image.  Those images are full 8-bit images.

 

On the other hand, the "2.5-bit" image shows the histogram lines as expected.  Did you do something different when making the "2.5-bit" image?

 

7 hours ago, kye said:

I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during compression, not by adding the noise.

If the culprit is compression, then why is the "2.5-bit" image showing the histogram lines as expected, while the "4.5-bit" and "5-bit" images do not show the histogram lines?

 

Please just post the 8-bit image and the "4.5-bit" image without the noise/grain/dithering.

 

Thanks!


 

12 hours ago, kye said:

When you say 'compression', what are you referring to specifically?  Bit-rate? bit-depth? codec? chroma sub-sampling?

Not really referring to a specific aspect, just the pliability of the image.

Codec implementations are more than the sum of their parts so it's a little bit of a 'war of attrition' to try and pin it down to a number.

All of those things contribute to the pliability of the image. I'm just stating that in general, less-compressed images survive more alteration.

Red's codec is obviously the best here, raw data plus masses of compression and still holds up to abuse. But then the image characteristic of Arri is preferable to me despite ProRes being the most practical codec in there.

If the phrases "thick" and "thin" are this hard to define, perhaps it's better to use different ones when you're trying to communicate the nature of an image. These words seem to be open to interpretation and sort of fail to communicate clearly as a result.


5 hours ago, tupp said:

I am not sure what you mean.  Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?

Keep in mind that there is nothing inherently "subtractive" with "subtractive colors."  Likewise, there is nothing inherently "additive" with "additive colors."

Please explain what you mean.

Perhaps these might provide some background to subtractive vs additive colour science.

https://www.dvinfo.net/article/production/camgear/what-alexa-and-watercolors-have-in-common.html

https://www.dvinfo.net/article/post/making-the-sony-f55-look-filmic-with-resolve-9.html

5 hours ago, tupp said:

Yes, but the histograms are not drawing the expected lines for the "4.5-bit" image nor for the "5-bit" image.  Those images are full 8-bit images.

On the other hand, the "2.5-bit" image shows the histogram lines as expected. Did you do something different when making the "2.5-bit" image?

If the culprit is compression, then why is the "2.5-bit" image showing the histogram lines as expected, while the "4.5-bit" and "5-bit" images do not show the histogram lines?

Please just post the 8-bit image and the "4.5-bit" image without the noise/grain/dithering.

Well, I would have, but I was at work.  I will post them now, and maybe we can all relax a little.

No bit-crunch:
971068691_Bit-depthnonoisefull_5.8.8.thumb.jpg.f4b1aa68994a1f00ce53da92506f2b32.jpg

4.5 bits:
929324924_Bit-depthnonoise4.5_5.8.9.thumb.jpg.73f8c53bcb7a618b458d2609c3671baf.jpg

4.0 bits:
843083815_Bit-depthnonoise4.0_5.8_10.thumb.jpg.729843f8cf4e4f67d745b4f7c6c33a7c.jpg

In terms of your analysis vs mine, my screenshots are all taken prior to the image being compressed to 8-bit jpg, whereas yours was taken after it was compressed to 8-bit jpg.

Note how much the banding is reduced on the jpg above vs how it looks uncompressed (both at 4.0 bits):

image.png.91b8fede4121897fc8cd40a20d400365.png

Here's the 5-bit with the noise to show what it looks like before compression:

image.thumb.png.09aa008e03407e2bbf8128f6fa4dedd2.png

and without the noise applied:

image.thumb.png.7628b9a8851542a7be9930bf42cd1c5f.png
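To illustrate the point about compression re-filling the gaps, here's a small sketch (Python/Pillow, not the actual Resolve export path): hard-quantize a gradient, write it out as a JPEG, and count the unique code values before and after.

import io
import numpy as np
from PIL import Image

ramp = np.tile(np.linspace(0, 255, 1024), (64, 1))
levels = 2 ** 4 - 1                                         # "4-bit" style banding
banded = (np.round(ramp / 255.0 * levels) / levels * 255.0).astype(np.uint8)

buf = io.BytesIO()
Image.fromarray(banded).save(buf, format="JPEG", quality=90)
buf.seek(0)
decoded = np.asarray(Image.open(buf))

print("unique values before JPEG:", len(np.unique(banded)))   # 16
print("unique values after JPEG: ", len(np.unique(decoded)))  # typically far more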

 

28 minutes ago, jgharding said:

Not really referring to a specific aspect, just the pliability of the image.

Codec implementations are more than the sum of their parts so it's a little bit of a 'war of attrition' to try and pin it down to a number.

All of those things contribute to the pliability of the image. I'm just stating that in general, less-compressed images survive more alteration.

Agreed about 'more than the sum of their parts' as it's more like a multiplication - even a 10% loss in each of many aspects multiplies up quickly.

50 minutes ago, jgharding said:

Red's codec is obviously the best here, raw data plus masses of compression and still holds up to abuse. But then the image characteristic of Arri is preferable to me despite ProRes being the most practical codec in there.

Not a fan of ARRIRAW?  I've never really compared them, so wouldn't know.

50 minutes ago, jgharding said:

If the phrase "thick" and "thin" is this hard to define, perhaps it's better to use different ones whe you're to communicate the nature of an image. These words seem to be open to interpretation and sort of fail to communicate clearly as a result.

Indeed, and that's kind of the point.  I'm trying to work out what it is.


There's nothing wrong with ARRIRAW per se, but the files are pretty big, which can be a problem for smaller productions. Redcode goes nice and small.
 

ALEXA SXT Open Gate (3424x2202)
ARRIRAW (.ari): 11.5 MB/frame & 996 GB/hour
ARRIRAW-HDE (.arx): 6.9 MB/frame & 598 GB/hour

ALEXA LF Open Gate (4448x3096)
ARRIRAW (.ari): 20.9 MB/frame & 1.80 TB/hour
ARRIRAW-HDE (.arx): 12.5 MB/frame & 1.08 TB/hour
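For context, those figures appear to assume 24 fps: 11.5 MB/frame × 24 fps × 3600 s is roughly 994 GB/hour, and 20.9 MB/frame × 24 fps × 3600 s is roughly 1.81 TB/hour, which lines up with the rates above.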


22 hours ago, tupp said:

Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?

16 hours ago, kye said:

From the first linked article:

Quote

 

What makes these monitors additive is the fact that those pure hues are blended back together to create the final colors that we see. Even though the base colors are created through a subtractive process, it’s their addition that counts in the end because that’s what reaches our eyes.

Film is different in that there is no part of the process that is truly additive. The creation of the film negative, where dyes are deposited on a transparent substrate, is subtractive, and the projection process, where white light is projected through dyes, is also subtractive. (This section edited for clarity.)

 

So, the first linked article echoed what I said (except I left out that the print itself is also "subtractive" when projected).

 

Is that excerpt from the article (and what I said) what you mean when you refer to "additive" and "subtractive" color?

 

 

Also from the first linked article:

Quote

The difference between subtractive color and additive color is key to differentiating between the classic “film” and “video” looks.

I'm not so sure about this.  I think that this notion could contribute to the film look, but a lot of other things go into that look, such as progressive scan, no rolling shutter, grain actually forming the image, color depth, compressed highlight roll-off (as you mentioned), the brighter tones are less saturated (which I think is mentioned in the second article that you linked), etc.

 

Of all of the elements that give the film "thickness," I would say that color depth, highlight compression, and the lower saturation in the brighter areas would be the most significant.

 

It might be possible to suggest the subtractive nature of a film print merely by separating the color channels and introducing a touch of subtractive overlay on the two appropriate color channels.  A plug-in could be made that does this automatically.  However, I don't know if such effort will make a substantial difference.
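Purely as an illustration of one possible reading of that idea (the approach and the 0.10 strength below are assumptions, not how any actual plug-in does it): treat each RGB channel as a dye layer and let the other two channels attenuate it slightly.

import numpy as np

def subtractive_tint(rgb, strength=0.10):
    # rgb: float array in [0, 1], shape (..., 3). Returns an attenuated copy.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    out = np.empty_like(rgb)
    out[..., 0] = r * (1 - strength * (g + b) / 2)   # the other "layers" absorb a little red
    out[..., 1] = g * (1 - strength * (r + b) / 2)
    out[..., 2] = b * (1 - strength * (r + g) / 2)
    return np.clip(out, 0, 1)

Mixed colors get pulled down more than pure ones, which loosely suggests dye layers absorbing each other's light.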

 

Thank you for posting the images without added grain/noise/dithering.   You only had to post the 8-bit image and the "4.5-bit" image.

 

Unfortunately, most of the pixel values of the "4.5-bit" image still fall in between the 22.6 value increments prescribed  by "4.5-bits."  So, something is wrong somewhere in your imaging pipeline.

 

 

16 hours ago, kye said:

In terms of your analysis vs mine, my screenshots are all taken prior to the image being compressed to 8-bit jpg, whereas yours was taken after it was compressed to 8-bit jpg.

Your NLE's histogram is a single trace, rather than 256 separate columns.  Is there a histogram that shows those 256 columns instead of a single trace?  It's important, because your NLE histograms are showing 22 spikes with a substantial base that is difficult to discern with that single trace.

 

Something might be going wrong during the "rounding" or at the "timeline" phase.


1 hour ago, tupp said:

So, the first linked article echoed what I said (except I left out that the print itself is also "subtractive" when projected).

Is that except from the article (and what I said) what you mean when you refer to "additive" and "subtractive" color?

Ok, now I understand what you were saying.  When you said "a digital monitor 'adds' adjacent pixels" I thought you were talking about pixels in the image somehow being blended together, rather than just that monitors are arrays of R, G, and B lights.

One of the upshots of subtractive colour vs additive colour is that with subtractive colour you get a peak in saturation below the luminance level that saturation peaks at in an additive model.  To compensate for that, colourists and colour scientists and LUT creators often darken saturated colours.

This is one of the things I said that I find almost everywhere I look.  There are other things too.

1 hour ago, tupp said:

Also from the first linked article:

I'm not so sure about this.  I think that this notion could contribute to the film look, but a lot of other things go into that look, such as progressive scan, no rolling shutter, grain actually forming the image, color depth, compressed highlight roll-off (as you mentioned), the brighter tones are less saturated (which I think is mentioned in the second article that you linked), etc.

I'm sure that if you look back you'll find I said that it might contribute to it, and not that it is the only factor.

1 hour ago, tupp said:

Of all of the elements that give the film "thickness," I would say that color depth, highlight compression, and the lower saturation in the brighter areas would be the most significant.

Cool. Let's test these.  The whole point of this thread is to go from "I think" to "I know".

1 hour ago, tupp said:

It might be possible to suggest the subtractive nature of a film print merely by separating the color channels and introducing a touch of subtractive overlay on the two appropriate color channels.  A plug-in could be made that does this automatically.  However, I don't know if such effort will make a substantial difference.

This can be arranged.

1 hour ago, tupp said:

Thank you for posting the images without added grain/noise/dithering.   You only had to post the 8-bit image and the "4.5-bit" image.

Unfortunately, most of the pixel values of the "4.5-bit" image still fall in between the 22.6 value increments prescribed  by "4.5-bits."  So, something is wrong somewhere in your imaging pipeline.

Your NLE's histogram is a single trace, rather than 256 separate columns.  Is there a histogram that shows those 256 columns instead of a single trace?  It's important, because your NLE histograms are showing 22 spikes with a substantial base that is difficult to discern with that single trace.

Something might be going wrong during the "rounding" or at the "timeline" phase.

Scopes make this kind of error all the time.  Curves and right angles never mix: when you're generating a line of best fit with non-zero curve inertia or a non-infinite frequency response, you will get ringing in your curve.

What this means is that if your input data is 0, 0, 0, X, 0, 0, 0 the curve will have non-zero data on either side of the spike.  This article talks about it in the context of image processing, but it applies any time you have a step-change in values.  https://en.wikipedia.org/wiki/Ringing_artifacts
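Here's a tiny demo of that effect (numpy; whether Resolve's histogram trace is actually drawn this way is my assumption): smooth an isolated spike with a kernel that has negative lobes and you get non-zero values on either side of it.

import numpy as np

data = np.zeros(15)
data[7] = 1.0                                                 # the lone spike: 0, 0, 0, X, 0, 0, 0

kernel = np.array([-0.05, 0.0, 0.3, 0.5, 0.3, 0.0, -0.05])    # smoothing kernel with negative lobes
trace = np.convolve(data, kernel, mode="same")

print(np.round(trace, 3))
# The samples around index 7 are non-zero even though the input was zero there,
# and the negative lobes even push some of them below zero - classic ringing.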

There is nothing in my code that would allow for the creation of intermediary values, and I'm seeing visually the right behaviour at lower bit-depths when I look at the image (as shown previously with the 1-bit image quality), so at this point I'm happy to conclude that there are no values in between and that it's a scoping limitation, or the in-between values are being created by the JPG compression process.


1 hour ago, kye said:

One of the upshots of subtractive colour vs additive colour is that with subtractive colour you get a peak in saturation below the luminance level that saturation peaks at in an additive model.

Not all additive color mixing works the same.  Likewise, not all subtractive color mixing works the same.

 

However, you might be correct generally in regards to film vs. digital.

 

 

1 hour ago, kye said:

This can be arranged.

One has to allow for the boosted levels in each emulsion layer that counter the subtractive effects.

 

 

1 hour ago, kye said:

Scopes make this kind of error all the time.

I don't think the scopes are mistaken, but your single-trace histogram makes it difficult to discern what exactly is happening (although close examination of your histogram reveals a lot of pixels where they shouldn't be).  It's best to use a histogram with a column for every value increment.

 

 

1 hour ago, kye said:

What this means is that if your input data is 0, 0, 0, X, 0, 0, 0 the curve will have non-zero data on either side of the spike.  This article talks about it in the context of image processing, but it applies any time you have a step-change in values.  https://en.wikipedia.org/wiki/Ringing_artifacts

I estimate that around 80%-90% of the pixels fall in between the proper bit depth increments -- the problem is too big to be "ringing artifacts."

 

 

1 hour ago, kye said:

There is nothing in my code that would allow for the creation of intermediary values, and I'm seeing visually the right behaviour at lower bit-depths when I look at the image (as shown previously with the 1-bit image quality), so at this point I'm happy to conclude that there are no values in between and that its a scoping limitation, or is being created by the jpg compression process.

There is a significant problem... some variable(s) that is uncontrolled, and the images do not simulate the reduced bit depths.  No conclusions can be drawn until the problem is fixed.


2 hours ago, tupp said:

Not all additive color mixing works the same.  Likewise, not all subtractive color mixing works the same.

However, you might be correct generally in regards to film vs. digital.

I'm still working through it, but I would imagine there is an infinite variety.  Certainly, looking at film emulations, some are quite different to others in what they do to the vector scope and waveform.

2 hours ago, tupp said:

One has to allow for the boosted levels in each emulsion layer that counter the subtractive effects.

What do you mean by this?

2 hours ago, tupp said:

I don't think the scopes are mistaken, but your single-trace histogram makes it difficult to discern what exactly is happening (although close examination of your histogram reveals a lot of pixels where they shouldn't be).  It's best to use a histogram with a column for every value increment.

I estimate that around 80%-90% of the pixels fall in between the proper bit depth increments -- the problem is too big to be "ringing artifacts."

There is a significant problem... some variable(s) that is uncontrolled, and the images do not simulate the reduced bit depths.  No conclusions can be drawn until the problem is fixed.

OK, one last attempt.

Here is a LUT stress test image from truecolour.  It shows smooth gradations across the full colour space and is useful for seeing if there are any artefacts likely to be caused by a LUT or grade.

This is it taken into Resolve and exported out without any effects applied.

561418800_LUTstresstestfull_1.1.1.thumb.jpg.76388ecb615437a13fb6b61cfbcfb560.jpg

This is the LUT image with my plugin set to 1-bit.  This should create only white, red, green, blue, yellow, magenta, cyan, and black.

1656646838_LUTstresstest1-bit_1.1.3.thumb.jpg.1f2547bb7e7c7f0a8a06c8f184801d5b.jpg

This is the LUT image with my plugin set to 2-bits.  This will create more variation.

435899131_LUTstresstest2-bits_1.1.2.thumb.jpg.e89f10de49c5bc6c09b059ca5d140e42.jpg

The thing to look for here is that all the gradual transitions have been replaced by flat areas that transition instantly to another flat area of the next adjacent colour.  

If you whip one of the above images into your software package I would imagine you'd find the compression might have created intermediary colours on the edges of the flat areas. If my processing was creating intermediate colours, they would be visible as separate flat areas, but as you can see, there are none.
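For reference, this is roughly what a bit-crush like mine does per channel (a Python sketch, not my actual plugin code). At 1 bit per channel only the eight corner colours can come out, which is why the 1-bit chart above collapses to black, white, red, green, blue, cyan, magenta and yellow.

import numpy as np

def bit_crush(rgb, bits):
    # rgb: float array in [0, 1]; quantize each channel to 2**bits levels.
    levels = 2.0 ** bits - 1
    return np.round(rgb * levels) / levels

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))                      # stand-in for the LUT stress chart

one_bit = bit_crush(img, 1)
print(np.unique(one_bit.reshape(-1, 3), axis=0))   # exactly the 8 RGB corners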


 

12 hours ago, tupp said:

Of all of the elements that give the film "thickness," I would say that color depth, highlight compression, and the lower saturation in the brighter areas would be the most significant.

Don't forget about shadow saturation! It often gets ignored in talk about highlight rolloff. The Art Adams articles kye posted above are very interesting but he's only concerned with highlight saturation behaviour. Here is a photo taken on film (Kodak Pro Image 100, the same as used in my example above):

1765405836_compare01film.thumb.jpg.312fe8347c6d797ed693795b20a13cfc.jpg

 

Here is the same scene shot as a digital RAW still with Adobe default colour but with contrast matched using RGB curves in ACR. You'll notice that at first glance it's more saturated:

1159767622_compare03addcontrast.thumb.jpg.98e3aab43f778cad2b2497b40b14bf41.jpg

 

Now here is the same digital shot with a LUT added to match the saturation and hues of the midtones. These now look like a good match. But look carefully at how desaturated the shadow areas look. The saturation has been globally lowered and the shadows are looking (dare I say it?)..... thin!

684121309_compare04midtonematch.thumb.jpg.f892196f07e0008b0317918165c44678.jpg

 

Finally, here is the same shot with a tweaked lut that boosts saturation in the shadows but keeps midtone and highlight saturation restrained. Now the shadows have deep blues and it looks more like the film shot. Is this a thicker image compared to the version with default colour? I definitely think it looks nicer.

531920465_compare05matchhimidshad.thumb.jpg.c75f26e021d1a85af5fe4d070e4d98f1.jpg

Again, open all in tabs to notice the difference. Night mode is good too 🙂

 


I definitely 'know' what you mean. When I see a good image, it's obvious. But most modern mirrorless cameras can look awful and pretty good, but never have that 'pop' that 'real' video cameras like Blackmagics, C300s, Alexas, Varicams, etc. have.

But is this the processing of the image? Or the sensor? Or both?

I think it's the processing, because almost all modern mirrorless cameras take damn good photos in raw. And when you edit them in Lightroom the color 'thickness' is definitely there. But in video mode that is different. So the fact that the same sensor can look great and 'meh' at the same time should point to the processing part. But then again, I'm sure an Arri still looks better at 50 Mbps than a GH5 with 10 times the bitrate.

So where / when is the 'secret sauce' introduced? And are manufacturers themselves aware of this? And is that the reason they will never put their top of the line color science / processing in their 'cheap' mirrorless cameras?


Here's a pair of images that illustrates my point about shadow saturation more clearly. This is the same digital still with Adobe colour versus film emulation colour.

 

default:

193600440_compare06ACR.thumb.jpg.f7a4ce7e5564e79ad8b000077f5d8bcf.jpg

 

film emulation:

1047296540_compare07filmsat.thumb.jpg.72c6ea574164f5d2bd20dac391352693.jpg

 

The highlight roll-off is just about observable on the foreheads in the second image compared to the first, and it's very clear that there is a shadow erm..... roll-on?

 


I've certainly been enjoying this discussion. I think that image "thickness" is 90% what is in frame and how it's lit. I think @hyalinejim is right talking about shadow saturation, because "thick" images are usually ones that have deep, rich shadows with only a few bright spots that serve to accentuate how deep the shadows are, rather than show highlight detail. Images like the ones above of the gas station and the faces don't feel thick to me, since they have huge swathes of bright areas, whereas the pictures that @mat33 posted on page 2 have that richness. It's not a matter of reducing exposure, it's that the scene has those beautiful dark tonalities and gradations, along with some nice saturation.

Some other notes:

- My Raw photos tend to end up being processed more linear than Rec709/sRGB, which gives them deeper shadows and thus more thickness.

- Hosing down a scene with water will increase contrast and vividness for a thicker look. Might be worth doing some tests on a hot sunny day, before and after hosing it down.

- Bit depth comes into play, if only slightly. The images @kye posted basically had no difference in the highlights, but in the dark areas banding is very apparent. Lower bit depth hurts shadows because so few bits are allocated to those bottom stops (see the rough numbers after this list). To be clear, I don't think bit depth is the defining feature, nor is compression for that matter.

- I don't believe there is any scene where a typical mirrorless camera with a documented color profile will look significantly less thick than an Alexa given a decent colorist--I think it's 90% the scene, and then mostly color grading.
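As a rough illustration of the bit-allocation point above (assuming a linear encoding): across 7 stops in 8 bits, the brightest stop spans half of all code values (128 of 256), the next stop 64, and the darkest stop only 2, so banding shows up in the shadows first.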


19 hours ago, kye said:

OK, one last attempt.

Here is a LUT stress test image from truecolour.  It shows smooth gradations across the full colour space and is useful for seeing if there are any artefacts likely to be caused by a LUT or grade.

This is it taken into Resolve and exported out without any effects applied.

This is the LUT image with my plugin set to 1-bit.  This should create only white, red, green, blue, yellow, magenta, cyan, and black.

This is the LUT image with my plugin set to 2-bits.  This will create more variation.

Thank you for posting these Truecolor tests, but these images are not relevant to the fact that the "4.5-bit" image that you posted earlier is flawed and is in no way conclusive proof that "4.5-bit" images can closely approximate 8-bit images.

 

On the other hand, your 2-bit Truecolor test indicates that there is a problem in your rounding code and/or your imaging pipeline.

 

2-bit RGB can produce 64 colors, including black, white and two evenly spaced neutral grays.  There seem to be significantly fewer than 64 colors.   Furthermore, some of the adjacent color patches blend with each other in a somewhat irregular way, instead of forming the orderly, clearly defined and well separated pattern of colors that a 2-bit RGB system should produce with that test chart.  In addition, there is only one neutral gray shade rendered, when there should be two different grays.

 

Looking at the histogram of the 2-bit Truecolor image shows three "spikes" when there should be four with a 2-bit image:

2-bit_hist.png.c42b1e836f22ff7ad348d98c692751d3.png

Your 2-bit simulation is actually a 1.5-bit simulation (with other problems).  So, your "rounding" code could have a bug.

 

 

20 hours ago, kye said:

If you whip one of the above images into your software package I would imagine that you'd find the compression might have created intermediary colours on the edges of the flat areas, but if my processing was creating intermediate colours then they would be visible as separate flat areas, but as you can see, there are none.

Well, something is going wrong, and I am not sure if it's compression.  I think that PNG images can be exported without lossy compression, so it might be good to post PNGs from now on, to eliminate that variable.

 

Another thing that would help is if you would stick to the bit depths in question -- 8-bit and "4.5-bit."  All of this bouncing around to various bit depths just further complicates the comparisons.

 


14 hours ago, hyalinejim said:

Don't forget about shadow saturation! It often gets ignored in talk about highlight rolloff.  The Art Adams articles kye posted above are very interesting but he's only concerned with highlight saturation behaviour.

Well, when I listed the film "thickness" property of "lower saturation in the brighter areas," naturally, that means that the lower values have more saturation.

 

I think that one of those linked articles mentioned the tendency that film emulsions generally have more saturation at and below middle values.

 

Thanks for posting the comparisons!

 

 

9 hours ago, KnightsFan said:

I think that image "thickness" is 90% what is in frame and how it's lit.

Then what explains the strong "thickness" of terribly framed and badly lit home movies that were shot on Kodachrome 64?

 

 

 

9 hours ago, KnightsFan said:

- Bit depth comes into play, if only slightly. The images @kye posted basically had no difference in the highlights, but in the dark areas banding is very apparent.

Unfortunately, @kye's images are significantly flawed, and they do not actually simulate the claimed bit-depths.  No conclusions can be made from them.

 

By the way, bit depth is not color depth.


15 hours ago, hyalinejim said:

Here's a pair of images that illustrates my point about shadow saturation more clearly. This is the same digital still with Adobe colour versus film emulation colour.

 

default:

193600440_compare06ACR.thumb.jpg.f7a4ce7e5564e79ad8b000077f5d8bcf.jpg

 

film emulation:

1047296540_compare07filmsat.thumb.jpg.72c6ea574164f5d2bd20dac391352693.jpg

 

The highlight roll-off is just about observable on the foreheads in the second image compared to the first, and it's very clear that there is a shadow erm..... roll-on?

 

Is this a LUT, or could you just simulate this by adding saturation in the shadows in Resolve? It looks like there's a lot more going on, but I'm not sure.


5 hours ago, tupp said:

Then what explains the strong "thickness" of terribly framed and badly lit home movies that were shot on Kodachrome 64?

Got some examples? Because I generally don't see those typical home videos as having thick images.

5 hours ago, tupp said:

Unfortunately, @kye's images are significantly flawed, and they do not actually simulate the claimed bit-depths.  No conclusions can be made from them.

They're pretty close; I don't really care if there's dithering or compression adding in-between values. You can clearly see the banding, and my point is that while banding is ugly, it isn't the primary factor in thickness.


44 minutes ago, zerocool22 said:

Is this a LUT

Yes, it's this transformation in action, as a LUT:

So there are hue transforms going on as well as saturation transforms. But the saturation aspect of it you could totally do in Resolve. Art Adams came up with this for matching F55 to Alexa:

artadamsLum-vs-sat.png

 

And my point is that to do something similar for digital to film, the leftmost point on that curve should be raised to boost the shadows. But I don't know if that curve is Log to Log or whatever, in which case it might be right. I think Rec709 to Rec709 it might possibly need to be more like this:

Capture.JPG.8e3cf89b92c5722a87542b14fc3b2422.JPG

But I haven't tested it extensively, other than to notice that the results of my tinkering weren't as nice as the LUT (because the hue changes are important too). So that adjustment is just a guess off the top of my head and not based on testing how it looks. But you get the general idea... it's not just a highlight roll-off, it's a more or less constant change throughout the range.
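To make the idea concrete, here's a rough Python sketch of that kind of luma-dependent saturation adjustment (the breakpoints are guesses like the curve above, not my actual LUT): boost saturation in the shadows, leave the midtones roughly alone, and pull the highlights down slightly.

import numpy as np

def shadow_sat_boost(rgb):
    # rgb: float array in [0, 1], shape (..., 3), display-referred (Rec.709-ish).
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    # Saturation gain as a function of luma: ~1.4x in deep shadows down to ~0.9x in highlights.
    gain = np.interp(luma, [0.0, 0.2, 0.5, 1.0], [1.4, 1.25, 1.0, 0.9])
    # Scale chroma (distance from the neutral axis) by the gain.
    return np.clip(luma[..., None] + (rgb - luma[..., None]) * gain[..., None], 0.0, 1.0)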

