
4K 8bit to 2K 10bit - Let's get to the bottom of this please!


Guest Ebrahim Saadawi


Such an informative discussion. Thank you all for contributing. So the conclusion is that converting 4K 4:2:0 8-bit files to 1080p gives us 4:4:4 files with 10-bit luma and 8-bit chroma. However, the chroma banding effects are reduced because of the fine noise dithering and the low-pass filter effect.

The trick of adding noise to eliminate banding is awesome. I'd heard of it before but never thought it would work this well! This is something I am going to be using a lot on all my footage. Thanks!

 

Nope, it is not 10-bit luma from the scene and therefore not really relevant. Noise is inherent in all cameras, film and digital, and I wouldn't advise adding more just to remove artifacts. Banding is very specific to symmetrical lights or shadows, and it's usually only an issue with computer-generated images. The much bigger problem you face with your footage is the 8-bit 4:2:0 artifacts. They will not go away by adding noise. They might become less visible, but at the expense of your footage; that's really taking out the sledgehammer. The reason I did a little zoom window in that animation is so that you can see what 8-bit 4:2:0 looks like up close vs 10-bit 4:2:2. That is your enemy, not banding.

 

There have been a few red herrings thrown out here: 1) 8-bit monitors, 2) adding noise to remove banding, and 3) the fact that you only get "true 10-bit" in the luma channel, which implies that the concept would work if you could just get a few more bits, for example by starting with 8K. This is not the case.

 

All you accomplish by reformatting 4K to 2K is anti-aliasing and shrinking the image; there is no bit-depth conversion akin to capturing in 10-bit or greater.



8-bit monitors are relevant as that's what most people use. How can people see the benefits of >8-bit data on an 8-bit display? Dithering.

If we dither 10-bit (or higher) images before truncation to 8-bit, we can reduce banding more effectively than if we apply dither after truncation to 8-bit (more dithering is required, degrading the image).
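
Just to make that concrete, here's a rough numpy sketch (synthetic ramp, illustrative noise amounts, not from any camera): a local average over a dithered-then-truncated gradient tracks the original 10-bit values far better than either plain truncation or noise added after truncation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A very gradual 10-bit gradient (a sky-like ramp spanning only a few 8-bit steps).
ramp10 = np.linspace(400.0, 410.0, 8000)

# 1) Straight truncation to 8-bit: only ~3 output levels -> wide, visible bands.
trunc8 = np.clip(np.round(ramp10 / 4.0), 0, 255).astype(np.uint8)

# 2) Dither BEFORE truncation: +-0.5 LSB of uniform noise added to the 10-bit data first.
pre8 = np.clip(np.round((ramp10 + rng.uniform(-2.0, 2.0, ramp10.shape)) / 4.0), 0, 255).astype(np.uint8)

# 3) Dither AFTER truncation: the sub-step detail is already gone.
post8 = np.clip(trunc8.astype(int) + rng.integers(-1, 2, trunc8.shape), 0, 255).astype(np.uint8)

# How well does a local average (roughly what the eye does) recover the original ramp?
def err(x8):
    est = np.convolve(x8.astype(float) * 4.0, np.ones(64) / 64.0, mode="same")
    return np.abs(est[64:-64] - ramp10[64:-64]).mean()

print("truncate only:", round(err(trunc8), 2))
print("dither before:", round(err(pre8), 2))   # clearly lower
print("dither after :", round(err(post8), 2))  # not much better than plain truncation
```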

 

4K to 2K "10-bit luma" doesn't do anything significant by itself. The fine-grained noise from 4K, and perhaps the scaling of macroblocks, helps improve gradations, as does the 2x2 scale reduction itself, slightly.

 

Adding noise to the entire image isn't the best way to reduce banding/artifacts. Algorithms which selectively apply dither only to the areas where it's needed are more optimal. Examples of these algorithms and sample images are here: http://en.wikipedia.org/wiki/Dithering . To use those algorithms we need the original, >8-bit data, for error-diffusion calculations, for example. Noise is easy to apply and fast to test, so it's worth a shot when there are no other options.
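
If anyone wants to see what error diffusion actually does, here is a deliberately simplified 1-D sketch (the real Floyd-Steinberg algorithm described on that Wikipedia page works in 2-D with fixed weights; this only shows the carry-the-error-forward idea):

```python
import numpy as np

def error_diffuse_to_8bit(line10):
    """Quantize a 1-D line of 10-bit values to 8-bit, carrying each pixel's
    quantization error into the next pixel. Simplified 1-D version of the idea;
    Floyd-Steinberg spreads the error over 2-D neighbours with weights
    7/16, 3/16, 5/16 and 1/16 instead."""
    out = np.empty(line10.shape, dtype=np.uint8)
    carry = 0.0
    for i, v in enumerate(line10.astype(float)):
        target = (v + carry) / 4.0                 # 10-bit -> 8-bit scale
        q = int(np.clip(round(target), 0, 255))
        out[i] = q
        carry = (target - q) * 4.0                 # residual, back in 10-bit units
    return out

# A gentle 10-bit gradient that spans only ~2 eight-bit steps.
line = np.linspace(500.0, 508.0, 2000)
print(np.unique(error_diffuse_to_8bit(line)))      # the in-between tones show up as a dither pattern
```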


jcs-

 

Yes, but, as per the title of this thread, people are confused as to whether or not you can convert 4K 8-bit 4:2:0 to 2K 10-bit 4:4:4 and get the grading latitude and smoothness of detail that 10-bit acquisition can give. They "want to get to the bottom of it", which I appreciate. By bringing up adding noise to reduce banding, algorithms for dither, the limitation of what we see on 8-bit monitors (which makes it seem irrelevant to want anything higher than 8-bit unless you're viewing on a 10-bit monitor) and the fact that the chroma channels don't add up to 10-bit 4:4:4, although I don't doubt you mean well, you've helped to confuse the issue even more. Andrew is now posting an article on how to "cure banding", almost within minutes of learning that the technique exists.

 

We should really just stay focused on the flaws in the bits-to-pixels = depth workflow. I think you understand its fundamental flaw: it does not represent the scene more accurately or add stops that you can push before you see artifacts.

 

Can we just agree that all the conversion from 4k to 2k does is anti-alias and shrink?

 

Then the discussion about dithering algos and 8bit monitors can be an aside. 

 

I have my thoughts about all that but I don't want to get off topic. 


With a captured image, it is true that resolution can be sacrificed for increased bit depth -- folks have been doing it for years.   On the other hand, the color depth of a captured image can never be increased (unless something artificial is introduced).

It is important to keep in mind that bit depth and color depth are not the same property.

The relationship of bit depth, resolution and color depth is fundamental to digital imaging.  With all other variables remaining the same, the association between bit depth, resolution and color depth in an RGB system is:

COLOR DEPTH = (BIT DEPTH x RESOLUTION)^3

So, if two adjacent RGB pixel groups are cleanly summed/binned into one large RGB pixel group, the bit depth increases. However, the color depth will remain the same or be slightly reduced (depending on the efficiency of the conversion). The color depth can never increase.

As others in this thread have suggested, there appears to be something else going on other than normal banding in the OP's first image.

Even so, if banding exists in an image, increasing the bit depth at the cost of resolution will not eliminate the banding, unless the banding is only a few pixels wide (or unless the resolution becomes so coarse that the image is no longer discernible). Again, when swapping resolution for more bit depth, the bit depth has been increased, but not the color depth.
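
To put numbers on that distinction, here is a small numpy illustration (my own synthetic example): summing 2x2 blocks of an 8-bit image produces values that need a 10-bit container, but a clean, noise-free patch still only lands on 256 distinct levels. The in-between levels only appear when the four source pixels already differ (e.g., because of noise), which is information about the noise, not about the scene.

```python
import numpy as np

rng = np.random.default_rng(1)

def bin2x2(img8):
    """Sum 2x2 blocks of an 8-bit image; results range 0..1020, i.e. a 10-bit container."""
    i = img8.astype(np.uint16)
    return i[0::2, 0::2] + i[0::2, 1::2] + i[1::2, 0::2] + i[1::2, 1::2]

# A perfectly clean 8-bit gradient: every 2x2 block holds one repeated code value.
clean = np.repeat(np.repeat(np.arange(256, dtype=np.uint8)[None, :], 2, axis=0), 2, axis=1)
print(len(np.unique(bin2x2(clean))))      # 256: a 10-bit range, but still 8 bits' worth of levels

# The same gradient with fine per-pixel noise: the block sums now land between the steps.
noisy = np.clip(clean.astype(int) + rng.integers(-2, 3, clean.shape), 0, 255).astype(np.uint8)
print(len(np.unique(bin2x2(noisy))))      # several times more distinct levels
```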
 


Sunyata- no one was able to come up with an example image showing that so-called 10-bit luma from 4K to 2K did anything significant. Andrew's Mercedes example was due to scale and viewing size; the example I found (link above) appeared to be due to compression. Since most people have 8-bit displays, the only way to see the benefit of 10-bit material is either via noise/texture during 10-bit acquisition before 8-bit quantization, or via dithering before 8-bit quantization: this provides the best results for seeing the advantage of >8-bit material on 8-bit displays (temporal dithering is also possible but requires displays that accept 10-bit signals). As the examples I posted show, one can dither after quantization, which helps, but it is not as good as adding dither to 10+ bit material before quantization.

Folks wanted to understand how 8-bit 4K material could look so good downscaled to 2K, especially for gradations. The answer is the 2x2 downscaling, fine 4K noise grain, and macroblock scaling, not anything significant about 10-bit luma. If 10-bit luma were the reason for the improvement, we wouldn't be able to see such an effect on an 8-bit display. Effective dithering is the only way to view 10 or more bits on an 8-bit display. Tone mapping can also help, but it changes the look and is currently expensive to compute (local adaptation).


jcs-

 

When you say "nobody was able to come up with an example", you must be referring to another post that I missed, because I keep seeing this as an open debate. That's great if it was tested and debunked, but I haven't gotten that, especially from the title of this thread.

 

I'll address your comment about 8-bit monitors. I work on an 8-bit monitor (hopefully I'll change that soon), but the jobs I work on are delivered at 10-bit or higher, and the footage is also 10-bit or higher. I can see the details of the higher-bit source material just fine. If I'm doing a luma matte and I push the white point and black point so that they practically touch, I need the extra dynamic range. It also allows more stops in a color correction, it helps with tracking, pretty much everything. As I'm making adjustments I can see the changes happen fine; the fact that you work on an 8-bit monitor does not inhibit your ability to deliver a 10-bit+ job. It's really only in the blacks that you're sometimes flying blind; what is brighter than 255 white is usually not an issue.

 

A couple of times there have been gotchas in the very low end that I couldn't see on my monitor, but you learn how to avoid that. It is almost always with computer-generated elements, whether 3D elements or 2D effects, and in that case film grain is helpful, but so are motion blur and just avoiding symmetry in general.


sunyata- it is now hopefully clear that the 2x2 pixel summing/averaging from 4K to 2K takes the fine noise in the 4K image and lets it act like dither before downsampling to 2K. The "10-bit" luma won't have any significant extra latitude in post as with actual 10-bit (or more) sensor acquisition; however, banding can be reduced and color tonality can be better preserved vs. 1080p in-camera. In cases where banding is apparent in 4K, it's probably better to add extra noise to the 4K version before scaling to 2K. In Premiere this can be done by nesting the 4K footage, applying noise at 4K, then adding the nested clip to a 1080p sequence with Scale to Frame Size selected.
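
Outside Premiere, the same noise-then-downscale idea is easy to prototype; a rough numpy sketch with a synthetic frame and an illustrative noise amount (not a recommendation for any particular setting):

```python
import numpy as np

rng = np.random.default_rng(2)

def noise_then_downscale(frame4k_8bit, noise_sigma=1.0):
    """Add fine grain at 4K, then 2x2-average down to 2K in floating point.
    The result stays on a 0..255 scale but keeps sub-8-bit steps, so it should
    go into a 10-bit (or higher) intermediate rather than straight back to 8-bit."""
    f = frame4k_8bit.astype(np.float32)
    f += rng.normal(0.0, noise_sigma, f.shape)                 # fine noise applied at 4K
    return 0.25 * (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2])

# Synthetic "4K" frame with heavy banding: a gradient that only hits 4 distinct 8-bit codes.
frame = np.tile(np.round(np.linspace(100, 103, 3840)).astype(np.uint8), (2160, 1))
out2k = noise_then_downscale(frame)
print("input levels:", len(np.unique(frame)))
print("distinct 2K output values:", len(np.unique(np.round(out2k, 2))))
```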


Okay JCS-- I'm not sure it is to everyone, but I approve this message. Sorry, I've been getting a lot of robocalls lately.

 

And Tupp, very clearly stated too. That's 3 people, 3 people can't be wrong. 

 

But don't get too excited about the fine noise part.. 


This source image is pretty hardcore, since the banding has been exaggerated on purpose, but I'm up for a challenge.

 

Here I've used Neat Video in 32bit and some 4K grain, then exported at 2K. It's a high bit-depth TIFF. Obviously if you go to H264, banding appears again.

 

Ignore the google drive preview, it will look hideous.

 

https://dl.dropboxusercontent.com/u/9922139/Debanding%204k%20to%202K_00000.tif

 

This really minimises banding. If this were a more real-world test, as opposed to fixing something that has been sabotaged for test purposes, the banding wouldn't be as extreme and the result would be even cleaner.

 

My film grain made the borders go grey, sorry bout that ;)

 

Life's too short to transcode, downsample, blah blah... you don't gain anything. Work at 4K in 32-bit all the way through, then just export at 2K or 1080p at maximum bit depth. The result will be great, and you'll get to sleep at night ;) You can't add what isn't there to the source, but you can treat the source well at every stage and get a great result, while 8-bit still haunts this end of the market.


I wonder if anybody sees the writing on the wall when someone praises the advantages of shrinking a 4K 8-bit image to HD for an 8-bit player/display.

 

What we would expect when watching an HD image with 10-bit color depth on a 10-bit HD monitor is not so much *better colors*, but rather *fewer artifacts*.

 

Therefore, if we watched a 4K 8-bit 4:2:0 image on a 4K monitor (no matter whether 8-bit or 10-bit), we'd see way more banding. Banding, again, is just the highly visible result of having too few values to spread a soft gradation evenly over a big space. It's the tip of the iceberg, indicating that there is insufficient color depth and only a quarter of the color resolution.

 

One mustn't ignore relative size. The higher the spatial luma resolution, the more color information you need. Remember: digital cinema once was 1K (e.g. Star Wars Ep. I), it was just enlarged. You couldn't appreciate individual sand grains on Tatooine, but it still would look better than 4K 10-bit 4:2:0, side by side.

 

In other words: Yes, 4k may look better than 8-bit HD, but 10-bit HD may very well look better than 8-bit 4k.

 

I'd like to add this to the 4k hype to stress the importance of having better color depth, unless you are content with '4k for better HD'.


True, but have you actually tested what happens when you change your source from 10-bit to 8-bit? I exported a JPEG sequence (at 5K, though) from RED raw and we did a test at our workplace. They had done a composite straight from the original RED files (greenscreen etc.) and I swapped in the 8-bit JPEG files. After a bit of gamma tweaking, the difference was... almost none. Negligible. It was funny.


I wonder if anybody sees the writing on the wall when someone praises the advantages of shrinking a 4K 8-bit image to HD for an 8-bit player/display.

 

What we would expect when watching an HD image with 10-bit color depth on a 10-bit HD monitor is not so much *better colors*, but rather *fewer artifacts*.

 

Therefore, if we watched a 4K 8-bit 4:2:0 image on a 4K monitor (no matter whether 8-bit or 10-bit), we'd see way more banding. Banding, again, is just the highly visible result of having too few values to spread a soft gradation evenly over a big space. It's the tip of the iceberg, indicating that there is insufficient color depth and only a quarter of the color resolution.

 

One mustn't ignore relative size. The higher the spatial luma resolution, the more color information you need. Remember: digital cinema once was 1K (e.g. Star Wars Ep. I), it was just enlarged. You couldn't appreciate individual sand grains on Tatooine, but it still would look better than 4K 10-bit 4:2:0, side by side.

 

In other words: Yes, 4k may look better than 8-bit HD, but 10-bit HD may very well look better than 8-bit 4k.

 

I'd like to add this to the 4k hype to stress the importance of having better color depth, unless you are content with '4k for better HD'.

 

Yep, you're right, 2k cineon scans looked great. Better than most digital does today. 


True, but have you actually tested what happens when you change your source from 10-bit to 8-bit? I exported a JPEG sequence (at 5K, though) from RED raw and we did a test at our workplace. They had done a composite straight from the original RED files (greenscreen etc.) and I swapped in the 8-bit JPEG files. After a bit of gamma tweaking, the difference was... almost none. Negligible. It was funny.

 

As Andrew wrote, there is also compression to consider. Actually, the sensors in our 8-bit cameras all capture raw data. Afterwards the camera has to map all those values anew, which is called, if I recall correctly, quantization. It's more or less applying a curve that transforms all the values needed to represent the same image, with the same dynamic range, into 256 (or fewer) luma steps, and the same for color. With different picture profiles you will get more or less banding in a sky (if there is a preset like 'landscape'), but you might lose values in skin tones ('portrait') or in the dark areas ('night').
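
A toy illustration of that mapping (completely made-up curves, not any manufacturer's actual picture profiles): the same scene values get squeezed into 256 codes differently depending on the curve, so each profile keeps more distinct steps in some parts of the range and fewer in others.

```python
import numpy as np

scene = np.linspace(0.0, 1.0, 100000)          # idealized scene luminance, 0..1

def quantize(curve, x):
    """Apply a transfer curve, then quantize to 8-bit (256 codes)."""
    return np.clip(np.round(curve(x) * 255.0), 0, 255).astype(np.uint8)

gamma_like = lambda x: x ** (1 / 2.2)          # generic display gamma
sky_biased = lambda x: x ** (1 / 1.6)          # hypothetical 'landscape'-ish curve: more codes up top
shadow_biased = lambda x: x ** (1 / 3.0)       # hypothetical 'night'-ish curve: spends codes on the darks

for name, curve in [("gamma 2.2", gamma_like), ("highlight-biased", sky_biased), ("shadow-biased", shadow_biased)]:
    codes = quantize(curve, scene)
    highlights = len(np.unique(codes[scene > 0.7]))   # distinct codes spent on the top of the range
    shadows = len(np.unique(codes[scene < 0.1]))      # distinct codes spent on the bottom
    print(f"{name:18s} highlights: {highlights:3d}  shadows: {shadows:3d}")
```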

 

Grading for 8-bit means respecting what's there and what's not there. If it is there, because there were sufficient values or because you recorded to a better codec in the first place, you can go ahead and reduce the information further (which is what grading is about, in a way).

 

It is better to try to get most things right in-camera if you deal with 8-bit. Over the last two days, my video buddy and I had the chance to borrow a C300 (for free) from Canon. It records 8-bit 4:2:2, and from what we saw it's real 1920, as close as you can get to the full resolution. I can't believe the image could look any better with downscaled 4K.

 

To overcome the 8-bit limitation for grading, it has a 'cinema lock' feature, meaning Canon Log. It's not as flat as, say, the Technicolor picture style; it almost looks okay as it is, you might just say it's too neutral.

 

We put the footage into Resolve. What we found was: it's not raw. You have to expose correctly, and the white balance should be right too. Other than that, it holds up well. We had a lot of skies, but apparently no banding.

 

My conclusion: You can do well with 8-bit for 8-bit, it's just not as 'forgiving'.


Thanks to John for pointing me here, it is an interesting discussion. :)

 

As one of the people who think there is a free tonal-precision lunch to be had in the 4K to 2K downscale, and the one who wrote the ShutterAngle article linked on the previous page, I think the 8-bit display argument is a bit beside the point. The whole idea of shooting 4K for 2K (for me, at least) is in using a flat profile such as S-Log2 on the A7S, then working it in post to the appropriate contrast.

 

As my idea of a good looking image is generally inspired by film and includes strong, fat mids and nice contrast, this means the source image is gonna take quite the beating before getting to the place I want it. And here is where the increased precision is going to help. To simplify it a bit: when you stretch an 8-bit image on an 8-bit display, you are effectively looking at, say, a 6-7 bit image on an 8-bit display, depending on how flat the source image is. That's why starting with more precision is helpful. Starting with 20-25 values in the mids (which is the case with 8-bit S-Log2) is just not gonna handle it when you are aiming at, say, 60-65 values there in delivery.
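
A quick way to see that in numbers, as a rough sketch (the 'flat profile' below is a made-up stand-in, not Sony's actual S-Log2 curve):

```python
import numpy as np

# Made-up flat-profile mids: a smooth gradient squeezed into a narrow band of 8-bit codes.
flat8 = np.round(np.linspace(96, 132, 100000)).astype(np.uint8)
print("distinct codes in the flat mids:", len(np.unique(flat8)))        # 37

# The grade: stretch those mids out to a contrasty delivery range (roughly 5x).
stretched = np.clip((flat8.astype(float) - 96) * 5.0 + 40, 0, 255).astype(np.uint8)
print("distinct codes after stretching:", len(np.unique(stretched)))    # still 37, now 5 codes apart
```

The stretched image spans the full delivery contrast, but only those 37 codes are actually populated, which is exactly where the banding comes from; starting from higher precision (or a dithered downscale) fills in the gaps.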

 

Compression and dirty quantization to begin with surely affect the result and limit the precision gains. But they don't entirely cancel them, and the better the codec you use on the HDMI feed, the cleaner the downscale.


Hey cpc- nice write-up on your blog! While the 4K to 2K effect might be minor in terms of grading latitude, using dither can help with banding and blocking reduction. It appears the fine noise grain in the 4K footage is mostly responsible for any gains when going from 4K to 2K vs. 2K native capture.

 

The dithering discussion raises the question of how NLEs convert 32-bit to 8-bit for final rendering and delivery: do they apply dither? It would also be nice to be able to control the dither method and amount. The quick & dirty solution is to apply noise or film grain; however, targeted error diffusion would be more optimal and more likely to survive H.264 compression.
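
For what it's worth, the kind of dither-at-the-output step I have in mind would only be a few lines; a hypothetical sketch (the function name and the +-1 LSB triangular noise amount are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def render_to_8bit(frame32, dither=True):
    """Quantize a 32-bit float frame (values 0..1) to 8-bit output.
    With dither=True, +-1 LSB of triangular-PDF noise is added before rounding,
    trading a hard banding edge for a fine, mostly invisible noise pattern."""
    x = frame32.astype(np.float64) * 255.0
    if dither:
        x = x + (rng.random(x.shape) - rng.random(x.shape))   # TPDF noise in [-1, 1]
    return np.clip(np.round(x), 0, 255).astype(np.uint8)

# A float gradient spanning barely two 8-bit steps: a worst case for banding.
grad = np.linspace(0.500, 0.505, 50000, dtype=np.float32)
print(np.unique(render_to_8bit(grad, dither=False)))   # two hard bands
print(np.unique(render_to_8bit(grad, dither=True)))    # transitions spread across a few codes
```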


Thanks to John for pointing me here, it is an interesting discussion. :)

 

As one of the people who think there is a free tonal-precision lunch to be had in the 4K to 2K downscale, and the one who wrote the ShutterAngle article linked on the previous page, I think the 8-bit display argument is a bit beside the point. The whole idea of shooting 4K for 2K (for me, at least) is in using a flat profile such as S-Log2 on the A7S, then working it in post to the appropriate contrast.

 

As my idea of a good looking image is generally inspired by film and includes strong, fat mids and nice contrast, this means the source image is gonna take quite the beating before getting to the place I want it. And here is where the increased precision is going to help. To simplify it a bit: when you stretch an 8-bit image on an 8-bit display, you are effectively looking at, say, a 6-7 bit image on an 8-bit display, depending on how flat the source image is. That's why starting with more precision is helpful. Starting with 20-25 values in the mids (which is the case with 8-bit S-Log2) is just not gonna handle it when you are aiming at, say, 60-65 values there in delivery.

 

Compression and dirty quantization to begin with surely affect the result and limit the precision gains. But they don't entirely cancel them, and the better the codec you use on the HDMI feed, the cleaner the downscale.

 

 

So if your idea of good looking is film, and even the digital camera companies use that as a benchmark, people need to remember that film is scanned for a digital intermediate at 10-bit with a log-encoded gamma, to preserve the density information of the film negative and so that it can be reprinted without any loss. When working on 10-bit log, converted to linear so that it looks correct on your screen, you can bring out the details you want in post in a non-destructive way.

 

So, going back to the original theme of this post: is 4K 8-bit 4:2:0 to 2K 10-bit 4:4:4 really going to yield "true 10-bit 2K"? The answer is a clear no, and if you are someone who wants to shoot flat and have latitude in post, you should probably consider shelling out for the Shogun if you use a GH4 and capture 10-bit.


Not sure about that, John. The native fine noise in the 4K image is probably mostly killed by compression, just as the downsampled grain in an in-camera 2K image is killed by compression.

I don't think NLEs dither; Resolve doesn't seem to do it. But the scaling algorithms are surely involved, and this affects the output because quite a lot of source pixels are sampled per output pixel (a generic cubic spline filter would sample 16 source pixels).

 

@sunyata: it is not true 10-bit, and in your example the chroma is still 8-bit (with no subsampling, though), but it is surely better than a 2K 8-bit subsampled source from the camera, even if not as good as a true 10-bit source.


cpc- I've sharpened the 4K GH4 footage in post and the noise grain is very fine. There are smeared areas and macroblock artifacts in some places, though overall, when there's not a lot of motion, the noise grain is impressively small, especially compared to my 5D3 (RAW) or FS700 (AVCHD).

 

Premiere uses a form of Lanczos and Bicubic for scaling ( http://blogs.adobe.com/premierepro/2010/10/scaling-in-premiere-pro-cs5.html ). Whatever they are doing appears to work reasonably well. In terms of the 10-bit luma debate, if the 8-bit 4K footage was effectively dithered, either via error diffusion or simply noise, then resampling a 4x4 grid (Bicubic) could in a sense reverse the error diffusion and put some bits back into luma. Intuitively it doesn't seem like it would buy a lot of latitude vs. native 10-bit sampling; however, a little bit can be helpful. Adding error diffusion / noise certainly helps reduce the appearance of banding/blocking. Ideally the dither/noise would only be added where it's needed. Without significant dithering of some form or another, I don't see how 4K to 2K could do much for the '10-bit luma' argument, since we need variance among the 4 source pixels to spread the values of the summed/averaged final pixels around.
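
To make the 'spread the values around' part concrete, a simple sketch using plain 2x2 averaging rather than Premiere's actual Lanczos/Bicubic kernels (synthetic gradient, illustrative dither amount):

```python
import numpy as np

rng = np.random.default_rng(3)

# The "true" scene: a gentle gradient that needs finer steps than 8-bit provides.
true = np.tile(np.linspace(100.0, 101.5, 3840), (4, 1))        # a few rows of a 4K-wide ramp

# Two 8-bit 4K captures: plain rounding vs. +-0.5 LSB of dither before rounding.
plain8 = np.clip(np.round(true), 0, 255)                        # kept as float to avoid uint8 overflow below
dith8 = np.clip(np.round(true + rng.uniform(-0.5, 0.5, true.shape)), 0, 255)

def down2x2(x):
    return 0.25 * (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2])

true2k = down2x2(true)
for name, src in [("plain 8-bit 4K   ", plain8), ("dithered 8-bit 4K", dith8)]:
    gap = np.abs(down2x2(src) - true2k).mean()
    print(name, "-> mean error vs. the ideal 2K ramp:", round(gap, 3), "8-bit steps")
```

The dithered version lands a little closer to the ideal ramp, but only modestly so, which squares with the intuition above that this is not a substitute for native 10-bit sampling.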


jcs-

 

Random noise and dither are not going to "kind of reverse the error diffusion and put some bits back into luma" anywhere near the color depth of the actual scene recorded at 10-bit. I'm not even sure it kind of helps, because it creates other problems, like artificial noise.

 

As a side note: when a light's drop-off radius moves and there is a quality issue where the noise technique has been used, even with super-grainy film you can spot the clean arcs of the banding through the noise. This is an example of the visual system you reference: we ignore the noise and see the symmetrical patterns when there is motion. It's similar to the fixed-pattern-noise tests that have been posted on Vimeo.

 

Unfortunately there is no free lunch if you want to get rid of these types of artifacts: you've got to go to a higher bit depth and lower compression.

