The cure for 8-bit?


kye

We all know that 8-bit video capture formats don't generate the best results.

This is especially bad when combined with flat / log profiles, and worse still when those profiles are calibrated to fit more DR than the camera can actually deliver.

The traditional solution is to record more bits, but 8-bit values are the default unit in computing, so going beyond them can be a challenge.

So, here's an idea.

Instead of encoding the image in 8 bits but using no values below, say, 40% and none above, say, 90% (which is really just 7 bits of useful range), how about stretching each frame's whole DR over the full 8-bit range and storing the values of that frame's brightest and darkest pixels alongside it?

It would mean every frame uses the full 8 bits, and scenes with less DR would get more subtle colour information. Every stop less of DR would effectively be worth an extra bit.

It would also mean that when a sensor is invented with more DR than the codec was designed for, it would scale without hassle.

It would add a small processing overhead and raise the data rates of more compressed codecs, but codecs where every frame is a key frame wouldn't be impacted.
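In code terms, the idea is just per-frame min/max normalisation before quantising. Here's a rough Python sketch (the function names and the sidecar-metadata scheme are illustrative, not any real codec's design):

```python
import numpy as np

def encode_frame(frame: np.ndarray):
    """Stretch this frame's actual range over the full 8-bit scale and
    keep the original black/white points as sidecar metadata."""
    lo, hi = float(frame.min()), float(frame.max())
    if hi == lo:                        # flat frame: avoid divide-by-zero
        return np.zeros(frame.shape, np.uint8), (lo, hi)
    scaled = (frame - lo) / (hi - lo)   # 0..1 across this frame's DR
    return np.round(scaled * 255).astype(np.uint8), (lo, hi)

def decode_frame(codes: np.ndarray, meta):
    """Invert the stretch so every shot lands back on a common scale
    before grading -- without this step, shots wouldn't match."""
    lo, hi = meta
    return codes.astype(np.float32) / 255.0 * (hi - lo) + lo
```

A low-DR frame spanning, say, 0.3 to 0.6 of the sensor's range would get all 256 codes across that narrow span, where a fixed encoding would only have used around 77 of them.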

If anyone knows any of the manufacturers, we'd all benefit if you could pass this along. Thanks :)

 


31 minutes ago, kye said:

We all know that 8-bit video capture formats don't generate the best results. [...]


I sometimes wonder why Canon Log doesn't have any information below 9 IRE and Arri Log C doesn't have any above 93 IRE (or something), but that's not as bad as 40-90 IRE. A similar solution to what you propose is simply to change color space and gamma when you don't have high DR in a given scene, rather than using log footage.

The issue with a format that changes the black and white point per scene or per shot is that you can't apply a single LUT or grade as a starting point, and you're left with a lot more correction work to do. Far more cost in man hours than simply renting a better camera.

Fwiw, I know this conflicts with the conventional wisdom, but I've done really extensive tests with 8-bit and 10-bit footage, and with a high quality codec the difference between the two is extremely small. With externally captured C100 footage I couldn't get any significant banding under any circumstances, despite Canon Log being 8-bit and having around a 9 IRE black level. With AVCHD I could get a bit of banding. With the Alexa I AB'd 8-bit transcodes against 10-bit internal ProRes, and the only difference even with an aggressive grade is more contrast in the noise from quantization rounding when you zoom in to 800%. Not a hint of banding on either, but remember both of these are cinema cameras that are very noisy. There WILL be a lot of banding in an 8-bit gradient without dithering that there won't be in a 10-bit gradient without dithering, but that's because the source is a perfectly clean gradient with no noise to act as dither.

However, with older F5 footage (10-bit but a low bitrate codec) I was able to find a bit of banding... mostly due to macroblocking. Macroblocking and in-camera noise reduction (which smooths out the noise the Alexa has so much of) are MUCH bigger culprits than 8-bit space alone. I think a poor codec can look worse in 10-bit at the same bit rate, since it's trying to squeeze more data into the same space. Just my opinion, though.

Try transcoding some 10-bit Alexa footage into an 8-bit uncompressed/lossless codec (Apple Animation or similar), then bring both into Resolve and apply the same LUT. There will be no increase in banding in the 8-bit footage, just an increase in noise from the rounding error. Now try the same thing with an aggressive noise reduction pass first. I expect both of the noise-reduced clips would have banding, but the 8-bit one would have much more. So banding isn't about bit depth exclusively; it's also (mostly) about dithering.
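That gradient experiment is easy to reproduce numerically. A rough numpy sketch (the ramp values are made up): quantising a clean ramp straight to 8-bit collapses it into a handful of flat bands, while adding a little noise before quantising, the way sensor noise does, spreads the same signal across neighbouring codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth horizontal ramp spanning only a few 8-bit code values:
# the worst case for visible banding.
ramp = np.tile(np.linspace(0.50, 0.52, 1920), (64, 1))

def quantize8(x):
    return np.clip(np.round(x * 255), 0, 255).astype(np.uint8)

banded   = quantize8(ramp)                                     # hard steps
dithered = quantize8(ramp + rng.normal(0, 0.5 / 255, ramp.shape))

# The undithered version collapses to a handful of flat bands; the
# dithered one uses more neighbouring codes, so the *average* level
# per column still tracks the ramp smoothly.
print(len(np.unique(banded)), len(np.unique(dithered)))
```

Averaged down a column, the dithered version recovers sub-code precision, which is exactly why noisy cinema camera footage resists banding at 8-bit.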

Not trying to disagree with you outright, I just think this is more complicated than it seems and including a convoluted technique in consumer cameras isn't the answer. Besides, this is already possible (and trivially easy) by using different looks for different scenes or by renting a different camera. Resolve also has some pretty good anti-banding tools already.


I don't think 8-bit banding is the only issue. I wonder how much of the talk about the picture from various cameras being "thick" or "thin" or "brittle" is attributable to this.

@Mattias Burling I agree that not all 10-bit implementations are equal and that's kind of my point, that we could get the benefits without needing a separate implementation.

@HockeyFan12 I was thinking that the scaling would be inverted during decoding, so once the file is read all the different shots would be comparable; obviously grading would be awful if this didn't happen.

I agree that it's a complicated picture and absolutely depends on how good the implementation is within the camera; however, all else being equal, more bits within a given DR are better than fewer.

Using different looks is also a solution, but it means you give yourself difficulties in post. I've got footage from multiple budget cameras and matching them is a PITA, and you'd think things like RCM and ACES would be the answer, but the reality is they don't support every profile, normally just the log ones from the big manufacturers.

@Shirozina I agree, 8-bit can look fine, but using those bits more effectively would be better!


The idea is great, but now, at the start of the affordable 10-bit recording era, it is pointless.

From a technical point of view, 8-bit log recording only lived for a few years, from 2014 (A7s) until today. I think every camera manufacturer will step up to 10-bit in their next cameras, where throwing out half of the data won't mean visible image degradation.

 

 


@kye, fair enough. That's a little over my head, but it makes sense. Which I think is the bigger issue: it might confuse prosumer-level shooters like me who are used to something more WYSIWYG.

But the idea makes sense, though jumping up to 10-bit gets you four times more color information, and at best you'd get maybe 30-50% more by stretching 8-bit out for low-DR scenes. I did that math wrong once myself, equating a stop with a bit, but that only holds in true log space, and most pseudo-and-so-called-log gammas aren't actually that flat.

And, regardless, I've never found the bit depth itself to be the issue. (In-camera noise reduction and macroblocking usually are.) Tbh, I've only had banding issues rarely, with A7S and 5D Mark III footage, and it's been manageable in post. Usually I set my ISO at native and expose with an incident meter as I used to when I shot film, and haven't had any problems with anything else.

So I'm a big luddite here, probably using an antiquated approach that works only for me. I think the key is it encourages me to underexpose compared with ETTR, which I know many more technical members advocate, but I find ETTR problematic with log profiles as it seems to increase banding and change color/saturation/tonality in the highlights (dramatically less so with the Alexa than with other systems, but I rarely have the budget to shoot with one!).
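The stop-vs-bit point is easy to show with back-of-envelope numbers (the stop count below is an assumption, not any specific camera): in a linear encoding each stop down the range gets half the codes of the one above it, whereas a true log curve gives every stop an equal share, which is the only case where one stop equals exactly one bit.

```python
STOPS = 8            # assumed scene DR covered by the encoding
CODES = 2 ** 8       # 8-bit container

# Linear encoding: code value is proportional to light, so each
# stop down halves the number of codes it receives.
linear_per_stop = [CODES // 2 ** (s + 1) for s in range(STOPS)]

# True log encoding: every stop receives the same share of codes.
log_per_stop = [CODES // STOPS] * STOPS

print(linear_per_stop)   # [128, 64, 32, 16, 8, 4, 2, 1], top stop first
print(log_per_stop)      # [32, 32, 32, 32, 32, 32, 32, 32]
```

In the linear case the bottom stop gets a single code, which is why shadows band first; real log-ish gammas sit somewhere between these two extremes.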

I did work with someone who helped develop, or at least worked a lot with, ACES, and he actually shot on an A7S AND exposed to the right. He used a Q7 recorder with a two-stop pull and a custom LUT to clean up shadow noise and improve tonality, and he did some of what you suggested: taking the log signal (fed via HDMI to the recorder), compressing it into linear space with a custom LUT he developed after contacting Sony for information on the gamma and chromaticities, and recording the final signal externally to ProRes. The footage looked really good, had nice tonality, and intercut well with a higher end system.

The approach wasn't for me, though. Too technical, too many moving parts, the recorder was too big, and it did nothing to mitigate the chroma clipping issues in the A7 line (which the F5 and F55 finally fixed, but which plagued them when I first used them), so you had to watch your highlights carefully, because chroma clipped two stops sooner and not always to white. But for a technical shooter on a budget it was great, and the footage looked really good.

So you're not the first to try something like this, and it does work. I just found it... complicated. But I'm a real luddite, probably one of the least technical shooters here. My entire approach is: set native ISO (or a stop or two faster if needed, changing it in camera rather than in post), set the flattest log space, use an incident meter to set the stop as I would shooting film, use false color as I would a spot meter, and uhh... that's it. It gets me a pretty adequate/average result, and I think it works for other old school shooters who care more about repeatability than perfect image quality.

I suspect on bigger shows like Game of Thrones they pull a couple of stops in-camera for green screen work, though, and I've heard of Fincher using HDRX selectively depending on scene DR (as you mention, switching formats to suit the scene), so this simple approach has its flaws.


@HockeyFan12

I was hoping that the manufacturers would just build it in and we'd never have to even know it was there! :)

You're right about my math being wrong, oops(!), but we'd still get a benefit. Every aesthetic impression we get from footage has one or more technical aspects responsible for it, and the colours of ML RAW vs other codecs are at least partially down to bit depth. My idea was just to get a bit more bit depth, essentially for free. Even if it was applied to 10-bit cameras, you'd still get a little more than 10 bits.
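For what that "free" depth is worth under a true-log assumption (my own arithmetic, not from any camera spec): if the curve spreads its codes evenly over S stops and the scene only spans s of them, stretching the scene over the full range gains log2(S/s) bits.

```python
import math

def extra_bits(codec_stops: float, scene_stops: float) -> float:
    # Gain from stretching a low-DR scene over the full code range,
    # assuming codes are spread evenly per stop (true log curve).
    return math.log2(codec_stops / scene_stops)

# A 12-stop log curve recording a 6-stop scene gains one full bit;
# an 8-stop scene gains only about half a bit.
print(round(extra_bits(12, 6), 2))   # 1.0
print(round(extra_bits(12, 8), 2))   # 0.58
```

So the gain is real but modest for typical scenes, and only becomes a whole bit when the scene has half the codec's DR.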

Someone in the GoPro thread posted a link to a blog post about the GoPro Protune profile, and it was fascinating; it covers much of what you're saying. The profile doesn't distribute the bits evenly across the DR: there's less useful information in the bottom one or two stops, so it spends fewer codes there and gives more to the upper two, as that's typically more useful.
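That kind of uneven allocation is easy to picture. A tiny sketch (the weights below are invented for illustration; the real Protune curve will differ): give each stop a weight and split the 8-bit code range proportionally.

```python
# Hypothetical per-stop weights, brightest stop first: the bottom two
# stops get fewer codes because they mostly contain noise.
weights = [1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5]
total = sum(weights)

codes_per_stop = [round(256 * w / total) for w in weights]
print(codes_per_stop)   # [48, 48, 32, 32, 32, 32, 16, 16]
```

Compared with an even 32 codes per stop, the top stops get 50% more precision at the cost of the noisy shadows.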

ETTR is a complex approach. On the one hand it gives you the most bits to play with, but on the other it means the image is subtly different between scenes. I'm not sure if you've seen those articles where a cinematographer gets to know a new camera by under- and overexposing by various amounts and then matching the levels in post? The results typically show where underexposing gives too much noise, where overexposing makes the image start to look awful, and where the sweet spot for skin tones sits. I've heard many cinematographers talk about how, for a given camera, you have to over- or underexpose by a certain amount.

In terms of your approach, I'd say that if it's working then that's great: it's definitely the way to get a consistent image, and it will definitely be nicer in post.

The adventures of your friend are too tedious for me as well. I have a technical education, and I'm at home reading specifications and data processing techniques, but in the end I work out how to set my equipment for the best results and then go out and put the effort into what I point my camera at. I think I'm actually far less technically inclined on set than you are: I don't white balance or even control my own exposure or focus most of the time. Everything the camera can automate (to a level where it's acceptable and not a detriment to the end product) I automate, and my time operating the camera for my home and travel videos is spent thinking about angle, composition, camera movement, depth of field, and anticipating what's going to happen next so I can be ready for it, without filming getting in the way of the holiday experience for me or others. This is an art form, after all :)


@kye, yeah, I would never auto-white balance, but that's just me. What if your white balance changes mid-shot, or what if you're mixing color temperatures? I used to do that a lot for night exteriors, and at the very least it would be inconsistent across the coverage. Often I would put a half CTB and half plus green on tungsten lights (or half CTO and half plus green on HMIs) and mix that with an urban vapor on tungsten to emulate street lights (mercury vapor and low pressure sodium, respectively): a mix of wildly divergent color temperatures, neither of which is meant to read as white. That's my concern with self-scaling contrast: what if your scene is mostly flat but then you pan against the sun? Will the contrast change on the fly? Will it change fast enough? It just seems like a lot of work for very minimal benefit.

But I feel a little old school with white balance. I usually shoot either 5600K or 3200K (or very rarely 4300K), or on the Red MX I used to shoot 4000K with an 80D filter, which I later learned Cronenweth was also doing, so I felt pretty cool about that. Probably I was inadvertently copying him, though, based on something I overheard or read somewhere.

On the very low end I know of people who white balance each shot to a white card because they don't know what they're doing and read some tutorial about white balancing wrong (in my opinion, but just my opinion); on the super high end there's a vfx supervisor I talked with who, when given the choice, would only shoot Red Raw, and he'd put color filters on cameras to maximize the dynamic range of each channel, then white balance in post to a white point he set carefully and consistently in the scene based on a true white source, be it tungsten or daylight-balanced. And so he'd maximize (and ETTR) the exposure of each channel, not just the image overall. Kind of genius, and weirdly similar to what I perhaps unfairly dismissed beginners for doing (white balancing every shot). Sort of an extreme version of the Cronenweth 80D technique.

But shooting with everything on auto is totally fair. Most directors don't think about any of that stuff–it's just their crew that does. I feel like that's a very Malick-like approach, and I envy people who are able to focus on the content to that extent rather than worrying about technical stuff as I would.

I'm planning to buy a C200 later this year and might fool around with the autofocus function for the first time. And with stills I've started using the meter in the camera rather than a spot meter. Otherwise I sort of enjoy the technical stuff. But I do wish there were an "auto" for the sound department; I haven't found one yet other than paying a good sound mixer.


12 hours ago, kye said:

@Shirozina I agree, 8-bit can look fine, but using those bits more effectively would be better!

That's what your in-camera profiles do. Response curves in log or film modes etc. let you move more bits to the most important areas of the image and not waste them on areas where they're not needed. Naturally this means you have to pick the profile to fit the scene and nail the exposure to pin the tones in the correct place on the curve. Higher bit depths mean you can afford to ETTR more to minimise noise, without worrying that you'll run into banding when you pull tones down a log curve and stretch the bits out.

