
8-bit REC709 is more flexible in post than you think


kye


5 hours ago, zlfan said:

"

We did the color correction for "Key & Peele" in DaVinci Resolve.

Of note is the fact that our recording medium for the season (after a horrific trial with one of the external 10-bit 4:2:2 recorders) was 50 Mb/s XDCAM disc. We never lost a frame and we have hard masters of all of our footage, but I didn't learn until we were in post that that is an 8-bit standard. However, we were able to do a tremendous amount of push/pull, and the 8-bit never seemed to create problems. There is, of course, a notable difference between 35 and 50 Mb/s.

The shots we did in non s-log (60fps material) were far more problematic.

__________________
Charles Papert"

 

 

I think he did the whole season 1 of Key & Peele with the F3, recording S-Log to an 8-bit Nano Flash at 50 Mb/s MPEG-2 Long GOP. The color grading is really good. Something to consider, though: the F3 has a nice sensor and good DSP; he had a good budget for lighting; maybe he used one stop lower than the native ISO to reduce noise; and most of the scenes were staged, so he could control the dynamic range. But the final results are amazing for 8-bit 50 Mb/s MPEG-2 Long GOP.

 

https://vimeo.com/channels/keypeele

Interesting.

I suspect that the two key aspects that make it even remotely possible to get a professional result from 8-bit at that low bitrate are:

  • The DSP in the F3
  • The complete control on set - being able to change what is in front of the camera to suit the camera makes an enormous difference to how well the final version looks

These days, the cameras limited to 8-bit are mostly consumer cameras with completely rubbish DSP (certainly compared to the flagship cinema cameras of yesteryear), and the people using them point them at mostly or completely uncontrolled scenes, perhaps with mixed-temperature lighting and many elements in the frame that do not look pleasing in the captured footage.

Also, many of these cameras are used to capture scenes with a lot of movement - either movement of objects in the frame or movement of the whole camera from being hand-held - which puts incredible strain on a limited-bitrate codec.

I've said it many times but it's worth repeating - the amateurs who shoot the real world need the best specs but are given the worst, while the professionals who shoot on sets, with dozens or hundreds of people massaging every aspect of the scene to look good to the camera, need the least specs but are given the best.

4 hours ago, John Matthews said:

The 10-bit versus 8-bit debate was particularly pronounced in the 1080p era. While 1080p 10-bit, depending on the camera, has issues like moiré and aliasing, these concerns are often gone with 4K 8-bit. The redeeming quality of 10-bit is in H.265, where file sizes remain manageable, only marginally larger than 4K 8-bit H.264.

IMO:

4K 8-bit 4:2:0 > 1080p 10-bit 4:2:2, due to:

  • Absence of banding
  • Diminished moiré
  • Reduced aliasing
  • Enhanced detail and cropping capabilities

While 4K 10-bit holds an objective, mathematical superiority over 4K 8-bit, in practice its advantages often go unnoticed.

I suspect that the missing piece in the above is the oversampling. 4K 8-bit 4:2:0 should be inferior to 1080p 10-bit 4:2:2 when put onto a 1080p timeline, because the extra resolution doesn't always overcome the potential banding and quantisation error of 8-bit. However, the statement "shooting in 4K 8-bit 4:2:0 > shooting in 1080p 10-bit 4:2:2" is perhaps more accurate, because most cameras don't downsample when recording 1080p. The 1080p from the GH5 is stunning, for example, and I think the downsampling is the main reason.
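As a minimal illustration of why the downsampling matters, here's a numpy sketch (an idealised box-average, not any camera's actual pipeline): averaging 2x2 blocks of noisy 8-bit samples recovers in-between levels, which is roughly how oversampled 4K can behave like 10-bit 1080p in smooth gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth, slightly noisy gradient: the noise acts as natural dither,
# which is what lets averaging create in-between levels.
w, h = 3840, 2160
signal = np.linspace(0.2, 0.3, w)[None, :].repeat(h, axis=0)
noisy = signal + rng.normal(0.0, 1.0 / 512.0, signal.shape)

# Quantise to 8 bits: what an 8-bit 4K camera records.
q8 = np.round(np.clip(noisy, 0.0, 1.0) * 255.0) / 255.0
print("distinct levels in the 8-bit 4K frame:", np.unique(q8).size)

# Box-average 2x2 blocks down to 1080p: four 8-bit samples can land
# between the original steps, approximating ~10-bit precision.
down = q8.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
print("distinct levels after downsampling:", np.unique(np.round(down * 1023.0)).size)
```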


21 minutes ago, kye said:

I suspect that the missing piece in the above is the oversampling. 4K 8-bit 4:2:0 should be inferior to 1080p 10-bit 4:2:2 when put onto a 1080p timeline, because the extra resolution doesn't always overcome the potential banding and quantisation error of 8-bit.

I have yet to see any examples of banding from a 4K 4:2:0 100 Mb/s file when put on a 1080p timeline. I can't remember banding on a 4K timeline either. Mathematically, you are correct that 1080p 4:2:2 10-bit has more information than 4K 4:2:0 8-bit, but I haven't found any examples that prove it objectively; most of the 4K 8-bit files that I see end up looking slightly more detailed, and if you punch in the difference is even bigger. There is definitely something to be said for downsampled footage, though.


41 minutes ago, John Matthews said:

I have yet to see any examples of banding from a 4K 4:2:0 100 Mb/s file when put on a 1080p timeline. I can't remember banding on a 4K timeline either. Mathematically, you are correct that 1080p 4:2:2 10-bit has more information than 4K 4:2:0 8-bit, but I haven't found any examples that prove it objectively; most of the 4K 8-bit files that I see end up looking slightly more detailed, and if you punch in the difference is even bigger. There is definitely something to be said for downsampled footage, though.

Well, if you're yet to see issues then let us delay no more!

This is a post I made about the endless issues I had with the XC10 and its 300 Mb/s 4K 8-bit C-Log files:

While it's not banding, these are definitely problems related to 8-bit, even on decent-bitrate 4K footage.

In reality, the combination of 8-bit and log is the reason, but the blame is mine for not using the camera differently - or for buying it for this kind of work in the first place.
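The mechanism can be sketched in a few lines of numpy (a hedged illustration using a generic log-style curve, not Canon's actual C-Log formula): log packs many scene stops into few code values, so an 8-bit recording leaves big gaps once the shadows are pushed apart in the grade.

```python
import numpy as np

def log_encode(x, a=64.0):
    """Toy log OETF mapping linear [0, 1] to [0, 1]. Illustrative only."""
    return np.log1p(a * x) / np.log1p(a)

def log_decode(y, a=64.0):
    """Inverse of log_encode."""
    return np.expm1(y * np.log1p(a)) / a

linear = np.linspace(0.0, 0.05, 1000)                 # a dark shadow gradient
rec8 = np.round(log_encode(linear) * 255) / 255       # 8-bit recording
rec10 = np.round(log_encode(linear) * 1023) / 1023    # 10-bit recording

def push_three_stops(code):
    """Decode to linear light and multiply by 8 (a 3-stop push)."""
    return np.clip(log_decode(code) * 8.0, 0.0, 1.0)

# Roughly 4x fewer distinct output levels in 8-bit -> visible banding.
print("distinct graded levels, 8-bit: ", np.unique(push_three_stops(rec8)).size)
print("distinct graded levels, 10-bit:", np.unique(push_three_stops(rec10)).size)
```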


"I've said it many times but its worth repeating - the amateurs who shoot the real world need the best specs but are given the worst, and the professionals who shoot on sets with dozens or hundreds of people to massage every aspect of the scene to look good to the camera need the least specs but are given the best."

 

yes.


21 minutes ago, kye said:

While it's not banding, these are definitely problems related to 8-bit, even on decent-bitrate 4K footage.

In reality, the combination of 8-bit and log is the reason, but the blame is mine for not using the camera differently - or for buying it for this kind of work in the first place.

Yes, maybe I should have prefaced my statement with "log files meant for 10-bit". They will not look great in 8-bit. I don't really like shooting log anymore, except occasionally. I think most cameras do quite well in their normal profiles.

24 minutes ago, kye said:

This is a post I made about the endless issues I had with the XC10 and its 300 Mb/s 4K 8-bit C-Log files:

I remember the XC10. Do you still have it and shoot with it? That was when Canon was 100% into making great concepts suck so you'd buy something pricier.


19 minutes ago, John Matthews said:

Yes, maybe I should have prefaced my statement with "log files meant for 10-bit". They will not look great in 8-bit. I don't really like shooting log anymore, except occasionally. I think most cameras do quite well in their normal profiles.

I remember the XC10. Do you still have it and shoot with it? That was when Canon was 100% into making great concepts suck so you'd buy something pricier.

Yeah, I still have it but don't use it. I should have used the absolutely excellent standard profile, which was 709-like but had an extended highlight rolloff that contained the full dynamic range of the camera. The downside was that it wasn't a professional colour space and wasn't supported by colour management etc., so I would have needed to know how to grade the image manually - not a problem now, but it certainly was then.
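For illustration, here's a sketch of that kind of rolloff (a hypothetical curve, not Canon's actual XC10 profile): identity below a knee, then an exponential ease toward a shoulder at 1.0, so over-range highlights compress instead of clipping.

```python
import numpy as np

def soft_rolloff(x, knee=0.7, shoulder=1.0):
    """Pass values below `knee` through; asymptotically approach `shoulder`."""
    x = np.asarray(x, dtype=float)
    span = shoulder - knee
    over = np.maximum(x - knee, 0.0)
    rolled = knee + span * (1.0 - np.exp(-over / span))
    return np.where(x <= knee, x, rolled)

# Scene values a stop over "white" (2.0) land just under 1.0 instead of
# clipping, which is what preserves the highlight detail for the grade.
print(soft_rolloff([0.5, 0.9, 1.0, 1.5, 2.0]))
```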


All of this is interesting: noise covering flaws, the role of in-camera downsampling (or the lack of it), 8-bit camera profiles vs 8-bit log, etc. These are details I've been aware of individually, but this thread helps me think of them in combination with one another. Thankful for that.

I too was assuming 10-bit as a reference to log and 8-bit as a REC709 profile. (My C100 days feel so long ago now...)

Wary of noise, I've been mindful to keep ISOs at, or only a stop or so over, Panny's 640/4000 dual base, and I have never really had much concern about noise. I feel like I savour log's rolloff character, but perhaps I've been telling myself that for so long that it deserves another round of testing to interrogate.

And I am thinking of a shoot this summer at a singing retreat where I had no lighting control, under a 20'x40' tent with buildings on one side, a lake on the other, a ceiling that reflected the green of the surrounding grass, and a huge range of skin tones... I think the 10-bit earned its keep during that edit, but that is the kind of extreme it's suited for, not the point of this thread, which I appreciate.


8 hours ago, zlfan said:

Of note is the fact that our recording medium for the season (after a horrific trial with one of the external 10-bit 4:2:2 recorders) was 50 Mb/s XDCAM disc. We never lost a frame and we have hard masters of all of our footage

From this comment I presume they were recording to a Sony PMW-50, or similar. This is what I mean:

https://pro.sony/s3/cms-static-content/file/05/1237490106505.pdf

Although the reference to "disc" makes me wonder if perhaps it was an older/different model than this one. Or maybe it was just a slightly sloppy mistake saying "disc"? 

 

8 hours ago, zlfan said:

but I didn't learn until we were in post that that is an 8-bit standard.

It's interesting how even the professional experts can sometimes screw up the basics. 

8 hours ago, zlfan said:

I think he did the whole season 1 of Key & Peele with the F3, recording S-Log to an 8-bit Nano Flash at 50 Mb/s MPEG-2 Long GOP.

Correct, they shot one season with the then-new Sony PMW-F3, but after that they switched over to the brand-new ARRI cameras for the later seasons.

But where did you see evidence they filmed with the Nano Flash? 


2 hours ago, IronFilm said:

From this comment I presume they were recording to a Sony PMW-50, or similar. This is what I mean:

https://pro.sony/s3/cms-static-content/file/05/1237490106505.pdf

Although the reference to "disc" makes me wonder if perhaps it was an older/different model than this one. Or maybe it was just a slightly sloppy mistake saying "disc"? 

 

It's interesting how even the professional experts can sometimes screw up the basics. 

Correct, they shot one season with the then-new Sony PMW-F3, but after that they switched over to the brand-new ARRI cameras for the later seasons.

But where did you see evidence they filmed with the Nano Flash? 

He mentioned his work in a Nano Flash subforum and said that the 8-bit XDCAM codec is surprisingly good for post. But 50 Mb/s 8-bit 4:2:2 Long GOP holding up in post on a pro series is really hard to imagine.

It also reminds me how lucky we are now: we have so many good cameras, free Resolve, and cheap LED lights.

I really think Crop Mood, with its true lossless ML RAW on dirt-cheap cameras, can make things happen. It is just buried among so many good choices now.

It is also either unfortunate or fortunate for the DP as a true full-time career, since no barrier to entry is left.


The biggest difference I notice between 8-bit and 10-bit footage is that 8-bit has splotchy chroma variation. I believe this is a result of the encoder rather than inherent in the bit depth, but it's been visible on every camera I've used that natively shoots both bit depths. In this quick example, I shot 60 Mb/s 4:2:0 UHD Rec.709 in 10-bit H.265 and 8-bit H.264, and added some saturation to exaggerate the effect. No other color corrections were applied. Notice, when zooming in, that the 8-bit version has splotches of color in places.

All settings were the same, but this is not a perfectly controlled test - partially because I was lazy, and partially to demonstrate that it's not hard to show a 10-bit benefit, at least on this camera. I do, however, agree with the initial premise that 8-bit generally gets the job done, and I also generally agree that 8-bit 4K downscales to a better image than native 10-bit 1080p.

 

[Attached images: comparison.png, 8bit.png, 10bit.png]
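One plausible contributor to those splotches can be sketched numerically (hedged: this models pure quantisation only and ignores the encoder's macroblock behaviour, which the post above suspects is the larger factor): a saturation boost stretches quantised chroma steps into visibly distinct colours.

```python
import numpy as np

def quantise(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

cb = np.linspace(0.49, 0.51, 2000)      # subtle chroma ramp (0.5 = neutral)

for bits in (8, 10):
    boosted = 0.5 + (quantise(cb, bits) - 0.5) * 4.0   # 4x saturation boost
    steps = np.diff(np.unique(boosted))
    print(f"{bits}-bit: {np.unique(boosted).size} distinct chroma values, "
          f"largest step {steps.max():.4f}")

# The 8-bit version has ~4x fewer, coarser chroma levels; in smooth areas,
# neighbouring regions snap to different levels and read as splotches.
```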


Very interesting tests. The image holds up pretty well.

Has anyone worked with Topaz Labs' or other similar software? Maybe uprezzing 8-bit 4K to 8K, and then gaining bit depth (to 12-bit?) when downrezzing to 1080p. Or are those only exaggerated claims of improving image quality?


33 minutes ago, sanveer said:

Very interesting tests. The image holds up pretty well.

Has anyone worked with Topaz Labs' or other similar software? Maybe uprezzing 8-bit 4K to 8K, and then gaining bit depth (to 12-bit?) when downrezzing to 1080p. Or are those only exaggerated claims of improving image quality?

I have version 3.1.11. I have had excellent results deinterlacing and slightly uprezzing 540i 60fps footage to 1080p 60fps. However, when I take 4K or even 1080p footage and try to improve it, it's almost never worth the energy (time and electricity). This software is still in its infancy: it doesn't even correct moiré, which is a major omission.


59 minutes ago, John Matthews said:

I have version 3.1.11. I have had excellent results deinterlacing and slightly uprezzing 540i 60fps footage to 1080p 60fps. However, when I take 4K or even 1080p footage and try to improve it, it's almost never worth the energy (time and electricity). This software is still in its infancy: it doesn't even correct moiré, which is a major omission.

Oh OK. That's very curious. I guess it means the software isn't ready for higher resolutions on the majority of hardware; it's way too hardware-intensive. It's strange that moiré (and other artefacts) aren't corrected either. It can't possibly be too algorithmically complicated to fix.


14 minutes ago, sanveer said:

Oh OK. That's very curious. I guess it means the software isn't ready for higher resolutions on the majority of hardware; it's way too hardware-intensive. It's strange that moiré (and other artefacts) aren't corrected either. It can't possibly be too algorithmically complicated to fix.

I have an M1 iMac, so I don't have the latest and greatest, but it does really well with FCPX. The problem is that, after several hours of processing, I can't really tell much of a difference except for compression artifacts in 4K. At least the M1 is super efficient. It does decently with aliasing (electrical wires in 1080p), but moiré is apparently another beast. In fact, I don't know of any consumer software that fixes it. The biggest fix for moiré is more resolution out of camera (and even then, without a low-pass filter there will be some frequency that triggers it).
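To illustrate why that is, here's a hedged numpy sketch of the sampling problem (not a model of any camera or of Topaz's actual processing): a fine pattern sampled without a low-pass filter folds into a false coarse pattern that is indistinguishable from real detail, so software downstream has nothing reliable to latch onto.

```python
import numpy as np

x = np.arange(4000)
fine = np.sin(2 * np.pi * x / 4.1)                # detail near the pixel pitch

naive = fine[::4]                                  # skip pixels: aliases
prefiltered = fine.reshape(-1, 4).mean(axis=1)     # low-pass first, then sample

def alias_strength(sig):
    """Peak FFT magnitude of the residual pattern, per sample."""
    return np.abs(np.fft.rfft(sig - sig.mean())).max() / sig.size

# The naive decimation produces a strong false coarse pattern (moiré);
# prefiltering before sampling suppresses it by more than an order of magnitude.
print("false pattern strength, naive:      ", round(alias_strength(naive), 3))
print("false pattern strength, prefiltered:", round(alias_strength(prefiltered), 4))
```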


Well, well:)

1. 10-bit V-Log on the S1 smashes Panny's S1 8-bit 709 like Bruce Lee kicked Chuck "chesthair superchamp" Norris' butt in the fake Colosseum of Rome. No contest.

2. Most of the curve of 10-bit log, if not all of it, should hold many more values than linear 709 (see the sketch after this list).

3. Alister Chapman showed that 10-bit S-Log2 had no disadvantage on the FS7 compared to its linear 12-bit RAW output. He's always a great presenter, and a good reason to grab some chocolate milk and cookies and watch a 2.5-hour presentation.

I am applying kye's neat and effective form of presenting arguments. Learning from the best. 1, 2, 3... 🙂
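Rough numbers behind point 2, as a sketch: code values per stop of scene light for 8-bit Rec.709 versus an idealised pure-log 10-bit curve that spreads 14 stops evenly (a simplification - Panasonic's actual V-Log formula differs).

```python
import numpy as np

def rec709_oetf(L):
    """ITU-R BT.709 opto-electronic transfer function."""
    L = np.asarray(L, dtype=float)
    return np.where(L < 0.018, 4.5 * L, 1.099 * np.power(L, 0.45) - 0.099)

codes_per_stop_log = 1023 / 14          # even split across an assumed 14 stops

for s in range(7):                      # stop 0 = the stop just below white
    hi, lo = 2.0 ** -s, 2.0 ** -(s + 1)
    codes_709 = float(rec709_oetf(hi) - rec709_oetf(lo)) * 255
    print(f"stop -{s}: 8-bit 709 ~{codes_709:5.1f} codes, "
          f"10-bit log ~{codes_per_stop_log:.0f} codes")
```

Under these assumptions, 8-bit 709 only rivals the log curve in the top stop; everywhere below that, the 10-bit log allocation holds far more values.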


5 hours ago, PannySVHS said:

3. Alister Chapman showed that 10-bit S-Log2 had no disadvantage on the FS7 compared to its linear 12-bit RAW output. He's always a great presenter, and a good reason to grab some chocolate milk and cookies and watch a 2.5-hour presentation.

That was more a design flaw of the FS7 itself, though.

12-bit from an ARRI, for instance, will be great.


1 hour ago, IronFilm said:

That was more a design flaw of the FS7 itself, though.

12-bit from an ARRI, for instance, will be great.

Isn't ARRIRAW 12-bit log? That would be broadly as good as 14-bit linear - one 'level' better than 12-bit linear or 10-bit log.
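A back-of-envelope sketch of that "one level" intuition, assuming an idealised log curve that spreads its codes evenly across 14 stops (ARRI's actual Log C mapping differs):

```python
# Linear encoding halves its available codes every stop down from clip,
# while an idealised log curve allocates them evenly across all stops.
STOPS = 14

for s in range(STOPS):
    linear_14bit = 2 ** 14 / 2 ** (s + 1)   # codes left in stop s below clip
    log_12bit = 2 ** 12 / STOPS              # even allocation per stop
    print(f"stop -{s:2d}: 14-bit linear {linear_14bit:7.0f} codes, "
          f"12-bit log {log_12bit:5.0f} codes")

# 14-bit linear wins only in the top ~5 stops; below that, the 12-bit log
# holds more codes per stop - hence "broadly as good as 14-bit linear".
```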

