
Thomas S

Members
  • Posts: 9
  • Joined
  • Last visited
Reputation Activity

  1. Like
    Thomas S reacted to Django in Why did Canon remove so many EOS R features on the more expensive EOS R6?   
    I've been shooting a lot with the R6 this summer, including a run & gun event last week. The IQ I'm getting from the CLog3 10-bit 4:2:2 files is quite possibly the best 4K IQ I've gotten on any camera. The image is so clean, so detailed, with those great Canon skin tones. To avoid the poor rolling shutter I often shoot in 4K60p crop mode. I don't notice any line skipping either; really sharp, detailed IQ on my 5K iMac Pro monitor.
    I must agree that for the price, the R6 outclasses all the competition in terms of IQ.
     
  2. Like
    Thomas S got a reaction from Django in Why did Canon remove so many EOS R features on the more expensive EOS R6?   
    As a long-time Panasonic m43 user I actually just upgraded to the Canon R6 and absolutely love it. For me it came down to 100% EF lens support.
    The 20 MP makes complete sense to me for the same reason 12 MP makes sense on the Sony A7S: it helps sensitivity. The R6 has about a 1-stop advantage over the R5, and even photographers are very happy with the 20 MP stills. The sensitivity gain matters more than the tiny bit of extra detail, detail that arguably few lenses really take advantage of anyway.
    Yes, the R6 has some head-scratching limitations, but I think the amount of hatred is a bit overly dramatic. It's a solid stills and video camera with really good IBIS and really good DPAF. I also have a P4k, and while it's nice that I'm not limited in the tools and options I have on the P4k, the addition of super clean low light, super reliable DPAF and very good IBIS makes the R6 a more enjoyable camera to use in many situations.
    Every camera out there has flaws. The latest Sony A7S finally has 10bit, but its stills suck. If we're going to complain about 20 MP on the R6, it's funny that Sony fans are willing to look the other way for 12 MP stills on the A7S. Most of the other Sony cameras still don't have 10bit. Talk about outdated thinking.
    I didn't realize 4k 60p line skips. Is that 100% confirmed? If you look at this review and use the studio comparison tool, the 4k 60p seems to have the same detail as the UHD option in the drop-down: https://www.dpreview.com/reviews/canon-eos-r6-review/8 Are you sure you didn't do something wrong? I rarely use 4k 60p even on the P4k; I'm not as obsessed with slow motion as others are, I guess. I also don't really have an issue with the rolling shutter on the R6. Maybe it's not as good as other options out there, but I rarely run into a rolling shutter problem because I don't whip my camera around, and the 4k 60p APS-C crop mode eliminates a lot of those concerns. Yes, it sucks to have a crop, but my other camera choice was the Panasonic S5, which can only shoot 4k 60p with an APS-C crop. So technically the R6 having a FF 4k 60p mode is a bonus over the S5, even if it is compromised. I really see no decent affordable option for FF 4k 60p 10bit that doesn't involve some kind of compromise or a much higher cost.
    I also find it odd to criticize the RF lens options when many EF users will happily use their trusted EF lenses on this camera. EF glass works great on the R6, and there are a ton of EF owners out there.
    I am not even close to a Canon fanboy. I have used Panasonic since before the GH1. I owned a Canon XL1 many years ago, but that was my last Canon camera. I used to laugh at what passed for quality from their DSLRs; the HD made me want to vomit, I hated it that much. The R6 is the first Canon camera I feel finally gets 4k video right. Now that it has CLog3 it's pretty solid and has a bit better DR. Every camera out there has disadvantages, so it's silly to only trash one camera. The S5 is an amazing camera, but the L-mount lenses are insanely overpriced and Panasonic only focused on the high-end market for glass. Nothing can really be adapted to the L mount with any hope of video AF, and even adapted stills AF is hit or miss. That's assuming one finds value in the contrast-detect AF of the S5 in the first place: much better than it used to be, but still not reliable enough to trust. Plus I'm just not sure about the future of the L mount and Panasonic. It feels like a risky investment at this point, and I just cannot afford the L-mount lenses I really want.
    Sony does make nice cameras, but the lack of 10bit is a deal breaker for me and I will not compromise on that. No matter what other features Sony does better, it's not enough to make up for that. I don't want to have to buy a $4,000 camera body just to get a Sony with 10bit. That's essentially R5 price territory, and the R5 arguably does a lot of things better than the Sony A7S. Those with hybrid needs who shoot professional stills and video will never consider an A7S, so 10bit is basically a dead end with Sony for those users.
    I was about 30 seconds away from getting a Panasonic S5 instead. In the end it was the AF and the bleak lens options that killed it for me. I just didn't have $8,000 to invest in a new set of f2.8 zooms and a new body. With the R6 I get 100% perfect performance from my $1,000 Tamron 70-200 f2.8 lens.
  3. Like
    Thomas S got a reaction from ntblowz in Why did Canon remove so many EOS R features on the more expensive EOS R6?   
  4. Like
    Thomas S got a reaction from Katrikura in Sony A7S III – 10bit vs 8bit 4K/60p   
    That's why I got a Pocket 4k. I got tired of farting around with compressed formats. External recorders are one way around that, but they're also a hassle. Now I can shoot 3:1 Braw to a cheap SSD directly on the camera, or at least 5:1 to an internal SD card. Now that some DSLRs are getting external raw support, I still don't really care: an external recorder is an added cost and a lot more hassle than what I have now.
    I'm also kind of a freak for 4:4:4. I know visually it doesn't actually make a huge difference; I studied VFX in college and can pull damn good keys with 4:2:0. To me it's more a question of why 4:2:2 at all. It's a leftover from the analogue days, and we don't really need it anymore. We have fast and cheap enough media now not to worry about 4:2:2. It's an old broadcast video standard and really has no place in our digital world today. h264 and h265 are also very capable of 4:4:4, but we are barely getting cameras to add 4:2:2 and 10bit, let alone 4:4:4.
    So Braw on the P4k represents something I have been trying to achieve ever since I started with SVHS: something better than the video standards. It's not just because it's raw. To me it's because it's RGB, 4:4:4 and color-space agnostic. No more butchered rec709, no more unnecessary 4:2:2. I know visually I could probably do the same with a lesser format, but to me it's about starting clean and going from there. It represents what I always dreamed of being able to do with video. Oh yeah, and it's 12bit, which will be even harder to make an argument for than 10bit vs 8bit. But hey, it's there and doesn't hurt, so why not.
    Fun fact: 12bit has 4096 values. DCI 4k is 4096 pixels wide. That's exactly one unique value per pixel for a full-width subtle gradient, or in other words the perfect bit depth for 4k. Not sure anyone could ever tell vs one value every 4 pixels like 10bit has, but hey, there it is. Basically posterization should be physically impossible on the P4k shooting Braw.
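    The arithmetic above can be sketched in a few lines. This is just the post's own numbers, nothing camera-specific:

```python
# Pixels per unique code value for a full-range gradient spanning
# the whole frame width, at a given bit depth.
def pixels_per_step(width_px, bit_depth):
    """How many horizontal pixels share one code value when a
    full-range ramp is spread across width_px pixels."""
    return width_px / (2 ** bit_depth)

print(pixels_per_step(4096, 12))  # 1.0  -> one value per pixel (DCI 4K, 12bit)
print(pixels_per_step(4096, 10))  # 4.0  -> one value every 4 pixels (10bit)
print(pixels_per_step(3840, 8))   # 15.0 -> one value every 15 pixels (UHD, 8bit)
```

    Once the ratio drops to one value per pixel or below, a band can never be wider than a single pixel, which is why posterization should be invisible at 12bit on a 4096-wide frame.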
  5. Like
    Thomas S got a reaction from Katrikura in Sony A7S III – 10bit vs 8bit 4K/60p   
    Kind of depends. ProRes is, in my opinion, one of the best formats the industry has ever had. With that said, it's not a very smart format: it just throws the same amount of bits at a frame no matter what the content. The beauty of formats like h264 is that they are smarter; they look at each frame and try to figure out how much is really needed.
    Yes, bitrate is important, but h264 is such an efficient format that it can get away with far fewer bits than a dumb format like ProRes. When ProRes drops down to LT or Proxy it sacrifices quality across the whole frame, no matter what the visual impact might be. h264 breaks the image into blocks. mpeg2 did the same thing, but its transform blocks were all 8x8 pixels; h264 is more sophisticated and can partition blocks down to 4x4 pixels. So the blocking can be much smaller in fine-detail areas, where a block may be 4x4 pixels instead of the 8x8 mpeg2 would use. This means we don't see blocking as much, and visually we get a very solid picture. The encoder then saves bits in the flat areas so they can be spent on the more detailed areas. h264 also spreads bits across many frames: it uses the differences within a group of frames to determine what has actually changed. So if a camera is locked down and only a small red ball rolls across the screen, each of the following frames only has to spend bits on the blocks that cover that red ball. The more the frames change, the more bits each following frame needs.
    The problem is that yes, sometimes h264 encoders do not get enough bits. Most of the time 100 Mbps is enough, but a shot with a lot of random moving fine detail, like tree leaves blowing in heavy wind, can make 100 Mbps fall apart. ProRes HQ is dumb, but it has the advantage of looking good no matter what the situation: a locked-down blue sky compresses as well as those complex moving tree leaves. It's just that both take the same ~700 Mbps no matter how simple or complex the scene. h264, on the other hand, can get by with much less; it would be an epic waste to give h264 the same 700 Mbps, because it would not need it at all. In the red-ball example above, the scene just does not need that much data to store perfectly. I'm not sure there is a magic number for what bitrate should be used; it really depends on the scene, but for the most part 100 Mbps has been pretty solid on many 4k cameras, and 150 Mbps for 10bit on some cameras has been even more solid. That's another thing to factor in: ProRes HQ is 10bit 4:2:2, so it's not really fair to compare it to 8bit 4:2:0 h264 formats directly in terms of bitrate. Again, it's a dumb format, and even if you send ProRes an 8bit 4:2:0 camera source it still encodes it as if it were 10bit 4:2:2. So the 150 Mbps 10bit 4:2:2 h264 formats are a better comparison, and visually they hold up very well against ProRes HQ.
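    The storage cost of a constant-bitrate format versus an efficient one is easy to put in numbers. Using the bitrates from the post (the ~700 Mbps figure is the post's own ballpark for ProRes HQ at 4k, not an exact spec):

```python
# Megabytes of storage per minute of footage at a given bitrate.
def megabytes_per_minute(mbps):
    # bitrate in megabits/s * 60 s, divided by 8 bits per byte
    return mbps * 60 / 8

print(megabytes_per_minute(700))  # 5250.0 MB/min for ~700 Mbps ProRes HQ
print(megabytes_per_minute(100))  # 750.0 MB/min for 100 Mbps 8bit h264
print(megabytes_per_minute(150))  # 1125.0 MB/min for 150 Mbps 10bit h264
```

    Seven times the storage for the dumb format, which is exactly the headroom that lets it survive worst-case content like wind-blown leaves.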
  6. Like
    Thomas S got a reaction from Katrikura in Sony A7S III – 10bit vs 8bit 4K/60p   
    Agreed. All the photos we look at online are 8bit, and rarely is 8bit an issue. For normal rec709 video profiles 10bit is a tad overkill, although there are some extremely rare cases where it can help.
    The bigger plague of 8bit is h264 encoders that are too aggressive in assuming which color samples will not be noticed as different and merging them into a single block. You can have a frame from an 8bit h264 video and an uncompressed 8bit PNG of the same image and get more banding from the h264.
    Encoders try to figure out what is safe to compress together: the stuff we can't see with the naked eye. If we can't see it, there is no point wasting bits on it. So for an 8x8 pixel block with very subtle green values, the encoder may decide to make it an 8x8 block of one green color. This can cause banding where 8bit would normally not have any. Panasonic suffered from this on the GH4 when they added log: the log was so flat that the encoder assumed it could compress areas of color into big blocks because the differences would not be noticeable. If the shot stayed as log, it was right; but because the log was flatter than other log profiles, it really struggled with areas of flat color like walls once graded. Sony's encoder did better at not grouping similar colors, at least up to S-log2; S-log3 could suffer from the same issues as Panasonic V-log on the GH4 on older Sony cameras. The GH5 had an improved encoder that wasn't as aggressive with 8bit and areas of similar color.
  7. Like
    Thomas S got a reaction from hyalinejim in Sony A7S III – 10bit vs 8bit 4K/60p   
    A lot of this is due to the 32bit float color pipeline in NLEs. As long as the 8bit source has enough values to avoid posterization, the 32bit float math will likely be able to fill in any gaps as the image is pushed hard. It is much easier for math to fill in gaps when grading than when, say, upscaling an image.
    In the case of upscaling, new pixels can be averaged, but averaging doesn't work for fine details like a hair. In grading, however, we are trying to prevent posterizing, and that is done through smooth gradients; sometimes averaging surrounding values works perfectly.
    For example, if you have a color value of 200 next to a value of 250, it's easy in grading to average an in-between value of 225, which still creates a nice smooth gradient.
    Where 10bit is important is making sure the shot is captured well the first time. Once you have posterization it will always be there, and no 32bit float processing can magically make it go away. If the shot visually has no posterizing, then no matter how hard it is pushed it likely never will, or pushing the 10bit version would show just as much. That's why 32bit float was created.
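    A minimal sketch of why the float pipeline matters, using plain Python in place of an NLE: grading in integer math rounds at every step and merges neighbouring levels, while float math keeps the in-between values alive until final output.

```python
# Toy grade: darken three neighbouring 8bit levels by 50%.
levels = [100, 101, 102]
int_graded = [round(v * 0.5) for v in levels]   # 8bit-style rounding each step
float_graded = [v * 0.5 for v in levels]        # 32bit-float style, no rounding

print(int_graded)    # [50, 50, 51] -> two source levels collapsed into one
print(float_graded)  # [50.0, 50.5, 51.0] -> all three levels preserved
```

    Chain a few operations and the integer pipeline keeps throwing levels away, which is exactly the damage 32bit float avoids; but no amount of float math recreates levels the camera never recorded.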
    10bit is a lot like 32bit audio, or 16 stops of DR graded down to 10 stops: we record more so we have it and can manipulate it better. Most of the shots above likely would have still looked good with 6 bits. You need a very long and subtle gradient to break 8bit. It can and does happen, but the more noise the camera has, the less it happens, because of dithering. I think this is partially why Sony always had such a high base ISO for log.
    Finally, 10bit never promised better color, more dynamic range or fewer compression artifacts. That's not what bit depth does; it's all about how many different values can be used across the image, and the one real side effect of too few is posterizing. Many computer monitors at one point were only 6bit panels even if they claimed 8bit, and most people never noticed unless they did something like span the gradient tool in Photoshop across a full 1920-wide image. In the case of the clear blue sky image in the article, that wasn't even a difficult gradient; most of the sky was a similar shade of blue. To break 8bit you need a gradient going from 0 blue to 255 blue across the full 3840 pixels of 4k video, which means a unique blue value only every 15 pixels. So your sky needs to go from black on one end of the screen to bright blue on the other. Not always realistic, but you can shoot skies around dusk and dawn that spread the values out a lot more than midday. By comparison, 10bit has a unique blue value every 3.75 pixels of UHD video.
    It doesn't even have to cover the full screen. If you have a gradient over 100 pixels going from 200 blue to 205 blue, that still means a new blue value only every 20 pixels, even though the screen area is very small. I develop mobile apps, and I run into the same problem when I add a subtle gradient across something like a button: the gradient needs enough value range to cover the pixel area or it will look steppy. 10bit and higher is a safety net, a near-guarantee of never having any kind of posterizing. In the professional world that's important, and nobody wants surprises after the shoot is done.
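    The band-width rule of thumb used in the examples above generalizes to any ramp, full-screen or button-sized. Mirroring the post's arithmetic (width divided by the number of value steps in the ramp):

```python
# Average width in pixels of each flat band when a ramp from code
# value `low` to `high` is spread across `width_px` pixels.
def band_width_px(width_px, low, high):
    return width_px / (high - low)

print(band_width_px(3840, 0, 255))   # ~15.06 px per band (8bit full ramp on UHD)
print(band_width_px(3840, 0, 1023))  # ~3.75 px per band (10bit full ramp on UHD)
print(band_width_px(100, 200, 205))  # 20.0 px per band (subtle button gradient)
```

    The subtle button gradient bands *worse* than a full-screen 8bit sky ramp, which is why small UI gradients and flat log profiles are where steppiness shows up first.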
  8. Like
    Thomas S got a reaction from Towd in Sony A7S III – 10bit vs 8bit 4K/60p   
  9. Like
    Thomas S got a reaction from TheRealOG in Sony A7S III – 10bit vs 8bit 4K/60p   
  10. Like
    Thomas S got a reaction from DFason in Sony A7S III – 10bit vs 8bit 4K/60p   
  11. Like
    Thomas S got a reaction from plucas in Sony A7S III – 10bit vs 8bit 4K/60p   
  12. Like
    Thomas S got a reaction from Towd in Sony A7S III – 10bit vs 8bit 4K/60p   
    A lot of this is due to 32bit float color space in NLEs. As long as the 8bit has enough to not have posterization the 32bit float will likely be able to fill in any gaps as the image is pushed hard. Grading is much easier for math to fill in gaps than say upscaling an image.
    In the case of upscaling new pixels can be averaged but averaging doesn't work for fine details like a hair. Grading however we are trying to prevent posterizing.  That is done through smooth gradients.  Sometimes averaging surrounding values does perfectly.
    For example if you have a color of value 200 and another value of 250 its easy in grading to averaging an in between value of 225 which still creates a nice smooth gradient.
    Where 10bit is important is making sure the shot is captured well the first time.  Once you have posterization it will always be there and no 32bit float processing can magically make it go away. Visually ion the shot has no posterizing than no matter how hard it is pushed it likely never will have any or pushing the 10bit would show just as much. Thats why 32bit float was created.
    10bit is a lot like 32 bit audio or 16 stops of DR that are graded down to 10 stops.  We record more so we have it and can better manipulate it.  Most of the shots above likely would have still looked good with 6 bits. You need a very long and complex gradient to break 8bit.  It can and does happen.  The more noise the camera has the less it will happen because of dithering. I think this is partially why Sony always had such a high base ISO for log.
    Finally, 10-bit never promised better color, more dynamic range, or fewer compression artifacts; that's not what bit depth does.  It is purely about how many distinct code values can be used across the image, and the one real side effect of having too few is posterizing. Many computer monitors were at one point only 6-bit panels even when they claimed 8-bit, and most people never noticed unless they did something like drag the Photoshop gradient tool across a full 1920-pixel-wide image. The clear-blue-sky image in the article wasn't even a difficult gradient; most of the sky was a similar shade of blue. To break 8-bit you need a gradient going from 0 blue to 255 blue across the full 3840 pixels of a UHD frame. A gradient like that means one unique blue value every 15 pixels, so your sky would need to go from black on one side of the frame to bright blue on the other. That isn't always realistic, but skies around dusk and dawn spread the values out far more than midday. By comparison, 10-bit gives a unique blue value every 3.75 pixels in UHD.
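    Checking that arithmetic with a quick sketch (the pixel counts assume a 3840-pixel-wide UHD frame with a full-range ramp spanning it):

```python
UHD_WIDTH = 3840

def pixels_per_code_value(bits: int, width: int = UHD_WIDTH) -> float:
    """Band width when a full-range ramp spans the frame:
    pixels per unique code value."""
    return width / 2 ** bits

print(pixels_per_code_value(8))   # 15.0  -> one new value every 15 px
print(pixels_per_code_value(10))  # 3.75  -> one new value every 3.75 px
```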
    It doesn't even have to be something that covers the full screen.  If you have a gradient over 100 pixels running from 200 blue to 205 blue, that still means a new blue sample only every 20 pixels, even though the screen area is very small. I develop mobile apps, and I run into the same problem whenever I try to do a subtle gradient across something like a button: the gradient needs enough value range to cover the span of pixels or it will look steppy. 10-bit and higher is a safety net, a near-guarantee against any kind of posterizing. In the professional world that's important; nobody wants surprises after the shoot is done.
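    The button case can be sketched the same way (hypothetical values: the blue channel ramps from 200 to 205 across a 100-pixel button, then gets rounded to 8-bit integers):

```python
width = 100
# Blue channel ramps from 200 to 205 across the button.
ramp = [200 + 5 * i / (width - 1) for i in range(width)]
quantized = [int(v + 0.5) for v in ramp]  # round to 8-bit integer codes

# Group consecutive equal values into [value, band_width] pairs.
bands = []
for v in quantized:
    if bands and bands[-1][0] == v:
        bands[-1][1] += 1
    else:
        bands.append([v, 1])

print(bands)  # six bands, the interior ones 20 px wide -> visible steps
```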
  13. Like
    Thomas S got a reaction from alanpoiuyt in Sony A7S III – 10bit vs 8bit 4K/60p   