Eric Calabros

Downsampled 4k looks way better than native 4k.. or not!


Jim Kasson did a series of tests on this subject: does a 6k, 8k, or even 9k image downsampled to 4k look better than native 4k? Though he's a stills guy, the tests are relevant for video shooters too. This is the latest one, but check the previous posts as well.

https://blog.kasson.com/the-last-word/camera-resolution-and-4k-viewing-a7s-a7iii-a7riii-a7riv-downsampled/

My take is that if you don't shoot text, Siemens stars, or charts, the difference is surprisingly unnoticeable! And keep in mind, Jim is doing this in post on a powerful PC, using the best available software and algorithms. We know the downsampling that happens inside cameras is far from the best possible: manufacturers want it done fast without using much power, so it can't be a very computationally expensive task.

 


Interesting! I think that the A7S is also being oversampled, as far as I can tell from Kasson's methodology. The sensor is 12MP with a height of 2832, and it is downscaled to 2160. That's just over 31% oversampling. The numbers I see thrown around for how much you have to oversample to overcome Bayer patterns are usually around 40%, so the A7S is already in the oversampling ballpark of resolving "true" 2160.
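To put numbers on that (a quick sketch; the 40% figure is just the rule of thumb quoted above, not a hard constant):

```python
def oversample_pct(src_px: int, dst_px: int) -> float:
    """Percent by which the source exceeds the target along one axis."""
    return (src_px / dst_px - 1) * 100

# A7S stills are 2832 px tall; UHD is 2160 px tall.
a7s = oversample_pct(2832, 2160)
print(f"A7S vertical oversample: {a7s:.0f}%")   # ~31%

# Rule-of-thumb threshold (quoted above) for out-resolving a Bayer mosaic.
BAYER_RULE_OF_THUMB = 40  # percent, approximate forum figure
print("past the Bayer threshold:", a7s >= BAYER_RULE_OF_THUMB)
```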

With that in mind, I was surprised at how much difference in color there is. Even on the real world images, there's a nice improvement in color issues on the A7R4 over the A7S. The difference in sharpness was not very pronounced at all. But Bayer patterns retain nearly 100% luminance resolution, so maybe that makes sense. The color difference evens out a bit when the advanced algorithms are used, which really shows the huge potential of computational photography, all the way from better scaling algorithms up to AI image processing.

I suspect that some of our concept that oversampling makes vastly better images comes from our experience moving from binned or line skipped to oversampling, rather than directly from native to oversampling. And I also think that we all agree that after a point, oversampling doesn't help. 100MP vs 50MP won't make a difference if you scale down to 2MP.

6 hours ago, Eric Calabros said:

My take is that if you don't shoot text, Siemens stars, or charts, the difference is surprisingly unnoticeable

My take is that many of the things that camera nerds on forums obsess over are surprisingly unnoticeable.  I used to get caught up in things like resolution and bit depth and colour science, and after seeing a few real tests (or even better - real world tests) I got a shock and learned that some things really make very little difference.

That's why I question things I'm led to believe and actually go and do tests to find out.  I do far more tests than I talk about on here, gradually unlearning the BS that the internet is full of.  Of course, much of what is talked about does matter, and the vast majority matters sometimes and not other times.  That deeper knowledge takes years to learn on the internet, or mere hours if you pick up a camera, go do a test, and see what the end result looks like.

“What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so.”  ― Mark Twain

12 hours ago, KnightsFan said:

 

I suspect that some of our concept that oversampling makes vastly better images comes from our experience moving from binned or line skipped to oversampling, rather than directly from native to oversampling. And I also think that we all agree that after a point, oversampling doesn't help. 100MP vs 50MP won't make a difference if you scale down to 2MP.

Which is convincing me that the best sensor for capturing 4k is a native 4k sensor. Yes, it comes with some aliasing and false color in some cases, but it would run faster (higher fps), cooler (less data traveling inside or outside the sensor), and as a bonus it sees better in the dark.

9 minutes ago, Eric Calabros said:

Which is convincing me that the best sensor for capturing 4k is a native 4k sensor. Yes, it comes with some aliasing and false color in some cases, but it would run faster (higher fps), cooler (less data traveling inside or outside the sensor), and as a bonus it sees better in the dark.

Do they see better in the dark?

I understand that the pixels will be larger and have less ISO noise, but in an oversampled image the downscaling has an averaging effect on random noise on adjacent pixels, working as a sort of noise reduction filter.

I'm genuinely not sure which would come out ahead.

Does anyone know?
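A toy simulation can at least sanity-check the averaging argument. All numbers here are assumptions for illustration (10,000 photons per native-4K-sized pixel, 3 e⁻ read noise, Gaussian noise standing in for Poisson shot noise): the same total light is captured either by native-sized pixels or by four times as many quarter-size pixels that are then binned 4:1.

```python
import random
import statistics

random.seed(1)
PHOTONS_PER_BIG_PIXEL = 10_000   # assumed exposure level
READ_NOISE = 3.0                 # assumed electrons per read

def capture(n_pixels, photons_each):
    """Pixel values with shot noise (~sqrt of signal) plus read noise."""
    return [photons_each
            + random.gauss(0, photons_each ** 0.5)   # shot noise
            + random.gauss(0, READ_NOISE)            # read noise
            for _ in range(n_pixels)]

# Native 4K-sized pixels vs. 4x as many small pixels binned back 4:1.
native = capture(10_000, PHOTONS_PER_BIG_PIXEL)
small = capture(40_000, PHOTONS_PER_BIG_PIXEL / 4)
binned = [sum(small[i:i + 4]) for i in range(0, len(small), 4)]

def snr(vals):
    return statistics.mean(vals) / statistics.stdev(vals)

print(f"native SNR: {snr(native):.1f}")
print(f"downsampled SNR: {snr(binned):.1f}")
```

With shot noise dominating, the two come out nearly identical; the small-pixel route only loses ground when the per-read noise is large relative to the signal.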

2 hours ago, Eric Calabros said:

Which is convincing me that the best sensor for capturing 4k is a native 4k sensor. Yes, it comes with some aliasing and false color in some cases, but it would run faster (higher fps), cooler (less data traveling inside or outside the sensor), and as a bonus it sees better in the dark.

I don't know if I agree with that. The A7S and A7S II at 12MP had ~30ms rolling shutter, and the Fuji XT3 at 26MP has ~20ms. 5 years ago you'd be right, but now we really should expect lower RS and oversampling without heat issues. Maybe 26MP is more than necessary, but I think 8MP is much less than optimal. I don't know if it was strictly the lack of oversampling or what, but to be honest the false color on the A7S looked pretty ugly to me. And I'm talking about the 4k downsampled real-world ones.

2 hours ago, kye said:

Do they see better in the dark?

I understand that the pixels will be larger and have less ISO noise, but in an oversampled image the downscaling has an averaging effect on random noise on adjacent pixels, working as a sort of noise reduction filter.

I'm genuinely not sure which would come out ahead.

Does anyone know?

I think it's a question of how many photons hit the photosensitive part. Every sensor has some amount of electronics and stuff on the front blocking light from hitting the photosensitive part (BSI sensors have less, but still some), and every sensor has color filters that absorb some light. So it depends on the sensor design. If you can increase the number of pixels without increasing the area that's blocked, then noise performance should be very similar.
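That cancellation can be made explicit with made-up numbers (the fill factors and photon flux here are purely illustrative, not measured values): photons per pixel scale with pixel area times fill factor, so total light over the sensor depends only on sensor area and fill factor, not on pixel count.

```python
import math

SENSOR_AREA_MM2 = 36.0 * 24.0   # full-frame

def total_photons(pixel_count, fill_factor, flux_per_mm2=1e6):
    """Photons collected across the whole sensor in one exposure."""
    pixel_area = SENSOR_AREA_MM2 / pixel_count
    per_pixel = pixel_area * fill_factor * flux_per_mm2
    return per_pixel * pixel_count   # pixel_count cancels out

# Equal fill factor -> equal total light, regardless of pixel count.
print(math.isclose(total_photons(12_000_000, 0.95),
                   total_photons(60_000_000, 0.95)))   # True

# A lower fill factor on the denser sensor is what actually costs light.
print(total_photons(60_000_000, 0.80) < total_photons(12_000_000, 0.95))  # True
```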

2 hours ago, KnightsFan said:

I don't know if I agree with that. The A7S and A7S II at 12MP had ~30ms rolling shutter, and the Fuji XT3 at 26MP has ~20ms. 5 years ago you'd be right, but now we really should expect lower RS and oversampling without heat issues. Maybe 26MP is more than necessary, but I think 8MP is much less than optimal. I don't know if it was strictly the lack of oversampling or what, but to be honest the false color on the A7S looked pretty ugly to me. And I'm talking about the 4k downsampled real-world ones.

I think it's a question of how many photons hit the photosensitive part. Every sensor has some amount of electronics and stuff on the front blocking light from hitting the photosensitive part (BSI sensors have less, but still some), and every sensor has color filters that absorb some light. So it depends on the sensor design. If you can increase the number of pixels without increasing the area that's blocked, then noise performance should be very similar.

Modern BSI designs (Sony Semicon) block virtually no incident photons (effectively 100% fill factor). A 12MP FF sensor and a 60MP FF sensor both receive the same number of photons; it's just that the 60MP sensor has more readout noise.

On-chip colour-aware binning is more efficient and higher quality than oversampling when deriving 4K/2K from higher resolution sensors.
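To illustrate what colour-aware binning means in practice: on an RGGB mosaic you average same-colour photosites rather than blindly averaging neighbours, so no colours mix before demosaicing. A simplified sketch (real on-chip binning also compensates for the shifted sample positions, which this toy version ignores):

```python
def bayer_bin_2x2(mosaic):
    """Colour-aware 2x2 binning of an RGGB mosaic.

    Each output photosite averages the four same-colour sites in its
    4x4 input tile, so the result is a half-resolution mosaic that is
    still RGGB, with no colour mixing before demosaicing.
    """
    h, w = len(mosaic), len(mosaic[0])        # assume h, w multiples of 4
    out = [[0.0] * (w // 2) for _ in range(h // 2)]
    for y in range(h // 2):
        for x in range(w // 2):
            ty = 4 * (y // 2) + y % 2         # same-colour sites sit 2 apart
            tx = 4 * (x // 2) + x % 2
            sites = [mosaic[ty + dy][tx + dx] for dy in (0, 2) for dx in (0, 2)]
            out[y][x] = sum(sites) / 4
    return out

# Toy RGGB mosaic: R sites = 10, G = 20, B = 30.
value = {(0, 0): 10, (0, 1): 20, (1, 0): 20, (1, 1): 30}
mosaic = [[value[(y % 2, x % 2)] for x in range(4)] for y in range(4)]
print(bayer_bin_2x2(mosaic))   # [[10.0, 20.0], [20.0, 30.0]] -- still RGGB
```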

19 hours ago, KnightsFan said:

Interesting! I think that A7S is also being oversampled, as far as I can tell from Kasson's methodology. The sensor is 12MP with a height of 2832, and it is downscaled to 2160. That's just over 31% oversampling.

Bear in mind though that a full-frame sensor has a 3:2 aspect ratio, so some of those vertical pixels will go unused when capturing 16:9 video. There probably is some slight oversampling going on, as the sensor's horizontal resolution is 4,240 pixels, but it'd only be in the region of a 10% oversample rather than 31%.
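The same back-of-the-envelope arithmetic for a 16:9 readout, for reference:

```python
# 16:9 video region of the A7S sensor (4240 px wide) vs. UHD (3840 x 2160).
width_pct = (4240 / 3840 - 1) * 100
rows_16_9 = 4240 * 9 / 16                    # rows covered by a 16:9 crop
height_pct = (rows_16_9 / 2160 - 1) * 100
print(f"oversample: {width_pct:.1f}% wide, {height_pct:.1f}% tall")  # ~10.4% each
```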

30 minutes ago, David Bowgett said:

Bear in mind though that a full-frame sensor has a 3:2 aspect ratio, so some of those vertical pixels will go unused when capturing 16:9 video. There probably is some slight oversampling going on, as the sensor's horizontal resolution is 4,240 pixels, but it'd only be in the region of a 10% oversample rather than 31%.

A7S/A7S II record 4K with 1:1 readout, therefore there's a 1.1x crop.

28 minutes ago, David Bowgett said:

Bear in mind though that a full-frame sensor has a 3:2 aspect ratio, so some of those vertical pixels will go unused when capturing 16:9 video. There probably is some slight oversampling going on, as the sensor's horizontal resolution is 4,240 pixels, but it'd only be in the region of a 10% oversample rather than 31%.

Kasson's tests were in photo mode, not video. It's not explicitly clear, but I think he used the full vertical resolution of the respective cameras: he states that he scaled the images to 2160 pixels high, even though that means the horizontal resolution isn't 3840.

On 10/19/2019 at 9:47 PM, kye said:

My take is that many of the things that camera nerds on forums obsess over are surprisingly unnoticeable.  I used to get caught up in things like resolution and bit depth and colour science, and after seeing a few real tests (or even better - real world tests) I got a shock and learned that some things really make very little difference.

That's why I question things I'm led to believe and actually go and do tests to find out.  I do far more tests than I talk about on here, gradually unlearning the BS that the internet is full of.  Of course, much of what is talked about does matter, and the vast majority matters sometimes and not other times.  That deeper knowledge takes years to learn on the internet, or mere hours if you pick up a camera, go do a test, and see what the end result looks like.

“What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so.”  ― Mark Twain

It doesn't matter when your subject matter is large objects, such as people. But in content that focuses on detail, such as natural history, it does make a difference. It is all about context. Sure, you can produce examples where resolution is not noticeable, but equally you can do the same where it is noticeable.

Oversampling is important because it produces more accurate edges in an image. In content with large objects and few edges, you would not see a big difference, but in subject matter with small objects and lots of edges, it is significant.

2 hours ago, Mokara said:

It doesn't matter when your subject matter is large objects, such as people. But in content that focuses on detail, such as natural history, it does make a difference. It is all about context. Sure, you can produce examples where resolution is not noticeable, but equally you can do the same where it is noticeable.

Oversampling is important because it produces more accurate edges in an image. In content with large objects and few edges, you would not see a big difference, but in subject matter with small objects and lots of edges, it is significant.

I'm considering posting a blind test where some of the shots are digitally zoomed by various amounts, to see which shots people can identify and how much zoom is evident.  What sort of shots should I include?  I'm imagining things like close shots of plants with a back-light to highlight all the tiny hairs and textures.

I did a quick test like that for myself (thus the reference to me doing more tests than I share here) and I found that large digital zooms were visible but smaller ones were not.

21 hours ago, kye said:

I'm considering posting a blind test where some of the shots are digitally zoomed by various amounts, to see which shots people can identify and how much zoom is evident.  What sort of shots should I include?  I'm imagining things like close shots of plants with a back-light to highlight all the tiny hairs and textures.

I did a quick test like that for myself (thus the reference to me doing more tests than I share here) and I found that large digital zooms were visible but smaller ones were not.

It is usually evident in anything that has vegetation in it, since leaves are approaching the limits of resolution and anything that results in local degradation reduces them to an amorphous mass. If your subject matter is a face on the other hand, it is far from the limits of resolution, so for something like that you might not notice the difference.

That is the issue I have with a lot of these comparative "tests": usually the person doing them chooses subject matter that reinforces whatever claim they are making. So someone who claims that resolution does not matter will typically shoot a bunch of talking heads or buildings to make their point, and sure, for those things resolution is less important, but the claim that resolution is not important is still wrong. They are just focusing on the wrong thing. It could be that they simply don't understand, or it may be that they do understand and are doing it on purpose.

2 hours ago, Mokara said:

It is usually evident in anything that has vegetation in it, since leaves are approaching the limits of resolution and anything that results in local degradation reduces them to an amorphous mass. If your subject matter is a face on the other hand, it is far from the limits of resolution, so for something like that you might not notice the difference.

That is the issue I have with a lot of these comparative "tests": usually the person doing them chooses subject matter that reinforces whatever claim they are making. So someone who claims that resolution does not matter will typically shoot a bunch of talking heads or buildings to make their point, and sure, for those things resolution is less important, but the claim that resolution is not important is still wrong. They are just focusing on the wrong thing. It could be that they simply don't understand, or it may be that they do understand and are doing it on purpose.

Good points.

For the test I did for myself I shot 4K h264, scaled some shots, exported in h264 (at a higher bitrate) and then uploaded to YT in 4K, as that is my workflow.  Obviously if you're shooting RAW and delivering in Prores HQ then your thresholds for what is perceptible will be different.  I was simply doing it to see if it mattered to me, and how far I could push things in how I shoot, hoping that it would matter less and I could use digital zoom to space my primes further apart and cover more zoom range with the same number of lenses.

Fun stuff.
