Don Kotlos

A7sIII - Get ready?

Recommended Posts

On 10/10/2016 at 1:12 PM, mkabi said:

 

I'm confused by this logic. Canon crops I understand... RED cropping I understand...

But Sony cropping? How is it that 12MP is perfect for FF UHD, and even 42.4MP is perfect for FF UHD... but 15.4MP is a crop?

 

To get 12MP (well, 8MP once in 16:9 aspect ratio) from 15.4MP, you either need some ugly downsampling or a crop. That is why.
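For concreteness, the scaling arithmetic behind this argument can be sketched as follows. The pixel widths here are rough illustrative figures, not exact Sony specs:

```python
# Horizontal scale factor needed to go from a sensor's 16:9 pixel width
# down to UHD's 3840 columns. Widths are rough approximations for
# illustration, not exact Sony specifications.
UHD_W = 3840

widths = {
    "~12MP (a7S II class)": 4240,
    "~15.4MP": 4800,
    "~42.4MP (a7R II class)": 7952,
}

ratios = {name: w / UHD_W for name, w in widths.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.3f}x the UHD width")
```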

On 10/10/2016 at 9:08 PM, JurijTurnsek said:

A built-in ND filter seems like a stretch: the FF sensor already occupies the extreme edges of the mount, so how would you physically cram a filter in there? They have only done it with a Super 35 sensor so far, and the size difference is huge.

Interestingly, the Sony NEX-VG900 lacked built-in ND filters, and it has a full frame sensor :-/

4 hours ago, IronFilm said:

To get 12MP (well, 8MP once in 16:9 aspect ratio) from 15.4MP, you either need some ugly downsampling or a crop. That is why.

 

I suppose that explains the 12MP a7S II... but what about the 42.4MP a7R II?

 

18 minutes ago, IronFilm said:

It is a smooth multiple of 4K, making for easy math with the oversampling and avoiding any nasty skipping in the process.

Still can't grasp the concept (neither 12MP fits into 42, nor does 8MP go into 42... then I tried the actual resolutions, 3840x2160 into 7952x5304; I even tried the many combinations of 16:9 resolutions of the 42MP sensor provided by DPReview). Perhaps you could show me the math behind it, or at least point me to a source that explains it.

55 minutes ago, IronFilm said:

It is a smooth multiple of 4K, making for easy math with the oversampling and avoiding any nasty skipping in the process.

That's incorrect. Oversampling can be done from any resolution. For example, the A6300 reads every pixel across the entire width of the sensor (6000x3375), without pixel binning or line skipping, then downscales to 3840x2160 with a software algorithm. It works the same way as taking a high resolution JPG into Photoshop and resizing it to a lower resolution. The camera does it on the fly from a 6K raw signal, which is why it produces the sharpest 4K image. In comparison, the A7R II's S35 mode oversamples from a ~15MP area of the FF sensor.
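The point that a software rescale works for any source size, integer ratio or not, can be sketched with a toy resampler. This is a minimal 1D linear interpolation for illustration only, not the camera's actual algorithm:

```python
# Minimal 1D linear-interpolation resample: maps any source length to
# any target length, no integer ratio required.
def resample_row(row, new_len):
    old_len = len(row)
    out = []
    for i in range(new_len):
        # position of output sample i in source coordinates
        pos = i * (old_len - 1) / (new_len - 1) if new_len > 1 else 0
        lo = int(pos)
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

# 6 source pixels down to 4 output pixels: a non-integer 1.5x ratio
print(resample_row([0, 10, 20, 30, 40, 50], 4))
```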

2 hours ago, Luke Mason said:

That's incorrect. Oversampling can be done from any resolution. For example, the A6300 reads every pixel across the entire width of the sensor (6000x3375), without pixel binning or line skipping, then downscales to 3840x2160 with a software algorithm. It works the same way as taking a high resolution JPG into Photoshop and resizing it to a lower resolution. The camera does it on the fly from a 6K raw signal, which is why it produces the sharpest 4K image. In comparison, the A7R II's S35 mode oversamples from a ~15MP area of the FF sensor.

Now this makes sense to me... so technically 15.4MP would not need a crop...

But, hey... fake b/c of poor Japanese grammar... lol.

3 hours ago, Luke Mason said:

That's incorrect. Oversampling can be done from any resolution.

True. But you are misinterpreting what I'm saying.
I'm not saying certain resolutions *can't* produce a 4K image; what I'm saying is that certain sensor resolutions are better suited for it than others.

28 minutes ago, IronFilm said:

True. But you are misinterpreting what I'm saying.
I'm not saying certain resolutions *can't* produce a 4K image; what I'm saying is that certain sensor resolutions are better suited for it than others.

"certain sensor resolutions are better suited for it than others." is exactly incorrect. From a pure image quality perspective, the more resolutions to work with the better, for example, Nokia Pureview technology oversamples from 41MP raw data to produce 8MP/5MP/2MP ones, they are sharp and clean because the noise can be average out.

The "multiples of 4K" concept you mentioned is designed for optimal speed, called pixel-binning, groups of pixels are readout as one instead of every single pixel. The bin requires equal number of pixels horizontally or vertically, like 2x2, 3x3 etc. Hence the need for "multiples" of resolution. Pixel binning tends to create noisier and softer images with higher likelihood of moire.

32 minutes ago, Luke Mason said:

"certain sensor resolutions are better suited for it than others." is exactly incorrect. From a pure image quality perspective, the more resolutions to work with the better, for example, Nokia Pureview technology oversamples from 41MP raw data to produce 8MP/5MP/2MP ones, they are sharp and clean because the noise can be average out.

 

Nope, imagine you have an image source that is exactly one pixel bigger in each direction than the final resolution. You'd be much better off cropping it than downsampling it.


Pixel binning averages multiple pixels together, which is effectively a box filter or low-pass filter, meaning higher frequencies such as noise will be reduced. Cubic or Lanczos filters can downscale while preserving detail, and they are very fast on a GPU (an iPhone can do it; Sony might be doing something better than a box filter). An oversampling rescale such as pixel binning reduces aliasing, whereas nearest neighbor (no averaging or interpolation) or line skipping increases aliasing and moiré.
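The low-pass claim can be checked numerically: averaging N samples of uncorrelated noise shrinks its standard deviation by roughly sqrt(N). A quick sketch:

```python
# Averaging groups of 4 noisy samples (a box filter, like 2x2 binning
# with averaging) should cut the noise standard deviation roughly in
# half, since 1/sqrt(4) = 0.5.
import random
import statistics

random.seed(42)
noise = [random.gauss(0, 1) for _ in range(40000)]

binned = [sum(noise[i:i + 4]) / 4 for i in range(0, len(noise), 4)]

print(statistics.stdev(noise))   # ~1.0
print(statistics.stdev(binned))  # ~0.5
```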


Nobody said it's not possible to downsample 4800 pixels to 3840, but that would be a huge CPU task for a very small, unnoticeable benefit. Sony chose a similar approach in the A7R II (though that's 5168 pixels), but it's not an efficient way to do 4K. The best solution could be a 40MP sensor doing 2x2 binning with the full field of view of FF; because the sensor would output only 8MP frames, adding 60fps capability would be much easier.

7 hours ago, jcs said:

Pixel binning averages multiple pixels together, which is effectively a box filter or low-pass filter, meaning higher frequencies such as noise will be reduced. Cubic or Lanczos filters can downscale while preserving detail, and they are very fast on a GPU (an iPhone can do it; Sony might be doing something better than a box filter). An oversampling rescale such as pixel binning reduces aliasing, whereas nearest neighbor (no averaging or interpolation) or line skipping increases aliasing and moiré.

Pixel binning does not "average" multiple pixels; it reads them as one bigger pixel. If you used the Sony F5 or FS7 you would notice that in "2K Full Scan" mode, which does 2x2 binning from the 4096x2160 sensor to produce a 2048x1080 image, image quality suffers from coarser, increased noise as well as severe aliasing if the OLPF has not been swapped for the 2K version.

There's a special type of colour-aware pixel binning, though. As far as I know it's only used in the Canon C300/C500, where the 4K sensor is pixel binned to produce a full RGB 2K image without interpolation. The prototype 16K FF sensor from Forza also uses colour-aware pixel binning to produce full RGB 8K from a 16K sensor.
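A rough model of what "full RGB without interpolation" could mean for one RGGB quad of a Bayer sensor is shown below. This is purely illustrative, not Canon's documented pipeline:

```python
# Toy model of colour-aware binning: each RGGB quad of a Bayer mosaic
# collapses into one full-RGB output pixel, with no demosaic
# interpolation across neighbouring quads. Illustrative only.
def rggb_quad_to_rgb(quad):
    # quad = [[R, G1], [G2, B]]
    r = quad[0][0]
    g = (quad[0][1] + quad[1][0]) / 2  # two green sites per quad
    b = quad[1][1]
    return (r, g, b)

print(rggb_quad_to_rgb([[100, 80], [90, 60]]))  # (100, 85.0, 60)
```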

1 hour ago, Eric Calabros said:

Nobody said it's not possible to downsample 4800 pixels to 3840, but that would be a huge CPU task for a very small, unnoticeable benefit. Sony chose a similar approach in the A7R II (though that's 5168 pixels), but it's not an efficient way to do 4K. The best solution could be a 40MP sensor doing 2x2 binning with the full field of view of FF; because the sensor would output only 8MP frames, adding 60fps capability would be much easier.

Definitely not the best solution; here is an example of it:

4K 1:1 readout from Sony FS7, downscaled in post:

[Image: OLPF4K-4K-FF-CROP.jpg]

2K from 2x2 pixel binning:

[Image: OLPF4K-2K-Full-Frame-CROP.jpg]

 

 

1 hour ago, Luke Mason said:

Definitely not the best solution; here is an example of it:

4K 1:1 readout from Sony FS7, downscaled in post:

[Image: OLPF4K-4K-FF-CROP.jpg]

2K from 2x2 pixel binning:

[Image: OLPF4K-2K-Full-Frame-CROP.jpg]

 

 

In a video camera, yes, but we are talking about small hybrid cameras that are supposed to take high resolution still images. In that case the best solution is the best-balanced compromise.

(As a side note, 2x2 binning needs a different demosaicing algorithm than conventional Bayer, and Sony is not good at making those algorithms.)

29 minutes ago, Eric Calabros said:

In a video camera, yes, but we are talking about small hybrid cameras that are supposed to take high resolution still images. In that case the best solution is the best-balanced compromise.

(As a side note, 2x2 binning needs a different demosaicing algorithm than conventional Bayer, and Sony is not good at making those algorithms.)

On a small hybrid camera it would be even worse, because the OLPF has to be optimised for 40MP stills, which means it must be very weak to retain fine detail. The aliasing from 2x2 binning with a weak, 8K-optimised OLPF would be horrifying.

2x2 binning is 2x2 binning; different demosaic algorithms do not yield any better results. The key to not degrading image quality when binning is an appropriate OLPF, and we are not talking about how to get better image quality here.

53 minutes ago, jcs said:

Exactly what I said in the second paragraph of my reply. These are special types of pixel binning rarely used in cinema and consumer cameras. The majority of cameras use the most standard and fastest method of pixel binning.

11 minutes ago, Luke Mason said:

Exactly what I said in the second paragraph of my reply. These are special types of pixel binning rarely used in cinema and consumer cameras. The majority of cameras use the most standard and fastest method of pixel binning.

Mathematically, when two sensor pixels are sampled and the results combined, that's an add operation, which must be normalized by a divide-by-2 or a bit shift right operation: (sensor1 + sensor2) / 2 = average. This is the same as a box or low-pass filter.

The math applies to any sensor system, including consumer devices: http://www.photonics.com/Article.aspx?AID=23653

Quote

Designers have explored a number of approaches to image-quality improvement. However, only two — enhancing the fundamental design of the pixels in the sensor array and combining, or “binning,” pixels — are known to improve the quality of low-light imaging, offering the market CMOS sensors with CCD-level performance at a fraction of the cost.
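The add-then-normalize arithmetic above, in code; for two non-negative integer samples, the divide by 2 and the right shift give the same result:

```python
# (sensor1 + sensor2) / 2 as an add plus a normalize; for non-negative
# integers the divide-by-2 is a single bit shift right.
s1, s2 = 200, 120

avg_div = (s1 + s2) // 2    # integer divide by 2
avg_shift = (s1 + s2) >> 1  # bit shift right by 1: identical result
print(avg_div, avg_shift)   # 160 160
```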

 

32 minutes ago, jcs said:

Mathematically, when two sensor pixels are sampled and the results combined, that's an add operation, which must be normalized by a divide-by-2 or a bit shift right operation: (sensor1 + sensor2) / 2 = average. This is the same as a box or low-pass filter.

The math applies to any sensor system, including consumer devices: http://www.photonics.com/Article.aspx?AID=23653

 

The two articles you linked are from more than a decade ago; they are not that relevant to what most sensors are doing today.

You (and the articles) are still talking about colour-aware pixel binning, which improves SNR and creates full RGB data without interpolation. This is currently only used on the C300/C500 (the Sony F35 was a legend with this binning technique). The pixel binning method used by the majority of cameras today couldn't be simpler: they treat multiple pixels as one, with one readout for the whole group; there's no averaging or any other processing of signals from individual pixels. Because of the high total FWC of the pixel group, readout noise increases. Also, if the OLPF is not designed with the size of the pixel group in mind, there will be severe aliasing.

