Sensor vs. Processor


SRV1981
What is the effect of each of these (sensor vs. processor) on:

1. color
2. dynamic range
3. highlight roll-off

etc.?

For example, the new X-S20 has the older X-Trans IV sensor but a newer processor, while the X-T5 has the newer X-Trans V sensor and the newer processor. A little confused.


1) colour

The sensor in a camera is a linear device, but it does have a small influence on colour: each photosite has a filter on it (which is how red, green, and blue are detected separately), and the wavelengths each filter passes are tuned by the manufacturer to give optimal characteristics. So the filter array is a small part of the camera's colour science.

The sensor then just measures the light that hits each photosite, and that measurement is completely linear.  Therefore, all the colour science (except the filter on the sensor) is in the processor, which turns the linear output into whatever Rec.709 or Log profile is written to the card.
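
To make that concrete, here's a rough sketch in Python of the kind of transform the processor applies. This is just the standard Rec.709 transfer curve; a manufacturer's actual profile layers much more on top (matrices, look tables, etc):

```python
import numpy as np

def rec709_oetf(linear):
    """Standard Rec.709 opto-electronic transfer function:
    a linear segment near black, a power curve above it."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

# Mid-grey (18% reflectance) off the linear sensor lands at ~0.41:
print(rec709_oetf(0.18))  # ~0.409
```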

2) DR

DR is limited by the dynamic range of the sensor and by its noise levels at the given ISO setting.  If a sensor has more DR, or less noise, then the overall image has more DR.

The processor can do noise reduction (spatial or temporal), and this can increase the DR of the resulting image.  The processor can also compress the DR of the image through the application of uneven contrast (eg crushing the highlights) or by clipping the image (eg when saving JPEG images rather than RAW stills), and this decreases the DR.
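
As a rough illustration of both points: engineering DR is often quoted as the log2 ratio between the biggest signal a photosite can hold and the noise floor, and temporal NR raises it by lowering that floor. The numbers below are hypothetical, just to show the arithmetic:

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Dynamic range in stops: log2 of full-well capacity over
    the noise floor (both in electrons)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: 51,000 e- full well, 3 e- read noise
print(dr_stops(51_000, 3))                 # ~14.1 stops
# Averaging 4 frames (temporal NR) cuts random noise by sqrt(4):
print(dr_stops(51_000, 3 / math.sqrt(4)))  # ~15.1 stops
```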

3) Highlight rolloff

Sensors have nothing to do with highlight rolloff - when they reach their maximum levels they clip harder than an iPhone 4 on the surface of the sun.

All highlight rolloff is created by the processor when it takes the linear readout from the sensor and applies the colour science to the image.
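
As a toy illustration (not any manufacturer's actual curve), a rolloff can be as simple as replacing the sensor's hard clip with a soft shoulder in the tone curve:

```python
import numpy as np

def soft_clip(x, knee=0.8):
    """Toy highlight rolloff: linear below the knee, then an
    exponential shoulder that approaches 1.0 instead of clipping."""
    x = np.asarray(x, dtype=np.float64)
    shoulder = knee + (1 - knee) * (1 - np.exp(-(x - knee) / (1 - knee)))
    return np.where(x < knee, x, shoulder)

exposure = np.array([0.5, 0.9, 1.2, 2.0])  # linear sensor values
print(np.clip(exposure, 0, 1))             # sensor: [0.5 0.9 1.  1. ]
print(soft_clip(exposure).round(3))        # curve: ~[0.5 0.879 0.973 1.0]
```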

There is general confusion around these aspects, and there is frequent talk of how one sensor or another has great highlight rolloff, which is factually incorrect.  I'm happy to discuss this further if you're curious.


Woah. Thanks! So it seems the processor is most important.  So when the X-S20 has the older sensor but the same processor as the X-T5, how does that affect differences in the image?


No - in Fuji's case the X-Trans sensor has a huge impact on IQ. It uses a unique, non-standard color filter array that affects noise, moiré, and detail resolution. It also avoids the need for an optical low-pass filter.

The new processor in the X-S20 is what allows 6.2K open gate in 10-bit 4:2:2 - the X-T4 could only do up to 4K in 4:2:0. It also allows F-Log2 and ProRes RAW out.
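
For a sense of why that needs a newer processor, here's some back-of-the-envelope data-rate arithmetic. I'm assuming 6.2K open gate means roughly 6240x4160 and using 24 fps, so treat the figures as illustrative:

```python
w, h, fps, bits = 6240, 4160, 24, 10  # assumed 6.2K open-gate geometry
samples_per_px = 2                    # 4:2:2 averages two samples per pixel
uncompressed_bps = w * h * fps * bits * samples_per_px
print(f"{uncompressed_bps / 1e9:.1f} Gbit/s before compression")  # ~12.5
```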


I don't think there is a "most important".  

I tend to think of photos/video like you're looking through a window at the world, where each element of technology is a pane of glass - so to see the outside world you look through every layer.  If one layer is covered in mud, or is blurry, or is tinted, or defective in some way, then the whole image pipeline will suffer from that.

In this analogy, you should be trying to work out which panes of glass the most offensive aspects of the image come from, and trying to improve or replace those layers.

Thinking about it like this, there is no "most important".  Every layer is important, but some are worth paying more or less attention to, depending on what their current performance is and what you are trying to achieve.

Of course, the only sensible test should be the final edit.  Concentrating on anything other than the final edit is just optimising for the wrong outcome.


To expand on the above, here is a list of all the "layers" that I believe are in effect when creating an image - you are, in effect, "looking through" every one of these items (a minimal code sketch of the idea follows the first list):

  • Atmosphere between the camera and subject
  • Filters on the end of the lens
  • The lens itself, with each element and coating, as well as the reflective properties of the internal surfaces
  • Anything between the lens and camera (eg, speed booster / TC, filters, etc)
  • Filters on the sensor and their accompanying coatings (polarisers, IR/UV cut filters, anti-aliasing filter, Bayer filter, etc)
  • The sensor itself (the geometry and electrical properties of the photosites)
  • The mode that the sensor is in (frame-rate, shutter-speed, pixel binning, line skipping, bit-depth, resolution, etc)
  • Gain (there are often multiple stages of gain, one of which is ISO, that occur digitally and in the analog domain - I'm not very clear on how these operate)
  • Image de-bayering (or equivalent for non-Bayer sensors)
  • Image scaling (resolution)
  • Image colour space adjustments (Linear to Log or 709)
  • Image NR, sharpening, and other processing
  • Image bit-depth conversions
  • Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc)
  • Image container formats
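
Here's the sketch I mentioned, covering the in-camera portion: each "layer" is just a function from image to image, applied in sequence. The stage names and bodies are illustrative stand-ins, not any real camera's firmware:

```python
import numpy as np

# Illustrative stand-ins for in-camera stages - each takes an
# image and returns an image, so the pipeline is just composition.
def debayer(img):    return img                          # CFA -> RGB (stubbed)
def to_rec709(img):  return np.clip(img, 0, 1) ** 0.45   # crude gamma only
def denoise(img):    return img                          # NR (stubbed)
def quantise(img):   return np.round(img * 1023) / 1023  # 10-bit levels

pipeline = [debayer, to_rec709, denoise, quantise]

frame = np.random.rand(4, 4, 3)  # fake linear sensor data
for stage in pipeline:           # "look through" every layer in turn
    frame = stage(frame)
```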

This is what gets you the file on the media out of the camera.  Then, in post, after decompressing each frame, you get:

  • Image scaling and pre-processing (resolution, sharpening, etc)
  • Image colour space adjustments (from file to timeline colour space)
  • All image manipulation done in post by the user, including such things as: stabilisation, NR, colour and gamma manipulation (whole or selectively), sharpening, overlays, etc
  • Image NR, sharpening, and other processing (as part of export processing)
  • Image bit-depth conversions (as part of export processing)
  • Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of export processing)
  • Image container formats (as part of export processing)

This gets you the final deliverable.  Then, if your content is to be viewed through some sort of streaming service, you get:

  • Image scaling and pre-processing (resolution, sharpening, etc)
  • Image colour space adjustments (from file to streaming colour space)
  • All image manipulation done in post by the streaming service, including such things as: stabilisation, NR, colour and gamma manipulation (whole or selectively), sharpening, overlays, etc
  • Image NR, sharpening, and other processing (as part of preparing the stream)
  • Image bit-depth conversions (as part of preparing the stream)
  • Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of preparing the stream)
  • Image container formats (as part of preparing the stream)

This list is non-exhaustive and is likely missing a number of things.  It's worth noting a few things:

  • The elements listed above may be done in different sequences depending on the manufacturer / provider
  • The processing that is done by the streaming provider may be different per resolution (eg, more sharpening for lower resolutions)
  • I have heard anecdotal but credible evidence to suggest that there is digital NR within most cameras, and that this might be a significant factor in what separates consumer RAW cameras like the P2K/P4K/P6K from cameras like the Digital Bolex or high-end cinema cameras

..and to re-iterate a point I made above, you must take the whole image pipeline into consideration when making decisions.  Failure to do so is likely to lead you to waste money on upgrades that don't get the results you want.  For example, if you want sharper images you could spend literally thousands of dollars on new lenses, but this might be fruitless if the sharpness/resolution limitation is the in-camera NR.  Or you might spend thousands of dollars on a camera that is better in low light when there is no perceptible difference after the streaming service has compressed the image, so much so that you have to be filming at ISO 10-bajillion before any grain is visible (seriously - test this for yourself!).


Ah crap..  I missed a step.  

Once your stream has been delivered to the streaming device, it will likely apply its own processing too.  This is anything from colour space manipulations, calibration profiles, etc, all the way through to the extremely offensive "motion smoothing" effects, NR, sharpening, and all other manner of processing that is as sophisticated and nuanced as a TikTok filter.

Plus, grandma's TV is from the early 90s and everything is bright purple, but no-one replaced it because she can't understand the remote controls on the new ones and she's too blind for it to matter anyway.


To return to the original question: perhaps the most important element in all this is the ability of the operator to understand the variables and aesthetic implications of all of the above layers, to understand their budget and the available options on the market, and to apply that budget so that the products they use are optimal for achieving the intellectual and emotional response they wish to induce in the viewer.
