
Canon C200 Philip Bloom Review


On 7/13/2017 at 0:41 PM, Oliver Daniel said:

I'd get a C200 if it had that middle codec. 

On the Sony comment, I get that too. I am getting really nice results with colour, however the "camera operating pleasure" isn't that good. 

Downloaded some C200 stuff and tried a grade. Yep, colour with Canon certainly has that mojo that the Sony can't match. Just much better. 

The DPAF is immense - I wonder if Sony will put their 4D focus tech into the FS series? That would be interesting. 

I'll be keeping a beady eye on this confusing contraption anyway. Very nice from what I've seen. 

If it is RAW, there is no "mojo".

3 hours ago, EthanAlexander said:

Raw doesn't mean there are zero calculations in the pipeline. Canon color science will still come into play.

Exactly, or else there would've never been this massive effort to make RED look like Alexa footage. 


RAW pixels are high bit depth RGB Bayer-pattern pixels: all of the (secret) color science has been computed at that point. All that remains is final WB and conversion from Bayer pixels to RGB pixels (and then to YUV (if not an RGB target), 422, 420, bit remapping and truncation to 10 or 8-bit, compression, etc.). The Bayer-to-RGB step, while technically part of the color science, is public: that's how ACR, PP, FCPX, Resolve, etc. know how to properly de-Bayer all the different RAW formats from different cameras.
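
For anyone curious what that public step actually looks like, here's a bare-bones bilinear deBayer sketch in C++ (a toy, assuming an RGGB tiling and linear values; shipping demosaicers like AMaZE are far more sophisticated, but just as camera-agnostic):

// Minimal bilinear demosaic for an RGGB Bayer mosaic. Illustrative only:
// nothing here is camera-specific beyond the 2x2 CFA tiling.
#include <algorithm>
#include <vector>

struct RGB { float r, g, b; };

// raw: width*height single-plane mosaic, RGGB tiling, linear values.
std::vector<RGB> demosaicBilinear(const std::vector<float>& raw,
                                  int width, int height) {
    std::vector<RGB> out(width * height);
    auto at = [&](int x, int y) {
        x = std::max(0, std::min(x, width - 1));   // clamp at the borders
        y = std::max(0, std::min(y, height - 1));
        return raw[y * width + x];
    };
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            bool evenRow = (y % 2 == 0), evenCol = (x % 2 == 0);
            RGB& p = out[y * width + x];
            if (evenRow && evenCol) {            // red photosite
                p.r = at(x, y);
                p.g = (at(x-1,y) + at(x+1,y) + at(x,y-1) + at(x,y+1)) / 4;
                p.b = (at(x-1,y-1) + at(x+1,y-1) + at(x-1,y+1) + at(x+1,y+1)) / 4;
            } else if (!evenRow && !evenCol) {   // blue photosite
                p.b = at(x, y);
                p.g = (at(x-1,y) + at(x+1,y) + at(x,y-1) + at(x,y+1)) / 4;
                p.r = (at(x-1,y-1) + at(x+1,y-1) + at(x-1,y+1) + at(x+1,y+1)) / 4;
            } else {                             // green photosite
                p.g = at(x, y);
                float h = (at(x-1,y) + at(x+1,y)) / 2;  // horizontal neighbors
                float v = (at(x,y-1) + at(x,y+1)) / 2;  // vertical neighbors
                p.r = evenRow ? h : v;           // R sits beside G on even rows
                p.b = evenRow ? v : h;
            }
        }
    }
    return out;
}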

4 hours ago, tugela said:

If it is RAW, there is no "mojo".

Ok, seriously, I had to cringe at this...

 

If it is MP4, there is no mojo because:

  • terrible processing like sharpening and NR
  • colors suffer from the bad codec
  • lower DR
  • worse motion cadence

 

6 hours ago, tugela said:

If it is RAW, there is no "mojo".

Very untrue. The strength and specific colors of the Bayer filter array (BFA) filters are a major factor in color rendering and low light ability, as is the sensor, as are the micro lenses. Red's notorious issues with ruddy colors and poor separation between reds and greens have to do with too-close red and green chromaticities, which I believe are rooted in the BFA filters, and which I know are issues no amount of post processing has solved (though Panavision's DXL has gotten close). Oh, and the sensor, and on-sensor readout and processing, are all proprietary, too, and contribute to one brand's "look." Look at how different each one of Blackmagic's cameras has been. Vast differences in color, sharpness, aliasing, dynamic range, noise, fixed pattern noise, color shading, color aberrations with wide angle lenses related to micro-lenses, etc., even when they're all shooting raw. Admittedly, this is all way over my head, but the idea that raw is the great leveler is absolutely, categorically untrue. It's like saying 4k is the great resolution leveler. It's not. Both are just formats. What comes before that is what matters... and that's what gets so complicated I just acknowledge I know next to nothing about it. Except that I know I know nothing about it. So I do my own research instead of trolling with meaningless dicta.

These days, I just judge with my eyes. I can't see much difference between the sample raw clips and the sample .mp4 clips. In my experience Canon Log and Canon Log 3 do not suffer from banding, even in 8 bit, and I don't see any here, either. (In my experience Arri Log C does band when compressed to 8 bit before grading, the Technicolor picture style DEFINITELY bands like crazy in 8 bit, and SLOG 2 does at times, and I suspect Canon Log 2 would as well if it existed in an 8 bit camera.) All of which says to me that if I were seriously in the market for this camera, I would rent it, shoot each format side-by-side, and draw my own conclusions from the test footage. That people are seeing horrible banding in the mp4 clips and not the raw ones must mean they're seeing something I'm not. Or something that's the result of YouTube compression.... which bands like hell. 

I'm not saying the lack of a 10 bit codec isn't significant and shouldn't be a deal-breaker for some. I'm just saying I'd rent and test extensively before drawing any conclusions.

And that raw is just a recording format. It's a very very small part of a very very big equation.

1 hour ago, HockeyFan12 said:

That people are seeing horrible banding in the mp4 clips and not the raw ones must mean they're seeing something I'm not. Or something that's the result of YouTube compression.... which bands like hell. 

I'm not saying the lack of a 10 bit codec isn't significant and shouldn't be a deal-breaker for some. I'm just saying I'd rent and test extensively before drawing any conclusions.

10-bit (or more) is only helpful/needed when grading is extreme enough that final conversion to 8-bit delivery would cause issues. Everywhere on Earth in 2017 is 8-bit except for digital cinema, the very rare case of HDR TVs (people have them now, but the content is extremely limited), and the even rarer case of 10-bit GPUs (e.g. Quadros) with 10-bit monitors on PCs.

What's the most likely reason C200 RAW doesn't show banding (still viewed online as an 8-bit streaming source, right?) vs. MP4? The MP4 file has noise reduction before compression, H.264 performs additional NR, the final edit is compressed again for upload, and the backend does another round of H.264 compression with ffmpeg (x264: very high quality, but still another compression round). DeBayering RAW does a sort of low-pass filter (a form of NR), though it's minor. So the RAW file has at least two fewer rounds of NR/compression than the MP4.

How do you eliminate banding, especially in the sky, and especially after YouTube/Vimeo online streaming compression? With dithering. Ideally NLEs would provide a high-bit to lower-bit dithering option, similar to the dithering used when converting high-bit RGB down to 8-bit or to 256-color PNGs. Since that doesn't seem to exist, here's another trick: add noise/grain. Experiment with noise level and noise size, then upload, check the results, and raise or lower the noise until the banding is gone.
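
For what it's worth, here's a minimal C++ sketch of the add-noise-then-quantize idea (TPDF dither is the textbook version of "add grain and experiment"; this is the general technique, not what any particular NLE does internally):

#include <cstdint>
#include <random>

// Quantize a linear value in [0,1] to 8-bit, adding +/-1 LSB of
// triangular-PDF noise first so quantization error becomes fine grain
// instead of visible bands.
uint8_t quantizeDithered(float v, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(-0.5f, 0.5f);
    float noise = u(rng) + u(rng);              // sum of two uniforms = TPDF
    float scaled = v * 255.0f + noise;          // spread the rounding error
    if (scaled < 0.0f)   scaled = 0.0f;         // clamp to the code range
    if (scaled > 255.0f) scaled = 255.0f;
    return static_cast<uint8_t>(scaled + 0.5f); // round to nearest code
}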

43 minutes ago, jcs said:

10-bit (or more) is only helpful/needed when grading is extreme enough that final conversion to 8-bit delivery would cause issues. [...]

While I agree with you and you're providing the voice of reason in what amounts mostly to an ill-informed war of numbers, there are still issues with banding and 8 bit color.

Banding can be an issue in 8 bit even in a linear color space. That bit depth was chosen to provide more colors than the eye can distinguish, but it doesn't map those values onto everything our eyes see. So you lose, by design, some saturated greens, reds, and particularly violets, because the sRGB/Rec.709 chromaticities are far narrower than our eyes': our eyes see a weird rounded gamut, while digital chromaticities define a triangle. But you do also get a bit of banding... in odd places...

I used to work at one of the biggest (the biggest?) film labs and content delivery companies in the world. When they authored Blu-rays, they had this crazy Linux machine that let the author view the final compressed deliverable while cycling through dithering schemes in complex areas, compressing them in real time to the final output. Some areas did suffer in down-converting to 8 bit. Tellingly, those areas were exceptionally, vanishingly few. Many films would have none. Flashlight flares (you see a number of them in the Oblivion Blu-ray) are very problematic, but most footage isn't. And, yep, they dealt with this through dithering algorithms and noise. Usually good enough. But there's a reason HDR requires 10 bit color and a wider gamut: you can only stretch things so far. 8 bit's not perfect.

In a "flat" space you introduce even more room for error, in log in particular. There's a reason log film scans came to be stored as 10 bit log even while delivery was 8 bit: that's how much data you need to preserve an insanely flat image, 16 bit linear or 10 bit log. I'm talking insanely flat, too. Log film scans are flatter than Log C. Significantly. And while Canon Log, Canon Log 3, etc. hold up well in 8 bit, I find that when I transcode Arri Log C files (which don't need any dithering or noise, they're pretty noisy to begin with) to 8 bit, banding might appear even with a moderately aggressive grade, and that SLOG2 and Technicolor and VLOG don't play well with 8 bit, maybe because of the inelegant denoising going on in dSLR video and curves that are just too flat for their own good. Canon's log profiles are not true log: they're designed to work well in 8 bit, not to sandwich a true log gamma into 8 bits...
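
To put rough numbers on that (assuming an idealized log curve that spreads, say, 14 stops evenly across the code range, which no real curve does exactly): 10-bit log gives 1024 / 14 ≈ 73 code values per stop, while 8-bit log gives 256 / 14 ≈ 18, which is where banding creeps into smooth gradients. Linear is even more lopsided: the top stop alone takes half of all codes, so in 8-bit linear the stop eight below clipping gets a single code value, versus 256 codes in 16-bit linear. Hence "16 bit linear or 10 bit log."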

So the thing is... I don't expect Canon Log (12 stops of DR) or Canon Log 3 (13 stops of DR) to present any banding problems. And I didn't see any in the video, not any that could be attributed to an 8 bit recording, only what could be attributed to YouTube compression. But I could see Canon Log 2 (15 stops DR and VERY flat, designed for HDR) being quite problematic in 8 bit space. So the question isn't "do I need 8 bits or 10 bits" but do I need those extra barely-there 2 stops of DR. Given that a GH2 has 7-8 stops of DR, and a Red MX or 5D Mark III RAW can't even eke out 12.... I don't need more than 12-13!

But some do. So I get why Arri uses 10 bit ProRes and why Canon won't allow Canon Log 2 in an 8 bit wrapper. 

But arguing over a YouTube video adds up to nothing. The video itself is 8 bits and horribly compressed. IMO, the question is: do you need 15 stops of DR or can you live with 13... and 13 real ones, same as an Epic Dragon or F55 on its best day? Canon's last two stops are mostly noise anyway. Sure, it's there if you denoise and work with it, but in most grades it's not going to show up anyway.

My advice to rent the camera and test still stands. Some 8 bit implementations (mostly the Canon dSLR ones and Sony's too-flat SLOG2/SLOG3 profiles) have a terrible habit of banding, and should be examined carefully before being put to serious use, or else handled with conservative exposure techniques, which sort of betray the whole point of log! Most properly implemented formats are fine, though.

But it's up to the end user to examine his or her own needs. And I don't think most people here have tried this camera out on their own, or spot metered and lit with enough care in the past to know that they need way more DR than even film ever had before the advent of the DI.

28 minutes ago, HockeyFan12 said:

While I agree with you and you're providing the voice of reason in what amounts mostly to an ill-informed war of numbers, there are still issues with banding and 8 bit color.

[...]

Please feel free to drop the mic on your way out of the forum.

3 hours ago, HockeyFan12 said:

While I agree with you and you're providing the voice of reason in what amounts mostly to an ill-informed war of numbers, there are still issues with banding and 8 bit color.

Banding can be an issue in 8 bit even in a linear color space.

That's true: the solution is dithering. Even 1-bit monochrome can eliminate banding with dithering (when highlights aren't blown or shadows crushed) :) 

https://en.wikipedia.org/wiki/Dither

[Images: Michelangelo's David in 63 gray values, and a gradient-based dithered version, from the Wikipedia article above]
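
If you want to see how little it takes, here's a toy 1-bit Floyd-Steinberg error-diffusion sketch in C++ (the same family of techniques behind the David images above):

#include <vector>

// gray: width*height luminance in [0,1]; returns a 0/1 bitmap.
// Each pixel's quantization error is pushed onto unvisited neighbors,
// so even two tones can render a smooth gradient.
std::vector<int> ditherFS(std::vector<float> gray, int width, int height) {
    std::vector<int> out(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            int bit = gray[i] >= 0.5f ? 1 : 0;
            float err = gray[i] - bit;           // what rounding threw away
            out[i] = bit;
            if (x + 1 < width)  gray[i + 1]         += err * (7.0f / 16.0f);
            if (y + 1 < height) {
                if (x > 0)      gray[i + width - 1] += err * (3.0f / 16.0f);
                                gray[i + width]     += err * (5.0f / 16.0f);
                if (x + 1 < width) gray[i + width + 1] += err * (1.0f / 16.0f);
            }
        }
    }
    return out;
}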

I suspect people will buy the C200 figuring that they'll use RAW to get 12 bits, but after using the camera for a while and dealing with the massive files and storage requirements, they'll realize the 8-bit 420 looks more than good enough and won't use RAW much or at all. While the C300 II provides a 12-bit 444 option, so far I haven't needed it (I shoot mostly 1080p 10-bit CLog2 with ARRI settings and an ARRI LUT in post). SLog2 with SGamut3.cine on the A7S II in 8-bit so far hasn't presented any banding challenges. 8-bit 422 on the 1DX II has only had banding issues after NR (the solution was to reduce high-frequency noise reduction, focusing on medium and low frequency noise). With carefully controlled noise/dithering, 8-bit shouldn't have any issues with banding, any more than converting 24-bit RGB to a dithered 8-bit 256-color image does.

One reason banding can be so conspicuous is Mach bands: https://en.wikipedia.org/wiki/Mach_bands.

3 hours ago, Jimbo said:

You make it sound like you want him to leave, @fuzzynormal?

Do not leave the forum @HockeyFan12, but feel free to drop the mic, and then pick it back up when you next see it appropriate to lend us some more of your knowledge.

Thanks.

As in dropping the mic and walking off stage- https://en.wikipedia.org/wiki/Mic_drop

13 hours ago, HockeyFan12 said:

Very untrue. The strength and specific colors of the Bayer filter array (BFA) filters are a major factor in color rendering and low light ability, as is the sensor, as are the micro lenses. [...]

It is still just signal coming off the sensor. Red is red. Green is green. Blue is blue. None of them are something else. The specific filters used and the sensitivity of sensor cells might differ, but in the end you still get a series of monochromatic data streams. It is how that signal is interpreted AFTER it comes off the sensor that makes color, specifically how it is deconvolved from the Bayer array. This idea that there is some sort of special "color science" in the data coming off the sensor is complete nonsense; the color you get is derived AFTER the signals have been debayered and reconstituted into final pixels.

If you are talking about "science" it has to follow the rules of science, not magic.


@tugela can you read C++ code? You can download the source to RawTherapee and check out the various deBayer algorithms (e.g. AMaZE). You'll see that there is nothing camera-specific other than the Bayer pattern used. RAW files have all the secret color science baked in; nothing secret is needed to convert to RGB. RAW is just high bit depth RGB stored in a planar Bayer pattern (e.g. RGGB). See the source code for yourself.
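
To make that concrete (a sketch, not RawTherapee's actual code): the only per-camera input a generic demosaicer needs is the 2x2 CFA tile from the file's metadata.

#include <string>

// pattern: the camera's 2x2 CFA tile from the RAW metadata, row-major,
// e.g. "RGGB", "GRBG", or "BGGR". That's the whole camera-specific part.
char cfaColorAt(const std::string& pattern, int x, int y) {
    return pattern[(y % 2) * 2 + (x % 2)];
}
// cfaColorAt("RGGB", 0, 0) == 'R';  cfaColorAt("RGGB", 1, 1) == 'B'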

11 hours ago, tugela said:

It is still just signal coming off the sensor. Red is red. Green is green. Blue is blue. [...]

Red is red, green is green, and blue is blue... but only before they hit the sensor.

The color filters on Bayer filter arrays are not all alike. They've changed dramatically even in recent years and vary substantially brand-to-brand even now. The original 5D had thicker color filters (in what were also quite different colors) than the current Canon lineup does, and many users perceived richer colors from the original 5D on a per-pixel basis when compared with contemporary cameras as a result... at the cost of lower resolution and a worse SNR (worse dynamic range). When I shot film, I remember that Velvia had the narrowest spectral sensitivity curves per layer, and also the best color resolution and perceived saturation subjectively. This stuff varied a lot stock-to-stock. Reversal and color negative looked totally different. Some stocks had better resolution. Some better colors. Some finer grain. Some different white balances. Some were black and white. Some infrared-sensitive. 

If the sensor doesn't matter and only the math thereafter does, why do different film stocks look so different from each other? Like today's CCD and CMOS sensors, film stocks are just analogue light-sensitive RGB sensors.

I can't begin to understand the argument that every sensor is entirely the same. It's so easily proved false. Let's go easy on you and forget the big differences between micro-lenses and anti-aliasing filters, which are of course baked in at the RAW level (Red itself has many different low pass filters for the Dragon, each with its own ISO rating and color profile and overall look), and even between Foveon vs. Bayer vs. film vs. CCD vs. CMOS. Let's stick to just CMOS sensors. Why do the Red, the Red MX, the Dragon, and the Helium all vary so dramatically in look and technical quality, and why does each require dramatically different math to produce a raster image, if their sensors are all the same? DxOMark has its share of issues, but it's certainly not without reason that they give different sensors different scores in different categories... based not on images, but on RAW sensor output alone. Sure, the internal math and calculations that each camera's JPEG engine or each PC's RAW developer provides help equalize the image toward a common goal of looking accurate and looking good. But the sensor output varies tremendously camera-to-camera before any code tackles it. Before the ADC even quantizes it, and the ADC is another factor in RAW. Arguably, the RAW data varies more on a per-camera basis than the resultant RGB image does.

I'm not a scientist (liberal arts major here, but an inquisitive one) so I'll leave the science to you. But science requires a control group and a variable group and adherence to the scientific method. We don't have access to RAW data (well, maybe with DxOMark, but they haven't rated most video cameras and so far as I know can't read Canon's Cinema RAW Light codec) before some level of sensor-specific code turns it into a raster image... so there's no way to create a reliable control group in camera RAW comparisons, because we can't access the RAW data until variable code turns it into an image. So until I read "scientific" proof from you that sensor quality is irrelevant (and I look forward to that news, because then I could return my current-generation dSLR and move back to my Rebel XT without any penalty), I'm going to stick with what I trust most: my eyes.

And reason. 

I encourage others to do the same. 


It's possible to get a valid image without even deBayering the RAW data (aka demosaicing)! Just take the appropriate RGB Bayer pixels and do a simple interpolation to get lower resolution standard RGB pixels. The C300 I doesn't even do any interpolation of the Bayer data. It takes R and B as is, straight from the Bayer array (no processing at all), and averages the G from two G Bayer pixels. While the RAW files have metadata in them, such as white balance and additional general color information, they are still RGB pixels, just flattened into a Bayer array. In other words, RAW Bayer is the same as uncompressed RGB except the pixels are arranged in a single planar Bayer-pattern array. Higher quality algorithms can extract more resolution and fewer artifacts from the Bayer data, but that's about resolution, detail, and artifact reduction. There is no camera- or sensor-specific color science code needed to deBayer RAW (at most a color space matrix, though that's not related to the Bayer process). See the source code to AMaZE: https://github.com/darktable-org/darktable/blob/master/src/iop/demosaic.c
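
Here's roughly what that no-interpolation path looks like in C++ (a sketch of the idea as described above, not Canon's actual implementation; output is half resolution in each dimension):

#include <vector>

struct RGB { float r, g, b; };

// Collapse each 2x2 RGGB tile into one RGB pixel: R and B taken as-is,
// the two G photosites averaged. Assumes even width and height.
std::vector<RGB> debayerBin2x2(const std::vector<float>& raw,
                               int width, int height) {
    std::vector<RGB> out((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            const float* t = &raw[y * width + x];     // top-left of the tile
            RGB& p = out[(y / 2) * (width / 2) + x / 2];
            p.r = t[0];                               // R site, untouched
            p.g = 0.5f * (t[1] + t[width]);           // average the two G sites
            p.b = t[width + 1];                       // B site, untouched
        }
    }
    return out;
}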


Correction to the prior post: RAW files can contain "looks" as well as 3D LUTs, which are used to create the final image. Thus, if the look is not fully encoded (and hidden) in the Bayer RGB pixels, it can be encoded in 3D LUTs and other color data in the RAW file.

A RAW file (DNG or from the manufacturer) has all the information needed to create RGB pixels for editing. So the mojo could be encoded completely in the Bayer RGB pixels, or in the combination of Bayer pixels, 3D LUTs, and additional color data in the RAW file. There is a provision for secret private data; however, that could only be used by the manufacturer's own RAW development software (not by Resolve, PP, FCPX, ACR, etc.).

More info here: https://wwwimages.adobe.com/content/dam/Adobe/en/products/photoshop/pdfs/dng_spec_1.4.0.0.pdf
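
For illustration, applying a 3D LUT like that is just trilinear interpolation into an N x N x N lattice. This C++ sketch is generic (the DNG spec's look tables are actually hue/saturation based, so treat this as the general idea rather than the spec's exact math):

#include <algorithm>
#include <vector>

struct RGB { float r, g, b; };

// lut: N*N*N entries, index = (ri*N + gi)*N + bi; components in [0,1].
RGB applyLut3D(const std::vector<RGB>& lut, int N, RGB in) {
    auto clamp01 = [](float v) { return std::min(std::max(v, 0.0f), 1.0f); };
    float f[3] = { clamp01(in.r) * (N - 1),
                   clamp01(in.g) * (N - 1),
                   clamp01(in.b) * (N - 1) };
    int lo[3], hi[3];
    float w[3];
    for (int c = 0; c < 3; ++c) {
        lo[c] = static_cast<int>(f[c]);              // lattice cell below
        hi[c] = std::min(lo[c] + 1, N - 1);          // lattice cell above
        w[c]  = f[c] - lo[c];                        // fractional position
    }
    RGB out{0.0f, 0.0f, 0.0f};
    for (int dr = 0; dr < 2; ++dr)                   // blend the 8 corners
        for (int dg = 0; dg < 2; ++dg)
            for (int db = 0; db < 2; ++db) {
                float wt = (dr ? w[0] : 1 - w[0]) *
                           (dg ? w[1] : 1 - w[1]) *
                           (db ? w[2] : 1 - w[2]);
                const RGB& s = lut[((dr ? hi[0] : lo[0]) * N +
                                    (dg ? hi[1] : lo[1])) * N +
                                    (db ? hi[2] : lo[2])];
                out.r += wt * s.r;
                out.g += wt * s.g;
                out.b += wt * s.b;
            }
    return out;
}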

On 7/19/2017 at 11:39 PM, HockeyFan12 said:

Red is red, green is green, and blue is blue... but only before they hit the sensor.

[...]

Film stock is different because the materials and assembly are physically different. That is the difference between an analog medium and a digital medium.

The filters used in standard Bayer arrays use basically the same pigments, so they will all be pretty similar. If there were a major difference from one manufacturer to another, you would also expect to see differences between individual cameras, as well as variation across a single sensor.

ADC conversion of data is irrelevant, since it is still just detecting data across a range, irrespective of what that range is. 8 bits are 8 bits; Canon 8 bits are no different from Sony 8 bits. All manufacturers will be collecting the same data range, irrespective of the actual analog luminance involved.

The only time you would see a significant difference in the data available for deconvolution is if the sensor uses a different color scheme (such as CYGM or RGBE) or a different arrangement (such as those used in some Fuji sensors). Sensors like that might generate different color since the elements used for deconvolution are different.

One more thing: the only way to make color performance significantly different would be to reduce the level of filtering by the filters in order to improve ISO performance. However, if that were the case, you would expect to see those differences between sensor models; in other words, there would be no such thing as "Canon color," since different cameras would generate different color.

 

