wolf33d

Color science


35 minutes ago, TheRenaissanceMan said:

It was certainly mine. Giant difference in skin tones, and richer, more accurate color overall. What was your experience?

Looks good if you have contrasty (sunny day) lighting but it can't handle anything on an overcast day. The main problem is the shadows, which tend to go brown and muddy; ACR seems to retain clean colour in the shadows. This is using Sony cameras. I suspect PhaseOne didn't spend a lot of time profiling these cameras for some reason? The only advantage I see in C1 is the sophisticated CA correction. For me, ACR just gives a more consistent result across a range of different lighting situations.

On 11/7/2018 at 10:44 PM, Inazuma said:

Your raw processor can make a huge difference to colour, though. For example, Panasonic and Sony RAWs look far better in DxO PhotoLab than in Lightroom.

I really like DxO (using version 9, which they released free a while ago). I don't see any big difference between Sony processed RAW and out-of-camera JPEGs, though.

Plenty of FF shooters shoot JPEGs for various reasons (lots of newspaper and sports shooters still do, I would think). It is a lot easier if I shoot a concert/gig and have hundreds if not thousands of photos to go through.

2 hours ago, noone said:

I really like DxO (using version 9, which they released free a while ago). I don't see any big difference between Sony processed RAW and out-of-camera JPEGs, though.

Plenty of FF shooters shoot JPEGs for various reasons (lots of newspaper and sports shooters still do, I would think). It is a lot easier if I shoot a concert/gig and have hundreds if not thousands of photos to go through.

Version 9? Hell, you can get version 11 for free. It is a huge difference, especially the better DxO Smart Lighting.

22 hours ago, kye said:

Manufacturers design the Bayer filter for their cameras, adjusting the mix and strength of various tints in order to choose what parts of the visible and invisible spectrum hit the photosites in the sensor.

This is part of colour science too.  Did you even watch the video?

OK, maybe this statement (RAW has no color) is not that accurate. The point is that RAW has to be developed before you have the color of every pixel. You have a value for each pixel from the Bayer sensor, but it represents only one of the 3 basic colors - Red, Green or Blue - with varying intensity. Before debayering/development you don't have "real" colors - full RGB values for each pixel - only one of those values: R, G or B. The other two are interpolated, "made" by the software. So before developing the image you can't measure anything related to color.
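That interpolation step can be sketched in a few lines. This is a toy bilinear demosaic - not what any real raw converter ships, just an illustration assuming an RGGB Bayer layout:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Toy bilinear demosaic for an RGGB Bayer mosaic: each photosite
    holds one channel; the two missing channels at every pixel are
    interpolated from neighbouring photosites of that channel."""
    h, w = raw.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1            # R photosites
    masks[1, 0::2, 1::2] = 1            # G photosites (even rows)
    masks[1, 1::2, 0::2] = 1            # G photosites (odd rows)
    masks[2, 1::2, 1::2] = 1            # B photosites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    rgb = np.empty((h, w, 3))
    for c in range(3):
        # Weighted average of the known samples of this channel only
        num = convolve(raw * masks[c], kernel, mode="mirror")
        den = convolve(masks[c], kernel, mode="mirror")
        rgb[..., c] = num / den
    return rgb
```

Two thirds of the output values here are literally computed by the convolution rather than recorded - which is the "2/3 of the color information is made by the software" point in numbers.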

This process has 3 variables (actually more): 1 - the sensor and the other electronics around it; let's call it hardware. 2 - the software that does the debayering/interpolation. 3 - the human deciding which parameters to use for the development. For color there are many parameters that can be changed in the software. So how are you going to measure the developed image for color accuracy (because RAW itself can't be measured), when it depends on so many parameters, most of which are not related to the camera?

Yes, I watched the video and totally agree with Tony that for RAW there is no point in measuring the color accuracy of the camera, or the camera's color science, as color depends on too many variables and parameters outside of the camera. You can literally get any color you want in the program.

Now Mattias and many other people argue that every camera (the sensor and electronics in the camera) has a specific signature that affects the RAW image and, as a result, the final/developed image. This is true. In that sense not all RAW is equal. Yes, indeed, it's one of the variables in the process and for sure has an impact on the final image. Dynamic range of the sensor, for example, definitely affects the final image. But for colors specifically, my argument is that all those differences in the sensor are easily obliterated by the software. Remember, 2/3 of the color information is made by the software. It is the software (algorithm), and the human behind it, who has the final say on what color a pixel and the whole picture will have. So for me, when people say different sensors/hardware give them differences in colors, they mostly mean: different sensors/cameras give me different colors in MY workflow. :) You can perfectly color match photos from different cameras/sensors. Same for video.

So we agree to disagree here :)
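The "you can color match photos from different cameras" claim above can be demonstrated with a least-squares 3x3 matrix fit, which is also the basic tool raw converters use for their own color transforms. The chart readings below are made-up numbers purely for illustration:

```python
import numpy as np

# Hypothetical linear RGB readings of the same four chart patches
# from two different cameras (rows = patches, columns = R, G, B).
cam_a = np.array([[0.80, 0.20, 0.10],
                  [0.30, 0.60, 0.20],
                  [0.10, 0.20, 0.70],
                  [0.50, 0.50, 0.50]])
cam_b = np.array([[0.72, 0.26, 0.13],
                  [0.27, 0.55, 0.25],
                  [0.13, 0.24, 0.63],
                  [0.47, 0.53, 0.49]])

# Solve cam_b @ M ~= cam_a in the least-squares sense:
# M is the 3x3 matrix that maps camera B's colours toward camera A's.
M, *_ = np.linalg.lstsq(cam_b, cam_a, rcond=None)
matched = cam_b @ M
```

With more patches, and a per-channel tone curve fitted on top of the matrix, the match gets closer still - which is why the sensor differences are so easily overridden in software.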

52 minutes ago, stephen said:

OK maybe this statement is not that accurate. The point is that RAW has to be developed before you have the color of every pixel. [...]

The strength of the RGB CFAs over the photosites will determine how much colour info can be recorded vs how much needs to be interpolated.

41 minutes ago, Shirozina said:

The strength of the RGB CFAs over the photosites will determine how much colour info can be recorded vs how much needs to be interpolated.

IMHO the strength of the RGB CFAs over the photosites is related more to exposure than to color. A stronger filter = fewer photons reaching the photosites. But still, one pixel (photosite) = one basic color (R, G or B). The other two still need to be generated/interpolated in software in order for the pixel to have all RGB values.

There is no need to theorize in that much detail. Everyone can do a simple test. Take a photo of an object with one single color (in RAW) - let's say a blue ball. Import the picture into a photo-editing program. The camera doesn't matter. If the color of the object was BLUE, you can make it GREEN or RED or any other color. That's why I said RAW has no color.
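The blue-ball test is easy to reproduce numerically with nothing but the standard library: rotate the hue of a "blue" pixel by 120 degrees and it comes out green, with saturation and brightness untouched. The pixel value is an invented example:

```python
import colorsys

# A pixel sampled from a hypothetical blue ball (RGB in 0..1)
r, g, b = 0.10, 0.20, 0.80

h, s, v = colorsys.rgb_to_hsv(r, g, b)
h = (h - 1/3) % 1.0          # rotate hue 120 degrees: blue -> green
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
# Only the hue moved; saturation s and value v are exactly preserved,
# so the "ball" is now green at the same brightness.
```

This is all any hue-rotation tool in a photo editor is doing under the hood, which is why the camera's original colour places no hard limit on the developed result.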

38 minutes ago, stephen said:

IMHO the strength of the RGB CFAs over the photosites is related more to exposure than to color. A stronger filter = fewer photons reaching the photosites. But still, one pixel (photosite) = one basic color (R, G or B). The other two still need to be generated/interpolated in software in order for the pixel to have all RGB values.

 

Don't agree - if the filter density is lower, its intensity is also lower, and thus so is its ability to measure actual colour, since hue is determined by the differences in signal between the R, G and B channels. The problem manufacturers face is that higher-density CFAs reduce exposure and thus the low-light ability of the sensor, so they trade these off against each other. In effect they give you just enough colour information to keep the average user happy...

2 hours ago, Shirozina said:

Don't agree - if the filter density is lower, its intensity is also lower, and thus so is its ability to measure actual colour, since hue is determined by the differences in signal between the R, G and B channels. The problem manufacturers face is that higher-density CFAs reduce exposure and thus the low-light ability of the sensor, so they trade these off against each other. In effect they give you just enough colour information to keep the average user happy...

What you are discussing here is called quantum efficiency (QE). Current APS-C/FF sensors can easily achieve 50-60% QE, so around 1 photographic stop of exposure loss - similar to how our retina responds to light.
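The stop-loss figure follows directly from the QE: stops lost = log2(1 / QE). A quick check of the numbers above:

```python
import math

def stops_lost(qe):
    """Exposure loss in photographic stops for a given quantum efficiency
    (QE as a fraction, e.g. 0.5 for 50%)."""
    return math.log2(1 / qe)
```

50% QE is exactly one stop; 60% QE works out to about 0.74 of a stop.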


For video colour from the camera, the Sony live-view grading app was/is brilliant. It adds more choices in settings to the cameras than most people could properly try in several years.

Pity it isn't usable with the third-generation A7 cameras (though it is with the A6500 and A7S II).

39 minutes ago, noone said:

For video colour from the camera, the Sony live-view grading app was/is brilliant. It adds more choices in settings to the cameras than most people could properly try in several years.

Pity it isn't usable with the third-generation A7 cameras (though it is with the A6500 and A7S II).

Yes, I really wish it were available on the current generation of bodies. A really awesome, pro-level idea, much like what the Varicams offer with wireless CDL creation.

I think we need to differentiate here between RAW stills color and encoded 8-bit video color, because there is a profound gap in workflow, results, and flexibility between the two. We also need to clarify how we are grading our files, because speed and quality differ wildly between the various methods. For example, using the DaVinci Resolve color-managed workflow gives you a corrected starting point with virtually no work, so all you have to do from there is tweak and do your creative grading. If you're just grading S-Log/S-Gamut files from scratch with levels and curves, your experience will change drastically. So for the sake of clear communication, let's be very specific when describing how we deal with our footage.

Another angle to think about: ease of results matters. The Alexa is popular not only for its reliability and image quality, but for its dead-simple workflow. In many cases, corporate and commercial work can make do with nothing more than Arri's Rec.709 LUT, with maybe a small tweak or two. THAT IS HUGE! Saving time, minimizing complexity and miscommunication between set and post (many DPs do not get to grade their own footage!), jumping straight into edit with a robust, easy-to-cut codec... these are all enormous time and money savers. Compare that to RED: while the image quality is outstanding, you have to deal with large, difficult files requiring in-depth knowledge of their various sensors, color spaces, gammas, etc. So when saying "you can always grade to match," keep in mind that while you often can, it takes time. It takes money. Expertise. More communication with whoever's handling your post. That is why out-of-camera color still matters, despite all the powerful color tools we have now.

On 11/9/2018 at 12:16 PM, stephen said:

OK maybe this statement (RAW has no color) is not that accurate. The point is that RAW has to be developed before you have the color of every pixel. [...]

Good post.

I think we're mostly agreeing, but there are aspects of what you said that I think are technically correct but maybe not in practice.

@TheRenaissanceMan touched on two of the biggest issues - the limitations of 8-bit video and the ease of use.

It is technically true that you can use software (like Resolve or Photoshop) to convert an image from basically any set of colours into any other set of colours, but with 8-bit files you may find that the information needed to do a seamless job of it is missing. Worse still, the closer a match you want, the more manipulations you must do, and the more complicated the processing becomes.
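The missing-information point is easy to quantify: quantize a dark gradient to 8 and 10 bits and count how many distinct levels survive. After a shadow lift in the grade, the 8-bit file has far fewer values to spread across the range, which shows up as banding. A minimal sketch:

```python
import numpy as np

# A smooth gradient confined to the darkest quarter of the signal range,
# i.e. the shadows you might later lift in a grade.
grad = np.linspace(0.0, 0.25, 4096)

q8 = np.round(grad * 255) / 255      # 8-bit quantization
q10 = np.round(grad * 1023) / 1023   # 10-bit quantization

# How many distinct code values survived in each case?
levels8 = len(np.unique(q8))
levels10 = len(np.unique(q10))
```

Over this quarter of the range, 8-bit keeps 65 distinct values against 257 in 10-bit, so any aggressive shadow lift stretches the 8-bit steps four times further apart.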

In teaching myself to colour correct and grade, I downloaded well-shot, high-quality footage and found that good results were very easy to achieve. But try inter-cutting and colour matching multiple low-quality consumer cameras and you'll realise that in practice it's either incredibly difficult or just not possible.

21 hours ago, Andrew Reid said:

Funny thing is, colour isn't even just a science

Absolutely!

25 minutes ago, kye said:

but with 8-bit files you may find that the information needed to do a seamless job of it is missing.

That's not the bit depth but the chroma subsampling in Y'CbCr codecs. Even in 10-bit, the colour information is very compromised compared to the luma information.
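That point in numbers: in the standard J:a:b subsampling notation, each chroma channel keeps (a + b) / (2J) of the full-resolution samples:

```python
def chroma_fraction(j, a, b):
    """Fraction of full-resolution samples kept per chroma channel by
    J:a:b chroma subsampling, evaluated over a 2-row, J-pixel-wide block
    (a = chroma samples in the first row, b = in the second row)."""
    return (a + b) / (2 * j)
```

So 4:2:2 keeps half the chroma samples and 4:2:0 only a quarter, regardless of bit depth - which is exactly why moving to 10-bit alone doesn't fix chroma resolution.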

