Posts posted by BenjaminJ

  1. In regards to the FZ1000, it may have a Leica-named lens, and it might even be Leica-designed, but at that price point there's not a snowball's chance in hell it's Leica-manufactured. The FZ1000 also has smaller pixels and, yet again, another inferior codec. It might get "close" to the XC10 if you have a very loose definition of the word close.

    Of course not, but the same is true for similar (fixed) Zeiss lenses on Sony cameras, Schneider lenses on Samsung, etc. It's about their involvement in the optical design (which can be significant).

  2. I think it doesn't have so much to do with the color space as with tone mapping or camera profiles. The same RAW file will give different colors in different RAW development applications for this reason: they use different camera profiles.
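
    To make that concrete, here's a minimal sketch (Python/NumPy, with invented numbers): two camera profiles, reduced for simplicity to 3x3 color matrices, applied to the same demosaiced sensor values give different RGB renderings. Real profiles also include tone curves, but the principle is the same.

    ```python
    import numpy as np

    # The same demosaiced sensor values (camera-native RGB, hypothetical numbers)
    sensor_rgb = np.array([0.42, 0.35, 0.18])

    # Two hypothetical camera profiles, reduced to 3x3 color matrices.
    # Each RAW application ships its own characterization of the sensor.
    profile_app_a = np.array([[ 1.60, -0.45, -0.15],
                              [-0.30,  1.45, -0.15],
                              [ 0.05, -0.50,  1.45]])
    profile_app_b = np.array([[ 1.75, -0.60, -0.15],
                              [-0.20,  1.30, -0.10],
                              [ 0.00, -0.40,  1.40]])

    # Same RAW data, two different renderings
    print(profile_app_a @ sensor_rgb)  # colors as application A would render them
    print(profile_app_b @ sensor_rgb)  # colors as application B would render them
    ```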

     

    Regarding the worse sharpness of image B: it's caused by the lateral CA of the lens. Different colors get slightly different magnification, which reduces sharpness, but this can be corrected quite effectively in software by scaling the individual color channels. The NX1 does this in-camera very effectively, as you can see in image A.
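
    A rough sketch of that channel-scaling idea, assuming SciPy is available; the scale factors here are made up, and in practice they would come from a lens profile or be estimated from the image:

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def scale_about_center(plane, s):
        """Resample a 2-D color plane, magnifying by factor s about the image center."""
        h, w = plane.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        # Each output pixel samples the input at the inverse-scaled position
        src_y = cy + (yy - cy) / s
        src_x = cx + (xx - cx) / s
        return map_coordinates(plane, [src_y, src_x], order=1, mode="nearest")

    def correct_lateral_ca(img, r_scale=1.001, b_scale=0.999):
        """Realign the R and B planes against G (img: HxWx3 array).

        The scale factors are hypothetical; real ones depend on the lens.
        """
        out = img.astype(float).copy()
        out[..., 0] = scale_about_center(out[..., 0], r_scale)  # red magnified slightly
        out[..., 2] = scale_about_center(out[..., 2], b_scale)  # blue shrunk slightly
        return out
    ```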

     

    First, a disclaimer: my monitor is crummy. At first I thought "A" was the video because there seemed to be less color detail. But when I pixel-peeped, it was clear that "A" was MUCH sharper. Yet "B" seems to have more detailed color information. It reminds me of the difference between sRGB and Adobe RGB. I don't understand color space that well, but when I export an image from Lightroom as sRGB and then as Adobe RGB, this is the type of difference I see, where the sRGB has more saturated colors but obviously less color detail. Now I think it is a trick question. I'm supposing Andrew deliberately blew focus on "B" to confuse us. So I'm sticking with my first guess: "A" is video and "B" is from RAW but out of focus.

  3. Respectfully, let me provide a correction: Sigma's Foveon X3 sensors do not use stacked color filters; they measure color by penetration depth in the silicon (red penetrates deepest, blue least deep, and green in between). The only advantage this type of sensor has is that the spatial color resolution for red and blue is twice as high as with a Bayer sensor, and there is no color aliasing or colored moiré (though there is still some luminance aliasing/moiré). The downside is that the colors have to be calculated with heavy mathematics, and this is error-prone, leading to more metameric failures (failures of the sensor to differentiate between similar colors, something that cannot be fixed in post-processing). I have no personal experience with Sigma cameras but have heard complaints that the sensor output is unpredictable, especially under mixed light sources.
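
    A toy sketch of why that math is error-prone (all numbers invented for illustration): the three layers overlap strongly in spectral sensitivity, so recovering RGB means inverting a poorly conditioned matrix, which amplifies small amounts of sensor noise into large color errors.

    ```python
    import numpy as np

    # Hypothetical response of the three depth layers to pure R, G, B light.
    # The layers overlap heavily: the top ("blue") layer also responds to
    # green and red, which is the crux of the problem.
    layer_response = np.array([[0.9, 0.7, 0.4],   # top layer
                               [0.5, 0.8, 0.6],   # middle layer
                               [0.2, 0.5, 0.9]])  # bottom layer

    recover_rgb = np.linalg.inv(layer_response)
    print("condition number:", np.linalg.cond(layer_response))

    # A small amount of noise on the layer readings...
    rng = np.random.default_rng(0)
    true_rgb = np.array([0.5, 0.5, 0.5])
    noisy_layers = layer_response @ true_rgb + rng.normal(0, 0.01, 3)

    # ...turns into a much larger error in the recovered color.
    print("true:     ", true_rgb)
    print("recovered:", recover_rgb @ noisy_layers)
    ```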

     

    Also, if you think Leica can really make the sensor of the M Monochrom for $50, you're out of your mind. :D They only removed the color filter array, which isn't exactly the most expensive part. The very limited production runs obviously impact the price (and Leica is a boutique brand anyway, so lowering prices would hurt their image).

     

     

    Regarding the "electronic color filtering" of this Sony sensor: I would presume it's a solid state technology and not some set of physical filters moving across the sensor.

     

    I use them. I hate everything about them except the final image. If I have good light, and time, and need medium-format-quality stills in my pocket, they are the ONLY game in town. The same goes for your BMPCC if you want 1080 RAW video in your pocket.

     

    My gut feeling is that this new Sony sensor is more about semantics than any true pixel-level RGB sampling (like Foveon).

     

    The problem is something along the lines of the Heisenberg uncertainty principle. In our world, you can either know a light beam's color or its intensity, but not both at any given instant. All sensors are intensity-only. Whether Bayer or Foveon, filters are put in front of the sensor to estimate color. If you think there is NO problem with this, then why does Leica have a monochrome (no color filter) camera that sells for $6,000? (It probably cost them $50 to make it, but no matter ;) ). If you're a B/W purist, those filters degrade your black-and-white image.

     

    So there are only two ways around this problem (unless Sony has discovered an electrically conductive material that can read color): you can stack color-filtered sensors on top of each other (like Foveon) or put them next to each other (like Bayer). With Foveon, you get true color pixels with little color distortion (in strong light); with Bayer, you get high-sensitivity color pixels, but when you combine them horizontally you get aliasing/moiré problems.

     

    Theoretically, you could take a grid of RGB color filters and vibrate it across the sensor so that the sensor could take three readings per color. So if you had a global shutter that ran at 72 frames a second, it could take the red in 1/3rd of a 24th of a second, then the blue, then the green. Perhaps they use a seriously precise stepper motor to do this.
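
    (A toy simulation of that hypothetical scheme, with everything invented for illustration: capturing R, G, and B fields 1/72 s apart and merging them into one 24 fps frame produces colored fringes wherever the subject moved between fields, which is exactly the motion problem noted further down.)

    ```python
    import numpy as np

    def render_scene(t):
        """Hypothetical scene: a gray patch drifting right over time t (seconds)."""
        img = np.zeros((4, 8, 3))
        x = int(t * 72) % 6            # the patch moves between field captures
        img[:, x:x + 2, :] = 0.8
        return img

    def capture_frame(t0):
        """Merge three sequential single-color fields into one 24 fps frame."""
        fields = []
        for i in range(3):                        # R, G, B in sequence
            scene = render_scene(t0 + i / 72.0)   # each field 1/72 s apart
            fields.append(scene[..., i])
        return np.stack(fields, axis=-1)

    frame = capture_frame(0.0)
    # Where the patch moved between fields, the channels disagree:
    # colored fringes appear on the moving subject.
    print(np.unique(frame.reshape(-1, 3), axis=0))
    ```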

     

    If the sensels are rectangular, maybe they use that to capture all three colors at any instant, but the vibration changes the pixel-center focus color and the pixel just averages them all together.

     

    It's all very interesting stuff, to me at least, but my guess is that though it may make for a good video application, it won't be good enough for still photography (at least professional or enthusiast use). The reason is that Foveon doesn't work because of PHYSICS; it isn't a failure of Sigma. They simply can't find a substance that will take the color value of light and still pass enough light to the sensel below it, and then to the next one below. If Sony can change a color filter over the sensel, it could eliminate color moiré problems on a still subject, but if the subject moves, the color may change between filter changes (in that 1/3rd of a 24th of a second). Color problems are back!

     

    I believe understanding this stuff makes one a better photographer or filmmaker, even if it has no immediate practical use on set. For example, if you noticed a lot of moiré in the background of your shot, you might go out and buy all kinds of blur filters. Every time you got rid of the moiré, you might find the image not sharp enough. If you knew this stuff about sensors, however, you'd open the aperture up a bit and increase the blur in the background while keeping your subject in focus.
