Jay Turberville

Reputation Activity

  1. Like
    Jay Turberville got a reaction from Michael1 in Panasonic GH4 Review   
    That might be logical if you were going to display the image at the same final per-pixel magnification and view it from the same distance - in other words, if you were going to display it at twice the width and height of the 1080p image. But we know that's not how videos/movies are typically viewed. So no, it really isn't logical. The practical reality is that an image with a higher pixel count can tolerate more compression, because each pixel, and each resulting noise or compression artifact, is visually smaller.
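    To put rough numbers on the two viewing scenarios (a quick Python sketch; the only inputs are the frame sizes):

    ```python
    # Naive "scale bitrate with pixel count" logic vs. what actually happens
    # when both images are shown on the same screen at the same distance.
    w_hd, h_hd = 1920, 1080
    w_4k, h_4k = 3840, 2160

    # 4K has 4x the pixels, so the naive logic demands 4x the bitrate.
    pixel_ratio = (w_4k * h_4k) / (w_hd * h_hd)
    print(pixel_ratio)  # 4.0

    # But at the same display size, each 4K pixel is half as wide and half
    # as tall, so a per-pixel artifact covers 1/4 the screen area it would
    # at 1080p - which is why the extra pixels tolerate more compression.
    artifact_area_ratio = (w_hd / w_4k) * (h_hd / h_4k)
    print(artifact_area_ratio)  # 0.25
    ```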
     
    If we want to get into pixel peeping, we need to consider the question of the information density coming from the sensor. When shooting 4K on the GH4, we get one image pixel for each sensor pixel. But the GH4, of course, uses a CFA sensor, and in the real world of CFA image processing, such images are not 1:1 pixel sharp. This is due to the nature of the Color Filter Array, as well as the low-pass filter in front of the sensor that reduces color moire. A good rule of thumb is that the luminance resolution is generally about 80% of what the pixel count implies. The color resolution is even less.
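    Applying that rule of thumb to the GH4's 4K frame (a sketch; the 80% figure is just the rule of thumb above, nothing more precise):

    ```python
    # Effective luminance resolution of a 3840x2160 CFA readout under the
    # "~80% of linear resolution" rule of thumb.
    w, h = 3840, 2160
    nominal_mp = w * h / 1e6                     # ~8.3 MP nominal
    effective_mp = (0.8 * w) * (0.8 * h) / 1e6   # ~5.3 MP of luminance detail
    print(f"{nominal_mp:.1f} MP nominal -> ~{effective_mp:.1f} MP effective")
    ```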
     
    Years ago I demonstrated online, via a blind test, that an 8MP image from a Four Thirds camera (an E-500, I think) only has about 5MP of luminance info. Dpreview forum participants could not reliably distinguish between the out-of-camera 8MP image and the same image downsampled to 5MP and then upsampled back to 8MP. They got it right about 50% of the time.
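    For anyone who wants to repeat that test, the round trip is easy to reproduce (a minimal sketch using Pillow; the file name and ~8 MP source size are placeholders):

    ```python
    # Downsample an ~8 MP out-of-camera image to ~5 MP, upsample it back,
    # and save the result for a side-by-side comparison with the original.
    from PIL import Image

    img = Image.open("e500.jpg")            # e.g. 3264x2448 (~8 MP)
    w, h = img.size
    scale = (5 / 8) ** 0.5                  # linear factor for 8 MP -> 5 MP
    small = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    roundtrip = small.resize((w, h), Image.LANCZOS)
    roundtrip.save("e500_roundtrip.jpg")
    ```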
     
    So what's the point? The point is that before you can even (mis)apply your logic about scaling bandwidth with image resolution, you must first be sure that the two images are similarly information dense. What is the real information density of the A7S image before it is encoded to 1080p video? I think of CFA images as being like cotton candy - not information dense at all. If the luminance resolution is about 80% of what the pixel count implies, only about 0.8 x 0.8 = 64% of the pixels carry independent information, so about 35% of the storage space used is not necessary from an information storage standpoint. (Though it may be useful from a simplicity-of-engineering and marketing standpoint.)
     
    And even with this, there's surely plenty I'm not considering. For instance, if the A7S uses line skipping (I have no idea what it really uses), that will introduce aliasing artifacts. How do those aliasing artifacts affect the CODEC's ability to compress efficiently? The bottom line is that all too often camera users apply simplistic math to photographic questions that are actually more complex. There are often subtle issues they fail to consider (like viewing distance, viewing angle, how a moving image is perceived differently than a still image, and more).
     
    Personally, as a guy who sometimes leans toward the school of thought that "if a little bit of something is good, then too much is probably just right," I kinda wish the GH4 did offer 4K recording at 200Mbps. Why not, given that the camera can already record at 200Mbps? But the reality is that I can't really see much in the way of CODEC artifacts in the 4K footage I've shot with the GH4 so far. So 100Mbps is probably just fine. 50Mbps on the A7S is probably pretty good as well.
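    For what it's worth, here is the bits-per-pixel math on those two settings (Python; the 30 fps frame rate is just an assumption for the comparison):

    ```python
    # Average bits available per pixel per frame for each codec setting.
    def bits_per_pixel(bitrate_bps, width, height, fps):
        return bitrate_bps / (width * height * fps)

    gh4_4k = bits_per_pixel(100e6, 3840, 2160, 30)  # ~0.40 bpp
    a7s_hd = bits_per_pixel(50e6, 1920, 1080, 30)   # ~0.80 bpp
    print(f"GH4 4K @ 100Mbps: {gh4_4k:.2f} bpp")
    print(f"A7S 1080p @ 50Mbps: {a7s_hd:.2f} bpp")
    ```

    So the GH4's 4K codec actually spends about half the bits per pixel that the A7S's 1080p codec does, which fits the point above about visually smaller pixels tolerating more compression.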