Everything posted by maxotics

  1. First day out I handed the camera to a friend and they took this, at f/5.6. Here's a house I use for test shots. The lens nailed it.
  2. I've been going through an architectural photography phase and bought a Sony 10-18mm for the a6000. It will also work, with some cropping, on the a7. Unfortunately, I haven't had much time. I shot some test footage, which is completely ungraded and boring, but if you're interested in the field of view of the 10-18 on the a6000 it should give you an idea. Photographically, the 10-18 on the a6000 is a monster wide-angle solution. Great sharpness, contrast and color. I did a little comparison with the GM1. The GM1 continues to amaze me with its clean low-ISO footage. The a6000 is a superb camera, but I don't think they'll ever get an APS-C sensor as sharp as MFT in 1080. I think it's impossible to completely fix the aliasing/moire issue when the pixels are more spread out on the sensor than they are on MFT. What Sony seems to have improved is the chromatic aberrations, which are certainly the most annoying aspect of that problem. In short, I'm not sure 1080 will get much better from an APS-C camera. However, I can get a shallower DOF with the a6000 that I can't get with the GM1. It's a tradeoff. I have a Sony 35mm 2.8 which I hope to shoot some test footage with soon.
  3. Interesting. It looks like you can get an FS100 used for the same price as an a7S. Thanks!
  4. I would think another significant problem would be the decreased efficiency of the micro-lenses in front of each pixel. The larger the micro-lens, the more efficient? Or, the smaller the micro-lens, the less efficient, especially in low light?
  5. Hi horshack, I did read that thread but had difficulty understanding it. It seems misleading when you say "cumulative read noise". PLEASE correct me. I would think that on a set of 4 pixels (pre de-bayering), you would get more sensitivity at high ISO, and less noise, than on a similar block of 16 pixels fit into the same space. I can see how you could say the cumulative read noise is less on the 4 pixels than on the 16, but isn't that obscuring the prime difference between different-size pixels? I can understand that, experimentally, it makes sense to look at cumulative read noise, but in explaining the difference to filmmakers, isn't it better that they think in pixels? On the base-ISO issue, it seems that comparing 4 pixels against 16, or 1 against 4, lets one visualize the difference between, say, a dish microphone and an omnidirectional one.
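      To make the 4-vs-16 intuition concrete, here's a toy Python sketch. The photon count and read-noise figure are invented, and it assumes the same per-pixel read noise for both pixel sizes (roughly what that thread argues):

         import math

         photons = 100       # photons landing on the patch during the exposure (made up)
         sigma_read = 3.0    # read noise per pixel, in electrons (assumed equal for both sizes)

         # One "big" pixel covering the whole patch: all the signal, one dose of read noise.
         snr_big = photons / math.sqrt(photons + sigma_read ** 2)

         # Four small pixels tiling the same patch, summed after readout: same total
         # signal, but read noise is paid four times and adds in quadrature.
         snr_small = photons / math.sqrt(photons + 4 * sigma_read ** 2)

         print(f"SNR, 1 big pixel:           {snr_big:.2f}")    # ~9.58
         print(f"SNR, 4 small pixels summed: {snr_small:.2f}")  # ~8.57

      So the cumulative-read-noise view and the per-pixel view agree in the math; the question is just which is easier for filmmakers to picture.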
  6. Hi Ebraham, can you be specific here? This is quite a claim. Are you sure those sensors aren't actually higher-resolution sensors where the maker is only using 2 megapixels out of, say, 5 megapixels of on-sensor pixel density?
  7. The tests done with the a7S seem pretty indisputable to me: the camera does, in FACT, have the best high-ISO performance of any camera. Whether you need it in real-world situations is a subjective question. The question most others are interested in is the tradeoff the camera makes between low noise at high ISO and less DR at base ISO. All sensors have a base ISO at which they perform optimally. Everything after that is degraded. Apparently, and I don't fully understand this myself, higher sensel-count cameras improve DR at base ISO. In other words, the a7S will be better in low light than the a7, but the a7 will have better DR at base ISO (say 100). It is not the low megapixel count that makes the a7S more sensitive to light, it is the LARGER sensel/pixel size on the sensor. Each sensel is like a little telescope/radio dish with a colored filter in front of it. As in astronomy, you can collect radiation through lots of little dishes, or one big dish. In picking up faint objects, the bigger the better. In short: larger pixel, less noise, period. There is nothing temporary in this area of physics. I do not see TV stations replacing their satellite dishes with 50 little dishes. Again, it is NOT about the low megapixel count. It is the increased pixel size (more sensitivity/less noise) of the a7S that makes it unique. Here is someone's calculation of approximate pixel pitch in microns (refer to the reservations at the link below about calculating the "true" width and area of an individual pixel): pixel pitch in microns = width of sensor in millimetres / image width in pixels x 1000. A77 = 3.917 (23.5 / 6000 x 1000); A7S = 8.443 (35.8 / 4240 x 1000). The A7S pitch is approximately 116% greater than the A77's. http://www.robsphotography.co.nz/Sony-A77-SonyA7S.html Would you rather have a 4-inch telescope or an 8-inch telescope when peering into the night sky?
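      Here is that pitch formula as a few lines of Python (nothing measured, just the arithmetic from the link):

         # pitch (microns) = sensor width (mm) / image width (pixels) * 1000
         def pixel_pitch_microns(sensor_width_mm, image_width_px):
             return sensor_width_mm / image_width_px * 1000

         a77 = pixel_pitch_microns(23.5, 6000)   # ~3.917
         a7s = pixel_pitch_microns(35.8, 4240)   # ~8.443
         print(f"A77: {a77:.3f} um  A7S: {a7s:.3f} um")
         print(f"A7S pitch is {(a7s / a77 - 1) * 100:.0f}% greater")   # ~116%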
  8. Hi Michael. The pixels are made larger on the a7S to be more sensitive/accurate to light (especially low light). This has little to do with processing speed. If the camera has a problem, it's that it can't save 4K internally. The drawback of larger pixels is less resolution for still photos. There is no drawback for 4K full-frame video--only positives (AFAIK). You should also know that there is space on a sensor between each pixel, space that can't be used to read light and that creates aliasing issues. As for color science: white balance is so difficult to get perfect without expensive equipment that it's really impossible to factor out user error. Even on my own somewhat-calibrated monitors each video looks different. For properly exposed video, I believe all the manufacturers end up with the same values. Even if a camera has a certain "color cast", you should be able to adjust for it. I'm no expert in color grading, but I haven't heard of it being a problem for anyone. Dynamic range or bit depth is another thing.
  9. The more cameras I try out, the more I believe aliasing/moire/sharpness is a function of physical sensor characteristics, not firmware (software). And the more time goes on, the more I believe the manufacturers are up against limits on how hot they can run their sensors/chips without shortening battery life or generating noticeable sensor noise. The new sensor in the a7S is the first consumer-grade full-frame sensor made for video. How cool is that!!! Software development is something I know about. Hiring a few extra programmers to work on software/CODECs costs a fraction of what it costs to manufacture a new type of chip. When you couple that with the physics of shallow DOF on full-frame sensors, the a7S has significant benefits over current Canon and Nikon full-frames. In short, Andrew, this is the camera you have warned Canon and Nikon about. And the Blackmagics for RAW color depth. Nikon and Canon currently have no offerings with a full-frame pixel count optimized for video (fewer/larger pixels are better for video), heat-sink technology (Blackmagic), an EVF (Sony), or razor-sharp MFT 4K (Panasonic). I've been following EOSHD for about a year now. Amazing what has happened in that time! These guys have done everything you've asked, Andrew, but you're like the dad for whom it's never good enough. "You can still do better, Son!" :) (As for 60p being softer: unless it gets double the bit-rate, that's the expected still-frame/motion trade-off.)
  10. I've only had a d600 and have never touched a d800, so I believe you. I came to the d600 from Canon and was pleasantly surprised. However, I find the Canon menus superior too. The more I use the a7 and a6000, the more I agree with you that one day optical viewfinders will be as quaint as the viewfinders on film rangefinders before the advent of the SLR. The ability to set the exposure and see what you'll get, in real time, plus focus zoom, etc., counters Nikon's only real benefit--high dynamic range. The high dynamic range of the d800 assumes that either you'll misjudge the scene, or you'll get it right and need that extra range. In the first instance, I think I'm at least a stop more accurate with the a7 than I was with the d600. So in real-world terms, I feel I'm getting the extra dynamic range I'd get from the Nikon by nailing the exposure through Sony's EVF. As for shots where I need maximum DR, I'd probably use a tripod, and then I can HDR my way to anything I want. All that said, Sony cameras are still consumerish, so I'd probably use a Nikon or Canon if I were a professional. ISO is a really important button for me. I wouldn't want it in a bad place either!
  11. Here are a couple of interesting posts at dpreview about why the a7S has superior shadow detail at high ISO but, as some people have experienced, a reduction in low-ISO DR compared to the a7 or a7R. http://***URL removed***/forums/post/53959496 http://***URL removed***/forums/post/53959763
  12. Sorry for any confusion I caused, I was talking about my experience with the a7. I'm glad to hear there is little aliasing in the a7s (which makes sense given the lower pixel count)... Ooops, I meant a7S :)
  13. The GH4 is saving those pixels internally. Later, you can sic a powerful PC on the task of down-resing the image to 1080 :) The Sony is supposedly binning the pixels internally, but I suspect it isn't downscaling through true binning; it's throwing out data, which leads to aliasing.
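      A quick 1-D toy in Python/numpy of why throwing out data aliases where true binning doesn't (pure speculation on my part about what Sony actually does internally):

         import numpy as np

         x = np.arange(64)
         stripes = (x % 2 == 0).astype(float)          # 1-pixel-wide stripes: 1,0,1,0,...

         thrown_out = stripes[::2]                     # keep every 2nd sample
         binned = stripes.reshape(-1, 2).mean(axis=1)  # true binning: average pairs

         print("throw-out:", thrown_out[:8])  # all 1.0 -- stripes alias to solid white
         print("binned:   ", binned[:8])      # all 0.5 -- the correct average gray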
  14. When it comes to consumer video, all DSLR sensors are "cropped" in some fashion for video. For example, the GH4 sensor is 4608 x 3456, but its highest video resolution is 4096 x 2160. It must therefore either crop from the edges (4608 - 4096 = 512 columns and 3456 - 2160 = 1296 rows), or average in, or throw out, those pixels (512 x 1296 = 663,552). In video, all sensors are cropped in some fashion (at the sides, by lines of pixels, or by blocks of individual pixels). The question is how much. On full-frame and APS-C, vs MFT, more pixels have to be averaged (which creates heat and draws a lot of power, so is seldom done in consumer equipment) or thrown out. This thrown-out image information leads to aliasing/moire problems. The reason the GH4 is such a great camera is that it THROWS OUT fewer pixels than other cameras, meaning less image data falls between the cracks (quite literally!). When you crop from the sides of a sensor you are increasing your effective focal length. Therefore, most camera makers either "bin" the pixels (average them) or throw out pixels (often in lines) to maintain the expected focal length of the lens. Focal reducers work by concentrating the image from a larger lens onto a smaller sensor (or part of the sensor). They degrade the image in the sense that if you took two full-res photographs, one from a Canon 50 on a full-frame and one from a Canon 50 + focal reducer on MFT, the second would not be as optically accurate. In video, because of the lower resolution and other factors, these compromises are not noticed. In short, with a focal reducer, a videographer trades edge-to-edge full-sensor sharpness for more light and increased FOV on a smaller sensor. What Panasonic has done for resolution with the GH4 4K camera (more pixel data), Sony has done for light-gathering power (more accurate color, especially in low light; that is, less noise). Sony increased the size of the individual pixels on the a7S. The larger each sensel on the sensor, the fewer of them there are, hence the 12MP, instead of the usual 24 on the a7. In order for the GH4 to match the a7S in light gathering it would have to increase the size of its sensels, which would increase the size of the sensor. In order for the a7S to match the resolution of the GH4 it would have to save 4K worth of pixels (which it can't do internally, probably because of heat/power requirements). In a perfect world you want both cameras. They have different strengths and weaknesses that cannot be designed away, IMHO.
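      Doing the pixel accounting in Python (my arithmetic on the numbers above; how the GH4 actually reads out is Panasonic's business). Note the 512 x 1296 product only counts a corner block; a 1:1 center crop actually leaves the whole border unused:

         sensor_w, sensor_h = 4608, 3456   # GH4 photo resolution
         video_w, video_h = 4096, 2160     # GH4 C4K video resolution

         print(f"border: {sensor_w - video_w} columns, {sensor_h - video_h} rows")
         print(f"512 x 1296 = {512 * 1296:,}")   # 663,552 (the corner block)
         unused = sensor_w * sensor_h - video_w * video_h
         print(f"sensor pixels a 1:1 crop leaves unused: {unused:,}")   # 7,077,888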
  15. For what it's worth, I believe the idea of "color science" differences between these cameras is overestimated. When cameras create JPGs, which is usually what you see when software shows a thumbnail, the manufacturer can easily balance warmer (like Canon) or cooler (like Nikon). But the RAW files have no real "science" behind them. All the chips are essentially the same. Yes, the camera makers can adjust the "raw" values before saving, but not in any way that prevents you from de-bayering them to get any color you want. DR is slightly different, and Nikon seems to have found a way to re-adjust high and low values from RAW data to preserve more detail than Canon. But again, in the video world, I don't see that it carries down to 8-bit. Video is a bit different, of course, because you're not working with RAW data (which is why I don't pay attention to any of those DxO scores, which really have no bearing on compressed video IMHO; they're only relevant to RAW images). With a significantly larger sensor, the a7S will have superior light-gathering power over the GH4; however, it will be more prone to aliasing, which has a large color-aberration component due to mosaic (bayer) color sensors. In good light, where shallow DOF is not necessary, there is no doubt in my mind that the GH4 is the better camera: 1. 4K down-reses to true 1080, and 2. Panasonic focuses on video compression to do in camera what one might do in post with RAW. In weak light, the a7S is going to have less noise and shallower DOF than the GH4, even with a weak video CODEC. There's a lot of wishful thinking about cameras. You want to make the camera you have do what a camera you don't have does. In a sense, that's the whole point of this site--getting the most out of consumer digital cameras. However, I feel we have to keep things in perspective. If you shoot primarily in low light, the a7S is going to do better for you than the GH4 with all the noise reduction in the world. The real question, to me, is in what situations do the cameras visibly diverge? Comparing the GH4 to the a7S in good light is a bit pointless. To GH4 owners it will show how the GH4 is superior in resolution; a7S owners will grant that resolution is better but will think the shallower DOF worth the trade-off. To each his own. How little light must there be before the GH4 owner grants the a7S has more light-gathering power?
  16. As much as I loved the Nikon d600, it doesn't have an EVF, so you can't focus-zoom in the viewfinder (you'd need to use Live View). When you can put a manual lens on a full-frame, focus-zoom into your subject's eye, and focus so their iris is crystal-sharp, you get a little excited :) The only thing I don't like about EVFs is that they seem to blank out a hair longer than an optical viewfinder. One reviewer pointed out that autofocus worked in almost no light (without the assist light) on the a7S. I never expected to like the silent shutter as much as I do on the GM1. If money were no object, I might get an a7S for that alone! A negative about these cameras, for me, is that I'm moving away from manual lenses. There are times I just need autofocus. The Sony lenses are expensive and difficult to find used.
  17. Congrats! B&H picks the best so it is quite an honor, any economics aside!
  18. jcs, great review! However, like others, I find it very difficult to believe that the full-frame, large-sensel design of the a7S wouldn't show significantly better color saturation than an MFT-sensor camera at anything above ISO 1600. I understand if you prefer higher-ISO images to go gray, so to speak, rather than guess at color, but for those needing to match color I would think the a7S would be the best option, by a mile. You mention applying Neat to the GH4 footage to get it to match the a7S, but the real question is: can you apply Neat to a7S footage to get it to match ISO 400, say? Also, DOF is no small matter, at least to me. You mention it, but it doesn't seem to rank as high as the menu stuff, build quality, etc. The kind of lenses you can put on the a7S is no small thing. But again, great review! As you say, you're still working on it! :)
  19. Doesn't make sense to me; that is, the blur doesn't seem to come from the lens but from post. When she's walking, a doorway far behind her is in focus, which wouldn't make sense if the blur/focus were all optical. Or maybe it's an optical filter for blur? In any case, it should be a shot any of those cameras can do.
  20. Quirky, Bill Maher said something politically incorrect on "Politically Incorrect" and was hounded off the air. You're in good company :)
  21. Andrew would know better than me, but I doubt companies sponsor web people to switch to their cameras--too many legal/liability issues. They might give someone first access to a camera, or pick up the phone when they call, but that's about it. Why Panasonic, or anyone else, wouldn't sponsor Dave:
      o He'd have to sign a contract saying he would never disparage Panasonic in the future
      o He'd have to deliver some sort of metric to keep getting payments
      o He would have to show them copy first (so he doesn't give away secrets/confuse the marketing focus, etc.)
      What makes Dave and Andrew so successful is that you know they'd rather live on the street than follow any corporate party line. Lately, DPreview has gotten into hot water because they came out with an article saying why the d810 was so great, even though there were just tiny improvements. The article suggested that these tiny improvements constituted a reason to upgrade. Even Nikon lovers found that a bit irritating (the improvements might make me pick an 810 over a 5D3, but for me, at least, they wouldn't get me to sell my 800). On the video side, they said it had a new picture profile, which Andrew pointed out was wrong. In any case, the idea that anyone would sell their 800 for an 810 to get a new picture profile is hilarious, to me.
  22. @Aaron, was that a good barf or a bad barf? :) @Sunyata, I believe someone could come up with a rules-based de-bayering algo that could deal with these issues. All the formula-based algos have strengths and weaknesses. If I had another 24 hours in a day I would have tried creating my own algo for EOS-M RAW, or other Canon RAW where there is a lot of line-skipping and this problem is aggravated. In the end, I'd have to yawn, or maybe, like Aaron, barf and go to sleep. In the real world, moire is not noticed by audiences unless someone is wearing some crazy shirt, and even then, no one ever walked out of a theater because of it.
  23. I find it best to put your imagination in the form of a sensor: each pixel of your eye sees either red, green or blue. Keep this grid in mind for a 20x5 sensor (R = red, G = green, B = blue):
      RGRGRGRGRGRGRGRGRGRG
      GBGBGBGBGBGBGBGBGBGB
      RGRGRGRGRGRGRGRGRGRG
      GBGBGBGBGBGBGBGBGBGB
      RGRGRGRGRGRGRGRGRGRG
      If you're out in a field and shoot a landscape with a power line glinting in the sun, how does the sensor "see" it? If the line is perfectly horizontal to the sensor and is two pixels high, then every 2x2 block is de-bayered from the same light: the 4 sensels get full values for red, green and blue, de-bayer them, and each comes up with white light. The power line will look perfect. These are the pixels that "see" the line:
      RGRGRGRGRGRGRGRGRGRG
      GBGBGBGBGBGBGBGBGBGB
      Now let's imagine that the line is only one pixel high. That means you might have full red, green, red, green, and NO light from the line on the green, blue, green, blue row. So what happens when you de-bayer each block of 4 pixels? Naturally, you get a reddish tint, because the power line didn't register on the blue pixels. These are the sensels that register:
      RGRGRGRGRGRGRGRGRGRG
      If the line is on a green/blue row of the sensor, you get bluish (because it's missing red). Keep in mind the field is green, so the sensor isn't going to figure out the missing red or blue if it doesn't register it. These are the sensels that register:
      GBGBGBGBGBGBGBGBGBGB
      Usually, "lines" fall above and below sensor pixels. If you look at any video from these cameras and look at thin lines, you will usually see some form of chromatic distortion because, as jcs says, the camera isn't resolving enough detail (color samples). In general, there are issues with missing color at all stated resolutions for bayer sensors. It blends in as long as sharp edges don't fall between the minimum of 3 pixels (one each of red, green and blue) we need to properly ascertain the color.
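      Here is the one-pixel-high line as a toy numpy simulation, using naive per-block de-bayering only (real demosaicing is smarter, so treat this purely as an illustration of the missing-channel tint):

         import numpy as np

         h, w = 4, 8
         scene = np.zeros((h, w))
         scene[1, :] = 1.0    # white line, 1 pixel high, landing on a G/B row

         # RGGB colour-filter layout, as in the grids above
         cfa = np.empty((h, w), dtype='<U1')
         cfa[0::2, 0::2] = 'R'; cfa[0::2, 1::2] = 'G'
         cfa[1::2, 0::2] = 'G'; cfa[1::2, 1::2] = 'B'

         # Each sensel records the line only through its own filter;
         # reconstruct one RGB value per 2x2 block.
         for by in range(0, h, 2):
             for bx in range(0, w, 2):
                 vals, cols = scene[by:by+2, bx:bx+2], cfa[by:by+2, bx:bx+2]
                 rgb = {c: vals[cols == c].mean() for c in 'RGB'}
                 print(f"block({by},{bx}): R={rgb['R']:.1f} G={rgb['G']:.1f} B={rgb['B']:.1f}")

      The first row of blocks prints R=0.0 G=0.5 B=1.0 -- the bluish cast described above, because the line never hit a red sensel.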
  24. I use iDynamic because I believe it pulls out the shadows a bit (I don't like high contrast). iResolution isn't good. Someone explained why somewhere, but I forget.
  25. An advantage of 4K is that when it is downscaled to 1080 (1920x1080), it has greater color correctness. As Andrgl says, each 4-pixel block of a bayer camera image is really 1 red, 1 blue and 2 green sensor pixels. Through de-bayering, each pixel borrows color data from its neighbors to create 4 RGB values. Put another way, in 1080 the camera has sampled 25% of the image's red channel, 25% of blue and 50% of green (because we are most sensitive to green in luma). In 4K, each 1080 pixel corresponds to a box of 16 sensor pixels. Again, they are 25% red, 25% blue and 50% green. However, when you downsample them you're actually getting a bigger sample of each channel's data per output pixel; that is, you have 4 pixels of red, 4 pixels of blue and 8 pixels of green. My GM1 takes very sharp video. I suspect it's already doing some form of 4K-down-to-1080 in the camera; that is, binning 8 pixels into every 1. But I'm just guessing. In any case, from all the samples I've seen, the GH4's downsampled 4K produces the best 1080 you can get at the moment (in sharpness). However, I'd still rather have 1080 from RAW, all things equal ;)
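      The sample-count argument in a few lines of Python (assuming a standard RGGB bayer layout and a clean 2x downscale):

         def bayer_samples(block_side):
             # measured colour samples inside a block_side x block_side bayer block
             total = block_side * block_side
             return total // 4, total // 2, total // 4   # R, G, B

         for label, side in [("native 1080 (2x2 block)", 2), ("4K scaled to 1080 (4x4 block)", 4)]:
             r, g, b = bayer_samples(side)
             print(f"{label}: {r} R, {g} G, {b} B per output pixel")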