
AndrewM

Members
  • Content Count

    33
  • Joined

  • Last visited

About AndrewM

  • Rank
    Member

Recent Profile Visitors

1,181 profile views
  1. So again, maybe I'm missing things, but let me try and focus (for myself, mainly) on what is at issue here... There are a bunch of questions, and I think different people are answering different ones. I'm going to simplify by saying "rec709 scene" to mean a scene with no more dynamic range and color variation than can be captured by the rec709 spec. In other words, a scene that looks like pointing your camera at your TV or monitor. Question 1: How often in real world situations do you find yourself shooting a rec709 scene? My answer: in a controlled studio environment with expert li
  2. So tell me if this simplification is wrong... You have a black box. You point it at stuff and turn it on and it records movies. Depending on how you twiddle the knobs, it is only sensitive to a certain range of inputs. Things outside that range get misrecorded (clipped etc). Inside the box are more black boxes. These encode and compress and store in ways that have particular costs. They produce artifacts, reduce resolution (temporal, color space, dynamic range etc). You only have so much recording power, however you spread it out by twiddling the knobs. You have an output fo
  3. Two comments: Moire: moire is a result of detail with higher spatial frequencies than the sensor array can sample. It is exacerbated by the Bayer array, which means you are sub-sampling in odd ways. The only way to get rid of the possibility of moire is to filter out all those frequencies. But the stronger that filter is, the more detail you lose. Sorry, math. Until someone invents a filter that breaks the rules of physics and cuts off frequencies instantly, there will be a detail/moire trade-off. That is why what we should really be arguing for is Sony to use the switchable OLPF from the
  4. I think people need to remember what these Sigma Art zooms are supposed to be. They are not, I think, intended to be replacements for standard zooms - they are replacements for multiple primes. So to say "you can get a 2.8 with greater range and how big a difference is 2.8 vs 1.8?" is slightly missing the point. Would you buy a 2.8 prime over a 1.8 prime? You might, for price or size or use reasons, but you would recognize that it is not the same thing and there are perfectly good reasons why someone might want the faster lens. Now if you are talking 2.8 plus speed booster, and thus 2.0 v
  5. Sigma 18-35, the one we have been talking about, is APS-C. I made the mistake, didn't notice 16-35. My apologies!
  6. I hate to be the one to do this, because there is more than enough contention on this thread already, but... The 16-35 is an APS-C lens. So when you put it on your A7rII, it automatically went into crop mode. So of course the pictures look the same... I think one of the things confusing matters here is ISO. Aperture and shutter speed are real physical measures, but ISO - not so much. Or rather, it is a calibrated/normed number. When you set your ISO to 3200 for a particular f stop and shutter speed, you are setting the amount of gain that is being applied to the image,
  7. Have you seen Broadchurch? It is saturated (over-saturated? not sure...), and I wondered if my reaction to it was based on too much desaturation everywhere else, making me feel like something was just "wrong" with the grade compared to what has been standard recently. Also, to my eyes the Broadchurch colors are the "canon" colors everyone claims to like in stills. So, so golden, always against blues and greens. Nice blues, nice greens, but still.
  8. The second season of Broadchurch is up on Netflix, so I have been watching. And after a while, I found myself thinking "what beautiful images, what talented cinematography, what great color." No matter what you think of the content of the show, it is made by incredibly competent people. Apart from some annoying, intentionally-jerky handheld work, there is great composition, some amazing tracking work and focus pulls, great control of light and beautiful colors for people and landscapes alike. And the lenses are just beautiful too - there is a lot of shallow depth of field work, and the backgro
  9. If the stabilization works well in video, then there are all sorts of flow-on benefits. Compression will work better (vertical and horizontal movements "suit" the algorithms, but roll really screws with it, and fewer small changes to deal with means more data going to things that matter) and rolling shutter effects that delay and spread out camera movements and make things swirl will be reduced. Frankly, the second is really big for me - I can't watch GoPro footage most of the time because the combination of rolling shutter and lots of camera movement is completely disorienting.
  10. This looks really, really interesting. I thought the color filter was going to be a steady-state electronic device, something electrochromic, rather than something so mechanical. Though the way it looks like it works would actually reduce some of the worries I had. Instead of taking a red picture, a green picture, then a blue picture, it looks like it is taking a Bayer full-color picture, then displacing it a pixel, then another full Bayer, then displace, then another full Bayer. (It's not strictly a Bayer pattern (less green) but let's not nitpick...) Advantages: You are getting full thr
  11. I think the Foveon/Sigma problem is probably a mix of (1) silicon being a pretty lousy (and probabilistic) color filter, thus providing "information" that requires an awful lot of massaging to make "real" color, (2) trying to pull information from three distinct layers of a chip and (3) Sigma being a small company and not really an electronics company. So they probably don't have cutting-edge processing hardware, or custom hardware, or optimized code and algorithms. And there are just differences in the details of everybody's solutions that don't seem intrinsic to basic technology - Panasonic,
  12. On how the Foveon sensor works - you are both right. It does have multiple sensels for the different colors, but they are stacked on top of each other vertically. And it does have color filters, and it doesn't have color filters. Essentially, it uses the silicon as a filter - different frequencies of light penetrate to different depths, so end up in different sensels. On the new Sony sensor - we are all guessing, but if it is taking sequential pictures in different colors, then what you are doing is swapping spatial chroma aliasing for temporal chroma aliasing - if objects are moving, then
  13. I'd love to be wrong, but I'm guessing we are more likely to see this in an F5/F55 type camera...
  14. The discussion in the video did shoot off in a bunch of directions, which makes it hard to summarize. But if I were going to try, I would say the morals were the following: They would rather have the next step be high dynamic range and larger color gamut with the same resolution than higher resolution with the same range/gamut for delivery of content. In other words, they like the new Dolby HDR stuff. In terms of cameras, the nice things they said about higher resolution were all about effects, and crop in post. Not anything much about improving the image by improving the resolutio
  15. How is the 60p in the crop mode? If I am switching to that for action to avoid rolling shutter, I'd like to be able to do slo-mo. Does it have the same softness problems as the full-frame 60p?
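The Nyquist argument in post 3 ("detail with higher frequencies than the sensor array" folding into moire) can be sketched numerically. This is an illustrative toy, not a model of any real sensor: a detail frequency above half the sampling rate produces samples identical (up to sign) to a much lower aliased frequency, which is exactly the false pattern moire shows.

```python
import numpy as np

# Hypothetical numbers: 1-unit pixel pitch gives a Nyquist limit of
# 0.5 cycles/pixel. We "photograph" a detail at 0.7 cycles/pixel.
pixel_pitch = 1.0
nyquist = 0.5 / pixel_pitch
f_detail = 0.7                      # above Nyquist: cannot be represented

x = np.arange(32) * pixel_pitch     # sensel positions
samples = np.sin(2 * np.pi * f_detail * x)

# The samples are indistinguishable from a signal at the folded
# frequency |f_detail - 1/pixel_pitch| = 0.3 cycles/pixel (sign-flipped).
f_alias = abs(f_detail - 1.0 / pixel_pitch)
alias_samples = np.sin(2 * np.pi * f_alias * x)

assert np.allclose(samples, -alias_samples, atol=1e-9)
print("0.7 cycles/pixel aliases to", f_alias, "cycles/pixel")
```

This is also why the OLPF trade-off in the post is unavoidable: the only fix is removing the 0.7-cycle detail before sampling, and any realizable filter that does so also attenuates real detail below 0.5.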
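The ISO point in post 6 (aperture and shutter are physical, ISO is "the amount of gain that is being applied") can be sketched as a toy model. The `apply_iso` function and all numbers here are hypothetical simplifications, not any camera's actual pipeline, but they show the consequence: raising ISO multiplies the recorded signal and pushes highlights into clipping.

```python
import numpy as np

def apply_iso(raw, iso, base_iso=100, full_well=1.0):
    """Toy model: ISO as a linear gain on the sensor signal, clipped
    at the sensor/ADC maximum. Real cameras mix analog and digital
    gain, but the shape of the effect is the same."""
    gain = iso / base_iso
    return np.clip(raw * gain, 0.0, full_well)

raw = np.array([0.01, 0.1, 0.4])     # hypothetical linear sensor values

print(apply_iso(raw, 100))           # base ISO: values pass through
print(apply_iso(raw, 3200))          # 32x gain: the two brighter values clip
```

Note what the model captures: ISO never adds light the way a wider aperture or longer shutter does; it only rescales (and eventually destroys) what was already captured.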
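The Foveon description in post 12 ("different frequencies of light penetrate to different depths, so end up in different sensels") follows from exponential (Beer-Lambert) absorption in silicon. The sketch below uses rough order-of-magnitude absorption coefficients, not real silicon data, just to show why stacked sensels can separate colors by depth: blue is mostly absorbed near the surface while red survives much deeper.

```python
import math

# Approximate, illustrative absorption coefficients (1/micron).
# Real silicon values vary strongly with wavelength; these only
# preserve the ordering blue >> green >> red.
absorption_per_um = {"blue": 10.0, "green": 1.0, "red": 0.3}

def fraction_absorbed(color, depth_um):
    """Fraction of incident light absorbed within the top depth_um of
    silicon, by Beer-Lambert: 1 - exp(-alpha * depth)."""
    return 1.0 - math.exp(-absorption_per_um[color] * depth_um)

# In the top 0.2 microns, most blue is gone but most red passes through,
# which is what lets a top sensel respond mainly to blue and a deep
# sensel mainly to red.
for color in ("blue", "green", "red"):
    print(color, round(fraction_absorbed(color, 0.2), 3))
```

It also hints at why, as post 11 says, the raw "color" needs heavy massaging: the separation is probabilistic and overlapping (every layer absorbs some of every color), nothing like the sharp passbands of dye filters.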