AndrewM

  1. So again, maybe I'm missing things, but let me try and focus (for myself, mainly) on what is at issue here... There are a bunch of questions, and I think different people are answering different ones. I'm going to simplify by saying "rec709 scene" to mean a scene with no more dynamic range and color variation than can be captured by the rec709 spec. In other words, a scene that looks like pointing your camera at your TV or monitor.
     Question 1: How often in real world situations do you find yourself shooting a rec709 scene? My answer: in a controlled studio environment with expert lighting people - yes. Outside, on a really overcast day - maybe. Inside or outside, in "normal" conditions - never.
     Question 2: If confronted with a rec709 scene, should you shoot with a rec709 gamma? My answer(s): yes, if you are really good at your job, because you will get the maximum, densest information possible given your codec etc. No, if you are in a rush, or less than 100% confident in your technical skills, or if you just want to be careful because there are a lot of other people working that day whose work product (or wedding...) depends on you doing things right with no reshoots. If the dynamic range I encode exactly matches the dynamic range of my scene, and I misexpose at all, then I will have clipping at one end or the other. And (in my opinion) clipping is way worse than faint banding. Still photographers, even really good ones, bracket for a reason. Or they have gear and raw formats good enough that they have a sufficient margin of error and don't have to.
     Question 3: If confronted with a rec709 scene and I shoot with a wider-dynamic-range codec, will I end up with a "worse" image? My answer: duh, yes. In some sense, I must. I am spending bits to encode details that don't exist in the image - lots of zeros. I am spreading the ability to discriminate much more thinly over the part of the scene where all the action is. That, mathematically, must have consequences. (There is a rough sketch of the code-value arithmetic after this post.)
     Question 4: How much worse? Worse mathematically does not necessarily mean worse visually - that is the whole premise of lossy compression, on which the entire existence of digital video depends. And "worse in the real world" is not the same as "visually detectable in a scene with continuous color gradients not found in nature." I think the goal of the OP is admirable - to try and quantify this. I just see so many variables (per camera, per camera setting, per what differences actually matter) that the task seems next to impossible.
     Question 5: Is there something about "consumer" cams (hybrids, still cameras that also take video) vs "professional" cams (dedicated video cameras) that makes a difference here, relevant to the merits of log? My answer(s): yes and no. If we are talking sensors, there is lots of variation and a lot of reasons why a dedicated video chip might be better, but the bottom line is that any consumer camera capable of taking raw images, even sucky raw images, is capable of capturing way, way, way more information than is available in rec709. There haven't been rec709-limited sensors in 40 years. If we are talking codecs and data limits, then of course. The more data you throw at the scene (given equally efficient codecs) the more discrimination you get. Pro cameras often throw more data at the problem. So the big question is: given current codecs and processing limits, do consumer cams have the data rates to support log shooting? Which gets us to...
     Question 6: Should we be shooting log on current 8 bit cameras if we are delivering rec709? My answer: depends on the camera, the videographer, the colorist, and the project. If people deliver results that are better than otherwise available, then the answer is yes (for that setup and those skills) and if they don't, the answer is no.
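
To put rough numbers on "spreading the ability to discriminate more thinly" (Question 3): a minimal sketch, assuming a generic 12-stop pure-log curve (not any manufacturer's actual log) against the standard Rec.709 transfer function, both quantised to 8 bits. The curve, sample count, and stop range are illustrative choices, not measurements; the script just counts how many distinct code values land in each stop of scene light.

```python
import numpy as np

def rec709_oetf(L):
    # Rec.709 opto-electronic transfer function; L is linear scene light in 0..1
    return np.where(L < 0.018, 4.5 * L, 1.099 * L ** 0.45 - 0.099)

def generic_log(L, stops=12.0):
    # Hypothetical pure-log curve (not any camera's real log): map the bottom
    # `stops` stops below clipping linearly in log2 space onto 0..1
    L = np.clip(L, 2.0 ** -stops, 1.0)
    return (np.log2(L) + stops) / stops

L = np.geomspace(2.0 ** -12, 1.0, 200_000)   # 12 stops of linear scene light
for name, curve in [("rec709 gamma", rec709_oetf), ("12-stop log", generic_log)]:
    codes = np.round(np.clip(curve(L), 0.0, 1.0) * 255).astype(int)   # 8-bit quantisation
    stop = np.floor(np.log2(L)).astype(int)                           # which stop each sample is in
    per_stop = {s: np.unique(codes[stop == s]).size for s in range(-6, 0)}
    print(name, per_stop)   # distinct 8-bit codes in each of the top six stops
```

The gamma curve piles most of its codes into the top few stops of a rec709 scene, while the log curve hands roughly the same number of codes to every stop - which is exactly the trade-off described above.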
  2. So tell me if this simplification is wrong... You have a black box. You point it at stuff and turn it on and it records movies. Depending on how you twiddle the knobs, it is only sensitive to a certain range of inputs. Things outside that range get misrecorded (clipped etc). Inside the box are more black boxes. These encode and compress and store in ways that have particular costs. They produce artifacts, reduce resolution (temporal, color space, dynamic range etc). You only have so much recording power, however you spread it out by twiddling the knobs. You have an output format (rec709, Ultra HD, human eye, etc.). You want to "realize your vision" and deliver the best possible output given these constraints.
     Option 1: Make sure you point your black box at a rec709 scene (say), twiddle your knobs so it captures only what you will deliver, and put the limited recording power you have where it hides its weaknesses and puts its strengths where they most matter to you. (Maybe you like shadows? Maybe you like colors? Maybe you like motion?) How do you get a rec709 scene? By going into a studio and having lots of lights and having lots of people who know how to use those lights, and running every shot five times before you shoot so you know exactly what will happen. Oh, and don't have anything reflective or any shadows. And hope you don't make any mistakes, because if your exposure is off, well... you are screwed. We call this "making a Hollywood movie."
     Option 2: Just go into the world and point your box at things. Take the scene as it is, twiddle the knobs so that you can capture everything in the scene that matters to you, and spread your recording power to make it the least-worst you can, given your priorities. It is going to be "worse" than option 1 because you are spreading your power more thinly over a wider range, and there will almost certainly be losses when you stuff that into rec709 (say). But they may not be visually obvious, and it is probably better than just clipping 75% of your scene. (Remember miniDV?)
     Option 3: Go into the world and do what you can to control the light with what you have - your judgement, your choices, some lights, some screens, some reflectors. Do what you can to not stretch your recording power too thin, and allow yourself a safety margin for when you don't twiddle the knobs quite right. Control your environment when you can, to not put your black box in a situation where it is guaranteed to produce bad results. Learn how to twiddle your knobs so that you can expand the range of situations you can work in and better deliver the aesthetics you want in your final product. Save up for stuff (lights, better cameras, external recorders) that leads to fewer compromises and better results.
     I don't think I understand the OP. We are all in option 3, right? And log is a tool, designed to make the best of certain situations. If we are in option 1, then no log. If the costs of log in a particular situation outweigh the benefits, then don't use it. If you are just using log as a kind of "bracketing" to avoid not having your shot, then that is fine (but you can learn to do better in some situations, and you have to recognize the costs). If you like the "log look" and don't grade, then... that is your preference.
  3. Two comments:
     Moire: moire is a result of detail that has higher spatial frequencies than the sensor array can sample. It is exacerbated by the Bayer array, which means you are sub-sampling in odd ways. The only way to get rid of the possibility of moire is to filter out all those frequencies. But the stronger that filter is, the more detail you lose. Sorry, math. Until someone invents a filter that breaks the rules of physics and cuts off frequencies instantly, there will be a detail/moire trade-off. (There is a one-dimensional illustration of the aliasing after this post.) That is why what we should really be arguing for is for Sony to use the switchable OLPF from the RX1 in more cameras.
     Camera release schedules: we have a choice. We can have cameras on a consumer electronics schedule, which means dropping prices and more features for your money, but with rapidly decreasing residual values and something close to built-in obsolescence. Or we can have a model on which things cost more, innovate less, but retain value. There are benefits to both, and I think we have all felt the pain of spending significant money on something that is no longer the new shiny sooner than we wish. I'd rather just have things as soon as they are ready, instead of waiting four years in the hope of them coming. Why are we complaining when a company delivers the features we've asked for?
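
A minimal one-dimensional sketch of the aliasing mechanism behind moire, with made-up frequencies: detail finer than the sampling grid can represent does not disappear, it comes back as a false coarse pattern - which is why the only cure is filtering it out before it hits the samples.

```python
import numpy as np

fs = 100.0                         # "sensor" sampling rate, samples per mm (illustrative)
f_detail = 130.0                   # scene detail above Nyquist (50 cycles/mm)
x = np.arange(0.0, 1.0, 1.0 / fs)  # pixel positions across ~1 mm

sampled = np.cos(2 * np.pi * f_detail * x)   # what the pixels actually record

# The samples are indistinguishable from a much coarser pattern:
f_alias = abs(f_detail - fs)                 # 30 cycles/mm: the false "moire" frequency
assert np.allclose(sampled, np.cos(2 * np.pi * f_alias * x))
print(f"{f_detail} cycles/mm of detail records as {f_alias} cycles/mm")
```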
  4. I think people need to remember what these Sigma Art zooms are supposed to be. They are not, I think, intended to be replacements for standard zooms - they are replacements for multiple primes. So to say "you can get an f/2.8 with greater range, and how big a difference is 2.8 vs 1.8 anyway?" is slightly missing the point. Would you buy a 2.8 prime over a 1.8 prime? You might, for price or size or use reasons, but you would recognize that it is not the same thing and that there are perfectly good reasons why someone might want the faster lens. Now if you are talking 2.8 plus speed booster, and thus 2.0 vs 1.8, the arguments are much better (quick arithmetic after this post). And if you already own the speed booster, that makes complete sense as an option. But this lens only costs about $400 more than the speed booster alone. I'd love to find a 2.8 zoom for that... Edit: and Ebrahim posted the same thing while I was typing...
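
For what it's worth, the "thus 2.0" is just the usual focal-reducer arithmetic; this assumes a Metabones-style 0.71x Speed Booster (the 0.71 factor is my assumption, not something stated above):

```python
# A 0.71x focal reducer concentrates the same light onto a smaller image circle,
# gaining roughly one stop: an f/2.8 lens behaves like roughly f/2.0.
print(round(2.8 * 0.71, 2))   # -> 1.99
```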
  5. The Sigma 18-35, the one we have been talking about, is APS-C. My mistake, I didn't notice it was the 16-35. My apologies!
  6. I hate to be the one to do this, because there is more than enough contention on this thread already, but... The 16-35 is an APS-C lens. So when you put it on your A7rII, it automatically went into crop mode. So of course the pictures look the same... I think one of the things confusing the issue here is ISO. Aperture and shutter speed are real physical measures, but ISO - not so much. Or rather, it is a calibrated/normed number. When you set your ISO to 3200 for a particular f-stop and shutter speed, you are setting the amount of gain that is being applied to the image, and if the camera maker has done their job properly, you should end up with a roughly similarly-exposed (same perceived brightness) image as you would with any other camera at the same settings. So of course different camera/lens combinations should come out the same if set the same - ISO is calibrated so they should. BUT (and this is the big thing) the same ISO can mean completely different things in terms of processing and amplification. So if all other things are equal (same sensor technology, same processing sophistication), ISO 3200 on a small sensor is going to require more boosting to get the same output compared to a larger sensor, which means potentially more noise. (There is a back-of-envelope comparison after this post.)
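
A back-of-envelope sketch of that last point, assuming equal sensor technology and using illustrative crop factors: at the same f-stop and shutter speed, the total light captured scales with sensor area, so the smaller sensor builds the same output brightness from fewer photons and pays for it in shot noise.

```python
import math

full_frame_area = 36.0 * 24.0                    # mm^2
crops = {"full frame": 1.0, "APS-C": 1.5, "1-inch": 2.7}   # illustrative crop factors

for name, crop in crops.items():
    area = full_frame_area / crop ** 2
    light_ratio = area / full_frame_area         # same f-stop/shutter => total light scales with area
    stops_less_light = math.log2(1.0 / light_ratio)
    snr_penalty = math.sqrt(1.0 / light_ratio)   # shot-noise SNR scales with sqrt(total light)
    print(f"{name:10s}: {light_ratio:.2f}x the total light "
          f"(~{stops_less_light:.1f} stops less), ~{snr_penalty:.1f}x worse shot-noise SNR")
```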
  7. Have you seen Broadchurch? It is saturated (over-saturated? not sure...), and I wondered if my reaction to it was based on too much desaturation everywhere else, making me feel like something was just "wrong" with the grade compared to what has been standard recently. Also, to my eyes the Broadchurch colors are the "Canon" colors everyone claims to like in stills. So, so golden, always against blues and greens. Nice blues, nice greens, but still.
  8. The second season of Broadchurch is up on Netflix, so I have been watching. And after a while, I found myself thinking "what beautiful images, what talented cinematography, what great color." No matter what you think of the content of the show, it is made by incredibly competent people. Apart from some annoying, intentionally-jerky handheld work, there is great composition, some amazing tracking work and focus pulls, great control of light and beautiful colors for people and landscapes alike. And the lenses are just beautiful too - there is a lot of shallow depth of field work, and the backgrounds are soft and creamy and not distracting. Generally, the images just "pop" in all aspects. So I googled out of curiosity, and it was shot on Alexa.
     But after a while longer, I began to feel... oppressed by all the imagery. And that is what I wanted to start the topic about - the aesthetics. Broadchurch has all the things that people on this forum often talk about positively, and that I would take as positives too - the intelligent and creative use of shallow depth of field that comes with big sensors, color that is vivid yet "real," and an organic "film-like" image. You won't see any clipping or noise on this show... If you haven't seen it, you can pull up a few minutes online and see what I mean. It can be any few minutes.
     I am trying to reflect on this oppressiveness I am feeling, because it is making it hard for me to watch. And I am wondering if anyone sees/feels the same. The best theory I have is this - on color, it is like living in an eternal "golden hour." The colors are absolutely lovely, but you wish sometimes they would go away - they are asking too much of you in their loveliness. On the shallow depth of field: the work is incredibly proficient, but sometimes you wish you could just see the scene and decide where to look, and not have your attention dragged around the scene by the decisions of the director and cinematographer. Again, you wish it would go away and ask less of you.
     Maybe it is just me, and I am just odd. But I am wondering if anyone else has had this feeling, on this or some other film and their aesthetics. It is great, in many many ways, but in some way (for me, at least) it is "too" great. I would aspire in my dreams to produce images with a fraction of the quality of those in Broadchurch, but I find myself being distracted by them and not wanting to watch.
  9. If the stabilization works well in video, then there are all sorts of flow-on benefits. Compression will work better (vertical and horizontal movements "suit" the algorithms, but roll really screws with them, and fewer small changes to deal with means more data going to things that matter), and rolling shutter effects that delay and spread out camera movements and make things swirl will be reduced. (There is a toy illustration of the rolling shutter skew after this post.) Frankly, the second is really big for me - I can't watch GoPro footage most of the time because the combination of rolling shutter and lots of camera movement is completely disorienting.
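
A toy sketch of the rolling shutter skew mentioned above, with made-up numbers: rows are read out one after another, so during a pan each row samples the scene a little later and a vertical edge comes out slanted (add vibration and roll on top of that and you get the swirl).

```python
import numpy as np

rows, cols = 8, 16
readout_per_row = 1.0          # arbitrary time units between row readouts
pan_speed = 0.5                # columns of apparent motion per time unit (made up)
edge_col = 8                   # where a vertical edge sits when row 0 is read

frame = np.zeros((rows, cols), dtype=int)
for r in range(rows):
    t = r * readout_per_row                     # this row is read out later...
    col = int(edge_col + pan_speed * t)         # ...by which time the edge has moved
    frame[r, min(col, cols - 1):] = 1

print(frame)   # the edge steps sideways row by row: a straight line records as a slant
```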
  10. This looks really, really interesting. I thought the color filter was going to be a steady-state electronic device, something electrochromic, rather than something so mechanical. Though the way it looks like it works would actually reduce some of the worries I had. Instead of taking a red picture, a green picture, then a blue picture, it looks like it is taking a Bayer full-color picture, then displacing it a pixel, then another full Bayer, then displace, then another full Bayer. (It's not strictly a Bayer pattern - less green - but let's not nitpick...)
      Advantages: You are getting full three-color information over the entire sensor every exposure, not just once every three. You are getting direct information about each individual color at each individual site, but only once a cycle (three exposures). That makes me a lot less worried about temporal aliasing... Anyway, at the numbers in the document, you can do 80 full cycles in a 1/60 second exposure! (The arithmetic is sketched after this post.)
      Disadvantages: You are looking at a really short exposure! 1/16000 second! Those had better be sensitive photosites! Though because you are summing multiple exposures, some noise issues will be reduced. But if you hit the floor of the sensor sensitivity, it will be all over... One of the remaining questions I have is whether you can vary the speed of the color filter / increase the time of each exposure.
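
The back-of-envelope arithmetic for those figures, assuming a "cycle" means three colour sub-exposures (my reading of the numbers, not a confirmed spec):

```python
frame_time = 1 / 60            # seconds per video frame
cycles_per_frame = 80          # full colour cycles per frame (figure from the post)
subexposures_per_cycle = 3     # one pass per colour (assumption)

t_cycle = frame_time / cycles_per_frame
t_sub = t_cycle / subexposures_per_cycle
print(f"per cycle: 1/{1 / t_cycle:.0f} s, per colour sub-exposure: 1/{1 / t_sub:.0f} s")
# -> per cycle: 1/4800 s, per colour sub-exposure: 1/14400 s
#    (the same ballpark as the 1/16000 s figure mentioned above)
```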
  11. I think the Foveon/Sigma problem is probably a mix of (1) silicon being a pretty lousy (and probabilistic) color filter, thus providing "information" that requires an awful lot of massaging to make "real" color, (2) trying to pull information from three distinct layers of a chip, and (3) Sigma being a small company and not really an electronics company. So they probably don't have cutting-edge processing hardware, or custom hardware, or optimized code and algorithms. And there are just differences in the details of everybody's solutions that don't seem intrinsic to the basic technology - Panasonic, for instance, seems to have lower power/lower heat 4K processing than Sony right now, for whatever reason. I am a little puzzled why the Sigmas aren't better.
      I think, if we are understanding the Sony imager right, the first issue is how good and how quick the electronically-changing color filter is. If it is good (quick and color accurate), then all you have to do, theoretically, is pull the numbers from each photosite and sum them individually for each color pass. You could do that locally for each pixel with very simple hardware in essentially no time - you just have an add buffer. (There is a toy sketch of this after this post.) Then you have to get all the numbers off the sensor into the rest of the chip, but that only has to happen once a frame. And you don't have to do that quickly, or worry about rolling shutter, because the moment of exposure is controlled at the pixel. Global shutter is free.
      The other issue is sensitivity. If we are right, then each frame is a lot of exposures in each color added up. Short exposure means less light, even if the pixels are effectively three times the size because there is no Bayer array. My guess is that there will be a trade-off. At low light, you can have low noise/good sensitivity (by having long individual color exposures and summing fewer of them), but then you will have problems with temporal aliasing of color (which you might be able to deal with in processing, but at a loss of resolution). Or you can have good temporal resolution by having more, shorter exposures, at the cost of greater noise/lower sensitivity. Low light with lots of motion is going to be the worst case for this technology, if I am understanding it right.
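
A toy sketch of the per-pixel "add buffer" idea, with made-up sensor size, pass count, and bit depth: each colour pass just adds into that colour's accumulator at the pixel, and nothing needs to leave the sensor until the end of the frame.

```python
import numpy as np

H, W = 4, 6                 # toy sensor size
passes_per_colour = 80      # colour cycles per frame (figure from the earlier post)
rng = np.random.default_rng(0)

# One accumulator ("add buffer") per colour, living at the pixel.
acc = {c: np.zeros((H, W), dtype=np.uint32) for c in "RGB"}

for _ in range(passes_per_colour):
    for c in "RGB":
        reading = rng.integers(0, 1024, size=(H, W), dtype=np.uint32)  # fake 10-bit sub-exposure
        acc[c] += reading   # purely local addition; no readout needed yet

# Readout happens once per frame: three summed colour planes per pixel.
frame = np.stack([acc[c] for c in "RGB"], axis=-1)
print(frame.shape, frame.max())
```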
  12. On how the Foveon sensor works - you are both right. It does have multiple sensels for the different colors, but they are stacked on top of each other vertically. And it both does and doesn't have color filters. Essentially, it uses the silicon itself as a filter - different frequencies of light penetrate to different depths, so they end up in different sensels.
      On the new Sony sensor - we are all guessing, but if it is taking sequential pictures in different colors, then what you are doing is swapping spatial chroma aliasing for temporal chroma aliasing - if objects are moving, then a single object will exhibit weird uneven color smearing that would have to be corrected in software, which will cost you resolution. That is why I think you are seeing the insane frame rates. If you were shooting 30p with a 1/60 shutter, and it did 1/180 second red, then 1/180 green, then 1/180 blue, anything that has changed position in 1/180 second is going to cause real problems. But if, in 1/60 of a second, it shoots r-g-b-r-g-b-r-g-b-r-g-b... and combines the exposures, the problem goes away to a large extent. That would be really exciting. It would also allow you to do some really, really cool things to deal with temporal aliasing and to give more pleasing motion blur - you could assign lower weights to the exposures at the beginning and end of the broader exposure interval (a small sketch of that weighting follows this post). See http://www.red.com/learn/red-101/cinema-temporal-aliasing
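
A minimal sketch of that weighting idea, assuming a made-up number of sub-exposures per frame and a triangular (Bartlett) window as the taper, chosen purely for illustration: instead of summing every sub-exposure equally, weight the ones at the start and end of the frame interval less so the motion blur falls off smoothly.

```python
import numpy as np

n_sub = 24                                 # sub-exposures inside one frame (made-up number)
rng = np.random.default_rng(0)
subs = rng.random((n_sub, 4, 4))           # fake sub-exposure images (tiny 4x4 toy frames)

weights = np.bartlett(n_sub)               # triangular taper: low at the ends of the interval
weights /= weights.sum()                   # keep overall exposure constant

frame_equal = subs.mean(axis=0)                      # plain averaging of the sub-exposures
frame_tapered = np.tensordot(weights, subs, axes=1)  # weighted sum: softer start/end of the blur
print(frame_equal.shape, frame_tapered.shape)
```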
  13. I'd love to be wrong, but I'm guessing we are more likely to see this in an F5/F55 type camera...
  14. The discussion in the video did shoot off in a bunch of directions, which makes it hard to summarize. But if I were going to try, I would say the morals were the following:
      They would rather have the next step be high dynamic range and larger color gamut with the same resolution than higher resolution with the same range/gamut for delivery of content. In other words, they like the new Dolby HDR stuff. In terms of cameras, the nice things they said about higher resolution were all about effects, and crop in post. Not anything much about improving the image by improving the resolution. The big step, according to them, was not resolution but sensitivity - increased sensitivity meant easier/lighter/cheaper lighting, and more natural light.
      I think dynamic range is way more important for us than it is for them. They are working in a world where if the shadows are dropping off into black, then they tell someone to shine a light there, and if the highlights are clipping, then they throw up some gels or some diffusion. They have whole crews, and a controlled environment, so they can make the scene fit into the dynamic range they have. They might like to deliver in HDR, and they might like to have the safety margin that the latitude gives them, but that isn't the main issue. For most of us, the issues are different. We don't have the control over the environment, and we don't have three people working the camera and another whole crew manning the lighting. So high dynamic range is more about the fact that we can't be sure the camera is set just right or the scene is set just right, and we need to be able to fix in post and not worry so much when we shoot.
  15. How is the 60p in the crop mode? If I am switching to that for action to avoid rolling shutter, I'd like to be able to do slo-mo. Does it have the same softness problems as the full-frame 60p?