Everything posted by AndrewM

  1. So again, maybe I'm missing things, but let me try and focus (for myself, mainly) on what is at issue here. There are a bunch of questions, and I think different people are answering different ones. I'm going to simplify by saying "rec709 scene" to mean a scene with no more dynamic range and color variation than can be captured by the rec709 spec. In other words, a scene that looks like pointing your camera at your TV or monitor.

Question 1: How often in real world situations do you find yourself shooting a rec709 scene? My answer: in a controlled studio environment with expert lighting people, yes. Outside, on a really overcast day, maybe. Inside or outside, in "normal" conditions, never.

Question 2: If confronted with a rec709 scene, should you shoot with a rec709 gamma? My answer(s): yes, if you are really good at your job, because you will get the maximum, densest information possible given your codec etc. No, if you are in a rush, or less than 100% confident in your technical skills, or if you just want to be careful because there are a lot of other people working that day whose work product (or wedding...) depends on you doing things right with no reshoots. If the dynamic range I encode exactly matches the dynamic range of my scene, and I misexpose at all, then I will have clipping at one end or the other. And (in my opinion) clipping is way worse than faint banding. Still photographers, even really good ones, bracket for a reason. Or have gear and raw formats good enough that they have a sufficient margin of error and don't have to.

Question 3: If confronted with a rec709 scene and I shoot with a wider-dynamic-range codec, will I end up with a "worse" image? My answer: duh, yes. In some sense, I must. I am spending bits to encode details that don't exist in the image (lots of zeros). I am spreading the ability to discriminate much more thinly over the part of the scene where all the action is. That, mathematically, must have consequences.

Question 4: How much worse? Worse mathematically does not necessarily mean worse visually; that is why lossy compression, on which the entire existence of digital video depends, works at all. And "worse in the real world" is not the same as "visually detectable in a scene with continuous color gradients not found in nature." I think the goal of the OP is admirable: to try and quantify this. I just see so many variables (per camera, per camera setting, per what differences actually matter) that the task seems next to impossible.

Question 5: Is there something about "consumer" cams (hybrids, still cameras that also take video) vs "professional" cams (dedicated video cameras) that makes a difference here, relevant to the merits of log? My answer(s): yes and no. If we are talking sensors, there is lots of variation and a lot of reasons why a dedicated video chip might be better, but the bottom line is that any consumer camera capable of taking raw images, even sucky raw images, is capable of capturing way, way, way more information than is available in rec709. There haven't been rec709-limited sensors in decades. If we are talking codecs and data limits, then of course. The more data you throw at the scene (given equally efficient codecs), the more discrimination you get. Pro cameras often throw more data at the problem. So the big question is: given current codecs and processing limits, do consumer cams have the data rates to support log shooting? Which gets us to...
Question 6: Should we be shooting log on current 8 bit cameras if we are delivering rec709? My answer: depends on the camera, the videographer, the colorist, and the project. If people deliver results that are better than otherwise available, then the answer is yes (for that setup and those skills) and if they don't, the answer is no.
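Since the argument above keeps coming back to "spending bits" on range the scene does not contain, here is a minimal Python sketch of that arithmetic. The two transfer curves are toy stand-ins (not the real Rec.709 piecewise curve or any camera's actual log profile), chosen only to show how the number of 8-bit code values available per stop drops when the same 8 bits have to cover more stops.

```python
# Rough sketch: how spreading a fixed 8-bit code range over more stops of
# scene dynamic range thins out the codes available per stop. The curves
# below are illustrative stand-ins, not any camera's actual transfer function.
import numpy as np

def codes_per_stop(transfer, stops, bits=8):
    """Count distinct code values falling inside each one-stop band."""
    levels = 2 ** bits - 1
    lin = np.logspace(-stops, 0, 200_000, base=2)        # relative linear exposure
    codes = np.round(np.clip(transfer(lin), 0, 1) * levels)
    edges = 2.0 ** np.arange(-stops, 1)                  # one-stop boundaries
    return [len(np.unique(codes[(lin >= lo) & (lin < hi)]))
            for lo, hi in zip(edges[:-1], edges[1:])]

gamma_709 = lambda x: x ** (1 / 2.2)                     # toy "display gamma", ~6 stops used
generic_log = lambda x: (np.log2(x) + 14) / 14           # toy log curve spread over ~14 stops

print("per-stop codes, toy 709 gamma over 6 stops :", codes_per_stop(gamma_709, 6))
print("per-stop codes, toy log curve over 14 stops:", codes_per_stop(generic_log, 14))
```

The point it illustrates: the gamma curve piles most of its codes into the few stops it covers, while the log curve hands each of its many stops a roughly equal, and much smaller, share.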
  2. So tell me if this simplification is wrong... You have a black box. You point it at stuff and turn it on and it records movies. Depending on how you twiddle the knobs, it is only sensitive to a certain range of inputs. Things outside that range get misrecorded (clipped etc.). Inside the box are more black boxes. These encode and compress and store in ways that have particular costs. They produce artifacts and reduce resolution (temporal, color space, dynamic range, etc.). You only have so much recording power, however you spread it out by twiddling the knobs. You have an output format (rec709, Ultra HD, human eye, etc.). You want to "realize your vision" and deliver the best possible output given these constraints.

Option 1: Make sure you point your black box at a rec709 scene (say), and twiddle your knobs so it captures only what you will deliver, putting the limited recording power you have where it hides its weaknesses and where its strengths matter most to you. (Maybe you like shadows? Maybe you like colors? Maybe you like motion?) How do you get a rec709 scene? By going into a studio and having lots of lights and lots of people who know how to use those lights, and running every shot five times before you shoot so you know exactly what will happen. Oh, and don't have anything reflective or any shadows. And hope you don't make any mistakes, because if your exposure is off, well... you are screwed. We call this "making a Hollywood movie."

Option 2: Just go into the world and point your box at things. Take the scene as it is, twiddle the knobs so that you can capture everything in the scene that matters to you, and spread your recording power to make it the least-worst you can, given your priorities. It is going to be "worse" than option 1 because you are spreading your power more thinly over a wider range, and there will almost certainly be losses when you stuff that into rec709 (say). But they may not be visually obvious, and it is probably better than just clipping 75% of your scene. (Remember miniDV?)

Option 3: Go into the world and do what you can to control the light with what you have: your judgement, your choices, some lights, some screens, some reflectors. Do what you can to not stretch your recording power too thin, and allow yourself a safety margin for when you don't twiddle the knobs quite right. Control your environment when you can, so you don't put your black box in a situation where it is guaranteed to produce bad results. Learn how to twiddle your knobs so that you can expand the range of situations you can work in and better deliver the aesthetics you want in your final product. Save up for stuff (lights, better cameras, external recorders) that leads to fewer compromises and better results.

I don't think I understand the OP. We are all in option 3, right? And log is a tool, designed to make the best of certain situations. If we are in option 1, then no log. If the costs of log in a particular situation outweigh the benefits, then don't use it. If you are just using log as a kind of "bracketing" to avoid not having your shot, then that is fine (but you can learn to do better in some situations, and you have to recognize the costs). If you like the "log look" and don't grade, then... that is your preference.
  3. Two comments.

Moire: moire is a result of detail at higher spatial frequencies than the sensor array can sample. It is exacerbated by the Bayer array, which means you are sub-sampling the colors in odd ways. The only way to eliminate the possibility of moire is to filter out all those frequencies, but the stronger that filter is, the more detail you lose. Sorry, math. Until someone invents a filter that breaks the rules of physics and cuts off frequencies instantly, there will be a detail/moire trade-off. That is why what we should really be arguing for is Sony to use the switchable OLPF from the rx1 in more cameras.

Camera release schedules: we have a choice. We can have cameras on a consumer electronics schedule, which means dropping prices and more features for your money, but with rapidly decreasing residual values and something close to built-in obsolescence. Or we can have a model on which things cost more, innovate less, but retain value. There are benefits to both, and I think we have all felt the pain of spending significant money on something that is no longer the new shiny sooner than we wish. I'd rather just have things as soon as they are ready, instead of waiting four years in the hope of them coming. Why are we complaining when a company delivers the features we've asked for?
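The moire point is just sampling theory: frequencies above the sensor's Nyquist limit do not disappear, they fold back as lower "alias" frequencies. A small self-contained sketch with toy 1-D numbers (not tied to any particular sensor) shows the effect:

```python
# Toy 1-D demonstration of aliasing: detail finer than half the sampling
# rate does not vanish, it reappears as a lower (false) frequency, which
# in 2-D with a Bayer array shows up as moire and false color.
import numpy as np

samples_per_mm = 200                                  # hypothetical sensor sampling rate
nyquist = samples_per_mm / 2

detail_lp_per_mm = 170                                # scene detail well above Nyquist
x = np.arange(0, 1, 1 / samples_per_mm)               # sample positions across 1 mm
sampled = np.sin(2 * np.pi * detail_lp_per_mm * x)    # what the pixels record

# The recorded values are indistinguishable from a much coarser pattern:
alias_lp_per_mm = samples_per_mm - detail_lp_per_mm   # 170 lp/mm folds down to 30 lp/mm
alias = np.sin(2 * np.pi * alias_lp_per_mm * x)

print("Nyquist limit:", nyquist, "lp/mm")
print("real detail:", detail_lp_per_mm, "lp/mm -> aliases to", alias_lp_per_mm, "lp/mm")
print("difference between the two sampled signals:",
      round(float(np.max(np.abs(sampled + alias))), 6))   # ~0: identical up to sign
```

An OLPF blurs away the 170 lp/mm detail before it hits the pixels, which is exactly the detail/moire trade-off described above.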
  4. I think people need to remember what these Sigma Art zooms are supposed to be. They are not, I think, intended to be replacements for standard zooms - they are replacements for multiple primes. So to say "you can get a 2.8 with greater range and how big a difference is 2.8 vs 1.8?" is slightly missing the point. Would you buy a 2.8 prime over a 1.8 prime? You might, for price or size or use reasons, but you would recognize that it is not the same thing and there are perfectly good reasons why someone might want the faster lens. Now if you are talking 2.8 plus speed booster, and thus 2.0 vs 1.8, the arguments are much better. And if you already own the speed booster, that makes complete sense as an option. But this lens only costs about $400 more than the speed booster alone. I'd love to find a 2.8 zoom for that... Edit: and Ebrahim posted the same thing while I was typing...
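For what it's worth, here is the focal-reducer arithmetic behind the "2.8 plus speed booster, thus 2.0 vs 1.8" comparison, as a quick sketch. The 0.71x factor is the typical Metabones ratio; other adapters differ slightly, and the example zoom is just an assumed full-frame 24-70mm f/2.8.

```python
# Quick arithmetic for a focal reducer ("speed booster"): both focal length
# and f-number scale by the reducer's magnification. 0.71x is the usual
# Metabones figure; exact numbers vary by adapter.

def with_focal_reducer(focal_mm, f_number, reduction=0.71):
    return focal_mm * reduction, f_number * reduction

for focal, f in [(24, 2.8), (70, 2.8)]:          # e.g. a full-frame 24-70mm f/2.8 zoom
    new_focal, new_f = with_focal_reducer(focal, f)
    print(f"{focal}mm f/{f}  ->  {new_focal:.0f}mm f/{new_f:.1f} on the crop body")
# Output: roughly 17mm and 50mm at about f/2.0, which is the "2.0 vs 1.8"
# comparison against the Sigma 18-35mm f/1.8.
```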
  5. The Sigma 18-35, the one we have been talking about, is APS-C. My mistake; I didn't notice it was the 16-35. My apologies!
  6. I hate to be the one to do this, because there is more than enough contention on this thread already, but... the 16-35 is an APS-C lens. So when you put it on your A7rII, it automatically went into crop mode. So of course the pictures look the same...

I think one of the things confusing matters here is ISO. Aperture and shutter speed are real physical measures, but ISO, not so much. Or rather, it is a calibrated/normed number. When you set your ISO to 3200 for a particular f-stop and shutter speed, you are setting the amount of gain that is being applied to the image, and if the camera maker has done their job properly, you should end up with a roughly similarly exposed (same perceived brightness) image as you would with any camera at the same settings. So of course different camera/lens combinations should come out the same if set the same; ISO is calibrated so they should. BUT (and this is the big thing) the same ISO can mean completely different things in terms of processing and amplification. So if all other things are equal (same sensor technology, same processing sophistication), ISO 3200 on a small sensor is going to require more boosting to get the same output compared to a larger sensor, which means potentially more noise.
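To put rough numbers on the "same ISO, more boosting on a smaller sensor" point: at the same f-number, shutter speed, and scene, the light collected scales with sensor area, and photon shot noise alone makes the signal-to-noise ratio scale with the square root of that light. The sketch below is an idealized shot-noise-only model with an invented photon density; only the ratios between formats matter.

```python
# Idealized shot-noise-only comparison of sensor sizes at identical exposure
# settings (same f-number, shutter, scene). The photon density is invented;
# real sensors add read noise, different quantum efficiency, etc.
import math

sensors = {
    "full frame": 36.0 * 24.0,     # sensor area in mm^2
    "APS-C":      23.5 * 15.6,
    "m43":        17.3 * 13.0,
}

photons_per_mm2 = 1000             # hypothetical light reaching the sensor per mm^2

for name, area_mm2 in sensors.items():
    photons = photons_per_mm2 * area_mm2          # total light captured for the frame
    snr = photons / math.sqrt(photons)            # shot-noise-limited SNR = sqrt(photons)
    stops_vs_ff = math.log2(area_mm2 / sensors["full frame"])
    print(f"{name:10s}  area {area_mm2:6.0f} mm^2  relative SNR {snr:7.1f}  "
          f"({stops_vs_ff:+.1f} stops of light vs full frame)")
```

The m43 frame comes out roughly two stops behind full frame in collected light, which is the gap the extra gain (and its noise) has to cover.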
  7. Have you seen Broadchurch? It is saturated (over-saturated? not sure...), and I wondered if my reaction to it was based on too much desaturation everywhere else, making me feel like something was just "wrong" with the grade compared to what has been standard recently. Also, to my eyes the Broadchurch colors are the "Canon" colors everyone claims to like in stills. So, so golden, always against blues and greens. Nice blues, nice greens, but still.
  8. The second season of Broadchurch is up on Netflix, so I have been watching. And after a while, I found myself thinking "what beautiful images, what talented cinematography, what great color." No matter what you think of the content of the show, it is made by incredibly competent people. Apart from some annoying, intentionally-jerky handheld work, there is great composition, some amazing tracking work and focus pulls, great control of light and beautiful colors for people and landscapes alike. And the lenses are just beautiful too - there is a lot of shallow depth of field work, and the backgrounds are soft and creamy and not distracting. Generally, the images just "pop" in all aspects. So I googled out of curiosity, and it was shot on Alexa.

But after a while longer, I began to feel... oppressed by all the imagery. And that is what I wanted to start the topic about - the aesthetics. Broadchurch has all the things that people on this forum often talk about positively, and that I would take as positives too - the intelligent and creative use of shallow depth of field that comes with big sensors, color that is vivid yet "real," and an organic "film-like" image. You won't see any clipping or noise on this show... If you haven't seen it, you can pull up a few minutes online and see what I mean. It can be any few minutes.

I am trying to reflect on this oppressiveness I am feeling, because it is making it hard for me to watch. And I am wondering if anyone sees/feels the same. The best theory I have is this - on color, it is like living in an eternal "golden hour." The colors are absolutely lovely, but you wish sometimes they would go away - they are asking too much of you in their loveliness. On the shallow depth of field: the work is incredibly proficient, but sometimes you wish you could just see the scene and decide where to look, and not have your attention dragged around the scene by the decisions of the director and cinematographer. Again, you wish it would go away and ask less of you.

Maybe it is just me, and I am just odd. But I am wondering if anyone else has had this feeling, on this or some other film and their aesthetics. It is great, in many many ways, but in some way (for me, at least) it is "too" great. I would aspire in my dreams to produce images with a fraction of the quality of those in Broadchurch, but I find myself being distracted by them and not wanting to watch.
  9. If the stabilization works well in video, then there are all sorts of flow-on benefits. Compression will work better (vertical and horizontal movements "suit" the algorithms, but roll really screws with them, and fewer small changes to deal with means more data going to things that matter), and rolling shutter effects that delay and spread out camera movements and make things swirl will be reduced. Frankly, the second is really big for me - I can't watch GoPro footage most of the time, because the combination of rolling shutter and lots of camera movement is completely disorienting.
  10. This looks really, really interesting. I thought the color filter was going to be a steady-state electronic device, something electrochromic, rather than something so mechanical. Though the way it appears to work would actually reduce some of the worries I had. Instead of taking a red picture, a green picture, then a blue picture, it looks like it is taking a Bayer full-color picture, then displacing it a pixel, then another full Bayer, then displacing, then another full Bayer. (It's not strictly a Bayer pattern (less green), but let's not nitpick...)

Advantages: You are getting full three-color information over the entire sensor every exposure, not just once every three. You are getting direct information about each individual color at each individual site, but only once a cycle (three exposures). That makes me a lot less worried about temporal aliasing... Anyway, at the numbers in the document, you can do 80 full cycles in a 1/60 second exposure!

Disadvantages: You are looking at a really short exposure - 1/16000 second! Those had better be sensitive photosites! Though because you are summing multiple exposures, some noise issues will be reduced. But if you hit the floor of the sensor sensitivity, it will be all over... One of the remaining questions I have is whether you can vary the speed of the color filter / increase the time of each exposure.
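Running the timing numbers quoted above (taken at face value from the post, not from Sony): 80 full R-G-B cycles inside a 1/60 s frame means 240 sub-exposures, each on the order of the 1/16000 s figure mentioned.

```python
# Back-of-the-envelope timing for the sequential-color idea, using only the
# figures quoted in the post (80 full color cycles inside a 1/60 s frame).
frame_time = 1 / 60          # seconds per frame
cycles_per_frame = 80        # full R-G-B cycles per frame (from the post)
colors_per_cycle = 3

sub_exposures = cycles_per_frame * colors_per_cycle
sub_exposure_time = frame_time / sub_exposures

print(f"sub-exposures per frame : {sub_exposures}")
print(f"each sub-exposure       : 1/{round(1 / sub_exposure_time)} s")
# -> 240 sub-exposures of roughly 1/14400 s each (before any filter-switching
#    dead time), in the same ballpark as the 1/16000 s figure in the post.
```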
  11. I think the Foveon/Sigma problem is probably a mix of (1) silicon being a pretty lousy (and probabilistic) color filter, thus providing "information" that requires an awful lot of massaging to make "real" color, (2) trying to pull information from three distinct layers of a chip, and (3) Sigma being a small company and not really an electronics company. So they probably don't have cutting-edge processing hardware, or custom hardware, or optimized code and algorithms. And there are just differences in the details of everybody's solutions that don't seem intrinsic to the basic technology - Panasonic, for instance, seems to have lower-power/lower-heat 4K processing than Sony right now, for whatever reason. I am a little puzzled why the Sigmas aren't better.

I think, if we are understanding the Sony imager right, the first issue is how good and how quick the electronically-changing color filter is. If it is good (quick and color accurate), then all you have to do, theoretically, is pull the numbers from each sensel and sum them individually for each color pass. You could do that locally for each pixel with very simple hardware in essentially no time - you just need an add-buffer. Then you have to get all the numbers off the sensor into the rest of the chip, but that only has to happen once a frame. And you don't have to do that quickly, or worry about rolling shutter, because the moment of exposure is controlled at the pixel. Global shutter is free.

The other issue is sensitivity. If we are right, then each frame is a lot of exposures in each color added up. Short exposures mean less light, even if the pixels are three times the size because there is no Bayer filter. My guess is that there will be a trade-off. In low light, you can have low noise/good sensitivity (by having longer individual color exposures and summing fewer of them), but then you will have problems with temporal aliasing of color (which you might be able to deal with in processing, but at a loss of resolution). Or you can have good temporal resolution by having more, shorter exposures, at the cost of greater noise/lesser sensitivity. Low light with lots of motion is going to be the worst case for this technology, if I am understanding it right.
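A minimal sketch of the per-pixel "add-buffer" idea and the noise trade-off it implies. The noise model is deliberately crude (Poisson shot noise plus a fixed read noise per sub-exposure, with invented electron counts), just to show why many short summed exposures cost you at low light compared with fewer, longer ones.

```python
# Crude per-pixel accumulation model: a frame is the sum of N sub-exposures.
# Shot noise depends only on the total light, but read noise is paid once per
# sub-exposure, so more (shorter) sub-exposures means a noisier total.
# All numbers are invented for illustration.
import math

def summed_frame_snr(total_signal_e, n_subexposures, read_noise_e=2.0):
    shot_noise = math.sqrt(total_signal_e)                    # Poisson, on the total
    read_noise = read_noise_e * math.sqrt(n_subexposures)     # adds in quadrature per read
    return total_signal_e / math.sqrt(shot_noise**2 + read_noise**2)

total_light = 400        # electrons reaching one pixel over the whole frame (low light)
for n in (3, 30, 240):   # a few long sub-exposures vs many short ones
    print(f"{n:3d} sub-exposures -> SNR {summed_frame_snr(total_light, n):5.1f}")
```

With these made-up numbers the SNR drops from roughly 20 with three sub-exposures to about 11 with 240, which is the low-light-plus-motion worst case described above.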
  12. On how the Foveon sensor works - you are both right. It does have multiple sensels for the different colors, but they are stacked on top of each other vertically. And it does have color filters, and it doesn't have color filters. Essentially, it uses the silicon itself as a filter - different frequencies of light penetrate to different depths, so they end up in different sensels.

On the new Sony sensor - we are all guessing, but if it is taking sequential pictures in different colors, then what you are doing is swapping spatial chroma aliasing for temporal chroma aliasing - if objects are moving, then a single object will exhibit weird, uneven color smearing that would have to be corrected in software, which will cost you resolution. That is why I think you are seeing the insane frame rates. If you were shooting 30p with a 1/60 shutter, and it did 1/180 second red, then 1/180 green, then 1/180 blue, anything that has changed position in 1/180 second is going to cause real problems. But if, in 1/60 of a second, it shoots r-b-g-r-b-g-r-b-g-r-b-g... and combines the exposures, the problem goes away to a large extent. That would be really exciting. It would also allow you to do some really, really cool things to deal with temporal aliasing and to give more pleasing motion blur - you could assign lower weights to the exposures at the beginning and end of the broader exposure interval. See http://www.red.com/learn/red-101/cinema-temporal-aliasing
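The "assign lower weights to the first and last sub-exposures" idea, as a minimal sketch. The triangular window here is just one assumed weighting; a real implementation would choose its own shape, and the image data is a random stand-in.

```python
# Sketch: shaping motion blur by blending many short sub-exposures with a
# weighting window, instead of one long exposure (which weights them all
# equally). The triangular window is purely an example choice.
import numpy as np

n_sub = 24                                          # sub-exposures inside one frame
sub_frames = np.random.rand(n_sub, 4, 4)            # stand-in image data, 4x4 pixels

uniform = np.ones(n_sub)                            # ordinary long exposure
triangular = 1 - np.abs(np.linspace(-1, 1, n_sub))  # de-emphasize start and end

def blend(frames, weights):
    w = weights / weights.sum()                     # keep overall brightness constant
    return np.tensordot(w, frames, axes=1)          # weighted sum over the time axis

classic_blur = blend(sub_frames, uniform)
shaped_blur = blend(sub_frames, triangular)
print(classic_blur.shape, shaped_blur.shape)        # both collapse to a single (4, 4) frame
```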
  13. I'd love to be wrong, but I'm guessing we are more likely to see this in an F5/F55 type camera...
  14. The discussion in the video did shoot off in a bunch of directions, which makes it hard to summarize. But if I were going to try, I would say the morals were the following:

They would rather have the next step be high dynamic range and larger color gamut with the same resolution than higher resolution with the same range/gamut for delivery of content. In other words, they like the new Dolby HDR stuff.

In terms of cameras, the nice things they said about higher resolution were all about effects, and crop in post. Not anything much about improving the image by improving the resolution. The big step, according to them, was not resolution but sensitivity - increased sensitivity meant easier/lighter/cheaper lighting, and more natural light.

I think dynamic range is way more important for us than it is for them. They are working in a world where if the shadows are dropping off into black, then they tell someone to shine a light there, and if the highlights are clipping, then they throw up some gels or some diffusion. They have whole crews, and a controlled environment, so they can make the scene fit into the dynamic range they have. They might like to deliver in HDR, and they might like to have the safety margin that the latitude gives them, but that isn't the main issue. For most of us, the issues are different. We don't have the control over the environment, and we don't have three people working the camera and another whole crew manning the lighting. So high dynamic range is more about the fact that we can't be sure the camera is set just right or the scene is set just right, and we need to be able to fix in post and not worry so much when we shoot.
  15. How is the 60p in the crop mode? If I am switching to that for action to avoid rolling shutter, I'd like to be able to do slo-mo. Does it have the same softness problems as the full-frame 60p?
  16. I love the colors I am seeing from the gh4... except the greens. It's like a crayon box with lots of different colors, and just three greens. Is this in my imagination, or am I really seeing something? I guess I would rather have the skin-tones saved, but still...
  17. I think that part of the problem is that when we think about "8-bit 4:2:0 1080p" we have a particular picture in mind, a perfectly reasonable one - a big rectangle of 8-bit numbers, each essentially independent. That is how our 8-bit monitor works - one number can be 255, the next one 0, the next 131, and so on. On that picture, banding will appear due to quantization when you try to represent smoothly varying material directly. (It will appear in 10-bit (or 12-bit, or 14-bit...) too, but the bands will be smaller and closer together, so at some point they become imperceptible.) Dithering can decrease banding and increase perceived color range because our eye essentially averages over neighboring pixels, and throwing in a (semi)random 16 among a bunch of 15s makes us think we are seeing, say, 15.25.

The point Andrew keeps making is that what we get (out of non-raw cameras) is COMPRESSED 8-bit 4:2:0, for example. And the way compression works is that it does not allow you to just randomly place numbers in boxes in the array. It removes "extraneous" detail to reduce information. So it would see that 16 among all the 15s and say "hmm - outlier - toss it." In the end, the quality of the implementation of the compression is going to be the big difference maker (as Andrew, again, always says), and so the 8-bit spec will be less important than the algorithms implemented. If the final output is compressed in any of the standard ways, the dither will go away and the banding will return.
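A small sketch of the banding-and-dither point on a synthetic gradient. The quantizer and dither amount are toy choices, and no real codec is modelled; the post's point is precisely that heavy compression tends to throw the dither "noise" away.

```python
# Toy demonstration: an 8-bit quantizer turns a very gentle luminance ramp
# into one flat band, while a touch of pre-quantization noise (dither) lets
# the local average still track the ramp. No real codec is modelled here.
import numpy as np

rng = np.random.default_rng(0)
gradient = np.linspace(0.400, 0.401, 2000)           # ramp spanning well under one 8-bit step

def quantize(signal, bits=8):
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

banded = quantize(gradient)                           # plain 8-bit: collapses to one value
dithered = quantize(gradient + rng.uniform(-0.5, 0.5, gradient.size) / 255)

print("codes used, no dither :", len(np.unique(banded)))     # 1 flat band
print("codes used, dithered  :", len(np.unique(dithered)))   # 2 codes, mixed
print("true mean of the ramp :", round(float(gradient.mean()), 5))
print("mean, no dither       :", round(float(banded.mean()), 5))
print("mean, dithered        :", round(float(dithered.mean()), 5))   # tracks the true mean
```

The dithered version uses the same 8-bit codes, but averaged over a patch (as the eye does) it recovers in-between values the banded version cannot represent; strip the dither out in compression and the flat band comes back.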
  18. Um... no. To quote from Wikipedia: "In photography, the circle of confusion (CoC) is used to determine the depth of field, the part of an image that is acceptably sharp. A standard value of CoC is often associated with each image format, but the most appropriate value depends on visual acuity, viewing conditions, and the amount of enlargement."
  19. I buy the argument that there is no mathematical argument for FF over, say, m43. There is no mathematical reason why the ecosystems couldn't offer exactly the same stuff. But as you yourself say, they don't offer the same stuff. The lenses you might want, and can get on FF, don't exist for m43. They could exist. But they don't. Maybe they will one day, but for now the native lenses are overpriced and comparatively limited. Speedboosters do a lot for that, but really, should we need these? What I am hoping happens is that as the cameras get better and the consumers get more educated, the lens makers will offer more compelling choices for the smaller formats.
  20. Sigh. This is what you get for trying to be a peace-maker and explain how both sides have a point.... Depth of Field is an optical property. But it is a property with a free parameter, to use technical speak on you. You have to specify a circle of confusion. Otherwise the depth of field (for an optically perfect system) is always zero. There is exactly one distance where the light is focussed to a mathematically perfect point, regardless of aperture. (In the real world, there is NO distance where this happens, so there is no (perfect) depth of field.) What counts as "the area in focus" depends on what you mean by "in focus," and you can't mean "perfectly in focus" because then the answer is always zero. The free parameter is the circle of confusion. If you don't specify a value for that parameter, then there is no depth of field in any meaningful sense. The point (and oh my god I feel stupid even as I type this, like I am being sucked into some kind of internet madness...) is that what is a reasonable value for this parameter depends on the resolving capacity of the system (film, digital, or whatever) that you are talking about. Nothing optically changes, please let us all accept this. But the number you get for depth of field will change when you change the number for the circle of confusion, and it is not completely unreasonable to change the number for the circle of confusion if the resolution changes. Everyone is right. Please let it end.
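For reference, the standard thin-lens depth-of-field relations, with the circle of confusion c written out as the explicit free parameter the post is talking about: as c goes to zero, the near and far limits both collapse onto the focus distance and the depth of field goes to zero.

```latex
% Standard depth-of-field relations (thin-lens approximation).
% f = focal length, N = f-number, s = focus distance, c = circle of confusion.
\[
  H = \frac{f^2}{N c} + f \qquad \text{(hyperfocal distance)}
\]
\[
  D_\text{near} = \frac{s\,(H - f)}{H + s - 2f},
  \qquad
  D_\text{far} = \frac{s\,(H - f)}{H - s} \quad (s < H)
\]
\[
  \text{DoF} = D_\text{far} - D_\text{near} \to 0 \quad \text{as } c \to 0
\]
```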
  21. Is this what is going on, and does this help resolve it? Depth of field and circles of confusion are optical properties, independent of sensor. We should agree on this. However, the relevant circle of confusion changes depending on your sensor. You want your lens to resolve well enough to get the best out of your sensor, but it doesn't need to be better. Larger photosites -> tolerance of larger circles of confusion. Lower resolution -> tolerance of larger circles of confusion. Circles of confusion feed, mathematically, into depth of field calculations, because the depth of field is the region where things are acceptably in focus, as defined by your specified circle of confusion.

Next: other things affect resolution apart from megapixels - processing, for instance. Noise reduction can reduce resolution. Higher ISOs have more aggressive noise reduction, so can have lower resolution. So the argument is: higher ISO -> more processing that lowers effective resolution -> larger acceptable circle of confusion -> deeper depth of field. This isn't magic; it doesn't make any more of the scene be in focus. What it does is make more of the scene be equally just-out-of-focus, to put it one way, because the lower resolution is inherently less sharp and so more of the image will be "sharp enough."
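To make that chain concrete, a quick sketch using the standard hyperfocal/DoF formulas quoted above: identical lens, aperture, and focus distance, with only the assumed circle of confusion changing. The CoC values are illustrative, not tied to any particular sensor or ISO.

```python
# Same lens, same aperture, same focus distance; only the assumed circle of
# confusion changes. Standard thin-lens DoF formulas; the CoC values are
# illustrative, not tied to any particular sensor or ISO.

def depth_of_field(focal_mm, f_number, focus_mm, coc_mm):
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return far - near

focal, aperture, focus = 50, 2.8, 3000            # 50mm at f/2.8, focused at 3 m
for coc in (0.015, 0.030, 0.060):                 # tighter vs looser "acceptably sharp"
    dof_mm = depth_of_field(focal, aperture, focus, coc)
    print(f"CoC {coc:.3f} mm -> depth of field about {dof_mm / 10:.0f} cm")
```

Nothing optical has changed between the three lines; doubling the acceptable circle of confusion roughly doubles the computed depth of field, which is exactly the "more of the image is sharp enough" point.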
  22. Don't want to make it a fight (or any more of a fight than it has been in this thread...) but... It is much harder to make a full-frame f/2.8 than a m43 f/2.8 - that was part of the point of my earlier post. You have to produce a quality image circle that is 4x the area, which takes more glass and a lot more precision. I think a lot of this discussion is being prompted by the competing cameras with different formats, but also by the speedbooster and like products. We know now that you can take a full frame lens and bolt on some more glass and get a great product for a crop sensor with an amazing aperture that outcompetes the "native" crop lenses in many ways. And people love it. So why are the lens makers for smaller formats producing lenses that just match the aperture of the larger format lenses, when the speedbooster makes it clear that they could do better?
  23. There is a lot of apparent disagreement here but not much actual disagreement, I think. Is this a fair summary?

If I am buying a camera BODY, there are some numbers that I should know:
- sensor size
- pixel count
Now, all other things being equal, more pixels is nice (up to a point), but as Andrew points out at the beginning of every camera report, more pixels on a smaller sensor probably means worse pixels, all other things being equal. But the bottom line is that, especially for video, we have more pixels than we need. Really, it is PIXEL SIZE, which is derived from the above two numbers (roughly - Andrew's point about global shutter circuitry and other technologies like backside illumination must be taken into account), that matters more. However, all other things are almost never equal. So if we want to know how good the actual image is, everything is in the details. If I want to know the details, then I rely on review sites that take pictures under carefully controlled conditions with the best possible lenses, and look at how ISO 800 pictures from different bodies compare in terms of noise and resolution and so on. If there is a fudge factor in ISO, as the OP video claims, it will show up in the quality of the images at this point, so this does not concern me. Crop cameras should look worse by the factor he claims, and if so then... they will look worse, and we will all be able to tell.

Now what if I am buying camera LENSES? First, we all know that there are enormous variations in lens quality - both in technical quality (MTF etc.) and in aesthetic quality. Put that aside for a moment and assume all these things are equal. A lens can be described by three numbers (ignoring zoom, minimum focus, etc.):
- focal length
- maximum aperture
- image circle
If I have my camera body already, then I have to make sure I buy lenses with a big enough image circle. If I have a full-frame body and I buy APS-C-only lenses, I am going to be really disappointed by what I see, will be using crop mode all the time, and might as well have bought an APS-C camera. But I can put a larger image circle lens on a smaller sensor body (assuming I can physically attach it) and it will work fine, giving me images (again, all other things being equal) equivalent to a crop of the larger sensor body the lens was intended for. Of course, because of the crop, I may need to reframe for the same image, and because of that, depth of field and perspective will not be the same. By multiplying the focal length by the crop factor, I know how much I need to reframe for the same image (or conversely, I know which different-focal-length lens to choose so I don't have to reframe).

OK. Now let's talk about what you get for your money. Suppose I am comparing two lenses that will render the same field of view on their native sensor size and are of roughly equivalent "quality" - say, a 50mm f/2.8 full-frame and a 25mm f/2.8 m43. While similar in many, many ways, the fundamental difference is that the full-frame lens renders a larger image circle, which means that it is a more complex piece of engineering - it maintains an adequate image over a 4x larger area. That means more glass (and a heavier lens), tighter tolerances, etc. It is much harder to keep that wide aperture and still resolve over that large image circle. Also, if shallow depth of field is your thing, you are not going to get as shallow with the m43 on a native body.
If I put that full-frame lens on a crop body it will work fine, but I am wasting an awful lot of engineering by doing so, because much of the image circle is just being ignored. Also, as Andrew points out, I may lose an awful lot of the character of the lens, which comes from its fall-off, distortions, etc. But this may not worry me. The magic of the speedbooster is that it takes all the engineering, all the character, of the larger-format lens and squeezes it down to fit onto a smaller sensor, so you are not just throwing that away. It means you get the lens as it would be on the larger sensor.

Now here is the part where I agree with the video that started all this. If you look at the lenses and their prices, full-frame lenses look like much better value. That 50mm f/2.8 FF and that 25mm f/2.8 m43 cost close to the same, but if you think about the job they are doing, the full-frame lens is way more impressive. (It is also way heavier, because of the job it is doing.) And on a native body, it will do things the other won't, like a shallower depth of field. If that is what matters to me, then yes, I need a 25mm f/1.4 native m43 lens. If that doesn't matter to me, and if other things matter more (like size), then I may not care about the full-frame comparison.

I'm looking at getting a new body and new lenses, and I am facing this issue every time I try to decide. I look at the GH4, and I look at the native lens selection and what you pay for them, and what they do compared to full-frame or APS-C lenses and what you pay for those, and the value seems off. I get much more for my money with the larger-format lenses. But I pay more for the body that handles those lenses, and the body may not be as good for video anyway. It is, after all, the combination of lens plus body that produces the actual images.
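Putting the "two numbers plus arithmetic" claims from this post in one place: pixel pitch from sensor size and pixel count, crop factor from the sensor diagonals, and the "equivalent" lens on the smaller format. The example bodies are generic (a 24MP full-frame and a 16MP m43 with nominal sensor dimensions), not specific cameras, and "equivalent" here means matching field of view and depth of field, not exposure.

```python
# Rough arithmetic: pixel pitch from sensor size and pixel count, crop factor
# from the sensor diagonal, and the lens that matches field of view and depth
# of field on the smaller format. Example bodies are generic, not real models.
import math

def diagonal(w, h):
    return math.hypot(w, h)

full_frame = {"width_mm": 36.0, "height_mm": 24.0, "megapixels": 24}
m43        = {"width_mm": 17.3, "height_mm": 13.0, "megapixels": 16}

for name, s in (("full frame", full_frame), ("m43", m43)):
    pixels = s["megapixels"] * 1e6
    horizontal_pixels = math.sqrt(pixels * s["width_mm"] / s["height_mm"])
    pitch_um = s["width_mm"] * 1000 / horizontal_pixels
    print(f"{name:10s} pixel pitch ~{pitch_um:.1f} um")

crop = (diagonal(full_frame["width_mm"], full_frame["height_mm"])
        / diagonal(m43["width_mm"], m43["height_mm"]))

ff_lens = (50, 2.8)        # the 50mm f/2.8 full-frame lens from the post
print(f"crop factor ~{crop:.2f}")
print(f"{ff_lens[0]}mm f/{ff_lens[1]} on full frame ~= "
      f"{ff_lens[0] / crop:.0f}mm f/{ff_lens[1] / crop:.1f} on m43 "
      "(same framing and depth of field, NOT the same exposure)")
```

With these inputs the crop factor comes out at about 2.0, so the 50mm f/2.8 full-frame lens lines up with the 25mm f/1.4 native m43 lens mentioned above.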
  24. He's right and he's wrong. It depends on what you think f-stops are telling you. If you think they are telling you depth of field, he is right. If you think they are telling you how to expose, he is wrong. Look at how he has to change the ISO to get the images above. An m43 f/2.8 will expose about the same as a full-frame f/2.8. But if you are buying lenses to get shallow depth of field, then you have to use the multiplier on the aperture, just like you do on the focal length to compare field of view.
  25. The short looks lovely. I have a question about something I seem to be seeing in almost all the GH4 footage, but I can't tell if it is a grading choice or what. The greens (particularly foliage greens) seem kind of garish and lacking in dynamic range - there is one green, and it is really bright. Is that an artifact of recovering the skin tones, or actually just part of the color response of the camera?