
cpc

Members
  • Posts: 190
  • Joined

  • Last visited

About cpc

Profile Information

  • Gender
    Not Telling

Contact Methods

  • Website URL
    http://www.shutterangle.com


cpc's Achievements

Active member (3/5)

102 Reputation

  1. Smaller VF magnification can be a positive for spectacle wearers, as you don't have to move your eye around the image. I've dumped otherwise great cameras before because of excessive VF magnification.
  2. How do you price size, though? Using the official dimensions, the A7C fits in a box of half the volume of the S5's bounding box (quick check below).
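    A quick sanity check of that claim, assuming I have the published W x H x D spec numbers right (treat them as approximate):

```python
# Bounding-box volumes from the published body dimensions (mm).
# Dimensions are my reading of the official spec sheets; verify before quoting.
a7c = (124.0, 71.1, 59.7)   # Sony A7C: W x H x D
s5  = (132.6, 97.1, 81.9)   # Panasonic S5

def box_volume_cm3(dims_mm):
    w, h, d = dims_mm
    return w * h * d / 1000.0   # mm^3 -> cm^3

va, vs = box_volume_cm3(a7c), box_volume_cm3(s5)
print(f"A7C box: {va:.0f} cm^3, S5 box: {vs:.0f} cm^3, ratio {va / vs:.2f}")
# -> A7C box: 526 cm^3, S5 box: 1054 cm^3, ratio 0.50
```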
  3. re: appeal The Sony NEX 6 has the best size-feature balance for small hands and spectacles of all the digital cameras I've tried. The A7 series is big and heavy, particularly after the first iteration (the main reason I still use an A7s mark I as my primary photo camera), and the EVF position is worse (for me) than on the NEX series. This camera, on the other hand... color me interested. The position of the EVF alone is an insta-win.
  4. cpc

    Sony A7S III

    This is too optimistic, I think. The A7s needed overexposure in s-log; it was barely usable at nominal ISO (and I am being generous with my wording here). With the A7s III's lower base ISO in s-log3 (640 vs 1600 on the A7s), Sony now basically makes this overexposure implicit.
  5. cpc

    Sony A7S III

    For determining the clip point it doesn't matter if the footage is overexposed; overexposure doesn't move the clip point, and if anything it makes the point easier to find. All you need is a hard-clipping area (like the sun).

    re: exposing While a digital spotmeter would be the perfect tool for exposing log, the A7s II does have "gamma assist", where you record s-log but preview an image properly tone mapped for display. The A7s III likely has this too.

    You don't really need perfect white balance in-camera when shooting a fully reversible curve like s-log3. It can be white balanced in post in a mathematically correct way, similarly to how you balance raw in post (see the sketch below). You only need in-camera WB to be in the ballpark, to maximize utilization of the available tonal precision.
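    As a sketch of what "mathematically correct" means here: decode s-log3 to linear, apply per-channel gains, re-encode. A minimal version using Sony's published s-log3 formula (the gain values are hypothetical; in practice you'd derive them from a neutral patch):

```python
import numpy as np

# Sony's published s-log3 <-> linear-reflection formulas (values in 0..1).
CUT_LOG = 171.2102946929 / 1023.0   # code value at the segment break
CUT_LIN = 0.01125000                # linear value at the segment break

def slog3_to_linear(y):
    y = np.asarray(y, dtype=np.float64)
    return np.where(
        y >= CUT_LOG,
        10.0 ** ((y * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01,
        (y * 1023.0 - 95.0) * CUT_LIN / (171.2102946929 - 95.0))

def linear_to_slog3(x):
    x = np.asarray(x, dtype=np.float64)
    # clamp the log-branch argument so np.where doesn't warn on the unused side
    hi = (420.0 + np.log10((np.maximum(x, CUT_LIN) + 0.01) / (0.18 + 0.01))
          * 261.5) / 1023.0
    lo = (x * (171.2102946929 - 95.0) / CUT_LIN + 95.0) / 1023.0
    return np.where(x >= CUT_LIN, hi, lo)

def white_balance_slog3(img, gains=(1.0, 1.0, 1.0)):
    """img: HxWx3 float s-log3 values; gains: per-channel linear multipliers
    (hypothetical values; same math as balancing raw in post)."""
    lin = slog3_to_linear(img) * np.asarray(gains, dtype=np.float64)
    return linear_to_slog3(lin)
```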
  6. cpc

    Sony A7S III

    The sun is so bright that you'd need significant underexposure to bring it below clip levels (on any camera), and these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, that is slightly more than 6 stops above middle gray.

    With "metadata ISO" cameras like the Alexa, the clip point in Log-C moves up at ISOs above base and down at ISOs below base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this was likely shot at the base s-log3 ISO of 640. In any case, the s-log3 curve has a nominal range of around 9 stops below middle gray (the usable range is obviously significantly lower), so this ties in with the advertised 15 stops of DR in video.

    You can think of the camera as shooting 10 - log2(1024 / (0.87*1024 - 95)) bit footage in s-log3. That is, as a 9.64-bit camera.
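    For the curious, here is the arithmetic spelled out (same published s-log3 inverse as in the previous post; only the log segment matters near clip):

```python
import math

def slog3_code_to_linear(code):
    # log segment of Sony's published s-log3 inverse (code in 0..1)
    return 10.0 ** ((code * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01

clip = 0.87106
print(math.log2(slog3_code_to_linear(clip) / 0.18))   # ~6.06 stops over gray

# "Effective bit depth": the usable code range runs from code 95 (s-log3's
# zero point) up to the observed clip, not over the full 10-bit range.
print(10 - math.log2(1024 / (0.87 * 1024 - 95)))      # ~9.64 "bits"
```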
  7. cpc

    Sony A7S III

    With the middle point mapped as per the specification, the camera simply lacks the highlight latitude to fill the entire available s-log3 range. Basically, it clips lower than what s-log3 can handle. You should still be importing as data levels: this is not a bug, it is expected. Importing as video levels simply stretches the signal; you are importing it wrong and increasing the gamma of the straight portion of the curve (it is no longer the s-log3 curve), thus throwing off any subsequent processing that relies on the curve being correct.
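    To make the stretching concrete, here is a sketch of what a video-levels interpretation does to 10-bit s-log3 code values (codes 64 and 940 are the standard 10-bit legal-range black and white points):

```python
# Interpreting full-range (data-levels) codes as legal-range video levels
# remaps [64, 940] to the full output range, stretching the curve.
def video_levels_stretch(code10):
    return (code10 - 64.0) / (940.0 - 64.0)   # normalized 0..1 output

for name, code in [("mid gray (420)", 420), ("clip (~891)", 891)]:
    print(name, round(code / 1023.0, 3), "->",
          round(video_levels_stretch(code), 3))
# mid gray: 0.411 -> 0.406; clip: 0.871 -> 0.944. Codes below 64 go
# negative (clip to zero once clamped), and the log segment no longer
# matches the s-log3 formula, so later transforms land on a wrong curve.
```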
  8. @Lensmonkey: Raw really is the same as shooting film. Something to take into consideration, though, is that middle gray practically never falls in the middle of the exposure range on negative film: you have tons of overexposure latitude and very little underexposure latitude, so overexposing a negative for a denser image is very common. With raw on lower-end cameras it is quite the opposite: you don't really have much (if any) latitude for overexposure, because of the hard clip at sensor saturation levels, but you can often rate faster (higher ISO) and underexpose a bit. This holds provided that ISO is merely a metadata label, which is true for most cinema cameras, and judging by the chart it is likely true for the Sigma up to around 1600, where some analog gain change kicks in.
  9. Your "uncompressed reference" has lost information that the 4:4:4 codecs retain, hence the difference. You should use uncompressed RGB as the reference, not YUV, and certainly not 4:2:2 subsampled. Remember, 4:2:2 subsampling is a form of compression.
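    A toy illustration of why the reference choice matters (a hypothetical single chroma plane; the point generalizes):

```python
import numpy as np

rng = np.random.default_rng(0)
chroma = rng.random((16, 16))   # full-resolution chroma plane

# Build a 4:2:2-style "reference": halve chroma horizontally, upsample back.
sub = (chroma[:, 0::2] + chroma[:, 1::2]) / 2.0   # average horizontal pairs
ref = np.repeat(sub, 2, axis=1)                   # nearest-neighbor upsample

codec_out = chroma.copy()   # a perfect 4:4:4 "codec" keeps everything

print(np.abs(codec_out - chroma).max())   # 0.0 vs the true source
print(np.abs(codec_out - ref).max())      # > 0 vs the lossy "reference"
# The 4:4:4 encode is penalized for detail the reference itself threw away.
```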
  10. Can't argue with this; I use manual lenses almost exclusively myself. On the other hand, ML does provide by far the best exposure and (manual) focusing tools available in-camera south of 10 grand, maybe more, so this offsets the lack of IBIS somewhat. I am amazed these tools aren't matched by newer cameras 8 years later.
  11. A 2012 5D mark III shoots beautiful 1080p full-frame 14-bit lossless compressed raw, with more sharpness than you'll ever need for YT, at a bit rate comparable to ProRes XQ. If IS lenses will do instead of IBIS, I don't think you'll find a better deal.
  12. The problem is missing timecodes in the audio files recorded by the camera. Resolve needs these to auto-sync audio and image and present them as a single entity. Paul has posted a workaround here: As a general rule, if the uncompressed image and audio don't auto-sync in Resolve, the compressed image won't auto-sync either.
  13. I will be surprised if Resolve rescales in anything other than the image's native gamma, that is, in whatever gamma the values are at the point of the rescale operation. If anything, some apps convert from sRGB or a power gamma to linear for scaling, and then back.

    You can do various transforms to shrink the ratio between extremes, and this will generally reduce ringing artifacts. I know people deliberately gamma/log transform linear rendered images for rescale. But it is mathematically and physically incorrect. There are examples and lengthy write-ups on the web about what can go wrong if you scale in non-linear gamma, but perhaps the most intuitive way to think about it is in "energy conserving" terms: if you don't scale in linear, you are altering the (locally) average brightness of the scene. You may not see this easily in real-life images, because it is often masked by detail, but do a thought experiment with, say, a grayscale synthetic 2x1 image scaled down to 1x1 and see what happens (there is a sketch of exactly this below).

    I have a strong dislike for ringing artifacts myself, but I believe the correct approach to reducing them is to pre-blur to band-limit the signal and/or use a different filter: for example, Lanczos with fewer lobes, or Lanczos with pre-weighted samples; or go to splines/cubic; and sometimes bilinear is fine for downscales between 1x and 2x, since it has only positive weights. On the other hand, as we all know very well, theory and practice can diverge, so whatever produces good-looking results is fine.

    Rescaling Bayer data is certainly more artifact prone, because of the missing samples and the unknown of the subsequent debayer algorithm. This is also the main reason SlimRAW only downscales precisely 2x for DNG proxies. It is actually possible to do Bayer-aware interpolation and scale 3 layers instead of 4; this way the green channel benefits from double the information compared to the others. You can think of this as interpolating "in place", rather than scaling with a subsequent Bayer rearrangement. Just as you can scale a full-color image in dozens of ways, you can do the same with a Bayer mosaic, and I don't think there is a "proper" way to do it. It is all a matter of managing trade-offs, with the added complexity that you have no control over exactly how the image will then be debayered in post. It is in this sense that rescaling Bayer is worse -- you are creating an intermediate image which will have to endure some serious additional reconstruction.

    Ideally, you should resize after debayering, because an advanced debayer method will try to use all channels simultaneously (also, see below). This is possible, and you can definitely extract more information and get better results by using neighboring pixels of different colors, because the channels correlate somewhat. Exploiting this correlation is at the heart of many debayer algorithms, and, in some sense, memorizing many patterns of correlating samples is how recent NN-based debayering models work. But if you go this way, you may just as well compress and record the debayered image with enough additional metadata to allow WB tweaks and exposure compensation in post, or simply go the partially debayered route, similar to BRAW or Canon Raw Light.

    In any case, we should keep in mind that the higher the resolution, the less noticeable the artifacts. And 4K is quite a lot of pixels. In real-life images I don't think noticeable problems are very likely, other than the occasional no-OLPF aliasing issues.
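    The 2x1 thought experiment in a few lines (a pure 2.2 power stands in for the display gamma; the exact curve doesn't change the conclusion):

```python
GAMMA = 2.2   # stand-in display gamma

def to_linear(v): return v ** GAMMA
def to_gamma(v):  return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0   # a gamma-encoded 2x1 image

# Downscale by averaging encoded values (scaling in gamma):
print(to_linear((black + white) / 2.0))   # ~0.218 of white's light

# Downscale by averaging light (scaling in linear), then re-encode:
print(to_gamma((to_linear(black) + to_linear(white)) / 2.0))   # ~0.730

# Physically, the 1x1 result should emit half of white's light. Scaling in
# gamma darkens the patch to ~22%; scaling in linear gets it right.
```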
  14. Binning is also scaling. Hardware binning will normally just group per-channel pixels together without further spatial considerations, but a weighted binning technique is basically bilinear interpolation (when halving resolution); see the sketch below. Mathematically, scaling should be done in linear, assuming the samples are an approximately linear encoding of light, which may or may not be the case. Digital sensors, in general, have good linearity of light intensity levels (certainly way more consistent than film), but the native sensor gamut is not a clean linear tri-color space. If you recall the rules of proper compositing, scaling is very similar -- you do it in linear to preserve the way light behaves. You sometimes may get better results with non-linear data, but this is likely down to idiosyncrasies of the specific case and is not the norm.

    re: Sigma's downscale I assume, yes, they simply downsample per channel and arrange the result into a Bayer mosaic. Bayer reconstruction itself is a process of interpolation: you need to conjure samples out of thin air. No matter how advanced the method, and there are some really involved methods, it is really just that, divination of sample values. So anything that loses information beforehand, including a per-channel downsample, will hinder reconstruction. Depending on how the downscale is done, you can obstruct the reconstruction of some shapes more than others, so you might need to prioritize this or that. A simple example of the trade-offs: binning may have better SNR than some interpolation methods, but will result in worse diagonal detail.
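    A sketch of the equivalence when halving: unweighted 2x2 binning is exactly a box-filter downscale by 2, i.e. bilinear with its taps centered on the 2x2 block.

```python
import numpy as np

def bin2x2(plane):
    """Average each 2x2 block: hardware-style binning of one color plane."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

plane = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin2x2(plane))
# [[ 2.5  4.5]
#  [10.5 12.5]]  -- each output pixel is the mean of its 2x2 input block
```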
  15. Only because I have code lying around that does this in multiple ways, and it shows various ways of producing artifacts without doing anything weird. It is not necessary to do crazy weird things to break the image. Even the fanciest Bayer downscale will produce an image noticeably worse than debayering at full resolution and then downscaling to the target resolution; there's no way around it, even in the conceptually easiest case of a 50% downscale (sketched below).
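    For reference, the conceptually easiest case looks like this: a per-channel 2x halving of an RGGB mosaic. This is a minimal sketch of the naive per-channel grouping scheme, not any particular product's algorithm:

```python
import numpy as np

def bin2x2(plane):
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def bayer_halve_rggb(mosaic):
    """Halve an RGGB mosaic (dimensions divisible by 4): bin each color
    subplane 2x2 and re-interleave. Spatially naive, so diagonal detail
    suffers, as discussed above."""
    r  = bin2x2(mosaic[0::2, 0::2])   # red sites
    g1 = bin2x2(mosaic[0::2, 1::2])   # green sites on red rows
    g2 = bin2x2(mosaic[1::2, 0::2])   # green sites on blue rows
    b  = bin2x2(mosaic[1::2, 1::2])   # blue sites
    out = np.empty((mosaic.shape[0] // 2, mosaic.shape[1] // 2),
                   dtype=np.float64)
    out[0::2, 0::2], out[0::2, 1::2] = r, g1
    out[1::2, 0::2], out[1::2, 1::2] = g2, b
    return out
```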