cpc

Members
  • Content Count

    178
  • Joined

  • Last visited

About cpc

  • Rank
    Active member

Profile Information

  • Gender
    Not Telling

Contact Methods

  • Website URL
    http://www.shutterangle.com

  1. I will be surprised if Resolve rescales in anything other than the image's native gamma, that is, in whatever gamma the values are at the point of the rescale operation. But if anything, some apps convert from sRGB or power gamma to linear for scaling, and then back. You can do various transforms to shrink the ratio between extremes, and this will generally reduce ringing artifacts. I know people deliberately gamma/log transform linear rendered images before rescaling. But it is mathematically and physically incorrect. There are examples and lengthy write-ups on the web about what can go wrong if you scale in non-linear gamma, but perhaps most intuitively you can think about it in an "energy conserving" manner. If you don't do it in linear, you are altering the (locally) average brightness of the scene. You may not see this easily in real life images, because it will often be masked by detail, but do a thought experiment about, say, a greyscale synthetic 2x1 image scaled down to a 1x1 image and see what happens (there is a numeric sketch after this post).
I have a strong dislike for ringing artifacts myself, but I believe the correct approach to reduce them would be to pre-blur to band limit the signal and/or use a different filter: for example, Lanczos with fewer lobes, or Lanczos with pre-weighted samples; or go to splines/cubic; and sometimes bilinear is fine for downscales between 1x and 2x, since it has only positive weights. On the other hand, as we all very well know, theory and practice can diverge, so whatever produces good looking results is fine.
Rescaling Bayer data is certainly more artifact prone, because of the missing samples and the unknown of the subsequent debayer algorithm. This is also the main reason slimRAW only downscales by exactly 2x for DNG proxies. It is actually possible to do Bayer-aware interpolation and scale 3 layers instead of 4. This way the green channel will benefit from double the information compared to the others. You can think of this as interpolating "in place", rather than scaling with subsequent Bayer rearrangement.
Similar to how you can scale a full color image in dozens of ways, you can do the same with a Bayer mosaic, and I don't think there is a "proper" way to do this. It is all a matter of managing trade-offs, with the added complexity that you have no control over exactly how the image will then be debayered in post. It is in this sense that rescaling Bayer is worse -- you are creating an intermediate image which will need to endure some serious additional reconstruction. Ideally, you should resize after debayering, because an advanced debayer method will try to use all channels simultaneously (also, see below). This is possible, and you can definitely extract more information and get better results, by using neighboring pixels of a different color, because channels correlate somewhat. Exploiting this correlation is at the heart of many debayer algorithms, and, in some sense, memorizing many patterns of correlating samples is how recent NN-based debayering models work. But if you go this way, you may just as well compress and record the debayered image with enough additional metadata to allow WB tweaks and exposure compensation in post, or simply go the partially debayered route similar to BRAW or Canon Raw Light.
In any case, we should also keep in mind that the higher the resolution, the less noticeable the artifacts. And 4K is quite a lot of pixels. In real life images I don't think it is very likely that there will be noticeable problems, other than the occasional no-OLPF aliasing issues.
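Here is that 2x1 thought experiment as a minimal numeric sketch (Python; a simple 2.2 power gamma is assumed, and the numbers are purely illustrative):

```python
# Average a white and a black pixel down to one pixel, two ways.
black, white = 0.0, 1.0

# Naive: average the gamma-encoded values directly.
naive = (black + white) / 2                        # 0.5 encoded

# Correct: decode to linear light, average, re-encode.
linear_avg = (black ** 2.2 + white ** 2.2) / 2     # 0.5 in linear light
correct = linear_avg ** (1 / 2.2)                  # ~0.73 encoded

# The naive result represents only 0.5 ** 2.2 = ~0.22 of linear light
# instead of the true 0.5: the downscale has darkened the scene.
print(naive, correct)
```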
  2. Binning is also scaling. Hardware binning will normally just group per-channel pixels together without further spatial considerations, but a weighted binning technique is basically bilinear interpolation (when halving resolution); there is a small sketch of plain per-channel binning after this post. Mathematically, scaling should be done in linear, assuming the samples are in an approximately linear space, which may or may not be the case. Digital sensors, in general, have good linearity of light intensity levels (certainly way more consistent than film), but the native sensor gamut is not a clean linear tri-color space. If you recall the rules of proper compositing, scaling itself is very similar -- you do it in linear to preserve the way light behaves. You sometimes may get better results with non-linear data, but this is likely related to idiosyncrasies of the specific case and is not the norm.
Re: Sigma's downscale, I assume, yes, they simply downsample per channel and arrange the result into a Bayer mosaic. Bayer reconstruction itself is a process of interpolation: you need to conjure samples out of thin air. No matter how advanced the method, and there are some really involved methods, it is really just that, divination of sample values. So anything that loses information beforehand, including a per-channel downsample, will hinder reconstruction. Depending on the way the downscale is done, you can obstruct the reconstruction of some shapes more than others, so you might need to prioritize this or that. A simple example of the trade-offs: binning may have better SNR than some interpolation methods but will result in worse diagonal detail.
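A minimal sketch of plain 2x per-channel binning on a Bayer mosaic (Python/NumPy; the RGGB-style layout and dimensions divisible by 4 are assumptions, not anything camera-specific):

```python
import numpy as np

def bin_bayer_2x(raw: np.ndarray) -> np.ndarray:
    """Halve a Bayer mosaic by averaging 2x2 groups of same-color sites."""
    h, w = raw.shape
    out = np.empty((h // 2, w // 2), dtype=np.float64)
    # Split the mosaic into its four color planes, bin each plane 2x2
    # (on linear data!), then re-interleave into a half-size mosaic.
    for dy in (0, 1):
        for dx in (0, 1):
            plane = raw[dy::2, dx::2].astype(np.float64)
            out[dy::2, dx::2] = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                                 plane[1::2, 0::2] + plane[1::2, 1::2]) / 4
    return out
```

The same loop with distance-based weights instead of the flat 1/4 is the weighted binning mentioned above, i.e. essentially bilinear interpolation per channel.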
  3. Only because I have code lying around that does this in multiple ways, and it shows various ways of producing artifacts without doing anything weird. It is not necessary to do crazy things to break the image. Even the fanciest way of Bayer downscale will produce an image that's noticeably worse than debayering at full resolution and then downscaling to the target resolution; there's no way around it, even in the conceptually easiest case of a 50% downscale.
  4. Thanks. Here is the same file scaled down to 3000x2000 Bayer in four different ways (line skipping, two types of binning, and somewhat fancier interpolation); a minimal line-skipping sketch follows this post. Not the same as 6K-to-4K Bayer, but it might be interesting anyway. _SDI2324_bin.DNG _SDI2324_interp.DNG _SDI2324_skip.DNG _SDI2324_wbin.DNG
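For comparison with the binning sketch above, here is what line skipping amounts to (same assumptions: Python/NumPy, hypothetical RGGB-style mosaic, dimensions divisible by 4):

```python
import numpy as np

def skip_bayer_2x(raw: np.ndarray) -> np.ndarray:
    """Halve a Bayer mosaic by keeping every other 2x2 Bayer cell."""
    h, w = raw.shape
    out = np.empty((h // 2, w // 2), dtype=raw.dtype)
    # Keep one 2x2 cell, drop the next, in both directions. The mosaic
    # layout survives, but information is simply discarded, which is
    # why skipping aliases more than binning or interpolation.
    for dy in (0, 1):
        for dx in (0, 1):
            out[dy::2, dx::2] = raw[dy::4, dx::4]
    return out
```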
  5. Thanks, these gears look really nice. I don't think there is a single Leica R that can hold a candle to the Contaxes in terms of flare resistance. The luxes also flare a lot (here is the 50), but I haven't found this to be a problem in controlled shoots. Sorry, I meant the DNG file.
  6. I've had both Contax and Leica R, and Contax is technically better, usually sharper and with significantly better flare resistance. The Contax 50/1.7 is likely the sharpest "old" SLR 50mm I've seen, and I still use the 28/2.8 for stills when travelling; at f4 or smaller it pops in that popular Zeiss way. Contax lenses are also much lighter. That said, I find the Leica Rs more pleasing with digital cameras. In particular, the 50mm and 80mm Summiluxes are gorgeous: they provide excellent microcontrast at low lpmm, but not too strong MTF at high lpmm, which actually seems to work quite well with video resolutions. They draw in a way that's both smooth and with well defined detail (perhaps reminiscent of Cookes), and the focus fall-off is very nice. The focus rings are also a bit better for pulling, I think (compared to Contax).
Are you using M lenses with focus gears? None of the Voigt lenses I have (or had) can be geared; they either have tabs or thin curvy rings.
I find that pulling sharpness down to 0 in Resolve's raw settings helps a bit with tricky shots from cameras with no OLPFs. In the case of the fp, these weird pixels might be a result of interactions between the debayer algorithm and the way in-camera Bayer scaling is done.
  7. No idea, but I won't be surprised if it does. I don't recall any arcane Windows trickery that would hinder Wine.
  8. As promised, the Sigma fp-centered release of slimRAW is now out, so make sure to update. SlimRAW now works around Resolve's lack of affection for 8-bit compressed CinemaDNG, and slimRAW-compressed Sigma fp CinemaDNG will work in Premiere even though the uncompressed originals don't. There is also another peculiar use: even though Sigma fp raw stills are compressed, you can still (losslessly) shrink them significantly through slimRAW. It discards the huge embedded previews and re-compresses the raw data, shaving off around 30% of the original size. (Of course, don't do this if you want the embedded previews.)
  9. It does honor settings, and it is most useful when pointed at various parts of the scene to get a reading off different zones without changing exposure, pretty much the same way you'd use a traditional spot meter. The main difference is that the digital meter doesn't have (or need) a notion of mid grey: you directly get the average (raw) value of the spot region, while a traditional spot meter is always mid-grey referenced. You can certainly use a light meter very successfully while shooting raw (I always have one on me), but the digital spotmeter gives you a spot reading directly off the sensor, which is very convenient because you see what is being recorded. Since you'd normally aim to overexpose for dense skin when shooting raw, seeing the actual values is even more useful. Originally, the ML spotmeter only showed tone mapped values, but these could also be used for raw exposure once you knew the (approximate) mapping. Of course, showing either the linear raw values or EV below the clip point is optimal for raw (see the sketch after this post).
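The core of such a spotmeter is tiny. A minimal sketch (Python/NumPy; the function name, spot size, and black/white levels are hypothetical, and linear raw data is assumed):

```python
import numpy as np

def spot_ev_below_clip(raw: np.ndarray, black: int, white: int,
                       cy: int, cx: int, size: int = 32) -> float:
    """Mean level of a spot region, reported in EV below the clip point."""
    half = size // 2
    spot = raw[cy - half:cy + half, cx - half:cx + half].astype(np.float64)
    mean = max(spot.mean() - black, 1e-6)          # black-subtracted average
    return float(np.log2((white - black) / mean))  # stops below clipping
```

A reading of 0 means the spot averages right at the clip point, so placing skin a fixed number of stops below clip becomes a direct, repeatable target.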
  10. Well, it should be quite obvious that this camera is at its best (video) in 12-bit. The 8-bit image is probably derived from the 12-bit image anyway, so it can't be better than that. I think any raw camera should have a digital spotmeter similar to Magic Lantern's. It is really the simplest exposure tool to implement, and possibly the only thing one needs for consistent exposure. I don't need zebras, I don't need raw histograms, I don't need false color. It baffles me that ML had it 7+ years ago and it isn't ubiquitous yet. I mean, just steal the damn thing.
  11. Both Resolve and Premiere have problems with 8-bit DNG. I believe Resolve 14 broke support for both compressed and uncompressed 8-bit. Then at some later point uncompressed 8-bit was working again, but compressed 8-bit was still sketchy. This wasn't much of an issue since no major camera was recording 8-bit anyway, but now with the Sigma fp out, it is worked around in the upcoming release of slimRAW.
  12. If you recall the linearisation mapping that you posted before, there is a steep upward slope at the end. Highlight reconstruction happens after linearisation, so it would have the clipped red channel curving up more strongly than the non-clipped green and blue; reconstruction needs to preserve the trend. This hypothesis should be easy to test with a green- or blue-biased light: if the non-linear curve causes this, you will get correspondingly green- or blue-biased reconstructed highlights. I don't think the tint in the post-raised shadows is related to incorrect black levels. It is more likely a result of limited tonality, although it might be exaggerated a little by value truncation (instead of rounding; see the sketch after this post). This can also be checked by underexposing the 12-bit image an additional 2 stops compared to the underexposed 10-bit image, and then comparing shadow tints after exposure correction in post.
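A minimal sketch of the truncation-versus-rounding point (Python/NumPy; the signal range, bit depth, and boost are made-up illustration values):

```python
import numpy as np

signal = np.linspace(0.001, 0.02, 10_000)       # deep shadow levels, linear
levels = 255.0                                  # 8-bit-like coding space

truncated = np.floor(signal * levels) / levels  # value truncation
rounded = np.round(signal * levels) / levels    # proper rounding

boost = 2 ** 4                                  # +4 stops in post
print("truncation bias:", np.mean(truncated - signal) * boost)  # ~ -0.03
print("rounding bias:  ", np.mean(rounded - signal) * boost)    # ~  0.00
```

Truncation is biased about half a code value low; after a big exposure boost that bias becomes a visible offset, and if it lands differently per channel, a tint.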
  13. I've done a few things that ended up in both theaters and online. 1.85:1 is a good ratio to go for if you are targeting both online and festivals, and you can always crop a 1920x1080 video from a 1998x1080 flat DCP if you happen to need to send a video file somewhere. Going much wider may compromise the online version; contrary to popular belief, a cinemascope ratio on a tablet or computer display is not particularly cinematic, what with those huge black bars. Is there a reason you'd want to avoid making a DCP for festivals, or am I misunderstanding? Don't bother with a 4K release unless you are really going to benefit from the resolution; many festivals don't like 4K anyway. Master and grade in a common color gamut (rec709/sRGB). DCP creation software will fix the gamma for the DCP if you grade to an sRGB gamma for online. Also, most (all?) media servers in current cinemas handle 23.976 (and other frame rates like 25, 29.97, 30) fine, but if you can shoot 24 fps you might just as well do so.
  14. The linearisation table is used by the raw processing software to invert the non-linear pixel values back to linear space. This is why you can have any non-linear curve applied to the raw values (with the purpose of sticking a higher dynamic range into a limited coding space) and your raw processor still won't get confused and will show the image properly. The actual raw processing happens after this linearisation; a small sketch follows this post.
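A minimal sketch of that step (Python/NumPy, modeled on the DNG LinearizationTable idea; the square-root style curve and bit depths are hypothetical):

```python
import numpy as np

def linearise(raw: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Map stored code values back to linear: table[code] -> linear value."""
    return table[raw]

# Hypothetical curve: the camera stored sqrt-encoded 8-bit codes, so the
# table applies the inverse (squaring) back to 12-bit linear values.
codes = np.arange(256)
table = (codes / 255.0) ** 2 * 4095.0
stored = np.array([[0, 128, 255]], dtype=np.uint8)
print(linearise(stored, table))   # ~ [[0. 1031.8 4095.]]
```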