Everything posted by cpc

  1. The problem is missing time codes in the audio files recorded by the camera. Resolve needs these to auto sync audio and image and present them as a single entity. Paul has posted a workaround here: As a general rule, if the uncompressed image and audio don't auto sync in Resolve, the compressed image won't auto sync either.
  2. I will be surprised if Resolves does rescale in anything different than the image native gamma, that is in whatever gamma the values are at the point of the rescale operation. But if anything, some apps convert from sRGB or power gamma to linear for scaling, and then back. You can do various transforms to shrink the ratio between extremes, and this will generally reduce ringing artifacts. I know people deliberately gamma/log transform linear rendered images for rescale. But it is mathematically and physically incorrect. There are examples and lengthy write-ups on the web with what might go wrong if you scale in non-linear gamma, but perhaps most intuitively you can think about it in an "energy conserving" manner. If you don't do it in linear, you are altering the (locally) average brightness of the scene. You may not see this easily in real life images, because it will often be masked by detail, but do a thought experiment about, say, a greyscale synthetic 2x1 image scaled down to a 1x1 image and see what happens. I have a strong dislike for ringing artifacts myself, but I believe the correct approach to reduce these would be to pre-blur to band limit the signal and/or use a different filter: for example, Lanczos with less lobes, or Lanczos with pre-weighted samples; or go to splines/cubic; and sometimes bilinear is fine for downscale between 1x and 2x, since it has only positive weights. On the other hand, as we all very well know, theory and practice can diverge, so whatever produces good looking results is fine. Rescaling Bayer data is certainly more artifact prone, because of the missing samples, and the unknown of the subsequent deBayer algorithm. This is also the main reason SlimRAW only downscales precisely 2x for DNG proxies. It is actually possible to do Bayer aware interpolation and scale 3 layers instead of 4. This way the green channel will benefit from double the information compared to the others. You can think of this as interpolating "in place", rather than scaling with subsequent Bayer rearrangement. Similar to how you can scale a full color image in dozens of ways, you can do the same with a Bayer mosaic, and I don't think there is a "proper" way to do this. It is all a matter of managing trade offs, with the added complexity that you have no control over exactly how the image will be then debayered in post. It is in this sense that rescaling Bayer is worse -- you are creating an intermediate image, which will need to endure some serious additional reconstruction. Ideally, you should resize after debayering, because an advanced debayer method will try to use all channels simultaneously (also, see below). This is possible, and you can definitely extract more information and get better results by using neighboring pixels of different color because channels correlate somewhat. Exploiting this correlation is at the heart of many debayer algorithms, and, in some sense, memorizing many patterns of correlating samples is how recent NN based debayering models work. But if you go this way, you may just as well compress and record the debayered image with enough additional metadata to allow WB tweaks and exposure compensation in post, or simply go the partially debayered route similar to BRAW or Canon Raw Light. In any case, we should also have in mind that the higher the resolution, the less noticeable the artifacts. And 4K is quite a lot of pixels. 
In real life images I don't think it is very likely that there will be noticeable problems, other than the occasional no-OLPF aliasing issues.
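The sketch, assuming an sRGB-style transfer curve (the exact curve doesn't matter for the point):

```python
import numpy as np

# 2x1 image: one black pixel, one white pixel, values in linear light.
linear = np.array([0.0, 1.0])

def srgb_encode(x):
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_decode(x):
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

# Downscale to 1x1 by averaging in linear light: the result reflects the
# actual amount of light coming from that region of the scene.
avg_linear = linear.mean()                  # 0.5 in linear

# Downscale in the gamma-encoded domain instead, then look at the linear result.
encoded = srgb_encode(linear)               # [0.0, 1.0]
avg_encoded = encoded.mean()                # 0.5 as an sRGB code value
back_to_linear = srgb_decode(avg_encoded)   # ~0.214 linear

print(avg_linear, back_to_linear)
```

The gamma-domain average lands at roughly 0.21 in linear light, noticeably darker than the 0.5 the scene region actually averages to.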
  3. Binning is also scaling. Hardware binning will normally just group per-channel pixels together without further spatial considerations, but a weighted binning technique is basically bilinear interpolation (when halving resolution) -- see the sketch below. Mathematically, scaling should be done in linear, assuming samples are in an approximately linear gamut, which may or may not be the case. Digital sensors, in general, have good linearity of light intensity levels (certainly way more consistent than film), but native sensor gamut is not a clean linear tri-color space. If you recall the rules of proper compositing, scaling itself is very similar -- you do it in linear to preserve the way light behaves. You sometimes may get better results with non-linear data, but this is likely related to idiosyncrasies of the specific case and is not the norm.

re: Sigma's downscale -- I assume, yes, they simply downsample per channel and arrange into a Bayer mosaic. Bayer reconstruction itself is a process of interpolation: you need to conjure samples out of thin air. No matter how advanced the method, and there are some really involved methods, it is really just that, divination of sample values. So anything that loses information beforehand, including a channel downsample, will hinder reconstruction. Depending on the way the downscale is done, you can obstruct reconstruction of some shapes more than others, so you might need to prioritize this or that. A simple example of trade-offs: binning may have better SNR than some interpolation methods but will result in worse diagonal detail.
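The sketch, in 1D on one Bayer color plane (the sample values and the weight are made up for the example):

```python
import numpy as np

# One row of same-color samples from a Bayer plane (1D for simplicity).
row = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])

# Plain (hardware-style) binning: average each pair with equal weights.
binned = row.reshape(-1, 2).mean(axis=1)           # [15., 35., 55.]

# Weighted binning: weight each pair by where the output sample should land
# between the two inputs. With weights (w, 1 - w) this is exactly linear
# interpolation at that position; w = 0.5 reduces to plain binning.
w = 0.75                                           # hypothetical sample position
weighted = w * row[0::2] + (1 - w) * row[1::2]     # [12.5, 32.5, 52.5]

print(binned, weighted)
```

Equal weights (plain binning) are just the special case where the output sample sits exactly midway between the inputs.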
  4. Only because I have code lying around that does this in multiple ways, and it shows various ways of producing artifacts without doing weird things. It is not necessary to do crazy weird things to break the image. Even the fanciest Bayer downscale will produce an image that's noticeably worse than debayering at full res and then downscaling to the target resolution; there's no way around it, even in the conceptually easiest case of a 50% downscale.
  5. Thanks. Here is the same file scaled down to 3000x2000 Bayer in four different ways (lineskipping, two types of binning and a bit more fancy interpolation). Not the same as 6K-to-4K Bayer, but it might be interesting anyway. _SDI2324_bin.DNG _SDI2324_interp.DNG _SDI2324_skip.DNG _SDI2324_wbin.DNG
  6. Thanks, these gears look really nice. I don't think there is a single Leica R that can hold a candle to the Contaxes in terms of flare resistance. The luxes also flare a lot (here is the 50), but I haven't found this to be a problem in controlled shoots. Sorry, I meant the DNG file.
  7. I've had both Contax and Leica R, and Contax is technically better, usually sharper and with significantly better flare resistance. The Contax 50/1.7 is likely the sharpest "old" SLR 50mm I've seen, and I still use the 28/2.8 for stills when travelling; at f4 or smaller it pops in that popular Zeiss way. Contax lenses are also much lighter. That said, I find the Leica Rs more pleasing with digital cameras; in particular the 50mm and 80mm Summiluxes are gorgeous: they provide excellent microcontrast at low lpmm, but not too strong MTF at high lpmm, which actually seems to work quite well with video resolutions. They draw in a way that's both smooth and with well defined detail (perhaps reminiscent of Cookes), and focus fall-off is very nice. The focus rings are also a bit better for pulling, I think (compared to Contax). Are you using M lenses with focus gears? None of the Voigt lenses I have (or had) can be geared, as they either have tabs or thin curvy rings. I find that pulling sharpness down to 0 in Resolve's raw settings helps a bit with tricky shots from cameras with no OLPFs. In the case of the fp, these weird pixels might be a result of interaction between the debayer algorithm and the way in-camera Bayer scaling is done.
  8. No idea, but I won't be surprised if it does. I don't recall any arcane Windows trickery that would hinder Wine.
  9. As promised, the Sigma fp-centered release of slimRAW is now out, so make sure to update. slimRAW now works around Resolve's lack of affection for 8-bit compressed CinemaDNG, and slimRAW compressed Sigma fp CinemaDNG will work in Premiere even though the uncompressed originals don't. There is also another peculiar use: even though Sigma fp raw stills are compressed, you can still (losslessly) shrink them significantly through slimRAW. It discards the huge embedded previews and re-compresses the raw data, shaving off around 30% of the original size. (Of course, don't do this if you want the embedded previews.)
  10. It does honor settings, and it is most useful when pointed at various parts of the scene to get a reading off different zones without changing exposure, pretty much the same way you'd use a traditional spot meter. The main difference is that the digital meter doesn't have (or need) a notion of mid grey: you get the average (raw) value of the spot region directly, while a traditional spot meter is always mid grey referenced. You can certainly use a light meter very successfully while shooting raw (I always have one on me), but the digital spotmeter gives you a spot reading directly off the sensor, which is very convenient because you see what is being recorded. Since you'd normally aim to overexpose for dense skin when shooting raw, seeing the actual values is even more useful. Originally, the ML spotmeter only showed tone mapped values, but they could also be used for raw exposure once you knew the (approximate) mapping. Of course, showing either the linear raw values or EV below the clip point is optimal for raw.
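For what it's worth, a rough sketch of what such a raw spot reading boils down to (this is not Magic Lantern's code; the black/white levels and region size are made-up numbers):

```python
import numpy as np

def spot_reading(raw, cx, cy, size=32, black_level=2048, white_level=15000):
    """Average raw value in a spot region, reported as EV below clipping.
    Illustrative only; black and white levels are camera specific."""
    half = size // 2
    region = raw[cy - half:cy + half, cx - half:cx + half].astype(np.float64)
    signal = np.clip(region - black_level, 1, None).mean()
    clip = white_level - black_level
    ev_below_clip = np.log2(clip / signal)
    return signal, ev_below_clip

# Example: a synthetic 14-bit frame with a uniform patch metered at the center.
frame = np.full((2000, 3000), 2048 + 1600, dtype=np.uint16)
mean_signal, ev = spot_reading(frame, cx=1500, cy=1000)
print(f"spot mean {mean_signal:.0f}, {ev:.1f} stops below clip")
```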
  11. Well, it should be quite obvious that this camera is at its best (for video) in 12-bit. The 8-bit image is probably derived from the 12-bit image anyway, so it can't be better than that. I think any raw camera should have a digital spotmeter similar to Magic Lantern's. This is really the simplest exposure tool to implement and possibly the only thing one needs for consistent exposure. I don't need zebras, I don't need raw histograms, I don't need false color. It baffles me that ML had it 7+ years ago and it isn't ubiquitous yet. I mean, just steal the damn thing.
  12. Both Resolve and Premiere have problems with 8-bit DNG. I believe Resolve 14 broke support for both compressed and uncompressed 8-bit. Then at some later point uncompressed 8-bit was working again, but compressed 8-bit was still sketchy. This wasn't much of an issue since no major camera was recording 8-bit anyway, but now with the Sigma fp out, it is worked around in the upcoming release of slimRAW.
  13. If you recall the linearisation mapping that you posted before, there is a steep upward slope at the end. Highlight reconstruction happens after linearisation, so it would have the clipped red channel curving up more strongly than the non-clipped green and blue, since it needs to preserve the trend. This hypothesis should be easy to test with a green or blue biased light: if the non-linear curve causes this, you will get respectively green or blue biased reconstructed highlights. I don't think the tint in the post-raised shadows is related to incorrect black levels. It is more likely a result of limited tonality, although it might be exaggerated a little by value truncation (instead of rounding). This can also be checked by underexposing the 12-bit image an additional 2 stops compared to the underexposed 10-bit image, and then comparing shadow tints after exposure correction in post (a rough sketch of the truncation effect is below).
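With made-up numbers: truncation biases every value down by about half a count, and a large exposure push in post amplifies that constant bias; if the channels sit at different levels, the relative error differs per channel, which can read as a tint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deep-shadow linear signal, a few counts above black on a 12-bit scale.
true_signal = rng.uniform(2.0, 6.0, size=100000)

truncated = np.floor(true_signal)   # truncation: ~0.5 count bias downwards
rounded = np.round(true_signal)     # rounding: unbiased on average

push = 2 ** 5                       # +5 stop exposure correction in post
print((true_signal * push).mean(),  # ~128
      (truncated * push).mean(),    # ~112: the bias is amplified by the push
      (rounded * push).mean())      # ~128
```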
  14. I've done a few things that ended up both in theaters and online. 1.85:1 is a good ratio to go for if you are targeting both online and festivals, and you can always crop a 1920x1080 video from a 1998x1080 flat DCP if you happen to need to send a video file somewhere. Going much wider may compromise the online version; contrary to popular belief, a cinemascope ratio on a tablet or computer display is not particularly cinematic, what with those huge black strips. Is there a reason you'd want to avoid making a DCP for festivals, or am I misunderstanding? Don't bother with a 4K release unless you are really going to benefit from the resolution; many festivals don't like 4K anyway. Master and grade in a common color gamut (rec709/sRGB). DCP creation software will fix the gamma for the DCP if you grade to an sRGB gamma for online. Also, most (all?) media servers in current cinemas handle 23.976 (and other frame rates like 25, 29.97, 30) fine, but if you can shoot 24 fps you might just as well do so.
  15. The linearisation table is used by the raw processing software to invert the non-linear pixel values back to linear space. This is why you can have any non-linear curve applied to the raw values (with the purpose of sticking higher dynamic range into limited coding space), and your raw processor still won't get confused and will show the image properly. The actual raw processing happens after this linearisation.
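Conceptually, the table is just a lookup applied before everything else. A minimal sketch with a hypothetical 8-bit-in, 12-bit-out square-root style table (not any camera's actual curve):

```python
import numpy as np

# Hypothetical linearization table: the camera stored the raw data through a
# square-root style curve, and the table maps stored codes back to linear.
stored_codes = np.arange(256)
linearization_table = np.round((stored_codes / 255.0) ** 2 * 4095).astype(np.uint16)

def linearize(raw8):
    # Effectively what a DNG reader does with the LinearizationTable tag:
    # a plain lookup, before any other raw processing.
    return linearization_table[raw8]

raw = np.array([[0, 16, 64], [128, 192, 255]], dtype=np.uint8)
print(linearize(raw))
```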
  16. 12-bit is linear. 10-bit is linear. 8-bit is non-linear. No idea why Sigma didn't do 10-bit non-linear, seeing as they already do it for 8-bit. Here is how 10-bit non-linear can look (made from your 12-bit linear sample with slimRAW). In particular, note how the darks are indistinguishable from the 12-bit original. 10-bit non linear (made from the 12-bit).DNG
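For illustration, here is the kind of thing such a curve does for the darks (the actual curve used isn't public, so this assumes a simple square-root style mapping):

```python
import numpy as np

x12 = np.arange(4096)                       # 12-bit linear codes

# Plain 12-bit -> 10-bit linear: drop two bits, darks get crushed together.
linear10 = x12 >> 2

# 12-bit -> 10-bit through a square-root style curve (illustrative only, not
# the actual curve), inverted in post via a linearization table.
nonlinear10 = np.round(np.sqrt(x12 / 4095.0) * 1023).astype(np.uint16)

# How many distinct 10-bit codes cover the bottom 64 linear values?
print(len(np.unique(linear10[:64])))        # 16
print(len(np.unique(nonlinear10[:64])))     # 64: every dark code keeps its own value
```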
  17. You are comparing a 6K Bayer-to-4K Bayer downscale + 4K debayer to a 6K debayer + 6K RGB-to-4K RGB downscale. The first will never look as good as the second. The 10-bit file is linear and the 8-bit file is non-linear. That's why the 10-bit looks suspicious to you: it has lost a lot of precision in the darks. Yeah, well, the main difference with offsetting in log is that you are moving your "zero" around (a "log" curve is never a direct log conversion in the blacks), so you'd need to readjust the black point, whereas with multiplication (gain) in linear there is no such problem. Still, offsetting is handy with log footage or film scans.
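A quick sketch of the black point remark, using a generic made-up log-style curve (not any camera's actual curve):

```python
import numpy as np

linear = np.array([0.0, 0.01, 0.18, 0.9])   # scene-linear values, 0 = black

# Exposure adjustment in linear: a plain gain. Black stays exactly at zero.
gained = linear * 2.0                        # +1 stop

# A generic log-style encode, purely for illustration.
def log_encode(x, a=200.0):
    return np.log2(1 + a * x) / np.log2(1 + a)

encoded = log_encode(linear)
offset = encoded + 0.13                      # printer-lights style offset in log
# For mid-tones this behaves much like a gain in linear, but the encoded black
# has moved from 0.0 to 0.13, so the black point needs re-adjusting afterwards.
print(encoded[0], offset[0])                 # 0.0 -> 0.13
```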
  18. Can't do this if you want constant image quality (noise size, detail and no-crop equivalent DOF). But yes, if you only care about perspective, you can do this.
  19. Focal lengths have no "perspective". There is no such thing as "50mm perspective", so you can't maintain this with a 50mm lens on a bigger sensor. Relative sizes of objects depend entirely on the position of the camera. Closer viewpoints will exaggerate perspective and more distant viewpoints will flatten perspective. Hence, some people may say that wider lenses have stronger perspective, which is incorrect. What they actually mean is that with a wide lens you move forward for a similar object size in the frame (compared to a longer lens), and this movement forward decreases the camera-subject distance and exaggerates perspective distortion. Surely everyone has seen one of these:
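For the numerically inclined, the same point as a tiny pinhole-model sketch: the near/far size ratio depends only on the distances, and the focal length cancels out.

```python
# Pinhole projection: image size = focal_length * object_size / distance
# (object size and distance in the same units; result in the focal length's units).
def projected_size(focal_mm, object_m, distance_m):
    return focal_mm * object_m / distance_m   # image height in mm

near, far = 2.0, 10.0                         # two 1.8 m tall people at 2 m and 10 m
for f in (24, 50, 135):
    s_near = projected_size(f, 1.8, near)
    s_far = projected_size(f, 1.8, far)
    print(f, round(s_near, 2), round(s_far, 2), round(s_near / s_far, 2))
# The near/far size ratio is 5.0 for every focal length: "perspective" follows
# the camera position; the lens only magnifies the whole frame.
```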
  20. You can use the gain control in linear spaces. With linear material gain is equivalent to exposure adjustment (although you don't get the intuitive numerical interpretability of a raw processing exposure control).
  21. Historically, the reason for using larger formats was resolution. When the change to widescreen happened, the need for higher horizontal resolution came naturally: you need more negative in order to cover the wide screen without reducing vertical resolution. But with film stocks getting better and better, this became less of a necessity. Optically, larger formats can do fine with technically worse lenses because you need fewer lpmm for the same (perceived) microcontrast after magnification. As a result, larger formats generally have better delineation, which promotes a feeling of three dimensionality in the image. Of course, actual FOV has little to do with format size, and perspective depends entirely on viewpoint (camera to subject distance). Anamorphic does have peculiarities though, since only one FOV (usually vertical) matches the spherical equivalent and the other axis' FOV is wider than nominal.
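A quick back-of-the-envelope version of the lpmm point, with rough, assumed format heights:

```python
# Approximate image heights in mm (assumed, rounded values).
formats = {"Super 16": 7.4, "Super 35": 13.9, "Full frame": 24.0}

target_lp_per_picture_height = 800   # arbitrary target for the delivered image

for name, height_mm in formats.items():
    lp_per_mm = target_lp_per_picture_height / height_mm
    print(f"{name:10s} needs ~{lp_per_mm:.0f} lp/mm from the lens")
# The bigger the format, the less the image is magnified for the final screen,
# so the lens can resolve fewer lp/mm for the same delivered resolution.
```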
  22. You should be able to do the same with any intraframe codec in a container, no? In any case, whether your intermediate is a sequence (DPX or EXR or whatever) has little to do with whether your source media is a sequence.

Are you talking about source metadata or intermediate (post) metadata? The latter shouldn't be related to what your source is.

I see. If you are actually streaming individual frames from a network, that makes sense.

It is not for one-man bands only though. I've done it on a couple of indie productions where I shared DNG proxies with the editor (they did edit in Resolve). I also know of at least two production houses that work this way. But yes, bigger productions will likely promote a more traditional workflow. Yet I think film post can gain as much from utilizing raw processing controls for correction/grading as any other production, as it is in some ways more intuitive and more mathematically/physically correct than the common alternative. Edit: I see CaptainHook has expressed a similar and more detailed opinion above on this point.

It isn't necessary to debayer in order to create a downscaled Bayer mosaic; you can simply resample per channel.
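A minimal sketch of that per-channel resample (plain 2x2 averaging per plane, purely for illustration; real implementations weight the samples more carefully to keep the planes co-sited):

```python
import numpy as np

def bayer_downscale_2x(mosaic):
    """Halve an RGGB Bayer mosaic without debayering: split into the four
    color planes, 2x2-average each plane, then re-interleave."""
    planes = {
        "R":  mosaic[0::2, 0::2],
        "G1": mosaic[0::2, 1::2],
        "G2": mosaic[1::2, 0::2],
        "B":  mosaic[1::2, 1::2],
    }
    small = {}
    for k, p in planes.items():
        small[k] = (p[0::2, 0::2] + p[0::2, 1::2] + p[1::2, 0::2] + p[1::2, 1::2]) / 4.0

    h, w = small["R"].shape
    out = np.zeros((h * 2, w * 2), dtype=mosaic.dtype)
    out[0::2, 0::2] = small["R"]
    out[0::2, 1::2] = small["G1"]
    out[1::2, 0::2] = small["G2"]
    out[1::2, 1::2] = small["B"]
    return out

mosaic = np.random.default_rng(0).integers(0, 4096, size=(16, 16)).astype(np.float32)
print(bayer_downscale_2x(mosaic).shape)   # (8, 8)
```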
  23. You seem to have misunderstood this part. What I am saying is that the choice of the actual curve handicapped the results. It is a curve with lots of holes (unused values) at the low end, which is good for quantization (e.g. quantized DCT) but bad for entropy coded predictors (as in lossless DNG -- a small sketch of this is below). Also, my point was that if you ditched byte stuffing altogether (which is a trivial mod), without changing anything else, this would speed up both decoding and encoding, as well as give a small bonus in size. For all practical purposes, lossy BM DNG was proprietary, because there was zero publicly available information about it, so BM was in a position to simplify and optimize. Of course, I am just theorizing a parallel future. Certainly in hindsight, having an alternative ready would have helped tremendously when the patent thing came up.

Sigma have no option but to go the sequence way, since there is no support for MXF CinemaDNG anywhere. BM were in the unique position of making cameras AND Resolve. Certainly there are some advantages to discrete frames; the biggest one might be that you can easily swap bad frames. I can't think of a case where you need frame-specific metadata (other than time codes and such), but you can have this in the MXF version of the CinemaDNG spec too. Also, with frame indexing you can have your file work fine with bad frames in it. And your third point I am not sure I understand: individual frames mean more bytes, which means more load on the network; there is no way around this. And certainly reading more files puts more load on the file system, as you need to access the file index for every frame.

On a related note, there are benefits to keeping the raw files all the way through DI: raw controls can actually be used creatively in grading, and raw files are significantly smaller than DPX frames. And if you edit in Resolve, you might as well edit raw for anything that doesn't need VFX, as long as your hardware can handle the resolution. After all, working on the raw base all the way is the beauty of the non-destructive raw workflow.
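The sketch: a made-up curve that uses only every 4th code value in the darks, and a crude bits-per-residual proxy for the entropy coder.

```python
import numpy as np

rng = np.random.default_rng(0)
# A smooth dark gradient with a little noise, like deep shadows in a real frame.
signal = np.linspace(20, 60, 10000) + rng.normal(0, 1.0, 10000)
signal = np.clip(np.round(signal), 0, 4095).astype(np.int64)

# Hypothetical encoding curve with holes at the low end: only every 4th code
# value is used for dark tones (fine for a quantizing codec, bad for a
# lossless predictor).
sparse_coded = signal * 4

def predictor_bits(x):
    # Lossless DNG style: predict each pixel from its left neighbor and entropy
    # code the residual; cost here is a rough log2(|residual|) + 1 bits proxy.
    residuals = np.diff(x.astype(np.int64))
    return np.mean(np.log2(np.abs(residuals) + 1) + 1)

print(predictor_bits(signal), predictor_bits(sparse_coded))
# The sparse curve stretches every residual 4x, costing roughly 2 extra bits
# per non-zero residual under this proxy, even though the content is identical.
```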
  24. I think there are a few upgrades BM could have done to DNG. First, perhaps introduce MXF containers, as per the CinemaDNG spec. This would have decreased filesystem load, reduced wasted storage space, and eliminated the need to replicate metadata in each frame. With the rare power to produce both the cameras and Resolve, this sounds like a no-brainer.

Also, there were some choices which handicapped BM cameras in terms of achievable file sizes. First, the non-linear curve used was not friendly to lossless compression, which basically resulted in ~20-25% bigger lossless files from the start (~1.5:1 vs. ~2:1). Then, BM essentially had the freedom to introduce a lossy codec of their choice; it was a proprietary thing anyway, even though it was wrapped in DNG clothes. Even with the choice BM made, if they hadn't treated the Bayer image as monochrome during compression, they would likely have been able to push 5:1 with acceptable quality (as it stands, 4:1 could drop to as low as 7 bits of actual precision, never mind the nominal tag of 12 bits). And finally, BM could have ditched byte stuffing in lossy modes (remember, it is essentially a proprietary thing, you could have done anything!), which would have boosted decoding (and encoding) speed significantly; see the sketch below for what that stuffing pass is.

Of course, reproducibility of results across apps is a valid argument, and is something that the likes of Arri did bet on from the beginning. But you need an SDK for this anyway, and it is by no means bound to the actual format. To promote a uniform look, BM could have done an SDK/plugins/whatever for the BM DNG image, the same way they did with BRAW.
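The stuffing in question is the JPEG convention of following every 0xFF in the entropy-coded data with a 0x00 so it can't be mistaken for a marker. A toy version of the stuff/unstuff pass (not any particular decoder's code):

```python
def stuff(entropy_coded: bytes) -> bytes:
    # Writer side: insert 0x00 after every 0xFF. This is the per-byte pass a
    # closed format could simply drop.
    return entropy_coded.replace(b"\xff", b"\xff\x00")

def unstuff(stream: bytes) -> bytes:
    # Reader side: every decoder has to scan for and strip the stuffed bytes
    # before (or while) reading bits.
    return stream.replace(b"\xff\x00", b"\xff")

data = bytes([0x12, 0xFF, 0x80, 0xFF, 0xFF, 0x01])
assert unstuff(stuff(data)) == data
print(len(data), len(stuff(data)))   # 6 -> 9: one extra byte for every 0xFF
```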