cpc

Members
  • Content Count

    152
  • Joined

  • Last visited

About cpc

  • Rank
    Active member

Profile Information

  • Gender
    Not Telling

Contact Methods

  • Website URL
    http://www.shutterangle.com

  1. It is not linear. I haven't looked at the exact curve, but it does non-linear companding for the 8-bit raw (see the companding sketch after this list). The 12-bit image is linear. The DNG spec allows for color tables to be applied to the developed image, and the Sigma sample images do include such tables. No idea if they replicate the internal picture profiles though. AFAIK, only Adobe Camera Raw based software (e.g. Photoshop) honors these. Unless Resolve has gained support for these tables (I am on an old Resolve version), it is very likely that the cinema5d review is mistaken on this point.
  2. Without detracting from Graeme's work, it should be made clear that all of the algorithmic REDCODE specifics described in the text are trivial for "skilled artisans". I don't think any of this will hold up in court as a significant innovation. A few notes:
     Re: the "pre-emphasis curve" used to discard excessive whites and preserve blacks. Everyone here knows it very well, because every log curve does this. Panalog, S-Log, Log-C, you name it, they all do that. In fact, non-linear curves are (and were) so widely used as a pre-compression step that some camera companies managed to shoot themselves in the foot by applying them indiscriminately even before entropy coding (where a pure log/power curve can be non-optimal). JPEG has been used since the early 90s to compress images, and practically all images compressed with JPEG were gamma encoded. Gamma encoding is a "simple power law curve". Anyone who has ever compressed a linear image knows what happens (not a pretty picture) to a linear signal after a DCT or wavelet transform followed by quantization. And there is nothing special, technically speaking, about raw -- it is a linear signal in native camera space. But you don't need to look far for encoding alternatives: film has been around since the 19th century, and it does a non-linear transform (more precisely, log with toe and shoulder) on the captured light. In an even more relevant connection, Cineform RAW was developed in 2005 and presented at NAB 2006. It uses a "pre-emphasis" non-linear curve (more precisely, a tunable log curve) to discard excessive whites and preserve blacks. You may also want to consult this 2007 post on the Cineform blog about REDCODE and Cineform: http://cineform.blogspot.com/2007/09/10-bit-log-vs-12-bit-linear.html
     Re: "green average subtraction": Using nearby pixels for prediction/entropy reduction goes at least as far back as JPEG, which specifies 7 such predictors. In a Bayer mosaic, red and blue pixels always neighbor green pixels, hence using the brightness-correlated green channel to predict the red and blue channels is a tiny step (see the prediction sketch after this list).
     Re: using a Bayer sensor as an "unconventional avenue": The Dalsa Origin, presented at NAB 2003 and available for renting since 2006, was producing Bayer raw (uncompressed). The Arri Arriflex D-20, introduced in November 2005, was doing Bayer raw (uncompressed). Can't recall the SI-2K release year, but it was doing compressed Bayer raw (Cineform RAW, externally) in 2006.
  3. It depends; variable bit rate means the size will vary with image complexity. It is usually (but not always) between 3:1 and 4:1, which might come to around 600GB for a couple of hours of 4K (see the rough arithmetic after this list). I wouldn't bother shrinking a 4:1 source any further, but if you really have the inclination, you can try 5:1 or 7:1 on top of it, which will shave off ~20% and ~40% respectively. Of course, always test and judge the results yourself.
  4. 4:1 is usually fine, and of course less compression is better. But I'd probably use VBR HQ for constant quality.
  5. Yes, a couple of hours of 4K at 5:1 should be somewhere under 500GB. I usually recommend 5:1 for oversampled delivery only (i.e. when shooting 4K or higher, but going for a 2K DCP). I know some users routinely use 5:1 for 4K material and are happy with it, but I am a bit conservative about this. I'd imagine most indie work ends up with a 2K DCP anyway (well, at least anything I've shot that ended up in a theater has always been 2K).
  6. Only if you will be doing more lossy compression on the same video down the line, and the methods used in the different compression passes differ in some significant way. If you are going to use the same method (only with different amounts of quantization), it doesn't matter much. So if you'd be doing compression after acquisition with, say, slimraw, there are enough differences between lossy slimraw and lossy in-camera to warrant doing lossless in-camera.
     Well, it is normal. Not only does BRAW compression need to happen in-camera, which imposes some limits (power, memory, real-time, etc.), but it is likely hindered by its attempt to avoid Bayer-level compression (possibly due to the patent thing). On the other hand, denoising (which often goes together with debayering) does have advantages when done before very high compression.
     More precisely, lower resolution images can withstand less compression abuse. It should be fairly intuitive: if you have a fixed delivery resolution, let's say 2K, and you arrive at this delivery resolution from a 2K image, you can't afford to mess with the original image much. But if you deliver to 2K from a 4K source, you can surely afford doing more compression to the 4K image.
     BM raw is already tonally remapped through a log curve. The 10-bit log mode in slimraw is only intended for linear raw.
     No. Size will always go up when transcoding from lossy back to the lossless scheme: this works by decompressing from lossy to uncompressed, then doing lossless compression on the decompressed image; you can't do the lossless pass straight on top of the original lossy raw, it doesn't work like that. So going this route only makes sense when people need to maximize recording times (and shoot lossy), but still want to use Premiere in post.
     If you insist on using DNG, you'll get the best quality per size from shooting lossless in-camera, then going through any of the lossy modes in slimraw: which one depends entirely on what target data size you are after. I honestly wouldn't bother doing it for a camera that has in-camera lossy DNG, unless I really, really wanted to shrink down to 5:1 or more.
  7. This is very resolution dependent, but assuming 4K, the corresponding settings would be: lossless, 5:1, and 7:1 / VBR LT.
     Only if you'd use slimraw in a lossy mode afterwards. It is generally better to avoid multiple generations of lossy compression, and there are a few significant differences in how in-camera lossy DNG compression works in comparison to slimraw's lossy compression.
     Yes. Well, 5:1 is matched by 5:1. The meaning of these ratios is that you get down to around 1/5 of the original data size, which is the same no matter what format you are going to use. "Safety" is something only the user can judge. You are always losing something with lossy compression. It is "safe" in the sense that it is reliable and it will work. VBR HQ will normally produce files between 4:1 and 3:1, but since it's constant quality/variable bit rate, it depends somewhat on image complexity.
     Now, it is important to note that it is probably not a good idea to swap a BRAW workflow for a DNG workflow unless you need truly lossless files (for VFX work, for example). Even though a low compression lossy DNG file will very likely look better than an equally sized BRAW frame (because by (partially) debayering in BRAW you increase the data size and then shrink it back down through compression, while there is no such initial step in DNG; remember: debayering triples your data size -- see the arithmetic after this list), this quality loss is progressively less important as resolution goes up. Competing with BRAW is certainly not a goal for slimraw. There are basically 4 types of slimraw users:
     1) People shooting uncompressed DNG raw: Bolex D16, Sony FS series, Canon Magic Lantern raw, Panasonic Varicam LT, Ikonoskop, etc. The go-to compression mode for these users is the Lossless 10-bit log mode for 2K video, or one of the lossy modes for higher resolution video.
     2) People shooting losslessly compressed DNG on an early BM camera (Pocket, original BMCC, Production 4K) or on a DJI camera: these users normally offload with one of the lossy modes to reduce their footprint (often 3:1 or VBR HQ for the Pocket and BMCC, and 4:1/5:1 for the 4K). Lossless 10-bit log is also popular with DJI cameras.
     3) People doing DNG proxies for use in post with Resolve. They usually use 7:1 compression and 2x downscale for a blazing fast, entirely DNG-based workflow in Resolve (relinking in Resolve is a one-click affair and you can go back and forth between originals and proxies all the time during post).
     4) People shooting BM cameras and recording 3:1 or 4:1 CDNG for longer recording times, who do their post in Premiere. They use slimraw to transcode back to lossless CinemaDNG in post and import the footage into Premiere.
     Of course, there are other uses (like timelapses, or doing lossless-to-lossy on a more recent BM camera -- if you are a quality freak (a few users are), slimraw will beat in-camera compression at the same output sizes, which is expected, since it doesn't have the limitations of processing in-camera), but these are less common. So yeah, if you don't need VFX, it is likely best to just stick to BRAW and not complicate your life.
  8. This is most likely uncompressed source to losslessly compressed output. It also looks like a rather old version of slimraw. But if you want to know more about the various types of compression in the DNG format, here is an overview: http://www.slimraw.com/article-cdngmodes.html (@Emanuel I am around, just not following the discussion closely)
  9. Up until BRAW, the only consistently present characteristic of raw video from a camera manufacturer claiming "raw" was a Bayer image. There's been lossy compressed raw (Cineform, Red, BM), there's been tonally remapped raw (Arri, BM, Red, Panasonic, Canon), there's been white balanced raw (Canon), there's been baked ISO raw (Canon, Sony), etc. But all "raw" has always been Bayer. In this sense BRAW stretches the term "raw" as we know it: it is not a Bayer image. I wouldn't call it "raw", but obviously there are market reasons for naming it this way. This is similar to how "visually lossless" is being abused as marketing speak for "lossy". "Visually lossless" can only be applied to delivery images viewed in well defined viewing environments (that's how it is used in any scientific paper that takes itself seriously). By definition, it is not applicable to acquisition formats (raw or anything else) meant to be hammered in post: you can't claim "visually lossless" when you have no knowledge of what will be done to the image, nor where it will end up.
  10. I am pretty sure there is nothing patent breaching in BM's lossy take on DNG: it's all common techniques which have been out there for ages. The problem is likely with another company's patents, which are so broad that they cover a lot of ground in in-camera raw compression (no matter what method or format you use), and if anything, BM's DNG specifics actually appear to be circumventing some details in these patents. I am not a lawyer, and I haven't read all the patents of that other company, but I think BM doesn't actually breach the ones I've read (due to a certain important detail in BM's implementation). Whether BM are aware of this, or this is simply a battle they don't want to pick, is a different story. In any case, it is definitely not a coincidence that BRAW is not raw in the first place, despite its name -- it is a debayered image with metadata; think ProRes + metadata.
  11. No need to feel sorry for us PC users. Resolve has been cutting through 4K raw like butter for years. I've been shooting raw exclusively since 2013. Stopped using proxies in 2015. I've only ever used regular consumer hardware for post. Frankly, raw is old news for PC users.
  12. It should be fine in terms of coverage, but Blackmagic cameras have thinner filters, so likely more aberrations.
  13. Premiere has issues with missing metadata. It doesn't care whether the values are stretched; what happens is that it can infer the missing metadata correctly when the values are shifted and zero padded to 16 bits (see the sketch after this list). Also, there is 14-bit CinemaDNG, but Premiere has trouble with 14-bit uncompressed (not with compressed though!). Now that all ML DNG is compressed, it should work fine with Premiere at 14 bits.
  14. FYI, there is no reason to ever use any of the "maximized" variants in raw2cdng. 10-bit raw can be all you need if it isn't linear. ML raw is linear though, and 10-bit linear is noticeably worse than 12- and 14-bit (see the levels-per-stop arithmetic after this list), even though it is better than, for example, 10-bit raw from DJI cameras. 12-bit is actually quite good.
  15. There is support for MXF containers in the CinemaDNG specification. AFAIK, no support in cameras and apps though. CinemaDNG 3:1 is similar in size to ProRes RAW HQ, and CinemaDNG 4:1 is similar in size to ProRes RAW. DNG performance in Resolve is excellent.
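
A rough sketch of the kind of non-linear companding mentioned in post 1. The exact curve Sigma uses isn't documented here, so the log-style curve below is purely illustrative of how 12-bit linear data can be squeezed into 8 bits and expanded back:

    import numpy as np

    # Illustration only: a generic log-style 12-bit linear -> 8-bit companding, not Sigma's actual curve.
    def compand_12_to_8(x12):
        """Map 12-bit linear values (0..4095) to 8-bit codes, spending more codes on the shadows."""
        x = x12 / 4095.0
        y = np.log2(1.0 + 255.0 * x) / 8.0           # log2(256) == 8
        return np.round(y * 255.0).astype(np.uint8)

    def expand_8_to_12(y8):
        """Approximate inverse: companded 8-bit codes back to 12-bit linear."""
        y = y8 / 255.0
        x = (np.exp2(8.0 * y) - 1.0) / 255.0
        return np.round(x * 4095.0).astype(np.uint16)

    levels = np.array([0, 64, 512, 2048, 4095])
    print(compand_12_to_8(levels))                   # note the non-linear spacing of the 8-bit codes
    print(expand_8_to_12(compand_12_to_8(levels)))   # the round trip is close, but not exact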
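
A minimal sketch of the "green average subtraction" idea from post 2: predict each red/blue sample in an RGGB mosaic from the average of its green neighbours and keep only the residual. This is just the generic neighbour-prediction concept (in the spirit of JPEG's lossless predictors), not a reconstruction of any particular camera's codec:

    import numpy as np

    def green_average_subtract(bayer):
        """bayer: 2D array in RGGB layout. Returns a signed residual mosaic (greens unchanged)."""
        h, w = bayer.shape
        out = bayer.astype(np.int32).copy()
        for y in range(h):
            for x in range(w):
                is_red = (y % 2 == 0) and (x % 2 == 0)
                is_blue = (y % 2 == 1) and (x % 2 == 1)
                if is_red or is_blue:
                    # In RGGB, every horizontal/vertical neighbour of a red or blue pixel is green.
                    greens = [int(bayer[y, x + dx]) for dx in (-1, 1) if 0 <= x + dx < w]
                    greens += [int(bayer[y + dy, x]) for dy in (-1, 1) if 0 <= y + dy < h]
                    out[y, x] = int(bayer[y, x]) - round(sum(greens) / len(greens))
        return out  # residuals cluster around zero, which entropy codes better than the raw values

    mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)  # toy 4x4 RGGB mosaic
    print(green_average_subtract(mosaic))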
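
The rough arithmetic behind the data size estimates in posts 3 and 5. The frame geometry, bit depth, and frame rate are my assumptions, not figures from the posts:

    # Assumptions: 4096x2160 Bayer mosaic, 12 bits per sample, 24 fps, 2 hours of footage.
    width, height, bits, fps, hours = 4096, 2160, 12, 24, 2

    uncompressed_gb = width * height * bits / 8 * fps * hours * 3600 / 1e9
    print(f"uncompressed: {uncompressed_gb:.0f} GB")             # ~2293 GB
    for ratio in (3, 3.5, 4, 5):
        print(f"{ratio}:1 -> {uncompressed_gb / ratio:.0f} GB")  # ~764, ~655, ~573, ~459 GB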
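
The "debayering triples your data size" remark in post 7, as plain arithmetic (frame size and bit depth are my assumptions):

    # Assumptions: 4096x2160 sensor, 12 bits per sample.
    width, height, bits = 4096, 2160, 12

    bayer_mb = width * height * 1 * bits / 8 / 1e6   # one sample per photosite in the mosaic
    rgb_mb = width * height * 3 * bits / 8 / 1e6     # three samples per pixel after debayering
    print(f"Bayer mosaic frame:  {bayer_mb:.1f} MB") # ~13.3 MB
    print(f"debayered RGB frame: {rgb_mb:.1f} MB")   # ~39.8 MB, i.e. 3x before any compression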
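
My reading of "shifted and zero padded to 16 bits" from post 13: the sample is moved into the top bits of a 16-bit container with zeros in the low bits, rather than being rescaled ("stretched") to the full 16-bit range. A 14-bit example:

    import numpy as np

    samples_14bit = np.array([0, 1023, 8191, 16383], dtype=np.uint16)
    padded_16bit = samples_14bit << 2   # 14 significant bits plus 2 zero bits of padding
    print(padded_16bit)                 # [    0  4092 32764 65532]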
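
Why bit depth matters so much for linear raw, as in post 14: in a linear encoding, each stop down from clipping gets half the code values of the stop above it, so the shadows run out of levels quickly. The counts below are straightforward arithmetic, not measurements:

    for bits in (10, 12, 14):
        top = 2 ** bits
        per_stop = [top // 2 ** (s + 1) for s in range(8)]   # code values in each of the top 8 stops
        print(f"{bits}-bit linear: {per_stop}")
    # 10-bit linear: [512, 256, 128, 64, 32, 16, 8, 4]
    # 12-bit linear: [2048, 1024, 512, 256, 128, 64, 32, 16]
    # 14-bit linear: [8192, 4096, 2048, 1024, 512, 256, 128, 64]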