About cpc


  1. It depends; variable bit rate means size will vary with image complexity. It is usually (but not always) between 3:1 and 4:1, which might be around 600GB for a couple of hours of 4K. I wouldn't bother shrinking a 4:1 source any further, but if you really have the inclination, you can try 5:1 or 7:1 on top of it, which will shave off ~20% and ~40% respectively. Of course, always test and judge the results yourself.
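The ballpark figures above can be checked with simple arithmetic. A rough sketch, assuming 4K DCI, 12-bit Bayer samples and 24 fps (real footage adds headers and container overhead, so treat these as estimates only):

```python
def raw_size_gb(width, height, bit_depth, fps, seconds, ratio=1.0):
    """Rough size of Bayer raw footage in GB at a given compression ratio."""
    bytes_per_frame = width * height * bit_depth / 8  # one Bayer sample per photosite
    total_bytes = bytes_per_frame * fps * seconds / ratio
    return total_bytes / 1e9

# Two hours of 4K DCI, 12-bit, 24 fps:
uncompressed = raw_size_gb(4096, 2160, 12, 24, 2 * 3600)       # ~2293 GB
at_4_to_1 = raw_size_gb(4096, 2160, 12, 24, 2 * 3600, 4.0)     # ~573 GB
at_5_to_1 = raw_size_gb(4096, 2160, 12, 24, 2 * 3600, 5.0)     # ~459 GB
```

This lines up with the numbers in the posts: roughly 600GB for two hours at 3:1 to 4:1, and under 500GB at 5:1.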
  2. 4:1 is usually fine, and of course less compression is better. But I'd probably use VBR HQ for constant quality.
  3. Yes, a couple of hours of 4K at 5:1 should be somewhere under 500GB. I usually recommend 5:1 for oversampled delivery only (i.e. when shooting 4K or higher, but going for a 2K DCP). I know some users routinely use 5:1 for 4K material and are happy with it, but I am a bit conservative about this. I'd imagine most indie work ends up with a 2K DCP anyway (well, at least anything I've shot that ended in a theater has always been 2K).
  4. Only if you will be doing more lossy compression on the same video down the line, and the methods used in the different compression passes differ in some significant way. If you are going to use the same method (only with different amounts of quantization), it doesn't matter much. So if you'd be doing compression after acquisition with, say, slimraw, there are enough differences between lossy slimraw and lossy in-camera to warrant doing lossless in-camera.
Well, it is normal. Not only does BRAW need to happen in-camera, which imposes some limits (power, memory, real-time processing, etc.), but it is likely hindered by its attempt to avoid Bayer-level compression (possibly due to the patent thing). On the other hand, denoising (which often goes together with debayering) does have advantages when done before very high compression.
More precisely, lower resolution images can withstand less compression abuse. It should be fairly intuitive: if you have a fixed delivery resolution, let's say 2K, and you arrive at this delivery resolution from a 2K image, you can't afford messing with the original image much. But if you deliver to 2K from a 4K source, you can surely afford doing more compression to the 4K image.
BM raw is already tonally remapped through a log curve. The 10-bit log mode in slimraw is only intended for linear raw.
No. Size will always go up when transcoding from lossy back to the lossless scheme: this works by decompressing from lossy to uncompressed, then doing lossless compression on the decompressed image; you can't do the lossless pass straight on top of the original lossy raw, it doesn't work like that. So going this route only makes sense when people need to maximize recording times (and shoot lossy), but still want to use Premiere in post. If you insist on using DNG, you'll get the best quality per size from shooting lossless in-camera, then going through any of the lossy modes in slimraw: which one depends entirely on what target data size you are after.
I honestly wouldn't bother doing it for a camera that has in-camera lossy DNG, unless I really, really wanted to shrink down to 5:1 or more.
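The oversampling point (you can compress a 4K source harder when delivering 2K) can be illustrated with noise: a 2x downscale averages four source pixels per output pixel, which roughly halves the standard deviation of uncorrelated noise or artifact energy. A toy sketch, not a claim about any specific codec:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "4K" frame: flat grey plus uncorrelated noise standing in for compression artifacts.
h, w, sigma = 2160, 4096, 8.0
frame = 512.0 + rng.normal(0.0, sigma, size=(h, w))

# 2x downscale by averaging each 2x2 block (delivering "2K" from a "4K" source).
downscaled = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(frame.std())       # ~8.0: artifact level at source resolution
print(downscaled.std())  # ~4.0: averaging 4 pixels halves uncorrelated noise
```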
  5. This is very resolution dependent, but assuming 4K, the corresponding settings would be: lossless, 5:1, and 7:1 / VBR LT.
Only if you'd use slimraw in a lossy mode afterwards. It is generally better to avoid multiple generations of lossy compression, and there are a few significant differences in how in-camera lossy DNG compression works compared to slimraw's lossy compression.
Yes. Well, 5:1 is matched by 5:1. The meaning of these ratios is that you get down to around 1/5 of the original data size, which is the same no matter what format you are going to use.
"Safety" is something only the user can judge. You are always losing something with lossy compression. It is "safe" in the sense that it is reliable and it will work. VBR HQ will normally produce files between 4:1 and 3:1, but since it's constant quality/variable bit rate, it depends somewhat on image complexity.
Now, it is important to note that it is probably not a good idea to swap a BRAW workflow for a DNG workflow, unless you need truly lossless files (for VFX work, for example). Even though a low compression lossy DNG file will very likely look better than an equally sized BRAW frame (because by (partially) debayering in BRAW you increase the data size and then shrink it back down through compression, while there is no such initial step in DNG; remember: debayering triples your data size!), this quality loss matters progressively less as resolution goes up. Competing with BRAW is certainly not a goal for slimraw.
There are basically 4 types of slimraw users:
1) People shooting uncompressed DNG raw: Bolex D16, Sony FS series, Canon Magic Lantern raw, Panasonic Varicam LT, Ikonoskop, etc. The go-to compression mode for these users is the Lossless 10-bit log mode for 2K video, or one of the lossy modes for higher resolution video.
2) People shooting losslessly compressed DNG on an early BM camera (Pocket, original BMCC, Production 4K) or on a DJI camera: these users normally offload with one of the lossy modes to reduce their footprint (often 3:1 or VBR HQ for the Pocket and BMCC, and 4:1/5:1 for the 4K). Lossless 10-bit log is also popular with DJI cameras.
3) People doing DNG proxies for use in post with Resolve. They usually use 7:1 compression and 2x downscale for a blazing fast, entirely DNG based workflow in Resolve (relinking in Resolve is a one-click affair, and you can go back and forth between originals and proxies all the time during post).
4) People shooting BM cameras and recording 3:1 or 4:1 CDNG for longer recording times, who do their post in Premiere. They use slimraw to transcode back to lossless CinemaDNG in post and import the footage in Premiere.
Of course, there are other uses, like timelapses, or doing lossless-to-lossy on a more recent BM camera (if you are a quality freak, and a few users are, slimraw will beat in-camera compression at the same output sizes, which is expected: it doesn't have the limitations of doing processing in-camera), but these are less common.
So yeah, if you don't need VFX, it is likely best to just stick to BRAW and not complicate your life.
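The "debayering triples your data size" aside is simple accounting: a Bayer frame stores one sample per photosite, while a debayered frame stores three channels per pixel. A sketch ignoring container overhead and chroma subsampling:

```python
def frame_megabytes(width, height, bits_per_sample, samples_per_pixel):
    """Uncompressed frame size in MB for a given sample layout."""
    return width * height * bits_per_sample * samples_per_pixel / 8 / 1e6

bayer = frame_megabytes(4096, 2160, 12, 1)  # raw: one Bayer sample per photosite
rgb = frame_megabytes(4096, 2160, 12, 3)    # debayered: R, G and B per pixel
print(bayer, rgb, rgb / bayer)              # the debayered frame is 3x the data
```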
  6. This is most likely uncompressed source to losslessly compressed output. It also looks like a rather old version of slimraw. But if you want to know more about the various types of compression in the DNG format, here is an overview: http://www.slimraw.com/article-cdngmodes.html (@Emanuel I am around, just not following the discussion closely)
  7. Up till BRAW the only consistently present characteristic of raw video from a camera manufacturer claiming "raw" was a Bayer image. There's been lossy compressed raw (Cineform, Red, BM), there's been tonally remapped raw (Arri, BM, Red, Panasonic, Canon), there's been white balanced raw (Canon), there's been baked ISO raw (Canon, Sony), etc. But all "raw" has always been Bayer. In this sense BRAW is a stretch of the term "raw" as we know it: it is not a Bayer image. I wouldn't call it "raw", but obviously there are market reasons for naming it this way. This is similar to how "visually lossless" is being abused as marketing speak for "lossy". "Visually lossless" can only be applied to delivery images viewed in well defined viewing environments (that's how it is used in any scientific paper that takes itself seriously). By definition, it is not applicable to acquisition formats (raw or anything else) meant to be hammered in post: you can't claim "visually lossless", because you have no knowledge about what will be done to the image, nor where it will end up.
  8. I am pretty sure there is nothing patent breaching in BM's lossy take on DNG: it's all common techniques which have been out there for ages. The problem is likely with another company's patents, which are so broad that they cover a lot of ground in in-camera raw compression (no matter what method or format you use), and if anything, BM's DNG specifics actually appear to be circumventing some details in these patents. I am not a lawyer, and I haven't read all the patents of that other company, but I think BM doesn't actually breach the ones I've read (due to a certain important detail in BM's implementation). Whether BM are aware of this, or this is simply a battle they don't want to pick, is a different story. In any case, it is definitely not a coincidence that BRAW is not raw in the first place, despite its name: it is a debayered image with metadata; think ProRes + metadata.
  9. No need to feel sorry for us PC users. Resolve has been cutting through 4K raw like butter for years. I've been shooting raw exclusively since 2013. Stopped using proxies in 2015. I've only ever used regular consumer hardware for post. Frankly, raw is old news for PC users.
  10. It should be fine in terms of coverage, but Blackmagic cameras have thinner filters, so likely more aberrations.
  11. Premiere has issues with missing metadata. It doesn't care if the values are stretched. What happens is that it can infer the missing metadata correctly when the values are shifted and zero-padded to 16 bits. Also, there is 14-bit CinemaDNG, but Premiere has trouble with 14-bit uncompressed (not with compressed, though!). Now that all ML DNG is compressed, it should work fine with Premiere at 14 bits.
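The difference between shift-and-zero-pad and stretching, as described above, is just bit arithmetic. A minimal numpy sketch (the exact packing a given app expects is an assumption here, this only shows the two operations):

```python
import numpy as np

# 14-bit raw samples stored in uint16 containers (max value 16383).
samples14 = np.array([0, 1, 8191, 16383], dtype=np.uint16)

# Shift-and-zero-pad to 16 bits: left shift by 2, low two bits stay zero.
padded = samples14 << 2

# Stretching instead rescales values to fill the full 16-bit range.
stretched = (samples14.astype(np.uint32) * 65535 // 16383).astype(np.uint16)

print(padded.tolist())     # [0, 4, 32764, 65532]
print(stretched.tolist())  # [0, 4, 32765, 65535]
```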
  12. FYI, there is no reason to ever use any of the "maximized" variants in raw2cdng. 10-bit raw can be all you need, if it isn't linear. ML raw is linear though, and 10-bit is noticeably worse than 12- and 14-bit even though it is better than, for example, 10-bit raw from DJI cameras. 12-bit is actually quite good.
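Why 10-bit hurts linear raw in particular: linear encoding spends half of all code values on the brightest stop, leaving almost nothing for the shadows, which is exactly what a log curve redistributes. A rough sketch counting code values per stop of a 10-bit linear signal:

```python
# Code values available in each stop of a 10-bit linear signal.
# Stop 0 is the brightest; each stop down halves the signal level.
levels = 2 ** 10
for stop in range(10):
    hi = levels >> stop          # upper bound of this stop
    lo = levels >> (stop + 1)    # lower bound (half the level)
    print(f"stop -{stop}: {hi - lo} code values")
# Brightest stop: 512 code values; nine stops down: a single code value.
```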
  13. There is support for MXF containers in the CinemaDNG specification. AFAIK, no support in cameras and apps though. CinemaDNG 3:1 is similar in size to ProRes RAW HQ, and CinemaDNG 4:1 is similar in size to ProRes RAW. DNG performance in Resolve is excellent.
  14. I recall something like 115 for midgrey, but it's been 4 years since I last shot the 5D3, so I may be wrong. Having raw white clip referred values is pretty cool; we didn't have these back then. IMO, the problem with using a histogram for exposure is that it kind of promotes post-unfriendly habits like ETTR. The spotmeter, on the other hand, is all about consistency.
  15. You can use the spotmeter for this. This simple tool is faster/better than a waveform for judging skin exposure and not nearly as obtrusive as false color (you can have it on ALL the time). All you need to know to make good use of it is the mapping between the numbers you see in the profile you are monitoring with (say, you have the camera set to Standard while shooting raw) and the numbers you'll get in post after doing your raw import routine. Shoot a grey chip chart, record what goes where in live view (or just record a clip in Standard), import the raw footage, and make a table with two columns. Voila, you now know that +1 is ~175 in "spotmeter values" and falls wherever in your imported footage. You don't really need to memorize the mapping with great precision. All you need is to know where the -3 to +3 range falls, as this is where the important stuff is in an image. Knowing your tonal curves is useful in most situations anyway. But it happens to be priceless when shooting raw and monitoring an image which you know is different from what you'll be seeing in post.
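The two-column table described above is easy to turn into a lookup with interpolation between the measured chips. The pairs below are made-up illustrative numbers, not real profile data; you'd fill them in from your own grey chart test:

```python
# Hypothetical chart readings: (spotmeter value in the monitoring profile,
# value after the raw import routine), one pair per grey chip, -3 to +3 stops.
chart = [
    (24, 18),
    (49, 36),
    (78, 60),
    (115, 96),   # midgrey
    (152, 140),
    (175, 182),  # +1 stop, the "~175" reading mentioned in the post
    (205, 228),
]

def post_value(spotmeter_value):
    """Linearly interpolate the imported-footage value for a spotmeter reading."""
    xs, ys = zip(*chart)
    if spotmeter_value <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(chart, chart[1:]):
        if spotmeter_value <= x1:
            return y0 + (y1 - y0) * (spotmeter_value - x0) / (x1 - x0)
    return ys[-1]

print(post_value(175))  # 182.0: where the +1 stop reading lands after import
```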