Everything posted by cpc

  1. It is not that the files are reported as such. Rather, this is the internal working precision of ACR. Just set it to 16 bits and you are good.
  2. Here are the two C5D raw samples, but made Premiere CC compatible, if anyone wants to play with the image in Lumetri: https://drive.google.com/open?id=1c3DAPF_heybRJ1oL_ohgjRolBcKWZkwt
  3. It is not linear. I haven't looked at the exact curve, but it does non-linear companding for the 8-bit raw. The 12-bit image is linear. The DNG spec allows for color tables to be applied to the developed image. The Sigma sample images do include such tables. No idea if they replicate the internal picture profiles though. AFAIK, only Adobe Camera Raw based software (e.g. Photoshop) honors these. Unless Resolve has gained support for these tables (I am on an old Resolve version), it is very likely that the cinema5d review is mistaken on this point.
  4. Without detracting from Graeme's work, it should be made clear that all of the algorithmic REDCODE specifics described in the text are trivial for "skilled artisans". I don't think any of this will hold in court as a significant innovation. A few notes:

     Re: the "pre-emphasis curve" used to discard excessive whites and preserve blacks. Everyone here knows it very well, because every log curve does this. Panalog, S-Log, Log-C, you name it, all do that. In fact, non-linear curves are (and were) so widely used as a pre-compression step that some camera companies managed to shoot themselves in the foot by applying them indiscriminately even before entropy coding (where a pure log/power curve can be non-optimal). JPEG has been used since the early 90's to compress images, and practically all images compressed with JPEG were gamma encoded. Gamma encoding is a "simple power law curve". Anyone who has ever compressed a linear image knows what happens (not a pretty picture) to a linear signal after a DCT or wavelet transform, followed by quantization. And there is nothing special, technically speaking, about raw -- it is a linear signal in native camera space. You don't need to look far for encoding alternatives either: film has been around since the 19th century, and it does a non-linear transform (more precisely, log with toe and shoulder) on the captured light. In an even more relevant connection, Cineform RAW was developed in 2005 and presented at NAB 2006. It uses a "pre-emphasis" non-linear curve (more precisely, a tunable log curve) to discard excessive whites and preserve blacks. You may also want to consult this blog post from David@Cineform from 2007 about REDCODE and Cineform: http://cineform.blogspot.com/2007/09/10-bit-log-vs-12-bit-linear.html

     Re: "green average subtraction": Using nearby pixels for prediction/entropy reduction goes at least as far back as JPEG, which specifies 7 such predictors. In a Bayer mosaic, red and blue pixels always neighbor green pixels, so using the brightness-correlated green channel to predict the red and blue channels is a tiny step.

     Re: using a Bayer sensor as an "unconventional avenue": The Dalsa Origin, presented at NAB 2003 and available for rental since 2006, was producing Bayer raw (uncompressed). The Arri Arriflex D-20, introduced in November 2005, was doing Bayer raw (uncompressed). I can't recall the SI-2K release year, but it was doing compressed Bayer raw (Cineform RAW, externally) in 2006. A toy sketch of both techniques follows below.
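     To make the "trivial" point concrete, here is a toy sketch of both techniques (my own illustrative math, not RED's or Cineform's actual curves): a log pre-emphasis curve applied to linear raw before quantization, and green-average prediction of a red/blue pixel.

        import numpy as np

        def pre_emphasis(linear, in_bits=12, out_bits=10):
            """Log companding before quantization: spends fewer code values
            on highlights and preserves shadow precision, the same idea as
            any log curve (toy curve, not any camera's actual math)."""
            x = linear.astype(np.float64) / (2**in_bits - 1)  # normalize to 0..1
            y = np.log2(1.0 + 1023.0 * x) / 10.0              # log2(1024) == 10
            return np.round(y * (2**out_bits - 1)).astype(np.uint16)

        def green_predicted_residual(bayer, r, c):
            """Predict a red (or blue) pixel from the average of its 4 green
            neighbors (in an RGGB mosaic they sit above/below/left/right);
            only the small residual goes to the entropy coder."""
            g_avg = (int(bayer[r-1, c]) + int(bayer[r+1, c]) +
                     int(bayer[r, c-1]) + int(bayer[r, c+1])) // 4
            return int(bayer[r, c]) - g_avg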
  5. Red do have multiple patents assigned. Some include claims that are worded very broadly, and some do include quite a lot of specifics. Compare the claims in these two (they use the typical obfuscated language and structure that make patents look impenetrable): https://patents.google.com/patent/US9596385B2/ https://patents.google.com/patent/US8872933B2/

     Note, for example, how claim 1 in the "electronic apparatus" patent lists the very specific way a blue or red channel is predicted from nearby green values. Contrast this with the wording of claim 1 in the "video camera" patent, which pretty much covers any raw camera with a resolution of 4K or more (compressed or not). Red do have patents on codec specifics, but most of Red's camera/apparatus/device patents mentioning compression explicitly list a bunch of compression approaches as possible means to achieve said compression. They are, informally speaking, patenting the idea of a (compressed) raw recording camera, rather than any specific compression technique. It is hardly a coincidence that Blackmagic's own "raw" codec, a response to Red's patent violation claims, appears to be designed so that it isn't actually raw.

     Now, compression itself is an extensively studied field. It is very, VERY hard to come up with significant innovations in it. A "raw" codec will take one of a few well known image compression techniques and adapt it for Bayer data. That is all there is to raw compression. Raw codecs universally rehash old ideas, differing (slightly, if at all) in the details of data formatting and layout, which have little to do with the actual compression technique. Yes, you can pre-process raw data in a bunch of ways, and these are usually (and I use this as a euphemism for "always") trivial for anyone "skilled in the art" (non-triviality being a prerequisite for patentability).

     For the curious, probably the biggest advancement in compression in the last two decades is ANS which, incidentally, was explicitly released into the public domain by its creator, Jarek Duda, with the intention of preventing any patents around it.
  6. It depends; variable bit rate means size will vary with image complexity. It is usually (but not always) between 3:1 and 4:1, which might be around 600GB for a couple of hours of 4K (see the arithmetic below). I wouldn't bother shrinking a 4:1 source any further, but if you really have the inclination, you can try 5:1 or 7:1 on top of it, which will shave off ~20% and ~40% respectively. Of course, always test and judge the results yourself.
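     For a rough sanity check of those numbers, here is the back-of-the-envelope arithmetic (assuming 4K DCI 12-bit Bayer at 24 fps; real files also carry headers and metadata, so treat it as a ballpark):

        # 4K DCI, 12-bit Bayer, 24 fps (assumed figures, adjust for your camera)
        width, height, bits, fps = 4096, 2160, 12, 24
        bytes_per_sec = width * height * bits / 8 * fps   # ~318 MB/s uncompressed
        hours = 2
        for ratio in (3, 4, 5, 7):
            gb = bytes_per_sec * 3600 * hours / ratio / 1e9
            print(f"{ratio}:1 -> ~{gb:.0f} GB for {hours} hours")
        # 3:1 -> ~764 GB, 4:1 -> ~573 GB, 5:1 -> ~459 GB, 7:1 -> ~328 GB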
  7. 4:1 is usually fine, and of course less compression is better. But I'd probably use VBR HQ for constant quality.
  8. Yes, a couple of hours of 4K at 5:1 should be somewhere under 500GB. I usually recommend 5:1 for oversampled delivery only (i.e. when shooting 4K or higher but going for a 2K DCP). I know some users routinely use 5:1 for 4K material and are happy with it, but I am a bit conservative about this. I'd imagine most indie work ends up with a 2K DCP anyway (well, at least anything I've shot that ended up in a theater has always been 2K).
  9. Only if you will be doing more lossy compression on the same video down the line, and the methods used in the different compression passes differ in some significant way. If you are going to use the same method (only with different amounts of quantization), it doesn't matter much. So if you'd be doing compression after acquisition with, say, slimraw, there are enough differences between lossy slimraw and lossy in-camera to warrant doing lossless in-camera.

     Well, it is normal. Not only does BRAW compression need to happen in-camera, which imposes some limits (power, memory, real-time, etc), but it is likely hindered by its attempt to avoid Bayer level compression (possibly due to the patent thing). On the other hand, denoising (which often goes together with debayering) does have advantages when done before very high compression.

     More precisely, lower resolution images withstand less compression abuse. It should be fairly intuitive: if you have a fixed delivery resolution, let's say 2K, and you arrive at this delivery resolution from a 2K image, you can't afford to mess with the original image much. But if you deliver to 2K from a 4K source, you can surely afford to compress the 4K image harder.

     BM raw is already tonally remapped through a log curve. The 10-bit log mode in slimraw is only intended for linear raw.

     No. Size will always go up when transcoding from lossy back to the lossless scheme: this works by decompressing from lossy to uncompressed, then doing lossless compression on the decompressed image; you can't do the lossless pass straight on top of the original lossy raw, it doesn't work like that. So going this route only makes sense when people need to maximize recording times (and shoot lossy), but still want to use Premiere in post. A sketch of the chain follows below.

     If you insist on using DNG, you'll get the best quality per size from shooting lossless in-camera, then going through any of the lossy modes in slimraw: which one depends entirely on what target data size you are after. I honestly wouldn't bother doing it for a camera that has in-camera lossy DNG, unless I really, really wanted to shrink down to 5:1 or more.
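     A sketch of why the size grows (zlib stands in for the lossless DNG coder here; the frame and the sizes are illustrative, not slimraw's actual numbers):

        import numpy as np, zlib

        # You cannot repack the lossy bitstream as-is: you must decode to a
        # full uncompressed frame first, and the lossless redo (typically
        # around 2:1) can't reach the ratio the lossy pass achieved.
        rng = np.random.default_rng(0)
        frame = rng.normal(2048, 200, 1024 * 1024).clip(0, 4095).astype(np.uint16)
        raw_bytes = frame.tobytes()                       # decoded, uncompressed frame
        size_3to1_lossy = len(raw_bytes) // 3             # what the camera wrote at 3:1
        size_lossless = len(zlib.compress(raw_bytes, 9))  # lossless pass after decode
        print(size_lossless > size_3to1_lossy)            # True: the file grew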
  10. This is very resolution dependent, but assuming 4K, the corresponding settings would be: lossless, 5:1, and 7:1 / VBR LT.

     Only if you'd use slimraw in a lossy mode afterwards. It is generally better to avoid multiple generations of lossy compression, and there are a few significant differences in how in-camera lossy DNG compression works in comparison to slimraw's lossy compression.

     Yes. Well, 5:1 is matched by 5:1. The meaning of these ratios is that you get down to around 1/5 of the original data size, which is the same no matter what format you are going to use. "Safety" is something only the user can judge. You are always losing something with lossy compression. It is "safe" in the sense that it is reliable and it will work. VBR HQ will normally produce files between 4:1 and 3:1, but since it's constant quality/variable bit rate, it depends somewhat on image complexity.

     Now, it is important to note that it is probably not a good idea to swap a BRAW workflow for a DNG workflow, unless you need truly lossless files (for VFX work, for example). Even though a low compression lossy DNG file will very likely look better than an equally sized BRAW frame (because by (partially) debayering in BRAW you increase the data size and then shrink it back down through compression, while there is no such initial step in DNG; remember: debayering triples your data size!), this quality loss becomes progressively less important as resolution goes up. Competing with BRAW is certainly not a goal for slimraw. There are basically 4 types of slimraw users:

     1) People shooting uncompressed DNG raw: Bolex D16, Sony FS series, Canon Magic Lantern raw, Panasonic Varicam LT, Ikonoskop, etc. The go-to compression mode for these users is the lossless 10-bit log mode for 2K video, or one of the lossy modes for higher resolution video.

     2) People shooting losslessly compressed DNG on an early BM camera (Pocket, original BMCC, Production 4K) or on a DJI camera: these users normally offload with one of the lossy modes to reduce their footprint (often 3:1 or VBR HQ for the Pocket and BMCC, and 4:1/5:1 for the 4K). Lossless 10-bit log is also popular with DJI cameras.

     3) People doing DNG proxies for use in post with Resolve. They usually use 7:1 compression and a 2x downscale for a blazing fast, entirely DNG based workflow in Resolve (relinking in Resolve is a one-click affair and you can go back and forth between originals and proxies all the time during post).

     4) People shooting BM cameras and recording 3:1 or 4:1 CDNG for longer recording times, who do their post in Premiere. They use slimraw to transcode back to lossless CinemaDNG in post and import the footage in Premiere.

     Of course, there are other uses (like timelapses, or doing lossless-to-lossy on a more recent BM camera: if you are a quality freak (a few users are), slimraw will beat in-camera compression at the same output sizes, which is expected, since it doesn't have the limitations of in-camera processing), but these are less common.

     So yeah, if you don't need VFX, it is likely best to just stick to BRAW and not complicate your life.
  11. This is most likely uncompressed source to losslessly compressed output. It also looks like a rather old version of slimraw. But if you want to know more about the various types of compression in the DNG format, here is an overview: http://www.slimraw.com/article-cdngmodes.html (@Emanuel I am around, just not following the discussion closely.)
  12. Up until BRAW, the only consistently present characteristic of raw video from camera manufacturers claiming "raw" was a Bayer image. There's been lossy compressed raw (Cineform, Red, BM), there's been tonally remapped raw (Arri, BM, Red, Panasonic, Canon), there's been white balanced raw (Canon), there's been baked ISO raw (Canon, Sony), etc. But all "raw" has always been Bayer. In this sense BRAW stretches the term "raw" as we know it: it is not a Bayer image. I wouldn't call it "raw", but obviously there are market reasons for naming it this way.

     This is similar to how "visually lossless" is being abused as marketing speak for "lossy". "Visually lossless" can only be applied to delivery images viewed in well defined viewing environments (that's how it is used in any scientific paper that takes itself seriously). By definition, it is not applicable to acquisition formats (raw or anything else) meant to be hammered in post: you can't claim "visually lossless", because you have no knowledge of what will be done to the image, nor where it will end up.
  13. I am pretty sure there is nothing patent breaching in BM's lossy take on DNG: it's all common techniques which have been out there for ages. The problem is likely with another company's patents, which are so broad that they cover a lot of ground in in-camera raw compression (no matter what method or format you use), and if anything, BM's DNG specifics actually appear to be circumventing some details in these patents. I am not a lawyer, and I haven't read all the patents of that other company, but I think BM doesn't actually breach the ones I've read (due to a certain important detail in BM's implementation). Whether BM are aware of this, or this is simply a battle they don't want to pick, is a different story. In any case, it is definitely not a coincidence that BRAW is not raw in the first place, despite its name: it is a debayered image with metadata, think ProRes + metadata.
  14. No need to feel sorry for us PC users. Resolve has been cutting through 4K raw like butter for years. I've been shooting raw exclusively since 2013. Stopped using proxies in 2015. I've only ever used regular consumer hardware for post. Frankly, raw is old news for PC users.
  15. It should be fine in terms of coverage, but Blackmagic cameras have thinner filters, so likely more aberrations.
  16. Premiere has issues with missing metadata; it doesn't care if the values are stretched. It can infer the missing metadata correctly when the values are shifted and zero padded to 16 bits (a sketch of the padding idea follows below). Also, there is 14-bit CinemaDNG, but Premiere has trouble with 14-bit uncompressed (not with compressed though!). Now that all ML DNG is compressed, it should work fine with Premiere at 14 bits.
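     A sketch of the shift-and-pad idea for 12-bit values (my illustration of the general packing, not Premiere's or any converter's internals):

        import numpy as np

        # Zero-pad 12-bit samples into 16-bit containers: v << 4 maps
        # 0..4095 onto 0..65520 (stretching would instead rescale the
        # values to fill 0..65535).
        def pad_12_to_16(samples):
            return samples.astype(np.uint16) << 4

        print(pad_12_to_16(np.array([0, 2048, 4095])))   # [    0 32768 65520]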
  17. FYI, there is no reason to ever use any of the "maximized" variants in raw2cdng. 10-bit raw can be all you need, if it isn't linear. ML raw is linear though, and 10-bit is noticeably worse than 12- and 14-bit even though it is better than, for example, 10-bit raw from DJI cameras. 12-bit is actually quite good.
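     The linear part is simple arithmetic: linear code values halve with every stop down from white clip, so 10-bit linear starves the shadows (which is exactly what log companding avoids):

        # 10-bit linear, white at code 1023: code values available per stop.
        for stop in range(1, 9):
            hi, lo = 1023 >> (stop - 1), 1023 >> stop
            print(f"stop -{stop}: {hi - lo} codes")   # -1: 512 codes ... -8: 4 codes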
  18. There is support for MXF containers in the CinemaDNG specification. AFAIK, no support in cameras and apps though. CinemaDNG 3:1 is similar size to ProRes Raw HQ and CinemaDNG 4:1 is similar size to ProRes Raw. DNG performance in Resolve is excellent.
  19. I recall something like 115 for mid grey, but it's been 4 years since I last shot the 5D3, so I may be wrong. Having raw white clip referred values is pretty cool; we didn't have these back then. IMO, the problem with using a histogram for exposure is that it kind of promotes post unfriendly habits like ETTR. The spotmeter, on the other hand, is all about consistency.
  20. You can use the spotmeter for this. This simple tool is faster/better than a waveform for judging skin exposure and not nearly as obtrusive as false color (you can have it on ALL the time). All you need to know to make good use of it is the mapping between the numbers you see in the profile you are monitoring with (say, you have the camera set to Standard while shooting raw) and the numbers you'll get in post after doing your raw import routine. Shoot a grey chip chart, record what goes where in live view (or just record a clip in Standard), import the raw footage and make a table with two columns; an illustrative example follows below. Voila, you now know that +1 is ~175 in "spotmeter values" and falls wherever in your imported footage. You don't really need to memorize the mapping with great precision. All you need is to know where the -3 to +3 range falls, as this is where the important stuff in an image is. Knowing your tonal curves is useful in most situations anyway. But it happens to be priceless when shooting raw and monitoring an image which you know is different from what you'll be seeing in post.
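     For illustration, the first column of such a table might look like this (only the ~115 for mid grey and ~175 for +1 match numbers mentioned in these posts; the rest are made up, and the post-import column depends entirely on your own raw routine, so measure your own chart):

        # Stops around mid grey -> approximate spotmeter values in the
        # monitoring profile (hypothetical example values; pair each entry
        # with what your own raw import produces to complete the table).
        stops_to_spotmeter = {-3: 30, -2: 55, -1: 85, 0: 115, +1: 175, +2: 215, +3: 240}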
  21. Nice shots @Gregormannschaft But this also illustrates the technical problem with s-log + low bitdepth + compression. Skin is really thin on the last image, for example, and the wall is on the wrong side of the banding limit.
  22. One distinguishing characteristic of log curves compared to film negative is that (on most of them) there is no shoulder. (Well, the shoulder on Vision3 series films is very high, so it's not much of a practical consideration unless you overexpose severely.) An effect of this lack of shoulder is that you can generally re-rate slower without messing up color relations through the range, as long as clipping is accounted for. Arri's Log-C has so much latitude over nominal mid grey that rating it at 400 still leaves tons for highlights. I don't think any other camera has similar (or more) latitude above the nominal mid point. Pretty much all the other camera manufacturers inflate reported numbers by counting the noise fest at the bottom in the overall DR. No wonder a camera with "more" DR than an Alexa looks like trash in a side-by-side latitude comparison.
  23. Banding is a combination of bitdepth precision, chroma subsampling, compression and stretching the image in post. S-Log3 is so flat (and doesn't even use the full 8-bit precision) that pretty much all grades qualify as aggressive tonal changes. S-Log2 is a bit better, but still needs more exposure than nominal in most cases.

     Actually, I can't think of any non-Arri cameras that don't need some amount of overexposure in log, even at higher bitdepths. These curves are ISO rated to maximize a technical notion of SNR which doesn't always (if ever) coincide with what we consider a clean image after tone mapping for display. That said, ETTR isn't usually the best way to expose log (or any curve): too much normalizing work in post on a shot-by-shot basis. Better to re-rate the camera slower and expose consistently.

     In the case of the Sony A series, it is probably best to just shoot one of the Cine curves. They have decent latitude without entirely butchering mids density. Perhaps the only practical exception is shooting 4K and delivering 1080p, which restores a bit of density after the downscale.
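     To put a number on the bitdepth part, a toy example (a generic flat log curve, not Sony's actual S-Log3 math):

        import numpy as np

        # Count the 8-bit codes spanning one stop above mid grey under a
        # flat, generic log curve (toy math, not the real S-Log3 formula).
        def toy_log(x):
            return np.log2(1.0 + 255.0 * x) / 8.0     # 0..1 -> 0..1

        codes = (toy_log(0.36) - toy_log(0.18)) * 255
        print(f"~{codes:.0f} codes for a whole stop")  # ~31
        # A grade that stretches this region ~3x for display contrast spreads
        # those ~31 input codes over ~94 output codes; add 4:2:0 chroma and
        # compression and the banding shows up on smooth walls and skies.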
  24. You will still get the best HD/2K delivery quality from a 4K camera even if you never deliver in 4K. A Sony FS700 + Odyssey 7Q goes for $3-4k nowadays and can shoot great 4K 12-bit raw. And it can also do high fps for slow-mo, which may be appealing for your music video/slow movement scenes.
  25. Well, shooting negative film is certainly much closer to shooting raw than to shooting baked (as in JPEG). The negative is unusable until printed, and there are a great many choices that need to be made during exposure, development and printing. Choices which command the tonal and color characteristics of the image. There may be "immediate patina and abstraction" in the result, but getting the result isn't immediate by any means.