Everything posted by cpc

  1. Did it win, though? Apparently Red settled with Sony after Sony countersued for infringement. And Apple's case was dismissed for what looks, at least partially, like procedural reasons -- basically incompleteness: the decision uses the word "unclear" multiple times in relation to Apple's rationale and literally says "In sum, Petitioner’s obviousness challenge is unclear and incomplete". That is, Apple lost, but it is not clear that the patent won.
  2. Companies can be clueless about these matters. And Nikon comes from a stills background. On the other hand, Nikon's patent portfolio includes tens of thousands of patents, thousands of them in the US. They can probably dig into it for counter-infringement claims if they are forced to. I don't recall the specifics of the resolution in the Sony case, but I wouldn't be surprised if Sony did exactly this to fend off Red.
  3. It should be clear that there is nothing in Red's compression algorithm that's specifically related to video as in a "sequence of related frames". It still compresses individual frames independently of each other, simply doing so at fairly high frame rates (24+). Also, as repeatedly mentioned already, the wavelet-based "visually lossless" CineformRAW compression for Bayer raw data was introduced that same year, months before Mr. Nattress had even met Mr. Jannard... If you read David Newman's blog, which is still live on the interwebs and is a great source of inside information for anyone interested, you will know that CineformRAW was started in 2004 and work on adding it to the Silicon Imaging camera started right after NAB 2005. Not that this matters, as Red did not patent a specific implementation. They patented a broad and general idea, which was with absolute certainty discussed by others at the same or at previous times. Which isn't Red's fault, of course. It's just a consequence of the stupidity of the patent system.
  4. The patent expires in 6 years or so, IIRC. Or would we? The guy who invented ANS, possibly the most important fundamental novelty in compression in the last 2 or 3 decades, put it in the public domain. It is now everywhere: in every new codec worth mentioning, and in hardware like the PS5.
  5. Yes, you can do that. You can also do more sensible things like partial debayer (e.g. Blackmagic BRAW). This isn't novelty though. This is a basic example of inevitable evolution.
  6. If you read the patents carefully, they usually describe a few possible ways of doing this or that as "claims", and then explicitly say "but not limited to these". For years I thought Red's patents were limited to in-camera Bayer compression at ratios of 6:1 or higher, because this ratio is repeatedly mentioned as a "claim". Apparently this wasn't the case, as demonstrated by their actions against BM and others.
  7. @Andrew Reid Lossless image compression has been around for decades. Raw images are images. Cinema raw images are raw images are images. There isn't anything particularly special about raw images compression-wise. CineformRAW (introduced in 2005) is cited in Red's patents. CineformRAW is cinema raw compression. Red don't claim an invention of raw video compression, they claim putting it in cameras first. Red's patents mostly refer to "visually lossless", which is an entirely meaningless phrase in relation to raw. Here is a quote from one of their patents: "As used herein, the term “visually lossless” is intended to include output that, when compared side by side with original (never compressed) image data on the same display device, one of ordinary skill in the art would not be able to determine which image is the original with a reasonable degree of accuracy, based only on a visual inspection of the images." This, of course, makes no sense, because anyone of ordinary skill can increase image contrast during raw development to an extreme point where the "visually lossless" image breaks before the original. It is a stupid marketing phrase which needs multiple additional definitions (standard observer, standard viewing conditions, standard display, standard raw processing) to make it somewhat useful. None of these are given in the patent, btw. A basic requirement for some tech to be patentable is that it isn't an obvious solution to a problem for someone reasonably skilled in the art. If you present someone reasonably skilled with the goal of putting high bandwidth raw data into limited on-board storage, do you think they wouldn't ponder compression? In a world where raw video cameras exist (Dalsa) and externally recorded compressed raw video exists (SI2k)? Because that's what's patented; not any particular implementation of Red's. To play on your argument: surely big players like Apple and Sony didn't think this was patentable.
There must be some basis to that. I have no knowledge of the US patent law system, but it definitely lacks common sense. So kudos to Red for capitalizing on this lack of common sense.
  8. Dunno what's a game changer, but almost 15 years ago the SI2k Mini was winning people Academy awards for cinematography. Incidentally, the SI2k is a camera that's relevant in this thread for other reasons. 🙂
  9. Smaller VF magnification can be a positive for spectacles wearers as you don't have to move your eye around the image. I've dumped otherwise great cameras before because of their excessive VF magnification.
  10. How do you price size though? Using the official dimensions, the A7c fits in a box of half the volume of the S5 bounding box.
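To put numbers on that claim, here is the bounding-box arithmetic. The dimensions below are the commonly listed spec-sheet figures for each body and may differ slightly by source:

```python
# Bounding-box volumes from commonly listed body dimensions (mm, W x H x D).
# These are approximate spec-sheet figures, not measurements.
a7c = (124.0, 71.1, 59.7)   # Sony A7C
s5 = (132.6, 97.1, 81.9)    # Panasonic S5

def volume(dims):
    w, h, d = dims
    return w * h * d

ratio = volume(a7c) / volume(s5)
print(f"A7C box is {ratio:.2f}x the volume of the S5 box")
```

With these figures the ratio comes out right around 0.5, i.e. the A7C's box is half the volume.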
  11. re: appeal The Sony NEX 6 has the most perfect size-feature balance for small hands and spectacles of all digital cameras I've tried. A7 series is big and heavy, particularly so after the first iteration (the main reason I still use an A7s mark I as a primary photo camera), and EVF position is worse (for me) than on the NEX series. This camera on the other hand... color me interested. The position of the EVF alone is an insta-win.
  12. cpc

    Sony A7S III

    This is too optimistic, I think. The A7s needed overexposure in s-log, it was barely usable at nominal ISO (and I am being generous with my wording here). With the lower base ISO in s-log3 of the A7s III (640 vs 1600 on the A7s), Sony now basically make this overexposure implicit.
  13. cpc

    Sony A7S III

    For determining the clip point it doesn't matter if the footage is overexposed; overexposure doesn't move the clip point; if anything, it makes the point easier to find. All you need is to locate a hard clipping area (like the sun). re: exposing While a digital spotmeter would be the perfect tool for exposing log, the A7s II does have "gamma assist", where you record s-log but preview an image properly tone mapped for display. The A7s III likely has this too. You don't really need perfect white balance in-camera when shooting a fully reversible curve like s-log3. This can be white balanced in post in a mathematically correct way, similarly to how you balance raw in post. You only need in-camera WB to be in the ballpark to maximize utilization of the available tonal precision.
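A sketch of why log WB is fully recoverable: s-log3 is an invertible curve, so you can decode to scene-linear, apply per-channel gains exactly as you would for raw, and re-encode. The curve below follows Sony's published S-Log3 formula; the gain triple is a made-up illustrative example, not a real camera profile:

```python
import math

# Sony's published S-Log3 transfer curve (normalized 0..1 code values).
def slog3_decode(cv):
    if cv >= 171.2102946929 / 1023.0:
        return 10.0 ** ((cv * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01
    return (cv * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def slog3_encode(lin):
    if lin >= 0.01125:
        return (420.0 + math.log10((lin + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (lin * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

# Mathematically correct WB on log footage: decode -> per-channel gain -> encode.
# The gains are illustrative numbers only.
def white_balance(rgb_log, gains=(1.12, 1.0, 0.94)):
    return tuple(slog3_encode(slog3_decode(c) * g) for c, g in zip(rgb_log, gains))
```

Middle gray (18% scene reflectance) encodes to code value 420/1023, about 0.41, and the decode/encode round trip is exact, which is what makes the post correction equivalent to doing it in camera.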
  14. cpc

    Sony A7S III

    The sun is so bright that you'd need significant underexposure to bring it down below clip levels (on any camera). And these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, this is slightly more than 6 stops above middle gray. With "metadata ISO" cameras like the Alexa, the clip point in Log-C moves up with ISOs higher than base, and down with ISOs lower than base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this is likely shot at the base s-log3 ISO of 640. In any case, the s-log3 curve has a nominal range of around 9 stops below mid gray (usable range obviously significantly lower), so this ties in with the boasted 15 stops of DR in video. You can think of the camera as shooting 10 - log2(1024/(0.87*1024 - 95)) bit footage in s-log3. That is, as a 9.64 bit camera. 🙂
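The arithmetic above can be checked directly against Sony's published S-Log3 formula (the decode below is that formula; 0.87106 is the observed clip code value):

```python
import math

# Sony's published S-Log3 decode (normalized 0..1 code value -> scene-linear).
def slog3_decode(cv):
    if cv >= 171.2102946929 / 1023.0:
        return 10.0 ** ((cv * 1023.0 - 420.0) / 261.5) * (0.18 + 0.01) - 0.01
    return (cv * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

clip_cv = 0.87106                              # observed clip point in code values
clip_lin = slog3_decode(clip_cv)               # scene-linear value at clip
stops_above_gray = math.log2(clip_lin / 0.18)  # comes out slightly above 6 stops

# "Effective bits": code values above ~0.87 of full scale are never used,
# and ~95/1023 at the bottom of the s-log3 range sits below black.
effective_bits = 10 - math.log2(1024 / (0.87 * 1024 - 95))  # ~9.64
```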
  15. cpc

    Sony A7S III

    With the middle point mapped as per the specification, the camera simply lacks the highlight latitude to fill all the available s-log3 range. Basically, it clips lower than what s-log3 can handle. You should still be importing as data levels: this is not a bug, it is expected. Importing as video levels simply stretches the signal: you are importing it wrong and increasing the gamma of the straight portion of the curve (it is no longer the s-log3 curve), thus throwing off any subsequent processing which relies on the curve being correct.
  16. @Lensmonkey: Raw is really the same as shooting film. Something that you should take into consideration is that middle gray practically never falls in the middle of the exposure range on negative film. You have tons of overexposure latitude, and very little underexposure latitude, so overexposing a negative for a denser image is very common. With raw on lower end cameras it is quite the opposite: you don't really have much (if any) latitude for overexposure, because of the hard clip at sensor saturation levels, but you can often rate faster (higher ISO) and underexpose a bit. This is the case, provided that ISO is merely a metadata label, which is true for most cinema cameras, and looking at the chart it is likely true for the Sigma up to around 1600, where some analog gain change kicks in.
  17. Your "uncompressed reference" has lost information that the 4:4:4 codecs are taking into consideration, hence the difference. You should use uncompressed RGB for reference, not YUV, and certainly not 4:2:2 subsampled. Remember, 4:2:2 subsampling is itself a form of compression.
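A toy illustration of why a subsampled file can't serve as a lossless reference: average a chroma row down to half resolution (the 4:2:2 step), upsample it back, and any single-pixel chroma detail is gone for good.

```python
# Toy 4:2:2 round trip on one chroma row: average pixel pairs (subsample),
# then duplicate each sample back to full width (nearest-neighbor upsample).
def subsample_422(row):
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

def upsample_422(half):
    return [v for v in half for _ in range(2)]

cb = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # alternating chroma detail
round_trip = upsample_422(subsample_422(cb))     # flattens to a constant 0.5
max_err = max(abs(a - b) for a, b in zip(cb, round_trip))
```

The round trip returns a flat 0.5 row: the alternating detail is unrecoverable, so comparing 4:4:4 codecs against this "reference" penalizes them for information the reference itself threw away.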
  18. Can't argue with this; I use manual lenses almost exclusively myself. On the other hand, ML does provide by far the best exposure and (manual) focusing tools available in-camera south of 10 grand, maybe higher, so this offsets the lack of IBIS somewhat. I am amazed these tools aren't matched by newer cameras 8 years later.
  19. A 2012 5D Mark III shoots beautiful 1080p full-frame 14-bit lossless compressed raw with more sharpness than you'll ever need for YT, at a bit rate comparable to ProRes XQ. If IS lenses will do instead of IBIS, I don't think you'll find a better deal.
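Back-of-the-envelope on that bit rate claim (the 0.6 lossless compression ratio is an assumed typical figure for real scenes, not a measurement):

```python
# Uncompressed bitrate for 1080p 14-bit Bayer raw at 24 fps.
width, height, bits, fps = 1920, 1080, 14, 24
uncompressed_mbps = width * height * bits * fps / 1e6   # just under 700 Mb/s

# Lossless compression on typical scenes lands somewhere around 60% of that
# (assumed ratio for illustration):
lossless_mbps = uncompressed_mbps * 0.6
```

That puts the stream in the several-hundred-Mb/s neighborhood of high-end ProRes flavors at 1080p.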
  20. The problem is missing time codes in the audio files recorded by the camera. Resolve needs these to auto sync audio and image and present them as a single entity. Paul has posted a workaround here: As a general rule, if the uncompressed image and audio don't auto sync in Resolve, the compressed image won't auto sync either.
  21. I will be surprised if Resolve rescales in anything other than the image's native gamma, that is, in whatever gamma the values are at the point of the rescale operation. But if anything, some apps convert from sRGB or power gamma to linear for scaling, and then back. You can do various transforms to shrink the ratio between extremes, and this will generally reduce ringing artifacts. I know people deliberately gamma/log transform linearly rendered images for rescale. But it is mathematically and physically incorrect. There are examples and lengthy write-ups on the web about what might go wrong if you scale in non-linear gamma, but perhaps most intuitively you can think about it in an "energy conserving" manner. If you don't do it in linear, you are altering the (locally) average brightness of the scene. You may not see this easily in real life images, because it will often be masked by detail, but do a thought experiment about, say, a grayscale synthetic 2x1 image scaled down to a 1x1 image and see what happens. I have a strong dislike for ringing artifacts myself, but I believe the correct approach to reduce these would be to pre-blur to band limit the signal and/or use a different filter: for example, Lanczos with fewer lobes, or Lanczos with pre-weighted samples; or go to splines/cubic; and sometimes bilinear is fine for downscales between 1x and 2x, since it has only positive weights. On the other hand, as we all very well know, theory and practice can diverge, so whatever produces good looking results is fine. Rescaling Bayer data is certainly more artifact prone, because of the missing samples and the unknown of the subsequent deBayer algorithm. This is also the main reason SlimRAW only downscales precisely 2x for DNG proxies. It is actually possible to do Bayer aware interpolation and scale 3 layers instead of 4. This way the green channel will benefit from double the information compared to the others.
You can think of this as interpolating "in place", rather than scaling with subsequent Bayer rearrangement. Similar to how you can scale a full color image in dozens of ways, you can do the same with a Bayer mosaic, and I don't think there is a "proper" way to do this. It is all a matter of managing trade offs, with the added complexity that you have no control over exactly how the image will be then debayered in post. It is in this sense that rescaling Bayer is worse -- you are creating an intermediate image, which will need to endure some serious additional reconstruction. Ideally, you should resize after debayering, because an advanced debayer method will try to use all channels simultaneously (also, see below). This is possible, and you can definitely extract more information and get better results by using neighboring pixels of different color because channels correlate somewhat. Exploiting this correlation is at the heart of many debayer algorithms, and, in some sense, memorizing many patterns of correlating samples is how recent NN based debayering models work. But if you go this way, you may just as well compress and record the debayered image with enough additional metadata to allow WB tweaks and exposure compensation in post, or simply go the partially debayered route similar to BRAW or Canon Raw Light. In any case, we should also have in mind that the higher the resolution, the less noticeable the artifacts. And 4K is quite a lot of pixels. In real life images I don't think it is very likely that there will be noticeable problems, other than the occasional no-OLPF aliasing issues.
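The 2x1 thought experiment, sketched with a pure 2.2 power curve standing in for the display gamma (an approximation of sRGB): averaging the encoded values changes the light the downscaled pixel emits, while averaging in linear preserves it.

```python
# Downscale a 2x1 grayscale image to 1x1 two ways, using a pure 2.2 power
# curve as a stand-in for display gamma.
GAMMA = 2.2

def to_linear(v):   # encoded value -> emitted light
    return v ** GAMMA

def to_encoded(v):  # emitted light -> encoded value
    return v ** (1.0 / GAMMA)

a, b = 0.25, 1.0                                 # the two source pixels (encoded)
true_light = (to_linear(a) + to_linear(b)) / 2   # average emitted light

naive = (a + b) / 2                  # average the encoded values directly
correct = to_encoded(true_light)     # average in linear, then re-encode

# The naive pixel emits less light than the two pixels it replaced:
energy_error = true_light - to_linear(naive)
```

Because the encoded-to-linear curve is convex, averaging in gamma space always underestimates the energy (Jensen's inequality), which is exactly the "altered local average brightness" described above.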
  22. Binning is also scaling. Hardware binning will normally just group per-channel pixels together without further spatial considerations, but a weighted binning technique is basically bilinear interpolation (when halving resolution). Mathematically, scaling should be done in linear, assuming samples are in an approximately linear gamut, which may or may not be the case. Digital sensors, in general, have good linearity of light intensity levels (certainly way more consistent than film), but native sensor gamut is not a clean linear tri-color space. If you recall the rules of proper compositing, scaling itself is very similar -- you do it in linear to preserve the way light behaves. You sometimes may get better results with non-linear data, but this is likely related to idiosyncrasies of the specific case and is not the norm. re: Sigma's downscale I assume, yes, they simply downsample per channel and arrange into a Bayer mosaic. Bayer reconstruction itself is a process of interpolation, you need to conjure samples out of thin air. No matter how advanced the method, and there are some really involved methods, it is really just that, divination of sample values. So anything that loses information beforehand, including channel downsample, will hinder reconstruction. Depending on the way the downscale is done, you can obstruct reconstruction of some shapes more than others, so you might need to prioritize this or that. A simple example of tradeoffs: binning may have better SNR than some interpolation methods but will result in worse diagonal detail.
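A quick sketch of the SNR side of that tradeoff: 2x2 averaging (weighted binning of one Bayer channel plane, which is the box-filter/bilinear case for halving) averages four samples, so it should cut the noise standard deviation roughly in half.

```python
import random
from statistics import pstdev

random.seed(0)

# One channel plane of pure noise (think: the red sites of a Bayer mosaic).
N = 256
plane = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]

# 2x2 weighted binning = box-filter downscale by 2.
def bin2x2(img):
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

binned = bin2x2(plane)
noise_in = pstdev(v for row in plane for v in row)    # close to 1.0
noise_out = pstdev(v for row in binned for v in row)  # close to 0.5
```

The noise drops by about sqrt(4) = 2x, while the same averaging smears any detail that isn't aligned to the 2x2 grid, which is where the diagonal-detail penalty comes from.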
  23. Only because I have code lying around that does this in multiple ways, and it shows various ways of producing artifacts without doing anything weird. It is not necessary to do crazy weird things to break the image. Even the fanciest way of Bayer downscale will produce an image that's noticeably worse than debayering at full res and then downscaling to the target resolution; there's no way around it, even in the conceptually easiest case of a 50% downscale.
  24. Thanks. Here is the same file scaled down to 3000x2000 Bayer in four different ways (lineskipping, two types of binning and a bit more fancy interpolation). Not the same as 6K-to-4K Bayer, but it might be interesting anyway. _SDI2324_bin.DNG _SDI2324_interp.DNG _SDI2324_skip.DNG _SDI2324_wbin.DNG
  25. Thanks, these gears look really nice. I don't think there is a single Leica R that can hold a candle to the Contaxes in terms of flare resistance. The luxes also flare a lot (here is the 50), but I haven't found this to be a problem in controlled shoots. Sorry, I meant the DNG file.