
cpc

Members
  • Posts: 204
  • Joined
  • Last visited

Reputation Activity

  1. Like
    cpc got a reaction from John Matthews in Nikon buys Red?   
    This might be premature. Red has a bunch of newer raw/compression-related patents which are "continuations" of the old patents or merge a few of the old patents into a new one, e.g. 10531098 (issued 2020), 11076164 (issued 2021), 11503294 (issued 2022), 11818351 (issued 2023), etc. I have no knowledge of the legal implications of these, but I won't be surprised one bit if they actually extend the in-camera raw compression monopoly.
  2. Like
    cpc got a reaction from majoraxis in Deciding closest modern camera to Digital Bolex look   
    The Kodak sensor also appears to have thicker filter dyes, which result in rich color and excellent color separation. Later sensors, particularly cheaper ones, may have been optimized for sensitivity instead. If you look at images from the BM Pocket, they have more compressed color with hues mashed together.
  3. Like
    cpc got a reaction from kye in MY VARI ND filters suck thread   
    Most issues come from the fact that VND is not ND. Which should be obvious: 2xPola is anything but ND, except someone with an inclination for marketing thought it clever to call 2xPola "Variable ND".
    Pola filters are special-purpose filters and it makes no sense to use them for general-purpose light-level reduction. There are too many variables involved in filtering polarized light, starting with the angle of incident light and the characteristics of reflective surfaces, for this to be a reliable way to reduce levels. Not to mention the adverse side effects of filtering some reflected light more than other light, e.g. preferentially filtering out skin subsurface-scattered light, otherwise known as "skin glow".
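    For intuition, here's a minimal sketch of the idealized model behind a 2xPola "variable ND": Malus's law for two stacked linear polarizers. The function name and angles are just illustrative, and the model assumes ideal polarizers and unpolarized incoming light; real scenes with partially polarized light deviate from it, which is exactly the problem.
    import math

    # Idealized "variable ND" made of two stacked linear polarizers.
    # Malus's law: transmission of the pair depends on the relative rotation
    # angle theta. Ignores absorption, coatings, and the polarization state of
    # the incoming light.
    def vnd_stops(theta_degrees, max_transmission=1.0):
        t = max_transmission * math.cos(math.radians(theta_degrees)) ** 2
        return -math.log2(t / max_transmission)

    for angle in (10, 30, 45, 60, 75, 85):
        print(f"{angle:2d} deg -> {vnd_stops(angle):.2f} stops")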
  4. Like
    cpc got a reaction from 92F in The great AI content theft debate   
    "When you’re fundraising, it’s AI. When you’re hiring, it’s ML. When you’re implementing, it’s linear regression."
    Replace "fundraising" with "marketing" and the truth value doesn't change. It is "artificial intelligence" as much as your phone or watch is "smart". Which is none.
    So the answer is "No, it isn't", but it largely depends on definitions and heavily overloaded semantics. "AI" certainly doesn't "think", nor "feel", but it can "sense" or "perceive" by being fed data from sensors, and it can represent knowledge and learn. The latter two are where the usefulness comes from, currently. A model can distill structure from a dataset in order to represent knowledge needed for solving a specific task. It is glorified statistics, is all. But anthropomorphizing is in our DNA, we have a sci-fi legacy imprinted on us, and model design itself has long been taking cues from neurobiology, so you'll never be able to steer terminology in the field towards something more restrained.
  5. Thanks
    cpc got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    Did it win, though?
    Apparently Red settled with Sony after Sony countersued for infringement. And Apple's case was dismissed for what looks, at least partially, like procedural reasons (basically due to incompleteness -- the decision uses the word "unclear" multiple times in relation to Apple's rationale and literally says "In sum, Petitioner’s obviousness challenge is unclear and incomplete"). That is, Apple lost, but it is not clear if the patent won.
  6. Like
    cpc got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    Companies can be clueless about these matters. And Nikon comes from a stills background.
    On the other hand, Nikon's patent portfolio includes tens of thousands of patents, including thousands in the US. They can probably dig into it for counter-infringement claims if they are forced to. I don't recall the specifics of how the Sony case was resolved, but I wouldn't be surprised if Sony did exactly this to fend off Red.
  7. Like
    cpc got a reaction from Davide DB in RED Files Lawsuit Against Nikon   
    Did it win, though?
    Apparently Red settled with Sony after Sony countersued for infringement. And Apple's case was dismissed for what looks, at least partially, like procedural reasons (basically due to incompleteness -- the decision uses the word "unclear" multiple times in relation to Apple's rationale and literally says "In sum, Petitioner’s obviousness challenge is unclear and incomplete"). That is, Apple lost, but it is not clear if the patent won.
  8. Like
    cpc got a reaction from Davide DB in RED Files Lawsuit Against Nikon   
    Companies can be clueless about these matters. And Nikon comes from a stills background.
    On the other hand, Nikon's patent portfolio includes tens of thousands of patents, including thousands in the US. They can probably dig into it for counter-infringement claims if they are forced to. I don't recall the specifics of how the Sony case was resolved, but I wouldn't be surprised if Sony did exactly this to fend off Red.
  9. Like
    cpc got a reaction from Davide DB in RED Files Lawsuit Against Nikon   
    It should be clear that there is nothing in Red's compression algorithm that's specifically related to video as in a "sequence of related frames". It still compresses individual frames independently of each other, it simply does so at fairly high frame rates (24+). Also, as already repeatedly mentioned, the wavelet-based "visually lossless" CineformRAW compression for Bayer raw data was introduced that same year, months before Mr. Nattress had even met Mr. Jannard... If you read David Newman's blog, which is still live on the interwebs and is a great source of inside information for anyone interested, you will know that CineformRAW was started in 2004 and work on adding it to the Silicon Imaging camera started right after NAB 2005.
    Not that this matters, as Red did not patent a specific implementation. They patented a broad and general idea, which was with absolute certainty discussed by others at the same time or earlier. Which isn't Red's fault, of course. It's just a consequence of the stupidity of the patent system.
  10. Like
    cpc got a reaction from PannySVHS in RED Files Lawsuit Against Nikon   
    It should be clear that there is nothing in Red's compression algorithm that's specifically related to video as in a "sequence of related frames". It still compresses individual frames independently of each other, it simply does so at fairly high frame rates (24+). Also, as already repeatedly mentioned, the wavelet-based "visually lossless" CineformRAW compression for Bayer raw data was introduced that same year, months before Mr. Nattress had even met Mr. Jannard... If you read David Newman's blog, which is still live on the interwebs and is a great source of inside information for anyone interested, you will know that CineformRAW was started in 2004 and work on adding it to the Silicon Imaging camera started right after NAB 2005.
    Not that this matters, as Red did not patent a specific implementation. They patented a broad and general idea, which was with absolute certainty discussed by others at the same time or earlier. Which isn't Red's fault, of course. It's just a consequence of the stupidity of the patent system.
  11. Like
    cpc got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    The patent expires in 6 years or so, IIRC.
     
    Or would we?
    The guy that invented ANS, possibly the most important fundamental novelty in compression in the last 2 or 3 decades, did put it in the public domain. It is now everywhere. In every new codec worth mentioning. And in hardware like the PS5.
  12. Thanks
    cpc got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    Yes, you can do that. You can also do more sensible things like partial debayer (e.g. Blackmagic BRAW).
     
    This isn't novel, though. It is a basic example of inevitable evolution.
  13. Like
    cpc got a reaction from mercer in RED Files Lawsuit Against Nikon   
    If you read the patents carefully, they usually describe a few possible ways of doing this or that as "claims", and then explicitly say "but not limited to these". For years I used to think Red's patents were limited to in-camera Bayer compression at ratios of 6:1 or higher, because this ratio is repeatedly mentioned as a "claim". Apparently this wasn't the case, as demonstrated by their actions against BM and others.
  14. Thanks
    cpc got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    @Andrew Reid Lossless image compression has been around for decades. Raw images are images. Cinema raw images are raw images are images. There isn't anything particularly special about raw images compression-wise. CineformRAW (introduced in 2005) is cited in Red's patents. CineformRAW is cinema raw compression. Red don't claim to have invented raw video compression; they claim to have put it in cameras first.
    Red's patents mostly refer to "visually lossless", which is an entirely meaningless phrase in relation to raw. Here is a quote from one of their patents: "As used herein, the term “visually lossless” is intended to include output that, when compared side by side with original (never compressed) image data on the same display device, one of ordinary skill in the art would not be able to determine which image is the original with a reasonable degree of accuracy, based only on a visual inspection of the images." This, of course, makes no sense, because anyone of ordinary skill can increase image contrast during raw development to an extreme point where the "visually lossless" image breaks before the original. It is a stupid marketing phrase which needs multiple additional definitions (standard observer, standard viewing conditions, standard display, standard raw processing) to make it somewhat useful. None of these are given in the patent, btw.
    A basic requirement for some tech to be patentable is that it isn't an obvious solution to a problem for someone reasonably skilled in the art. If you present someone reasonably skilled with the goal of putting high-bandwidth raw data into limited on-board storage, do you think they wouldn't ponder compression? In a world where raw video cameras exist (Dalsa) and externally recorded compressed raw video exists (SI2k)? Because that's what's patented; not any particular implementation of Red's. To play on your argument: surely big players like Apple and Sony didn't think this was patentable. There must be some basis to that. I have no knowledge of the US patent law system, but it definitely lacks common sense. So kudos to Red for capitalizing on this lack of common sense.
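    As a rough illustration of the contrast argument (purely hypothetical numbers, not any real codec), a small coding error that is invisible at normal development gets amplified into a much larger difference by an aggressive tone curve:
    import numpy as np

    # Hypothetical illustration: add a small "coding error" to linear raw values,
    # then develop both versions with a steep contrast curve. The error that was
    # tiny before development is roughly 10x larger after it.
    rng = np.random.default_rng(0)
    raw = rng.uniform(0.2, 0.3, size=100_000)          # a flat shadow region
    decoded = raw + rng.normal(0, 0.002, raw.shape)    # ~0.2% coding error

    def develop(x, gain=10.0, pivot=0.25):
        # crude "raise contrast around a pivot" tone curve
        return np.clip(0.5 + gain * (x - pivot), 0.0, 1.0)

    print("mean error before development:", np.abs(raw - decoded).mean())
    print("mean error after development: ", np.abs(develop(raw) - develop(decoded)).mean())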
  15. Like
    cpc got a reaction from PannySVHS in RED Files Lawsuit Against Nikon   
    Dunno what's a game changer, but almost 15 years ago the SI2k Mini was winning people Academy awards for cinematography. Incidentally, the SI2k is a camera that's relevant in this thread for other reasons. 🙂
  16. Like
    cpc got a reaction from Geoff CB in Sony A7S III   
    The sun is so bright that you'd need significant underexposure to bring it down to below clip levels (on any camera). And these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, this is slightly more than 6 stops above middle gray. With "metadata ISO" cameras like the Alexa the clip point in Log-C moves up with ISOs higher than base, and lower with ISOs lower than base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this is likely shot at base s-log3 ISO 640.
    In any case, the s-log3 curve has a nominal range of around 9 stops below mid gray (the usable range is obviously significantly lower), so this ties in with the boasted 15 stops of DR in video. You can think of the camera as shooting 10 - log2(1024 / (0.87*1024 - 95)) bit footage in s-log3. That is, as a 9.64-bit camera. 🙂
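    Spelled out, that's just this arithmetic (95 is the 10-bit S-Log3 code value for zero exposure, 0.87106 the observed clip level):
    import math

    clip = 0.87106 * 1024             # highest code value actually reached
    black = 95                        # S-Log3 code value for zero exposure (10-bit)
    effective_bits = 10 - math.log2(1024 / (clip - black))
    print(round(effective_bits, 2))   # 9.64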
  17. Like
    cpc got a reaction from Hangs4Fun in Sony A7S III   
    The sun is so bright that you'd need significant underexposure to bring it down to below clip levels (on any camera). And these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, this is slightly more than 6 stops above middle gray. With "metadata ISO" cameras like the Alexa the clip point in Log-C moves up with ISOs higher than base, and lower with ISOs lower than base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this is likely shot at base s-log3 ISO 640.
    In any case, the s-log3 curve has a nominal range of around 9 stops below mid gray (the usable range is obviously significantly lower), so this ties in with the boasted 15 stops of DR in video. You can think of the camera as shooting 10 - log2(1024 / (0.87*1024 - 95)) bit footage in s-log3. That is, as a 9.64-bit camera. 🙂
  18. Like
    cpc got a reaction from Geoff CB in Sony A7S III   
    With the middle point mapped as per the specification, the camera simply lacks the highlights latitude to fill all the available s-log3 range. Basically, it clips lower than what s-log3 can handle.
    You should still be importing as data levels: this is not a bug, it is expected. Importing as video levels simply stretches the signal; you are importing it wrong and increasing the gamma of the straight portion of the curve (it is no longer the s-log3 curve), thus throwing off any subsequent processing which relies on the curve being correct.
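    A minimal sketch of why the levels flag matters, assuming the standard 10-bit legal range of 64-940 and the S-Log3 code values discussed above:
    def video_levels_stretch(code_10bit):
        # interpreting full-range data as legal-range video remaps 64..940 to 0..1
        return (code_10bit - 64) / (940 - 64)

    slog3_mid_gray = 420    # 10-bit S-Log3 code value for 18% gray
    clip_point = 892        # approx. clip point from the earlier post (0.871 * 1024)

    print(video_levels_stretch(slog3_mid_gray))   # ~0.406 instead of 420/1023 ~ 0.410
    print(video_levels_stretch(clip_point))       # ~0.945 instead of ~0.872
    After this remap the footage no longer sits on the S-Log3 curve, so any LUT or transform that expects S-Log3 is applied to the wrong values.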
  19. Like
    cpc got a reaction from kye in How much resolution for YT? Contemplating going back to 1080p   
    Can't argue with this, I am using manual lenses almost exclusively myself.
    On the other hand, ML does provide by far the best exposure and (manual) focusing tools available in-camera south of 10 grand, maybe more, so this offsets the lack of IBIS somewhat. I am amazed these tools aren't matched by newer cameras 8 years later.
  20. Like
    cpc got a reaction from Katrikura in How much resolution for YT? Contemplating going back to 1080p   
    A 2012 5D Mark III shoots beautiful 1080p full-frame 14-bit lossless compressed raw with more sharpness than you'll ever need for YT, at a bit rate comparable to ProRes XQ. If IS lenses can stand in for IBIS, I don't think you'll find a better deal.
  21. Like
    cpc got a reaction from JJHLH in Sigma Fp review and interview / Cinema DNG RAW   
    As promised, the Sigma fp-centered release of slimRAW is now out, so make sure to update. slimRAW now works around Resolve's lack of affection for 8-bit compressed CinemaDNG, and slimRAW-compressed Sigma fp CinemaDNG will work in Premiere even though the uncompressed originals don't.
    There is also another peculiar use: even though Sigma fp raw stills are compressed, you can still (losslessly) shrink them significantly through slimRAW. It discards the huge embedded previews and re-compresses the raw data, shaving off around 30% of the original size. (Of course, don't do this if you want the embedded previews.)
  22. Like
    cpc got a reaction from paulinventome in Sigma Fp review and interview / Cinema DNG RAW   
    The problem is missing time codes in the audio files recorded by the camera. Resolve needs these to auto sync audio and image and present them as a single entity.
    Paul has posted a workaround here:
    As a general rule, if the uncompressed image and audio don't auto sync in Resolve, the compressed image won't auto sync either.
  23. Like
    cpc got a reaction from kaylee in Sigma Fp review and interview / Cinema DNG RAW   
    The problem is missing time codes in the audio files recorded by the camera. Resolve needs these to auto sync audio and image and present them as a single entity.
    Paul has posted a workaround here:
    As a general rule, if the uncompressed image and audio don't auto sync in Resolve, the compressed image won't auto sync either.
  24. Like
    cpc reacted to rawshooter in Sigma Fp review and interview / Cinema DNG RAW   
    I finally tested SlimRaw (trial version) with Wine and Debian 10, and it works like a charm!


  25. Like
    cpc got a reaction from Lars Steenhoff in Sigma Fp review and interview / Cinema DNG RAW   
    I will be surprised if Resolve rescales in anything other than the image's native gamma, that is, in whatever gamma the values are at the point of the rescale operation. But if anything, some apps convert from sRGB or power gamma to linear for scaling, and then back.
    You can do various transforms to shrink the ratio between extremes, and this will generally reduce ringing artifacts. I know people deliberately gamma/log transform linearly rendered images for rescale. But it is mathematically and physically incorrect. There are examples and lengthy write-ups on the web about what might go wrong if you scale in non-linear gamma, but perhaps most intuitively you can think about it in an "energy conserving" manner. If you don't do it in linear, you are altering the (locally) average brightness of the scene. You may not see this easily in real-life images, because it will often be masked by detail, but do a thought experiment about, say, a greyscale synthetic 2x1 image scaled down to a 1x1 image and see what happens.
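    Here is that thought experiment in a few lines of Python (a simple power 2.2 encoding is assumed purely for illustration):
    gamma = 2.2
    a_lin, b_lin = 0.0, 1.0                       # black and white pixel, linear light

    # physically correct: average the linear values, then encode for display
    correct = ((a_lin + b_lin) / 2) ** (1 / gamma)               # ~0.73

    # incorrect: encode first, then average the encoded values
    wrong = (a_lin ** (1 / gamma) + b_lin ** (1 / gamma)) / 2    # 0.5

    print(correct, wrong)   # the gamma-space downscale comes out visibly darker
    The gamma-space result corresponds to about 22% linear brightness instead of the scene's true 50%, which is exactly the loss of (local) average brightness described above.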
    I have a strong dislike for ringing artifacts myself, but I believe the correct approach to reduce these would be to pre-blur to band-limit the signal and/or use a different filter: for example, Lanczos with fewer lobes, or Lanczos with pre-weighted samples; or go to splines/cubic; and sometimes bilinear is fine for downscales between 1x and 2x, since it has only positive weights. On the other hand, as we all very well know, theory and practice can diverge, so whatever produces good looking results is fine.
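    For reference, a quick sketch of the Lanczos kernel with a configurable lobe count: the negative side-lobes are where the ringing comes from, and Lanczos-2 has noticeably shallower ones than Lanczos-3, at the cost of a softer result.
    import math

    def lanczos(x, a=3):
        # Lanczos kernel with "a" lobes: sinc(x) * sinc(x / a) for |x| < a
        if x == 0:
            return 1.0
        if abs(x) >= a:
            return 0.0
        px = math.pi * x
        return a * math.sin(px) * math.sin(px / a) / (px * px)

    # depth of the deepest negative lobe
    print(min(lanczos(i / 100, a=2) for i in range(-300, 301)))   # ~ -0.09
    print(min(lanczos(i / 100, a=3) for i in range(-300, 301)))   # ~ -0.15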
    Rescaling Bayer data is certainly more artifact-prone, because of the missing samples and the unknowns of the subsequent debayer algorithm. This is also the main reason SlimRAW only downscales precisely 2x for DNG proxies.
    It is actually possible to do Bayer-aware interpolation and scale 3 layers instead of 4. This way the green channel will benefit from double the information compared to the others. You can think of this as interpolating "in place", rather than scaling with subsequent Bayer rearrangement.
    Similar to how you can scale a full color image in dozens of ways, you can do the same with a Bayer mosaic, and I don't think there is a "proper" way to do this. It is all a matter of managing trade-offs, with the added complexity that you have no control over exactly how the image will then be debayered in post. It is in this sense that rescaling Bayer is worse -- you are creating an intermediate image which will need to endure some serious additional reconstruction. Ideally, you should resize after debayering, because an advanced debayer method will try to use all channels simultaneously (also, see below).
    This is possible, and you can definitely extract more information and get better results by using neighboring pixels of different colors, because channels correlate somewhat. Exploiting this correlation is at the heart of many debayer algorithms, and, in some sense, memorizing many patterns of correlating samples is how recent NN-based debayering models work. But if you go this way, you may just as well compress and record the debayered image with enough additional metadata to allow WB tweaks and exposure compensation in post, or simply go the partially debayered route similar to BRAW or Canon Raw Light.
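    To make the "mosaic in, mosaic out" idea concrete, here is a deliberately naive sketch (not slimRAW's actual method) of a 2x downscale that keeps the output a valid RGGB mosaic by averaging same-color samples within each 4x4 input tile:
    import numpy as np

    def bayer_downscale_2x(mosaic):
        # mosaic: 2D RGGB CFA data; each 4x4 input tile holds four samples of
        # each CFA position, averaged into one 2x2 RGGB cell of the output.
        h, w = mosaic.shape
        assert h % 4 == 0 and w % 4 == 0
        out = np.empty((h // 2, w // 2), dtype=mosaic.dtype)
        for dy in range(2):                      # row within the 2x2 CFA pattern
            for dx in range(2):                  # column within the 2x2 CFA pattern
                plane = mosaic[dy::2, dx::2].astype(np.float64)
                avg = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                       plane[1::2, 0::2] + plane[1::2, 1::2]) / 4.0
                out[dy::2, dx::2] = np.round(avg).astype(mosaic.dtype)
        return out

    half = bayer_downscale_2x(np.random.randint(0, 4096, (16, 16), dtype=np.uint16))
    print(half.shape)    # (8, 8), still an RGGB mosaic
    The averaged samples end up slightly displaced from their nominal grid positions, and differently so per channel, which is one concrete example of the trade-offs mentioned above.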
     
    In any case, we should also keep in mind that the higher the resolution, the less noticeable the artifacts. And 4K is quite a lot of pixels. In real-life images I don't think it is very likely that there will be noticeable problems, other than the occasional no-OLPF aliasing issues.