cpc

Members
  • Posts

    204
  • Joined

  • Last visited

Everything posted by cpc

  1. cpc

    Nikon buys Red?

    This might be premature. Red has a bunch of newer raw/compression related patents which are "continuations" of the old patents or merge a few of the old patents into a new one. E.g., 10531098 (issued 2020), 11076164 (issued 2021), 11503294 (issued 2022), 11818351 (issued 2023), etc. I have no knowledge of the legal implications of these, but I won't be surprised one bit if they actually extend the in-camera raw compression monopoly.
  2. Most issues come from the fact that a VND is not an ND. Which should be obvious: 2xPola is anything but neutral density, except someone with an inclination for marketing thought it clever to call 2xPola "Variable ND". Pola filters are special-purpose filters and it makes no sense to use them for general-purpose light reduction. There are too many variables involved in filtering polarized light, starting with the angle of the incident light and the characteristics of reflective surfaces, for this to be a reliable way to reduce light levels. Not to mention the adverse side effects of filtering out some reflected light more than others, e.g. preferentially filtering out skin subsurface-scattered light, otherwise known as "skin glow".
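To see why rotation angle, not a calibrated density, sets the strength of a 2xPola "VND", here is a minimal sketch of the idealized behavior. For unpolarized incoming light, two stacked linear polarizers follow Malus's law, T = cos²θ; real filters also lose a base stop or two, and already-polarized light (reflections, sky) breaks the formula entirely, which is exactly the unreliability described above.

```python
import math

# Idealized variable-ND behavior of two stacked linear polarizers:
# Malus's law, T = cos^2(theta), where theta is the angle between the
# transmission axes. Base transmission losses of real filters ignored.
def vnd_stops(theta_deg):
    t = math.cos(math.radians(theta_deg)) ** 2
    return -math.log2(t)  # density in stops

for theta in (0, 45, 60, 75, 85):
    print(f"{theta:2d} deg -> {vnd_stops(theta):5.2f} stops")
```

Note that this holds only for unpolarized input; once the scene itself polarizes the light, transmission depends on the scene, not the dial.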
  3. In their peak years Nikon filed a couple of thousand patents a year. Their patent portfolio is definitely in the tens of thousands. There's a good chance they have dug out a few that RED infringed upon. After all, protection is the main reason corporations amass patents in the first place.
  4. "When you’re fundraising, it’s AI. When you’re hiring, it’s ML. When you’re implementing, it’s linear regression." Replace "fundraising" with "marketing" and the truth value doesn't change. It is "artificial intelligence" as much as your phone or watch is "smart". Which is none. So the answer is "No, it isn't", but it largely depends on definitions and heavily overloaded semantics. "AI" certainly doesn't "think", nor "feel", but it can "sense" or "perceive" by being fed data from sensors, and it can represent knowledge and learn. The latter two are where the usefulness comes from, currently. A model can distill structure from a dataset in order to represent knowledge needed for solving a specific task. It is glorified statistics, is all. But anthropomorphizing is in our DNA, we have a sci-fi legacy imprinted on us, and model design itself has long been taking cues from neurobiology, so you'll never be able to steer terminology in the field towards something more restrained.
  5. Let's not forget that ChatGPT is built entirely on Google developed science. Google still have many of the top machine learning R&D people. It is premature to write them off. I'm absolutely certain that Google can deploy and optimize LLMs for scale in a way few other companies can. All they need is incentive, and now they also have that. re: Bard model errors It is certainly a failure to have this happen in a presentation, but ChatGPT makes stupid mistakes all the time. Here is a good writeup with some blatant examples by Wolfram: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/
  6. The Kodak sensor also appears to have thicker filter dyes, which results in rich color and excellent color separation. Later sensors may have been optimized for sensitivity, particularly cheaper sensors. If you look at images from the BM Pocket, they have more compressed color with hues mashed together.
  7. Did it win, though? Apparently Red settled with Sony after Sony countersued for infringement. And Apple's case was dismissed for what looks, at least partially, like procedural reasons (basically due to incompleteness -- the decision uses the word "unclear" multiple times in relation to Apple's rationale and says literally "In sum, Petitioner’s obviousness challenge is unclear and incomplete"). That is, Apple lost, but it is not clear if the patent won.
  8. Companies can be clueless about these matters. And Nikon comes from a stills background. On the other hand, Nikon's patent portfolio includes tens of thousands of patents, including thousands in the US. They can probably dig into it for counter-infringements if they are forced to. I don't recall the specifics of the resolution in the Sony case, but I wouldn't be surprised if Sony did exactly this to fend off Red.
  9. It should be clear that there is nothing in Red's compression algorithm that's specifically related to video as in a "sequence of related frames". It still compresses individual frames independently of each other; it simply does so at fairly high frame rates (24+). Also, as repeatedly mentioned already, the wavelet-based "visually lossless" CineformRAW compression for Bayer raw data was introduced that same year, months before Mr. Nattress had even met Mr. Jannard... If you read David Newman's blog, which is still live on the interwebs and is a great source of inside information for anyone interested, you will know that CineformRAW was started in 2004 and work on adding it to the Silicon Imaging camera started right after NAB 2005. Not that this matters, as Red did not patent a specific implementation. They patented a broad and general idea, which was with absolute certainty discussed by others at the same or at previous times. Which isn't Red's fault, of course. It's just a consequence of the stupidity of the patent system.
  10. The patent expires in 6 years or so, IIRC. Or would we? The guy that invented ANS, possibly the most important fundamental novelty in compression in the last 2 or 3 decades, did put it in the public domain. It is now everywhere. In every new codec worth mentioning. And in hardware like the PS5.
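For a feel of what ANS actually is, here is a toy sketch of rANS, one member of the ANS family, following Duda's construction as popularized by Fabian Giesen's ryg_rans. Static frequencies, byte-wise renormalization; illustrative only, not production code.

```python
# Toy rANS (range Asymmetric Numeral System) codec, the entropy-coder
# family Duda placed in the public domain. Static frequencies, byte-wise
# renormalization, after Fabian Giesen's ryg_rans. Illustrative only.
PROB_BITS = 12
TOT = 1 << PROB_BITS          # symbol frequencies must sum to this
RANS_L = 1 << 23              # lower bound of the normalized state

def rans_encode(symbols, freqs):
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    assert cum[-1] == TOT
    x, out = RANS_L, bytearray()
    for s in reversed(symbols):               # rANS encodes last-to-first
        f = freqs[s]
        x_max = ((RANS_L >> PROB_BITS) << 8) * f
        while x >= x_max:                     # renormalize, 8 bits at a time
            out.append(x & 0xFF)
            x >>= 8
        x = (x // f) * TOT + (x % f) + cum[s]
    return x, bytes(reversed(out))            # decoder reads bytes in reverse

def rans_decode(x, stream, n, freqs):
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    slot_to_sym = [s for s, f in enumerate(freqs) for _ in range(f)]
    pos, out = 0, []
    for _ in range(n):
        slot = x & (TOT - 1)                  # low bits select the symbol
        s = slot_to_sym[slot]
        out.append(s)
        x = freqs[s] * (x >> PROB_BITS) + slot - cum[s]
        while x < RANS_L and pos < len(stream):
            x = (x << 8) | stream[pos]
            pos += 1
    return out

freqs = [2048, 1536, 512]                     # p = 1/2, 3/8, 1/8
msg = [0, 1, 0, 2, 1, 0, 0, 1, 2, 0]
state, stream = rans_encode(msg, freqs)
assert rans_decode(state, stream, len(msg), freqs) == msg
```

The appeal is that this reaches arithmetic-coding compression ratios with little more than a multiply, a shift, and a table lookup per symbol, which is why it spread so quickly into new codecs and hardware.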
  11. Yes, you can do that. You can also do more sensible things like partial debayer (e.g. Blackmagic BRAW). This isn't novelty, though. This is a basic example of inevitable evolution.
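The partial-debayer idea can be sketched in a few lines. This is not Blackmagic's actual algorithm, just the simplest possible illustration of the concept: collapse each 2x2 RGGB cell into one half-resolution RGB pixel, averaging the two greens, instead of fully demosaicing.

```python
import numpy as np

# Sketch of the "partial debayer" idea (not Blackmagic's actual method):
# collapse each 2x2 RGGB cell into one half-resolution RGB pixel,
# averaging the two green samples, instead of full demosaicing.
def half_res_debayer(bayer):
    # assumes an RGGB mosaic with even dimensions
    r  = bayer[0::2, 0::2]
    g1 = bayer[0::2, 1::2]
    g2 = bayer[1::2, 0::2]
    b  = bayer[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.arange(16, dtype=np.float64).reshape(4, 4)
rgb = half_res_debayer(mosaic)
print(rgb.shape)  # (2, 2, 3)
```

Each output pixel uses only the samples of its own cell, so there is no interpolation step to bake in; that's what keeps such schemes cheap and reversible-ish compared to full demosaic.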
  12. If you read the patents carefully, they usually describe a few possible ways of doing this or that as "claims", and then explicitly say "but not limited to these". For years I used to think Red's patents were limited to in-camera Bayer compression at ratios of 6:1 or higher, because this ratio is repeatedly mentioned as a "claim". Apparently this wasn't the case, as demonstrated by their actions against BM and others.
  13. @Andrew Reid Lossless image compression has been around for decades. Raw images are images. Cinema raw images are raw images are images. There isn't anything particularly special about raw images compression-wise. CineformRAW (introduced in 2005) is cited in Red's patents. CineformRAW is cinema raw compression. Red don't claim an invention of raw video compression; they claim putting it in cameras first. Red's patents mostly refer to "visually lossless", which is an entirely meaningless phrase in relation to raw. Here is a quote from one of their patents: "As used herein, the term “visually lossless” is intended to include output that, when compared side by side with original (never compressed) image data on the same display device, one of ordinary skill in the art would not be able to determine which image is the original with a reasonable degree of accuracy, based only on a visual inspection of the images." This, of course, makes no sense, because anyone of ordinary skill can increase image contrast during raw development to an extreme point where the "visually lossless" image breaks before the original. It is a stupid marketing phrase which needs multiple additional definitions (standard observer, standard viewing conditions, standard display, standard raw processing) to make it somewhat useful. None of these are given in the patent, btw. A basic requirement for some tech to be patentable is that it isn't an obvious solution to a problem for someone reasonably skilled in the art. If you present someone reasonably skilled with the goal of putting high-bandwidth raw data into limited on-board storage, do you think they wouldn't ponder compression? In a world where raw video cameras exist (Dalsa) and externally recorded compressed raw video exists (SI2k)? Because that's what's patented; not any particular implementation of Red's. To play on your argument: surely big players like Apple and Sony didn't think this was patentable.
There must be some basis to that. I have no knowledge of the US patent law system, but it definitely lacks common sense. So kudos to Red for capitalizing on this lack of common sense.
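The contrast-stretch argument against "visually lossless" is easy to demonstrate numerically. In this toy example the "compression" is plain 8-bit quantization, standing in for any lossy coder: the residual error is invisible at normal levels but scales linearly with whatever gain you apply during raw development.

```python
import numpy as np

# Toy illustration: quantization error that is invisible under normal
# development scales with any gain applied during raw development, so
# aggressive contrast can expose "visually lossless" loss.
rng = np.random.default_rng(0)
linear = rng.uniform(0.0, 1.0, 10000)          # stand-in raw signal

# "compress" by quantizing to 8 bits (crude stand-in for lossy coding)
coded = np.round(linear * 255) / 255

for gain in (1, 8, 64):                         # contrast pushed in post
    err = np.max(np.abs(gain * linear - gain * coded))
    print(f"gain {gain:2d}: max error = {err:.4f}")
```

At gain 1 the worst-case error is about 0.5/255 ≈ 0.002; at gain 64 it is about 0.13 of full scale, which no observer would miss. Without pinning down the allowed processing, "visually lossless" pins down nothing.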
  14. Dunno what's a game changer, but almost 15 years ago the SI2k Mini was winning people Academy awards for cinematography. Incidentally, the SI2k is a camera that's relevant in this thread for other reasons. 🙂
  15. Smaller VF magnification can be a positive for spectacles wearers as you don't have to move your eye around the image. I've dumped otherwise great cameras before because of their excessive VF magnification.
  16. How do you price size though? Using the official dimensions, the A7c fits in a box of half the volume of the S5 bounding box.
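The half-the-volume claim checks out on the published body dimensions. The W x H x D figures below are the manufacturers' listed specs at announcement, taken as assumptions here rather than measurements.

```python
# Bounding-box check of the claim, using the manufacturers' published
# W x H x D body dimensions in mm (listed specs, assumed, not measured).
a7c = (124.0, 71.1, 59.7)   # Sony A7C
s5  = (132.6, 97.1, 81.9)   # Panasonic S5

def volume(dims):
    w, h, d = dims
    return w * h * d

ratio = volume(a7c) / volume(s5)
print(f"A7C box is {ratio:.2f} of the S5 box")  # ~0.50
```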
  17. re: appeal The Sony NEX 6 has the most perfect size-feature balance for small hands and spectacles of all digital cameras I've tried. A7 series is big and heavy, particularly so after the first iteration (the main reason I still use an A7s mark I as a primary photo camera), and EVF position is worse (for me) than on the NEX series. This camera on the other hand... color me interested. The position of the EVF alone is an insta-win.
  18. cpc

    Sony A7S III

    This is too optimistic, I think. The A7s needed overexposure in s-log, it was barely usable at nominal ISO (and I am being generous with my wording here). With the lower base ISO in s-log3 of the A7s III (640 vs 1600 on the A7s), Sony now basically make this overexposure implicit.
  19. cpc

    Sony A7S III

    For determining the clip point it doesn't matter if the footage is overexposed; overexposure doesn't move the clip point; if anything, it makes the point easier to find. All you need is to locate a hard-clipping area (like the sun). re: exposing While a digital spotmeter would be the perfect tool for exposing log, the A7s II does have "gamma assist", where you record s-log but preview an image properly tone mapped for display. The A7s III likely has this too. You don't really need perfect white balance in-camera when shooting a fully reversible curve like s-log3. It can be white balanced in post in a mathematically correct way, similarly to how you balance raw in post. You only need in-camera WB in the ballpark to maximize utilization of the available tonal precision.
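The "mathematically correct" post white balance can be sketched as: decode s-log3 to linear, scale the channels, re-encode. The curve formulas below follow Sony's published S-Log3 spec (code values normalized to 0..1); the per-channel gains are made-up illustrative numbers, not a real balance.

```python
import math

# Post white balance on S-Log3: decode to linear, apply per-channel
# gains, re-encode. Curve formulas follow Sony's published S-Log3 spec
# (code values normalized to 0..1); the gains are illustrative only.
CUT = 171.2102946929 / 1023.0

def slog3_to_linear(y):
    if y >= CUT:
        return (10.0 ** ((y * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
    return (y * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

def linear_to_slog3(x):
    if x >= 0.01125:
        return (420.0 + math.log10((x + 0.01) / 0.19) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

def wb_slog3(rgb, gains=(1.18, 1.0, 0.84)):   # hypothetical R/G/B gains
    return tuple(linear_to_slog3(slog3_to_linear(c) * g)
                 for c, g in zip(rgb, gains))

mid_gray = linear_to_slog3(0.18)              # ~0.4106, i.e. code 420/1023
print(wb_slog3((mid_gray, mid_gray, mid_gray)))
```

Because the curve is exactly invertible, this round trip loses nothing beyond quantization, which is why only a ballpark in-camera WB is needed.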
  20. cpc

    Sony A7S III

    The sun is so bright that you'd need significant underexposure to bring it down below clip levels (on any camera). And these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, this is slightly more than 6 stops above middle gray. With "metadata ISO" cameras like the Alexa, the clip point in Log-C moves up at ISOs higher than base, and down at ISOs lower than base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this is likely shot at the base s-log3 ISO of 640. In any case, the s-log3 curve has a nominal range of around 9 stops below mid gray (usable range obviously significantly lower), so this ties up with the boasted 15 stops of DR in video. You can think of the camera as shooting 10 - log2(1024/(0.87*1024 - 95)) bit footage in s-log3. That is, as a 9.64-bit camera. 🙂
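Both numbers can be reproduced directly: invert s-log3 at the observed clip code to get stops above 0.18 mid gray, and evaluate the quoted effective-bits expression. The inverse curve below follows Sony's published S-Log3 spec (codes normalized to 0..1).

```python
import math

# Reproducing the two figures above: stops of highlight headroom implied
# by a clip at S-Log3 code 0.87106, and the "effective bits" number.
# Inverse curve per Sony's published S-Log3 spec (codes normalized 0..1).
def slog3_to_linear(y):
    if y >= 171.2102946929 / 1023.0:
        return (10.0 ** ((y * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
    return (y * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

clip = 0.87106
stops_above_gray = math.log2(slog3_to_linear(clip) / 0.18)
print(f"{stops_above_gray:.2f} stops above mid gray")   # ~6.1

effective_bits = 10 - math.log2(1024 / (0.87 * 1024 - 95))
print(f"{effective_bits:.2f} effective bits")           # ~9.64
```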
  21. cpc

    Sony A7S III

    With the middle point mapped as per the specification, the camera simply lacks the highlights latitude to fill all the available s-log3 range. Basically, it clips lower than what s-log3 can handle. You should still be importing as data levels: this is not a bug, it is expected. Importing as video levels simply stretches the signal, you are importing it wrong and increasing the gamma of the straight portion of the curve (it is no longer the s-log3 curve), thus throwing off any subsequent processing which relies on the curve being correct.
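The video-levels mistake can be put in numbers. Interpreting full-range (data-levels) 10-bit s-log3 as legal-range video applies the stretch y' = (1023y - 64)/876, so every code moves, including mid gray, and the curve is no longer s-log3.

```python
# The "video levels" mistake, in numbers: treating full-range
# (data-levels) S-Log3 as legal-range video stretches codes by
# y' = (1023*y - 64) / 876, so the result is no longer the S-Log3 curve.
def video_levels_stretch(y):          # y is a 0..1 full-range code
    return (y * 1023.0 - 64.0) / 876.0

mid_gray = 420.0 / 1023.0             # S-Log3 mid-gray code
stretched = video_levels_stretch(mid_gray)
print(f"mid gray: {mid_gray:.4f} -> {stretched:.4f}")
```

Any LUT or grade that expects s-log3 mid gray at code 0.4106 now sees 0.4064 instead, and the error varies along the curve, which is why subsequent processing is thrown off.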
  22. @Lensmonkey: Raw is really the same as shooting film. Something that you should take into consideration is that middle gray practically never falls in the middle of the exposure range on negative film. You have tons of overexposure latitude, and very little underexposure latitude, so overexposing a negative for a denser image is very common. With raw on lower end cameras it is quite the opposite: you don't really have much (if any) latitude for overexposure, because of the hard clip at sensor saturation levels, but you can often rate faster (higher ISO) and underexpose a bit. This is the case, provided that ISO is merely a metadata label, which is true for most cinema cameras, and looking at the chart it is likely true for the Sigma up to around 1600, where some analog gain change kicks in.
  23. Your "uncompressed reference" has already lost information that the 4:4:4 codecs preserve, hence the difference. You should use uncompressed RGB as the reference, not YUV, and certainly not 4:2:2 subsampled. Remember, 4:2:2 subsampling is a form of compression.
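A toy demo of why 4:2:2 is already lossy before any codec runs: average each horizontal pair of chroma samples (4:2:2-style), upsample back, and the original chroma is gone.

```python
import numpy as np

# 4:2:2-style chroma subsampling is itself lossy, before any codec runs:
# average each horizontal pair of chroma samples, duplicate them back,
# and the original chroma plane is not recoverable.
rng = np.random.default_rng(1)
chroma = rng.uniform(-0.5, 0.5, (4, 8))        # toy Cb plane, full res

sub = (chroma[:, 0::2] + chroma[:, 1::2]) / 2  # 4:2:2: halve horizontally
rebuilt = np.repeat(sub, 2, axis=1)            # nearest-neighbor upsample

print("max chroma error:", np.max(np.abs(chroma - rebuilt)))
```

So a 4:2:2 "uncompressed" file is the wrong yardstick for judging 4:4:4 codecs; they retain exactly the chroma detail the reference has thrown away.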
  24. Can't argue with this, I am using manual lenses almost exclusively myself. On the other hand, ML does provide by far the best exposure and (manual) focusing tools available in-camera south of 10 grand, maybe more, so this offsets the lack of IBIS somewhat. I am amazed these tools aren't matched by newer cameras 8 years later.
  25. A 2012 5D mark III shoots beautiful 1080p full-frame 14-bit lossless compressed raw with more sharpness than you'll ever need for YT, at a bit rate comparable to ProRes XQ. If IS lenses can stand in for IBIS, I don't think you'll find a better deal.
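A back-of-envelope check of that bit-rate comparison. The ~1.6:1 lossless ratio below is a typical figure assumed for 14-bit lossless raw, not a measurement; ProRes 4444 XQ runs around 500 Mb/s at 1080p24 per Apple's ProRes white paper.

```python
# Back-of-envelope check of the bit-rate comparison above. The 1.6:1
# lossless ratio is a typical assumed figure for 14-bit lossless raw,
# not a measurement; ProRes 4444 XQ is ~500 Mb/s at 1080p24 per Apple.
width, height, bit_depth, fps = 1920, 1080, 14, 24
raw_mbps = width * height * bit_depth * fps / 1e6      # uncompressed
lossless_mbps = raw_mbps / 1.6                          # assumed ratio

print(f"uncompressed: {raw_mbps:.0f} Mb/s")             # ~697
print(f"lossless raw: {lossless_mbps:.0f} Mb/s vs ~500 Mb/s ProRes XQ")
```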