Posts posted by cpc

  1. 20 hours ago, John Matthews said:

    Redcode RAW was initially implemented in the early 2010s, but the filing was December 28th 2007. I imagine the patent will only last a few more years. This must have also played a role in the acquisition. Nikon will have a very short window to leverage it.

    This might be premature. Red has a bunch of newer raw/compression related patents which are "continuations" of the old patents or merge a few of the old patents into a new one. E.g., 10531098 (issued 2020), 11076164 (issued 2021), 11503294 (issued 2022), 11818351 (issued 2023), etc. I have no knowledge of the legal implications of these, but I won't be surprised one bit if they actually extend the in-camera raw compression monopoly.

  2. Most issues come from the fact that a VND is not an ND. Which should be obvious: 2xPola is anything but ND, yet someone with an inclination for marketing thought it clever to call 2xPola "Variable ND".

    Pola filters are special purpose filters and it makes no sense to use them for general purpose light level reduction. There are too many variables involved in filtering polarized light, starting with angles of incidence and reflective surface characteristics, for this to be a reliable level reduction method. Not to mention the adverse side effects of filtering out some reflected light more than others, e.g. preferentially filtering out skin subsurface scattered light, otherwise known as "skin glow".

  3. On 4/28/2023 at 1:08 AM, Eric Calabros said:

    I don't think Nikon has anything in tech IP to offer that may be useful for RED. They don't want Z mount. And almost all of Nikon's sensor patents are based on a long-time collaboration with Sony Semi. Masked AF pixels, stacked fabrication, all are covered by Sony IP too.

    In their peak years Nikon filed a couple of thousand patents a year. Their patent portfolio is definitely in the tens of thousands. There's a good chance they have dug up a few that RED infringed upon. After all, protection is the main reason corporations amass patents in the first place.

  4. "When you’re fundraising, it’s AI. When you’re hiring, it’s ML. When you’re implementing, it’s linear regression."

    Replace "fundraising" with "marketing" and the truth value doesn't change. It is "artificial intelligence" as much as your phone or watch is "smart". Which is none.

    So the answer is "No, it isn't", but it largely depends on definitions and heavily overloaded semantics. "AI" certainly doesn't "think", nor "feel", but it can "sense" or "perceive" by being fed data from sensors, and it can represent knowledge and learn. The latter two are where the usefulness comes from, currently. A model can distill structure from a dataset in order to represent knowledge needed for solving a specific task. It is glorified statistics, is all. But anthropomorphizing is in our DNA, we have a sci-fi legacy imprinted on us, and model design itself has long been taking cues from neurobiology, so you'll never be able to steer terminology in the field towards something more restrained.

  5. 15 hours ago, Andrew Reid said:

    Here's how I think AI will develop.

    Step one is the complete obliteration of Google; they are the next Blackberry. That's not to say they won't have skin in the game, but you can be sure (as ChatGPT is showing) that as the "establishment" they will always be a step behind the cutting edge, like Nokia vs Apple. Google can't even get their established products working well. Search is a total mess. YouTube isn't fulfilling its full potential, and their smartphone business is a bit player even after all this time.

    Let's not forget that ChatGPT is built entirely on Google developed science. Google still have many of the top machine learning R&D people. It is premature to write them off. I'm absolutely certain that Google can deploy and optimize LLMs for scale in a way few other companies can. All they need is incentive, and now they also have that.

     

    re: Bard model errors

    It is certainly a failure to have this happen in a presentation, but ChatGPT makes stupid mistakes all the time. Here is a good writeup with some blatant examples by Wolfram: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/

     

  6. 5 hours ago, Andrew Reid said:

    Patent wins vs Apple, Sony...

    Did it win, though?

    Apparently Red settled with Sony after Sony countersued for infringement. And Apple's case was dismissed for what looks, at least partially, like procedural reasons (basically due to incompleteness -- the decision uses the word "unclear" multiple times in relation to Apple's rationale and literally says "In sum, Petitioner’s obviousness challenge is unclear and incomplete"). That is, Apple lost, but it is not clear that the patent won.

  7. 1 hour ago, Davide DB said:

    Is it possible that a company like Nikon went off the rails on this matter without taking into account the history of previous lawsuits (perhaps misguided? or just trying its luck?), or is Nikon in possession of information that we do not have?

    Companies can be clueless about these matters. And Nikon comes from a stills background.

    On the other hand, Nikon's patent portfolio includes tens of thousands of patents, including thousands in the US. They can probably dig into it for counter-infringements if they are forced to. I don't recall the specifics of how the Sony case was resolved, but I wouldn't be surprised if Sony did exactly this to fend off Red.

  8. It should be clear that there is nothing in Red's compression algorithm that's specifically related to video as in a "sequence of related frames". It still compresses individual frames independently of each other; it simply does so at fairly high frame rates (24+). Also, as repeatedly mentioned already, the wavelet based "visually lossless" CineformRAW compression for Bayer raw data was introduced that same year, months before Mr. Nattress had even met Mr. Jannard... If you read David Newman's blog, which is still live on the interwebs and is a great source of inside information for anyone interested, you will know that CineformRAW was started in 2004 and work on adding it to the Silicon Imaging camera started right after NAB 2005.

    Not that this matters, as Red did not patent a specific implementation. They patented a broad and general idea, which was with absolute certainty discussed by others at the same time or earlier. Which isn't Red's fault, of course. It's just a consequence of the stupidity of the patent system.

  9. 23 minutes ago, webrunner5 said:

    Maybe, just maybe, they didn't think to throw everyone under the bus with a patent. Back then everyone played fairly nice. It only took Red to f us for, I guess, the rest of our lives. Hardly something to brag about.

    The patent expires in 6 years or so, IIRC.

     

    58 minutes ago, mercer said:

    As others have pointed out... any one of these companies could create an external handle or grip as a raw module and get around the patent.

    Red used an idea, put in the R&D and made a workable product. Why should these multinational conglomerates be allowed to come in and piggyback on their work just because it was inevitable someone else might want to do it sometime?

    We'd have zero innovations if that's the way things worked. 

    Or would we?

    The guy that invented ANS, possibly the most important fundamental novelty in compression in the last 2 or 3 decades, did put it in the public domain. It is now everywhere. In every new codec worth mentioning. And in hardware like the PS5.

  10. 14 minutes ago, FHDcrew said:

    What if a camera recorded 23 fps and then used optical flow to make the extra single frame?  I think that would get around RED’s patent lol

    Yes, you can do that. You can also do more sensible things like partial debayer (e.g. Blackmagic BRAW).

     

    22 minutes ago, Andrew Reid said:

    The novelty is they applied it to 24fps RAW video in a solid state recorder/camera.

    This isn't novelty though. This is a basic example of inevitable evolution.

  11. 3 hours ago, mercer said:

    If I remember correctly, it's not as broad as stated here. It discusses the process of compression, which compression ratios they are claiming ownership of and how it will work within the confines of the camera. There's also room within the concept for others to work around.

    If you read the patents carefully, they usually describe a few possible ways of doing this or that as "claims", and then explicitly say "but not limited to these". For years I used to think Red's patents were limited to in-camera Bayer compression at ratios of 6:1 or higher, because this ratio is repeatedly mentioned as a "claim". Apparently this wasn't the case, as demonstrated by their actions against BM and others.

  12. @Andrew Reid Lossless image compression has been around for decades. Raw images are images. Cinema raw images are raw images are images. There isn't anything particularly special about raw images compression-wise. CineformRAW (introduced in 2005) is cited in Red's patents. CineformRAW is cinema raw compression. Red don't claim the invention of raw video compression, they claim putting it in cameras first.

    Red's patents mostly refer to "visually lossless", which is an entirely meaningless phrase in relation to raw. Here is a quote from one of their patents: "As used herein, the term “visually lossless” is intended to include output that, when compared side by side with original (never compressed) image data on the same display device, one of ordinary skill in the art would not be able to determine which image is the original with a reasonable degree of accuracy, based only on a visual inspection of the images." This, of course, makes no sense, because anyone of ordinary skill can increase image contrast during raw development to an extreme point where the "visually lossless" image breaks before the original does. It is a stupid marketing phrase which needs multiple additional definitions (standard observer, standard viewing conditions, standard display, standard raw processing) to make it somewhat useful. None of these are given in the patent, btw.

    A basic requirement for some tech to be patentable is that it isn't an obvious solution to a problem for someone reasonably skilled in the art. If you present someone reasonably skilled with the goal of fitting high bandwidth raw data into limited on-board storage, do you think they wouldn't ponder compression? In a world where raw video cameras exist (Dalsa) and externally recorded compressed raw video exists (SI2k)? Because that's what's patented; not any particular implementation of Red's. To play on your argument: surely big players like Apple and Sony didn't think this was patentable. There must be some basis to that. I have no knowledge of the US patent law system, but it definitely lacks common sense. So kudos to Red for capitalizing on this lack of common sense.

  13. 5 hours ago, IronFilm said:

    Yeah I worked on a Netflix show where they had the VENICE head on a gimbal for the cam op, while a grip walked closely behind with the VENICE body in a backpack. 

    But anyway, ARRI did this before Sony!! 😉 With their ARRI ALEXA M

     

    Dunno what counts as a game changer, but almost 15 years ago the SI2k Mini was winning people Academy Awards for cinematography. Incidentally, the SI2k is a camera that's relevant in this thread for other reasons. 🙂

  14. 25 minutes ago, Geoff CB said:

    Don't think this will sell well: terrible video specs for the price, bad viewfinder, and not in any way better than the A7III for photographers.

    Smaller VF magnification can be a positive for spectacle wearers, as you don't have to move your eye around the image. I've dumped otherwise great cameras before because of excessive VF magnification.

  15. 7 minutes ago, Trankilstef said:

    Here in France it is the same price (even 100 euros more for the body + kit lens), which I find insane for 2-year-old tech.

    How do you price size, though? Using the official dimensions, the A7C fits in a box with half the volume of the S5's bounding box.
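    A quick back-of-the-envelope check, for reference (a minimal sketch; the dimensions are the published spec-sheet figures, and bodies aren't perfect boxes, so treat it as indicative only):

    # Rough bounding-box volume comparison, dimensions in mm (W x H x D),
    # taken from the official spec sheets.
    a7c = (124.0, 71.1, 59.7)   # Sony A7C
    s5 = (132.6, 97.1, 81.9)    # Panasonic S5

    def volume_cc(dims):
        w, h, d = dims
        return w * h * d / 1000.0   # cubic centimeters

    print(f"A7C: {volume_cc(a7c):.0f} cc, S5: {volume_cc(s5):.0f} cc, "
          f"ratio: {volume_cc(a7c) / volume_cc(s5):.2f}")
    # -> roughly 0.5, i.e. about half the bounding-box volume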

  16. re: appeal

    The Sony NEX 6 has the most perfect size-feature balance for small hands and spectacles of all the digital cameras I've tried. The A7 series is big and heavy, particularly so after the first iteration (the main reason I still use an A7s mark I as a primary photo camera), and the EVF position is worse (for me) than on the NEX series. This camera, on the other hand... color me interested. The position of the EVF alone is an insta-win.

  17. 7 hours ago, Llaasseerr said:

    Going back to this comment. So I'm someone that wants my DR above middle grey and in this case I would be inclined to just shoot with a -2 ND and push it 2 stops in post. I don't like grainless images anyway.  Looking forward to being able to test that theory out though haha.

    This is too optimistic, I think. The A7s needed overexposure in s-log; it was barely usable at nominal ISO (and I am being generous with my wording here). With the lower s-log3 base ISO of the A7s III (640 vs 1600 on the A7s), Sony now basically make this overexposure implicit.

  18. 2 hours ago, Hangs4Fun said:

    my guess is that the footage has some overexposure in it. S-Log3 can be tricky to expose properly in mirrorless; there are none of the cool tools you have in the pro Sony cameras. If they didn't find 32% gray properly and bump up 2 stops from there, then the results we see are what I would expect. Since most people (rightfully so) shied away from using S-Log3 in 8-bit mirrorless cameras, they are rusty on how to properly expose it.

    I will bet you lunch that they got zero at middle gray and then added 2 stops from there (instead of starting at 32%).

    For determining the clip point it doesn't matter if the footage is overexposed; overexposure doesn't move the clip point, and if anything it makes the point easier to find. All you need is to locate a hard clipping area (like the sun).

     

    re: exposing

    While a digital spotmeter would be the perfect tool for exposing log, the A7s II does have "gamma assist", where you record s-log but preview an image properly tone mapped for display. The A7s III likely has this too.

    You don't really need perfect white balance in-camera when shooting a fully reversible curve like s-log3. This can be white balanced in post in a mathematically correct way, similarly to how you balance raw in post. You only need to have in-camera WB in the ballpark to maximize utilization of available tonal precision.
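    To make the "mathematically correct" part concrete, here is a minimal sketch of my own (not Sony code), using Sony's published S-Log3 formula: undo the curve to scene linear, apply per-channel gains exactly as you would for raw, then re-apply the curve. The gain values are placeholders you'd derive from a neutral patch.

    import numpy as np

    # Sony's published S-Log3 transfer function and its inverse
    # (scene-linear reflection <-> normalized code value).
    def slog3_to_linear(cv):
        cv = np.asarray(cv, dtype=np.float64)
        return np.where(
            cv >= 171.2102946929 / 1023.0,
            (10.0 ** ((cv * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01,
            (cv * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0),
        )

    def linear_to_slog3(lin):
        lin = np.asarray(lin, dtype=np.float64)
        return np.where(
            lin >= 0.01125,
            (420.0 + np.log10((lin + 0.01) / 0.19) * 261.5) / 1023.0,
            (lin * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0,
        )

    def white_balance_slog3(rgb, gains=(1.12, 1.0, 0.94)):
        """White balance S-Log3 footage in post.

        rgb: float array (..., 3) of normalized S-Log3 code values
             (assumed at or above the S-Log3 black offset).
        gains: placeholder per-channel multipliers; in practice you'd
               compute them from a neutral patch, in linear light.
        """
        lin = slog3_to_linear(rgb)        # back to scene linear
        lin = lin * np.asarray(gains)     # same math as raw white balance
        return linear_to_slog3(lin)       # back onto the S-Log3 curve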

     

  19. 11 minutes ago, Hangs4Fun said:

    What if this is an on-set exposure issue?  I think the assumption being made is that the S-Log3 footage is properly exposed.  If your goal was to protect even the sun rays, then setting exposure levels is critical.

    So my main argument here is, we need to know what the exposure settings were when this was shot to draw conclusions. Better to do tests at varying exposure levels and see if this is really the camera, the NLE, or just the recording conditions not matching the post production goals.

    The sun is so bright that you'd need significant underexposure to bring it down below clip levels (on any camera). And these images don't look underexposed to me. A clipping value of 0.87106 is still very respectable: on the s-log3 curve, this is slightly more than 6 stops above middle gray. With "metadata ISO" cameras like the Alexa, the clip point in Log-C moves up with ISOs higher than base, and down with ISOs lower than base. But on Sony A7s cameras you can't rate lower than base in s-log (well, on the A7s you can't, at least), so this is likely shot at the base s-log3 ISO of 640.

    In any case, the s-log3 curve has a nominal range of around 9 stops below mid gray (usable range obviously significantly lower), so this ties in with the boasted 15 stops of DR in video. You can think of the camera as shooting 10 - log2(1024 / (0.87*1024 - 95)) bit footage in s-log3. That is, as a 9.64 bit camera. 🙂
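    For anyone who wants to check the arithmetic, a small sketch using Sony's published S-Log3 formula (0.87106 is the clip value quoted above):

    import math

    def slog3_to_linear(cv):
        # Sony's published inverse S-Log3 (normalized code value -> scene linear).
        if cv >= 171.2102946929 / 1023.0:
            return (10.0 ** ((cv * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
        return (cv * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

    clip_cv = 0.87106                     # max S-Log3 value observed in the clip
    clip_lin = slog3_to_linear(clip_cv)   # ~12 in scene-linear terms (mid gray = 0.18)
    print(f"{clip_lin:.2f} linear, {math.log2(clip_lin / 0.18):.2f} stops above mid gray")

    # Effective bit depth: code values above ~0.87 and below the S-Log3 black
    # offset (95/1023) carry no signal, so only about 0.87*1024 - 95 of the
    # 1024 ten-bit codes are actually used.
    print(f"{10 - math.log2(1024 / (0.87 * 1024 - 95)):.2f} effective bits")   # ~9.64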

  20. On 8/17/2020 at 12:39 AM, Llaasseerr said:

    Just checking out the DR with the bike trail footage. I used the clips showing the sun, since the sensor is clipping. This can be confirmed by looking at the waveform. Not sure if this has been mentioned already, but it seems the Resolve Clip Attributes>default data level on import is incorrect.

    Maybe some others can check this. The Auto setting is mapping to Data levels, but then the max value (the sensor clipping point in this case) seems too low. Setting it to Video levels appears to correct this. The Color Range metadata tag on the clip is full range though, so I can see why it's doing this.

    The default "Full" levels max Slog3 value in the clip is 0.87106 which converts to a linear sensor clipping point of 12 when inverting the log curve. So the log max value when the sensor clips is nowhere near 1.0, and considering the Slog3 curve max linear value is 38.42, it's under utilised.

    Manually setting to Video levels, the max Slog3 value in the clip is 0.94408 which is much closer to a theoretical max of 1.0 and converts to a linear sensor clipping point of 23.203 when inverting the log curve. As a comparison, the original a7s had a max linear value of 8.43214 so this is an additional 1.5 stops - not too shabby!

    If I'm right, then Video levels shows a much fatter image on the waveform monitor.  I can now see the black levels looked a little milky on the default Full levels image, and there's no black level clipping occurring at Video levels which is a telltale sign that it's set incorrectly. Although the shadows are pushed down further and the image is punchier, it's not clipping. It will be good to check this against ProRes RAW clips when they start appearing.

    With the middle point mapped as per the specification, the camera simply lacks the highlight latitude to fill all of the available s-log3 range. Basically, it clips lower than what s-log3 can handle.

    You should still be importing as data levels: this is not a bug, it is expected. Importing as video levels simply stretches the signal; you are importing it wrong and increasing the gamma of the straight portion of the curve (it is no longer the s-log3 curve), thus throwing off any subsequent processing which relies on the curve being correct.
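    To see what the wrong setting does to the numbers, a rough sketch (assuming the video-levels interpretation rescales the 10-bit codes from the 64-940 legal range to full range, which roughly reproduces the 0.944 figure quoted above; the exact scaling Resolve applies may differ):

    def slog3_to_linear(cv):
        # Sony's published inverse S-Log3 (normalized code value -> scene linear).
        if cv >= 171.2102946929 / 1023.0:
            return (10.0 ** ((cv * 1023.0 - 420.0) / 261.5)) * 0.19 - 0.01
        return (cv * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)

    clip_cv = 0.87106                                       # correct, data-levels value
    stretched = (clip_cv * 1023.0 - 64.0) / (940.0 - 64.0)  # ~0.944 after the stretch

    print(f"data levels:  clip at {slog3_to_linear(clip_cv):.2f} linear")
    print(f"video levels: clip at {slog3_to_linear(stretched):.2f} linear")
    # The stretched value decodes to a much higher apparent clip point, but only
    # because the code values are no longer sitting on the actual S-Log3 curve.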

  21. @Lensmonkey:

    Raw is really the same as shooting film. Something that you should take into consideration is that middle gray practically never falls in the middle of the exposure range on negative film. You have tons of overexposure latitude and very little underexposure latitude, so overexposing a negative for a denser image is very common. With raw on lower end cameras it is quite the opposite: you don't really have much (if any) latitude for overexposure, because of the hard clip at sensor saturation levels, but you can often rate faster (higher ISO) and underexpose a bit. This holds provided that ISO is merely a metadata label, which is true for most cinema cameras and, looking at the chart, likely true for the Sigma up to around 1600, where some analog gain change kicks in.

  22. 53 minutes ago, kye said:

    I just rendered a Prores 4444 and a Prores 4444 XQ from Resolve and while the file sizes are much larger, they get a lower SSIM score than Prores HQ.

    Any ideas why that might happen?

    They're not radically lower, so I don't think that I've stuffed it up or that there's a technical error somewhere.  

    Your "uncompressed reference" has lost information, that the 4:4:4 codecs are taking into consideration, hence the difference.

    You should use uncompressed RGB for reference, not YUV, and certainly not 4:2:2 subsampled. Remember, 4:2:2 subsampling is a form of compression.
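    A minimal sketch of the idea (hypothetical file names; assumes a frame from each render has been exported as an RGB TIFF, so nothing gets chroma subsampled before scoring):

    # pip install imageio scikit-image
    import imageio.v3 as iio
    from skimage.metrics import structural_similarity

    ref = iio.imread("reference_uncompressed_rgb.tif")   # hypothetical file names
    test = iio.imread("prores_4444_frame.tif")

    # channel_axis=-1 scores each RGB channel and averages, keeping the
    # comparison in RGB instead of a subsampled YUV space.
    score = structural_similarity(ref, test, channel_axis=-1,
                                  data_range=float(ref.max() - ref.min()))
    print(f"SSIM vs RGB reference: {score:.4f}")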

  23. 8 hours ago, kye said:

    True.  IBIS is a pretty killer offering though, considering that if you take all of the lenses made throughout history, the percentage of those with OIS is almost zero, but IBIS gives stabilisation to every lens ever made.

    Can't argue with this, I am using manual lenses almost exclusively myself.

    On the other hand, ML does provide by far the best exposure and (manual) focusing tools available in-camera south of 10 grand, maybe more, so this offsets the lack of IBIS somewhat. I am amazed these tools haven't been matched by newer cameras 8 years later.
