
Ilkka Nissila

Members
  • Posts: 47
  • Joined

  • Last visited


Ilkka Nissila's Achievements

Member (2/5)

Reputation: 25

  1. intoPIX's patents describe the algorithm. If it were the same as one used in an earlier product, it would be unlikely to have been granted a patent. Of course, this assumes the patent office can understand the algorithms and their novelty in context, which is not necessarily the case. Given the patent text, it should be possible to implement it.
  2. A lot of people use external monitors because they make it possible to see the image properly without looking through a hole for long stretches and inadvertently shaking the camera when eyebrows, forehead or glasses touch it. The recording function is useful because fast, high-capacity storage for an external recorder can be an order of magnitude cheaper than cards for the camera. It also reduces the likelihood of overheating: the card inside the camera gets hot during longer takes at high data rates, and together with the processor it contributes to the camera's overall internal temperature. At events such as sports or big concerts I rarely see people use the EVF even when it is present, probably because it is more relaxing and easier to work with a tripod-based long-focal-length setup: you don't have to position your eye so precisely, and the bigger screen gives leeway to change posture. Wanting to do spontaneous, high-quality video is like desiring cheap intergalactic space flight. It's just not in the cards a lot of the time. 😉 I can see internal recording reducing the risk of a cable falling off and terminating the recording accidentally. But if the camera then stops or malfunctions because of overheating, the outcome could be the same or worse (waiting for the camera to cool down takes longer than plugging the cable back in).
  3. Economies of scale would reduce the cost of producing more copies of the same design, so considering the combined volumes of Nikon and Sony, it would likely be cheaper to produce the same sensor for both Nikon and Sony cameras. But there is the brand-identity aspect, and Nikon want to do their own thing: the 45 MP sensor Nikon use in the Z8, Z9, Z7 series and D850, for example, is not used in any Sony camera. Nikon could be doing that to maintain their own brand identity, or because they want specific features that Sony do not want in their cameras, such as the ISO 64 first developed for the D810 and D850; Nikon engineers interviewed by Imaging Resource felt a true ISO 64 was the most significant thing they had achieved. Reportedly it was originally implemented to let sports photographers (e.g. in motorsports) pan with slower shutter speeds without needing an ND filter to reach the right shutter speed. But landscape and other photographers can also benefit from the larger number of photons captured (increasing colour sensitivity and tonal range), as can photographers who want to use very large apertures in bright sunlight (a rough worked illustration of this benefit follows after this list). In those ISO 64 capable sensors the high-ISO PDR seems to have dropped slightly compared to the same ISOs on the 36 MP sensor with a base ISO of 100 (D800), as well as compared to some Sony models. So there is a tradeoff Nikon chose to make to reach this base ISO; it's not a clear win for general-purpose use but is useful for specific applications. I believe part of the ISO reduction was achieved with a different colour filter array (there are some published measurements on DPR and nikongear) with a flatter blue curve, perhaps improving colour accuracy (?). Anyway, this is an example of a feature Nikon claims as their sensor designers' achievement. Of course, no one outside Nikon and Sony knows exactly how the partnership works, and it shouldn't really matter. Only how the cameras work for the users matters in the end.
  4. There is no "one" patent; it's a series of patents, and patents or some of their claims can be invalidated if new evidence is discovered.
  5. intoPIX's web site lists the Nikon Z8 and Z9 as using TicoRAW for stills and for N-RAW video (the Zf doesn't have N-RAW video but does have the corresponding stills compression options HE and HE*, which are similar to the Z8's and Z9's HE and HE*, so we can safely guess it too uses TicoRAW). Nikon's Z9, Z8 and Zf manuals state that they are "powered by intoPIX technology". The Z8 and Zf were launched in 2023, so there you have mentions from after 2022. RED's earlier lawsuit against Jinni Tech was also withdrawn when the latter used the same argument as Nikon, with the same outcome, and Jinni Tech didn't need to purchase RED to reach it, so we can fairly safely assume Nikon's decision to purchase RED is unrelated to the lawsuit. Since Nikon's argument was that the patents are invalid, they aren't likely to sue others for infringing them. But RED may have other patents, or aspects of patents, that Nikon may want to use. And very likely they do want to enter the high-end video camera market, since some customers won't purchase hybrids without system compatibility with higher-end video cameras.
  6. Nikon use intoPIX's TicoRAW for high-efficiency encoding of raw stills and raw video. It's a different algorithm from what RED uses. RED's patent has been suggested to be invalid anyway, as RED demonstrated the technique in a camera more than one year before applying for the patent (this was Nikon's counter-argument when RED sued them, and the case was settled out of court; the same happened with Jinni Tech, who used a similar argument). I doubt very much that Nikon bought RED for the patents; more likely it was simply to get a foothold in the higher-end video camera market.
  7. The Mk II has subject detection available also in the wide-area (L) AF box, instead of only in full-frame auto-area AF as on the Mk I. For me, limiting the search area for the subject is key to obtaining controlled and reliable results when photographing people. In the newer Zf, the AF box size and shape can be adjusted with many different options, and subject detection is available there as well. These are the most typical modes I use the cameras in, and the most useful, as they give just the right compromise between user control and automation for me. I would expect the Z6 III to also feature the same custom-area AF as the Zf (which is ahead of the Z8 and Z9 in the number of box sizes available).
  8. You need to go to the custom settings menu and the g settings (video). There is a setting where you can assign hi-res zoom to a pair of custom function buttons (such as Fn1 to zoom in and Fn2 to zoom out). You can also adjust the zoom speed. In addition to buttons on the camera itself, it's possible to control the zoom from the remote video grip that Nikon makes. The main limitation of hi-res zoom is that it restricts the AF area to a central wide area of the frame; you can't move the box off center or control how big it is, so you lose some control over the autofocus. Subject detection is available, though. I guess the limitation exists because the AF box sizes are tied to the phase-detection sensor positions, and those positions relative to the zoomed-in frame would change as you zoom. But other than that I like the feature.
  9. These things can be done. I just configured my video shooting bank A for ProRes 422 HQ, 25 fps, 1/50 s, SDR, and bank B for H.265 4K, 50 fps, 10-bit, 1/100 s with N-Log, and I can now switch between banks by pressing and holding Fn3 (which I programmed to act as the shooting bank selection button in the video custom settings' custom controls menu) and rotating the main command dial. Very handy.
  10. The b type has lower capacity than the c type (and the difference is significant). Recording time may also depend on which codec you use; some are more processing-intensive. Django: You can use shooting menu settings banks (A-D) for video, and bank selection can be assigned to Fn1, Fn2, Fn3, or Fn vertical, for example. I am not a settings bank user, so I would have to check whether you can select the record file type, resolution and frame rate there, but I would guess that you can.
  11. What is a C1-3 dial? If you mean custom settings like U1-U3 on some Nikons, where the mode dial has customisable positions that remember most settings, then no, neither the Z8 nor the Z9 has those. But there are photo shooting banks and custom settings banks.
  12. In my experience it isn't bad. A single Z8 battery (EN-EL15c) lasts about 2 hours of continuous video recording or 3-4 hours of active still photography (about 2000 shots). I often have the MB-N12 mounted with two EN-EL15cs inside, and that basically gets me through most events, with the exception of all-day sports events that can last 8-10 hours - in those cases a third battery may be needed, or recharging during the day. If you need more than 2 hours of video recording, or 2000 shots / 4 hours of active shooting, then you do need two batteries for the Z8. For winter conditions I'm not quite sure yet, but with the dual-battery setup in the vertical grip I just don't run into situations where battery capacity becomes a limiting factor. The Z9 battery is no doubt better, and the camera has some other advantages, but with it you lose the option of making the camera more compact by taking the vertical grip off. It's just a personal choice of what you prioritise.
  13. It doesn't quite work like that. If you shoot a sequence at 120 fps and 1/240 s with the GS camera and compare the results after processing (merging the frames to reduce noise) to a native RS camera at 24 fps and 1/48 s, the latter will still have more dynamic range (unless an ND filter was needed to reach the slow shutter speed in very bright light); see the rough noise sketch after this list. Phone cameras get away with a lot, including algorithmically merging images even when there are moving parts in the scene, only because the images are viewed as a tiny part of the human visual field, so the AI's imperfect guesses don't bother us as much as they would on large screens. The images often look unnatural and fake to those who are experienced in looking at actual photographs, though.
  14. A mechanical (focal-plane) shutter produces some rolling shutter too. You can see it, for example, when photographing a propeller plane or helicopter at fast shutter speeds. You can also see it when artificial light flickers: in the past the advice was to use a slow shutter speed to avoid banding from fluorescent lights; today the lights are often LED-based, but there are still circumstances where lights or screens show banding even with a mechanical shutter. The GS eliminates this problem. I would think that photojournalists, sports and music photographers would like it, but it's a pricey camera for sure. Golf, yes - quite likely even the fastest rolling shutter would show distortion there, and the club swings fast enough that you can get some interesting timings at 120 fps or 240 fps. But those are rather special applications. I guess specialists who work on these kinds of sports with very fast motion would get it.
  15. But how much would you be willing to pay extra for the larger sensor and the lenses that cover it? After all, many lenses have masks that minimize flare but also limit the image coverage to a rectangle. Let's say lenses for a square 36 mm × 36 mm sensor would have 1/2 stop smaller apertures and a 50% higher price as a result, the camera body would also be 50% more expensive, flash sync speed would be 1/125 s, and the sensor read time would be 50% longer (a rough calculation of the required image circle follows after this list). Would you still want it, and would you expect everyone else to be fine with it, so that mass production would be realistic and square sensors would replace rectangular ones across the market?
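A quick back-of-the-envelope illustration of the ISO 64 benefit mentioned in item 3, under the simplifying assumption that the sensor is exposed to saturation at base ISO and the midtones are shot-noise limited (illustrative numbers only, not measurements of any particular sensor):

```python
import math

# Illustrative only: compares base ISO 64 vs base ISO 100 assuming exposure
# to the right at base ISO and shot-noise-limited midtones.
base_iso_old, base_iso_new = 100, 64

exposure_gain = base_iso_old / base_iso_new   # extra light collected at the lower base ISO
stops = math.log2(exposure_gain)              # the same gain expressed in stops
snr_gain = math.sqrt(exposure_gain)           # shot-noise SNR scales with sqrt(signal)

print(f"Extra exposure at saturation: {exposure_gain:.2f}x ({stops:.2f} stops)")
print(f"Approximate shot-noise SNR gain: {snr_gain:.2f}x")
```

Running this gives roughly 1.56x more light (about 0.64 stop) and about a 1.25x shot-noise SNR gain, which is the order of magnitude of benefit being discussed.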
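On item 13, here is a minimal sketch of one reason merging short global-shutter frames doesn't simply buy back the dynamic range of a single longer exposure: read noise is paid once per readout, so for the same total exposure time the stacked shadows end up noisier. The electron counts below are made up for illustration, and the sketch ignores the GS sensor's own per-frame DR penalty and any motion-merge artifacts.

```python
import math

# Made-up illustrative values (electrons), not measurements of any real camera.
shadow_signal = 100.0    # total shadow signal collected over 1/48 s
read_noise = 5.0         # RMS read noise per readout
frames = 5               # five 1/240 s frames cover the same total time as one 1/48 s frame

# Single 1/48 s exposure: shot noise plus one dose of read noise.
noise_single = math.sqrt(shadow_signal + read_noise**2)

# Stack of five 1/240 s frames: same total signal, but read noise added per frame.
noise_stacked = math.sqrt(shadow_signal + frames * read_noise**2)

print(f"Shadow SNR, one 1/48 s frame : {shadow_signal / noise_single:.1f}")
print(f"Shadow SNR, 5 x 1/240 s stack: {shadow_signal / noise_stacked:.1f}")
# The stacked shadows come out noisier, which is what erodes usable dynamic
# range at the dark end even before alignment and ghosting issues are considered.
```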
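And to put rough numbers behind the guesses in item 15 (a quick geometric calculation, not a claim about any actual lens or camera design): a square 36 × 36 mm sensor has 50% more area than 36 × 24 mm and needs about an 18% larger image circle, which is roughly where the assumed cost, size, sync and readout penalties would come from.

```python
import math

def image_circle_mm(width_mm: float, height_mm: float) -> float:
    """Minimum image-circle diameter a lens must cover for a given sensor size."""
    return math.hypot(width_mm, height_mm)

rect = image_circle_mm(36, 24)     # standard full-frame rectangle
square = image_circle_mm(36, 36)   # hypothetical square sensor

print(f"36 x 24 mm sensor needs a {rect:.1f} mm image circle")
print(f"36 x 36 mm sensor needs a {square:.1f} mm image circle")
print(f"Image-circle diameter increase: {100 * (square / rect - 1):.0f}%")
print(f"Sensor area increase: {100 * (36 * 36 / (36 * 24) - 1):.0f}%")
# 50% more rows to read is also where the guessed 50% longer read time comes from.
```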