Everything posted by Ilkka Nissila

  1. intoPIX's patents describe the algorithm. If it were the same as one used in a previous product, it would have been unlikely to be granted a patent. Of course, this assumes the patent office can understand the algorithms and their novelty in context, which is not necessarily the case. Given the patent text, it should be possible to implement it.
  2. A lot of people use external monitors because they allow them to see the image properly without having to look through a hole for a long time and inadvertently shaking the camera from time to time when eyebrows/forehead/glasses touch it. The recording function is useful because fast, high-capacity storage for an external recorder can be an order of magnitude cheaper than for a camera. It also reduces the likelihood of overheating, as the card inside the camera gets hot when used for longer takes at high data rates, and both it and the processor contribute to the camera's internal temperature. At events such as sports or big concerts I rarely see people use the EVF even when it is present. This is probably because it is more relaxing and easier to work with a tripod-based long-focal-length setup: you don't have to position your eye so precisely, and the bigger screen gives leeway to change posture. Wanting to do spontaneous, high-quality video is like desiring cheap intergalactic space flight. It's just not in the cards a lot of the time. 😉 I can see integral recording reducing the risk of a cable falling off and terminating the recording accidentally. But then if the camera stops or malfunctions because of overheating, the outcome could be the same or worse (a camera that needs to cool down takes more time than plugging the cable back in).
  3. Economies of scale benefit the cost of producing more units of the same design, so considering the combined volumes of Nikon and Sony, it would likely be cheaper to produce the same sensor for both Nikon and Sony cameras. But there is the brand-identity factor, and Nikon want to do their own thing; e.g. the 45 MP sensor that Nikon use in the Z8, Z9, Z7 series, and D850 is not used in any Sony camera. Nikon could be doing that to maintain their own brand identity, or because they want specific features that Sony do not want in their cameras, such as ISO 64, which was first developed for the D810 and D850. Nikon engineers interviewed by Imaging Resource felt it was the most significant thing they had achieved: a true ISO 64.

     Reportedly, this was originally implemented to allow sports photographers (e.g. in motorsports) to pan with slower shutter speeds without having to use an ND filter to get to the right shutter speed. But of course landscape and other photographers can also benefit from the larger number of photons captured (increasing color sensitivity and tonal range), as can photographers who want to use very large apertures in bright sunlight; a quick back-of-the-envelope figure for the benefit is below. In those ISO 64-capable sensors, the high-ISO PDR (photographic dynamic range) seems to have dropped slightly compared to equal ISOs on the 36 MP, base ISO 100 sensor of the D800, as well as compared to some Sony models. So there is a tradeoff Nikon wanted to make to achieve this base ISO, and it's not a clear win for general-purpose use; rather, it's useful for specific applications. I believe part of the ISO reduction was achieved by using a different color filter array (there are some published measurements on DPR and nikongear) with a flatter blue curve, maybe improving color accuracy (?).

     Anyway, this is an example of a feature that Nikon claims is their own sensor designers' achievement. Of course, no one outside of Nikon and Sony knows exactly how their partnership works, and this shouldn't really matter. Only how the cameras work for the users matters in the end.
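As a back-of-the-envelope figure (my own arithmetic, not a number from Nikon), the saturation-capacity benefit of a true ISO 64 over a base ISO of 100 is

$$\frac{100}{64} \approx 1.56 \approx 2^{0.64},$$

i.e. about two-thirds of a stop more light before clipping; and since shot-noise SNR scales with the square root of the photon count, that is roughly a $\sqrt{1.56} \approx 1.25\times$ (25%) SNR improvement in the photon-limited tones.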
  4. There is no "one" patent; it's a series of patents, and patents, or some of their claims, can be invalidated if new evidence is discovered.
  5. intoPIX's website lists the Nikon Z8's and Z9's N-RAW as using TicoRAW for stills and video (the Zf doesn't have N-RAW video but does have the corresponding stills compression options HE and HE*, which are similar to the Z8's and Z9's HE and HE*, so we can safely guess it too uses TicoRAW). Nikon's Z9, Z8, and Zf manuals state that they are "powered by intoPIX technology". The Z8 and Zf were launched in 2023, so there you have mentions "after 2022". RED's earlier lawsuit against Jinni Tech was also withdrawn when the latter used the same argument as Nikon did, with the same outcome, and Jinni Tech didn't need to purchase RED to reach it, so we can fairly safely assume Nikon's decision to purchase RED is unrelated to the lawsuit. Since Nikon's argument is that the patents are invalid, they aren't likely to sue others for infringing those invalid patents. But RED may have other patents, or aspects of patents, that Nikon may want to use. And very likely they do want to enter the high-end video camera market, since some customers won't purchase hybrids without system compatibility with higher-end video cameras.
  6. Nikon use intoPIX's TicoRAW for high-efficiency encoding of raw stills and raw video. It's a different algorithm from what RED is using. RED's patent has been suggested to be invalid anyway, as RED demonstrated the technique in a camera more than one year before applying for the patent (this was Nikon's counter-argument when RED sued them, and the case was settled outside of court, which also happened with Jinni Tech, who used a similar argument). I doubt very much that Nikon bought RED for the patents; more likely it was simply to get a foothold in the higher-end video camera market.
  7. The Mk II has subject detection available also in the wide-area L AF box, instead of only in the full-frame auto-area AF as in the Mk I. For me, limiting the search area for the subject is key to obtaining controlled and reliable results when photographing people. In the newer Zf, the AF box size and shape can be adjusted with many different options, and subject detection is available there as well. These are the most typical modes I use the cameras in, and the most useful, as they give just the right compromise between user control and automation for me. I would expect the Z6 III to also feature the same custom-area AF as the Zf (which is ahead of the Z8 and Z9 in the number of box sizes available).
  8. You need to go to the custom settings menu and the g settings (video). There is a setting where you can assign hi-res zoom to a pair of custom function buttons (such as Fn1 to zoom in and Fn2 to zoom out), and you can also adjust the zoom speed. In addition to buttons on the camera itself, the zoom can be controlled from the remote video grip that Nikon makes. The main limitation of hi-res zoom is that it restricts the AF area to a central wide area of the frame: you can't move the box off center or control how big it is, so you lose some control over autofocus. Subject detection is available, though. I guess the limitation exists because the box sizes are tied to the phase-detection sensor positions, and with the zoomed-in frame those positions would change as you zoom. But other than that, I like the feature.
  9. These things can be done. I just configured my video shooting bank A for ProRes 422 HQ 25 fps and 1/50 s SDR, and bank B for H.265 4K 50 fps 10-bit 1/100 s with N-Log, and I can now switch between banks by pressing and holding Fn3 (which I programmed to act as the shooting-bank selection button from the video custom settings custom controls menu) and rotating the main command dial. Very handy.
  10. The EN-EL15b has a lower capacity than the EN-EL15c (and the difference is significant). Recording time may also depend on which codec you use; some are more processing-intensive. Django: you can use shooting menu settings banks (A-D) for video, and bank selection can be assigned to Fn1, Fn2, Fn3, or the vertical Fn button, for example. I am not a settings-bank user, so I would have to check whether you can select the record file type, resolution, and frame rate there, but I would guess that you can.
  11. What is a C1-3 dial? If you mean custom settings like U1-U3 on some Nikons, where the mode dial has customisable positions that remember most settings, then no, neither the Z8 nor the Z9 has those. But there are photo shooting banks and custom settings banks.
  12. In my experience it isn't bad. A single Z8 battery (EN-EL15c) lasts about 2 hours of continuous video recording, or 3-4 hours of active still photography (about 2000 shots). I often have the MB-N12 mounted with two EN-EL15cs inside, and that basically gets me through most events, with the exception of all-day sports events that can last 8-10 hours; in those cases a third battery may be needed, or recharging during the day. If you need to do more than 2 hours of video recording, or 2000 shots / 4 hours of active shooting, then you do need two batteries for the Z8. For winter weather conditions I'm not quite sure yet, but with the dual-battery setup in the vertical grip I just don't run into situations where battery capacity becomes the limiting factor. The Z9 battery is no doubt better, and that camera has some other advantages, but with it you lose the option of making the camera more compact by taking the vertical grip off. It's a personal choice of what you prioritise.
  13. It doesn't quite work like that. If you shoot a sequence at 120 fps and 1/240 s with the GS camera and compare the results after processing (taking the frames and stacking them to reduce noise) to a native RS camera at 24 fps and 1/48 s, the latter will still have more dynamic range (unless an ND filter was needed to reach the slow shutter speed in too-bright light); a sketch of the arithmetic is below. Phone cameras get away with a lot of stuff, including somehow merging images by algorithms even in the presence of moving parts in the frame, only because the images are viewed as a tiny part of the human visual field, so the imperfect guesses by the AI don't bother us as much as they would on large screens. The images often look unnatural and fake to those who are experienced in looking at actual photographs, though.
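A minimal sketch of why stacking doesn't close the gap, using a toy noise model (photon shot noise plus per-readout Gaussian read noise; the electron counts are illustrative assumptions, not measured values):

```python
import math

READ_NOISE_E = 3.0   # read noise per readout, in electrons (illustrative)
N_FRAMES = 5         # five 1/240 s frames cover the same 1/48 s window

for signal_e in (1000.0, 50.0):  # midtone vs deep shadow, in photoelectrons
    # Case A: one native 1/48 s exposure (24 fps camera, single readout).
    snr_single = signal_e / math.sqrt(signal_e + READ_NOISE_E**2)
    # Case B: five 1/240 s exposures stacked; same total light collected,
    # but read-noise variance is added at every readout.
    snr_stacked = signal_e / math.sqrt(signal_e + N_FRAMES * READ_NOISE_E**2)
    print(f"signal {signal_e:6.0f} e-: single SNR {snr_single:5.2f}, "
          f"stacked SNR {snr_stacked:5.2f}")
```

The photon term is identical in both cases; the stack simply carries five times the read-noise variance, and the penalty is largest in the deep shadows, which is exactly where dynamic range is measured.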
  14. A mechanical (focal-plane) shutter produces some rolling shutter of its own. You can see it, e.g., when photographing a propeller plane or helicopter at fast shutter speeds. You can also see it when there is artificial light that flickers; in the past the advice was to use a slow shutter speed to avoid the banding from fluorescent lights. Today the lights are often LED-based, but there are still circumstances where lights or screens show banding even when photographing with the mechanical shutter. A GS eliminates this problem. I would think that photojournalists and sports and music photographers would like it, but it's a pricey camera for sure. Golf, yes; quite likely even the fastest rolling shutter would show distortion there, and the club swings fast enough that you can get some interesting timings at 120 fps or 240 fps (a rough skew estimate is sketched below). But those are rather special applications. I guess specialists who work on these kinds of sports with very fast motion would get it.
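To put a rough number on the golf example (both figures below are my assumptions: a typical focal-plane curtain transit of a couple of milliseconds, and a driver head speed of about 45 m/s near impact):

```python
# Rough estimate of focal-plane-shutter skew (illustrative numbers only).
CURTAIN_TRANSIT_S = 0.0025    # ~2.5 ms for the slit to sweep the frame
CLUB_HEAD_SPEED_MS = 45.0     # driver head speed near impact, ~45 m/s

# Parts of the club exposed at different times during the sweep are
# displaced relative to each other by the distance the head travels.
skew_m = CLUB_HEAD_SPEED_MS * CURTAIN_TRANSIT_S
print(f"club head moves {skew_m * 100:.1f} cm during the curtain transit")
# ~11 cm of displacement across the sweep: enough to visibly bend a
# straight shaft, no matter how short the exposure slit itself is.
```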
  15. But how much would you be willing to pay extra for the larger sensor and lenses that cover it? After all, many lenses have masks that minimize flare but also limit the image coverage to a rectangle. Let's say lenses for a square 36 mm × 36 mm sensor had 1/2-stop smaller apertures and 50% higher prices as a result, the camera body was also 50% more expensive, flash sync speed was 1/125 s, and sensor read time 50% longer (the image-circle arithmetic behind such penalties is sketched below). Would you still want it, and would you expect everyone to be fine with it, so that mass production would be realistic and square sensors would replace rectangular ones across the market?
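For concreteness, the image-circle requirement behind those made-up cost figures:

$$d_{36\times24} = \sqrt{36^2 + 24^2} \approx 43.3\ \text{mm}, \qquad d_{36\times36} = 36\sqrt{2} \approx 50.9\ \text{mm},$$

so the lens would have to illuminate a circle about 18% larger in diameter (roughly 38% larger in area), which is the kind of increase that drives penalties of that order in size, weight, and price.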
  16. Hasselblad has the advantage of central (leaf) shutters in each lens, so you can flash-sync normally at all speeds (Fuji is limited to 1/125 s and slower, which is very slow). For daylight + flash shoots this is very useful; one can use much smaller flashes to balance with bright daylight. The Hasselblad is also very well designed ergonomically, and compact, with some of the lenses being quite small. Fuji has focus drift: https://blog.kasson.com/gfx-100s/focus-drift-with-the-110-2-gf-on-the-fujifilm-gfx-100s/ https://blog.kasson.com/the-last-word/fujifilm-gfx-af-accuracy/
  17. I think the problem originates from photography (and video) originally being quite difficult to do technically, so that when there was a really good photo or short film, it was viewed with excitement, and people gathered around online to celebrate such things and to try to learn the craft themselves. Online forums were quite active. Eventually the cameras got better, easier to use, and cheaper, hundreds of millions of people bought them, and making a decent photograph was no longer unusual, a luxury, or a rarity. Thus it became progressively more difficult to make a living from it, or to be noticed for your images (whether amateur or professional). Forum activity reflects this: if it is no longer possible to make a difference with photos or videos, fewer people will enjoy the pursuit, or chatting about it.

     Of course, it is still possible, but there is such a quantity of material readily available for consumption that people no longer stop to watch it. And they don't value it, because they don't see it as special. Even if a photo is special, they are looking at it on a tiny screen the size of their hand, at reading distance, and that's really small. If you try to come back to it, chances are you will not find it again, as the feed has changed with new material. Rarely is the creator of the photograph even mentioned online. What's the point then? I think rather than give information away for free, a lot of people now give workshops and try to commercialize their knowledge. In the beginning of the internet, people were excited about sharing, and it was not about making money.

     Social media tends to show people what they've liked before, so all the content gets likes and there is no space for criticism; if you do criticize, your comment probably gets deleted, you might be unfriended, or you get a fierce rebuttal. No one bothers to read through the discourse. Forums are full of discussions where disagreements and agreements are on a more equal footing, but on social media it's all about likes and agreeing with the opinion of the poster. The algorithms ensure that you basically only see things similar to what you have "liked" before. And if you do give a more neutral or negative comment, people get offended, as you're clearly not subscribing to the same bubble that they are. There is no room for genuine discussion on social media.

     That people don't read any more is a serious problem. It means they aren't being informed, and they probably aren't thinking much, either. Finding facts in videos is very time-consuming; that medium is better suited to disseminating stories, emotions, and experiences, whereas text and illustrations are usually better at disseminating facts. All these media have their place and should be supported. How this happens, I don't know. Personally, I enjoy watching videos from time to time (movies, documentaries, etc.) but find that it is often faster to find the information I'm looking for outside the video medium. I would be surprised if young people can get through life without reading.
  18. It's easier to follow when one knows the story pretty well from books etc. I guess the different timelines are there to keep it a mystery who was behind Oppenheimer's loss of reputation and security clearance after he opposed the development of the hydrogen bomb and wanted disarmament and international oversight of nuclear weapons. That is revealed towards the end (at least one version of it). If the film had progressed linearly in time, it would not have been possible to make the story as exciting, I guess, as they'd have had to show the behind-the-scenes events as time went by.
  19. I have had the Z8 since late May and have not experienced any heat-related warnings or other signs of overheating. I live in Finland, so our weather is not like Florida, but we've had 28 °C days. When shooting video, I have used mostly ProRes 422 HQ in 4K/25, 4K/50, or 1080p/50, but I have also tested 4K/120 H.265, and they all worked fine indoors and outdoors. The most I shot in a day was about 200 GB of video onto CFexpress Type B (325 GB ProGrade Digital Cobalt). I've also used a Sony 128 GB CFexpress, and it didn't overheat in the time required to fill the card in 4K/25 ProRes 422 HQ, but the card was very warm to the touch afterwards. Tests published on YouTube by Ricci Chera (who works for Nikon School UK, so keep that in mind), Gerald Undone, and others generally found that the Z8 can overheat in the most data-intensive video modes in about 30 min when using memory cards that tend to run hot, but with the right cards (Delkin Black is reported to run the coolest; ProGrade Digital Cobalt is also good) the camera typically runs out of battery (2 hours on one charge of the EN-EL15c) before overheating, according to those reports. Both Delkin Black and ProGrade Cobalt cards are sometimes significantly discounted at B&H, so if getting a Z8, one may want to look out for those deals. My own experience confirms that the ProGrade Cobalt seems to run cooler than the Sony, and I'm happy with the purchase, although these cards aren't exactly cheap. For longer recording times I would likely go with Delkin Power, as those are available in larger capacities (a rough capacity-vs-runtime estimate is below). I love ProRes 422 HQ; such amazing detail and color. I have no experience shooting 8K.
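As a sanity check on those figures (the bitrate is my approximation for ProRes 422 HQ at UHD/25p; treat every number here as ballpark):

```python
# Card-capacity sanity check; bitrate is an approximate figure for
# ProRes 422 HQ at 3840x2160/25p, so all results are rough estimates.
BITRATE_MBIT_S = 590.0    # ~ProRes 422 HQ UHD 25p
CARD_GB = 325.0           # e.g. the ProGrade Cobalt mentioned above

card_bits = CARD_GB * 8e9                        # decimal GB -> bits
minutes = card_bits / (BITRATE_MBIT_S * 1e6) / 60
print(f"{CARD_GB:.0f} GB at {BITRATE_MBIT_S:.0f} Mbit/s ~ {minutes:.0f} min")
```

That comes to roughly 73 minutes to fill the 325 GB card, consistent with running out of card well before the roughly two-hour battery runtime.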
  20. IBIS with non-CPU lenses works if you type the focal length and maximum aperture into the list of non-CPU lenses that you use. This includes mechanical-only lenses that have no electrical contacts. https://onlinemanual.nikonimglib.com/z8/en/sum_non-cpu_lens_data_guid-fac0444d-3965-c25a-4c18-84574cb10167_285.html After entering the data for your non-CPU lenses, you can select the one you are actually using from the list and its data is recalled; this can be programmed to a custom function button. The AF speed in video recording is adjustable: https://onlinemanual.nikonimglib.com/z8/en/csmg_af_speed_guid-a4c4cd1b-0ad3-5c90-eac7-5c2fb524a4b5_247.html and so is the tracking sensitivity: https://onlinemanual.nikonimglib.com/z8/en/csmg_af_tracking_sensitivity_guid-54657d66-753f-7fcb-b56b-2b78446094a0_248.html For me the video AF has been excellent on the Z8.
  21. This has to do with the creation of exposure latitude in N-Log mode by exposing this ISO 64 sensor like an ISO 800 sensor. 😉 From the perspective of stills shooting, this is the equivalent of underexposing the image by about 3.6 stops and then pushing it brighter in post-processing (arithmetic below). So yes, there will be noise. Even in video, you can shoot in SDR mode and use the Z8's base ISO of 64 and not get much noise, but then you don't get the kind of highlight recovery that you get in log. I think it would not be quite correct to say that these cameras are noisy, as the noise level is not that far from the theoretical limits set by physics. While a little less noise is recorded by cameras with better cooling, dual gain, etc., it's not a night-and-day difference.

     When watching the 4K UHD Blu-ray of Rogue One at 60 Mbit/s, what is evident is that there is a lot of visible noise that isn't as obvious when streaming. (What's great is the audio quality compared to streaming.) I'm surprised how much noise there is in the image, and that they chose to distribute it like that, given the large sensor used to shoot it (Arri Alexa 65). I suspect this has to do with people in the cinema field being used to the grain of color negative film and not going for a straight clean digital image, due to aesthetic preferences; had they shot at a lower ISO, I would think the result could be squeaky clean. (I have seen videos suggesting that some colorists add noise on purpose, and there are even still photographers who do that, to mimic film grain and to hide some imperfections. I'm not convinced that adding noise has more than a transitional benefit between generations of people while technology is changing quickly.)

     There are cinema cameras that record the image with two gains to improve dynamic range and merge the data into the video file. There are also other sensor-level approaches to increasing dynamic range, such as Fujifilm's Super CCD some years back, but that requires a different photodiode layout, and it seems they're not continuing that line of product development. What is possible, of course, is to take advantage of temporal redundancy and reduce noise based on the similarity of consecutive frames. But really, the easiest way to reduce noise is to give up exposure headroom in the highlights and record more light, which is also what you are saying.

     I find it curious how cinema camera manufacturers define ISO differently from still cameras. I get that displays can have a larger dynamic range than paper, and this could be part of the reason: still cameras are designed to expose by giving as much light as possible to the sensor while still being able to render highlights up to white on photographic print paper. Ink on paper has a dynamic range of roughly 200:1, while high-dynamic-range displays reach 20,000:1 or higher. Thus there may be a need to consider the display when deciding on an exposure, and it also means that images made for HDR displays would potentially be noisier in parts of the frame, because those extreme highlights have to be accommodated in the shooting phase. In stills photography, HDR images can be made by exposing two or more frames at different exposures, but this wouldn't exactly work for video; what can be done is to record two sets of data with different gains, but the rewards obtainable from this approach are more limited.
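The rating arithmetic, for concreteness (my own numbers, using the ISO pair and contrast ratios mentioned above):

$$\log_2\frac{800}{64} = \log_2 12.5 \approx 3.6\ \text{stops},$$

and for the display media,

$$\log_2 200 \approx 7.6\ \text{stops}, \qquad \log_2 20000 \approx 14.3\ \text{stops},$$

which is why an exposure targeted at an HDR display has to reserve several more stops of highlight headroom than one targeted at a print.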
  22. When using hi-res zoom at, e.g., the 2X setting, the area that each original sensor pixel occupies in the final (zoomed-in) image increases by a factor of 4, which makes the noise more visible than when not zooming: each pixel of a 4K image is normally (without hi-res zoom) made by averaging four pixels of the 8K sensor, and those resampled pixels are less noisy (see the arithmetic below). However, I still consider hi-res zoom a valuable and even exciting feature. For video, 4K is arguably more than enough for almost all practical purposes, and high-quality 4K video already takes a lot of storage space; shooting in 8K would significantly increase storage needs without necessarily adding anything to the final presentation. Using the 8K sensor in this way to solve a practical problem (the current lack of powered zooms in the Nikon system) is quite sensible, even if it is limited to 1X-2X zooming.

     A few limitations should be mentioned. First, normal 4K video is created by oversampling from 8K, and when zooming in, the degree of oversampling is reduced; in my opinion the resulting quality is still very good. Second, for hand-held footage, only optical (with a VR lens) and sensor-based VR are available together with hi-res zoom; electronic VR is disabled. Finally, in hi-res zoom mode the focus area is not displayed in the viewfinder or on the LCD, and it is automatically set to wide-area L, presumably in the center of the frame. One can have subject detection on while using hi-res zoom, but one can't see exactly where the limits of the AF area are, and one can't move it about the frame. So there are a few caveats in the current implementation; I would like to see Nikon display the focus area even if it can't be moved. Still, I quite like the feature and the ability to zoom in and out more smoothly than with the mechanical optical zoom control on the lens. On the Z9, the new firmware 4.0 gives this feature more options for the rate of zooming.

     As for your last point, I understand that Sony has more advanced interpolation available to preserve detail when zooming in digitally. Adobe and others offer this kind of feature in post-processing software, so I would guess the main benefit of doing it in camera is having the final result immediately available; for sports photography, I suppose that is a priority. However, digital zooming always results in the amplification of noise.
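The noise bookkeeping behind the first point, assuming independent pixels with equal noise $\sigma_{8K}$: averaging four 8K photosites into one 4K output pixel gives

$$\sigma_{4K} = \frac{\sigma_{8K}}{\sqrt{4}} = \frac{\sigma_{8K}}{2},$$

so at the 2X hi-res zoom setting, where each output pixel comes from a single photosite, per-pixel noise is about twice that of the oversampled case, i.e. roughly one stop worse.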
  23. The image achieved by cropping the central 4K out of 8K (a 2X crop) in order to get a 400 mm field of view from a 200 mm f/2.8 doesn't look like the image from a 400 mm f/2.8 lens (except for the field of view). It will look like a 4K full-frame image from a 400 mm f/5.6 lens instead, in terms of both image noise and depth of field (see the equivalence arithmetic below). You cannot magically turn a 70-200/2.8 into the practical equivalent of a 400/2.8 simply by taking the center of the frame as a crop. Now, the digital zooming offered by the Z8 is handy and practical, as it offers the equivalent of a motorized zoom, which Nikon does not offer on the lens side, at least not yet. By developing the firmware further, Nikon will be able to offer variable digital zooming rates, and image detail should hold up quite well, thanks to the lens's optical quality and the fact that an 8K sensor is used to implement it. However, if you then actually take a 400/2.8 and compare its images to ones obtained with the 70-200/2.8 and 2X high-resolution zoom, they will be very different. Nonetheless, I think this is a smart use of the high-resolution sensor, as long as the lens itself can actually resolve that kind of detail and focus can be maintained at the higher resolution level. If shooting in low light, however, one quickly sees why it's not a real 400 mm f/2.8. IMO the high-resolution zoom is best used for tripod-based operation, as in the zoomed-in state the hand-held stabilization is not as good as it would be without zooming in, and electronic VR is not available when using the feature. Of course, for wildlife use a tripod used to be considered a given, but nowadays...
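The equivalence arithmetic (standard crop-factor reasoning, nothing Nikon-specific):

$$f_{eq} = 2 \times 200\ \text{mm} = 400\ \text{mm}, \qquad N_{eq} = 2 \times f/2.8 = f/5.6,$$

because the entrance pupil stays at $200/2.8 \approx 71\ \text{mm}$, while a real 400/2.8 has a $\approx 143\ \text{mm}$ pupil that gathers four times the light (two stops) for the same framing and produces correspondingly shallower depth of field.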
  24. Nikon has a 24-120mm f/4 native Z-mount lens that is generally well-regarded. Oh, the Tamron is an f/2-f/2.8. That's pretty neat.
  25. I'm not a lawyer, but I guess that if a court had decided the patent was invalid, RED would have lost all income from license agreements based on that patent. Since they pulled the lawsuit, the patent hasn't been officially invalidated, so the other companies cannot automatically stop paying license fees they have agreed to pay.