
Ilkka Nissila
Everything posted by Ilkka Nissila
-
Nikon uses intoPIX's TicoRAW for high-efficiency encoding of raw stills and raw video; it's a different algorithm from what RED uses. RED's patent has been suggested to be invalid anyway, as RED demonstrated the technique in a camera more than a year before applying for the patent. This was Nikon's counter-argument when RED sued them, and the case was settled out of court (as also happened with Jinni Tech, who used a similar argument). I doubt very much that Nikon bought RED for the patents; more likely it was simply to get a foothold in the higher-end video camera market.
-
The Mk II has subject detection available also in the wide-area L AF box, instead of only in the full-frame auto-area AF as on the Mk I. For me, limiting the search area for the subject is key to obtaining controlled and reliable results when photographing people. In the newer Zf, the AF box size and shape can be adjusted with many different options, and subject detection is available there as well. These are the modes I use the cameras in most often, and the most useful ones, as they strike just the right compromise between user control and automation for me. I would expect the Z6 III to also feature the same custom-area AF as the Zf (which is ahead of the Z8 and Z9 in the number of box sizes available).
-
You need to go to the custom settings menu and the g (video) settings. There is a setting where you can assign hi-res zoom to a pair of custom function buttons (such as Fn1 to zoom in and Fn2 to zoom out). You can also adjust the zoom speed. In addition to buttons on the camera itself, it's possible to control the zoom from the remote video grip that Nikon makes. The main limitation of hi-res zoom is that it restricts the AF area to a central wide area of the frame: you can't move the box off center or control how big it is, so you lose some control over the autofocus. Subject detection is still available, though. I would guess the limitation exists because the box sizes are tied to the phase-detection sensor positions, and those positions relative to the zoomed-in frame would change as you zoom. Other than that, I like the feature.
-
These things can be done. I just configured my video shooting bank A for ProRes 422 HQ 25 fps and 1/50s SDR, and bank B for H.265 4K 50 fps 10-bit 1/100s with N-Log, and I can now switch between banks by pressing and holding Fn3 (which I programmed to act as the shooting-bank selection button from the video custom settings' custom controls menu) and rotating the main command dial. Very handy.
-
The b type has lower capacity than the c (and the difference is significant). Recording time may also depend on which codec you use; some are more processing-intensive. Django: You can use shooting menu settings banks (A-D) for video, and bank selection can be assigned to Fn1, Fn2, Fn3, or the vertical Fn button, for example. I am not a settings-bank user, so I would have to check whether you can select the recording file type, resolution, and frame rate there, but I would guess that you can.
-
What is a C1-3 dial? If you mean custom settings like U1-U3 on some Nikons, where the mode dial has customisable options which remember most settings, then no, neither the Z8 nor the Z9 have those. But there are photo shooting banks and custom settings banks.
-
In my experience it isn't bad. A single Z8 battery (EN-EL15c) lasts about 2 hours of continuous video recording, or 3-4 hours of active still photography (about 2000 shots). I often have the MB-N12 mounted with two EN-EL15cs inside, and that basically gets me through most events, with the exception of all-day sports events that can last 8-10 hours; in those cases a third battery may be needed, or recharging during the day. If you need more than 2 hours of video recording, or 2000 shots / 4 hours of active shooting, then you do need two batteries for the Z8. For winter conditions I'm not quite sure yet, but with the dual-battery setup in the vertical grip I just don't run into situations where battery capacity becomes the limiting factor. The Z9 battery is no doubt better, and the camera has some other advantages, but with it you lose the option of a more compact camera by taking the vertical grip off. It's just a personal choice of what you prioritise.
-
It doesn't quite work like that. If you shoot a sequence at 120 fps and 1/240s with the GS camera and, after processing (taking the images and merging them to reduce noise), compare the results to a native RS camera at 24 fps and 1/48s, the latter will still have more dynamic range (unless an ND filter was needed to reach the slow shutter speed in bright light). Phone cameras get away with a lot, including somehow merging images algorithmically even when there are moving parts in the scene, only because the images are viewed as a tiny part of the human visual field; the AI's imperfect guesses don't bother us as much as they would on a large screen. The images often look unnatural and fake to those who are experienced in looking at actual photographs, though.
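As a toy sketch of why merging many short global-shutter frames doesn't match one longer native exposure: the total light is the same, but read noise is incurred once per frame. All numbers below are hypothetical illustration values, not measured camera data.

```python
import math

def snr_single(photons, read_noise):
    # One native exposure: shot noise is sqrt(photons), one read.
    return photons / math.sqrt(photons + read_noise ** 2)

def snr_merged(photons, read_noise, n):
    # Same total photons split over n short frames, summed in post:
    # the signal adds back up, but read-noise variance adds n times.
    return photons / math.sqrt(photons + n * read_noise ** 2)

S, r = 1000.0, 4.0  # hypothetical: photons per 1/48 s, read noise in e-
print(snr_single(S, r))      # one 1/48 s frame at 24 fps
print(snr_merged(S, r, 5))   # five 1/240 s frames at 120 fps, merged
```

The merged result always comes out slightly worse; the gap grows as read noise becomes a larger fraction of the total noise, i.e. in the shadows, which is where dynamic range is lost.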
-
A mechanical (focal-plane) shutter produces some rolling shutter too. You can see it, e.g., when photographing a propeller plane or helicopter at fast shutter speeds. You can also see it under flickering artificial light; in the past the advice was to use a slow shutter speed to avoid banding from fluorescent lights, and although today's lights are often LED-based, there are still circumstances where lights or screens show banding even with a mechanical shutter. The GS eliminates this problem. I would think that photojournalists and sports and music photographers would like it, but it's a pricey camera for sure. Golf, yes: quite likely even the fastest rolling shutter would show distortion there, and the club swings fast enough that you can get some interesting timings at 120 fps or 240 fps. But those are rather special applications. I guess specialists who work on these kinds of sports with very fast motion would get it.
-
Sony Burano : a groundbreaking cinema camera
Ilkka Nissila replied to Trankilstef's topic in Cameras
But how much would you be willing to pay extra for the larger sensor and the lenses that cover it? After all, many lenses have masks that minimize flare but also limit the image coverage to a rectangle. Let's say lenses for a square 36 × 36 mm sensor would have 1/2-stop smaller apertures and 50% higher prices as a result, and the camera body would also be 50% more expensive. Flash sync speed would be 1/125s and sensor read time 50% longer. Would you still want it, and would you expect everyone else to be fine with it, so that mass production would be realistic and square sensors would replace rectangular ones all over the market?
-
Hasselblad has the advantage of central (leaf) shutters in each lens, so you can flash-sync normally at all speeds (Fuji is limited to 1/125s and slower, which is very slow). For daylight + flash shoots this is very useful; one can use much smaller flashes to balance against bright daylight. The Hasselblad is also very well designed ergonomically and compact, with some of the lenses being quite small. Fuji has focus drift: https://blog.kasson.com/gfx-100s/focus-drift-with-the-110-2-gf-on-the-fujifilm-gfx-100s/ https://blog.kasson.com/the-last-word/fujifilm-gfx-af-accuracy/
-
I think the problem originates from photography (and video) originally being technically quite difficult, so that when there was a really good photo or short film, it was viewed with excitement, and people gathered online to celebrate such things and to learn the craft themselves. Online forums were quite active. Eventually cameras got better, easier to use, and cheaper, so hundreds of millions of people bought them, and making a decent photograph was no longer unusual, a luxury, or a rarity. Thus it became progressively more difficult to make a living from it, or to be noticed for your images (whether amateur or professional). Forum activity reflects this: if it is no longer possible to make a difference with photos or videos, fewer people will enjoy the pursuit, or chatting about it.

Of course, it is still possible, but there is such a quantity of material readily available for consumption that people no longer stop to watch. And they don't value it because they don't see it as special. Even if a photo is special, they are looking at it on a tiny screen the size of their hand, at reading distance, and that's really small. If you try to come back to it, chances are you will not find it again, as the feed has changed with new material. Rarely is the creator of the photograph even mentioned online. What's the point then? Rather than give information away for free, a lot of people are giving workshops and trying to commercialize their knowledge. In the beginning of the internet, people were excited about sharing, and it was not about making money.

Social media tends to show people what they've liked before, so all the content gets likes and there is no space for criticism; if you do criticise, your comment probably gets deleted, you might get unfriended, or you get a fierce rebuttal. No one bothers to read through the discourse.

Forums are full of discussions where disagreement and agreement are on a more equal footing, but on social media it's all about likes and agreeing with the opinion of the poster. The algorithms ensure that you basically only see things similar to what you have "liked" before. And if you do leave a more neutral or negative comment, people get offended, as you're clearly not subscribing to the same bubble they are. There is no room for genuine discussion on social media.

That people don't read any more is a serious problem. It means they aren't being informed, and they probably aren't thinking much, either. Finding facts in videos is very time-consuming; that medium is better suited to disseminating stories, emotions, and experiences, whereas text and illustrations are usually better at disseminating facts. All these media have their place and should be supported; how, I don't know. Personally I enjoy watching videos from time to time (movies, documentaries, etc.) but find that it is often faster to find the information I'm looking for outside the video medium. I would be surprised if young people can get through life without reading.
-
It's easier to follow when one knows the story pretty well from books etc. I guess the different timelines are there to keep it a mystery who was behind Oppenheimer's loss of reputation and security clearance after he opposed the development of the hydrogen bomb and wanted disarmament and international oversight of nuclear weapons. That is revealed towards the end (at least in one version of events). If the film had progressed linearly in time, I guess it would not have been possible to make the story as exciting, as they'd have had to show the behind-the-scenes events as time went by.
-
I have had the Z8 since late May and have not experienced any heat-related warnings or other signs of overheating. I live in Finland, so our weather is not like Florida's, but we've had 28 °C days. When shooting video, I have mostly used ProRes 422 HQ in 4K/25, 4K/50, or 1080p/50, but I have also tested 4K/120 H.265, and they all worked fine indoors and out. The most I shot in a day was about 200 GB of video onto CFexpress Type B (a 325 GB ProGrade Digital Cobalt). I've also used a Sony 128 GB CFexpress card, which didn't overheat in the time required to fill it up in 4K/25 ProRes 422 HQ, but it was very warm to the touch afterwards. Tests published on YouTube by Ricci Chera (who works for Nikon School UK, so keep that in mind), Gerald Undone, and others generally found that the Z8 can overheat in the most data-intensive video modes in about 30 min when using memory cards that tend to run hot; with the right cards (Delkin Black is reported to run the coolest; ProGrade Digital Cobalt is also good), the camera typically runs out of battery (2 hours on one charge of the EN-EL15c) before overheating, according to those reports. Both Delkin Black and ProGrade Cobalt cards are sometimes significantly discounted at B&H, so if you're getting a Z8, you may want to look out for those deals. My own experience confirms that the ProGrade Cobalt seems to run cooler than the Sony, and I'm happy with the purchase, although these cards aren't exactly cheap. For longer recording times I would likely go with Delkin Power, as those are available in larger capacities. I love ProRes 422 HQ; such amazing detail and color. I have no experience shooting 8K.
-
IBIS with non-CPU lenses works if you type the focal length and maximum aperture into the list of non-CPU lenses that you use. This includes mechanical-only lenses without electrical contacts. https://onlinemanual.nikonimglib.com/z8/en/sum_non-cpu_lens_data_guid-fac0444d-3965-c25a-4c18-84574cb10167_285.html After entering the data for your non-CPU lenses, you can then select the one you are actually using from the list, and the data is recalled. This can be assigned to a custom function button. The AF speed in video recording is adjustable: https://onlinemanual.nikonimglib.com/z8/en/csmg_af_speed_guid-a4c4cd1b-0ad3-5c90-eac7-5c2fb524a4b5_247.html and the tracking sensitivity as well: https://onlinemanual.nikonimglib.com/z8/en/csmg_af_tracking_sensitivity_guid-54657d66-753f-7fcb-b56b-2b78446094a0_248.html For me the video AF has been excellent on the Z8.
-
This has to do with the creation of exposure latitude in N-Log mode by exposing this ISO 64 sensor as if it were an ISO 800 sensor. 😉 From the perspective of stills shooting, this is the equivalent of underexposing the image by about 3.6 stops (log2 of 800/64) and then pushing it brighter in post-processing. So yes, there will be noise. Even in video, you can shoot in SDR mode and use the Z8's base ISO of 64 and not get much noise, but then you don't get the kind of highlight recovery that you get in log. I don't think it would be quite correct to say that these cameras are noisy, as the noise level is not that far from the theoretical limits set by physics. While cameras with better cooling, dual gain, etc. record a little less noise, it's not a night-and-day difference.

When watching the 4K UHD Blu-ray of Rogue One at 60 Mbit/s, what is evident is a lot of visible noise that isn't as obvious when streaming. (What's great is the audio quality compared to streaming.) I'm surprised how much noise there is in the image, and how they chose to distribute it like that, given the large sensor used to shoot it (Arri Alexa 65). I suspect this has to do with people in the cinema field being used to the grain of colour negative film and not going for a clean digital image, for aesthetic reasons. If they had shot it at a lower ISO, I would think the result could have been squeaky clean. (I have seen videos suggesting that some colorists add noise on purpose, and there are even still photographers who do that, to mimic film grain and to hide imperfections. I'm not convinced that adding noise has more than a transitional benefit between generations while technology is changing quickly.) There are cinema cameras which record the image with two gains to improve dynamic range and merge the data into the video file.
There are also other sensor-level approaches to increased dynamic range, such as Fujifilm's Super CCD some years back, but this requires a different photodiode layout, and it seems they're not continuing that line of development. What is possible, of course, is to take advantage of temporal redundancy and reduce the noise based on the similarity of consecutive frames. But really, the easiest way to reduce noise is to give up exposure headroom in the highlights and record more light, which is also what you are saying. I find it curious how cinema camera manufacturers define ISO differently from still cameras. I understand that displays can have a larger dynamic range than paper, and this could be part of the reason: still cameras are designed to expose by giving the sensor as much light as possible while still being able to render highlights up to the whites of a photographic print. Ink on paper has a dynamic range of roughly 200:1, while high-dynamic-range displays reach 20000:1 or higher. Thus the display may need to be considered when deciding on an exposure, and it also means that images made for HDR displays can be noisier in parts of the image, because those extreme highlights must be accommodated in the shooting phase. In stills photography, HDR images can be made by exposing two or more frames at different exposures, but this wouldn't exactly work for video; what they can do is record two sets of data with different gains, but the rewards obtainable from that approach are more limited.
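The stop arithmetic in the figures above can be checked in a couple of lines; the 200:1 and 20000:1 contrast ratios are the round numbers quoted in the post, not measured values.

```python
import math

# Exposing an ISO 64 sensor as if it were ISO 800 is a push of
# log2(800/64) stops.
push = math.log2(800 / 64)

# Contrast ratios expressed in stops (each stop doubles luminance).
dr_paper = math.log2(200)     # ~200:1, ink on paper
dr_hdr = math.log2(20000)     # ~20000:1, an HDR display

print(f"{push:.1f} stops of push")      # 3.6 stops of push
print(f"{dr_paper:.1f} stops on paper") # 7.6 stops on paper
print(f"{dr_hdr:.1f} stops on HDR")     # 14.3 stops on HDR
```

So the HDR display asks the capture side for roughly 6-7 more stops of highlight headroom than a print does, which is where the extra shadow noise comes from.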
-
When using the hi-res zoom, e.g. at the 2X setting, the area that each original sensor pixel occupies in the final (zoomed-in) image increases by a factor of 4, which makes noise more visible than when not zooming: normally each pixel of the 4K image is made by averaging four pixels of the 8K sensor, and those resampled pixels are less noisy. However, I still consider the hi-res zoom a valuable and even exciting feature. For video, 4K is arguably more than enough for almost all practical purposes, and high-quality 4K video already takes a lot of storage space; shooting in 8K would significantly increase storage needs without necessarily adding anything to the final presentation. Using the 8K sensor this way to solve a practical problem (the current lack of powered zooms in the Nikon system) is quite sensible, even if it is limited to 1X-2X zooming. A few limitations should be mentioned. First, normal 4K video is created by oversampling from 8K; when zooming in, the degree of oversampling is reduced, though in my opinion the resulting quality is still very good. Second, for hand-held footage, only optical (with a VR lens) and sensor-based VR are available together with hi-res zoom; electronic VR is disabled. Finally, in hi-res zoom mode the focus area is not displayed in the viewfinder or on the LCD, and it is automatically set to wide-area L, presumably in the center of the frame. Subject detection can stay on while using hi-res zoom, but you can't see exactly where the limits of the AF area are, and you can't move it around the frame. So there are a few caveats in the current implementation. I would like to see Nikon display the focus area even if it can't be moved.
Still, I quite like the feature and the ability to zoom in and out a bit more smoothly than with the mechanical zoom control on the lens. With firmware update 4.0 on the Z9, this feature gains more options for the zooming rate. As for your last point, I understand that Sony has more advanced interpolation available to preserve detail when zooming digitally. Adobe and others offer this kind of feature in post-processing software, so I would guess the main benefit of doing it in camera is that the final result is immediately available; for sports photography, I suppose that is a priority. However, digital zooming always amplifies noise.
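The oversampling point above (each 4K pixel averaged from a 2x2 block of 8K sensor pixels) can be illustrated with a quick simulation; the noise sigma of 1.0 is an arbitrary unit, not a measured sensor figure.

```python
import random
import statistics

# Simulate per-pixel noise on the 8K sensor as independent Gaussian
# samples (arbitrary sigma = 1.0).
random.seed(42)
pixels = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Oversampled 4K: average each group of four sensor pixels into one
# output pixel, as the camera does when not using hi-res zoom.
binned = [sum(pixels[i:i + 4]) / 4 for i in range(0, len(pixels), 4)]

print(statistics.pstdev(pixels))  # close to 1.0
print(statistics.pstdev(binned))  # close to 0.5: averaging 4 halves noise
```

At the 2X hi-res zoom setting there is no 2x2 block left to average, so per-pixel noise is roughly doubled relative to the oversampled output, which matches the "factor of 4 in area" observation above.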
-
The image achieved by 2X-cropping an 8K frame to its center 4K in order to get a 400mm field of view from a 200mm f/2.8 doesn't look like the image from a 400mm f/2.8 lens (except for the field of view). It will look like a 4K full-frame image from a 400mm f/5.6 lens instead, in terms of image noise as well as depth of field. You cannot magically turn a 70-200/2.8 into the practical equivalent of a 400/2.8 simply by taking a crop from the center of the frame. Now, the digital zooming offered by the Z8 is handy and practical, as it offers the equivalent of a motorized zoom, which Nikon doesn't offer on the lens side, at least not yet. By developing the firmware further, Nikon will be able to offer variable digital zooming rates, and the image detail should hold up quite well given the lens's optical quality and the 8K sensor used to implement it. However, if you then actually take a 400/2.8 and compare its images to ones from the 70-200/2.8 with 2X high-resolution zoom, they will be very different. Nonetheless, I think this is a smart use of the high-resolution sensor, as long as the lens itself can resolve that kind of detail and focus can be maintained at the higher resolution. In low light, though, you can quickly see why it's not a real 400mm f/2.8. IMO the high-resolution zoom is best used from a tripod, as in the zoomed-in state the hand-held stabilization is not as good as without the zoom, and electronic VR is not available with the high-resolution zoom feature. Of course, for wildlife use a tripod used to be considered a given, but nowadays...
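The equivalence rule used above can be written down as a tiny helper: cropping by a factor multiplies both the effective focal length and the full-frame-equivalent f-number (because the physical aperture diameter is unchanged while the field of view narrows). The function name is just for illustration.

```python
def crop_equivalent(focal_mm, f_number, crop):
    # Cropping by `crop` narrows the field of view like a longer lens,
    # but the entrance pupil stays the same size, so the equivalent
    # f-number (and hence noise and depth of field) scales by `crop`.
    return focal_mm * crop, f_number * crop

focal, f = crop_equivalent(200, 2.8, 2)
print(f"{focal} mm f/{f}")  # 400 mm f/5.6 "look" from a 200 mm f/2.8
```

The same arithmetic explains why an APS-C (1.5X) crop of a 70-200/2.8 behaves like a 105-300/4.2 equivalent, not a faster long lens.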
-
Nikon has a 24-120mm f/4 native Z-mount lens that is generally well-regarded. Oh, the Tamron is an f/2-f/2.8. That's pretty neat.
-
I'm not a lawyer, but I'd guess that if a court decided the patent was invalid, RED would lose all income from license agreements based on that patent. Since they withdrew the lawsuit, the other companies cannot automatically stop paying license fees they have agreed to, as the patent hasn't been officially invalidated.
-
The Z9's rolling shutter is reported to be much faster for stills than for video, so it's better to just use stills mode if you want stills. The frame is larger, it is read faster, there is more dynamic range and the option of less compression, and you can even start capturing images before pressing the shutter button fully down.
-
Sony lack of firmware updates is getting completely ridiculous!
Ilkka Nissila replied to Amazeballs's topic in Cameras
-
The ZV-E1 and A7R V have a new dedicated AI processor (absent from the other bodies) which is used to implement these new subject-detection and tracking features that go beyond the capabilities of the older products in some ways. While some version of the algorithms could probably be implemented on the older hardware, it would likely be a pain for the engineers to make algorithms developed for the new AI processor work on old hardware. From the Sony web page: "Real-time Recognition AF incorporates an innovative AI processing unit that uses subject form data to accurately recognise movement – human pose estimation technology uses learned human forms and postures to recognise not just eyes, but body and head position with high precision, making it possible to lock onto and track a subject facing away from the camera. The AI processing unit can even differentiate between multiple subjects having different postures and recognition of individual faces has also been improved so that tracking reliability is achieved in challenging situations such as when a subject’s face is tilted, in shadow, or backlit. In addition to Human and Animal[xxiii], the AI processing unit now makes it possible to recognise Bird[xxiv], Insect, Car/Train and Airplane[xxv] subjects, providing even greater flexibility and reliability when shooting both stills and video."
-
In 120 fps mode the camera measured 110 °F, which is above 43 °C and sufficient to cause a skin burn. Since the tests were done indoors, it is logical to think that in full sunlight in a warm climate, temperatures could climb higher still, also in the other (non-slow-motion) modes. It seems Canon is doing what it can: they offer the standard mode, which should not cause burns in normal circumstances; the high mode, which might cause burns in some conditions if the user isn't careful when touching the camera; and the R5C with a fan as a hardware cooling solution. It seems reasonable for Canon to address the problem in this stepwise fashion as they gather experience and user reports with the hardware. The other option would have been to delay the camera's launch until a comprehensive solution could be offered (either in hardware or via more refined safety algorithms and appropriate warnings for certain settings) and lose market share.
-
Nikon's compressed NEF stills (from the D2X era onwards) are indeed visually lossless: the highlights have high SNR but also high shot noise, so Nikon simply drops some of the least-significant bits, which are covered by the shot noise in the highlights. However, the D2X-era Nikon compression only reduces file size by a small amount; the compression ratio is not as high as in RED's or intoPIX's algorithms (the latter being used in the Z9). It is extremely difficult to see any difference between the D2X-era visually lossless compressed and losslessly compressed NEFs (you'd need very aggressive curves adjustments to see any difference at all), but the file-size difference between the two formats is small. RED's approach could be considered an advancement of the state of the art in that it provides a higher degree of visually lossless compression than Nikon's. intoPIX's algorithm, used in the Z9 for both stills and video, also provides high compression ratios and was awarded a patent. One could argue that since raw video is a trivial extension of raw stills at sufficiently high fps (24 or higher), and Nikon is simply applying visually lossless compression to each frame (as it has done in its still cameras since at least the D2X), using such a method to record raw video is a trivial extension of what Nikon did in the past, only at higher resolution and frame rate. The compression algorithm the Z9 uses for high-efficiency raw stills and raw video is documented in intoPIX's patent; since it was awarded a patent, it clearly includes an inventive step. Yet RED seem to claim in their lawsuit that Nikon is using RED's technology and not someone else's. Interesting. The patent system was created to foster innovation and to allow sufficient funding to be raised for research and development of new, useful things.
Here it is being used by one company to try to block a competitor from entering the market with a product that is superior in some ways (i.e. autofocus) at perhaps 90% lower price, and it's not really even the same market, as the Z9 is mainly a still camera with some video capabilities, and most of its purchasers likely use it mainly for still photography, simply because Nikon isn't a prominent video brand, let alone a cinema brand. If RED prevail, this will be an example of the patent system damaging American customers' ability to purchase competitive gear rather than fostering innovation. Perhaps customers can just purchase their Z9s from overseas in the future.
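The LSB-dropping idea described above (coarser quantization where photon shot noise already exceeds the quantization step) can be sketched in a few lines. This is a toy model, not Nikon's actual NEF scheme: the raw counts, the bits dropped, and the 1 count = 1 electron mapping are all illustrative assumptions.

```python
import math

def drop_lsbs(count, k):
    # Zero out the k least-significant bits, i.e. quantize the value
    # to a step of 2**k counts. The discarded bits compress away.
    return (count >> k) << k

S = 10_000           # hypothetical raw count for a bright highlight
k = 4                # bits dropped in that range
step = 2 ** k        # quantization step: 16 counts

# Shot noise at this level is ~sqrt(S) ≈ 100 counts in the toy model,
# far larger than the 16-count step, so the coarser coding is
# "visually lossless": the error hides under the noise.
assert step < math.sqrt(S)

print(drop_lsbs(S + 13, k))  # 10000: the 13 dropped counts vanish
```

In the shadows, where sqrt(S) is small, no bits can be dropped without visible banding, which is why such schemes only thin out the codes in the highlights.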