Ilkka Nissila
Everything posted by Ilkka Nissila
-
Nonetheless he said the videos were shot with a Ronin 4D, which does not support "open gate" video recording, illustrating that it was not necessary and that other camera characteristics were more important to the project than open gate. Nothing comes free; open gate at full resolution without line skipping would mean the sensor read time increases, so there would be more rolling shutter, and it might need more processing power to handle that data (or at least it would generate more heat). These may be appropriate compromises for some users. However, one can seriously ask whether all cameras need to have open gate or whether it is sufficient that a few do, enough to satisfy this market. Short-form videos are considered to be tiring to the brain and to reduce the viewer's ability to concentrate and control themselves. I believe most if not all of the vertical videos belong to this class. For long-form video content, I believe the horizontal format is much more suitable. Times Square, huh? I recently checked hotel prices in NYC and they were in the $500+ range. I wonder where the tourists are coming from given these prices. I have stayed in Manhattan many times before, but the prices were a quarter of today's prices.
-
I don't quite see it that way; if social media platforms are viewed on a computer, the browser takes up all the display area available and fits the content using the whole window; this can be vertical, horizontal, or square for that matter. Basically only when social media is viewed on a mobile device do some apps and websites default to vertical viewing, but that's a limitation of the device and of the typical way people default to using it. Originally Instagram photos were square, not vertical or horizontal. Some social media platforms assume that a video is shot vertically on a mobile phone, and for a time it wasn't even possible to shoot in horizontal orientation and have the social media site or app display it correctly; it would always force it to the vertical format. This, however, is incompatible with the way most news media sites present videos, which are horizontal only, mimicking TV. When these news media sites then displayed social media videos or cell phone videos, they could not technically display the video as a vertical; instead they generated blurred sides for the video to turn the vertical video into a horizontal one. This is all a bunch of nonsense really. Vertical videos make it difficult to show the context and environment in which something is happening. This is why cinema and TV are in landscape orientation: it's better for displaying the content. Photos have always been shot both vertically and horizontally (probably most still horizontally, for the same reason as video), as the continuity can be broken in stills: one can simply flip the camera quickly to vertical, shoot some (portrait) shots that way, and return to the landscape orientation to show context. In video, one cannot do such flipping without causing problems for the viewer.
Books and magazines naturally lend themselves to images in portrait orientation or, in some cases, square; to display a landscape image at a large size one would need to use a double-page spread, which of course is commonly done, but it does create some issues if an important part of the image is in the mid section. What's more, the verticals in (still) photography were traditionally not anything remotely like 9:16 but 4:5, 3:4, and 2:3. I seriously think social media apps and sites should consider making the vertical format something like 4:5 rather than 9:16, as the latter is just not very good. It's too narrow. Fitting the device inside a pocket is an extreme limitation. Clearly, if the main reason vertical videos are requested by advertising clients is people looking at their mobile phones on the tube or bus, or wherever, the quality loss from cropping from 16:9 is hardly going to be visible on those tiny displays. Sure, the angle of view is narrower, but it's always going to look awkward having such an extreme aspect ratio in a vertical image. It is interesting to hear that there are now high-resolution displays which show video content in public. I can't remember for sure seeing such things myself, though it's possible that I have seen them but didn't pay attention. I would be very surprised if those displays are as elongated as 9:16, though. It just doesn't make any visual sense to use such an extreme aspect ratio for vertical content when there is a choice to stick to 4:5 or 2:3. And when those much more suitable aspect ratios are used for the vertical content, the cropping from landscape 16:9 is less extreme and easier to manage.
-
Sounds like random people making stuff up; the ZR has a fast read time in video mode (for a relatively low-cost mirrorless camera); it doesn't make any sense to make a video-first camera based on a sensor that is more than 10 years old and has a very slow read time. I couldn't find any reports of it on NR.
-
AI is not a person or a human being, and it doesn't share evolutionary history or biological safeguards with us. It is therefore more unpredictable what it might do. I share a lot of the concerns you express in your article. It's worth following what Bernie Sanders has been saying about AI and its impact on the workforce, and his point that the benefits of AI should be shared among all of humanity and not concentrated in the hands of a few ultra-rich people. Musk has been claiming that work will become optional in the future and that a universal income will allow us to do anything we want, but everything that the big seven companies and their billionaire owners have been doing suggests that they only care about power and getting even richer, and are not at all likely to share the riches with the people. What in their past and current behavior would make anyone believe this would ever change without society and its political leaders forcing a change? Musk seems to think he is player 1 in a computer simulation (the world), and so everything that happens is part of a game and an adventure to him. World destroyed? No matter. Restart simulation. As long as he gets to try to make it to the next level (Mars) in the game, that's all that matters. We are all just extras in the game. What concerns me the most is that in the race for Mars and superpowerful AI, the Earth's environment, the climate, and its people are sacrificed, and yet Mars is and will likely always remain hostile and unsuitable for human life, so all that we have could be sacrificed on a useless and pointless goal by a person who doesn't have all his marbles. The situation is a clear demonstration of why individual wealth must be limited and redistributed when it gets out of hand.
-
The only principle that they follow is defined by their self-interest. If a law or moral principle exists which they think would help them gain more power or wealth, they use it to argue why others should follow it. But they never feel the need to obey laws or ethical principles if doing so would be disadvantageous to their attempts to increase their power or wealth. This is similar to Russia, which cries foul when Western countries freeze its foreign assets but sees no problem in the looting and killing of Ukrainians. These are examples of people who are guided only by their self-interest and will do anything to gain more and more power and wealth. What is amazing is how the common people actually voted those people into positions of power.
-
The high dynamic range (using DGO technology) in the Sony A7 V is for low- to middle-ISO stills when using the mechanical shutter; DGO is not used for video, and certainly there won't be any 16-stop dynamic range at ISO 3200 or 8000. The claimed 16 stops is likely achieved on a significantly downsampled ISO 100 still image with a criterion based on engineering dynamic range (SNR = 1). Do the EOSHD website and the browsers used by visitors support high-dynamic-range photos on Super Retina XDR and other HDR screens? Otherwise, I'm not sure what the OP is looking to see. Having lower noise can't harm the image, and it's up to the user to make use of the higher fidelity, or not.
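To put the engineering-DR criterion in concrete numbers, here is a rough back-of-the-envelope sketch; the full-well and read-noise figures are hypothetical, not Sony's published specs, and serve only to show how the SNR = 1 criterion and downsampling together can inflate a headline number:

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops, with the noise floor set at SNR = 1."""
    return math.log2(full_well_e / read_noise_e)

def downsampled_dr_stops(full_well_e, read_noise_e, native_mp, viewed_mp):
    """Downsampling averages pixels, which lowers the noise floor by
    sqrt(native/viewed) and thus adds 0.5 * log2(native/viewed) stops."""
    return engineering_dr_stops(full_well_e, read_noise_e) \
        + 0.5 * math.log2(native_mp / viewed_mp)

# Hypothetical sensor: 50k e- full well, 2 e- read noise, 33 MP viewed at 8 MP
per_pixel = engineering_dr_stops(50_000, 2.0)
downsampled = downsampled_dr_stops(50_000, 2.0, 33, 8)
print(f"per-pixel: {per_pixel:.1f} stops, downsampled: {downsampled:.1f} stops")
```

With made-up but plausible numbers like these, a sensor that measures well under 15 stops per pixel can be quoted at over 15.5 stops after downsampling, without any change to the hardware.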
-
Filming job in US? Delete your social media
Ilkka Nissila replied to Andrew - EOSHD's topic in Cameras
I believe it's just mainstream social media sites such as Facebook, Instagram, Twitter/X, LinkedIn etc., that they care about, not small niche forums on very specific topics not related to politics. I think it's safe to visit the US unless you have a written record of publicly speaking against Trump or his policies, in which case it might not currently be safe.
-
I think in any given time window, a truly good movie is a rare thing. It's not that there are no good films being made currently; rather, we remember those old films which left a lasting impression on us and tend to forget those films which were not good. For films made in the 1980s and 1990s we remember the very best ones. For films made in the 2020s we are more likely to remember the latest ones we saw. High-image-quality cameras (be it high dynamic range or resolution) don't make things worse in terms of the quality of the outcome, but it may be that they motivate the production to aim for greater perfection in some sense and then not realize that technical perfection is not necessarily a worthy goal on its own if it leads to losses in other areas, such as the story and dramatic intent. I think visual aesthetics have been changing with the ubiquity of the mobile phone camera, the kind of processing that phone manufacturers apply to images by default, and also the kind of post-processing that people apply to their images on Instagram etc. People who have grown up on these devices are used to the auto-HDR AI look, and they may think that kind of a look is normal and looks good. Cinema cameras that capture high dynamic range allow that kind of post-processing to be applied, but they also allow other options; it is how they are used that is important. As camera and TV (particularly streaming) resolution has been increasing, it is possible that to get technical perfection, the producers think all the actors need to be really beautiful with perfect skin etc., as they are shown in such fine detail in the movie. Post-processing edits to how skinny models look on magazine covers or online, and the fixing of imperfections with plastic surgery or post-processing, have also led to a new aesthetic, a race that got out of hand, leading to ever less realistic photographs and movies.
If they process everything to look like a tone-mapped fake HDR image with local tonal variations everywhere and no contrast between the different elements in the scene, and all the characters are super perfect, then there is a huge disconnect with reality. Classical films often had rough characters along with the beautiful, which made things look realistic even if the lighting was hard and stylized (by necessity, as the film material required a lot of light, so hard lights were used and there had to be intent). Actual HDR technology can help avoid the tone-mapped HDR look and keep shadows dark while still showing detail (preserving the global contrast between parts of the image). However, how this technology is used is up to the people making the movie, of course. I have to admit that most of my favorite movies were shot on film, although I do like several which were shot on digital. I don't think shooting on film per se makes those movies look good, but it may be that the filmmakers were able to choose an aesthetic (through film, lighting, and costume choices etc.) and hold creative control over it with a firmer hand when using traditional techniques. This could also be why camera manufacturers have recently been adding "looks" and "grain" baked into the footage as options. They can help to lock in a certain look, and the added grain prevents excessive mucking about with the image in post-processing. However, to me this seems like a less than ideal solution; the ideal would be for the team members to communicate, understand the intent, and work together to achieve it. I notice there is no agreement online as to what look is good; people have wildly differing opinions on such topics. Thus it is up to us as viewers to select our favorites and enjoy them rather than hope that every new movie follows the same aesthetics. This will never happen, of course, as there are so many opinions.
-
Curves are available in the advanced settings when creating custom picture controls (which you can then upload as recipes).
-
I think the mechanical systems that allow the back LCD to tilt behind the optical axis as well as open to the left for selfie orientation are more complicated and require more parts than what Nikon is using in the ZR, and this would make the camera heavier, larger, and more expensive (making it less attractive to many people, without necessarily solving the problem the current design solves). Higher-end models will no doubt be made over time with different solutions to how the LCD turns into different orientations. The Z8 and Z9 offer a screen which does not tilt forwards (selfie orientation) but does keep the LCD approximately on the optical axis.
-
I could never understand "accelerated" manual focusing; it just makes things more difficult and unpredictable. Nikon fortunately has released firmware updates for most of the S-line lenses (exception: the 14-24/2.8) that add what people call linear manual focusing (I'm not really sure what is linear in it; what it does is make focus ring position and focus distance correspond to each other in a bijective relationship, at least within the power cycle of the camera). What's even nicer is that you can choose how much you have to turn the ring to achieve a given focus change, so it is adaptable to different users and needs. I think focus-by-wire should never have been accelerated by default in any lens. As for the priority on autofocus, mirrorless so-called hybrid cameras and their lenses are still a bit more (stills) photography-oriented than video-oriented, and so the needs of stills shooters come first in most models. Autofocus is very useful when you want consistent focus on the eye, for example, or when shooting action subjects (again, stills). For some things (such as when multiple subjects at different distances have to be sharp in the frame, and the best way to achieve this is to focus in between them) manual focus is better, but manufacturers chose to prioritize ease of use over the needs of skilled users. Lenses with mechanical manual focus are of course available, natively and via adapters, for those who prioritise MF.
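As a toy illustration of why linear focusing is more predictable, here is a minimal sketch; the gain constant, focus-throw angle, and formulas are invented for illustration and are not taken from any Nikon firmware. Speed-sensitive focus-by-wire makes the same physical ring turn land on a different distance depending on how fast you turn, while the linear mode is repeatable:

```python
def linear_focus_step(ring_degrees, full_throw_degrees=180.0):
    """Linear MF: the fraction of the focus throw traversed depends only
    on how far the ring was turned -- ring position maps to distance."""
    return ring_degrees / full_throw_degrees

def accelerated_focus_step(ring_degrees, dt_s, gain=0.02, full_throw_degrees=180.0):
    """Accelerated MF: the focus change also scales with rotation speed,
    so the same physical turn gives a different result each time."""
    speed = ring_degrees / dt_s  # degrees per second
    return ring_degrees * (1.0 + gain * speed) / full_throw_degrees

# The same 30-degree turn, made slowly vs. quickly:
slow = accelerated_focus_step(30, dt_s=1.0)  # leisurely turn
fast = accelerated_focus_step(30, dt_s=0.2)  # quick flick
lin = linear_focus_step(30)                  # always the same fraction
print(f"linear {lin:.3f}, slow {slow:.3f}, fast {fast:.3f}")
```

In the accelerated model the quick flick travels several times further through the throw than the slow turn of the same angle, which is exactly what makes repeatable focus pulls so hard with that scheme.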
-
Since RED says the colorimetry and gains are different in R3D NE vs. N-RAW, this seems to support that. Nikon traditionally has done a white balance adjustment before storing the values in the RAW file, and the raw conversion software has to know what processing has been applied in order to correct the WB. My guess is that RED might not do that (to preserve consistency across the different cameras storing R3D files) and so the colors are different in the different raw formats. RED also does not adjust sensor gain between intermediate ISO settings, as far as storing values in the raw file is concerned, apart from the two base ISOs, if I understood this correctly, and this approach is also used in the ZR R3D NE. Nikon applies different gains to the data also at intermediate ISO values when storing data in N-RAW files. So, the two formats work somewhat differently and are intended for different postprocessing pipelines.
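A minimal sketch of the two storage strategies described above; the gains and pixel values are invented for illustration, and the real formats are of course far more involved than this:

```python
def store_prescaled(pixel_rgb, wb_gains):
    """White-balance gains applied before writing the raw values; the
    converter must know exactly what was applied in order to undo it."""
    return {"data": [v * g for v, g in zip(pixel_rgb, wb_gains)],
            "applied_wb": list(wb_gains)}

def store_unscaled(pixel_rgb, wb_gains):
    """Sensor values stored untouched; WB goes into metadata only,
    keeping the stored data consistent across camera bodies."""
    return {"data": list(pixel_rgb), "wb_metadata": list(wb_gains)}

pixel = (100.0, 180.0, 90.0)  # hypothetical raw R, G, B counts
gains = (2.0, 1.0, 1.5)       # hypothetical daylight WB gains

a = store_prescaled(pixel, gains)
b = store_unscaled(pixel, gains)
# Dividing the gains back out recovers the unscaled values -- but only
# if the converter knows what was applied:
recovered = [v / g for v, g in zip(a["data"], a["applied_wb"])]
```

The point of the sketch is simply that the two files contain different numbers for the same scene, so identical raw converters fed the two formats would produce different colors unless each pipeline compensates for its format's conventions.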
-
I've had good experiences with ProRes 422 HQ on the Z8, and I wonder the same: why don't YouTubers test it and show the results in their comparisons? While the data rate is quite high (note that Nikon has stated they will add ProRes 422 LT to the ZR in a firmware update), it is a video format for which in-camera distortion and vignetting corrections are available, and there is some noise reduction in play as well. CineD has tested it and it is included in their database, and it seems from the numbers that the camera in ProRes HQ mode applies less noise reduction to high-ISO files than in h.265. It seems like a good compromise, and if the LT version comes soon it might change minds.
-
Panasonic is reading the sensor more slowly in both the normal and DR Boost modes, which explains how they can get more DR out of it than Nikon does in their implementation. It may or may not be the same sensor. In any case Nikon's compromise is different from Panasonic's, and both are legitimate choices. The ZR was under development before Nikon acquired RED, and what RED know-how they added to this camera is likely in the firmware (and in post-processing support in the R3D NE format pipeline). In CineD's testing, the latitude test shows better retention of color across exposure adjustments in post-processing when using R3D NE than N-RAW, so it would seem that the RED acquisition has already paid off in making Nikon more competitive in the video arena, and this is not just marketing if it benefits users. What Nikon should do now is try to make the h.265 a bit more competitive so that more people who cannot handle the raw data rates can still benefit from the camera. It would be very costly for everyone to shoot everything in R3D NE to get the benefits of the camera. I personally am looking forward to seeing some ProRes 422 HQ material shot with the camera to see how it fares in comparison with the Z8.
-
Is there somewhere where we can see an example of this problem vs. another camera with a better implementation of h.265? I think it's understandable that when a highly compressed video codec is used, there is noticeable quality loss and the manufacturer is trying to mitigate this with some algorithmic processing of the data. Is it really the case that the quality of the h.265 is worse than in a previous model from Panasonic or another camera in a similar price class, or could it be a case of increasing expectations over time as we see high-quality footage using better screens more often?
-
How about ProRes 422? ProRes 422 4K at 25 fps is 433 Mbps, vs. 2.3 Gbps for ProRes RAW 6K (normal) and 3.5 Gbps for ProRes RAW HQ. I would think ProRes 422 on the S1II is likely to be a good intermediate-sized format between RAW and h.265; at least from my Nikon experience the quality should be very good.
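For a feel of what those data rates mean in storage terms, here is a quick calculation (decimal gigabytes; the rates are the ones quoted above):

```python
def gb_per_minute(mbps):
    """Storage consumed per minute of footage at a given data rate,
    in decimal gigabytes (1 GB = 10^9 bytes)."""
    return mbps * 1e6 * 60 / 8 / 1e9

rates_mbps = {
    "ProRes 422 4K25": 433,
    "ProRes RAW 6K (normal)": 2300,
    "ProRes RAW HQ 6K": 3500,
}
for name, mbps in rates_mbps.items():
    print(f"{name}: {gb_per_minute(mbps):.1f} GB/min")
```

So ProRes 422 at these rates costs a bit over 3 GB per minute, while the RAW variants run to roughly 17 to 26 GB per minute, which is why an intermediate codec matters for longer shoots.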
-
While I think it would make sense for hybrid cameras to offer similar "looks" across photos and video for easier presentation together, I am not really sure storing photos in log format makes sense. First, while linear encoding would waste bits due to the highlight photon shot noise making the least significant bits meaningless, this has already been corrected in compressed raw file formats such as Nikon's (technically lossy but visually lossless) compressed NEF. If I recall correctly, Nikon simply leaves out the LSBs in highlight pixels, thus saving storage space. In log video mode, cameras bias the exposure metering to produce about three stops of underexposure compared to normal SDR photos, and this leads to a lot of noise in the main subject (if there is one). It may not be such an issue for video because in video you can do temporal noise reduction which you cannot do for photos since they're individual frames with different content in each image. Usually in still photography, people want the main subject to have the highest possible image quality, and exposure metering algorithms typically emphasize the detected or selected subject and only secondarily protect highlights from blowing out. I still almost always increase midtones in post-processing by a curves adjustment, reducing highlight contrast and bringing the subject (midtones) up in brightness. For scenes that require a large dynamic range, many photographers I know of shoot a set of bracketed frames in order to ensure high SNR for each major part of the image and then merge the images with masks or other such techniques (depending on the subject). For video, exposure blending with masks is not possible but some automated DR-enhancement methods that blend two amplification levels exist in a few cameras (dual gain output). 
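The shot-noise argument can be made concrete with a small sketch: a quantization step that grows with the square root of the signal stays hidden under the photon noise, which is roughly the principle behind visually lossless raw compression. Nikon's actual quantization table is not public; the 0.5 safety factor here is my assumption, purely for illustration:

```python
import math

def visually_lossless_step(signal_e):
    """Quantization step kept below the photon shot noise
    (std = sqrt(signal) electrons), so the discarded LSBs were
    already random. The 0.5 safety factor is an assumption."""
    return max(1, int(0.5 * math.sqrt(signal_e)))

for s in (16, 1_000, 40_000):
    print(f"signal {s} e-: shot noise {math.sqrt(s):.0f} e-, "
          f"step {visually_lossless_step(s)}")
```

In deep shadows the step stays at 1 (full precision), while in bright highlights steps of around 100 electrons are still invisible under the noise, which is where the storage savings come from.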
While the idea of having highlight exposure latitude is appealing, it comes at a cost in midtone and shadow SNR, and I think many still photographers would consider the outcome to be of poor quality compared to what they are used to. It's also the case that many if not most (?) still photographers use Auto ISO with manual exposure mode as their go-to exposure mode, and they expect the camera in most cases to set the ISO precisely, to get close to the desired brightness for the main subject as they are shooting. I often set the camera to ISO 100 or 64 with Auto ISO, letting the camera vary the ISO from 64 to 12800 to get the exposure correct and the photos nearly usable as they come out of the camera, with minimal tweaking. This won't work for log, as most of the ISO settings are unusable in log given the 3-stop underexposure built into the approach. Yes, you can apply +2 to 3 stops of EV correction and then get similar results to linear modes, but then the exposures on the screen will look off and it's harder to see the subject and get the correct feeling of the scene and how it would render in the photograph. I just don't see this going anywhere outside of a few filmmakers wanting look-matched still photos when video is their primary output. Still photographers outside of agency photojournalism shoot raw, and that's that for the most part.
-
In still photography, the storage space issue for RAW is less pressing than in video, and since each still image can be studied for a long time (at least in print), people can pay more attention to quality (and photographers can afford to spend more time editing individual frames with masks etc., while in video it would be extremely tedious to do exposure blending or other manually drawn mask-based operations on a frame-by-frame basis). In the early years of digital system cameras, the difference between RAW and JPEG was more obvious, and people got used to RAW because the image details were better and of course the files are more editable. For video, I suspect that RAW usage will be more limited to the high end, where there are professional colorists etc., and to occasional shooters who don't shoot a huge quantity of material. But maybe I am wrong. 😉
-
Different post-processing pipelines and their settings for N-RAW and R3D NE may be what is causing such differences, not necessarily different primary data in the files, unless the person making the video actually used the renaming hack. Of course, it is possible that the data are different in the files. However, sharpening images and storing them in the raw format makes no sense, as the images are not in RGB format at that point. Sharpening at that stage could mess up the colors, so I doubt they are doing it.
-
In an unregulated state, all the money will go to the owners of the AI built on stolen data (taken from creatives without compensation), and no working person will have money. It'll be like the 1920s again, and remember, the tariffs of that era spread the US depression worldwide, leading eventually to World War II. After that, a period of relative decency began, until the 1980s, when more and more of the money was funneled to fewer and fewer people, leading eventually to Brexit, Trump, and the Russia-Ukraine war. All of these phenomena since the 1980s happened because the multimillionaires and billionaires want to have all the money and keep it too. Adapting is the same as capitulation, which makes working people the equivalent of slaves. All the money will go to the techno-oligarchs and their criminal politician friends. The only way to solve the problem is to make AI models based on stolen data illegal and erase them, or give due compensation to the creators of the original training data that was used to make the model, and to tax billionaires so that they end up with only the money that a decent life requires. This would restore fairness and decency to society and good lives to ordinary people.
-
8K50p N-RAW Normal is 362 MB/s. Comparisons between different codecs at different frame rates don't make much sense, since frame rate is usually specified by the application and not used just to fill a data rate quota. h.265 is available on the ZR.
-
So is the High ISO NR item in the Video recording menu grayed-out when selecting h.265?
-
It's only a 1:2 difference. You can get similar file sizes from ProRes 422 HQ 4K as from N-RAW Normal 6K, and the ProRes looks gorgeous, at least from the Z8; I would expect the same from the ZR. There could be an issue, though, if you want 50 or 60 fps: the quality may not be as good from the ZR as it is from the Z8 (due to the lack of support for extended oversampling).
-
The R3D NE is only available in bitrates similar to N-RAW high quality, not normal, which is what many people seem to be complaining about.
-
With F-mount lenses the AF won't be great during video, but it should be OK for stills for the most part. I don't think focus pulling by hand with autofocus stills lenses (F-mount) is going to be easy. Practice with the same kind of subjects helps. For video AF, native lenses do best. Most Z-mount Nikkors can also be programmed for linear manual focusing. Raw video takes up a lot of storage very quickly.
