Everything posted by Ilkka Nissila

  1. I don't quite understand what the problem is. Metadata giving instructions for interpreting the exposure (such as a "soft" ISO setting which does not actually affect the stored image data) can work for proprietary formats such as raw video, but is there a similar option for non-raw video formats in any camera? If the video is to be used as-is, with minimal editing, all the major editing and playback software would need to know what to do with the data and the instructions in the metadata. If the file is meant to always be edited (as with log video) then the option may make some sense, but the user can always stick to one of the camera's base ISOs if they wish, so I'm not sure what added value a separate brightness adjustment brings; just to see the image better? The problem is that it likely disconnects you from how much exposure latitude you have in either direction: the brightness shown is adjusted for viewing comfort and does not reflect the actual values stored in the file. Compensating for this loss of visual connection would require additional exposure-monitoring tools, such as colors indicating how many stops each point of the image is from saturation, and that can lead to screen clutter on a small camera with a small screen. The Nikon ZR, as far as I've understood, does offer such an option when recording R3D: the camera lets you choose one of two base ISOs and then adjust the brightness using the ISO sensitivity setting, which does not affect the stored data. I already see people asking Nikon to add "traffic lights" for monitoring to help deal with the disconnect. Does the ZR waveform display reflect only the actual stored values, or does the brightness/ISO sensitivity adjustment affect the waveform too?
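To make the "traffic lights" idea concrete, here is a minimal sketch of stops-from-saturation monitoring, assuming linear image data; the clip level and suggested color bins are illustrative, not anything Nikon has published.
```python
import numpy as np

def stops_from_clipping(linear, clip_level=1.0):
    """Per-pixel headroom in stops below the saturation level."""
    linear = np.clip(linear, 1e-6, clip_level)
    return np.log2(clip_level / linear)

frame = np.random.rand(4, 4)          # stand-in for one linear video frame
headroom = stops_from_clipping(frame)
# A "traffic light" overlay could then bin the headroom, e.g.:
# < 0.5 stop -> red, 0.5 to 2 stops -> yellow, > 2 stops -> green.
```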
  2. There is a 100% tariff on electric vehicles from China, so BYD is not sold in the US. EVs from Europe have a 25% tariff (based on what I could find out). Tesla is being given relatively free rein. I don't think this will actually help Tesla in the long run, as it allows them to operate in the US market without really being competitive on the world market. Tariffs can be useful to help some industries grow, but Tesla essentially started the EV boom and should have had enough time to develop their production to compete on the world market without subsidies or tariffs.
  3. I don't understand how the US expects to solve potential security problems caused by these products if they allow them to be operated as before. My guess is that there is a longer-term perspective: they want to hinder DJI and other manufacturers' sales in order to allow American companies to take the market, and a soft transition means customers can fly their drones until they crash or wear them out, then replace them with new US products. My guess, though, is that US companies will never make competitive products for the consumer market, and government policy can change as soon as they realize that. It should be simple enough to verify that the code on the device does not transmit data to China without the user's permission, and DJI could easily host any flight-log analysis within the US rather than send it overseas. This is about something other than security, IMO.
  4. Very witty. If WW III starts, photographers with drones can then make films about it, so drones may be very useful. I would imagine the operation of unapproved drones can be shut down in US territory based on GPS data, so IMO it would be pretty risky to invest in equipment that is not approved.
  5. How about using Dolby Vision? On supported devices, streaming services, and suitably prepared videos, it adjusts the image to the device's capabilities automatically, and can do this even on a scene-by-scene basis. I have not tried to export my own videos for Dolby Vision yet, but it seems to work very nicely on my Sony XR-48A90K TV. The TV adjusts itself based on ambient light, and Dolby Vision adjusts the video content to the capabilities of the device. It also seems to be supported on my Lenovo X1 Carbon G13 laptop. High-dynamic-range scenes are quite common, for example with the sun in the frame, or at night after the sky has gone completely dark, if one does not want blown lamps or very noisy shadows. In landscape photography, people sometimes bracket up to 11 stops to avoid blowing out the sun, and it takes quite a bit of artistry to map that beautifully onto SDR displays or paper. This kind of bracketing is unrealistic for video, so the native dynamic range of the camera becomes important. For me it is usually more important to have reasonably good SNR in the main subject in low-light conditions than maximum dynamic range, as in video it's not possible to use very slow shutter speeds or flash. From this point of view I can understand why Canon went for three native ISOs in the C80/C400 instead of the dynamic-range-optimized DGO technology of the C70/C300 III. For documentary videos with limited lighting options (one-person shoots), high-ISO image quality is probably a higher priority than dynamic range at the lowest base ISO, given how good the latter already is on many cameras. However, I'd take more dynamic range any day if offered without making the camera larger or much more expensive; not because I want to produce HDR content but because the scenes are what they are, and for what I do, lighting is usually not an option.
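As a rough illustration of why bracketing covers scenes a single exposure cannot, here is the basic arithmetic; the 12-stop single-frame figure and 2-stop spacing are assumptions for illustration, not measurements of any particular camera.
```python
# Each extra bracketed frame shifts the captured exposure window.
single_frame_dr = 12.0   # stops usable in one exposure (assumed)
bracket_step = 2.0       # stops between frames (assumed)
frames = 6

total_dr = single_frame_dr + bracket_step * (frames - 1)
print(f"{frames} frames -> about {total_dr:.0f} stops covered")  # ~22 stops
```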
  6. Voters need to see more damage before they will admit fault in their own thinking and accept that they were conned big time.
  7. Okay, so there are two separate issues: foreign-made drones, and DJI products with wireless capabilities. But that doesn't make much sense; how would a DJI gimbal affect US national security? It seems they approached the ban along two avenues: DJI products specifically, and foreign-made drones. Maybe they will realize a double ban is not needed and that DJI camera and stabilizer products that are not drones could be allowed. Though it is possible they just want to support US businesses while pretending it is about national security. Since RED is now owned by Nikon, are there any competing products that are American-owned and produced in the US? What is likely to happen is that movie and TV productions that would benefit from a Ronin 4D will simply be done in other countries with no such limitations, and Hollywood gets smaller. Is that what the US government wants?
  8. Notice that the ban is not on DJI as such but on future non-approved models of non-US-made drones. You can still get stabilizers, action cameras, and microphones from DJI (and keep using existing drones).
  9. Nonetheless, he said the videos were shot with a Ronin 4D, which does not support "open gate" video recording, illustrating that it was not necessary and that other camera characteristics were more important to the project than open gate. Nothing comes free: open gate at full resolution without line skipping means the sensor read time increases, so there is more rolling shutter, and it may need more processing power to handle the data (or at least generate more heat). These may be appropriate compromises for some users, but one can seriously ask whether all cameras need open gate or whether it is sufficient that a few do, enough to satisfy this market.
Short-form videos are considered tiring to the brain and reduce the viewer's ability to concentrate and exercise self-control. I believe most if not all vertical videos belong to this class. For long-form video content, I believe the horizontal format is much more suitable.
Times Square, huh? I recently checked hotel prices in NYC and they were in the $500+ range. I wonder where the tourists are coming from, given these prices. I have stayed in Manhattan many times before, but the prices were a quarter of today's.
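On the readout cost, a back-of-the-envelope sketch, assuming a hypothetical 6000 x 4000 (3:2) sensor read at a constant row rate; the numbers are invented, not measurements of any real camera.
```python
rows_full = 4000                       # open gate: all 3:2 rows
rows_16_9 = round(6000 * 9 / 16)       # 16:9 crop at full width -> 3375 rows
row_time_us = 3.0                      # assumed microseconds per row

print(rows_full * row_time_us / 1000)  # 12.0 ms readout, open gate
print(rows_16_9 * row_time_us / 1000)  # ~10.1 ms readout, 16:9
# Same row rate, ~19% more rows to read -> ~19% more rolling shutter skew.
```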
  10. I don't quite see it that way; if social media platforms are viewed on a computer, the browser takes up all the display area available and fits the content using the whole window, which can be vertical, horizontal, or square for that matter. Basically only when social media is viewed on a mobile device do some apps and websites default to vertical viewing, but that's a limitation of the device and of the typical way people hold it. Originally Instagram photos were square, not vertical or horizontal. Some social media platforms assume a video is shot vertically on a mobile phone, and for a time it wasn't even possible to shoot in horizontal orientation and have the site or app display it correctly; it would always force the vertical format. This, however, is incompatible with the way most news media sites present videos, which are horizontal only, mimicking TV. When those news sites displayed social media or cell-phone videos, they could not technically display the video as a vertical; instead they generated blurred sides to turn the vertical video into a horizontal one. This is all a bunch of nonsense, really.
Vertical videos make it difficult to show the context and environment in which something is happening. This is why cinema and TV are in landscape orientation: it's better for displaying the content. Photos have always been shot both vertically and horizontally (probably still mostly horizontally, for the same reason as video), as continuity can be broken in stills: one can quickly flip the camera to vertical, shoot some (portrait) shots, and return to landscape orientation to show context; in video, one cannot do such flipping without causing problems for the viewer. Books and magazines naturally lend themselves to images in portrait orientation or, in some cases, square; displaying a landscape image at large size requires a double-page spread, which of course is commonly done, but it creates issues if an important part of the image falls in the middle of the spread.
What's more, verticals in (still) photography were traditionally not anything remotely like 9:16 but 4:5, 3:4, and 2:3. I seriously think social media apps and sites should consider making the vertical format something like 4:5 rather than 9:16, as the latter is just not very good; it's too narrow. A device fitting inside a pocket is an extreme limitation. Clearly, if the main reason advertising clients request vertical videos is people looking at their phones on the tube or bus, the quality loss from cropping 16:9 is hardly going to be visible on those tiny displays. Sure, the angle of view is narrower, but such an extreme aspect ratio always looks awkward in a vertical image.
Interesting to hear that there are now high-resolution displays showing video content in public. I can't remember seeing such things myself, though it's possible I have and didn't pay attention. I would be very surprised if those displays were as elongated as 9:16, though. It just doesn't make visual sense to use such an extreme aspect ratio for vertical content when one could stick to 4:5 or 2:3. And when those more suitable aspect ratios are used, the crop from landscape 16:9 is less extreme and easier to manage.
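A quick calculation backs up that last point, comparing how much of a 16:9 horizontal frame survives a full-height crop to different vertical aspect ratios:
```python
source_ar = 16 / 9  # width / height of the horizontal source frame

for name, w, h in [("9:16", 9, 16), ("4:5", 4, 5), ("2:3", 2, 3)]:
    crop_ar = w / h
    retained = crop_ar / source_ar   # fraction of original width (and pixels)
    print(f"{name}: {retained:.0%} of the 16:9 frame retained")
# 9:16: ~32%, 4:5: 45%, 2:3: ~38% -> the milder verticals waste far less.
```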
  11. Sounds like random people making stuff up; the ZR has a fast read time in video mode (for a relatively low-cost mirrorless camera); it doesn't make any sense to make a video-first camera based on a sensor that is more than 10 years old and has a very slow read time. I couldn't find any reports of it on NR.
  12. AI is not a person or a human being, and it doesn't share evolutionary history or biological safeguards with us. It is therefore more unpredictable what it might do. I share a lot of the concerns you express in your article. It's worth following what Bernie Sanders has been saying about AI and its impact on the workforce: that the benefits of AI should be shared among all of humanity and not concentrated in the hands of a few ultra-rich people. Musk has been claiming that work will become optional in the future and that a universal income can allow us to do anything we want, but everything the big seven companies and their billionaire owners have been doing suggests they only care about power and getting even richer, and are not at all likely to share the riches with the people. What in their past and current behavior would make anyone believe this would ever change without society and its political leaders forcing a change? Musk seems to think he is player 1 in a computer simulation (the world), so everything that happens is part of a game and an adventure to him. World destroyed? No matter; restart the simulation. As long as he gets to try to make it to the next level (Mars) in the game, that's all that matters. We are all just extras in the game. What concerns me most is that in the race for Mars and superpowerful AI, the Earth's environment, the climate, and its people are being sacrificed, and yet Mars is and will likely always remain hostile and unsuitable for human life, so all that we have could be sacrificed for a useless and pointless goal by a person who doesn't have all his marbles. The situation is a clear demonstration of why individual wealth must be limited and redistributed when it gets out of hand.
  13. The only principle they follow is defined by their self-interest. If a law or moral principle exists that they think would help them gain more power or wealth, they use it to argue that others should follow it, but they never feel the need to obey laws or ethical principles that would be disadvantageous to their attempts to increase their power or wealth. Similar to Russia, which cries foul when Western countries freeze its foreign assets but sees no problem in the looting and killing of Ukrainians. These are examples of people guided only by self-interest who will do anything to gain more and more power and wealth. What is amazing is that common people actually voted them into positions of power.
  14. The high dynamic range (using DGO technology) in the Sony A7 V is for low-to-middle-ISO stills when using the mechanical shutter; DGO is not used for video, and there certainly won't be any 16-stop dynamic range at ISO 3200 or 8000. The claimed 16 stops is likely achieved on a significantly downsampled ISO 100 still image, with criteria based on engineering dynamic range (SNR = 1). Do the EOSHD website and the browsers used by visitors support high-dynamic-range photos on Super Retina XDR and other HDR screens? Otherwise, I'm not sure what the OP is looking to see. Having lower noise can't harm the image, and it's up to the user to make use of the higher fidelity, or not.
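For what it's worth, here is a sketch of how such headline numbers can arise, using the SNR = 1 definition; the full-well and read-noise figures are invented for illustration, not Sony's.
```python
import math

full_well_e = 50_000   # electrons at saturation (assumed)
read_noise_e = 3.0     # electrons RMS (assumed)

# Engineering DR: highest signal over the noise floor, in stops.
dr_stops = math.log2(full_well_e / read_noise_e)   # ~14.0 stops

# Averaging k x k pixels reduces uncorrelated noise by k, adding log2(k) stops:
k = 4                                              # e.g. 8K -> 2K downsample
dr_downsampled = dr_stops + math.log2(k)           # ~16.0 stops
print(f"{dr_stops:.1f} -> {dr_downsampled:.1f} stops after downsampling")
```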
  15. I believe it's just mainstream social media sites such as Facebook, Instagram, Twitter/X, LinkedIn etc., that they care about, not small niche forums on very specific topics not related to politics. I think it's safe to visit the US unless you have a written record of publicly speaking against Trump or his policies, in which case it might not currently be safe.
  16. I think that in any given time window, a truly good movie is a rare thing. It's not that there are no good films being made now; rather, we remember those old films which left a lasting impression on us and tend to forget those which were not good. For films made in the 1980s and 1990s we remember the very best ones; for films made in the 2020s we are more likely to remember the latest ones we saw.
High-image-quality cameras (be it high dynamic range or resolution) don't make the outcome worse, but they may motivate the production to aim for greater perfection in some sense, without realizing that technical perfection is not a worthy goal on its own if it leads to losses in other areas, such as the story and dramatic intent.
I think visual aesthetics have been changing with the ubiquity of the mobile phone camera, the kind of processing phone manufacturers apply to images by default, and the kind of post-processing people apply to their images on Instagram etc. People who have grown up on these devices are used to the auto-HDR AI look and may think that kind of look is normal and good. Cinema cameras that capture high dynamic range allow that kind of post-processing to be applied, but they also allow other options; it is how they are used that matters. As camera and TV (particularly streaming) resolution has increased, it is possible that, to achieve technical perfection, producers think all the actors need to be really beautiful with perfect skin, as they are shown in such fine detail. Post-processing edits to how skinny models look on magazine covers or online, and the fixing of imperfections with plastic surgery or post-processing, have also led to a new aesthetic, a race that got out of hand, producing ever less realistic photographs and movies. If everything is processed to look like a tone-mapped fake HDR image with local tonal variations everywhere and no contrast between the different elements in the scene, and all the characters are super perfect, then there is a huge disconnect from reality. Classic films often had rough characters alongside the beautiful, which made things look realistic even when the lighting was hard and stylized (by necessity, as the film stock required a lot of light, so hard lights were used and there had to be intent). Actual HDR technology can help avoid the tone-mapped HDR look and keep shadows dark while still showing detail (preserving the global contrast between parts of the image). How this technology is used is, of course, up to the people making the movie.
I have to admit that most of my favorite movies were shot on film, although I do like several shot digitally. I don't think shooting on film per se makes those movies look good, but it may be that the filmmakers were able to choose an aesthetic (through film, lighting, and costume choices etc.) and hold creative control over it with a firmer hand when using traditional techniques. This could also be why camera manufacturers have recently been adding "looks" and grain baked into the footage as options: they help lock in a certain look, and the added grain discourages excessive mucking about with the image in post-processing. To me, though, this seems less than ideal; the ideal would be for the team members to communicate, understand the intent, and work together to achieve it.
I notice there is no agreement online as to what looks good; people have wildly differing opinions on such topics. Thus it is up to us as viewers to select our favorites and enjoy them rather than hope that every new movie follows the same aesthetics. That will never happen, of course, as there are so many opinions.
  17. Curves are available in the advanced settings when creating custom Picture Controls (which you can then upload as recipes).
  18. I think the mechanical systems that allow the back LCD both to tilt behind the optical axis and to open to the left for selfie orientation are more complicated and require more parts than what Nikon is using in the ZR; they would make the camera heavier, larger, and more expensive (less attractive to many people), and might not solve the problem the current design solves. Higher-end models with different solutions for how the LCD turns into different orientations will no doubt be made over time. The Z8 and Z9 offer a screen which does not tilt forwards (selfie orientation) but does keep the LCD approximately on the optical axis.
  19. I could never understand "accelerated" manual focusing; it just makes things more difficult and unpredictable. Nikon fortunately has provided firmware updates for most of the S-line lenses (exception: the 14-24/2.8) adding what people call linear manual focusing (I'm not really sure what is linear about it; what it does is make focus ring position and focus distance correspond to each other in a bijective relationship, at least within a power cycle of the camera); see the sketch below. What's even nicer is that you can choose how much you have to turn the ring to achieve a given focus change, so it is adaptable to different users and needs. I think focus-by-wire should never have been accelerated by default in any lens. As for the priority on autofocus, mirrorless so-called hybrid cameras and their lenses are still somewhat more oriented towards still photography than video, so the needs of stills shooters come first in most models. Autofocus is very useful when you want consistent focus on the eye, for example, or when shooting action subjects (again, stills). For some things (such as when multiple subjects at different distances have to be sharp in the frame, and the best way to achieve this is to focus between them) manual focus is better, but manufacturers chose to prioritize ease of use over the needs of skilled users. Lenses with mechanical manual focus are of course available, natively and via adapters, for those who prioritise MF.
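A toy model of the two behaviors, assuming a simple focus-by-wire controller; real lens firmware is proprietary, and the throw and gain figures here are invented.
```python
def linear_fbw(ring_delta_deg, full_throw_deg=270.0):
    """Linear: focus change depends only on how far the ring turned."""
    return ring_delta_deg / full_throw_deg          # fraction of focus throw

def accelerated_fbw(ring_delta_deg, ring_speed_dps, gain=0.01):
    """Accelerated: the same rotation moves focus further when turned fast,
    so ring position no longer maps one-to-one to focus distance."""
    return (ring_delta_deg / 270.0) * (1.0 + gain * ring_speed_dps)

# The same 30-degree turn, made slowly vs quickly:
print(linear_fbw(30), linear_fbw(30))                     # identical results
print(accelerated_fbw(30, 20), accelerated_fbw(30, 400))  # ~0.13 vs ~0.56
```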
  20. Since RED says the colorimetry and gains are different in R3D NE vs. N-RAW, this seems to support that. Nikon traditionally applies a white balance adjustment before storing values in the raw file, and raw conversion software has to know what processing was applied in order to correct the WB. My guess is that RED does not do that (to preserve consistency across the different cameras storing R3D files), so the colors differ between the two raw formats. RED also does not adjust sensor gain between intermediate ISO settings as far as the values stored in the raw file are concerned, apart from the two base ISOs, if I understood correctly, and this approach is also used in the ZR's R3D NE. Nikon applies different gains to the data at intermediate ISO values when storing N-RAW files. So the two formats work somewhat differently and are intended for different post-processing pipelines.
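A sketch of that difference, under the assumptions stated above (gain fixed between base ISOs for the R3D-style path, per-ISO gain baked in for the N-RAW-style path); the base ISO and values are illustrative only.
```python
import math

BASE_ISO = 800  # illustrative base ISO

def r3d_style(sensor_value, iso):
    stored = sensor_value                     # data unchanged by ISO setting
    meta_offset = math.log2(iso / BASE_ISO)   # stops, applied later in post
    return stored, meta_offset

def nraw_style(sensor_value, iso):
    stored = sensor_value * (iso / BASE_ISO)  # gain baked into the file
    return stored, 0.0

print(r3d_style(1000, 1600))   # (1000, 1.0)   -> grading software brightens
print(nraw_style(1000, 1600))  # (2000.0, 0.0) -> already brightened
```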
  21. I've had good experiences with ProRes 422 HQ on the Z8, and am wondering the same: why don't YouTubers test it and show the results in their comparisons? While the data rate is quite high (note that Nikon has stated they will add ProRes 422 LT to the ZR in a firmware update), it is a video format for which in-camera distortion and vignetting corrections are available, and there is some noise reduction in play as well. CineD has tested it and it is included in their database; judging from the numbers, the camera in ProRes HQ mode applies less noise reduction to high-ISO files than in H.265. It seems like a good compromise, and if the LT version comes soon it might change minds.
  22. Panasonic reads the sensor more slowly in both the normal and DR Boost modes, which explains how they can get more DR out of it than Nikon does in their implementation. It may or may not be the same sensor. In any case, Nikon's compromise is different from Panasonic's, and both are legitimate choices. The ZR was under development before Nikon acquired RED, and the RED know-how added to this camera is likely in the firmware (and in post-processing support in the R3D NE pipeline). In CineD's testing, the latitude test shows better retention of color across exposure adjustments in post when using R3D NE than N-RAW, so it would seem the RED acquisition has already paid off in making Nikon more competitive in the video arena, and this is not just marketing if it benefits users. What Nikon should do now is make the H.265 a bit more competitive, so that people who cannot handle the raw data rates can still benefit from the camera; it would be very costly for everyone to shoot everything in R3D NE. I personally am looking forward to seeing some ProRes 422 HQ material shot with the camera to see how it fares against the Z8.
  23. Is there somewhere we can see an example of this problem versus another camera with a better implementation of H.265? I think it's understandable that when a highly compressed video codec is used there is noticeable quality loss, and the manufacturer tries to mitigate this with some algorithmic processing of the data. Is the quality of the H.265 really worse than in a previous Panasonic model or another camera in a similar price class, or could it be a case of expectations rising over time as we see high-quality footage on better screens more often?
  24. How about ProRes 422? ProRes 422 4K at 25 fps is 433 Mbps, vs. 2.3 Gbps for ProRes RAW 6K (normal) and 3.5 Gbps for ProRes RAW HQ. I would think ProRes 422 on the S1II is likely to be a good intermediate-sized format between RAW and H.265; at least in my Nikon experience the quality should be very good.
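Converting those quoted rates into storage per minute makes the gap concrete (decimal units assumed):
```python
rates_mbps = {
    "ProRes 422 4K25": 433,
    "ProRes RAW 6K": 2300,
    "ProRes RAW HQ 6K": 3500,
}
for name, mbps in rates_mbps.items():
    gb_per_min = mbps * 60 / 8 / 1000   # Mb/s -> GB per minute
    print(f"{name}: ~{gb_per_min:.1f} GB/min")
# ProRes 422: ~3.2 GB/min vs RAW: ~17 GB/min and RAW HQ: ~26 GB/min.
```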
  25. While I think it would make sense for hybrid cameras to offer similar "looks" across photos and video for easier joint presentation, I am not really sure storing photos in a log format makes sense. First, while linear encoding would waste bits (the photon shot noise in highlights makes the least significant bits meaningless), this has already been addressed in compressed raw formats such as Nikon's (technically lossy but visually lossless) compressed NEF. If I recall correctly, Nikon simply leaves out the LSBs of highlight pixels, saving storage space.
In log video mode, cameras bias the exposure metering to produce about three stops of underexposure compared to normal SDR photos, which leads to a lot of noise in the main subject (if there is one). This may not be such an issue for video, because in video you can apply temporal noise reduction, which you cannot do for photos since they are individual frames with different content. Usually in still photography people want the main subject to have the highest possible image quality, and exposure metering algorithms typically emphasize the detected or selected subject and only secondarily protect highlights from blowing out. I still almost always raise the midtones in post-processing with a curves adjustment, reducing highlight contrast and bringing the subject up in brightness. For scenes that require a large dynamic range, many photographers I know shoot a set of bracketed frames to ensure high SNR in each major part of the image and then merge the images with masks or similar techniques (depending on the subject). For video, exposure blending with masks is not possible, but some automated DR-enhancement methods that blend two amplification levels exist in a few cameras (dual gain output).
While the idea of highlight exposure latitude is appealing, it comes at a cost in midtone and shadow SNR, and I think many still photographers would consider the outcome poor compared to what they are used to. It's also the case that many if not most (?) still photographers use manual exposure with Auto ISO as their go-to mode and expect the camera to set the ISO precisely enough that the main subject comes out close to the desired brightness. I often set the camera to ISO 64 or 100 with Auto ISO, letting the camera vary the ISO from 64 to 12800 to get the exposure correct, so the photos come out of the camera nearly usable with minimal tweaking. This won't work for log, as most ISO settings are unusable in log given the three stops of underexposure built into the approach. Yes, you can apply +2-3 stops of EV correction and get results similar to the linear modes, but then the exposures on the screen will look off, making it harder to see the subject and get a correct feeling for the scene and how it will render in the photograph. I just don't see this going anywhere outside of a few filmmakers wanting look-matched still photos when video is their primary output. Still photographers outside of agency photojournalism shoot raw, and that's that for the most part.
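On the LSB point, here is a small illustration of why coarser quantization in the highlights can be visually lossless; the figures are invented and this shows the general shot-noise principle, not Nikon's actual compression curve.
```python
import math

def max_safe_step(signal_e, noise_fraction=0.5):
    """Largest quantization step that stays well below the photon shot
    noise (which grows as the square root of the signal)."""
    return noise_fraction * math.sqrt(signal_e)

for signal in [100, 1_000, 10_000, 60_000]:
    print(f"{signal:>6} e-: shot noise ~{math.sqrt(signal):5.0f} e-, "
          f"safe step ~{max_safe_step(signal):4.0f} e-")
# Deep shadows need fine steps; highlights tolerate coarse ones, so the
# least significant bits there carry essentially no information.
```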