Everything posted by Ilkka Nissila

  1. I'm not a lawyer, but I'd guess that if a court had decided the patent was invalid, RED would have lost all income from license agreements based on that patent. Since they pulled the lawsuit, the patent was never officially invalidated, so the other companies cannot automatically stop paying license fees they have agreed to pay.
  2. The Z9's rolling shutter is reported to be much faster for stills than for video, so it's better to just use stills mode if you want stills. The frame is larger, it is read out faster, there is more dynamic range and an option for less compression, and you can even start recording before pressing the shutter button fully down.
  3. ZV-E1 and A7RV have a new dedicated AI processor (absent from the other bodies) which is used to implement these new subject-detection and tracking features that go beyond the capabilities of the older products in some ways. While some version of the algorithms could probably be implemented on the older hardware, it would probably be a pain for the engineers to try to make the new algorithms that were developed for the new AI processor work on old hardware. From the Sony web page: "Real-time Recognition AF incorporates an innovative AI processing unit that uses subject form data to accurately recognise movement – human pose estimation technology uses learned human forms and postures to recognise not just eyes, but body and head position with high precision, making it possible to lock onto and track a subject facing away from the camera. The AI processing unit can even differentiate between multiple subjects having different postures and recognition of individual faces has also been improved so that tracking reliability is achieved in challenging situations such as when a subject’s face is tilted, in shadow, or backlit. In addition to Human and Animal[xxiii], the AI processing unit now makes it possible to recognise Bird[xxiv], Insect, Car/Train and Airplane[xxv] subjects, providing even greater flexibility and reliability when shooting both stills and video."
  4. In 120 fps mode the camera's surface measured 110 °F (about 43.3 °C), which is enough to cause a low-temperature skin burn. Since the tests were done indoors, it is logical to think that in full outdoor sunlight in a warm climate, temperatures could reach these levels in the other (non-slow-motion) modes as well. It seems Canon is doing what it can: they offer the standard mode, which should not cause burns in normal circumstances; the high mode, which might cause burns in some conditions if the user is not careful when touching the camera; and the R5C with a fan as a hardware cooling solution to the problem. It seems reasonable that Canon is addressing the problem in this stepwise fashion as they gain more experience and user reports with the hardware. The other option would have been to delay the camera's launch until a comprehensive solution could be offered (either in hardware or in terms of more refined safety algorithms and appropriate warnings to go with certain settings) and lose market share.
  5. Nikon's compressed NEF stills (from the D2X era onwards) are indeed visually lossless: the highlights have high SNR but also high shot noise, so Nikon simply drop some of the least significant bits that are buried under the shot noise in the highlights, and that's how they achieve visually lossless compression. However, the D2X-era Nikon compression only reduces the file size by a small amount; the compression ratio is not as high as in the RED or Intopix algorithms (the latter being used in the Z9). It is extremely difficult to see any difference between the D2X-era (visually lossless) compressed and losslessly compressed NEFs (you'd need very aggressive curves adjustments to see any difference at all), but the difference in file size between the two formats is small. RED's approach could be considered an advancement of the state of the art in that it provides a higher degree of visually lossless compression than Nikon's. But Intopix's algorithm used in the Z9 also provides high compression ratios and was awarded a patent, and the Z9 uses it for both stills and video.

One could argue that since raw video is a trivial extension of raw stills at sufficiently high fps (24 or higher), and Nikon are simply applying (visually lossless) compression to each frame, as they have done in their still cameras since at least the D2X, recording raw video this way is a trivial extension of what Nikon did in the past, only this time at higher resolution and higher frame rate. The compression algorithm the Z9 uses for High Efficiency raw stills and raw video is documented in Intopix's patent; since it was awarded a patent, it clearly includes a new inventive step. Yet RED seem to claim in their lawsuit that Nikon are using RED's technology and not someone else's. Interesting.

The patent system was created to foster innovation and to allow the raising of sufficient funding for the research and development of new, useful things. Here it is being used by one company to try to block a competitor, offering a superior product (in some ways, i.e. autofocus) at a 90% lower price (maybe more), from entering the market (which is not really the same market, as the Z9 is mainly a still camera with some video capabilities, and most of its purchasers are likely using it mainly for still photography, simply because Nikon isn't a prominent video, let alone cinema, brand). If RED prevail, this is an example of the patent system damaging American customers' opportunities to purchase competitive gear rather than fostering innovation in any way. Perhaps customers can just purchase their Z9s from overseas in the future. A sketch of the bits-under-shot-noise idea follows below.
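To make the bits-under-shot-noise idea concrete, here is a minimal Python sketch of shot-noise-limited quantization. It only illustrates the principle: the step size of the encode table grows with the square root of the signal, so the quantization error stays below the shot noise. The bit depth and sensor gain are assumed, illustrative numbers; the real NEF and Intopix encoders are of course different and more sophisticated.

```python
import numpy as np

# Sketch of shot-noise-limited quantization, assuming a 14-bit raw scale
# and a gain of 4 electrons per raw count (both illustrative numbers, not
# any camera's actual parameters). The encode table's step size at level v
# stays below the shot-noise sigma at v, so the quantization error is
# hidden under the noise: "visually lossless".

FULL_WELL = 2**14 - 1   # assumed 14-bit raw scale
GAIN = 4.0              # assumed electrons per raw count

def build_curve(max_val=FULL_WELL, gain=GAIN, margin=0.5):
    """Encode levels spaced at `margin` times the shot-noise sigma
    (sigma in raw counts is sqrt(v / gain))."""
    levels = [0]
    v = 0
    while v < max_val:
        sigma_counts = np.sqrt(max(v, 1) / gain)
        v += max(1, int(margin * sigma_counts))
        levels.append(min(v, max_val))
    return np.array(levels)

curve = build_curve()
print(f"{FULL_WELL + 1} raw values -> {len(curve)} stored codes "
      f"(~{np.log2(len(curve)):.1f} bits)")

def encode(raw):
    """Map raw values to curve indices (nearest level at or below)."""
    return np.clip(np.searchsorted(curve, raw, side='right') - 1,
                   0, len(curve) - 1)

def decode(codes):
    return curve[np.asarray(codes)]
```

With these made-up parameters, the 16384 possible raw values collapse to roughly a thousand stored codes (about 10 bits' worth) while the quantization error never exceeds half a sigma of shot noise.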
  6. Dpreview interviewed Nikon via e-mail, not Zoom. Nikon didn't quite promise an 8K camera; they said "our engineers are considering powerful video features such as 8K". Also, "A flagship Nikon Z series mirrorless camera can be expected within the year" is not quite a promise; my interpretation is that they are just saying it is likely to happen.
  7. Canon & Adobe under the same ownership would have created a problem, a market-controlling monopoly, which wouldn't have been good for users; it would have been pretty disastrous if Adobe software only worked with Canon camera files, which is no doubt what would have happened under such ownership. I think the approach of using the mobile phone as the connection hub to the world is sensible, and camera manufacturers have been working to integrate connectivity into their ILCs. It's a bit quirky to use but it does work. Screens have been growing, and three of my four ILCs have touch-screen functionality. There are lots of so-called computational tools in the camera, including focus stacking, single-shot tone-mapping, multi-shot HDR, dozens of different visual effects, and basic editing tools right within the camera. A lot of people claim that these things are missing from cameras, but they're not. Many photographers laugh at these features because they want more control and the ability to edit the images and do the "computational" part with user input on a ... well, computer, rather than be limited by the camera manufacturer's built-in software for post-processing. The Zeiss ZX1 implements a lot of editing and sharing functionality in the camera and it has been vigorously trashed on photographers' gear forums online. I can't remember any product that got so much online hatred. These people enthusiastically don't want these features on their cameras.

Personally I enjoy occasionally editing an image in-camera and sending it to friends via my mobile phone. It takes a couple of minutes and people catch up with what I'm doing. Later I edit the image properly on a computer from the RAW image(s). I also use automated focus stacking quite a lot. I'm not a big fan of combining multiple exposures in "HDR"-style effects, as I find the automated algorithms don't produce all that great a result in terms of how I like the images to look, and in most cases I prefer a more manual approach called exposure blending (with treelines I sometimes do use HDR or D-Lighting). I find mobile phone cameras suitable for digitizing bills, hand-made drawings and such tasks, but for general photography I find the results disgusting.

Cameras don't use the same kind of OS as mobile phones, because mobile phones take a long time to start up, and people want a camera to be ready to shoot within a split second of turning it on instead of taking 30 seconds to boot. Additionally, many experienced (still) photographers want to time the action themselves rather than shoot every moment and select afterwards; perhaps that's just a habit carried over from the film world. A mobile phone OS isn't really suitable for real-time tasks where precise timing is important. Camera manufacturers have made attempts at Android-based cameras (Nikon and Zeiss have done so) and the resulting products were universally trashed by the online photography community. A real-time OS is what the camera manufacturers use, and for good reason.

The issue behind the camera sales trend is that the world now has hundreds of millions of digital ILCs when perhaps only a million or so are really needed, and it will take a long time for those cameras to stop being functional. That is why there is likely to be only a trickle of sales from now on.
Younger generations have fallen behind on income, so they don't have the purchasing power their parents had and don't buy expensive luxury items such as dedicated cameras unless they work professionally in a field that requires one. Dedicated cameras are not needed in everyday life, and the mobile phone camera provides the necessary everyday functionality. Artists and journalists are now largely endangered species and no longer have the jobs or purchasing power that existed in the past. People expect content to be free now, so where is the compensation for the people who produce it? I don't have the numbers, but my understanding of streaming services is that the compensation to the original artist is worse than it was when physical media was required to disseminate the art, be it music, or photography for that matter. News sites created by professionals still exist, but they generate less money because much of the advertising money goes to Google and Facebook instead of the producer of the content, as it used to in traditional media (be it TV or newspapers). So everywhere the producer of the original content is stomped upon, it becomes more difficult to make a living this way, and large international companies reap the profits, taking advantage of content they did not make. This, in my opinion, is one of the key problems of our time, and it also contributes to the challenges facing camera manufacturers.
  8. I don't think Nikon have any plans to leave the DSLR market any time soon; they are far more successful with DSLRs than with mirrorless, and their mirrorless market share is quite small. They announced two new DSLRs this year (D780 and D6) and a DSLR lens (120-300/2.8). In their public messaging they have also consistently stated that they will continue to develop both technologies, taking advantage of each technology's strengths, into the foreseeable future. I recall reading a comment from Nikon that at least in the next seven years this isn't likely to change (which suggests they have products and technologies in development for release in this period). Of course, what actually happens also depends on what customers decide to buy, so it's not entirely up to Nikon, since they need to sell products. Personally I am likely to buy a few more DSLRs in the years to come; I could never really swallow the EVF. Nikon's plan is to design new products so that parts can be shared across DSLR and mirrorless products; this way the costs of R&D and manufacturing can be shared, making it possible to maintain both product lines at lower cost. The D780 and Z6 seem to be the first example of this plan, with the Z6's sensor, live-view AF technology and video technology inherited by the D780.
  9. It's a little bit more complicated than that. While video users often put cameras on rigs, and fluid heads have handles that can be used to pan the camera without touching the camera itself, still photographers who use long lenses on a tripod usually have their hands on the camera while shooting (and they probably work in a similar way when recording video, at least when the subjects are moving, since they typically use gimbal heads instead of the more video-oriented fluid heads). So there are circumstances where deciding whether it is safe for the camera surface to heat above 42-43 °C might not be so simple. That Andrew was able to make the CFexpress card slow down (and the camera display a slow-card warning) during hack-enabled extended recording suggests that the CFexpress temperature is also close to its limits. Allowing long recording times when the card slot is not used and some of the heat generation is moved to the external recorder seems like a sensible practical compromise. Canon may continue to refine the overheating-management algorithm, but it doesn't seem like the hopes expressed in this forum and what Canon would consider a good design for the typical user of this camera are going to align.
  10. Here is an article showing that human cells die rather quickly under extended exposure to temperatures above 43 degrees Celsius: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4188373/ When human cells are held at 44 °C for 1 hour, only about 10% of them survive (Fig. 1). When an external recorder is used, Canon may have assumed that the camera is no longer hand-held but mounted on a tripod, since it would be pretty clumsy to hand-hold the camera with a recorder attached. Thus there is less likelihood of long-term exposure of the videographer's hands to damaging heat when using an external recorder. And Canon have probably considered the needs of professional videographers here and allowed an external recorder to be used for that reason. A consumer is less likely to be informed about low-temperature burns and may hand-hold the camera as a matter of course, whereas a professional may use tripods and external recorders more frequently. It's a balancing act between safety for the typical user and utility in a professional environment.
  11. 43 °C is regarded as the temperature limit that is safe for human tissue, i.e. above which heat damage can occur. That's what is used as the safety limit in medical devices: during use, the device must not heat the tissue above 43 °C. If the skin temperature does rise above 43 °C, you can expect some damage, though I don't know exactly how quickly it happens or how severe it is; the damage depends on both temperature and exposure time (a standard model is sketched below). Roger writes "we ran it for 18 minutes before getting a temp warning. The hottest part of the camera was the back behind the LCD door (43°C / 109°F)". So it seems that the 43 °C tissue-damage threshold is indeed what Canon designed their overheating algorithm primarily to protect against, and they're running it pretty close. (Canon also mention controls for internal temperatures as a secondary consideration in the CineD interview.) Of course, if you don't hold the camera in your hands during recording and use a tripod or gimbal, it won't cause burns, but they seem to have designed the protection with those in mind who shoot hand-held. My guess is that the 43 °C limit could actually be written into some countries' legislation or regulations as well, so Canon might not have any choice about it. I'll try to find some information on this.
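On how quickly damage happens: the standard quantity in the medical literature is the CEM43 thermal dose (cumulative equivalent minutes at 43 °C, the Sapareto-Dewey model), which weights exposure time by temperature. A minimal Python sketch with the textbook constants; this is a general tissue-damage model, not anything from Canon's documentation:

```python
# CEM43 thermal dose (Sapareto & Dewey): equivalent minutes at 43 C.
# Textbook constants; a general tissue model, not Canon's algorithm.

def cem43(samples):
    """samples: iterable of (duration_min, temp_C) pairs."""
    total = 0.0
    for dt, temp in samples:
        r = 0.5 if temp >= 43.0 else 0.25
        total += dt * r ** (43.0 - temp)
    return total

# 18 minutes at a 43 C surface is 18 equivalent minutes,
# while 18 minutes at 41 C is only about 1.1 equivalent minutes.
print(cem43([(18, 43.0)]))  # 18.0
print(cem43([(18, 41.0)]))  # 18 * 0.25**2 = 1.125
```

This is why running a degree or two below the threshold makes such a large practical difference.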
  12. The CPU's topside thermal pad was still stuck to the aluminium plate in the Chinese disassembly. Continuing the disassembly, there is another thermal pad right under the CPU PCB, conducting heat to the larger metal plate on the underside. In addition to the top and bottom plates, the copper layers of the PCB itself conduct heat away from the components.
  13. I think the timer's purpose in the overheating-management algorithm is twofold. One is to give the user a degree of predictability, so that they can decide whether a particular record mode will be usable for the situation at hand; the user can e.g. try 4K HQ, note that the record time is 4 min, and decide that it's safer to go with regular 4K to get the job done. If the overheat warning and shutdown simply reacted to the immediate temperature, the user would get no warning of the impending end of recording. The second reason is that heat damage depends on the duration of exposure, not only on the momentary temperature. Theoretically the algorithm could (1) monitor the current temperature at the three sensor sites, (2) monitor the rate of change of the three temperatures, (3) compare the past and current temperature history against a heat budget, (4) compare the immediate temperatures against absolute limits, and (5) try to deliver on the record time promised at the onset of recording (even if temperature rises faster than expected), avoiding a situation where the initial "promise" is not kept, so that the user can make an educated decision about which mode to shoot in. Based on these considerations it can then give a current estimate of the record time left and decide whether to issue a warning or shut the camera down. Anyway, I am just considering this from a theoretical perspective and do not know how the algorithm actually works (a rough sketch of such an estimator follows below). The time element is an important consideration in instrument-control systems, and monitoring just one parameter momentarily would probably produce quite erratic-looking behavior. The temperature sensors also produce some noise, so averaging the readings can reduce that.
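For illustration only, here is a rough Python sketch of what such an estimator could look like. Everything in it (the smoothing, the linear projection, the parameter values) is my assumption, not Canon's firmware:

```python
# Hypothetical record-time estimator along the lines described above:
# smooth each sensor, track the rate of rise, project time-to-limit,
# and never display more than the record time "promised" at the start.

class ThermalEstimator:
    def __init__(self, limits, promised_min, alpha=0.1):
        self.limits = limits          # absolute limit per sensor site (deg C)
        self.promised = promised_min  # record time shown at start (min)
        self.alpha = alpha            # EMA factor to suppress sensor noise
        self.smoothed = None
        self.prev = None
        self.elapsed = 0.0

    def update(self, temps, dt_min):
        """Feed one reading per site; returns estimated minutes left."""
        if self.smoothed is None:
            self.smoothed = list(temps)
        # exponential moving average handles the sensor-noise point above
        self.smoothed = [s + self.alpha * (t - s)
                         for s, t in zip(self.smoothed, temps)]
        est = float('inf')
        if self.prev is not None and dt_min > 0:
            for s, p, lim in zip(self.smoothed, self.prev, self.limits):
                rate = (s - p) / dt_min  # deg C per minute
                if rate > 0:
                    # linear projection of time until this site hits its limit
                    est = min(est, (lim - s) / rate)
        self.prev = list(self.smoothed)
        self.elapsed += dt_min
        # display no more than what remains of the initial promise
        return min(est, self.promised - self.elapsed)

# toy usage: three sites warming toward assumed 43/70/70 C limits
mon = ThermalEstimator(limits=[43.0, 70.0, 70.0], promised_min=20.0)
for minute in range(3):
    temps = [35.0 + 1.5 * minute, 50.0 + 3.0 * minute, 48.0 + 2.0 * minute]
    print(f"min {minute}: {mon.update(temps, 1.0):.1f} min left")
```

The moving average addresses the sensor-noise point, and capping the estimate at the remainder of the initial promise keeps the displayed time from ever exceeding what was shown at the start of recording.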
  14. Apple did not repair it for free the second time. They offered to fix it for a fee, but this was not considered worth it, because it was not a real fix to the underlying problem, just a replacement of the component with the same component, which would probably have died again. In my opinion, the failure rate of professional equipment should be such that in normal use most people never experience a failure during the lifetime of the product; I would consider e.g. a 1% failure rate over 7 years of daily use the limit of acceptability for a tool that manages data. I've lost data because of equipment failure: a motherboard broke and damaged hard drives, and when I attached the backup drive to make copies, it broke that too! (This was not an Apple computer but an HP.) In my opinion this equipment should be designed so that data loss (which equals loss of work) is extremely unusual.
  15. A friend of mine's 2011 MacBook Pro died twice from overheating (once it was fixed under warranty by replacing the motherboard, but the actual problem was not solved and later recurred), so it seems clear that running at such high temperatures is not good practice. Also, it is not clear where the R5 measures the EXIF-reported temperature; it might not be the processor's internal temperature but a separate sensor inside the camera.
  16. It is normal for temperature-control algorithms to factor in not only the current temperature but also the recent temperature history when calculating "what the next move is". It wouldn't be surprising if erasing the temperature history and other parameters by removing the battery forces the algorithm to go without that data, but that doesn't mean the camera still protects itself in the intended way if you do so. It has been shown (by the freezer experiments) that the algorithm does monitor the temperature, and others have not been able to reproduce Andrew's fridge results; in fact there are reports of the temperature going down in the fridge as expected, with good recording time left in the high-quality modes after a similar period in the fridge. Perhaps the settings were different, explaining the different outcomes. Prolonged live view, focusing and IBIS activity can certainly be factors; Canon do recommend turning the camera off when not using it. Long-exposure noise in still photography has been shown to increase as the camera gets hot. Here is a discussion of the long-exposure noise with comparisons against rival cameras: http://www.mibreit-photo.com/blog/canon-eos-r5-image-quality/ "The amount of hot pixels and noise in such a photo taken with the R5 will be worse, the longer you have the camera turned on in advance to taking the long exposure." So it does appear that image quality deteriorates in certain situations when the camera heats up.
  17. From the CPU outward there are layers of static air, circuit board, static air, metal and plastic at least. There is no effective thermal conduction path to the metal chassis, either from the outside or from the processor. Thus the processor cannot effectively dissipate heat, and outside cooling has only a delayed effect on the internal temperature of the components. A back-of-the-envelope calculation below shows why the air gaps dominate.
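A series-resistance calculation illustrates why such a stack conducts so poorly. Every thickness, area and conductivity below is an assumed, illustrative value rather than a measurement of the camera, and real heat flow has parallel paths, spreading and convection, so the absolute numbers exaggerate; the relative magnitudes are the point:

```python
# Series thermal resistance of a layer stack like the one described above.
# All dimensions and conductivities are assumed, illustrative values.

AREA = 0.0004  # m^2: assumed ~2 cm x 2 cm conduction footprint

layers = [  # (name, thickness in m, conductivity in W/(m K))
    ("static air gap",      0.0005, 0.026),
    ("circuit board (FR4)", 0.0016, 0.3),
    ("static air gap",      0.0005, 0.026),
    ("metal plate (Al)",    0.0010, 205.0),
    ("plastic shell (ABS)", 0.0020, 0.17),
]

total = 0.0
for name, t, k in layers:
    r = t / (k * AREA)  # conductive resistance R = t / (k A), in K/W
    total += r
    print(f"{name:22s} {r:8.2f} K/W")

print(f"{'total':22s} {total:8.1f} K/W")
```

With these made-up numbers the two air gaps contribute roughly 48 K/W each while the aluminium plate contributes about 0.01 K/W, so the gaps dominate by over three orders of magnitude; that is consistent with outside cooling reaching the processor only slowly.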
  18. An IR thermometer only records the surface temperature; the components inside may be hotter. Putting the camera in a freezer for 25 min reportedly restores the camera's video recording capabilities to baseline (see the experiment and observations by Jesse Evans at https://www.fredmiranda.com/forum/topic/1658635/4). Thus the camera is clearly monitoring its temperature and reacting accordingly, but after use that leads to overheating, it seems difficult to get the internal temperature to decline sufficiently without extreme measures. I think Canon could tweak the operation of the camera; for example, while the user is sitting in a menu there should be no need to continue live-view operation, so the camera could be programmed not to read and process sensor data during that time and to run the processor at a lower speed to minimize heating while the user browses menus and adjusts settings. There could also be recommendations on what kind of live-view settings to use to maximize subsequent recording time, and so on. It can't be that the camera loses its ability to record high-quality video merely by being switched on.
  19. The non-raw video formats are resampled from the original sensor data, and the 6K->4K conversion can be done by interpolation. This cannot be done for raw video, because a raw file stores the image before the RGGB Bayer pattern is converted into RGB pixels. There is no straightforward way to convert RGR BGB RGR into RG BG covering the same subsection of the image, so they have to skip data. The alternative would be to do the Bayer interpolation to create a 6K RGB image, downsample that to 4K RGB, and finally re-Bayer it to produce the final 4K raw. In that case there would be no aliasing problems, but it wouldn't really be raw, i.e. original photosite data. Another option Nikon could have chosen is a 1.5x-cropped raw video without interpolation; that would have been raw without line skipping. There is no stupidity involved here, only different compromises to choose from (both options are sketched below).
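A toy NumPy sketch of the two compromises on an RGGB mosaic. This is my illustration of the general techniques, not Nikon's actual pipeline, and the demosaic and resampling steps are deliberately crude:

```python
import numpy as np

def skip_downscale(raw):
    """6K -> 4K style reduction by dropping one 2x2 Bayer cell out of
    every three. Stays true photosite data but aliases."""
    rows = [i for i in range(raw.shape[0]) if i % 6 < 4]
    cols = [j for j in range(raw.shape[1]) if j % 6 < 4]
    return raw[np.ix_(rows, cols)]

def remosaic_downscale(raw):
    """The alternative: demosaic (crudely, one RGB pixel per 2x2 cell),
    resample to 2/3 size, then re-mosaic to RGGB. No skipping artifacts,
    but the output is interpolated, not original photosite data."""
    rgb = np.stack([raw[0::2, 0::2],                          # R
                    (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2,  # G average
                    raw[1::2, 1::2]], axis=-1)                # B
    # nearest-neighbour resample to 2/3 linear size (illustrative only)
    ys = np.arange(int(rgb.shape[0] * 2 / 3)) * 3 // 2
    xs = np.arange(int(rgb.shape[1] * 2 / 3)) * 3 // 2
    small = rgb[np.ix_(ys, xs)]
    out = np.zeros((small.shape[0] * 2, small.shape[1] * 2))
    out[0::2, 0::2] = small[:, :, 0]  # R
    out[0::2, 1::2] = small[:, :, 1]  # G
    out[1::2, 0::2] = small[:, :, 1]  # G
    out[1::2, 1::2] = small[:, :, 2]  # B
    return out

raw = np.random.rand(12, 12)  # toy 12x12 RGGB mosaic
print(skip_downscale(raw).shape, remosaic_downscale(raw).shape)
```

The first function stays faithful to the photosite data but discards every third sample pair, hence the aliasing; the second avoids that aliasing but outputs interpolated values that are no longer raw in the strict sense.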
  20. Dual pixel AF with a 45MP final output image would require a sensor with about 91 million photodiodes, and for continuous AF all of this data would have to be read and processed during focusing. Cross-type phase detection with a quadruple-pixel design would require about 182 million (if a 2x2 arrangement is used instead of some other pattern); some rough arithmetic follows below. These things are easier to implement in a camera that isn't intended to produce high-resolution stills: dual pixel AF is limited by the available processing power, and a high pixel count makes it harder. Notice that Canon's 50MP models don't have dual pixel AF either. The D2H and D2Hs had an LBCAST sensor; I think the main problem wasn't the sensor technology but the fact that it was 4MP while Canon's was 8MP. The D2X had a 12 MP sensor, but Nikon hadn't yet cracked optimal high-ISO performance at that time (the breakthroughs came later, with the D3s). In my opinion, the details of how Nikon collaborate with their partners to make the sensors for their cameras should not matter to the customer. Users should be interested in 1) image quality, 2) performance, and 3) cost; if the results are excellent, that is usually enough. It is clear that Nikon's focus isn't video, but they offer video as a feature (rather than as the primary function of the camera). Nikon seem to prioritise still image quality over features such as video AF. This is neither a good thing nor a bad thing; every company would do well to concentrate on its strengths. I do think Canon may be more motivated to offer full-frame 4K in the near future because both of their main rivals now offer it. Since Nikon are planning to release a high-end mirrorless camera system in the future, that will surely require some kind of on-sensor PDAF, which can then be offered on the DSLR side as well.
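The rough arithmetic on the readout burden, with an assumed refresh rate and bit depth (illustrative figures, not any camera's actual specification):

```python
# Photodiodes to be read per second for continuous on-sensor phase-detect
# AF. Refresh rate and bit depth are assumed, illustrative values.

output_mp = 45.7   # output megapixels (roughly D850/Z7-class resolution)
refresh_hz = 60    # assumed live-view/AF readout rate
bits = 12          # assumed readout bit depth

for name, diodes_per_pixel in [("conventional (1x)", 1),
                               ("dual pixel (2x)", 2),
                               ("quad pixel (2x2)", 4)]:
    photodiodes_m = output_mp * diodes_per_pixel
    gbit_s = photodiodes_m * 1e6 * refresh_hz * bits / 1e9
    print(f"{name:18s} {photodiodes_m:6.1f} M photodiodes, "
          f"~{gbit_s:4.0f} Gbit/s sustained")
```

Doubling or quadrupling the photodiode count scales the sustained readout and processing rate accordingly, which is the processing-power problem referred to above.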
  21. Noise depends on the luminosity, i.e. the number of photons detected; it is not constant but increases approximately in proportion to the square root of the signal. The number of distinct tonal or colour values that can be distinguished from noise can be calculated if the SNR is known as a function of luminosity or of the RGB values. From these graphs it is possible to calculate how much tonal or colour information there is in the image, which is what DXOMark estimates. You cannot estimate how many colours or tones are separable from noise by assuming a fixed "noise floor"; a toy calculation below illustrates the difference. In my opinion the DXOMark analysis here is sound. Colour depth, by contrast, is just the number of bits used to encode the colour values of a pixel; it doesn't consider noise.
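A toy version of that counting argument, with assumed full-well and read-noise figures (the general principle, not DXOMark's exact protocol):

```python
import numpy as np

# With shot noise, the noise at signal S (in electrons) is sqrt(S), so the
# number of tonal levels separable by one sigma is the integral of
# dS / sigma(S). Full-well and read-noise figures are assumed values.

full_well = 60000.0  # electrons, assumed
read_noise = 3.0     # electrons RMS, assumed

S = np.linspace(0.0, full_well, 100000)
sigma = np.sqrt(S + read_noise**2)        # shot noise plus read-noise floor
levels = (np.diff(S) / sigma[:-1]).sum()  # count of one-sigma steps

print(f"~{levels:.0f} separable tones (~{np.log2(levels):.1f} bits)")
print(f"a fixed noise floor would claim ~{full_well / read_noise:.0f} tones")
```

The fixed-floor estimate overstates the number of separable tones by a factor of about forty here, because most of the signal range is shot-noise limited rather than read-noise limited; that is why the full SNR curve is needed.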
  22. If you look more closely at the DXOMark measurements (the graphs), the D5 has better dynamic range than the 1DX II from about ISO 3200 to 51200. Which is more important (low-ISO or high-ISO dynamic range) can be argued depending on the application. Typical sports shooters produce publication-ready JPEGs in the camera, which means their dynamic range is limited at that point in practice, even if they once in a blue moon get the chance to use low ISO. Furthermore, the tonal range (the number of tones that can be separated from each other and from noise) and the colour sensitivity (the number of colour values that can be distinguished from each other and from noise) are greater in the D5 across the ISO 100-25600 range than in the 1DX Mark II. For me these are very important measures of the smoothness of tonal and colour gradations; especially when contrast is increased in post, they determine how well the image's tonal and colour integrity holds up.

To decide which sensor is best for a given application, one needs to look at the shooting conditions and the kind of post-processing / look preferred for the final image. The D5 isn't the ideal camera for shooting in direct sunlight due to its lower base-ISO dynamic range; that much is clear. On the other hand, the 35mm full-frame camera with the best base-ISO dynamic range is also made by Nikon: the D810. So they have a solution for that situation as well, just in a different camera.

The D500/D7500 sensor allows fast reads for high-fps use, which the D7200's sensor (which scores better on DXOMark's low-ISO metrics) is apparently not well equipped to do. However, many users of the D500 report that they find its high-ISO image quality better than the D7200's, and that colour neutrality holds across a greater range of ISO settings than in previous cameras; the same is true of the D5. So there are characteristics of the new sensors which are missed by DXO's overall scoring (which is mostly based on low-ISO performance and ignores large parts of the elevated-ISO measurements) but appreciated by the photographers who use these cameras. In DXOMark's graphs, the D7500 has better dynamic range than the NX500 at every ISO setting, but the difference is pronounced from ISO 400 to 25600. DXO weight their overall score heavily toward base-ISO results, which is usually not where people shoot in practice unless they work in a studio or are tripod-based landscape photographers. I think there is useful information in the DXOMark data, but you have to go into the graphs in the Measurements tab to access it.

I think the cropped 4K (which is the same actual-pixels crop as is used in Canon's 4K-capable DSLRs) is used because it requires less processing and produces less heat than reading the full sensor and resampling to 4K. I don't think it's so much a question of who makes the sensor: if they wanted to, they could make a full-frame 4K camera, but it would cost more, and most Nikon users are focused on still photography and only need some video capability on the side for occasional use. I realize you are interested primarily in video and would like Nikon to do better in that area. I am sure that sentiment is shared by many; however, Nikon's history is in still photography and they remain primarily focused on it. Users with higher-priority needs in video tend to gravitate toward other brands.
Since Nikon are working on a full-frame mirrorless camera, I would expect them to implement some form of phase-detect focus sensors on the main image sensor, and at that point there will probably be more interest in using Nikons for video. But at present it seems that all of Nikon's optimization goes toward getting the best still image quality possible for the applications expected of each particular camera.