Everything posted by horshack

  1. Vignetting is due to the angle of incidence of light reaching the pixels on a sensor, and I can't think of any reason why a smaller sensor would have less oblique angles. In fact, the higher the pixel density, the more likely vignetting is to occur. BSI sensors should be less prone to this, but to my knowledge neither the GH5 nor the GH5s is BSI.
  2. The light-gathering ability won't be the same due to more pixel vignetting from the f/0.95 lens.
  3. NX2 rumors: The rumor sounds crazy, absolutely crazy, but would be awesome if true.
  4. The raws indicate a reduction in read noise rather than an increase in sensitivity. If sensitivity were increased, the noise differential would be noticeable well below ISO 12,800 (see the noise-model sketch after this list). Btw, where did you get the 1.66 figure from?
  5. Dpreview just posted their GH5s review, which includes a full set of high-ISO raw stills for their IQ comparison widget. Here are a few comps I generated: GH5s vs GH5, ISO 6400, Low-Light, Normalized to Common Image Resolution; GH5s vs GH5, ISO 12800, Low-Light, Normalized to Common Image Resolution; GH5s vs GH5, ISO 25600, Low-Light, Normalized to Common Image Resolution.
  6. Regarding what actual low-light improvements have been made to the GH5s sensor (vs just video noise reduction), the best way to suss that out is to look at the raw stills performance compared to its predecessor. Any noise improvement in video which is not also demonstrated in stills performance can only be the result of noise reduction (presuming the predecessor's video processing didn't have its own faults/limitations, such as the need to sub-sample/skip lines due to limited readout performance - which the GH5 didn't have). With that said, here's a comparison from dpreview of the GH5s vs GH5 for ISO 1600 - they don't currently have higher ISOs depicted because they've only done ISO invariance testing so far: Dpreview ISO Invariance Widget, ISO 1600, Normalized to common image resolution. And here's a comparison of the base ISO DR: Dpreview ISO Invariance Widget, Base ISO, Pushed +6EV, Normalized to common image resolution.
  7. It's because Sony read the sensor out at 1/15s in FF mode on the A7rII, which isn't fast enough to support video frame rates, so it employed line skipping, which sacrifices both light-gathering ability and sharpness (soft video). On the A7rIII Sony reads out the sensor at 1/30s, which is fast enough to support video frame rates without needing to toss out rows.
  8. The Sony A7R III has a 42MP sensor vs. just 10MP for the GH5S, which makes the similar low-light performance in 4K video quite mind-bending. How did Sony manage it? Temporal noise reduction, most likely - and that would be from Panasonic's ASIC processing of the video rather than a property of the sensor itself.
  9. Andrew, in the video you demonstrated the Canon 35mm f/2 IS with the Metabones (AF sucked). Did you try that Canon with the MC-11 as well?
  10. Every ISO has a conversion gain, including base ISO. If you read back over Jim's post you'll see him calculate the theoretical conversion gain for "base" ISO 100 on the A7rII.
  11. Jim and I converse all the time. The explanation you quoted from Jim doesn't have direct relevance to the discussion because "dual native ISO" isn't what the name implies - and btw, there's no such thing as a "native ISO," since conversion gain is applied at every ISO, including base ISO. Again, it's simply Pany's name for the Aptina technology, which is a dual-gain configuration and has been implemented in multiple sensors prior to the GH5s, from both Sony (A7s, A7rII, A7rIII) and Nikon (D4 and forward). "Dual gain" has been around for a while but it's not implemented in that many cameras, at least those using APS-C and larger sensors; Nikon first implemented it starting with the D4, and Sony with the A7s.
  12. Sony has had "dual native ISO" in their sensors going back to the A7s, and it's in the A7r II and A7r III as well. It's a pretty mature technology and was invented by Aptina - you can read the original white paper on it here: http://www.photonstophotos.net/Aptina/DR-Pix_WhitePaper.pdf
  13. The stills performance shows the sensor's true capabilities in terms of noise. Video is that same stills noise performance, at least for fully sampled sensors like the GH5/A7s, plus any downsampling for the lower-res modes like HD (which still matches stills performance, just ideally downsampled, as one would do in PS). The only way video can outperform its equivalent stills performance on a given sensor is with noise reduction, which comes at the expense of detail, and which can be achieved in post-processing from any sensor - just more slowly, because even a GPU can't match the specialized image-processing ASIC in cameras. I suspect Panasonic is doing temporal NR, which is why the locked-down shots without motion look so good (see the temporal-NR sketch after this list).
  14. Dpreview just added the GH5s to their studio sample widget. You can use the raws to gauge the true nature of the sensor improvement (ie, without the effect/benefit of noise reduction) https://***URL removed***/news/2333575124/panasonic-lumix-dc-gh5s-studio-scene-posted
  15. The A7s's compulsory NR doesn't start until ISO 102,400. Any non-typical artifacts you see before that would be from the codec. As for a sensor whose size is just over 1/4 the area of the A7s's performing better than it at ISO 6400, at least in terms of overall noise and detail, all I can say is Pany's marketing is proving very effective then.
  16. The MFT sensor had room for improvement, plus they dropped the MP count to further lower the read noise at high ISO (Sony did the same on the A7s). This means the GH5s likely matches or slightly exceeds the A7s sensor on a per-area basis, which would place it two stops behind the A7s on a full-image basis. The rest of the improvement is noise reduction, possibly including 3DNR, although that would have to be measured with careful scene selection.
  17. Again, the laws of physics can't be repealed. The A7s's sensor was already at or near the maximum light-gathering efficiency and lowest read noise achievable with Bayer sensors. The only way around these limits is a new sensor paradigm (such as RGBW layouts) or more advanced post-acquisition noise reduction, the latter of which is likely what the GH5s is incorporating.
  18. No doubt the A7s's codec is inferior, but I was only speaking to sensor performance - the codec can be worked around via external recording. Is the GH5 vs GH5s screenshot you posted a static locked-down shot? If so, then the difference can be explained via 3D (temporal) noise reduction. Panasonic can't repeal the laws of physics. The A7s sensor has a quantum efficiency of 65% and sub-1e- read noise - that's pretty much at the theoretical limit of current Bayer sensor design. If the GH5s matched that, it would still be 2EV below the A7s in noise performance based on the difference in sensor area (see the sensor-area arithmetic after this list).
  19. Be careful when evaluating the GH5s vs A7sII low-light performance videos - look at both noise *and* detail, and look for scenes with movement rather than just locked-down static shots. Noise reduction can lead to false conclusions of sensor performance.
  20. Intel has had H.264 acceleration for quite a while (since Haswell). H.265 acceleration was only introduced in the current-generation Kaby Lake processors.
  21. The poor AF of the A7sII strikes me as a better alternative than no AF on the A9II (except if you can shoot @ f/3.5 or can accept not being able to precisely control the shutter speed). A one-size-fits-all body for both stills and video is hard to come by. The closest thing we have to that right now is the A7rII.
  22. Yep, for the culling purposes I designed the app for it doesn't need to be frame accurate, since it's a rough cut of the clips, which are intended to become the new source files that can then be edited down to the exact frames. I considered adding frame accuracy in case anyone wanted to use the generated output as a final cut rather than for culling - it would involve using ffprobe to identify the keyframe boundary near the in/out point, then doing two separate ffmpeg invocations to transcode between the keyframe boundary and the specific in/out points, then invoking ffmpeg again to concatenate the transcoded portion with the copy-muxed portion. Basically a smart renderer (see the rough sketch after this list). If I get enough demand for something like this I might reconsider adding it in a future version.
  23. Quick Description: Precut is an open-source app I wrote that renders video files from an Adobe Premiere project without transcoding/recompression. Its purpose is to reduce the storage requirements of your source media by letting you prune your source files down to just the portions you need. Precut uses ffmpeg to perform its work, so it should be compatible with nearly every codec and video wrapper available (a bare-bones example of this kind of stream-copy trim appears after this list). Here is the homepage for the app: http://testcams.com/precut/ And here is a YouTube demonstration:
  24. BM had free hands-on training for Resolve at NAB this week. Three-hour course. They had 50 Macbooks set up in a large meeting room. The trainer was excellent. I left very impressed with the product (I use Premiere). Earlier in the day I was in BM's booth and grabbed one of the employees to ask a question about the interface - he knew the program inside-out. I asked him what he did at the company and turns out he's one of the software engineers for Resolve. Software engineers on the trade show floor interacting with prospects and customers? Now that is impressive!
  25. This is surprising. By "quite bad" did you mean detail or noise? Did you see the poor results in both FF and Super 35mm modes for 1080? Unless you like to use full-time AF for video, that is - I doubt the A7rII will match the NX1 in this regard: https://www.youtube.com/watch?v=0DigAgYD-QY https://www.youtube.com/watch?v=znoDaqWLt9E
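
A minimal sketch of the read noise vs. sensitivity point made earlier (the post about the ISO 12,800 noise differential). The signal and read-noise figures below are illustrative assumptions, not measured GH5/GH5s values:

    import math

    # Toy per-pixel noise model: total = sqrt(shot^2 + read^2), shot = sqrt(signal in e-).
    # All numbers are assumptions chosen only to illustrate the argument.
    def total_noise(signal_e, read_noise_e):
        return math.sqrt(signal_e + read_noise_e ** 2)

    for iso, signal_e in [(1600, 200), (12800, 25)]:       # assumed shadow signal levels
        before = total_noise(signal_e, read_noise_e=3.0)   # assumed older read noise
        after = total_noise(signal_e, read_noise_e=1.5)    # assumed halved read noise
        print(f"ISO {iso:>5}: {before:.2f}e- -> {after:.2f}e- "
              f"({(before - after) / before:.1%} lower)")

At the lower ISO, shot noise dominates, so halving the read noise changes the total by under 2%; only at the tiny signal levels of ISO 12,800 does the improvement become visible. A genuine sensitivity (QE) increase would raise the signal itself and therefore show up at every ISO.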
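
A minimal illustration of the temporal noise reduction suspected in the GH5s posts above: averaging each pixel across N frames lowers random noise by roughly sqrt(N) on static content, at the cost of smearing anything that moves. This is a generic NumPy sketch with made-up values, not Panasonic's actual pipeline:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated locked-down static scene: constant 100e- signal plus 10e- Gaussian noise.
    frames = 100 + rng.normal(0, 10, size=(8, 480, 640))   # 8 consecutive frames

    single = frames[0]                 # one noisy frame, what a single still would show
    averaged = frames.mean(axis=0)     # naive 8-frame temporal average

    print(f"single-frame noise : {single.std():.2f}e-")    # ~10e-
    print(f"8-frame average    : {averaged.std():.2f}e-")  # ~10/sqrt(8), about 3.5e-

Real temporal NR is motion-compensated and has to dial itself back on moving subjects, which is why locked-down static shots flatter it and why detail on motion is the thing to check.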
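
The "two stops behind, based on sensor area" figure used in the A7s comparisons above is simple arithmetic. A quick check using nominal sensor dimensions (assumed 36x24mm full-frame and 17.3x13mm Four Thirds):

    import math

    ff_area = 36.0 * 24.0       # full-frame sensor area, mm^2 (~864)
    mft_area = 17.3 * 13.0      # Four Thirds sensor area, mm^2 (~225)

    ratio = ff_area / mft_area
    stops = math.log2(ratio)

    print(f"area ratio          : {ratio:.2f}x")    # ~3.84x
    print(f"light-gathering gap : {stops:.2f} EV")  # ~1.94 EV, i.e. about 2 stops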
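
A rough sketch of the "smart renderer" idea described above: re-encode only the frames between the requested in-point and the next keyframe, stream-copy the rest, then concatenate. This is not Precut's actual code; the file names, cut points, and fixed x264/AAC settings are hypothetical, and a real implementation would also have to make the re-encoded piece match the source's codec parameters exactly for the concat to be seamless:

    import subprocess

    SRC = "input.mp4"                      # hypothetical source clip
    IN_POINT, OUT_POINT = 12.0, 48.0       # requested cut, in seconds (example values)

    def keyframes(path):
        """Return video keyframe timestamps (seconds) via ffprobe packet flags."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "packet=pts_time,flags",
             "-of", "csv=print_section=0", path],
            capture_output=True, text=True, check=True).stdout
        times = []
        for line in out.splitlines():
            pts, flags = line.split(",")[:2]
            if pts != "N/A" and "K" in flags:   # 'K' marks packets that start on a keyframe
                times.append(float(pts))
        return times

    # First keyframe at or after the in-point; only the frames before it need re-encoding.
    kf = min(t for t in keyframes(SRC) if t >= IN_POINT)

    # 1) Re-encode just the head: in-point up to the keyframe boundary.
    subprocess.run(["ffmpeg", "-y", "-i", SRC, "-ss", str(IN_POINT), "-to", str(kf),
                    "-c:v", "libx264", "-c:a", "aac", "head.mp4"], check=True)

    # 2) Stream-copy the remainder: keyframe boundary up to the out-point.
    subprocess.run(["ffmpeg", "-y", "-ss", str(kf), "-i", SRC,
                    "-t", str(OUT_POINT - kf), "-c", "copy", "tail.mp4"], check=True)

    # 3) Concatenate the two pieces without re-encoding them again.
    with open("list.txt", "w") as f:
        f.write("file 'head.mp4'\nfile 'tail.mp4'\n")
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "list.txt", "-c", "copy", "output.mp4"], check=True)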
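
For reference, the underlying ffmpeg operation Precut automates is essentially a stream-copy trim. A bare-bones sketch (not Precut's actual code; file names and times are made up):

    import subprocess

    def copy_trim(src, start_s, duration_s, dst):
        """Trim a clip without re-encoding; cuts land on keyframe boundaries."""
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", str(start_s),     # seek to the in-point (input-side, keyframe-aligned)
             "-i", src,
             "-t", str(duration_s),   # keep this much footage
             "-c", "copy",            # copy the audio/video bitstreams as-is
             dst],
            check=True)

    copy_trim("A003_clip.mov", 12.0, 30.0, "A003_clip_trimmed.mov")   # hypothetical names

Because nothing is decoded or re-encoded, quality is untouched and the trim runs at roughly disk speed, which is what makes bulk culling of source media practical.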