Everything posted by mfeldt

  1. This! It's just - still not there, like so many times before. 3D gives me a headache, and I will avoid it until it can be made to deliver a better experience for everyone, preferably without any need for additional gear when watching! 24p 2D will live until the holodeck gets invented! Maybe even longer!
  2. That's a good point, actually! I do not know the fill factor of photographic sensors - does anyone have information on that?
  3. Good day everyone. Across all the threads here you encounter a kind of mantra that keeps getting repeated over and over: that bigger sensors exhibit better low-light performance because they are more sensitive. My professional background being astronomy and the design of quite advanced instruments for large telescopes, I keep wondering why everyone takes this mantra for granted. For us, where we try to capture every single photon if at all possible, sensor size never really is an issue.
     The physical size of a pixel primarily determines its full-well capacity, i.e. how many electrons it can hold before one needs to apply a reset. Via the number of bits you can use to store this maximum value, and of course the inevitable read-out noise, this is connected to the dynamic range you can achieve. The quantum efficiency, i.e. the ability to turn arriving photons into photoelectrons, is not connected to the size of a pixel.
     What may appear connected is the "light-gathering area", but in fact this is true only in a limited sense, as it depends on the optical design of the camera. In astronomy, at least in what we call the high-angular-resolution part of it, we tend to aim for diffraction-limited sampling. In this case, a pixel samples a solid angle corresponding to about half the resolution limit of the optics, which is in turn given by the diameter of the pupil. If the pixels of the detector become bigger (imposed e.g. by the manufacturer), we adapt the f-ratio to keep the sampling of a point image constant, and thus each pixel still receives the same number of photons per second as it did with smaller pixels. Of course you could refrain from adapting the f-ratio, whereupon that number would increase and more signal per unit of time would be delivered to each pixel. However, fewer pixels would then be available for sampling a single "point" (the smallest structure the optics can produce), up to the point where a single pixel represents e.g. the entire image of a star - something you would clearly not want! (See the first sketch after this list.)
     So I keep wondering whether there is maybe a secret difference between astronomical instruments and photographic cameras that makes pixel size (and, via the number of pixels, sensor size) play a role - or whether the whole discussion is maybe on the wrong subject. Could it be that it's in fact more that bigger sensors are usually in bigger cameras that carry bigger lenses with bigger apertures but use the same number of pixels per image area, so that they are more sensitive because of the larger aperture? Or is there some hidden influence of the electronics that makes smaller sensors have more read noise, thus simply requiring more photoelectrons to overcome it, adapting the digital sampling and leading to "darker" images? Looking forward to answers...
  4. I'm seriously deciding between a DJI Osmo and upgrading my Lumix to a G7 plus a second-hand Nebula 4000. Both options would cost roughly the same... For me, it's a question of footage quality and handiness during e.g. a holiday...
  5. I'm having a hard time believing that any kind of in-camera stabilization will ever match the dynamic range of an external device like the Osmo or a Steadicam. Dynamic range in terms of pointing (and, for true Steadicam rigs, position) errors, that is, of course... not in terms of grey levels! The Osmo will keep your image stable even when turned 90 degrees off target - nothing inside a camera body could ever achieve that! Apart from that, any system moving the lenses will necessarily degrade the image quality in all non-ideal positions. To keep this unnoticeable, you'll need to over-specify, which heavily increases the cost of such systems.
  6. Of course it's mostly processing power and bus speed that impose the limitation. According to the specs, the sensor can deliver 4000x3000 pixels x 12 bit x 35 fps. That probably refers to the raw pixel values, which gives a bit rate of 4000x3000x12x35 ≈ 5 Gbit/s. This goes into some image-processing unit to de-Bayer and produce colour information. Maybe the spec already refers to colour information, in which case you would triple the rate to 15 Gbit/s. Recording RAW means preserving that information and writing it to storage. Compare SD card and bus speed rates and you'll notice you are in deep trouble. (A back-of-the-envelope calculation is sketched after this list.)
  7. Are the action shots warp-stabilized or anything? There's a terrible wobble in the image... I'm not sure I understand the criticism. In fact I never fully understood why people apply a log profile when they start with 8-bit data and the result produced by the codec is also 8 bit. All you get is grey blacks. Even if you have access to 12-bit sensor data, "log profiling" means little more than choosing the range of bits you finally encode, clipping either the top or the bottom or throwing away some of the intermediate bits. "Gaining information" is hardly possible - when the output is 8 bit, your information content is 8 bits per pixel per colour - unless you apply some compression, of course - then it's even less! (See the quantization sketch after this list.)
  8. Such as for example this thing here...: https://eu.ptgrey.com/blackfly-23-mp-color-usb3-vision-sony-pregius-imx249-2
       • uncompressed frames up to 1920x1200 @ 41 FPS
       • JPEG or h264 compression if needed
       • global shutter
       • USB3 interface
       • 379 €
     Of course one would have to fiddle a bit...
  9. So it would be nice to have some original footage uploaded here....
  10. Actually it would be quite interesting to have a 1080p raw-able head for this device at a price much reduced with respect to the x5r....
  11. I do not doubt this, but I think you should not start throwing away information right after you have extracted it from the sensor. Inevitably, you are going to lose further information in every post-processing step you apply. Compressing to a log scale and fewer bits *before* going into that process would be like applying Dolby noise reduction (anyone remember that?) when recording a master tape... that system also applied a sort of compression curve to store information in a restricted frequency spectrum. Or the RIAA mastering curves. No one would ever have had the idea of applying them before the data got written to the final medium! Imagine e.g. filming the night sky... 90% of the image will be at around 5% of peak brightness, i.e. all changes in most of the image will use only about 50 different grey-scale values when recording 10 bit, i.e. 1024 levels. That's when compressing linearly; some clever algorithm might even assign fewer values, since the eye can certainly not see any differences in that area anyway. 13 stops of dynamic range, on the other hand, could in principle deliver 8192 levels, and roughly 400 of these would be available to reproduce grey levels in the very large dark part of the image. Now you may say that the eye is not fit to resolve anything in this part anyway, which is true. But some clever post-production guy might want to bring up the Milky Way or even more subtle nebular structures. Guess what he's better off with when he starts stretching those 5% to ensure visible contrasts...? (The numbers are worked out in a sketch after this list.)
  12. Well, somewhere in your technobabble you're still losing (or actually throwing away) information. Then comes the argument that the human eye can distinguish grey levels at some brightness levels better than at others, and that you can therefore compress certain ranges, be it via log, a gamma curve, or something else - which *may* be true. I would argue that a) you should never ever reduce the number of bits before you have done everything you could possibly want to do to the image, and b) human vision varies between individuals, and the areas of dynamic range that you have compressed might have been indiscernible to some, but not to others. Since you have to "unlog" (i.e. exp) before displaying on a screen, some individuals may still notice banding where others don't.
  13. Maybe I'm too naive about the dynamic range of an analogue image, and maybe this leads too far off topic... but I thought: 13 stops of dynamic range give you 2¹³ = 8192 discernible grey levels, while a 10-bit image can only convey 2¹⁰ = 1024 levels of grey. So in the encoding process you're losing part of what your camera delivered.
  14. There are sensors, actually cameras that could do the job, e.g.: http://www.lumenera.com/usb3/lt665.php or http://www.edmundoptics.de/imaging/cameras/usb-cameras/point-grey-grasshopper-3-high-performance-usb-3-0-cameras/88-514 Getting from there to a handy device incorporating the usual comfy features videographers are used to should be a development project of one or two years.
  15. I agree, but do you want 3 additional stops that cannot be encoded into the image?
  16. It's actually quite easy, but it's not simply a camera. An MFT or DSLR with a sustained raw shooting rate of a selectable and reliable 24, 25, and 30 fps would be just fine. Resolution about 16 MP. Add to this a post-production software capable of reading all the raw frames, cropping (add stabilization & choose composition), grading and finally downsampling to 4K or 2K clips at 8-12 bit greyscale depth as desired - that would be a perfect world!
  17. What you're saying is also not strictly true... You would be correct if you took an image of an area of strictly the same colour and intensity everywhere, and the sensor were totally free of noise. But imagine a colour ramp... downsampling and averaging neighbouring pixels *does* yield intermediate values of colour information, and you do get more than the initially available 4 values out! It's an interpolation, true, and the accuracy of the result depends on the spatial frequency content of the object itself and the resolution of the camera and sensor system. But if that's not too far off, the result of the interpolation should come pretty close to a sensor recording 10 bit right away! Maybe a tedious calculation reveals not the full 10 bits of information content, but 9.x, and a slightly reduced spatial resolution, but it can still be well worthwhile! (A small numerical demonstration follows after this list.)
  18. I kept wondering whether the resampling would also work in the temporal domain... the GH4 allows 1080p at 4x frame rates. Downsampling in the temporal domain, i.e. averaging 4 frames to generate 1, should in principle - if the sensor gain is well calibrated and the noise is about 1 (or at least single-digit) ADU - also allow going from 8-bit to 10-bit luma information per channel. Of course it would only work in static parts of the image, creating smear in the moving ones, like, say, a moving car or train. But maybe that would create an interesting look of its own? (A quick test of the idea is sketched below.) m.
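Sketch for post 3 - a minimal Python illustration of the sampling argument, under the assumption of an extended source of constant surface brightness (so the photon rate per unit focal-plane area scales as 1/N², with N the f-number). The wavelength, the pixel pitches and the function names are made up purely for illustration: if the f-ratio is adapted so that a diffraction-limited point image stays Nyquist-sampled, the photon rate per pixel does not change with pixel size; at a fixed f-ratio, bigger pixels do collect more photons each, but fewer of them sample the smallest resolvable structure.

```python
# Sketch of the sampling argument from post 3 (illustrative numbers only).
# Assumption: an extended source of fixed surface brightness; the photon rate
# per unit focal-plane area then scales as 1/N^2 (N = f-number), and the
# photon rate per pixel as (pixel_pitch^2) / N^2.

WAVELENGTH_UM = 0.55          # visible light, microns

def f_number_for_nyquist(pixel_pitch_um):
    """f-number that keeps a diffraction-limited point image Nyquist-sampled:
    pixel_pitch = 0.5 * wavelength * N  =>  N = 2 * pitch / wavelength."""
    return 2.0 * pixel_pitch_um / WAVELENGTH_UM

def relative_photon_rate_per_pixel(pixel_pitch_um, f_number):
    """Photon rate per pixel, in arbitrary units, for an extended source."""
    return pixel_pitch_um ** 2 / f_number ** 2

small, big = 4.0, 8.0          # hypothetical pixel pitches in microns

# Case 1: adapt the f-ratio so the point image stays equally well sampled.
for pitch in (small, big):
    n = f_number_for_nyquist(pitch)
    print(f"pitch {pitch} um, f/{n:.1f}: "
          f"{relative_photon_rate_per_pixel(pitch, n):.3f} (constant)")

# Case 2: keep the f-ratio fixed; bigger pixels collect more photons each,
# but fewer pixels then sample the smallest resolvable structure.
n_fixed = f_number_for_nyquist(small)
for pitch in (small, big):
    print(f"pitch {pitch} um at fixed f/{n_fixed:.1f}: "
          f"{relative_photon_rate_per_pixel(pitch, n_fixed):.3f}")
```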
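Sketch for post 6 - the back-of-the-envelope data-rate check, in Python. The sensor figures are the ones quoted in the post; the 90 MB/s sustained SD-card write speed is an assumed, typical UHS-I figure, not the spec of any particular card or camera.

```python
# Data-rate check from post 6 (illustrative numbers).

width, height = 4000, 3000
bits_per_sample = 12
fps = 35

raw_bits_per_s = width * height * bits_per_sample * fps   # one sample per pixel (Bayer)
debayered_bits_per_s = raw_bits_per_s * 3                  # if the spec already meant RGB

sd_card_bytes_per_s = 90e6                                  # assumed sustained write speed

print(f"Bayer raw:      {raw_bits_per_s / 1e9:.1f} Gbit/s "
      f"({raw_bits_per_s / 8 / 1e6:.0f} MB/s)")
print(f"De-Bayered RGB: {debayered_bits_per_s / 1e9:.1f} Gbit/s")
print(f"SD card budget: {sd_card_bytes_per_s * 8 / 1e9:.2f} Gbit/s "
      f"-> shortfall factor ~{raw_bits_per_s / (sd_card_bytes_per_s * 8):.0f}x")
```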
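Sketch for post 7 - a generic log-style curve (not any manufacturer's actual profile) applied to data that is already 8 bit. Counting the distinct output codes shows that no information is gained: the 256 available codes are only redistributed, with several highlight codes collapsing onto one while gaps open up in the shadows.

```python
# Applying a log-style curve to 8-bit data cannot create information (post 7).

import math

def log_curve_8bit(x):
    """Map an 8-bit linear code (0..255) to an 8-bit 'log' code (0..255)."""
    return round(255 * math.log1p(x) / math.log1p(255))

codes_in = range(256)
codes_out = [log_curve_8bit(x) for x in codes_in]

print("distinct input codes :", len(set(codes_in)))      # 256
print("distinct output codes:", len(set(codes_out)))     # fewer than 256: codes merge
# In the shadows the output codes are spread apart (gaps), in the highlights
# several input codes collapse onto one output code.
```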
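Sketch for post 11 - the night-sky numbers worked out: how many code values a purely linear encoding leaves for the darkest 5% of the brightness range at different bit depths.

```python
# Code values available in the darkest 5% of a linear encoding (post 11).

def codes_below(fraction, bits):
    """Number of integer code values below `fraction` of full scale."""
    return int(fraction * (2 ** bits))

for bits in (8, 10, 13):
    print(f"{bits:2d} bit linear: {2**bits:5d} levels total, "
          f"{codes_below(0.05, bits):4d} levels in the darkest 5%")
# 10 bit -> ~51 levels, 13 bit -> ~409 levels, matching the figures in the post.
```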
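Sketch for post 17 - a small numpy demonstration that averaging 2x2 blocks of 8-bit samples on a smooth ramp does yield intermediate tonal values. The ramp, the noise level of about half a code and the image size are made-up illustration values.

```python
# Spatial downsampling recovers intermediate tonal values (post 17).

import numpy as np

rng = np.random.default_rng(0)

h, w = 16, 4096
true_signal = np.linspace(100.0, 101.0, w)          # varies by one code across the row
noisy = true_signal + rng.normal(0.0, 0.5, size=(h, w))
quantized = np.clip(np.round(noisy), 0, 255)        # 8-bit capture: integer codes only

# Downsample by averaging 2x2 blocks (half resolution, finer tonal steps).
down = quantized.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print("distinct codes in 8-bit capture :", len(np.unique(quantized)))
print("distinct levels after 2x2 mean  :", len(np.unique(down)))
# The averaged image contains fractional levels (100.25, 100.5, ...), i.e.
# effectively more than 8 bits of tonal resolution in smooth areas.
```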
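Sketch for post 18 - the temporal variant: averaging four quantized frames of a static scene with roughly 1 ADU of noise gives finer-than-8-bit luma estimates in the static parts. The scene value and the noise level are assumed for illustration; a real test would of course use actual high-frame-rate footage.

```python
# Temporal averaging of four 8-bit frames of a static scene (post 18).

import numpy as np

rng = np.random.default_rng(1)

static_luma = 57.3                                   # "true" value between two 8-bit codes
frames = [np.clip(np.round(static_luma + rng.normal(0.0, 1.0, (1080, 1920))), 0, 255)
          for _ in range(4)]                         # four quantized captures

merged = np.mean(frames, axis=0)                     # temporal average of the 4 frames

print("mean of single frame  :", frames[0].mean())   # near 57.3 only on average
print("codes in one frame    :", np.unique(frames[0])[:6], "...")
print("merged pixel estimate :", merged.mean())      # converges toward 57.3
print("distinct merged levels:", len(np.unique(merged)))  # quarter-code steps appear
```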