mfeldt
  1. I'm offering the camera body, the gimbal, and a nice bag to take it all out into the field. The camera is just body + cap + battery; the gimbal is complete with its original box and charger. Pictures can be found here: https://www.ebay-kleinanzeigen.de/s-anzeige/video-ausruestung-lumix-dmc-g70-kamera-nebula-400-gimbal/952880813-245-9174 €650 + shipping, collection preferred.
  2. This! It's just still not there, like so many times before. 3D gives me a headache, and I will avoid it until it can be made to give a better experience for everyone, preferably without any need for additional gear when watching! 24p 2D will live until the holodeck gets invented! Maybe even longer!
  3. That's a good point, actually! I do not know the filling factor of photographic sensors; does anyone have information on that?
  4. Good day everyone, across all the threads here you encounter a kind of mantra that keeps getting repeated over and over: that bigger sensors exhibit better low-light performance because they are more sensitive. My professional background being astronomy and the design of quite advanced instruments for large telescopes, I keep wondering why everyone takes this mantra for granted. For us, where the goal is to capture every single photon if possible, sensor size as such never really is an issue. The physical size of a pixel primarily determines its full-well capacity, i.e. ...
  5. I'm seriously deciding between a DJI Osmo and upgrading my Lumix to a G7 plus a second-hand Nebula 4000. Both options would cost roughly the same... For me, it's a question of footage quality and handiness during e.g. a holiday...
  6. I'm having a hard time believing that any kind of in-camera stabilization will ever match the dynamic range of an external device like the Osmo or a Steadicam. Dynamic range in terms of pointing errors (and position errors, for true Steadicam rigs), that is, of course, not in terms of grey levels! The Osmo will keep your image stable even when turned 90 deg off target; nothing inside a camera body could ever achieve that! Apart from that, any system moving the lenses will necessarily degrade the image quality in all non-ideal positions. To keep this unnoticeable, you'll need to over-specify ...
  7. Of course it's mostly processing power and bus speed that impose the limitation. According to the specs, the sensor can deliver 4000x3000 pixels x 12 bit x 35 fps. That's probably for the raw pixel values, which gives a bit rate of 4000x3000x12x35 ≈ 5 Gbit/s. This goes into some image-processing unit to de-Bayer the data and produce colour information. Maybe the spec already refers to colour information, in which case you would triple the rate to 15 Gbit/s. Recording RAW means preserving that information and writing it to storage. Compare SD-card and bus speed ratings and you'll notice you are in deep trouble.
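The arithmetic in that post can be sketched as a quick back-of-the-envelope check. The ~104 MB/s UHS-I card figure is my own reference point, not from the post:

```python
# Data-rate sanity check for the figures above.
# Assumptions: 4000x3000 sensor, 12 bit per value, 35 fps,
# and a UHS-I SD card at roughly 104 MB/s as a comparison point.

def raw_bitrate_gbps(width, height, bits, fps, channels=1):
    """Raw data rate in Gbit/s for uncompressed sensor readout."""
    return width * height * bits * fps * channels / 1e9

bayer = raw_bitrate_gbps(4000, 3000, 12, 35)      # mosaiced sensor values
rgb = raw_bitrate_gbps(4000, 3000, 12, 35, 3)     # after de-Bayering to RGB

sd_card_gbps = 104e6 * 8 / 1e9  # ~104 MB/s UHS-I bus, in Gbit/s

print(f"Bayer readout: {bayer:.1f} Gbit/s")   # 5.0
print(f"RGB readout:   {rgb:.1f} Gbit/s")     # 15.1
print(f"SD card bus:   {sd_card_gbps:.2f} Gbit/s")  # 0.83
```

Either way, the readout rate exceeds the card's write rate by a factor of six or more, which is the "deep trouble" the post refers to.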
  8. Are the action shots warp-stabilized or anything? There's a terrible wobble in the image... I'm not sure I understand the criticism. In fact, I never fully understood why people apply a log profile when they start with 8-bit data and the result produced by the codec is also 8 bit. All you get is grey blacks. Even if you have access to 12-bit sensor data, "log profiling" means little more than choosing the range of bits you finally encode: clipping either the top or the bottom, or throwing away some of the intermediate bits. "Gaining information" is hardly possible; when the output is 8 bit, you ...
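The point about the 8-bit container can be illustrated with a toy example. The log curve here is a generic sketch of my own, not any specific camera's profile:

```python
import math

# Toy illustration: however you map 12-bit sensor values (0..4095)
# into an 8-bit container, the output holds at most 256 distinct
# levels. A log curve only chooses WHERE those levels sit on the
# brightness axis, it cannot create extra ones.

def linear_8bit(v12):
    """Truncate a 12-bit value to 8 bit by dropping the 4 LSBs."""
    return v12 >> 4

def log_8bit(v12):
    """Map a 12-bit value to 8 bit on a log curve (generic sketch,
    not a real camera profile)."""
    return round(255 * math.log2(1 + v12) / math.log2(4096))

distinct_lin = len({linear_8bit(v) for v in range(4096)})
distinct_log = len({log_8bit(v) for v in range(4096)})
print(f"linear: {distinct_lin} distinct levels (cap 256)")
print(f"log:    {distinct_log} distinct levels (cap 256)")
```

The log mapping packs its code values densely into the shadows and sparsely into the highlights, but the total can never exceed the 256 that 8 bits allow.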
  9. Such as, for example, this thing here: https://eu.ptgrey.com/blackfly-23-mp-color-usb3-vision-sony-pregius-imx249-2
     • uncompressed frames up to 1920x1200 @ 41 fps
     • JPEG or h264 compression if needed
     • global shutter
     • USB3 interface
     • €379
     Of course one would have to fiddle a bit...
  10. So it would be nice to have some original footage uploaded here....
  11. Actually it would be quite interesting to have a 1080p raw-able head for this device at a price much reduced with respect to the x5r....
  12. I do not doubt this, but I think you should not start throwing away information right after you have extracted it from the sensor. Inevitably, you are going to lose further information in every post-processing step you apply. Compressing to a log scale and fewer bits *before* going into that process would be like applying Dolby noise reduction (anyone remember that?) when recording a master tape... that system also applied a sort of compression curve to store information in a restricted frequency spectrum. Or the RIAA mastering curves. No one would ever have had the idea of applying it before ...
  13. Well, somewhere in your technobabble you're still losing (or actually throwing away) information. Then comes the argument that the human eye can distinguish grey levels better at some brightness levels than at others, and that you can therefore compress certain ranges, be it via log, a gamma curve, or something else. That *may* be true. I would argue that: a) you should never, ever reduce the number of bits before you have done everything you could possibly want to do to the image! b) human vision varies between individuals, and the areas of dynamic range that you have compressed might have ...
  14. Maybe I'm too naive about the dynamic range of an analogue image, and maybe it leads too far off topic... but I thought: 13 stops of dynamic range gives you 2¹³ = 8192 discernible grey levels, while a 10-bit image can convey just 1024 levels of grey. So you're losing part of what your camera delivered in the encoding process.
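Spelled out, the arithmetic in that post (under its assumption of a purely linear encoding, where N stops span a brightness ratio of 2^N) looks like this:

```python
# Stops vs. bit depth, assuming a purely linear encoding as the
# post does: N stops span a ratio of 2**N between darkest and
# brightest, while a B-bit integer offers 2**B code values.

stops = 13
levels_needed = 2 ** stops   # 8192 linear levels to cover 13 stops
levels_10bit = 2 ** 10       # 1024 code values in a 10-bit file
levels_8bit = 2 ** 8         # 256 code values in an 8-bit file

print(levels_needed, levels_10bit, levels_8bit)  # 8192 1024 256
print(f"shortfall at 10 bit: factor {levels_needed // levels_10bit}")  # 8
```

Whether 8192 *linear* levels are actually needed is exactly what the log/gamma argument in the earlier posts disputes, but on linear terms the shortfall is a factor of eight.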