
mfeldt


Posts posted by mfeldt

  1. On 21.4.2016 at 3:07 AM, The Chris said:

    3D anything = nauseating and unwatchable for me and many others. This will be a specialty thing at best, like IMAX. 

    This! It's just - still not there, like so many times before. 

     

    3D gives me a headache, and I will avoid it until it can deliver a genuinely better experience, preferably without any need for additional gear when watching!

    24p2D will live until the holodeck gets invented!  Maybe even longer!

  2. 1 hour ago, araucaria said:

    Exactly. And the larger sensors are also better when you aren't fighting for photons, bright scenes or tripod, because of larger full well capacity.

    There is also one thing, the gaps between pixels may reflect photons.

    That's a good point, actually!  I do not know the fill factor of photographic sensors - does anyone have information on that?

  3. Good day everyone,

      across all the threads here you encounter a kind of mantra that keeps getting repeated over and over, namely that bigger sensors exhibit better low-light performance because they are more sensitive.

      My professional background being astronomy and the design of quite advanced instruments for large telescopes, I keep wondering why everyone takes this mantra for granted. For us, where capturing every single photon matters, sensor size never really is an issue.

      The physical size of a pixel primarily determines its full-well capacity, i.e. how many electrons it can hold before one needs to apply a reset. Via the number of bits available to store this maximum value, and of course the inevitable read-out noise, this is connected to the dynamic range you can achieve.  The quantum efficiency, i.e. the ability to turn arriving photons into photoelectrons, is not connected to the size of a pixel.
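      To put numbers on that connection, here is a little Python sketch; the full-well and read-noise figures are purely illustrative, not those of any particular sensor:

      import math

      def dynamic_range_stops(full_well_e, read_noise_e):
          # dynamic range in stops = log2(full well / read noise)
          return math.log2(full_well_e / read_noise_e)

      # illustrative values only: a big pixel with 40000 e- full well
      # and a small one with 5000 e-, both read out with 2 e- of noise
      print(dynamic_range_stops(40000, 2))   # ~14.3 stops
      print(dynamic_range_stops(5000, 2))    # ~11.3 stops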

      What may appear connected is the "light gathering area", but in fact this is true only in a limited sense, as it depends on the optical design of the camera. In astronomy, at least in what we call the high-angular-resolution part of it, we tend to aim for diffraction-limited sampling. In this case, a pixel samples a solid angle that corresponds to about half the resolution limit of the optics, given in turn by the diameter of the pupil.  If the pixels of the detector become bigger (imposed e.g. by the manufacturer), we adapt the f-ratio to keep the sampling of a point image constant, and thus each pixel still receives the same number of photons per second as it did with smaller pixels.  Of course you could refrain from adapting the f-ratio, whereupon that amount would increase and more signal per unit time would be delivered to each pixel. However, fewer pixels would then be available for sampling a single "point" (the smallest structure the optics can produce), up to the point where a single pixel represents e.g. the image of a star - something you would clearly not want!
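      The f-ratio adaptation is easy to sketch in Python; the wavelength and pixel sizes below are just example numbers:

      wavelength = 550e-9   # visible light, in metres

      def nyquist_f_number(pixel_size):
          # Nyquist sampling of the diffraction limit: one pixel spans half of
          # lambda/D, i.e. pixel_size / f = lambda / (2 D)  ->  f/D = 2 p / lambda
          return 2.0 * pixel_size / wavelength

      for p in (4e-6, 8e-6):   # 4 and 8 micron pixels
          print(f"{p * 1e6:.0f} um pixel -> f/{nyquist_f_number(p):.1f}")
      # 4 um pixel -> f/14.5, 8 um pixel -> f/29.1: double the pixel, double the
      # f-ratio, and each pixel keeps receiving the same photons per second from
      # a point source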

      So I keep wondering whether there is maybe a secret difference between astronomical instruments and photographic cameras that makes pixel size (and, via the number of pixels, sensor size) play a role - or whether the whole discussion is maybe about the wrong thing. Could it be that in fact bigger sensors usually sit in bigger cameras that carry bigger lenses with bigger apertures but the same number of pixels per image, so that they are more sensitive because of the larger aperture? Or is there some hidden influence of the electronics that makes smaller sensors have more read noise, thus simply requiring more photoelectrons to overcome it, changing the digital sampling and leading to "darker" images?

      Looking forward to answers...

     

     

  4. On 12.11.2015 at 9:56 AM, wernst said:

    Weight is the only solution? I don't think so. Big masses of steadycams have their specific, inherent problems. (“laws of physics” well discovered in Scotland . . . )

    My experiences with rigs “with a bunch of mass” are simply painful. Trying to balance these rigs takes hours. And after swapping a single piece, e.g. the battery for another type and weight, you have to balance it again from zero. Setting up the in-cam stabilization of the EM5II is just switching ON the camera. Yes, the EM5II still has some issues, e.g. with pan. Interestingly the “real thing” has the same kind of stabilization problem, e.g. the drift after a pan stops. (inertia, mass vs. gimbal)

    Trying to keep systems as simple as possible is my credo. A while ago Andrew asked here: “Does downgrading your equipment to simpler more basic models make you more creative?” He himself gave the answer: “Creativity comes from constraints”. I fully agree. A shot with some imperfections in stability taken within seconds is better than missing a shot because it would have taken an hour to set up the steadycam. Or, for your convenience, you have left your heavy steadycam at home or in the car anyway.

    For well-arranged studio type of scenery any system for the best possible IQ will be chosen. For travel or documentary style shooting I prefer an easy and simple solution.     

     

     

    I'm having a hard time believing that any kind of in-camera stabilization will ever match the dynamic range of an external device like the osmo or a steadycam. Dynamic range in terms of pointing (and, for true steadycam rigs, position) errors, of course... not in terms of grey levels!

    The osmo will keep your image stable even when turned 90 deg off target - nothing inside a camera body could ever achieve that!

    Apart from that, any system that moves the lenses will necessarily degrade the image quality in all non-ideal positions. To keep this unnoticeable, you'll need to over-specify, heavily increasing the cost of such systems.

  5. Check the literature. The specs are most likely understated. I don't believe the processor is causing those limitations. Also, this sensor is made for mobile phones as well as small-sensor camcorders, so I am guessing its capability is a lot higher.

    Download the 1st file. Also, check the 240 fps Hummingbird video (especially for its quality and sharpness).

     

    http://www.sony.net/Products/SC-HP/IS/sensor2/products/imx377.html

     

    http://m.youtube.com/watch?v=U9V3ZnRnq9U

     

    Of course it is mostly processing power and bus speed that impose the limitation.  According to the specs, the sensor can deliver 4000x3000 pixels x 12 bit x 35 fps.

    That probably refers to the raw pixel values, which gives a bit rate of 4000x3000x12x35 ~ 5 Gbit/s.  This goes into some image processing unit to de-Bayer and produce color information. Maybe the spec is already for color information, in which case you would triple the rate to 15 Gbit/s.  Recording RAW means preserving that information and writing it to storage. Compare SD card and bus speed rates and you'll notice you are in deep trouble.
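    A quick back-of-the-envelope check in Python; the SD card write speed is an assumption for illustration only:

    width, height, bits, fps = 4000, 3000, 12, 35

    raw_gbit_s = width * height * bits * fps / 1e9
    print(f"raw sensor output:  {raw_gbit_s:.1f} Gbit/s")      # ~5.0 Gbit/s
    print(f"debayered (x3):     {raw_gbit_s * 3:.1f} Gbit/s")  # ~15.1 Gbit/s

    # assume a fast SD card sustains ~80 MB/s of writes, i.e. ~0.64 Gbit/s
    sd_write_gbit_s = 80 * 8 / 1000
    print(f"compression needed: ~{raw_gbit_s / sd_write_gbit_s:.0f}x")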

  6. Someone said there were only static shots.

    The biggest problem imo when shooting handheld is the OIS. I wish it could be turned off. It wouldn't be a problem to handhold such a wide lens, but the moving sensor makes everything wobble.

    In this video there is plenty of handheld footage. It starts roughly at 2:30.

    Colors from those city shots are really nice imo.

    Are the action shots warp stabilized or anything? There's a terrible wobble in the image....

    What it ACTUALLY DOES is change the exposure values (either shutter speed or ISO) in order to recover highlights and then apply some 'greyish' filter to give it a cinematic look. It happens in all situations. In low light it is more visible, because it cannot go slower than the selected frame rate.

    I'm not sure I understand the criticism.

    In fact I never fully understood why people apply a log profile when they start with 8bit data and the result produced by the codec is also 8bit.  All you get is grey blacks.  Even if you have access to 12bit sensor data, "log profiling" means little more than choosing which range of bits you finally encode, clipping either the top or the bottom or throwing away some of the intermediate bits.  "Gaining information" is hardly possible - when the output is 8bit, your information content is 8 bits per pixel per color - unless you apply some compression of course - then it's even less!
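    A toy example in Python makes the point; the curve below is my own stand-in, not any camera's actual log profile:

    import math

    def log_encode(x, in_bits=12, out_bits=8):
        # map a linear 12bit value to an 8bit code via a log-like curve
        in_max, out_max = 2**in_bits - 1, 2**out_bits - 1
        return round(out_max * math.log1p(x) / math.log1p(in_max))

    codes = [log_encode(x) for x in range(2**12)]
    print(len(set(codes)))   # at most 256 distinct values, whatever curve you pick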

  7. Actually it would be quite interesting to have a 1080p raw-capable head for this device at a much lower price than the x5r....

     

     

    Such as for example this thing here....:

    https://eu.ptgrey.com/blackfly-23-mp-color-usb3-vision-sony-pregius-imx249-2

    • uncompressed frames up to 1920x1200 @ 41 FPS
    • JPEG or h264 compression if needed
    • global shutter
    • USB3 interface
    • 379 €

    Of course one would have to fiddle a bit...

  8. I do not doubt this, but I think you should not start throwing away information right after you have extracted it from the sensor.

    Inevitably, you are going to lose further information in every post-processing step you apply. Compressing to a log scale and fewer bits *before* going into that process would be like applying Dolby (anyone remember that?) noise reduction when recording a master tape...  that system also applied a sort of compression curve to store information in a restricted frequency spectrum. Or the RIAA mastering curves.

     

    No one would ever have had the idea of using it before the data got written to the final medium!

     

    Imagine e.g. filming the night sky... 90% of the image will be at around 5% of peak brightness, i.e. all the variation in most of the image will use only about 50 different grey-scale values when recording 10bit, i.e. 1024 levels.  That's when compressing linearly; some clever algorithm might even assign fewer values, since the eye can certainly not see any differences in that area anyway.

     

    13 stops of dynamic range, on the other hand, could of course in principle deliver 8192 levels, and roughly 400 of these would be available to reproduce the grey levels in the very large dark part of the image.

    Now you may say that the eye is not fit to resolve anything in this part anyway, which is true. But some clever post-production guy might want to bring up the Milky Way or even more subtle nebular structures.  Guess which he's better off with when he starts stretching those 5% to ensure visible contrasts...?
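    The arithmetic behind those numbers, as a tiny Python sketch (illustrative only, assuming linear encoding):

    for bits in (10, 13):
        levels = 2**bits
        dark_levels = int(0.05 * levels)
        print(f"{bits}bit linear: {levels} levels, {dark_levels} below 5% of peak")
    # 10bit linear: 1024 levels, 51 below 5% of peak
    # 13bit linear: 8192 levels, 409 below 5% of peak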

  9. the key word is "discernible".

     

    *********** technobabble below ***************

     

    in a 10bit file you have 2^10 or 1024 shades.. you could interpret that as holding 10 stops if you calculate that as a contrast ratio of 1:1024, but that has nothing to do with what the eye really notices in terms of steps of contrast. So the 13 stop formula above is not what you need for the number of shades to encode 13 stops of light from a scene. the just-noticeable difference for levels of contrast is actually based on other factors, including resolution, intensity and refresh rate. you can also encode light intensity as linear, or as a log curve, which has a major impact on the number of shades necessary for noticeable contrast. some tests indicate 12bit is the minimum, but it's standard when working with film or 2k digital footage (which adds grain and/or noise) that you can encode 13-14 stops into 10bit log gamma w/o discernible steps.

     

    **************** end babble **********************

     

    translation.. 10bit log is good enough for 13-14 stops. 

    Well, somewhere in your technobabble you're still losing (or actually throwing away) information.

    The argument that the human eye can distinguish grey levels better at some brightness levels than at others, and that you can therefore compress certain ranges, be it via a log curve, a gamma curve, or something else, *may* be true.

     

    I would argue that

     

      a) You should never ever reduce the number of bits before you have done everything you could possibly want to do to the image!

      b) Human vision varies between individuals, and the areas of dynamic range that you have compressed might be indiscernible to some, but not to others.  Since you have to "unlog" (i.e. exponentiate) before displaying on a screen, some individuals may still notice banding where others don't - see the little sketch below.
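    To illustrate where that banding can come from, here is a toy encode/decode in Python; the curve is my own stand-in, not an actual camera or display LUT:

    import math

    IN_MAX, OUT_MAX = 2**13 - 1, 2**10 - 1   # 13 stop scene, 10bit log file

    def encode(x):
        return round(OUT_MAX * math.log1p(x) / math.log1p(IN_MAX))

    def decode(c):
        # the "unlog" step that happens before display
        return math.expm1(c / OUT_MAX * math.log1p(IN_MAX))

    recon = sorted({decode(encode(x)) for x in range(IN_MAX + 1)})
    steps = [b - a for a, b in zip(recon, recon[1:])]
    print(len(recon), round(max(steps)))   # at most 1024 values survive; the biggest
                                           # steps sit in the highlights and are where
                                           # some viewers may notice banding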

  10. Huh? What do you mean? In terms of delivery? It's not as simple as that.

    Maybe I'm too naive about the dynamic range of an analogue image, and maybe this leads too far off topic... but I thought:

     

    13 stops of dynamic range gives you 2¹³ = 8192 discernible grey levels, while a 10bit image can only convey 1024 levels of grey. So you're losing what your camera delivered in the encoding process.

  11. Some of the arguments here remind me of stills photography over a decade back, when people were starting out and raw capability was starting to be available. Fast forward ten years and no pro would want a camera without that capability, even if they shot JPEGs 90% of the time.

     

    We're back at this stage with these cameras - I'd love to see raw video implemented with sensible software options like we have for stills. That'd really shake up the market, me thinks. Unfortunately the hardware isn't there yet. I don't think there's a sensor readout and bus transfer that can deal with that on a cheap enough basis, is there?

     

     

    There are sensors, actually complete cameras, that could do the job, e.g.:

     

    http://www.lumenera.com/usb3/lt665.php

     

    or

     

    http://www.edmundoptics.de/imaging/cameras/usb-cameras/point-grey-grasshopper-3-high-performance-usb-3-0-cameras/88-514

     

    Getting from there to a handy device incorporating the usual comfy features videographers are used to should be a development project of one or two years.

  12. It's actually quite easy, but it's not simply a camera.

     

    An MFT or DSLR with a sustained, reliable raw shooting rate selectable between 24, 25, and 30 fps would be just fine. Resolution about 16MP.

     

    Add to this post-production software capable of reading all the raw frames, cropping (add stabilization & choose composition), grading and finally downsampling to 4k or 2k clips at 8-12bit greyscale depth as desired - that would be a perfect world!

  13. This is a myth.. you have to record at 10bit to get 10bits of light information from the scene, or latitude in post.. another case of being fooled by math. Say you had 2bits, which is 4 colors, at 100k.. you start to get the idea. You could never recreate all the details of the scene if the information has been truncated to 4 colors, no matter how many pixels you recorded and resampled in post. (you are correct arya44, even though you say you are new to this). It is incorrect to think that there is an inverse relationship between bit depth and pixel depth with respect to depth info from the scene.

     

     

     

    What you're saying is also not strictly true... You would be correct if you took an image of an area of strictly the same colour and intensity everywhere, and the sensor were totally free of noise.  But imagine a colour ramp... the downsampling and averaging of neighbouring pixels *does* yield intermediate values of colour information, and you do get more than the initially available 4 values out!

     

    It's an interpolation, true, and the accuracy of the result depends on the spatial frequency content of the object itself and on the resolution of the camera and sensor system.  But if that's not too far off, the result of the interpolation should come pretty close to what a sensor recording 10bit right away would deliver!

     

    Maybe a tedious calculation would reveal not the full 10 bits of information content, but 9.x bits and a slightly reduced spatial resolution - still, it can well be worthwhile!
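    Here is a little numpy sketch of that effect, with toy numbers (a smooth ramp, 2bit quantization, a guessed noise level):

    import numpy as np

    rng = np.random.default_rng(0)

    ramp = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))     # smooth colour ramp
    noisy = ramp + rng.normal(0.0, 0.05, ramp.shape)         # sensor noise acts as dither
    coded = np.clip(np.round(noisy * 3), 0, 3)               # 2bit capture: 4 values

    binned = coded.reshape(64, 4, 64, 4).mean(axis=(1, 3))   # 4x4 downsample
    print(np.unique(coded).size, np.unique(binned).size)     # 4 vs. many more levels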

  14.   I kept wondering whether the resampling would also work in the temporal domain... the GH4 allows 1080p at 4x frame rates.  Downsampling in the temporal domain, i.e. averaging 4 frames to generate 1, should in principle - if the sensor gain is well calibrated and the noise is around 1 (or at least single-digit) ADU - also allow going from 8bit to 10bit luma information per channel.

     

      Of course it would only work in static parts of the image, creating smear in the moving ones - say, a moving car or train.  But maybe that would create an interesting look of its own?
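      A quick numpy sketch of the idea, assuming a noise level of about 1 ADU (toy numbers, not real GH4 data):

      import numpy as np

      rng = np.random.default_rng(1)

      scene = np.full((1080, 1920), 100.4)    # static scene with sub-ADU detail
      frames = [np.clip(np.round(scene + rng.normal(0, 1.0, scene.shape)), 0, 255)
                for _ in range(4)]            # four 8bit exposures

      merged = np.sum(frames, axis=0)         # sums span 0..1020, a ~10bit range
      print(np.unique(frames[0]).size, np.unique(merged).size)   # more distinct levels in the sum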

     

      m.
