Everything posted by tupp

  1. I don't see a banded transition between bit-depth levels, nor do I see any "rainbowing." If it were an actual banded transition between bit-depth levels, the transition would appear sharper, and we wouldn't see all that gradation within the area in question -- there would just be more bands instead of gradation. It looks like a cast shadow from the subject onto the background. The "banding" that you might perceive is probably the umbra and penumbra of the cast shadow. It appears that you were using a large, softer, cooler source (possibly daylight balanced) and that warmer ambient light (possibly tungsten balanced) was filling the cast shadow.
  2. @Inazuma Very interesting! I can see how the banding is reduced on the third clip (compared to the second clip). However, I confess that I prefer the contrast in the original clip over that in the processed clips. Thanks for the examples!
  3. Looking forward to watching the video! If it is an easy process, I might reconsider shooting with the A7s. Thanks!
  4. @Inazuma Thanks for the info, and thank you for linking the article. One of the comments shows the dramatic advantage of "diffuse" dithering over merely applying noise. I have used noise and blurring with still photos to eliminate banding, but with stills it is easy to localize the effect within the image. Has anyone used masks (or other selection tools) to keep blurring/noise/dithering to a local area in video?
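For what it's worth, here is a minimal sketch of the localized-dithering idea asked about above, using NumPy. The function, the mask, and the gradient are illustrative assumptions only -- this is simple pre-quantization noise, not true error-diffusion ("diffuse") dithering.

```python
import numpy as np

def dither_to_8bit(img, mask=None, strength=1.0):
    """Quantize a float image (0..1) to 8 bits, adding sub-step noise
    before rounding so smooth gradients break up instead of banding.
    An optional mask (0..1) limits the effect to a local area."""
    noise = (np.random.random(img.shape) - 0.5) * (strength / 255.0)
    if mask is not None:
        noise = noise * mask
    return np.clip(np.round((img + noise) * 255.0), 0, 255).astype(np.uint8)

# Example: a shallow gradient that bands badly at 8 bits, dithered only
# on the right half via a hypothetical mask.
gradient = np.tile(np.linspace(0.40, 0.45, 1920), (1080, 1))
mask = np.zeros_like(gradient)
mask[:, 960:] = 1.0
out = dither_to_8bit(gradient, mask)
```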
  5. What is the best way to deal with potential banding from the A7s and from other cameras that produce 8-bit files? Here is an example of banding that appeared in A7s footage after pushing it in post. Thanks!
  6. I had to rethink my attitude towards artifacts (and artificiality) after seeing this video: http://www.youtube.com/watch?v=ugeVou06fWI
  7. GH4 to get Log

    Not sure on this point. Exposure and "exposure range" are two different things. Also, exposure range and dynamic range are two different things. There is no universal correlation between bit-depth intervals and stops/luma/EV. Again, 8 bits worth of tones can be mapped to a system with a dynamic range of 25 stops, while 32 bits worth of tones can be mapped to a system with a dynamic range of only 5 stops. Indeed, cameras currently exist in which the user can select 8-bit, 12-bit, 16-bit and 24-bit depths while the camera's dynamic range remains unchanged. So, the value of each bit interval is determined by the dynamic range (or, usually, the amplitude range) of the digital system, along with the number of bits used. Of course, this scenario is fairly simple in linear-response systems. Systems with adjusted response curves (or with randomly mapped intervals) are much more complex.
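A minimal sketch of that last point, assuming a simple linear-response system; the noise-floor and clip values are made up for illustration, not taken from any real camera:

```python
import math

def amplitude_step(full_scale, bits):
    """Size of one code-value step for a linear encoding with 2**bits levels."""
    return full_scale / (2 ** bits - 1)

# A hypothetical linear sensor with a noise floor of 1 and a clip level of
# 2**25 has 25 stops of dynamic range no matter how finely it is quantized.
noise_floor, clip = 1.0, 2.0 ** 25
dynamic_range_stops = math.log2(clip / noise_floor)   # 25.0 either way

print(dynamic_range_stops, amplitude_step(clip, 8))   # coarse steps, same DR
print(dynamic_range_stops, amplitude_step(clip, 32))  # fine steps, same DR
```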
  8. GH4 to get Log

    The rest of your post is excellent, but these two lines are not true. Bit depth and dynamic range are two different and independent properties. Bit depth is the number of tonal/amplitude steps in a digital system. Dynamic range gives the usable amplitude range relative to noise (similar to signal-to-noise ratio), and dynamic range applies to both digital and analog systems. To illustrate how bit depth and dynamic range are independent, consider that 8 bits of tonal intervals can be mapped to a system with a dynamic range of 25 stops, but, by the same token, 32 bits of tonal intervals can be mapped to a system with a dynamic range of 5 stops. Furthermore, a film emulsion (or analog magnetic tape) can have a dynamic range of, say, 7 stops, but that emulsion (or tape) will have no bit depth -- it's analog.
  9. With a captured image, it is true that resolution can be sacrificed for increased bit depth -- folks have been doing it for years. On the other hand, the color depth of a captured image can never be increased (unless something artificial is introduced). It is important to keep in mind that bit depth and color depth are not the same property. The relationship of bit depth, resolution and color depth is fundamental to digital imaging. With all other variables remaining the same, the relationship between bit depth, resolution and color depth in an RGB system is: COLOR DEPTH = (BIT DEPTH x RESOLUTION)^3. So, if two adjacent RGB pixel groups are cleanly summed/binned into one large RGB pixel group, the bit depth will double. However, the color depth will remain the same or be slightly reduced (depending on the efficiency of the conversion). The color depth can never increase. As others in this thread have suggested, there appears to be something else going on other than normal banding in the OP's first image. Even so, if banding exists in an image, increasing the bit depth at the cost of resolution will not eliminate the banding, unless the banding is only a few pixels wide (or unless the resolution becomes so coarse that the image is no longer discernible). Again, when swapping resolution for more bit depth, the bit depth has been increased, but not the color depth.
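A small sketch of the binning arithmetic described above, assuming "bit depth" in the formula is counted as the number of tonal levels (so cleanly binning two pixels takes an 8-bit channel to 9 bits, doubling the levels); the numbers are illustrative only:

```python
def color_depth(bits_per_channel, resolution):
    """Color depth of an RGB system per the formula above:
    (tonal levels x resolution) cubed."""
    levels = 2 ** bits_per_channel
    return (levels * resolution) ** 3

before = color_depth(8, 8_000_000)   # original: 8-bit, 8M RGB pixel groups
after = color_depth(9, 4_000_000)    # clean 2:1 binning: levels double, resolution halves
print(before == after)               # True -- the color depth did not increase
```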
  10. Well, that quote says it all (except for the fact that circle-of-confusion and depth-of-field are actually two different {but related} properties). Nowhere in that quote is resolution mentioned. Image format is mentioned as "often associated" with circle-of-confusion values, but the quote states that the most appropriate basis of circle-of-confusion is visual acuity. Visual acuity is not resolution. If we were to base depth-of-field on resolution, we would encounter all kinds of silly, varying DOF ratings of the same optical system, which would change with each film stock/sensor and with the whim of any person viewing the captured image who decides to manipulate it long after it has been captured.
  11. That "free parameter" is determined by a fraction of the focal length, not by sensor/film resolution.
  12. No. Post-capture processing and the "acceptable circle of confusion" don't affect the depth-of-field. Depth-of-field is an optical property that exists regardless of whether or not the image is captured. I seem to recall reading a similar point somewhere in this thread.
  13. When discussing depth-of-field in the "real world," the term applies only to real-world optical systems. No. Re-read the "simplifications." All of those post-capture processes are ignored by the DOF formulas. Depth-of-field is an optical property -- it has nothing to do with sharpening or softening an image after the image is received by the sensor/film. Furthermore, not all of the methods that you mentioned change the resolution. Full-frame, half-frame, 16mm, IMAX, 4"x5", 11"x14" -- the size of the medium at the focal plane really has nothing to do with the depth-of-field. Depth-of-field and an image's "look" are two different things, but, certainly, DOF can affect the "look" of an image. Well, this passage that you have quoted says it all: Resolution of the imaging medium is ignored. When the resolution of the capture medium is similar to (or coarser than) the optical resolution, the captured image will seem more blurred throughout, making it more difficult to discern the depth-of-field. Nevertheless, the depth-of-field (an optical property) remains the same. "Acceptable circle of confusion" and DOF are two different properties. Magnification of the image at the film plane affects resolution, but not depth-of-field. Cropping into and enlarging a captured digital image can lower the effective resolution, but manipulating a captured image has absolutely nothing to do with depth-of-field. I doubt that you would notice any difference in DOF with the same f-stop on the same lens by merely changing film stocks. Regardless, you weren't increasing the depth-of-field -- you might have been creating an image with coarser resolution overall, thus making the overall image look softer and more uniformly in focus. Big sensors matter, but the size of the sensor doesn't change the DOF. Depth-of-field is an optical property.
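For reference, the usual thin-lens DOF approximations make the point explicit -- sensor/film resolution never appears as an input. A rough sketch (the variable names and example values are mine):

```python
def dof_limits(focal_mm, f_number, coc_mm, subject_mm):
    """Near/far limits of acceptable focus from the standard thin-lens
    approximations. Inputs: focal length, f-number, circle of confusion
    (based on viewing conditions/visual acuity), and subject distance.
    Nothing about the resolution of the capture medium appears here."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# e.g. a 50mm lens at f/2.8, CoC 0.03mm, subject at 3m:
print(dof_limits(50, 2.8, 0.03, 3000))   # roughly (2730mm, 3330mm)
```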
  14. No. I do not wish to get into a semantics argument, but the definitions of depth-of-field and circle-of-confusion involve basic, well-established optical properties that apply to imaging. The resolution of the sensor/film (and projector/monitor) is not a consideration, as depth-of-field is a purely optical characteristic. The Wikipedia page on depth-of-field addresses this very point (note limitation #6). No need to get personal, but you would be mistaken if you considered sensor/film resolution as an optical property. The term "Softer" is generic. It can apply to the properties of both resolution and sharpness. I never stated that you wrote so. The focal plane/surface is part of the optical system, but the resolution of the medium at the focal plane/surface has nothing to do with optics. Huh? I am not sure what you are proposing, but it appears that you are suggesting that the depth-of-field will change if I merely vary the ISO setting on my camera while the lens and its aperture remain the same. Is that what you are saying? If so, would you be interested in a little wager?
  15. No. Depth-of-field (and circles-of-confusion) is a purely optical property, regardless of what is receiving the image at the focal plane. Coarser resolution can make the sharply focused areas look softer (similar to the less sharply focused areas), but a given depth-of-field optically remains constant, regardless of sensor/film resolution. No. Depth-of-field is an optical property. The size and resolution of the film/sensor have nothing to do with the optical property of depth-of-field. The only exception to this rule occurs when the sensor corners creep into the optically inferior edge of the image circle. Yes. Some lenses resolve more sharply than others.
  16. Shoot static on an old TV set.
  17. It's definitely useful to know that there are adapters that convert most medium format mounts to Mamiya 645. However, what do you mean by "creates shadows?" What causes such shadows? Another advantage of having the tilt mechanism closer to the film plane is that one might be able to utilize more of the image circle, which possibly enables the lens to be pushed past the extremes possible with a more distant, medium format tilt/shift adapter (eventually causing the aforementioned shadows).
  18. Consider an E-mount to Nikkor tilt-shift adapter. If there is no vignetting with this adapter, the tilt-shift features should work with medium format lenses via an additional plain, cheap Nikkor-to-medium-format adapter. Of course, it will certainly work as a non-tilt/non-shift adapter for the aforementioned Nikkors. There are Nikkor-to-medium-format tilt/shift adapters, but then you are locked in to a particular medium format mount, with no chance of tilting/shifting the Nikkors. In regard to the rumored USD$1,800 price of the A7s, I have always been a loud proponent of full frame sensors/cameras for video. However, it is disappointing that the best 4K bit depth from the A7s is 8 bits -- and that is only possible through the HDMI cable. The GH4 can record 4K 8-bit to an SD card for the same price, plus output 10-bit over its HDMI cable. Furthermore, with a focal reducer, the GH4 can mimic a super 35mm frame with an extra stop of sensitivity.
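Rough arithmetic behind the focal-reducer remark, as a sketch; the 0.71x figure is a typical focal-reducer magnification, not a claim about any particular adapter:

```python
import math

reducer = 0.71          # typical focal-reducer magnification (assumption)
mft_crop = 2.0          # Micro Four Thirds crop factor vs. full frame

effective_crop = mft_crop * reducer           # ~1.42, in Super 35 territory
stop_gain = math.log2(1.0 / reducer ** 2)     # ~1 stop more light on the sensor
print(effective_crop, stop_gain)
```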
  19. G6 vs. Forza 18k

    Good point on the G6's focus peaking! Another advantage of using the G6 instead of the Forza is that there is no confusion between the ACs and electricians if I say, "Set up the 18K!"
  20. G6 vs. Forza 18k

    Not sure which to get, but the Forza 18K camera shoots full res at 60fps. However, the G6 does have wireless control. Wonder if the Forza will take my Fujian 35mm, f1.7 without vignetting.
  21. The "One-Cam" looks like a typical machine vision camera with a recorder. The 9 stops of DR is in line with other such cameras. Some rental houses offer the IO Industries cameras, and they have had booths at NAB and Cinegear for the past few years. However, one of the most impressive machine vision cinematography set-ups is this project using a Point Grey 4K, 11.5-stop DR camera. The guy developed an encoder that recorded in real time to cDNG. He even made a touch control interface. The footage looks fairly clean, too, as evidenced by this screen cap and this screen cap. Also, the early Apertus cameras used open-source Elphel machine vision cameras. Of course, the Apertus team developed a lot of stuff years ago, including encoding schemes and touch control interfaces.
  22. I've gotten good results with this cheap shotgun mic. The price is low enough that you still have money left over for a more professional mic, while keeping this one as a backup. I think that Audio Technica gives a lifetime warranty.
  23. There are several DIY camera cage vids on YouTube. Here's one shown with a Panasonic camera.
  24. A couple of points: - the same f-stop on every lens of the same focal length won't necessarily transmit the same amount of light to the focal plane, which is why we have T-stops -- they account for the light "Transmission" factor; - the field of view doesn't change from one lens to the next if the (effective) focal length remains the same (barring any condensing optics behind the aperture, such as a speed booster).
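A tiny sketch of the f-stop/T-stop relationship mentioned in the first point; the transmittance figures are illustrative assumptions:

```python
import math

def t_stop(f_number, transmittance):
    """T-stop = f-number divided by the square root of the lens transmittance."""
    return f_number / math.sqrt(transmittance)

# Two lenses set to f/2.0, one passing 90% of the light and one passing 75%:
print(t_stop(2.0, 0.90))   # ~T2.1
print(t_stop(2.0, 0.75))   # ~T2.3
```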
  25. Anything open source. Right now I am using Cinelerra-CV.