Everything posted by tupp

  1. That explains why we use Fresnels to illuminate smooth cycs and green screens instead of open-face cyclights and open-face flood washes specifically made for that purpose. /s What?! Open-face sources have "hot spots" and "uneven spread?" ... compared to Fresnels?! Please explain. Too busy right now to respond to the rest of your post.
  2. Don't confuse "photon shot noise" with the noise generated by a digital (or analog) system. Photon shot noise arises from the randomness of the photons that strike the film, sensor, video tube, retina, etc. Since photon shot noise applies equally to almost any type of imaging system using electromagnetic waves, yet is not inherent in any of these systems, this type of noise is irrelevant to a discussion of the noise produced by a camera, sensor or digital system. SNR in imaging is not based on RGB values, and it is a metric that is used in analog imaging systems that might not even have RGB values. SNR is essentially the ratio of a signal's amplitude to its noise level, and SNR is usually expressed in decibels. Dynamic range is a similar metric that also applies to both analog (some without RGB values) and digital systems. Not sure how that would work. Sounds a bit shaky. Yes. You can. The noise floor within an imaging system can usually be determined fairly easily; just look at any proper dynamic range chart/test. Keep in mind that the increase in photon shot noise with greater exposure is not inherent in the imaging system itself. No. You are describing bit depth, which is not color depth. Color depth in digital systems is simply the resolution multiplied by the bit depth, raised to the power of the number of color channels, so for an RGB digital system, the formula is: COLOR DEPTH = (BIT DEPTH x RESOLUTION)³ Also, keep in mind that we can have great color depth without any bit depth, as in analog imaging. In addition, because resolution is a fundamental factor of color depth, we can have great color depth in digital imaging systems that have a bit depth of "1," as in digital screen printing. Agreed. We don't usually separate the noise from the color depth of a system -- even the noise has color depth.
  3. Ouch! ... and yet, DxOMark gives color depth scores in "bits" (bit depth?), implying a fundamental misunderstanding of digital color depth. Their explanation of their color depth metric is somewhat vague and based on a dubious characteristic, which they term "color sensitivity." There is a mathematical formula for the absolute color depth of a digital system. A fairly accurate figure can also be given for the absolute number of shades above the noise floor (effective bit depth -- not color depth). Don't know why DxOMark doesn't use these basic metrics.
  4. Actually, resolution is a big factor of color depth, and color depth is largely independent of ISO (the DxOMark "sports" score). On the other hand, the DxOMark "color depth" rated as "bit depth" is dubious (to say the least). Most do not realize that resolution and bit depth are equally weighted factors in determining color depth in digital systems. The actual formula to determine the color depth of digital RGB systems is simple: COLOR DEPTH = (BIT DEPTH x RESOLUTION)³ The bit depth is the number of values per channel, so 10-bit=1024, 12-bit=4096, 16-bit=65536, etc. The resolution is usually that of one color channel of the entire frame, which yields the absolute color depth of the entire digital image (i.e., with 1920x1080 RGB pixel groups, the resolution of one color channel would be 2,073,600). The sketch below works through a 10-bit 1080p example. So, given resolution's equal weighting with bit depth as a factor of color depth, DxOMark's use of a "bit depth" figure to express color depth seems fundamentally flawed. DxOMark's explanation of their color depth metric is vague and apparently involves a characteristic which they call "color sensitivity," but they give no information on how this property is derived.
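For anyone who wants to check the arithmetic, here is a minimal sketch of the formula above in Python, using the 10-bit, 1920x1080 example from the post (the variable names are mine):

```python
# Color depth per the formula above: (bit depth x resolution) ^ channels.
# "Bit depth" here means values per channel (10-bit -> 1024 values), and
# "resolution" is the pixel count of one color channel of the frame.

bit_depth_values = 2 ** 10    # 10-bit -> 1024 values per channel
resolution = 1920 * 1080      # 2,073,600 pixels in one color channel
channels = 3                  # R, G, B

color_depth = (bit_depth_values * resolution) ** channels
print(f"{color_depth:.2e} total possible color combinations")  # ~9.57e+27
```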
  5. Thank you, but my points were so cromulent that I fear I might have given them a little too much covfefe.
  6. I did not disregard binning. I directly addressed binning: Actually, binning yields a slightly reduced signal-to-noise ratio compared to that of equivalently larger photosites. In the first place, there are fewer photons/light-waves captured with four binned photosites as compared to a single equally-sized photosite. This is due to some photons/light-waves being wasted when striking the border between the binned photosites. In addition, there is a minute increase in noise inherent in binning. It's very tiny, but it appears nonetheless (see the toy simulation below). Also, the binned photosites don't have less noise than an equivalently sized larger photosite with the same sensor tech. Furthermore, as I said above, you can't compare apples to oranges -- if you bin one sensor but not the other, you are presenting two different scenarios regarding post-sensor processing, and you are now dealing with two variables (instead of one): photosite size; and post-sensor processing. Again, larger photosites give better performance than smaller photosites, as long as all other variables are equivalent -- identical sensor tech and identical post-sensor processing. No. As I mentioned, there is a slight increase in noise when binning. We are discussing sensors and photosites -- not printing. Some of the rain drops will be lost in the "48" grid, as a few drops will land on the border between the vessels and a few drops will stick to the inside of the "48" vessels when you pour (bin) each group of "48" vessels into each respective "12" vessel. Firstly, noise doesn't increase just because an image is projected to a larger size -- the noise level stays the same relative to the image, regardless of projected size. Secondly, even if noise increased when a (say) 12MP image was projected, the exact same thing would happen to a 12MP image binned from a larger resolution (say 48MP). No. My "model" applies to all digital imaging sensors, including those to which binning (summed or averaged) has been applied. Larger photosites yield greater signal-to-noise than smaller photosites on sensors with the same tech, all other variables being equal.
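To illustrate the point numerically, here is a toy Monte Carlo sketch (not a real sensor model; the border-loss fraction and read-noise figure are invented for illustration): one large photosite needs a single readout, while four binned photosites collect slightly less light and accumulate four readouts' worth of read noise.

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 200_000

photons = 1000.0      # mean photons landing on the photosite footprint
read_noise = 3.0      # electrons RMS per readout (assumed figure)
border_loss = 0.05    # light lost to the borders between binned sites (assumed)

# One large photosite: all the light, one readout.
large = rng.poisson(photons, trials) + rng.normal(0, read_noise, trials)

# Four binned photosites: slightly less collecting area, four readouts.
quarter = photons * (1 - border_loss) / 4
binned = sum(
    rng.poisson(quarter, trials) + rng.normal(0, read_noise, trials)
    for _ in range(4)
)

snr = lambda x: x.mean() / x.std()
print(f"large photosite SNR: {snr(large):.1f}")   # ~31.5
print(f"4x binned SNR:       {snr(binned):.1f}")  # slightly lower, ~30.3
```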
  7. No. Dynamic range, noise and sensitivity are all part of the same thing. Larger photosites have less noise, i.e. a lower noise floor (all other variables being equal). Dynamic range essentially is the range of amplitude above the noise floor. So, with a lower noise floor we get a greater dynamic range (the sketch below makes this arithmetic concrete). In addition, lower noise means greater effective sensitivity. Larger photosites yield images with less noise, and, thus, higher effective ISOs. So, larger photosites simultaneously provide greater dynamic range, reduced noise and increased sensitivity. Certainly, a larger sensor receives more photons/light-waves (all other variables being equal). Nevertheless, the size of the sensor inherently has nothing to do with its performance regarding DR/noise/sensitivity. If you take two full frame sensors that are absolutely equal in every way, except that one has larger photosites than the other, the sensor with the larger photosites will have better performance in regards to DR/noise/sensitivity, at the sensor level (all post-sensor processing being equal). In addition, if you take the exact same scenario, and merely swap out the full frame sensor having bigger photosites for a M4/3 sensor that has those same size photosites, the M4/3 sensor will have better DR/noise/sensitivity performance, at the sensor level. Keep in mind that the full frame sensor and M4/3 sensor in this scenario are absolutely equal in every way, except that the M4/3 sensor has larger photosites than the FF sensor (and remember that all post-sensor processing on both sensors is equivalent). No doubt. However, if you bin two sensors (with identical sensor tech) equally, the sensor with the larger photosites will give greater dynamic range, reduced noise and increased sensitivity. You can't "compare apples to oranges." All other variables must be equal -- if one sensor is binned, then the other sensor must be equally binned. Again, this is comparing apples to oranges. The newer sensor tech in the A7R introduces additional variables other than merely larger photosites. If you made a M4/3 sensor with the A7R sensor tech and gave that sensor larger photosites than the A7R sensor, the M4/3 version would have better DR/noise/sensitivity performance. Once again, this is comparing apples to oranges. If the BMCC sensor has larger photosites, that certainly helps with its capture dynamic range, but there is a huge difference in both the sensor tech and the post-sensor processing between the BMCC and current M4/3 cameras. The BMCC has an older sensor, and its greatest capture dynamic range comes from its raw mode, which applies hardly any post-sensor processing. On the other hand, most M4/3 cameras have a lot of post-sensor processing, including noise reduction, which can increase sensitivity but not necessarily capture DR. If you were to take a sensor from any of the cameras that you mention and create another sensor with the exact same sensor tech, but with larger photosites, the sensor with the larger photosites would yield greater DR, less noise and increased effective sensitivity.
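Here is a back-of-envelope sketch of the noise-floor/DR relationship described above. The full-well and read-noise numbers are hypothetical; the point is only that full-well capacity scales with photosite area while the readout noise floor stays roughly constant, so the larger photosite gains stops of DR.

```python
import math

def dynamic_range_stops(full_well_e, noise_floor_e):
    # DR is the range of amplitude above the noise floor, in stops.
    return math.log2(full_well_e / noise_floor_e)

# Hypothetical photosites built on the same tech: quadrupling the area
# roughly quadruples the full well, while the read-noise floor stays put.
small = dynamic_range_stops(full_well_e=12_000, noise_floor_e=3.0)
large = dynamic_range_stops(full_well_e=48_000, noise_floor_e=3.0)

print(f"small photosite: {small:.1f} stops")  # ~12.0
print(f"large photosite: {large:.1f} stops")  # ~14.0
```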
  8. Just to reiterate: in regards to a sensor with a given quality/configuration, it is the photosite size that influences maximum sensitivity/DR -- not the size of the sensor. When comparing a full frame sensor with a zillion megapixels (tiny photosites) to a M4/3 sensor with much fewer, larger photosites, the M4/3 sensor will exhibit a higher maximum usable sensitivity and a greater capture dynamic range. Again, this principle assumes that all other variables are equal, such as: the sensor's internal configuration/design; the A/D converter; post-sensor NR; etc. Of course, if we compare a full frame sensor and a M4/3 sensor having the same resolution and the same internal configuration/design, the full frame sensor will have larger photosites, and thus greater max sensitivity and dynamic range. So, if you embiggen the photosites, you generally embiggen the sensitivity/DR, regardless of what the "jabronis" say.
  9. If the ISO, shutter speed and f-stop are the same, then the exposure of the two cameras should be the same, regardless of the sensor size (see the sketch below). With identical settings and barring the use of filters or extreme color/contrast profiles, the only difference in exposure might be due to lens transmittance. Keep in mind that ISO is "sensitivity," so two cameras set to the same ISO should have the same light sensitivity. Noise is an entirely different issue, but suffice it to say, larger photosites (sensor pixels) usually mean less noise (more dynamic range), all other variables being equal. So, if a full frame sensor and a M4/3 sensor have the same resolution, the full frame sensor will likely have less noise (all other variables being equal).
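A minimal sketch of why sensor size can't change exposure: the standard ISO-normalized exposure-value formula contains only the f-number, the shutter speed and the ISO, with no term for sensor dimensions.

```python
import math

def ev100(f_number, shutter_s, iso):
    # Exposure value normalized to ISO 100:
    # EV100 = log2(N^2 / t) - log2(ISO / 100).
    # Nothing about the sensor appears in this relationship.
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Identical settings give the same exposure on a full frame or M4/3 body.
print(f"{ev100(f_number=2.8, shutter_s=1/50, iso=800):.2f} EV")  # ~5.61 EV
```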
  10. As the saying goes, "the best camera is the one that you have with you." So, if you are traveling, you might consider a serious "large sensor" compact camera such as the Panasonic LX100. Do you really want to tote around a camera body and interchangeable lenses? The LX100 is relatively inexpensive and yields nice 4K footage, and, most importantly, it sports a spectacular, fast Leica zoom. On the other hand, it only shoots 60p in HD, but how important is it to have 4K 60p footage of your vacation? Well, if your travels take you to India, the LX100 seems to work well there!
  11. .. Or perhaps Tim Cook just can't sustain the reality distortion field. Chiclet/island keyboards suck, regardless. Back in the early 1960s, IBM spent about two years field testing differently shaped keys for their new Selectric typewriter. They found that the "cupped-top" keys (along with a certain amount of travel and key spacing) overwhelmingly gave the best performance. Bell Telephone came to the same conclusion with the cupped-top keys on their touch-tone phones starting in the late 1960s. Two years makes a fairly exhaustive field test, so that basic key design is hard to beat by some fashion-conscious industrial designer who favors form over function. There are still companies that field test, but probably not as thoroughly as was common in the past. Since the late 1990s, Apple has done very little field testing.
  12. In memoriam: He was so young and talented. Sad. Thanks for posting this.
  13. I would be scared shooting that interview. If you accidentally busted a take, you might find yourself sipping polonium tea.
  14. Yes. Those kids are talented. I have never recorded raw on the EOSM, and I don't know whether H.264 has to be recorded while raw is recording. On the other hand, I have run Tragic Lantern with All-I frames, Full HD on the EOSM with a boosted bit rate -- while using the Fujian 35mm f1.7! The All-I frames along with the boosted bit rate give more robust frames/files.
  15. The Fujian 35mm definitely covers the entire APS-C sensor, and there don't seem to be any reports that the "no-brand" 25mm f1.4 APS-C lens vignettes on APS-C. Here is a video shot by our own @maxotics on the EOSM and the Fujian 35mm with its exceptionally peculiar focal plane (which would likely frustrate the forum's staunch "DOF calculators"). Here is footage from an A7S in APS-C mode. The description of the "no brand CCTV C-mount" lens matches the 25mm f1.4 APS-C lens reviewed in the link I posted above. Regardless, there certainly are a few C-mount lenses that cover APS-C.
  16. Can you use one of the C-mount lenses that cover the entire APS-C sensor, such as the Fujian 35mm f1.7 or this 25mm f1.4?
  17. It might be good to consider a hackintosh. Here is a $70 hackintosh that outperforms a 2016 MacBook Pro and that can edit 4K video in FCP: I am not a post-production expert, but every pro editor with whom I've worked always transcodes footage for optimal performance on their NLE -- they never edit highly compressed camera-native files. A sketch of what that transcoding step can look like follows below.
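For what it's worth, here is a minimal sketch of that kind of transcode pass, assuming ffmpeg is installed and on the PATH; the folder names and the choice of ProRes 422 HQ are just placeholders, not a recommendation from the original post.

```python
import subprocess
from pathlib import Path

SRC = Path("camera_originals")  # hypothetical folder of long-GOP H.264 camera files
DST = Path("transcoded")        # hypothetical output folder of edit-friendly files
DST.mkdir(exist_ok=True)

for clip in sorted(SRC.glob("*.mp4")):
    out = DST / (clip.stem + ".mov")
    # Transcode to ProRes 422 HQ with PCM audio -- an intraframe codec
    # that most NLEs decode and scrub far more smoothly than camera H.264.
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "3",
        "-c:a", "pcm_s16le",
        str(out),
    ], check=True)
```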
  18. Right... Did you try turning off your computer and then turning it back on? I've heard that doing so is the best remedy for oversimplified anecdotal computer problems. Actually, Android can run on x86 processors, and, as I recall, Android was briefly listed as a Linux distribution on Distrowatch in its early days, prior to the Google acquisition. However, I didn't mention the possibility of using MLVFS on Android to suggest that someone should try it. You left out an important OS that runs MLVFS (Linux), and I was merely giving another example of an OS (Android Linux) on which MLVFS could work. MLVFS is open source software, so it can probably be compiled to work on several other platforms (the BSDs notwithstanding).
  19. Thanks for this article. Very helpful! From your article: MLVFS also works in most Linux distros (and, consequently, probably in Android as well). It's just a simple compile using "make."
  20. They don't have to do any of that. They're an open source organization. It doesn't seem like they are trying to appeal to the typical Canon/EF/AF/IS type of shooter. They are developing a cinematography camera (and an exceptionally versatile one at that), so they probably aren't too concerned about making the "skin tones" perfect right out of the camera (nor about 5-axis IBIS, DPAF, etc.). They seem more dedicated to boosting the image quality attributes that appeal to cinematographers. As the project is open source, this camera offers the most options for configuration and for imaging. In any case, the footage that has appeared over the last couple of years looks nice, and they seem to be doing a better job than Blackmagic and AJA did with the same sensor. Some want a quality cinematography camera that is versatile/modifiable and completely controllable, allowing the creation of images a little more distinctive than those from most other cameras. The Axiom appeals to shooters of that type.
  21. I love the look of the Fisher-Price Pixelvision!: @BTM_Pix: If they are selling at 99 cents, I'll take a dozen!
  22. I thought the footage looked nice. However, I am fairly sure they have footage showing "skintones." On the other hand, this camera is still in the beta stage, so nothing is final, and, judging from the interview, there will be a few choices of low-level color styles and film stock emulation. Apertus is an open source organization. They probably are not trying to sell high volume to typical GH/EF shooters. It is likely that they are more interested in versatility and quality (and, of course, freedom from proprietary constraints). By the way, these guys started around 2006, so they are tenacious. As I recall, their first raw camera was one of the open source Elphel models, around 2008. They were early pioneers of a few things that we now take for granted, including interchangeable lens mounts and touch screen controls/remotes. They have already released the developer version of the Axiom, and I have no doubt that they will release the production model (sounds like it could be next year!). They will likely get more out of the CMOSIS CMV12000 sensor than Blackmagic and AJA did.
  23. Thanks for the link! This progress update is exciting. The footage from the Axiom has been looking nice for the last couple of years, and, judging from the interview, it sounds like there will be plenty of choice in regards to looks/"film stocks." Actually, they are giving plenty of choice in regards to everything! Can't wait till they start shipping production models!