Posts posted by tupp

  1. 8 hours ago, IronFilm said:

    At the moment third party adapters range from twenty bucks or so, up to a thousand dollars or so. But due to bundling it up, and vast economies of scales, they would cost much much less to produce in terms of how much it would add to the final price of an EVA1 MFT,

    ... Not to mention that Panasonic could customize the external housing of the smart adapters to follow the form of the camera, employing an extra reinforcement flange that bolts the adapter to the camera body.

     

    Clueless EF shooters would mount (and electronically control) their lenses with no wobble... and they would be none the wiser that they were actually using an adapter!

  2. After scanning this thread, this method reminds me of the wavelet decompose plug-ins for the GIMP, a functionality that has been in open-source software for years.  Basically, these plug-ins separate detail "frequencies" into their own separate layers.  I use wavelet decompose mostly for skin retouching, but some have been using it for sharpening for quite a while.

     

    One of the wavelet decompose plug-ins can separate an image into 100 different frequency layers, but I can't imagine why that many separate frequencies would ever be needed.

     

    I don't think that proper wavelet decompose functionality has yet appeared in proprietary imaging software.  Often, advanced features such as this show up in Photoshop years after the GIMP, and these "new" features are usually much trumpeted by the Adobe crowd.
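    For anyone curious how these plug-ins work under the hood, here is a minimal Python sketch of the same idea: each detail layer is the difference between successive blurs, so summing all the layers reconstructs the original exactly.  The `box_blur` helper, the radii, and the 1D signal are simplifications of my own for brevity -- the actual GIMP plug-ins use proper wavelet/Gaussian kernels on 2D images.

```python
def box_blur(signal, radius):
    # crude moving-average blur; a larger radius keeps only lower frequencies
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def decompose(signal, radii=(1, 2, 4)):
    # split into detail layers plus a low-frequency residual:
    # each layer is the difference between successive blurs
    layers = []
    current = list(signal)
    for r in radii:
        blurred = box_blur(current, r)
        layers.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    layers.append(current)  # residual (lowest frequencies)
    return layers

def recompose(layers):
    # the layers telescope, so summing them restores the original exactly
    return [sum(vals) for vals in zip(*layers)]
```

    Retouching or sharpening then amounts to editing (or scaling) individual layers before recomposing.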

  3. 6 hours ago, BTM_Pix said:

    What I can say from all the testing I've done with it is that the sheer versatility of this thing in using native MFT, B4, PL, Canon AF, Contax Zeiss, M42 and Nikon lenses is that you'll get no arguments from me about what mount Panasonic have missed a trick not putting on the EVA ;) 

    Indeed.

     

    For some reason, most manufacturers (and many shooters) can't comprehend the incredible advantages of starting with a shallow lens mount on the camera.  All that is needed after that is smart adapters (perhaps with some customization).

  4. On 7/27/2017 at 1:48 PM, TheRenaissanceMan said:

    It depends on the fixture of course--purpose-made open faces can certainly provide an even wash-- but your standard redhead/blonde isn't exactly providing a smooth clean beam.

    Redheads and blondes generally provide a very wide, smooth beam, more so than many Fresnels and other fixtures with refractive optics.

     

     

    On 7/27/2017 at 1:48 PM, TheRenaissanceMan said:

    And fresnels aren't as even as a leko, but the way they taper off at the edges is very pretty and works great for lighting talent if you want a hard source (particularly at full spot).

    Lekos (ellipsoidals) aren't necessarily smoother than Fresnels or open-face lights.  Lekos are certainly limited in their application, as their beam is relatively narrow with a hard cut at the edge, and as they are significantly less efficient than Fresnels and open-face lights (especially open-face lights) of comparable wattage.

     

    On "full spot," open-face lights give almost exactly the same results as Fresnels, but the open-face fixtures generally have more spill outside of the spot, with a more gradual fall-off pattern.

     

     

    On 7/27/2017 at 1:48 PM, TheRenaissanceMan said:

    If you have work where you've shone a redhead/blonde/Arrilite/Mickey Mole directly on talent's face and gotten good results, I'd love to see it. 

    Below are three images shot with direct light from either a Fresnel fixture or an open-face fixture of comparable size.  I shot at least one of these pictures.

    [three attached portrait images]

     

    Can you tell me which ones were lit with a Fresnel source and which ones were lit with an open-face source?

     

  5. 15 hours ago, TheRenaissanceMan said:

    Open faced sources usually have more output, as well.

    15 hours ago, TheRenaissanceMan said:

    And have hot spots, uneven spread, and a generally harsh quality of light.  Most lights used on film sets are fresnels,

    That explains why we use Fresnels to illuminate smooth cycs and green screens instead of open-face cyclights and open-face flood washes specifically made for that purpose.  /s

     

    What?!  Open-face sources have "hot spots" and "uneven spread?" ... compared to Fresnels?!  Please explain.

     

    Too busy right now to respond to the rest of your post.

  6. 15 hours ago, Ilkka Nissila said:

    Noise depends on the luminosity or number of photons detected; it is not constant but increases approximately proportionally to the square root of the signal (the luminosity or photon count).

    Don't confuse "photon shot noise" with the noise generated by a digital (or analog) system.  Photon shot noise arises from the random arrival of the photons that strike the film, sensor, video tube, retina, etc.  Since photon shot noise applies equally to almost any type of imaging system using electromagnetic waves yet is not inherent in any of those systems, this type of noise is irrelevant to a discussion of the noise produced by a camera, sensor or digital system.
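    To illustrate that shot noise is just photon-arrival statistics rather than a camera property: photon counts follow a Poisson distribution, whose spread grows roughly as the square root of the mean count.  A rough Python sketch (Knuth's Poisson sampler; the trial counts and seed are arbitrary choices of mine):

```python
import math
import random
from statistics import pstdev

def poisson_sample(rng, lam):
    # Knuth's algorithm; adequate for modest mean photon counts
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def shot_noise_std(mean_photons, trials=4000, seed=42):
    # spread of simulated photon counts around their mean
    rng = random.Random(seed)
    return pstdev(poisson_sample(rng, mean_photons) for _ in range(trials))
```

    Quadrupling the mean photon count roughly doubles the spread -- the square-root behavior mentioned above, and it happens before any sensor electronics are involved.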

     

     

    15 hours ago, Ilkka Nissila said:

    The number of distinct tonal or colour values (that can be distinguished from noise) can be calculated if the SNR is known as a function of luminosity or RGB values.

    SNR in imaging is not based on RGB values; it is a metric used in analog imaging systems that might not even have RGB values.  SNR is essentially the ratio of a signal's amplitude to its noise level, and SNR is usually expressed in decibels.  Dynamic range is a similar metric that also applies to both analog systems (some without RGB values) and digital systems.
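    For reference, the conventional decibel expression of an amplitude ratio takes only a couple of lines of Python (the 20x factor is for amplitude ratios; power ratios use 10x):

```python
import math

def snr_db(signal_amplitude, noise_amplitude):
    # amplitude ratio in decibels: 20 * log10(S/N)
    return 20.0 * math.log10(signal_amplitude / noise_amplitude)
```

    So a signal ten times its noise amplitude is 20 dB, and a hundred times is 40 dB -- no RGB values required anywhere.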

     

     

    15 hours ago, Ilkka Nissila said:

    From this graphs it is possible to calculate how much tonal or colour information there is in the image which is what DXOMark is estimating.

    Not sure how that would work.  Sounds a bit shaky.

     

     

    15 hours ago, Ilkka Nissila said:

    You cannot estimate how many colours or tones are separable from noise by assuming that there is a fixed "noise floor".

    Yes.  You can.  The noise floor within an imaging system can usually be determined fairly easily.  Just look at any proper dynamic range chart/test.

     

    Keep in mind that the increase in photon shot noise with greater exposure is not inherent in the imaging system itself.

     

     

    15 hours ago, Ilkka Nissila said:

    Color depth is just the number of bits that are used to encode the colour values of the pixel.

    No.  You are describing bit depth, which is not color depth.

     

    Color depth in digital systems is simply the number of values per channel multiplied by the resolution, raised to the power of the number of color channels, so for an RGB digital system, the formula is: 

    COLOR DEPTH = (BIT DEPTH x RESOLUTION)³

     

    Also, keep in mind that we can have great color depth without any bit depth, as in analog imaging.

     

    In addition, because resolution is a fundamental factor of color depth, we can have great color depth in digital imaging systems that have a bit depth of "1," as in digital screen printing.

     

     

    15 hours ago, Ilkka Nissila said:

    Color depth [snip] It doesn't consider noise.

    Agreed.  We don't usually separate the noise from the color depth of a system -- even the noise has color depth.

  7. 37 minutes ago, MdB said:

    This just goes to prove that so few people understand DXO scores and measurements and therefore go and complain about it. 

    Your ignorance is not other people’s problem. I know you think the GH5 should get top billing on everything, but not understanding the number or how they are presented doesn’t make them wrong, it just makes you look desperate. 

    Ouch!

     

    ... and yet, DxOMark gives color depth scores in "bits" (bit depth?), implying a fundamental misunderstanding of digital color depth.  Their explanation of their color depth metric is somewhat vague and based on a dubious characteristic, which they term "color sensitivity."

     

    There is a mathematical formula for absolute color depth of a digital system.  A fairly accurate figure can also be given to represent the absolute number of shades above the noise floor (effective bit depth -- not color depth).  Don't know why DxOMark doesn't use these basic metrics.

  8. On 7/14/2017 at 6:12 AM, akeem said:

    They've gotten their maths all muddled up in the color depth scoring (which is heaving biased to high res camers), which has a big effect on the sports score.

    Actually, resolution is a big factor of color depth, and color depth is largely independent of ISO (the basis of the DxOMark "sports" score).

     

    On the other hand, the DxOMark "color depth" rated as "bit depth" is dubious (to say the least).

     

    Most do not realize that resolution and bit depth are equally weighted factors in determining color depth in digital systems.  The actual formula to determine the color depth of digital RGB systems is simple:

    COLOR DEPTH = (BIT DEPTH x RESOLUTION)³

     

    The bit depth is the number of values per channel, so 10-bit=1024, 12-bit=4096, 16-bit=65536, etc.  The resolution is usually that of one color channel of the entire frame, which would yield the absolute color depth of the entire digital image (i.e., with 1920x1080 RGB pixel groups, the resolution of one color channel would be 2,073,600).

     

    So, given resolution's equal weighting with bit depth as a factor of color depth, DxOMark's use of a "bit depth" figure to express color depth seems fundamentally flawed.  DxOMark's explanation of their color depth metric is vague and apparently involves a characteristic which they call "color sensitivity," but they give no information on how this property is derived.
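    To make the arithmetic concrete, here is the formula above as a Python sketch.  Note that this is the color-depth formula as I've stated it in this thread -- not a standard industry metric -- and the 10-bit, 1920x1080 example numbers are just for illustration:

```python
def color_depth(bit_depth_bits, channel_resolution, channels=3):
    # values per channel (2^bits) times per-channel resolution,
    # raised to the power of the number of color channels
    values_per_channel = 2 ** bit_depth_bits
    return (values_per_channel * channel_resolution) ** channels

# 10-bit 1920x1080 RGB: (1024 x 2,073,600)^3 possible frame colorings
example = color_depth(10, 1920 * 1080)
```

    Doubling the per-channel resolution has exactly the same effect on this figure as doubling the number of values per channel, which is the "equal weighting" point made above.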

     

     

  9. 1 hour ago, cantsin said:

    Yes and no, because again you disregard binning.

    I did not disregard binning.  I directly addressed binning:

    1 hour ago, tupp said:

    However, if you bin two sensors (with identical sensor tech) equally, the sensor with the larger photosites will give greater dynamic range, reduced noise and increased sensitivity.

    You can't "compare apples to oranges."  All other variables must be equal -- if one sensor is binned, then the other sensor must be equally binned.

     

     

    1 hour ago, cantsin said:

    A 48MP FF sensor has photo sites with 25% the size of a 12MP FF sensor. Let's assume that this reciprocally results in a 400% higher noise floor per pixel. However, if you use the 12MP image as a 4K video, each photo site becomes one display pixel. If you have a 48MP image and good signal processing, you'll bin 4 pixel into 1 - which will reduce the noise equivalently.

    Actually, binning yields slightly reduced signal-to-noise ratio compared to that of equivalently larger photosites.

     

    In the first place, there are fewer photons/light-waves captured with four binned photosites as compared to a single equally-sized photosite.  This is due to some photons/light-waves being wasted when striking the border between the binned photosites.

     

    In addition, there is a minute increase in noise inherent in binning.  It's very tiny, but it appears nonetheless.

     

    Also, the binned photosites don't have less noise than an equivalently sized larger photosite with the same sensor tech.

     

    Furthermore, as I said above, you can't compare apples to oranges -- if you bin one sensor but not the other, you are presenting two different scenarios regarding post sensor processing, and you are now dealing with two variables (instead of one):  photosite size; and post sensor processing.

     

    Again, larger photosites give better performance than smaller photosites, as long as all other variables are equivalent -- identical sensor tech and identical post sensor processing.
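    A simplified SNR model makes the comparison concrete.  Assuming pure shot noise plus a per-photosite read-noise term (toy assumptions of my own -- real sensors are messier), summed binning picks up one read-noise contribution per photosite, while the single large photosite picks up only one, and any border/fill-factor loss reduces the captured photons further:

```python
import math

def snr_large_photosite(photons, read_noise):
    # one big photosite: shot noise sqrt(N) plus a single read-noise term
    return photons / math.sqrt(photons + read_noise ** 2)

def snr_binned(photons, read_noise, n_bins=4, fill_loss=0.0):
    # n_bins summed photosites covering the same area: same photon total
    # (minus any border loss), but read noise enters once per photosite
    captured = photons * (1.0 - fill_loss)
    return captured / math.sqrt(captured + n_bins * read_noise ** 2)
```

    Under these assumptions the binned result always comes in slightly below the single large photosite, and a non-zero fill loss widens the gap -- the "slight increase in noise" described above.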

     

     

    1 hour ago, cantsin said:

    which will reduce the noise equivalently. [snip]  In other words: One shouldn't consider noise floor per pixel, but literally the whole picture. The full well capacity advantage of a 12 MP sensor gets neutralized through its disadvantage of having fewer photosites.

    No.  As I mentioned, there is a slight increase in noise when binning.

     

     

    1 hour ago, cantsin said:

    Same is true if you print on a large format; the 12 MP image will have each pixel at 400% the size of the 48 MP image, which also means that the noise floor will get enlarged 400%, so the advantage evens out.

    We are discussing sensors and photosites -- not printing.

     

     

    1 hour ago, cantsin said:

    You could also compare it to rain (instead of light) falling into into two grids of vessels: Both grids have the same size, but one consists of 12 (4x3) vessels/compartments, the other of 48 (8x6).

    Some of the rain drops will be lost in the "48" grid, as a few drops will land on the border between the vessels and a few drops will stick to the inside of the "48" vessels when you pour (bin) each group of "48" vessels into each respective "12" vessel.

     

     

    1 hour ago, cantsin said:

    No, because as soon as you blow up both images to cinema screen size, the lower noise advantage of the sensor containing the larger photosites will get neutralized by the fact that each individual pixel is being more enlarged and thus having its noise amplified

    Firstly, noise doesn't increase just because an image is projected at a larger size -- the noise level stays the same relative to the image, regardless of projected size.

     

    Secondly, even if noise increased when a (say) 12MP image was projected, the exact same thing would happen to a 12MP image binned from a larger resolution (say 48MP).

     

    1 hour ago, cantsin said:

    Your model only works for cameras that produce their downscaled video image from line skipping rather than from binning. (Which used to be the norm for DSLR and mirrorless cameras until recently, so your model isn't completely wrong - it just no longer applies to most present-day camera technology.)

    No.  My "model" applies to all digital imaging sensors, including those to which binning (summed or averaged) has been applied.

     

    Larger photosites yield greater signal-to-noise than smaller photosites on sensors with the same tech, all other variables being equal.

  10. 1 hour ago, cantsin said:

    What you write is only true for dynamic range (because of the larger full-well capacity of a bigger sensel). Noise and low light performance won't be affected...

    No.  Dynamic range, noise and sensitivity are all part of the same thing.

     

    Larger photosites have less noise, i.e. a lower noise floor (all other variables being equal).  Dynamic range is essentially the range of amplitude above the noise floor.  So, with a lower noise floor we get a greater dynamic range.  In addition, lower noise means greater effective sensitivity.  Larger photosites yield images with less noise, and, thus, higher effective ISOs.  So, larger photosites simultaneously provide greater dynamic range, reduced noise and increased sensitivity.
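    In other words, if you know the full-well capacity and the noise floor (both in electrons), the capture dynamic range in stops is just the base-2 log of their ratio.  A trivial sketch, with made-up example numbers:

```python
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    # range of usable signal above the noise floor, in photographic stops
    return math.log2(full_well_electrons / noise_floor_electrons)

# e.g. a 16,384 e- full well over a 1 e- noise floor gives 14 stops;
# doubling the noise floor costs exactly one stop
```

    This is why lowering the noise floor and raising the dynamic range are the same improvement seen from two angles.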

     

     

    1 hour ago, cantsin said:

    the larger sensor still allows more photons to hit the sensor, no matter how coarse or fine the pixel grid

    Certainly, a larger sensor receives more photons/light-waves (all other variables being equal).

     

    Nevertheless, the size of the sensor inherently has nothing to do with its performance regarding DR/noise/sensitivity.  If you take two full frame sensors that are absolutely equal in every way, except that one has larger photosites than the other, the sensor with the larger photosites will have better DR/noise/sensitivity performance at the sensor level (all post sensor processing being equal).

     

    In addition, take the exact same scenario, but swap the full frame sensor having the bigger photosites for a M4/3 sensor whose photosites are that same larger size.  The M4/3 sensor will have better DR/noise/sensitivity performance, at the sensor level, than the remaining full frame sensor.  Keep in mind that the two sensors in this scenario are absolutely equal in every way, except that the M4/3 sensor has larger photosites than the FF sensor (and remember that all post sensor processing on both sensors is equivalent).

     

     

    1 hour ago, cantsin said:

    binning the native sensor resolution to the delivery resolution (such as 4K) will reduce single-pixel noise.

    No doubt.

     

    However, if you bin two sensors (with identical sensor tech) equally, the sensor with the larger photosites will give greater dynamic range, reduced noise and increased sensitivity.

     

    You can't "compare apples to oranges."  All other variables must be equal -- if one sensor is binned, then the other sensor must be equally binned.

     

     

    1 hour ago, cantsin said:

    This is why there really isn't a dramatic difference in the low light performance between any of the present-day FF sensors. There's not even a dramatic difference between the A7s/II with its 12 MP sensor and A7R/II with its 42 MP sensor in regard to low light performance (in fact, the A7R has even better low light performance than the A7s because of its slightly newer sensor tech:

    Again, this is comparing apples to oranges.  The newer sensor tech in the A7R introduces additional variables other than merely larger photosites.

     

    If you made a M4/3 sensor with the A7R sensor tech and gave that sensor larger photosites than the A7R sensor, the M4/3 version would have better DR/noise/sensitivity performance.

     

     

    1 hour ago, cantsin said:

    The BM Cinema Camera beautifully illustrates the point because it doesn't really have a better low light performance than any other MFT camera, but - thanks to the big photo sites on its sensor - a much better dynamic range.)

    Once again, this is comparing apples to oranges.

     

    If the BMCC sensor has larger photosites, that certainly helps with its capture dynamic range, but there is a huge difference in both the sensor tech and the post-sensor processing between the BMCC and current M4/3 cameras.

     

    The BMCC has an older sensor, and its greatest capture dynamic range comes from its raw mode, which applies hardly any post-sensor processing.  On the other hand, most M4/3 cameras apply a lot of post-sensor processing, including noise reduction, which can increase sensitivity but not necessarily capture DR.

     

    If you were to take a sensor from any of the cameras that you mention and create another sensor with the same exact sensor tech, but with larger photosites, the sensor with the larger photosites would yield greater DR, less noise and increased effective sensitivity.

     

  11. Just to reiterate:  for a sensor of a given quality/configuration, it is the photosite size that influences maximum sensitivity/DR -- not the size of the sensor.

     

    When comparing a full frame sensor with a zillion megapixels (tiny photosites) to a M4/3 sensor with far fewer, larger photosites, the M4/3 sensor will exhibit a higher maximum usable sensitivity and a greater capture dynamic range.  Again, this principle assumes that all other variables are equal, such as:  the sensor's internal configuration/design; the A/D converter; post-sensor NR; etc.

     

    Of course, if we compare a full frame sensor and a M4/3 sensor having the same resolution and the same internal configuration/design, the full frame sensor will have larger photosites, and thus greater max sensitivity and dynamic range.

     

    So, if you embiggen the photosites, you generally embiggen the sensitivity/DR, regardless of what the "jabronis" say.

     


     

  12. If the ISO, shutter speed and f-stop are the same, then the exposure of the two cameras should be the same, regardless of sensor size.  With identical settings and barring the use of filters or extreme color/contrast profiles, the only difference in exposure might be due to differences in lens transmission.

     

    Keep in mind that ISO is "sensitivity," so two cameras set to the same ISO should have the same light sensitivity.

     

    Noise is an entirely different issue, but suffice it to say, larger photosites (sensor pixels) usually mean less noise (more dynamic range), all other variables being equal.  So, if a full frame sensor and a M4/3 sensor have the same resolution, the full frame sensor will likely have less noise (all other variables being equal).

  13. As the saying goes, "the best camera is the one that you have with you."  So, if you are traveling, you might consider a serious "large sensor" compact camera such as the Panasonic LX100.  Do you really want to tote around a camera and interchangeable lenses?

     

    The LX100 is relatively inexpensive and yields nice 4k footage, and, most importantly, it sports a spectacular, fast Leica zoom. On the other hand, it only shoots HD in 60p, but how important is it to have 4K 60P footage of your vacation?

     

     

    3 hours ago, l1nkin said:

    There is only the Japan planned yet, but that camera could be useful for my work itself though (tv / web / events...)

    Well, if your travels take you to India, the LX100 seems to work well there!

     

  14. 8 hours ago, Inazuma said:

    things have been going downhill since Jobs died

    ... Or perhaps Tim Cook just can't sustain the reality distortion field.

     

    Chiclet/island keyboards suck, regardless.  Back in the early 1960s, IBM spent about two years field testing differently shaped keys for their new Selectric typewriter.  They found that the "cupped-top" keys (along with a certain amount of travel and key spacing) gave the best performance, overwhelmingly.  Bell Telephone came to the same conclusion with the cupped-top keys on their touch-tone phones starting in the late 1960s.

     

    Two years makes a fairly exhaustive field test, so that basic key design is hard to beat by some fashion-conscious industrial designer who favors form over function.  There are still companies that field test, but probably not as thoroughly as was common in the past.  Since the late 1990s, Apple has done very little field testing.

  15. 1 hour ago, Emanuel said:

    Eskild's last post and work made for Norwegian public administration:

    He was so young and talented.  Sad.

     

    Thanks for posting this.

  16. Yes.  Those kids are talented.

     

    I have never recorded raw on the EOSM, and I don't know the answer as to whether or not H.264 has to be recorded while raw is recording.

     

    On the other hand, I have run Tragic Lantern with All I-frames, Full HD on the EOSM with boosted bit rate -- while using the Fujian 35mm f1.7!  The All I-frames along with boosted bit rate gives more robust frames/files.

  17. The Fujian 35mm definitely covers the entire APS-C sensor, and there don't seem to be any reports that the "no-brand" 25mm f1.4 APS-C lens vignettes on APS-C.

     

    Here is a video shot by our own @maxotics on the EOSM and the Fujian 35mm with its exceptionally peculiar focal plane (which would likely frustrate the forum's staunch "DOF calculators").

     

    Here is footage from an A7S in APS-C mode.  The description of the "no brand CCTV C-mount" lens matches the 25mm f1.4  APS-C lens reviewed in the link I posted above.

     

    Regardless, there certainly are a few C-mount lenses that cover APS-C.
