
tupp

Members
  • Content Count

    802
  • Joined

  • Last visited

  • Days Won

    2

Posts posted by tupp


  1. On 8/29/2017 at 11:44 PM, Charlie said:

    Hey man, agree to disagree.....I know more about Al Green than even Al Green does!!! hahaha, seriously, my favourite singer, know all his tracks inside out. The pitch shift suited the video.

    Okay, but keep in mind that such a treatment of a classic album cut screams "millennial discarding the sanctity of the original," and some in the position to hire might react adversely.

     

    It's almost like using hip-hop jargon in business correspondence -- probably not a good idea.

     

    ... just keeping it real, Holmes...


  2. 3 hours ago, Jacek said:

    Ah, quick release legs? Didn't notice at first.. nice :)

    That feature has existed in other tripods for a long time.

     

    I think that the Manfrotto 058B appeared at least 15 years ago.

     

    I think Manfrotto has several models with this feature, including one with video legs.  They also have a similar small version of this with separate release buttons at the top of each leg.  To release all the legs at the same time, you just wrap your hand around the smaller base, and squeeze all the buttons.

     

    I suspect that there are other manufacturers with tripods that have a similar feature.

     


  3. On 8/12/2017 at 1:09 AM, mercer said:

    Well, there are a lot more costs in manufacturing than just the price of parts.

    I included the manufacturing operations in the list.  The design costs are essentially the same, as the only extra cost is creating another Autocad file and adding two to eight threaded holes -- the front tube to the EF lenses has to be designed, regardless.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    And there are plenty of modern super 35mm cine lenses that will mount to EF:

    • Zeiss Compact Primes

    • Schneider Cine Xenar III

    • Cooke Mini S4

    • Rokinon Cine

    • Tokina Cinema

    Again, your list is fine for those who don't want to stray outside of the box.  However, creative pros will often want more versatility than that list offers (not to mention great cine lenses that only come in PL mounts).

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    The EVA1 seems to be more of a B-Cam to a Varicam LT than an A-Cam to a GH5. I don't agree with that marketing/production decision but Panasonic seems to have designed it that way.

    The short-sighted intentions of Panasonic's management/marketing/sales people have no bearing on what they should have done.  Furthermore, outside speculation on some company's intentions is not exceptionally relevant.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    And not too many cinematographers are putting old Schneider Super 35mm glass on a Varicam LT.

    Well, not many typical shooters are putting old Schneider cinema lenses on Alexas either, but they can if they want to!

     

    I was helping someone on a commercial a few months ago, and the DP had just hit the big time.  She is already sitting in on panel discussions with ASC and BSC members.  She was using old Crystal Express lenses on an Alexa, and they were distinctively beautiful.

     

    Shooters on that level usually seek alternatives to typical run-of-the-mill EF glass, as they are usually more sensitive to the subtleties of lenses' looks and effects.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    Unfortunately, Micro 4/3 is just not a professional cinema camera mount.

    Then neither is an EF mount.  However, cinema lens mounts don't need to be as rugged as PL or PV -- we have lens support for that.

     

    Furthermore, Panasonic didn't have to commit to a M4/3 mount.  They could have just made a shallow interchangeable lens plate, as I have described.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    And the EF mount offers enough possibilities for professional cinema lenses and for lower cost professional still lenses.

    The EF mount offers only a small fraction of the possibilities available with a Micro 4/3 mount, an E-mount (there are ways to make this work) or an EF-M mount.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    But as far as adapting goes, there are plenty of older lenses that will adapt to EF:

    For some PL mount lenses...

    That is a risky adapter, as there are plenty of lenses that won't fit into it.

     

    On the other hand, a simple PL adapter for M4/3 will take almost every PL lens.  In addition, some M4/3-to-PL adapters also allow TILT/SWING movement!:

    http://www.ebay.com/itm/TILT-adapter-ARRI-Red-One-Arriflex-PL-lens-for-Micro-Four-Thirds-4-3-cameras-/322240061177

    http://www.ebay.com/itm/ARAX-TILT-adapter-ARRI-PL-lens-MICRO-4-3-Camera-Camcorder-pl-tilt-micro-adapter-/271523819029

     

    Tilt/swing adapters are impossible with the EF mount and any full-frame or smaller lens.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    So you're right, there are some lenses that won't work

    FTFY:  There are countless lenses and adapters that won't work with an EF mount!

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    but the EVA1 isn't designed for those lenses because not a lot pro cinematographers would use old c-mounts or Veydras or Speedboosters.

    No.  Regardless of speculation for whom the EVA1 is designed, a lot of the top cinematographers use PL/PV glass and also seek out all kinds of unusual optics and adapters (probably not Veydras) that will give them an edge.

     

    Panasonic could have easily accommodated such high-end shooters (and even some of the lowly shooters who know better) while also catering to the typical EF people.

     

     

    On 8/12/2017 at 1:09 AM, mercer said:

    I think you're more interested in the possibilities of a future AF200 and not an EVA1.

    Actually, I think that every cinematography camera should have either a shallow mount or an interchangeable lens plate.  We're experienced and creative pros -- we need versatility, not protection from FUD.

     

     


  4. 7 hours ago, mercer said:

    The problem is that at that price point, Panasonic wouldn't be getting anything out of doing it except extra engineering and manufacturing costs.

    Actually, Panasonic would be getting a much more serious and versatile camera for very little extra in cost.

     

     

    There has to be some sort of "tube" or enclosure going from the sensor to the lens mount.  So, having that "tube" as a separate piece doesn't require much extra in materials, but it adds a whole heck of a lot to the capability (and maintenance) of the camera.   The extra costs would be for:

    • a separate die-cast piece (could be incorporated in the same die as the rest of the camera housing);
    • one to four threading operations in the camera body (about US$2 each);
    • a deburring operation;
    • a powder coating (again, no extra materials here);
    • wiring/contacts for the EF electronics (US$30?).

     

     

    Such a small expense is negligible for such an expensive camera, but a removable front would greatly expand the range of lenses and adapters/speedboosters one can use.

     

     

    7 hours ago, mercer said:

    Most people buying at that price range won't be using Panasonic lenses anyway, so why incur those costs for a minority of shooters that want to use Minolta, Konica, or FD lenses?

    Who said anything about using Panasonic lenses?  That is the kind of narrow thinking that produces cinema cameras with EF mounts!

     

     

    A removable front would literally open the camera up to a whole world of lenses, including professional cinematography lenses offered in PL mount, PV mount, C-mount, and Arri bayonet mount, and it would even allow the attachment of "Minolta, Konica, or FD lenses" (which all have very nice optics).   Additionally, such a feature would enable the use of focal reducers (extra stops and a wider view angle), tilt/swing adapters, macro bellows, helical mechanisms, and other lens modifiers.

     

     

    Certainly, the typical walled-garden EF shooter is not interested in such versatility, but this multitude of possibilities would be very useful to cinematographers who want to create interesting images and who want to get an edge on the "straight" shooters.

     

     

    Again, the additional manufacturing cost for a removable front would be minimal, and the EF shooters would never know the difference.


  5. 42 minutes ago, tomekk said:

    Isn't wavelet decompose in GIMP called frequency separation technique in Photoshop? 

    Yes.  Essentially, frequency separation is wavelet decomposition with just two layers -- the residual layer and the high-frequency layer.   However, in Photoshop it probably still has to be done manually (similar to the manual procedure given by the OP).

     

    Two-layer frequency separation sets up a little more quickly in the GIMP, thanks to the grain extract and grain merge features.  Of course, it is even faster to get two-layer frequency separation in the GIMP with either of the wavelet decompose plug-ins, but setting it up manually probably gives one more control over the "frequency."

     

    I don't know if Photoshop currently has a wavelet decompose plug-in (it didn't have one four years ago).  If it doesn't, manually making five wavelet scale layers plus a residual layer would probably be a long, arduous process in Photoshop.
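    For anyone curious about the mechanics, below is a rough Python/NumPy sketch of the two-layer split.  The box blur stands in for whatever blur one prefers, and the function names are mine, not from any plug-in:

```python
import numpy as np

def box_blur(image, radius=2):
    """Simple box blur used to build the low-frequency (residual) layer."""
    k = 2 * radius + 1
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def frequency_separate(image, radius=2):
    """Two-layer frequency separation: a blurred residual layer plus
    a detail layer, like GIMP's grain extract."""
    low = box_blur(image.astype(np.float64), radius)
    high = image - low  # "grain extract"
    return low, high

img = np.random.rand(32, 32)
low, high = frequency_separate(img)
# "grain merge": adding the layers back reconstructs the original
assert np.allclose(low + high, img)
```

    Retouching is then done on one layer at a time (e.g. smoothing the residual layer while leaving skin texture in the detail layer untouched).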


  6. 8 hours ago, IronFilm said:

    At the moment third party adapters range from twenty bucks or so, up to a thousand dollars or so. But due to bundling it up, and vast economies of scales, they would cost much much less to produce in terms of how much it would add to the final price of an EVA1 MFT,

    ... Not to mention that Panasonic could customize the external housing of the smart adapters to follow the form of the camera, employing an extra reinforcement flange that bolts the adapter to the camera body.

     

    Clueless EF shooters would mount (and electronically control) their lenses with no wobble... and they would be none the wiser that they were actually using an adapter!


  7. After scanning this thread, this method reminds me of the wavelet decompose plug-ins for the GIMP, a functionality that has been in open-source software for years.   Basically, these plug-ins separate detail "frequencies" into their own layers.  I use wavelet decompose mostly for skin retouching, but some have been using it for sharpening for quite a while.

     

    One of the wavelet decompose plug-ins can separate an image into 100 different frequency layers, but I can't imagine why that many separate frequencies would ever be needed.

     

    I don't think that proper wavelet decompose functionality has yet appeared in proprietary imaging software.  Often, advanced features such as this show up in Photoshop years after the GIMP, and these "new" features are usually much trumpeted by the Adobe crowd.
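    As a rough sketch of what these plug-ins do internally (an approximation using box blurs of increasing radius rather than true wavelets; the names are mine):

```python
import numpy as np

def box_blur(image, radius):
    """Simple box blur standing in for the per-scale smoothing."""
    k = 2 * radius + 1
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def wavelet_decompose(image, scales=5):
    """Split an image into `scales` detail layers plus a residual layer,
    by taking differences of progressively stronger blurs."""
    layers = []
    current = image.astype(np.float64)
    for i in range(scales):
        blurred = box_blur(current, 2 ** i)
        layers.append(current - blurred)  # detail at this scale
        current = blurred
    layers.append(current)  # residual (coarsest) layer
    return layers

img = np.random.rand(32, 32)
layers = wavelet_decompose(img)
assert len(layers) == 6  # five scale layers plus the residual
# summing all the layers reconstructs the original image
assert np.allclose(np.sum(layers, axis=0), img)
```

    Because the layers sum back to the original, each "frequency" can be retouched or sharpened independently and then merged without loss.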


  8. 6 hours ago, BTM_Pix said:

    What I can say from all the testing I've done with it is that the sheer versatility of this thing in using native MFT, B4, PL, Canon AF, Contax Zeiss, M42 and Nikon lenses is that you'll get no arguments from me about what mount Panasonic have missed a trick not putting on the EVA ;) 

    Indeed.

     

    For some reason, most manufacturers (and many shooters) can't comprehend the incredible advantages of starting with a shallow lens mount on the camera.  All that is needed after that is smart adapters (perhaps with some customization).


  9. On 7/27/2017 at 1:48 PM, TheRenaissanceMan said:

    It depends on the fixture of course--purpose-made open faces can certainly provide an even wash-- but your standard redhead/blonde isn't exactly providing a smooth clean beam.

    Redheads and blondes generally provide a very wide, smooth beam, more so than many Fresnels and other fixtures with refractive optics.

     

     

    On 7/27/2017 at 1:48 PM, TheRenaissanceMan said:

    And fresnels aren't as even as a leko, but the way they taper off at the edges is very pretty and works great for lighting talent if you want a hard source (particularly at full spot).

    Lekos (ellipsoidals) aren't necessarily smoother than Fresnels or open-face lights.  Lekos are certainly limited in their application, as their beam is relatively narrow with a hard cut at the edge, and as they are significantly less efficient than Fresnels and open-face lights (especially open-face lights) of comparable wattage.

     

    On "full spot," open-face lights give almost exactly the same results as Fresnels, but the open-face fixtures generally have more spill outside of the spot, with a more gradual fall-off pattern.

     

     

    On 7/27/2017 at 1:48 PM, TheRenaissanceMan said:

    If you have work where you've shone a redhead/blonde/Arrilite/Mickey Mole directly on talent's face and gotten good results, I'd love to see it. 

    Below are three images shot with direct light from either a Fresnel fixture or an open-face fixture of comparable size.  I shot at least one of these pictures.

    [three attached portraits: j-chase5431-sm-bw.jpg, l_brisland-sm.jpg, npop5007d-sm-bw.jpg]

     

    Can you tell me which ones were lit with a Fresnel source and which ones were lit with an open-face source?

     


  10. 15 hours ago, TheRenaissanceMan said:

    Open faced sources usually have more output, as well.

    15 hours ago, TheRenaissanceMan said:

    And have hot spots, uneven spread, and a generally harsh quality of light.  Most lights used on film sets are fresnels,

    That explains why we use Fresnels to illuminate smooth cycs and green screens instead of open-face cyclights and open-face flood washes specifically made for that purpose.  /s

     

    What?!  Open-face sources have "hot spots" and "uneven spread?" ... compared to Fresnels?!  Please explain.

     

    Too busy right now to respond to the rest of your post.


  11. 15 hours ago, Ilkka Nissila said:

    Noise depends on the luminosity or number of photons detected; it is not constant but increases approximately proportionally to the square root of the signal (the luminosity or photon count).

    Don't confuse "photon shot noise" with the noise generated by a digital (or analog) system.  Photon shot noise arises from the random arrival of photons striking the film, sensor, video tube, retina, etc.  Since photon shot noise applies equally to almost any type of imaging system using electromagnetic waves, yet is not inherent in any of those systems, this type of noise is irrelevant to a discussion of the noise produced by a camera, sensor, or digital system.

     

     

    15 hours ago, Ilkka Nissila said:

    The number of distinct tonal or colour values (that can be distinguished from noise) can be calculated if the SNR is known as a function of luminosity or RGB values.

    SNR in imaging is not based on RGB values; it is a metric that is also used in analog imaging systems that might not even have RGB values.  SNR is essentially the ratio of a signal's amplitude to its noise level, and it is usually expressed in decibels.  Dynamic range is a similar metric that also applies to both analog systems (some without RGB values) and digital systems.
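    To put the decibel point concretely (a trivial sketch; the numbers are just examples, not measurements from any camera):

```python
import math

def snr_db(signal_amplitude, noise_amplitude):
    """Signal-to-noise ratio of two amplitudes, expressed in decibels."""
    return 20 * math.log10(signal_amplitude / noise_amplitude)

# a signal with 1000x the amplitude of the noise floor:
print(snr_db(1000, 1))  # 60.0 dB
```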

     

     

    15 hours ago, Ilkka Nissila said:

    From this graphs it is possible to calculate how much tonal or colour information there is in the image which is what DXOMark is estimating.

    Not sure how that would work.  Sounds a bit shaky.

     

     

    15 hours ago, Ilkka Nissila said:

    You cannot estimate how many colours or tones are separable from noise by assuming that there is a fixed "noise floor".

    Yes.  You can.  The noise floor within an imaging system can usually be determined fairly easily.  Just look at any proper dynamic range chart/test.

     

    Keep in mind that the increase in photon shot noise with greater exposure is not inherent in the imaging system itself.

     

     

    15 hours ago, Ilkka Nissila said:

    Color depth is just the number of bits that are used to encode the colour values of the pixel.

    No.  You are describing bit depth, which is not color depth.

     

    Color depth in digital systems is simply the product of the bit depth and the resolution, raised to the power of the number of color channels, so for an RGB digital system, the formula is:

    COLOR DEPTH = (BIT DEPTH x RESOLUTION)³

     

    Also, keep in mind that we can have great color depth without any bit depth, as in analog imaging.

     

    In addition, because resolution is a fundamental factor of color depth, we can have great color depth in digital imaging systems that have a bit depth of "1," as in digital screen printing.

     

     

    15 hours ago, Ilkka Nissila said:

    Color depth [snip] It doesn't consider noise.

    Agreed.  We don't usually separate the noise from the color depth of a system -- even the noise has color depth.


  12. 37 minutes ago, MdB said:

    This just goes to prove that so few people understand DXO scores and measurements and therefore go and complain about it. 

    Your ignorance is not other people’s problem. I know you think the GH5 should get top billing on everything, but not understanding the number or how they are presented doesn’t make them wrong, it just makes you look desperate. 

    Ouch!

     

    ... and yet, DxOMark gives color depth scores in "bits" (bit depth?), implying a fundamental misunderstanding of digital color depth.  Their explanation of their color depth metric is somewhat vague and based on a dubious characteristic that they term "color sensitivity."

     

    There is a mathematical formula for absolute color depth of a digital system.  A fairly accurate figure can also be given to represent the absolute number of shades above the noise floor (effective bit depth -- not color depth).  Don't know why DxOMark doesn't use these basic metrics.


  13. On 7/14/2017 at 6:12 AM, akeem said:

    They've gotten their maths all muddled up in the color depth scoring (which is heaving biased to high res camers), which has a big effect on the sports score.

    Actually, resolution is a big factor of color depth, and color depth is a property mostly independent of ISO performance (the DxOMark "sports" score).

     

    On the other hand, the DxOMark "color depth" rated as "bit depth" is dubious (to say the least).

     

    Most do not realize that resolution and bit depth are equally weighted factors in determining color depth in digital systems.  The actual formula to determine the color depth of digital RGB systems is simple:

    COLOR DEPTH = (BIT DEPTH x RESOLUTION)³

     

    The bit depth is the number of values per channel, so 10-bit = 1024, 12-bit = 4096, 16-bit = 65536, etc.  The resolution is usually that of one color channel of the entire frame, which yields the absolute color depth of the entire digital image (i.e., with 1920x1080 RGB pixel groups, the resolution of one color channel would be 2,073,600).
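    Taking the formula above at face value, here is the arithmetic for a 10-bit 1920x1080 RGB image (a sketch of the formula as I have stated it, not of DxOMark's metric):

```python
def color_depth(values_per_channel, channel_resolution, channels=3):
    """Color depth per the formula above:
    (values per channel x channel resolution) raised to the number of channels."""
    return (values_per_channel * channel_resolution) ** channels

res = 1920 * 1080              # 2,073,600 pixels per color channel
print(color_depth(1024, res))  # (1024 * 2,073,600) ** 3
```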

     

    So, given resolution's equal weighting with bit depth as a factor of color depth, DxOMark's use of a "bit depth" figure to express color depth seems fundamentally flawed.  DxOMark's explanation of its color depth metric is vague and apparently involves a characteristic that it calls "color sensitivity," but it gives no information on how this property is derived.

     

     


  14. 1 hour ago, cantsin said:

    Yes and no, because again you disregard binning.

    I did not disregard binning.  I directly addressed binning:

    1 hour ago, tupp said:

    However, if you equally bin two sensors (with identical sensor tech) equally, the sensor with the larger photosites will give greater dynamic range, reduced noise and increased sensitivity.

    You can't "compare apples to oranges."  All other variables must be equal -- if one sensor is binned, then the other sensor must be equally binned.

     

     

    1 hour ago, cantsin said:

    A 48MP FF sensor has photo sites with 25% the size of a 12MP FF sensor. Let's assume that this reciprocally results in a 400% higher noise floor per pixel. However, if you use the 12MP image as a 4K video, each photo site becomes one display pixel. If you have a 48MP image and good signal processing, you'll bin 4 pixel into 1 - which will reduce the noise equivalently.

    Actually, binning yields a slightly reduced signal-to-noise ratio compared to that of an equivalently larger photosite.

     

    In the first place, fewer photons/light-waves are captured by four binned photosites than by a single photosite of the same total size, because some photons/light-waves are wasted when they strike the borders between the binned photosites.

     

    In addition, there is a minute increase in noise inherent in binning.  It's very tiny, but it appears nonetheless.

     

    Also, the binned photosites don't have less noise than an equivalently sized larger photosite with the same sensor tech.

     

    Furthermore, as I said above, you can't compare apples to oranges -- if you bin one sensor but not the other, you are presenting two different scenarios regarding post-sensor processing, and you are now dealing with two variables instead of one: photosite size and post-sensor processing.

     

    Again, larger photosites give better performance than smaller photosites, as long as all other variables are equivalent -- identical sensor tech and identical post sensor processing.
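    A toy Monte-Carlo sketch of the read-noise part of this argument (all numbers are illustrative, not from any real sensor): summing four small photosites means four readouts and four doses of read noise, while one large photosite collecting the same light needs only one readout.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
photons_per_small = 1000  # mean photons per small photosite (illustrative)
read_noise = 8.0          # electrons RMS per readout (illustrative)

# Four small photosites summed (binned): four readouts, so four doses of
# read noise (modeled as one normal draw with sqrt(4) = 2x the RMS)
binned = (rng.poisson(photons_per_small, (n_trials, 4)).sum(axis=1)
          + rng.normal(0, 2 * read_noise, n_trials))

# One large photosite collecting the same light: a single readout
large = (rng.poisson(4 * photons_per_small, n_trials)
         + rng.normal(0, read_noise, n_trials))

snr_binned = binned.mean() / binned.std()
snr_large = large.mean() / large.std()
print(snr_binned < snr_large)  # True: the large photosite comes out ahead
```

    The shot noise is identical in both cases; the extra read noise (and the border losses, which this sketch ignores) is what puts binning slightly behind.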

     

     

    1 hour ago, cantsin said:

    which will reduce the noise equivalently. [snip]  In other words: One shouldn't consider noise floor per pixel, but literally the whole picture. The full well capacity advantage of a 12 MP sensor gets neutralized through its disadvantage of having fewer photosites.

    No.  As I mentioned, there is a slight increase in noise when binning.

     

     

    1 hour ago, cantsin said:

    Same is true if you print on a large format; the 12 MP image will have each pixel at 400% the size of the 48 MP image, which also means that the noise floor will get enlarged 400%, so the advantage evens out.

    We are discussing sensors and photosites -- not printing.

     

     

    1 hour ago, cantsin said:

    You could also compare it to rain (instead of light) falling into into two grids of vessels: Both grids have the same size, but one consists of 12 (4x3) vessels/compartments, the other of 48 (8x6).

    Some of the rain drops will be lost in the "48" grid, as a few drops will land on the border between the vessels and a few drops will stick to the inside of the "48" vessels when you pour (bin) each group of "48" vessels into each respective "12" vessel.

     

     

    1 hour ago, cantsin said:

    No, because as soon as you blow up both images to cinema screen size, the lower noise advantage of the sensor containing the larger photosites will get neutralized by the fact that each individual pixel is being more enlarged and thus having its noise amplified

    Firstly, noise doesn't increase just because an image is projected at a larger size -- the noise level stays the same relative to the image, regardless of projected size.

     

    Secondly, even if noise increased when a (say) 12MP image was projected, the exact same thing would happen to a 12MP image binned from a larger resolution (say 48MP).

     

    1 hour ago, cantsin said:

    Your model only works for cameras that produce their downscaled video image from line skipping rather than from binning. (Which used to be the norm for DSLR and mirrorless cameras until recently, so your model isn't completely wrong - it just no longer applies to most present-day camera technology.)

    No.  My "model" applies to all digital imaging sensors, including those to which binning (summed or averaged) has been applied.

     

    Larger photosites yield greater signal-to-noise than smaller photosites on sensors with the same tech, all other variables being equal.


  15. 1 hour ago, cantsin said:

    What you write is only true for dynamic range (because of the larger full-well capacity of a bigger sensel). Noise and low light performance won't be affected...

    No.  Dynamic range, noise and sensitivity are all part of the same thing.

     

    Larger photosites have less noise, i.e. a lower noise floor (all other variables being equal).  Dynamic range essentially is the range of amplitude above the noise floor.  So, with a lower noise floor we get a greater dynamic range.  In addition, lower noise means greater effective sensitivity.  Larger photosites yield images with less noise, and, thus, higher effective ISOs.   So, larger photosites simultaneously provide greater dynamic range, reduced noise and increased sensitivity.
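    In stops, the relationship is simple (a sketch with made-up electron counts, just to show the arithmetic):

```python
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    """Dynamic range as the ratio of maximum signal to the noise floor,
    expressed in stops (doublings of light)."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# halving the noise floor (e.g. via larger photosites) buys one full stop:
print(dynamic_range_stops(16384, 4))  # 12.0
print(dynamic_range_stops(16384, 2))  # 13.0
```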

     

     

    1 hour ago, cantsin said:

    the larger sensor still allows more photons to hit the sensor, no matter how coarse or fine the pixel grid

    Certainly, a larger sensor receives more photons/light-waves (all other variables being equal).

     

    Nevertheless, the size of the sensor inherently has nothing to do with its performance regarding DR/noise/sensitivity.  If you take two full-frame sensors that are absolutely equal in every way, except that one has larger photosites than the other, the sensor with the larger photosites will have better DR/noise/sensitivity performance at the sensor level (all post-sensor processing being equal).

     

    In addition, if you take the exact same scenario and merely swap the full-frame sensor having bigger photosites for a M4/3 sensor that has the same size photosites, the M4/3 sensor will have better DR/noise/sensitivity performance at the sensor level.  Keep in mind that the full-frame sensor and the M4/3 sensor in this scenario are absolutely equal in every way, except that the M4/3 sensor has larger photosites than the FF sensor (and remember that all post-sensor processing on both sensors is equivalent).

     

     

    1 hour ago, cantsin said:

    binning the native sensor resolution to the delivery resolution (such as 4K) will reduce single-pixel noise.

    No doubt.

     

    However, if you equally bin two sensors (with identical sensor tech) equally, the sensor with the larger photosites will give greater dynamic range, reduced noise and increased sensitivity.

     

    You can't "compare apples to oranges."  All other variables must be equal -- if one sensor is binned, then the other sensor must be equally binned.

     

     

    1 hour ago, cantsin said:

    This is why there really isn't a dramatic difference in the low light performance between any of the present-day FF sensors. There's not even a dramatic difference between the A7s/II with its 12 MP sensor and A7R/II with its 42 MP sensor in regard to low light performance (in fact, the A7R has even better low light performance than the A7s because of its slightly newer sensor tech:

    Again, this is comparing apples to oranges.  The newer sensor tech in the A7R introduces additional variables other than merely larger photosites.

     

    If you made a M4/3 sensor with the A7R sensor tech and gave that sensor larger photosites than the A7R sensor, the M4/3 version would have better DR/noise/sensitivity performance.

     

     

    1 hour ago, cantsin said:

    The BM Cinema Camera beautifully illustrates the point because it doesn't really have a better low light performance than any other MFT camera, but - thanks to the big photo sites on its sensor - a much better dynamic range.)

    Once again, this is comparing apples to oranges.

     

    If the BMCC sensor has larger photosites, that certainly helps its capture dynamic range, but there is a huge difference in both the sensor tech and the post-sensor processing between the BMCC and current M4/3 cameras.

     

    The BMCC has an older sensor, and its greatest capture dynamic range comes from its raw mode, which applies hardly any post-sensor processing.  On the other hand, most M4/3 cameras apply a lot of post-sensor processing, including noise reduction, which can increase sensitivity but not necessarily capture DR.

     

    If you were to take a sensor from any of the cameras that you mention and create another sensor with the same exact sensor tech, but with larger photosites, the sensor with the larger photosites would yield greater DR, less noise and increased effective sensitivity.

     


  16. Just to reiterate:  in regard to a sensor with a given quality/configuration, it is the photosite size that influences maximum sensitivity/DR -- not the size of the sensor.

     

    When comparing a full-frame sensor with a zillion megapixels (tiny photosites) to a M4/3 sensor with far fewer, larger photosites, the M4/3 sensor will exhibit a higher maximum usable sensitivity and a greater capture dynamic range.  Again, this principle assumes that all other variables are equal, such as the sensor's internal configuration/design, the A/D converter, post-sensor NR, etc.

     

    Of course, if we compare a full frame sensor and a M4/3 sensor having the same resolution and the same internal configuration/design, the full frame sensor will have larger photosites, and thus greater max sensitivity and dynamic range.

     

    So, if you embiggen the photosites, you generally embiggen the sensitivity/DR, regardless of what the "jabronis" say.

     


     


  17. If the ISO, shutter speed, and f-stop are the same, then the exposure of the two cameras should be the same, regardless of sensor size.  With identical settings, and barring the use of filters or extreme color/contrast profiles, the only difference in exposure might be due to differences in lens transmission (T-stops).

     

    Keep in mind that ISO represents "sensitivity," so two cameras set to the same ISO should have the same light sensitivity.

     

    Noise is an entirely different issue, but suffice it to say, larger photosites (sensor pixels) usually mean less noise (more dynamic range), all other variables being equal.  So, if a full frame sensor and a M4/3 sensor have the same resolution, the full frame sensor will likely have less noise (all other variables being equal).
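    The standard exposure-value formula makes the point directly -- sensor size appears nowhere in it (a trivial sketch):

```python
import math

def exposure_value(f_number, shutter_seconds):
    """APEX exposure value from camera settings alone: EV = log2(N^2 / t).
    Note that sensor size does not appear in the formula."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/4 at 1/50s yields the same EV on a full-frame camera and on M4/3:
print(exposure_value(4.0, 1 / 50))
```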
