Wishes for 10 years on from the birth of mirrorless


sanveer
1 hour ago, jonpais said:

I think you are missing the point. I have never said anyone was factually wrong because they like anything. That doesn't even make logical sense. I never said one camera was better, never have. We were discussing tonal transitions and noise, not my age or appearance, and certainly not preferences.

Except you literally have done exactly that. Go back and re-read what you wrote when he said he liked the 10 bit 4:2:2 better. You're awful, man. 

Thank you @Shirozina for alerting me to the ignore feature! 


On 8/24/2018 at 3:54 PM, newfoundmass said:

The point you're missing is, you can post all the lab tests that you want, when it comes to what camera is "better" it all boils down to preference and what each person believes.

An additional point is that posting lab tests are not that helpful if they're not relevant. 

For instance tests of stills performance, when we're discussing video performance. 

 


5 minutes ago, IronFilm said:

An additional point is that posting lab tests are not that helpful if they're not relevant. 

For instance tests of stills performance, when we're discussing video performance. 

 

Such as posting results of video performance when they are relevant. 


The mount size difference between full frame and M43 is actually pretty small (it seems to be a phenomenon endemic to all ILCs, since even tiny 1-inch sensor cameras, and the Samsung NX1, have mounts much larger than their sensors). I also noticed that the M43 contact points for lenses sit further from the rim of the mount, closer to the sensor (and there appear to be 11 of them, instead of the 10 on Sony FF mounts). Panasonic could push the contact points a few more millimetres closer to the rim of the mount to accommodate a larger sensor (and maybe introduce a new lens lineup). 

I am guessing, therefore, that not only could an APS-C sensor fit in an M43-mount camera (like the JVC LS300), but something much closer to a full frame sensor could as well. Though how the lenses would handle such a large sensor would need to be tested.

I therefore think Panasonic should explore the possibility of putting a sensor larger than APS-C in a camera without IBIS (like the GM series and the GH5s), with higher usable ISO.

It could have two versions: one with a much higher pixel count (24-28MP), and the other with much larger pixels (12-18MP, with pixels larger than those on the GH5s). It seems quite doable. If they could adapt the present M43 lenses, it would be amazing.

What it definitely needs is PDAF (especially for high-speed photography and continuous autofocus in video), 14-bit colour, and a little more innovation (15-20 fps for sports photography, and a new log profile, since the old V-Log takes away dynamic range from the sensor's full capacity). 

They should price it at $2000, targeting both the A7iii and the A7s series. 

 

http://j.mp/2MXuZ03

Screenshot_20180828-150456.png

S35-MFT.gif

Screenshot_20180828-154631.png


What kind of cinema camera lineup does Panasonic have?  I'm wondering how protective they might be of those sales.  

It seems the new trend in mirrorless is to have two bodies - a lower priced one and a higher priced, more fully featured model.  They could do something like 4K60 and 1080p240 on the lower GH6 and 4K60 RAW on the higher priced one, perhaps.  Assuming they can convert their focusing system to phase-detect.


18 hours ago, sanveer said:

The mount size difference between full frame and M43 is actually pretty small (it seems to be a phenomenon endemic to all ILCs, since even tiny 1-inch sensor cameras, and the Samsung NX1, have mounts much larger than their sensors). I also noticed that the M43 contact points for lenses sit further from the rim of the mount, closer to the sensor (and there appear to be 11 of them, instead of the 10 on Sony FF mounts). Panasonic could push the contact points a few more millimetres closer to the rim of the mount to accommodate a larger sensor (and maybe introduce a new lens lineup). 

I am guessing, therefore, that not only could an APS-C sensor fit in an M43-mount camera (like the JVC LS300), but something much closer to a full frame sensor could as well. Though how the lenses would handle such a large sensor would need to be tested.

I therefore think Panasonic should explore the possibility of putting a sensor larger than APS-C in a camera without IBIS (like the GM series and the GH5s), with higher usable ISO.

It could have two versions: one with a much higher pixel count (24-28MP), and the other with much larger pixels (12-18MP, with pixels larger than those on the GH5s). It seems quite doable. If they could adapt the present M43 lenses, it would be amazing.

What it definitely needs is PDAF (especially for high-speed photography and continuous autofocus in video), 14-bit colour, and a little more innovation (15-20 fps for sports photography, and a new log profile, since the old V-Log takes away dynamic range from the sensor's full capacity). 

They should price it at $2000, targeting both the A7iii and the A7s series. 

 

Personally I don't think Panasonic needs a 'bigger' sensor. And while a bigger sensor in the current mount may be possible, it will probably be of little use in most cases because of the image circle from their existing lenses.

My feeling is that sensor 'size' should in theory become 'less' important over the next 10 years rather than 'more' important, due to the computational photography we are already seeing in, say, smartphones (most notably the Pixel 2).

Theoretically at least we should be able to see...

1) Better DR through 'HDR'

2) Better low light performance 'by median averaging'

3) A built in variable ND 'by mean averaging'

4) Higher resolution through 'pixel shifting'

...all achieved by combining multiple exposures into each shot...

And smaller sensors have an advantage here, because they can typically read more 'exposures' off the sensor faster and blend them with less image-processing power and less heat. So, for instance, we see 1080p240 on the GH5 that we don't see in FF.
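The low-light benefit of mean averaging is easy to sanity-check with a toy NumPy simulation (synthetic noise on a flat grey patch, nothing like a real camera pipeline): averaging N frames cuts random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat grey "scene" plus four simulated captures with sensor read noise
scene = np.full((100, 100), 128.0)
sigma = 10.0                                   # noise level of one capture
frames = [scene + rng.normal(0.0, sigma, scene.shape) for _ in range(4)]

single = frames[0]
stacked = np.mean(frames, axis=0)              # mean-average the 4 exposures

# Averaging N frames cuts noise by ~sqrt(N): here 4 frames -> half the noise
print(f"one frame:    {single.std():.2f}")     # ~10
print(f"4-frame mean: {stacked.std():.2f}")    # ~5
```

Same exposure, half the noise - the statistical core of what phone HDR+ pipelines exploit.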

I don't see Panasonic achieving anything by moving away from its strengths (especially at the US$2k level) - namely IBIS. And most importantly, I don't think they will go anywhere but backward unless they sort out their weaknesses - such as C-AF.


54 minutes ago, Mmmbeats said:

Can you pull multiple exposures off a sensor and still achieve 180° shutter?

In a theoretical sense yes.

Imagine that the correct exposure was:

1/200 f2.8 @ 25fps @ base iso

You could put on a 2 stop exposure ND and have...

1/50 f2.8 @ 25fps @ base iso = 180 degree

Alternatively (big assumptions: zero delay between each exposure and buckets of image-processing power) you could take

4 x 1/200 f2.8 @ 100fps @ base ISO, 'mean average' them (sum each pixel across the four frames and divide by 4), and end up with an identical result (same motion) but less noise.

But we haven't yet seen much of this sort of computational photography in larger sensors, with the Red Helium probably being an exception...

https://www.dxomark.com/red-helium-8k-dxomark-sensor-score-108-a-new-all-time-high-score2/


5 hours ago, kye said:

What kind of cinema camera lineup does Panasonic have?  I'm wondering how protective they might be of those sales.  

 

Next above it they have the Panasonic EVA1, which doesn't even share the MFT mount with the GHx series!! A major oversight and mistake by Panasonic. 

The next step up is their Varicam range, which is much, much more expensive. 

 


4 hours ago, Robert Collins said:

Personally I don't think Panasonic needs a 'bigger' sensor. And while a bigger sensor in the current mount may be possible, it will probably be of little use in most cases because of the image circle from their existing lenses.

Yes and no. Yes, because the image circle of most M43 lenses is much smaller than that of full frame lenses, despite the similar mount circumference. Everyone can do with a bigger sensor, or much better processing, or both. 

 

4 hours ago, Robert Collins said:

My feeling is that sensor 'size' should in theory become 'less' important over the next 10 years rather than 'more' important, due to the computational photography we are already seeing in, say, smartphones (most notably the Pixel 2).

Actually, right now there are quite a few issues with sensor size, optics (lenses and glass mostly), photo formats, and computational photography which make it impossible (at least for the moment) for smaller sensors to replace larger ones. 

 

4 hours ago, Robert Collins said:

Theoretically at least we should be able to see...

1) Better DR through 'HDR'

2) Better low light performance 'by median averaging'

3) A built in variable ND 'by mean averaging'

4) Higher resolution through 'pixel shifting'

...all achieved by combining multiple exposures into each shot...

And smaller sensors have an advantage here, because they can typically read more 'exposures' off the sensor faster and blend them with less image-processing power and less heat. So, for instance, we see 1080p240 on the GH5 that we don't see in FF.

There are larger sensors that do faster frame rates, but yes, at least theoretically, smaller sensors should have faster readout speeds. 

1) Most flagships have better dynamic range through HDR, and yes, the Pixel 2 is probably the leader of the wolf pack.

2) Just as dynamic range can be improved by stacking, so can low light. 

3) If you mean lower or higher exposure by stacking, perhaps. That's possible even without it. Though I wonder how accurate things like exposure or white balance consistency would be.

4) Some are doing it, but nobody seems to be doing it extremely well (except ILCs, which do it with cameras set on tripods to enable perfect stacking, with no shake of any kind hampering stitching). Here multi-frame capture, not multi-exposure, is doing the magic work. 

Actually the Pixel and the P20 Pro have shown that there is hope for computational photography, but at the moment there are a few limitations that need to be addressed first. The most important ones are processing power, the sensor's maximum readout speed, photo format and optics.

The L16 is proof that many of these things are easier with a single sensor than with multiple ones. Corner softness is a huge issue, as are the other problems with stitching multiple smaller photos into one large photo.

The Pixel's secret sauce is actually a copy of Panasonic's 4K Pre-Burst mode: it starts shooting pictures before you actually press the shutter, saving time and giving it more photos to stack. Most of these smaller sensors can do full-res 10-bit or 12-bit at 25-40 frames per second. If they could do between 60 and 120 frames at full res, and had the processing power to shoot (and stack) those pictures in RAW, it would be way better than what we get now.

I guess there is no real consumer processing of RAW photos, especially on smartphones (which I believe is not due to processing power limitations, writing speeds, or pipeline issues). So stacked photos are usually just 8-bit JPEGs. There are 10-bit HEIF photos, but their quality and implementation are still at a very early stage, and hardly anyone is adopting the format right now (except Apple in the iPhone X, which has some serious glitches that need to be ironed out). 

Unfortunately, 8-bit JPEGs, whether they have 12 stops or 14 stops (like on the Mavic 2 Pro), will not be replacing professional camera photos shot in 12- and 14-bit RAW anytime soon.

There are some major issues with optics, crosstalk, colour information, and other problems with small sensors and cameras, which experts (like those behind the Light L16 camera) are genuinely ignorant of, and which smartphone companies (like Google, for the Pixel cameras) conveniently don't discuss.

I have high hopes for the new Sony 48MP IMX586 sensor. But I also realise that processing such large photos, and doing photo stacking on such an enormous-resolution sensor, is probably too challenging for any present processor (regardless of the number of additional ISPs onboard). Plus the optics would always be a compromise, especially the plastic ones.

I don't see computational-photography-assisted photos from small/tiny sensors competing with photos from M43 and larger cameras anytime soon. I would actually believe it may take at least 5 more years, on a very conservative estimate. It's not that the tech isn't there; it's just that nobody really wants to genuinely disrupt the ILC and professional photography and videography market anytime soon, especially on this side of the $1000 price range. 


7 hours ago, Robert Collins said:

In a theoretical sense yes.

Imagine that the correct exposure was:

1/200 f2.8 @ 25fps @ base iso

You could put on a 2 stop exposure ND and have...

1/50 f2.8 @ 25fps @ base iso = 180 degree

Alternatively (big assumptions: zero delay between each exposure and buckets of image-processing power) you could take

4 x 1/200 f2.8 @ 100fps @ base ISO, 'mean average' them (sum each pixel across the four frames and divide by 4), and end up with an identical result (same motion) but less noise.

But we haven't yet seen much of this sort of computational photography in larger sensors, with the Red Helium probably being an exception...

https://www.dxomark.com/red-helium-8k-dxomark-sensor-score-108-a-new-all-time-high-score2/

I don't quite get it.  The main point of 180° shutter is to emulate typical film motion blur, no?  Switching to 1/200 SS is going to reduce the motion blur and leave you with that (to my taste) yucky video-ey feel.  If so, the idea is dead in the water.  Or have I missed something?


Lately, I've been cheating the 180 degree rule quite a bit, makes no difference to me! Just not for moving subjects. I just posted a beta LUT test where I used practically every shutter angle available. It can work for interviews as well, provided the interviewees aren't jumping up and down and doing somersaults. Anything to avoid ND filters!


Yeah, it definitely works going a little bit faster from time to time (I shoot 25fps at 1/50 ordinarily).  At 1/60 I doubt it makes much difference (possibly even nicer to the eye sometimes, in fact).  1/80 can just about work.  1/100 is already too much for me for 'average' levels of motion. 

I was forced to shoot some camera B stuff above this recently (newly purchased body, didn't have an ND I could use) - the footage just didn't cut into the piece at all and couldn't be used.

True too that if there's no motion then the ND comes off and it's any number you like on the SS (if I remember!).

ETA: Don't like slower though - seems 'effecty' to me.


6 hours ago, Mmmbeats said:

I don't quite get it.  The main point of 180° shutter is to emulate typical film motion blur, no?  Switching to 1/200 SS is going to reduce the motion blur and leave you with that (to my taste) yucky video-ey feel.  If so, the idea is dead in the water.  Or have I missed something?

Yes, you have. You are right that each individual photo taken at 1/200 will be sharper and 'too sharp'. But if you take 4 consecutive shots at 1/200 and blend them together by 'mean averaging', you will introduce the same motion blur on anything moving as one shot taken at 1/50. You can test it yourself with stills and Photoshop.
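Instead of stills and Photoshop, the same test can be run as a toy NumPy simulation (a hypothetical 1-D sensor and a moving point light, purely illustrative): four consecutive 1/200 s exposures, mean-averaged, leave the same blur trail as one 1/50 s exposure of the same total duration.

```python
import numpy as np

def expose(start, duration, speed=400.0, n_sub=200, width=64):
    """One correctly exposed frame of a point light moving at `speed` px/s,
    integrated over n_sub sub-sample times on a toy 1-D sensor.
    (Total light is normalised to 1, i.e. ND / ISO keeps exposure constant.)"""
    img = np.zeros(width)
    step = duration / n_sub
    for k in range(n_sub):
        t = start + (k + 0.5) * step          # midpoint of each sub-interval
        img[int(speed * t) % width] += 1.0 / n_sub
    return img

# One 1/50 s exposure (the "2-stop ND" case)...
long_exp = expose(0.0, 1 / 50)

# ...versus four back-to-back 1/200 s exposures, mean-averaged
shorts = [expose(i / 200, 1 / 200) for i in range(4)]
blended = np.mean(shorts, axis=0)

print(np.allclose(long_exp, blended))  # True: identical motion-blur trail
```

The two images match pixel for pixel because the four short exposures cover the same 1/50 s window of the subject's motion; only the noise behaviour differs.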


1 minute ago, Robert Collins said:

Yes, you have. You are right that each individual photo taken at 1/200 will be sharper and 'too sharp'. But if you take 4 consecutive shots at 1/200 and blend them together by 'mean averaging', you will introduce the same motion blur on anything moving as one shot taken at 1/50. You can test it yourself with stills and Photoshop.

Ah, I see.  That's pretty cool.  

