seku

Panasonic seems to be announcing something "BIG" on December 15


On 12/7/2017 at 10:16 AM, Cinegain said:

The D5300 has served me well, but the live view approach for stills on it is the worst, and the video capabilities are so-so; it's just not a mirrorless camera with all the innovation and features. Do love me some APS-C/S35 goodness, but... apparently you can't have it all.

As a D5300 (5200 a year before that) owner for the past 3 years, I agree. It produces a good image and is certainly a capable camera (depending on the context), but it does lack a lot of the cool refinements that have come along in recent years: a stock flat profile, 120 fps slow motion, and of course no luck on video AF, at least for me. But it does have a nice look to it if you're just shooting for YouTube, or basically anything where more than 1080p isn't required. I will likely upgrade in the next year to something else. I've been toying with swapping it for a D5500 (both about the same price used) just to try the stock flat profile against the Flaat 11 profile I use now. But even then...


With an ISO-less sensor it's true that differential pixel gain wouldn't change the total dynamic range of the sensor. That's why, for example, a high ISO is used with S-Log on Sony cameras: the camera underexposes behind the scenes and then pulls the signal up. But not all cameras behave this way, and in many cases analog amplification does help reduce the noise.
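A toy simulation of that trade-off (not any camera's real pipeline; the assumption doing the work here is that read noise is injected after the gain stage, just before the ADC, which is what makes analog gain help):

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(photons, analog_gain, read_noise_dn=2.0, bits=12):
    """Toy pipeline: shot noise -> analog gain -> read noise -> ADC."""
    electrons = rng.poisson(photons).astype(float)               # shot noise
    dn = electrons * analog_gain + rng.normal(0.0, read_noise_dn, electrons.shape)
    return np.clip(np.round(dn), 0, 2**bits - 1)                 # quantize

scene = np.full(100_000, 20.0)        # a dim patch: ~20 photons per pixel

pushed    = capture(scene, analog_gain=1.0) * 8.0  # ISO-less style: pull up 3 stops in post
amplified = capture(scene, analog_gain=8.0)        # analog gain before the ADC

# When read noise enters after the gain stage, amplifying first is cleaner:
print(pushed.std(), amplified.std())
```

On a truly ISO-invariant sensor the read noise sits before the gain stage, and the two numbers come out essentially identical; which regime a given camera is in is exactly what differs between models.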

There is also preprocessing applied even to the RAW data. Fuji, for instance, has implemented spatial noise reduction in each channel. Here is the FFT amplitude map of the raw red channel from an ISO 6400 file:

[Attached image: FFT amplitude map of the raw red channel at ISO 6400]

This nice reduction of amplitude in the high frequencies is a typical sign of spatial filtering. 
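For anyone who wants to reproduce this kind of check, here is a minimal sketch of the idea using synthetic data in place of a decoded raw channel (white noise stands in for unfiltered sensor noise, and a 3×3 box blur stands in for in-camera spatial NR; both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a raw red channel: white sensor noise, then a mild
# spatial low-pass, which is roughly what in-camera NR does.
raw = rng.normal(0.0, 1.0, (256, 256))
kernel = np.ones((3, 3)) / 9.0
# 2-D convolution via FFT (periodic boundaries are fine for this demo)
filtered = np.real(np.fft.ifft2(np.fft.fft2(raw) * np.fft.fft2(kernel, raw.shape)))

def hf_energy(img):
    """Mean FFT amplitude in the highest-frequency corner of the spectrum."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = amp.shape
    return amp[: h // 8, : w // 8].mean()   # far corner = highest frequencies

print(hf_energy(raw), hf_energy(filtered))  # filtered channel has far less HF amplitude
```

The same `hf_energy` measurement run on a real raw channel (decoded without demosaicing) is what reveals the kind of high-frequency roll-off shown in the attached map.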

Now to the original point: I also think that Quad Bayer Coding HDR is done with different temporal windows but the same onset time.

"The IMX294CJK uses a Quad Bayer structure, and outputs data binned in 2 × 2 pixel units in normal mode. In HDR mode, integration can be divided into long-time integration and short-time integration for each 2 pixels in the Quad array (Figure 1). In this case there is no time difference between long-time integration and short-time integration, which realizes HDR with little blending offset when imaging moving subjects (Photograph 3)."
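The merge the datasheet describes can be sketched as a toy model. The 8:1 integration ratio, the full-well value, and the pixel values below are assumptions for illustration, not Sony's actual numbers:

```python
import numpy as np

FULL_WELL = 4095      # 12-bit clip point (assumed)
RATIO = 8             # long/short integration ratio (assumed)

scene = np.array([10.0, 200.0, 1500.0, 3000.0])     # photon flux per short window

long_px  = np.clip(scene * RATIO, 0, FULL_WELL)     # long integration clips early
short_px = np.clip(scene, 0, FULL_WELL)             # short integration keeps highlights

# Merge: trust the long pixel unless it clipped; then scale up its short neighbour.
hdr = np.where(long_px < FULL_WELL, long_px, short_px * RATIO)
print(hdr)    # highlights recovered well past the single-exposure clip point
```

Because both integrations start at the same instant, the long and short samples see the same motion onset, which is why the blend shows so little offset on moving subjects.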

1 hour ago, maxotics said:

If you're amplifying before the ADC, wouldn't that circuitry distort the signal before digitising, just trading one source of error (analog) for another (digital)? How would any specific pixel know whether to amplify or not? I mean, that's the rub, isn't it: no pixel can know beforehand how many photons will be sent its way. Anyway, can you give more specifics about how the sensor works in that regard?

There is this sensor: http://www.onsemi.com/pub/Collateral/KAE-02152-D.PDF

It has some kind of non-destructive amplifier that uses side effects of transferring the charge during readout to probe the charge and decide whether to route it through a charge multiplier for extra gain or not. According to the specs, this kind of amplification digs out much more DR. But it's a CCD, and I have no idea whether the technique is suitable for CMOS sensors. Wish I could understand this stuff better.
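As a rough illustration of why a per-pixel gain decision digs out DR: if weak charge packets are multiplied before the noisy output stage while strong ones bypass the multiplier, read noise on the dim pixels shrinks by the gain factor without clipping the bright ones. This is only a toy model; the threshold, gain, and noise figures below are invented, not taken from the KAE-02152 datasheet:

```python
import numpy as np

rng = np.random.default_rng(3)

THRESHOLD = 200      # electrons; assumed switch point for the gain decision
EM_GAIN = 16         # charge-multiplier gain when engaged (assumed)
READ_NOISE = 10.0    # electrons RMS at the output amplifier (assumed)

# Four pixel brightnesses, many trials each
charge = rng.poisson([5.0, 50.0, 500.0, 5000.0], (10_000, 4)).astype(float)

# Per-pixel decision from the non-destructive probe: weak packets go
# through the multiplier, strong ones bypass it so they don't clip.
boosted = np.where(charge < THRESHOLD, charge * EM_GAIN, charge)
read = boosted + rng.normal(0.0, READ_NOISE, boosted.shape)
linear = np.where(charge < THRESHOLD, read / EM_GAIN, read)

# Effective read noise on the dim pixels drops by the EM gain factor:
print((linear[:, 0] - charge[:, 0]).std())   # dim column: noise / EM_GAIN
print((linear[:, 3] - charge[:, 3]).std())   # bright column: full read noise
```

In the real device the decision is made in the charge domain per pixel during readout; the point of the sketch is only that switchable gain extends DR downward without sacrificing the top end.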

1 hour ago, maxotics said:

When you say a "RAW file is not actual sensor data, there is still a lot of processing done": sure, camera manufacturers differ in how they program their ADC (where black levels are set, say, which is non-destructive to the data assuming you won't need the full 14 bits), but I don't know of any other "lot of processing" done. Can you elaborate? My experience is that RAW seems pretty close to actual sensor data. Indeed, if you just output a non-debayered grayscale TIFF with dcraw (the least amount of processing), it's unusable without a lot of complex processing afterwards.

Fixed-pattern noise reduction, black frame subtraction, microlens effect correction (like the "Italian flag" issue), regular spatial NR, dead pixel remapping - that's what I know of. It's why you can get nicely matching results from two different cameras of the same model: all the fine, unit-specific tuning is already done. If something is omitted or done wrong, we get stories like the URSA Mini 4.6K launch. So it's better to say that a raw image isn't baked, but normalised.
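A couple of those steps - black subtraction and dead-pixel remapping - are easy to sketch. This is a toy version, not any manufacturer's actual pipeline, with invented values throughout:

```python
import numpy as np

rng = np.random.default_rng(2)

def normalise_raw(frame, black_frame, dead_mask):
    """Toy in-camera 'normalisation': subtract the per-pixel black level,
    then patch dead pixels from their horizontal neighbours."""
    out = np.clip(frame.astype(float) - black_frame, 0, None)
    ys, xs = np.nonzero(dead_mask)
    for y, x in zip(ys, xs):
        left = out[y, max(x - 1, 0)]
        right = out[y, min(x + 1, out.shape[1] - 1)]
        out[y, x] = 0.5 * (left + right)
    return out

frame = rng.integers(500, 600, (4, 6)).astype(float)
black = np.full((4, 6), 512.0)           # fixed-pattern black level (assumed)
dead = np.zeros((4, 6), bool)
dead[1, 2] = True                        # one stuck pixel, mapped at the factory
frame[1, 2] = 4095                       # it always reads full scale

clean = normalise_raw(frame, black, dead)
print(clean[1, 2])                       # interpolated from neighbours, not 4095 - 512
```

The black frame and the dead-pixel map are exactly the kind of unit-specific calibration data that gets baked in at the factory, which is why two copies of the same model match so well.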

15 minutes ago, slonick81 said:

Fixed-pattern noise reduction, black frame subtraction, microlens effect correction (like the "Italian flag" issue), regular spatial NR, dead pixel remapping - that's what I know of. It's why you can get nicely matching results from two different cameras of the same model: all the fine, unit-specific tuning is already done. If something is omitted or done wrong, we get stories like the URSA Mini 4.6K launch. So it's better to say that a raw image isn't baked, but normalised.

My understanding is that the sensor data isn't changed on its way to RAW, but that all those issues you mentioned are handled by coefficients and instructions saved in the RAW file's header. Most people don't think to second-guess the manufacturer's instructions on how to pre-process RAW data, and I believe that leads to some confusion. I wish I knew this stuff better too; that's why I want to make sure what you say is correct. My understanding remains that RAW data is a truly original measurement of photon absorption. When I worked with Magic Lantern RAW on the "pink pixel" problem, it seemed that if what you said were correct, the focus-pixel data would never make it into the RAW stream.

I am currently working on a "workbench" to better analyze RAW data, which hopefully you and others might find interesting when it's ready to show. Last year I studied log gammas; my conclusions attracted some hate mail ;) I'm hoping that if I start from the ground up with RAW, some of my findings about log gammas will make more sense. And if I'm wrong about some of my conclusions, I'll be better placed to see those errors.


Wow... yeah, I don't know of any image sensor that's ever pulled a stunt like this (a mixture of long and short exposure times).

Could this Sony IMX294 produce an image with motion blur in the shadows but sharp highlights... all in the same image?

Obviously, this timing trick will see into the dark, or "low light", only as well as the longest-integrating pixels can take in light. In theory, a similar sensor without this quad timing cluster (a single integration time) would see into the shadows to exactly the same extent; it would only be missing the highlight-protection feature that the multiple timings bring.

Does that make sense? At a 1/30 s exposure, both the single-time and the dual-time sensor will collect the same number of photons in low light. The benefit comes to the dual-time sensor only in the highlights?
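The arithmetic behind that intuition, with an assumed 8:1 split between the long and short integration times:

```python
import math

long_t, short_t = 1 / 30, 1 / 240   # assumed integration times for the two pixel groups
ratio = long_t / short_t            # 8x

# Shadows: photons collected in the dark are set by the LONG pixels,
# exactly as on a single-exposure sensor running the same 1/30 s.
dim_flux = 300.0                    # photons per second from a dim patch (assumed)
print(dim_flux * long_t)            # ~10 photons, same either way

# Highlights: the short pixels clip 'ratio' times later, so the sensor
# gains log2(ratio) stops of highlight headroom and nothing in the shadows.
print(math.log2(ratio))             # 3.0 extra stops, all at the top end
```

So the DR extension is one-sided: all the extra range lands above the old clip point, none below the old noise floor.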

So... I can see this being a high-dynamic-range sensor, but not really a "low light" beast.

I dunno... I'm just guessing out loud.

5 hours ago, maxotics said:

My understanding is the sensor data to RAW isn't changed, but that all those issues you mentioned are taken care of in various coefficients and instructions saved into the RAW file in the header.

Well, I read Alex Tutubalin's blog (RawDigger, FastRawViewer, LibRaw) from time to time, and all the cases he describes as a developer of raw-file tools made me think this way. He has a model of raw metadata and how to apply it, but often gets strange results with new cameras. These usually don't affect general photography, but they can be found by raw analysis or in extreme shooting conditions, like:

https://www.lonelyspeck.com/why-i-no-longer-recommend-sony-cameras-for-astrophotography-an-open-letter-to-sony/

But I guess it's safe to assume there's quite a lot of fine tweaking inside modern photo cameras. Is it applicable to cinema cameras? Hard to say, but why would all these companies pass up the benefits of modern processing power? The Sony a7R III chews through 420 Mpix/s for stills, so in theory it could apply the same processing to a 4K 50 fps stream.

8 hours ago, maxotics said:

When I worked with Magic Lantern RAW, on the "pink pixel" problem, it seemed that if what you said was correct, the focus pixel data would never make it to the RAW stream. 

I'm no expert on ML, but in general its raw recording looks like a side hack that grabs the image from a memory buffer at some stage of the processing pipeline and dumps it to the card. Who knows, maybe this issue is corrected later in the pipeline, or it was never designed to be corrected in this sensor mode or operation mode.


@slonick81 It surprises me that no one involved in the design and manufacture of camera sensors visits this forum and gives a hint why the GH5 can't output RAW video. If the lowly first-gen Canon EOS M can do 720p at 37 Mbps and the 5D3 1080p at 80 Mbps, I don't know why the Panny cameras can't. And what is it about RAW capture that makes the sensor run so hot it must be cooled, like the Peltier cooling in the BMPCC? So I'm with you: so many questions!

Yes, I think you're right, the RAW from Magic Lantern isn't like photographic RAW. Still, it looks fine!

33 minutes ago, wolf33d said:

 

GH5s specs : https://photorumors.com/2017/12/14/here-are-the-detailed-panasonic-gh5s-specifications/#ixzz51FQWa6Dj

Seems like there's no new AF, or it would be a highlight. Too bad :(

 

That's supposed to be detailed???

Going by just that... I don't think anyone is losing anything with this new announcement... I think the GH5 (non-S) is still the better all-rounder.

If they add 4K at 120fps (a huge "maybe") and definitely the low-light stuff... but the GH5 still has better 20MP stills and 6K anamorphic open gate... so you win some, lose some, but at an additional $500.


If the low-MP sensor is true, and the 10-bit 60p... maybe there will be a permanent price reduction on the GH5... which means cheaper open-box models as well. Hell, at $1599 the current open-box price is pretty damn good, but I'll happily take $1299. I've been looking for an All-I 1080p camera for a while... maybe I'll get my wish after all.

3 hours ago, Mckinise said:

C4K at 60p, 150Mbps, 4:2:2 10 bit Long GOP

1080 at 240fps

12Mp sensor with up to 100,000 ISO

https://www.43rumors.com/ft5-panasonic-gh5s-shoots-240-fps-slow-motion-movies/

That's awesome. 60p 10-bit is really great. 1080 at 240 is also great.

Since this is a new, different sensor, having phase-detection AF would be EVEN better. I'm not buying it without that, even if it did 4K 10-bit at 240p (well, OK, I would...)

17 minutes ago, no_connection said:

Unless they have some new magic way to collect light, low light still depends on sensor size rather than pixel count. Well, that and the ability to process the data.

Well, it is not the pixel count, it is the pixel size!

 

12 minutes ago, no_connection said:

Unless they have some new magic way to collect light, low light still depends on sensor size rather than pixel count. Well, that and the ability to process the data.

There are many different ways of increasing the amount of collected light that gets converted to signal for a GIVEN sensor size. A few examples:

1. Pixel size. In the majority of sensor designs the pixels cover less than the total area of the sensor, so the more of that area the pixels cover, the more light is gathered.

2. Color filters. Typical Bayer sensors have an RGB pattern that filters away most of the light at each pixel location. The less you filter, the more light you collect, but the lower the color sensitivity becomes. Other filter designs, like RGBW or CYGM, pass more light through.

3. Quantum efficiency. This describes the sensor's ability to respond to incoming photons and convert them into a measurable electron signal. For example, BSI technology moves the readout circuitry from between each pixel's microlens and photodiode to behind the photodiode layer, allowing more light to reach the light-sensitive photodiode.
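The three factors above multiply together. Plugging in some purely hypothetical numbers (not measured values for any real sensor) shows how much they can matter at a fixed sensor size:

```python
# All figures below are invented for illustration only.
def collected_fraction(fill_factor, filter_transmission, qe):
    """Fraction of photons hitting the sensor area that become signal electrons."""
    return fill_factor * filter_transmission * qe

fsi_bayer = collected_fraction(0.55, 0.33, 0.50)   # front-side illuminated, RGB Bayer
bsi_rgbw  = collected_fraction(0.90, 0.50, 0.60)   # back-side illuminated, RGBW filters

print(bsi_rgbw / fsi_bayer)   # roughly 3x more signal from the same sensor area
```

That factor-of-three gap (about 1.5 stops) is why two sensors of identical size can differ meaningfully in low light even before any processing tricks.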

