Panasonic GH6


kye

1 hour ago, hyalinejim said:

Have you tried WBing GH5 10bit VLog in an ACES colour space? You should be able to do it with relative impunity.

I know there's no GH5 IDT, but you can sandwich Vlog to ACES to VLog and get your footage back to what it was, but with WB fixed.

Incidentally, I don't think that GH5 VLog should be treated as Rec709 gamut. Admittedly, converting to V-Gamut gives slightly wonky colour (slightly pink skin) but leaving it as is gives much worse (horribly green skin). 
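The reason WB corrections behave so well in ACES is that ACES is scene-linear: per-channel gains there act like re-lighting the scene, whereas the same gains applied to log-encoded values distort the tone curve. A toy sketch of that difference (the log curve below is a generic placeholder with made-up constants, not Panasonic's actual V-Log transfer function):

```python
import numpy as np

def log_encode(x, a=0.25, b=0.01):
    # Generic log curve for illustration only (NOT the real V-Log formula)
    return a * np.log2(x + b)

def log_decode(y, a=0.25, b=0.01):
    return 2.0 ** (y / a) - b

# A grey patch recorded under warm light, encoded to log in-camera
linear_rgb = np.array([0.40, 0.18, 0.09])   # R, G, B in scene-linear light
log_rgb = log_encode(linear_rgb)

# Correct: decode to linear, apply per-channel WB gains, re-encode
wb_gains = np.array([0.45, 1.0, 2.0])
fixed = log_encode(log_decode(log_rgb) * wb_gains)

# Wrong: applying the same gains to the log values directly warps the tones
wrong = log_rgb * wb_gains

print(log_decode(fixed))   # all three channels ~0.18: neutral grey recovered
```

The VLog-to-ACES-to-VLog sandwich works for the same reason: the round trip is lossless apart from the WB gain applied in the linear middle step.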

I never bought the V-Log update so I haven't tried that.  I've never used ACES either.

I do use Resolve Colour Management now - I think that's new in R17?  When I interpret the GH5 HLG footage as Rec2100 the controls work pretty well.  As I've mentioned before I've tested it against Rec2100 and Rec2020 and it isn't a perfect match with either of those, but Rec2100 works well enough to be useful.

I did look at buying V-Log, but once I saw that it also isn't natively supported I figured there was no point - I already have one thing that's useful but not an exact match so why pay for another one 🙂 

All this is in context of how you're grading of course, and I'm really liking grading under a PFE LUT (2393 is pretty good) which helps to obscure the GH5 WB / exposure / lacklustre colour science quirks in the images.

38 minutes ago, kye said:

the GH5 WB / exposure / lacklustre colour science quirks

Yes, but (and for the benefit of anyone reading who might be considering a GH5) if you shoot VLog and get your footage into ACES space there are no WB or exposure quirks. The colour science is not so easily answered, however there is one solution in my other recent thread.


5 hours ago, kye said:

I'm still not convinced about this.

Yes, they do say:
"Nothing is "baked" into an ARRIRAW image: Image processing steps like de-Bayer, white balance, sensitivity, up-sampling or down-sampling, which are irreversibly applied in-camera for compressed recording, HD-SDI outputs, and the viewfinder image, are not applied to ARRIRAW. All these parameters can be applied to the image in post."

However, immediately before that, they say:
"For the absolute best in image quality, for the greatest flexibility in post, and for safest archiving, the 16-bit (linear) raw data stream from the sensor can be recorded as 12-bit (log) ARRIRAW files."

So in this sense, the "nothing" baked into the ARRIRAW includes the combination of two streams of ADC and also a colour space conversion.  It's perfectly possible to do whatever you like to the image and still have the first statement be true in a figurative sense, which is how they have obviously intended it - "nothing you don't want baked in is baked in".  The ARRI colour science could still well be baked in and no-one would be critical and the statement they make would still be true in that figurative sense.

LOL.. there is no catch.. all they are saying here is that 16-bit ARRIRAW can be converted to 12-bit ARRIRAW.
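The 16-bit linear to 12-bit log repack ARRI describes is easy to sketch. The mapping below is a plain log2 curve with made-up constants, not ARRI's actual LogC formula, but it shows why 12 log bits can hold what 16 linear bits capture: linear spends half its codes on the single brightest stop, while log spreads codes evenly across every stop.

```python
import numpy as np

def linear16_to_log12(code16, black=256):
    # Hypothetical mapping: 16-bit linear sensor codes -> 12-bit log codes.
    # Plain log2 with invented constants, NOT ARRI's real LogC curve.
    x = np.maximum(code16.astype(np.float64) - black, 1.0)
    stops = np.log2(x)                        # position in stops above black
    max_stops = np.log2(65535 - black)        # ~16 stops of usable range
    return np.round(stops / max_stops * 4095).astype(np.uint16)

codes16 = np.array([300, 1000, 10000, 65535])
codes12 = linear16_to_log12(codes16)
print(codes12)                                # rises smoothly, tops out at 4095

# Code budget per stop: linear halves it with every stop down; log keeps it flat
codes_in_top_linear_stop = 65536 // 2         # 32768 codes for the brightest stop
codes_per_stop_log = 4096 / np.log2(65535 - 256)   # ~256 codes for EVERY stop
```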

RAW is RAW though, nothing is baked-in.. again their CS is an ensemble of things:

To create such outstanding images, all components of the imaging chain are custom designed by our engineers and carefully tuned for optimal performance, starting with the optical low pass filter, the CMOS sensor, the imaging electronics, and the image processing software.

.. do not underestimate the "custom" factor here. As just discussed above, the 2xADC (DGO) tech alone is revolutionary and game-changing. It took 10 years for Canon to even notice and reverse-engineer it. GH6 is apparently also trying to copy that albeit with a different method.

 

5 hours ago, kye said:

Like I said, the folks with access to higher-end equipment and colourists for backup and troubleshooting swap to the higher budget stuff when the going gets tough, but those of us who don't have that luxury are stuck with what we have and have to make the best of it without any backup.

But when we suggest that we'd prefer things that make our lives easier (robust colour science, codecs, DR, ISO performance, etc) rather than things that don't really help in difficult situations (resolution, etc) somehow that doesn't make sense to the people that aren't in our shoes?

Pal, please don't make this into a small guy vs corp situation. I know you think I'm your enemy here, but believe me, there is no "luxury" in this field at the moment; it's more like do or die... That said, I actually encourage you to challenge anything that isn't clear or dubious to you. That's how you learn, and debate is good for all of us, so thank you my man.. keep it coming 😉

To answer your question, what I've been trying to tell you is that one of the main benefits of high resolution is actually to help in "difficult situations", i.e. reframing/stabilization/still extraction. No one is actually delivering 6K/8K outside of mastering for future-proof archiving. A very good point made above, though, is that +6K could be a hindrance to things like DGO tech, and it makes me understand/reconsider why ARRI (the obvious IQ leader) isn't in any rush to go there...

 


On 2/19/2022 at 1:23 AM, Video Hummus said:

Well the DGO method is using hardware and the other method is doing two different exposures at different shutter speeds and then blending them. I think ZCAM tried to do this and this is also what phones are doing with fast shutter speeds and fast readouts (looks bad in my opinion).

This is an older thread discussing some of the 'dual gain output'/HDR methods in use - https://www.eoshd.com/comments/topic/38363-simultaneous-dual-gain-sensors/

After reading the Arri and Canon 'whitepaper' info:

Arri use a simultaneous dual ADC approach (their amplifiers and ADCs appear to be off-sensor), then combine the results.

Canon DGO reads out the stored charge for a pixel, stores it, then does two sequential ADC conversions at different gains and combines the results.

Both those approaches avoid the 'images captured at different times' motion artifacts issue that sequential frame capture HDR creates.

In that thread, androidlad mentions another HDR mode that some Sony sensors support called DOL-HDR - https://www.eoshd.com/comments/topic/38363-simultaneous-dual-gain-sensors/?do=findComment&comment=354063
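A minimal sketch of the "same exposure, two gains" merge that both the Arri and Canon descriptions imply. The gain ratio, read noise, and blend knee below are invented for illustration; real pipelines are far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def dual_gain_combine(scene, gain_ratio=16.0, read_noise=0.002, knee=0.9):
    # Illustrative only: made-up gain ratio / noise / knee,
    # not ARRI's or Canon's actual values.
    noise = lambda shape: rng.normal(0.0, read_noise, shape)
    high = np.clip(scene * gain_ratio + noise(scene.shape), 0.0, 1.0)  # clips early
    low  = np.clip(scene + noise(scene.shape), 0.0, 1.0)               # full range
    # Trust the high-gain path (lower effective noise) until it nears clipping,
    # then cross-fade to the low-gain path for the highlights
    w = np.clip((high - knee) / (1.0 - knee), 0.0, 1.0)
    return (1.0 - w) * (high / gain_ratio) + w * low

scene = np.array([0.001, 0.01, 0.05, 0.5, 0.95])   # scene-linear test values
combined = dual_gain_combine(scene)
print(combined)                                     # recovers ~the scene values
```

The shadows come out of the high-gain path, so their effective read noise is divided by the gain ratio; that division is where the extra dynamic range comes from, and because both readouts see the same exposure there are no motion artifacts.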


55 minutes ago, Django said:

LOL.. there is no catch.. all they are saying here is that 16-bit ARRIRAW can be converted to 12-bit ARRIRAW.

So, can you get the 16-bit files out of the camera and see them in an NLE?  or just the 12-bit files?

There's nothing stopping them from reading the data off the sensor, changing it in whatever ways they want without debayering it or anything like that, and then saving it to a card.  A talented high-school student could write an algorithm to do that without any problems at all, so it's not impossible.  

When light goes into a camera it goes through the optical filters (e.g. the OLPF) and the Bayer filter (the first element of the colour science, as these filters determine the spectral response of the R, G, and B photosites).  Then it gets converted from analog to digital, and then it's data.  There's very little opportunity for colour science tweaks there.  I've looked at their 709 LUT and it doesn't seem to be there either.

I'm seeing things in the colour science of the footage, but I'm just not sure where they are being applied in the signal path, and in-camera seems to be the only place I haven't looked.  

55 minutes ago, Django said:

.. do not underestimate the "custom" factor here. As just discussed above, the 2xADC (DGO) tech alone is revolutionary and game-changing. It took 10 years for Canon to even notice and reverse-engineer it. GH6 is apparently also trying to copy that albeit with a different method.

It would be amazing if we were to get that tech in affordable cameras.  It would give better DR and might prompt even higher quality files (i.e. 12-bit log is way better than 12-bit linear RAW).

55 minutes ago, Django said:

Pal, please don't make this into a small guy vs corp situation. I know you think I'm your enemy here, but believe me, there is no "luxury" in this field at the moment; it's more like do or die... That said, I actually encourage you to challenge anything that isn't clear or dubious to you. That's how you learn, and debate is good for all of us, so thank you my man.. keep it coming 😉

It's not a small guy vs corp thing at all.

Most of the people pointing Alexas or REDs at something have control of that something.  
Most of the hours of footage captured by those cameras will be properly exposed at native ISO, will be in high-CRI single-temperature lighting, and will be pointed at something where the entire contents of the frame are within certain tolerances (eg, lighting ratios and no out-of-gamut colours, etc).

Most of the people pointing sub-$2K cameras at something do not have total control of that something, and many even have no control over that something.  
A lot of the hours of footage captured by those cameras will not be properly exposed at native ISO (or wouldn't be at 180 shutter), won't be in high-CRI single-temperature lighting, and won't be pointed at something where the entire contents of the frame are within certain tolerances (eg, lighting ratios and no out-of-gamut colours, etc).

You really notice how well your camera/codec handles mixed lighting when you arrive somewhere where the lighting looks completely neutral, look through the viewfinder, and see this:

vlcsnap-2019-05-14-21h06m25s814s.thumb.jpg.96d965f10c7aa48b62330465137b6776.jpg

This was a shoot I had a lot of trouble grading but managed to make at least passable, for my standards anyway.  There are other shots that I've tried for years to grade and haven't been able to, even through automating grades, because things moved between light-sources.

Unfortunately that's the reality for most low-cost camera owners 😕 

55 minutes ago, Django said:

To answer your question, what I've been trying to tell you is that one of the main benefits of high resolution is actually to help in "difficult situations", i.e. reframing/stabilization/still extraction. No one is actually delivering 6K/8K outside of mastering for future-proof archiving. 

The difficult situations I find myself in are:

  • low-light / highISO
  • mixed-lighting
  • high DR

and when I adjust shots from the above to have the proper WB and exposure and run NR to remove ISO noise, the footage just looks so disappointing.

Resolution can't help with any of those.  I've shot in 5K, 4K, 3.3K and 1080p, and it's rare that the "difficult situation" I find myself in would be helped by extra resolution.  I appreciate that my camera downsamples in-camera, which reduces noise, and that the 5K sensor on the GH5 lets me shoot downsampled 1080p and even engage the 2X digital zoom while still downsampling (IIRC it's taking that from something like 2.5K), but I'd swap all that for a lower-resolution sensor with better low-light and more robust colour science without even having to think about it.
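The noise benefit from in-camera downsampling is just averaging: combining N noisy photosites cuts random noise by roughly √N, so a 2x2 bin halves it. A toy check on a flat grey frame with Gaussian noise (the signal and noise numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Flat grey 'sensor' frame with Gaussian noise, 4K-ish dimensions
signal, noise_std = 0.5, 0.05
frame = signal + rng.normal(0.0, noise_std, (2160, 3840))

# Average each 2x2 block: a crude stand-in for in-camera 4K -> 1080p downsampling
binned = frame.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(frame.std())    # ~0.050
print(binned.std())   # ~0.025: half the noise, i.e. a sqrt(4) improvement
```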

55 minutes ago, Django said:

A very good point made above, though, is that +6K could be a hindrance to things like DGO tech, and it makes me understand/reconsider why ARRI (the obvious IQ leader) isn't in any rush to go there...

They care about quality over quantity, and realise that one comes at the expense of the other.

This is literally what I've been trying to explain to you for (what seems like) weeks now.

34 minutes ago, ac6000cw said:

In that thread, androidlad mentions another HDR mode that some Sony sensors support called DOL-HDR - https://www.eoshd.com/comments/topic/38363-simultaneous-dual-gain-sensors/?do=findComment&comment=354063

Interesting stuff.

In that thread, the post you linked to from @androidlad says:

On 4/29/2020 at 3:52 AM, androidlad said:

The HDR video mode on Mavic Air 2 uses the DOL-HDR feature on the newer Sony sensors. It's very similar to dual gain, but the two readouts are not completely simultaneous (a few milliseconds apart), and it requires 2 ADCs for high and low gain, which halves the max frame rate and doubles the rolling shutter.

This idea of taking frames "a few milliseconds apart" sounds like taking two exposures where the exposure time doesn't overlap.  Assuming that this is the case then yeah, motion artefacts are the downside.  Of course, with drones it's less of a risk as things are often further away and unless you put an ND on it the SS will be very short, so motion blur is negligible anyway.

We definitely want two readouts from the same exposure for normal film-making.
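The "halves the max frame rate and doubles the rolling shutter" penalty androidlad describes follows directly from the readout arithmetic, since DOL-style HDR reads every row twice per frame. The line time and row count below are made-up example numbers, not any real sensor's spec:

```python
# Illustrative readout arithmetic for a DOL-style sequential HDR mode
line_time_us = 8.0                               # made-up time to read one row
rows = 2160

single_readout_ms = rows * line_time_us / 1000   # rolling-shutter skew, normal mode
dol_readout_ms = 2 * single_readout_ms           # every row is read twice

max_fps_single = 1000 / single_readout_ms
max_fps_dol = 1000 / dol_readout_ms

print(single_readout_ms, dol_readout_ms)         # 17.28 ms vs 34.56 ms
print(max_fps_single, max_fps_dol)               # ~57.9 fps vs ~28.9 fps
```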


34 minutes ago, kye said:

This idea of taking frames "a few milliseconds apart" sounds like taking two exposures where the exposure time doesn't overlap.

We definitely want two readouts from the same exposure for normal film-making.

GH6 will use single exposure dual gain HDR:

9083f7cbgy1gzghsfhikxj21hc0u078j.thumb.jpg.e485ee0f45bba8a061cf76416640c27e.jpg


1 hour ago, androidlad said:

GH6 will use single exposure dual gain HDR:

9083f7cbgy1gzghsfhikxj21hc0u078j.thumb.jpg.e485ee0f45bba8a061cf76416640c27e.jpg

Well, I'm always reluctant to take early specs and speculation to heart; relying on a video blogger's "leak" and a home-made (unsourced) simplistic graphic is, along with any other information provided so far, guesswork at best. Hopefully Panasonic will toot their own horn with some technical background information or a white paper at this camera's release. Fingers crossed they do; until then I'll file the above image in my "if it can be believed" drawer.

In the meantime, I'll be chewing on some popcorn hoping that Panasonic finally releases the dual-gain technology they've actually been working on for a long, long time...

Image Sensors World: Panasonic 123dB WDR Organic Sensor

http://image-sensors-world.blogspot.com/2016/02/panasonic-123db-wdr-organic-sensor.html


On 2/19/2022 at 6:44 PM, Caleb Genheimer said:

The Alexa sensor’s photosites have TWO ADC each, at different gain settings, in parallel, for simultaneous capture (at the same moment.) The signals from these offset sensitivity ADCs are then combined by the image processor to cover a resulting higher DR. This is entirely possible in a prosumer camera at reasonable bitrates via LOG scale encoding. The real bottleneck is the ADC tech, which is linear in nature. 
...

It sounds like the GH6 is using an entirely different approach to increase DR: sequential offset exposure.

Is it not possible that Panasonic are also using a hardware solution based on the existing Dual ISO technology they have already used in the GH5S?


31 minutes ago, Mmmbeats said:

Is it not possible that Panasonic are also using a hardware solution based on the existing Dual ISO technology they have already used in the GH5S?

Actually... I'm just catching up on the rest of the thread that discusses things in more detail...


22 hours ago, kye said:

So, can you get the 16-bit files out of the camera and see them in an NLE?  or just the 12-bit files?

ARRIRAW is supported natively in Resolve but with a limited number of settings. 

Best workflow is to use the official ARRIRAW converter.

21 hours ago, kye said:

The difficult situations I find myself in are:

  • low-light / highISO
  • mixed-lighting
  • high DR

and when I adjust shots from the above to have the proper WB and exposure and run NR to remove ISO noise, the footage just looks so disappointing.

Resolution can't help with any of those.

Never claimed resolution would fix those specific issues, just saying there are real benefits to high resolution for other real-world requirements.

22 hours ago, kye said:

They care about quality over quantity, and realise that one comes at the expense of the other.

This is literally what I've been trying to explain to you for (what seems like) weeks now.

There is always a trade-off somewhere. I have great respect for ARRI and their overall dedication to IQ across the pipeline, but other companies are doing interesting, if not better, work in related areas (Komodo's global shutter, for instance). That said, I do think ARRI's dual ADC tech is absolutely key to their DR and latitude, and that is certainly something I'm glad to see trickle down to more affordable cameras.


I thought this odd...

LUMIX GH6 Launch Event + Griffin's Live Commentary - YouTube

 

...a Lumix "brand ambassador" doing a live, blow-by-blow simulcast during the official announcement? Thoughts?

Disclaimer: I have zero affiliation with either Panasonic or Mr. Hammond, link posted for informational usage only.


31 minutes ago, Jimmy G said:

I thought this odd...

Why's it odd?  Not my cup of tea, but I imagine some people might find the official launch a bit dry and impenetrable, so he's thrust in front to make it a bit more lively and palatable.  Makes sense to me. 


2 hours ago, Jimmy G said:

..a Lumix "brand ambassador" doing a live, blow-by-blow simulcast during the official announcement? Thoughts?

So does that mean that reviewers / ambassadors DON'T have the GH6 in hand already?

I remember Sean Robinson saying about a month ago he has already seen / played with a GH6 (but then went on to say that he couldn't tell us anything about it). So I would have imagined that the likes of all our least-favorite youtubers would have had a chance to test it by now.


1 minute ago, Mark Romero 2 said:

So does that mean that reviewers / ambassadors DON'T have the GH6 in hand already?

I remember Sean Robinson saying about a month ago he has already seen / played with a GH6 (but then went on to say that he couldn't tell us anything about it). So I would have imagined that the likes of all our least-favorite youtubers would have had a chance to test it by now.

Reviewers most certainly have had these cameras in hand for a number of weeks now.


1 hour ago, Django said:

FYI Andrew has a GH6 in-hand but like others is silenced by NDA until official camera launch tomorrow..

Now you tell us?! Heck, I've been carrying on about organic sensors for weeks now and the whole time Andrew has been letting me hang myself? LOL


3 hours ago, Mmmbeats said:

Why's it odd?  Not my cup of tea, but I imagine some people might find the official launch a bit dry and impenetrable, so he's thrust in front to make it a bit more lively and palatable.  Makes sense to me. 

"Say, uh, boss...the board was thinking that maybe your delivery is a bit slow...y'know, kinda makes people's eyes a bit heavy...nothing, um, personal, but maybe how about we let Griffin make the presentation to the Americans...y'know, spice it up a bit?...whaddaya think?" And here I was thinking maybe the official version might not be in English?

"Odd" as in, new one by me.


48 minutes ago, Jimmy G said:

"Say, uh, boss...the board was thinking that maybe your delivery is a bit slow...y'know, kinda makes people's eyes a bit heavy...nothing, um, personal, but maybe how about we let Griffin make the presentation to the Americans...y'know, spice it up a bit?...whaddaya think?" And here I was thinking maybe the official version might not be in English?

"Odd" as in, new one by me.

I hear you.

Funny enough he's not the only one doing it.  I've seen someone else promoting their own live dissection of the event online too.  Not official though, I think they've just taken it upon themselves.

It's a very meta world now don't you know?  You might just have to get used to it 😉.  


5 hours ago, Mmmbeats said:

I hear you.

Funny enough he's not the only one doing it.  I've seen someone else promoting their own live dissection of the event online too.  Not official though, I think they've just taken it upon themselves.

It's a very meta world now don't you know?  You might just have to get used to it 😉.  

If you're talking about the launch event watch party, yeah I saw a link for that on YT...maybe our host should have considered such a thing here...oh yeah, NDA and all, maybe not a good idea...mighta been fun!

Wear a bucket on my head?! LOL

