
What was the first professional camera to shoot LOG gamma?


maxotics


Does anyone know when the first video cameras, both professional and consumer, that recorded a LOG gamma were introduced?  I assume something like the Sony F35?  Anyway, each manufacturer and an approximate date would be helpful.  Also, does anyone have an opinion on when most professional filmmakers moved from LOG on professional 8-bit cameras to RAW formats (like on RED, etc.)?  THANKS!  I want to use this information for the next video I do on the subject.


46 minutes ago, maxotics said:

I assume something like the Sony F35?

The Sony F23 is even older than the F35. I'm not saying the F23 was the first Sony camera with S-Log, but I think it might be: https://www.provideocoalition.com/the_not_so_technical_guide_to_s_log_and_log_gamma_curves/ Though there is still the question of whether other manufacturers did log before Sony?! Maybe Panasonic's first ever Varicam had log?!

46 minutes ago, maxotics said:

Also, does anyone have an opinion when most professional filmmakers moved from LOG on professional 8-bit cameras to RAW formats (like on RED, etc). 

There were raw cameras before RED but none made the impact that the RED ONE made. 

And the RED ONE hit before slog was commonly used. 


@maxotics I know you’re an advocate of linear profiles, but aren’t they more contrasty, with less highlight information than LOG? Why is it that the pros all shoot RAW or Log? I watch shows like Chef’s Table (shot on the Red), and can’t believe the beautiful color. Right now, I shoot HLG, which is the best of both worlds for me at the moment.


10 hours ago, jonpais said:

I know you’re an advocate of linear profiles, but aren’t they more contrasty, with less highlight information than LOG?

I like to think of myself as an advocate of understanding how one's camera works. 

Linear vs LOG is exhibit A in many filmmakers' choice to skip math class and watch cartoons all day ;)

I've done a couple of videos recently on my YT Maxotics channel, which will be the foundation for my new video on LOG.

Anyway, the short answer is that a camera records a set of data (pixel) points.  Each value is meant to convert into a color.  The manufacturer gives you a few options with color profiles, like neutral, portrait, landscape.

What's the technical difference between linear and LOG?  Simply a difference in equation, NOT a difference in data.  Before anyone jumps down my throat saying there IS a difference in data because a LOG gamma will pull different data points from the sensor: yes, that's true, AND THAT IS THE REAL DIFFERENCE.  The difference between linear and LOG, as it applies to a shooting gamma, is the choice of which data you keep.  It is NOT about "linear vs LOG" as math.  So what IS that difference?
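To make that concrete, here's a toy sketch in Python (made-up constants, not any manufacturer's actual curve): the same normalized sensor values pushed through a "linear" mapping and a generic log mapping. Only the equation changes; which data survives quantization is what really differs.

```python
import numpy as np

def encode_linear(x):
    # identity transfer: code value proportional to scene light
    return x

def encode_log(x, black=0.01):
    # generic log curve (toy constants) normalized so 0 -> 0 and 1 -> 1
    return np.log(x / black + 1.0) / np.log(1.0 / black + 1.0)

sensor = np.array([0.01, 0.05, 0.18, 0.50, 1.00])   # same captured data
print(np.round(encode_linear(sensor), 3))  # spread according to scene light
print(np.round(encode_log(sensor), 3))     # mids lifted, top stops squeezed together
```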

Yes, "linear" profiles (which end up in a LOG-like distribution no matter what) have less highlight information.  BUT one can't claim that without stating the data trade-off on the other end: LOG profiles have more highlight information than linear, but less color information in the mid-tones.

So my question to you, or anyone, is why would you always want to decrease the quality of your skin-tones, or mid-tones, to get more highlights in your image?  If that's your "look", fine.  As I've said a gazillion times: ACES!  But to believe you're shooting better images than your friend with the non-LOG camera is simply false if you're both shooting to maximize mid-tone saturation.

Part of the confusion is due to various studies about how sensitive we are to light at the high and low end.  Again, some have erroneously concluded that LOG can "fix" a problem with linear video data, as if the people making the cameras are complete idiots :)  No, all h.264 video ALREADY applies a LOG-like gamma (the rec.709 curve) to correct for that biological truth.
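For anyone who wants to see it, this is roughly the standard rec.709 camera curve that ordinary video is already encoded with (a short sketch of the standard OETF; the stop values are just for illustration):

```python
def rec709_oetf(L):
    # Rec.709 transfer function for scene-linear L in [0, 1]
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

for stop in range(8):
    L = 1.0 / 2 ** stop                  # start at clipping, go one stop down each time
    print(f"{stop} stops below clip -> code value {rec709_oetf(L):.3f}")
```

So the top stops already get compressed into the upper part of the range; LOG just pushes that redistribution further.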

LOG solves a very small problem: what to do if you want to shoot a very non-saturated style (like a zombie movie), or if you don't want your sky completely blowing out in an outdoor shot where skin-tones aren't important.

I'll put it bluntly.  Anyone who shoots LOG for everything, and does not do it to get a specific flat style, is not getting the best data out of their camera.  Anyone who believes a LUT will put back the mid-tone color that a LOG gamma removes is also expecting the impossible.

Finally, there is the question of 10-bit video.  Theoretically, it should be able to pull in those highlights and keep the same mid-tone saturation you'd get in 8-bit.  My preliminary tests with Sony cameras suggest that it doesn't happen.  I don't know why.  I haven't seen any comparison video of 10-bit vs 8-bit where there is a significant improvement in DR without losing saturation.  Some suggest it is video compression getting in the way.  It's something I will be testing as soon as I'm finished with "work-work" :)
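On paper the quantization math is simple; a back-of-envelope sketch (the 14-stop figure is just an assumption about what the log curve is designed to cover, not any camera's spec):

```python
# If a log curve spreads its stops roughly evenly across the code range,
# 10-bit gives 4x the code values per stop of 8-bit.  That's the theory;
# it says nothing about what the codec then throws away.
STOPS = 14                      # assumed DR the log curve is meant to hold
for bits in (8, 10):
    codes = 2 ** bits
    print(f"{bits}-bit log over {STOPS} stops: ~{codes // STOPS} codes per stop")
```

In theory that's plenty of room for both highlights and mid-tone saturation; whether the compression preserves it is exactly the open question.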

 


10 hours ago, IronFilm said:

The Sony F23 is even older than the F35. I'm not saying the F23 was the first Sony camera with S-Log, but I think it might be: https://www.provideocoalition.com/the_not_so_technical_guide_to_s_log_and_log_gamma_curves/ Though there is still the question of whether other manufacturers did log before Sony?! Maybe Panasonic's first ever Varicam had log?!

Yes, that article, from 2009, goes into the trade-offs of LOG gammas!  My theory is that many young filmmakers identified LOG with "professional", so when LOG appeared on their cameras they began to shoot with it, forgetting the fine points of data capture mentioned in technical papers like that one.  Also, the cottage industry of color profile makers set out to "fix" LOG footage, conveniently forgetting to educate their customers that sometimes what a filmmaker needs, to get the colors they want, is to shoot in a normal rec.709 gamma.


13 hours ago, jonpais said:

@maxotics I know you’re an advocate of linear profiles, but aren’t they more contrasty, with less highlight information than LOG? Why is it that the pros all shoot RAW or Log? I watch shows like Chef’s Table (shot on the Red), and can’t believe the beautiful color. Right now, I shoot HLG, which is the best of both worlds for me at the moment.

I've had this question before, and I think it comes mostly down to compression. Big budgets are shooting ProRes or raw, not low bitrate h264. I haven't watched all of Max's stuff, but from what I can tell from experimentation, there is definitely a tradeoff between dynamic range and color information when not shooting raw. I'm a great colorist (from years of dealing with S-Log), but after reading/watching Max's take on the negative tradeoffs, I've been experimenting with more contrasty profiles and have found that for compressed footage, even 10 bit, the image is much more robust when you start off closer to the final image in camera. This is more true the higher the compression, and especially the higher the noise. I stopped shooting S-Log on my FS5 at 120/240fps altogether, and on dark overcast days I pretty much don't use it either because there's so little color in the world to begin with.

To me, the real advantage of Log on compressed images is keeping the shadows and highlights from getting macro-blocked together and lost forever. But this definitely comes with the sacrifice of color fidelity.

I've also come to realize that 90% of viewers don't care if blacks or whites are clipped, so long as people (skin tones) look great and there's a smooth roll-off.


6 minutes ago, EthanAlexander said:

To me, the only real advantage of Log on compressed images is keeping the shadows and highlights from getting macro-blocked together and lost forever. But this definitely comes with the sacrifice of color fidelity.

Excellent point!  Exactly the kind of questions that fascinate me and for which there is no data to be found.  With 4K one gets better color in post-compression highlights by reducing color distortions that pass through from the CFA.  So has 4K cured that problem?  How much of one, or the other?  

I don't know what anyone here thinks, but I've noticed a shift in filmmaking philosophy about color.  For example, I've been watching the "Dirty Money" docs on Netflix (highly recommended).  Each scene has its own color model.  One person might be interviewed and look flat, like a LOG gamma.  Another scene might be rich in saturated colors.  There's something to be said for having a change of look to keep things fresh.  So even though, in a perfect world, an old fogey producer might want everyone to shoot the same gamma and recording tech, it seems as if the issues I'm pointing out are becoming less of an aesthetic problem, not because people favor LOG over rec.709, say, but because they want some variety in their viewing experience.  I'm certainly getting used to it.


Log is only relevant for compression, where you trade bit depth in areas where it simply does not matter; highlights simply do not need the same attention as lower values, and you would be wasting a lot of info if you encoded everything in linear.

*edit* Meant to view post not save it yet :P

Anyway, the difference in bright objects can be very high yet we don't see it, so you easily waste a few bits just for that 1:100 sun-to-sky ratio.

I kinda like linear for certain operations though. White balance, for one, since it can't be done right in any other way.
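Rough sketch of what I mean with toy numbers (a stand-in 2.2 gamma instead of any real camera curve): the same channel gains only land on neutral if you apply them to linear data.

```python
rgb_linear = (0.30, 0.40, 0.60)              # a neutral grey lit by bluish light
gain_r, gain_b = 0.40 / 0.30, 0.40 / 0.60    # gains chosen so R = G = B in linear

def gamma(v, g=1 / 2.2):                     # stand-in encoding curve
    return v ** g

# right: apply the gains in linear, then encode
linear_first = [gamma(rgb_linear[0] * gain_r), gamma(rgb_linear[1]), gamma(rgb_linear[2] * gain_b)]
# wrong: encode first, then apply the same gains to the encoded values
encoded_first = [gamma(rgb_linear[0]) * gain_r, gamma(rgb_linear[1]), gamma(rgb_linear[2]) * gain_b]

print([round(v, 3) for v in linear_first])   # [0.659, 0.659, 0.659] -> truly neutral
print([round(v, 3) for v in encoded_first])  # [0.771, 0.659, 0.529] -> cast over/under-corrected
```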


Off topic again, apologies to @maxotics. How about HDR? Log or RAW are required for delivering high dynamic range. Rec.709 is limited to five or six stops of dynamic range (assuming perfect exposure and white balance), and SDR displays have at most seven stops, whereas HDR has several stops more. Up until recently, while cameras like the Arri have been able to shoot fourteen stops of DR, we’ve been limited to seeing less than half of this because display technology has taken years to catch up. If I understand you correctly, Log is a trade-off: less saturation for more highlight detail. But now that we’re able to see as many as five stops more dynamic range, isn’t it worth sacrificing a bit of color information?

BTW, I’m one of those who did terribly in math! 

 

 


20 hours ago, no_connection said:

I kinda like linear for certain operations though. White balance, for one, since it can't be done right in any other way.

That supports my findings that standard profiles maintain the most color information (white balance solves for the gains that make a neutral object come out with R = G = B; the more accurate RGB values you have, the better) while LOG trades color data for essentially gray-scale DR.

9 hours ago, jonpais said:

But now that we’re able to see as many as five stops more dynamic range isn’t it worth sacrificing a bit of color information?

Many people confuse our brain's ability to composite 20 stops of DR (from multiple views) with the 5 stops of DR we can discern in a single view where our pupils remain the same size. When one is outdoors, say, they don't see the bright color of a beach ball and the clouds in the sky at the same time; the brain creates that image from when you look at the sky with one pupil size, then the ball with another.  Even so, we have limitations, which is why HDR photography can quickly look fake.  We expect a certain amount of detail-less brightness.  Indeed, if you look at shows like Suits or Nashville, they use a blown-out background style on purpose.  No one there is saying, "we need more detail in the clouds outside the window".

My monitor can go brighter.  For more DR that is what you do.  Again, brightness is not encoded in your video, it is assumed you'll set your display's physical brightness range to match.  You want 8 stops of DR, if your monitor can do it, just up the brightness!  I don't, because IT IS UNCOMFORTABLE.  My eyes can't tolerate it.  To anyone who says they want more DR I say, take out your spot-meter, set your display to 7 stops of brightness and determine what that REALLY means to you.  Don't just read marketing crap.  See for yourself :) (I'm talking to "you" as in anyone)

As I mentioned in one of those videos, by the time we watch anything on Netflix we're down to 5% of the original image data captured by the camera.  People could go out and buy higher bit-depth Blu-ray videos but they don't.  Unless you're shooting for a digital projector in a movie theater, video quality is hamstrung by technological limitations in bandwidth, power consumption, etc., NOT software or FIRMWARE or video CODECs.  None of that matters anyway; everyone is happy with current display technology.

For me, the bottom line is simple.  Last-mile tech can't change much.  If you want the best image you need to shoot 36-42 bit color (RAW or ProRes) and have powerful computers to grade and edit it.  After that, everyone must drink through the same fizzy straw ;)

 

 


2 hours ago, maxotics said:

My monitor can go brighter.  For more DR that is what you do.  Again, brightness is not encoded in your video, it is assumed you'll set your display's physical brightness range to match.  You want 8 stops of DR, if your monitor can do it, just up the brightness!  I don't, because IT IS UNCOMFORTABLE.  My eyes can't tolerate it.  To anyone who says they want more DR I say, take out your spot-meter, set your display to 7 stops of brightness and determine what that REALLY means to you.  Don't just read marketing crap.  See for yourself :) (I'm talking to "you" as in anyone)

 

 

I think you are now talking without a full understanding of HLG and HDR10, in which the issue of screen brightness is indeed an important component of the difference between the two. Or maybe you are just being cute and cryptic, so I do not get what you are saying in what seems to be a rant against DR and thus HDR.

Are you denying that shooting in HLG, a quasi-log gamma, gets you more DR on a bright, appropriately-set-for-HLG monitor than shooting REC709? That HDR10 videos display more DR than REC709 videos on, say, 1000-nit screens? Or are you arguing that DR does not matter, or simply that you do not like spectacular (not just specular) highlights?

And, if one's eyes adjust to real-world high DR to fool the brain to think they are seeing high DR they will do that based on what they see on the screen too. That argument against more DR on screens seems specious.

My experience is that DR matters a lot for achieving more realism, next for me is resolution, next is degree of compression, but color bit depth matters little. I have shot using 12bit RAW and see little gain over shooting 8bit. But, boy, I can see big differences between HDR and SDR even at 8 bit. Even if I am deluded, doesn't matter.


4 hours ago, markr041 said:

And, if one's eyes adjust to real-world high DR to fool the brain to think they are seeing high DR they will do that based on what they see on the screen too. That argument against more DR on screens seems specious.

Hi Mark. We move slowly between dark and bright, inside to outside, etc.  On the screen, scenes can jump between inside and out every 3 seconds.  Too much flickering brightness would give one a headache, like neon signs or strobes.

 

4 hours ago, markr041 said:

I have shot using 12bit RAW and see little gain over shooting 8bit

If I can expose well, I can't see much of a difference either.  However, I can improve exposure after-the-fact better in RAW.  Also, there is something about video compression, especially 420, that adds a lot of unnatural noise (lack of chroma) to the image.  RAW provides a natural grain-like look to the footage, and allows me to set the amount of motion compression, etc.  I have a C100 now which gives me a good enough image that I don't feel the need to shoot RAW.  The A6300's downsampled 4K is also excellent.  In short, I agree: with 4K pretty standard, many of the problems of 8-bit have been eliminated.

 

4 hours ago, markr041 said:

But, boy, I can see big differences between HDR and SDR even at 8 bit. Even if I am deluded, doesn't matter.

 I would be the last to accuse you of delusion :)  The last time I was in BestBuy I couldn't see a good demo.  Maybe I'll try MicroCenter.  I'm glad to hear you do see an improvement.  Something to look forward to! I never want anyone to think I'm against any technology.  FAR FROM IT!  Quite the opposite.  That new 8K Dell monitor?  I want it ;) 


45 minutes ago, maxotics said:

Hi Mark. We move slowly between dark and bright, inside to outside, etc.  On the screen, scenes can jump between inside and out every 3 seconds.  Too much flickering brightness would give one a headache, like neon signs or strobes.

 

If I can expose well, I can't see much of a difference either.  However, I can improve exposure after-the-fact better in RAW.  Also, there is something about video compression, especially 420, that adds a lot of unnatural noise (lack of chroma) to the image.  RAW provides a natural grain-like look to the footage, and allows me to set the amount of motion compression, etc.  I have a C100 now which gives me a good enough image that I don't feel the need to shoot RAW.  The A6300's downsampled 4K is also excellent.  In short, I agree: with 4K pretty standard, many of the problems of 8-bit have been eliminated.

 

 I would be the last to accuse you of delusion :)  The last time I was in BestBuy I couldn't see a good demo.  Maybe I'll try MicroCenter.  I'm glad to hear you do see an improvement.  Something to look forward to! I never want anyone to think I'm against any technology.  FAR FROM IT!  Quite the opposite.  That new 8K Dell monitor?  I want it ;) 

BestBuy demos are terrible. Before I could watch HDR videos I chose or shot myself on my own viewing devices, I thought HDR was a marketing ploy, based on what I saw at BestBuy.

If you can adjust the TV picture mode away from Retail Vivid to Normal or Cinema and play an HDR video using the TV's YouTube app maybe you will "get the picture". You also need to view a good video. Maybe people here can suggest some examples.


11 hours ago, maxotics said:

Hi Mark. We move slowly between dark and bright, inside to outside, etc.  On the screen, scenes can jump between inside and out every 3 seconds.  Too much flickering brightness would give one a headache, like neon signs or strobes.

 

If I can expose well, I can't see much of a difference either.  However, I can improve exposure after-the-fact better in RAW.  Also, there is something about video compression, especially 420, that adds a lot of unnatural noise (lack of chroma) to the image.  RAW provides a natural grain-like look to the footage, and allows me to set the amount of motion compression, etc.  I have a C100 now which gives me a good enough image that I don't feel the need to shoot RAW.  The A6300's downsampled 4K is also excellent.  In short, I agree: with 4K pretty standard, many of the problems of 8-bit have been eliminated.

 

 I would be the last to accuse you of delusion :)  The last time I was in BestBuy I couldn't see a good demo.  Maybe I'll try MicroCenter.  I'm glad to hear you do see an improvement.  Something to look forward to! I never want anyone to think I'm against any technology.  FAR FROM IT!  Quite the opposite.  That new 8K Dell monitor?  I want it ;) 

I agree with you in theory. With dSLRs, at least, I always found ProLost Flat to look much better than Technicolor. And I still find S-Log 2, for instance, too flat for 8 bit codecs (I also find it ugly on the F5 while shooting 10 bit, to be fair, though the Kodak emulation LUT on the F5 is quite nice). But my experience beyond that doesn't square with your hypothesis.

Since you seem to know much more about this than me, I have a few questions before I respond in more detail:

How is each f stop distributed in RAW, in rec709, and in log, respectively, in terms of how many bits are afforded to each f stop? I've seen linear output from dSLRs and to my eye it looked like 90% of the information was in the highlights, with the shadows being almost black. I believe, straight from the camera, each f stop has twice the information of the next darkest f stop in RAW. Whereas the JPEG (or rec709 video equivalent) conversion looks normal for the screen. I'm not sure how many bits each stop has of data on average in that case. True log looks super flat, and I'm assuming each stop is given an equal amount of data? 
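To put rough numbers on the RAW case as I understand it (hypothetical 14-bit linear sensor, ignoring noise; the notes on 709 and log are just my reading, not any camera's published spec):

```python
# Linear RAW: each stop down from clipping gets half the remaining code values,
# so the top stop alone holds half of all the data (the "90% in the highlights"
# look I see in linear output).
codes = 2 ** 14                              # hypothetical 14-bit sensor
for stop in range(6):
    upper, lower = codes / 2 ** stop, codes / 2 ** (stop + 1)
    print(f"stop {stop} below clip: {int(upper - lower)} codes")

# A log curve, by design, hands each stop a roughly equal share instead
# (about 256 / 14 = 18 codes per stop in 8-bit if it covers 14 stops).
# A rec709-style power curve sits in between: still weighted toward the
# brighter stops, but far less extremely than linear.
```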

Where did you get the figure of 5 stops of true dynamic range for the eye? Maybe our eyes work differently, but I can see into the shadows even with a bright light source or clouds nearby, and a figure of 15 stops, or more, seems much more likely to me. My pupils aren't constantly oscillating, either. Even if the five stop figure is true scientifically, it isn't experientially. 


Like it or not, the UHD Alliance already has over fifty members, including Amazon, Asus, Dell, Adobe, dts, Intel, HP, LG, Nvidia, Netflix, Oppo, Panasonic, Paramount, Philips, Samsung, Sharp, Sony, Technicolor, Toshiba, Fox, Universal, and 20th Century Fox, so HDR is here to stay. Arri, one of the most prestigious names in the industry, is leading the way to HDR as well. YouTube, Apple iTunes, Amazon Prime, Vudu and Netflix all stream HDR content.

HDR is not all about extreme contrast and oversaturated color: it is capable of extraordinary subtlety as well. It is not at all necessary to shoot 42-bit RAW in order to see the benefits of HDR. Practically any camera that shoots a log profile is already shooting HDR.

Regardless of what percentage of pixels is lost in streaming, there is no denying the vast improvement in picture quality when viewing HDR content on video sharing platforms like YouTube. Several tablets and smartphones can also display HDR content. The only hurdle that remains is affordable HDR monitors, and I believe that we will see some in the next couple of years. It is not only filmmakers, but also gamers, who are behind the push for better displays.

HDR does not require an extraordinary increase in storage capacity or processing power either - 10% more at most. Ken Ross, Mark and I have all uploaded HDR content to YouTube. Practically all NLEs now support HDR. Arri, in one of their papers on grading for HDR delivery, even warns that clients, having seen the HDR version, will not be satisfied with the SDR!

And I can practically guarantee you, @maxotics, that if I were to give you a 65” OLED and a subscription to Netflix, that after watching one season of Chef’s Table in Dolby Vision, you would never give it back.


Surely there are far better technical experts here than me but from my basic knowledge shooting log/raw, it isn't only about getting maximum DR.. it's also about getting a wider color/contrast latitude for gradability purposes rather than baked in profiles. if you're trying to match various footage/cameras or have a heavy grade/look it's for sure the way to go imo.

raw has the extra advantage of giving you full white balance adjustment & various options that get unlocked in davinci and of course gives you a true uncompressed image which can amount to greater detail. it's basically the same story as shooting raw vs jpeg with stills.

that being said, if you don't know what you're doing, or don't have much post production time/skills.. shooting log can amount to ruined footage or post headaches.. and of course low bit codecs may limit your grading latitude and you may just be better off shooting baked picture profiles.. i hardly ever shot c-log on my C100.. WideDR and custom profiles were usually enough. and my XT2 has such gorgeous IQ that I usually just shoot SOOC with the film simulations and various custom profiles. but on the Ursa Mini Pro we rented last weekend for narrative work it was ProRes Film log / RAW shooting all day.. 

now i don't know much about shooting HDR/HLG but from what i understand the point is to get max DR and color wow/pop factor while having a very simple workflow not requiring much if any grading, allowing for fast turnovers. great for broadcast/events etc. not really interesting for narrative work etc where you want a specific look and/or full control over your image. of course only maybe 0.2% of the world population is HDR equipped to even view the benefits..lol


5 hours ago, HockeyFan12 said:

How is each f stop distributed in RAW, in rec709, and in log, respectively, in terms of how many bits are afforded to each f stop

My latest video covered this subject.  The short answer is that the sensor is helpless.  When making video, the manufacturer will try to maximize visual information in its 8-bit space.  I don't believe 10-bit really exists at the consumer level because camera chips, AFAIK, are 8-bit.  I am NOT AN EXPERT, in that I don't work for a camera company, never have.  I don't have an engineering degree.  I'm completely self-taught.  My conclusions are based on personal experiments where I have to build a lot of the tests.  I'm happy to help anyone find an excuse to ignore me ;)

The question you want to ask yourself is how much does the manufacturer know that it isn't telling you, and that no one else has an economic incentive to dig into.  This isn't conspiracy stuff, this is business as usual.  To see this on a trillion-dollar scale, watch the latest Netflix documentary on VW, "Dirty Money".

Anyway, Sony and Panasonic know exactly how much color information is lost for every LOG "pulling" of higher DR data and how noisy that data is.  Since they sell many cameras based on users believing they're getting "professional" tools, why should they point out their weaknesses?  Same for the cottage industry of color profiles.  Why don't those people do tests to tell you how much color information you lose in LOG?  There is no economic incentive here.  I'm only interested because I'm just one of those hyper-curious people.

3 hours ago, jonpais said:

Practically any camera that shoots a log profile is already shooting HDR.

A case in point is Jon's comment.  No offense Jon, but think about what this implies.  It implies that 8-BIT log can capture a fully saturated HDR image.  I don't believe it can, compared to a RAW-based source.  Again, the reason is that LOG in 8-bit must sacrifice accurate color data (midtones) for inaccurate color data (extended DR).  This is gobsmackingly obvious when one looks at the higher ISOs that LOG shoots at.

If you're going for the lowest noise image, will you end up getting a better image shooting HDR LOG at 1600 or rec.709 at 400 ISO? 

Again, everyone is constrained by current bandwidth and 8-bit processors.  All the companies Jon mentioned want to sell TVs, cameras, etc.  They need something new.  The question is, are they going to deliver real benefits or will they, like VW, just plain lie about what's really happening when you take your equipment out ;)  If you don't know, VW's diesel TDI was supposed to be very clean.  It was, when on a dyno.  As soon as one turned the steering wheel, the defeat device switched off the emission controls and the car spewed out 40x more pollutants than the driver thought when they bought it.

 

