The Aesthetic


kye

5 hours ago, kaylee said:

...

• 5d mark 2 color was amazing, less realistic, more green and magenta (im not a scientist lol dont ask me), vs...

• 5d 3 color which was amazing, warmer and prolly more accurate, but then

• 5d4 came out and their color went to shit. 'flat' skin tones? yes, it looks like the men are wearing foundation

but mind you, im ONLY TALKING ABOUT SKIN TONES, and crazy as it may sound, for my artistic purposes ~im happy to grade the heck out of everything else in the image~

neon grass/etc was common back in the day for canon, but id gladly wrangle the rest of the image for skintones which show blood and skin translucency better. its a huge way that we show emotion~! *no one actually cares what color a tree is* – you can make it teal. but if a persons face is teal they look sick (not sick cool but sick ill lol)

i will power window the entire shot around the skintones. is that a practical workflow? no, its far from ideal, its time consuming. and ofc im speaking about creative 'artistic' filmmaking, not a product shoot where you need color accuracy and immediate turnaround. clearly. i get all that

im just saying that there IS a difference in canon color from the 5d4 on, and i dont think that its an improvement. idk if its better at shooting a color chart, i respect the need, i just personally dont care...

so am i imagining all this? maybe its a dream

I had nearly the same reaction, concerning skin tones, with the newer Canon T2i and Fuji X-T3 compared to my older Fuji S1 Pro.

In every other way the new cameras blow it away, but of all the cameras I've used the S1 Pro still has the most amazing skin tones. I still look at 8x10s framed on the wall and marvel at the color in my kid's portraits, shot at only 3MP resolution yet looking perfectly sharp.

The S1 Pro always had a slight green cast to the image, which is also common in Arri cameras. But simply adding green in the X-T3 does not match it. Some people say it is due to the unique CCD chip that Fuji developed; I have no idea.

I would rate the Canon and X-T3 skin tones as excellent, and the S1 Pro as excellent-plus.

https://www.dpreview.com/products/fujifilm/slrs/fuji_s1/specifications

 

 


6 hours ago, TomTheDP said:

True however ARRI had no choice but to make the LF 4.5k and the LF 6k. They use the same sensor in every camera and that was the only way to do that.

Their new S35 camera is a brand new sensor and they chose to go 4k over 6k or 8k. That to me is a statement and I don't think anyone will question it. Of course they might now put out a full frame 6k and large format 8k camera. That will be interesting to see.

The Alexa65 is a large-format (65mm) camera that uses the ALEV 3 A3X sensor, while the Alexa LF has an FF-size sensor using the A2X version.

So they are actually different sensors, with different lens systems that take advantage of the extra resolution and resolving power.

The new Alexa 4K S35 camera needs to keep the key attributes of the current industry-standard model, which means large photosites, high dynamic range, low rolling shutter, etc. All these leading factors might contribute to the choice of a 4K sensor over 6/8K. So it's not so much a resolution statement imo but rather a focus on maintaining their in-class leadership position in sensor image specs.

Let's also not forget ARRI's main clients make films for theatrical projection, the majority of which are still projected in 2K.

Alexa65 was also developed with IMAX & 3D in mind which use up to dual 4K laser projectors.

Getting back to Netflix and other streaming services, their resolution requirements are also there to ensure future-proofing of their content displayed on TV/computer screens.

Different mediums, different requirements, different sensor resolutions. 


Has Arri confirmed it's 4K or "4K+"?

I remember the EVA1 being 5.6K and the Alexa 2.8K because those work out to around 4K and 2K after debayering. So it could be more.
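To put rough numbers on that, here is a hypothetical Python sketch of the common rule of thumb that a Bayer sensor resolves on the order of 70% of its photosite count in luma after demosaicing. The 0.7 factor is an assumption (it varies with the OLPF and the debayer algorithm), not a manufacturer spec:

```python
# Rule-of-thumb estimate of delivered luma resolution after debayering.
# DEBAYER_FACTOR is an assumed figure, not a published spec.
DEBAYER_FACTOR = 0.7

def effective_resolution(photosite_width_k: float) -> float:
    """Approximate resolved luma width (in K) for a Bayer sensor."""
    return photosite_width_k * DEBAYER_FACTOR

# The two examples from the post: Alexa (2.8K) and EVA1 (5.6K) sensors.
for sensor_k in (2.8, 5.6):
    print(f"{sensor_k}K photosites -> ~{effective_resolution(sensor_k):.1f}K resolved")
```

On this assumption a 2.8K sensor lands near 2K and a 5.6K sensor near 4K, consistent with the sensor choices mentioned above.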

I used to hate on Netflix and their 4K requirement, but I had the chance to work with some footage that I then saw projected in 2K in a theater, and the truth is I'm pretty sure you need more resolution for YouTube (maybe because of compression, maybe because you're so close to the screen) than you do for theatrical films (or TV).

So I guess I get the Netflix requirement.

But yeah, I think bigger photosites mean more highlight dynamic range. So Arri will need some new tricks up its sleeve to match the OG Alexa's highlight detail – but I am sure they will.

The Alexa also has a "soft" feel to it. Rumor is from the OLPF. Will be interesting to see if they maintain that.

I always felt my t2i had the best colors, the C100 was a close second, Alexa third.


1 hour ago, HockeyFan12 said:

I always felt my t2i had the best colors, the C100 was a close second, Alexa third.

Yeah, a C100 is about as good as it gets for color and sharpness in 1080p. It just proves that a codec that looks like crap on paper tells you little about the actual output.


2 hours ago, Django said:

The Alexa65 is a large-format (65mm) camera that uses the ALEV 3 A3X sensor, while the Alexa LF has an FF-size sensor using the A2X version.

So they are actually different sensors, with different lens systems that take advantage of the extra resolution and resolving power.

The new Alexa 4K S35 camera needs to keep the key attributes of the current industry-standard model, which means large photosites, high dynamic range, low rolling shutter, etc. All these leading factors might contribute to the choice of a 4K sensor over 6/8K. So it's not so much a resolution statement imo but rather a focus on maintaining their in-class leadership position in sensor image specs.

Let's also not forget ARRI's main clients make films for theatrical projection, the majority of which are still projected in 2K.

Alexa65 was also developed with IMAX & 3D in mind which use up to dual 4K laser projectors.

Getting back to Netflix and other streaming services, their resolution requirements are also there to ensure future-proofing of their content displayed on TV/computer screens.

Different mediums, different requirements, different sensor resolutions. 

Almost all theater stuff ends up on streaming sooner rather than later these days, sometimes immediately.

They are technically different sensors but it is exactly the same tech, just bigger. Aside from the size of the noise, which obviously decreases as the sensor becomes larger, the images are identical. Every other company uses completely different sensors in their S35 vs full-frame cameras, which is incredibly apparent when shooting them side by side.

Maybe it's not an intentional statement, but anything ARRI does is a statement in the industry by default imo.

Most content is going to end up in the trash, so the idea of future-proofing is extremely silly to me. Content is more disposable than ever. Consumerism is driving most things rather than necessity. For all the hype over 4K, I have never even noticed a difference between IMAX and standard theater projections. But my opinion doesn't matter, and if I was shooting for Netflix or any major studio I would be using an LF, not my Classic.

 

1 hour ago, HockeyFan12 said:

Has Arri confirmed it's 4K or "4K+"?

I remember the EVA1 being 5.6K and the Alexa 2.8K because those work out to around 4K and 2K after debayering. So it could be more.

I used to hate on Netflix and their 4K requirement, but I had the chance to work with some footage that I then saw projected in 2K in a theater, and the truth is I'm pretty sure you need more resolution for YouTube (maybe because of compression, maybe because you're so close to the screen) than you do for theatrical films (or TV).

So I guess I get the Netflix requirement.

But yeah, I think bigger photosites mean more highlight dynamic range. So Arri will need some new tricks up its sleeve to match the OG Alexa's highlight detail – but I am sure they will.

The Alexa also has a "soft" feel to it. Rumor is from the OLPF. Will be interesting to see if they maintain that.

I always felt my t2i had the best colors, the C100 was a close second, Alexa third.

Confirmed to be 4K.

About YouTube, it's more of a bitrate issue than a resolution one from what I have noticed. 2K videos uploaded in 4K look much better than 2K videos uploaded in HD.

It's always been an annoyance for me that the original capture looks much better than what you see after compression, or on the shitty screen you are viewing it on. Internet compression hates saturation, noise, and shadows.
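That bitrate-over-resolution point can be put into rough numbers. The stream bitrates in this Python sketch are assumed, illustrative figures (YouTube does not publish its exact encoding ladder); only the ratio between them matters:

```python
# Back-of-envelope: bits available per SOURCE pixel per frame for the
# same 2K master, depending on which resolution tier serves it.
# Both bitrates below are assumed, illustrative figures.
SOURCE_PIXELS = 1920 * 1080   # the master is 2K either way
FPS = 25

TIERS = {
    "served as 1080p": 4_000_000,    # ~4 Mbps (assumed)
    "served as 4K":    18_000_000,   # ~18 Mbps (assumed)
}

def bits_per_source_pixel(bitrate: int) -> float:
    """Bits the encoder can spend per pixel of the original 2K master."""
    return bitrate / (SOURCE_PIXELS * FPS)

for label, bitrate in TIERS.items():
    print(f"{label}: {bits_per_source_pixel(bitrate):.3f} bits/pixel/frame")
```

On these assumed figures the 4K tier gives the same 2K content about 4.5x the bits, which is consistent with 2K footage uploaded in a 4K container surviving compression visibly better.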

 


I believe the point OP is trying to make is: "A part of the people who are shooting video have specific ideas about what a good image is but I think they are wrong. These people often have their origin in photography more so than cinematography, which would explain their preference for specific visual attributes. Cinematographers however have very different criteria to judge an image and should not take their cues from these people."

I do think that photography and cinema do each have their own language. Being a good photographer doesn't make you a good cinematographer or vice versa. An image which works as a photo might not work as part of a narrative sequence and a great scene from a movie might very well fall flat as a still. However I think this distinction has nothing to do with a particular aesthetic. A good photographer may just as well "dirty-up" the image as part of his work. The significant distinction is intent. Professional photographers and cinematographers first think about what they want to achieve with their images and then use anything in their toolbox to achieve that, be it softening, sharpening, fish-eye distortion, rectilinear (distortion), vintage aberrations etc. The not so professional doesn't think it through that much and uses what he has, or simply uses what he saw others using because it worked really well or looked cool without thinking about how appropriate it is for what he is trying to do.

The starting point should be intent: why do I shoot this image? Everything else should follow from that.

And then there is the distinction between those who want to lock a look in camera (so it becomes harder to mess with your intent during post-production) and those who prefer to capture it all as neutral and pristine as possible to allow for maximum flexibility in post (so you can change your intent I guess?).

 


1 hour ago, Michael S said:

I believe the point OP is trying to make is: "A part of the people who are shooting video have specific ideas about what a good image is but I think they are wrong. These people often have their origin in photography more so than cinematography, which would explain their preference for specific visual attributes. Cinematographers however have very different criteria to judge an image and should not take their cues from these people."

I do think that photography and cinema do each have their own language. Being a good photographer doesn't make you a good cinematographer or vice versa. An image which works as a photo might not work as part of a narrative sequence and a great scene from a movie might very well fall flat as a still. However I think this distinction has nothing to do with a particular aesthetic. A good photographer may just as well "dirty-up" the image as part of his work. The significant distinction is intent. Professional photographers and cinematographers first think about what they want to achieve with their images and then use anything in their toolbox to achieve that, be it softening, sharpening, fish-eye distortion, rectilinear (distortion), vintage aberrations etc. The not so professional doesn't think it through that much and uses what he has, or simply uses what he saw others using because it worked really well or looked cool without thinking about how appropriate it is for what he is trying to do.

The starting point should be intent: why do I shoot this image? Everything else should follow from that.

And then there is the distinction between those who want to lock a look in camera (so it becomes harder to mess with your intent during post-production) and those who prefer to capture it all as neutral and pristine as possible to allow for maximum flexibility in post (so you can change your intent I guess?).

 

Yeah, I think a lot gets lost in translation on the web. There's a lot of brand loyalty and gear loyalty, but that ignores how subjective creative choices can be. For instance, a lot of the great cinematographers were using bleach-bypass processing or diffusion or even weirder stuff – like Janusz Kaminski on Saving Private Ryan – whereas Deakins was and is looking more for technical perfection. It's impossible to say which lens or camera is best without knowing what style you're after.

Then there are workflow issues too.

Fincher has lost me over time with his faux-vintage stuff. I liked Zodiac except for the CG anamorphic lens flares. The faux-anamorphic look in Mindhunter was not for me, but it looked unique. Mank isn't a look I'm really into. But it won an Oscar.

On the other hand, I thought The Lighthouse was totally deserving even though others might feel the opposite.

So I guess taste is subjective too. 😕 

I think, online in particular, the more specific your question, the better the answers you'll get. 


8 hours ago, HockeyFan12 said:

I used to hate on Netflix and their 4K requirement, but I had the chance to work with some footage that I then saw projected in 2K in a theater and the truth is I'm pretty sure you need more resolution for YouTube (maybe because of compression, maybe because you're so close to the screen) than you do for theatrical films (or tv)

So I guess I get the Netflix requirement.

As @TomTheDP said, it's about the bitrate, not the resolution.

Here's a test I did to see what is visible once it's gone through the streaming compression.  

The structure of the test was that I took 8K RAW footage (RED Helium IIRC), exported it as ProRes HQ at different resolutions, put them all onto a 4K timeline, and uploaded to YT. This simulates shooting at different resolutions but uploading them all at 4K with an identical bitrate, which takes the differing streaming bitrates out of the equation.

I really can't tell any meaningful difference until around 2K when watching it on YT.  The file I uploaded to YT fares slightly better, but even that was compressed, and a 4K RAW export would likely be more resolving still.
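A test like this can be scripted. The sketch below is a hypothetical reconstruction, not the actual pipeline used: it assumes ffmpeg is on the PATH and uses a placeholder master filename. Each simulated "shooting" resolution is produced by downscaling the master and scaling back up to UHD, so every variant shares the same delivery resolution (and therefore the same streaming bitrate after upload):

```python
# Hypothetical reconstruction of the multi-resolution test using ffmpeg.
# MASTER is a placeholder filename; ffmpeg must be installed to run it.
MASTER = "master_8k.mov"
VARIANT_WIDTHS = [1280, 1920, 2560, 3840, 5760]  # simulated capture widths

def make_cmd(width: int) -> list[str]:
    """Build an ffmpeg command that round-trips the master through `width`
    and re-encodes to ProRes HQ at UHD."""
    return [
        "ffmpeg", "-i", MASTER,
        # downscale to the simulated capture width, then back up to UHD
        "-vf", f"scale={width}:-2:flags=lanczos,scale=3840:-2:flags=lanczos",
        "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes HQ
        f"test_{width}.mov",
    ]

# Print the commands; pipe into a shell (or use subprocess.run) to execute.
for w in VARIANT_WIDTHS:
    print(" ".join(make_cmd(w)))
```

`prores_ks` profile 3 is ProRes HQ in ffmpeg, and `scale=...:-2` keeps the height even while preserving the aspect ratio.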

 

8 hours ago, Video Hummus said:

My point is why do you fucking care what resolution the camera shoots in.

Scenario 1: The resolution wars

  • Manufacturers increase the resolution of their cameras to entice people to buy them
  • File sizes go through the roof, requiring thousands of dollars of equipment upgrades to store and process the footage
  • Features like IBIS, connectivity, colour science, reliability, take a back seat
  • Shooting is somewhat frustrating, the post process more demanding, upload times gruelling
  • We watch everything through streaming services where the compression obliterates the effects of the extra resolution (see the above test) and things like the colour are on full display

Scenario 2: The quality wars

  • Manufacturers increase the quality of their cameras by going all-in on features like great-performing IBIS, full-connectivity, almost unimaginably good colour science, and solid reliability to entice people to buy them
  • File sizes stay manageable without requiring expensive upgrades
  • Shooting is easy and the equipment convenient and pleasant to use, the post process is smooth, upload times are reasonable
  • We watch everything through streaming services where the resolution is appropriate to the compression and things like the colour are on full display, creating a wonderful image regardless of how the images are viewed

The first is the business model of the mid-tier cameras - huge resolutions and basic flaws.  

The second is the business model of the high-end companies like ARRI and RED.  The OG Alexa and Amira are still in demand because the second scenario is still in demand.  Shoot with an Amira and while the images aren't super-big-resolution, the equipment is reliable with great features and the quality of the resolution you do have is wonderful.

7 hours ago, TomTheDP said:

Most content is going to end up in the trash, so the idea of future-proofing is extremely silly to me. Content is more disposable than ever. Consumerism is driving most things rather than necessity. For all the hype over 4K, I have never even noticed a difference between IMAX and standard theater projections. But my opinion doesn't matter, and if I was shooting for Netflix or any major studio I would be using an LF, not my Classic.

When I went 4K I was talking about future proofing my images, but I eventually worked out that the logic is faulty.  I shoot my own family, so I could make the argument that my content is actually far less disposable than commercial projects but resolution doesn't matter in that context either, and even if it did the difference between 2K and 8K will be insignificant when compared with the 500K VR multi-spectral holographic freaking-who-knows-what they'll have at that point.  

I regularly watch content on streaming services that was shot in SD 4:3 - oh the horror!  Oh, but hang on, you hit play and the image quality instantly reveals itself, sometimes even showing glitches from the tape it was digitised from, but then someone starts talking and instantly you're thinking about what they're saying and what it means, and the 0.7K resolution and 4:3 aspect ratio disappears.

5 hours ago, Michael S said:

I believe the point OP is trying to make is: "A part of the people who are shooting video have specific ideas about what a good image is but I think they are wrong. These people often have their origin in photography more so than cinematography, which would explain their preference for specific visual attributes. Cinematographers however have very different criteria to judge an image and should not take their cues from these people."

I do think that photography and cinema do each have their own language. Being a good photographer doesn't make you a good cinematographer or vice versa. An image which works as a photo might not work as part of a narrative sequence and a great scene from a movie might very well fall flat as a still. However I think this distinction has nothing to do with a particular aesthetic. A good photographer may just as well "dirty-up" the image as part of his work. The significant distinction is intent. Professional photographers and cinematographers first think about what they want to achieve with their images and then use anything in their toolbox to achieve that, be it softening, sharpening, fish-eye distortion, rectilinear (distortion), vintage aberrations etc. The not so professional doesn't think it through that much and uses what he has, or simply uses what he saw others using because it worked really well or looked cool without thinking about how appropriate it is for what he is trying to do.

The starting point should be intent: why do I shoot this image? Everything else should follow from that.

And then there is the distinction between those who want to lock a look in camera (so it becomes harder to mess with your intent during post-production) and those who prefer to capture it all as neutral and pristine as possible to allow for maximum flexibility in post (so you can change your intent I guess?).

Well said.

It's about the outcome and understanding the end goal.  I was hoping that by posting a bunch of real images it would put things into perspective and people would understand that resolutions above 4K (and maybe above 2K) really aren't adding much to the image, but come at the expense of improving other things that matter more.


10 hours ago, Michael S said:

And then there is the distinction between those who want to lock a look in camera (so it becomes harder to mess with your intent during post-production) and those who prefer to capture it all as neutral and pristine as possible to allow for maximum flexibility in post (so you can change your intent I guess?).

Hello 🤗

That’s my tribe.

I shoot specifically for SOOC results for a couple of reasons:

A: If it looks right, it is right, and even though I only shoot for paying clients, my first and only priority is myself in regard to the aesthetic. Arrogant? Far from it… In 10+ years no one has ever questioned my grade, and those that choose to book me do so because they like what I produce.

B: Time in post. The grading side of one of my typical 10 minute productions is about an hour or less.

C. The simple fact that I have never been able to get a better-looking end result using log. At least not consistently. Every year I think that maybe I should try log again and run back-to-back tests, and every time I do, it's the same result and the SOOC wins. SOOC of course is not any camera's default setting but the deliberate choice of: profile tweaks + lens choice + filter choice + as much as I am able to, lighting and direction of light.

In regard to the latter, maybe I’m just shit at grading…🤪


On 2/4/2022 at 2:08 PM, TomTheDP said:

...
Most content is going to end up in the trash, so the idea of future-proofing is extremely silly to me. Content is more disposable than ever. Consumerism is driving most things rather than necessity. For all the hype over 4K, I have never even noticed a difference between IMAX and standard theater projections. But my opinion doesn't matter, and if I was shooting for Netflix or any major studio I would be using an LF, not my Classic.

 

Confirmed to be 4K.

About YouTube, it's more of a bitrate issue than a resolution one from what I have noticed. 2K videos uploaded in 4K look much better than 2K videos uploaded in HD.

It's always been an annoyance for me that the original capture looks much better than what you see after compression, or on the shitty screen you are viewing it on. Internet compression hates saturation, noise, and shadows.

 

Concerning 8K and Apple headsets, forget about future-proofing; that was a poor choice of words.

Basically I like the idea of surrounding my full field of vision with video I shot myself.

It's an old idea; Omnimax theaters were doing it years ago with a huge half-circle dome screen covering 180 degrees of your vision, whereas IMAX is just a big flat screen. Think planetarium dome night-sky shows.

With an 8K camera and an 8K headset, it should be trivially easy to experiment with your own equivalent of Omnimax.

This would be of more interest to the hobbyists here, doing their own projects.

But I should have waited on posting about this; it's off-topic and it's too early yet. The Apple headset won't be here for about a year and the specs are not published.

As far as “VR” that needs signing in on Facebook/Meta (Oculus), or anywhere else, I'll have nothing to do with it.


I don't think the camera plays an important part in the final look of a well-produced movie or series. The camera is a tool and pros work around the limits of their tools. If 90% of the look were the camera, one DP would be as good as another, and set design, dressing, and color grading would all be nearly useless. There is a reason end credits last 5-10 minutes.

Of course all this reverses in the case of the videomaker, where one person is responsible for everything. There the camera plays a much greater role.


Netflix and Amazon like that 4K (and in Amazon's case HDR) goodness. It doesn't matter if it's not needed or if it doesn't add anything to the story; it makes people feel good about buying a 4K TV and having something in 4K HDR to watch. That adds to their subscriber base, keeps profits high and shareholders happy, and gets them to keep churning out content. Here in the US I can get a 65" 4K HDR TV for under $400; 1080p is dead at the retail level. It's crazy how fast that happened too: three or four years ago HDTVs were still a massive part of the mix. Now they're throwaway sets not even worthy of a live display at Best Buy or Walmart, just cheap aisle fillers. Netflix and Amazon know this better than anyone, and the Netflix approved camera list clearly reflects that.

Chris


22 hours ago, kye said:

Scenario 1: The resolution wars

  • Manufacturers increase the resolution of their cameras to entice people to buy them
  • File sizes go through the roof, requiring thousands of dollars of equipment upgrades to store and process the footage
  • Features like IBIS, connectivity, colour science, reliability, take a back seat
  • Shooting is somewhat frustrating, the post process more demanding, upload times gruelling
  • We watch everything through streaming services where the compression obliterates the effects of the extra resolution (see the above test) and things like the colour are on full display

Scenario 2: The quality wars

  • Manufacturers increase the quality of their cameras by going all-in on features like great-performing IBIS, full-connectivity, almost unimaginably good colour science, and solid reliability to entice people to buy them
  • File sizes stay manageable without requiring expensive upgrades
  • Shooting is easy and the equipment convenient and pleasant to use, the post process is smooth, upload times are reasonable
  • We watch everything through streaming services where the resolution is appropriate to the compression and things like the colour are on full display, creating a wonderful image regardless of how the images are viewed

You can have both. 


10 hours ago, kye said:

Where?

In the last 8 years

  1. Streaming resolution has gone up. I remember when YouTube maxed at 720p and they called it HD!
  2. Streaming bitrates have gone up.
  3. Better codecs are being used.
  4. TV manufacturers and streaming services are working on a true "cinema" display standard for viewing.
  5. Storage $/GB have continued to decline.
  6. Camera media is getting larger and more inexpensive (You can get a 512GB CFExpress card from Angelbird that can handle 8K for $179).
  7. Cameras are universally adding 10-bit color in camera.
  8. Some cameras are offering 12-bit internal RAW at fairly reasonable bitrates.
  9. Camera resolutions are going up
  10. Color science is improving in multiple brands
  11. Professional grading and editing software is more available and cheaper than ever
  12. You can learn how to professionally grade your footage for FREE to maximize the 10-bit / 12-bit color coming out of your camera.

On 2/5/2022 at 2:43 AM, MrSMW said:

That’s my tribe.

I shoot specifically for SOOC results for a couple of reasons:

And that is a perfectly valid way to shoot. And so you buy the camera that works best for you and gives you the best SOOC result. As things evolve you switch up your gear to get better results and an easier workflow. This is how technology works. Now you can get cameras that shoot seasoned SOOC colors that look decent in 10-bit. Many of your clients probably own, or soon will own, a 10-bit HDR TV/monitor where the footage will shine and look good.

 

On 2/4/2022 at 10:01 PM, kye said:

It's about the outcome and understanding the end goal.  I was hoping that by posting a bunch of real images that it would put things into perspective and people would understand that resolutions about 4K (and maybe above 2K) really aren't adding much to the image, but come at the expense of improving other things that matter more.

Then take your 5K/6K/8K oversampling camera and put it in a 2K/4K mode with as high a bit depth as you can get. Color your footage however you want.

Hell, if your camera shoots 12-bit+ RAW, then shoot that, conform it to ARRI LOG-C, and grade it just like an ARRI camera, minus a few stops of dynamic range. You can do that now! So until ARRI releases a mirrorless camera with IBIS you won't be happy. I would say the other brands are closing in on that point faster than you think. You just might have to accept that your 2K-4K 10-bit-plus footage will be oversampled from 8K or more.

