Why Do People Still Shoot at 24FPS? It always ruins the footage for me


Recommended Posts

24 frames a second was the fastest speed for us back in high school, when I started with my Minolta Super 8 camera.

My friends and I happily shot silly comedies at 18 frames per second (silent), and we loved it.

If I was doing narrative now, I'd have a hard time deciding between 24 or 30. 

For family and nature these days, 60p looks best to me.


On 1/27/2021 at 4:36 AM, SteveV4D said:

Shoot what you're happy with.  If I listened to every opinion on YouTube or forums, I'd be a confused mess.  I've had suggestions even here to shoot 1080p, and 8K and 4K and 6K, and then there's the thoughts on 8-bit vs 10-bit vs 12-bit, and 60fps and 24fps and 25fps and 30fps.  You can find a YouTube video suggesting each and every one if you look.

It's like upscaling 1080p to 4K: what's the point when you can shoot 4K?  If you want the motion blur of 24p, film it.  If not, film something else.  I'll use motion blur to correct where I couldn't use the right shutter speed.  I won't use it to pretend I'm shooting something I wasn't, unless a client asks me to.

I agree with the sentiment about shooting what you're happy with.

I'd also add the caveat of forming your own opinions by doing real testing in controlled conditions and making sure you're getting the tech right.  

I used to read lots of stuff and was that "confused mess" you mentioned; the way I got out of it was to test things myself and see what the effect actually was.  All technical choices result in an aesthetic of some kind, and film is a creative pursuit, so the goal is a finished product with the intended aesthetic.  If there were no aesthetic component, scripts would be technical manuals or lists of facts about a situation, and there would be no need to show anything on screen other than diagrams; everything else is about the aesthetic.

So when I see someone criticising a technical choice I just see someone who doesn't understand something.  We criticise what we don't understand.  I think film/video is especially prone to that because there is such a depth of knowledge required and it's deceptively simple, so people don't know that they don't know things.

As an example, shooting 1080p and upscaling to 4K has much merit and I do it.  I shoot 1080p because I have a GH5 which shoots 1080p in 200Mbps 10-bit 422 ALL-I in both 24p and 60p.  I like the 'look' of 1080p and I can edit / colour those files on my laptop without having to transcode, so it doesn't cost me rendering time or money to buy a new computer.  I upscale to 4K because YouTube compression is less-worse at 4K than at 1080p.  You might think that it's a case of shooting in a 'worse' codec than YT, but it's not, I'm shooting 200Mbps and 4K YT is something like 10Mbps.  I've done A/B tests and you can tell which is the 4K master and which is the 2K master upscaled to 4K for YT, but you have to pixel peep and you have to know where to look.  
For my purposes shooting 1080p gives me almost as many benefits as shooting 4K with a very significant advantage in post-production.  That will be a different equation for other people, and I don't criticise everyone for shooting 4K, everyone has to make their own judgements.  These judgements also include non-camera stuff and even non-filmmaking things.
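For anyone wanting to try this workflow, here's a minimal sketch of a 1080p-to-4K upscale using ffmpeg driven from Python. The filenames and encoder settings are placeholder assumptions, not kye's actual pipeline:

```python
# Sketch: upscale a 1080p master to 4K before uploading, so YouTube
# assigns its higher-bitrate 4K encoding ladder to the video.
# Assumes ffmpeg is installed if you actually run the command.
import shlex

def build_upscale_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg", "-i", src,
        "-vf", "scale=3840:2160:flags=lanczos",  # Lanczos resize, 1080p -> 2160p
        "-c:v", "libx264", "-crf", "16",         # high-quality delivery encode
        "-c:a", "copy",                          # leave audio untouched
        dst,
    ]

cmd = build_upscale_cmd("master_1080p.mov", "upload_4k.mp4")
print(shlex.join(cmd))
```

The Lanczos flag is one reasonable choice of scaler; bicubic or a dedicated upscaler would work too.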


Not all decisions are based on personal preference; some are a product of the gear you use.  My Pocket cameras are cropped for 1080p recording, making it a less desirable mode to shoot in.  I also prefer shooting in DCI 4K, which uses the full sensor of the Pockets, and the GH5 that I also use is limited to 24p for that mode.  For some reason Panasonic ties DCI 4K to 24p across its PAL, NTSC and cinema frequency settings, which the Pocket doesn't.  The GH5S allows 25p DCI, but doesn't have IBIS.

That said, I'm not sure 24p and 25p yield much difference, in regards to the stutter effect.

There is also the minor difference between 23.98 and true 24p.  Some cameras say 24 fps when they are actually 23.98.  I should do my own tests and see if this can cause stutter if used on a pure 24p timeline.  Or if encoding 24 as a 23.98 finished file can either improve or worsen the situation.

56 minutes ago, SteveV4D said:

Not all decisions are based on personal preference; some are a product of the gear you use.  My Pocket cameras are cropped for 1080p recording, making it a less desirable mode to shoot in.  I also prefer shooting in DCI 4K, which uses the full sensor of the Pockets, and the GH5 that I also use is limited to 24p for that mode.  For some reason Panasonic ties DCI 4K to 24p across its PAL, NTSC and cinema frequency settings, which the Pocket doesn't.  The GH5S allows 25p DCI, but doesn't have IBIS.

I kind of include that in your personal decision-making process, but it's true that you can't decide on features individually, you pick the camera that has the best offering for your particular situation, which obviously includes your budget as well.

One of the things that I really see as fundamentally ridiculous in modern cameras is the absolutely atrocious codecs for the lower resolutions.  Many cameras have 100Mbps 4K but 25Mbps 1080p.  The logic is ridiculous.... let's make a camera that can process 260Mpps (million pixels per second, 4K at 30p), can compress 260Mpps, and can write 100Mbps to the card, then we'll take the 1080p video mode and write to the card at one-quarter of the camera's capacity!  It's not like they're protecting their cine line by crippling the 1080p mode.

I think poor 1080p codecs are one of the main reasons that 1080 has such a bad reputation online, people compare their 4K mode with their 1080p mode and think that's a fair comparison.  
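To put numbers on that complaint, here's the bits-per-pixel arithmetic using the figures from the post above (a rough sketch that ignores codec efficiency differences between resolutions):

```python
# Bits available per pixel per frame for each recording mode.
def bits_per_pixel(mbps: float, width: int, height: int, fps: float) -> float:
    return (mbps * 1_000_000) / (width * height * fps)

uhd = bits_per_pixel(100, 3840, 2160, 30)          # typical 100Mbps UHD mode
hd = bits_per_pixel(25, 1920, 1080, 30)            # typical 25Mbps 1080p mode
hd_possible = bits_per_pixel(100, 1920, 1080, 30)  # 1080p at the writer's full capacity

print(round(uhd, 2), round(hd, 2), round(hd_possible, 2))  # 0.4 0.4 1.61
```

In other words, the 25Mbps 1080p mode gets exactly the same bits per pixel as the 4K mode, rather than the four-times-better allocation the hardware could clearly write.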

Of course, in your case you could shoot 1080p in Prores HQ, which I understand is downsampled from the whole sensor.  You might be a huge fan of RAW, which I understand the benefits of for some applications, and this leads me to another annoyance, and that is why they haven't implemented higher bit-depths.  Prores 4444 or Prores XQ have 12-bit and higher bitrates than HQ, so might be a sensible half-step between HQ and RAW, and it allows all the benefits of a downsampled image.

56 minutes ago, SteveV4D said:

That said, I'm not sure 24p and 25p yield much difference, in regards to the stutter effect.

There is also the minor difference between 23.98 and true 24p.  Some cameras say 24 fps when they are actually 23.98.  I should do my own tests and see if this can cause stutter if used on a pure 24p timeline.  Or if encoding 24 as a 23.98 finished file can either improve or worsen the situation.

If you're putting 24p onto a 23.976 timeline (or vice-versa), the two rates drift apart by 0.024 fps, which works out to a skipped or repeated frame roughly every 42 seconds (one frame per 1001).  However, every time you cut from one clip to another you're effectively resetting the clock, so the drift only accumulates within a single clip on the not-matching timeline, or continuously if your finished master is being broadcast at the wrong framerate.

I don't think that's a real problem in practice.

52 minutes ago, kye said:

Of course, in your case you could shoot 1080p in Prores HQ, which I understand is downsampled from the whole sensor.  You might be a huge fan of RAW, which I understand the benefits of for some applications

BRAW isn't really RAW when you look further into it, and perhaps should be looked at as just another recording codec like ProRes, with more latitude to adjust white balance and exposure with minimal degradation to the image.  The workflow with Resolve makes it a no-brainer, and it's hard to give up the extra flexibility once you get used to it: clawing back those shadows and highlights that would otherwise be lost.  I do miss those RAW controls in Resolve when dealing with H264 and H265 codecs.

52 minutes ago, kye said:

One of the things that I really see as fundamentally ridiculous in modern cameras is the absolutely atrocious codecs for the lower resolutions.  Many cameras have 100Mbps 4K but 25Mbps 1080p.  The logic is ridiculous.... let's make a camera that can process 260Mpps (million pixels per second, 4K at 30p), can compress 260Mpps, and can write 100Mbps to the card, then we'll take the 1080p video mode and write to the card at one-quarter of the camera's capacity!  It's not like they're protecting their cine line by crippling the 1080p mode.

I think poor 1080p codecs are one of the main reasons that 1080 has such a bad reputation online, people compare their 4K mode with their 1080p mode and think that's a fair comparison.  

This is only going to get worse as we move into 8K cameras.  I think it's more a case that camera manufacturers get keyed in on the high resolution and include 1080p more as an afterthought, only adding more functionality when bowing to pressure.  The Canon R5 is only getting 120fps in 1080p in the next firmware, after the 4K option has been around since launch.  As you say, why the delay?  Does 1080p have a bad reputation?  A preference for 4K isn't always a negative on 1080p.  I often deliver in 1080p and even my 4K videos are watched that way.  I just prefer to work with 4K masters, especially as they look better when I play them back on my 4K monitors and TV.  🙂

52 minutes ago, kye said:

If you're putting 24p onto a 23.976 timeline (or vice-versa), the two rates drift apart by 0.024 fps, which works out to a skipped or repeated frame roughly every 42 seconds (one frame per 1001).  However, every time you cut from one clip to another you're effectively resetting the clock, so the drift only accumulates within a single clip on the not-matching timeline, or continuously if your finished master is being broadcast at the wrong framerate.

Okay that makes sense.   What would happen if you're working with 24p, but set the encoding of the finished video to 23.976 (and vice versa), would this cause stuttering?

When I play back 25p footage from my drone on a 24p project setting in Resolve, I often experience stuttering.  Once I change the attributes of the files to 24p, the playback is a lot smoother.  So I wonder, based on that, whether there really are playback issues when mistakes are made with a 24p timeline.

28 minutes ago, SteveV4D said:

When I play back 25p footage from my drone on a 24p project setting in Resolve, I often experience stuttering.  Once I change the attributes of the files to 24p, the playback is a lot smoother.  So I wonder, based on that, whether there really are playback issues when mistakes are made with a 24p timeline.

This, I think, is normal.  If you put 25p on a 24p timeline, Resolve needs to adapt it; depending on your retiming settings in the project it will have some sort of issue (frame blend, frame skip, whatever), but it will play back at the original speed.  If you change the attribute to 24p, you are basically slowing down your video by 4%; for a drone shot it may not be noticeable, but for other things it will be.  Basically it is playing back at 24 instead of 25, so your clip is longer by 4% and slower by 4%.
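The conform arithmetic can be sketched in a few lines (the 60-second clip is just an illustrative number):

```python
# Changing a 25p clip's attribute to 24p keeps every recorded frame
# but shows each one for 1/24 s instead of 1/25 s, stretching the clip.
def retimed_length(duration_s: float, shot_fps: float, played_fps: float) -> float:
    frames = duration_s * shot_fps  # frames actually recorded
    return frames / played_fps      # time to show them all at the new rate

new_len = retimed_length(60.0, 25, 24)    # a 60 s clip shot at 25p
slowdown = (new_len / 60.0 - 1) * 100     # percent longer/slower
print(new_len, round(slowdown, 1))        # 62.5 4.2
```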

13 hours ago, SteveV4D said:

BRAW isn't really RAW when you look further into it, and perhaps should be looked at as just another recording codec like ProRes, with more latitude to adjust white balance and exposure with minimal degradation to the image.  The workflow with Resolve makes it a no-brainer, and it's hard to give up the extra flexibility once you get used to it: clawing back those shadows and highlights that would otherwise be lost.  I do miss those RAW controls in Resolve when dealing with H264 and H265 codecs.

This is only going to get worse as we move into 8K cameras.  I think it's more a case that camera manufacturers get keyed in on the high resolution and include 1080p more as an afterthought, only adding more functionality when bowing to pressure.  The Canon R5 is only getting 120fps in 1080p in the next firmware, after the 4K option has been around since launch.  As you say, why the delay?  Does 1080p have a bad reputation?  A preference for 4K isn't always a negative on 1080p.  I often deliver in 1080p and even my 4K videos are watched that way.  I just prefer to work with 4K masters, especially as they look better when I play them back on my 4K monitors and TV.  🙂

I understand the attraction of codecs like BRAW, but going back to my original point - it's the bit-depth and DR that is the main attraction for them.  The only reason you can WB in post is because of the extra bit-depth, and the extra DR is just that they haven't clipped the DR from the sensor, but both of those can easily be matched by other codecs, Prores 4444 and XQ for example, if implemented correctly.

The downsides of any RAW/semi-RAW format are that you're either getting the full-sensor resolution or you're getting a cropped image.  The full sensor resolution option requires more processing power in post to decode that resolution, then downscale/upscale it to whatever timeline resolution you're running, and only then can you process it at your timeline resolution.  This gives you the benefits of oversampling, but it makes your computer do the work in post, every time you hit play.
The other issue is getting a cropped image.  This has three downsides: you don't retain the FOV of your lens, and you lose the oversampling, which costs you both colour subsampling resolution (a 1:1 sensor readout, whether 3840x2160 or 1920x1080, is only what, 4:2:0 colour?) and the noise reduction effect of downsampling from many pixels.

A proper implementation of Prores 4444 / XQ or even a 12-bit h265 ALL-I file with sufficient bitrate that was downsampled from the whole sensor would side-step all of these issues.  It's why I shoot with the 200Mbps 10-bit 422 ALL-I 1080p mode on the GH5 - it gives me all the things I'm talking about except the 12-bit.  The GH5 was released in 2017.  Things have only gotten worse.
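The noise-reduction side of the oversampling argument is easy to sanity-check with the standard library. This is a toy model of independent Gaussian pixel noise, not a measurement of any real camera: averaging each 2x2 block (an idealised 4K-to-1080p downsample) should cut the noise standard deviation roughly in half, since 1/sqrt(4) = 0.5.

```python
# Toy model: averaging 4 noisy samples (a 2x2 pixel bin) roughly halves
# the standard deviation of independent Gaussian noise.
import random
import statistics

random.seed(1)
noisy = [random.gauss(0, 1) for _ in range(100_000)]                 # "4K" pixel noise
binned = [sum(noisy[i:i + 4]) / 4 for i in range(0, len(noisy), 4)]  # 2x2 averages

print(round(statistics.stdev(noisy), 2), round(statistics.stdev(binned), 2))
```

Real sensor noise isn't perfectly independent per pixel, so the gain in practice is somewhat less than the ideal factor of two.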

I predict that in 3 years two-thirds of the 4K devotees here will be tearing their hair out because the $4000 cameras will be offering 2500Mbps 8K, 80Mbps 4K and 20Mbps 1080p, and everyone will be crying at how much it costs to have a computer that can edit 10-bit 8K h265 files.  Then everyone will make the investment, and a couple of years after that.....   h266.

13 hours ago, SteveV4D said:

Okay that makes sense.   What would happen if you're working with 24p, but set the encoding of the finished video to 23.976 (and vice versa), would this cause stuttering?

Any time you mix those two frame-rates you're going to have the same frame-drift problem.

13 hours ago, SteveV4D said:

When I play back 25p footage from my drone on a 24p project setting in Resolve, I often experience stuttering.  Once I change the attributes of the files to 24p, the playback is a lot smoother.  So I wonder, based on that, whether there really are playback issues when mistakes are made with a 24p timeline.

24/25p is a whole different thing.  23.976 vs 24p is a frame rate difference of 0.024 fps.  24 vs 25 is a 1 fps difference.  I'll let you do the math 🙂

With drones it doesn't really matter anyway - no audio to sync to and I doubt you're doing much stuff where the timing is critical.
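Doing that math: under a simple constant-rate model, the time for two nominal frame rates to slip one whole frame apart is just the reciprocal of their difference.

```python
# Seconds until two constant frame rates drift one whole frame apart.
def seconds_per_frame_slip(rate_a: float, rate_b: float) -> float:
    return 1.0 / abs(rate_a - rate_b)

print(round(seconds_per_frame_slip(24, 24000 / 1001), 1))  # 24p vs 23.976: 41.7 s
print(seconds_per_frame_slip(25, 24))                      # 25p vs 24p: 1.0 s
```

So mismatched 24/23.976 material slips a frame every 42 seconds or so, while 25p on a 24p timeline slips a frame every single second, which is why the latter is immediately visible as stutter.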

On 1/25/2021 at 6:34 PM, herein2020 said:

I know this is probably very controversial, but I ask myself this question every time I see a video shot in the USA at 24FPS instead of 30FPS; why did they do that? 

Because some dead Frenchman of the 19th century and his pals made a bunch of experiments projecting 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 FPS.

Then they arrived at 24 FPS and went "yeah, that looks about right and there's no perceivable difference between 24,25,26,27,28,29 FPS and 24 is cheaper to roll".

That's the legacy.

Like the wheel, why try to change what is already damn near perfect?

2 hours ago, EduPortas said:

Because some dead Frenchman of the 19th century and his pals made a bunch of experiments projecting 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 FPS.

Then they arrived at 24 FPS and went "yeah, that looks about right and there's no perceivable difference between 24,25,26,27,28,29 FPS and 24 is cheaper to roll".

That's the legacy.

Like the wheel, why try to change what is already damn near perfect?

One of the things that 24p gives me is a certain surrealist aesthetic.  What I mean is that 24p isn't quite real, it's more like an impression of reality rather than an accurate representation of reality itself.  Things that make video more realistic like 60fps, rec709 accurate colours, HDR, super high resolution, 3D, etc seem to make it less 'cinematic'.

Of course, this is an aesthetic choice - if you want to make videos that seem very real then those things are great.  Games or POV videos should be more realistic, so those things are benefits in that case.

I shoot travel and events of my family and friends, so my videos are like a vignette of memory, and in alignment with that the aesthetic I want is fuzzy and impressionistic like memory.  I also like the idea of giving the same larger-than-life aesthetic that feature films have when viewed in the cinema.  I find that 24p is one of the things that helps generate that aesthetic.

1 hour ago, kye said:

I shoot travel and events of my family and friends, so my videos are like a vignette of memory, and in alignment with that the aesthetic I want is fuzzy and impressionistic like memory.  I also like the idea of giving the same larger-than-life aesthetic that feature films have when viewed in the cinema.  I find that 24p is one of the things that helps generate that aesthetic.

That's very interesting to me, because I am shooting the same things with the opposite preference. I do like cinema's impressionistic effects, so I understand what you're saying. So you would have little use for 60p, or even 4K. That makes sense. You prefer to make a movie, not a documentary.

There was a science fiction story in which there was a special window glass, called "slow glass". Light would enter it but take many years to pass through before it came out the other side. Characters in the story could see their loved ones alive as they were many years ago, on the other side of the window. As if they are still there, still young.

That's what I like about shooting 60p 4K: the people on the screen are more immediate, more real and alive. That's why I am glad I shot HDV starting in 2006, even though it was 30i, because it had 60 interlaced fields per second. Handbrake will render a special "BOB" deinterlace that builds full frames from those fields, and 30i becomes 60p. We enjoy seeing our kids when they were little years ago, and the more "alive" the better. More like a slow glass window.

Next best thing to time travel!
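For anyone without Handbrake, the same field-to-frame trick can be sketched with ffmpeg's yadif filter in send_field mode, which outputs one frame per interlaced field, so 30i becomes roughly 60p. The filenames and encoder settings are placeholder assumptions, not the poster's actual workflow; this only builds the command:

```python
# A rough ffmpeg equivalent of Handbrake's "bob" deinterlace:
# yadif in send_field mode emits one output frame per field (30i -> ~60p).
import shlex

cmd = [
    "ffmpeg", "-i", "tape_30i.m2t",
    "-vf", "yadif=mode=send_field",  # one frame per field, doubling the rate
    "-c:v", "libx264", "-crf", "18",
    "bobbed_60p.mp4",
]
print(shlex.join(cmd))
```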

51 minutes ago, Jay60p said:

That's very interesting to me, because I am shooting the same things with the opposite preference. I do like cinema's impressionistic effects, so I understand what you're saying. So you would have little use for 60p, or even 4K. That makes sense. You prefer to make a movie, not a documentary.

There was a science fiction story in which there was a special window glass, called "slow glass". Light would enter it but take many years to pass through before it came out the other side. Characters in the story could see their loved ones alive as they were many years ago, on the other side of the window. As if they are still there, still young.

That's what I like about shooting 60p 4K: the people on the screen are more immediate, more real and alive. That's why I am glad I shot HDV starting in 2006, even though it was 30i, because it had 60 interlaced fields per second. Handbrake will render a special "BOB" deinterlace that builds full frames from those fields, and 30i becomes 60p. We enjoy seeing our kids when they were little years ago, and the more "alive" the better. More like a slow glass window.

Next best thing to time travel!

Very interesting and I understand the logic completely.  'Slow glass' is an interesting concept for sure.

How do you edit your videos?  or do you edit at all?

One thing I had to work through for myself was how to edit, in terms of the philosophy of editing.  For example, the real-life experience of travel is of there being yelling and stress before leaving because the kids left their packing to the last minute and then can't find things and didn't put their devices on charge (they're teenagers so we let them make their own mistakes lol), then boredom and awkward conversation in the Uber going to the airport, then stress at the airport followed by boredom waiting for the flight, and then.....  I had to work out if I wanted to put that stuff in there or not.  Putting it in would be more like Slow Glass, but it's not the kind of travel video that anyone wants to watch, and filming it certainly isn't something that would help the situation while it's happening!

I concluded that I would only shoot when it didn't hinder the activity itself, and would only put things in the edit that my wife would accept or that the kids would be ok with when they are in their mid-twenties.  I also realised that editing (and by extension even picking up a camera and hitting record) is the process of sorting things according to some criteria, and the chances of that criteria not being hugely biased are very low, especially with people you know or love.

I do have a secret long-term project of sorts in that I sometimes shoot random test footage in the house for things like low-light performance and stuff, and that often includes little shots of the family watching TV or whatever happens to be happening, and the 'project' is that I don't delete it unless I'm made to do so on the spot.  I won't be pulling that footage out and editing it, but it will be there when I'm old and I suspect that the family will be able to look back and not care that they were in their pyjamas and hadn't combed their hair or whatever.  

6 minutes ago, kye said:

Very interesting and I understand the logic completely.  'Slow glass' is an interesting concept for sure.

How do you edit your videos?  or do you edit at all?

One thing I had to work through for myself was how to edit, in terms of the philosophy of editing.  For example, the real-life experience of travel is of there being yelling and stress before leaving because the kids left their packing to the last minute and then can't find things and didn't put their devices on charge (they're teenagers so we let them make their own mistakes lol), then boredom and awkward conversation in the Uber going to the airport, then stress at the airport followed by boredom waiting for the flight, and then.....  I had to work out if I wanted to put that stuff in there or not.  Putting it in would be more like Slow Glass, but it's not the kind of travel video that anyone wants to watch, and filming it certainly isn't something that would help the situation while it's happening!

I concluded that I would only shoot when it didn't hinder the activity itself, and would only put things in the edit that my wife would accept or that the kids would be ok with when they are in their mid-twenties.  I also realised that editing (and by extension even picking up a camera and hitting record) is the process of sorting things according to some criteria, and the chances of that criteria not being hugely biased are very low, especially with people you know or love.

I do have a secret long-term project of sorts in that I sometimes shoot random test footage in the house for things like low-light performance and stuff, and that often includes little shots of the family watching TV or whatever happens to be happening, and the 'project' is that I don't delete it unless I'm made to do so on the spot.  I won't be pulling that footage out and editing it, but it will be there when I'm old and I suspect that the family will be able to look back and not care that they were in their pyjamas and hadn't combed their hair or whatever.  

I was actually editing what I posted because the text got chopped up, then your reply was announced! Then I realized I had taken too long on my edit and it would not save. Oh well.

Yes, I edit out the nasty or unpleasant stuff. I love the look of narrative camera work and do try to get the shots to play cinematically.

I spend a lot of time on the editing, which is why I now have a big backlog of footage to do.  I also often do camera test shots, which I often end up keeping because of who is in them.

For many years I could shoot my family with no one paying any attention to the camera, but now they are teenagers who turn to the camera and say "why are you filming?"

Worse than that, teenagers are boring, always sitting on a couch watching their phones! There were a lot more interesting activities going on in the house before they got to high school. But then Covid has screwed up the past year as far as any interesting excursions and events go.

All I have of myself as a kid was someone's 8mm shots at my first birthday party that they gave to my parents. The rest is only snapshots. My kids will have hundreds of hours of video, mostly in high definition and stereo sound. Will they enjoy it? Sometimes I think it is mostly for me and my wife. Oh well.

I still love shooting it!
5 hours ago, kye said:

One of the things that 24p gives me is a certain surrealist aesthetic.  What I mean is that 24p isn't quite real, it's more like an impression of reality rather than an accurate representation of reality itself.

One of the things I've been told is that because 24 fps is just over most people's threshold for persistence of vision, the brain has to work a little harder to fill in the gaps than it does at 30, 48, 60fps, etc.  So it helps trigger the imaging/imagination part of your brain to more readily accept what you are seeing.  It sounds a little pseudo-sciencey, but I've noticed that anything from sets to VFX pop out to my eye as fake at higher frame rates.  Even 30 fps has a cheaper looking quality to me.

Personally, for sports or ultra-realism, high frame rates make sense.  And there have been film formats before the digital age that experimented with high frame rates, but they never seem to catch on for narratives.  24fps may have been chosen somewhat arbitrarily and with cost in mind, but I think it has remained a standard for a reason.  

Another advantage to 24 fps is it can be easily halved, quartered, thirded, and more without partial frames.  Kind of like how we have 24 hours in a day or 12 inches to a foot-- only luddites use weird things like 100cm to a meter.  😉
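The divisibility point is easy to verify: 24 splits into whole frames at halves, thirds, quarters, sixths and eighths, whereas 30 can't be quartered. A toy check:

```python
# Compare how cleanly 24 and 30 divide into whole-frame fractions.
def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(24))  # [1, 2, 3, 4, 6, 8, 12, 24]
print(divisors(30))  # [1, 2, 3, 5, 6, 10, 15, 30]
```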

9 hours ago, kye said:

I understand the attraction of codecs like BRAW, but going back to my original point - it's the bit-depth and DR that is the main attraction for them.  The only reason you can WB in post is because of the extra bit-depth, and the extra DR is just that they haven't clipped the DR from the sensor, but both of those can easily be matched by other codecs, Prores 4444 and XQ for example, if implemented correctly.

You're missing the other advantages: the fact that BRAW has been developed to work with both the cameras and the Resolve software.  BRAW opens up another level of controls that allow adjusting the WB and ISO more in tune with the settings on the BM cameras.  BRAW also has multiple compression settings that give you flexibility whilst keeping file sizes smaller.  ProRes 4444 and XQ don't give you those options, and lack the full range that BRAW can give.  Plus, my question is: why shoot with ProRes 4444 when I can shoot with BRAW?  There are limitations, but this may well change....

9 hours ago, kye said:

The downsides of any RAW/semi-RAW format are that you're either getting the full-sensor resolution or you're getting a cropped image.  The full sensor resolution option requires more processing power in post to decode that resolution, then downscale/upscale it to whatever timeline resolution you're running, and only then can you process it at your timeline resolution.  This gives you the benefits of oversampling, but it makes your computer do the work in post, every time you hit play.

You're missing the new URSA 12K, which only shoots in BRAW but offers 8K and 4K at full sensor readout.  Remember, BRAW isn't proper RAW.  As technology develops, this tech will shift to lower-end cameras, and who knows, the next Pocket or BM Micro camera may be 8K and also do 4K BRAW at full sensor.  I should also point out that playing with downloaded 12K BRAW, I got smoother playback than with 4K H265 files, so processing power isn't always about resolution.  I tried editing some AVCHD files I shot back in 2013 and it was an awful experience, and those were 1080p.  But I can play back 12K BRAW smoothly; go figure....

9 hours ago, kye said:

A proper implementation of Prores 4444 / XQ or even a 12-bit h265 ALL-I file with sufficient bitrate that was downsampled from the whole sensor would side-step all of these issues.  It's why I shoot with the 200Mbps 10-bit 422 ALL-I 1080p mode on the GH5 - it gives me all the things I'm talking about except the 12-bit.  The GH5 was released in 2017.  Things have only gotten worse.

I predict that in 3 years two-thirds of the 4K devotees here will be tearing their hair out because the $4000 cameras will be offering 2500Mbps 8K, 80Mbps 4K and 20Mbps 1080p, and everyone will be crying at how much it costs to have a computer that can edit 10-bit 8K h265 files.  Then everyone will make the investment, and a couple of years after that.....   h266.

Let's not predict the issues of the future.  I'm editing more easily now than I did 4 to 6 years ago, despite the increase in resolution.  For me, things have gotten better.  And no camera in the area I am looking at even offers ProRes 4444/XQ, whilst BRAW is already available.  Will a 12-bit H265 file work?  Well, judging by the performance of 10-bit H265 files from the R5, it wouldn't make editing any easier; probably a lot more difficult.  For me that would be a step back, and actually worse than taking on extra resolution, given my experience testing the 12K BRAW files.

I appreciate your view that a lower resolution carries advantages; of course it does.  But those advantages will diminish as time goes on, even if 1080p is still possible to shoot on the cameras of tomorrow.  How many cameras offer 720p shooting?

Utilising the full resolution of a camera in post yields its own advantages too.  Some of my colour grading, where I am selecting and motion-tracking sections of the image, is a lot easier and more reliable with more resolution than less.  Down-sampling 4K in post will be better than letting your camera do it live.  Some cropping is possible with minimal loss to overall image quality, especially if watched in 1080p.  Some of my clients have asked for pretty major crops in my recorded image, which would have been impossible in 1080p, but with 4K can be okay, even if not desirable or perfect.  Plus DCI gives me a slightly wider image on my Pockets and GH5s, which I find useful, especially with an MFT sensor.

If I can handle the extra file sizes and playback is smooth, then I'm not benefitting much from lowering the resolution of my filming.

1 hour ago, Towd said:

One of the things I've been told is that because 24 fps is just over most people's threshold for persistence of vision, the brain has to work a little harder to fill in the gaps than it does at 30, 48, 60fps, etc.  So it helps trigger the imaging/imagination part of your brain to more readily accept what you are seeing.  It sounds a little pseudo-sciencey, but I've noticed that anything from sets to VFX pop out to my eye as fake at higher frame rates.  Even 30 fps has a cheaper looking quality to me.

Personally, for sports or ultra-realism, high frame rates make sense.  And there have been film formats before the digital age that experimented with high frame rates, but they never seem to catch on for narratives.  24fps may have been chosen somewhat arbitrarily and with cost in mind, but I think it has remained a standard for a reason.  

Another advantage to 24 fps is it can be easily halved, quartered, thirded, and more without partial frames.  Kind of like how we have 24 hours in a day or 12 inches to a foot-- only luddites use weird things like 100cm to a meter.  😉

Which is why I prefer to work in inches rather than centimetres, even though I live in a country of luddites.  😉

I agree, a faster FPS is desirable in sport, but like you, I find higher frame rates make CGI look fake.  Watching The Hobbit movies in HFR didn't do much to convince me it was the future, any more than 3D did.  Sure, some audiences did embrace it, and I wonder if the children of today will grow up and make 3D movies at 120fps.  Then again, there is still a taste for film, even though many have grown up with digital.  Gimmicks like 3D and 360-degree recording seem to come and quickly go as the novelty wears off.


On the technical side, there's nothing really wrong with 24fps; the problem comes in the displays.  Most consumer displays aren't natively compatible with 24fps, so they end up doubling some frames and not others, and that's what causes the judder.

120Hz/240Hz displays handle this better, as their refresh rates are even multiples of 24.
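That uneven doubling can be sketched with a little arithmetic (a hypothetical illustration of the cadence only, not how any particular TV actually schedules frames):

```python
# Sketch: how many refresh cycles each 24fps frame gets held for on a
# display with a fixed refresh rate.  On 60Hz the pattern is uneven
# (the classic 3:2 pulldown cadence, which reads as judder); on 120Hz
# every frame is held an equal 5 cycles, so the motion stays even.

def repeat_pattern(fps: int, refresh_hz: int, frames: int = 8):
    """Refresh cycles each of the first `frames` source frames is shown for."""
    counts = []
    shown = 0  # refresh cycles emitted so far
    for i in range(1, frames + 1):
        # frame i must stay on screen until source time i/fps,
        # i.e. until refresh cycle floor(i * refresh_hz / fps)
        target = (i * refresh_hz) // fps
        counts.append(target - shown)
        shown = target
    return counts

print(repeat_pattern(24, 60))   # [2, 3, 2, 3, ...] - uneven cadence, judder
print(repeat_pattern(24, 120))  # [5, 5, 5, 5, ...] - even cadence, smooth
```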

Now on the artistic side: I shoot in 24fps, and I choose to do so because I make narrative content and films.  I prefer the abstraction from reality that shooting at lower frame rates gives; I think it leaves some space for the imagination to play and be drawn into another reality.  But if you want to make a hyper-realistic drama, maybe 60fps will fit the context better.

53 minutes ago, SteveV4D said:

I agree, a faster FPS is desirable in sport, but like you, I find higher frame rates make CGI look fake.  Watching The Hobbit movies in HFR didn't do much to convince me it was the future, any more than 3D did.  Sure, some audiences did embrace it, and I wonder if the children of today will grow up and make 3D movies at 120fps.

Yeah, The Hobbit in HFR was a surprising wake-up call for me about why I like 24fps.  Everything just popped out and looked weird, and I remember a lot of people liking the look.  The newest TVs with their "frame smoothing" feature, which generates in-between frames on 24p content, drive me just as crazy when I visit a friend's house.

I don't think it necessarily means we'll never be watching movies at 120fps in 32k, but it seems we'll need some technical revolution to help blend in all the cheats used in narrative content from sets and fake backgrounds to VFX.  Maybe in the next decade or two, someone will invent some A.I. filter to better blend the fakery into the real stuff before we run our 120fps export.

1 hour ago, Towd said:

Yeah, The Hobbit in HFR was a surprising wake-up call for me about why I like 24fps.  Everything just popped out and looked weird, and I remember a lot of people liking the look.  The newest TVs with their "frame smoothing" feature, which generates in-between frames on 24p content, drive me just as crazy when I visit a friend's house.

I don't think it necessarily means we'll never be watching movies at 120fps in 32k, but it seems we'll need some technical revolution to help blend in all the cheats used in narrative content from sets and fake backgrounds to VFX.  Maybe in the next decade or two, someone will invent some A.I. filter to better blend the fakery into the real stuff before we run our 120fps export.

Exactly.  Kids these days are all about 60fps footage, but it looks shitty to my eyes at least (for gaming, or for slow motion, I can understand it).  Maybe I'm getting too old.


60p looks weird. People say it looks like real life but I don't agree. My eyes see motion blur if I wave my hand quickly. 

30p does make panning smoother, unless you pan really slowly, in which case 24p is fine.  I'd be hard pressed to see a difference between 30p and 24p outside of panning.

8 hours ago, Jay60p said:

I was actually editing what I posted because the text got chopped up, then your reply was announced!  Then I realized I had taken too long on my edit and it would not save.  Oh well.

Yes, I edit out the nasty or unpleasant stuff.  I love the look of narrative camera work and do try to get the shots to play cinematically.

I spend a lot of time on the editing, which is why I now have a big backlog of footage to get through.  I also often do camera test shots, which I frequently end up keeping because of who is in them.

For many years I could shoot my family with no one paying any attention to the camera, but now they are teenagers who turn to the camera and say "why are you filming?"

Worse than that, teenagers are boring, always sitting on a couch watching their phones!  There were a lot more interesting activities going on in the house before they got to high school.  And Covid has screwed up the past year as far as any interesting excursions and events go.

All I have of myself as a kid was someone's 8mm shots at my first birthday party that they gave to my parents. The rest is only snapshots. My kids will have hundreds of hours of video, mostly in high definition and stereo sound. Will they enjoy it? Sometimes I think it is mostly for me and my wife. Oh well.

I still love shooting it!

Sounds familiar, except the part about 8mm footage being available.  I think there's a VHS tape with me on it when I was about 10, but I lack a VCR, so it'll sit in a box for a while I'd imagine.  I seem to remember the tape wasn't that interesting lol.

I do wonder if the footage would be interesting to our future selves and descendants, but if I ask the question of myself, the answer is that yes, I'd be very interested in seeing clips of my grandparents or great-grandparents, even if they were shot on a potato.  But if it's not the case and one day my storage goes belly-up I'll have had enough fun along the way for it to have all been worthwhile!

6 hours ago, Towd said:

One of the things I've been told is that because 24 fps is just over most people's threshold for persistence of vision, the brain has to work a little harder to fill in the gaps than it does at 30, 48, 60fps, etc.  So it helps trigger the imaging/imagination part of your brain to more readily accept what you are seeing.  It sounds a little pseudo-sciencey, but I've noticed that anything from sets to VFX pop out to my eye as fake at higher frame rates.  Even 30 fps has a cheaper looking quality to me.

Personally, for sports or ultra-realism, high frame rates make sense.  And there have been film formats before the digital age that experimented with high frame rates, but they never seem to catch on for narratives.  24fps may have been chosen somewhat arbitrarily and with cost in mind, but I think it has remained a standard for a reason.  

Another advantage to 24 fps is it can be easily halved, quartered, thirded, and more without partial frames.  Kind of like how we have 24 hours in a day or 12 inches to a foot-- only luddites use weird things like 100cm to a meter.  😉

It makes sense that higher frame rates might be more revealing and therefore make CGI imperfections more visible.  Certainly HD and FHD had that effect - I heard that productions had to spend more money on makeup because little imperfections that hadn't been visible became a problem, and they had to work more slowly.

4 hours ago, SteveV4D said:

You're missing the other advantages: the fact that BRAW has been developed to work with both the cameras and the Resolve software.  BRAW opens up another level of control, allowing you to adjust WB and ISO in tune with the settings on the BM cameras.  BRAW also has multiple compression settings, giving you that flexibility whilst keeping file sizes smaller.  ProRes 4444 and XQ don't give you those options, and lack the full range that BRAW can give.  Plus, my question is: why shoot ProRes 4444 when I can shoot BRAW?  There are limitations, but this may well change....

You're missing the new URSA 12K, which only shoots BRAW but offers 8K and 4K at full sensor readout.  Remember BRAW isn't proper RAW.  As technology develops, this tech will shift to lower-end cameras, and who knows, the next Pocket or BM Micro camera may be an 8K body that can also do 4K BRAW from the full sensor.  I should also point out that playing with downloaded 12K BRAW, I got smoother playback than with 4K H265 files, so processing power isn't always about resolution.  I tried editing some AVCHD files I shot back in 2013 and it was an awful experience, and those were 1080p.  Yet I can play back 12K BRAW smoothly; go figure....

Let's not predict the issues of the future.  Editing is easier for me now than it was 4 to 6 years ago, despite the increase in resolution.  For me, things have gotten better.  And no camera in the area I am looking at even offers ProRes 4444/XQ, whilst BRAW is already available.  Will a 12-bit H265 file work?  Well, judging by the performance of 10-bit H265 files from the R5, it wouldn't make editing any easier - probably a lot more difficult.  For me that would be a step back, and actually worse than taking on extra resolution, given my experience testing the 12K BRAW file.

I appreciate your view that lower resolution carries advantages; of course it does.  But those advantages will diminish as time goes on, even if 1080p remains available on the cameras of tomorrow.  How many cameras still offer 720p shooting?

Utilising the full resolution of a camera in post yields its own advantages too.  Some of my colour grading, where I am selecting and motion-tracking sections of the image, is a lot easier and more reliable with more resolution than less.  Down-sampling 4K in post will be better than letting your camera do it live.  Some cropping is possible with minimal loss to overall image quality, especially if watched in 1080p.  Some of my clients have asked for pretty major crops in my recorded image, which would have been impossible in 1080p, but with 4K can be okay, even if not desirable or perfect.  Plus DCI gives me a slightly wider image on my Pockets and GH5s, which I find useful, especially with an MFT sensor.

If I can handle the extra file sizes and playback is smooth, then I'm not benefitting much from lowering the resolution of my filming.

Lots of stuff in here.  I understand about BRAW, but wasn't explicitly aware that you could get lower-resolution BRAW from the whole sensor.  I'd be interested to know if it's downsampling or simply line-skipping / pixel-binning.  If it's downsampling, that's a cool thing.

The comparison between RAW / BRAW / ProRes and h265 / AVCHD performance in post is really about IPB vs ALL-I.  Another example of something the GH5 does right but no one else is doing, because they can sell you their own RAW flavour and make you buy their external recorder or their NLE.  Tech companies are assholes sometimes.

I understand that extra resolution in post has advantages, and the more sophisticated your workflows, the more resolution is useful to you.  I've kind of gone the other way with my workflow development.  I started out wanting the highest quality capture (which meant 4K with sharp lenses) and was aiming to do all the hard work in post.  Now that I've worked out what look / aesthetic I want, I am aiming to get it right in camera as much as I can.  I have moved away from shooting to crop, post-stabilisation, and shooting log to colour-grade later.

I've worked out that I have much more in common with film-makers than videographers.  My client is me, and I can make technical decisions to please myself and fulfil my vision and objectives, instead of having clients to please who change their minds after the shoot, don't know much, but demand the most impractical and least relevant things, like 4K delivery for social media, or zooming into things in post.  If I was a videographer I'd probably be shooting in 6K RAW, putting that on my business cards, and recycling my hard drives on a regular basis.  It's not a job I envy, that's for sure!!
