
Camera resolutions by cinematographer Steve Yedlin


John Matthews

Concerning my previous post, I do think that Rodney Charters and Steve Yedlin differ in their theories about image acquisition. It seems Charters looks for the camera that gives the best image, whereas Yedlin prefers as neutral an image as possible, then adds character in the post pipeline.



Could someone explain to me the "compression" issue that was covered in part 2 of the video? The part which showed all the macro blocking?

What was causing the compression that he brought up? It wasn't just simple down-scaling from 4K (or 6K) to 1080p, was it?

I admit I was having a hard time staying awake during the second one because I started watching around midnight, and much of what was being shown was going over my head.

There was one section, though, on the effects of "compression" that showed off some serious macroblocking. This seemed - to me at least - to be one of the most serious problems.

I tried re-watching that segment but couldn't figure out what was causing the compression issue. Again, I was struggling to stay awake and got frustrated trying to find the exact part in the video where he covered compression (and macro-blocking).


Regarding Compression Question I asked Above ^:

It starts at the 38:16 mark of the Part 2 video.

I guess what he is doing is trying to emulate what a highly compressed codec does when applied to UHD or higher resolutions (spoiler alert: it leads to banding).

It was difficult to understand because 1) I have no idea how Nuke works, and 2) he wasn't really explaining what he was trying to show (just showing it "with compression" as well as "without compression").

At least I guess that is what he is trying to show.

I don't know whether those compression artifacts are also caused by using Long GOP as opposed to All-I compression, or whether it is just a matter of bitrate and/or bit depth.

So would using a "consumer camera" like an S1 but recording externally in ProRes HQ or DNxHR HQX avoid that compression?

Or does one have to export RAW to avoid the compression artifacts?  
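To illustrate the banding being discussed: heavy compression and low bit depths both effectively reduce the number of distinct tonal levels an image can hold, which turns a smooth gradient into visible steps. Here is a toy sketch (crude quantization, not an actual codec):

```python
# A smooth 8-bit horizontal gradient: 256 pixels, one value per pixel.
gradient = list(range(256))

def quantize(values, levels):
    """Crudely quantize to a fixed number of levels, which is roughly
    what heavy compression or a low bit depth does to smooth areas."""
    step = 256 / levels
    return [int(v // step) * step for v in values]

banded = quantize(gradient, 8)

# The smooth ramp had 256 distinct values; the quantized one has 8,
# so neighbouring pixels now jump in visible steps ("bands").
print(len(set(gradient)), len(set(banded)))
```

Real codecs band in a more complicated way (per-block DCT quantization), but the visible symptom on a sky or a wall is the same kind of stair-stepping.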


1 hour ago, Mark Romero 2 said:


I can try to unwrap this portion of the video for you. Regardless of how much compression was used, his point was to show that, even with a 4K or 8K image, compression plays a significant role in the final image. Just increasing the megapixels isn't enough; at a fixed bitrate, it can actually decrease the quality, not increase it. Concretely, a 4K image compressed to 10 Mbps will not produce a better image than a 1080p image at 10 Mbps. His point was that resolution doesn't necessarily mean a quality image; there are many factors, compression being one of them.

IMO, the issue of Long GOP vs. All-I is moot with modern encoders, but, if looking for a formula, 50 Mbps Long GOP is roughly equal to 100 Mbps All-I. You'll get significant space savings and 99% of the quality in most situations. I cannot comment on working with RAW... I'd never do that for my purposes.
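The fixed-bitrate point can be made concrete with back-of-the-envelope arithmetic (24 fps is just an assumed frame rate here): the same 10 Mbps budget spread over four times as many pixels leaves each pixel a quarter of the data.

```python
def bits_per_pixel(width, height, fps, mbps):
    """Average bits available per pixel per frame at a given bitrate."""
    return mbps * 1_000_000 / (width * height * fps)

uhd = bits_per_pixel(3840, 2160, 24, 10)
hd = bits_per_pixel(1920, 1080, 24, 10)

# UHD gets ~0.05 bits per pixel, 1080p gets ~0.20: same bitrate,
# four times less data per pixel at the higher resolution.
print(round(uhd, 3), round(hd, 3), round(hd / uhd))
```

This is why bumping resolution without bumping bitrate can make the image worse, not better: the encoder has to throw away more per pixel.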


1 hour ago, John Matthews said:


Thanks for the explanation. Much appreciated.

Another question if I may:

My understanding is that H.264 and H.265 have a 2:1 bitrate relationship, meaning that H.264 at 100 Mbps is roughly equal to H.265 at 50 Mbps.

But do we know if either one is better (or has fewer artifacts)? Or is it pretty much the same?

Or do I have my numbers mixed up?
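Assuming that 2:1 rule of thumb holds (in practice it varies a lot with content and encoder settings), the most tangible difference is storage. A quick sketch for a hypothetical 10-minute clip:

```python
def file_size_gb(mbps, minutes):
    """Approximate file size in GB for a constant-bitrate stream."""
    return mbps * 60 * minutes / 8 / 1000  # Mbps -> MB/s -> GB

h264 = file_size_gb(100, 10)  # 100 Mbps H.264
h265 = file_size_gb(50, 10)   # 50 Mbps H.265, similar quality by the rule of thumb

print(h264, h265)  # 7.5 GB vs 3.75 GB for the same 10 minutes
```

The trade-off is that H.265 is considerably heavier to encode and decode, which matters for in-camera heat and for editing machines.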

 


GOING OFF TOPIC (SORT OF):

I have to admit that one of the things that sold me on the Sony a6500 was a couple of tests by Max Yuryev and a test by Kai and what'shisface on DigitalRev, where they demonstrated that there was a LOT more detail (acutance?) in the 4K a6500 image than in a lot of similar cameras shooting in 4K. I think on DigitalRev they compared the a6500 to the a7S II in 4K, which has bigger pixels, but isn't supersampled from 6K sensor readout to 4K recording the way the 24MP sensor of the a6500 is.

So aside from the rolling shutter, is supersampling 4K from 6K (or, conversely, supersampling 1080p from 4K) the way to go? If I recall correctly, doesn't the FX9 have a 6K sensor that supersamples for 4K recording?

The other thing is dynamic range for mirrorless cameras and cinema cameras that DON'T have the 16-bit analog-to-digital conversion that Arri cameras have. Don't cameras with MORE megapixels generally have better dynamic range (while cameras with FEWER but larger pixels have better low light)?

It seems that way for RAW STILLS, but I don't know if that holds true for video when the codecs are the same.


On 3/28/2021 at 4:19 PM, John Matthews said:

I remember watching both of these. In a nutshell, up-resing tech was good enough for them in 2014; I imagine it's a little better in 2021. They'd much rather work like that and continue with a speedy 2K pipeline. And once again, audiences cannot tell the difference.

I'm familiar with those videos, although they were from a different time, when 2K was still a mainstream thing.  The one I watched was more recent, and in the context of newer 4K / 6K and even 8K cameras.  Obviously the ability to track things accurately in 3D space is pretty crucial if you're doing heavy VFX work like in Hollywood action blockbusters, with a camera moving through a scene (say, on a boom) and then having to insert dozens or hundreds of 3D-rendered objects into that scene, even extending to entire characters in the film.  I've seen how tracking works at fractions of a single pixel, where a tiny offset can have considerable impact on the location of something if the object is far from the reference points.  Obviously in cases like that, having RAW 8K would be far nicer for the VFX team to reference than a blocky-by-comparison 2K image.

This is, of course, talking about capture resolution, and not about final output resolution.

On 3/28/2021 at 5:08 PM, John Matthews said:


I think there's a spectrum of shooters ranging from people who get everything right in-camera and almost won't even colour the footage, through to those who shoot for complete accuracy and want to do as much as possible in post.  It will depend on your preferences, your budget (to hire a VFX team), and your schedule.   

17 hours ago, Mark Romero 2 said:


He mentioned in the video that he applied compression deliberately, in order to investigate what effects it would have on the image quality.  He said he chose something akin to what gets streamed to people's houses, or what comes out of DSLR cameras.  I'd guess something in the 25 Mbps ballpark.

It probably goes without saying that it's more difficult to tell the differences between resolutions if they're both going through a cheese grater at the end (or beginning!) of the image pipeline!

14 hours ago, Mark Romero 2 said:


Downsampling is definitely advantageous to overall image quality, for multiple reasons:

  • a 4K sensor gives you a <4K image after debayering, so downsampling means drawing from more pixels on the input than you're pushing out the output, which helps
  • random noise gets partially eliminated due to the averaging that occurs in the downsampling process

One thing that is noteworthy, though, is that if you're shooting with a compressed codec, for example h264/5, the artefacts are often a fixed number of pixels wide (for example, regardless of the resolution, the 'ripple' on a hard edge is likely to be the same number of pixels wide), so in that instance you may be better off recording your files in-camera in a higher resolution and then downscaling in post, where the downscaling process can average out more of those artefacts.  
This is something that's likely to be situation and camera dependent, but is worth a test if you're able to.  For example, shoot something in 6K and in 4K and put them both on a 4K timeline and see which looks cleaner, or 4K and 1080 on a 1080 timeline.   
The downside of this is that even if both resolutions had the same bitrate, and therefore file sizes, your computer will have to decode and then downscale more pixels from the higher resolution clip, increasing the computational load on your editing computer.
Like with all things, do your own testing and see what you can see 🙂 
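The noise-averaging point is easy to check numerically. A toy simulation (pure Python, illustrative numbers only, not modelling any real sensor):

```python
import random
import statistics

random.seed(1)

NOISE_SIGMA = 10.0  # simulated read-noise standard deviation
SIZE = 200          # the "high-res" patch is SIZE x SIZE pixels

# A flat mid-grey patch with additive Gaussian sensor noise.
hi_res = [[128 + random.gauss(0, NOISE_SIGMA) for _ in range(SIZE)]
          for _ in range(SIZE)]

# 2x downsample with a simple box filter: average each 2x2 block.
lo_res = [[(hi_res[2*y][2*x] + hi_res[2*y][2*x + 1] +
            hi_res[2*y + 1][2*x] + hi_res[2*y + 1][2*x + 1]) / 4
           for x in range(SIZE // 2)]
          for y in range(SIZE // 2)]

def noise_level(img):
    return statistics.pstdev(v for row in img for v in row)

# Averaging 4 independent noisy samples cuts the noise standard
# deviation by a factor of roughly 2 (sqrt of 4).
print(round(noise_level(hi_res), 1), round(noise_level(lo_res), 1))
```

Real downscalers use fancier filters than a 2x2 box average, but the same averaging effect is why supersampled footage tends to look cleaner.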


17 hours ago, Mark Romero 2 said:


I think the A7S sensor has comparable dynamic range to the A7 III / A7R III. Among Sony and RED cameras, the full-frame sensors have the most dynamic range. The RED Monstro is 8K and has the most dynamic range, though the Gemini has similar dynamic range at only 12MP vs 35MP on the Monstro. Arri dynamic range is the same regardless of sensor size or resolution. The URSA 12K and 4.6K have similar dynamic range, though the 4.6K is dual output while the 12K isn't, as far as I know.


15 hours ago, TomTheDP said:


Depends on what ISO.

The A7s sensor does not have a particularly large DR at base, but it slopes down a lot less steeply than other cameras. Its party trick is still having high DR at higher ISOs, though even at 12800 and maybe 25600 others are at least about as good. Above that, though, the old first-version A7s still does very well (for those few of us who want it).

 


On 3/27/2021 at 8:18 PM, kye said:

You can't make comparisons, discuss, criticise, or even comment on something you haven't watched.

You are incorrect.

 

Regardless, I have watched enough of the Yedlin videos to know that they are significantly flawed.  I know for a fact that:

  1. Yedlin's setup cannot prove anything conclusive regarding perceptible differences between various higher resolutions, even if we assume that a cinema audience always views a projected 2K or 4K screen.  Much of the required discernibility for such a comparison is destroyed by his downscaling a 6K file to 4K (and also to 2K and then back up to 4K) within a node editor, while additionally rendering the viewer window to an HD video file.  To properly make any such comparison, we must at least start with 6K footage from a 6K camera, 4K footage from a 4K camera, 2K footage from a 2K camera, etc.
  2. Yedlin's claim here that the node editor viewer's pixels match 1-to-1 the pixels on the screen of those watching the video is obviously false.  The pixels in his viewer window don't even match 1-to-1 the pixels of his rendered HD video.  This pixel mismatch is a critical flaw that invalidates almost all of the demonstrations that follow.
  3. At one point, Yedlin compared the difference between 6K and 2K by showing the magnified individual pixels.  This magnification revealed that the pixel size and pixel quantity did not change when he switched between resolutions, nor did the subject's size in the image.  Thus, he isn't actually comparing different resolutions in much of the video -- if anything, he is comparing scaling methods.
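Whatever one makes of the wider argument, the uncontroversial technical kernel of point 3 can be sketched in a few lines: once a downscale has averaged detail away, an upscale cannot recover it, so a 2K-to-4K round trip is not the same thing as native 4K. A toy 1-D example using a box-filter downscale and nearest-neighbour upscale (real scalers are more sophisticated, but the information loss is the same in kind):

```python
def downscale_2x(row):
    """Halve resolution by averaging neighbouring pixel pairs (box filter)."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

def upscale_2x(row):
    """Double resolution by duplicating each pixel (nearest neighbour)."""
    return [v for pixel in row for v in (pixel, pixel)]

# Alternating black/white pixels: the finest detail the resolution can hold.
native = [0, 255, 0, 255, 0, 255, 0, 255]

round_trip = upscale_2x(downscale_2x(native))
print(round_trip)  # every pixel is 127.5: the pixel-level detail is gone
```

How visible that loss is on real footage at viewing distance is exactly what the thread is arguing about; the sketch only shows that the loss itself is real.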

 

 

On 3/27/2021 at 8:18 PM, kye said:

Your criticisms are of things he didn't say.  That's called a straw man argument - https://en.wikipedia.org/wiki/Straw_man

In one of my earlier posts above, I provided a link to the section of Yedlin's video in which he demonstrates the exact flaw that I criticized.  Somehow, you missed the fact that what I claimed about the video is actually true.  I also mentioned other particular problems of the video in my earlier posts, and you missed those points as well.  So, no "straw man" here.

 

As I have suggested in another thread, please learn some reading comprehension skills, so that I and others don't have to keep repeating ourselves.

 

By the way, please note that directly above (within this post) is a numbered list in which I state flaws inherent in Yedlin's video, and please see that with each numbered point I include a link to the pertinent section of Yedlin's video.  You can either address those points or not, but please don't keep claiming that I have not watched the video.

 

 

On 3/27/2021 at 8:18 PM, kye said:

I'm not surprised that the criticisms you're raising aren't valid, as you've displayed a lack of critical thinking on many occasions, but what I am wondering is how you think you can criticise something you haven't watched?

The only thing I can think of is that you don't understand how logic, or logical discourse actually works, which unfortunately makes having a reasonable discussion impossible.

In light of the fact that I actually linked portions of the video and made other comments about other parts of the video, logic dictates that I have at least watched those portions of the video.  So, stating that I have not watched the video is illogical.

 

Unless, of course, you missed those points in my post, in which case I would urge you once again to please develop your reading comprehension.

 

 

On 3/27/2021 at 8:18 PM, kye said:

This whole thread is about a couple of videos that John has posted, and yet you're in here arguing with people about what is in them when you haven't watched them, let alone understood them.

Actually, neither you nor any other poster has directly addressed the flaws in Yedlin's video that I pointed out.  If you think that I do not understand the videos, perhaps you could explain what is wrong with my specific points.  You can start with the numbered list within this post.


13 hours ago, tupp said:


The whole video made sense to me.

 

What you are not understanding (BECAUSE YOU HAVEN'T WATCHED IT) is that you can't just criticise bits of it because the logic of it builds over the course of the video.  It's like you've read a few random pages from a script and are then criticising them by saying they don't make sense in isolation.

The structure of the video is this:

  • He outlines the context of what he is doing and why
  • He talks about how to get a 1:1 view of the pixels
  • He shows that, in a 1:1 view of the pixels, the resolutions aren't discernible
  • Then he goes on to explore the many different processes, pipelines, and things that happen in the real world (YES, INCLUDING RESIZING) and shows that under these situations the resolutions aren't discernible either

You have skipped enormous parts of the video, and you can't do that.

Once again, you can't skip parts of a logical progression, or dare I say it "proof", and expect for it to make sense.

Your posts don't make sense if I skip every third word, or if I only read the first and last line.

Yedlin is widely regarded as a pioneer in the space of colour science, resolution, FOV, and other matters.  His blog posts consist of a mixture of logical arguments, mathematics, physics and controlled tests.  These are advanced topics and not many others have put the work in to perform these tests.  
The reason that I say this is that not everyone will understand these tests.  Not everyone understands the correct and incorrect ways to use reductive logic, logical interpolation, extrapolation, equivalence, inference, exception, boundaries, or other logical devices.  

I highly doubt that you would understand the logic that he presents, but one thing I can tell with absolute certainty, is that you can't understand it without actually WATCHING IT.


15 minutes ago, kye said:

The whole video made sense to me.

Good for you!

 

 

15 minutes ago, kye said:

What you are not understanding (BECAUSE YOU HAVEN'T WATCHED IT) is that you can't just criticise bits of it because the logic of it builds over the course of the video.  It's like you've read a few random pages from a script and are then criticising them by saying they don't make sense in isolation.

Nope.  The three points that I made prove that Yedlin's comparisons are invalid for comparing the discernibility of different resolutions.

 

If you can explain exactly how he gets around those three problems, I will take back my criticism of Yedlin.  So far, no one has given any explanation of how his setup could possibly work.

 

 

20 minutes ago, kye said:

The structure of the video is this:

  • He outlines the context of what he is doing and why

Yes.  I mentioned that section and linked it in a previous post.  There is no way his setup can provide anything conclusive regarding the lack of discernibility between different resolutions.

 

 

22 minutes ago, kye said:

He talks about how to get a 1:1 view of the pixels

He assumes that the 1-to-1 pixel view happens automatically, in a section that I linked in my previous post.  Again, here is the link to that passage in Yedlin's video.

 

It is impossible to get a 1-to-1 view of the pixels that appear within Yedlin's node editor -- those individual pixels were lost the moment he rendered the HD video.

 

So, most of his comparisons are meaningless.

 

 

29 minutes ago, kye said:

He shows that, in a 1:1 view of the pixels, the resolutions aren't discernible

No he doesn't show a 1-to-1 pixel view, because it is impossible to actually see the individual pixels within his node editor viewer.  Those pixels were blended together when he rendered the HD video.

 

In addition, even if Yedlin were able to achieve a 1-to-1 pixel match in his rendered video, the downscaling and/or downscaling-and-upscaling performed in the node editor destroys any discernible difference in resolutions.  He is merely comparing scaling algorithms -- not actual resolution differences.

 

Furthermore, Yedlin reveals what is actually happening in many of his comparisons when we see the magnified view.  The pixel size, the pixel number, and the image framing all remain identical while he switches between different resolutions.  So, again, Yedlin is not comparing different resolutions -- he is merely comparing scaling algorithms.

 

 

45 minutes ago, kye said:
  • Then he goes on to explore the many different processes, pipelines, and things that happen in the real world (YES, INCLUDING RESIZING) and shows that under these situations the resolutions aren't discernible either

That is at the heart of what he is demonstrating.  Yedlin is comparing the results of various downscaling and upscaling methods.  He really isn't  comparing different resolutions.

 

 

49 minutes ago, kye said:

You have skipped enormous parts of the video, and you can't do that.

Once again, you can't skip parts of a logical progression, or dare I say it "proof", and expect for it to make sense.

Your posts don't make sense if I skip every third word, or if I only read the first and last line.

Yedlin is not comparing different resolutions -- he is merely comparing scaling algorithms.  You need to face that fact.

 

 

50 minutes ago, kye said:

Yedlin is widely regarded as a pioneer in the space of colour science, resolution, FOV, and other matters.

Yedlin is a good shooter, but he is hardly an imaging scientist and he certainly is no pioneer.  There are just too many flaws and uncontrolled variables in his comparisons to draw any reasonable conclusions.  He covers the same ground and makes the same classic mistakes of others who have preceded him, so he doesn't really offer anything new.

 

Furthermore, as @jcs pointed out, Yedlin's comparisons are "way too long and rambly."

 

 

1 hour ago, kye said:

His blog posts consist of a mixture of logical arguments, mathematics, physics and controlled tests.  These are advanced topics and not many others have put the work in to perform these tests.  

From what I have seen in these resolution videos and in his other comparisons, Yedlin glosses over inconvenient points that contradict his bias, and his methods are slipshod.  In addition, I haven't noticed him contributing anything new to any of the topics which he addresses.

 

 

1 hour ago, kye said:

The reason that I say this is that not everyone will understand these tests.  Not everyone understands the correct and incorrect ways to use reductive logic, logical interpolation, extrapolation, equivalence, inference, exception, boundaries, or other logical devices.  

Indeed... I don't think that Yedlin even understands his own tests.

 

 

1 hour ago, kye said:

I highly doubt that you would understand the logic that he presents, but one thing I can tell with absolute certainty, is that you can't understand it without actually WATCHING IT.

Well, I don't understand how a resolution comparison is valid if one doesn't actually compare different resolutions.  You obviously cannot explain how such a comparison is possible.

 

So, I am not going to risk wasting an hour of my time watching more of a video comparison that is fatally flawed from the get-go.

 


4 hours ago, tupp said:

So, I am not going to risk wasting an hour of my time watching more of a video comparison that is fatally flawed from the get-go.

You only think it's flawed because you didn't watch the parts of it that explain it.

4 hours ago, tupp said:

If you can explain exactly how he gets around those three problems, I will take back my criticism of Yedlin.  So far, no one has given any explanation of how his setup could possibly work.

I have spent many many hours preparing a direct response to answer your question, and all the other questions you have put forward. I think after a thorough examination you will find all the answers to your questions and queries.

Please click the below for all the information that you need:

 


37 minutes ago, kye said:

You only think it's flawed because you didn't watch the parts of it that explain it.

The video's setup is flawed, and there are no parts in the video which explain how that setup could possibly work to show differences in actual resolution.  If you disagree and if you think that you "understand" the video more than I, then you should have no trouble countering the three specific flaws of the video that I numbered above.

 

However, I doubt that you actually understand the video or the topic, as you can't even link to the points in the video that might explain how it works.

 

 

44 minutes ago, kye said:

I have spent many many hours preparing a direct response to answer your question, and all the other questions you have put forward. I think after a thorough examination you will find all the answers to your questions and queries.

Please click the below for all the information that you need:

I see.  So, you actually have no clue about the topic, and you are just trolling.


1 hour ago, tupp said:

The video's setup is flawed, and there are no parts in the video which explain how that setup could possibly work to show differences in actual resolution.  If you disagree and if you think that you "understand" the video more than I, then you should have no trouble countering the three specific flaws of the video that I numbered above.

 

However, I doubt that you actually understand either the video or the topic, as you can't even link to the points in the video that might explain how it works.

 

 

I see.  So, you have actually no clue about the topic, and you are just trolling.

A friend recommended a movie to me, but it looked really long.

I watched the first scene and then the last scene, and the last scene made no sense.  It had characters in it I didn't know about, and it didn't explain how the characters I did know got there.  The movie is obviously fundamentally flawed, and I'm not watching it.  I told my friends that it was flawed, but they told me that it did make sense and the parts I didn't watch explained the whole story, but I'm not going to watch a movie that is fundamentally flawed!

They keep telling me to watch the movie, but they're obviously idiots, because it's fundamentally flawed.

They also sent me some recipes, and the chocolate cake recipe had ingredient three as eggs and ingredient seven as cocoa powder (I didn't read the other ingredients) but you can't make a cake using only eggs and cocoa powder - the recipe is fundamentally flawed.  My friend said that the other ingredients are required in order to get a cake, but I'm not going to bother going back and reading the whole recipe and then spending time and money making it when it's obviously flawed.  

My friends really are stupid.  I've told them about the bits that I saw, and they kept telling me that a movie and a recipe only make sense if you go through the whole thing, but that's not how I do things, so obviously they're wrong.  

It makes me cry for the state of humanity when that movie was not only made, but it won 17 Oscars, and that cake recipe was named Oprah's cake of the month.  People really must be stupid.


On 3/25/2021 at 11:01 PM, John Matthews said:

I've been watching some resolution insights by cinematographer Steve Yedlin that I think many might find very interesting. Not sure if this has already been posted...

It would be interesting to discuss.

I also found many of his articles particularly interesting, especially since some include very carefully prepared tests demonstrating how the concepts translate to the real world.

http://www.yedlin.net/NerdyFilmTechStuff/index.html

Did you find anything else in here that was of interest? 


3 hours ago, kye said:

and they kept telling me that a movie and a recipe only make sense if you go through the whole thing, but that's not how I do things, so obviously they're wrong.

Unless it's Wonder Woman 1984, in which case watching only the trailer (i.e., not the movie) is your best course of action.


7 hours ago, kye said:

A friend recommended a movie to me, but it looked really long.

I watched the first scene and then the last scene, and the last scene made no sense.  It had characters in it I didn't know about, and it didn't explain how the characters I did know got there.  The movie is obviously fundamentally flawed, and I'm not watching it.  I told my friends that it was flawed, but they told me that it did make sense and the parts I didn't watch explained the whole story, but I'm not going to watch a movie that is fundamentally flawed!

They keep telling me to watch the movie, but they're obviously idiots, because it's fundamentally flawed.

They also sent me some recipes, and the chocolate cake recipe had ingredient three as eggs and ingredient seven as cocoa powder (I didn't read the other ingredients) but you can't make a cake using only eggs and cocoa powder - the recipe is fundamentally flawed.  My friend said that the other ingredients are required in order to get a cake, but I'm not going to bother going back and reading the whole recipe and then spending time and money making it when it's obviously flawed.  

My friends really are stupid.  I've told them about the bits that I saw, and they kept telling me that a movie and a recipe only make sense if you go through the whole thing, but that's not how I do things, so obviously they're wrong.  

It makes me cry for the state of humanity when that movie was not only made, but it won 17 Oscars, and that cake recipe was named Oprah's cake of the month.  People really must be stupid.

Oh, that is such a profound story.  I am sorry to hear that you lost your respect for Oprah.

 

Certainly, there are some lengthy videos that cannot be criticized after merely knowing the premise, such as this 3-hour video that proves that the Earth is flat.  It doesn't make any sense at the outset and it is rambling, but you have to watch the entire 2 hours and 55 minutes, because (as you described the Yedlin video in an earlier post) "the logic of it builds over the course of the video."  Let me know what you think after you have watched the entire flat Earth video.

 

Now, reading your story has moved me, so I watched the entire Yedlin video!

 

Guess what? -- The video is still fatally flawed, and I found even more problems.  Here are four of the video's main faults:

  1. Yedlin's setup doesn't prove anything conclusive with regard to perceptible differences between various higher resolutions, even if we assume that a cinema audience always views a projected 2K or 4K screen.  Much of the required discernibility for such a comparison is destroyed by his downscaling a 6K file to 4K (and also to 2K and then back up to 4K) within a node editor, while additionally rendering the viewer window to an HD video file.  To properly make any such comparison, we must at least start with 6K footage from a 6K camera, 4K footage from a 4K camera, 2K footage from a 2K camera, etc.
  2. Yedlin's claim here that the node editor viewer's pixels match 1-to-1 with the pixels on the screens of those watching the video is obviously false.  The pixels in his viewer window don't even match 1-to-1 with the pixels of his rendered HD video.  This pixel mismatch is a critical flaw that invalidates almost all of his demonstrations that follow.
  3. At one point, Yedlin compared the difference between 6K and 2K by showing the magnified individual pixels.  This magnification revealed that the pixel size and pixel quantity did not change when he switched between resolutions, nor did the subject's size in the image.  Thus, he isn't actually comparing different resolutions in much of the video -- if anything, he is comparing scaling methods.
  4. Yedlin glosses over the factor of screen size and viewing angle.  He cites dubious statistics regarding common viewing angles, which he uses to draw the shaky conclusion that larger screens aren't needed.  Additionally, he ignores the fact that larger screens are integral when considering resolution -- if an 8K screen and an HD screen have the same pixel size, at a given distance the 8K screen will occupy 16 times the area of the HD screen.  That's a powerful fact regarding resolution, but Yedlin dismisses larger screens as a "specialty thing."
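For what it's worth, the 16x figure in point 4 is simple arithmetic; a quick sketch using the standard HD and 8K UHD raster dimensions:

```python
# Standard raster sizes (assumed: HD = 1920x1080, 8K UHD = 7680x4320).
hd_w, hd_h = 1920, 1080
uhd8k_w, uhd8k_h = 7680, 4320

# With identical pixel pitch (physical pixel size), panel area scales
# directly with pixel count.
area_ratio = (uhd8k_w * uhd8k_h) / (hd_w * hd_h)
print(area_ratio)  # 16.0
```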

 

Now that I have watched the entire video and have fully understood everything that Yedlin was trying to convey, perhaps you could counter the four problems of Yedlin's video listed directly above.  I hope that you can do so, because, otherwise, I just wasted over an hour of my time that I cannot get back.
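To illustrate why the downscale-then-upscale round trip in point 1 matters, here is a toy sketch in pure Python.  This is not Yedlin's actual Nuke pipeline, and real scalers use far better filters than this box/nearest-neighbour pair, but the principle holds: detail at the source pixel pitch cannot survive the downscale, and the upscale cannot invent it back.

```python
def downscale2x(img):
    # Average each 2x2 block: this is where sub-pixel detail is discarded.
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4
             for x in range(w // 2)] for y in range(h // 2)]

def upscale2x(img):
    # Nearest-neighbour doubling: no new detail can be created.
    return [[px for px in row for _ in (0, 1)] for row in img for _ in (0, 1)]

# A checkerboard is the finest detail an image can hold at its own pitch.
checker = [[255 if (x + y) % 2 else 0 for x in range(4)] for y in range(4)]

round_trip = upscale2x(downscale2x(checker))
# The checkerboard averages out to flat grey: the downscaled branch of such
# a comparison can never show detail finer than its own resolution allows.
```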

