
Camera resolutions by cinematographer Steve Yedlin


John Matthews

In all seriousness, at the time of his postulates, Netflix was rejecting the Arri Alexa for not being native 4K (and other platforms were following suit), and in Mr. Yedlin's line of work that was creating real issues during pre-production and in pitch meetings. On the high end there was the Alexa 65, but not everyone wanted the hassle of it (limited sets of lenses, much more power hungry), and to many pro cinematographers it felt preposterous for producers to select the camera package on a specs basis instead of letting them propose a package based on the artistic look. I don't think it was meant for lower-budget photographers/videographers/indie DPs, to stop us from buying a 4K or higher-resolution camera. Speaking personally, it didn't deter me from buying a Sony a6300 at the time his first video came out.


The a6300 is a good example of exactly the point he's trying to make: the distinction between resolution and fidelity. I had a shoot with a Sony A7S II and an Ursa back to back. The Ursa shot 1920x1080 ProRes 444; the A7S II shot UHD in heavily compressed h264. The difference in PERCEIVED resolution between these two cameras is night and day, with the Ursa kicking the A7S's butt, because the image of the former is so much more robust in terms of compression noise, color accuracy, banding, edge detail, and so on. One is more resolute, but the other camera has a lot more perceived resolution because the image is better. If you blow up the images and ask random viewers which camera is 'sharper', 9 out of 10 times people will say the Ursa.


17 hours ago, tupp said:

Now that I have watched the entire video and have fully understood everything that Yedlin was trying to convey, perhaps you could counter the four problems of Yedlin's video listed directly above.  I hope that you can do so, because, otherwise, I just wasted over an hour of my time that I cannot get back.

Welcome to the conversation.

I'll happily discuss your points now you have actually watched it, but will take the opportunity to re-watch it before I reply.

I find it incredibly rude that you are so protective of your own time and yet feel so free to recklessly disregard the time of others, both by talking about something you didn't care to watch and by risking the outcome that you were talking out your rear end without knowing it. But I'll still talk about the topic, as even selfish people can make sense, and a stopped clock is right twice a day, so we'll see how things shake out. Watching is not understanding, so you've passed the first bar, which is necessary but not sufficient.

3 hours ago, seanzzxx said:

The a6300 is a good example of exactly the point he's trying to make: the distinction between resolution and fidelity. I had a shoot with a Sony A7S II and an Ursa back to back. The Ursa shot 1920x1080 ProRes 444; the A7S II shot UHD in heavily compressed h264. The difference in PERCEIVED resolution between these two cameras is night and day, with the Ursa kicking the A7S's butt, because the image of the former is so much more robust in terms of compression noise, color accuracy, banding, edge detail, and so on. One is more resolute, but the other camera has a lot more perceived resolution because the image is better. If you blow up the images and ask random viewers which camera is 'sharper', 9 out of 10 times people will say the Ursa.

This is an excellent example and I think speaks to the type of problem.

Light bouncing off reality is of (practically) infinite resolution, and then it goes through the air, then:

  • through your filters
  • lens elements (optical distortions) and diffraction from the aperture
  • sensor stack
  • and is then sensed by individual pixels on the sensor (can diffraction happen on edges of the pixels?)
  • it then gets quantised to a value and processed RAW in-camera and potentially recorded or output at this point
  • In some cameras it then goes on to be non-linearly resized (eg, to compensate for lens distortions - do cameras do this for video?)
  • rescaled to the output resolution 
  • processed at the output resolution (eg, sharpening, colour science, vignetting, bit-depth quantisation, etc)
  • then compressed into a codec at a given bitrate

Every one of these things degrades the image, and the damage is cumulative.  All but one of them could be perfect, but if one is terrible then the output will still be terrible.  Damage may also be of different, mostly independent kinds - eg, resolution vs colour fidelity - so you might have an image pipeline that has problems across many 'dimensions'.
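To make the weakest-link idea concrete, here's a minimal sketch (Pillow assumed; "frame.png" and the quality values are placeholders of mine, not anything from Yedlin's pipeline) of a two-stage pipeline where one terrible stage ruins an otherwise decent chain:

```python
# Hypothetical two-stage pipeline: rescaling, then a lossy "codec" stage
# (JPEG standing in for heavy video compression).
from PIL import Image

def pipeline(img, resample, codec_quality):
    w, h = img.size
    # Stage 1: halve and restore the resolution with the chosen filter.
    out = img.resize((w // 2, h // 2), resample).resize((w, h), resample)
    # Stage 2: encode/decode through a lossy codec at the given quality.
    out.save("tmp.jpg", quality=codec_quality)
    return Image.open("tmp.jpg")

src = Image.open("frame.png").convert("RGB")   # placeholder test frame
decent = pipeline(src, Image.LANCZOS, 90)      # every stage reasonable
ruined = pipeline(src, Image.LANCZOS, 5)       # one terrible stage dominates
```

Good scaling can't rescue the second output - the worst stage sets the ceiling.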

Going back to your example and the Sony A7S II: if you take RAW stills then I'm sure the images can be sharp and great - in which case the optics and sensor can be spectacular and it's the video processing and compression that are at fault.  This is yet another reason that I think resolutions above 2K are a bad thing - most cameras aren't getting anything like the resolution that high-quality 2K can offer, but they still require a more and more powerful computer to edit and work with the mushy, compressed-to-death images.

Any camera that can take high-quality stills but produces mushy video is very frustrating as they could have spent the time improving the output instead of just upping the specs without getting the associated benefits.


Like everything in life, context is paramount. The Ursa/A7S II example is (without context) quite misleading. First, it depends on which Ursa: 4K, 4.6K, Pro, G2. Second, it's not useful to compare one shoot with one camera against a shoot (even the next day) with another camera unless you are shooting in exactly the same conditions. And third, I wouldn't use an Ursa for any shoot where I would consider using an A7S II: the Sony in log starts at ISO 1600, and on most Ursas you wouldn't want to use 1600 for anything. The size and weight of the two is a great differentiator for the types of use of each one.

On the other hand, I agree resolution isn't everything, and a 1080p image from anything Blackmagic is quite a good image, mainly (for this thread about "K"s and resolution) because it's not as compressed and, in every Ursa, it's downsampled from a higher sensor resolution. And that's precisely why I bought the a6300 in 2016: it downsampled 6K of sensor information to give a 4K image, which at the time I wanted for a 1080/2K delivery. Now the context is this: I use anamorphic lenses, which are not "sharp", and with a 2x lens on a 16:9 sensor you crop a lot off the sides, so I wanted as much resolution as I could cheaply get. The GH5S was not out yet, the BMPCC 4K was very limited in dynamic range and ISO, and there were no 4K mirrorless Canon, Nikon or Fuji cameras.


A bit off topic, but since I am unlikely to be able to buy the latest and greatest (barring a lottery win that will come any day/month/year now), and given that up-rezzing photos has gotten very good recently (not perfect, but very good), are there any standard-definition video cameras I should be looking to buy cheap (SD card or hard disc, or even record-to-CD) on the chance that video up-rezzing will be great in a few years?

Something that has a really nice look to it even though it's only standard def?    Probably CCD?

Old pro or hobbyist?

Some prices on eBay are ridiculous, though, for many Handycam-type cameras, even for mini-tape machines.

Just curious.


Ok..  Let's discuss your comments.

On 3/31/2021 at 11:32 PM, tupp said:

1)

Yedlin's setup doesn't prove anything conclusive in regards to perceptible differences between various higher resolutions, even if we assume that a cinema audience always views a projected 2K or 4K screen.  Much of the required discernability for such a comparison is destroyed by his downscaling a 6K file to 4K (and also to 2K and then back up to 4K) within a node editor, while additionally rendering the viewer window to an HD video file.  To properly make any such comparison, we must at least start with 6K footage from a 6K camera, 4K footage from a 4K camera, 2K footage from a 2K camera, etc.

His first test, the results of which you can see at 6:40-8:00, compares two image pipelines:

  1. 6K image downsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x
  2. 6K image downsampled to 2K, then upsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x

As this view is 2X digitally zoomed in, each pixel is twice as large as it would be if you were viewing the source video on your monitor, so the test is actually unfair.  There is obviously a difference in the detail that's actually there, and this can be seen when he zooms in radically at 7:24, but when viewed at 1:1 starting at 6:40 there is perceptually very little difference, if any.

Regardless of whether the image pipeline is "proper" (and I'll get to that comment in a bit), if downscaling an image to 2K and back up again isn't visible, the case that resolutions higher than 2K are perceptually differentiable is pretty weak straight out of the gate.
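For anyone wanting to try this at home, here's a rough sketch of the same down/up-scale bounce (my own replication, not Yedlin's node graph; "source_6k.png" and the choice of Lanczos filtering are assumptions):

```python
# Build a "4K" version and a "2K bounced back to 4K" version of the same
# frame, then measure how different they really are.
import numpy as np
from PIL import Image

src = Image.open("source_6k.png").convert("RGB")   # placeholder source frame
k4 = src.resize((3840, 2160), Image.LANCZOS)
k2_up = src.resize((1920, 1080), Image.LANCZOS).resize((3840, 2160), Image.LANCZOS)

a = np.asarray(k4, dtype=np.float32)
b = np.asarray(k2_up, dtype=np.float32)
print("mean abs difference (8-bit steps):", np.abs(a - b).mean())
```

If the mean difference comes out at only a few 8-bit steps, the two 4K frames will be very hard to tell apart at normal viewing distances.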

On 3/31/2021 at 11:32 PM, tupp said:

2)

Yedlin's claim here that the node editor viewer's pixels match 1-to-1 to the pixels on the screen of those watching the video is obviously false.  The pixels in his viewer window don't even match 1-to-1 the pixels of his rendered HD video.  This pixel mismatch is a critical flaw that invalidates almost all of his demonstrations that follow.

Are you saying that the pixels in the viewer window in Part 2 don't match the pixels in Part 1?

Even if this was the case, it still doesn't invalidate comparisons like the one at 6:40 where there is very little difference between two image pipelines where one has significantly less resolution than the other and yet they appear perceptually very similar / identical. 

On 3/31/2021 at 11:32 PM, tupp said:

3)

At one point, Yedlin compared the difference between 6K and 2K by showing the magnified individual pixels.  This magnification revealed that the pixel size and pixel quantity did not change when he switched between resolutions, nor did the subject's size in the image.  Thus, he isn't actually comparing different resolutions in much of the video -- if anything, he is comparing scaling methods.

He is comparing scaling methods - that's what he is talking about in this section of the video.  

This use of scaling algorithms may seem strange if you think that your pipeline is something like 4K camera -> 4K timeline -> 4K distribution, or the same in 2K, as you mentioned in your first point, but this is false.  There are no such pipelines, and pipelines like this are impossible.  This is because the pixels in the camera aren't pixels at all; rather, they are photosites that sense either Red or Green or Blue, whereas the pixels in your NLE and on your monitor or projector actually have Red and Green and Blue values.

The 4K -> 4K -> 4K pipeline you mentioned is actually ~8M colour values -> ~24M colour values -> ~24M colour values (a 3840x2160 sensor has ~8.3M photosites, each sensing a single colour; after debayering, each of its ~8.3M pixels carries three colour values, ~24.9M in total).

The process of taking an array of photosites that are each only one colour and creating an image where every pixel has values for Red, Green and Blue is called debayering, and it involves scaling.

This is a good link to see what is going on: https://pixinsight.com/doc/tools/Debayer/Debayer.html

From that article: "The Superpixel method is very straightforward. It takes four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The spatial resolution of the resulting RGB image is one quarter of the original CFA image, having half its width and half its height."

Also from the article: "The Bilinear interpolation method keeps the original resolution of the CFA image. As the CFA image contains only one color component per pixel, this method computes the two missing components using a simple bilinear interpolation from neighboring pixels."

As you can see, both of those methods talk about scaling.  Let me emphasise this point - any time you ever see a digital image taken with a digital camera sensor, you are seeing a rescaled image.
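As an illustration of the first of those two methods, here is a minimal NumPy sketch of Superpixel debayering as the article describes it, assuming an RGGB layout (the random array is a stand-in for real sensor data):

```python
import numpy as np

def superpixel_debayer(cfa):
    """cfa: 2D array of raw photosite values in RGGB order."""
    r  = cfa[0::2, 0::2]                    # top-left of each 2x2 block
    g1 = cfa[0::2, 1::2]                    # two green sites per block,
    g2 = cfa[1::2, 0::2]                    # averaged together
    b  = cfa[1::2, 1::2]                    # bottom-right
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

raw = np.random.rand(2160, 3840)            # stand-in for "4K" sensor data
rgb = superpixel_debayer(raw)               # -> (1080, 1920, 3): a 2K RGB image
```

Note how a "4K" array of photosites comes out as a 2K RGB image under this method - the resolution change is baked into the debayer itself.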

Therefore Yedlin's use of scaling operates on an image that has already been scaled from the sensor data into an image with three times as many colour values as the sensor captured.

On 3/31/2021 at 11:32 PM, tupp said:

4)

Yedlin glosses over the factor of screen size and viewing angle.  He cites dubious statistics regarding common viewing angles which he uses to make the shaky conclusion that larger screens aren't needed.  Additionally, he avoids consideration of the fact that larger screens are integral when considering resolution -- if an 8K screen and an HD screen have the same pixel size, at a given distance the 8K screen will occupy 16 times the area of the HD screen.  That's a powerful fact regarding resolution, but Yedlin dismisses larger screens as a "specialty thing."

A quick google revealed that there are ~1500 IMAX theatre screens worldwide, and ~200,000 movie theatres worldwide.

That's less than 1% (1,500 out of 200,000 is 0.75%).  You could make the case that there are other non-IMAX large screens around the world, and that's fine, but when you take into account that over 200 million TVs are sold worldwide each year, even the number of standard movie theatres becomes a drop in the ocean when you're talking about the screens actually used for watching movies or TV worldwide.  Source: https://www.statista.com/statistics/461316/global-tv-unit-sales/

If you can't tell the difference between 4K and 2K image pipelines at normal viewing distances, and you are someone who posts on a camera forum about resolution, then the vast majority of people watching a movie or a TV show definitely won't be able to tell the difference.

On 3/31/2021 at 11:32 PM, tupp said:

Now that I have watched the entire video and have fully understood everything that Yedlin was trying to convey, perhaps you could counter the four problems of Yedlin's video listed directly above.  I hope that you can do so, because, otherwise, I just wasted over an hour of my time that I cannot get back.

Let's recap:

  • Yedlin's use of rescaling is applicable to digital images because every image from every digital camera sensor that basically anyone has ever seen has already been rescaled by the debayering process by the time you can look at it
  • There is little to no perceptual difference when comparing a 4K image directly with a copy of that same image that has been downscaled to 2K and then upscaled to 4K again, even if you view it at 2X
  • The test involved swapping back and forth between the two scenarios, whereas in the real world you are unlikely ever to see the comparison like that, or even at all
  • The viewing angle of most movie theatres in the world isn't sufficient to reveal much difference between 2K and 4K, let alone the hundreds of millions of TVs sold every year which are likely to have a smaller viewing angle than normal theatres
  • These tests you mentioned above all involved starting with a 6K image from an Alexa 65, one of the highest quality imaging devices ever made for cinema
  • The remainder of the video discusses a myriad of factors that are likely to be present in real-life scenarios that further degrade image resolution, both in the camera and in the post-production pipeline
  • You haven't shown any evidence that you have watched past the 10:00 mark in the video

Did I miss anything?


For anyone who has read this far into a thread about resolution but for some reason doesn't have an hour of their day to hear from an industry expert on the subject, the section of the video I have linked to below is a very interesting comparison of multiple cameras (film and digital) with sensors of varying resolution, and it's quite clear that the level of perceptual detail coming out of a camera is not that strongly related to the sensor resolution:

 


On 3/31/2021 at 10:38 PM, seanzzxx said:

I had a shoot with a Sony A7S II and an Ursa back to back. The Ursa shot 1920x1080 ProRes 444; the A7S II shot UHD in heavily compressed h264. The difference in PERCEIVED resolution between these two cameras is night and day, with the Ursa kicking the A7S's butt, because the image of the former is so much more robust in terms of compression noise, color accuracy, banding, edge detail, and so on.

Keep in mind that resolution is important to color depth.  When we chroma subsample to 4:2:0 (as likely with your A7S II example), we throw away chroma resolution and thus reduce color depth.  Of course, compression also kills a lot of the image quality.

 

 

On 3/31/2021 at 10:38 PM, seanzzxx said:

One is more resolute but the other camera has a lot more perceived resolution because the image is better.

Yedlin also used the term "resolute" in his video.  I am not sure that it means what you and Yedlin think it means.

 

 

On 4/2/2021 at 6:16 AM, kye said:

His first test, which you can see the results of at 6:40-8:00 compares two image pipelines:

  1. 6K image downsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x
  2. 6K image downsampled to 2K, then upsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x

It is impossible for you (the viewer of Yedlin's video) to see 1:1 pixels (as I will demonstrate), and it is very possible that Yedlin is not viewing the pixels 1:1 in his viewer.

Merely zooming "2X" does not guarantee that he nor we are seeing 1:1 pixels.  That is a faulty assumption.

 

 

On 4/2/2021 at 6:16 AM, kye said:

As this view is 2X digitally zoomed in, each pixel is twice as large as it would be if you were viewing the source video on your monitor, so the test is actually unfair.

Well, it's a little more complex than that.

The size of the pixels that you see is always the size of the pixels of your display, unless, of course, the zoom is sufficient to render the image pixels larger than the display pixels.  Furthermore, blending and/or interpolation of pixels is introduced if the image pixels do not match 1:1 with those of the display, or if the image pixels are larger than those of the display while not being an integer multiple of the display pixels.
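To illustrate that distinction, a small sketch (Pillow assumed; "frame.png" is a placeholder): an exact 2X nearest-neighbour zoom maps each image pixel onto a clean 2x2 block of display pixels, while a 1.5X zoom forces every output pixel to be interpolated:

```python
from PIL import Image

img = Image.open("frame.png")
w, h = img.size

# Integer zoom: each source pixel becomes an exact 2x2 block - no blending.
intact = img.resize((w * 2, h * 2), Image.NEAREST)

# Non-integer zoom: output pixels straddle source pixels, so the resampler
# must blend neighbouring values - exactly the corruption described above.
blended = img.resize((w * 3 // 2, h * 3 // 2), Image.BILINEAR)
```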

Unfortunately, all of the images that Yedlin presents as 1:1 most definitely are not a 1:1 match, with the pixels corrupted by blending/interpolation (and possibly compression).

 

 

On 4/2/2021 at 6:16 AM, kye said:

There is obviously a difference in the detail that's actually there, and this can be seen when he zooms in radically at 7:24, but when viewed at 1:1 starting at 6:40 there is perceptually very little difference, if any.

When Yedlin zooms-in, we see a 1:1 pixel match between the two images, so there is no actual difference in resolution in that instance -- an actual resolution difference is not being compared here nor in most of the subsequent "1:1" comparisons.

What is being compared in such a scenario is merely scaling algorithms/techniques.  However, any differences even in those algorithms get hopelessly muddled due to the fact that the pixels that you (and possibly Yedlin) see are not actually a 1:1 match, and are thus additionally blended and interpolated.

Such muddling destroys any possibility of making a true resolution comparison.

 

 

On 4/2/2021 at 6:16 AM, kye said:

Regardless of if the image pipeline is "proper" (and I'll get to that comment in a bit), if downscaling an image to 2K then back up again isn't visible, the case that resolutions higher than 2K are perceptually differentiable is pretty weak even straight out of the gate.

No.  Such a notion is erroneous, as the comparison method is inherently faulty and the image "pipeline" Yedlin used is unfortunately leaky and septic (as I will show).

Again, if one is to conduct a proper resolution comparison, the pixels from the original camera image should never be blended:  an 8K captured image should be viewed on an 8K monitor; a 6K captured image on a 6K monitor; a 4K captured image on a 4K monitor; a 2K captured image on a 2K monitor; etc.

Scaling algorithms, interpolations and blending of the pixels corrupts the testing process.

 

 

On 3/31/2021 at 8:32 AM, tupp said:

2) Yedlin's claim here that the node editor viewer's pixels match 1-to-1 to the pixels on the screen of those watching the video is obviously false.  The pixels in his viewer window don't even match 1-to-1 the pixels of his rendered HD video.  This pixel mismatch is a critical flaw that invalidates almost all of his demonstrations that follow.

On 4/2/2021 at 6:16 AM, kye said:

Are you saying that the pixels in the viewer window in Part 2 don't match the pixels in Part 1?

I thought that I made it clear in my previous post.  However, I will paraphrase so that you might understand what is actually going on:  there is no possible way that you ( @kye ) can observe the comparisons with a 1:1 pixel match to the images shown in Yedlin's node editor viewer.

In addition, it is very possible that even Yedlin's own viewer when set at 100% is not actually showing a 1:1 pixel match to Yedlin.

Such a pixel mismatch is a fatal flaw when trying to compare resolutions.  Yedlin claims that he established a 1:1 match, because he knows that it is an important requirement for comparing resolutions, but he did not achieve a 1:1 pixel match.

So, almost everything about his comparisons is meaningless.

 

 

On 4/2/2021 at 6:16 AM, kye said:

Even if this was the case, it still doesn't invalidate comparisons like the one at 6:40 where there is very little difference between two image pipelines where one has significantly less resolution than the other and yet they appear perceptually very similar / identical. 

Again, Yedlin is not actually comparing resolutions in this instance.  He is merely comparing scaling algorithms and interpolations here and elsewhere in his video, scaling comparisons which are crippled by his failure to achieve a 1:1 pixel match in the video.

Yedlin could have verified a 1:1 pixel match by showing a pixel chart within his viewer when it was set to 100%.

Here are a couple of pixel charts:

[Attached images: pixel_chart.jpg and resolution.color-rc4s.gif]

If the charts are displayed at 1:1 pixels, you should easily observe with a magnifier that all of the black pixel rulings at integer positions (1, 2, 3, etc.) are cleanly defined, with no blending into adjacent pixels.  On the other hand, all of the black pixel rulings at non-integer positions (1.3, 1.6, 2.1, 2.4, 3.3, etc.) should show blending on their edges with a 1:1 match.
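Such a chart is easy to generate yourself.  Here is a sketch with Pillow (the dimensions are arbitrary, and this draws a simpler single-pixel ruling pattern than the charts above): alternating one-pixel black and white columns stay crisp only under a true 1:1 match; any rescaling or blending turns them grey or produces moiré.

```python
from PIL import Image

w, h = 256, 64
chart = Image.new("L", (w, h), 255)    # start with a white greyscale image
px = chart.load()
for x in range(w):
    if x % 2 == 0:                     # every other column black: 1px rulings
        for y in range(h):
            px[x, y] = 0
chart.save("one_to_one_chart.png")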

Without such a chart it is difficult to confirm that one pixel of the image coincides with one pixel in Yedlin's video.  Either Steve Yedlin, ASC was not savvy enough to include the fundamental verification of a pixel chart or he intentionally avoided verification of a pixel match.

However, Yedlin unwittingly provided something that proves his failure to achieve a 1:1 match.

At 15:03 in the video, Yedlin zooms way in to a frozen frame, and he draws a precise 4x4 pixel square over the image.  At the 16:11 mark, he zooms back out to the 100% setting in his viewer, showing the box at the alleged 1:1 pixels.

You can freeze the video at that point and see for yourself with a magnifier that the precise 4x4 pixel square has blended edges (unlike the clean-edged integer rulings on the pixel charts).  However, Yedlin claims there is a 1:1 pixel match!

I went even further than just using a magnifier.  I zoomed in to that "1:1" frame using two different methods, and then made a side-by-side comparison image:

[Attached image: pixel_box_comp.png]

All three images in the above comparison were taken from the actual video posted on Yedlin's site.

The far left image shows Yedlin's viewer fully zoomed-in when he draws the precise 4x4 pixel square.  The middle and right images are zoomed into Yedlin's viewer when it is set to 100% (with an allegedly 1:1 pixel match).

There is no denying the excessive blending and interpolation revealed when zooming in to the square or when magnifying one's display.  No matter how finely one can change the zoom amount in one's video player, one will never see a 1:1 pixel match with Yedlin's video, because the blending/interpolation is baked into the video itself.  Furthermore, the blending/interpolation is possibly introduced by Yedlin's node editor viewer when it is set to 100%.

Hence, Yedlin's claimed 1:1 pixel match is false.

By the way, in my comparison photo above, the middle image is from a tiff created by ffmpeg, to avoid further compression.  The right image was made by merely zooming into the frozen frame playing in the viewer of the Natron compositor.
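For anyone who wants to repeat the extraction, a sketch of that ffmpeg step (assuming ffmpeg is installed; the filename and timestamp are placeholders), grabbing a single frame to TIFF so no further lossy compression is added:

```python
import subprocess

# Seek to the moment in question and extract exactly one frame as a TIFF.
subprocess.run(
    ["ffmpeg", "-ss", "00:16:11", "-i", "yedlin_video.mp4",
     "-frames:v", "1", "frame.tiff"],
    check=True,
)
```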

 

 

On 3/31/2021 at 8:32 AM, tupp said:

At one point, Yedlin compared the difference between 6K and 2K by showing the magnified individual pixels.  This magnification revealed that the pixel size and pixel quantity did not change when he switched between resolutions, nor did the subject's size in the image.  Thus, he isn't actually comparing different resolutions in much of the video -- if anything, he is comparing scaling methods.

On 4/2/2021 at 6:16 AM, kye said:

He is comparing scaling methods - that's what he is talking about in this section of the video.  

Correct.  That is what I have stated repeatedly.

The thing is, he uses this same method in almost every comparison, so he is merely comparing scaling methods throughout the video -- he is not comparing actual resolution.

 

 

On 4/2/2021 at 6:16 AM, kye said:

This use of scaling algorithms may seem strange if you think that your pipeline is something like 4K camera -> 4K timeline -> 4K distribution, or the same in 2K, as you have mentioned in your first point, but this is false.  There are no such pipelines, and pipelines such as this are impossible.

What?  Of course there are such "pipelines."  One can shoot with a 4K camera, process the resulting 4K files in post, and then display in 4K, and the resolution never increases nor decreases at any point in the process.

 

 

On 4/2/2021 at 6:16 AM, kye said:

This is because the pixels in the camera aren't pixels at all, rather they are photosites that sense either Red or Green or Blue.  Whereas the pixels in your NLE and on your monitor or projector are actually Red and Green and Blue.

Are you trying to validate Yedlin's upscaling/downscaling based on semantics?

It is generally accepted that a photosite on a sensor is a single microscopic receptor often filtered with a single color.  A combination of adjacent receptors with red, green, blue (and sometimes clear) filters is often called a pixel or pixel group.  Likewise, an adjacent combination of RGB display cells is usually called a pixel.

However you choose to define the terms or to group the receptors/pixels, it will have little bearing on the principles that we are discussing.

 

 

On 4/2/2021 at 6:16 AM, kye said:

The 4K -> 4K -> 4K pipeline you mentioned is actually ~8M colour values -> ~24M colour values -> ~24M colour values.

Huh?  What do you mean here?  How do you get those color value numbers from 4K?  Are you saying that all cameras are under-sampling compared to image processing and displays?

Regardless of how the camera resolution is defined, one can create a camera image and then process that image in post and then display the image all without any increase nor decrease of the resolution at any step in the process.  In fact, such image processing with consistent resolution at each step is quite common.

 

 

On 4/2/2021 at 6:16 AM, kye said:

The process of taking an array of photosites that are only one colour and creating an image where every pixel has values for Red Green and Blue is called debayering, and it involves scaling.

It's called debayering... except when it isn't.  There is no debayering with:  an RGB striped sensor; an RGBW sensor; a monochrome sensor; a scanning sensor; a Foveon sensor; an X-Trans sensor; a three-chip camera; etc.  Additionally, raw files made with a Bayer matrix sensor are not debayered.

I see where this is going, and your argument is simply a matter of whether we agree to determine resolution by counting the separate red, green and blue cells or by counting the RGB pixel groups formed by combining those adjacent red, green and blue cells.

 

 

On 4/2/2021 at 6:16 AM, kye said:

This is a good link to see what is going on: https://pixinsight.com/doc/tools/Debayer/Debayer.html

From that article: "The Superpixel method is very straightforward. It takes four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The spatial resolution of the resulting RGB image is one quarter of the original CFA image, having half its width and half its height."

Also from the article: "The Bilinear interpolation method keeps the original resolution of the CFA image. As the CFA image contains only one color component per pixel, this method computes the two missing components using a simple bilinear interpolation from neighboring pixels."

As you can see, both of those methods talk about scaling.

Jeez Louise... did you just recently learn about debayering algorithms?

The conversion of adjacent photosites into a single RGB pixel group (Bayer or not) isn't considered "scaling" by most.  Even if you define it as such, that notion is irrelevant to our discussion -- we necessarily have to assume that a digital camera's resolution is given either by the output of its ADC or by the resolution of the camera files.

We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups.  Once we agree upon the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."

 

 

On 4/2/2021 at 6:16 AM, kye said:

Let me emphasise this point - any time you ever see a digital image taken with a digital camera sensor, you are seeing a rescaled image.

You probably shouldn't have emphasized that point, because you are incorrect, even if we use your definition of "scaling."

There are no adjacent red, green or blue photosites to combine ("scale") with digital Foveon sensors, digital three-chip cameras and digital monochrome sensors.

 

 

On 4/2/2021 at 6:16 AM, kye said:

Therefore Yedlin's use of scaling operates on an image that has already been scaled from the sensor data into an image with three times as many colour values as the sensor captured.

Please, I doubt that even Yedlin would go along with you on this line of reasoning.

We can determine the camera resolution merely from output of the ADC or from the camera files.  We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups. After we agree on the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."

Regardless of these semantics, Yedlin is just comparing scaling methods and not actual resolution.

 

 

On 4/2/2021 at 6:16 AM, kye said:

A quick google revealed that there are ~1500 IMAX theatre screens worldwide, and ~200,000 movie theatres worldwide.

That's less than 1%.  You could make the case that there are other non-IMAX large screens around the world, and that's fine, but when you take into account that over 200 Million TVs are sold worldwide each year, even the number of standard movie theatres becomes a drop in the ocean when you're talking about the screens that are actually used for watching movies or TVs worldwide.  Source: https://www.statista.com/statistics/461316/global-tv-unit-sales/

When one is trying to determine if higher resolutions can yield an increase in discernability or in perceptible image quality, then it is irrelevant to consider the statistics of common or uncommon setups.

The alleged commonality and feasibility of the setup is a topic that should be left for another discussion, and such notions should not influence nor interfere with any scientific testing, nor with the weight of any findings of the tests.

By dismissing greater viewing angles as uncommon, Yedlin reveals his bias.  Such dismissiveness of important variables corrupts his comparisons and conclusions, as he avoids testing larger viewing angles, and he merely concludes that larger screens are "special".

 

 

On 4/2/2021 at 6:16 AM, kye said:

If you can't tell the difference between 4K and 2K image pipelines at normal viewing distances and you are someone that posts on a camera forum about resolution then the vast majority of people watching a movie or a TV show definitely won't be able to tell the difference

Well, if I had a 4K monitor, I imagine that I could tell the difference between a 4K and a 2K image.

Not that it matters, but close viewing proximity is likely much more common than Yedlin realizes and more common than your web searching shows.  In addition to IMAX screens, movie theaters with seats close to the screen, amusement park displays and jumbotrons, many folks position their computer monitors close enough to see the individual pixels (at least when they lean forward).  If one can see individual pixels, a higher-resolution monitor of the same size can make those individual pixels "disappear."  So, higher resolution can yield a dramatic difference in discernability, even in common everyday scenarios.

Furthermore, a higher resolution monitor with the same size pixels as a lower resolution monitor gives a much more expansive viewing angle.  As many folks use multiple computer monitors side-by-side, the value of such a wide view is significant.

 

 

On 4/2/2021 at 6:16 AM, kye said:

Let's recap:

  • Yedlin's use of rescaling is applicable to digital images because every image from every digital camera sensor that basically anyone has ever seen has already been rescaled by the debayering process by the time you can look at it

Whatever.  You can claim that combining adjacent colored photosites into a single RGB pixel group is "scaling."  Nevertheless, the resolution need never change at any point in the "imaging pipeline."

Regardless, Yedlin is merely comparing scaling methods and not resolution.

 

 

On 4/2/2021 at 6:16 AM, kye said:

There is little to no perceptual difference when comparing a 4K image directly with a copy of that same image that has been downscaled to 2K and then upscaled to 4K again, even if you view it at 2X

Well, we can't really draw such a conclusion from Yedlin's test, considering all of the corruption from blending and interpolation caused by his failure to achieve a 1:1 pixel match.


 

On 4/2/2021 at 6:16 AM, kye said:

The test involved swapping back and forth between the two scenarios, whereas in the real world you are unlikely ever to see the comparison like that, or even at all

How is this notion relevant or a recap?

 

 

On 4/2/2021 at 6:16 AM, kye said:

The viewing angle of most movie theatres in the world isn't sufficient to reveal much difference between 2K and 4K, let alone the hundreds of millions of TVs sold every year which are likely to have a smaller viewing angle than normal theatres

Your statistics, and what you consider to be likely or common in regards to viewing angles/proximity, are irrelevant in determining the actual discernability differences between resolutions.  Also, you and Yedlin dismiss sitting in the very front row of a movie theater.

 

 

On 4/2/2021 at 6:16 AM, kye said:

These tests you mentioned above all involved starting with a 6K image from an Alexa 65, one of the highest quality imaging devices ever made for cinema

That impresses you?

Not sure how that point is relevant (nor how it is a recap), but please ask yourself:  if there is no difference in discernability between higher resolutions, why would Arri (the maker of some of the highest quality cinema cameras) offer a 6K camera?

 

 

On 4/2/2021 at 6:16 AM, kye said:

The remainder of the video discusses a myriad of factors that are likely to be present in real-life scenarios that further degrade image resolution, both in the camera and in the post-production pipeline

Yes, but such points don't shed light on any fundamental differences in the discernability of different resolutions.  Also how is this notion a recap?

 

 

On 4/2/2021 at 6:16 AM, kye said:

You haven't shown any evidence that you have watched past the 10:00 mark in the video

You are incorrect and this notion is not a recap.

Please note that my comments in an earlier post regarding Yedlin's dismissing wider viewing angles referred to and linked to a section at 0:55:27 in his 1:06:54 video.

 

 

On 4/2/2021 at 6:16 AM, kye said:

Did I miss anything?

You only missed all of the points that I made above in this post and earlier posts.

 

 

On 4/2/2021 at 6:30 AM, kye said:

and it's quite clear that the level of perceptual detail coming out of a camera is not that strongly related to the sensor resolution:

No, it's not clear.  Yedlin's "resolution" comparisons are corrupted by the fact that the pixels are not a 1:1 match and by the fact that he is actually comparing scaling methods -- not resolution.


@tupp

You raise a number of excellent points, but have missed the point of the test.

The overall context is that for a viewer, sitting at a common viewing distance, the difference won't be discernible.  This is why the comparison is about perceptual resolution and not actual resolution.

Yedlin claims that the video will appear 1:1, which I took to mean that it wouldn't be a different size, and you have taken to mean that every pixel on his computer will appear as a single pixel on your/my computer and will not have any impact on any of the other surrounding pixels.  

Obviously this is false, as you have shown with your blown-up screen captures.  This does not prove scaling though.  As you showed, two viewers rendered different outputs, and I tried it in QuickTime and VLC and got two different results again.

Problem number one is that the viewing software is altering the image (or at least all but one that we tried).

Problem number two is that we're both viewing the file from Yedlin's site, which is highly compressed.  In fact, it is an h264 stream of 2.32GB, something like 4Mbps.  The uncompressed file would have been 1192Mbps and in the order of 600GB, and not much smaller had he used lossless compression, so completely beyond any practical consideration.  Assuming I've done my maths correctly, that's a compression ratio of something like 250:1 - a ratio at which you couldn't even hope for a pixels-not-destroyed image.
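Working those numbers through (a sketch; the frame size, bit depth, frame rate and runtime are my assumptions, not measured values):

```python
# Rough check: 1920x1080, 24-bit RGB, 24 fps, ~67-minute runtime.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 24
runtime_s = 67 * 60

uncompressed_mbps = width * height * bits_per_pixel * fps / 1e6
uncompressed_gb = uncompressed_mbps / 8 * runtime_s / 1e3   # Mbps -> MB/s -> GB
ratio = uncompressed_gb / 2.32                              # vs the 2.32GB file

print(f"{uncompressed_mbps:.0f} Mbps, {uncompressed_gb:.0f} GB, {ratio:.0f}:1")
# -> roughly 1194 Mbps, 600 GB, 259:1 under these assumptions
```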

The reason I bring up these two points is that they will also be true for the consumption of any media by that viewer that the test is about.

There's no point arguing that his test is invalid because it doesn't apply to someone watching an uncompressed video stream on a screen significantly larger than the THX and SMPTE recommendations suggest, because, frankly, who gives a toss about that person?  I'm not that person, probably no-one else here is that person, and if you are that person, then good for you, but it's irrelevant.

You made a good point about 3CCD cameras, which I'd forgotten about, and even if you disagree about debayering and mismatched photosites and pixels, none of that matters if the image is going to be compressed for digital distribution and then decoded by any number of decoders that will each generate a different pixel-to-pixel readout.

Essentially you're arguing about how visible something is at the step before it gets put through a cheese-grater on its way to the people who actually watch the movies and pay for the whole thing.

In terms of why they make higher resolution cameras?  There are two main reasons I can see:

The first is that VFX folks want as much resolution as possible as it helps keep things perceptually flawless after they mess with them.  This is likely the primary reason that companies like ARRI are putting out higher resolution models.

The second reason is that electronics companies are companies, and in a capitalist society companies exist to make money.  To do that you need to make people keep buying things, which is done through planned obsolescence and incremental improvements, such as getting everyone to buy 4K TVs, and then 4K cameras to go with those 4K TVs.  This is likely the driver for all the camera manufacturers who also sell TVs, which is....  basically every consumer camera company.  Not a whole lot of people buying a GH5 are doing VFX with it, although cropping in post is one relatively common exception.

So, although I disagree with you on some of the technical aspects along the way, the fact that his test isn't "1:1" in whatever ways you think it should be is irrelevant, because people watch things after compression, after being decoded by unknown algorithms.

That's not even taking into account image-processing witchcraft like Smooth Motion, which completely invents entirely new frames that make up half of what the viewer actually sees, or uncalibrated displays, etc.  Yes, these things don't exist in theatres, but how many hours do you spend watching something in a theatre vs at home?  The average person spends almost all their time watching on a TV at home, so the theatre percentage is pretty small.


On 4/4/2021 at 7:18 PM, kye said:

You raise a number of excellent points,

Yes, and I have repeated those same points many times prior in this discussion.

 

On 4/4/2021 at 7:18 PM, kye said:

but have missed the point of the test.

The overall context is that for a viewer, sitting at a common viewing distance, the difference won't be discernible.

I was the one who linked the section in Yedlin's video that mentions viewing distances and viewing angles, and I repeatedly noted that he dismissed wider viewing angles and larger screens.

How do you figure that I missed Yedlin's point in that regard?

 

On 4/4/2021 at 7:18 PM, kye said:

This is why the comparison is about perceptual resolution and not actual resolution.

Not sure what you mean here, nor why anyone would ever need to test "actual" resolution.  The "actual" resolution is automatically the "actual" resolution, so there is no need to test it to determine whether it is "actual".

Regardless, I have used the term "discernability" frequently enough in this discussion so that even someone suffering from chronic reading comprehension deficit should realize that I am thoroughly aware that Yedlin is (supposedly) testing differences perceived from different resolutions.

Again, Yedlin is "actually" comparing scaling methods with a corrupt setup.

 

On 4/4/2021 at 7:18 PM, kye said:

Yedlin claims that the video will appear 1:1, which I took to mean that it wouldn't be a different size, and you have taken to mean that every pixel on his computer will appear as a single pixel on your/my computer and will not have any impact on any of the other surrounding pixels.  

Not sure how 1-to-1 pixels can be interpreted in any way other than that every single pixel of the tested resolution matches every single pixel on the screen of the person viewing the comparison.

Furthermore, Yedlin was specific and emphatic on this point:

Quote

"We are going to go 1-to-1 pixels or 'crop to fit.'  Now what that means is... within this 4K image, there is an area that is 2K or HD size pixels across, and what we're going to do is extract that and fill the frame with it.

So when we go 1-to-1 pixels on a 4K mastered image, we take a fully, professionally mastered 4K image... we DO NOT change the image structure AT ALL.  We take all of those true 4K pixels exactly as they were mastered, and we fit each one of them onto one of your HD screen pixels."

Emphasis is Yedlin's.

It's interesting that this basic, fundamental premise of Yedlin's comparison was misunderstood by the one who insisted that I must watch the entire video to understand it -- while I only had to watch a few minutes at the beginning of the video to realize that Yedlin had not achieved the required 1-to-1 pixel match.

It makes perfect sense that 1-to-1 pixels means that every single pixel of the test image matches every single pixel on one's display -- that condition is crucial for a resolution test to be valid.  If the pixels are blended or otherwise corrupted, then the resolution test (which automatically considers how someone perceives those pixels) is worthless.

 

On 4/4/2021 at 7:18 PM, kye said:

Obviously this is false, as you have shown from your blown up screen captures.

It's obvious, and, unfortunately, it ruins Yedlin's comparisons.

 

On 4/4/2021 at 7:18 PM, kye said:

 This does not prove scaling though.  As you showed, two viewers rendered different outputs, and I tried it in Quicktime and VLC and got two different results again.  

Problem number one is that the viewing software is altering the image (or at least all but one that we tried).

Well, I didn't compare the exact same frames, and it is unlikely that you viewed the same frames if you froze the video in QuickTime and VLC.

When I play the video of Yedlin's frozen frame while zoomed-in on the Natron viewer, there are noticeable dancing artifacts that momentarily change the pixel colors a bit.  However, since they repeat an identical pattern every time that I play the same moment in the video, it is likely those artifacts are inherent in Yedlin's render.

Furthermore, blending is indicated by the fact that the general color and shade of the square's pixels, and of the pixels immediately adjacent, have less contrast with each other.

In addition, the mottled pixel pattern of the square and its nearby pixels in the ffmpeg image generally matches the pixel pattern of those in the Natron image, while both images are unlike the drawn square in Yedlin's zoomed-in viewer, which is very smooth and precise.

When viewing the square with a magnifier on the display, it certainly looks like its edges are blended -- just like the non-integer rulings on the pixel chart in my previous post.

I suspect that Yedlin's node editor viewer is blending the pixels, even though it is set to 100%.

Again, Yedlin easily could have provided verification of a 1-to-1 pixel match by showing a pixel chart in his viewer, but he didn't.

 

On 4/4/2021 at 7:18 PM, kye said:

Problem number two is that we're both viewing the file from Yedlin's site, which is highly compressed.  In fact, it is a h264 stream, and 2.32Gb, something like 4Mbps.

... and whose fault is that?

 

On 4/4/2021 at 7:18 PM, kye said:

The uncompressed file would have been 1192Mbps and in the order of 600Gb, and not much smaller had he used a lossless compression, so completely beyond any practical consideration.  Assuming I've done my maths correctly, that's a compression ratio of something like 250:1 - a ratio that you couldn't even hope would yield a pixel-not-destroyed image.

Then perhaps Yedlin shouldn't misinform his easily impressionable followers by making the false claim that he has achieved a 1-to-1 pixel match on his followers' displays.

On the other hand, Yedlin could have additionally made short, uncompressed clips of such comparisons, or he could have provided uncompressed still frames -- but he did neither.

 

On 4/4/2021 at 7:18 PM, kye said:

The reason I bring up these two points is that they will also be true for the consumption of any media by that viewer that the test is about.

It really is not that difficult to create uncompressed short clips or stills with a 1-to-1 pixel match, just as it was done in the pixel charts above.

 

On 4/4/2021 at 7:18 PM, kye said:

There's no point arguing that his test is invalid as it doesn't apply to someone watching an uncompressed video stream on a screen that is significantly larger than the TXH and SMPTE recommendations suggest, because, frankly, who gives a toss about that person?

I am arguing that his resolution test is invalid primarily because he is actually comparing scaling methods and even that comparison is rendered invalid by the fact that he did not show a 1-to-1 pixel match.

In regards to Yedlin's and your dismissal of wider viewing angles because they are not common (nor recommended by SMPTE 🙄): again, such a notion reveals bias and should not be allowed to influence any empirical test designed to determine discernability/quality differences between resolutions.

A larger viewing angle is an important, valuable variable that cannot be ignored -- that's why Yedlin mentioned it (but immediately dismissed it as "not common").

 

On 4/4/2021 at 7:18 PM, kye said:

I'm not that person, probably no-one else here is that person, and if you are that person, then good for you, but it's irrelevant.

Not that it matters, but there are many folks with multi-monitor setups that yield wider viewing angles than what you and Yedlin tout as "common."

 

Also, again, there are a lot of folks that can see the individual pixels on their monitors, and increasing the resolution can render individual pixels not discernible.

 

On 4/4/2021 at 7:18 PM, kye said:

You made a good point about 3CCD cameras, which I'd forgotten about,

Have you also forgotten about RGB striped sensors, RGBW sensors, Foveon sensors, monochrome sensors, X-Trans sensors and linear scanning sensors?  None of those sensors have a Bayer matrix.

 

On 4/4/2021 at 7:18 PM, kye said:

and even if you disagree about debayering and mismatched photosites and pixels, none of that stuff matters if the image is going to get compressed for digital distribution and then decoded by any number of decoders that will generate a different pixel-to-pixel readout.

It matters when trying to make a valid comparison of different resolutions.

Also, achieving a 1-to-1 pixel match and controlling other variables is not that difficult, but one must first fundamentally understand the subject that one is testing.

 

On 4/4/2021 at 7:18 PM, kye said:

Essentially you're arguing about how visible something is at the step before it gets put through a cheese-grater on its way to the people who actually watch the movies and pay for the whole thing.

Nope.  I am arguing that Yedlin's ill-conceived resolution comparison with all of its wild, uncontrolled variables is not valid.

 

On 4/4/2021 at 7:18 PM, kye said:

In terms of why they make higher resolution cameras?  There are two main reasons I can see:

The first is that VFX folks want as much resolution as possible as it helps keep things perceptually flawless after they mess with them.  This is likely the primary reason that companies like ARRI are putting out higher resolution models.

That's not a bad reason to argue for higher resolution, but I doubt that it is the primary reason that Arri made a 6K camera.

 

On 4/4/2021 at 7:18 PM, kye said:

The second reason is that electronics companies are companies, and in a capitalist society, companies exist to make money, and to do that you need to make people keep buying things, which is done through planned obsolescence and incremental improvements, such as getting everyone to buy 4K TVs, and then 4K cameras to go with those 4K TVs.  This is likely the driver of all the camera manufacturers who also sell TVs, which is....  basically every consumer camera company.

I can't wait to get a Nikon, Canon, Olympus or Fuji TV!

 

On 4/4/2021 at 7:18 PM, kye said:

Not a whole lot of people buying a GH5 are doing VFX with it, although cropping in post is one relatively common exception to that.

Cropping is a valid reason for higher resolution (but I abhor the practice).

My guess is that Arri decided to make a 6K camera because the technology exists, because producers were already spec'ing Arri's higher-res competition and because they wanted to add another reason to attract shooters to their larger format Alexa.

 

On 4/4/2021 at 7:18 PM, kye said:

So, although I disagree with you on some of the technical aspects along the way, the fact that his test isn't "1:1" in whatever ways you think it should be is irrelevant, because people watch things after compression, after being decoded by unknown algorithms.

Perhaps you should confront Yedlin with that notion, because it is with Yedlin that you are now arguing.

As linked and quoted above, Yedlin thought that it was relevant to establish a 1-to-1 pixel match, and to provide a lengthy explanation in that regard.

Furthermore, at the 04:15 mark in the same video, Yedlin adds that a 1-to-1 pixel match:

Quote

"would be a more 'rigorous' way to view the image, because we're looking at TRUE 4K pixels..."

Emphasis is Yedlin's.

 

On 4/4/2021 at 7:18 PM, kye said:

That's not even taking into account image-processing witchcraft like Smooth Motion, which completely invents entirely new frames that make up half of what the viewer actually sees, or uncalibrated displays, etc.  Yes, these things don't exist in theatres, but how many hours do you spend watching something in a theatre vs at home?  The average person spends almost all their time watching on a TV at home, so the theatre percentage is pretty small.

The point of a resolution discernability comparison (and other empirical tests) is to eliminate/control all variables except for the ones that are being compared.

The calibration and image processing of home TVs and/or theater projectors is a whole other topic of discussion that needn't (and shouldn't) influence the testing of the single independent variable of differing resolution.

If you don't think that it matters to establish a 1-to-1 pixel match in such a test, then please take up that issue with Yedlin -- who evidently disagrees with you!


2 hours ago, tupp said:

Yes, and I have repeated those same points many times prior in this discussion.

 

I was the one who linked the section in Yedlin's video that mentions viewing distances and viewing angles, and I repeatedly noted that he dismissed wider viewing angles and larger screens.

How do you figure that I missed Yedlin's point in that regard?

 

Not sure what you mean here nor why anyone would ever need to test "actual" resolution.  The "actual" resolution is automatically the "actual" resolution, so there is no need to test it to determine if it is "actual".

Regardless, I have used the term "discernability" frequently enough in this discussion so that even someone suffering from chronic reading comprehension deficit should realize that I am thoroughly aware that Yedlin is (supposedly) testing differences perceived from different resolutions.

Again, Yedlin is "actually" comparing scaling methods with a corrupt setup.

 

Not sure how 1-to-1 pixels can be interpreted in any way other than every single pixel of the tested resolution matches every single pixel on the screen of the person viewing the comparison.

Furthermore, Yedlin was specific and emphatic on this point:

Emphasis is Yedlin's.

It's interesting that this basic, fundamental premise of Yedlin's comparison was misunderstood by the one who insisted that I must watch the entire video to understand it -- while I only had to watch a few minutes at the beginning of the video to realize that Yedlin had not achieved the required 1-to-1 pixel match.

It makes perfect sense that 1-to-1 pixels means that every single pixel on the test image matches every single picture on one's display -- that condition is crucial for a resolution test to be valid.  If the pixels are blended or otherwise corrupted, then the resolution test (which automatically considers how someone perceives those pixels) is worthless.

 

It's obvious, and, unfortunately, it ruins Yedlin's comparisons.

 

Well, I didn't compare the exact same frames, and it is unlikely that you viewed the same frames if you froze the video in Quicktime and VLC.

When I play the video of Yedlin's frozen frame while zoomed-in on the Natron viewer, there are noticeable dancing artifacts that momentarily change the pixel colors a bit.  However, since they repeat an identical pattern every time that I play the same moment in the video, it is likely those artifacts are inherent in Yedlin's render.

Furthermore, blending is indicated by the fact that the square's pixels and those immediately adjacent to them have less contrast with each other in overall color and shade.

In addition, the mottled pixel pattern of the square and its nearby pixels in the ffmpeg image generally matches the pixel pattern of those in the Natron image, while both are unlike the drawn square in Yedlin's zoomed-in viewer, which is very smooth and precise.

When viewing the square with a magnifier on the display, it certainly looks like its edges are blended -- just like the non-integer rulings on the pixel chart in my previous post.
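For what it's worth, this sort of edge inspection doesn't need a magnifier; it can be done numerically.  Here is a minimal Python sketch (the filename and coordinates are hypothetical stand-ins for an actual extracted frame and the square's location):

    import numpy as np
    from PIL import Image

    # Hypothetical filename and coordinates: load an extracted frame as
    # greyscale and read a short run of pixels crossing the square's edge.
    frame = np.asarray(Image.open("frame.png").convert("L"), dtype=int)
    row, c0, c1 = 100, 190, 210
    run = frame[row, c0:c1]
    print(run)  # eyeball the raw values; compression noise shows up here too

    # A hard, unblended edge steps between values at a single pixel;
    # scaling or filtering smears the transition across several pixels.
    steps = np.flatnonzero(np.diff(run) != 0)
    spread = int(steps.max() - steps.min() + 1) if steps.size else 0
    print("edge transition spread (pixels):", spread)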

I suspect that Yedlin's node editor viewer is blending the pixels, even though it is set to 100%.

Again, Yedlin easily could have provided verification of a 1-to-1 pixel match by showing a pixel chart in his viewer, but he didn't.
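Generating such a chart takes only a few lines.  A minimal sketch, assuming an HD target of 1920x1080 and an output name of chart.png: it draws alternating one-pixel black and white columns, which stay crisp when displayed truly 1-to-1 but give themselves away instantly if anything in the pipeline rescales or filters them.

    import numpy as np
    from PIL import Image

    W, H = 1920, 1080                 # assumed HD target resolution
    chart = np.zeros((H, W), dtype=np.uint8)
    chart[:, ::2] = 255               # alternating one-pixel black/white columns
    Image.fromarray(chart).save("chart.png")
    # Viewed truly 1-to-1, this is a crisp line pattern; if the viewer or
    # any step of the pipeline rescales or filters it, the lines smear to
    # grey or alias into moiré bands, which is immediately obvious.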

 

... and whose fault is that?

 

Then perhaps Yedlin shouldn't misinform his easily impressionable followers by making the false claim that he has achieved a 1-to-1 pixel match on his followers' displays.

On the other hand, Yedlin could have additionally made short, uncompressed clips of such comparisons, or he could have provided uncompressed still frames -- but he did neither.

 

It really is not that difficult to create uncompressed short clips or stills with a 1-to-1 pixel match, just as it was done in the pixel charts above.
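As a rough illustration of how little effort that takes, here is a minimal Python sketch (the filenames and the 1920x1080 target are assumptions); PNG is a lossless format, so every pixel value survives intact:

    from PIL import Image

    # Hypothetical filenames; PNG is lossless, so a still saved this way
    # preserves every pixel value exactly.
    frame = Image.open("source_frame.tiff")
    assert frame.size == (1920, 1080), "frame is not at the target resolution"
    frame.save("comparison_frame.png")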

 

I am arguing that his resolution test is invalid primarily because he is actually comparing scaling methods and even that comparison is rendered invalid by the fact that he did not show a 1-to-1 pixel match.

In regard to your and Yedlin's dismissal of wider viewing angles because they are not common (nor recommended by SMPTE 🙄): again, such a notion reveals bias and has no place in an empirical test designed to determine discernibility/quality differences between different resolutions.

A larger viewing angle is an important, valuable variable that cannot be ignored -- that's why Yedlin mentioned it (but immediately dismissed it as "not common").

 

Not that it matters, but there are many folks with multi-monitor setups that yield wider viewing angles than what you and Yedlin tout as "common."

 

Also, again, there are a lot of folks that can see the individual pixels on their monitors, and increasing the resolution can render individual pixels not discernible.
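The geometry behind that is simple to check.  A minimal sketch, assuming a 27-inch 16:9 monitor viewed from 30 inches (hypothetical figures) and the common rule of thumb that normal visual acuity resolves roughly one arcminute:

    import math

    def pixel_arcmin(diag_in, h_pixels, aspect=(16, 9), distance_in=30.0):
        """Angular size of one pixel, in arcminutes, at a given viewing distance."""
        width_in = diag_in * aspect[0] / math.hypot(*aspect)   # screen width
        pitch_in = width_in / h_pixels                         # pixel pitch
        return math.degrees(2 * math.atan(pitch_in / (2 * distance_in))) * 60

    # 27" 16:9 monitor viewed from 30" (assumed figures):
    for h_pixels in (1920, 3840):
        print(h_pixels, "px wide:", round(pixel_arcmin(27, h_pixels), 2), "arcmin per pixel")
    # Pixels larger than ~1 arcminute are individually discernible;
    # doubling the horizontal resolution halves a pixel's angular size.

On those assumptions, an HD pixel subtends roughly 1.4 arcminutes while a UHD pixel subtends roughly 0.7 -- exactly the visible-to-invisible transition described above.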

 

Have you also forgotten about RGB striped sensors, RGBW sensors, Foveon sensors, monochrome sensors, X-Trans sensors and linear scanning sensors?  None of those sensors have a Bayer matrix.

 

It matters when trying to make a valid comparison of different resolutions.

Also, achieving a 1-to-1 pixel match and controlling other variables is not that difficult, but one must first fundamentally understand the subject that one is testing.

 

Nope.  I am arguing that Yedlin's ill-conceived resolution comparison with all of its wild, uncontrolled variables is not valid.

 

That's not a bad reason to argue for higher resolution, but I doubt that it is the primary reason that Arri made a 6K camera.

 

I can't wait to get a Nikon, Canon, Olympus or Fuji TV!

 

Cropping is a valid reason for higher resolution (but I abhor the practice).

My guess is that Arri decided to make a 6K camera because the technology exists, because producers were already spec'ing Arri's higher-res competition and because they wanted to add another reason to attract shooters to their larger format Alexa.

 

Perhaps you should confront Yedlin with that notion, because it is with Yedlin that you are now arguing.

As linked and quoted above, Yedlin thought that it was relevant to establish a 1-to-1 pixel match, and to provide a lengthy explanation in that regard.

Furthermore, at the 04:15 mark in the same video, Yedlin adds that a 1-to-1 pixel match:

Emphasis is Yedlin's.

 

The point of a resolution discernibility comparison (and other empirical tests) is to eliminate/control all variables except for the ones that are being compared.

The calibration and image processing of home TVs and/or theater projectors is a whole other topic of discussion that needn't (and shouldn't) influence the testing of the single independent variable of differing resolution.

If you don't think that it matters to establish a 1-to-1 pixel match in such a test, then please take up that issue with Yedlin -- who evidently disagrees with you!

Why do you care if the test only applies to the 99.9999% of content viewed by people worldwide that has scaling and compression?


53 minutes ago, kye said:

Why do you care if the test only applies to the 99.9999% of content viewed by people worldwide that has scaling and compression?

Yedlin's resolution comparison doesn't apply to anything, because it is not valid.   He is not even testing resolution.

 

Perhaps you should try to convince Yedlin that such demonstrations don't require 1-to-1 pixel matches, because you think that 99.9999% of content has scaling and compression, and, somehow, that validates resolution tests with blended pixels and other wild, uncontrolled variables.


3 hours ago, tupp said:

Yedlin's resolution comparison doesn't apply to anything, because it is not valid.   He is not even testing resolution.

 

Perhaps you should try to convince Yedlin that such demonstrations don't require 1-to-1 pixel matches, because you think that 99.9999% of content has scaling and compression, and, somehow, that validates resolution tests with blended pixels and other wild, uncontrolled variables.

He took an image from a highly respected cinema camera, put it onto a 4K timeline, then exported that timeline to a 1080p compressed file, and then transmitted that over the internet to viewers.

Yeah, that doesn't apply to anything else that ever happens, you're totally right, no-one has ever done that before and no-one will ever do that again.....    🙄🙄🙄


31 minutes ago, kye said:

He took an image from a highly respected cinema camera,

So what?  Again, does that impress you?

Furthermore, you suggested above that Yedlin's test applies to 99.9999% of content -- do you think that 99.9999% of content is shot with an Alexa65?

 

 

33 minutes ago, kye said:

put it onto a 4K timeline, then exported that timeline to a 1080p compressed file, and then transmitted that over the internet to viewers.

Well, you left out a few steps of upscale/downscale acrobatics that negate the comparison as a resolution test.

Most importantly, you forgot to mention that he emphatically stated that he had achieved a 1-to-1 pixel match on anyone's HD screen, but that he actually failed to do so, thus, invalidating all of his resolution demos.

 

 

39 minutes ago, kye said:

Yeah, that doesn't apply to anything else that ever happens, you're totally right, no-one has ever done that before and no-one will ever do that again.....    🙄🙄🙄

You are obviously an expert on empirical testing/comparisons...


3 hours ago, tupp said:

Furthermore, you suggested above that Yedlin's test applies to 99.9999% of content -- do you think that 99.9999% of content is shot with an Alexa65?

His test applies to the situations where there is image scaling and compression involved, which is basically every piece of content anyone consumes.

3 hours ago, tupp said:

Well, you left out a few steps of upscale/downscale acrobatics that negate the comparison as a resolution test.

Most importantly, you forgot to mention that he emphatically stated that he had achieved a 1-to-1 pixel match on anyone's HD screen, but that he actually failed to do so, thus, invalidating all of his resolution demos.

If you're going to throw away an entire analysis based on a single point, then have a think about this:

1<0 and the sky is blue.

Uh oh, now that I've said that 1<0, which it clearly isn't, the sky can't be blue, because everything I said must now be logically wrong and cannot be true!


16 hours ago, tupp said:

Yedlin's resolution comparison doesn't apply to anything, because it is not valid.   He is not even testing resolution.

Do you know the word humility? Yedlin's not just any old dude on the internet... the guy's an industry insider with butt-loads of films to back it up. I think he might know something on the topic of resolution.


On 4/8/2021 at 2:00 AM, kye said:

His test applies to the situations where there is image scaling and compression involved, which is basically every piece of content anyone consumes.

No it doesn't.

Different image scaling methods applied to videos made by different encoding sources yield different results.  The number of such different possible combinations is further compounded when adding compression.

We can't say that Yedlin's combination of camera, node editor, node editor viewer, peculiar 1920x1280 resolution and NLE render method will match anyone else's.
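The first half of that is easy to demonstrate.  A minimal sketch comparing a few of Pillow's resampling filters on the same source (image.png is a stand-in for any test image); the only point is that different scalers produce measurably different pixels from identical input:

    import numpy as np
    from PIL import Image

    src = Image.open("image.png").convert("L")     # any test image
    target = (src.width // 2, src.height // 2)     # arbitrary downscale

    filters = {"nearest": Image.NEAREST, "bilinear": Image.BILINEAR,
               "bicubic": Image.BICUBIC, "lanczos": Image.LANCZOS}
    results = {name: np.asarray(src.resize(target, f), dtype=int)
               for name, f in filters.items()}

    base = results["nearest"]
    for name, arr in results.items():
        print(f"{name:8s} mean |difference| vs nearest: {np.abs(arr - base).mean():.2f}")
    # Each filter yields different pixel values from the same source, so a
    # comparison that mixes scalers is not measuring resolution alone.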

 

On 4/7/2021 at 10:07 PM, tupp said:

Well, you left out a few steps of upscale/downscale acrobatics that negate the comparison as a resolution test.

Most importantly, you forgot to mention that he emphatically stated that he had achieved a 1-to-1 pixel match on anyone's HD screen, but that he actually failed to do so, thus, invalidating all of his resolution demos.

On 4/8/2021 at 2:00 AM, kye said:

If you're going to throw away an entire analysis based on a single point,

... Two points... two very significant points.

Even though you quoted me stating both of them, you somehow forgot one of those points in your immediate reply.

On the other hand, it takes only a single dismissed point to bring catastrophe to an analysis or endeavor, even if that analysis/endeavor involves development by thousands of people.

For example, there was an engineer named Roger who tested one part of a highly complex machine that many thousands of people worked on.  Roger discovered a problem with that single part that would cause the entire machine to malfunction.

Roger urged his superiors to halt the use of the machine, but they dismissed his warning -- they were not willing "to throw away their entire analysis and planned timetable based on a single point."

Here is the result of the decision to dismiss Roger's seemingly trivial objection.

Here is Roger's story.

 

On 4/8/2021 at 2:00 AM, kye said:

then have a think about this:

1<0 and the sky is blue.

uh oh, now I've said that 1<0, which clearly it isn't, then the sky can't be blue because everything I said must now logically be wrong and cannot be true!

Is it a fact that the sky is blue?:

[image: golden_sunset_4k-wide.jpg]

 

Regardless, a single, simple fact can debunk an entire world of analysis.

To those who believe that the Earth is flat, here is what was known as the "black swan photo," which shows two oil platforms off the coast of California:

[image: Screenshot_20200311_200146-540x247.png]

 

From the given camera position, the bottom of the most distant platform -- "Habitat" --  should be obscured by the curvature of the Earth.  So, since the photo shows Habitat's supports meeting the water line, flat earthers conclude that there is no curvature of the Earth, and, thus, the Earth must be flat.

However, the point was made that there is excessive distortion from atmospheric refraction, as evidenced by Habitat's crooked cranes.  Someone also provided a photo of the platforms from near the same camera position with no refractive muddling:

[image: bs-1-over-the-horizon.png]

This photo without distortion shows that the supports of the distant Habitat platform are actually obscured by the Earth's curvature, which "throws away the entire analysis" made by those who assert that the Earth is flat.

Here is a video of 200 "proofs" that the Earth is flat, but all 200 of those proofs are rendered invalid -- in light of the single debunking of the "black swan" photo.

Likewise, Yedlin's test is muddled by pixel blending and by a faulty testing method.  So, we cannot conclude much about actual resolution differences from Yedlin's comparison.

By the way, you have to watch the entire video of 200 flat earth proofs (2 hours) "to understand it."     Enjoy!

 

 

On 4/8/2021 at 10:25 AM, John Matthews said:

Do you know the word humility?

I certainly do.  Please explain: what does humility have to do with fact?

Incidentally, I did say that Yedlin was a good shooter, but that doesn't mean that he is a good scientist.

 

On 4/8/2021 at 10:25 AM, John Matthews said:

Yedlin's not just any old dude on the internet... the guy's an industry insider with butt-loads of films to back it up.

It doesn't matter if Yedlin is the King of Siam, the Pope or an "inside straight" -- his method is faulty, as is his test setup.

 

On 4/8/2021 at 10:25 AM, John Matthews said:

I think he might know something on the topic of resolution.

Yedlin failed to meet his own required criteria for his test setup, which he laid out emphatically and at length in the beginning of his video.

Furthermore, his method of downscaling (and then upscaling) and then showing the results only at a single resolution (4K) does not truly compare different resolutions.

 


5 hours ago, tupp said:

Different image scaling methods applied to videos made by different encoding sources yield different results.  The number of such different possible combinations is further compounded when adding compression.

We can't say that Yedlin's combination of camera, node editor, node editor viewer, peculiar 1920x1280 resolution and NLE render method will match anyone else's.

Based on that, there is no exact way to test resolutions that will apply to any situation beyond the specific combination being tested.

So, let's take that as true, and use a non-exact method based upon a typical image pipeline.

I propose comparing the image from a 6K cinema camera put onto a 4K timeline vs a 2K timeline, and, to be sure, let's zoom in to 200% so we can see the differences a little more clearly than they would normally be visible.

This is what Yedlin did.
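For what it's worth, that pipeline is easy to replicate and judge for oneself.  A rough sketch of the comparison as described -- not of Yedlin's actual tools (the filenames, the Lanczos filter and the crop region are all assumptions):

    from PIL import Image

    src = Image.open("6k_frame.png")    # hypothetical 6K source frame

    def through_timeline(img, timeline, display=(3840, 2160)):
        """Downscale to a timeline resolution, then rescale to the display resolution."""
        return img.resize(timeline, Image.LANCZOS).resize(display, Image.LANCZOS)

    via_4k = through_timeline(src, (3840, 2160))
    via_hd = through_timeline(src, (1920, 1080))

    # Crop the same detail from each and blow it up 200% with nearest
    # neighbour, so the blow-up itself adds no further smoothing.
    box = (1000, 600, 1480, 870)        # arbitrary detail region
    for name, img in (("via_4k", via_4k), ("via_hd", via_hd)):
        crop = img.crop(box)
        crop.resize((crop.width * 2, crop.height * 2), Image.NEAREST).save(name + "_200pct.png")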

5 hours ago, tupp said:

On the other hand, it takes only a single dismissed point to bring catastrophe to an analysis or endeavor, even if that analysis/endeavor involves development by thousands of people.

A single wrong point invalidates an analysis if, and only if, the subsequent analysis is dependent on that point.

Yedlin's was not.

5 hours ago, tupp said:

Yedlin failed to meet his own required criteria for his test setup, which he laid out emphatically and at length in the beginning of his video.

No he didn't.  You have failed to understand his first point, and, subsequent to that, you have failed to realise that his first point isn't actually critical to the remainder of his analysis.

You have stated that there is scaling because the blown up versions didn't match, which isn't valid because:

  • different image rendering algorithms can cause them to not match, therefore you don't actually know for sure that they don't match (it could simply be that your viewer didn't match but his did)
  • you assumed that there was scaling involved because the grey box had impacted pixels surrounding it, which could also have been caused by compression, so this doesn't prove scaling (a quick check of this is sketched after this list)
  • and actually neither of those matter anyway, because even if there was scaling, basically every image we see has been scaled and compressed
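The second bullet is easy to demonstrate with no scaling involved at all.  A minimal sketch, with JPEG standing in for the DCT-based intra coding that codecs like H.264 use (the quality setting is arbitrary): compress a hard-edged grey square at its native resolution, and the pixels around the edge come back changed.

    import io
    import numpy as np
    from PIL import Image

    # A flat white background with a hard-edged grey square; no scaling anywhere.
    pixels = np.full((256, 256), 255, dtype=np.uint8)
    pixels[100:150, 100:150] = 128
    original = Image.fromarray(pixels)

    # Lossy round-trip at the same resolution.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=75)
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=int)

    diff = np.abs(decoded - pixels.astype(int))
    print("max per-pixel change:", int(diff.max()))
    print("pixels altered:", int((diff > 0).sum()))
    # The altered pixels cluster around the square's edges: compression
    # alone perturbs the pixels surrounding a hard edge, without any scaling.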

Your "problem" is that you misinterpreted a point, but even if you hadn't misinterpreted it could have been caused by other factors, and even if it wasn't, aren't relevant to the end result anyway.

