
Camera resolutions by cinematographer Steve Yedlin


John Matthews


4 hours ago, kye said:

Based on that, there is no exact way to test resolutions that will apply to any situation beyond the specific combination being tested.

There is a way to actually test resolution, which I have mentioned more than once before in this thread: test an 8K image on an 8K display, a 6K image on a 6K display, a 4K image on a 4K display, and so on.

That scenario is as exact as we can get.  That setup is actually testing true resolution.

 

5 hours ago, kye said:

So, let's take that as true, and do a non-exact way based upon a typical image pipeline.

I propose comparing the image from a 6K cinema camera being put onto a 4K timeline vs a 2K timeline, and to be sure, let's zoom in to 200% so we can see the differences a little more than they would normally be visible.

Huh?  Not sure how that scenario logically follows your notion that "there is no exact way to test resolutions," even if you ignore the simple, straightforward resolution testing method that I have previously suggested in this thread.

What you propose is not actually testing resolution.

Furthermore, it is likely that the "typical image pipeline" for video employs the same resolution throughout the process.

In addition, if you "zoom in," you have to make sure that you achieve a 1-to-1 pixel match with no pixel blending (as stressed by Yedlin), if we are to be sure that we are truly comparing the actual pixels.  However, zooming in sacrifices color depth, which could skew the comparison.

 

5 hours ago, kye said:

This is what Yedlin did.

You don't say?

He also laid out the required criteria of a 1-to-1 pixel match, which he did not achieve and which you dismiss.

 

5 hours ago, kye said:

A single wrong point invalidates an analysis if, and only if, the subsequent analysis is dependent on that point.

Yedlin's was not.

TWO points:

  1. Yedlin's downscaling/upscaling method doesn't really test resolution, which invalidates the method as a "resolution test";
  2. Yedlin's failure to meet his own required 1-to-1 pixel match criteria invalidates the analysis.

 

11 hours ago, tupp said:

Yedlin failed to meet his own required criteria for his test setup, which he laid out emphatically and at length in the beginning of his video.

5 hours ago, kye said:

No he didn't.  You have failed to understand his first point, and then subsequently to that, you have failed to realise that his first point isn't actually critical to the remainder of his analysis.

Yedlin took the very first 4 minutes and 23 seconds of his video to emphasize the 1-to-1 pixel match and its importance.  Such a match is required for us to view "true, 4K pixels," as stated by Yedlin.  If we can't see the true pixels, we can't conclude much about the discernibility of resolutions.  Without that 1-to-1 pixel match we might as well just view the monitor through a 1/2 Promist filter.

Yedlin didn't achieve that 1-to-1 pixel match, as you have already admitted.

 

5 hours ago, kye said:

You have stated that there is scaling because the blown up versions didn't match,

No.  I stated that there is some form of blending and/or interpolation happening.  I suspect that the blending/interpolation occurs within the viewer of Yedlin's node editor.

He should have made straight renders for us to view, and included a pixel chart at the beginning of those renders.

 

5 hours ago, kye said:

different image rendering algorithms can cause them to not match, therefore you don't actually know for sure that they don't match (it could simply be that your viewer didn't match but his did)

Yes, but, again, the problem could begin with Yedlin's node editor viewer.

 

6 hours ago, kye said:

you assumed that there was scaling involved because the grey box had impacted pixels surrounding it, which could also have been caused by compression, so this doesn't prove scaling

I did not describe it as "scaling."  As I have said, it appears to be some form of blending/interpolation.  Compression could contribute to the problem, but it is not certain that it is doing so in this instance.

 

6 hours ago, kye said:

and actually neither of those matter anyway, because even if there was scaling, basically every image we see has been scaled and compressed

Not that I claimed that it was "scaling," but whether or not we commonly see images "scaled" has no bearing on conducting a true resolution test.

Compare differences in true resolutions first, and discuss elsewhere the effects of blending, interpolation, compression and scaling.

 

6 hours ago, kye said:

Your "problem" is that you misinterpreted a point, but even if you hadn't misinterpreted it could have been caused by other factors, and even if it wasn't, aren't relevant to the end result anyway.

It is plain what Yedlin meant by saying right up front in his video that a rigorous resolution test uses a 1-to-1 pixel match to show "true 4K pixels."

Evidently, it is you who has misinterpreted Yedlin.  Again, if you think that a 1-to-1 pixel match is not important to a resolution test, you really should confront Yedlin with that notion.


On 4/4/2021 at 10:01 PM, tupp said:

The far left image shows Yedlin's viewer fully zoomed-in when he draws the precise 4x4 pixel square.  The middle and right images are zoomed into Yedlin's viewer when it is set to 100% (with an allegedly 1:1 pixel match).

There is no denying the excessive blending and interpolation revealed when zooming in on the square or when magnifying one's display.  No matter how finely one can change the zoom amount in one's video player, one will never be able to see a 1:1 pixel match with Yedlin's video, because the blending/interpolation is inherent in the video.  Furthermore, the blending/interpolation is possibly introduced by Yedlin's node editor viewer when it is set to 100%.

Hence, Yedlin's claimed 1:1 pixel match is false.

By the way, in my comparison photo above, the middle image is from a TIFF created by ffmpeg, to avoid further compression.  The right image was made by merely zooming into the frozen frame playing on the viewer of the Natron compositor.

The attached image shows a 1px b/w grid, generated in AE in an FHD sequence, exported to ProRes, upscaled 2x to UHD with ffmpeg (the "-vf scale=3840:2160:flags=neighbor" option), imported back into AE, overlaid over the original in the same composition, magnified to 200% in the viewer, screen-grabbed, and enlarged another 2x in PS with proper scaling settings. And no subsampling is present, as you can see. So it is entirely possible to upscale an image, or show it in 1:1 view, without modifying the original pixels - just avoid fractional enlargement ratios and complex scaling. Not sure about Natron, though - never used it. Just don't forget to "open image in new tab" and view at original scale.
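A minimal way to sanity-check this claim, assuming Python with Pillow and NumPy and a hypothetical file name for the grid, is to upscale by an integer ratio with nearest-neighbour and assert that every source pixel survives unchanged:

```python
# Sketch: confirm that a 2x nearest-neighbour upscale leaves every original
# pixel value intact (no blending or subsampling). "grid_fhd.png" is a
# hypothetical stand-in for the 1px b/w grid described above.
import numpy as np
from PIL import Image

src = np.array(Image.open("grid_fhd.png").convert("RGB"))
up = np.array(Image.fromarray(src).resize(
    (src.shape[1] * 2, src.shape[0] * 2), Image.NEAREST))

# Each source pixel should map to a constant 2x2 block in the upscale.
assert np.array_equal(up[::2, ::2], src)
assert np.array_equal(up[1::2, 1::2], src)
print("2x nearest-neighbour upscale preserved all original pixels")
```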

On 4/4/2021 at 10:01 PM, tupp said:

The thing is, he uses this same method in almost every comparison, so he is merely comparing scaling methods throughout the video -- he is not comparing actual resolution.

But that's real life - most productions have footage at different resolutions on input (A/B cams, drones, action cams), and multiple resolutions on output - QHD/FHD for streaming/TV and a DCI format for DCP, at least. So it's all about scaling and matching the look, and it's the subject of Yedlin's research.

What's more, even in the rare "resolution preserving" cases when the filming resolution perfectly matches the projection resolution, there are such things as lens aberration/distortion correction, image stabilization, rolling shutter jello removal and reframing in post. And it usually works well, for reasons covered by Yedlin.

And sometimes resolution, processing and scaling play funny tricks out of nothing. On my last project I was doing some simple clean-ups on Red Helium 8K shots, exported to me as DPX sequences. 80% of the processed shots were rejected by the colourist and DoP as "blurry, not fitting the rest of the footage". Long story short: the DPX files had been rendered by a technician at full-res/premium-quality debayer, while the colourist and DoP were grading the 8K at half res, scaled down to a 2K big-screen projection - and that gave more punch and micro-contrast on the large screen than the higher-quality, higher-resolution DPXes with the same grading and projection scaling.

[Attached image: 2x2x2.png]


On 4/2/2021 at 4:14 AM, noone said:

A bit off topic, but since I am unlikely to be able to buy the latest and greatest (barring a lottery win that will come any day/month/year now), and given that up-rezzing of photos has gotten very good recently (not perfect, but very good), are there any standard-definition video cameras I should be looking to buy cheap (SD card or hard disc, or even recording to CD) on the chance that video up-rezzing will be great in a few years?

Something that has a really nice look to it even though only standard def?    Probably CCD?

Old pro or hobbyist?

Some prices on eBay are ridiculous, though, for many Handycam-type cameras, even for mini-tape machines.

Just curious.

I used to use a Canon XM2 (miniDV) and really liked it - very pleasing image. It had a fun, adjustable strobe effect too.

[Attached image: Canon DM-XM2 3CCD MiniDV camcorder]


7 hours ago, tupp said:

There is a way to actually test resolution, which I have mentioned more than once before in this thread: test an 8K image on an 8K display, a 6K image on a 6K display, a 4K image on a 4K display, and so on.

That scenario is as exact as we can get.  That setup is actually testing true resolution.

A 4K camera has one third as many photosites as a 4K monitor has emitters.  This means that debayering involves interpolation, which means your proposal involves significant interpolation, and therefore fails your own criteria.


I'm going to regret getting involved here, but @tupp I think you are technically correct about resolution in the abstract. But I think that Yedlin is doing his experiments in the context of real cameras and workflow, not an abstract.

I mean, it's completely obvious to anyone who has ever played a video game that there is a huge, noticeable difference between 4k and 2k, once we take out optical softness, noise, debayering artifacts, and compression. If we're debating differences between Resolutions with a capital R, let's answer with a resounding "Yes it makes a difference" and move on. The debate only makes sense in the context of a particular starting point and workflow because in actual resolution on perfect images the difference is very clear.

And yeah, maybe Yedlin isn't 100% scientific about it, maybe he uses incorrect terms, and I think we all agree he failed to tighten his argument into a concise presentation. I don't really know if discussing his semantics and presentation is as interesting as trying to pinpoint what does and doesn't matter for our own projects... but if you enjoy it carry on 🙂

I will say that for my film projects, I fail to see any benefit past 2k. I've watched my work on a 4k screen, and it doesn't really look any better in motion. Same goes for other movies I watch. 720p to 1080p, I appreciate the improvement. But 4k really never makes me enjoy it any more.


23 hours ago, KnightsFan said:

I'm going to regret getting involved here, but @tupp I think you are technically correct about resolution in the abstract. But I think that Yedlin is doing his experiments in the context of real cameras and workflow, not an abstract.

I mean, it's completely obvious to anyone who has ever played a video game that there is a huge, noticeable difference between 4k and 2k, once we take out optical softness, noise, debayering artifacts, and compression. If we're debating differences between Resolutions with a capital R, let's answer with a resounding "Yes it makes a difference" and move on. The debate only makes sense in the context of a particular starting point and workflow because in actual resolution on perfect images the difference is very clear.

And yeah, maybe Yedlin isn't 100% scientific about it, maybe he uses incorrect terms, and I think we all agree he failed to tighten his argument into a concise presentation. I don't really know if discussing his semantics and presentation is as interesting as trying to pinpoint what does and doesn't matter for our own projects... but if you enjoy it carry on 🙂

I will say that for my film projects, I fail to see any benefit past 2k. I've watched my work on a 4k screen, and it doesn't really look any better in motion. Same goes for other movies I watch. 720p to 1080p, I appreciate the improvement. But 4k really never makes me enjoy it any more.

I think perhaps the largest difference between video and video games is that video games (and any computer generated imagery in general) can have a 100% white pixel right next to a 100% black pixel, whereas cameras don't seem to do that.

In Yedlin's demo he zooms into the edge of the blind and shows the 6K straight from the Alexa with no scaling, and the "edge" is actually a gradient that takes maybe 4-6 pixels to go from dark to light.  I don't know if this is due to lens limitations, sensor diffraction, OLPFs, or debayering algorithms, but it seems to match everything I've ever shot.

It's not a difficult test to do: take any camera that can shoot RAW and put it on a tripod, set it to base ISO and aperture priority, take it outside, open the aperture right up, focus on a hard edge that has some contrast, stop down by 4 stops, take the shot, then look at it in an image editor and zoom way in to see what the edge looks like.
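A quick way to inspect the resulting edge, assuming Python with Pillow and NumPy and hypothetical file name and coordinates, is to print a row of pixel values across the transition:

```python
# Sketch: print the luminance values across a high-contrast edge in a frame
# grab. File name and coordinates are hypothetical; pick a row that crosses
# the edge in your own shot.
import numpy as np
from PIL import Image

img = np.array(Image.open("edge_test.png").convert("L"))
row = img[540, 950:980]  # horizontal slice crossing the edge
print(row)  # expect a ramp spanning several pixels, not a one-pixel step
```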

In terms of Yedlin's demo, I think the question is whether having resolution over 2K is perceptible under normal viewing conditions.  When he zooms in a lot it's quite obvious that there is more resolution there, but the question isn't whether more resolution has more resolution, because we know that of course it does, and VFX people want as much of it as possible - the question is whether audiences can see the difference.  I'm happy from the demo to say that it's not perceptually different.

Of course, it's also easy to run Yedlin's test yourself at home.  Simply take a 4K video clip and export it at native resolution and at 2K; you can export it losslessly if you like.  Then bring both versions onto a 4K timeline and watch it on a 4K display; you can even cut them up and put them side by side, or do whatever you want.  If you don't have a camera that can shoot RAW, take a timelapse with RAW still images and use that as the source video, or download some sample footage from RED, which offers footage up to 8K RAW free on their website.
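One hedged way to automate that round trip, assuming ffmpeg is installed and using hypothetical file names (Lanczos scaling and lossless FFV1 are my choices here, not necessarily Yedlin's):

```python
# Sketch: 4K -> 2K -> 4K round trip, then stack original and round-trip
# versions side by side for A/B viewing on a 4K timeline/display.
import subprocess

SRC = "source_4k.mov"  # hypothetical 4K master

# Downscale to 2K and back up to 4K; FFV1 keeps the round trip lossless so
# any visible difference comes from the resolution change, not compression.
subprocess.run(["ffmpeg", "-i", SRC,
                "-vf", "scale=2048:-2:flags=lanczos,scale=4096:-2:flags=lanczos",
                "-c:v", "ffv1", "roundtrip_2k.mkv"], check=True)

# Horizontal stack of the two versions for direct comparison.
subprocess.run(["ffmpeg", "-i", SRC, "-i", "roundtrip_2k.mkv",
                "-filter_complex", "[0:v][1:v]hstack=inputs=2",
                "-c:v", "ffv1", "side_by_side.mkv"], check=True)
```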


@kye The interesting thing about resolution is that the sharper the edges, the fewer samples are needed to describe them. A perfectly flat video game edge between white and black only needs 2 samples of 1 bit each, whereas the gradient produced by a diffusion filter, or a soft bokeh ball, has essentially infinite analog resolution before it is quantized (both spatially and in wavelength) by the digital sensor.

It is also true that no Bayer sensor will produce completely flat edges, as Bayer algorithms usually have each pixel affected by the pixels around it - not just those directly adjacent, but 2 or 3 pixels away.

I think a big part of our acceptance of lower resolution video is cultural. We like soft images, the same way we like cinematography that obscures things and leaves imagery to our imaginations (e.g. The Godfather). I'm still a big believer in high digital resolution, but mainly for the reason I mentioned: it describes soft edges more accurately. Soft analog with sharp digital is my preferred look. And I think that over 1080p, you really see diminishing returns in terms of enjoyment of the movie as an end product. I haven't done my own tests, but a possible explanation for Yedlin's conclusion is that when he upscales 2K->4K he's not reproducing high-frequency details as from a true resolution chart; rather, the scaling algorithm is doing a "pretty good" job of recreating the original gradients produced by pleasantly soft optics. I could be wrong though.


On 4/12/2021 at 12:00 PM, slonick81 said:

The attached image shows a 1px b/w grid, generated in AE in an FHD sequence, exported to ProRes, upscaled 2x to UHD with ffmpeg (the "-vf scale=3840:2160:flags=neighbor" option), imported back into AE, overlaid over the original in the same composition, magnified to 200% in the viewer, screen-grabbed, and enlarged another 2x in PS with proper scaling settings. And no subsampling is present, as you can see. So it is entirely possible to upscale an image, or show it in 1:1 view, without modifying the original pixels - just avoid fractional enlargement ratios and complex scaling. Not sure about Natron, though - never used it. Just don't forget to "open image in new tab" and view at original scale.

Thank you for taking the time to create a demo.

I agree that achieving a 1-to-1 pixel match when outputting is easy, and I stated so earlier in this thread: 

On 4/7/2021 at 2:40 PM, tupp said:

It really is not that difficult to create uncompressed short clips or stills with a 1-to-1 pixel match, just as it was done in the pixel charts above.

Indeed, with vigilant QC, a 1-to-1 pixel match can be maintained throughout an entire "imaging pipeline."

However, Yedlin did not achieve such a pixel match, even though he spoke for over 4 minutes on the importance of a 1-to-1 pixel match for his resolution comparison.

I do appreciate your support, but your presentation doesn't seem conclusive.  Perhaps I have misunderstood what you are trying to demonstrate.

After opening your demo image in a separate tab, the only things that seem to be cleanly rendered without any pixel blending are the thinner vertical white line (precisely three pixels wide) and the wider horizontal white line (precisely six pixels wide).  The other two white lines suffer from blending, and everything else in the image also suffers from pixel blending.

If you make another attempt, please use this pixel chart as your original image, as it gives more and clearer information:

[Attached image: pixel_chart.jpg]

If the chart rulings labeled "1" are clean and a single pixel wide, then you have achieved a 1-to-1 pixel match.
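That check can also be done programmatically; here is a sketch assuming Python with Pillow and NumPy, a losslessly captured screenshot, and a hypothetical crop covering the "1" rulings:

```python
# Sketch: any grey value between pure black and pure white inside the
# single-pixel rulings indicates blending/interpolation, i.e. no 1:1 match.
import numpy as np
from PIL import Image

img = np.array(Image.open("viewer_screenshot.png").convert("L"))
region = img[100:200, 50:60]  # hypothetical crop of the "1" rulings
blended = (region > 0) & (region < 255)
print("blended pixels:", int(blended.sum()),
      "-> 1-to-1 match" if not blended.any() else "-> not a 1-to-1 match")
```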

 

 

On 4/12/2021 at 12:00 PM, slonick81 said:

But that's real life - most productions have footage at different resolutions on input (A/B cams, drones, action cams), and multiple resolutions on output - QHD/FHD for streaming/TV and a DCI format for DCP, at least.

Not that it matters, but there is no doubt that in "real life" many shooters capture at a certain resolution, maintain that same resolution throughout the "imaging pipeline," and then output at that very same resolution.  Judging from quite a few of the posts on the EOSHD forum, many people don't even use lower-res proxy files during editing.

Regardless, when testing for differences in discernibility between various resolutions, it is important to control all of the variables other than resolution, as other variables that run wild and uncontrolled can muddle any slight discernibility distinction between different resolutions.  If we allow such variables to eliminate the discernibility differences in the results, then what is the point of testing different resolutions?

The title of Yedlin's comparison video is "Camera Resolutions," and we cannot conclude anything about differing resolutions until we eliminate the influence of all variables (other than resolution) that might muddle the results.

 

 

On 4/12/2021 at 12:00 PM, slonick81 said:

So it's all about scaling and matching the look, and it's the subject of Yedlin's research.

Judging from the video's title and from Yedlin's insistence on a 1-to-1 pixel match, Yedlin's comparison should center around detecting discernibility differences between various resolutions.

However, it appears that we agree that Yedlin is actually testing scaling methods and not resolution, as I (and others) have stated repeatedly:

On 4/2/2021 at 6:16 AM, kye said:

He is comparing scaling methods - that's what he is talking about in this section of the video.  

On 4/4/2021 at 12:01 PM, tupp said:

Correct.  That is what I have stated repeatedly.

The thing is, he uses this same method in almost every comparison, so he is merely comparing scaling methods throughout the video -- he is not comparing actual resolution.

Once again, I appreciate your support.

On the other hand, even if the subject of Yedlin's comparison is scaling methods and other image-processing effects, his test is neither exhaustive nor conclusive -- there are zillions of different possible scaling and image-processing combinations, and he shows only three or four.

Considering the countless possible combinations of scaling and/or image processing, it is slipshod reasoning to conclude that there is no practical difference between resolutions, just because Yedlin used some form of scaling/image-processing and in "real life" others may also use some form of scaling/image-processing.

Such coarse reasoning is analogous to the notion that there is no difference between Nickelback and the Beatles, because, in "real life," both bands used some form of guitars and drums.

There are just too many possible scaling/image-processing variables and combinations in "real life" for Yedlin to properly address, even within his lengthy video.

 

 

On 4/12/2021 at 12:00 PM, slonick81 said:

What's more, even in the rare "resolution preserving" cases when the filming resolution perfectly matches the projection resolution

Shooting and outputting at the same resolution probably isn't rare.  I would guess that doing so is quite common.  However, until someone can find some actual statistics, we have no way of knowing what the actual case is.

In addition, I would guess that most digital displays are not projectors.  Certainly, there might be existing sales statistics on monitor sales vs. projector sales.

 

 

On 4/12/2021 at 12:00 PM, slonick81 said:

there are such things as lens aberration/distortion correction, image stabilization, rolling shutter jello removal and reframing in post.

In a resolution test, all such variables should be controlled/eliminated so that they do not influence the results.

 

 

On 4/12/2021 at 12:00 PM, slonick81 said:

And it usually works well, for reasons covered by Yedlin.

Please cite those reasons covered by Yedlin.

 

 

On 4/12/2021 at 12:00 PM, slonick81 said:

And sometimes resolution, processing and scaling play funny tricks out of nothing. On my last project I was doing some simple clean-ups on Red Helium 8K shots, exported to me as DPX sequences. 80% of the processed shots were rejected by the colourist and DoP as "blurry, not fitting the rest of the footage". Long story short: the DPX files had been rendered by a technician at full-res/premium-quality debayer, while the colourist and DoP were grading the 8K at half res, scaled down to a 2K big-screen projection - and that gave more punch and micro-contrast on the large screen than the higher-quality, higher-resolution DPXes with the same grading and projection scaling.

Again, we agree here and I have touched on similar points in this thread: 

On 3/26/2021 at 1:00 AM, tupp said:

Resolution and sharpness are not the same thing.  There is a contrast element to sharpness, and it involves different levels (macro contrast, micro contrast, etc.).  One can see the effects of different levels of contrast when doing frequency separation work in images.  Not sure if Yedlin specifically covers contrast's relation to sharpness in these videos.  By the way, here is a recent demonstration of when micro features and macro features don't match.

 

On 3/27/2021 at 12:17 PM, tupp said:

By the way, my above quote from @jcs came from a 2-page "detail enhancement" thread on EOSHD.  The inventive and original approach introduced within @jcs's opening post gives significant insight into sharpness/acuity properties that are more important than simple resolution.

 

 

On 4/12/2021 at 3:31 PM, kye said:

A 4K camera has one third as many photosites as a 4K monitor has emitters.  This means that debayering involves interpolation, which means your proposal involves significant interpolation, and therefore fails your own criteria.

Did you just recently learn that some camera manufacturers sometimes "fudge" resolution figures by counting photosites instead of RGB pixel groups?

Did you forget that not all 4K sensors have a Bayer matrix?: 

On 4/4/2021 at 12:01 PM, tupp said:

There is no debayering with:  an RGB striped sensor; an RGBW sensor; a monochrome sensor; a scanning sensor; a Foveon sensor; an X-Trans sensor; and three-chip cameras; etc.

 

Also, did you forget that I have already covered such photosite/pixel-group counting in this thread?:

On 4/4/2021 at 12:01 PM, tupp said:

The conversion of adjacent photosites into a single RGB pixel group (Bayer or not) isn't considered "scaling" by most.  Even if you define it as such, that notion is irrelevant to our discussion -- we necessarily have to assume that a digital camera's resolution is given by either the output of its ADC or by the resolution of the camera files.

We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups.  Once we agree upon the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."

Not that it matters, but there are plenty of camera sensors that have true 4K resolution.  Indeed, the maximum resolution figure for any current monochrome sensor cannot be "fudged," as there is no blending of RGB pixels with such a sensor.  It could be the same situation with Foveon sensors, as the RGB receptors are all in the same photosite.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

@tupp I think you are technically correct about resolution in the abstract.

I make no claims in regard to resolution in "the abstract."  I simply assert that Yedlin's comparison is not a valid resolution test.


 

On 4/12/2021 at 5:00 PM, KnightsFan said:

But I think that Yedlin is doing his experiments in the context of real cameras and workflow, not an abstract.

In light of Yedlin's insistence on a 1-to-1 pixel match and his use of what he calls "crop to fit" over "resample to fit", I am not sure that Yedlin shares your notion on the context of his comparison.

Also, what kind of camera and/or workflow is not "real"?

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

I mean, it's completely obvious to anyone who has ever played a video game that there is a huge, noticeable difference between 4k and 2k, once we take out optical softness, noise, debayering artifacts, and compression.

I would guess that one could often discern the difference between 4K and 2K, but I will have to take your word on that.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

If we're debating differences between Resolutions with a capital R, let's answer with a resounding "Yes it makes a difference" and move on.

I am not "debating differences between resolutions."  My concern is that folks are taking Yedlin's comparison as a valid resolution test.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

The debate only makes sense in the context of a particular starting point and workflow because in actual resolution on perfect images the difference is very clear.

Well, if that is so, then Yedlin's test is not valid, as his comparison shows no difference between "6K" and "2K" and as his "workflow" is exceedingly particular.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

And yeah, maybe Yedlin isn't 100% scientific about it,

The thing is, in any type of empirical testing one usually has to be "100% scientific" to draw any solid conclusions.  If one fails to properly address or control any influential variables, then the results can be corrupted and misleading.  In a proper test, the only variable that is allowed to change is the one(s) that is being tested.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

maybe he uses incorrect terms, and I think we all agree he failed to tighten his argument into a concise presentation.

Yedlin's made-up terms are not really consequential to what is being discussed in this thread.

In regards to the lack of tightness of Yedlin's argument/presentation, I suspect that there are a few posters in this thread who disagree with you.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

I don't really know if discussing his semantics and presentation is as interesting as trying to pinpoint what does and doesn't matter for our own projects... but if you enjoy it carry on 🙂

I am not at odds with Yedlin's semantics nor am I arguing semantics.  What gave you that notion?

The method and execution shown in Yedlin's presentation are faulty to the point that they misinform folks in a way that definitely could matter to their own projects.

If you think the topics that I argue don't matter for your own projects, that is fine, but please do not unfairly single me out as the only one who is discussing those topics.  There are at least two sides to a discussion, and if you look at every one of my posts in this thread you will see that I quote someone and then respond.  I merely react to someone else who is discussing the very same topic that evidently doesn't matter to you.  Why have you not directly addressed those others as you have done with me?

By the way, I do not "enjoy" constantly having to repeat the same simple facts that some refuse to comprehend and/or accept.

 

 

On 4/12/2021 at 5:00 PM, KnightsFan said:

I will say that for my film projects, I fail to see any benefit past 2k. I've watched my work on a 4k screen, and it doesn't really look any better in motion. Same goes for other movies I watch. 720p to 1080p, I appreciate the improvement. But 4k really never makes me enjoy it any more.

Great!  I am happy that you have made your decision without relying on Yedlin's muddled comparison.

 

 

4 hours ago, kye said:

In Yedlin's demo he zooms into the edge of the blind and shows the 6K straight from the Alexa with no scaling, and the "edge" is actually a gradient that takes maybe 4-6 pixels to go from dark to light.

There will likely be less of a gradient with a lower resolution.

 

 

4 hours ago, kye said:

In terms of Yedlin's demo, I think the question is whether having resolution over 2K is perceptible under normal viewing conditions.

The viewing conditions in Yedlin's demo are not "normal":

  1. He uses a framing that he calls "crop to fit" which is not the normal framing;
  2. we see the entire comparison through the viewer of Yedlin's node editor;
  3. the image suffers at least one additional pass of pixel blending/interpolation that would not be present under normal conditions.

 

 

4 hours ago, kye said:

When he zooms in a lot it's quite obvious that there is more resolution there,

It is obvious when he zooms in here that the actual resolution of the two compared images is identical, as the pixel size does not change.  However, there might be a difference in the interpolation and/or "micro contrast."

 

 

4 hours ago, kye said:

I'm happy from the demo to say that it's not perceptually different.


Okay, but the demo is flawed.

 

 

4 hours ago, kye said:

Of course, it's also easy to run Yedlin's test yourself at home.  Simply take a 4K video clip and export it at native resolution and at 2K; you can export it losslessly if you like.  Then bring both versions onto a 4K timeline and watch it on a 4K display,

Don't forget to verify a 1-to-1 pixel match within the NLE timeline viewer with a pixel chart.

If the NLE viewer introduces its own blending/interpolation, try making short clips and playing them in a loop back-to-back with a player that can give a true 1-to-1 full-screen 4K.
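For the playback step, one option -- assuming the mpv player and a hypothetical file name -- is to disable window scaling so each video pixel maps to one display pixel:

```python
# Sketch: loop a clip in mpv full screen with scaling disabled, so a 4K
# clip on a 4K display gets a strict 1:1 pixel mapping.
import subprocess

subprocess.run(["mpv", "--fs", "--loop=inf",
                "--video-unscaled=yes", "clip_4k.mkv"])
```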


8 hours ago, KnightsFan said:

@kye The interesting thing about resolution is that the sharper the edges, the fewer samples are needed to describe them. A perfectly flat video game edge between white and black only needs 2 samples of 1 bit each, whereas the gradient produced by a diffusion filter, or a soft bokeh ball, has essentially infinite analog resolution before it is quantized (both spatially and in wavelength) by the digital sensor.

Yes and no - if the edge is at an angle then you need infinite resolution to avoid having a grey-scale pixel in between the two flat areas of colour.  VFX requires softening (blurring) in order to not appear aliased, or must be rendered such that a pixel takes the value of the light within an arc (which might partially hit an object but also partially miss it) rather than along a single line (which is either hit or miss because it's infinitely thin).

8 hours ago, KnightsFan said:

It is also true that no Bayer sensor will produce completely flat edges, as Bayer algorithms usually have each pixel affected by the pixels around it - not just those directly adjacent, but 2 or 3 pixels away.

Tupp disagrees with us on this point, but yes.  I haven't read much about debayering, but it makes sense if the interpolation is a higher-order function than linear from the immediate pixels.

8 hours ago, KnightsFan said:

I think a big part of our acceptance of lower resolution video is cultural. We like soft images, the same way we like cinematography that obscures things and leaves imagery to our imaginations (e.g. The Godfather). I'm still a big believer in high digital resolution, but mainly for the reason I mentioned: it describes soft edges more accurately. Soft analog with sharp digital is my preferred look. And I think that over 1080p, you really see diminishing returns in terms of enjoyment of the movie as an end product. I haven't done my own tests, but a possible explanation for Yedlin's conclusion is that when he upscales 2K->4K he's not reproducing high-frequency details as from a true resolution chart; rather, the scaling algorithm is doing a "pretty good" job of recreating the original gradients produced by pleasantly soft optics. I could be wrong though.

There is a cultural element, but Yedlin's test was strictly about perceptibility, not preference.

When he upscales 2K->4K he can't reproduce the high frequency details because they're gone.  It's like if I described the beach as being low near the water and higher up further away from the water, you couldn't take my information and recreate the curve of the beach from that, let alone the ripples in the sand from the wind or the texture of the footprints in it - all that information is gone and all I have given you is a straight line.

In digital systems there's a thing called the Nyquist frequency, which in digital audio terms says that the highest frequency that can be reproduced is half the sampling rate.  I.e., the highest frequency is when the data goes "100, 0, 100, 0, 100, 0", and in an image the effect is that if I say the 2K pixels are "100, 0, 100" then that translates to a 4K image of "100, ?, 0, ?, 100, ?".  The best we can do is guess what those missing pixel values were, based on the surrounding pixels, but we can't know whether each of those edges was sharp or not.  The right 4K image might be "100, 50, 0, 0, 100, 100", but how would we know?  The information that one of those edges was soft and one was sharp is lost forever.
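This point can be reproduced in a few lines; here is a sketch (using simple point sampling for the downscale, and the same numbers as above) showing two different 4K edges collapsing to the same 2K samples:

```python
# Sketch: a hard edge and a soft edge decimate to identical "2K" samples,
# so no upscaler can recover which one the original was.
import numpy as np

sharp = np.array([100, 100, 0, 0, 100, 100])  # hard transitions
soft = np.array([100, 50, 0, 0, 100, 100])    # softer gradient on one edge

down_sharp = sharp[::2]  # point-sample every other pixel
down_soft = soft[::2]

print(down_sharp, down_soft)  # both print [100   0 100]
assert np.array_equal(down_sharp, down_soft)
# The distinction between the two edges is gone; any upscale from
# [100, 0, 100] is a guess about the missing samples.
```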


6 hours ago, tupp said:

If you make another attempt, please use this pixel chart as your original image, as it gives more and clearer information:

Sure. This exact image has heavy compression artifacts and I was unable to find the original chart, but I got the idea, recreated those pixel-wide colored "E"s, and did the same upscale-view-grab routine. And, well, it does preserve sharp pixel edges - no subsampling.

I don't have access to Nuke right now - I'm not going to mess with warez in the middle of a work week for the sake of an internet dispute - and I'm not 100% sure about the details, but the last time I was compositing something in Nuke it had no problems with 1:1 view, especially considering I was making titles and credits as well. And what Yedlin is doing - comparing at 100%, 1:1 - looks right.

Yedlin is not questioning the capability of a given codec/format to store a given number of resolution lines. He is discussing _perceived_ resolution. It means that the image should be a) projected and b) well, perceived. So he chooses common ground - a 4K projection - crops out a 1:1 portion of it, and cycles through different cameras. And his idea sounds valid: past a certain point, digital resolution is less important than the other factors at play before (optical system resolving, DoF/motion blur, AA filter) and after (rolling shutter, processing, sharpening/NR, compression) the resolution is created. He doesn't state that there is zero difference and he doesn't touch special technical cases like VFX or intentional heavy reframing in post, where additional resolution may be beneficial.

7 hours ago, tupp said:

Please cite those reasons covered by Yedlin.

The whole idea of his work: past a certain point of technical resolution, the perceived resolution of real-life images does not suffer from upsampling and does not benefit from downscaling all that much. For example, in the second image I added a numerically subtle transform to the chart in AE before grabbing the screen: +5% scale, 1° rotation, slight skew - essentially what you will get from nearly any stabilization plugin, and it's a mess in terms of technical resolution. But we do this here and there without any dramatic degradation to real footage.

[Attached image: pixel_chart_2x2x2.png]

[Attached image: pixel_chart_2x2x2_transform.png]
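The transform described above is easy to reproduce; here is a sketch assuming Python with Pillow and a hypothetical chart file (AE's resampling will differ in detail, but the effect on single-pixel rulings is the same kind):

```python
# Sketch: apply +5% scale, 1 degree rotation, and a slight skew with
# bilinear resampling. Each parameter looks negligible, yet every output
# pixel gets resampled, so a strict 1:1 pixel match is destroyed.
from PIL import Image

img = Image.open("pixel_chart.png").convert("RGB")  # hypothetical chart

out = (img.rotate(1.0, resample=Image.BILINEAR)                # 1 degree
          .transform(img.size, Image.AFFINE,
                     (1.0, 0.01, 0.0, 0.0, 1.0, 0.0),          # slight skew
                     resample=Image.BILINEAR)
          .resize((round(img.width * 1.05), round(img.height * 1.05)),
                  Image.BILINEAR))                              # +5% scale
out.save("pixel_chart_transformed.png")
# Viewed at 1:1, the single-pixel rulings now show grey, blended edges.
```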


On 4/14/2021 at 4:35 AM, slonick81 said:

Sure. This exact image has heavy compression artifacts and I was unable to find the original chart, but I got the idea, recreated those pixel-wide colored "E"s, and did the same upscale-view-grab routine. And, well, it does preserve sharp pixel edges - no subsampling.

I applaud you for making your single-pixel "E's!"

Likewise, I couldn't find the original version of that pixel chart, but the black integer rulings are clean.  I apologize for forgetting to mention that you should concentrate on the black pixel rulings, as I did earlier in the thread:

On 4/4/2021 at 12:01 PM, tupp said:

If the charts are displayed at 1:1 pixels, you should easily observe with a magnifier that all of the black pixel rulings that are integers (1, 2, 3, etc.) are cleanly defined with no blending into adjacent pixels.  On the other hand, all of the black pixel rulings that are non-integers (1.3, 1.6, 2.1, 2.4, 3.3, etc.) should show blending on their edges with a 1:1 match.

I had also linked another chart that is a GIF, so it doesn't suffer from compression, and all rulings of any color shown are clean, but it lacks the non-integer rulings:

[Attached image: resolution.color-rc4s.gif]

 

On 4/14/2021 at 4:35 AM, slonick81 said:

I don't have access to Nuke right now - I'm not going to mess with warez in the middle of a work week for the sake of an internet dispute - and I'm not 100% sure about the details, but the last time I was compositing something in Nuke it had no problems with 1:1 view, especially considering I was making titles and credits as well.

Natron is free and open source, and is available on most platforms.

Creating something in a compositor and observing the work at "100%" within a compositor viewer might differ from seeing the actual results at 1-to-1 pixels.  It is important to also see your actual results, rather than just a screenshot of a viewer with an enlarged image.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

And what Yedlin is doing - comparing at 100%, 1:1 - looks right.

We seem to agree on many things, but we depart here.  Not sure how such a sizeable leap of reasoning is possible from the shaky ground on which Yedlin's comparisons are based.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

Yedlin is not questioning the capability of a given codec/format to store a given number of resolution lines.

"Resolution lines" usually refers to a quality of optical systems while the resolution of digital video "codec" involves pixels, but I think I know what you are trying to say.

I agree that Yedlin is not trying to test the capability of any codec/format to store a given amount of pixels.  There is absolutely no reason for such a test.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

He is discussing _perceived_ resolution.

... which is precisely what we are discussing in this thread.  This issue has been covered in earlier posts.  Why would anyone test "non-perceived" resolution?

 

On 4/14/2021 at 4:35 AM, slonick81 said:

It means that the image should be a) projected and b) well, perceived.

"Projected?" -- not necessarily.  "Percieved?" -- of course.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

So he chooses common ground - a 4K projection -

Why do you say "projection"?  I am not "perceiving" a projection.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

crops out a 1:1 portion of it, and cycles through different cameras.

Any crop can be a "1:1 portion," but for review purposes it is important to maintain standardized aspect ratios and resolution formats.

For some reason, Yedlin chose to show his 1920x1080 "crop-to-fit" from a 4K image, within an often square software "viewer" (thus forcing an additional crop), and then he outputted everything to a 1920x1280 file.  WTF?!

It's a crazy and wild comparison.

By the way, the comparisons mostly discussed so far in this thread involve an image from a single 6K camera.  That image has been scaled to different resolutions.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

And his idea sounds valid: past a certain point, digital resolution is less important than the other factors at play before (optical system resolving, DoF/motion blur, AA filter) and after (rolling shutter, processing, sharpening/NR, compression) the resolution is created.

Again, I make no claims as to whether digital resolution may be more or less important than other factors, but I argue that Yedlin's comparisons are useless in providing any solid conclusions in that regard.

Yedlin went into his comparison with a significant confirmation bias.  Your statement above acknowledges his strong leanings:  "his idea sounds valid: past a certain point, digital resolution is less important than the other factors."  By glossing over uncontrolled variables and by ignoring significant potential objections, Yedlin tries to convert his bias into reality, rather than conducting a proper, controlled and objectively analyzed test.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

He doesn't state that there is zero difference

He suggests that there is no perceptual difference between 6K and 2K.

He tries to prove that notion by downscaling the original 6K image to 2K, and then by comparing the original 6K and downscaled 2K image cropped within a 4K node editor viewer, and then by outputting a screen capture of that node editor viewer to a video file with an odd resolution -- all of this is done without addressing any blending/interpolation/compression variables that occur during each step.

Again, WTF?!

 

On 4/14/2021 at 4:35 AM, slonick81 said:

and he doesn't touch special technical cases like VFX or intentional heavy reframing in post, where additional resolution may be beneficial.

He is heavily reframing the original image within his node editor viewer.

 

On 4/13/2021 at 8:50 PM, tupp said:

Please cite those reasons covered by Yedlin.

On 4/14/2021 at 4:35 AM, slonick81 said:

The whole idea of his work:

I wasn't asking about the whole idea of his work. You said:

On 4/12/2021 at 12:00 PM, slonick81 said:

What's more, even in the rare "resolution preserving" cases when the filming resolution perfectly matches the projection resolution, there are such things as lens aberration/distortion correction, image stabilization, rolling shutter jello removal and reframing in post. And it usually works well, for reasons covered by Yedlin.

I was asking you to cite (with links to Yedlin's video) those specific reasons covered by Yedlin.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

past a certain point of technical resolution, the perceived resolution of real-life images does not suffer from upsampling and does not benefit from downscaling all that much.

That notion may or may not be correct, but Yedlin's convoluted and muddled comparison is inconclusive in regard to the possible discernibility of different resolutions.

 

On 4/14/2021 at 4:35 AM, slonick81 said:

For example, in the second image I added a numerically subtle transform to the chart in AE before grabbing the screen: +5% scale, 1° rotation, slight skew - essentially what you will get from nearly any stabilization plugin, and it's a mess in terms of technical resolution. But we do this here and there without any dramatic degradation to real footage.

Thank you for creating that image.  It is important for us to see such images with a 1-to-1 pixel match -- not enlarged.

It appears that your use of the term "technical resolution" includes some degree of blended/interpolated pixels, not unlike those shown by the non-integer rulings in the first pixel chart that I posted.  I am not sure if such blending/interpolation is visually quantifiable (even considering the pixel chart).

Nevertheless, introducing any such blending/interpolation unnecessarily complicates a resolution comparison, and Yedlin does nothing to address or quantify the resulting "technical resolution" introduced by all of the scaling, interpolation and compression possibly added by the many convoluted steps of his test.

On the other hand, you have given a specific combination of adjusted variables (+5% scale, 1° rotation, slight skew), variables which Yedlin fails to record and report. 

However, how do we know that the "technical resolution" of your specific combination will match that of, say:

  • a raw 4K shoot, edited in 4K in Sony Vegas with no stabilization or scaling, and then delivered in 4K ProRes 4:4:4?;
  • someone shooting home movies in 4:2:0 AVCHD, edited on MovieMaker with sharpness and IS set at full and inadvertently output to some odd resolution in a highly compressed M4V codec?;
  • someone shooting an EOSM with ML at 2.5K raw and scaled to HD in editing in Cinelerra with no IS and with 50% sharpening and output to an h264, All-I file?

Do you think that just because all of these examples may or may not use some form of IS, sharpening, or compression, they will all yield identical results in regards to the degree of pixel blending/interpolation?

Again, just because Nickelback and the Beatles both use some form of guitars and drums, that doesn't mean that their results are the same.

If one intends to make any solid conclusions from a test, it is imperative to eliminate and/or assess the influence of any influential variables that are not being tested.  Yedlin did not achieve his self-touted 1-to-1 pixel match required to eliminate the influence of pixel blending, scaling, interpolation and compression, and he made no effort to properly address nor quantify the influence of those variables.


@tupp

Your obsession with pixels not impacting the pixels adjacent to them means that your arguments don't apply in the real world.  I don't understand why you keep pursuing this "it's not perfect so it can't be valid" line of logic.

Bayer sensors require debayering, which is a process involving interpolation.  I have provided links to articles explaining this but you seem to ignore this inconvenient truth.

Even if we ignore the industry trend of capturing images at a different resolution than they are delivered in, it still means that your mythical image pipeline that doesn't involve any interpolation is limited to cameras that capture such a tiny fraction of the images we watch that they may as well not exist.

Your criticisms also don't allow for compression, which is applied to basically every image that is consumed.  This is a fundamental issue because compression blurs edges and obscures detail significantly, making many differences that might be visible in the mastering suite invisible in the final delivered stream.  Once again, this means your comparison is limited to some utopian fairy-land that doesn't apply here in our dimension.

I don't understand why you persist.

Even if you were right about everything else (which you're not), you would only be proving the statement "4K is perceptually different to 2K when you shoot with cameras that no-one shoots with, match resolutions through the whole pipeline, and deliver in a format no-one delivers in".

Obviously, such a statement would be pointless.


15 hours ago, kye said:

 

I don't understand why you persist.

Even if you were right about everything else (which you're not), you would only be proving the statement "4K is perceptually different to 2K when you shoot with cameras that no-one shoots with, match resolutions through the whole pipeline, and deliver in a format no-one delivers in".

Obviously, such a statement would be pointless.

Not worth bothering with, just like the equivalence arguments, Kye.  Like hitting your head against a brick wall.


On 4/15/2021 at 4:06 PM, kye said:

Your obsession with pixels not impacting the pixels adjacent to them means that your arguments don't apply in the real world.

Pixels "impact" adjacent pixels?  Sounds imaginative.  Please explain how that works in the "real world."

 

 

On 4/15/2021 at 4:06 PM, kye said:

 I don't understand why you keep pursuing this "it's not perfect so it can't be valid" line of logic.

Yedlin's comparison is far from perfect.  In general:

On 4/15/2021 at 2:09 PM, tupp said:

Yedlin went into his comparison with a significant confirmation bias.  Your statement above acknowledges his strong leanings:  "his idea sounds valid: past a certain point, digital resolution is less important than the other factors."  By glossing over uncontrolled variables and by ignoring significant potential objections, Yedlin tries to convert his bias into reality, rather than conducting a proper, controlled and objectively analyzed test.

On 4/15/2021 at 2:09 PM, tupp said:

Nevertheless, introducing any such blending/interpolation unnecessarily complicates a resolution comparison, and Yedlin does nothing to address or quantify the resulting "technical resolution" introduced by all of the scaling, interpolation and compression possibly added by the many convoluted steps of his test.

 

 

On 4/15/2021 at 4:06 PM, kye said:

Bayer sensors require debayering

Actually, they don't, especially if one shoots in black and white.

 

 

On 4/15/2021 at 4:06 PM, kye said:

which is a process involving interpolation.  I have provided links to articles explaining this but you seem to ignore this inconvenient truth.

      Irony:

On 4/4/2021 at 12:01 PM, tupp said:

The conversion of adjacent photosites into a single RGB pixel group (Bayer or not) isn't considered "scaling" by most.  Even if you define it as such, that notion is irrelevant to our discussion -- we necessarily have to assume that a digital camera's resolution is given by either the output of its ADC or by the resolution of the camera files.

We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups.  Once we agree upon the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."

On 4/4/2021 at 7:18 PM, kye said:

< ... crickets... >

  ... and...

On 4/4/2021 at 12:01 PM, tupp said:

We can determine the camera resolution merely from output of the ADC or from the camera files.  We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups. After we agree on the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."

On 4/4/2021 at 7:18 PM, kye said:

< ... crickets... >

... and...

On 4/13/2021 at 8:50 PM, tupp said:

Did you forget that not all 4K sensors have a Bayer matrix?: 

On 4/4/2021 at 12:01 PM, tupp said:

There is no debayering with:  an RGB striped sensor; an RGBW sensor; a monochrome sensor; a scanning sensor; a Foveon sensor; an X-Trans sensor; and three-chip cameras; etc.

 

On 4/14/2021 at 3:25 AM, kye said:

< ... crickets... >

 

 

On 4/15/2021 at 4:06 PM, kye said:

Even if we ignore the industry trend of capturing images at a different resolution than they are delivered in,

Okay, not that it actually matters to this resolution discussion, but you have been hinting that there is a difference in resolution between what the camera sensor captures and the actual resolution of the rest of the "imaging pipeline."  So, let's clarify your point:

Are you saying that camera sensors capture images at a lower resolution than the rest of the "imaging pipeline," and, at some point in the subsequent process, the sensor image is somehow upscaled to match the resolution of the latter "imaging pipeline?"  Is that what you think?

 

 

On 4/15/2021 at 4:06 PM, kye said:

it still means that your mythical image pipeline that doesn't involve any interpolation is limited to cameras that capture such a tiny fraction of the images we watch that they may as well not exist.

The camera's resolution is determined at the output of the ADC or from the recorded files.  It is irrelevant to consider any interpolation or processing prior to that point that cannot be adjusted.

Also, you seem to insinuate that cameras with non-Bayer sensors are uncommon.  Regardless of the statistical percentage of Bayer-matrix cameras to non-Bayer cameras, that point is irrelevant to the general discernibility of different resolutions perceived on a display.

Even Yedlin did not try to argue that Bayer sensors somehow make a difference in his resolution comparison.

 

 

On 4/15/2021 at 4:06 PM, kye said:

Your criticisms also don't allow for compression, which is applied to basically every image that is consumed.  This is a fundamental issue because compression blurs edges and obscures detail significantly, making many differences that might be visible in the mastering suite invisible in the final delivered stream.  Once again, this means your comparison is limited to some utopian fairy-land that doesn't apply here in our dimension.

Again... irony:

On 4/13/2021 at 8:50 PM, tupp said:

On the other hand, even if the subject of Yedlin's comparison is scaling methods and other image-processing effects, his test is neither exhaustive nor conclusive -- there are zillions of different possible scaling and image-processing combinations, and he shows only three or four.

Considering the countless possible combinations of scaling and/or image processing, it is slipshod reasoning to conclude that there is no practical difference between resolutions, just because Yedlin used some form of scaling/image-processing and in "real life" others may also use some form of scaling/image-processing.

Such coarse reasoning is analogous to the notion that there is no difference between Nickelback and the Beatles, because, in "real life," both bands used some form of guitars and drums.

There are just too many possible scaling/image-processing variables and combinations in "real life" for Yedlin to properly address, even within his lengthy video.

... and...

On 4/15/2021 at 2:09 PM, tupp said:

Yedlin does nothing to address or quantify the resulting "technical resolution" introduced by all of the scaling, interpolation and compression possibly added by the many convoluted steps of his test.

 

 

On 4/15/2021 at 4:06 PM, kye said:

I don't understand why you persist.

One could ask the exact same question of you.

My answer is that it is important to point out misleading information, especially when it comes in the form of flawed advice from a prominent person with a large, impressionable following.

In addition, there are way too many slipshod imaging tests posted on the Internet. More folks should be aware of the current prevalence of low testing standards.

 

 

On 4/15/2021 at 4:06 PM, kye said:

Even if you were right about everything else (which you're not), you would only be proving the statement "4K is perceptually different to 2K when you shoot with cameras that no-one shoots with

Not that such commonality matters nor is true, nor that your 4K/2K point actually makes sense, but lots of folks shoot with non-Bayer matrix cameras.

For instance, anyone shooting with a Fuji X-T3 or X-T4 is not using a Bayer sensor.  Any photographer using a scanning back is not using a Bayer sensor.    Anyone shooting with a Foveon sensor is not debayering anything.  Anyone shooting with a monochrome sensor is not debayering their images.

 

 

On 4/15/2021 at 4:06 PM, kye said:

match resolutions through the whole pipeline, and deliver in a format no-one delivers in".

Well, we need to clarify exactly what you are hinting at here (see above paragraph mentioning "imaging pipeline").

 

 

On 4/16/2021 at 8:01 AM, noone said:

Not worth bothering with, just like equivalence arguments Kye.

Coincidentally, in our most recent "equivalence" discussion, someone linked yet another problematic Yedlin test.

 

 

On 4/16/2021 at 8:01 AM, noone said:

Like hitting your head against a brick wall.

I am beginning to agree with that.


6 hours ago, tupp said:

My answer is that it is important to point out misleading information, especially when it comes in the form of flawed advice from a prominent person with a large, impressionable following.

In addition, there are way too many slipshod imaging tests posted on the Internet. More folks should be aware of the current prevalence of low testing standards.

This is my concern too.  Hopefully I have dissuaded them from your arguments sufficiently.

6 hours ago, tupp said:

Are you saying that camera sensors capture images at a lower resolution than the rest of the "imaging pipeline," and, at some point in the subsequent process, the sensor image is somehow upscaled to match the resolution of the latter "imaging pipeline?"  Is that what you think?

Once again, you're deliberately oversimplifying this in order to try and make my arguments sound silly, because you can't argue against their logic in a calm and rational way.

This is how a camera sensor works:

[image: RGB color space and Bayer color filter array diagram]

Look at the pattern of the red photosites that is captured by the camera.  It is missing every second row and every second column.  

In order to work out a red value for every pixel in the output, it must interpolate the values from what it did measure.  Just like upscaling an image.
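To make that point concrete, here is a minimal numpy sketch (a toy illustration under simplified assumptions, not any real camera's demosaicer) showing that filling in the missing red samples is effectively a 2x upscale:

import numpy as np

# Toy 6x6 sensor plane: red photosites sit only on even rows and
# columns (one red sample per 2x2 Bayer cell); the rest is unmeasured.
measured = np.arange(9, dtype=float).reshape(3, 3) * 28  # the red samples

# Simplest possible demosaic of the red channel: copy each measured
# value into its whole 2x2 cell -- literally a 2x nearest-neighbour upscale.
red_full = measured.repeat(2, axis=0).repeat(2, axis=1)

print(red_full.shape)  # (6, 6): a red value for every output pixel,
                       # three quarters of them never actually measured

Real demosaic algorithms use smarter kernels than this, but they are all estimating values the sensor never measured.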

6 hours ago, tupp said:

Not that such commonality matters nor is true, nor that your 4K/2K point actually makes sense, but lots of folks shoot with non-Bayer matrix cameras.

For instance, anyone shooting with a Fuji X-T3 or X-T4 is not using a Bayer sensor.  Any photographer using a scanning back is not using a Bayer sensor.    Anyone shooting with a Foveon sensor is not debayering anything. 

This is typical of the arguments you are making in this thread.  It is technically correct and sounds like you might be raising valid objections.  Unfortunately this is just technical nit-picking and shows that you are missing the point, either deliberately or naively.

My point has been, ever since I raised it, that camera sensors have significant interpolation.  This is a problem for your argument, as your entire argument is that Yedlin's test is invalid because the pixels blended with each other (as you showed in your frame-grabs), and you claimed this was due to interpolation, scaling, or some other resolution issue.

Your criticism then is that a resolution test cannot involve interpolation, and the problem with that is that almost every camera has interpolation built-in fundamentally.

I mentioned bayer sensors, and you said the above.

I showed above that bayer sensors have fewer red photosites than output pixels, therefore they must interpolate, but what about the Fuji X-T3?

The Fuji cameras have a X-Trans sensor, which looks like this:

[image: Bayer vs X-Trans color filter array patterns]

Notice something about that?  Correct - it too doesn't have a red value for every pixel, or a green value for every pixel, or a blue value for every pixel.  Guess what that means - interpolation!

"Scanning back" you say.  Well, that's a super-broad term, but it's a pretty niche market.  I'm not watching that much TV shot with a medium format camera.  If you are, well, good for you.

And finally, Foveon.  Now we get to a camera that doesn't need to interpolate because it measures all three colours for each pixel:

[image: Foveon X3 stacked-photodiode sensor diagram]

So I made a criticism about interpolation by mentioning bayer sensors, and you criticised my argument by picking up on the word "debayer" but included the X-Trans sensor in your answer, when the X-Trans sensor has the same interpolation that you are saying can't be used!

You are not arguing against my argument, you are just cherry picking little things to try and argue against in isolation.  A friend PM'd me to say that he thought you were just arguing for its own sake, and I don't know if that's true or not, but you're not making sensible counter-arguments to what I'm actually saying.

So, you criticise Yedlin for his use of interpolation:

On 4/16/2021 at 5:09 AM, tupp said:

Nevertheless, introducing any such blending/interpolation into a resolution comparison unnecessarily complicates a resolution comparison, and Yedlin does nothing to address nor to quantify the resulting "technical resolution" introduced by all of the scaling, interpolation and compression possibly introduced by the many convoluted steps of his test.

and yet you previously said that "We can determine the camera resolution merely from output of the ADC or from the camera files."  

You're just nit-picking on tiny details but your argument contains all manner of contradictions.


On 4/16/2021 at 12:09 AM, tupp said:

Natron is free and open source, and is available on most platforms.

Creating something in a compositor and observing the work at "100%" within a compositor viewer might differ from seeing the actual results at 1-to-1 pixels.  It is important to also see your actual results, rather than just a screenshot of a viewer with an enlarged image.

So Natron we go!

As you can see, the latest Windows version with default settings shows no subsampling in the enlarged view. The only thing we should care about is the filter type in the Reformat node. It pads the original small 558x301 image to FHD with borders around it, but centering introduces a 0.5-pixel vertical shift due to the odd Y dimension of the original image (301 px), so the "Impulse" filter type is set for nearest-neighbour interpolation. If you uncheck "Center", it will place our chart in the bottom-left corner and remove any influence of the "Filter" setting.

The funniest thing is that even a non-integer resize in the viewer won't introduce any soft subsampling with these settings. You can notice some pixel-line doubling, but no soft transitions.
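If it helps, here is a toy numpy sketch of why the filter choice matters once that 0.5-pixel shift exists (my own simplified illustration, not Natron's actual resampler):

import numpy as np

row = np.array([0., 0., 255., 0., 0.])   # a 1-pixel-wide detail

# Resample with a 0.5-pixel shift, the situation the Reformat node creates.
src = np.arange(len(row)) - 0.5           # where each output pixel samples

# "Impulse" / nearest-neighbour: snap to the closest source pixel,
# so every output value is still an original pixel value.
nearest = row[np.clip(np.floor(src + 0.5).astype(int), 0, 4)]

# A linear filter instead: the two neighbours get averaged -- blending.
lo = np.clip(np.floor(src).astype(int), 0, 4)
hi = np.clip(lo + 1, 0, 4)
linear = 0.5 * row[lo] + 0.5 * row[hi]

print(nearest)  # [  0.   0. 255.   0.   0.]  hard pixels preserved
print(linear)   # [  0.   0. 127.5 127.5  0.]  soft transitions appear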

And yes, I converted the chart to .bmp because Natron couldn't read .gif.

On 4/16/2021 at 12:09 AM, tupp said:

Why do you say "projection"?  I am not "perceiving" a projection.

It's the only thing you're perceiving. Unless you're a Neuralink test volunteer, maybe.

On 4/16/2021 at 12:09 AM, tupp said:

Any crop can be a "1:1 portion," but for review purposes it is important to maintain standardized aspect ratios and resolution formats.

For some reason, Yedlin chose to show his 1920x1080 "crop-to-fit"  from a 4K image, within an often square software "viewer" (thus, forcing an additional crop), and then he outputted everything to a 1920x1280 file.  WTF?!

It's a crazy and wild comparison.

By the way, the comparisons mostly discussed so far in this thread involve an image from a single 6K camera.  That image has been scaled to different resolutions.

Well, that's how any kind of compositing is done. A CG artist switches back and forth from "fit in view" to any magnification needed for the job, using 1:1 scale to judge real detail. The working screen resolution can be anything, the more the better, of course, but for the sake of working space for tools, not resolution itself. And this is exactly what Yedlin is doing: sitting in his compositing suite of choice (Nuke), showing his nodes and settings, zooming in and out but mostly staying at 1:1, capturing at a resolution he is comfortable with (thus the 1920x1280 file - it's a windowed screen recording from a larger display).

In general: I mostly posted to show one simple thing - you can evaluate footage in 1:1 mode in a compositor viewer, and integer-multiple scaling doesn't introduce any false details. I considered it a given truth. But you questioned it and I decided to check. So, for AE, PS, ffmpeg, Natron and most likely Nuke, it's true (with some attention to settings). Concerning Yedlin's research - it was made in a very natural way for me, as if I were evaluating footage myself, and it summarised a general impression I have from working on video/movie productions - resolution is not a decisive factor nowadays. Like, over the last 5 years I can count on one hand the projects where the director/DoP/producer was intentionally seeking more resolution. You see it as wrong or flawed - fine, I don't feel any necessity to change your mind; it looks like it's more kye's battle to fight.

[images: Natron viewer screenshots - zoomed detail and full test setup]


  

On 4/17/2021 at 11:11 AM, tupp said:

My answer is that it is important to point out misleading information, especially when it comes in the form of flawed advice from a prominent person with a large, impressionable following.

In addition, there are way too many slipshod imaging tests posted on the Internet. More folks should be aware of the current prevalence of low testing standards.

 

On 4/17/2021 at 6:17 PM, kye said:

This is my concern too.

Wait.  You, too, are concerned about slipshod testing?

Then, how do you reconcile Yedlin's failure to achieve his self-imposed (and required) 1-to-1 pixel match?...  you know, Yedlin's supposed 1-to-1 pixel match that you formerly took the trouble to explain and defend: 

On 3/30/2021 at 3:48 PM, kye said:
  • He talks about how to get a 1:1 view of the pixels
  • He shows how in a 1:1 view of the pixels that the resolutions aren't discernable

As I have shown, Yedlin did not get a 1:1 view of the pixels, and now it appears that the 1:1 pixel match is suddenly unimportant to one of us.

To mix metaphors, one of us seems to have changed one's tune and moved the goal posts out of the stadium.

Also, how do you reconcile Yedlin’s failure to even address and/or quantify the effect of all of the pixel blending, interpolation, scaling and compression that occur in his test?  There is no way for us to know to what degree the "spatial resolution" is affected by all of the complex imaging convolutions of Yedlin's test.

 

On 4/17/2021 at 11:11 AM, tupp said:

Are you saying that camera sensors capture images at a lower resolution than the rest of the "imaging pipeline," and, at some point in the subsequent process, the sensor image is somehow upscaled to match the resolution of the latter "imaging pipeline?"  Is that what you think?

On 4/17/2021 at 6:17 PM, kye said:

Once again, you're deliberately oversimplifying this in order to try and make my arguments sound silly,

There is absolutely no need for me to try and make... such an attempt.

I merely asked you to clarify your argument regarding sensor resolution, because you have repeatedly ignored my rebuttal to your "Bayer interpolation" notion, and because you have also mentioned "sensor scaling" several times.  Some cameras additionally upscale the actual sensor resolution after the sensor is interpolated, so I wanted to make sure that you were not referring to such upscaling and, hence, ignoring my repeated responses.

 

On 4/17/2021 at 6:17 PM, kye said:

because you can't argue against their logic in a calm and rational way.

Absolutely.

I haven't been posting numerous detailed points with supporting examples.  In addition, you haven't conveniently ignored any of those points.

 

On 4/17/2021 at 6:17 PM, kye said:

This is how a camera sensor works:

Look at the pattern of the red photosites that is captured by the camera.  It is missing every second row and every second column. 

In order to work out a red value for every pixel in the output, it must interpolate the values from what it did measure.  Just like upscaling an image.

Demosaicing is not "just like" upscaling an image.  Furthermore, the results of demosaicing are quite the opposite from the results of the unintended pixel blending/degradation that we see in Yedlin's results.

Also, not that it actually matters to testing resolution, but, again:

  • some current cameras do not use Bayer sensors;
  • some cameras have color sensors that don't require interpolation;
  • monochrome sensors don't need interpolation.

 

On 4/17/2021 at 11:11 AM, tupp said:

Not that such commonality matters nor is true, nor that your 4K/2K point actually makes sense, but lots of folks shoot with non-Bayer matrix cameras.

For instance, anyone shooting with a Fuji X-T3 or X-T4 is not using a Bayer sensor.  Any photographer using a scanning back is not using a Bayer sensor.    Anyone shooting with a Foveon sensor is not debayering anything.

On 4/17/2021 at 6:17 PM, kye said:

This is typical of the arguments you are making in this thread.  It is technically correct and sounds like you might be raising valid objections.  Unfortunately this is just technical nit-picking and shows that you are missing the point, either deliberately or naively.

It's not just "technically" correct -- it IS correct.  Not everyone shoots with a Bayer sensor camera.

What is "missing the point" (and is also incorrect) is your insistence that Bayer sensors and their interpolation somehow excuse Yedlin's failure to achieve a 1-to-1 pixel match in his test.

 

 

On 4/17/2021 at 6:17 PM, kye said:

My point has been, ever since I raised it, that camera sensors have significant interpolation.

You are incorrect.  Not all camera sensors require interpolation.

 

On 4/17/2021 at 6:17 PM, kye said:

This is a problem for your argument as your entire argument is that Yedlins test is invalid because the pixels blended with each other (as you showed in your frame-grabs) and you claimed this was due to interpolation / scaling / or some other resolution issue.

No, it is not a problem for my argument, because CFA interpolation is irrelevant and very different from the unintentional pixel blending suffered in Yedlin's comparison.

Yedlin's failure to achieve a 1-to-1 pixel match certainly invalidates his test, but that isn't my entire argument (on which I have corrected you repeatedly).

I have made two major points:

On 4/12/2021 at 7:27 AM, tupp said:
  1. Yedlin's downscaling/upscaling method doesn't really test resolution which invalidates the method as a "resolution test;"
  2. Yedlin's failure to meet his own required 1-to-1 pixel match criteria invalidates the analysis.

 

 

On 4/17/2021 at 6:17 PM, kye said:

Your criticism then is that a resolution test cannot involve interpolation, and the problem with that is that almost every camera has interpolation built-in fundamentally.

No.  The starting images for the comparison are simply the starting images for the comparison.  There are many variables that might affect the sharpness of those starting images: they may have been shot with softer vintage lenses or with a diffusion filter, or, if they were taken with a sensor that was demosaiced, a coarse or fine algorithm might have been used.  None of those variables matter to our subsequent comparison, as long as the starting images are sharp enough to demonstrate the potential discernability between the different resolutions being tested.

You don't seem to understand the difference between sensor CFA interpolation and the unintended and uncontrolled pixel blending introduced by Yedlin's test processes, which is likely why you treat them as the same thing.

The sensor interpolation is an attempt to maintain the full, highest resolution possible utilizing all of the sensor's photosites (plus such interpolation helps avoid aliasing).

In contrast, Yedlin's unintended and uncontrolled pixel blending degrades and "blurs" the resolution.  With such accidental pixel "blurring," a 2K file could look like a 6K file, especially if both images come from the same 6K file and if both images are shown at the same 4K resolution.

Regardless, the resolution of the camera's ADC output or the camera's image files is a given property that we must accept as the starting resolution for any tests, and, again, some camera sensors do not require interpolation.

Additionally, with typical downsampling (say, from 8K to 4K, from 6K to 4K, or from 4K to HD), the CFA interpolation impacts the final "spatial" resolution significantly less than it impacts the original capture resolution.  So, if we start a comparison with a downsampled image as the highest resolution, then we avoid the influence of sensor interpolation.
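A back-of-the-envelope sketch of that proportion (this assumes a 2x2 cell as the interpolation footprint; real demosaic kernels are larger, but the ratios are the point):

# CFA interpolation cell width, measured in 2K *output* pixels.
for name, capture_w in {"6K": 6144, "4K": 4096, "2K": 2048}.items():
    cell_px = 2 * 2048 / capture_w        # one 2x2 cell in output pixels
    print(f"{name} capture -> {cell_px:.2f} output px per CFA cell")

# 6K capture -> 0.67 output px per cell (sub-pixel after the downscale)
# 4K capture -> 1.00
# 2K capture -> 2.00 output px per cell (interpolation spans two pixels)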

On the other hand, if CFA interpolation impacts resolution (as you claim), then shooting at 6K and then downsampling to 2K will likely give different results than shooting at 6K and separately shooting the 2K image with a 2K camera.  This is because the interpolation cell area of the 2K sensor is relatively coarser/larger within the frame than that of the 6K interpolation cell area.  So, unfortunately, Yedlin's comparison doesn't apply to actually shooting the 2K image with a 2K camera.

 

 

On 4/17/2021 at 6:17 PM, kye said:

I mentioned bayer sensors, and you said the above.

I showed above that bayer sensors have fewer red photosites than output pixels, therefore they must interpolate, but what about the Fuji X-T3?

The Fuji cameras have a X-Trans sensor, which looks like this:

Notice something about that?  Correct - it too doesn't have a red value for every pixel, or a green value for every pixel, or a blue value for every pixel.  Guess what that means - interpolation!

Except you might also notice that the X-Trans sensor does not have a Bayer matrix.  You keep harping on Bayer sensors, but the Bayer matrix is only one of several CFAs in existence.

By the way, the Ursa 12K uses an RGBW sensor, and each RGBW pixel group has 6 red photosites, 6 green photosites, 6 blue photosites and 18 clear photosites.  The Ursa 12K is not a Bayer sensor camera.

It is likely that you are not aware of the fact that if an RGB sensor has enough resolution (Bayer or otherwise), then there is no need for the type of interpolation that you have shown.  "Guess what that means" -- there are already non-Foveon, single-sensor, RGB cameras that need no CFA interpolation.
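For illustration, here is a toy numpy sketch of that idea -- simple 2x2 binning of a Bayer mosaic into full-RGB pixels (a simplified sketch, not any specific camera's pipeline):

import numpy as np

# Toy GRBG Bayer mosaic. With enough photosites, you can skip
# interpolation entirely: collapse each 2x2 cell into one RGB pixel
# (binning), trading resolution for zero interpolation.
mosaic = np.random.rand(4, 4)

g1 = mosaic[0::2, 0::2]                   # green, top-left of each cell
r  = mosaic[0::2, 1::2]                   # red, top-right
b  = mosaic[1::2, 0::2]                   # blue, bottom-left
g2 = mosaic[1::2, 1::2]                   # green, bottom-right

rgb = np.dstack([r, (g1 + g2) / 2, b])
print(rgb.shape)  # (2, 2, 3): half the photosite resolution, but every
                  # output pixel is built only from measured values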

However, regardless of whether or not Yedlin's source images came from a sensor that required interpolation, Yedlin's unintended and uncontrolled pixel blending ruins his resolution comparison (along with his convoluted method of upscaling, downscaling and Nuke-viewer "cropping-to-fit").

 

On 4/17/2021 at 6:17 PM, kye said:

"Scanning back" you say.  Well, that's a super-broad term, but it's a pretty niche market.

You recklessly dismiss many high-end photographers who use scanning backs.

Also, linear scanning sensors are used in a lot of other imaging applications, such as film scanners, tabletop scanners, special effects imaging, etc.

 

On 4/17/2021 at 6:17 PM, kye said:

I'm not watching that much TV shot with a medium format camera.

That's interesting, because the camera that Yedlin used for his resolution comparison (you know, the one which you declared is "one of the highest quality imaging devices ever made for cinema")...  well, that camera is an Alexa65 -- a medium format camera.

Insinuating that medium format doesn't matter is yet another reckless dismissal.

Similarly reckless is Yedlin's dismissal of shorter viewing distances and wider viewing angles.  Here is a chart that can help one find the minimum viewing distance at which one does not perceive individual display pixels (best viewed at 100%, 1-to-1 pixels):

[image: pixel-blend chart with green lines]

If any of the green lines appear as a series of tiny green dots (or tiny green "slices") instead of a smooth green line, you are discerning the individual display pixels.

For all of those who see the tiny green dots, you are viewing your display at what is dismissed by Yedlin as an uncommon "specialty" distance.  Your viewing setup is irrelevant according to Yedlin.

To make the green lines smooth, merely back away from the monitor (or get a monitor with a higher resolution).
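For anyone who wants to check the arithmetic behind such a chart, here is a rough Python sketch based on the common ~1-arcminute acuity rule of thumb (ballpark assumptions, not measurements):

import math

# Assumes ~1 arcminute visual acuity and a 16:9 panel; real eyes and
# subpixel layouts vary, so treat the numbers as rough estimates only.
def min_distance_cm(diagonal_in, h_res, aspect=16/9):
    width_cm = diagonal_in * 2.54 * aspect / math.hypot(aspect, 1)
    pixel_pitch = width_cm / h_res               # one pixel, in cm
    return pixel_pitch / math.tan(math.radians(1 / 60))

print(f"{min_distance_cm(27, 3840):.0f} cm")  # 27-inch 4K: ~53 cm
print(f"{min_distance_cm(27, 1920):.0f} cm")  # 27-inch HD: ~107 cm

Sit closer than that figure and the individual pixels resolve; farther away and they blend smooth.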

 

 

On 4/17/2021 at 6:17 PM, kye said:

And finally, Foveon.  Now we get to a camera that doesn't need to interpolate because it measures all three colours for each pixel:

So I made a criticism about interpolation by mentioning bayer sensors, and you criticised my argument by picking up on the word "debayer" but included the X-Trans sensor in your answer, when the X-Trans sensor has the same interpolation that you are saying can't be used!

Wait a second!... what happened to your addressing the Foveon sensor?  How do you reconcile the existence of the Foveon sensor with your rabid insistence that all camera sensors require interpolation?

By the way, demosaicing the X-Trans sensor doesn't use the same algorithm as that of a Bayer sensor.

 

 

On 4/17/2021 at 6:17 PM, kye said:

You are not arguing against my argument, you are just cherry picking little things to try and argue against in isolation.  A friend PM'd me to say that he thought you were just arguing for its own sake, and I don't know if that's true or not, but you're not making sensible counter-arguments to what I'm actually saying.

I have responded directly to almost everything that you have posted.

Perhaps you and your friend should actually read my points and try to comprehend them.

 

 

On 4/17/2021 at 6:17 PM, kye said:

So, you criticise Yedlin for his use of interpolation:

Yedlin didn't "use" interpolation -- the unintentional pixel blending was an accident that corrupts his tests.

Blending pixels blurs the "spatial" resolution.  Such blurring can make 2K look like 6K.  The amount of blur is a matter of degree.

To what degree did Yedlin's accidental pixel blending blur the "spatial" resolution?  Of course, nobody can answer that question, as that accidental blurring cannot be quantified by Yedlin nor anyone else.
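For what it's worth, the effect can at least be bounded on a synthetic target. Here is a crude numpy sketch of that kind of measurement (my own illustration, emphatically not a reconstruction of Yedlin's Nuke chain):

import numpy as np

# Put a number on pixel blending: run a hard edge through a
# downscale/upscale round trip and measure the soft transition
# band it acquires.
edge = np.repeat([0.0, 255.0], [63, 65])      # hard edge across a 128 px row

def round_trip(x, f):
    low = x.reshape(-1, f).mean(axis=1)       # box downscale: blends pixels
    return np.repeat(low, f)                  # blow back up to original size

for f in (1, 2, 4):
    out = round_trip(edge, f)
    band = np.count_nonzero((out > 0) & (out < 255))
    print(f"{f}x round trip -> {band} px of in-between values")

# 1x -> 0 px (clean edge); 2x -> 2 px; 4x -> 4 px of blur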

If only Yedlin had ensured the 1-to-1 pixel match that he touted and claimed to have achieved...  However, even then we would still have to contend with all of the downscaling/upscaling/crop-to-fit/Nuke-viewer convolutions.

I honestly can't believe that I am having to explain all of this.

 

 

On 4/15/2021 at 2:09 PM, tupp said:

Nevertheless, introducing any such blending/interpolation into a resolution comparison unnecessarily complicates a resolution comparison, and Yedlin does nothing to address nor to quantify the resulting "technical resolution" introduced by all of the scaling, interpolation and compression possibly introduced by the many convoluted steps of his test.

On 4/17/2021 at 6:17 PM, kye said:

and yet you previously said that "We can determine the camera resolution merely from output of the ADC or from the camera files."  

Yes.  There is no contradiction between those two statements.

Sensor CFA interpolation is very different from accidental pixel blending that occurs during a resolution test.  In fact, such sensor interpolation yields the opposite effect from pixel blending -- sensor interpolation attempts to increase actual resolution while pixel blending "blurs" the spatial resolution.

Furthermore, sensor CFA interpolation is not always required, and we have to accept a given camera's resolution inherent in the starting images of our test (interpolated sensor or not).

 

 

On 4/17/2021 at 6:17 PM, kye said:

You're just nit-picking on tiny details but your argument contains all manner of contradictions.

Yedlin's accidental blurring of the pixels is a major problem that invalidates his resolution comparison.

In addition, all of the convoluted scaling and display peculiarities that Yedlin employs severely skew the results.

 

 

 

On 4/19/2021 at 10:14 AM, slonick81 said:

So Natron we go!

Well, it appears that you had no trouble learning Natron!

 

On 4/19/2021 at 10:14 AM, slonick81 said:

As you can see, latest win version with default settings shows no subsampling on enlarged view.

That could be a problem if the viewer is not set at 100%.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

The only thing we should care about is filter type in reformat node.

I am not sure why we should care about that nor why we need to reformat.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

It complements original small 558x301 image to FHD with borders around, but centering introduces 0.5 pixel vertical shift due to uneven Y dimension of original image (301 px) so "Impulse" filter type is set for "nearest neighbour" interpolation. If you uncheck "Center" it will place our chart in bottom left corner and remove any influence of "Filter" setting.

Why did you do all of that?  All we need to see is the pixel chart in the viewer, which should be set at 100%, just like this image (view image at 100%, 1-to-1 pixels):

[image: Natron viewer at 100% showing the pixel chart]

 

On 4/19/2021 at 10:14 AM, slonick81 said:

The funniest thing is that even a non-integer resize in the viewer won't introduce any soft subsampling with these settings. You can notice some pixel-line doubling, but no soft transitions.

That could cause a perceptual problem if the viewer is not set at 100%.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

And yes, I converted the chart to .bmp because Natron couldn't read .gif.

I converted the pixel chart to a PNG image.

 

On 4/15/2021 at 2:09 PM, tupp said:

Why do you say "projection"?  I am not "perceiving" a projection.

On 4/19/2021 at 10:14 AM, slonick81 said:

It's the only thing you're perceiving. Unless you're a Neuralink test volunteer, maybe.

I perceive an LED screen, not a projection.

 

On 4/15/2021 at 2:09 PM, tupp said:

Any crop can be a "1:1 portion," but for review purposes it is important to maintain standardized aspect ratios and resolution formats.

For some reason, Yedlin chose to show his 1920x1080 "crop-to-fit"  from a 4K image, within an often square software "viewer" (thus, forcing an additional crop), and then he outputted everything to a 1920x1280 file.  WTF?!

It's a crazy and wild comparison.

By the way, the comparisons mostly discussed so far in this thread involve an image from a single 6K camera.  That image has been scaled to different resolutions.

On 4/19/2021 at 10:14 AM, slonick81 said:

Well, that's how any kind of compositing is done. A CG artist switches back and forth from "fit in view" to any magnification needed for the job, using 1:1 scale to judge real detail.

It seems that the purpose of Yedlin's comparison is to test if there is a discernible difference between higher resolutions -- not to show how CG artists work.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

The working screen resolution can be anything, the more the better, of course,

This statement seems to contradict Yedlin's confirmation bias.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

And this is exactly what Yedlin is doing: sitting in his compositing suite of choice (Nuke), showing his nodes and settings, zooming in and out but mostly staying at 1:1, capturing at a resolution he is comfortable with (thus the 1920x1280 file - it's a windowed screen recording from a larger display)

In what should have been a straightforward, fully framed, 1-to-1 resolution test, Yedlin shows his 1920x1080 "crop-to-fit" section of a 4K image within a mostly square Nuke viewer (which results in an additional crop), and then he outputs everything to a 1920x1280 file that suffers from accidental pixel blending.

It's a crazy and slipshod comparison.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

In general: I mostly posted to show one simple thing - you can evaluate footage in 1:1 mode in a compositor viewer, and integer-multiple scaling doesn't introduce any false details.

That's actually two simple things.

As I have said to you before, I agree with you that a 1:1 pixel match is possible in a compositor viewer, and Yedlin could easily have achieved a 1-to-1 pixel match in his final results, as he claimed he did.  Whether or not Yedlin's Nuke viewer is showing 1:1 is still unknown, but now we know that the Natron viewer can do so.

In regard to the integer-multiple scaling not showing any false details, I am not sure that your images are conclusive.  I see distortions in both images, and pixel blending in one.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

I considered it a given truth. But you questioned it and I decided to check. So, for AE, PS, ffmpeg, Natron and most likely Nuke, it's true (with some attention to settings).

Nuke is still a question mark in regards to the 1-to-1 pixel match.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

Concerning Yedlin's research - it was made in a very natural way for me, as if I were evaluating footage myself, and it summarised a general impression I have from working on video/movie productions - resolution is not a decisive factor nowadays.

A "general impression" is not conclusive proof, and Yedlin's method and execution are flawed.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

Like, over the last 5 years I can count on one hand the projects where the director/DoP/producer was intentionally seeking more resolution. You see it as wrong or flawed - fine,

Again, I make no claim for or against more resolution.

What I see as "wrong and flawed" is Yedlin's method and execution of his resolution comparison.

 

On 4/19/2021 at 10:14 AM, slonick81 said:

I don't feel any necessity to change your mind,

Likewise, but it appears that you have an incorrect impression of what I argue.

 


5 hours ago, tupp said:

Perhaps you and your friend should actually read my points and try to comprehend them.

We have, we did, and....  *sigh*

Let me ask you this.  If Yedlin has made such basic failures, and you claim to be sufficiently knowledgeable to be able to easily see through them when others do not, why don't you go ahead and do a test that meets the criteria you say he hasn't met?

I will then proceed to persistently claim you haven't met your own criteria, criticise every line you have written in isolation, and generally take the perspective that if the test does not directly apply to every single camera ever made, every screen and every eyeball in existence, then it can't have any value whatsoever.  I think it will be fun; I've seen it done recently with such gusto....


23 minutes ago, kye said:

Let me ask you this.  If Yedlin has made such basic failures, and you claim to be sufficiently knowledgeable to be able to easily see through them when others do not, why don't you go ahead and do a test that meets the criteria you say he hasn't met?

I will then proceed to persistently claim you haven't met your own criteria, criticise every line you have written in isolation,

Well, I posted an image above of a compositor with an image displayed within its viewer, which was set at 100%.  Unlike Yedlin, I think that I was able to achieve a 1-to-1 pixel match, but, by all means, please scrutinize it.

 

 

28 minutes ago, kye said:

and generally take the perspective that if the test does not directly apply to every single camera ever made, every screen and every eyeball in existence then it can't have any value whatsoever.  I think it will be fun, I've seen it done recently with such gusto....

There is no need for any such resolution tests to apply to any particular camera or display.  I certainly never made such a claim.


56 minutes ago, tupp said:

Well, I posted an image above of a compositor with an image displayed within its viewer, which was set at 100%.  Unlike Yedlin, I think that I was able to achieve a 1-to-1 pixel match, but, by all means, please scrutinize it.

The whole point of the test was to compare the perceptibility of 2K vs higher resolutions.

This is the point you keep missing.

56 minutes ago, tupp said:

There is no need for any such resolution tests to apply to any particular camera or display.  I certainly never made such a claim.

Determining if there is a difference between 2K and some other resolution on a camera that no-one ever uses is a useless test.

Once again, missing the point.

