Everything posted by tupp

  1. Your video is easily better than 95% of the work that is out there -- great eye and a nicely coordinated edit. The narrator did a great job, as well. Did someone give her line readings? I noticed that the interior shot had slight noise, but I was pixel peeping. I would guess that you had to stop down due to the high scene contrast. It certainly was not enough noise to warrant using an unwieldy cinema camera, although it might have been interesting to try a minor adjustment to the E-M10's "Highlights & Shadows" setting. Your video is a superb and inspiring piece that brings life to a mundane subject.
  2. The red line didn't help, but a black Sharpie could quickly fix that.
  3. Film grain is "organized" to actually form/create the image, while noise is random (except for FPN) and obscures the image. Noise doesn't "destroy" resolution -- it just obscures pixels. On the other hand, with too much noise the image-forming pixels are not visible, so there's no discernible image and, thus, no resolution. Grain actually forms the image on film. Noise obscures the image. Yep. That notion is subjective, but not uncommon. Actual electronic noise is random, whether digital or analog. It could be argued that FPN and extraneous signal interference are not "noise." This is another subjective but common notion. Again, with the medium of film, grain is organized to actually form the image -- noise is random and obscures the image. To me, it doesn't make much sense to remove random noise and then add an overlay of random grain -- grain that does absolutely nothing to form the image.
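The "noise obscures, structure forms" point can be sketched numerically: correlate a clean pixel pattern with noisy copies of itself and watch the correlation collapse as the noise amplitude grows. This is only a toy illustration (a one-row "image" and uniform noise), not a model of film grain:

```python
import random

random.seed(42)

# A simple "image": one row of alternating dark/light pixels.
pattern = [0.0 if i % 2 == 0 else 1.0 for i in range(1000)]

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def add_noise(img, amplitude):
    """Add zero-mean uniform random noise -- it carries no image information."""
    return [p + random.uniform(-amplitude, amplitude) for p in img]

slightly_noisy = add_noise(pattern, 0.1)   # pattern still dominates
heavily_noisy = add_noise(pattern, 10.0)   # pattern buried in noise

print(round(correlation(pattern, slightly_noisy), 3))  # stays close to 1
print(round(correlation(pattern, heavily_noisy), 3))   # falls toward 0
```

With mild noise the underlying pattern is still nearly perfectly recoverable; with heavy noise the correlation drops toward zero and the "image" is gone, which is the sense in which noise obscures rather than forms the picture.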
  4. Yes. Fringing can sometimes be an issue with some ellipsoidal fixtures. Instruments that use lenses made for slide/film projectors usually exhibit minimal fringing. Use an open-face tungsten or HMI source with a snoot. Add a tube of blackwrap if the snoot is too short. That should eliminate most of your spill, and the open-face filament/arc will give you sharp shadows if the fixture is set to full flood. Both open-face and Fresnel fixtures always give their sharpest shadows on full flood.
  5. I know what it is like to suffer attacks and abuse for merely telling the truth and stating fact. All I can say is:
  6. Okay. If you can keep your set-up and shoot time down to an hour total and if you are only using two or three fixtures, then it is probably better to use batteries. Bring spares. LED fixtures might need to be fairly close to the subject when shooting in the daytime.
  7. You have plenty of work light? Are you shooting nighttime exteriors 1000km from the nearest town? What is "IV/PTC?"
  8. If your LED lights actually draw a total of 600W, you might suffer continual battery management/charge anxiety. Also, when you kill your battery-powered set lights in between takes and in between set-ups, will you also have a separate battery-powered work-light running? On the other hand, a genny with a 600W constant capacity is not that big (but it's good practice to use a genny that is rated at twice the anticipated power draw). You can keep it and a gas can in a large plastic bin(s) to protect your car's interior from gasoline/oil. By the way, gennys last longer than batteries because their tanks have much larger energy capacities than typical batteries -- not because they can be "topped-up." A battery system can be "topped-up" with a parallel, rectified circuit and/or switches. If you are recording sound, make sure to have at least 150 feet of 12-gauge stingers (extension cords) just to run power from the genny to the set, and hide the genny behind a distant building or thick bushes. You can also build a sound shield with stands, heavy vinyl and a furniture pad (or a thick blanket). If you can arrange in advance to run power from a nearby building, that might be an even better solution.
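The arithmetic behind that 150-foot 12-gauge run is easy to sanity-check. A rough sketch, assuming a 120V circuit and the standard ~1.588 ohms per 1000 ft resistance of 12 AWG copper (your mains voltage and wire will vary; these numbers are assumptions, not from the post above):

```python
# Rough voltage-drop check for running set power from a distant genny.
# Assumed: 120 V circuit, 12 AWG copper at ~1.588 ohms/1000 ft, resistive load.

load_watts = 600.0          # total LED draw discussed above
line_volts = 120.0
run_feet = 150.0            # one-way stinger length
ohms_per_1000ft = 1.588     # 12 AWG copper, approximate

current_amps = load_watts / line_volts                  # 5.0 A
loop_ohms = (2 * run_feet / 1000.0) * ohms_per_1000ft   # out and back
drop_volts = current_amps * loop_ohms
drop_percent = 100.0 * drop_volts / line_volts

print(f"current: {current_amps:.1f} A")
print(f"drop over {run_feet:.0f} ft: {drop_volts:.2f} V ({drop_percent:.1f}%)")

# Sizing the genny at twice the draw, per the rule of thumb above:
genny_watts = 2 * load_watts
print(f"suggested genny rating: {genny_watts:.0f} W")
```

At a 600W draw the drop over 150 feet of 12-gauge is only around 2%, which is why that cable length is workable for hiding a genny well away from set.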
  9. There might be a way to attach a speedbooster. This is a really interesting thread!
  10. Yes! The EOSM with ML raw is amazing, and @ZEEK has a great eye! EOSM ML raw videos are constantly appearing on YouTube. This guy also does nice work with the EOSM.
  11. Perhaps it was interlaced (60i), and then deinterlaced to 30p. There are probably plug-ins/filters in those programs that might work, but I am not familiar with those apps.
  12. Look again. I am not trolling. I have presented facts, often with detailed explanations and with supporting images and links. These facts show that Yedlin's test is faulty, and that we cannot draw any sound conclusions from his comparison. In addition, I have pointed out contradictions and/or falsehoods (or lies) of others, also using facts/links. If my posting methods look like trolling to you, please explain how it is so. It doesn't sound like you are actually interested in the topic of this thread. No need to apologize for your shallowness and frivolity. Those are two real funny (and original) jokes. Another real funny joke. I am not sure that I believe you. So, you don't really have anything to add to the topic of the discussion with this solitary post late in the thread. You have made similar posts near the end of another extended thread, posts which likewise had no relation to that discussion. What is the purpose of these late, irrelevant posts? I don't wonder about that. Please... If you have something worthwhile to offer to the discussion, I'd like to hear it, but don't talk down to someone who is actually contributing facts, explanations and supporting evidence that relate directly to the topic of this thread.
  13. FFmpeg has a "pullup" filter, but removing pulldowns can be done in ffmpeg without that special filter. Mencoder and AviSynth can also remove pulldowns. Several NLEs and post programs have plugins that do the same. However, the pulled-down 30fps footage is usually interlaced. Can you post a few seconds of the 30fps footage?
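The mechanics of a 2:3 pulldown can be sketched in a few lines: 24p frames are split into fields and repeated in a 2-3 cadence to fill out the 30fps (60-field) stream, and inverse telecine reverses this by matching fields back into the original progressive frames. This is only a toy model with labels standing in for real fields -- actual tools (ffmpeg's `fieldmatch`/`pullup`, AviSynth's TFM/TDecimate) operate on real field data:

```python
def pulldown_2_3(frames):
    """Telecine: expand each group of 4 progressive frames into 10 fields
    using the 2-3 cadence (so 24 frames become 60 fields per second)."""
    fields = []
    cadence = [2, 3, 2, 3]
    for frame, repeats in zip(frames, cadence * (len(frames) // 4)):
        for i in range(repeats):
            parity = "t" if i % 2 == 0 else "b"   # alternate top/bottom fields
            fields.append((frame, parity))
    return fields

def inverse_telecine(fields):
    """Recover the unique progressive frames by dropping repeated fields."""
    frames, seen = [], set()
    for frame, _parity in fields:
        if frame not in seen:
            seen.add(frame)
            frames.append(frame)
    return frames

source = ["A", "B", "C", "D"]         # four 24p frames
telecined = pulldown_2_3(source)      # 10 fields = five interlaced 30i frames
print(len(telecined))                 # 10
print(inverse_telecine(telecined))    # ['A', 'B', 'C', 'D']
```

The "usually interlaced" caveat above is visible here: two of the five resulting frames mix fields from different source frames, which is why a straight deinterlace (instead of a proper inverse telecine) smears the cadence.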
  14. What were you hoping to achieve with your personal insults of me below: I didn't mind any of these blatant insults nor the numerous sarcastic insinuations that you have made about me (I have made one or two sarcastic innuendos about you -- but not as many as you have about me). I don't mind that you constantly contradict yourself and project those contradictions on me, nor do I mind when you inadvertently disprove your own points, nor do I care when you just make stuff up to suit your position. However, when you lied about me making a fictitious claim comparing myself to Yedlin, you went too far. Here is your lie: I never made any such claim. You need to take back that lie. Classic projection... You should ask yourself the same question -- how are your personal insults (listed above), contradictions and falsehoods helping anyone? I have given many reasons in great detail on why Yedlin's test is not valid, and I even linked two straightforward resolution demos that disagree with the results of Yedlin's more convoluted test. Additionally, you unwittingly presented results from a thorough empirical study that directly contradict the results of Yedlin's test. Those results showed a "pretty obvious" (your own words) distinguishability between two resolutions that differ by 2x, while Yedlin's results show no distinguishability between two resolutions with a more disparate resolution difference of 3x -- and the study that you presented was also made with an Alexa! I cannot conceive of any additional argument against the validity of Yedlin's comparison that is more convincing than those obviously damning results of the empirical study that you yourself inadvertently presented in this thread. Indeed, it's a question you should ask of yourself.
  15. I thought that you were just being trollish, but now it seems that you are truly delusional. Somehow in your mind you get the notion that I am "criticizing Yedlin for using a 6K camera on a 4K timeline" from this passage: Nowhere in that passage do I mention Yedlin, nor do I mention a camera, nor do I ever refer to anyone "using a 6K image on a 4K timeline." Most importantly, I was not criticizing anyone in that passage. Anybody can go to that post and see for themselves that I was simply making a direct response to your quoted statement: Even YOU did not refer to Yedlin, nor to a camera nor to using 6K on a 4K timeline. Making up things in your mind is harmless, but posting lies about someone is too much. You need to take back your lie that I claimed that my intellect was elevated in comparison to Yedlin's. Also, if you are on meds, keep taking them regularly.
  16. No, I didn't. What is the matter with you -- why do you always make up false realities? Also, I already corrected you when you stated this very same falsehood before. I never criticized Yedlin for using ANY particular camera -- YOU are the one who is particular about cameras. The fact is that I have repeatedly stated that the particular camera used in a resolution test doesn't really matter: Once again, here are the two primary points on which I criticized Yedlin's test (please read these two points carefully and try to retain them so that I don't have to repeat them again): The particular camera used for a resolution test doesn't matter! Actually, I linked two tests. I am not familiar with the camera in the other test. It is truly saddening to witness your continued desperate attempts to twist my statements in an attempt to create a contradiction in my points. I have repeatedly stated that the camera doesn't matter as long as its effective resolution is high enough and its image is sharp enough. So, camera interpolation is irrelevant, as I have also already specifically explained. The "interpolation" that Yedlin's test suffered was pixel blending that happened accidentally in post (as I have stated repeatedly) -- it had nothing to do with sensor interpolation. Yes. *Both* tests that I linked are not perfect. The first tester has a confirmation bias, and he doesn't give a lot of details on his settings, and the second comparison just doesn't give a lot of settings details, and the chosen subject is not that great. Nevertheless, both comparisons are implemented in a much cleaner and more straightforward manner than Yedlin's video, and both tests clearly show a discernible distinction between resolutions having only a 2x difference. Unless the testers somehow skewed the lower resolution images to look softer than normal, that clear resolution difference cannot be reconciled with the lack of discernibility between 6K and 2K in Yedlin's comparison. 
Again, it's sad that you have to grasp at straws by using insults, instead of reasonably arguing the points. Your declaration regarding Yedlin's demo doesn't change the fact that it is not valid. So far, the most thorough comparison presented in this thread is the Alexa test you linked that shows a "pretty obvious" (your words) distinguishability between resolutions having a mere 2x difference. The test that you linked is the most empirical, because it: This: I never made such a claim, and you've crossed the line with your falsehoods here. Unless you can find and link any post of mine in which I claimed that my intellect was elevated in comparison to Yedlin's, you are a liar.
  17. As I answered that question before, I have already demonstrated that it is easy to achieve a 1-to-1 pixel match. However, I will add that there is no sense/logic to your notion that I should perform such a test myself, just because I have shown that Yedlin's comparison is invalid. I have demonstrated that we can draw no conclusions regarding the discernibility of differing resolutions from Yedlin's flawed demo, and that fact is all that matters to this discussion. In addition, there are several comparisons which already exist that don't suffer the same blunders inherent in Yedlin's test. So, why would I need to bother making yet another one? The execution in these demos is not perfect, but they are implemented in a much cleaner and more straightforward manner than Yedlin's video. Here is one resolution comparison shot with a GH5s at 10bit and evidently captured in both HD and UHD. Does a GH5s use the correct type of sensor and is it also common/uncommon enough and does it additionally have enough quality -- to meet your approval for a resolution test? The tester emphasizes the 200% zoom, but at 100% zoom I can see the resolution difference between UHD and HD on my HD monitor. However, this test should really be viewed on a 4K monitor at minimum viewing distance, giving priority to the 100% zoom. This comparison is more straightforward than Yedlin's test. There is no upscaling/downscaling, and the images are not screen-captures of a software viewer, and there is no evidence of accidental pixel blending. However, it should be noted that the tester performs the comparison with a confirmation bias. Here is a similar resolution comparison showing faster-moving subjects. Likewise, I can see a difference in resolution on my HD monitor with 100% zoom, but the video should really be viewed on a 4K monitor at minimum viewing distance. Now, I do not claim that higher resolutions are better than lower resolutions. 
I simply state the fact that I can discern a difference in resolution between HD and 4K/UHD at a 100% zoom, when viewing these two comparisons on my HD monitor. One more thing... if the Nuke viewer behaves the same as the Natron viewer, I suspect that peculiarities in the way the viewer renders pixels contributed significantly to Yedlin's "6K = 2K" results. Combining this potential discrepancy generated by the Nuke viewer with Yedlin's upscaling/downscaling and with the accidental pixel blending, it is easy to imagine how a "6K" image would look almost exactly like a "2K" image, when viewed on a "4K" timeline.
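Why a UHD-vs-HD difference can show up at 100% zoom comes straight from sampling limits: detail at the pixel scale of the higher resolution simply cannot survive a 2x downscale. A one-dimensional sketch (a box-filter downscale is assumed here; real scalers use fancier kernels, but the Nyquist limit is the same):

```python
def box_downscale_2x(row):
    """Average each pair of pixels -- a crude 2x downscale."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

# Finest possible detail at "UHD": pixels alternating black/white.
uhd_row = [0.0, 1.0] * 8

hd_row = box_downscale_2x(uhd_row)

print(uhd_row[:4])   # [0.0, 1.0, 0.0, 1.0] -- full-contrast detail
print(hd_row[:4])    # [0.5, 0.5, 0.5, 0.5] -- detail gone, flat gray
```

Any scene content finer than the lower resolution's pixel pitch is averaged away like this, which is exactly the kind of difference a clean side-by-side comparison at 100% zoom can reveal.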
  18. Yedlin's test isn't really applicable to any "world," because his method is flawed, and because he botched the required 1-to-1 pixel match. Again, the type of camera/sensor doesn't really matter to a resolution test, as long as the camera has enough resolution for the test. Resolution tests should be (and always are) camera agnostic -- there is no reason for them not to be so. What does that notion have to do with testing resolution? There's not much to a resolution test other than using a camera with a high enough resolution and properly controlling the variables. There are no "special" resolution testing criteria for "world class" filmmakers that would not also apply to those shooting home movies -- nor vice versa. Furthermore, I don't recall Yedlin declaring any such special criteria in his video, but I do recall him talking about standard viewing angles/distances in home living rooms. By the way, if you think that there are special criteria in resolution testing for "world class" filmmakers, please list them. The fact is that Yedlin's resolution test is flawed. Not sure how to take this. By "more productive discussions," I hope that you mean "additional productive discussions" and not "discussions that are more productive." Your example involves color channel resolution resulting from a crude Bayer sensor interpolation, which has nothing to do with chroma subsampling. Chroma subsampling occurs after sensor interpolation (if a sensor needs interpolation). In addition, chroma subsampling also applies to imaging generated by other means -- not just camera images. By posting those resolution charts, along with stating, "Pretty obvious that the red has significantly less resolution than the green," you have unwittingly demonstrated that there is a discernible distinction between different resolutions and/or that there is a large problem with Yedlin's test. 
Of course, we all know that the green photosites in a Bayer matrix have twice (2x) the resolution of the red photosites. So, you are actually admitting that there is obvious discernibility between two resolutions that differ by 2x -- in a test shot with an Alexa camera. Furthermore, Yedlin's Alexa65 test shows no discernible difference between two even more disparate resolutions -- 6K:2K is a 3x difference. So, how is it that we see an *obvious* distinction between resolutions that differ by only 2x in a well-controlled, empirical study, while we see no distinction between resolutions that differ by 3x in Yedlin's uncontrolled test? It certainly appears that something is wrong with Yedlin's comparison. As I have suggested, the problem lies with his convoluted, nonsensical methods and in his failure to achieve a 1-to-1 pixel match. Again, contrary to your relatively recent stance, it doesn't matter to a resolution test whether or not the test camera uses chroma subsampling or uses a Bayer sensor or is common or uncommon with a certain level of quality. All of your constantly changing conditions on what particular camera can work in resolution tests are irrelevant. Resolution tests should be -- and always are -- camera agnostic, as long as the camera has enough resolution and sharpness for the resolution test. Earlier in the thread, you said yourself: So, you are saying that ANY raw camera can be used to duplicate the zoomed-in moments in Yedlin's test. In the very same post, you also stated: So, you additionally say that we can use ANY 4K camera that can shoot raw video (or even a still camera) to duplicate Yedlin's entire test. The question is: which stance of yours is true? For resolution tests, can we use ANY raw 4K camera or are we now required to use the particular common/uncommon camera with the particular sensor or quality that happens to suit your inclination at the moment? 
Of course, even if there is a prominent difference in the resolution between green and red/blue photosites, it doesn't matter -- as long as the resolution of the red/blue photosites is high enough for the resolution test. Furthermore, the study that you linked showed the results of a better debayering algorithm in which the chart clearly shows the red channel having the same resolution as the green channel. So, the difference in resolution between green and red/blue photosites doesn't really matter to a resolution test -- as long as the camera's effective resolution (after debayering) is high enough and as long as the image is sharp enough. Additionally, there is a huge difference in execution between your linked empirical camera study and Yedlin's sloppy comparison. Unlike Yedlin's demo, the empirical study: is conceived and performed with no confirmation bias; is made in accordance with empirical guidelines set by TECH 3335 and EBU R 118; directly shows us the actual results, with no screenshots of results displayed within a compositor viewer; doesn't upscale/downscale test images to other resolutions; uses a precision resolution test chart that clearly shows what is occurring. Can you imagine how muddled those fine resolution charts would look if the tester had accidentally blurred the spatial resolution as Yedlin did?
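The green-vs-red photosite ratio discussed above falls straight out of the Bayer layout: every 2x2 cell holds two green, one red and one blue photosite. Counting them over a toy sensor makes the 2x sampling ratio explicit (a sketch only; real sensors add microlens and CFA subtleties):

```python
def bayer_cell(x, y):
    """Colour of the photosite at (x, y) in a standard RGGB Bayer mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

width, height = 8, 8
counts = {"R": 0, "G": 0, "B": 0}
for y in range(height):
    for x in range(width):
        counts[bayer_cell(x, y)] += 1

print(counts)  # {'R': 16, 'G': 32, 'B': 16} -- green sampled twice as densely
```

Green gets twice the photosite count of red or blue, which is the sampling disparity those red/green chart comparisons were exposing -- and which debayering then attempts to interpolate away.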
  19. Well, you seem to live in a world in which you make up your own "realities" about what I am saying and that's okay, but posting insinuations based on those fantasies is a whole other thing. Regardless, such fantasies have no relation to the fatal problems with Yedlin's test. However, just to make it clear, I never suggested that chroma subsampling should be avoided in resolution tests. In fact, I implied the opposite by pointing out that 4:2:0 cameras are more common than the Alexa65, and thus, according to your logic, we shouldn't use an exceedingly uncommon camera such as the Alexa65: Of course, chroma subsampling essentially is a reduction in color resolution (and, hence, color depth), but it doesn't really affect a resolution test, as long as the resulting images have enough resolution for the given resolution comparison. Again, none of this discussion on cameras/sensors has any bearing on the fatal problems with Yedlin's test. Classic projection and irony. Slipshod tests like Yedlin's are for those who need confirmation of their bias.
  20. What I get is that there are some harebrained notions floating around this thread on what cameras can and cannot be used in a resolution test, notions which have absolutely no bearing on the fatal flaws suffered in Yedlin's test. No. I rejected the test primarily because: And again, the test doesn't "involve" interpolation: Incidentally, Yedlin's test is titled "Camera Resolutions," because it is intended to test resolution -- not post "interpolation." A 1-to-1 pixel match is required for a resolution test, and Yedlin spent over four minutes explaining the importance of a 1-to-1 pixel match at the very beginning of his video. In addition, you even explained and defended Yedlin's supposed 1-to-1 match... that is, until I showed that Yedlin failed to achieve it. Yedlin's haphazard test accidentally blurred the "spatial" resolution of the images, which makes it impossible to discern any possible difference between resolutions (Yedlin's downscaling/upscaling convolutions notwithstanding). What is it that you do not understand about this simple, basic fact? Not that this matters to the fact that Yedlin's test is fatally flawed, but you seem stuck on the notion that the results of using a camera with an interpolated sensor vs. using a camera with a non-interpolated sensor will somehow differ resolution-wise -- even if both cameras possess the very same effective resolution. However, the reality is that the particular camera that is used is irrelevant, as long as the captured images meet the resolution requirements for the comparison. If a non-interpolated camera sensor has the same effective resolution as a sensor that is interpolated, then both cameras with both such sensors are equally qualified to shoot images for the same resolution test. No, I didn't. You are making that up. In addition, how do you make sense of your notion that a 6K camera necessarily involves interpolation while a 4K camera doesn't involve interpolation? 
Well, then who cares about Yedlin's test that used an exceedingly uncommon camera? You said: No, it doesn't. The most common digital video cameras shoot with a 4:2:0 color (chroma) subsample. However, it is unlikely that anyone who would go to the trouble and expense to rent an Alexa65 package (along with all of the effort to achieve an appropriate high-end production value) would do any kind of subsampling at all. So, no -- the Alexa65 doesn't "share the same chroma subsampling properties of most cameras." The actual number on how many projects are down-sampled is likely unknown, but, in regards to down-sampling, again: So, Yedlin's results are further misleading in the sense that his "2K" image down-sampled from 6K is not the same as the more common down-sample of 4K to 2K, nor is it the same as 2K shot with a 2K camera. Using your logic from earlier in the thread, the results from an uncommon, high-end Alexa65 cannot possibly be applicable to those from the cameras that most of us use, because our cameras are lower quality. On the other hand, the Ursa 12K -- with a non-Bayer sensor -- is also a high-quality imaging device with twice the resolution of the Alexa65. Are you saying that the Ursa 12K -- with a non-Bayer sensor -- isn't good enough for a resolution test? Regardless, such notions of what cameras can and cannot be used in a resolution test have absolutely no bearing on the fatal flaws suffered in Yedlin's test. Really? Well, it seems like the red herring is actually your hypothetical test with a Foveon sensor that "does not share the same colour subsampling properties of most cameras," because the title of Yedlin's video is "Camera Resolutions," which indicates that his test compares differences in "perceptible" resolution -- not differences in color nor color "subsampling."
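For reference, the 4:2:0 scheme mentioned above keeps full-resolution luma but stores only one chroma sample per 2x2 pixel block, quartering the colour resolution. A minimal sketch on a toy chroma plane (a simple block-average is assumed; real encoders may use different siting and filtering):

```python
def subsample_420(chroma):
    """Keep one chroma sample per 2x2 block (block average), as 4:2:0 does."""
    h, w = len(chroma), len(chroma[0])
    return [
        [
            (chroma[y][x] + chroma[y][x + 1]
             + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 4x4 chroma plane with pixel-level colour detail.
plane = [
    [0, 255, 0, 255],
    [255, 0, 255, 0],
    [0, 255, 0, 255],
    [255, 0, 255, 0],
]

small = subsample_420(plane)
print(small)                       # pixel-level colour detail averaged away
print(len(small) * len(small[0]))  # 4 chroma samples where there were 16
```

This is why chroma subsampling is a colour-resolution reduction rather than a spatial-resolution one: the luma plane, which carries the detail a resolution test measures, is untouched.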
  21. If we follow your reasoning, then who cares about Yedlin's comparison? He used an exceedingly uncommon camera for his test. You have to rent an Alexa65, and doing so is extremely expensive. Furthermore, on my entire continent there are only five locations where the Alexa65 is available for rent. I am not sure what cameras in your mind are uncommon, but I would bet that there are more Foveon cameras in use than the Alexa65. Likewise, with X-Trans cameras, scanning back cameras and Ursa 12K's. You said yourself, "I'm not watching that much TV shot with a medium format camera," and the Alexa65 is a medium format camera! So, why should anyone care about Yedlin's test when he used such an uncommon camera? You seem stuck on the notion that the results of using an uncommon camera vs. using a common camera will somehow differ resolution-wise -- even if both cameras possess the very same effective resolution. However, the reality is that the particular camera that is used is irrelevant, as long as the captured images meet the resolution requirements for the comparison. If an uncommon camera has the same effective resolution as a common camera, then both cameras are equally qualified to shoot images for the same resolution test. It's a simple concept that's easy to grasp. Well, please explain how your notion of an SD resolution comparison is applicable to our discussion on a comparison of higher resolutions.
  22. Setting aside the fact that this is the first time that you have made that particular claim regarding "the whole point of the test," a 1-to-1 pixel match is crucial for proper perceptibility in a resolution comparison. Yedlin spent over four minutes in the beginning of his video explaining that fact, and you additionally explained and defended the 1-to-1 pixel match. @slonick81 and I were able to achieve a 1-to-1 pixel match within a compositor viewer, but Yedlin did not do so. So, Yedlin failed to provide the crucial 1-to-1 pixel match required for proper perceptibility of 2K and higher resolutions. Right. I keep missing the point that the comparison is between 2K and higher resolutions: You have fascinating and imaginative interpretation skills. What gave you the notion that anyone referred to, "a camera that no-one ever uses?" Again, it is irrelevant whether the starting images were captured with a common or uncommon camera, as long as those images are sharp enough and of a high enough resolution, which I previously indicated here: So, as long as there is enough sharpness and resolution in the starting images, the resolution test is "camera agnostic" -- as it should be. In addition, the tests should be "post image processing" agnostic, with no peculiar nor unintended/uncontrolled side-effects. Unfortunately, the side-effects of pixel blending and post interpolation are big problems with Yedlin's test, so the results of his comparison are not "post image processing" agnostic and only apply to his peculiar post set-up and rendering settings, whatever they may be. Now, what was that you said about my "missing the point" on "determining if there is a difference between 2K and some other resolution?"
  23. Well, I posted an image above of a compositor with an image displayed within its viewer, which was set at 100%. Unlike Yedlin, I think that I was able to achieve a 1-to-1 pixel match, but, by all means, please scrutinize it. There is no need for any such resolution tests to apply to any particular camera or display. I certainly never made such a claim.
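Whether a viewer is truly at a 1-to-1 pixel match can be checked mechanically: at 100%, every displayed pixel equals exactly one source pixel, while any "fit" scaling produces blended values that exist nowhere in the source. A sketch of that check, with plain lists standing in for a screenshot row and a source row (the linear-blend resampler here is a stand-in, not Nuke's or Natron's actual filter):

```python
def display_at(source, zoom):
    """Toy viewer: at zoom 1.0 pass pixels through; otherwise blend
    the two nearest source pixels (crude linear resample)."""
    if zoom == 1.0:
        return list(source)
    out = []
    n = int(len(source) * zoom)
    for i in range(n):
        pos = i / zoom
        lo = int(pos)
        hi = min(lo + 1, len(source) - 1)
        frac = pos - lo
        out.append(source[lo] * (1 - frac) + source[hi] * frac)
    return out

source = [0, 255, 0, 255, 0, 255, 0, 255]

exact = display_at(source, 1.0)
fitted = display_at(source, 0.75)   # e.g. a window "fit to viewer" scale

print(exact == source)                       # True  -> genuine 1-to-1 match
print(all(p in (0, 255) for p in fitted))    # False -> pixels were blended
```

The second check is the telltale: once intermediate values like 170 appear, the viewer has resampled the image, and any conclusions about per-pixel resolution drawn from that display are suspect.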
  24. Wait. You, too, are concerned about slipshod testing? Then, how do you reconcile Yedlin's failure to achieve his self-imposed (and required) 1-to-1 pixel match?... you know, Yedlin's supposed 1-to-1 pixel match that you formerly took the trouble to explain and defend: As I have shown, Yedlin did not get a 1:1 view of the pixels, and now it appears that the 1:1 pixel match is suddenly unimportant to one of us. To mix metaphors, one of us seems to have changed one's tune and moved the goal posts out of the stadium. Also, how do you reconcile Yedlin’s failure to even address and/or quantify the effect of all of the pixel blending, interpolation, scaling and compression that occur in his test? There is no way for us to know to what degree the "spatial resolution" is affected by all of the complex imaging convolutions of Yedlin's test. There is absolutely no need for me to try and make... such an attempt. I merely asked you to clarify your argument regarding sensor resolution, because you have repeatedly ignored my rebuttal to your "Bayer interpolation" notion, and because you have also mentioned "sensor scaling" several times. Some cameras additionally upscale the actual sensor resolution after the sensor is interpolated, so I wanted to make sure that you were not referring to such upscaling, and, hence, ignoring my repeated responses. Absolutely. I haven't been posting numerous detailed points with supporting examples. In addition, you haven't conveniently ignored any of those points. Demosaicing is not "just like" upscaling an image. Furthermore, the results of demosaicing are quite the opposite from the results of the unintended pixel blending/degradation that we see in Yedlin's results. Also, not that it actually matters to testing resolution, but, again: some current cameras do not use Bayer sensors; some cameras have color sensors that don't require interpolation; monochrome sensors don't need interpolation. 
It's not just "technically" correct -- it IS correct. Not everyone shoots with a Bayer sensor camera. What is "missing the point" (and is also incorrect) is your insistence that Bayer sensors and their interpolation somehow excuse Yedlin's failure to achieve a 1-to-1 pixel match in his test. You are incorrect. Not all camera sensors require interpolation. No, it is not a problem for my argument, because CFA interpolation is irrelevant and very different from the unintentional pixel blending suffered in Yedlin's comparison. Yedlin's failure to achieve a 1-to-1 pixel match certainly invalidates his test, but that isn't my entire argument (on which I have corrected you repeatedly). I have made two major points: No. The starting images for the comparison are simply the starting images for the comparison. There are many variables that might affect the sharpness of those starting images: they may have been shot with softer vintage lenses, or with a diffusion filter, or, if they were taken with a sensor that was demosaiced, they might have been processed with a coarse or a fine algorithm. None of those variables matter to our subsequent comparison, as long as the starting images are sharp enough to demonstrate the potential discernibility between the different resolutions being tested. You don't seem to understand the difference between sensor CFA interpolation and the unintended and uncontrolled pixel blending introduced by Yedlin's test processes, which is likely why you equate the two. The sensor interpolation is an attempt to maintain the full, highest resolution possible utilizing all of the sensor's photosites (plus such interpolation helps avoid aliasing). In contrast, Yedlin's unintended and uncontrolled pixel blending degrades and "blurs" the resolution. With such accidental pixel "blurring," a 2K file could look like a 6K file, especially if both images come from the same 6K file and if both images are shown at the same 4K resolution. 
Regardless, the resolution of the camera's ADC output or the camera's image files is a given property that we must accept as the starting resolution for any tests, and, again, some camera sensors do not require interpolation. Additionally, with typical downsampling (say, from 8K to 4K, or from 6K to 4K, or from 4K to HD), the CFA interpolation impacts the final "spatial" resolution significantly less than that of the original resolution. So, if we start a comparison with a downsampled image as the highest resolution, then we avoid the influence of sensor interpolation. On the other hand, if CFA interpolation impacts resolution (as you claim), then shooting at 6K and then downsampling to 2K will likely give different results than shooting at 6K and separately shooting the 2K image with a 2K camera. This is because the interpolation cell area of the 2K sensor is relatively coarser/larger within the frame than that of the 6K interpolation cell area. So, unfortunately, Yedlin's comparison doesn't apply to actually shooting the 2K image with a 2K camera. Except you might also notice that the X-Trans sensor does not have a Bayer matrix. You keep harping on Bayer sensors, but the Bayer matrix is only one of several CFAs in existence. By the way, the Ursa 12K uses an RGBW sensor, and each RGBW pixel group has 6 red photosites, 6 green photosites, 6 blue photosites and 18 clear photosites. The Ursa 12K is not a Bayer sensor camera. It is likely that you are not aware of the fact that if an RGB sensor has enough resolution (Bayer or otherwise), then there is no need for the type of interpolation that you have shown. "Guess what that means" -- there are already non-Foveon, single sensor, RGB cameras that need no CFA interpolation. 
However, regardless of whether or not Yedlin's source images came from a sensor that required interpolation, Yedlin's unintended and uncontrolled pixel blending ruins his resolution comparison (along with his convoluted method of upscaling/downscaling/Nuke-viewer "cropping-to-fit").

You recklessly dismiss many high-end photographers who use scanning backs. Also, linear scanning sensors are used in many other imaging applications, such as film scanners, tabletop scanners, special-effects imaging, etc.

That's interesting, because the camera that Yedlin used for his resolution comparison (you know, the one which you declared is "one of the highest quality imaging devices ever made for cinema")... well, that camera is an Alexa65 -- a medium format camera. Insinuating that medium format doesn't matter is yet another reckless dismissal.

Similarly reckless is Yedlin's dismissal of shorter viewing distances and wider viewing angles. Here is a chart that can help one find the minimum viewing distance at which one does not perceive individual display pixels (best viewed at 100%, 1-to-1 pixels):

If any of the green lines appear as a series of tiny green dots (or tiny green "slices") instead of a smooth green line, you are discerning the individual display pixels. All of those who see the tiny green dots are viewing their displays at what Yedlin dismisses as an uncommon "specialty" distance -- your viewing setup is irrelevant according to Yedlin. To make the green lines smooth, merely back away from the monitor (or get a monitor with a higher resolution).

Wait a second!... what happened to your addressing the Foveon sensor? How do you reconcile the existence of the Foveon sensor with your rabid insistence that all camera sensors require interpolation? By the way, demosaicing the X-Trans sensor doesn't use the same algorithm as that of a Bayer sensor.

I have responded directly to almost everything that you have posted.
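For anyone curious where such a "minimum viewing distance" comes from, here is a hedged Python sketch using the common one-arcminute visual-acuity rule of thumb (the acuity figure and the 24-inch/1080p monitor numbers are my own assumptions, not values taken from the chart):

```python
import math

def min_smooth_distance(pixel_pitch_mm):
    """Distance (mm) beyond which one pixel subtends < 1 arcminute."""
    one_arcmin = math.radians(1 / 60)
    return pixel_pitch_mm / math.tan(one_arcmin)

# Assumed example: a 24-inch 1080p monitor, ~531 mm wide / 1920 px
pitch_mm = 531 / 1920
print(min_smooth_distance(pitch_mm))  # ~951 mm -- sit closer than this
                                      # and the tiny green dots appear
```

In other words, anyone working within about a meter of an ordinary 1080p desktop monitor is already inside the range that Yedlin waves off as a "specialty" viewing distance.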
Perhaps you and your friend should actually read my points and try to comprehend them. Yedlin didn't "use" interpolation -- the unintentional pixel blending was an accident that corrupts his tests. Blending pixels blurs the "spatial" resolution, and such blurring can make 2K look like 6K. The amount of blur is a matter of degree. To what degree did Yedlin's accidental pixel blending blur the "spatial" resolution? Of course, nobody can answer that question, as that accidental blurring cannot be quantified by Yedlin nor by anyone else. If only Yedlin had ensured the 1-to-1 pixel match that he touted and claimed to have achieved... However, even then we would still have to contend with all of the downscaling/upscaling/crop-to-fit/Nuke-viewer convolutions. I honestly can't believe that I am having to explain all of this.

Yes. There is no contradiction between those two statements. Sensor CFA interpolation is very different from the accidental pixel blending that occurs during a resolution test. In fact, such sensor interpolation yields the opposite effect from pixel blending -- sensor interpolation attempts to increase actual resolution, while pixel blending "blurs" the spatial resolution. Furthermore, sensor CFA interpolation is not always required, and we have to accept a given camera's resolution inherent in the starting images of our test (interpolated sensor or not). Yedlin's accidental blurring of the pixels is a major problem that invalidates his resolution comparison. In addition, all of the convoluted scaling and display peculiarities that Yedlin employs severely skew the results.

Well, it appears that you had no trouble learning Natron!

That could be a problem if the viewer is not set at 100%.

I am not sure why we should care about that nor why we need to reformat. Why did you do all of that?
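The difference between a true 1-to-1 pixel match and a blending resample can be shown in a few lines of Python (a toy 1-D sketch of my own, not Nuke's or Natron's actual resampling code): integer nearest-neighbor scaling keeps every original pixel value intact, while a linear resample manufactures in-between values that exist nowhere in the source.

```python
def nn_upscale(row, factor):
    """Integer nearest-neighbor upscale: every source value survives."""
    return [v for v in row for _ in range(factor)]

def linear_resample(row, new_len):
    """Linear resample: output values are blends of neighboring pixels."""
    out = []
    for i in range(new_len):
        x = i * (len(row) - 1) / (new_len - 1)
        lo = int(x)
        hi = min(lo + 1, len(row) - 1)
        t = x - lo
        out.append(row[lo] * (1 - t) + row[hi] * t)
    return out

src = [0, 255, 0, 255]
print(nn_upscale(src, 2))       # only the original 0s and 255s appear
print(linear_resample(src, 7))  # blended values such as 127.5 appear
```

The first result is verifiable pixel for pixel against the source; the second is not -- and it is the second kind of operation that Yedlin's pipeline accidentally let in.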
All we need to see is the pixel chart in the viewer, which should be set at 100%, just like this image (view image at 100%, 1-to-1 pixels):

That could cause a perceptual problem if the viewer is not set at 100%. I converted the pixel chart to a PNG image.

I perceive an LED screen, not a projection.

It seems that the purpose of Yedlin's comparison is to test whether there is a discernible difference between higher resolutions -- not to show how CG artists work.

This statement seems to contradict Yedlin's confirmation bias. In what should have been a straightforward, fully framed, 1-to-1 resolution test, Yedlin shows his 1920x1080 "crop-to-fit" section of a 4K image within a mostly square Nuke viewer (which results in an additional crop), and then he outputs everything to a 1920x1280 file that suffers from accidental pixel blending. It's a crazy and slipshod comparison.

That's actually two simple things. As I have said to you before, I agree with you that a 1:1 pixel match is possible in a compositor viewer, and Yedlin could have easily achieved a 1-to-1 pixel match in his final results, as he claimed that he did. Whether or not Yedlin's Nuke viewer is showing 1:1 is still unknown, but now we know that the Natron viewer can do so. In regards to the round-multiple scaling not showing any false details, I am not sure that your images are conclusive. I see distortions in both images, and pixel blending in one.

Nuke is still a question mark in regards to the 1-to-1 pixel match. A "general impression" is not conclusive proof, and Yedlin's method and execution are flawed.

Again, I make no claim for or against more resolution. What I see as "wrong and flawed" is Yedlin's method and execution of his resolution comparison.

Likewise, but it appears that you have an incorrect impression of what I argue.
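If anyone wants to verify that their own viewer is truly at 100%, a single-pixel checkerboard works the same way as the pixel chart. Here is a hypothetical Python helper (the file name and dimensions are my own choices) that writes one as a plain-text PGM: at exactly 1-to-1 pixels it reads as a uniform fine grid, while any viewer rescaling instantly shows up as moiré or uneven gray banding.

```python
def write_checkerboard_pgm(path, width=256, height=256):
    """Write a single-pixel black/white checkerboard as a plain (P2) PGM."""
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")
        for y in range(height):
            row = ["255" if (x + y) % 2 else "0" for x in range(width)]
            f.write(" ".join(row) + "\n")

write_checkerboard_pgm("pixel_chart.pgm")
```

Open the resulting file in the compositor's viewer, set the zoom to 100%, and any blending performed by the viewer itself becomes immediately visible.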