
tupp

Members
  • Posts

    1,148
  • Joined

  • Last visited

Reputation Activity

  1. Like
    tupp reacted to slonick81 in Camera resolutions by cinematographer Steve Yeldin   
    Sure. This exact image has heavy compression artifacts and I was unable to find the original chart, but I got the idea, recreated these pixel-wide colored "E"s and did the same upscale-view-grab pattern. And, well, it does preserve sharp pixel edges, no subsampling.
    I don't have access to Nuke right now, I'm not going to mess with warez in the middle of a work week for the sake of an internet dispute, and I'm not 100% sure about the details, but last time I was compositing something in Nuke it had no problems with 1:1 view, especially considering I was making titles and credits as well. And what Yedlin is doing - comparing at 100%, 1:1 - looks right.
    Yedlin is not questioning the capability of a given codec/format to store a given number of resolution lines. He is discussing _perceived_ resolution. That means the image should be a) projected and b) well, perceived. So he chooses common ground - 4K projection - crops out a 1:1 portion of it and cycles through different cameras. And his idea sounds valid: starting from a certain point, digital resolution is less important than other factors that exist before (optical system resolving power, DoF/motion blur, AA filter) and after (rolling shutter, processing, sharpening/NR, compression) the resolution is created. He doesn't state that there is zero difference, and he doesn't touch special technical cases like VFX or intentional heavy reframing in post, where additional resolution may be beneficial.
    The whole idea of his work: past a certain point of technical resolution, the perceived resolution of real-life images does not suffer from upsampling and does not benefit from downscaling that much. For example, on the second image I added a numerically subtle transform to the chart in AE before grabbing the screen: +5% scale, 1° rotation, slight skew - essentially what you will get with nearly any stabilization plugin, and it's a mess in terms of technical resolution. But we do this here and there without any dramatic degradation to real footage.
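    (A minimal sketch of that kind of "numerically subtle" transform, assuming Python with numpy and Pillow rather than slonick81's actual AE project; the chart and parameters are stand-ins. A 1° rotation, +5% scale and slight skew force every pixel through a resampling filter, so a pure black/white 1-px chart picks up many intermediate grey levels even though the picture still reads fine.)

        import numpy as np
        from PIL import Image

        # stand-in chart: 1-px black/white vertical rulings
        chart = np.zeros((1080, 1920), dtype=np.uint8)
        chart[:, ::2] = 255
        img = Image.fromarray(chart)

        # roughly the transform described: 1 deg rotation, +5% scale, slight skew
        out = img.rotate(1.0, resample=Image.BILINEAR)
        out = out.resize((int(1920 * 1.05), int(1080 * 1.05)), Image.BILINEAR)
        out = out.transform(out.size, Image.AFFINE, (1, 0.02, 0, 0, 1, 0),
                            resample=Image.BILINEAR)

        levels = np.unique(np.asarray(out))
        print(len(levels), "distinct grey levels (the chart started with 2)")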


  2. Like
    tupp reacted to slonick81 in Camera resolutions by cinematographer Steve Yeldin   
    The attached image shows a 1px b/w grid, generated in AE in an FHD sequence, exported in ProRes, upscaled to UHD with ffmpeg (the "-vf scale=3840:2160:flags=neighbor" option), imported back into AE, overlaid over the original in the same composition, magnified to 200% in the viewer, screen-grabbed and enlarged another 2x in PS with proper scaling settings. And no subsampling is present, as you can see. So it's totally possible to upscale an image or show it in 1:1 view without modifying the original pixels - just don't use fractional enlargement ratios and complex scaling. Not sure about Natron though - never used it. Just don't forget to "open image in new tab" and to view at original scale.
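    (A quick way to check the same "no subsampling" claim without AE - a sketch in Python/numpy under my own assumptions, not slonick81's actual files: at an exact integer ratio, nearest-neighbor upscaling, which is what ffmpeg's flags=neighbor does, only repeats pixels, so the original values survive untouched.)

        import numpy as np

        rng = np.random.default_rng(0)
        fhd = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)  # stand-in FHD frame

        # 2x nearest-neighbor upscale: each pixel becomes a 2x2 block of itself
        uhd = fhd.repeat(2, axis=0).repeat(2, axis=1)

        assert uhd.shape == (2160, 3840)
        assert np.array_equal(uhd[::2, ::2], fhd)  # original pixel values preserved exactly
        print("integer-ratio nearest-neighbor upscale: no pixel values altered")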
    But that's real life - most productions have footage of different resolutions on input (A/B/drone/action cams) and multiple resolutions on output - at least UHD/FHD for streaming/TV and DCI-something for DCP. So it's all about scaling and matching the look, and that's the subject of Yedlin's research.
    What's more, even in the rare "resolution preserving" cases where the filming resolution perfectly matches the projection resolution, there are such things as lens aberration/distortion correction, image stabilization, rolling-shutter jello removal and reframing in post. And it usually works well because of the reasons covered by Yedlin.
    And sometimes resolution, processing and scaling play funny tricks out of nothing. On my last project I was doing some simple clean-ups: Red Helium 8K shots, exported to me as DPX sequences. 80% of the processed shots were rejected by the colourist and DoP as "blurry, not fitting the rest of the footage". Long story short, the DPX files had been rendered by a technician at full-res/premium-quality debayer, while the colourist and DoP had been grading the 8K at half res, scaled down to 2K big-screen projection - and that was giving more punch and microcontrast on the large screen than the higher-quality, higher-resolution DPXes with the same grading and projection scaling.

  3. Like
    tupp reacted to KnightsFan in Camera resolutions by cinematographer Steve Yeldin   
    I'm going to regret getting involved here, but @tupp I think you are technically correct about resolution in the abstract. But I think that Yedlin is doing his experiments in the context of real cameras and workflows, not in the abstract.
    I mean, it's completely obvious to anyone who has ever played a video game that there is a huge, noticeable difference between 4k and 2k, once we take out optical softness, noise, debayering artifacts, and compression. If we're debating differences between Resolutions with a capital R, let's answer with a resounding "Yes, it makes a difference" and move on. The debate only makes sense in the context of a particular starting point and workflow, because with actual resolution on perfect images the difference is very clear.
    And yeah, maybe Yedlin isn't 100% scientific about it, maybe he uses incorrect terms, and I think we all agree he failed to tighten his argument into a concise presentation. I don't really know if discussing his semantics and presentation is as interesting as trying to pinpoint what does and doesn't matter for our own projects... but if you enjoy it carry on 🙂
    I will say that for my film projects, I fail to see any benefit past 2k. I've watched my work on a 4k screen, and it doesn't really look any better in motion. Same goes for other movies I watch. 720p to 1080p, I appreciate the improvement. But 4k really never makes me enjoy it any more.
  4. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    I think perhaps the largest difference between video and video games is that video games (and any computer generated imagery in general) can have a 100% white pixel right next to a 100% black pixel, whereas cameras don't seem to do that.
    In Yedlin's demo he zooms into the edge of the blind and shows the 6K straight from the Alexa with no scaling, and the "edge" is actually a gradient that takes maybe 4-6 pixels to go from dark to light.  I don't know if this is to do with lens limitations, sensor diffraction, OLPFs, or debayering algorithms, but it seems to match everything I've ever shot.
    It's not a difficult test to do: take any camera that can shoot RAW and put it on a tripod, set it to base ISO and aperture priority, take it outside, open the aperture right up, focus on a hard edge that has some contrast, stop down by 4 stops, take the shot, then look at it in an image editor and zoom way in to see what the edge looks like.
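    (A hedged sketch of the "zoom way in" step, assuming Python with Pillow/numpy and a hypothetical export of the RAW shot named edge_test.png; the row choice is an assumption too. It just prints the luminance values across the steepest step in one row so you can count how many pixels the edge actually takes.)

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("edge_test.png").convert("L"))   # hypothetical developed still
        row = img.shape[0] // 2                                       # assume the edge crosses mid-frame
        col = int(np.argmax(np.abs(np.diff(img[row].astype(int)))))   # steepest step in that row

        print(img[row, max(col - 5, 0):col + 6])   # values around the edge: usually a ramp, not a hard step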
    In terms of Yedlin's demo, I think the question is whether having resolution over 2K is perceptible under normal viewing conditions.  When he zooms in a lot it's quite obvious that there is more resolution there, but the question isn't whether more resolution has more resolution - we know that of course it does, and VFX people want as much of it as possible - but whether audiences can see the difference.  I'm happy from the demo to say that it's not perceptually different.
    Of course, it's also easy to run Yedlin's test yourself at home.  Simply take a 4K video clip and export it at native resolution and at 2K; you can export it lossless if you like.  Then bring both versions onto a 4K timeline and watch it on a 4K display; you can even cut them up and put them side-by-side or do whatever you want.  If you don't have a camera that can shoot RAW, then take a timelapse with RAW still images and use that as the source video, or download some sample footage from RED, which has footage up to 8K RAW available free from their website.
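    (One way to prepare those two versions - a sketch under my own assumptions, using hypothetical file names and assuming ffmpeg with libx264 is installed; this is not Yedlin's pipeline. The "-crf 0" setting keeps both exports lossless so only the 2K round trip differs.)

        import subprocess

        src = "clip_4k.mov"   # hypothetical 3840x2160 source clip

        # native-resolution lossless export
        subprocess.run(["ffmpeg", "-i", src, "-c:v", "libx264", "-crf", "0",
                        "native_4k.mp4"], check=True)

        # 2K version, then scaled back up to 4K for the timeline comparison
        subprocess.run(["ffmpeg", "-i", src, "-vf", "scale=1920:1080",
                        "-c:v", "libx264", "-crf", "0", "half_2k.mp4"], check=True)
        subprocess.run(["ffmpeg", "-i", "half_2k.mp4", "-vf", "scale=3840:2160",
                        "-c:v", "libx264", "-crf", "0", "back_to_4k.mp4"], check=True)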
  5. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    His test applies to the situations where there is image scaling and compression involved, which is basically every piece of content anyone consumes.
    If you're going to throw away an entire analysis based on a single point, then have a think about this:
    1<0 and the sky is blue.
    Uh oh - now I've said that 1<0, which clearly it isn't, so the sky can't be blue, because everything I said must now logically be wrong and cannot be true!
  6. Like
    tupp reacted to John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    Do you know the word humility? Yedlin's not just any old dude on the internet... the guy's an industry insider with butt-loads of films to back it up. I think he might know something on the topic of resolution.
  7. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    @tupp
    You raise a number of excellent points, but have missed the point of the test.
    The overall context is that for a viewer, sitting at a common viewing distance, the difference won't be discernible.  This is why the comparison is about perceptual resolution and not actual resolution.
    Yedlin claims that the video will appear 1:1, which I took to mean that it wouldn't be a different size, and which you have taken to mean that every pixel on his computer will appear as a single pixel on your/my computer and will not have any impact on any of the surrounding pixels.
    Obviously this is false, as you have shown with your blown-up screen captures.  That does not prove scaling though.  As you showed, two viewers rendered different outputs, and I tried it in QuickTime and VLC and got two different results again.
    Problem number one is that the viewing software is altering the image (or at least all but one that we tried).
    Problem number two is that we're both viewing the file from Yedlin's site, which is highly compressed.  In fact, it is an h264 stream, 2.32 GB, something like 4 Mbps.  The uncompressed file would have been 1192 Mbps and on the order of 600 GB, and not much smaller had he used lossless compression, so completely beyond any practical consideration.  Assuming I've done my maths correctly, that's a compression ratio of something like 250:1 - a ratio at which you couldn't even hope for a pixel-not-destroyed image.
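    (A rough sanity check of those numbers - my own assumptions, not kye's stated ones: a 1920x1080, 8-bit RGB, 24 fps stream and the video's roughly 1:06:54 runtime for the 2.32 GB download.)

        width, height, bits_per_pixel, fps = 1920, 1080, 24, 24
        duration_s = 1 * 3600 + 6 * 60 + 54        # ~1:06:54 runtime
        file_size_bits = 2.32e9 * 8                # 2.32 GB download

        uncompressed_bps = width * height * bits_per_pixel * fps
        delivered_bps = file_size_bits / duration_s

        print(f"uncompressed: {uncompressed_bps / 1e6:.0f} Mbps")                      # ~1194 Mbps
        print(f"uncompressed size: {uncompressed_bps * duration_s / 8 / 1e9:.0f} GB")  # ~600 GB
        print(f"delivered: {delivered_bps / 1e6:.1f} Mbps")                            # ~4.6 Mbps
        print(f"compression ratio: {uncompressed_bps / delivered_bps:.0f}:1")          # ~260:1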
    The reason I bring up these two points is that they will also be true for any media consumed by the viewer that the test is about.
    There's no point arguing that his test is invalid because it doesn't apply to someone watching an uncompressed video stream on a screen that is significantly larger than the THX and SMPTE recommendations suggest, because, frankly, who gives a toss about that person?  I'm not that person, probably no-one else here is that person, and if you are that person, then good for you, but it's irrelevant.
    You made a good point about 3CCD cameras, which I'd forgotten about, and even if you disagree about debayering and mismatched photosites and pixels, none of that stuff matters if the image is going to get compressed for digital distribution and then decoded by any number of decoders that will generate a different pixel-to-pixel readout.
    Essentially you're arguing about how visible something is at the step before it gets put through a cheese-grater on its way to the people who actually watch the movies and pay for the whole thing.
    As for why they make higher resolution cameras, there are two main reasons I can see:
    The first is that VFX folks want as much resolution as possible as it helps keep things perceptually flawless after they mess with them.  This is likely the primary reason that companies like ARRI are putting out higher resolution models.
    The second reason is that electronics companies are companies, and in a capitalist society, companies exist to make money, and to do that you need to make people keep buying things, which is done through planned obsolescence and incremental improvements, such as getting everyone to buy 4K TVs, and then 4K cameras to go with those 4K TVs.  This is likely the driver of all the camera manufacturers who also sell TVs, which is....  basically every consumer camera company.  Not a whole lot of people buying a GH5 are doing VFX with it, although cropping in post is one relatively common exception to that.
    So, although I disagree with you on some of the technical aspects along the way, the fact that his test isn't "1:1" in whatever ways you think it should be is irrelevant, because people watch things after compression, after being decoded by unknown algorithms.  
    That's not even taking into account image-processing witchcraft like Smooth Motion, which completely invents entirely new frames that make up half of what the viewer will actually see, or uncalibrated displays, etc.  Yes, these things don't exist in theatres, but how many hours do you spend watching something in a theatre vs at home?  The average person spends almost all their time watching on a TV at home, so the theatre percentage is pretty small.
  8. Like
    tupp got a reaction from John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    Keep in mind that resolution is important to color depth.  When we chroma subsample to 4:2:0 (as is likely with your A7SII example), we throw away chroma resolution and thus reduce color depth.  Of course, compression also kills a lot of the image quality.
     
     
    Yedlin also used the term "resolute" in his video.  I am not sure that it means what you and Yedlin think it means.
     
     
    It is impossible for you (the viewer of Yedlin's video) to see 1:1 pixels (as I will demonstrate), and it is very possible that Yedlin is not viewing the pixels 1:1 in his viewer.
    Merely zooming "2X" does not guarantee that either he or we are seeing 1:1 pixels.  That is a faulty assumption.
     
     
    Well, it's a little more complex than that.
    The size of the pixels that you see is always the size of the pixels of your display, unless, of course, the zoom is sufficient to render the image pixels larger than the display pixels.  Furthermore, blending and/or interpolation of pixels is suffered if the image pixels do not match 1:1 with those of the display, or if the image pixels are larger than those of the display while not being a mathematical square of the display pixels.
    Unfortunately, all of the images that Yedlin presents as 1:1 most definitely are not a 1:1 match, with the pixels corrupted by blending/interpolation (and possibly compression).
     
     
    When Yedlin zooms in, we see a 1:1 pixel match between the two images, so there is no actual difference in resolution in that instance -- an actual resolution difference is not being compared here, nor in most of the subsequent "1:1" comparisons.
    What is being compared in such a scenario is merely scaling algorithms/techniques.  However, any differences even in those algorithms get hopelessly muddled due to the fact that the pixels that you (and possibly Yedlin) see are not actually a 1:1 match, and are thus additionally blended and interpolated.
    Such muddling destroys any possibility of making a true resolution comparison.
     
     
    No.  Such a notion is erroneous, as the comparison method is inherently faulty and the image "pipeline" Yedlin used is unfortunately leaky and septic (as I will show).
    Again, if one is to conduct a proper resolution comparison, the pixels from the original camera image should never be blended:  an 8K captured image should be viewed on an 8K monitor; a 6K captured image should be viewed on a 6K monitor; a 4K captured image should be viewed on a 4K monitor; a 2K captured image should be viewed on a 2K monitor; etc.
    Scaling algorithms, interpolation and blending of the pixels corrupt the testing process.
     
     
    I thought that I made it clear in my previous post.  However, I will paraphrase it so that you might understand what is actually going on:  There is no possible way that you ( @kye ) can observe the comparisons with a 1:1 pixel match to those of the images shown in Yedlin's node editor viewer.
    In addition, it is very possible that even Yedlin's own viewer when set at 100% is not actually showing a 1:1 pixel match to Yedlin.
    Such a pixel mismatch is a fatal flaw when trying to compare resolutions.  Yedlin claims that he established a 1:1 match, because he knows that it is an important requirement for comparing resolutions, but he did not achieve a 1:1 pixel match.
    So, almost everything about his comparisons is meaningless.
     
     
    Again, Yedlin is not actually comparing resolutions in this instance.  He is merely comparing scaling algorithms and interpolations here and elsewhere in his video, scaling comparisons which are crippled by his failure to achieve a 1:1 pixel match in the video.
    Yedlin could have verified a 1:1 pixel match by showing a pixel chart within his viewer when it was set to 100%.
    Here are a couple of pixel charts:

    If the charts are displayed at 1:1 pixels, you should easily observe with a magnifier that all of the black pixel rulings that are integers (1, 2, 3, etc.) are cleanly defined with no blending into adjacent pixels.  On the other hand, all of the black pixel rulings that are non-integers (1.3, 1.6, 2.1, 2.4, 3.3, etc.) should show blending on their edges with a 1:1 match.
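    (A minimal stand-in for such a chart - my own sketch in Python/Pillow, not tupp's actual charts: alternating 1-pixel black/white columns. Displayed at a true 1:1 pixel match, only pure black and pure white survive; any non-integer scaling or filtering along the way introduces intermediate grey values.)

        import numpy as np
        from PIL import Image

        w, h = 512, 128
        chart = np.zeros((h, w), dtype=np.uint8)
        chart[:, ::2] = 255                              # 1-px vertical rulings
        Image.fromarray(chart, mode="L").save("one_px_ruling.png")

        # after any display/screen-grab round trip, reload the grab and check
        # that only 0 and 255 are present (no blended greys)
        grabbed = np.asarray(Image.open("one_px_ruling.png").convert("L"))
        print("values present:", np.unique(grabbed))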
    Without such a chart it is difficult to confirm that one pixel of the image coincides with one pixel in Yedlin's video.  Either Steve Yedlin, ASC was not savvy enough to include the fundamental verification of a pixel chart or he intentionally avoided verification of a pixel match.
    However, Yedlin unwittingly provided something that proves his failure to achieve a 1:1 match.
    At 15:03 in the video, Yedlin zooms way in to a frozen frame, and he draws a precise 4x4 pixel square over the image.  At the 16:11 mark in the video, he zooms back out to the 100% setting in his viewer, showing the box at the alleged 1:1 pixels.
    You can freeze the video at that point and see for yourself with a magnifier that the precise 4x4 pixel square has blended edges (unlike the clean-edged integer rulings on the pixel charts).  However, Yedlin claims there is a 1:1 pixel match!
    I went even further than just using a magnifier.  I zoomed in to that "1:1" frame using two different methods, and then I made a side-by-side comparison image:

    All three images in the above comparison were taken from the actual video posted on Yedlin's site.
    The far left image shows Yedlin's viewer fully zoomed-in when he draws the precise 4x4 pixel square.  The middle and right images are zoomed into Yedlin's viewer when it is set to 100% (with an allegedly 1:1 pixel match).
    There is no denying the excessive blending and interpolation revealed when zooming-in to the square or when magnifying one's display.  No matter how finely one can change the zoom amount in one's video player, one will never be able to see a 1:1 pixel match with Yedlin's video, because the blending/interpolation is inherent in the video.  Furthermore, the blending/interpolation is possibly introduced by Yedlin's node editor viewer when it is set to 100%.
    Hence, Yedlin's claimed 1:1 pixel match is false.
    By the way, in my comparison photo above, the middle image is from a TIFF created by ffmpeg, to avoid further compression.  The right image was made by merely zooming into the frozen frame playing in the viewer of the Natron compositor.
     
     
    Correct.  That is what I have stated repeatedly.
    The thing is, he uses this same method in almost every comparison, so he is merely comparing scaling methods throughout the video -- he is not comparing actual resolution.
     
     
    What?  Of course there are such "pipelines."  One can shoot with a 4K camera and process the resulting 4K files in post and then display in 4K, and the resolution never increases nor decreases at any point in the process.
     
     
    Are you trying to validate Yedlin's upscaling/downscaling based on semantics?
    It is generally accepted that a photosite on a sensor is a single microscopic receptor, often filtered with a single color.  A combination of adjacent receptors with red, green, blue (and sometimes clear) filters is often called a pixel or pixel group.  Likewise, an adjacent combination of RGB display cells is usually called a pixel.
    However you choose to define the terms or to group the receptors/pixels, it will have little bearing on the principles that we are discussing.
     
     
    Huh?  What do you mean here?  How do you get those color value numbers from 4K?  Are you saying that all cameras are under-sampling compared to image processing and displays?
    Regardless of how the camera resolution is defined, one can create a camera image and then process that image in post and then display the image all without any increase nor decrease of the resolution at any step in the process.  In fact, such image processing with consistent resolution at each step is quite common.
     
     
    It's called debayering... except when it isn't.  There is no debayering with:  an RGB striped sensor; an RGBW sensor; a monochrome sensor; a scanning sensor; a Foveon sensor; an X-Trans sensor; and three-chip cameras; etc.  Additionally, raw files made with a Bayer matrix sensor are not debayered.
    I see where this is going, and your argument is simply a matter of whether we agree to determine resolution by counting the separate red, green and blue cells or whether we determine resolution by counting the RGB pixel groups formed by combining those adjacent red, green and blue cells.
     
     
    Jeez Louise... did you just recently learn about debayering algorithms?
    The conversion of adjacent photosites into a single RGB pixel group (Bayer or not) isn't considered "scaling" by most.  Even if you define it as such, that notion is irrelevant to our discussion -- we necessarily have to assume that a digital camera's resolution is given either by the output of its ADC or by the resolution of the camera files.
    We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups.  Once we agree upon the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."
     
     
    You probably shouldn't have emphasized that point, because you are incorrect, even if we use your definition of "scaling."
    There are no adjacent red, green or blue photosites to combine ("scale") with digital Foveon sensors, digital three-chip cameras or digital monochrome sensors.
     
     
    Please, I doubt that even Yedlin would go along with you on this line of reasoning.
    We can determine the camera resolution merely from output of the ADC or from the camera files.  We just have to agree on whether we are counting the individual color cells or the combined RGB pixel groups. After we agree on the camera resolution, that resolution need never change throughout the rest of the "imaging pipeline."
    Regardless of these semantics, Yedlin is just comparing scaling methods and not actual resolution.
     
     
    When one is trying to determine whether higher resolutions can yield an increase in discernibility or in perceptible image quality, it is irrelevant to consider the statistics of common or uncommon setups.
    The alleged commonality and feasibility of a setup is a topic that should be left for another discussion, and such notions should not influence nor interfere with any scientific testing nor with the weight of any findings of the tests.
    By dismissing greater viewing angles as uncommon, Yedlin reveals his bias.  Such dismissiveness of important variables corrupts his comparisons and conclusions, as he avoids testing larger viewing angles and merely concludes that larger screens are "special".
     
     
    Well, if I had a 4K monitor, I imagine that I could tell the difference between a 4K and a 2K image.
    Not that it matters, but close viewing proximity is likely much more common than Yedlin realizes and more common than your web searching shows.  In addition to IMAX screens, movie theaters with seats close to the screen, amusement park displays and jumbotrons, many folks position their computer monitors close enough to see the individual pixels (at least when they lean forward).  If one can see individual pixels, a higher resolution monitor of the same size can make those individual pixels "disappear."  So, higher resolution can yield a dramatic difference in discernibility, even in common everyday scenarios.
    Furthermore, a higher resolution monitor with the same size pixels as a lower resolution monitor gives a much more expansive viewing angle.  As many folks use multiple computer monitors side-by-side, the value of such a wide view is significant.
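    (A back-of-envelope check of the "close enough to see pixels" point - my own numbers, not tupp's: the angular size of one pixel on an assumed 27-inch 2560x1440 monitor at an assumed 24-inch viewing distance, compared with the roughly 1 arcminute figure usually quoted for 20/20 acuity.)

        import math

        diag_in, h_px, v_px = 27.0, 2560, 1440     # assumed monitor
        view_dist_in = 24.0                        # assumed viewing distance

        pixel_pitch_in = diag_in / math.hypot(h_px, v_px)
        pixel_arcmin = math.degrees(2 * math.atan(pixel_pitch_in / (2 * view_dist_in))) * 60

        print(f"pixel pitch: {pixel_pitch_in * 25.4:.2f} mm")
        print(f"one pixel subtends ~{pixel_arcmin:.2f} arcmin (vs ~1 arcmin for 20/20 vision)")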
     
     
    Whatever.  You can claim that combining adjacent colored photosites into a single RGB pixel group is "scaling."  Nevertheless, the resolution need never change at any point in the "imaging pipeline."
    Regardless, Yedlin is merely comparing scaling methods and not resolution.
     
     
    Well, we can't really draw such a conclusion from Yedlin's test, considering all of the corruption from blending and interpolation caused by his failure to achieve a 1:1 pixel match.

     
    How is this notion relevant or a recap?
     
     
    Your statistics and what you consider to be likely or common in regards to viewing angles/proximity are irrelevant in determining the actual discernibility differences between resolutions.  Also, you and Yedlin dismiss sitting in the very front row of a movie theater.
     
     
    That impresses you?
    Not sure how that point is relevant (nor how it is a recap), but please ask yourself:  if there is no difference in discernibility between higher resolutions, why would Arri (the maker of some of the highest quality cinema cameras) offer a 6K camera?
     
     
    Yes, but such points don't shed light on any fundamental differences in the discernibility of different resolutions.  Also, how is this notion a recap?
     
     
    You are incorrect and this notion is not a recap.
    Please note that my comments in an earlier post regarding Yedlin's dismissing wider viewing angles referred to and linked to a section at 0:55:27 in his 1:06:54 video.
     
     
    You only missed all of the points that I made above in this post and earlier posts.
     
     
    No, it's not clear.  Yedlin's "resolution" comparisons are corrupted by the fact that the pixels are not a 1:1 match and by the fact that he is actually comparing scaling methods -- not resolution.
  9. Like
    tupp reacted to seanzzxx in Camera resolutions by cinematographer Steve Yeldin   
    The a6300 is a good example of exactly the point he’s trying to make, the distinction between resolution and fidelity. I had a shoot with a Sony a7sii and an Ursa back to back. The Ursa shot 1920x1080 ProRes 444; the a7sii shot UHD in mega-compressed h264. The difference in PERCEIVED resolution between these two cameras is night and day, with the Ursa kicking the A7s's butt, because the image of the former is so much more robust in terms of compression noise, color accuracy, banding, edge detail, and so on. One is more resolute but the other camera has a lot more perceived resolution because the image is better. If you blow up the images and ask random viewers which camera is ‘sharper’, 9 out of 10 times people will say the Ursa.
  10. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    For anyone who is reading this far into a thread about resolution but for some reason doesn't have an hour of their day to hear from an industry expert on the subject, the section of the video I have linked to below is a very interesting comparison of multiple cameras (film and digital) with varying resolution sensors, and it's quite clear that the level of perceptual detail coming out of a camera is not that strongly related to the sensor resolution:
     
  11. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    Ok..  Let's discuss your comments.
    His first test, which you can see the results of at 6:40-8:00, compares two image pipelines:
    • 6K image downsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x
    • 6K image downsampled to 2K, then upsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x
    As this view is 2X digitally zoomed in, each pixel is twice as large as it would be if you were viewing the source video on your monitor, so the test is actually unfair.  There is obviously a difference in the detail that's actually there, and this can be seen when he zooms in radically at 7:24, but when viewed at 1:1 starting at 6:40 there is perceptually very little difference, if any.
    Regardless of whether the image pipeline is "proper" (and I'll get to that comment in a bit), if downscaling an image to 2K and back up again isn't visible, the case that resolutions higher than 2K are perceptually differentiable is pretty weak even straight out of the gate.
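    (A single-frame home version of that round trip - a sketch under my own assumptions with a hypothetical frame_4k.png, not Yedlin's Nuke setup: drop a 4K frame to 2K and back up with a Lanczos filter, then measure and eyeball the difference.)

        import numpy as np
        from PIL import Image

        src = Image.open("frame_4k.png")           # hypothetical 3840x2160 frame
        round_trip = (src.resize((1920, 1080), Image.LANCZOS)
                         .resize(src.size, Image.LANCZOS))

        a = np.asarray(src, dtype=np.int16)
        b = np.asarray(round_trip, dtype=np.int16)
        print("mean abs difference per 8-bit value:", np.abs(a - b).mean())
        print("max abs difference:", np.abs(a - b).max())

        # save the round-tripped version for an A/B toggle on a 4K display
        round_trip.save("frame_2k_round_trip.png")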
    Are you saying that the pixels in the viewer window in Part 2 don't match the pixels in Part 1?
    Even if this was the case, it still doesn't invalidate comparisons like the one at 6:40 where there is very little difference between two image pipelines where one has significantly less resolution than the other and yet they appear perceptually very similar / identical. 
    He is comparing scaling methods - that's what he is talking about in this section of the video.  
    This use of scaling algorithms may seem strange if you think that your pipeline is something like 4K camera -> 4K timeline -> 4K distribution, or the same in 2K, as you have mentioned in your first point, but this is false.  There are no such pipelines, and pipelines such as this are impossible.  This is because the pixels in the camera aren't pixels at all; rather, they are photosites that sense either Red or Green or Blue, whereas the pixels in your NLE and on your monitor or projector actually have Red and Green and Blue.
    The 4K -> 4K -> 4K pipeline you mentioned is actually ~8M colour values -> ~24M colour values -> ~24M colour values.
    The process of taking an array of photosites that are only one colour and creating an image where every pixel has values for Red Green and Blue is called debayering, and it involves scaling.
    This is a good link to see what is going on: https://pixinsight.com/doc/tools/Debayer/Debayer.html
    From that article: "The Superpixel method is very straightforward. It takes four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The spatial resolution of the resulting RGB image is one quarter of the original CFA image, having half its width and half its height."
    Also from the article: "The Bilinear interpolation method keeps the original resolution of the CFA image. As the CFA image contains only one color component per pixel, this method computes the two missing components using a simple bilinear interpolation from neighboring pixels."
    As you can see, both of those methods talk about scaling.  Let me emphasise this point - any time you ever see a digital image taken with a digital camera sensor, you are seeing a rescaled image.
    Therefore Yedlin's use of scaling in an image pipeline is applying scaling to an image that has already been scaled from the sensor data into an image with three times as many colour values as the sensor captured.
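    (A minimal illustration of the "Superpixel" method quoted above - my own sketch, not the PixInsight implementation: each 2x2 RGGB block collapses into one RGB pixel, averaging the two greens, so the output has half the CFA's width and height.)

        import numpy as np

        def superpixel_debayer(cfa):
            """cfa: 2D array with an RGGB pattern -> (H/2, W/2, 3) RGB image."""
            r  = cfa[0::2, 0::2]
            g1 = cfa[0::2, 1::2]
            g2 = cfa[1::2, 0::2]
            b  = cfa[1::2, 1::2]
            return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

        cfa = np.random.default_rng(1).random((2160, 3840))   # stand-in "4K" sensor data
        rgb = superpixel_debayer(cfa)
        print(cfa.size, "photosite values ->", rgb.size, "colour values at", rgb.shape[:2], "pixels")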
    A quick google revealed that there are ~1500 IMAX theatre screens worldwide, and ~200,000 movie theatres worldwide.  Sources:
    "We have more than 1,500 IMAX theatres in more than 80 countries and territories around the globe." https://www.imax.com/content/corporate-information  "In 2020, the number of digital cinema screens worldwide amounted to over 203 thousand – a figure which includes both digital 3D and digital non-3D formats." https://www.statista.com/statistics/271861/number-of-digital-cinema-screens-worldwide/ That's less than 1%.  You could make the case that there are other non-IMAX large screens around the world, and that's fine, but when you take into account that over 200 Million TVs are sold worldwide each year, even the number of standard movie theatres becomes a drop in the ocean when you're talking about the screens that are actually used for watching movies or TVs worldwide.  Source: https://www.statista.com/statistics/461316/global-tv-unit-sales/
    If you can't tell the difference between 4K and 2K image pipelines at normal viewing distances and you are someone that posts on a camera forum about resolution then the vast majority of people watching a movie or a TV show definitely won't be able to tell the difference.
    Let's recap:
    • Yedlin's use of rescaling is applicable to digital images, because every image from every digital camera sensor that basically anyone has ever seen has already been rescaled by the debayering process by the time you can look at it
    • There is little to no perceptual difference when comparing a 4K image directly with a copy of that same image that has been downscaled to 2K and then upscaled to 4K again, even if you view it at 2X
    • The test involved swapping back and forth between the two scenarios, whereas in the real world you are unlikely to ever get to see the comparison like that, or even at all
    • The viewing angle of most movie theatres in the world isn't sufficient to reveal much difference between 2K and 4K, let alone the hundreds of millions of TVs sold every year, which are likely to have a smaller viewing angle than normal theatres
    • The tests you mentioned above all involved starting with a 6K image from an Alexa 65, one of the highest quality imaging devices ever made for cinema
    • The remainder of the video discusses a myriad of factors that are likely to be present in real-life scenarios that further degrade image resolution, both in the camera and in the post-production pipeline
    • You haven't shown any evidence that you have watched past the 10:00 mark in the video
    Did I miss anything?
  12. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    A friend recommended a movie to me, but it looked really long.
    I watched the first scene and then the last scene, and the last scene made no sense.  It had characters in it I didn't know about, and it didn't explain how the characters I did know got there.  The movie is obviously fundamentally flawed, and I'm not watching it.  I told my friends that it was flawed, but they told me that it did make sense and the parts I didn't watch explained the whole story, but I'm not going to watch a movie that is fundamentally flawed!
    They keep telling me to watch the movie, but they're obviously idiots, because it's fundamentally flawed.
    They also sent me some recipes, and the chocolate cake recipe had ingredient three as eggs and ingredient seven as cocoa powder (I didn't read the other ingredients) but you can't make a cake using only eggs and cocoa powder - the recipe is fundamentally flawed.  My friend said that the other ingredients are required in order to get a cake, but I'm not going to bother going back and reading the whole recipe and then spending time and money making it when it's obviously flawed.  
    My friends really are stupid.  I've told them about the bits that I saw, and they kept telling me that a movie and a recipe only make sense if you go through the whole thing, but that's not how I do things, so obviously they're wrong.  
    It makes me cry for the state of humanity when that movie was not only made, but won 17 Oscars, and that cake recipe was named Oprah's cake of the month.  People really must be stupid.
  13. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    The whole video made sense to me.
     
    What you are not understanding (BECAUSE YOU HAVEN'T WATCHED IT) is that you can't just criticise bits of it because the logic of it builds over the course of the video.  It's like you've read a few random pages from a script and are then criticising them by saying they don't make sense in isolation.
    The structure of the video is this:
    • He outlines the context of what he is doing and why
    • He talks about how to get a 1:1 view of the pixels
    • He shows how, in a 1:1 view of the pixels, the resolutions aren't discernible
    • Then he goes on to explore the many different processes, pipelines, and things that happen in the real world (YES, INCLUDING RESIZING) and shows that under these situations the resolutions aren't discernible either
    You have skipped enormous parts of the video, and you can't do that.
    Once again, you can't skip parts of a logical progression, or dare I say it "proof", and expect for it to make sense.
    Your posts don't make sense if I skip every third word, or if I only read the first and last line.
    Yedlin is widely regarded as a pioneer in the space of colour science, resolution, FOV, and other matters.  His blog posts consist of a mixture of logical arguments, mathematics, physics and controlled tests.  These are advanced topics and not many others have put the work in to perform these tests.  
    The reason that I say this is that not everyone will understand these tests.  Not everyone understands the correct and incorrect ways to use reductive logic, logical interpolation, extrapolation, equivalence, inference, exception, boundaries, or other logical devices.  
    I highly doubt that you would understand the logic that he presents, but one thing I can tell with absolute certainty, is that you can't understand it without actually WATCHING IT.
  14. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    Hahaha..  I think that this is regarded as a bit of an outlier in terms of the demand placed on the colourist and post-production, but yes, being a professional colourist isn't top of my career choices either!
    I'm less familiar with the inner workings of how Steve works in post, but I get the impression that although he has very specific requirements, he's also much more hands-on during that process, so it's less a case of making specific requests of others - but once again, I haven't seen anything one way or the other.
    @noone
    I watched a great panel discussion between a few industry pros (I just had a look for it and unfortunately can't find it) debating resolution, and the pattern was completely obvious.  The cinematographers wanted to shoot 2K, or as close to it as possible, because it makes their life easier and the films are all mastered in 2K anyway.  The post-production reps wanted as much resolution as possible (8K or even more if possible) because it's really useful for tracking and VFX work, which they said is now pretty much a fixture of all productions these days.
    So in that sense, I think it's just about what kind of production you're shooting, and once again, being aware of what you're trying to accomplish and then using the right tools for the job.
    You can't make comparisons, discuss, criticise, or even comment on something you haven't watched.
    As someone who HAS watched it, more than once actually, I found that it worked methodically, building the logic one step at a time and taking the viewer through quite a complex analysis.  I found it engaging, was surprised that it didn't seem to drag, and found that it covered all the variables, including all the nuance of various post-production image pipelines, including the upscaling, downscaling and processing of VFX pipelines.
    Your criticisms are of things he didn't say.  That's called a straw man argument - https://en.wikipedia.org/wiki/Straw_man
    I'm not surprised that the criticisms you're raising aren't valid, as you've displayed a lack of critical thinking on many occasions, but what I am wondering is how you think you can criticise something you haven't watched?
    The only thing I can think of is that you don't understand how logic, or logical discourse actually works, which unfortunately makes having a reasonable discussion impossible.
    This whole thread is about a couple of videos that John has posted, and yet you're in here arguing with people about what is in them when you haven't watched them, let alone understood them.  I find it baffling, but sadly, not out of character.
  15. Like
    tupp reacted to BenEricson in Alternatives to original BMPCC (Super 16 look)   
    That's a cool little rig. I think I like the image I see from the Pocket more, although I have always loved the files from any ML Canon camera. ProRes HQ is definitely easier from a workflow perspective.
  16. Like
    tupp reacted to John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    For me, he effectively demonstrates the insignificance of taking professionally prepared 4k+ content, downscaling it to 2k, and upscaling it to 4k again. The resulting images, even when compared A/B style, don't show any difference. I'd love for you to prove otherwise. I really didn't think of it like this until after watching him. Again, his point wasn't necessarily this though - it was to show that there are many other considerations BEFORE pixel count that are of significant importance, as long as the detail threshold is met.
  17. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    Well, that went about as I predicted.  In fact it went exactly as I predicted!
    I said:
    Then Tupp said that he didn't watch it, criticised it for doing things that it didn't actually do, and then suggested that the testing methodology is false.  What an idiot.
    Is there a block button?  I think I might go look for it.  I think my faith in humanity is being impacted by his uninformed drivel.  I guess not everyone in the industry cares about how they appear online - I've typically found the pros I've spoken to to be considered and to speak only from genuine knowledge, but that's definitely not the case here.
    There's irony everywhere, but I'm not sure what you're talking about specifically! 🙂
    I'm not really sure who you think Steve Yedlin actually is?  You're aware that he is a professional cinematographer right?
    I'd suggest you read his articles on colour science and image attributes - they speak to both what he's trying to achieve and you can get a sense of why he does what he does: http://www.yedlin.net/OnColorScience/index.html
    I agree, but I think it's worth stating a couple of caveats.
    Firstly, he shoots large productions that have time to be heavily processed in post, which obviously he does quite a bit of.  Here's a video talking about the post-production process on Mindhunter, which also used heavy processing in post to create a look from a relatively neutral capture: 
    That should give you a sense of how arduous that kind of thing can be, which I think makes processing in post a luxury for most film-makers.  Either you're shooting a project where people aren't being paid by the hour, such as a feature where you're doing most of the work in post yourself - a luxury because you can spend more time than is economical for the project.
    Or you're a film-maker who doesn't have the expertise and would have to pay someone, in which case you'd more likely just try to get things right in-camera and do whatever you can afford in post.
    The second aspect of this is knowing what you can do in post and what you can't do.  Obviously you can adjust colour, and you can degrade certain elements as well, but we're a long way off being able to change the shape of bokeh, or alter depth of field, or completely perfectly emulate diffusion.
    So it's important to understand what you can and cannot do in post (both from a resourcing / skills perspective as well as from a physics perspective) and take that into account during pre-production. 
    I completely agree with this.  It certainly eliminates great proportions of the people online though.
    I suspect that the main contributor to this is that most people online are being heavily influenced by amateur stills photographers who seem to think that sharpness is the most important image attribute in a camera or lens.  I think this tendency is a reaction to the fact that images from the film days struggled with sharpness (due to both the stocks and lenses), and also early digital struggled due to the relatively low number of MP at the start as well.  I think this will eventually fade as the community gets a more balanced perspective.
    The film industry, on the other hand, still talks about sharpness but does so in a more balanced perspective, and does so in the context of balancing it against other factors to get the overall aesthetic they want, taking into account the post-process they're designing and the fact that distribution is limited to a ~2K perceptual resolution.
  18. Like
    tupp got a reaction from PannySVHS in Alternatives to original BMPCC (Super 16 look)   
    Very cool!
     
    Your rig reminds me of @ZEEK's EOSM Super 16 setup.  It shoots 2.5K, 10-bit continuously or 2.8K, 10-bit continuously with ML at around 16mm and Super 16 frame sizes.
  19. Like
    tupp got a reaction from majoraxis in Camera resolutions by cinematographer Steve Yeldin   
    We've certainly talked about resolution, and other Yedlin videos have been linked in this forum.
     
    I merely scanned the videos (that second video is over an hour in length), so I don't know all the points that he covered.
     
    Resolution and sharpness are not the same thing.  There is a contrast element to sharpness, and it involves different levels (macro contrast, micro contrast, etc.).  One can see the effects of different levels of contrast when doing frequency separation work in images.  Not sure if Yedlin specifically covers contrast's relation to sharpness in these videos.  By the way, here is a recent demonstration of when micro features and macro features don't match.
     
    Also, I am not sure that his resolution demo is valid, as he seems to be showing different resolutions on the same monitor.  I noticed in one passage that he was zoomed in to see individual pixels, and, when switching between resolutions, the pixel size and pixel quantity did not change nor did the subject's size in the image.  Something is wrong with that comparison.
     
    To properly demonstrate resolution differences in regards to discernible detail, one really must show a 6K-captured image on a 6K monitor, a 4K-captured image on a 4K monitor and an HD/2K-captured image on an HD/2K monitor, etc. -- and all monitors must be the same size and the same distance from the viewer.
     
    The only other demonstration that I have seen by Yedlin also had significant flaws.
     
    Furthermore, there are other considerations, such as how resolution influences color depth and how higher resolution can help transcend conversion/algorithmic losses and how higher resolution allows for cropping, etc.
     
     
    There are problems with the few Yedlin videos that I have seen.  Also, one of his videos linked above is lengthy and somewhat ponderous.
     
     
    I would put the Panavision Genesis (and its little brother, the Sony F35) up against an Alexa any day, and the Genesis has lower resolution and less dynamic range than the Alexa.  However, the Genesis has a lush, striped RGB CCD with true HD -- 1920x1080 RGB pixel groups.
     
    Similarly, I recall that the Dalsa Origin demos showed a thick image (it shot 16-bit, 4K), and the Thomson Viper HD CCD camera yielded great footage.
     
     
    I certainly agree that there is a threshold beyond which higher resolution generally is not necessary in most cases, and I think that such a threshold has been mentioned a few times in this forum.  On the other hand, I don't think that such a threshold is absolute, as so much of imaging is subjective and a lot of SD productions are still very compelling today.
     
     
    I have shot a fair amount of film, but I would not say that the image quality of film is "better."  It's easier (and more forgiving) to shoot film in some ways, but video is easier in many other ways and it can give a great image.
     
     
    Exactly.
  20. Like
    tupp reacted to John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    I'd recommend you watch the whole video. It was rather eye-opening for me.
    His point is to gather data without any imperfections if possible and to add value to his content through a streamlined image processing pipeline, regardless of the camera used to capture. I highly doubt any viewer would ever see a flaw with his strategy.
    I'm aware that sharpness is not detail... and he covers that in the video too. Another major point is that no manufacturer is making a new human retina; therefore, the maximum detail has already been hit (even with 1080p!). Any more effort at showing more detail would require sitting much closer to the content, at which point you'd find yourself moving your head around to see the scene, taking you pointlessly out of the story. No viewer wants that.
     
     
  21. Like
    tupp got a reaction from BenEricson in Camera resolutions by cinematographer Steve Yeldin   
    We've certainly talked about resolution, and other Yedlin videos have been linked in this forum.
     
    I merely scanned the videos (that second video is over an hour in length), so I don't know all the points that he covered.
     
    Resolution and sharpness are not the same thing.  There is a contrast element to sharpness, and it involves different levels (macro contrast, micro contrast, etc.).  One can see the effects of different levels of contrast when doing frequency separation work in images.  Not sure if Yedlin specifically covers contrast's relation to sharpness in these videos.  By the way, here is a recent demonstration of when micro features and macro features don't match.
     
    Also, I am not sure that his resolution demo is valid, as he seems to be showing different resolutions on the same monitor.  I noticed in one passage that he was zoomed in to see individual pixels, and, when switching between resolutions, the pixel size and pixel quantity did not change nor did the subject's size in the image.  Something is wrong with that comparison.
     
    To properly demonstrate resolution differences in regards to discernible detail, one really must show a 6K-captured image on a 6K monitor, a 4K-captured image on a 4K monitor and an HD/2K-captured image on an HD/2K monitor, etc. -- and all monitors must be the same size and the same distance from the viewer.
     
    The only other demonstration that I have seen by Yedlin also had significant flaws.
     
    Furthermore, there are other considerations, such as how resolution influences color depth and how higher resolution can help transcend conversion/algorithmic losses and how higher resolution allows for cropping, etc.
     
     
    There are problems with the few Yedlin videos that I have seen.  Also, one of his videos linked above is lengthy and somewhat ponderous.
     
     
    I would put the Panavision Genesis (and its little brother, the Sony F35) up against an Alexa any day, and the Genesis has lower resolution and less dynamic range than the Alexa.  However, the Genesis has a lush, striped RGB CCD with true HD -- 1920x1080 RGB pixel groups.
     
    Similarly, I recall that the Dalsa Origin demos showed a thick image (it shot 16-bit, 4K), and the Thomson Viper HD CCD camera yielded great footage.
     
     
    I certainly agree that there is a threshold beyond which higher resolution generally is not necessary in most cases, and I think that such a threshold has been mentioned a few times in this forum.  On the other hand, I don't think that such a threshold is absolute, as so much of imaging is subjective and a lot of SD productions are still very compelling today.
     
     
    I have shot a fair amount of film, but I would not say that the image quality of film is "better."  It's easier (and more forgiving) to shoot film in some ways, but video is easier in many other ways and it can give a great image.
     
     
    Exactly.
  22. Like
    tupp reacted to BenEricson in Alternatives to original BMPCC (Super 16 look)   
    Save your money and buy a Lumix 12-35 2.8 IS and a nice external battery for your Pocket! You can solve the stabilization problem by using an IS lens, and the battery fix is very cheap and effective.
    No need. The Zhiyun Weebill is 400 and does a great job. Smaller, lighter, more portable, battery goes forever with the bmpcc. Gotta be close to 8 or 9 hours.
    Attached a photo of my setup. I'm working on a project with vintage C mount lenses. Not trying to win some sort of depth of field contest. The camera has beautiful texture and looks great at F8 or F11. OLPF shows up next week. Throw a 4 stop ND on there and rate it at ISO50 with a light meter.



  23. Like
    tupp reacted to John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    I've been watching some resolution insights by cinematographer Steve Yedlin that I think many might find very interesting. Not sure if this has already been posted...
    It would be interesting to discuss.
  24. Like
    tupp reacted to kye in Camera resolutions by cinematographer Steve Yeldin   
    I've posted them quite a few times, but it seems like people aren't interested.  They don't follow the links or read the content, and after repeating myths that Steve easily demonstrates to be false, people go back to talking about whether 6K is enough resolution to film a wedding or a CEO talking about quarterly returns, or whether they should get the UMP 12K.
    I mentioned this in another thread recently, but it's been over a decade since the Alexa was first released and we have cameras that shoot RAW in 4, 9, and 16 times the resolution of the Alexa, but the Alexa still has obviously superior image quality, so I really wonder what the hell it is that we're even talking about here....
  25. Like
    tupp reacted to John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    Yes, as he says in the video, people are just looking at that ONE number to make an easy choice as to which camera is better. Maybe this is what separates a real cinematographer from the wannabes. The image is what counts, not the megapixels (once you get past the "accepted" detail threshold).