Everything posted by kye

  1. Ok.. Let's discuss your comments.

His first test, which you can see the results of at 6:40-8:00, compares two image pipelines:

  • a 6K image downsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x
  • a 6K image downsampled to 2K, then upsampled to 4K, which is then viewed 1:1 in his viewer by zooming in 2x

As this view is digitally zoomed in 2x, each pixel is twice as large as it would be if you were viewing the source video on your monitor, so the test is actually unfair. There is obviously a difference in the detail that's actually there, which can be seen when he zooms in radically at 7:24, but when viewed at 1:1 starting at 6:40 there is perceptually very little difference, if any. Regardless of whether the image pipeline is "proper" (I'll get to that comment in a bit), if downscaling an image to 2K and then back up again isn't visible, the case that resolutions higher than 2K are perceptually differentiable is pretty weak straight out of the gate.

Are you saying that the pixels in the viewer window in Part 2 don't match the pixels in Part 1? Even if that were the case, it still doesn't invalidate comparisons like the one at 6:40, where two image pipelines, one with significantly less resolution than the other, appear perceptually very similar, if not identical.

He is comparing scaling methods - that's what he is talking about in this section of the video. This use of scaling algorithms may seem strange if you think your pipeline is something like 4K camera -> 4K timeline -> 4K distribution, or the same in 2K, as you mentioned in your first point, but this is false. There are no such pipelines; pipelines like that are impossible. This is because the pixels in the camera aren't pixels at all - they are photosites that each sense only Red, Green or Blue - whereas the pixels in your NLE and on your monitor or projector each have actual Red, Green and Blue values. The 4K -> 4K -> 4K pipeline you mentioned is actually ~8M colour values -> ~24M colour values -> ~24M colour values. The process of taking an array of photosites that each sense only one colour and creating an image where every pixel has values for Red, Green and Blue is called debayering, and it involves scaling. This is a good link to see what is going on: https://pixinsight.com/doc/tools/Debayer/Debayer.html

From that article: "The Superpixel method is very straightforward. It takes four CFA pixels (2x2 matrix) and uses them as RGB channel values for one pixel in the resulting image (averaging the two green values). The spatial resolution of the resulting RGB image is one quarter of the original CFA image, having half its width and half its height."

Also from the article: "The Bilinear interpolation method keeps the original resolution of the CFA image. As the CFA image contains only one color component per pixel, this method computes the two missing components using a simple bilinear interpolation from neighboring pixels."

As you can see, both of those methods involve scaling. Let me emphasise this point - any time you see a digital image taken with a digital camera sensor, you are seeing a rescaled image. Yedlin's use of scaling is therefore applied to images that have already been scaled from the sensor data into an image with three times as many colour values as the sensor captured.
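To make the Superpixel method concrete, here's a rough sketch in Python/NumPy (my own toy illustration, assuming an RGGB mosaic layout - it's not from Yedlin or the PixInsight article, and real debayering in cameras and RAW converters is far more sophisticated):

    import numpy as np

    def superpixel_debayer(cfa):
        # Toy 'Superpixel' debayer: each 2x2 RGGB block becomes one RGB pixel
        r  = cfa[0::2, 0::2]             # red photosites
        g1 = cfa[0::2, 1::2]             # first green photosite in each block
        g2 = cfa[1::2, 0::2]             # second green photosite in each block
        b  = cfa[1::2, 1::2]             # blue photosites
        g  = (g1 + g2) / 2.0             # average the two greens
        return np.dstack([r, g, b])      # half the width, half the height, 3 channels

    # A "4K" mosaic of single-colour photosites becomes a ~2K RGB image
    cfa = np.random.rand(2160, 3840)
    rgb = superpixel_debayer(cfa)
    print(cfa.shape, '->', rgb.shape)    # (2160, 3840) -> (1080, 1920, 3)

Every other debayering method (bilinear, VNG, AHD, etc.) is just a cleverer way of inventing the missing colour values per pixel - in other words, scaling the colour information up.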
A quick google revealed that there are ~1,500 IMAX theatre screens worldwide and ~200,000 movie theatres worldwide.

Sources: "We have more than 1,500 IMAX theatres in more than 80 countries and territories around the globe." https://www.imax.com/content/corporate-information and "In 2020, the number of digital cinema screens worldwide amounted to over 203 thousand – a figure which includes both digital 3D and digital non-3D formats." https://www.statista.com/statistics/271861/number-of-digital-cinema-screens-worldwide/

That's less than 1%. You could make the case that there are other non-IMAX large screens around the world, and that's fine, but when you take into account that over 200 million TVs are sold worldwide each year, even the number of standard movie theatres becomes a drop in the ocean when you're talking about the screens that are actually used for watching movies or TV shows worldwide. Source: https://www.statista.com/statistics/461316/global-tv-unit-sales/

If you can't tell the difference between 4K and 2K image pipelines at normal viewing distances, and you're someone who posts on a camera forum about resolution, then the vast majority of people watching a movie or a TV show definitely won't be able to tell the difference.

Let's recap:

  • Yedlin's use of rescaling is applicable to digital images because every image from every digital camera sensor that basically anyone has ever seen has already been rescaled by the debayering process by the time you can look at it
  • There is little to no perceptual difference when comparing a 4K image directly with a copy of that same image that has been downscaled to 2K and then upscaled to 4K again, even if you view it at 2X
  • The test involved swapping back and forth between the two versions, whereas in the real world you are unlikely to ever see a comparison like that, or even at all
  • The viewing angle of most movie theatres in the world isn't sufficient to reveal much difference between 2K and 4K, let alone the hundreds of millions of TVs sold every year, which are likely to have a smaller viewing angle than most theatres (some rough numbers below)
  • The tests you mentioned above all started with a 6K image from an Alexa 65, one of the highest quality imaging devices ever made for cinema
  • The remainder of the video discusses a myriad of factors that are likely to be present in real-life scenarios and that further degrade image resolution, both in the camera and in the post-production pipeline
  • You haven't shown any evidence that you have watched past the 10:00 mark in the video

Did I miss anything?
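P.S. On the viewing angle point, here's a rough back-of-envelope check (my own numbers and my own function, using the commonly quoted ~60 pixels-per-degree figure for 20/20 vision, so treat it as an approximation rather than a vision-science result):

    import math

    def pixels_per_degree(h_pixels, screen_width_m, distance_m):
        # Rough pixels-per-degree at the viewer's eye for a given screen and distance
        h_fov = 2 * math.degrees(math.atan((screen_width_m / 2) / distance_m))
        return h_pixels / h_fov

    # A ~55" 16:9 TV is about 1.2m wide; viewed from 2.5m:
    for h_pixels, label in [(1920, "2K"), (3840, "4K")]:
        print(label, round(pixels_per_degree(h_pixels, 1.2, 2.5)), "px/deg")

2K already lands around 70 px/deg in that setup, which is at or beyond the ~60 px/deg usually quoted for 20/20 acuity, so the extra pixels in 4K are doing very little at typical TV viewing distances.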
  2. kye

    V5 - Any ETA?

    I prefer V8.
  3. Welcome to the conversation. I'll happily discuss your points now that you have actually watched it, but I'll take the opportunity to re-watch it before I reply.

I find it incredibly rude that you are so protective of your own time and yet feel so free to recklessly disregard the time of others, by talking about something you didn't care to watch and by risking the outcome that you were talking out your rear end without knowing it, but I'll still talk about the topic, as even selfish people can still make sense. A stopped clock is right twice a day, so we'll see how things shake out. Watching is not understanding, so you've passed the first bar, which is necessary but not sufficient.

This is an excellent example and I think it speaks to the type of problem. Light bouncing off reality is of (practically) infinite resolution, and then it goes through the air, then:

  • through your filters
  • through the lens elements (optical distortions) and diffraction from the aperture
  • through the sensor stack, where it is sensed by individual pixels on the sensor (can diffraction happen on the edges of the pixels?)
  • it then gets quantised to a value and processed RAW in-camera, and potentially recorded or output at this point
  • in some cameras it then goes on to be non-linearly resized (eg, to compensate for lens distortions - do cameras do this for video?)
  • rescaled to the output resolution
  • processed at the output resolution (eg, sharpening, colour science, vignetting, bit-depth quantisation, etc)
  • then compressed into a codec at a given bitrate

Every one of these things degrades the image, and the damage is cumulative (see the rough sketch at the end of this post). All but one of them could be perfect, but if that one is terrible then the output will still be terrible. Damage may also be of different kinds that are mostly independent, eg resolution vs colour fidelity, so you might have an image pipeline that has problems across many 'dimensions'.

Going back to your example and the Sony A7S2: if you take RAW stills then I'm sure the images can be sharp and great - in which case the optics and sensor can be spectacular and it's the video processing and compression that are at fault. This is yet another reason I think resolutions above 2K are a bad thing - most cameras aren't getting anything like the resolution that high-quality 2K can offer, but they still require a more and more powerful computer to edit and work with the mushy and compressed-to-death images. Any camera that can take high-quality stills but produces mushy video is very frustrating, as they could have spent the time improving the output instead of just upping the specs without getting the associated benefits.
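To put a rough number on the 'damage is cumulative' point, here's a toy sketch (my own illustration, not anything from Yedlin, using the common rule of thumb that a system's overall MTF at a given spatial frequency is roughly the product of its components' MTFs - the per-stage values are invented purely for illustration):

    # Toy illustration: system sharpness as the product of per-stage responses
    # at one spatial frequency. All numbers are made up for illustration.
    stages = {
        "filter":      0.98,
        "lens":        0.90,
        "OLPF/stack":  0.85,
        "sensor":      0.80,
        "debayer":     0.85,
        "resize":      0.90,
        "codec":       0.70,   # often the worst offender
    }

    system = 1.0
    for name, response in stages.items():
        system *= response
    print(f"system response: {system:.2f}")   # ~0.32 - every stage chips away at it

Even with every individual stage looking respectable, the cascade multiplies down to roughly a third of 'perfect' - and making any single stage truly bad drags the whole result down with it.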
  4. I also found many of his articles particularly interesting, especially given that some have very carefully prepared tests to demonstrate how they translate to the real world. http://www.yedlin.net/NerdyFilmTechStuff/index.html Did you find anything else in here that was of interest?
  5. A friend recommended a movie to me, but it looked really long. I watched the first scene and then the last scene, and the last scene made no sense. It had characters in it I didn't know about, and it didn't explain how the characters I did know got there. The movie is obviously fundamentally flawed, and I'm not watching it. I told my friends that it was flawed, but they told me that it did make sense and that the parts I didn't watch explained the whole story, but I'm not going to watch a movie that is fundamentally flawed! They keep telling me to watch the movie, but they're obviously idiots, because it's fundamentally flawed.

They also sent me some recipes, and the chocolate cake recipe had ingredient three as eggs and ingredient seven as cocoa powder (I didn't read the other ingredients), but you can't make a cake using only eggs and cocoa powder - the recipe is fundamentally flawed. My friend said that the other ingredients are required in order to get a cake, but I'm not going to bother going back and reading the whole recipe and then spending time and money making it when it's obviously flawed.

My friends really are stupid. I've told them about the bits that I saw, and they kept telling me that a movie and a recipe only make sense if you go through the whole thing, but that's not how I do things, so obviously they're wrong. It makes me cry for the state of humanity that that movie was not only made but won 17 Oscars, and that cake recipe was named Oprah's cake of the month. People really must be stupid.
  6. You only think it's flawed because you didn't watch the parts of it that explain it. I have spent many, many hours preparing a direct response to answer your question, and all the other questions you have put forward. I think after a thorough examination you will find the answers to all of your questions and queries. Please click below for all the information that you need:
  7. Great post, and thanks for the images. That is exactly what I am seeing with my Micro footage - the images just look spectacular without having to do almost anything to them. I've done tests where I have been able to match the colours from the BM cameras, but only under "easy" conditions where the GH5 sensor was not stressed. The fact you are talking about the performance under extremely challenging situations only reinforces my impressions. I'll be very curious to see your results.

In my Sony sensor thread everyone was talking about how Sony sensors can be graded to look like anything, but I never see people actually trying, or if they do, it's under the easiest of controlled conditions. Here is a test I did a while ago trying to match the GH5 to the BMMCC:

There are differences of course, due to using different lenses for starters, but the results would likely be passable. There's no way in hell you can get a good match under more challenging situations. To put it another way, when the OG BMPCC and BMMCC came out, people were talking about cutting them together with Alexa footage and how they matched really easily and nicely with little grading required. No-one really says that about affordable modern consumer cameras, and what I hear from professional colourists is that when someone uses a GH5 or a Sony or an iPhone, it's about doing the best they can and putting the majority of effort into managing the client's expectations.
  8. The whole video made sense to me. What you are not understanding (BECAUSE YOU HAVEN'T WATCHED IT) is that you can't just criticise bits of it, because the logic of it builds over the course of the video. It's like you've read a few random pages from a script and are then criticising them by saying they don't make sense in isolation.

The structure of the video is this:

  • He outlines the context of what he is doing and why
  • He talks about how to get a 1:1 view of the pixels
  • He shows how, in a 1:1 view of the pixels, the resolutions aren't discernible
  • Then he goes on to explore the many different processes, pipelines, and things that happen in the real world (YES, INCLUDING RESIZING) and shows that under these situations the resolutions aren't discernible either

You have skipped enormous parts of the video, and you can't do that. Once again, you can't skip parts of a logical progression, or dare I say it a "proof", and expect it to make sense. Your posts don't make sense if I skip every third word, or if I only read the first and last line.

Yedlin is widely regarded as a pioneer in the space of colour science, resolution, FOV, and other matters. His blog posts consist of a mixture of logical arguments, mathematics, physics and controlled tests. These are advanced topics and not many others have put the work in to perform these tests. The reason I say this is that not everyone will understand these tests. Not everyone understands the correct and incorrect ways to use reductive logic, logical interpolation, extrapolation, equivalence, inference, exception, boundaries, or other logical devices. I highly doubt that you would understand the logic he presents, but one thing I can tell you with absolute certainty is that you can't understand it without actually WATCHING IT.
  9. Yeah. The limitations that Sony deliberately puts in its imaging devices really do place limitations on the wider industry, on all the people who use them, and on all the images that come from those devices.
  10. You're right that low light and DR are improving, but in comparison to the Alexa, 10 years has delivered 1600% of the pixels and <100% of the DR. No-one in their right mind could suggest they're balancing those priorities, and this is in a world where people watch a large percentage of stuff on their phones, in 1080p.

A good deal of the richness in the older BM footage is still there in the Prores files, even the low bitrate ones. Even with the 5K sensor and downsampling on the GH5, I'm still only able to approach (and not exceed) the look from the Prores 422 or LT files from the Micro.

The DR of recent Sony cameras is very good, but more would also be useful. Think about a shot that is backlit - in order not to blow out the sky you are forced to underexpose the foreground, which is hardly ideal in terms of exposure. I'd suggest that the phrase is "borderline usable". No-one is talking about 6K and saying we really need 8K and that 6K is "borderline usable". The high ISO performance is great, but once again, they've delivered a 1500% increase in the number of pixels while also raising the ISO performance - it's not like focusing on DR would have meant excluding every other parameter in the whole device.
  11. I agree. If I could rewire my GH5 sensor so that every second pixel went to a different-gain ADC circuit and the two signals were then combined to increase the DR coming off the sensor (similar in spirit to the Alexa's dual-gain architecture, which reads each photosite through two gain paths and combines them), then I wouldn't even think twice about it, even if it meant sacrificing 75% or more of the pixels I have. A gorgeous image is a gorgeous image in whatever resolution. We've been working on the wrong things.
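For anyone curious, here's a toy sketch of the combining step (my own simplification in Python, not ARRI's actual processing): read the scene through a low-gain path that protects the highlights and a high-gain path that is cleaner in the shadows, then merge the two:

    import numpy as np

    def combine_dual_gain(low_gain, high_gain, gain_ratio, clip_threshold=0.8):
        # Use the (less noisy) high-gain reading wherever it hasn't clipped,
        # otherwise fall back to the low-gain reading. Everything is expressed
        # on the low-gain scale.
        high_on_low_scale = high_gain / gain_ratio
        use_high = high_gain < clip_threshold
        return np.where(use_high, high_on_low_scale, low_gain)

    # Fake scene ramp: the high-gain path clips early but has much less read noise
    scene = np.linspace(0, 1, 11)
    low   = scene + np.random.normal(0, 0.02, scene.shape)
    high  = np.clip(scene * 4, 0, 1) + np.random.normal(0, 0.005, scene.shape)
    print(np.round(combine_dual_gain(low, high, gain_ratio=4), 2))

The shadows come from the clean high-gain read, the highlights from the low-gain read, and the merged signal covers more stops than either path alone - which is the whole appeal.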
  12. Bingo - found it: It's older than I thought... which is even more amusing because the VFX guy was asking for 8K before we even had it!
  13. I'm familiar with those videos, although they were from a different time, when 2K was still a mainstream thing. The one I watched was more recent, and in the context of newer 4K / 6K and even 8K cameras.

Obviously the ability to track things in 3D space accurately is pretty crucial if you're doing heavy VFX work like in Hollywood action blockbusters, with a camera moving through a scene (say, on a boom) and then having to insert dozens or hundreds of 3D-rendered objects into that scene, even extending to entire characters in the film. I've seen how tracking works down to small fractions of a single pixel, which can have a considerable impact on the location of something if the object is far away from the reference points. Obviously in cases like that, having RAW 8K would be far nicer for the VFX team to reference than a blocky-by-comparison 2K image. This is, of course, talking about capture resolution, and not about final output resolution.

I think there's a spectrum of shooters, ranging from people who get everything right in-camera and almost won't even colour the footage, through to those who shoot for complete accuracy and want to do as much as possible in post. It will depend on your preferences, your budget (to hire a VFX team), and your schedule.

He mentioned in the video that he applied compression deliberately, in order to investigate what effects it would have on the image quality. He said he chose something akin to what gets streamed to people's houses, or what comes out of DSLR cameras - I'd guess something in the 25Mbps ballpark. It probably goes without saying that it's more difficult to tell the difference between resolutions if they're both going through a cheese grater at the end (or beginning!) of the image pipeline!

Downsampling is definitely advantageous to overall image quality, for multiple reasons:

  • a 4K sensor gives you a <4K image after debayering, so downsampling means drawing from more pixels on the input than you're pushing out at the output, which helps
  • random noise gets partially eliminated due to the averaging that occurs in the downsampling process (see the sketch at the end of this post)

One thing that is noteworthy, though, is that if you're shooting with a compressed codec, for example h264/5, the artefacts are often X pixels wide (for example, regardless of the resolution, the 'ripple' on a hard edge is likely to be the same number of pixels wide), so in that instance you may be better off recording your files in-camera in a higher resolution and then downscaling in post, where the downscaling process can average out more of those artefacts. This is likely to be situation and camera dependent, but is worth a test if you're able to. For example, shoot something in 6K and in 4K and put them both on a 4K timeline and see which looks cleaner, or 4K and 1080 on a 1080 timeline. The downside is that even if both resolutions had the same bitrate, and therefore file sizes, your computer will have to decode and then downscale more pixels from the higher resolution clip, increasing the computational load on your editing computer. Like with all things, do your own testing and see what you can see 🙂
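On the noise-averaging point, here's a quick toy demonstration (my own sketch using pure synthetic noise rather than real footage) of why drawing from more input pixels than you output helps:

    import numpy as np

    # A '4K' frame of pure random noise
    rng = np.random.default_rng(0)
    noise_4k = rng.normal(0, 1.0, (2160, 3840))

    # Crude 2x downscale: average each 2x2 block into one output pixel
    noise_2k = noise_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

    print(round(noise_4k.std(), 3))   # ~1.0
    print(round(noise_2k.std(), 3))   # ~0.5 - averaging 4 samples halves the noise

Real downscaling filters are smarter than plain block averaging, and real codec artefacts aren't random noise, so treat this as the general principle rather than a prediction of what your particular camera will do - hence the suggestion to test it yourself.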
  14. kye

    Lens Weight

    It's even got a nice pincushion distortion as a slight nod to the vintage anamorphics of old... I love it! If we fill it with Helium then it can be even more uplifting, raising the entire production value! Is it weather resistant?
  15. I went back and re-examined the two curves and it looks like you're right - there are complications in the way they tested the outputs from the UMP, to do with NR or something, I think. Personally, I only built this table for my own purposes, and the UMP isn't an option that I'd consider due to its size, so it's not a big deal for me. Looking at the waveforms, it seems like the P6K has maybe 1-2 stops more DR than the UMP 4.6K G2. Regardless, I found the tests at C5D to be useful, and definitely better than the manufacturers' (often heavily inflated) claims!
  16. As the owner of a Canon 700D, I'd suggest that it would barely be 720p! But it certainly is a nice example of what can be done with enough skill....
  17. Hahaha.. I think this is regarded as a bit of an outlier in terms of the demands placed on the colourist and post-production, but yes, being a professional colourist isn't at the top of my career choices either! I'm less familiar with the inner workings of how Steve works in post, but I get the impression that although he has very specific requirements, he's also much more hands-on during that process, so it's less a case of making specific requests of others - but once again, I haven't seen anything one way or the other.

@noone I watched a great panel discussion between a few industry pros (I just had a look for it and unfortunately can't find it) debating resolution, and the pattern was completely obvious. The cinematographers wanted to shoot 2K, or as close to it as possible, because it makes their lives easier and the films are all mastered in 2K anyway. The post-production reps wanted as much resolution as possible (8K or even more if possible) because it's really useful for tracking and VFX work, which they said is now pretty much a fixture of all productions these days. So in that sense, I think it's just about what kind of production you're shooting, and once again, being aware of what you're trying to accomplish and then using the right tools for the job.

You can't make comparisons, discuss, criticise, or even comment on something you haven't watched. As someone who HAS watched it, more than once actually, I found that it worked methodically, building the logic one step at a time and taking the viewer through quite a complex analysis. I found it engaging, was surprised that it didn't seem to drag, and found that it covered all the variables, including all the nuance of various post-production image pipelines - the upscaling, downscaling and processing of VFX pipelines. Your criticisms are of things he didn't say. That's called a straw man argument - https://en.wikipedia.org/wiki/Straw_man

I'm not surprised that the criticisms you're raising aren't valid, as you've displayed a lack of critical thinking on many occasions, but what I am wondering is how you think you can criticise something you haven't watched. The only thing I can think of is that you don't understand how logic, or logical discourse, actually works, which unfortunately makes having a reasonable discussion impossible. This whole thread is about a couple of videos that John has posted, and yet you're in here arguing with people about what is in them when you haven't watched them, let alone understood them. I find it baffling, but sadly, not out of character.
  18. Any camera is a good camera if it fulfils your purpose. I think we get into trouble when either 1) people don't actually understand what impacts the end result (and therefore rely on rules of thumb that are often not true) or 2) people don't understand that your needs or goals or priorities aren't the same as theirs (and therefore just tell you that you should do X, and/or that you're wrong for choosing something other than their suggestion).
  19. That makes sense. I'm always curious what people are seeing, what they're paying attention to, and what they prioritise. Much can be learned by understanding how other people see the world. I think you're right that under more forgiving conditions the differences between cameras are much smaller. I think it's unfortunate that most professionals shoot in controlled situations and demand high-quality outputs, so when you say you're an amateur they assume you're still shooting in controlled conditions but expect a lower-quality output, and that therefore your camera demands are lower. In reality we're often shooting in uncontrolled situations, so our requirements are (in some cases) actually higher than for their shoots.
  20. It looks quite interesting actually, if you're willing to shoot 1080p. I think in 1080p you get:

  • Internal RAW in 8/10/12 bit
  • Downscaling from 8K
  • Digital cropping from 1.0 to 5.0, which are all downscaled from the 8K

That could mean you could make a tiny portable/pocketable setup - for example, if you chose a 20mm lens you could use it like a 20-100mm. Could be interesting for tiny covert setups. I've been compiling a chart of DR tests and the FP seems like the cheapest and smallest way to get a high-DR camera.
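The 20mm-as-20-100mm bit is just crop-factor arithmetic - here's the one-liner, purely as illustration (field-of-view equivalence only; depth of field and noise obviously change as you crop in):

    # Equivalent focal length (in field-of-view terms) at each digital crop
    focal = 20
    for crop in (1.0, 2.0, 3.0, 4.0, 5.0):
        print(f"{crop:.1f}x crop ≈ {focal * crop:.0f}mm equivalent")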
  21. Interesting observation. I'm curious which image characteristics you prefer from the Micro over the A7S3? I watched Potatojet's video on the FX3 and was particularly impressed by the DR and handling of highlights in the sunset scene where he was backlit and had the sun in shot:

Is that something to do with the S-Cinetone profile perhaps? I don't really speak "Sony" so I'm not too sure whether that contributes to this look. When I compare the Micro to the GH5, the things that really stand out on the Micro are the DR and the lack of digital sharpness, which just make everything feel like a Hollywood film straight out of camera. I've done comparisons where I shoot the same scene with both cameras and match the colours in post, but it's a pretty arduous task and would be pretty difficult to do unless I had shot every scene with the Micro, which kind of defeats the purpose! Anyway, back to the A7S3 image - it seems quite reminiscent of the Micro, at least compared to the GH5 anyway(!), so I'm curious what you're seeing.
  22. Well, that went about as I predicted. In fact it went exactly as I predicted! I said:

Then Tupp said that he didn't watch it, criticised it for doing things that it didn't actually do, then suggested that the testing methodology is false. What an idiot. Is there a block button? I think I might go look for it. I think my faith in humanity is being impacted by his uninformed drivel. I guess not everyone in the industry cares about how they appear online - I've typically found the pros I've spoken to to be considered, speaking only from genuine knowledge, but that's definitely not the case here.

There's irony everywhere, but I'm not sure what you're talking about specifically! 🙂

I'm not really sure who you think Steve Yedlin actually is? You're aware that he is a professional cinematographer, right? I'd suggest you read his articles on colour science and image attributes - they speak to what he's trying to achieve, and you can get a sense of why he does what he does: http://www.yedlin.net/OnColorScience/index.html

I agree, but I think it's worth stating a couple of caveats. Firstly, he shoots large productions that have the time to be heavily processed in post, which obviously he does quite a bit of. Here's a video talking about the post-production process on Mindhunter, which also used heavy processing in post to create a look from a relatively neutral capture:

That should give you a sense of how arduous that kind of thing can be, which I think makes processing in post a luxury for most film-makers. Either you're shooting a project where people aren't being paid by the hour, such as a feature where you're doing most of the post work yourself - a luxury because you can spend more time than is economical for the project - or you're a film-maker who doesn't have the expertise, would have to pay someone, and so would more likely just try to get things right in-camera and do whatever you can afford in post.

The second aspect of this is knowing what you can and can't do in post. Obviously you can adjust colour, and you can degrade certain elements as well, but we're a long way off being able to change the shape of bokeh, or alter depth of field, or perfectly emulate diffusion. So it's important to understand what you can and cannot do in post (both from a resourcing/skills perspective and from a physics perspective) and take that into account during pre-production.

I completely agree with this. It certainly eliminates a large proportion of the people online though. I suspect that the main contributor to this is that most people online are heavily influenced by amateur stills photographers who seem to think that sharpness is the most important attribute of a camera or lens. I think this tendency is a reaction to the fact that images from the film days struggled with sharpness (due to both the stocks and the lenses), and that early digital struggled too, due to the relatively low megapixel counts at the start. I think this will eventually fade as the community gets a more balanced perspective. The film industry, on the other hand, still talks about sharpness, but does so from a more balanced perspective and in the context of weighing it against other factors to get the overall aesthetic they want, taking into account the post-process they're designing and the fact that distribution is limited to a ~2K perceptual resolution.
  23. I've posted them quite a few times, but it seems like people aren't interested. They don't follow the links or read the content, and after repeating myths that Steve easily demonstrates to be false, they go back to talking about whether 6K is enough resolution to film a wedding or a CEO talking about quarterly returns, or whether they should get the UMP 12K. I mentioned this in another thread recently, but it's been over a decade since the Alexa was first released, and we now have cameras that shoot RAW at 4, 9, and 16 times the resolution of the Alexa, yet the Alexa still has obviously superior image quality - so I really wonder what the hell it is that we're even talking about here....
  24. kye

    YouTube question

    There was a recent reaction video that @Tito Ferradans posted, reacting to a video about anamorphic lenses that was published on another YT channel. Apart from fact-checking the other video almost to death (considering that it got a lot of information wrong about anamorphic lenses), Tito also mentioned that the other video had used shots from his video without permission, and that he went through the process of having it taken down. So I think you're meant to get permission, regardless of the circumstances, and if you don't then you leave yourself open to take-down requests. I've heard of instances where people had their own content taken down by others who claimed it as their own, and the mechanisms seem to be skewed towards taking the content down rather than leaving it up by default. Also, considering I mentioned Tito and anamorphics, go watch his channel - it's awesome! https://www.youtube.com/user/tferradans
  25. I think you make excellent points, but I disagree with this part of your post, as I think we've been concentrating on the wrong things in camera development. Specifically, we have way more pixels than we need, which you mentioned, but the missing link is dynamic range, which is still very immature in terms of development.

Almost anyone can compare an Alexa frame with a frame from a sub-$2,500 digital camera on a big TV and see why the Alexa costs more. Under certain controlled situations this difference can be managed and the cheaper camera can come a lot closer, but in uncontrolled lighting and high-DR situations it's quite obvious. This comparison still holds if you compare those images on a laptop screen, and even a phone screen in some situations. The fact that the Alexa looks better on a 2.5K laptop display, or a 720p phone screen, means that the image quality can't be about resolution, as the resolution advantage of the cheaper camera is eliminated by the downscaling.

What is left is colour science and dynamic range. We should be taking these 8K sensors, putting on the OLPF from a 4K sensor, and sending every fourth pixel to a different ADC pipeline with a different level of gain, with the results digitally combined to get a very high dynamic range 4K image. The Alexa was released over 10 years ago, shot 2K, and had a dual-gain architecture. Here we are over a decade later, and we have cameras with 16 times as many pixels that still don't match the DR, and still don't look as good. The real "progress" the manufacturers have achieved is convincing people to buy more pixels when what they really wanted was better pixels.