Everything posted by kye
-
Would that make an 8K workflow achievable without a supercomputer? I saw comments from the YT crowd that they couldn't edit the RAW files from the R5 even with their $30K+ Mac Pro computers. Does anyone here even own an R5?
-
Both you and @elgabogomez have said that it looks like both @tupp and I have valid points and that we're trolling each other, and it's a fascinating thing. The more I think about this test, and the points that Tupp has raised and what is wrong with them, the more I realise that this test is kind of the holy grail, because it involves almost every technical aspect of video, every workflow aspect, and every perceptual aspect. In fact, not only do you have to understand those, you have to have an underlying understanding of what is actually going on behind them.

For example, there's the Bayer sensor and debayering (and other sensors like X-Trans), there's colour subsampling, and there's image rescaling. If you don't understand these concepts then you can't have this conversation at all, but knowing how to talk about each of them in isolation isn't adequate either. They are all discussed in isolation in film-making, each of them talked about as sitting in a different place in the pipeline and serving a different purpose, and their definitions may not give any clue that they are related, but they are. In fact, they are almost identical: they are simply different applications of the same mathematics, applied in slightly different ways. Because the whole image pipeline is being discussed, and in the context of resolution, you cannot truly understand the signal path from end to end unless you understand that these all share the same underlying mechanism. This is something I tried to explain, but it's points like this that don't fare well in a conversation where 27 things are all being discussed simultaneously.

I think this is a problem that the world is experiencing in many forms - the discussion of things that are very technical and that people have vested interests in. Take the example of the age of the earth. Science says that it's a few billion years old. People can react to that information in one of two ways: they can believe the scientists and take it at face value, or they can question how this incredibly large number was derived (and they have every right to do so - being skeptical is a good thing). The problem comes when the scientists talk about radiometric dating, and very quickly we find that almost no-one can wrap their head around the dozens of concepts, methods, processes and even ways of thinking required to understand every idea that the analysis is built upon, unless they have a scientific background to begin with. So the conversation goes off in 100 directions (like this thread did) and it ends with either the person believing the scientists at face value (after having seen a lot more information), or the person deciding that the scientists are just baffling them with BS and they "go back to believing whatever they want to believe".

A funny thing about science is that the deeper you go, it gradually splits into quantum physics and philosophy, both of which end with basically no evidence that we exist at all. There is no "bedrock" of fundamental truth, regardless of how far you dig, so ultimately it comes down to a judgement call that everyone makes. The problem then remains: how do you determine truth about a topic where the summary isn't deemed trustworthy and the analysis is too long / complex for people to understand? I think this is a problem that we haven't really found an answer for yet.
Anyway, coming back to this thread, I'm happy that I understand what's going on here, I've seen enough misunderstandings on Tupp's part to see where he's going beyond the limits of his technical knowledge, and I'm also backed up by Yedlin, whose writings and demos are (apart from one person posting in this thread) widely respected across the industry.
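To make the point about debayering, chroma subsampling and rescaling sharing the same mathematics a bit more concrete, here's a deliberately over-simplified sketch - one-dimensional linear interpolation only, with made-up sample values; real debayer algorithms, chroma upsamplers and scalers use more sophisticated kernels, but the underlying job is the same:

```python
def lerp(a, b, t):
    """Estimate a missing sample between two known neighbours."""
    return (1 - t) * a + t * b

# 1. Image rescaling: invent a pixel value partway between two existing pixels.
upscaled_pixel = lerp(0.20, 0.60, 0.5)

# 2. Chroma upsampling (e.g. 4:2:0 -> 4:4:4): invent a missing chroma sample
#    between two stored chroma samples.
chroma_sample = lerp(0.45, 0.55, 0.5)

# 3. Debayering: invent the red value at a photosite that only recorded green,
#    using the neighbouring red photosites.
red_at_green_site = lerp(0.30, 0.50, 0.5)

print(upscaled_pixel, chroma_sample, red_at_green_site)
```

Same operation, three different names depending on where it sits in the pipeline.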
-
The FX9 and VENICE are FF, aren't they? If you're 6K FF then you can do a 4K crop, but if you're larger than FF then 6K won't cut it, which is where the 65 is. I agree about the "identical" pixel size, thus my comments about why they might have chosen this for colour performance.

The more I see what happens in the real world, the more I realise this "Netflix demands 4K" rule is really not a limitation, or at least not if you have a good team. I see an incredible amount of content on Netflix that was purchased from people who shot on sub-4K cameras, and even some of the feature films that Netflix commissioned weren't shot on 4K cameras. I think it's a "rule" on paper only, and if your content is good then they'll do the deal and make the money.

As was mentioned above, Canon cameras are often used for doc work, which is often shot outside in uncontrolled conditions, sometimes in a situation where you are just capturing things and not controlling them. In this situation you will expose for your subject, splitting the exposure between the sun and shadow skin tones, and when they run to a position in between you and the sun, it helps to not have the entire sky (or half the sky) as digital white. However, as you say, the top cine cameras have quite a lot of DR, so it shouldn't be a big deal if you're using those cameras anyway.
-
Which also means that you can have two sets of key accessories (like batteries and media) which can act as backups between the cameras, and some accessories (like lenses) don't need to be duplicated and can be shared across both cameras. This is a big deal - I'd suggest, @seanzzxx, that you price up the options taking into account all the batteries, chargers, rigging, media, lenses, and the rest of the stuff you would need, then look at each piece and see what you would do if it failed, then buy extras so that the camera bodies are the only single points of failure. With two different cameras there are likely to be lots of single points of failure that would take a camera out of operation, so you end up with almost four sets of stuff; do the same exercise with two of the same camera and you can eliminate a lot of that. Plus there's the extra weight of carrying things around, plus the extra complexity and time spent looking for the accessory for camera X when you've already found that accessory for camera Y but they're not compatible, etc.
-
Doing the maths, the Alexa 65 would have needed to be 8.7K to have a 4K S35 crop, but an 8K sensor would have a 3.7K S35 crop, which would probably have been just fine, so I guess 8K looks like the magic number at that sensor size. What's interesting is that the Alexa 65 has 120 pixels per mm horizontally, the 3.4K is also 120 px/mm and the 2.8K is 118 px/mm, so I wonder if they've decided that the pixels need to be at least that big? Certainly, ARRI are aware that having good pixels is more important than having lots of pixels, so they may have done testing and drawn a line at a certain pixel size as the minimum required for a certain image quality / noise / colour performance. Of course that won't stop other manufacturers cramming as many pixels into their cameras as possible. Every time I see a manufacturer declaring victory with their latest release full of tiny little pixels, I think of a salesman in full-tilt sales mode for All-You-Can-Eat Gravel. Sure, it's gravel, which isn't good to eat at all... but it's ALL YOU CAN EAT so STEP RIGHT UP!!! *sigh*
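For anyone who wants to check the arithmetic, here's a quick sketch using the approximate published figures (the sensor widths are the commonly quoted values, so the results round to the numbers above):

```python
alexa65_width_mm = 54.12   # approximate published Alexa 65 sensor width
alexa65_h_pixels = 6560    # ~6.5K horizontal photosites
s35_width_mm = 24.89       # approximate Super35 width

pitch = alexa65_h_pixels / alexa65_width_mm                  # ~121 pixels per mm
s35_crop_pixels = pitch * s35_width_mm                       # ~3.0K in an S35 crop
needed_for_4k_crop = 4000 / s35_width_mm * alexa65_width_mm  # ~8.7K full sensor width

print(round(pitch), round(s35_crop_pixels), round(needed_for_4k_crop))
# -> 121 3017 8697
```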
-
What are you hoping to achieve with these personal comments? How is this helping anyone? This whole thread is about the perception of resolution under various real-world conditions, to which you've added nothing except to endlessly criticise the tests in the first post. This thread has had 5.5K views, and I doubt people clicked on the title to read about how one person refused to watch the test, then didn't understand it, then endlessly used technical concepts out of context to try to invalidate it, and in the end got personal because their arguments weren't convincing. This is a thread about the perception of resolution in the real world - how about focusing on that?
-
I understand the logic, but the Alexa 65 suffers from the same problem. If we take the width of Super35 at 24.89mm and divide it by the width of the Alexa 65 at 54.12mm, it is 46% as wide. If we take 46% of the Alexa 65's 6.5K width then we get 3K, so the Alexa 65 can only shoot 3K in S35 crop mode. I won't pretend to know if 3K (from the Alexa 65) vs 2.7K (from a 1.5x crop into a FF 4K sensor) is a meaningful difference, but it's interesting that the Alexa 65 is a large format flagship model, and I would imagine that a C700 with 20 stops of DR would also be chasing that premium market. It also needs pretty rigorous testing processes in terms of managing the image pipeline. Not an easy thing to test consistently even if you are paying attention to all the details and don't have a vested interest!
-
I second the Fujifilm cameras - they seem to put the most effort into their built-in colour profiles. That said, for home use most cameras these days create a modern video-looking image, so if that suits your tastes then you could get away with using the built-in profiles.
-
Do I?
-
When I read this sentence I thought you were going to suggest that 4K wasn't enough. Haven't you heard that Hollywood is finally catching up to the amateur market and going bigger=better with sensor size? Go read the interviews of people who shot with the Alexa 65 and see how they talk about the sensor size. The scale of the image from the larger sensor was completely critical to their vision - they all say it over and over again!
-
You criticised Yedlin for using a 6K camera on a 4K timeline and then linked to a GH5s test (a ~5K camera) as an alternative... what about the evil interpolation that you hold to be most foul? Have you had a change of heart about your own criteria? Have you seen the light? You even acknowledge that the test "is not perfect" - I fear that COVID has driven you to desperation and you are abandoning your previous 'zero-tolerance, even for things that don't matter or don't exist' criteria! Yedlin's test remains the most thorough available on the subject, so until I see your test I will refer to Yedlin's as the analysis of reference. Performing your own test should be an absolute breeze to whip up, considering how elevated you claim your intellect to be in comparison to Yedlin, who managed to publish such a test.
-
Cool that you're getting use out of yours! I'd imagine that people using it for drone work are also quite satisfied, as that's what it's really designed for. To each their own 🙂
-
Well?
-
You left out drinking. Anyone who goes out and drinks at a bar / pub and gets drunk, even only once per week, is easily spending the price of a cinema camera every year... at Australian prices anyway.
-
The comments that I read were mostly about the sensor, but the rest of the camera may have different chipsets and other stuff going on under the hood, so who knows. I highly doubt you'll be able to get firmware from one and load it on the other - obviously things like buttons and UI are probably very different. Like with anything else, we're at the mercy of however much capability the manufacturers want to give us.
-
The stream was in the middle of the night for me, but I just watched the recording of the session earlier today and I have to agree - it was spectacular. For those that couldn't attend, here are some of the things I learned from the class:

- The challenge isn't to grade a shot with 40 nodes, it's delivering the same look over a 3000-shot feature or (even worse) a series that is shot over time, and doing it within the time/budget that the production has allocated.
- On top of that, what if you're going in the wrong direction and the director doesn't like it? How long did you just waste, and how can you effectively change the grade without breaking all the qualifiers, etc.?
- Start with an overall adjustment across the whole project. This should be based on the look that the DP wants for the project, and if you've handled the colour spaces correctly, it should render correctly on the output every shot that the DP exposed correctly on capture - for many projects that are shot very well this might be all that is required.
- From there you can apply scene-specific adjustments, such as warming or cooling scenes so they're coherent with the emotional arc of the story.
- From there you should only be adjusting tiny things on a per-shot basis, such as small changes in exposure (eg due to changing lighting conditions) or tweaking distracting elements, etc.
- This should mean you can show the complete film to the director with that rough grade and get feedback, with room to change the overall look and the look of various scenes, and still have time left over for incorporating VFX, troubleshooting any difficult sections that need more attention, mastering for SDR/HDR, etc.
- Interestingly, if you build the global look starting with a CST to transform from the camera colour space/gamma into something standard (such as ARRI LogC or Cineon) and build your look on that, then you can take that same overall node tree, apply it to a different project shot on a different camera, and by simply adjusting the CST to convert from the new project's colour space you can quickly re-use a look between projects (a rough sketch of the idea is below). Walter did this live several times and in literally a couple of minutes had a very solid looking grade on a completely different project.

He also took great joy in roasting various YT colourists, LUT peddlers, and those that shout at the internet from the very shallow end of the pool, so I found the class hugely entertaining. For members of Lowepost, he also did a short workflow explanation here: https://lowepost.com/insider/colorgrading/hollywood-colorist-walter-csi-about-his-color-grading-process-r48/

Did anyone else watch his masterclass?
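To make that last workflow point concrete, here's a minimal, purely conceptual sketch - the transforms are made-up placeholders rather than real colour science, and in practice this all lives in Resolve nodes rather than code - showing that the look is built once in a common working space and only the input CST is swapped per camera:

```python
def input_cst(x, camera):
    """Placeholder 'camera native -> working space' transform, one per camera."""
    gains = {"alexa_logc": 1.00, "venice_slog3": 1.05, "red_ipp2": 0.95}
    return x * gains[camera]

def show_look(x):
    """Placeholder for the global look, built once in the working space."""
    return min(1.0, x ** 0.9)

def output_transform(x):
    """Placeholder 'working space -> display' transform."""
    return x

def graded(x, camera):
    # The look and the output transform stay fixed; only the input CST is swapped.
    return output_transform(show_look(input_cst(x, camera)))

print(graded(0.5, "alexa_logc"), graded(0.5, "venice_slog3"))
```

The point is structural: moving the grade to a new camera only means replacing the first transform in the chain, not rebuilding the look.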
-
A video from the OG BMPCC (the P2K!): https://youtu.be/aKyG5JdSUNc (for some reason YT won't embed the video).

I agree. Saying something is technically possible is one thing, but having the skill required and then the time to do so is another thing altogether. Steve Yedlin has done great work matching the Alexa to film, but he built his own software to do so, and he says this about the capability of colour grading software, even as advanced as Resolve and Baselight:

My attempts to replicate the look of the P2K and M2K were never focused on achieving a perfect replication, or even a passable one, but more a "shoot for the stars and potentially hit the moon" scenario where the worst case is that I'll learn more about colour grading, and I have definitely done that. I have learned a bunch of stuff that I think really bridges the gap between, say, the GH5 and the BMPCC. There are other things I'm still working on though - shadow contrast and levels are still a huge thing I'm experimenting with now, for example.

And the 14mm and 7.5mm both have the same filter thread size, so if you use them as a pair then you can just swap the whole filter stack between them when you change lenses - making the setup much simpler and more streamlined.
-
I wasn't saying that it was aimed at the same user, or that the overlap would be 100%, but it's a lot closer than other parts of the market. My point was really that cine cameras have a bunch of things that make sense for cinema, but are a royal PITA for other things, and on this point the FP is quite well aligned. I shoot travel content with a GH5, and if I list every reason that a P4K wouldn't be suitable for me then almost all of them apply to the FP. If I then listed all the things I would be looking for if I shot a narrative piece, the P4K and FP share most of them.
-
I guess I really haven't succeeded then. The imaging pipeline is complex enough that it's difficult to understand the whole thing (which is one of the reasons why Yedlin's video is an hour long), so when you get two people both talking at length using technical language it's hard to tell which one of them is correct. This challenge happens with any topic that is complex and where people have vested interests (for example, the topic of radiometric dating and the implications it has for the age of the earth and for Bible literalists). Is there something I can do to better explain why Yedlin's test is valid and Tupp's criticisms aren't? The reason I haven't backed down is that I don't want people to come away from this thread thinking the test doesn't hold up, but unfortunately the more mud gets thrown at something, the harder it is to tell where the truth is.
-
This is from a resolution test of the ARRI Alexa. Source is here: https://tech.ebu.ch/docs/tech/tech3335_s11.pdf (top of page 10). It's pretty obvious that the red has significantly less resolution than the green. This comes from the number of green vs red photosites on the sensor. But you're totally right - this has no impact on a test about resolution at all!
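The arithmetic behind that is simple: on a Bayer pattern half the photosites are green and only a quarter each are red and blue, so the red channel starts with half as many real samples as green before any interpolation happens. A toy example, assuming a UHD-sized photosite grid purely for illustration:

```python
width, height = 3840, 2160        # assumed photosite grid, for illustration only
total = width * height

green_sites = total // 2          # Bayer: every other photosite is green
red_sites = total // 4
blue_sites = total // 4

print(f"green: {green_sites:,}  red: {red_sites:,}  blue: {blue_sites:,}")
# green: 4,147,200  red: 2,073,600  blue: 2,073,600
```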
-
I wish I lived in your world of no colour subsampling and uncompressed image pipelines, I really do. But I don't. Neither does almost anyone else. Yedlin's test is for the world we live in, not the one that you hallucinate.
-
If history has taught us anything, it's that Canon will take forever to release the camera we want, and when they do it will be a huge disappointment. If you have work to do and can do it with the equipment you already have, then use that. If you need a new camera to do it, then buy a used good condition copy of the cheapest camera model that can get the job done, and if the stars align and Canon releases a camera that doesn't overheat, combine LOG with 8-bit, or have the DR of the iPhone 4, then you should be able to sell what you have for close to what you paid for it and buy the Canon unicorn.
-
Actually, the curve on the right is closer to what I would typically do:
-
I think there's a real art to blacks in colour grading - I've learned that getting the right levels in the dark parts of the image has a huge impact on image pop and the overall look.

I'd suggest putting in a pretty aggressive knee, so that anything lower than a certain value gets compressed but doesn't go completely to black and get clipped. You could put that knee quite close to 0 IRE so you don't end up with washed-out looking images, but it would mean that you keep whatever information is in the shadows while still squashing the noise so it's not too obvious, and it would also make the image look a bit higher-end, as a significant part of the look of high-end cine cameras is how they handle the shadows.

I often set up a curve that compresses the shadows more than the highlights and grade under that. This is a random image I found online that shows what such a curve might look like: My curve is often more aggressive than this, and the more aggressive you make the curve the more filmic the final image will look. When you first apply such a curve everything will look over-contrasty, and you will need to manually grade every shot underneath it.

Often the Lift, Gamma and Gain (LGG) controls are great for this: the Lift places how far down the curve your blacks go (and also defines overall perceived contrast and adjusts saturation), the Gain places the highlights and gives a nice rolloff (making the edges of any clipping much less obvious), and then you can adjust the overall brightness of the shot with the Gamma. Often you have to go back and forth with these controls - you pull the Lift down to get the shadows right, then pull the Gamma up to adjust the mids, but that also pulls the shadows up a bit, so you pull the Lift down more, etc, until you've pushed/pulled the exposure to a point that looks good.

I've graded many projects by just applying such a curve, then on each shot tweaking WB, then using the LGG controls to get levels, then Saturation, and often that will be all the project needs. If you have a control surface then the LGG adjustments take very little time and you can rip through an edit very quickly. Happy to elaborate further, just ask.
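As a rough illustration of the kind of knee I'm describing - the numbers are arbitrary and the real curve is built by eye in the grade (a smooth spline would be gentler than this piecewise-linear sketch):

```python
import numpy as np

def shadow_knee(x, knee=0.10, slope=0.35):
    """Above the knee: leave values alone. Below it: reduce the slope so shadow
    detail and noise are compressed, but nothing clips to pure black."""
    x = np.asarray(x, dtype=float)
    compressed = knee + (x - knee) * slope   # gentle linear segment under the knee
    return np.where(x < knee, compressed, x)

# A handful of shadow values: 0.0 no longer lands on pure black,
# and the differences between dark values are squashed.
print(shadow_knee([0.0, 0.02, 0.05, 0.10, 0.50]))
# -> roughly [0.065, 0.072, 0.0825, 0.1, 0.5]
```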
-
You're really not getting this... You rejected the test because it involves interpolation, which is common to almost every camera, as most cameras have fewer photosites than their output resolution has colour values. You also rejected the test because the Alexa 65 is a 6.5K camera and not a 4K camera and therefore involves interpolation. The Alexa 65 isn't a common camera, sure, but it shares the same colour subsampling properties as most cameras, shares the same 'over-capture' aspects as many other workflows, and is a high-quality imaging device, so if you can't tell 2K from 4K from an Alexa 65 then it's a good test and it's applicable to most other situations. A camera with a Foveon sensor does not share the same colour subsampling properties as most cameras, and therefore isn't a good test, which is why it's a red herring and not applicable to any sensible conversation about perception.
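To put rough numbers on "fewer photosites than colour values", assuming a 4K-ish Bayer sensor purely for illustration:

```python
photosites = 4096 * 2160             # one measured value per photosite
colour_values = 4096 * 2160 * 3      # R, G and B for every output pixel

measured_fraction = photosites / colour_values
print(f"{measured_fraction:.0%} measured, {1 - measured_fraction:.0%} interpolated")
# 33% measured, 67% interpolated
```

Roughly two-thirds of the colour values coming out of a typical Bayer camera are interpolated, which is exactly why ruling out any test that involves interpolation rules out almost every camera.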