Everything posted by kye

  1. I think the answer is simple, and comes in two parts. First, they aren't film-makers: the professional equivalent of what YouTubers do would be ultra-low-budget reality TV, which is hardly the basis for understanding a cinema camera. Second, they don't understand the needs of others, or that others even have different needs. If someone criticises a cinema camera for "weaknesses" that ARRI and RED also share (e.g. lack of IBIS) then it shows how much they really don't know. I recently bought a P2K, but was considering the FP and waited for the FP-L to come out before making my decision. The Sigmas aren't for me for a number of reasons, but I don't think they're bad cameras, just that they're not a good fit for me.
  2. Thanks! I keep banging on about how the cameras used on pro sets are invisible because they often don't get talked about, whereas YouTubers talk about their cameras ad nauseam. Well, do these look familiar? They switched from GoPros to the Blackmagic Micro Studio Camera 4K, and piped them into these: http://www.content-technology.com/postproduction/c-mount-industries-rides-on-aja-ki-pro-ultras-for-carpool-karaoke/ There's a reason that the BMMCC is still a current model camera.
  3. @tupp Your obsession with pixels not impacting the pixels adjacent to them means that your arguments don't apply in the real world. I don't understand why you keep pursuing this "it's not perfect so it can't be valid" line of logic. Bayer sensors require debayering, which is a process involving interpolation (see the sketch below). I have provided links to articles explaining this but you seem to ignore this inconvenient truth. Even if we ignore the industry trend of capturing images at a different resolution than they are delivered in, it still means that your mythical image pipeline that doesn't involve any interpolation is limited to cameras that capture such a tiny fraction of the images we watch that they may as well not exist. Your criticisms also don't allow for compression, which is applied to basically every image that is consumed. This is a fundamental issue because compression blurs edges and obscures detail significantly, making many differences that might be visible in the mastering suite invisible in the final delivered stream. Once again, this means your comparison is limited to some utopian fairy-land that doesn't apply here in our dimension. I don't understand why you persist. Even if you were right about everything else (which you're not), you would only be proving the statement "4K is perceptually different to 2K when you shoot with cameras that no-one shoots with, match resolutions through the whole pipeline, and deliver in a format no-one delivers in". Obviously, such a statement would be pointless.
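
    A minimal sketch of that debayering point, assuming a standard RGGB Bayer layout and plain bilinear interpolation (real cameras use fancier algorithms, but all of them estimate the two missing colour channels at each photosite from its neighbours):

```python
import numpy as np

def bilinear_demosaic(raw):
    """Very rough bilinear demosaic of an RGGB Bayer mosaic (height and width even)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True    # R photosites
    masks[0::2, 1::2, 1] = True    # G photosites (even rows)
    masks[1::2, 0::2, 1] = True    # G photosites (odd rows)
    masks[1::2, 1::2, 2] = True    # B photosites
    for c in range(3):
        chan = np.where(masks[..., c], raw, 0.0)
        weight = masks[..., c].astype(float)
        # Average each channel over a 3x3 neighbourhood of known samples:
        # every output pixel gets two of its three colours purely by interpolation.
        value_sum = sum(np.roll(np.roll(chan, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        weight_sum = sum(np.roll(np.roll(weight, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., c] = value_sum / np.maximum(weight_sum, 1e-6)
    return rgb

raw = np.random.rand(8, 8)            # stand-in for sensor data
print(bilinear_demosaic(raw).shape)   # (8, 8, 3): two thirds of these values are interpolated
```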
  4. Yes and no - if the edge is at an angle then you need infinite resolution to avoid having a grey-scale pixel in between the two flat areas of colour. VFX requires softening (blurring) in order to not appear aliased, or must be rendered where a pixel is taken to have the value of light within an arc (which might partially hit an object but also partially miss it) rather than at a single line (which is either hit or miss because it's infinitely thin). Tupp disagrees with us on this point, but yes. I haven't read much about debayering, but it makes sense if the interpolation is a higher-order function than linear from the immediate pixels. There is a cultural element, but Yedlin's test was strictly about perceptibility, not preference. When he upscales 2K->4K he can't reproduce the high frequency details because they're gone. It's like if I described the beach as being low near the water and higher up further away from the water: you couldn't take my information and recreate the curve of the beach from that, let alone the ripples in the sand from the wind or the texture of the footprints in it - all that information is gone and all I have given you is a straight line. In digital systems there's a thing called the Nyquist frequency, which in digital audio terms says that the highest frequency that can be reproduced is half the sampling rate, i.e. the highest frequency is when the data goes "100, 0, 100, 0, 100, 0". In an image the effect is that if I say the 2K pixels are "100, 0, 100" then that translates to a 4K image of "100, ?, 0, ?, 100, ?", so the best we can do is simply guess what those pixel values were, based on the surrounding pixels, but we can't know whether one of those edges was sharp or not. The right 4K image might be "100, 50, 0, 0, 100, 100", but how would we know? The information that one of those edges was soft and one was sharp is lost forever.
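
    A tiny numpy sketch of that "100, ?, 0, ?, 100, ?" point, using the made-up sample values from the post above: two different 4K edges collapse to the same 2K samples, so an upscaler can only guess.

```python
import numpy as np

# Two different "4K" rows: one hard edge, one soft edge (values from the example above).
hard = np.array([100, 100, 0, 0, 100, 100], dtype=float)
soft = np.array([100,  50, 0, 0, 100, 100], dtype=float)

# Crude 2:1 downsample to "2K": keep every second sample.
print(hard[::2])   # [100.   0. 100.]
print(soft[::2])   # [100.   0. 100.]  -> identical, the edge shape is gone

# Upscaling back to "4K" can only interpolate; it recreates neither original.
up = np.interp(np.arange(6) / 2.0, np.arange(3), hard[::2])
print(up)          # [100.  50.   0.  50. 100. 100.]
```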
  5. I think perhaps the largest difference between video and video games is that video games (and computer generated imagery in general) can have a 100% white pixel right next to a 100% black pixel, whereas cameras don't seem to do that. In Yedlin's demo he zooms into the edge of the blind and shows the 6K straight from the Alexa with no scaling, and the "edge" is actually a gradient that takes maybe 4-6 pixels to go from dark to light. I don't know if this is due to lens limitations, sensor diffraction, OLPFs, or debayering algorithms, but it seems to match everything I've ever shot. It's not a difficult test to do: take any camera that can shoot RAW and put it on a tripod, set it to base ISO and aperture priority, take it outside, open the aperture right up, focus it on a hard edge that has some contrast, stop down by 4 stops, take the shot, then look at it in an image editor and zoom way in to see what the edge looks like. In terms of Yedlin's demo, I think the question is whether having resolution over 2K is perceptible under normal viewing conditions. When he zooms in a lot it's quite obvious that there is more resolution there, but the question isn't if more resolution has more resolution, because we know that of course it does, and VFX people want as much of it as possible, but whether audiences can see the difference. I'm happy from the demo to say that it's not perceptually different. Of course, it's also easy to run Yedlin's test yourself at home. Simply take a 4K video clip and export it at native resolution and at 2K (losslessly if you like), then bring both versions onto a 4K timeline and watch it on a 4K display; you can even cut them up and put them side-by-side or do whatever you want. If you don't have a camera that can shoot RAW then take a timelapse with RAW still images and use that as the source video, or download some sample footage from RED, which has footage up to 8K RAW available to download free from their website.
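
    For anyone who'd rather try the round trip on a single frame first, here's a rough Pillow sketch of the same idea (the file names are placeholders, substitute a frame exported from your own footage):

```python
from PIL import Image

# Placeholder file name - substitute a frame exported from your own footage.
src = Image.open("frame_4k.png").convert("RGB")   # a native "4K" (e.g. 3840x2160) frame
w, h = src.size

# Round trip through "2K": downscale to half size, then back up to the 4K raster,
# which is roughly what happens when 2K material ends up on a 4K timeline/display.
two_k = src.resize((w // 2, h // 2), Image.LANCZOS)
round_trip = two_k.resize((w, h), Image.LANCZOS)

# Side-by-side comparison: native 4K on the left, 2K round trip on the right.
compare = Image.new("RGB", (w * 2, h))
compare.paste(src, (0, 0))
compare.paste(round_trip, (w, 0))
compare.save("4k_vs_2k_roundtrip.png")
```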
  6. I've been trying to pull this apart for a long time, or maybe it just seems like a long time, it's hard to tell! I get the sense that the difference is a culmination of all the little things, and that the Alexa does all of the things you mention very well, and most cameras don't do these things nearly as well. Further to that, each of us has different sensory sensitivities, so while one person might be very bothered by rolling shutter (for example) the next person may not mind so much, etc. Also, the "lesser" cameras, like the GH5, will do some things more poorly than others, for example the 400Mbps ALL-I 10-bit 4K mode isn't as good as an Alexa, but it's significantly better than something like the A7S2 with its 100Mbps 8-bit 4K mode. And finally, the work you are doing will require different aspects, like dynamic range being more important in uncontrolled lighting and rolling shutter being more important in high-movement scenes and (especially) when the camera is moving a lot. So in this sense, camera choice is partly a matter of finding the best overlap between a camera's strengths, your own sensitivities / preferences, and the type of work you are doing. Furthermore, I would imagine that some cameras exceed the Alexa's capability, at least in some aspects. These examples are rarer, and it depends on which Alexa you are talking about, but if we take the original Alexa Classic as the reference, then the new Alexa 65 exceeds it in many ways. I believe RED has models that may meet or exceed the Alexa line in terms of dynamic range (it's hard to get reliable measures of this so I won't state it as fact) and I'm sure there are other examples. There are other considerations beyond image though, considering that the subject of the image is critical, and I couldn't do my work at all if I had an Alexa, firstly because I couldn't carry the thing for long enough, and secondly because I'd get kicked out of the various places that I like to film, which includes out in public and also in private places like museums, temples, etc, which reject "professional" shooting, which they judge by how the camera looks. Everything is a compromise, and the journey is long and deep. I've explored many aspects here on the forums though, and I'm happy to discuss whichever aspects you care to discuss as I enjoy the discussions and learning more. Many of the threads I started seem to fall off, but often I have progressed further than the contributions I have made in the thread, often because I came back to it after a break, or because I've developed a sense of something but can't prove it, so if you're curious about anything then just ask 🙂
  7. A 4K Bayer sensor has one third the number of colour samples that a 4K monitor has emitters: each photosite records only one of red, green or blue, while each display pixel has all three. This means that debayering involves interpolation, which means your proposal involves significant interpolation, and therefore fails your own criteria.
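
    The arithmetic behind that, as a quick sanity check (assuming a UHD 3840x2160 raster on both the sensor and the display):

```python
photosites = 3840 * 2160        # one colour sample per photosite on a Bayer sensor
emitters = 3840 * 2160 * 3      # separate R, G and B emitters per display pixel
print(photosites, emitters, photosites / emitters)   # 8294400 24883200 0.333...
```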
  8. Based on that, there is no exact way to test resolutions that will apply to any situation beyond the specific combination being tested. So, let's take that as true, and do a non-exact test based upon a typical image pipeline. I propose comparing the image from a 6K cinema camera being put onto a 4K timeline vs a 2K timeline, and to be sure, let's zoom in to 200% so we can see the differences a little more than they would normally be visible. This is what Yedlin did. A single wrong point invalidates an analysis if, and only if, the subsequent analysis is dependent on that point. Yedlin's was not. No he didn't. You have failed to understand his first point, and then subsequently to that, you have failed to realise that his first point isn't actually critical to the remainder of his analysis. You have stated that there is scaling because the blown-up versions didn't match, which isn't valid because: (1) different image rendering algorithms can cause them not to match, so you don't actually know for sure that they don't match (it could simply be that your viewer didn't match but his did); (2) you assumed that there was scaling involved because the grey box had impacted the pixels surrounding it, which could also have been caused by compression, so this doesn't prove scaling; and (3) neither of those matters anyway, because even if there was scaling, basically every image we see has been scaled and compressed. Your "problem" is that you misinterpreted a point, but even if you hadn't, the mismatch could have been caused by other factors, and even if it wasn't, it isn't relevant to the end result anyway.
  9. That could certainly be true. Debayering involves interpolation (like rescaling does) so the different algorithms can create significantly different amounts of edge detail, which at high-bitrate codecs would be quite noticeable even if the radius of the differences was under 2 pixels.
  10. Upgrade paths are always about what you want and value in an image. IIRC you really value the 14-bit RAW (even over the 12-bit RAW) so I'd imagine that any upgrade would have to also shoot RAW? A friend of mine shoots with 5D+ML and apart from going to a full cinema camera, you're going to find that the alternatives all have some problem or other that would be a downgrade from the 5D. The 5D+ML combo isn't perfect by any means, but other cameras haven't necessarily even caught up yet, let alone being improvements - it's still give and take in comparison.
  11. GH5 does this via custom modes. I don't use it for stills really, but I have custom modes for video that have different exposure modes (i.e. I have custom modes that are Aperture priority and ones that are Manual mode).
  12. I just signed up for this course... (I linked to the YT video because it has a promo code in the description for a small discount) Walter is a senior colourist at Company3, which is one of the top (if not the top) colour and post houses in Hollywood, and I haven't seen much info from him, so I think this might be a rare chance to get some insights from him.
  13. It would be nice to see some analysis of that. Considering that images are just bunches of numbers, we can analyse them in almost any way you can imagine, yet we do basically no analysis whatsoever and instead just fight about our preferred religion manufacturer online... It's quite sad really. What a fascinating thing - that BM RAW has processing on the RAW output.. that almost completely defeats the purpose of RAW! There's a way to work out how to un-do it in post, but it's a PITA to do. Sharpening isn't always bad, but more can always be added in post, so manufacturers should only add it sparingly. I've heard that the Alexa adds a small amount of sharpening to the ProRes, as the compression smooths some detail so they add a little back in to match the original look. In a normal camera they add sharpening, then compress the image with h264/5, which I think is the killer combo. Sharpening and blurring are (roughly) mathematical opposites, so with the right algorithm you should be able to reverse the sharpening in post with blurring, but there is compression in between, so information gets lost, and by the time you blur enough to get rid of the edge sharpening the image is blurry as hell.
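
    A minimal 1-D sketch of that last point, using an unsharp-mask style sharpen and a box blur (both just stand-ins for whatever a camera and NLE actually do): once the sharpened signal has been quantised, blurring alone doesn't give the original edge back.

```python
import numpy as np

def box_blur(x, radius=1):
    """Moving-average blur with edge clamping."""
    padded = np.pad(x, radius, mode="edge")
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(padded, kernel, mode="valid")

# A soft edge, similar to what cameras actually record.
original = np.array([0, 0, 10, 40, 80, 100, 100, 100], dtype=float)

# Unsharp-mask style sharpening: add back the difference from a blurred copy.
sharpened = original + 1.0 * (original - box_blur(original))

# Crude stand-in for compression: round to integer code values (information is lost here).
delivered = np.round(sharpened)

# "Undoing" the sharpening with a blur softens the halos but does not recover the
# original edge; that would need the exact inverse filter, applied before any loss.
recovered = box_blur(delivered)
print(np.round(original, 1))
print(np.round(recovered, 1))
```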
  14. To elaborate on what I said before, if they are applying different colour science to the Prores and to the RAW then you'll have to use a different conversion LUT, but I think there is only one LUT for ARRI, right? If so, either the magic is in the LUT or in the camera and being applied to the RAW. I suspect the latter, as it's the only good way to keep it far from prying eyes and people who would steal it.
  15. You raise an interesting point about the Prores vs RAW and I don't think I ever got to the bottom of it. With the Prores they can put whatever processing into the camera that they want (and manufacturers certainly do) but technically the RAW should be straight off the sensor. Of course, in the instance of the Alexa, it isn't straight off the sensor due to the dual-gain architecture which combines two readouts to get (IIRC) higher dynamic range and bit-depth, so there is definitely processing there, although the output is uncompressed. Perhaps they are applying colour science processing at this point as well, I'm not sure. The reason that this question is more than just an academic curiosity is that if they are not applying colour science processing to their RAW, then at least some of the magic of the image is in their conversion LUTs, which we all have access to and could choose to grade under if we chose to (and some do). Yes, testing DR involves working your way through various processing, if you can't get a straight RAW signal. I'm assuming that they would have tested the RAW Alexa footage but they haven't published the charts so who knows. Bit depth and DR are related, but do not need to correlate. For example, I could have a bit-depth of 2 bits and have a DR of 1000 stops. In this example I would say 0 for anything not in direct sun, 1 for anything in direct sun that wasn't the sun, 2 for the sun itself, and only hit 3 if a nearby supernova occurred (gotta protect those highlights!!). Obviously this would have so much banding that it would be ridiculous, but it's possible. Manufacturers don't want to push things too far otherwise they risk this, but you can push it if you wanted to. You're not the only ones, I hear this a lot, especially in the OG BMPCC forums / groups.
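
    That 2-bit / 1000-stop thought experiment in code, just to make the point that bit depth and DR don't have to correlate (the threshold values are obviously made up):

```python
import numpy as np

# Hypothetical scene brightnesses in stops above an arbitrary reference.
stops = np.array([-500.0, -3.0, 0.0, 5.0, 12.0, 400.0, 999.0])

# A 2-bit encoding spanning ~1000 stops: 0 = not in direct sun, 1 = direct sun,
# 2 = the sun itself, 3 = reserved for a nearby supernova (gotta protect those highlights).
thresholds = [10.0, 100.0, 900.0]
codes = np.digitize(stops, thresholds)
print(codes)   # [0 0 0 0 1 2 3] -> enormous DR, but banding beyond ridiculous
```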
  16. I saw this image from Cine-D that shows some of their tests and includes the Alexa - it shows that ARRI was conservative with their figures while most other manufacturers took sometimes wild liberties with the figures. These numbers should be directly comparable to the other tests that they do, as the thresholds and methodologies should be the same.
  17. I understand what you're saying, but would suggest that they are only simple to deal with in post because they've had the most work put into them to achieve the in-camera profiles. It is widely known that the ARRI CEO Glenn Kennel was an expert on film colour before he joined ARRI to help develop the Alexa. Film was in development for decades with spectacular investment into its colour science prior to that, so to base the Alexa colour science on film was to stand on the shoulders of giants. Glenn's book is highly recommended and I learned more about the colour science of film from one chapter in it than from reading everything on the topic I could find online for years prior. Also, Apple have put an enormous effort into the colour science of the iPhone, which has now been the most popular camera on earth for quite some time, according to Flickr stats anyway. I have gone on several trips where I was shooting with the XC10 or GH5 and my wife was taking still images with her iPhone, so I have dozens of instances where we were standing side-by-side at a vantage point and shooting the exact same scene at the exact same time. Later on in post I tried replicating the colour from her iPhone shots with my footage and only then realised what a spectacular job Apple have done with their colour science - the images are heavily processed with lots and lots of stuff going on in there. And now that I have a BMMCC and my OG BMPCC is on its way, I will add that the footage from these cameras also grades absolutely beautifully straight out of camera - they too (as well as Fairchild, who made the sensor) did a great job on the colour science. The P4K/P6K footage is radically different and doesn't share the same look at all.
  18. The D-Mount project: I have a similar project that I shot with the BMMCC and the Cosmicar 12.5/1.9 C-mount and the Voigtlander 42.5/0.95, so I'll have to do the same re-cut process to remove all shots that don't include a model release! I also have an OG BMPCC on its way to me, so am planning on lots more outings with it, likely with the 7.5/2 and 14/2.5, but also perhaps with the 14-42 or 12-32 kit lenses, which have OIS, so should be much more stable 🙂
  19. His test applies to situations where there is image scaling and compression involved, which is basically every piece of content anyone consumes. If you're going to throw away an entire analysis based on a single point, then have a think about this: 1<0 and the sky is blue. Uh oh, now that I've said 1<0, which clearly isn't true, the sky can't be blue, because everything I said must now logically be wrong and cannot be true!
  20. He took an image from a highly respected cinema camera, put it onto a 4K timeline, then exported that timeline to a 1080p compressed file, and then transmitted that over the internet to viewers. Yeah, that doesn't apply to anything else that ever happens, you're totally right, no-one has ever done that before and no-one will ever do that again..... 🙄🙄🙄
  21. Why do you care if the test only applies to the 99.9999% of content viewed by people worldwide that has scaling and compression?
  22. Goodness! We'll be talking about content next!! What has the state of the camera forums come to?!?!?! Just imagine what people could create if they buy cameras that have a thick and luscious image to begin with, AND ALSO learn to colour grade...
  23. Panasonic GH6: They should just make a battery grip that contains an M.2 SSD slot that automagically connects to the camera and records the compressed raw on that - bingo.. "external" raw. Or license ProRes from Apple and offer those options. ...or just make it possible to select whatever bitrate and bit depth you want and turn off sharpening - that would do it for me.