Everything posted by kye

  1. You raise an interesting point about the ProRes vs RAW and I don't think I ever got to the bottom of it. With the ProRes they can put whatever processing into the camera that they want (and manufacturers certainly do), but technically the RAW should be straight off the sensor. Of course, in the instance of the Alexa it isn't straight off the sensor, due to the dual-gain architecture which combines two readouts to get (IIRC) higher dynamic range and bit-depth, so there is definitely processing there, although the output is uncompressed. Perhaps they are applying colour science processing at this point as well, I'm not sure.

The reason that this question is more than just an academic curiosity is that if they are not applying colour science processing to their RAW, then at least some of the magic of the image is in their conversion LUTs, which we all have access to and could grade under if we chose to (and some do).

Yes, testing DR involves working your way through various processing if you can't get a straight RAW signal. I'm assuming that they would have tested the RAW Alexa footage, but they haven't published the charts, so who knows.

Bit depth and DR are related, but do not need to correlate. For example, I could have a bit-depth of 2 bits and a DR of 1000 stops. In this example I would record 0 for anything not in direct sun, 1 for anything in direct sun that wasn't the sun, 2 for the sun itself, and only hit 3 if a nearby supernova occurred (gotta protect those highlights!!). Obviously this would have so much banding that it would be ridiculous, but it's possible. Manufacturers don't want to push things too far, otherwise they risk this, but you could push it if you wanted to.

You're not the only ones; I hear this a lot, especially in the OG BMPCC forums / groups.
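To make the 2-bit example above concrete, here's a minimal Python sketch (my own illustration, with made-up luminance thresholds). The number of codes comes from the bit depth; the span in stops comes only from where you put the darkest and brightest thresholds, so the two are set independently:

```python
# Minimal sketch of the point above: bit depth only sets how many output codes
# exist, while dynamic range is set by how far apart the darkest and brightest
# encodable values are. All thresholds here are made up for illustration.

import math

def encode_2bit(luminance_nits):
    """Quantise scene luminance into one of four codes (2 bits)."""
    if luminance_nits < 1e2:       # anything not in direct sun
        return 0
    elif luminance_nits < 1e9:     # lit by direct sun, but not the sun itself
        return 1
    elif luminance_nits < 1e12:    # the sun's disc
        return 2
    else:                          # reserve the last code for a nearby supernova
        return 3

darkest, brightest = 1e-3, 1e12              # arbitrary end points of the encoded range
stops = math.log2(brightest / darkest)       # ~50 stops with these particular numbers
print(f"codes: 4 (2 bits), encoded range: ~{stops:.0f} stops, banding: extreme")
```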
  2. I saw this image from Cine-D that shows some of their tests and includes the Alexa - it shows that ARRI was conservative with their figures while most other manufacturers sometimes took wild liberties with theirs. These numbers should be directly comparable to the other tests that they do, as the thresholds and methodologies should be the same.
  3. I understand what you're saying, but would suggest that they are only simple to deal with in post because they've had the most work put into them to achieve the in-camera profiles.

It is widely known that the ARRI CEO Glenn Kennel was an expert on film colour before he joined ARRI to help develop the Alexa. Film had been in development for decades before that, with spectacular investment into its colour science, so to base the Alexa colour science on film was to stand on the shoulders of giants. Glenn's book is highly recommended, and I learned more about the colour science of film from one chapter in it than from reading everything on the topic I could find online for years prior.

Also, Apple have put an enormous effort into the colour science of the iPhone, which has now been the most popular camera on earth for quite some time, according to Flickr stats anyway. I have gone on several trips where I was shooting with the XC10 or GH5 and my wife was taking stills with her iPhone, so I have dozens of instances where we were standing side-by-side at a vantage point, shooting the exact same scene at the exact same time. Later on in post I tried replicating the colour from her iPhone shots with my footage, and only then realised what a spectacular job Apple have done with their colour science - the images are heavily processed, with lots and lots of stuff going on in there.

And now that I have a BMMCC and my OG BMPCC is on its way, I will add that the footage from these cameras also grades absolutely beautifully straight-out-of-camera - they too (as well as Fairchild, who made the sensor) did a great job on the colour science. The P4K/P6K footage is radically different and doesn't share the same look at all.
  4. The D-Mount project

    I have a similar project that I shot with the BMMCC and the Cosmicar 12.5/1.9 C-mount and the Voigtlander 42.5/0.95 so I'll have to do the same re-cut process to remove all shots that don't include a model release! I also have an OG BMPCC on its way to me, so am planning on lots more outings with it, likely with the 7.5/2 and 14/2.5, but also perhaps with the 14-42 or 12-32 kit lenses, which have OIS, so should be much more stable 🙂
  5. His test applies to situations where there is image scaling and compression involved, which is basically every piece of content anyone consumes. If you're going to throw away an entire analysis based on a single point, then have a think about this: 1<0 and the sky is blue. Uh oh, now I've said that 1<0, which it clearly isn't, so the sky can't be blue, because everything I said must now logically be wrong and cannot be true!
  6. He took an image from a highly respected cinema camera, put it onto a 4K timeline, then exported that timeline to a 1080p compressed file, and then transmitted that over the internet to viewers. Yeah, that doesn't apply to anything else that ever happens, you're totally right, no-one has ever done that before and no-one will ever do that again..... 🙄🙄🙄
  7. Why do you care if the test only applies to the 99.9999% of content viewed by people worldwide that has scaling and compression?
  8. Goodness! We'll be talking about content next!! What has the state of the camera forums come to?!?!?! Just imagine what people could create if they buy cameras that have a thick and luscious image to begin with, AND ALSO learn to colour grade...
  9. Panasonic GH6

    They should just make a battery grip that contains an M.2 SSD slot that automagically connects to the camera and records the compressed RAW on that - bingo, "external" RAW. Or licence ProRes from Apple and offer those options. ...or just make it possible to select whatever bitrate and bit depth you want and turn off sharpening - that would do it for me.
  10. I'm also just looking at skin tones. I should probably try to zoom out a little, but it's interesting to see how we each perceive these things. As @TomTheDP said, Alexas are commonly a bit green. I was completely surprised when I heard this for the first time, because you never see it in the final footage, but apparently it's just a thing that everyone deals with.

I think that knowledge of colour grading is perhaps the biggest differentiator between how primarily amateur groups and primarily pro groups discuss image quality - the pros seem to view SOOC footage as a raw material, whereas amateurs discuss it like it's a final product (or only a LUT away from a final product). I saw someone on another forum comparing two RAW-shooting camera models from the same company, and their comment was that they really wanted to like the later model but there was a slight texture to the skin tones they didn't care for. The thing is that both cameras share the same sensor, the person who made the test used the same settings and just applied a technical LUT to get a straight comparison, and the shots were taken outdoors non-simultaneously, so small differences between them were inevitable. They were talking like the 12-bit RAW footage wasn't changeable, yet it is the most neutral, flexible codec available.

Even the way that cinematographers do latitude tests on cameras that shoot RAW indicates they're viewing the camera with a "shoot it so that after grading I get the best image" mindset, but that mindset is almost completely absent elsewhere, other than the odd cult who have sold all their possessions to this deity they don't understand called ETTR. It's a bizarre world where the leaders are running around screaming "shoot in LOG so your footage looks miserable SOOC, but let's not ever talk about colour grading - you should buy my LUT instead!" and no-one questions this. You'd imagine a counter-culture would have emerged by now, but unfortunately it seems that if anyone is rejecting this premise, they haven't done it by empowering themselves and learning about grading.
  11. I think you raise an interesting point about the pipeline. The video files that many cameras produce are only a pale impression of the capabilities of the sensor, and that is definitely the case when cameras have 8-bit low-bitrate codecs. Do you think this is also true for the RAW-shooting cameras we now have, such as the BM or ProRes RAW cameras? I ask because although I don't think anything should be missing with those setups, I wonder if there's something you're aware of that I'm not?

In terms of DR, I think that we are most certainly not there. I still think that there is benefit to having more DR than even an Alexa has, at least when shooting fast in uncontrolled circumstances. For example, I would like to have perfect exposure on a subject while also being able to have the sunset in the background, whereas (IIRC) even the widest-DR cameras still clip more of the sunset than you'd want if the subject is placed at the right IRE for skin tones. The Alexa (apart from being generally regarded as more capable than ARRI suggest) employs a simultaneously-combined dual-gain architecture built on 10-year-old sensor tech. The latest sensor tech has gotten a lot closer to that performance using a single-gain architecture, but if we took some of the zillions of pixels we now have and sacrificed them to implement that dual-gain architecture, we could easily leapfrog the Alexa's DR, which I think would create absolutely stunning images beyond what we have seen from current sensors in anything other than their sweet spots.

Going back to the destructive image pipeline that happens inside consumer cameras, it really makes me angry that in many cases people have bought a sensor with X performance, an image processor with Y performance, and an SD card writer with Z performance, but instead of getting the overall performance of the least capable component (the bottleneck), we get something like 10% of the bottleneck. Things like a 709 profile that deliberately clips the top few stops of DR instead of putting in an aggressive knee are ridiculous - that's simply an adjustment of the profile itself and requires no hardware changes at all. Then people start wanting to hack the camera, and the manufacturers respond by encrypting and otherwise preventing these alterations. In effect, you are paying extra money for each camera you buy in order for the manufacturer to prevent you from getting the full benefit of the product you are buying.
  12. Were there any hacks for the G series of cameras?
  13. @tupp You raise a number of excellent points, but have missed the point of the test. The overall context is that for a viewer, sitting at a common viewing distance, the difference won't be discernible. This is why the comparison is about perceptual resolution and not actual resolution.

Yedlin claims that the video will appear 1:1, which I took to mean that it wouldn't be a different size, and which you have taken to mean that every pixel on his computer will appear as a single pixel on your/my computer and will not have any impact on any of the surrounding pixels. Obviously this is false, as you have shown from your blown-up screen captures. This does not prove scaling though. As you showed, two viewers rendered different outputs, and I tried it in QuickTime and VLC and got two different results again.

Problem number one is that the viewing software is altering the image (or at least all but one of the players we tried). Problem number two is that we're both viewing the file from Yedlin's site, which is highly compressed. In fact, it is an h264 stream of 2.32GB, something like 4Mbps. The uncompressed file would have been 1192Mbps and in the order of 600GB, and not much smaller had he used lossless compression, so completely beyond any practical consideration. Assuming I've done my maths correctly, that's a compression ratio of something like 250:1 - a ratio at which you couldn't even hope that pixels would survive untouched (rough maths sketched below this post).

The reason I bring up these two points is that they will also be true for the consumption of any media by the viewer that the test is about. There's no point arguing that his test is invalid because it doesn't apply to someone watching an uncompressed video stream on a screen significantly larger than the THX and SMPTE recommendations suggest, because, frankly, who gives a toss about that person? I'm not that person, probably no-one else here is that person, and if you are that person, then good for you, but it's irrelevant.

You made a good point about 3CCD cameras, which I'd forgotten about, but even if you disagree about debayering and mismatched photosites and pixels, none of that stuff matters if the image is going to get compressed for digital distribution and then decoded by any number of decoders that will each generate a different pixel-to-pixel readout. Essentially you're arguing about how visible something is at the step before it gets put through a cheese-grater on its way to the people who actually watch the movies and pay for the whole thing.

In terms of why they make higher resolution cameras, there are two main reasons I can see. The first is that VFX folks want as much resolution as possible, as it helps keep things perceptually flawless after they mess with them. This is likely the primary reason that companies like ARRI are putting out higher resolution models. The second reason is that electronics companies are companies, and in a capitalist society companies exist to make money, and to do that you need to make people keep buying things, which is done through planned obsolescence and incremental improvements, such as getting everyone to buy 4K TVs, and then 4K cameras to go with those 4K TVs. This is likely the driver for all the camera manufacturers who also sell TVs, which is.... basically every consumer camera company. Not a whole lot of people buying a GH5 are doing VFX with it, although cropping in post is one relatively common exception to that.
So, although I disagree with you on some of the technical aspects along the way, the fact that his test isn't "1:1" in whatever ways you think it should be is irrelevant, because people watch things after compression, after being decoded by unknown algorithms. That's not even taking into account the image processing witchcraft like Smooth Motion, which invents entirely new frames that end up being half of what the viewer actually sees, or uncalibrated displays, etc. Yes, these things don't exist in theatres, but how many hours do you spend watching something in a theatre vs at home? The average person spends almost all their time watching on a TV at home, so the theatre percentage is pretty small.
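Here's the rough maths referred to above, as a minimal Python sketch. The resolution, frame rate, bit depth and runtime are my assumptions (1080p24, 8-bit RGB, roughly 67 minutes), not figures from Yedlin, but they land close to the numbers quoted in the post:

```python
# Back-of-envelope bandwidth maths for the streamed comparison video.
# All parameters below are assumptions for illustration only.

width, height, fps = 1920, 1080, 24
bits_per_pixel = 24                    # 8 bits per channel, RGB
duration_s = 67 * 60                   # assumed ~67 minute runtime

uncompressed_bps = width * height * fps * bits_per_pixel
uncompressed_gb = uncompressed_bps * duration_s / 8 / 1e9

delivered_gb = 2.32                    # size of the h264 file on the site
delivered_mbps = delivered_gb * 8e3 / duration_s
ratio = uncompressed_gb / delivered_gb

print(f"uncompressed: {uncompressed_bps / 1e6:.0f} Mbps, {uncompressed_gb:.0f} GB")
print(f"delivered:    {delivered_mbps:.1f} Mbps, compression ratio ~{ratio:.0f}:1")
```

With those assumptions it prints roughly 1194 Mbps / 600 GB uncompressed against ~4.6 Mbps delivered, i.e. a ratio in the region of 250:1.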
  14. My preference was A, C, E, D, B. The more of these that I watch the more I realise I'm looking at the colour, and in this test I didn't like the green reflection that wasn't there in real life. Of course, as they were all shot in very neutral codecs this should be editable in post relatively easily prior to the 709 conversion. But any of these cameras can create a great image, as Tom said.
  15. The D-Mount project

    Thanks! Those shots are the more formal ones - the real video is a much more casual thing, including my wife throwing seaweed at me and other amusements that I'm not allowed to post publicly. In that context those shots are quite stable! Compared to the action camera, a tripod would literally make the rig hundreds of times larger, and really defeat the "action" part of it 🙂
  16. If it's worth saying, it's worth repeating, right? Also, if it's worth saying, it's definitely worth repeating. I completely agree. To put things into perspective, here's a video from the GH1 that I saw shared recently. To my eyes, it looks better than almost everything I've seen posted in the last couple of years. 1080p camera, 1080p upload, from a camera that is so old it doesn't change hands much anymore, but when it does it can be had for about $100.
  17. The mods continue - putting a filter thread on the 15mm f8 lens... I bought a filter adapter to go from something larger down to the 52mm thread that I wanted, then applied PVA glue where I worked out that the lens and the filter thread adapter touch, and put it on the lens. I made sure it was flat and waited a long time for it to dry, to ensure it had dried all the way through and hardened properly. Just for fun I took the whole filter stack from the Micro and put it on here, so this is the lens with the glued-on adapter, a 52-58 adapter, a UV/IR filter, the Tiffen BPM 1/8, and the vND. I have a 52mm vND so I'll just put that on there, but I also bought a super-cheap 52mm diffusion filter from China, so I'll try that on there and see how it goes - it was under $10 so I can't really go wrong. I haven't shot anything with the lens yet, but it's on my list....
  18. The D-Mount project

    First colour video from the SJ4000 / zoom lens combo.... I pushed the colour, both towards warm/magenta and also in saturation, partly to experiment with a punchier look, but also to see how far I could push the image. This is the final look I applied, and this was the untouched SOOC footage - not a bad look, but not the one I was going for. Sunsets aren't green, after all! The only mod that I still want to make is to extend the lever on the focus ring to give finer control and more leverage (as some parts of the focus travel get a bit stiff). It's quite difficult to focus, even with the 4X digital zoom that can be used to 'punch in' before shooting, and I missed quite a few shots while filming this. Overall though, it's quite a capable package.
  19. Awesome... next steps are:
- figure out if there is a cheaper way to get that glass, perhaps in a consumer lens
- figure out if there is a different way to get the same optical recipe (for example, the Soviet lenses are famously replications of the Zeiss recipes that the Soviets took from Germany at the end of WWII)
- figure out what the image qualities are that you like from those lenses, and work out if there is a way to replicate them in other ways, like diffusion filters, streak filters, etc.
  20. It depends on the footage you're matching and what the differences are. It's one element of the image, but isn't the only thing, of course.
  21. I'm skeptical. Just because something is the best of the available options doesn't mean that the available options were the best ones that could have been created. During the last decade we've gotten a 16x increase in sensor resolution (8K vs 2K) combined with a radical price decrease (compare the launch price of the Canon R5 or the UMP 12K with the launch price of the original Alexa), and yet we haven't even matched the colour science or dynamic range. The fact that we have gotten radical "improvements" but still view the decade-old Alexa as having superior image quality means that the last decade was spent improving things that didn't matter, or at least weren't the most critical.

It's like if I cooked you a meal but you found that it tasted quite bad, and I said I was going to work on it. I come back a decade later, you taste the food and it still tastes bad, and you ask me "you said you were going to improve your cooking - what happened?" and I reply "I did improve my cooking - now, for the same budget, I can make a huge amount of food that tastes like that".

The edges you see when you zoom in are to do with the amount of sharpening, noise reduction, and compression being applied to the image, something Sony (prior to the A7S3) had a pretty poor track record with. The autofocus system of a sensor has nothing to do with compression artefacts.
  22. There are a number of blind camera tests around the place, and I find them useful for comparing your image preferences (instead of the prejudices we all have!), so I thought collecting them in a single thread might be useful.

To get philosophical for a second, I think that educating your eye is of paramount importance. It's easy to "train" your eye through the endless cycle of 1) hear a new camera is released, 2) read the specs and hear about the price, 3) build up a bunch of preconceived notions about how good the image will be, 4) see test footage, 5) mentally assume that the images you saw must fit with the positive impression that you created based solely on the specs and price, and 6) repeat for every new camera that is released. That's a great way to train yourself to think that over-sharpened rubbish looks "best".

The alternative to this is evaluating images based solely on the image, and going by feel rather than pixel-peeping based on spec/price. To this end, it's useful to view blind tests of cameras you can't afford, lenses you can't afford, and old cameras you turn your nose up at because they're not the latest specs. News flash: the Alexa Classic doesn't have the latest specs either, so how many cameras have you dismissed based on specs while making an exception for the Alexa all this time? Also, although you may find that you like a particular cine lens but can't afford it (or basically any cine lens for that matter), often the cine lenses have the same glass as lenses that cost a tenth or less of the cine version. Furthermore, you may be able to triangulate that you like lenses or a camera/codec combo that gives a particular look, and perhaps that look can be created by lighting differently, or using filters, or changing the focal lengths you use. Educating your eye can literally lead to you getting better images from what you have without purchasing anything.

To kick things off, here's Tom Antos' most recent test, with the Sony FX3, Sony FX6, BM Pocket Cinema Camera 6K Pro, RED Komodo, and Z-Cam E2 F6. Test footage: and the results and discussion:

Here are his tests from 2019 - BM Pocket 6K, Arri Alexa, RED Raven, Ursa Mini Pro. Test footage: and results and discussion:

Another blind test from TECH Rehab, comparing Sony F65, Sony F55, Arri Alexa, Kinefinity Mavo 6K, BM Ursa 4K, BMPCC 6K, BMPCC 4K. Test footage: and results:

Another test from Carls Cinema, comparing OG BMPCC 2K and BMPCC 4K:

Another test from Carls Cinema comparing OG BMPCC 2K to GH5:

A big shootout from @Mattias Burling comparing a bunch of cameras, but interestingly, also comparing different modes / resolutions of the cameras, and also paired with different lenses, because (hold the front page!) the camera isn't the only thing that creates the image. Shocking, I know.... I won't name the cameras here, as not even knowing which cameras are in there is part of the test. The test footage: And the results and discussion:

More camera tests:

Another great camera/lens combo test, this time from @John Brawley:

And a blind lens test:

If anyone can find the large blind test from 2014 (IIRC?) that included the GH4 as well as a bunch of cine cameras, it would be great to link to it here. I searched for it but all I could find was a few articles that included private Vimeo videos, so maybe it's been taken down? It was a very interesting test and definitely worth including.

If you know of more, please share! 🙂
  23. YouTube question

    Or at all. Telling truth to power / truth about power is regarded as an extreme act for good reason.
  24. I've found one of the really important things in matching two cameras is to use a colour checker (under identical lighting conditions) and use the Hue vs Hue, Hue vs Sat, and Hue vs Lum curves to match up the colour patches from the colour checker. If you haven't done that then it's worth a go, as often those curves take a match from being quite bad to really close.
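As a rough illustration of what that matching process is doing (not any grading tool's actual API, and with made-up patch readings), here's a small Python sketch that compares the same colour-checker patches from two cameras and prints the per-patch hue/sat/lum offsets - roughly what the Hue vs Hue, Hue vs Sat, and Hue vs Lum curves would need to dial in:

```python
# Sketch: given RGB readings of the same colour-checker patches from two
# cameras (hypothetical 0-1 values), print the hue/sat/lum offset per patch.

import colorsys

reference_patches = {            # camera A, the look being matched to
    "red":   (0.70, 0.22, 0.18),
    "green": (0.28, 0.58, 0.27),
    "blue":  (0.20, 0.25, 0.60),
    "skin":  (0.76, 0.58, 0.48),
}
target_patches = {               # camera B, the camera being matched
    "red":   (0.73, 0.20, 0.20),
    "green": (0.26, 0.60, 0.24),
    "blue":  (0.21, 0.24, 0.63),
    "skin":  (0.78, 0.55, 0.47),
}

for name, ref_rgb in reference_patches.items():
    h_ref, l_ref, s_ref = colorsys.rgb_to_hls(*ref_rgb)
    h_tgt, l_tgt, s_tgt = colorsys.rgb_to_hls(*target_patches[name])
    print(f"{name:6s} hue {360 * (h_ref - h_tgt):+6.1f} deg  "
          f"sat {s_ref - s_tgt:+.3f}  lum {l_ref - l_tgt:+.3f}")
```

The printed offsets indicate, patch by patch, which direction each curve needs to push that hue region on the camera being matched.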