Everything posted by kye
-
I think it's simply a matter of who can see the differences and who can't. When I first started out in video I couldn't tell the difference between 24p and 60p video. Not even a little. Now it's 6 years later and I can even tell the difference between 30p and 24p, and I REALLY don't like 30p. There are enormous differences in what people can and cannot see in images. Lots of things are debated: motion cadence, 14-bit vs 12-bit vs 10-bit RAW, 24p vs 30p vs 60p, shutter angles, Sony sensors vs others, CMOS vs CCD. I suspect much of the debate is that people simply can't see the differences, or can and just have different tastes.
-
You can actually adjust the WB and exposure of LOG images just as if they were RAW if you have the right colour management setup. It's complicated, but there is a lot of good info out there if you're curious.
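A rough illustration of why this works: a log encoding turns multiplication in linear light (which is what exposure and per-channel white balance gains are) into simple offsets. This is a minimal sketch using a hypothetical pure-log curve, not any real camera's transfer function (real curves like V-Log or LogC3 add a linear toe near black, but the principle holds in the log segment):

```python
import math

# Hypothetical pure-log encode/decode, for illustration only.
def encode(linear):
    return math.log2(linear)

def decode(code):
    return 2.0 ** code

mid_grey = 0.18  # mid grey in linear light

# Exposure is a multiply in linear light...
one_stop_up_linear = encode(mid_grey * 2.0)

# ...which becomes a constant offset in the log-encoded image.
one_stop_up_log = encode(mid_grey) + 1.0

print(abs(one_stop_up_linear - one_stop_up_log) < 1e-12)  # prints True
```

The same logic applies per channel for white balance: the R/G/B gains a RAW developer would apply become per-channel offsets in log space, which is why a properly colour-managed pipeline can push WB and exposure around so cleanly on LOG footage.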
-
There are no definitions of looks. You can't assess whether something has the medium format look with a checklist. Ask different people what the look is and you'll get different answers, because people notice different things. There are commonalities, sure, but it's not a precise thing.

Also, not all lenses have the same character. Your Noctilux 50mm F1.0 might have completely different optical aberrations than the average vintage MF lens, so the feel of it would be very different.

It's like cooking. If two people make cakes with the same ratios of flour, water, sugar and eggs, and then each adds "flavouring", will they taste the same? Of course not. The "flavouring" matters, and can vary hugely.

Imagine comparing 8mm film and iPhone 4 video. We could go through every category of image assessment, rate them, and maybe conclude they both had video quality at 5/10. Do they look the same? Of course not, because the individual characteristics that make up the "8mm look" and the "iPhone 4 look" are very different, despite the fact that they've both got a similar amount of imperfections / character / aberrations / etc.

It's like making a horror film vs a rom-com. In the horror film you don't just use "horror lenses" or "horror angles" or "horror lighting" or "horror music" or "horror dialogue" or "horror sound design" or "horror colour grading" etc. The horror in the film comes from using all of them. Hopefully the rom-com uses completely different elements in all departments too. The "look" or "feel" of the final film comes from the combination of many subtle elements, and the same goes for images.

People who are into lenses look at sample images and can read them like a book. Some can even tell what optical formula a lens uses from a single image. The clues are very subtle, but they're all there.
-
Why?
-
There is absolutely a difference of looks between the formats, but it doesn't mean lens equivalency is false. Lens equivalency says that "all else being equal, a 28/2.8 will look the same on FF as a 14/1.4 on MFT" but the thing is, actually making a 28mm F2.8 lens and a 14mm F1.4 lens would end up with subtle differences in how you would do that. The "look" is really a combination of the subtle differences in lens design. The MF look is probably just as much an artefact of history and would incorporate the lens design quirks of the time. A modern MF camera with optically pristine lenses wouldn't have as much of the look as an MF film camera with vintage MF glass. A FF camera with a super-fast lens that has the same design flaws as the common MF lenses would have a lot of the MF look. Lenses aren't perfect, and much of the "look" is due to the imperfections. Reducing the discussion down to FOV and DOF is throwing the baby out with the bathwater.
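The arithmetic behind that equivalency claim is just scaling by the crop factor: multiply both the focal length and the f-number by it to match field of view and depth of field. A quick sketch (assuming MFT's 2x crop factor):

```python
# Lens equivalency: scale focal length and f-number by the crop factor
# to find the full-frame lens that frames and renders DOF the same,
# all else being equal.
def equivalent(focal_mm, f_number, crop_factor):
    return focal_mm * crop_factor, f_number * crop_factor

# A 14mm f/1.4 on MFT (2x crop) is equivalent to a 28mm f/2.8 on FF.
print(equivalent(14, 1.4, 2.0))  # (28.0, 2.8)
```

Note this says nothing about how each lens actually renders; as above, the design quirks needed to build each lens are where the "look" comes from.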
-
My vague memory was that the GH2 image was more contrasty and had more edge, more of a look to it. From that perspective I can see why someone might prefer it, especially if they had something in their mind about the vibe of the footage and that look was better suited.

My experience of the blind tests is that it's all about colour for me, unless there is something obviously wrong with one of the cameras, like the codec breaking. I also don't care about resolution beyond 1080p because I find 4K etc too sharp unless something has been done to tame it, so in these tests I'd actually have a slight preference for lower resolution cameras, but ultimately the colour wins out, and that's why I pick the most expensive ones. I think that's because I know you can make an image look less nice, but making it look nicer is virtually impossible.

Perhaps the only exception to picking the most expensive cameras was the test Tom Antos did with an Alexa, some BM cameras and others, where I rated the Alexa lower, but that was because it was massively green for some reason, so perhaps something went wrong in the test. I'm not critical of Tom though; doing your own tests is completely unforgiving and it's easy to miss something. It's also not the same as real shooting, so shooting a lot doesn't prepare you for it either.

In the blind tests I must admit that I have really enjoyed the image from the modern BM cameras (P6K, UMP12K and newer), and because this was done blind I know I actually do like them.

The differences in the blind tests are often much smaller than when just looking at footage. I suspect it's partly prejudice, but mostly that when people have access to an Alexa they usually know what they're doing: they use great lenses and lighting and grade the images really well. Compare a test done by 10 professionals in a studio with $10K of lighting against one done by some guy in his garage on the weekend, and of course you're going to prefer the Alexa! That reminds me of this test from a long time ago which has many of the world's most sought-after lenses, but at 54:40 it has the brilliantly named Dog Schidt lens, which is a Helios 58mm with the coatings removed so it flares a lot. The frames where it's stopped down to F4 (55:32) and without a light creating heaps of flare show that it's actually a very nice looking lens, and help you 'calibrate' yourself to the setup they have for the test - very high quality images indeed.
-
I did that test, blind, scoring and taking notes and reviewed my answers. Then I looked up which was which. Then I looked up what each of them cost. Then I cried. I wish there was some kind of prize for being able to sort them in descending order of price, blind, but no reward came. Sadly, I've done that more than once in blind tests.
-
This is why I have emphasised colour grading to folks. Over. and. over. again. lol. I know you finish your images in post and don't expect the camera to create completely finished images, so you're one of the few who understands that a file on the card isn't a finished image, but there aren't that many of us in amateur circles. It really goes to show how ridiculous it is when people are nit-picking straight 709 conversions, as if this is what matters - as if anyone professional would ever use that for literally anything. Even the BTS would get a LUT or basic 5-minute look applied over it. For most high-end films and TV shows, the final grade is more different to a straight 709 conversion than the differences between the 709 conversions of completely different brands of cameras. Not at all... with colour if it looks good, then it is good. The rest is preference and the creative vision for the project.
-
It might be. The only way to know would be to get your hands on the files yourself, or to have a professional colourist weigh in (which I have suggested.... :) ) I don't know, you might be right, but half of what you say makes little to no logical sense. But, people don't make sense, so that's hardly a good predictor.

The number of nodes in a node graph is a bit of a red herring really:
- Pros often only have half a dozen nodes to start off with.
- Huge node trees aren't more complex than simple ones, they just do one operation per node. If you tweak each dial in Lightroom then you're making 15-20 adjustments, so it's not like the pros necessarily make more adjustments.
- Spending $200 extra to have LogC, still needing to do significant colour grading (which Alexas and V-Log cameras alike require), but not wanting a node with a CST in it makes very little sense. It's like saying no to climbing Everest because you can't be bothered putting your socks on.

I really only see two situations where it would make sense. The first is where you like GH7 LogC + ARRI LUT a lot better as a starting point for the grade than GH7 V-Log + CST + ARRI LUT. The second is where you want to match it to an Alexa and the LogC gets you a closer starting position.
-
-
I think the LUT bros are going to lose market share to the Film Look Creator when Resolve 19 comes out of beta, but realistically there will probably be so much market growth with new video creators that their sales might still rise in absolute terms. I'm wondering how much more we'll hear about the GH7 LogC. It's early so people are still finding out and maybe there will be all this information and body of knowledge that gradually makes it into the non-industry / YT / online space, but I also wonder if "GH7 LogC doesn't match Alexa" will be the last we hear from it and it just disappears. ARRI have been talking about the "workflow" benefits, and the ARRI guy said that it allows people to put LogC footage into the NLE and then grade in the log space and then convert to 709 at the end, instead of starting with 709 footage and grading that. When I heard that I was just like "huh?" because people buying flagship cameras haven't done that in a decade, and even colourists are gradually moving from grading in LogC to ACES or Davinci Intermediate. Maybe I'm missing something incredible, but if so, no-one has said anything yet, and I subscribe to the right kinds of places to hear it...
-
I've seen the insides of a significant number of offices, and I can assure you, useless people who can survive in the corporate environment can easily do it without creating any tangible outputs at all. It's incredibly difficult to actually deliver IT changes in a large enterprise IT environment, so only the most switched on and talented people are able to do it. Saying that people are successfully making changes to justify their useless jobs is kind of like saying that there are all these useless film-makers making feature films that get cinema releases just to justify themselves.
-
That's the US... so a budget of 30K per cam minus 25K for insurance coverage leaves 5K...
-
I'd suggest not getting too far into the weeds with this - people get all funny about things like this when in reality they don't have much significance. Start with a goal, such as the ability to shoot some situation or other and have the final graded results be of a certain quality, and then work out what is required for that. Others might disagree, but I don't think there's a single situation where the difference between uncompressed RAW and 3:1 compressed RAW makes any visible difference to the final edit, let alone what survives once it's compressed to the deliverable. Doubly so if you're delivering to a streaming service that will compress the living daylights out of the file.
-
...a skintone comparison it sure ain't! But if you know what you're looking at then other things can be inferred.
-
THIS. They're constantly seeking to make the site/app better. and "better" means more profitable. Absolutely. Changes are to bring something of value. Value. Shareholder... value. Maximise shareholder value. .... remember, if you're not paying, you're the product!
-
I have a vague recollection that a recent camera allowed multiple framing guides at the same time so you could put up the vertical and horizontal boxes at the same time on the monitor. Seems like a good idea, but can't remember where I heard it.
-
They were shared privately, but there will be a YT video, so will share that when it surfaces. The images I saw were partly graded so were more a work in progress sort of thing and most likely will get changed before publishing. I haven't seen comparisons of GH7 V-Log capture -> CST to LogC vs GH7 LogC capture, which would be the comparison you'd want to see before making a purchase decision. It might be that the GH7 LogC doesn't match Alexa colours, but it might still be better than capturing in V-Log and grading from there. I think it's a real pity that ARRI didn't go absolutely nuts and profile the sensor and then have a 64x64x64 LUT that matches it to an Alexa within the GH7 DR range. It's not like the GH7 is going to cannibalise Alexa sales....
-
If you shoot for social media then you might need to publish in vertical, square, and landscape, so open gate means you don't have to film the same thing three times. Also, anamorphic..... Also also, GH5 from 2017 had it, so "recent" is a relative term. Also also also, film had it from before most of us were born, so "recent" might not be the right word....
-
Probably not. But you probably don't want uncompressed RAW anyway, because the file sizes are astronomical. There are compression schemes which come very close to being visually lossless while saving a significant amount of storage. Depending on your needs, you're likely willing to sacrifice a little image quality for space savings - for example, accepting a 1% loss in quality for a 50%+ reduction in file size. BRAW only offers compression ratios between 3:1 and 12:1, even on their top cameras, so the artefacts at those ratios can't be that bad, and you can apply a lot of compression before people even notice.
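To put those ratios in perspective, here's a hypothetical back-of-envelope data-rate calculation. The 12-bit depth, 4K DCI resolution and 24 fps are assumptions for illustration; real cameras vary and raw containers add packing and overhead:

```python
# Rough data rate for raw video in GB per minute of recording.
def raw_rate_gb_per_min(width, height, bit_depth, fps, ratio=1.0):
    bytes_per_frame = width * height * bit_depth / 8  # one sample per photosite
    bytes_per_sec = bytes_per_frame * fps / ratio     # apply compression ratio
    return bytes_per_sec * 60 / 1e9

# 4K DCI, 12-bit, 24 fps:
uncompressed = raw_rate_gb_per_min(4096, 2160, 12, 24)            # ~19.1 GB/min
compressed_3 = raw_rate_gb_per_min(4096, 2160, 12, 24, ratio=3)   # ~6.4 GB/min
compressed_12 = raw_rate_gb_per_min(4096, 2160, 12, 24, ratio=12) # ~1.6 GB/min

print(round(uncompressed, 1), round(compressed_3, 1), round(compressed_12, 1))
```

So even the mildest 3:1 setting saves roughly two-thirds of your storage, which is why uncompressed raw is such a hard sell when the visual difference is marginal at best.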
-
YT keeps changing things; maybe this is the first time they've done something you didn't like, or maybe you never noticed before? They've made two changes in the past that fundamentally changed how I use the site, which annoyed me no end, but it is what it is.

When will they stop? Never. This is because:
- If you don't change you die. If you don't believe me then feel free to make a post about it on your MySpace page.
- Things are improving, for the most part, and A/B testing is how they work out what to do, which is the scientific method that built civilisation. If you think things don't get better then fire up Windows 1.0 and your old Nokia. I'll see you in a week once you remember how to manually set up the TCP/IP stack to get network connectivity working.
- Improvements are made incrementally without people noticing. If Facebook or Google released a whole new version every few years it would confuse the absolute crap out of everyone, but when they trickle the changes through in tiny little bits then no training is needed and mostly people adapt pretty smoothly. For example, Amazon makes changes to their live website 2-3 times per day on average.
-
I've seen some early ARRI vs GH7 LogC images, and let's say they're.... not similar. For anyone hoping to get a pocket ARRI, lower your expectations. .....and go back to working on your colour grading.
-
wrong thread. dammit.