Everything posted by kye
-
I know... I wrote that to try and cut through the continual "new camera" discussions on here so that people didn't think Levi was just another YouTuber who only talked about new cameras. It's us amateurs who are buying the new equipment all the time!!

In terms of pen and paper, I remember talking to an amateur screenwriter who wrote feature films once, and he said that the writer was the only person without a budget. He was referring to the idea that he can write space battles, car chases, or battle scenes with thousands of people just as easily as he can write two people talking in a room, but for every other person involved in making a film there are huge cost differences between filming those different types of scenes. IIRC that's also why "no budget" films still cost a surprisingly large amount of money - the cost of buying and developing film stock couldn't be lowered below a certain point. Even if you only shot a 1:1 shooting ratio, you would still have to buy and develop 90 minutes of film, which cost a lot of money. Nowadays, for that same amount of money you can buy a modest camera setup, have a small budget for expenses, pay everyone to make a film, and still have money left over. It wouldn't be a great film, but the idea that you can make a feature film for less than a "no budget" film is amusing from a language perspective.

It depends on the camera ML is running on, and what you're shooting. If you are shooting something without exotic camera movement, are using manual lenses (where the focus will stay put), and have the time on-set to fuss with focus etc, then ML can be great. It's when you need to work quickly, and need to monitor in real-time to change focus and framing on the fly, that some ML setups aren't so good. Andrew's post about the EOS-M shooting 5K shows what is possible, and yes the monitoring is abysmal, but if you were only shooting for a 1080 delivery then the camera is under hugely less strain and the monitoring and performance potentially improve significantly. If you have a set of lenses and are shooting drama or interviews where people are sitting around talking, then ML is a gift from the gods!
-
I was referring to taking already heavily compressed data like H264 and reconstituting it to make a signal that is as if it was never compressed in the first place, and how that isn't possible. To put the challenge in different words: how do you write the least data to a file such that you can decode it later to get as close to the original data as possible? That is the goal of every codec, and if you think about it, they're doing a spectacular job. No-one is complaining about the quality of 4:1 RAW, which is only a tiny bit different from full RAW, yet it uses only 25% of the data. Then if we look at the common H264 bitrates, they're mostly operating with less than 10% of the bitrate available. And we evaluate all of this via YouTube, which at 35Mbps is around 2% of the original image data.

Just think about how crazily good that is - the 4K stream from YouTube is decoded to make a signal 170X the size, and that's what's displayed on your monitor when you watch it. If most other things were made to work with only 0.6% of the original then they'd be so bad you'd have problems with things like recognition and being able to tell what's going on. 128kbps MP3, for example, is only 11:1 compression. If audio had 170:1 compression it would be 8kbps, which is in the lower end of VOIP bitrates, hardly what anyone would use for music. The challenge of taking the older cameras and trying to make them look like the P4K is attempting to do better than the people who made YT-quality video at 170:1 compression. That's what I mean. [Edit, original post had some wrong numbers, so I just fixed them]

You're not really a true technician unless you've fixed something by just hitting it on the side. I used to be an IT tech and sometimes you'd just give a computer a thump and that would fix it. We always used to call it "percussive maintenance".
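For anyone who wants to sanity-check those ratios, here's a rough back-of-the-envelope sketch. The exact figures depend entirely on which "uncompressed" baseline you assume (bit depth, chroma subsampling, frame rate), which is why the ratios quoted above and the ones below don't line up exactly:

```python
# Back-of-the-envelope compression ratios. The results depend entirely
# on which "uncompressed" baseline you assume, so treat them as ballpark.

def uncompressed_video_mbps(width, height, fps, bits_per_pixel):
    """Raw bitrate in Mbps for a given frame size, frame rate and bits per pixel."""
    return width * height * fps * bits_per_pixel / 1e6

# UHD at 10-bit 4:2:2 -> 20 bits per pixel (10 luma + 10 shared chroma)
uhd_25p = uncompressed_video_mbps(3840, 2160, 25, 20)   # ~4147 Mbps
uhd_60p = uncompressed_video_mbps(3840, 2160, 60, 20)   # ~9953 Mbps

youtube_4k = 35  # Mbps, roughly what YouTube serves for 4K

print(f"UHD 25p vs YouTube: {uhd_25p / youtube_4k:.0f}:1")   # ~118:1
print(f"UHD 60p vs YouTube: {uhd_60p / youtube_4k:.0f}:1")   # ~284:1

# Audio: CD quality is 44.1 kHz x 16-bit x 2 channels = 1411 kbps
cd_kbps = 44.1 * 16 * 2
print(f"128 kbps MP3: {cd_kbps / 128:.1f}:1")              # ~11:1
print(f"170:1 audio would be {cd_kbps / 170:.1f} kbps")    # ~8.3 kbps
```

Higher bit-depth or higher frame-rate baselines push the video ratios up towards and past 170:1, which is the point: the decoder is rebuilding an enormous amount of data from very little.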
-
Agreed. If there were a magical way to get compressed footage to look like RAW footage then we'd all be in heaven, and be rich from not having to buy external recorders and fast media! Much greater minds than ours have been contemplating such things for a long time. However, those much greater minds have probably been interested in recreating the least distorted reproductions, rather than the glorified mojo of previous-generation RAW cameras! We'll probably still fail, but it will be fun learning how not to do it.
-
Some types of compression can also cause halos around things - that's very common in poor quality JPEGs, for example. I'm not saying that it isn't using sharpening, but the halos won't be 100% caused by that. My understanding of sharpening is that it's basically the mathematical opposite of blurring, so in theory we should be able to counteract it somewhat. However, blurring won't counteract any compression artefacts, so we'll probably have limited success. Still, let's see how we get on when we can start prodding at the footage.
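For when we do get our hands on the clips, here's a minimal sketch of that idea, assuming OpenCV is available. The sharpen is a standard unsharp mask and the soften is just blending a blurred copy back in, which should tame sharpening halos somewhat but, as noted above, won't do anything for compression artefacts. The filename is a placeholder:

```python
import cv2

def unsharp_mask(img, radius=3, amount=1.0):
    """Standard sharpening: original + amount * (original - blurred)."""
    blurred = cv2.GaussianBlur(img, (0, 0), radius)
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)

def soften(img, radius=3, amount=0.5):
    """Rough counter-move: blend a blurred copy back into the image."""
    blurred = cv2.GaussianBlur(img, (0, 0), radius)
    return cv2.addWeighted(img, 1 - amount, blurred, amount, 0)

frame = cv2.imread("p4k_still.png")   # hypothetical still grabbed from the test footage
cv2.imwrite("p4k_still_softened.png", soften(frame, radius=2, amount=0.4))
```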
-
We talk about earning money making films, but there isn't a thread for it, so I thought I'd start one. Contribute anything you think is useful!

Levi Allen just posted his year-in-review video and it's got a bunch of content that might be valuable to people. For those of you who don't know Levi, he runs a one-person production company and a 100k-follower YT channel (that's taken 8 years to grow) that he hasn't monetised. In this video he talks a lot about building his business and some strategies he's implementing, how to balance passion projects with client work, and reflects on his journey so far. It's good content for anyone just starting their own production company or looking to do that. There's an index available so you can skip around easily, but he's a great communicator so it's a good listen.

One of the things I thought was interesting was that he hasn't bought new filming equipment in the last year (although he did spend over $10K on new editing gear!).
-
Absolutely, it would be a wonderful signal coming off the sensor. I was thinking more about what the output file would be: if you have the same bit-depth but greater DR than the existing gamma curve is designed to handle, then you need to compress more DR into the same number of bits, and you risk the banding problems that 8-bit Log can suffer from. But after doing a bit more reading I figure they could probably get away with it without huge issues, even if it was still 10-bit.
-
Actually, having done a bit more reading about how log profiles use bit-depth, 12-bit would be fine for 20 stops of DR. If it was only 10-bit then there would be fewer bits per stop than other 10-bit profiles, although they could probably re-arrange how many bits each stop gets and get away with it - the extreme highlights and shadows wouldn't need as many bits.
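As a rough illustration of that bits-per-stop budget (a toy sketch only - real log curves like V-Log or BMD Film deliberately allocate code values unevenly across the range rather than strictly per stop):

```python
# Code values available per stop for a purely logarithmic encoding.
# Real log profiles deviate from this, typically giving the extreme
# shadows and highlights fewer code values than the mid-tones.

def codes_per_stop(bit_depth, stops_of_dr):
    return 2 ** bit_depth / stops_of_dr

for bit_depth in (10, 12):
    for stops in (14, 20):
        print(f"{bit_depth}-bit over {stops} stops: "
              f"~{codes_per_stop(bit_depth, stops):.0f} code values per stop")

# 10-bit over 14 stops: ~73 per stop
# 10-bit over 20 stops: ~51 per stop  (tighter -> more banding risk after grading)
# 12-bit over 14 stops: ~293 per stop
# 12-bit over 20 stops: ~205 per stop (plenty of headroom)
```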
-
That's true, but it's always useful for those who can't afford to walk in the woods, or don't have time or the right shoes for it, to be able to go for a walk in the city and then fix it in post
-
Fstoppers moved to Puerto Rico. Have you had the desire to move?
kye replied to webrunner5's topic in Cameras
Sorry to hear things weren't going well for you. We all have times like that, and it's great that you actually did something about it, which is more than most do (or can manage to do) under those circumstances. If there's anything we can do to help, just ask! -
I'm also looking forward to what everyone else does with these clips. If we're going to unlock the right settings, it's more likely to be someone else that figures it out lol

So, to be clear, the plan is to shoot both RAW in max resolution and Prores 1080 on all cameras? That makes sense to me. Partly it's a good comparison between those modes for anyone that doesn't (yet) have the cameras and can't try it for themselves, but also all that discussion about v3 vs v4 colour science and how the colours are hard to match kind of makes me a little nervous, so also having the RAW without different things baked in would be good.

I agree, and that's the beauty of a well-executed camera test: you can do anything in post that you like. I'm sure we'll be feeding off this footage for some time to come, trying various things and seeing how they work.
-
Fstoppers moved to Puerto Rico. Have you had the desire to move?
kye replied to webrunner5's topic in Cameras
Very nice! We already know - she's becoming a rapper! Step One: get that million dollar contract. Step Two: we'll all be waiting for your hot track. -
20 stops would be just fantastic! What bit depths are they talking? No point having 20 stops of DR and then only having 10 or 12 bits - by the time you convert back to a standard gamma curve it would be banding central. Could you post a link to your source?
-
Indeed it is... good thing I never claimed that I can do it, only that I would try!

My suspicion is that colour matters, but that there are other elements to it as well. @webrunner5 has commented that it looks too clean, and I've heard many similar comments around the place too. To me, those comments seem similar to the comments about modern 4K cameras being too sharpened, so I have a hunch that the difference between sharpened 4K and P4K RAW is maybe similar to the difference between the P4K and the BMPCC. If you spend time looking at film stills they have beautiful colour (very high bit-depth!) and of course are uncompressed, but they're also quite soft in comparison to the sharpness of 4K, even if not in resolution. It is possible to reduce the sharpness of an image without reducing the resolution, and of course the 1080p RAW from the BMPCC will have less resolution than the P4K in 4K RAW too. I believe that the softening effect of vintage lenses on sharpened 4K footage is one of the reasons they're so popular on these forums - they help to give the look that many of us enjoy.

I'm thinking of things like blurring the footage, blurring the footage and blending it back into the original, downscaling and then upscaling the footage, as well as adjusting for colour. My plan is to colour match the footage as much as I can, then run it through as many different processing approaches as I can think of, put the results one after another, and post that to get people's impressions of what works and what doesn't, then try different things based on feedback. It will be interesting to see what the results are. In the end I'm hoping that even if we don't match the 'look' of the footage from the older cameras, we manage to find some things that get part of the way there, and then we can explore what those techniques look like on other cameras like the GH5. It would be great to get to a point where we know the settings to make H264 look a lot more like these classic cameras. Plus, if it's a combination of various things, having it in a PowerGrade like Juan Melara did to replicate the LUTs will mean that we can refine or disable each adjustment to taste.

@graphicnatured - just a thought, is it worth shooting the P4K in 1080 Prores as well? If we could make P4K 1080 Prores HQ have the classic look then I'm sure that knowledge would be of real interest to a lot of people here. 1080 RAW would be severely windowed so difficult to match framing on, and what we learn from matching 4K RAW and 1080 Prores can probably be combined for processing 1080 RAW if anyone decides to shoot in that mode to extend their lenses or get 120fps.
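To make those processing ideas a bit more concrete, here's a minimal sketch of the sort of pipeline I have in mind, assuming OpenCV. The filenames, blur radius, blend amount, scale factor and frame rate are all placeholders that would need tuning against the actual test clips (and in practice I'd probably build this as a PowerGrade in Resolve rather than code):

```python
import cv2

def soften_pass(frame, blur_sigma=1.5, blend=0.5):
    """Blend a blurred copy into the frame to knock back digital sharpening."""
    blurred = cv2.GaussianBlur(frame, (0, 0), blur_sigma)
    return cv2.addWeighted(frame, 1 - blend, blurred, blend, 0)

def down_up_pass(frame, scale=0.5):
    """Downscale then upscale to discard fine detail without changing frame size."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)

# Hypothetical filenames and frame rate - process the colour-matched P4K clip
cap = cv2.VideoCapture("p4k_colour_matched.mov")
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("p4k_softened.avi", cv2.VideoWriter_fourcc(*"MJPG"), 25.0, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(down_up_pass(soften_pass(frame)))

cap.release()
out.release()
```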
-
It would be more sensational, but I'll leave that to you; I'm more interested in facts. In terms of how I interpret the data on those graphs, I do it by looking at the datapoints that I quoted. If you're not sure how to read graphs properly then there are many good online courses from reputable universities available. If you disagree with their methodology or results, then I look forward to reading your published and peer-reviewed paper on the subject. Or to you actually making some kind of sensible criticism. Zooming in to a low-resolution graph and assessing a data point that's less than one pixel wide is just stupid.

I blocked you previously because of your disgusting personal attack on someone else here on this forum, but I have been reading your comments because I thought that, despite having basically no interpersonal skills (or no compassion), you had some technical knowledge to contribute. But if you can't even read a graph properly, or more likely you're deliberately misreading it because you value being right more than the facts, then I'm not sure why anyone would trust anything you contribute.

I looked for that mentioned above but couldn't find it. Maybe I missed it? 4K60 10-bit with HLG would really be something if it could do it!
-
+1 - Just use Resolve's integrated conversions. @blafarm @famoss

In fact, there's a big difference between using the conversions in Resolve and a LUT. If you use a LUT and the conversion clips any parts of the signal (highlights or shadows), then they're clipped forever and nothing you do after the LUT can get them back. If you use Resolve's conversions (either in the clip properties or via the Colour Space Transform plugin), the clipped values are retained within Resolve (as super-whites or super-blacks), and if you adjust the image after the conversion you can bring them back into the normal range without damaging them. The internet talks a lot about LUTs, but that's mainly because the people doing all the talking are selling... LUTs.

I don't know how the other NLEs work, but I'd imagine they work similarly. If you have to use a LUT then you can lower the contrast before the LUT to get the output from the LUT within range, but this defeats the purpose of using a LUT in the first place (because your inputs to the LUT no longer match how the camera encoded them) and you may as well just apply contrast or curves to get the look you want and ignore the LUT.
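Here's a toy illustration of why the order matters - this is just maths on a few sample values, not how Resolve is implemented internally, and the gamma power is a stand-in for whatever transform a real LUT would apply:

```python
import numpy as np

signal = np.array([0.2, 0.8, 1.3])   # 1.3 represents a "super-white" highlight

# LUT path: a LUT can only look up inputs within its defined range,
# so anything outside [0, 1] gets clamped first and that detail is gone.
lut_in = np.clip(signal, 0.0, 1.0)          # 1.3 -> 1.0
lut_out = lut_in ** (1 / 2.4)               # stand-in for the LUT's transform
after_lut = np.clip(lut_out * 0.8, 0, 1)    # pulling exposure down later can't separate it again

# Float path: the conversion is just maths, so out-of-range values ride along
# and a later exposure adjustment brings them back into the visible range.
float_out = signal ** (1 / 2.4)             # 1.3 stays distinct from 1.0
after_float = np.clip(float_out * 0.8, 0, 1)

print(after_lut)    # highlight lands at the same value a clipped 1.0 would
print(after_float)  # highlight keeps its separation above where 1.0 lands
```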
-
I totally agree on calling out BS, so here goes - your post is BS. You're right that it's optimised for lower bitrates, but the advantages remain at higher ones.

This paper shows an objective (peak signal-to-noise ratio) and subjective (blind test) comparison of the two: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7254155 The graphs in the paper test UHD 60Hz at bitrates up to 38Mbps H.264 and 18Mbps HEVC (broadcast bitrates), and their conclusions back this up.

And what about higher bitrates? This paper shows the relative quality of the two at higher bitrates - up to about 250Mbps: https://pdfs.semanticscholar.org/b0bc/9342d1031250db0c7e2aabd2eeed51beef2e.pdf Figure 3 shows that the 50% saving seems to extend up to around 6Mbps H264 (where the equivalent HEVC is about 3Mbps), but after that point there is a knee in the HEVC curve. Figure 4 shows that the HEVC bitrate required to match H264 goes above 50%, up to the point where 40Mbps H264 requires about 30Mbps HEVC. Figure 5 shows that the HEVC bitrate required to match H264 goes back closer to 50%, where 250Mbps H264 requires about 120Mbps HEVC, and for another clip 200Mbps H264 requires about 110Mbps HEVC.

Fact-checking the internet is fun, but it helps if you actually know the facts when you do it.

Yeah, some higher bitrates would have been nice. I came late to the GH5 party so I don't know what the original specs were or what people thought of them, but they did introduce the 400Mbps and 5K Open Gate modes in firmware updates, so maybe there will be updates for this too? I guess time will tell.
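For anyone who'd rather test this themselves than argue about graphs, here's a rough sketch of a DIY comparison using ffmpeg's x264/x265 encoders and its PSNR filter. It assumes ffmpeg is installed, "reference.mov" is a placeholder for a short high-quality source clip, and the bitrates are just examples of the 2:1 claim:

```python
import subprocess

REF = "reference.mov"   # placeholder: a short, high-quality source clip

def encode(codec, bitrate, out_path):
    """Encode the reference clip at a fixed bitrate with the given encoder."""
    subprocess.run(["ffmpeg", "-y", "-i", REF, "-c:v", codec,
                    "-b:v", bitrate, "-an", out_path], check=True)

def psnr_vs_reference(distorted):
    """Log ffmpeg's PSNR of an encode measured against the reference clip."""
    subprocess.run(["ffmpeg", "-i", distorted, "-i", REF,
                    "-lavfi", "psnr", "-f", "null", "-"], check=True)

encode("libx264", "40M", "h264_40M.mp4")
encode("libx265", "20M", "hevc_20M.mp4")   # half the bitrate, per the 50% claim

psnr_vs_reference("h264_40M.mp4")
psnr_vs_reference("hevc_20M.mp4")
```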
-
The BMPCC4K (Pocket4K / P4K) is a wonderful camera, but some say it looks too clean, or doesn't have the classic look of the previous BMPCC (OG) or BMMCC (MC) cameras. Considering that the P4K captures either higher quality (ie, more pixels) or comparable quality that is simply different (bit-depth and colour science), I think we should be able to process the P4K in post to match the classic look (or looks) of the older models. Even if we can't, I'm sure there are things we can learn in the attempt.

Thanks in advance to @graphicnatured who has volunteered to shoot it. I've shot A/B camera tests before and they're a lot more work than they seem like they should be. Assuming we learn anything, we all owe him a drink - he'll need it!
-
Great stuff! I'll make a new thread so we stop clogging up this one Edit: done.
-
I think that's the idea. If I understand it right, some pixels could be at ISO 100 and others at ISO 25000, so your DR would go through the roof. However, if you had a part of the image that was very dark and those pixels had high ISO, then they'd still be noisy.
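Purely speculative, but here's a tiny sketch of how interleaved per-pixel gain might be merged - this isn't based on anything the manufacturers have published, just the usual dual-gain HDR idea, and the full-well value and gain ratio are made up:

```python
import numpy as np

FULL_WELL = 4095      # made-up 12-bit readout ceiling
GAIN_RATIO = 8.0      # made-up ratio between the 'high ISO' and 'low ISO' rows

def merge_interleaved(raw):
    """Toy merge for a sensor where even rows read at low gain and odd rows at
    high gain (assumes an even number of rows). Where a high-gain row has
    clipped, borrow the low-gain row above; elsewhere keep the cleaner
    high-gain value, normalised back to the low-gain scale."""
    out = raw.astype(np.float64)
    out[1::2] /= GAIN_RATIO                      # normalise high-gain rows
    clipped = raw[1::2] >= FULL_WELL
    out[1::2][clipped] = raw[0::2][clipped]      # fall back to low-gain rows
    return out
```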
-
Agree. They said that it takes two people to set it up but then a single person can operate it, which seems ideal. If you were any good as a wildlife photographer then I'm sure you could put that to good use. Anyone who is travelling the world may very well be going with another person anyway, so that's not as big a deal as it sounds. It's also not as much of an investment as it sounds considering that the image quality is very high and the costs of travelling to the exotic locations, getting guides, etc would be considerable, especially over a multiple year timeframe. I've been contemplating a trip to Antarctica as part of my bucket list and considering the costs involved I'd definitely be taking some serious camera equipment, especially renting some serious glass. Not suggesting that I'd rent that one (!) but those long lenses really are the tool for the job. Personally though, I'd make sure I took multiple camera bodies as a backup, and having two bodies means that you can always have a short lens on one and a longer one on the other, like the pro event stills shooters do. With my GH5 I can also take advantage of the crop factor to turn more reasonable lenses into hugely long telephoto lenses too, saving considerable weight as well!
-
One of the articles said that fungus will grow if there's humidity, the right temperature range, and a source of food. So not only do the spores get in between the layers of glass inside the lens, but particles of food do as well! No more of those "throw flour everywhere in slow-motion" shoots, people!!
-
Fstoppers moved to Puerto Rico. Have you had the desire to move?
kye replied to webrunner5's topic in Cameras
True. I'm not sure why you're pointing that out, but ok.

70m2 is tiny! I lived in an 80m2 two-bedroom (it was built as a granny flat by a friend's parents to retire in, but they moved out because it was too small). It was a little larger than it absolutely had to be, with a small office and an ensuite in addition to the bathroom/laundry, but it was smaller than the two-bedroom unit I used to live in. I can't imagine that there are enough 1 or 2 bedroom places under that size to offset the staggering number of 3 or 4 bedroom houses that fill the suburban areas of every city and town here in Australia. It would be interesting to see some stats on existing dwellings, but I can't imagine the average is that small.

That sounds like a pretty nice rotation! I'm guessing that you have ties to Poland? It's not normally on many people's "must-see" lists -
It's ok... I know we all love to talk about the crop, but don't worry about not being able to talk about it any more - way before Canon does no-crop 4K we'll be talking about the crop in 8K!

I agree, when you can't take your lenses with you then all bets are off and everyone is "in the market" for a new system again. It's a pretty significant point for brands to try and capture customers and lock them into their ecosystem. Maybe it's one of those "can you afford to do it? true, but can you afford not to do it?" type things for Panasonic to release their own offerings.

As a happy GH5 owner I definitely agree. It will be interesting to see what the GH6 offers. The GH5 has few flaws, but if they offered 4K60 10-bit with HLG and H265, all internally, that would be a decent step up. Also, if they offered a card slot that could do higher speeds in UHS-I then that would be great too - being able to use Sandisk 90MB/s cards instead of being forced to buy UHS-II cards would be great. And of course, if they offered the ability to render Prores proxies to one card and H265 to the other, that would be wonderful. Or RAW!! *ahem*

I've watched the release of all the FF mirrorless cameras with underwhelm. They seem to be chasing the photographer market and mostly offer only scraps of improvement for video users, at the cost of buying extortionately priced lenses. I've probably lost all touch with the stills photography market, but there doesn't seem to be anything really that interesting about these cameras from a stills perspective. If you had a 5DIII then I'm not sure why you're paying thousands and thousands...

The earlier comments from Panasonic indicated that video was staying in MFT for now, so that makes sense. Of course, they might get their system established, get some more lenses sorted out, and then start cramming video features into the FF range - we'll see. Out of a choice between an MFT system limited to 6K sensors that's been around for ages with all kinds of strange glass, and a brand-new FF system with an 8K sensor and completely new glass or high-end Leica glass, which system do you think they're going to introduce 8K video into?
-
The Canon 50-1000 lens: The best way to make an FS7 look small, and to make your tripod look like a spaceship!
-
I like the Lenses sub-forum idea. So many lenses, so little time!