Everything posted by kye

  1. Good points. I did a bunch of testing on the overall image quality of various codecs and bitrates in a separate thread some time ago. Here are the results I got for 1080p: [results table] and graphed: [graph]. Prores and h264 from Resolve are in the key but not on the graph because their quality is poor and falls below the graphed area. Resolve is unfortunately known for not having great encoders. The rest of the thread is here: [thread link]. Results were similar for UHD, so not worth repeating.

     I do wonder about the perceptual / aesthetic aspects of the errors that each codec generates. My perception is that Prores always looks much better than the h26x codecs, and I wonder if that's down to the nature of the error. For example, if you take an image and blur it, you generate a rather significant error relative to the source, but one that is probably quite benign aesthetically; whereas if you took vertical stripes 10px wide and increased their brightness to produce the same total error across the image, it would be almost unwatchable. A similar thing happens when comparing random noise with fixed-pattern noise, so the nature of the error is of significant aesthetic concern. Of course, it could also be that Prores tends to look nicer due to its higher bitrates, its tendency to receive less sharpening prior to compression, 422 instead of 420 chroma subsampling, etc.

     I also agree that 100Mbps is sufficient most of the time, but the problem is that when you buy a camera costing multiple thousands of dollars, you are likely to hit a situation where it isn't sufficient at some point during your ownership. For every person who buys it and never hits that situation (always shooting in controlled conditions, for example), there will be another who buys it for outdoor adventure and films trees in the wind, rain, and falling snow, all hand-held, and runs into those problems on a weekly basis. It's like putting the tyres from a Corolla on a Ferrari and not being able to change them - a huge bottleneck for the output of the camera.
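     To illustrate the equal-error point numerically, here's a minimal sketch (Python with numpy/scipy, a random frame standing in for real footage - purely illustrative, not from my original tests):

     ```python
     import numpy as np
     from scipy.ndimage import gaussian_filter

     rng = np.random.default_rng(0)
     img = rng.random((512, 512))  # stand-in for a real frame, values 0..1

     # Error type 1: blur (aesthetically benign)
     blurred = gaussian_filter(img, sigma=2)
     mse_blur = np.mean((img - blurred) ** 2)

     # Error type 2: brighten/darken alternating 10px vertical stripes,
     # scaled so the mean squared error matches the blur
     stripe = np.where((np.arange(512) // 10) % 2 == 0, 1.0, -1.0)
     striped = np.clip(img + np.sqrt(mse_blur) * stripe[None, :], 0, 1)
     mse_stripe = np.mean((img - striped) ** 2)

     print(f"blur MSE:   {mse_blur:.6f}")
     print(f"stripe MSE: {mse_stripe:.6f}")
     ```

     The two MSEs come out near-identical (clipping nudges the stripe figure slightly), yet one image just looks soft while the other is unwatchable.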
  2. I own the BMMCC, which is tiny but becomes larger than a 1DX when you put a monitor on it, so I bought the BMPCC OG just because of the overall size. The GH5 is great in that regard because of the EVF, which adds little size but makes it usable in almost any conditions, unlike the BMPCC. The 5K mode on the GH5 is also dramatically less sharpened, so it's kind of a "raw-like full sensor readout" mode. THIS. Can you imagine if it leapfrogged the pack and ended up with Alexa-like DR? Now that would be a headline grabber and would put MFT back on the map for sure.
  3. I agree with your outline of why similar colours are grouped together, but I wonder if it's a function of bitrate. What I mean is that maybe the encoder would like to leave the colours separated, but due to bitrate limitations it has to sacrifice where it can do the least damage, and due to the 8-bit + Log combination these places are the least-worst - doing it any other way would simply have been worse again. One of the things I find fascinating about modern cameras is the atrocious bitrates they employ. 1080p Prores HQ was ~180Mbps, but the "standard" for 4K was only 100Mbps - even on multi-thousand-dollar flagship cameras like the A7S2. 100Mbps for 4K is laughable but seemed to go unquestioned. The bitrate for Prores is constant per pixel, so the 4K bitrate is 4x the 1080p bitrate, at about 700Mbps. The H26x series are designed for broadcast rather than acquisition, so unfortunately they don't really keep improving in quality at bitrates that high, but they certainly do a nice job if you push them to give the highest quality they can muster.
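     The per-pixel scaling is easy to sanity-check; a trivial sketch (the figures are approximate 24p rates from memory, not spec-sheet values):

     ```python
     # ProRes bitrate scales roughly linearly with pixel count.
     prores_hq_1080p = 180                  # Mbps, approx. 1920x1080 @ 24p
     scale = (3840 * 2160) / (1920 * 1080)  # = 4.0
     prores_hq_uhd = prores_hq_1080p * scale

     print(f"UHD ProRes HQ: ~{prores_hq_uhd:.0f} Mbps")
     print(f"vs the 100 Mbps 4K 'standard': {prores_hq_uhd / 100:.1f}x more data")
     ```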
  4. @Parker Nice honest channel launch - subscribed! A request if I may... or two... 🙂

     1) Please talk about the most advanced topics you are comfortable talking about. I say this because everyone else seems to start with the basics and never gets past them. This will be a lot of work in the writing stage for each video, but it will also help you, as talking about something forces you to really understand it.

     2) Please talk about how to create a story from the footage that you get. What I mean is, when shooting weddings and the little one-day profiles, you find the story in the edit instead of planning it beforehand. Obviously these things have a bit of predictability to them, but there's definitely an art to finding the story in the edit.

     I watched the video with isolating headphones and found it fine. Maybe you should balance your voice with the music on headphones and then turn the music down by a further 3dB or so as a safety margin for less pristine listening conditions. A rule like that would make the mixing phase pretty straightforward.

     Or better yet, get a traditional musician-style microphone stand and arrange it so the bar is horizontal, put a colour checker where your head will be, focus on it, hit record, then go sit down and just push the horizontal bar out of the way (it will pivot out of shot). Not only will you be in focus, but you'll also have a colour checker at the start of every shot. You could also mount a small whiteboard to use as a slate for future shot identification, and since the mic stand sits to the side out of shot, there's no need to put a tripod on the chair where you'll be sitting. On the next shot you can just swing it back into place without moving the whole stand. Alternatively, you could hang something from the boom mic, which should be almost directly over your head and just out of shot.

     Awesome! Content is king, so as long as you edit the videos nicely enough that they're not unwatchable, just film it and get it out there. I watch far more content of people doing things (building houses, machining, travelling, renovating, etc) than professionally edited stuff, simply because it's mostly more interesting. It's pretty hard to be a good enough writer to match the authenticity of someone who just opens their mouth and speaks in response to life, pretty hard to be a good enough actor to match the authenticity of someone who is actually living in their character's shoes and genuinely seeing things happen and reacting for the first time, and pretty darn hard to write a story as surprising yet logical and plausible as reality itself. This is what film-makers find hard to understand about vloggers and content creators: their film-making mostly sucks, but the acting and story are equal to the pinnacle of what Hollywood has to offer.
  5. Good post. I wrote a DCTL plugin that reduces the bit depth of images and found that you can reduce the bit depth quite significantly before it becomes visible on real-world images. I think 6 bits was fine for many test images of skin tones, for example. Probably the biggest enemy of the 8-bit codec is the gradients that occur when filming flat-coloured surfaces like any interior wall. Not only is the light going to fall unevenly on the wall, and not only is the lens going to vignette a little, but the closer these are to perfect, the wider the posterisation bands are going to be. Still, you have to try pretty hard to get visible banding from 8-bit if the image is exposed well and isn't in a crazy-flat log profile.
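     A toy version of that experiment in Python (rather than a Resolve plugin), assuming float pixel values 0..1:

     ```python
     import numpy as np

     def quantize(img, bits):
         """Round a float image (0..1) to 2**bits evenly spaced levels."""
         levels = 2 ** bits - 1
         return np.round(img * levels) / levels

     # A slow ramp, like a softly lit interior wall spanning ~10% of full range
     wall = np.tile(np.linspace(0.45, 0.55, 1920), (1080, 1))

     for bits in (10, 8, 6):
         q = quantize(wall, bits)
         print(f"{bits}-bit: {len(np.unique(q))} distinct levels across the wall")
     ```

     10-bit keeps ~100 steps across that subtle ramp, 8-bit ~26, and 6-bit under 10 - which is roughly where the bands get wide enough to see on a flat surface.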
  6. Watched the video, haven't watched the film; the video was funny and made me glad I didn't watch the movie. It reminded me of this video: [video]. Of course, I enjoyed Guardians of the Galaxy, but he does have a point about movie making and editing and story...
  7. I suspect you are a devotee of the "higher resolution sharper images" school of thought. This is also called "the video look". I did tests that mathematically analysed various codecs, and the h264 / h265 codecs were mathematically more accurate than Prores at similar bitrates (which makes sense, considering that maths is how the designers of compression algorithms test those algorithms, and Prores is based on an older compression algorithm). However, humans are not computers. My suspicion is that the preference for Prores over h264 etc lies in the errors.

     For starters, Prores is ALL-I, so it renders motion very well, which is not something the mathematics used to test algorithms takes into consideration. Yes, h264 etc can do ALL-I, although most implementations don't enable it.

     Secondly, to my eye at least, Prores tends to give a softer look than h264, which appears spiky and sharp/brittle in comparison. Anyone familiar with film will know that it isn't sharp at all, and anyone shooting RAW will be familiar with this. Prores has a very similar sharpness to RAW images and requires sharpening in post if you want a more modern-feeling image; h264 and h265 feel very different. Yes, camera manufacturers tend to sharpen the image before applying h264/5 compression, so that adds to the effect, but when you're comparing Prores to h264 you're comparing the whole image pipeline, and this is what tends to happen. Unless you're a fan of the "video look", these images likely have too much sharpening applied, and my skills in softening an image with blurs are more developed than my skills in sharpening, purely because of how much sharpening is typically applied. All of the above would be similar for Motion JPEG implementations as well, and the cameras with that codec also have a very good reputation.

     Finally, I suggest this - after reading this post, look around you and notice your surroundings. Do they look 'sharp'? Look at your hands and the texture of your skin, look at the fabrics of your clothes, look at the device you're reading this on. Do they look 'sharp'? No. Reality isn't sharp. It's detailed, sure, but in an understated way; it just IS, without all the details yelling at you to notice them. Unlike h264/h265.
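     For reference, the kind of maths codec designers optimise is metrics like PSNR; here's a minimal version (frames as numpy arrays; the frame names in the usage comment are hypothetical):

     ```python
     import numpy as np

     def psnr(reference, test, peak=1.0):
         """Peak signal-to-noise ratio in dB; higher = mathematically closer."""
         mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
         return 10 * np.log10(peak ** 2 / mse)

     # Hypothetical usage with frames decoded elsewhere:
     # psnr(source_frame, prores_frame) vs psnr(source_frame, h264_frame)
     ```

     The catch is that PSNR is blind to the *character* of the error - a softened frame and a blocky, ringing frame can score identically, which is exactly the gap between measurement and perception described above.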
  8. We were definitely spoiled, and the price going up is more reflective of the optical quality of some of those lenses. Covid and buying boredom / nostalgia will have driven the whole market up somewhat, and it might correct in the next few years, but I don't think it will go down that much. There is a diminishing supply as lenses age and break, and there is rising demand, both from technology driving up visual media (video and stills photography alike) and from a rising middle class in countries like China and India that has added billions of people to those with discretionary income: https://en.wikipedia.org/wiki/Middle_class#Recent_global_growth
  9. I guess you raise an interesting point. Vintage lenses used to be a good way of getting a high-image-quality lens for relatively little money, but now that prices have skyrocketed for almost all the recognisable brands, outside some bargains at garage sales etc, maybe that's not the case anymore. Considering that China has stepped in and is making new copies of older lens formulas under brands like 7artisans and TTArtisan, and considering that we can go a long way towards matching the coatings of old with diffusion filters and the like, maybe the time of vintage lenses for the masses is over?
  10. Do zoom lenses flare with more / a longer series of circles than primes? Is it because of the larger number of elements in zoom lenses?
  11. I don't think that matters so much, really. If they have enough money and are willing to listen to industry and customers to build something that people actually want, then I don't think it's a problem. My two favourite cameras are the GH5 and the OG BMPCC. The GH5 is the result of a long line of Panasonic listening to customers and gradually pushing things further, and it's the most convenient and usable camera I have. The OG BMPCC has the nicer image, by a long way, and no legacy whatsoever. If customers placed much emphasis on lineage and history then the OG BMPCC would have been a flop, and yet it was a massive hit, even though the camera itself had quality issues, many significant design issues, and came from a company with no service network. Give people what they want and the customers will come.
  12. I think the phrase you're looking for is "I'm not addicted. I can stop any time I like."
  13. Which party are you talking about exactly that Nikon would be late to? https://imaging.nikon.com/history/chronicle/cousins18-e/index.htm https://imaging.nikon.com/history/chronicle/cousins19-e/index.htm
  14. Yes, but what do we call it? The GH5mk2 thread? The GH6 thread? Hell, if we're not believing the rumours then maybe they're going straight to the GH7 and going for a 12K sensor!
  15. That's a good video, but it doesn't address the different bit depths, as the source is an 8-bit signal from the C100.
  16. Interest in vintage lenses is directly correlated to sensor resolution, so it's up up up up up....... I saw a post the other day from someone saying that the second-hand price of one particular vintage lens had gone up 4X in about the last year.
  17. They could use sensors from Fairchild and leapfrog the generic video look that almost all Sony-sensor cameras give these days. Now that really would be something.
  18. Starting from scratch might be an advantage, especially considering they don't have a cine line to protect or internal departmental politics to navigate. Canon has made a mess of their prosumer offerings, with the camera architecture potentially being a limiting factor, whereas starting fresh gives you an opportunity to design things from the ground up for the way things work now, rather than either having to design custom chips (à la ARRI) or having to shoehorn new chips into a legacy architecture (à la Canon). Blackmagic did well by essentially designing multiple cameras from the ground up to fit underserved niches, but Nikon would be doing that with (I assume) far more resources, stronger relationships with industry, proven manufacturing capability, a large professional servicing network, a large customer base with brand trust and loyalty, and a huge body of lenses to draw from.
  19. The more I learn about each aspect of the image pipeline, the more I realise that everything matters. If you have a camera like the A7S3 with a good sensor, good image processor, and good codec, then the fact that it's 8-bit isn't too bad, because the other things are fine.

      I've run into issues shooting 8-bit C-Log with the Canon XC10. This should be a reasonable example of 8-bit done well, as it's a Canon cine/video camera with a 4K 300Mbps 8-bit codec, but alas... Here's a shot, which I will admit is underexposed: [frame]. This is what we get when we convert the colour space: [converted frame]. But unfortunately, if we zoom into the seat in the bottom left: [crop showing banding]. Is this an 8-bit issue? Is this a poor codec issue? I'll leave that as an exercise for the reader.

      Let's take another example: [frame], which after a transform gives this: [converted frame] - look at the noise! Now, is that ISO noise? 8-bit quantisation? A "poor" 300Mbps codec on a Canon cine camera? Here is the vector scope, which clearly shows the 8-bit quantisation: [vectorscope]. Yes, this is an extreme example with a low-contrast scene, a flat log profile, and an 8-bit codec, but this was a $2K Canon cine camera with a 300Mbps codec.

      So I replaced it with a GH5, the decisive factor being the internal 10-bit. Let's take this image, a 10-bit HLG frame with almost zero contrast: [frame], and actually try to break it by applying more contrast than you would ever use in real life: [graded frame] - and it holds together. Is that the 10-bit? The 150Mbps 422 codec? The fact it's a downsampled 5K image? Who knows, but it's a night-and-day difference between an 8-bit camera where I struggle to get a good image from a real-world shot and a 10-bit camera that I can't break even when I try.

      I like to think about it like this: 8-bit can be good enough if you have a good enough implementation of it (unlike the XC10), but a good 10-bit implementation gives you security that you're not going to run into bit-depth issues, so for me it's a safety thing.
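      The contrast-push failure mode is easy to reproduce synthetically; a quick sketch of why 10-bit survives a grade that 8-bit doesn't (illustrative numbers only, codec effects not modelled):

      ```python
      import numpy as np

      # Simulate a flat, low-contrast log capture quantised in-camera,
      # then push contrast far harder than any real grade would.
      flat = np.linspace(0.40, 0.60, 100_000)

      for bits in (8, 10):
          levels = 2 ** bits - 1
          captured = np.round(flat * levels) / levels          # in-camera quantisation
          graded = np.clip((captured - 0.5) * 8 + 0.5, 0, 1)   # brutal contrast push
          steps = np.unique(graded)
          print(f"{bits}-bit: {len(steps)} levels left, "
                f"largest jump {np.diff(steps).max():.4f} of full range")
      ```

      The 8-bit version ends up with roughly 4x fewer, 4x coarser steps after the push - that's the banding on the seat, before codec macroblocking even enters the picture.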
  20. Of course it is better to know where a problem is and fix it properly, but let's not kid ourselves - the OP doesn't even have time to listen to the final edit to hear where any problems might be, so the basket of solutions is pretty empty when there is such a lack of care-factor, especially considering that poor video quality is tolerable but poor audio makes something unwatchable.

      Besides, any half-decent limiter can put a pretty heavy limit on a signal without much, if any, perceivable effect. Anyone familiar with tape saturation will know that this type of limiting can be very sonically benign. I used to use tape-style saturators for compression and sonic effects when writing electronic music, and you'd have to go seriously out of your way to push one hard enough to get anything sonically interesting in terms of distortion. I read somewhere that there is a bone in the ear that we hear through, that it causes high levels of second-harmonic distortion, and that this is then removed by the auditory processing parts of the brain. I'm not sure if that's true, but it's pretty obvious that even-order harmonic distortion is orders of magnitude less offensive than odd-order harmonic distortion, so anything that creates even rather than odd harmonics will be far more pleasing, or can be pushed much harder before becoming unpleasant.
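      To make the even-vs-odd point concrete, here's a small sketch comparing a symmetric saturator (odd harmonics only) with an asymmetric one (adds even harmonics) - the transfer curves are made up for illustration, not any particular plugin:

      ```python
      import numpy as np

      fs, f0 = 48_000, 1_000
      t = np.arange(fs) / fs
      x = 0.9 * np.sin(2 * np.pi * f0 * t)       # 1 kHz test tone, 1 second

      symmetric = np.tanh(3 * x)                  # symmetric clip -> odd harmonics
      asymmetric = np.tanh(3 * x) + 0.3 * np.tanh(3 * x) ** 2  # adds even harmonics

      for name, y in (("symmetric", symmetric), ("asymmetric", asymmetric)):
          spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
          rel = [20 * np.log10(spectrum[k * f0] / spectrum[f0] + 1e-12)
                 for k in range(2, 6)]            # harmonics 2..5 vs fundamental
          print(name, [f"{r:.0f} dB" for r in rel])
      ```

      The symmetric curve puts energy only at odd multiples of the fundamental; the asymmetric one adds 2x and 4x, which the ear tolerates far better - the same reason tape-style limiting can be pushed so hard before it sounds bad.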
  21. @zerocool22 Why not just use a limiter? Not sure about FCPX, but Resolve has one built into every track, and every bus, and also the master output.
  22. Thanks, I've asked this question previously but hadn't gotten a definitive answer. A couple of things that are interesting from that spec sheet: it can do 100fps with a rolling shutter, and 50fps with a global shutter. I wasn't aware that the BMMCC or OG BMPCC has either 100fps or a global shutter enabled - does this mean that the sensor from all those years ago is only semi-utilised? Wow. Seriously Blackmagic - do an update on these cameras!! I see minimalism has found another follower...
  23. Do you have any links outlining the dual-gain architecture of the sensor? Some time ago I looked for more information on that aspect and couldn't find anything, and I'm curious to read more about it 🙂 I definitely agree that a small sensor doesn't have to mean a vintage look. Lots of wider-aperture lenses can give a decent amount of background separation. If anyone isn't clear on this, the following might be of interest: http://www.yedlin.net/NerdyFilmTechStuff/MatchLensBlur.html
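      The equivalence is easy to ball-park with the thin-lens approximation (a rough sketch, not Yedlin's method; sensor widths and lens choices are just examples):

      ```python
      # A background point at infinity, with focus at distance s, renders as a
      # disc of roughly f^2 / (N * s) on the sensor; dividing by sensor width
      # gives blur as a fraction of frame width, which is what the eye compares.
      def blur_fraction(focal_mm, f_number, subject_m, sensor_width_mm):
          blur_mm = focal_mm ** 2 / (f_number * subject_m * 1000)
          return blur_mm / sensor_width_mm

      # MFT 25mm f/1.4 vs full-frame 50mm f/2.8, subject at 2m
      print(f"MFT 25/1.4: {blur_fraction(25, 1.4, 2, 17.3):.2%} of frame width")
      print(f"FF  50/2.8: {blur_fraction(50, 2.8, 2, 36.0):.2%} of frame width")
      ```

      Both come out around 1.3% of frame width - i.e. a fast lens on a small sensor separates the background about as well as a slower lens on full frame.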
  24. This came up in my feed - not sure if it's been posted already, but a search didn't reveal any hits: [video]. There's some great info in the comments about equipment, settings, and process.
  25. @zerocool22 When you play the video you can see levels via the Mixer section of the Edit page (it's the panel in the bottom-right corner, with levels just going into the yellow band on the main mix), and they show via colour if something is close to clipping or clipping. You have to enable the Mixer via the button at the top-right, similar to how you turn the Inspector window on and off. Also, if you have a few audio tracks you might have to drag out the width of the Mixer panel so you can see each track. It won't show if an audio clip was clipped in the original recording, although I wouldn't think this is something you'd need to refer to frequently, especially considering the damage has already been done - but maybe there's something else going on... In a sense, the more useful measures are whether a track is clipping (which is recorded audio + clip volume + track volume) or whether the whole mix is clipping. It's a similar thing on the Fairlight page, although you have more places to see individual levels for tracks and busses etc. Is there something specific you're doing, or a specific situation you're in?