Everything posted by kye
-
I'm still working through it, but I would imagine there are an infinite variety. Certainly, looking at film emulations, some are quite different to others in what they do to the vectorscope and waveform.

What do you mean by this?

OK, one last attempt. Here is a LUT stress test image from truecolour. It shows smooth gradations across the full colour space and is useful for seeing if there are any artefacts likely to be caused by a LUT or grade. This is it taken into Resolve and exported out without any effects applied.

This is the LUT image with my plugin set to 1-bit. This should create only white, red, green, blue, yellow, magenta, cyan, and black.

This is the LUT image with my plugin set to 2-bits. This will create more variation. The thing to look for here is that all the gradual transitions have been replaced by flat areas that transition instantly to the next adjacent colour.

If you whip one of the above images into your software package, I would imagine you'd find that the compression has created intermediary colours on the edges of the flat areas. If my processing were creating intermediate colours, though, they would be visible as separate flat areas, and as you can see, there are none.
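As a quick sanity check on those counts (my arithmetic, not output from the plugin): at 1 bit per channel each of R, G, and B can only be 0 or 1, and at 2 bits each channel has 4 levels, so

$$(2^1)^3 = 8 \text{ colours} \qquad (2^2)^3 = 64 \text{ colours}$$

which is why the 1-bit image can only contain the eight corner colours listed above.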
-
Read the book. The title isn't an accident.
-
and some time after that maybe they'll be able to take my photos of 2-3+ hours of vigorous whiteboard discussions and actually make people work together better! Now that would be the magic of cinema....

Everything has pros and cons. Typically the people who think something is bad are either unaware of what was gained, or don't value it as highly as the things that were lost. Most often things were better in the good old days, but what that doesn't take into account is that the good old days of the 60-year-old were when they were in their early 20s, which was exactly what the 80-year-old was calling "the end of civilised society" at the time. The things we value, and basically everything that our lives and the world are made of, are the accumulation of all the barbarity and demonic possession of civilised society through the entire history of time. See this for some examples: https://www.mentalfloss.com/article/52209/15-historical-complaints-about-young-people-ruining-everything

There's a great book on social media use that you might find interesting called It's Complicated: The Social Lives of Networked Teens by Danah Boyd, which was an incredible read and actually suggested that teens aren't addicted to their phones but to each other, and that when teens are looking at their phones it's much more likely to be in aid of communicating with each other than when an adult is looking at their phone. A Google search found a PDF of the whole thing on the author's website, so if you are looking, it's apparently easy to access.
-
Ok, now I understand what you were saying. When you said "a digital monitor 'adds' adjacent pixels" I thought you were talking about pixels in the image somehow being blended together, rather than just that monitors are arrays of R, G, and B lights.

One of the upshots of subtractive colour vs additive colour is that with subtractive colour, saturation peaks at a lower luminance level than it does in an additive model. To compensate for that, colourists and colour scientists and LUT creators often darken saturated colours. This is one of the things I said I find almost everywhere I look. There are other things too. I'm sure that if you look back you'll find I said that it might contribute to it, not that it is the only factor.

Cool. Let's test these. The whole point of this thread is to go from "I think" to "I know". This can be arranged.

Scopes make this kind of error all the time. Curves and right angles never mix: when you generate a line of best fit with non-zero curve inertia, or reconstruct a signal with non-infinite frequency response, you get ringing in the curve (there's a toy demonstration of this at the end of this post). What this means is that if your input data is 0, 0, 0, X, 0, 0, 0, the curve will have non-zero data on either side of the spike. This article talks about it in the context of image processing, but it applies any time you have a step-change in values: https://en.wikipedia.org/wiki/Ringing_artifacts

There is nothing in my code that would allow for the creation of intermediary values, and I'm seeing the right behaviour visually at lower bit-depths when I look at the image (as shown previously with the 1-bit image), so at this point I'm happy to conclude that there are no values in between, and that it's either a scope limitation or something created by the jpg compression process.
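To make the ringing point concrete, here's a toy sketch in C (the kernel values are made up purely for illustration, not taken from any scope's actual algorithm): convolving a lone spike with a reconstruction kernel that has negative lobes, like a truncated sinc, produces non-zero values on both sides of it.

```
#include <stdio.h>

int main(void)
{
    /* A lone spike, as in the 0, 0, 0, X, 0, 0, 0 example (X = 1) */
    const float signal[9] = {0, 0, 0, 0, 1, 0, 0, 0, 0};
    /* A small kernel with negative lobes, loosely sinc-like */
    const float kernel[5] = {-0.1f, 0.25f, 0.7f, 0.25f, -0.1f};

    for (int i = 2; i < 7; i++) {
        float y = 0.0f;
        for (int k = -2; k <= 2; k++)
            y += signal[i + k] * kernel[k + 2];
        printf("sample %d: %+.2f\n", i, y); /* ringing either side of the spike */
    }
    return 0;
}
```

The output is -0.10, +0.25, +0.70, +0.25, -0.10: the spike has leaked into its neighbours, which is exactly the kind of in-between value a scope can draw that was never in the data.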
-
@Andrew Reid considering how good these things are getting, is it now possible to start ranking phones against cheaper cameras? All the discussion is phones vs phones or cameras vs cameras, but I'm curious when we can start comparing them directly. Or maybe it's obvious and I'm out of the loop because I'm still using my iPhone 8..

It will be a great day when the image quality of a consumer device mimics Zeiss Master Primes. Cameras are tools, and currently the best tools are inaccessible to the vast majority of people who would like to create with them. Democratisation of image quality is good for art, good for creators, and good for society in general. The only people it's not good for are technical elitists, or people who are obsessed with equipment and don't care about art. You do realise that if my camera gets better then yours doesn't get worse, right?

and smartphones don't crawl from your nightstand and inject you with heroin while you're sleeping... improving the camera in a phone will help you to get access to traditional camera equipment, not hinder it, and owning a phone doesn't automatically mean that you are forced at gunpoint to browse social media. These things are tools.

Agreed. It's a tough job to convince many people that a photographer is more than a camera-transportation-device, but the more that smartphones improve and people take photos and realise they're not that great even though the picture quality seems fine, the more they'll realise that there is a skill component to things. The best way to educate the public is to give them the tools and then watch as they discover it's not so easy.

This transition from photographers being technicians who restrict access to the equipment to photographers being artists has been occurring for a very long time. It used to be that accessing and operating an 8x10 camera wasn't feasible, then it was lenses and lights, then with digital it was the megapixels and lenses, now it's just lenses, and once fake-bokeh is convincing there will be no technical or ownership barriers at all, which is why the good photographers are the ones who specialise in lighting, composition, posing, set design, concepts, etc. It's pretty easy to make an Alexa look like a home video. The equipment won't save the amateurs from needing to be artists.
-
+1 for a normal picture profile, like @TomTheDP says. This will mean controlling your lighting and DR in the scene and white-balancing meticulously. It would also help the colourist immensely if you shot a colour chart, preferably in every setup if you can. 8-bit is fine if it's starting in a 709-like space, and the A73 isn't that bad when you consider that feature films have mixed GoPro footage in with cinema cameras!
-
Perhaps these might provide some background to subtractive vs additive colour science:
https://www.dvinfo.net/article/production/camgear/what-alexa-and-watercolors-have-in-common.html
https://www.dvinfo.net/article/post/making-the-sony-f55-look-filmic-with-resolve-9.html

Well, I would have, but I was at work. I will post them now, and maybe we can all relax a little. No bit-crunch: 4.5 bits: 4.0 bits:

In terms of your analysis vs mine, my screenshots are all taken prior to the image being compressed to 8-bit jpg, whereas yours was taken after it was compressed to 8-bit jpg. Note how much the banding is reduced on the jpg above vs how it looks uncompressed (both at 4.0 bits): Here's the 5-bit with the noise to show what it looks like before compression: and without the noise applied:

Agreed about 'more than the sum of their parts', as it's more like a multiplication - even a 10% loss per aspect compounds quickly across many factors.

Not a fan of ARRIRAW? I've never really compared them, so I wouldn't know.

Indeed, and that's kind of the point. I'm trying to work out what it is.
-
It does hark back to Deezid's point. Lots more aspects to investigate here yet. Interesting about the saturation of shadows - my impression was that film desaturated both shadows and highlights compared to digital, but maybe when people desaturate digital shadows and highlights it's always done overzealously?

We absolutely want our images to be better than reality - the image of the guy in the car doesn't look like reality at all! One of the things that I see that makes an image 'cinematic' vs realistic is resolution, and specifically the lack of it. If you're shooting with a compressed codec then I think some kind of image softening in post is a good strategy. I'm yet to systematically experiment with softening the image with blurs, but it's on my list.

I'll let others comment on this in order to prevent groupthink, but with what I've recently learned about film, which one is which is pretty obvious.

When you say 'compression', what are you referring to specifically? Bit-rate? Bit-depth? Codec? Chroma sub-sampling? Have you noticed exceptions to your 10-bit 422 14-stops rule, where something 'lesser' had unexpected thickness, or where things above that threshold didn't? If so, do you have any ideas on what might have tipped the balance in those instances?

Additive vs subtractive colour, and mimicking subtractive colours with additive tools, may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look. I did a colour test of the GH5 and BMMCC where I took shots of my face and a colour checker with both cameras, including every colour profile on the GH5. I then took the rec709 image from the GH5 and graded it to match the BMMCC as well as every other colour profile from the GH5. In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour. I highly encourage everyone to take their camera, point it at a colourful scene lit with natural light, take a RAW still image and then a short video clip in their favourite colour profile, and then try to match the RAW still to the colour profile. We talk about "just doing a conversion to rec709" or "applying the LUT" like it's nothing - it's actually applying a dozen or more finely crafted adjustments created by professional colour scientists. I have learned an incredible amount by reverse-engineering these things.

It makes sense that the scopes draw lines instead of points; that's also why the vectorscope looks like triangles and not points. One less mystery 🙂

I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during compression, not by adding the noise.

I saw that. His comments about preferring what we're used to were interesting too. Blind testing is a tool that has its uses, and we don't use it nearly enough.
-
I've compared 14-bit vs 12-bit vs 10-bit RAW using ML, and based on the results of my tests I don't feel compelled to even watch a YT video comparing them, let alone do one for myself, even if I had the cameras just sitting there waiting for it. Have you played with various bit-depths? 12-bit and 14-bit are so similar that it takes some pretty hard pixel-peeping to be able to tell a difference. There is one, of course, but it's so far into diminishing returns that the ROI line is practically horizontal, unless you were doing some spectacularly vicious processing in post. I have nothing against people using those modes, but it's a very slight difference.
-
It might be, that's interesting. I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post.

Agreed. In my bit-depth reductions I added grain to introduce noise to get the effects of dithering: "Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD." Thickness of an image might have something to do with film grain, but that's not what I was testing (or trying to test, anyway).

Agreed. That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness, as maybe that will help with thickness?

Obviously it's possible that I made a mistake, but I don't think so. Here's the code (see the sketch at the end of this post). Pretty straightforward. Also, if I set it to 2.5 bits, then this is what I get: which looks pretty much like what you'd expect. I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data. If I set it to 1 bit then the image looks like it's not providing any values between the standard ones you'd expect (black, red, green, blue, yellow, cyan, magenta, white). Happy to hear if you spot a bug. Also, maybe the image gets given new values when it's compressed? Actually, that sounds quite possible.. hmm.

I wasn't suggesting that a 4.5-bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth.

Indeed there is, and I'd expect there to be! I mean, I bought a GH5 based partly on the internal 10-bit! I'm not regretting my decision, but I'm thinking that it's less important than I used to think it was, especially without using a log profile like I also used to do. Essentially the test was to go way too far (4.5 bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do. If we start with the assumption that cheap cameras create images that are thin because of their 8-bit codecs, then by that logic a 5-bit image should be razor thin and completely objectionable, but it wasn't, so it's unlikely that the 8-bit property is the one robbing the cheap cameras of image thickness.
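For reference, a minimal DCTL sketch of the kind of rounding described above - my reconstruction for illustration, not necessarily the exact plugin code (the UI parameter name and ranges are assumptions):

```
// Quantise each RGB channel to a selectable (fractional) bit-depth.
// DCTL is Resolve's C-like per-pixel language.
DEFINE_UI_PARAMS(bits, Bit Depth, DCTLUI_SLIDER_FLOAT, 4.5, 1.0, 10.0, 0.1)

__DEVICE__ float quantise(float v, float steps)
{
    // Round to the nearest of 'steps' evenly spaced intervals in 0..1
    return _floorf(v * steps + 0.5f) / steps;
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    // 2^bits levels per channel gives (2^bits - 1) intervals between
    // them. Fractional bit-depths (e.g. 4.5) just give a
    // non-power-of-two number of levels, which is all "4.5 bits" means.
    const float steps = _exp2f(bits) - 1.0f;
    return make_float3(quantise(p_R, steps),
                       quantise(p_G, steps),
                       quantise(p_B, steps));
}
```

At bits = 1 this maps every channel to 0 or 1, which is why the 1-bit test can only produce the eight corner colours.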
-
The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'. We know that high-end cinema cameras typically have really thick-looking images, and that cheap cameras typically do not (although there are exceptions). Therefore this quality of thickness is related to something that differs between these two scenarios.

Images from cheap cameras typically have a range of attributes in common, such as 8-bit, 420, heavy compression, cheaper lenses, less attention paid to lighting, and a range of other things. However, despite all these limitations, the images from these cameras are very good in some senses. A 4K file from a smartphone has a heap of resolution, reasonable colour science, etc, so it's not like we're comparing cinema cameras with a potato. This means that the concept of image thickness must be fragile, otherwise consumer cameras would capture it just fine. If something is fragile, and is only just on the edge of being captured, then if we take a thick image and degrade it in the right ways, the thickness should evaporate with the slightest degradation.

Given that I can take an image and output it at 8 bits and at 5 bits without there being a night-and-day difference, I must assume one of three things:
1. the image wasn't thick to begin with
2. it is thick at both 8 bits and 5 bits, and therefore bit-depth doesn't matter that much
3. it is thick at 8 bits but not at 5 bits and people just didn't notice, in a thread specifically about this

I very much doubt that it's #3, because I've had PMs from folks who I trust saying it didn't look much different. Maybe it's #1, but I also doubt that, because we're routinely judging the thickness of images via stills from YT or Vimeo, which are likely to be 8-bit, 420, and highly compressed. The images of the guy in the car that look great are 8-bit. I don't know where they came from, but if they're screen grabs from a streaming service then they'll be pretty poor quality too. Yet they still look great.

I'm starting to think that maybe image thickness is related to the distribution of tones within a HSL cube, with some areas being nicer than others, or there being synergies between various areas and not others.
-
If I'm testing the resolution of a camera mode then I typically shoot something almost stationary, open the aperture right up, focus, then stop down at least 3 stops, normally 4, to get to the sweet spot of whatever lens I'm using and to make sure that if I move slightly the focal plane is still deep enough. Doing that with dogs might be a challenge!
-
Actually, the fact that an image can be reduced to 5 bits and not be visibly ruined means that the bits aren't as important as we all seem to think. A bit-depth of 5 bits is equivalent to taking an 8-bit image and only using 1/8th of the DR, then expanding that out. Or shooting a 10-bit image, exposing within 1/32 of that DR, and expanding that out. Obviously that's not something I'd recommend, and I did apply a lot of noise before doing the bit-depth reduction, but the idea that image thickness is related to bit-depth seems to be disproven. I'm now re-thinking what to test next, but this was an obvious thing to try and it turned out to be wrong.
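The equivalence above is just counting quantisation levels (my arithmetic, assuming linear code values):

$$2^5 = 32 = \frac{2^8}{8} = \frac{2^{10}}{32}$$

so a 5-bit image has exactly as many levels as an 8-bit image confined to 1/8th of its range, or a 10-bit image confined to 1/32nd of its range.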
-
What's today's digital version of the Éclair NRP 16mm Film Camera?
kye replied to John Matthews's topic in Cameras
Which do you agree with - that film has poor DR, or that Canon DSLRs do? I suspect you're talking about film, and this is something I learned about quite recently.

In Color and Mastering for Digital Cinema, Glenn Kennel shows density graphs for both negative and print films. The negative film graphs show the 2% black, 18% grey and 90% white points all along the linear segment of the graph, with huge amounts of leeway above the 90% white. He says "The latitude of a typical motion picture negative film is 3.0 log exposure, or a scene contrast of 1000 to 1. This corresponds to approximately 10 camera stops". The highlights extend into a very graceful highlight compression curve.

The print-through curve is a different story, with the 2% black, 18% grey and 90% white points stretching across almost the entire DR of the film. In contrast to the negative film, where the range from 2-90% takes up perhaps half of the mostly-linear section of the graph, in the print-through curve the 2% sits very close to clipping, the region between 18% and 90% encompasses the whole shoulder, and the 90% is very close to the other flat point on the curve.

My understanding is that the huge range of leeway in the negative is what people refer to as "latitude", and this is where film's reputation for having a large DR comes from, because that part is true. However, if you're talking about mimicking film, then there was only a very short period in history where you might shoot on film but process digitally, so you should also take into account the print film positive that would have been used to turn the negative into something you could actually watch. Glenn goes on to discuss techniques for expanding the DR of the print-through by over-exposing the negative and then printing it differently, which does extend the range in the shadows below the 2% quite significantly.

I tried to find some curves online to replicate what is in the book but couldn't find any. I'd really recommend the book if you're curious to learn more. I learned more in reading the first few chapters than I have in reading free articles on and off for years now.
-
That's what I thought - that there wasn't much visible difference between them and so no-one commented. What's interesting is that I crunched some of those images absolutely brutally. The source images were (clockwise starting top left) 5K 10-bit 420 HLG, 4K 10-bit 422 Cine-D, and 2x RAW 1080p images, which were then graded, put onto a 1080p timeline and then degraded as below:
- First image had film grain applied, then JPG export
- Second had film grain, then RGB values rounded to 5.5-bit depth
- Third had film grain, then RGB values rounded to 5.0-bit depth
- Fourth had film grain, then RGB values rounded to 4.5-bit depth

The giveaway is the shadow near the eye on the bottom-right image, which at 4.5 bits is visibly banding. What this means is that we're judging image thickness via an image pipeline that isn't that visibly degraded by having the output at 5 bits per channel. The banding is much more obvious without the film grain I applied, and for the sake of the experiment I tried to push it as far as I could. Think about that for a second - almost no visible difference after making an image 5 bits per channel at a data rate of around 200Mbps (each frame is over 1Mb).
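For context on that data rate (my arithmetic, assuming a 25fps timeline; the frame rate isn't stated above):

$$\frac{200\ \text{Mbit/s}}{25\ \text{fps}} = 8\ \text{Mbit/frame} = 1\ \text{MB/frame}$$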
-
I'm not sure if it's the film stock or the 80s fashion that really dominates that image, but wow... the United Colours of Benetton, the 80s! Hopefully Ray-Bans - I think they're one of the only 80s-approved sunglasses still available. 80s music always sounds best on stereos also made in the 80s. I think there's something we can all take away from that 🙂

Great images. Fascinating comments about the D16 having more saturated colours on the bayer filter. The spectral response of film vs digital is something that Glenn Kennel talks about a lot in Color and Mastering for Digital Cinema. If the filters are more saturated then I would imagine that they're either further apart in frequency or they are narrower, which would mean less cross-talk between the RGB channels. This paper has some interesting comparisons of RGB responses, including 5218 and 5246 film stocks, BetterLight and Megavision digital backs, and Nikon D70 and Canon 20D cameras: http://www.color.org/documents/CaptureColorAnalysisGamuts_ppt.pdf The digital sensors all have hugely more cross-talk between RGB channels than either of the film stocks, which is interesting.

I'll have to experiment with doing some A/B test images. In the RGB mixer it's easy to apply a negative amount of the other channels to the output of each channel, which should in theory simulate a narrower filter (see the sketch at the end of this post). I'll have to read more about this, but it might be time to start posting images here and seeing what people think.

I wonder how much having a limited bit-depth is playing into what you're saying. For example, say we get three cameras: one with high DR and low colour separation, the second with high DR and higher colour separation, and the third with low DR and high colour separation. We take each camera, film something, then take the 10-bit files from each and normalise them. The first camera will require us to apply a lot of contrast (stretching the bits further apart) and also lots of saturation (stretching the bits apart), the second requires contrast but less saturation, and the third requires no adjustments. This would mean the first would have the least effective bit-depth once in 709 space, the second would have more, and the third the most.

This is something I can easily test, as I wrote a DCTL to simulate bit-depth issues, but it's also something that occurs in real footage, like when I did everything wrong and recorded this low-contrast scene (due to cloud cover) with C-Log in 8-bit.. Once you expand the DR and add saturation, you get something like this: It's easy to see how the bits get stretched apart - this is the vectorscope of the 709 image: A pretty good example of not having much variation in single hues due to low contrast and low bit-depth. I just wish it was coming from a camera test and not from one of my real projects!
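Here's a rough sketch of that RGB mixer idea as a DCTL - the 0.15 default is an arbitrary illustration value, not a measured one, and keeping each row summing to 1.0 is my own choice so that neutral greys pass through unchanged:

```
// Mimic narrower colour filters (less channel cross-talk) by mixing
// a negative amount of the other channels into each output channel.
DEFINE_UI_PARAMS(amount, Separation, DCTLUI_SLIDER_FLOAT, 0.15, 0.0, 0.5, 0.01)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    const float a = amount;
    // Each row sums to (1 + 2a) - a - a = 1, so R = G = B is unchanged
    float r = (1.0f + 2.0f * a) * p_R - a * p_G - a * p_B;
    float g = (1.0f + 2.0f * a) * p_G - a * p_R - a * p_B;
    float b = (1.0f + 2.0f * a) * p_B - a * p_R - a * p_G;
    return make_float3(r, g, b);
}
```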
-
Another comparison video... spoiler: he plays a little trick and mixes EOS R 1080p and 720p footage in with the 8K, 4K and 1080p from the R5. As we've established, there is a difference, but once everything is downscaled to 1080p and put through the YT compression it's not that big.
-
What's today's digital version of the Éclair NRP 16mm Film Camera?
kye replied to John Matthews's topic in Cameras
A Canon DSLR with ML is probably an excellent comparison image-wise. Canon / ML setups are similar to film in that they both likely have:
- poor DR
- aesthetically pleasing but high levels of ISO noise
- nice colours
- low resolution
- no compression artefacts

I shot quite a bit of ML RAW with my Canon 700D and the image was nice and very organic.
-
Also, some situations don't lend themselves to supports because of cramped conditions or the need to be mobile. Boats for example:
-
Depends where you go.. my shooting locations (like Kinkaku-ji) are often decorated with signs like this: which restrict getting shots like this: while shooting in situations like this:

I lost a GorillaPod at the Vatican because they don't allow tripods (despite it being maybe 8" tall!), and although they offered to put it in a locker for me, our tour group was going in one entry and out the other where the tour bus was meeting us, so there wouldn't have been time to go back for it afterwards, and the bus had already left when we got to security, so that was game over.

Perhaps the biggest challenge is that you can never tell what is or is not going to be accepted in any given venue. I visited a temple in Bangkok and security flagged us for further inspection (either our bags or for not having modest enough clothing), which confused us as we thought we were fine, and then the people doing the further inspection flagged us through without even looking at us. Their restrictions on what camera equipment could be used? 8mm film cameras were fine, but 16mm film cameras required permission. Seriously. This was in 2018. I wish I'd taken a photo of it.

With rules as vague / useless as that, being applied without any consistency whatsoever, my preferred strategy is to not go anywhere near wherever the line might be, so I shoot handheld with a GH5 / Rode VideoMic Pro and a wrist-strap and that's it. Even if something is allowed, if one over-zealous worker decides you're a professional or somehow taking advantage of their rules / venue / cultural heritage / religious sacred site, then someone higher up isn't going to overturn that judgement in a hurry, or without a huge fuss, so I just steer clear of the whole situation.

It really depends where you go and how you shoot. I've lost count of how many times a vlogger has been kicked out of a public park while filming right next to parents taking photos of their kids with their phones, because security thought their camera was "too professional looking", or street photographers have been hassled by private security for taking photos in public, which is perfectly legal. Even one of the YouTubers I watch who makes music videos was on a shoot in a warehouse in LA that almost got shut down, because you need permits to shoot anything there, even on private property behind closed doors!
-
Sounds like you aren't one of the people that needs IBIS, and as much as I love it for what I do, it's always better to add mass or add a rig (shoulder rig, monopod, tripod, etc) than to use OIS or IBIS. I use a monopod when I shoot sports, but I also need IBIS as I'm shooting at 300mm+ equivalent, so it's the difference between usable and not. I'd also like to use a monopod or even a small rig for my travel stuff, but venues have all kinds of crazy rules about such things and I've had equipment confiscated before (long story) due to various restrictions, so hand-held is a must.
-
This is interesting. If colour depth is colours above a noise threshold (as outlined here: https://www.dxomark.com/glossary/color-depth/ ) then that raises a few interesting points:
- humans can hear/see slightly below the noise floor, and noise of the right kind can be used to increase resolution beyond bit-depth (i.e. dither; demonstrated in the sketch at the end of this post)
- that would explain why different cameras have slightly different colour depths, rather than depths in power-of-two increments
- it should be affected by downsampling, both because downsampling reduces noise and because it can average four values, artificially creating the values in-between
- it's the first explanation I've seen that goes anywhere towards explaining why colours go to crap when ISO noise creeps in, even if you blur the noise afterwards

I was thinking of inter-pixel treatments as possible ways to cheat a little extra, which might work in some situations. Interestingly, I wrote a custom DCTL plugin that simulated different bit-depths, and when I applied it to some RAW test footage it only started degrading the footage noticeably when I got it down to under 6 bits, where it started adding banding to the shadow region on the side of the subject's nose, where there are flat areas of very smooth gradations. It looked remarkably like bad compression, but it was only colour degradation, not bitrate.

True, although home videos are often filmed in natural light, which is the highest quality light of all, especially if you go back a decade or two, where film probably wasn't sensitive enough to film much after the sun sets and electronic images were diabolically bad until very, very recently!
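The dither and downsampling points are easy to demonstrate. A toy sketch in C (the values and level count are made up for illustration): quantise a value that sits between two 5-bit levels, with and without noise spanning one quantisation step, then average many samples, which is roughly what downsampling does spatially.

```
#include <stdio.h>
#include <stdlib.h>

/* Round a 0..1 value to the nearest of 'steps' quantisation intervals */
static float quantise(float v, float steps)
{
    return (float)(long)(v * steps + 0.5f) / steps;
}

int main(void)
{
    const float steps = 31.0f;            /* 5-bit: 32 levels */
    const float v = 0.4f + 0.4f / steps;  /* sits between two levels */
    const int n = 100000;
    float sum = 0.0f;

    for (int i = 0; i < n; i++) {
        /* uniform noise spanning exactly one quantisation step */
        float noise = ((float)rand() / RAND_MAX - 0.5f) / steps;
        sum += quantise(v + noise, steps);
    }

    printf("actual value:        %f\n", v);
    printf("quantised, no noise: %f\n", quantise(v, steps));
    printf("dithered + averaged: %f\n", sum / n);
    return 0;
}
```

The noiseless version snaps to the nearest level, while the dithered-and-averaged version lands almost exactly on the true value: the averaging has created a value in between the quantisation levels, which is the mechanism behind both the dither point and the downsampling point above.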