tupp
Reputation Activity

  1. Like
    tupp reacted to KnightsFan in Image thickness / density - help me figure out what it is   
    Yes, I think the glow helps a lot to soften those highlights and make them dreamier rather than sharp and pointy and make it more 3D in this instance where the highlights are in the extreme background (like you said, almost like mist between subject and background).
    I agree, the relation between the subject and the other colors is critical and you can't really change that with different sensors or color correction. That's why I say it's mainly about what's in the scene. Furthermore, if your objects in frame don't have subtle variation you can't really add that in. The soft diffuse light coming from the side in The Grandmaster really allows every texture to have a smooth gradation from light to dark, whereas your subject in the boat is much more evenly lit from left to right.
    I assume you're also not employing a makeup team? That's really the difference between good and bad skin tones, particularly in getting different people to look good in the same shot.
  2. Thanks
    tupp reacted to BTM_Pix in Image thickness / density - help me figure out what it is   
    In the same vein, this might be a useful resource for you.
    https://film-grab.com
  3. Like
    tupp reacted to KnightsFan in Image thickness / density - help me figure out what it is   
    @kye I don't think those images quite nail it. I gathered a couple pictures that fit thickness in my mind, and in addition to the rich shadows, they all have a real sense of 3D depth due to the lighting and lenses, so I think that is a factor. In the pictures you posted, there are essentially 2 layers, subject and background. Not sure what camera was used, but most digital cameras will struggle in actual low light to make strong colors, or if the camera is designed for low light (e.g., A7s2) then it has weak color filters which makes getting rich saturation essentially impossible.
     
    Here's a frame from The Grandmaster which I think hits peak thickness. Dark, rich colors, a couple highlights, real depth with several layers and a nice falloff of focus that makes things a little more dreamy rather than out of focus.

    And the scopes which clearly show the softness of the tones and how mostly everything falls into shadows.

     
     
    For comparison, here's the scopes from the picture of the man with the orange shirt in the boat which shows definite, harsh transitions everywhere.

     
     
    Perhaps, do you have some examples? For example, that bright daylight Kodak test posted earlier here has this scope (mostly shadow, though a little brighter than the Grandmaster shot, but fairly smooth transitions). And to be honest, I think the extreme color saturation, particularly on bright objects, makes it look less thick.

  4. Like
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    Indeed.  Reminds me of this article:
    https://www.provideocoalition.com/film-look-two/
    Art Adams again.  Long story short, film desaturates both the highlights and the shadows because on negative film the shadows are the highlights!  (pretty sure that's the right way around...)
    I definitely think it's in the processing.  I view it as that there are three factors:
    1) things that simply aren't captured by a cheaper camera (eg, clipping)
    2) things that are captured and can be used in post without degrading the image below a certain threshold (ie, what image standards you or your client have)
    3) things that aren't captured well enough to be used in post (eg, noisy shadows beyond redemption, parts of the DR that break if pushed around too much)
    Obviously if you expose your skin tones in a range that is either completely lost (eg, clipped) or aren't in an area that can be recovered without exposing too much noise or breaking the image then there's nothing you can do.
    What I am interested in is the middle part, where a properly exposed image will put the important things in the image, for example skin tones.  Anything in this range should be able to be converted into something that looks great.
    Let's take skin tones - let's imagine that they're well captured but don't look amazing, but that the adjustment to make them look amazing won't break the image.  In that case, the only thing preventing the ok skin tones from looking great is the skill in knowing what transformations to make to get there.
    Yes, if the skin tones are from a bad codec and there is very little hue variation (ie, plastic skin tones) then that's not something that can be recovered from, but if the hues are all there but just aren't nice, then that should be able to be made to look great.
    This is where it's about skill, and why movies with professional colourists involved often look great.  Of course, ARRI has built a lot of that stuff into their colour science too, so in a sense everything shot with an ARRI camera has a first pass from some of the world's best colour scientists, and is already that much further ahead than other offerings.
    Of course, many others aren't far behind on colour science, but in the affordable cameras it's rare to get the best colour science combined with a good enough sensor and codec.
    That was something I had been thinking too, but thickness is present in brighter lit images too isn't it?
    Maybe if I rephrase it, higher-key images taken on thin digital cameras still don't match those higher-key images taken on film.  Maybe cheap cameras are better at higher-key images than low-key images, but I'd suggest there's still a difference.
    Interesting images, and despite the age and lack of resolution and DR, there is definitely some thickness to them.
    I wonder if maybe there is a contrast and saturation softness to them, not in the sense of them being low contrast or low saturation, but more that there is a softness to transitions of luma and chroma within the image?
    In other news...
    I've been messing with some image processing and made some test images.  Curious to hear if these appear thick or not.




    They're all a bit darker, so maybe fall into the exposure range that people are thinking tends to be thicker.
  5. Like
    tupp reacted to KnightsFan in Image thickness / density - help me figure out what it is   
    @tupp Maybe we're disagreeing on what thickness is, but I'd say about 50% of the ones you linked to are what I think of as thick. The canoe one in particular looked thick, because of the sparse use of highlights and the majority of the frame being rather dark, along with a good amount of saturation.
    The first link I found to be quite thin, mostly with shots of vast swathes of bright sky with few saturated shadow tones.
    The kodachrome stills were the same deal. Depending on the content, some were thick and others were thin. If they were all done with the same film stock and process, then that confirms to me that it's mostly what is in the scene that dictates that look.
    I think that's because they are compressed into 8 bit jpgs, so all the colors are going to be smeared towards their neighbors to make them more easily fit a curve defined by 8 bit data points, not to mention added film grain. But yeah, sort of a moot point.
  6. Like
    tupp got a reaction from KnightsFan in Image thickness / density - help me figure out what it is   
    Of course, a lot of home movies weren't properly exposed and showed scenes with huge contrast ranges that the emulsion couldn't handle.  However, I did find some examples that have decent exposure and aren't too faded.
     
    Here's one from the 1940s showing a fairly deep blue, red and yellow, and then a rich color on a car.
     
    Thick greens here, and later a brief moment showing solid reds, and some rich cyan and indigo.  Unfortunately, someone added a fake gate with a big hair.
     
    A lot of contrast in these shots, but the substantial warm greens and warm skin and wood shine, plus one of the later shots with a better "white balance" shows a nice, complex blue on the eldest child's clothes.
     
    Here is a musical gallery of Kodachrome stills.  Much less fading here.  I'd like to see these colors duplicated in digital imaging.  Please note that Paul Simon's "Kodachrome" lyrics don't exactly refer to the emulsion!
     
    OP's original question concerns getting a certain color richness that is inherent in most film stocks but absent from most digital systems.  It doesn't involve lighting, per se, although there has to be enough light to get a good exposure and there can't be too much contrast in the scene.
     
     
    We have no idea if OP's simulated images are close to how they should actually appear, because 80%-90% of the pixels in those images fall outside of the values dictated by the simulated bit depth.  No conclusions can be drawn from those images.
     
    By the way, I agree that banding is not the most crucial consideration here -- banding is just a posterization artifact to which lower bit depths are more susceptible.  I maintain that color depth is the primary element of the film "thickness" in question.
  7. Like
    tupp reacted to hyalinejim in Image thickness / density - help me figure out what it is   
    Yes, it's this transformation in action, as a lut:
    So there are hue transforms going on as well as saturation transforms. But the saturation aspect of it you could totally do in Resolve. Art Adams came up with this for matching F55 to Alexa
     
    And my point is that to do something similar for digital to film, the leftmost point on that curve should be raised to boost the shadows. But I don't know if that curve is Log to Log or whatever, in which case it might be right. I think Rec709 to Rec709 it might possibly need to be more like this:

    But I haven't tested it extensively, other than to notice that the results of my tinkering weren't as nice as the lut (because the hue changes are important too). So that adjustment is just a guess off the top of my head and not based on testing how it looks. But you get the general idea.... it's not just a highlight roll off, it's a more or less constant change throughout the range.
  8. Like
    tupp reacted to KnightsFan in Image thickness / density - help me figure out what it is   
    Got some examples? Because I generally don't see those typical home videos as having thick images.
    They're pretty close, I don't really care if there's dithering or compression adding in-between values. You can clearly see the banding, and my point is that while banding is ugly, it isn't the primary factor in thickness.
  9. Like
    tupp reacted to KnightsFan in Image thickness / density - help me figure out what it is   
    I've certainly been enjoying this discussion. I think that image "thickness" is 90% what is in frame and how it's lit. I think @hyalinejim is right talking about shadow saturation, because "thick" images are usually ones that have deep, rich shadows with only a few bright spots that serve to accentuate how deep the shadows are, rather than show highlight detail. Images like the ones above of the gas station, and the faces don't feel thick to me, since they have huge swathes of bright areas, whereas the pictures that @mat33 posted on page 2 have that richness. It's not a matter of reducing exposure, it's that the scene has those beautiful dark tonalities and gradations, along with some nice saturation.
    Some other notes:
    - My Raw photos tend to end up being processed more linear than Rec709/sRGB, which gives them deeper shadows and thus more thickness.
    - Hosing down a scene with water will increase contrast and vividness for a thicker look. Might be worth doing some tests on a hot sunny day, before and after hosing it down.
    - Bit depth comes into play, if only slightly. The images @kye posted basically had no difference in the highlights, but in the dark areas banding is very apparent. Lower bit depth hurts shadows because so few bits are allocated to those bottom stops. To be clear, I don't think bit depth is the defining feature, nor is compression for that matter.
    - I don't believe there is any scene where a typical mirrorless camera with a documented color profile will look significantly less thick than an Alexa given a decent colorist--I think it's 90% the scene, and then mostly color grading.
  10. Like
    tupp reacted to hyalinejim in Image thickness / density - help me figure out what it is   
    Don't forget about shadow saturation! It often gets ignored in talk about highlight rolloff. The Art Adams articles kye posted above are very interesting but he's only concerned with highlight saturation behaviour. Here is a photo taken on film (Kodak Pro Image 100, the same as used in my example above)

     
    Here is the same scene shot as a digital RAW still with Adobe default colour but with contrast matched using RGB curves in ACR. You'll notice that at first glance it's more saturated:

     
    Now here is the same digital shot with a LUT added to match the saturation and hues of the midtones. These now look like a good match. But look carefully at how desaturated the shadow areas look. The saturation has been globally lowered and the shadows are looking (dare I say it?)..... thin!

     
    Finally, here is the same shot with a tweaked lut that boosts saturation in the shadows but keeps midtone and highlight saturation restrained. Now the shadows have deep blues and it looks more like the film shot. Is this a thicker image compared to the version with default colour? I definitely think it looks nicer.

    Again, open all in tabs to notice the difference. Night mode is good too 🙂
     
  11. Like
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    I'm still working through it, but I would imagine there are an infinite variety.  Certainly, looking at film emulations, some are quite different to others in what they do to the vector scope and waveform.
    What do you mean by this?
    OK, one last attempt.
    Here is a LUT stress test image from truecolour.  It shows smooth gradations across the full colour space and is useful for seeing if there are any artefacts likely to be caused by a LUT or grade.
    This is it taken into Resolve and exported out without any effects applied.

    This is the LUT image with my plugin set to 1-bit.  This should create only white, red, green, blue, yellow, magenta, cyan, and black.

    This is the LUT image with my plugin set to 2-bits.  This will create more variation.

    The thing to look for here is that all the gradual transitions have been replaced by flat areas that transition instantly to another flat area of the next adjacent colour.  
    If you whip one of the above images into your software package, I would imagine you'd find that the compression might have created intermediary colours on the edges of the flat areas; but if my processing were creating intermediate colours, they would be visible as separate flat areas, and as you can see, there are none.
  12. Like
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    Ok, now I understand what you were saying.  When you said "a digital monitor 'adds' adjacent pixels" I thought you were talking about pixels in the image somehow being blended together, rather than just that monitors are arrays of R, G, and B lights.
    One of the up-shots of subtractive colour vs additive colour is that with subtractive colour you get a peak in saturation below the luminance level that saturation peaks at in an additive model.  To compensate for that, colourists and colour scientists and LUT creators often darken saturated colours.
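That darkening of saturated colours can be sketched in a few lines. This is a toy per-pixel model of my own (the `strength` value is arbitrary and illustrative), not any particular colourist's LUT:

```python
import colorsys

def darken_saturated(r, g, b, strength=0.2):
    """Mimic one hallmark of subtractive colour by lowering a
    pixel's lightness in proportion to its saturation.
    Toy model: 'strength' is an arbitrary illustrative value."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l *= 1.0 - strength * s          # more saturated -> darker
    return colorsys.hls_to_rgb(h, l, s)

# A neutral grey is untouched; a fully saturated red gets darker.
print(darken_saturated(0.5, 0.5, 0.5))   # (0.5, 0.5, 0.5)
print(darken_saturated(1.0, 0.0, 0.0))   # (0.8, 0.0, 0.0)
```

A real grade would of course shape this per hue and per luma range rather than with one global multiplier.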
    This is one of the things I said that I find almost everywhere I look.  There are other things too.
    I'm sure that if you look back you'll find I said that it might contribute to it, and not that it is the only factor.
    Cool. Let's test these.  The whole point of this thread is to go from "I think" to "I know".
    This can be arranged.
    Scopes make this kind of error all the time.  Curves and right angles never mix because when you're generating a line of best fit with a non-zero curve inertia or non-infinite frequency response then you will get ringing in your curve.  
    What this means is that if your input data is 0, 0, 0, X, 0, 0, 0 the curve will have non-zero data on either side of the spike.  This article talks about it in the context of image processing, but it applies any time you have a step-change in values.  https://en.wikipedia.org/wiki/Ringing_artifacts
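The effect is easy to reproduce: convolving a step with any kernel that has negative lobes (a windowed sinc below, chosen purely for illustration; the actual filter a scope uses is unknown) produces overshoot on both sides of the edge:

```python
import math

def sinc_kernel(half_width=8, cutoff=0.25):
    """Hamming-windowed sinc low-pass kernel. Its negative lobes
    are what produce ringing around step edges."""
    k = []
    for n in range(-half_width, half_width + 1):
        x = 2.0 * cutoff * n
        v = 2.0 * cutoff * (math.sin(math.pi * x) / (math.pi * x) if x else 1.0)
        v *= 0.54 + 0.46 * math.cos(math.pi * n / half_width)  # window
        k.append(v)
    total = sum(k)
    return [v / total for v in k]        # normalise to unity gain

def convolve(signal, kernel):
    """Naive convolution with edge clamping."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(signal) - 1)
            acc += signal[idx] * kv
        out.append(acc)
    return out

# A clean 0 -> 1 step: the filtered version overshoots above 1.0
# and undershoots below 0.0 near the edge, i.e. it rings.
step = [0.0] * 32 + [1.0] * 32
smoothed = convolve(step, sinc_kernel())
print(max(smoothed) > 1.0, min(smoothed) < 0.0)   # True True
```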
    There is nothing in my code that would allow for the creation of intermediary values, and I'm seeing visually the right behaviour at lower bit-depths when I look at the image (as shown previously with the 1-bit image quality), so at this point I'm happy to conclude that there are no values in between and that it's a scoping limitation, or is being created by the jpg compression process.
  13. Thanks
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    Perhaps these might provide some background to subtractive vs additive colour science.
    https://www.dvinfo.net/article/production/camgear/what-alexa-and-watercolors-have-in-common.html
    https://www.dvinfo.net/article/post/making-the-sony-f55-look-filmic-with-resolve-9.html
    Well, I would have, but I was at work.  I will post them now, and maybe we can all relax a little.
    No bit-crunch:

    4.5 bits:

    4.0 bits:

    In terms of your analysis vs mine, my screenshots are all taken prior to the image being compressed to 8-bit jpg, whereas yours was taken after it was compressed to 8-bit jpg.
    Note how much the banding is reduced on the jpg above vs how it looks uncompressed (both at 4.0 bits):

    Here's the 5-bit with the noise to show what it looks like before compression:

    and without the noise applied:

     
    Agreed about 'more than the sum of their parts' as it's more like a multiplication - even a 10% loss over many aspects multiplies quickly over many factors.
    Not a fan of ARRIRAW?  I've never really compared them, so wouldn't know.
    Indeed, and that's kind of the point.  I'm trying to work out what it is.
  14. Like
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    It does hark back to Deezid's point.  Lots more aspects to investigate here yet.
    Interesting about the saturation of shadows - my impression was that film desaturated both shadows and highlights compared to digital, but maybe when people desaturate digital shadows and highlights they're always done overzealously?
    We absolutely want our images to be better than reality - the image of the guy in the car doesn't look like reality at all!  One of the things that I see that makes an image 'cinematic' vs realistic is resolution and specifically the lack of it.  If you're shooting with a compressed codec then I think some kind of image softening in post is a good strategy.  I'm yet to systematically experiment with softening the image with blurs, but it's on my list.
    I'll let others comment on this in order to prevent groupthink, but with what I've recently learned about film which one is which is pretty obvious.
    When you say 'compression', what are you referring to specifically?  Bit-rate? bit-depth? codec? chroma sub-sampling?
    Have you noticed exceptions to your 10-bit 422 14-stops rule where something 'lesser' had unexpected thickness, or where things above that threshold didn't?  If so, do you have any ideas on what might have tipped the balance in those instances?
    Additive vs subtractive colours and mimicking subtractive colours with additive tools may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look.
    I did a colour test of the GH5 and BMMCC and I took shots of my face and a colour checker with both cameras, including every colour profile on the GH5.  I then took the rec709 image from the GH5 and graded it to match the BMMCC as well as every other colour profile from the GH5.
    In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour.
    I highly encourage everyone to take their camera, point it at a colourful scene lit with natural light and take a RAW still image and then a short video clip in their favourite colour profile, and then try to match the RAW still to the colour profile.  We talk about "just doing a conversion to rec709" or "applying the LUT" like it's nothing - it's actually applying a dozen or more finely crafted adjustments created by professional colour scientists.  I have learned an incredible amount by reverse-engineering these things.
    It makes sense that the scopes draw lines instead of points, that's also why the vector scope looks like triangles and not points.  One less mystery 🙂 
    I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during compression, not by adding the noise.
    I saw that.  His comments about preferring what we're used to were interesting too.
    Blind testing is a tool that has its uses, and we don't use it nearly enough.
  15. Like
    tupp reacted to Geoff CB in Image thickness / density - help me figure out what it is   
    When grading files in HDR, I can instantly tell when a file is of lower quality. Grading on a 8-bit timeline doesn't really show the difference, but on a 10 or 12-bit HDR timeline on a HDR panel it is night and day. 
    So for me, a "thick" image is 10-bit 4:2:2 or better, with at least 14+ stops of DR.
  16. Like
    tupp reacted to hyalinejim in Image thickness / density - help me figure out what it is   
    Let me ask a question!
    These are ColorChecker patches abstracted from -2, 0 and +2 exposures using film in one case and digital in the other (contrast has been matched). Which colour palette is nicer? Open each in a new tab and flick back and forth.
    ONE:

     
    or TWO:

  17. Like
    tupp reacted to hyalinejim in Image thickness / density - help me figure out what it is   
    This harks back to deezid's point:
    From my investigations film does seem to have much more saturated shadows than what a digital image offers. If you match the saturation of the midtones of digital to film, then the shadows will need a boost to also match... maybe by around 25-50% at the lowest parts. It's a shockingly huge saturation boost in the shadow areas (and the highlights would need to come down in saturation slightly). I'm not talking about log images here, I'm talking contrasty Rec709.
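That shadow-saturation boost can be sketched as a luma-dependent saturation gain. The curve below is my own rough guess at the shape being described (the numbers are illustrative, not hyalinejim's measured LUT):

```python
import colorsys

def film_like_saturation(r, g, b):
    """Luma-dependent saturation curve: a toy guess at the kind of
    transform described, not an actual film-matching LUT. Boost
    saturation by up to ~40% in the deepest shadows, taper to no
    change in the mids, and pull down slightly in the highlights."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if l < 0.5:
        gain = 1.0 + 0.4 * (0.5 - l) / 0.5   # up to +40% at black
    else:
        gain = 1.0 - 0.1 * (l - 0.5) / 0.5   # up to -10% at white
    return colorsys.hls_to_rgb(h, l, min(s * gain, 1.0))

# A dark blue gains saturation; a bright pink loses a little.
print(film_like_saturation(0.15, 0.15, 0.30))
print(film_like_saturation(0.80, 0.60, 0.60))
```

A real match would also need the hue shifts mentioned above; this only handles the saturation half of the story.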
    The digital capture is probably closer to being an accurate representation of the level of saturation in reality. But film is transformative. We want our images to look better than reality!
    If we talk about memory colours (sky, foliage and skin) the preferences of photographers and middle American shoppers led to altered hue and saturation in Kodak film stocks. So it looks like we prefer skies that are more cyan than in reality, foliage that is cooler and skin that is more uniform, and tending towards tan (Fuji skin tends towards rosy pink).
    With 10bit I can get decent, filmic colour out of V-Log! But 8 bit would fall apart.
  18. Like
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    It might be, that's interesting.  I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post.
    Agreed.  In my bit-depth reductions I added grain to introduce noise to get the effects of dithering:
    "Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD."
    Thickness of an image might have something to do with film grain, but that's not what I was testing (or trying to test anyway).
    Agreed.  That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness as maybe that will help with thickness?
    Obviously it's possible that I made a mistake, but I don't think so.
    Here's the code:
    Pretty straightforward.
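The plugin code itself isn't reproduced above, so here is a minimal guess at the kind of per-channel bit-crunch being described (simple quantisation, with noise optionally added before quantisation to act as dither). It is a sketch, not kye's actual code:

```python
import random

def crunch(value, bits, dither=False):
    """Quantise a 0-1 channel value to 2**bits levels (bits may be
    fractional, e.g. 4.5). With dither=True, noise is added *before*
    quantisation, randomising the rounding and breaking up banding."""
    levels = 2 ** bits - 1
    if dither:
        value += random.uniform(-0.5, 0.5) / levels
    value = min(max(value, 0.0), 1.0)
    return round(value * levels) / levels

# At 1 bit each channel is 0 or 1, so only the 8 primaries survive.
print(crunch(0.7, 1))   # 1.0
print(crunch(0.4, 1))   # 0.0
```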
    Also, if I set it to 2.5bits, then this is what I get:

    which looks pretty much what you'd expect.
    I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data.  If I set it to 1 bit then the image looks like it's not providing any values between the standard ones you'd expect (black, red, green, blue, yellow, cyan, magenta, white).

    Happy to hear if you spot a bug.
    Also, maybe the image gets given new values when it's compressed?  Actually, that sounds like it's quite possible..  hmm.
    I wasn't suggesting that a 4.5bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth.
    Indeed there is.  and I'd expect there to be!  I mean, I bought a GH5 based partly on the internal 10-bit!  I'm not regretting my decision, but I'm thinking that it's less important than I used to think it was, especially without using a log profile like I also used to do.
    Essentially the test was to go way too far (4.5bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do.  
    If we start with the assumption that cheap cameras create images that are thin because of their 8-bit codecs, then by that logic a 5-bit image should be razor thin and completely objectionable, but it wasn't, so it's unlikely that the 8-bit property is the one robbing the cheap cameras of their images' thickness.
  19. Like
    tupp got a reaction from Geoff CB in Image thickness / density - help me figure out what it is   
    I think that the "thickness" comes primarily from emulsion's color depth and partially from the highlight compression that you mentioned in another thread, from the forgiving latitude of negative film and from film's texture (grain).
     
    Keep in mind that overlaying a "grain" screen on a digital image is not the same as the grain that is integral to forming an image on film emulsion.  Grain actually provides the detail and contrast and much of the color depth of a film image.
     
     
    Home movies shot on Super8 film often have "thick" looking images, if they haven't faded.
     
    You didn't create a 5-bit image nor a "4.5-bit" image, nor did you confine all of the shades to the ~22.6 levels per channel ("4.5 bits") out of the 256 levels in the final 8-bit image.
    Here are scopes of both the "4.5-bit" image and the 8-bit image:

     

    If you had actually mapped the ~22.6 tones-per-channel of a "4.5-bit" image onto the 256 tones-per-channel of the 8-bit image, then all of the image's pixels would appear inside about 23 vertical lines on the RGB histograms (with the other ~233 histogram lines showing zero pixels).
     
    So, even though the histogram of the "4.5-bit" image shows spikes (compared to that of the 8-bit image), the vast majority of the "4.5-bit" image's pixels fall in between the 22.6 tones that would be inherent in an actual "4.5-bit" image.
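tupp's point can be checked directly: quantising every possible 8-bit shade to "4.5 bits" leaves only about 23 distinct values per channel, so only ~23 of the 256 histogram bins can be occupied. A sketch of my own (not the actual test images):

```python
def quantise(v, bits):
    """Snap a 0-1 value onto a grid of 2**bits levels."""
    levels = 2 ** bits - 1            # ~21.6 steps for bits = 4.5
    return round(v * levels) / levels

# Running all 256 possible 8-bit shades through a "4.5-bit"
# quantiser leaves only ~23 distinct values -- the comb of
# isolated vertical lines a true 4.5-bit histogram would show.
shades = {quantise(i / 255, 4.5) for i in range(256)}
print(len(shades))   # 23
```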
     
    To do this comparison properly, one should probably shoot an actual "4.5-bit" image, process it in a "4.5-bit" pipeline and display it on a "4.5-bit" monitor.
     
    By the way, there is a perceptible difference between the 8-bit image and the "4.5-bit" image.
  20. Like
    tupp reacted to kye in Image thickness / density - help me figure out what it is   
    The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'.
    We know that high-end cinema cameras typically have really thick looking images, and that cheap cameras typically do not (although there are exceptions).  Therefore this quality of thickness is related to something that differs between these two scenarios.
    Images from cheap cameras typically have a range of attributes in common, such as 8-bit, 420, highly compressed, cheaper lenses, less attention paid to lighting, and a range of other things.  However, despite all these limitations, the images from these cameras are very good in some senses.  A 4K file from a smartphone has a heap of resolution, reasonable colour science, etc, so it's not like we're comparing cinema cameras with a potato.
    This means that the concept of image thickness must be fragile.  Otherwise consumer cameras would capture it just fine.
    If something is fragile, and is only just on the edges of being captured, then if we take a thick image and degrade it in the right ways, then the thickness should evaporate with the slightest degradation.
    The fact that I can take an image and output it at 8 bits and at 5 bits without a night-and-day difference means I must assume one of three things:
    1) the image wasn't thick to begin with
    2) it is thick at both 8 bits and 5 bits, and therefore bit-depth doesn't matter that much
    3) it is thick at 8 bits but not at 5 bits, and people just didn't notice, in a thread specifically about this
    I very much doubt that it's #3, because I've had PMs from folks who I trust saying it didn't look much different.
    Maybe it's #1, but I also doubt that, because we're routinely judging the thickness of images via stills from YT or Vimeo, which are likely to be 8-bit, 420, and highly compressed.  The images of the guy in the car that look great are 8-bit.  I don't know where they came from, but if they're screen grabs from a streaming service then they'll be pretty poor quality too.  Yet they still look great.
    I'm starting to think that maybe image thickness is related to the distribution of tones within a HSL cube, and some areas being nicer than others, or there being synergies between various areas and not others.
  21. Like
    tupp got a reaction from kye in What's today's digital version of the Éclair NRP 16mm Film Camera?   
    Most of us who shot film were working with a capture range of 7 1/2 to 8 stops, and that range was for normal stock with normal processing.
     
    If one "pulls" the processing (underdevelops) and overexposes the film, a significantly wider range of tones can be captured.  Overexposing and under-developing also reduces grain and decreases color saturation.  This practice was more common in still photography than in filmmaking, because a lot of light was already needed just to properly expose normal film stocks.
     
    Conversely, if one "pushes" the processing (overdevelops) while underexposing, a narrower range of tones is captured, and the image has more contrast.  Underexposing and overdeveloping also increases grain and boosts color saturation.
     
    Because of film's "highlight compression curve" that you mentioned, one can expose for the shadows and midtones, and use less of the film's 7 1/2 - 8 stop  capture range for rendering highlights.
     
    In contrast, one usually exposes for the highlights and bright tones with digital, dedicating more stops just to keep the highlights from clipping and looking crappy.
     
     
    I don't think @kye was referring to reversal film.
     
      
    The BMPCC was already mentioned in this thread.
     
     
    No.  Because of its tiny size, the BMPCC is more ergonomically versatile than an NPR.
     
    For instance, the BMPCC can be rigged to have the same weight and balance as an NPR, plus it can also have a shoulder pad and a monitor -- two things that the NPR didn't/couldn't have.
     
    In addition, the BMPCC can be rigged on a gimbal, or with two outboard side handles, or with the aforementioned "Cowboy Studio" shoulder mount.  It can also be rigged on a car dashboard.  The NPR cannot be rigged in any of these ways.
     
      
    To exactly which cameras do you refer?  I have shot with many of them, including the NPR, and none of the cameras mentioned in this thread are as you describe.
  22. Like
    tupp reacted to kye in What's today's digital version of the Éclair NRP 16mm Film Camera?   
    Which do you agree with - that film has poor DR or that Canon DSLRs have?
    I suspect you're talking about film, and this is something I learned about quite recently.  In Colour and Mastering for Digital Cinema by Glenn Kennel he shows density graphs for both negative and print films.  
    The negative film graphs show the 2% black, 18% grey and 90% white points all along the linear segment of the graph, with huge amounts of leeway above the 90% white.  He says "The latitude of a typical motion picture negative film is 3.0 log exposure, or a scene contrast of 1000 to 1.  This corresponds to approximately 10 camera stops".  The highlights extend into a very graceful highlight compression curve.
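    As a quick sanity check on those numbers (my own arithmetic, not from the book): a latitude of 3.0 in log10 exposure is a 10^3 = 1000:1 contrast ratio, and since each camera stop is one doubling of light, converting to base 2 gives just under 10 stops:

```python
import math

log10_latitude = 3.0                    # Kennel's figure: 3.0 log exposure
contrast_ratio = 10 ** log10_latitude   # = 1000:1 scene contrast
stops = math.log2(contrast_ratio)       # one stop = one doubling of light

print(contrast_ratio, round(stops, 2))  # 1000.0 9.97
```

    So "approximately 10 camera stops" checks out.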
    The print-through curve is a different story, with the 2% black, 18% grey and 90% white points stretching across almost the entire DR of the film.  In contrast to the negative film where the range from 2-90% takes up perhaps half of the mostly-linear section of the graph, in the print-through curve the 2% sits very close to clipping, the region between 18% and 90% encompasses the whole shoulder, and the 90% is very close to the other flat point on the curve.
    My understanding is that the huge range of leeway in the negative is what people refer to as "latitude" and this is where the reputation film has of having a large DR comes from, because that is true.  However, if you're talking about mimicking film then it was a very short period in history where you might shoot on film but process digitally, so you should also take into account the print film positive that would have been used to turn the negative into something you could actually watch.
    Glenn goes on to discuss techniques for expanding the DR of the print-through by over-exposing the negative and then printing it differently, which does extend the range in the shadows below the 2% quite significantly.
    I tried to find some curves online to replicate what is in the book but couldn't find any.  I'd really recommend the book if you're curious to learn more.  I learned more in reading the first few chapters than I have in reading free articles on and off for years now.
  23. Like
    tupp reacted to BenEricson in What's today's digital version of the Éclair NRP 16mm Film Camera?   
    Fair enough. I didn't realize you were talking about reversal film or film prints. Reversal was extremely popular at that time. The Dark Knight was shot on negative but finished with a traditional film-print workflow, and saying it has poor dynamic range comparable to ML is a tough sell. The roll-off on highlights is just so damn pretty!
    From my experience with color negative film, the amount of range and especially highlight detail far exceeds what you would ever need to pull out in post. An exposure needs to be set in the grade, but like you said, you have a ton of leeway. It is quite remarkable what you can pull from even a 16mm negative film.
    I am definitely interested in that book. It was written in 2006 and the scanning technology we have now didn't even exist back then, but I am more interested in that classical style of shooting and finishing contrast than in DI-style grading. Thanks for the recommendation. 
  24. Like
    tupp reacted to BenEricson in What's today's digital version of the Éclair NRP 16mm Film Camera?   
    Yeah, the OG pocket cam is S16 and also has a similar highlight sensitivity. Ergonomically, though, any of the cameras mentioned are beyond terrible, like Fisher-Price toys compared to the Eclair. Nobody ever even bothered to create a modern zoom lens that could compete with the old S16 ones. Maybe the Canon 18-80 on a C300?
  25. Like
    tupp got a reaction from BenEricson in What's today's digital version of the Éclair NRP 16mm Film Camera?   
    I have watched some things captured on 16mm film, and I have shot one or two projects with the Eclair NPR.  Additionally, I own an EOSM.
     
     
    The reasons why the EOSM is comparable to the NPR is because:
      - some of the ML crop modes for the EOSM allow the use of 16mm and S16 optics;
      - the ML crop modes enable raw recording at a higher resolution and higher bit-depth.
     
    By the way, there have been a few relevant threads on the EOSM.  Here is a thread based on an EOSHD article about shooting 5k raw on the EOSM using one of the 16mm crop modes.
     
    Here is a thread that suggests the EOSM can make a good Super-8 camera.
     
    Certainly, there are other digital cameras with 16mm and S16 crops, and most of them have been mentioned in this thread.  The Digital Bolex and the Ikonoskop A-cam dII are probably the closest digital cameras to a 16mm film camera, because not only do they shoot S16 raw/uncompressed at a higher bit depth, but they both utilize a CCD sensor.
     
     
    Of course, one can rig any mirrorless camera set back and balanced on a weighted shoulder rig, in the same way as you show in the photo of your C100.  You could even add a padded "head-pressing plate!"
     
     
    Just build a balanced shoulder rig and keep it built during the entire shoot.
     
     
    I've always wanted to try one of those!
