
Posts posted by tupp

  1. On 5/19/2020 at 5:39 AM, AdrParkinson said:

    How would you say it compares with the old Cinestyle profile for Canon? I always found that while it made grading easier, the bitrate just wasn't there to support it and so there were too many artifacts.

    On 5/19/2020 at 11:56 AM, tupp said:

    When I get a chance, I will try to snip out a few seconds from one of the files for download

    Better late than never...


    Here is a three-second clip from a test shoot of the E-M10 III, with the camera's Highlight and Shadow control set to: -1 highlights, 0 mids, and +1 shadows.


    Here is a coarsely graded video using the rest of the test footage from that session.

  2. 6 hours ago, seanzzxx said:

    eh I think I'll just wait for Tupp to show up and say that it cannot have a dense image as long as it wasn't shot on Kodachrome(tm), seeing how that has been this thread for the last month or so.



    Well, I think that some of the images that I linked were actually shot on Ektachrome and Kodacolor, but I mostly gave Kodachrome examples, as it is the "extreme" of the color emulsions.


    Also, I said early in the thread that color depth was probably the key variable for digital "thickness,"  so I agree that "dense" digital images can be (and have been) achieved with digital cameras.



    On 11/21/2020 at 5:34 PM, kye said:

    Thoughts on how thin / thick these two trailers are?

    Both trailers look good, but the Red footage looks thin/brittle compared to the Alexa footage.  Of course, much of these looks could result from the grade.

  3. If the goal is to convert batches of images from raw to jpeg, there are many free, open-source apps that do so.


    Darktable, RawTherapee and Photivo are powerful raw image developing applications.  Digikam is a less powerful photo management program, but it can do batch conversions.


    ImageMagick batch converts raw to jpeg.  It has a GUI, but most use it on the command line for speed.  If you are not trying to do anything fancy in your batch conversions, a simple ImageMagick terminal command is probably the quickest and easiest way to go.
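    For example, a one-liner like `mogrify -format jpg *.ORF` converts every Olympus raw in the current directory.  If a scripted approach is preferred, here is a rough Python sketch of the same batch job (rawpy and imageio are assumed third-party packages, and the .ORF pattern is just an example):

```python
from pathlib import Path

def jpeg_name(raw_path):
    """Map a raw filename (e.g. IMG_0001.ORF) to its jpeg output name."""
    return Path(raw_path).with_suffix(".jpg")

def batch_convert(folder, pattern="*.ORF"):
    """Develop every matching raw file in `folder` into a jpeg."""
    # rawpy demosaics the raw file; imageio writes the jpeg.
    import rawpy
    import imageio
    for raw in sorted(Path(folder).glob(pattern)):
        with rawpy.imread(str(raw)) as r:
            rgb = r.postprocess()  # default demosaic/white-balance settings
        imageio.imwrite(str(jpeg_name(raw)), rgb)
```

    Darktable and RawTherapee also ship their own command-line batch tools (darktable-cli and rawtherapee-cli), if more control over the development settings is needed.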




  4. On 10/23/2020 at 6:07 PM, KnightsFan said:

    I think that simply adding water, thereby increasing specularity, contrast, and color saturation makes a drastic increase in thickness.

    It doesn't look "thicker" to me -- just wetter, causing different colors and, in this case, more contrast.  If you shot either scene (dry or wet) on Kodachrome, it would look significantly thicker.


    Also, the difference in the pattern of dappled light skews the comparison.  One can't help but wonder how this test would appear with an overcast sky.



    On 10/24/2020 at 2:23 AM, hyalinejim said:

    You can see that effect clearly here. This is Fuji 400H exposed at box speed:


    And this is the same chart exposed at -2 but scanned to bring up the midtones.

    That's some exceedingly coarse grain in the images of those charts.  Huge grain like that significantly reduces color depth, which affects the look of the charts and throws off the comparison a little.



    On 10/24/2020 at 2:23 AM, hyalinejim said:

    Note how the shadows are lifted, because the shadow areas of the chart are now very close to the base fog of the emulsion, and are hardly registering at all

    That's one way to describe it.  Another way to put it is that, due to the initial underexposure, more of the values from dark tones to mid-tones are compressed together at the bottom end (along with the base fog).  So, when the exposure is boosted to restore the mid-tones to their normal value, the dark tones become brighter than usual, because they remain compressed close to the mid-tones.



    On 10/24/2020 at 2:23 AM, hyalinejim said:

    Yes, it's a less saturated image than the correctly exposed one. But if you took a digital shot of the same chart at the same exposure level, applied a curve to match the contrast and altered saturation so that the midtones match.... I think you'd still see the same pattern of more saturation in the shadows for film, and less in the highlights.

    I am not sure what this exposure comparison adds to the idea that the brighter tones in film emulsions generally are less saturated (with the darker tones being relatively more saturated, by default).



    On 10/24/2020 at 2:23 AM, hyalinejim said:

    Digital images look thin because of the way they (probably accurately) capture saturation from shadows to midtones to highlights.

    I agree partially --  I think that there are other variables involved in how film renders color.  For instance, film generally has more color depth than digital.



    On 10/24/2020 at 10:57 AM, KnightsFan said:

    I don't think you'd get a significantly thicker image out of any two decent digital/film cameras given the same scene and sensible settings.

    I disagree.  A lot depends on the emulsion.  I think that one would see a dramatic difference comparing Kodachrome to digital.  Typical print film would yield less of a difference.

  5. On 10/19/2020 at 6:38 AM, kye said:

    One is the ability to render subtle variations in tone, and yet, we're looking at all these test images in 8-bit, and some in less than 8-bit, yet this doesn't seem to be a limiting factor.

    Although we disagree on the "less than 8-bit" images, I have been waiting for someone to mention that we are viewing film emulsion images through 8-bit files.


    To the eye, the color depth of Kodachrome is considerably greater than what is shown in these 8-bit images.  Kodachrome was one of the rare film processes in which the dyes were added to the emulsion during processing, which gave it such deep colors (and also made it more archival).  Some of that splendor is captured in these 8-bit scans, so, theoretically, there should be a way to duplicate those captured colors shooting digitally and outputting to 8-bit.




    On 10/19/2020 at 6:38 AM, kye said:

    If you were looking at this scene in real life, these people wouldn't have so much variation in luminance and saturation in their skintones - that baby would have to have been sunbathing for hours but only on the sides of his face and not on the front.

    You probably would see the variation in the skin tones if you were there, but, to one's eyes, such variations don't seem so dramatic.  Furthermore, Kodachrome usually looked snappier than other reversal films (when normally processed), but a Kodachrome slide viewed directly doesn't look as contrasty as the 8-bit scans we see in this thread.

    Of course, the baby's face (and the parents' faces) is brighter on the front, because of the lighting angle.  If the baby had been sunbathing for hours, then the father is crispier than George Hamilton.

  6. On 10/16/2020 at 1:09 PM, KnightsFan said:

    @tupp Maybe we're disagreeing on what thickness is, but I'd say about 50% of the ones you linked to are what I think of as thick.

    To me, the "thickness" of a film image is revealed by rich, complex color.  That color is not necessarily saturated nor dark.


    That "thickness" of film emulsion has nothing to do with lighting nor with what is showing in the image.  Certainly, for the thickness to be revealed, there has to be some object in the frame that reflects a complex color.  An image of a white wall will not fully utilize the color depth of an imaging system.  However, a small, single color swatch within a mostly neutral image can certainly demonstrate "thickness."


    On 10/16/2020 at 11:55 PM, kye said:

    Long story short, film desaturates both the highlights and the shadows because on negative film the shadows are the highlights!  (pretty sure that's the right-way around..)

    I don't think that's how it works.  Of course, there is also reversal film.



    On 10/16/2020 at 11:55 PM, kye said:

    Yes, if the skin tones are from a bad codec and there is very little hue variation (ie, plastic skin tones) then that's not something that can be recovered from,...

    Agreed.  Digital tends to make skin tones mushy (plastic?) compared to film.

    Look at the complex skin tones in some of these Kodachrome images.  There is a lot going on in those skin tones that would be lost with most digital cameras.  In addition, observe the richness and complexity of the colors on the inanimate objects.



    On 10/16/2020 at 11:55 PM, kye said:

    but thickness is present in brighter lit images too isn't it?

    Yes.  Please note that most of the images in the above linked gallery are brightly lit and/or shot during broad daylight.



    On 10/16/2020 at 11:55 PM, kye said:

    Interesting images, and despite the age and lack of resolution and DR, there is definitely some thickness to them.

    Agreed.  I think that the quality that you seek is inherent in film emulsion, and that quality exists regardless of lighting and regardless of the overall brightness/darkness of an image.



    On 10/16/2020 at 11:55 PM, kye said:

    I wonder if maybe there is a contrast and saturation softness to them, not in the sense of them being low contrast or low saturation, but more that there is a softness to transitions of luma and chroma within the image?

    Because of the extensive color depth and the distinctive color rendering of normal film emulsion, variations in tone are often more apparent with film.  Not sure if that should be considered to be more of a gradual transition in chroma/luma or to be just higher "color resolution."



    On 10/16/2020 at 11:55 PM, kye said:

    I've been messing with some image processing and made some test images.  Curious to hear if these appear thick or not.

    Those images are nice, but they seem thinner than the Kodachrome images in the linked gallery above.



    On 10/17/2020 at 8:41 AM, KnightsFan said:

    Here's a frame from The Grandmaster which I think hits peak thickness. Dark, rich colors, a couple highlights, real depth with several layers and a nice falloff of focus that makes things a little more dreamy rather than out of focus.

    The image is nicely crafted, but I read that it was shot on Fuji Eterna stock.  Nevertheless, to me its colors look "thinner" than those shown in this Kodachrome gallery.



    On 10/17/2020 at 9:49 AM, BTM_Pix said:

    In the same vein, this might be a useful resource for you.  https://film-grab.com

    Great site!  Thanks for the link!



    On 10/17/2020 at 10:10 AM, KnightsFan said:

    That's why I say it's mainly about what's in the scene.

    I disagree.  I think that the "thickness" of film is inherent in how emulsion renders color.



    On 10/17/2020 at 10:10 AM, KnightsFan said:

    The soft diffuse light coming from the side in the Grandmaster really allows every texture to have a smooth gradation from light to dark, whereas your subject in the boat is much more evenly lit from left to right.

    The cross-lighting in that "Grandmaster" image seems hard and contrasty to me (which can reveal texture more readily than a softer source).  I don't see many smooth gradations/chiaroscuro.



    17 hours ago, mat33 said:

    I think the light and amount of contrast of the scene makes a huge difference to the image thickness.

    Evidently, OP seeks the "thickness" that is inherent in film emulsion, regardless of lighting and contrast.



    17 hours ago, mat33 said:

    Here is a screen shot from the D16 (not mine) which while compressed to heck look 'thick' and alive to me.

    Nice shots!

    Images from CCD cameras such as the Digital Bolex generally seem to have "thicker" color than their CMOS counterparts.

    However, even CCD cameras don't seem to have the same level of thickness as many film emulsions.



    5 hours ago, KnightsFan said:

    I just watched the 12k sample footage from the other thread and I think that it displays thick colors despite being an ultra sharp digital capture.

    That certainly is a pretty image.

    Keep in mind that higher resolution begets more color depth in an image.  Furthermore, if your image was shot with the Blackmagic URSA Mini 12K, that sensor is supposedly RGBW (with perhaps a little too much "W"), which probably yields nicer colors.
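    That resolution-for-color-depth trade is easy to demonstrate with a toy numpy sketch (my own illustration, nothing specific to the 12K sensor): downscaling a 1-bit checkerboard by averaging produces a tone that exists nowhere in the original.

```python
import numpy as np

# a 1-bit image: an 8x8 checkerboard containing only the values 0 and 255
board = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0

# downscale by averaging each 2x2 block of pixels
small = board.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# every output pixel is now the intermediate tone 127.5:
# spatial resolution has been traded for tonal (color) depth
```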

  7. 3 hours ago, KnightsFan said:

    Got some examples? Because I generally don't see those typical home videos as having thick images

    Of course, a lot of home movies weren't properly exposed and showed scenes with huge contrast ranges that the emulsion couldn't handle.  However, I did find some examples that have decent exposure and aren't too faded.


    Here's one from the 1940's showing a fairly deep blue, red and yellow, and then showing a rich color on a car.


    Thick greens here, and later a brief moment showing solid reds, and some rich cyan and indigo.  Unfortunately, someone added a fake gate with a big hair.


    There's a lot of contrast in these shots, but the substantial warm greens and the warm skin and wood tones shine.  Also, one of the later shots, with a better "white balance," shows a nice, complex blue on the eldest child's clothes.


    Here is a musical gallery of Kodachrome stills.  Much less fading here.  I'd like to see these colors duplicated in digital imaging.  Please note that Paul Simon's "Kodachrome" lyrics don't exactly refer to the emulsion!


    OP's original question concerns getting a certain color richness that is inherent in most film stocks but absent from most digital systems.  It doesn't involve lighting, per se, although there has to be enough light to get a good exposure, and there can't be too much contrast in the scene.



    4 hours ago, KnightsFan said:

    They're pretty close, I don't really care if there's dithering or compression adding in-between values. You can clearly see the banding, and my point is that while banding is ugly, it isn't the primary factor in thickness.

    We have no idea if OP's simulated images are close to how they should actually appear, because 80%-90% of the pixels in those images fall outside of the values dictated by the simulated bit depth.  No conclusions can be drawn from those images.


    By the way, I agree that banding is not the most crucial consideration here -- banding is just a posterization artifact to which lower bit depths are more susceptible.  I maintain that color depth is the primary element of the film "thickness" in question.

  8. 14 hours ago, hyalinejim said:

    Don't forget about shadow saturation! It often gets ignored in talk about highlight rolloff.  The Art Adams articles kye posted above are very interesting but he's only concerned with highlight saturation behaviour.

    Well, when I listed the film "thickness" property of "lower saturation in the brighter areas," naturally, that means that the lower values have more saturation.


    I think that one of those linked articles mentioned the tendency that film emulsions generally have more saturation at and below middle values.


    Thanks for posting the comparisons!



    9 hours ago, KnightsFan said:

    I think that image "thickness" is 90% what is in frame and how it's lit.

    Then what explains the strong "thickness" of terribly framed and badly lit home movies that were shot on Kodachrome 64?




    9 hours ago, KnightsFan said:

    - Bit depth comes into play, if only slightly. The images @kye posted basically had no difference in the highlights, but in the dark areas banding is very apparent.

    Unfortunately, @kye's images are significantly flawed, and they do not actually simulate the claimed bit-depths.  No conclusions can be made from them.


    By the way, bit depth is not color depth.

  9. 19 hours ago, kye said:

    OK, one last attempt.

    Here is a LUT stress test image from truecolour.  It shows smooth graduations across the full colour space and is useful for seeing if there are any artefacts likely to be caused by a LUT or grade.

    This is it taken into Resolve and exported out without any effects applied.

    This is the LUT image with my plugin set to 1-bit.  This should create only white, red, green, blue, yellow, magenta, cyan, and black.

    This is the LUT image with my plugin set to 2-bits.  This will create more variation.

    Thank you for posting these Truecolor tests, but these images are not relevant to the fact that the "4.5-bit" image that you posted earlier is flawed and is in no way conclusive proof that "4.5-bit" images can closely approximate 8-bit images.


    On the other hand, your 2-bit Truecolor test indicates that there is a problem in your rounding code and/or your imaging pipeline.


    2-bit RGB can produce 64 colors, including black, white and two evenly spaced neutral grays.  There seem to be significantly fewer than 64 colors.   Furthermore, some of the adjacent color patches blend with each other in a somewhat irregular way, instead of forming the orderly, clearly defined and well separated pattern of colors that a 2-bit RGB system should produce with that test chart.  In addition, there is only one neutral gray shade rendered, when there should be two different grays.


    Looking at the histogram of the 2-bit Truecolor image shows three "spikes" when there should be four with a 2-bit image:


    Your 2-bit simulation is actually a 1.5-bit simulation (with other problems).  So, your "rounding" code could have a bug.
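    For reference, the full 2-bit RGB palette that the test chart should resolve to can be enumerated in a few lines (a generic sketch, independent of any particular NLE):

```python
import itertools

# 2 bits per channel -> 4 evenly spaced levels inside an 8-bit container
levels = [round(i * 255 / 3) for i in range(4)]   # [0, 85, 170, 255]

# every combination of the 4 levels across R, G and B
palette = list(itertools.product(levels, repeat=3))   # 4**3 = 64 colors

# the neutral axis: black, gray 85, gray 170, and white
neutrals = [c for c in palette if c[0] == c[1] == c[2]]
```

    A correct 2-bit histogram would therefore show four spikes per channel, at 0, 85, 170 and 255.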



    20 hours ago, kye said:

    If you whip one of the above images into your software package I would imagine that you'd find the compression might have created intermediary colours on the edges of the flat areas, but if my processing was creating intermediate colours then they would be visible as separate flat areas, but as you can see, there are none.

    Well, something is going wrong, and I am not sure if it's compression.  PNG compression is lossless, so it might be good to post PNGs from now on, to eliminate that variable.


    Another thing that would help is if you would stick to the bit depths in question -- 8-bit and "4.5-bit."  All of this bouncing around to various bit depths just further complicates the comparisons.


  10. 1 hour ago, kye said:

    One of the up-shots of subtractive colour vs additive colour is that with subtractive colour you get a peak in saturation below the luminance level that saturation peaks at in an additive model.

    Not all additive color mixing works the same.  Likewise, not all subtractive color mixing works the same.


    However, you might be correct generally in regards to film vs. digital.



    1 hour ago, kye said:

    This can be arranged.

    One has to allow for the boosted levels in each emulsion layer that counter the subtractive effects.



    1 hour ago, kye said:

    Scopes make this kind of error all the time.

    I don't think the scopes are mistaken, but your single-trace histogram makes it difficult to discern what exactly is happening (although close examination of your histogram reveals a lot of pixels where they shouldn't be).  It's best to use a histogram with a column for every value increment.



    1 hour ago, kye said:

    What this means is that if your input data is 0, 0, 0, X, 0, 0, 0 the curve will have non-zero data on either side of the spike.  This article talks about it in the context of image processing, but it applies any time you have a step-change in values.  https://en.wikipedia.org/wiki/Ringing_artifacts

    I estimate that around 80%-90% of the pixels fall in between the proper bit depth increments -- the problem is too big to be "ringing artifacts."



    1 hour ago, kye said:

    There is nothing in my code that would allow for the creation of intermediary values, and I'm seeing visually the right behaviour at lower bit-depths when I look at the image (as shown previously with the 1-bit image quality), so at this point I'm happy to conclude that there are no values in between and that its a scoping limitation, or is being created by the jpg compression process.

    There is a significant problem... some variable(s) that is uncontrolled, and the images do not simulate the reduced bit depths.  No conclusions can be drawn until the problem is fixed.

  11. 22 hours ago, tupp said:

    Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?

    16 hours ago, kye said:

    From the first linked article:



    What makes these monitors additive is the fact that those pure hues are blended back together to create the final colors that we see. Even though the base colors are created through a subtractive process, it’s their addition that counts in the end because that’s what reaches our eyes.

    Film is different in that there is no part of the process that is truly additive. The creation of the film negative, where dyes are deposited on a transparent substrate, is subtractive, and the projection process, where white light is projected through dyes, is also subtractive. (This section edited for clarity.)


    So, the first linked article echoed what I said (except I left out that the print itself is also "subtractive" when projected).


    Is that excerpt from the article (and what I said) what you mean when you refer to "additive" and "subtractive" color?



    Also from the first linked article:


    The difference between subtractive color and additive color is key to differentiating between the classic “film” and “video” looks.

    I'm not so sure about this.  I think that this notion could contribute to the film look, but a lot of other things go into that look, such as progressive scan, no rolling shutter, grain actually forming the image, color depth, compressed highlight roll-off (as you mentioned), lower saturation in the brighter tones (which I think is mentioned in the second article that you linked), etc.


    Of all of the elements that give the film "thickness," I would say that color depth, highlight compression, and the lower saturation in the brighter areas would be the most significant.


    It might be possible to suggest the subtractive nature of a film print merely by separating the color channels and introducing a touch of subtractive overlay on the two appropriate color channels.  A plug-in could be made that does this automatically.  However, I don't know if such effort would make a substantial difference.


    Thank you for posting the images without added grain/noise/dithering.   You only had to post the 8-bit image and the "4.5-bit" image.


    Unfortunately, most of the pixel values of the "4.5-bit" image still fall in between the 22.6 value increments prescribed by "4.5 bits."  So, something is wrong somewhere in your imaging pipeline.



    16 hours ago, kye said:

    In terms of your analysis vs mine, my screenshots are all taken prior to the image being compressed to 8-bit jpg, whereas yours was taken after it was compressed to 8-bit jpg.

    Your NLE's histogram is a single trace, rather than 256 separate columns.  Is there a histogram that shows those 256 columns instead of a single trace?  It's important, because your NLE histograms are showing 22 spikes with a substantial base that is difficult to discern with that single trace.


    Something might be going wrong during the "rounding" or at the "timeline" phase.

  12. 6 hours ago, kye said:

    Additive vs subtractive colours and mimicking subtractive colours with additive tools may well be relevant here, and I see some of the hallmarks of that mimicry almost everywhere I look.

    I am not sure what you mean.  Are you referring to the concept of color emulsion layers subtracting from each other during the printing stage while a digital monitor "adds" adjacent pixels?


    Keep in mind that there is nothing inherently "subtractive" with "subtractive colors."  Likewise, there is nothing inherently "additive" with "additive colors."



    6 hours ago, kye said:

    In EVERY instance I saw adjustments being made that (at least partially) mimicked subtractive colour.

    Please explain what you mean.



    6 hours ago, kye said:

    It makes sense that the scopes draw lines instead of points, that's also why the vector scope looks like triangles and not points.

    Yes, but the histograms are not drawing the expected lines for the "4.5-bit" image nor for the "5-bit" image.  Those images are full 8-bit images.


    On the other hand, the "2.5-bit" shows the histogram lines as expected.  Did you do something different when making the "2.5-bit" image?


    7 hours ago, kye said:

    I'm happy to re-post the images without the noise added, but you should know that I added the noise before the bit-depth reduction plugin, not after, so the 'dirtying' of the image happened during compression, not by adding the noise.

    If the culprit is compression, then why is the "2.5-bit" image showing the histogram lines as expected, while the "4.5-bit" and "5-bit" images do not show the histogram lines?


    Please just post the 8-bit image and the "4.5-bit" image without the noise/grain/dithering.



  13. 6 hours ago, kye said:

    I'm still working on the logic of subtractive vs additive colour and I'm not quite there enough to replicate it in post.

    If you are referring to "additive" and "subtractive" colors in the typical imaging sense, I don't think that it applies here.



    6 hours ago, kye said:

     In my bit-depth reductions I added grain to introduce noise to get the effects of dithering:

    "Dither is an intentionally applied form of noise used to randomize quantization error,

    There are many different types of dithering.  "Noise" dithering (or "random" dithering) is probably the worst type.  One would think that a grain overlay that yields dithering would be random, but I am not sure that is what your grain filter is actually doing.


    Regardless, introducing the variable of grain/dithering is unnecessary for the comparison, and, likely, it is what skewed the results.
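    For what it's worth, proper random dithering can be sketched in a few lines of numpy (my own illustration, unrelated to the grain filter in question): the noise is added before quantization, so it randomizes which step each pixel rounds to, yet the output still contains only the allowed quantized levels.

```python
import numpy as np

def quantize(x, levels):
    """Round values in [0, 1] to `levels` evenly spaced steps."""
    return np.round(x * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
ramp = np.linspace(0.0, 1.0, 1024)

hard = quantize(ramp, 4)                     # 4 flat bands: hard banding
noise = (rng.random(ramp.shape) - 0.5) / 3   # roughly +/- half a step
dithered = quantize(np.clip(ramp + noise, 0.0, 1.0), 4)
# `dithered` uses the same 4 levels, but the quantization error is
# randomized, so local averages track the original ramp more closely
```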



    6 hours ago, kye said:

    That's why I haven't been talking about resolution or sharpness, although maybe I should be talking about reducing resolution and sharpness as maybe that will help with thickness?

    Small film formats have a lot of resolution with normal stocks and normal processing.


    If you reduce the resolution, you reduce the color depth, so that is probably not wise to do.



    6 hours ago, kye said:

    Obviously it's possible that I made a mistake, but I don't think so.

    Here's the code:

    Too bad there's no mark-up/mark-down for <code> on this web forum.


    The noise/grain/dithering that was introduced is likely what caused the problem -- not the rounding code.  Also, I think that the images went through a YUV 4:2:0 pipeline at least once.


    I posted the histograms and waveforms that clearly show that the "4.5-bit" image is mostly an 8-bit image, but you can see for yourself.  Just take your "4.5-bit" image and put it in your NLE and look at the histogram.  Notice that there are spikes with bases that merge, instead of just vertical lines.  That means that a vast majority of the image's pixels fall in between the 22 "rounded 4.5-bit" increments.
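    The same check can be done numerically instead of by eyeballing a histogram.  A small sketch (my own; 23 levels approximates "4.5 bits"): measure what fraction of pixel values land exactly on the allowed increments.

```python
import numpy as np

def on_grid_fraction(img, levels=23):
    """Fraction of 8-bit pixel values that sit exactly on the `levels`
    evenly spaced increments a reduced-bit-depth image would allow."""
    allowed = np.round(np.arange(levels) * 255.0 / (levels - 1))
    return float(np.isin(img, allowed).mean())

# a genuinely quantized ramp scores 1.0; a dirty 8-bit image scores far less
clean = np.round(np.round(np.arange(256) / 255.0 * 22) / 22 * 255.0)
assert on_grid_fraction(clean) == 1.0
```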



    6 hours ago, kye said:

    Also, if I set it to 2.5bits, then this is what I get:


    which looks pretty much what you'd expect.

    Yes.  The histogram should show equally spaced vertical lines that represent the increments of the lower bit depth (2.5 bits) contained within a larger bit depth (8 bits).


    6 hours ago, kye said:

    I suspect the vertical lines in the parade are just an algorithmic artefact of quantised data.

    The vertical lines in the waveforms merely show the locations where each scan line trace goes abruptly up and down to delineate a pool of a single color.  More gradual and more varied scan line slopes appear with images of a higher bit depth that do not contain large pools of a single color.



    7 hours ago, kye said:

    Also, maybe the image gets given new values when it's compressed?  Actually, that sounds like it's quite possible..  hmm.

    I checked the histogram of "2.5-bit" image without the added noise/grain/dithering, and it shows the vertical histogram lines as expected.  So, the grain/dithering is the likely culprit.



    7 hours ago, kye said:

    I wasn't suggesting that a 4.5bit image pipeline would give that exact result, more that we could destroy bit-depth pretty severely and the image didn't fall apart, thus it's unlikely that thickness comes from the bit-depth.

    An unnecessary element (noise/grain/dithering) was added to the "4.5-bit" image that made it a dirty 8-bit image, so we can't really conclude anything from the comparison.  Post the "4.5-bit" image without grain/dithering, and we might get a good indication of how "4.5-bits" actually appears.



    7 hours ago, kye said:

    Essentially the test was to go way too far (4.5bits is ridiculous) and see if that had a disastrous effect, which it didn't seem to do.

    Using extremes to compare dramatically different outcomes is a good testing method, but you have to control your variables and not introduce any elements that skew the results.


    Please post the "4.5-bit" image without any added artificial elements.



  14. 8 hours ago, kye said:

    The question we're trying to work out here is what aspects of an image make up this subjective thing referred to by some as 'thickness'.

    I think that the "thickness" comes primarily from the emulsion's color depth, and partially from the highlight compression that you mentioned in another thread, from the forgiving latitude of negative film, and from film's texture (grain).


    Keep in mind that overlaying a "grain" screen on a digital image is not the same as the grain that is integral to forming an image on film emulsion.  Grain actually provides the detail, the contrast, and much of the color depth of a film image.



    8 hours ago, kye said:

    We know that high-end cinema cameras typically have really thick looking images, and that cheap cameras typically do not (although there are exceptions).

    Home movies shot on Super8 film often have "thick" looking images, if they haven't faded.


    8 hours ago, kye said:

    The fact I can take an image and output it at 8-bits and at 5-bits and for there not to be a night-and-day difference...

    You didn't create a 5-bit image nor a "4.5-bit" image, nor did you keep all of the shades on the ~22.6 increments ("4.5 bits") available within the 256 increments of the final 8-bit image.

    Here are scopes of both the "4.5-bit" image and the 8-bit image:




    If you had actually mapped the ~22.6 tones per channel of a "4.5-bit" image into the 256 tones per channel of the 8-bit image, then all of the image's pixels would appear inside about 23 vertical lines on the RGB histograms (with the remaining ~233 histogram columns showing zero pixels).


    So, even though the histogram of the "4.5-bit" image shows spikes (compared to that of the 8-bit image), the vast majority of the "4.5-bit" image's pixels fall in between the 22.6 tones that would be inherent in an actual "4.5-bit" image.


    To do this comparison properly, one should probably shoot an actual "4.5-bit" image, process it in a "4.5-bit" pipeline and display it on a "4.5-bit" monitor.
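    For comparison, a quantizer that actually pins every pixel to a fractional bit depth takes only a few lines of numpy (a sketch of the general technique, not a reconstruction of the plugin in question):

```python
import numpy as np

def reduce_bit_depth(img, bits):
    """Snap 8-bit channel values onto 2**bits evenly spaced levels.

    Fractional depths are allowed: bits=4.5 gives round(2**4.5) = 23
    levels, so every pixel of the result sits on one of 23 values
    per channel, and the histogram shows 23 isolated vertical lines."""
    steps = int(round(2 ** bits)) - 1
    return np.round(np.round(img / 255.0 * steps) / steps * 255.0)

gradient = np.arange(256, dtype=float).reshape(16, 16)
# a "4.5-bit" version of the gradient contains exactly 23 distinct values
assert len(np.unique(reduce_bit_depth(gradient, 4.5))) == 23
```

    Saved losslessly (e.g. as PNG), such an image would keep all of its pixels on those increments, which makes the histogram check straightforward.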


    By the way, there is a perceptible difference between the 8-bit image and the "4.5-bit" image.

  15. 15 hours ago, kye said:

    He says "The latitude of a typical motion picture negative film is 3.0 log exposure, or a scene contrast of 1000 to 1.  This corresponds to approximately 10 camera stops".  The highlights extend into a very graceful highlight compression curve.

    Most of us who shot film were working with a capture range of 7 1/2 to 8 stops, and that range was for normal stock with normal processing.


    If one "pulls" the processing (underdevelops) and overexposes the film, a significantly wider range of tones can be captured.  Overexposing and under-developing also reduces grain and decreases color saturation.  This practice was more common in still photography than in filmmaking, because a lot of light was already needed just to properly expose normal film stocks.


    Conversely, if one "pushes" the processing (overdevelops) while underexposing, a narrower range of tones is captured, and the image has more contrast.  Underexposing and overdeveloping also increases grain and boosts color saturation.


    Because of film's "highlight compression curve" that you mentioned, one can expose for the shadows and midtones, and use less of the film's 7 1/2 - 8 stop  capture range for rendering highlights.


    In contrast, one usually exposes for the highlights and bright tones with digital, dedicating more stops just to keep the highlights from clipping and looking crappy.
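    The stop arithmetic behind the quoted figures is easy to check. A quick Python sanity check (just the math, nothing camera-specific):

```python
import math

# A scene contrast of 1000:1 (3.0 log10 exposure) expressed in camera stops:
contrast_ratio = 1000
stops = math.log2(contrast_ratio)
print(round(stops, 2))   # 9.97 -- i.e. "approximately 10 camera stops"

# Conversely, a 7.5-8 stop negative capture range as a contrast ratio:
print(round(2 ** 7.5))   # 181 -- roughly 181:1
print(2 ** 8)            # 256 -- 256:1
```

    So the "3.0 log exposure" figure and the "approximately 10 stops" figure are the same claim stated two ways.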



    15 hours ago, kye said:

    However, if you're talking about mimicking film then it was a very short period in history where you might shoot on film but process digitally, so you should also take into account the print film positive that would have been used to turn the negative into something you could actually watch.

    6 hours ago, BenEricson said:

    Fair enough. I didn't realize you were talking about reversal film or film prints. Reversal was extremely popular at that time. The Dark Knight was shot negative but done with a traditional film print work flow and saying it has poor dynamic range and is comparable to ML is a tough sell.

    I don't think @kye was referring to reversal film.



    6 hours ago, BenEricson said:

    Yeah, the OG pocket cam is S16 and also has a similarity in the highlight sensitivity.

    The BMPCC was already mentioned in this thread.



    6 hours ago, BenEricson said:

    Ergonomically though, beyond terrible,

    No.  Because of its tiny size, the BMPCC is more ergonomically versatile than an NPR.


    For instance, the BMPCC can be rigged to have the same weight and balance as an NPR, plus it can also have a shoulder pad and a monitor -- two things that the NPR didn't/couldn't have.


    In addition, the BMPCC can be rigged on a gimbal, or with two outboard side handles, or with the aforementioned "Cowboy Studio" shoulder mount.  It can also be rigged on a car dashboard.  The NPR cannot be rigged in any of these ways.



    7 hours ago, BenEricson said:

    any of those cameras mentioned are like Fisher Price toys compared to the Eclair.

    To exactly which cameras do you refer?  I have shot with many of them, including the NPR, and none of the cameras mentioned in this thread are as you describe.

  16. 11 hours ago, BenEricson said:

    I'm still curious how someone could think a EOSM is comparable in any way...Have you guys actually watched something shot on 16mm? Bizarre comparison.

    I have watched some things captured on 16mm film, and I have shot one or two projects with the Eclair NPR.  Additionally, I own an EOSM.



    The reasons why the EOSM is comparable to the NPR is because:

    •   some of the ML crop modes for the EOSM allow the use of 16mm and S16 optics;
    •   the ML crop modes enable raw recording at a higher resolution and higher bit-depth.



    By the way, there have been a few relevant threads on the EOSM.  Here is a thread based on an EOSHD article about shooting 5k raw on the EOSM using one of the 16mm crop modes.


    Here is a thread that suggests the EOSM can make a good Super-8 camera.


    Certainly, there are other digital cameras with 16 and S16 crops, and most of them have been mentioned in this thread.  The Digital Bolex and the Ikonoskop A-cam dII are probably the closest digital cameras to a 16mm film camera, because not only do they shoot S16, raw/uncompressed with a higher bit depth, but they both utilize CCD sensors.



    8 hours ago, BrooklynDan said:

    When you have a properly shoulder-mounted camera, you can press it into your shoulder and into the side of your head, which creates far more stability.  [snip]  Try that with your mirrorless camera hovering in front of you.

    Of course, one can rig any mirrorless camera set back and balanced on a weighted shoulder rig, in the same way as you show in the photo of your C100.  You could even add a padded "head-pressing plate!"



    7 hours ago, HockeyFan12 said:

    But I prefer the Aaton/Arri 416 form factor or the Amira over the Red/EVA1/Alexa Mini form factor, too, and it's an issue with prosumer cameras that the form factor really makes no sense ergonomically. Too big for IBIS to make sense, too small to be shoulder-mounted comfortably.

    Just build a balanced shoulder rig and keep it built during the entire shoot.



    7 hours ago, HockeyFan12 said:

    I keep going back to the $30 cowboy studio stabilizer, which somehow distributes weight evenly even with a front-heavy camera by clamping around your back. For 2-4 pound prosumer cameras and cameras without IBIS, I've found it preferable to a shoulder rig.

    I've always wanted to try one of those!:


  17. 13 hours ago, tupp said:

    there is also that shoulder mount digital camera with the ergonomic thumb hold of which I can never remember the name.

    The name of this S16 digital camera is the Ikonoskop A-cam dII.


    Of course the BMPCC and the BMMCC would also be comparable to the NPR.



    13 hours ago, John Matthews said:

    Would I be wrong in saying there was a carefree nature of this camera, meaning you didn't have to think so much about setup, just find a moment and start shooting.

    Well, since the NPR is a film camera, of course one had to be way more deliberate and prepared compared to today's digital cameras.  If you had already loaded a film stock with the appropriate ISO and color temperature, and if you had already taken your light meter readings and set your aperture, then you could start manually focusing and shooting.  Like many 16mm and S16 cameras of its size, the NPR could not shoot more than 10 minutes before you had to change magazines.  One usually had to check the gate for hairs/debris before removing a magazine after a take.


    Processing, color timing and printing (or transferring) were a whole other ordeal, much more involved and complex (and more expensive) than color grading digital images.


    On the other hand, the NPR did enable more "freedom" relative to its predecessors.  The camera was "self-blimped" and could use a crystal-sync motor, so it was much more compact than other cameras that could be used when recording sound.


    Also, it used coaxial magazines instead of displacement magazines, so its center of gravity never changed, and with the magazines mounted in the rear of the camera body, it made for better balance on one's shoulder than previous cameras.  The magazines could also be changed instantly, with no threading through the camera body.


    In the video that you linked, that quick NPR turret switching trick was impressive, and it never occurred to me, as I was shooting narrative films with the NPR, mostly using a zoom on the turret's Cameflex mount.


    The NPR's film shutter/advancing movement was fairly solid for such a small camera, as well.



    14 hours ago, John Matthews said:

    Question: would you consider the modern-day version a camera with raw (big files) or 8 bit?

    In regards to the image coming out of a film camera, a lot depends on the film stock used, but the look from a medium-fast color negative 16mm stock is probably comparable to 2K raw on current cameras that can shoot a S16 crop (with S16 lenses).


    By the way, film doesn't really have a rolling shutter problem.



    14 hours ago, Anaconda_ said:

    As far as a point and shoot doc camera, I'd say the Ursa Mini Pro g2 is pretty close  [snip]  Next I'd say FS5/7 and Canon's Cx00 range depending on specific needs. Or if you have the budget, go for Red.

    It is important to use a digital camera that has a S16 (or standard 16) crop to approximate the image of the NPR, because the S16 optics are important to the look.



    14 hours ago, Anaconda_ said:

    Eosm is nice, but not grab and go,

    The EOSM is a bit more "grab and go" than an Eclair NPR.

  18. 52 minutes ago, noone said:

    For those interested, Google seems to work.

    Mind you I can not find anyone with a definition on how to work out colour depth, but plenty on working out file sizes.

    Makes sense really given that not all photos are the same size but can have varying colour depth for varying size sensors.



    Most of the results of your Google search echo the common misconception that bit depth is color depth, but resolution's effect on color depth is easily demonstrated (I have already given one example above).
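    For anyone who wants to see resolution trading against bit depth for themselves, here is a toy Python sketch (purely illustrative, with made-up values): a flat midtone patch quantized to 1 bit per pixel loses the tone entirely with hard thresholding, but a simple error-diffusion dither, using exactly the same 0/255 output values, recovers it in the spatial average. More pixels means more recoverable tones, even at the same bit depth.

```python
def quantize_1bit(values):
    """Hard-threshold each 8-bit value to 0 or 255 (pure 1-bit, no dithering)."""
    return [0 if v < 128 else 255 for v in values]

def dither_1bit(values):
    """1-D error-diffusion dither: output is still only 0 or 255, but the
    quantization error is pushed to the next pixel, so local averages
    preserve the original midtones."""
    out, err = [], 0.0
    for v in values:
        target = v + err
        q = 0 if target < 128 else 255
        err = target - q
        out.append(q)
    return out

# A flat midtone patch (value 64) spread across 100 pixels:
patch = [64] * 100
hard = quantize_1bit(patch)   # every pixel thresholds to 0 -- tone lost
dith = dither_1bit(patch)     # a mix of 0s and 255s
print(sum(hard) / len(hard))  # 0.0
print(sum(dith) / len(dith))  # ~64: spatial resolution recovered the tone
```

    This is the same mechanism at work when a high-resolution image survives a reduction in bit depth better than a low-resolution one.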
