Everything posted by tupp

  1. If you (and/or your client) like the aspect ratio and like the fact that you are using a wider portion of the image circle of your lenses, then, to me, those are the most important considerations. So, you are probably best shooting at 4096x2160 (DCI 4K) and down-converting cleanly to 2048x1080 (DCI 2K) or less cleanly to 1920x1013, as the arithmetic sketch below shows. Any extra rendering time caused by the odd line count of the "less clean" resolution would likely be minimal, but it would probably be a good idea to test it, just to make sure.
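      A quick sanity check of the scale arithmetic (a minimal Python sketch; the resolutions are the ones named above):

          dci_4k_w, dci_4k_h = 4096, 2160

          # Clean 2:1 integer scale down to DCI 2K:
          print(dci_4k_w // 2, dci_4k_h // 2)   # 2048 1080

          # Scaling the same frame to 1920 wide lands between scanlines:
          scale = 1920 / dci_4k_w               # 0.46875
          print(dci_4k_h * scale)               # 1012.5 -> the odd 1013-line raster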
  2. Glad to know that I am making progress. You have not directly addressed most of my points, which suggests that you agree with them. Firstly, the banding doesn't have to be eliminated in the down conversion to retain the full color depth of the original image. Banding/posterization is merely an artifact that does not reduce the color depth of an image. One can shoot a film with a hair in the gate or shoot a video with a dust speck on the sensor, yet the hair or dust speck does not reduce the image's color depth. Secondly, broad patches of uniformly colored pale sky tend to exhibit shallow colors that do not utilize much of the color depth bandwidth. So, it's not as if there is much color depth lost in the areas of banding. Thirdly, having no experience with 8K cameras, I am not sure if the posterization threshold of such a high resolution behaves identically to those of lower resolutions. Is the line in the same place? Is it smooth, crooked or dappled? Regarding the elimination of banding during a down conversion, there are many ways to do so. One common technique is selective dithering, and I have read that diffusion dithering is generally favored over other dithering methods (see the sketch below).
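      To illustrate, here is a minimal sketch of error-diffusion dithering, using the classic Floyd-Steinberg weights (my assumption -- real converters use tuned variants). It quantizes a single-channel float image (values 0.0-1.0, as a NumPy array) while pushing each pixel's quantization error onto its neighbors, which breaks up bands:

          import numpy as np

          def error_diffusion_dither(img, levels=256):
              """Quantize to `levels` tones, diffusing the residual error
              to neighboring pixels (Floyd-Steinberg weights)."""
              out = img.astype(np.float64)
              h, w = out.shape
              for y in range(h):
                  for x in range(w):
                      old = out[y, x]
                      new = round(old * (levels - 1)) / (levels - 1)
                      out[y, x] = new
                      err = old - new
                      if x + 1 < w:
                          out[y, x + 1] += err * 7 / 16
                      if y + 1 < h:
                          if x > 0:
                              out[y + 1, x - 1] += err * 3 / 16
                          out[y + 1, x] += err * 5 / 16
                          if x + 1 < w:
                              out[y + 1, x + 1] += err * 1 / 16
              return out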
  3. Nope. Color depth is the number of different colors that can be produced in a given area. A given area has to be considered, because imaging necessarily involves area... and area necessarily involves resolution. Obviously, if a 1-bit imaging system produces more differing colors as the resolution is increased, then resolution is an important factor in color depth -- it is not just bit depth that determines color depth. The above example of a common screen printing is just such an imaging system that produces a greater number of differing colors as the resolution increases, while the bit depth remains at 1-bit. The Wikipedia definition of color depth is severely flawed in at least two ways: it doesn't account for resolution; and it doesn't account for color depth in analog imaging systems -- which possess absolutely no bit depth nor pixels. Now, let us consider the wording of the Wikipedia definition of color depth that you quoted. This definition actually gives two image areas for consideration: "a single pixel" -- meaning an RGB pixel group; and "the number of bits used for each color component of a single pixel" -- meaning a single pixel site of one of the color channels. For simplicity's sake, let's just work with Wikipedia's area #2 -- a single-channel pixel site that can render "N" distinct tones (N being set by the bit depth). We will call the area of that pixel site "A." If we double the resolution, the number of pixel sites in "A" increases to two. Suddenly, we can produce more tone combinations inside "A." In fact, area "A" can now produce "N²" tone combinations -- many more than "N" tones. Likewise, if we quadruple the resolution, "A" suddenly contains four times the pixel sites that it did originally, with the number of possible tone combinations within "A" now increasing to "N⁴." Now, one might say, "that's not how it actually works in digital images -- two or four adjacent pixels are not designed to render a single tone." Well, the fact is that there are some sensors and monitors that use more pixels within a pixel group than those found within the typical Bayer pixel group or within a striped RGB pixel group. Furthermore (and probably most importantly), image detail can feather off within one or two or three pixel groups, and such tiny transitions might be where higher tone/color depth is most utilized. By the way, I didn't come up with the idea that resolution is "half" of color depth. It is a fact that I learned when I studied color depth in analog photography in school -- back when there was no such thing as bit depth in imaging. In addition, experts have more recently shown that higher resolutions give more color information (color depth), allowing for conversions from 4K, 4:2:0, 8-bit to Full HD, 4:4:4, 10-bit -- using the full, true 10-bit gamut of tones (a sketch of the arithmetic appears below). Here is Andrew Reid's article on the conversion and here is the corresponding EOSHD thread.
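      Here is the sketch of the arithmetic behind that conversion, assuming a single 8-bit channel held in a NumPy array (the function name is mine, for illustration; the linked article also covers the chroma-subsampling side). Combining each 2x2 block of four 8-bit samples yields values on a 10-bit scale:

          import numpy as np

          def downconvert_8bit_to_10bit(ch8):
              """Sum 2x2 blocks of an 8-bit channel (e.g. 4K) into one
              10-bit sample each (e.g. Full HD). Four 0-255 samples sum
              to 0-1020, which spans a 10-bit range."""
              h, w = ch8.shape
              blocks = ch8.astype(np.uint16).reshape(h // 2, 2, w // 2, 2)
              return blocks.sum(axis=(1, 3))    # values 0..1020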
  4. Well, this scenario is somewhat problematic, because one is using the same camera with the same sensor. So, automatically there is a binning and/or line-skipping variable. However, barring such issues and given that all other variables are identical in both instances, it is very possible that the 8K camera will exhibit a banding/posterization artifact just like the SD camera. Nevertheless, the 8K camera will have a ton more color depth than the SD camera, and, likewise, the 8K camera will have a lot more color depth than a 10-bit, 800x600 camera that doesn't exhibit the banding artifact. Of course, it is not practical to have 1-bit camera sensors (but it certainly is possible). Nonetheless, resolution and bit depth are equally weighted factors with regard to color depth in digital imaging, and, again, a 4K sensor has 4 times the color depth of an otherwise equivalent Full HD sensor.
  5. I acknowledged your single "complexity" (bit rate), and even other variables, including compression and unnamed natural and "artificial" influences such as A/D conversion methods, resolution/codec conversion methods, post image-processing effects, etc. By the way, a greater bit rate doesn't always mean superior images, even with all other variables (including compression) being the same. A file can have a greater bit rate with a lot of the bandwidth unused and/or empty. One is entitled to one's opinion, but the fact is that resolution is integral to digital color depth. Furthermore, resolution has equal weighting to bit depth when one considers a single color channel -- that is a fundamental fact of digital imaging. Here is the formula: COLOR DEPTH = RESOLUTION x BITDEPTH^n (where "n" is the number of color channels and where all pixel groups can be discerned individually; a worked example appears below). Most don't realize it, but a 1-bit image can produce zillions of colors. We witness this fact whenever we see images screen printed in a magazine, on a poster or on a billboard. Almost all screen-printed photos are 1-bit images made up of dots of ink. The ink dot is either there or it is not there (showing the white base) -- there are no "in-between" shades. To increase the color depth in such 1-bit images, one must increase the resolution by using a finer printing screen. That resolution/color-depth relationship of screen printing also applies to digital imaging (and also to analog imaging), even if the image has greater bit depth. I simply state fact, and the fact is that 4K has 4 times the color depth and 4 times the bit rate of Full HD (all other variables being equal and barring compression, of course).
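      Applied as a relative figure of merit, the formula works out like this (a sketch; it reads "BITDEPTH" as the number of tones per channel, i.e. 2^bits, which is how the 1-bit screen-print example behaves):

          def color_depth(width, height, bits_per_channel, channels=3):
              """Relative color depth per the formula above:
              resolution x (tones per channel) ** channels."""
              tones = 2 ** bits_per_channel
              return width * height * tones ** channels

          uhd = color_depth(3840, 2160, 8)
          fhd = color_depth(1920, 1080, 8)
          print(uhd / fhd)   # 4.0 -- 4K has 4x the color depth of Full HD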
  6. 4K has 4 times the color depth (and 4 times the bit rate) of Full HD, all other variables being equal and barring compression or any artificial effects.
  7. Finding a cheap, fun camera certainly can be part of the fun for those who can afford to buy one. Another part of the fun is using inexpensive gear to shoot something compelling, which can be done with a camera that one already owns. Why exclude those who can't buy a camera, merely because they can't afford to experience one part of the fun?
  8. I wasn't suggesting anything about your points regarding social mores. I was merely showing what is, to my knowledge, the only feature film prior to 1980 that primarily addresses the issues of having a female US president. It is not just a film that happens to have a female US president as a secondary character. Of course, the mores have changed dramatically since 1964, so much so that the ending (and title) of "Kisses for My President" would have to be different. On the other hand, I don't think that changing social mores or politics is at the heart of the mediocrity of our age. Certainly, shoehorning diversity into content doesn't help, but there is a larger reason (or reasons) for the shallow, uninspired material that we encounter today.
  9. The general idea for this contest is great, but forcing folks to buy a camera might be a deal-breaker for some. Perhaps it should be stipulated that the camera merely has to be "trending" on eBay for no more than US$150.
  10. Although the company is gone, Harrison & Harrison was a dominant filter maker for cinema "back in the day." They invented black dot diffusion, which is the basis of Black Pro Mist filters and of other derivative filter technology. Well, the set of 5 filters that I linked was listed at US$200, but, as mentioned, H&H filters can sometimes be found individually. What is a P2K? Definitely interested in that. Keep in mind that although the black levels can be lifted with diffusion filters, that doesn't mean that one will see more detail in the shadows. To approximate the black dot effect, the black spray-paint specks should be "embedded" within a diffusion layer (hair spray or something similar). Not sure what you seek here nor whether any existing lens filters can yield such results. On the contrary, if you DIY, you are in complete control of the distribution of the diffusion medium. In the videos that I watched, it didn't seem too difficult. I don't know, guess it's just me... If you have a good lens hood or matte box (or a solid French flag), the flare will be reduced when the Sun is out of frame. It shouldn't take 20 minutes to "set up" a lens hood. I am not suggesting lifting the blacks. To add ambient fog in post, one basically slaps a smooth white, slightly diffusing layer/track over the image, and then adjusts the opacity of that white layer/track as desired (a sketch appears below). Doing so is very similar to an out-of-frame light source hitting a lens diffusion filter. If one wants the look of ambient flare on a lens diffusion filter, one can similarly lower the camera exposure and then use the post method stated directly above. The results will closely simulate doing it all in-camera with the higher black levels and no extra noise, plus one will have more control over the level of "ambient flare."
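      For what it's worth, the "fog in post" step described above is just an opacity mix with a white layer; here is a minimal NumPy sketch (the function name and the 0.15 default are illustrative):

          import numpy as np

          def add_ambient_fog(img, opacity=0.15):
              """Blend a uniform white layer over a float RGB image
              (values 0.0-1.0). `opacity` acts like the layer opacity
              control in an NLE."""
              white = np.ones_like(img)
              return img * (1.0 - opacity) + white * opacity

      For the "ambient flare" variant, lower the exposure of the image first, then raise the opacity to taste.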
  11. What size do you need? Here is an 82mm Tiffen Low Contrast filter for sale. If your lens is smaller, you could just use a step-up ring. By the way, there are plenty of YouTube videos on making DIY Black Pro Mist filters. One can even make a smaller increment than 1/8. To approximate the black dot process, one needs to apply the black spray paint before the hair spray (or other diffusion spray). Also, the Harrison & Harrison black dot originals can still be found for sale in sets or individually. It always puts a smile on one's face when a YouTuber conducts a test with just a frontal light source, and the subject turns their head from left to right. As he suggests, it's generally best to use a lenser (flagging an out-of-frame light source so that it doesn't hit the lens/filter) or a hood/matte box. One can always add ambient fog in post.
  12. Great! Hopefully, this model will not require a recorder for raw footage.
  13. Anyone can call almost anything "art." Art mostly defies definition. Art doesn't have to push boundaries -- art can be something that is merely pretty. It can also be something that is stimulating, funny or entertaining in some way. To me, the big problem with movies and television today is that there aren't a lot of good, original stories being generated. Similarly, there just isn't a lot of inspired originality anymore in the other performing arts, such as music, dance and theatre. We find ourselves deep in the age of mediocrity. Some will put the blame on the conglomeration of entertainment companies along with the onset of digital technology. Huge corporations (and talentless board members) making most of the big decisions in the arts has got to water things down. Also, before digital, one had to be more deliberate, thoroughly flesh out ideas and be extensively prepared, talented and/or experienced. With digital, one can shoot things more "on the fly," without prep or originality and with minimal artistic ability and little know-how.
  14. Scanning film: good to know for anyone working in that area. They have three scanners. Thanks!
  15. Agreed. A good lens choice should reduce the video look more readily than diffusion filters. Vintage lenses are ideal. If you can't get Xtal Express, use a vintage spherical lens.
  16. Yes, of course, but if one exposes properly and/or uses HDR features, then it might be possible to match "blown-out" areas in the frame. Additionally, lens diffusion scattering from out-of-frame sources is also influenced by lens hoods and matte boxes. In the 1970s, David Hamilton was the king of using lens diffusion while blowing out highlights and light sources. As I recall, black-dot lens diffusion didn't appear until the early 1980s, and Hamilton would push Ektachrome, which increased contrast, countering the softness/flatness produced by the lens diffusion. In addition, pushing gave coarser grain, which worked well for Hamilton's soft aesthetic.
  17. Certainly, there are many diffusion effects that can be emulated accurately in post. Furthermore, there are also diffusion effects that are exclusive to post and can't be done with optical filters. However, there are some optical filters which can't be duplicated digitally, such as IR cut/pass filters, UV/haze filters, split-diopters, pre-flashing filters, etc.
  18. Well, the 16S, the Bolex, the Krasnogorsk, etc. all had their eyepieces at the rear of the camera, so they weren't shoulder mounted. There were a few tricks that one could practice to keep them stable. There were also other brackets (such as belt pole rigs) that could help. Of course, weight could always be added for more stability. I am with you on shoulder rigs. A balanced shoulder rig is always fairly stable regardless of weight.
  19. Your P4K should closely match your P6K if you use a speedbooster with your EF lenses on your P4K. As you are likely aware, a speedbooster (or focal reducer) is just an adapter with optics that condense the image circle and character of a lens to a smaller size. Most M4/3 speedboosters will yield a Super35/APS-C frame and look, and give an extra stop of exposure to boot (rough numbers below). Here is a video comparing a Metabones speedbooster with a recent Viltrox focal reducer on the P4K, cued to the section comparing autofocus speed in lower light. To me, the Viltrox is good and the Metabones is better. Neither seems to have any prohibitive problem with its electronics. Was the AF performance of your adapters as good as that of these speedboosters?
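      Rough numbers behind the extra stop, assuming the common 0.71x reduction factor (exact figures vary by model):

          import math

          factor = 0.71                      # typical focal-reducer magnification

          # A 50mm f/2.8 full-frame lens through the reducer:
          print(50 * factor)                 # ~35.5mm effective focal length
          print(2.8 * factor)                # ~f/2.0 effective aperture

          # The same light lands on an image circle smaller by factor**2 in area:
          print(math.log2(1 / factor ** 2))  # ~0.99 -- about one extra stop

          # M4/3's 2.0x crop becomes ~1.42x -- close to Super35's ~1.5x:
          print(2.0 * factor)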
  20. I didn't see any FPN with the BM cameras using Fairchild sensors. The BM cameras with CMOSIS sensors (BMPC, OG Ursa, Ursa Mini 4K) can exhibit FPN if one is not careful, but having a global shutter is a worthwhile trade-off.
  21. The most important thing is that one can control the aperture (and view a scope). The aperture readout is not crucial. Most cinema lenses are completely manual for good reasons. There is too much on the line in larger-budget projects to rely on decisions made by the camera or lens. Furthermore, any IS glitch could bust a take and/or force a cut in post, which could prove to be expensive and detrimental to the piece's impact. Additionally, it is likely that most cinematographers want lens manufacturers to put their efforts into optical performance rather than into automatic electronic features. Nobody buys a Master Prime to shoot handheld at Bar Mitzvahs. It's not easy to handhold a narrow-angle non-IS lens, but it can be done with success. Back in the film days, there were no IS lenses, so one had to learn how to be smooth when handheld. The non-IS results generally do not possess the same look/feel as handheld with a modern IS camera/lens, but I wouldn't say that handheld without IS is generally worse than with IS. Of course, a tripod eliminates a lot of stability problems, and one really should disable IS when using a tripod. Certainly. If there is a nearby rental house, it might be wise to go there and test your EF-S lenses on a P6K or a P6K Pro prior to making a purchase. Not sure how "consuming Humble Pie" is relevant, but getting a camera that works for you is more important. By the way, I prefer the Small Faces. Again, it would be useful to actually see how your lenses work with any camera in consideration (if possible), prior to a purchase. In the case of the C70, try it with an official Canon adapter. Full frame lenses are a wise investment if they have a deep mount, and especially if they are completely manual. One of the great benefits of having FF deep-mount lenses is the ability to use them with speedboosters on shallow-mount Super35/APS-C cameras. Such a combination gives an extra stop of exposure along with almost the complete full frame view and character, plus the image is usually sharper than using a full frame lens with a dummy adapter.
  22. Please point out where there are assumptions or false conclusions. Okay. I asked if your lenses were EF-S -- there was no assumption (although I suspected as much, which is why I asked). Okay. Never experienced that. Are you shooting manual exposure, or is the aperture automatically controlled? Never experienced that either, but I would tend not to use IS on a cinematography camera such as the P6K. On the other hand, do you think that your EF-S lenses would perform on the P6K just as well as they perform on Canon EF-S cameras? Do you think that your EF-S lenses would perform on the C70 with a Canon EF-to-RF adapter just as well as they perform on a Canon EF-S camera? No doubt. Do you realize that most M4/3 lenses can be used on cameras such as the C70 and the P6K with no vignetting? It doesn't offend, but I truly hope that your preference is informed.
  23. I think it is a combination of a biased interpretation of one's own link, plus poor comprehension of another, somewhat misleading source. I already addressed the Gerald Undone video that you linked. I disagree with the conclusions to which he jumps in regard to dynamic range. He sets up arbitrary conditions (the size of the C70's sensor and the lack of NR options on the A7S III) under which the C70's dynamic range is "better" in his mind than that of the A7S III. However, at 09:52 in the video, he additionally states that the low-light performance of the A7S III is far superior to that of the C70. While he makes this statement, we see a side-by-side comparison of the performance of the C70 and the A7S III starting at ISO 12,800 and 25,600, which reveals that the A7S III is considerably cleaner than the noisy C70. So much for the CVP and "GU" links. The C70 is not "clean" at ISO 12,800, unlike the A7S III. I see. Well, once again, I would have to take your word on that, but after seeing the discrepancy between your statements and your links, I don't think that I will.