Posts posted by tupp


  1. 5 hours ago, cantsin said:

    You talk about perceptual color depth,

    No.  I am referring to the actual color depth inherent in an image (or imaging system).

     

     

    5 hours ago, cantsin said:

    created through dithering,

    I never mentioned dithering.  Dithering is the act of adding noise to parts of an image or re-patterning areas in an image.

     

    The viewer's eye blending adjacent colors in a given image is not dithering.

     

     

    5 hours ago, cantsin said:

    And even that can't be measured by your formula, because it doesn't factor in viewing distance.

    Again, I am not talking about dithering -- I am talking about the actual color depth of an image.


    The formula doesn't require viewing distance because it does not involve perception.  It gives an absolute value of color depth inherent in an entire image.  Furthermore, the formula and the point of my example are two different things.


    By the way, the formula can also be used with a smaller, local area of images to compare their relative color depth, but one must use proportionally identical sized areas for such a comparison to be valid.

     

     

    5 hours ago, cantsin said:

    Or to phrase it less politely: this is bogus science.

    What I assert is perfectly valid and fundamental to imaging.  The formula is also very simple, straightforward math.

     

    However, let's forget the formula for a moment.  You apparently admit that resolution affects color depth in analog imaging:

    6 hours ago, cantsin said:

    I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image.  Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

    I am not sure why the same principle would fail to apply to digital imaging.  Your suggestion that the "non-discrete color values" of analog imaging necessitate measuring the color in parts of an image to determine color depth does not negate the fact that the same process works with a digital image.

     

    The reason why I gave the example of the two RGB pixels is that I was merely trying to show in a basic way that an increase in resolution brings an increase in digital color depth (the same way it happens with an analog image).  Once one grasps that rudimentary concept, it is fairly easy to see how the formula simply quantifies digital, RGB color depth.

     

    In a subsequent post, I'll give a different example that should demonstrate the strong influence of resolution on color depth.


  2. 57 minutes ago, cantsin said:

    I found exactly three references for this equation, all in camera forums, and all posted by a forum member called tupp...

    So what?

     

     

    57 minutes ago, cantsin said:

    But seriously, I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image. Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

    It works the same with digital imaging.  However, in both realms (analog and digital) the absolute color depth of an image includes the entire image.

     

    I will try to demonstrate how color depth increases in digital imaging as the resolution is increased.  Consider a single RGB pixel group of size "X," positioned at a distance at which the red, green and blue pixels blend together and cannot be discerned separately by the viewer.   This RGB pixel group employs a bit depth that is capable of producing "Y" number of colors.

     

    Now, keeping the same viewing distance and the same bit depth, what if we squeezed two RGB pixel groups into the space of size "X"?  Would you say that the viewer would still see only "Y" number of colors -- the same number as the single pixel group that previously filled size "X" -- or would the (slightly) differing shades/colors of the two RGB pixel groups blend to create more colors?

     

    What if we fit 4 RGB pixel groups into space "X"?  ... or 8 RGB pixel groups into space "X"?
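    The blending argument above can be sketched with a toy model.  This is my own illustrative code, not part of the original posts: it assumes the eye simply averages the values of two adjacent pixels on one channel, and counts how many distinct averaged values result.

    ```python
    # Toy model (an assumption, not an established vision model): if the eye
    # averages two adjacent pixels, each holding one of `levels` discrete
    # values, count the distinct blended values the viewer could perceive.
    def blended_values(levels):
        return {(a + b) / 2 for a in range(levels) for b in range(levels)}

    # Two 4-level pixels blend into 7 distinct averages, more than the
    # 4 values a single pixel can show.
    ```

    Under this averaging assumption, two pixels with N levels each yield 2N - 1 distinct averages, so squeezing more pixel groups into space "X" does increase the number of perceivable values.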

     

     

    57 minutes ago, cantsin said:

    In digital photography and video, however, the number of possible color values is predetermined through the color matrix for each pixel. Therefore, in digital imaging, color depth = bit depth.

    As I think I have shown above, resolution plays a fundamental role in digital color depth.

     

    Resolution is, in fact, a factor weighted equally with bit depth in digital color depth.  I would be happy to explain this once you have accepted the above example.


  3. 18 hours ago, cantsin said:

    Do you have any reference for this? I couldn't find a single one online.

    I don't have a reference.  When I studied photography long before digital imaging existed, I learned that resolution is integral to color depth.

     

    Color depth in digital imaging is much more readily quantified, as it involves a given number of pixels with a given bit depth, rather than the indeterminate dyes and grain found in film emulsion (and rather than the unfixed, non-incremental values inherent in analog video).  The formula for "absolute" color depth in RGB digital imaging is: 

    Color Depth = (Bit Depth x Resolution)^3
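    A minimal Python sketch of the formula as stated.  The function name is mine, and I am assuming that "Bit Depth" means the number of levels per channel and "Resolution" the total pixel count, since the post does not pin those units down:

    ```python
    # Sketch of the poster's formula: Color Depth = (Bit Depth x Resolution)^3.
    # Assumptions (not stated in the post): bit depth is expressed as the
    # number of levels per channel (2**bits), resolution as total pixels,
    # and the cube accounts for the three RGB channels.
    def color_depth(bits_per_channel, resolution):
        levels = 2 ** bits_per_channel
        return (levels * resolution) ** 3

    # e.g. an 8-bit 1920x1080 image: (256 * 2073600) ** 3
    ```

    Note how the formula treats the two factors symmetrically: doubling resolution raises the result exactly as much as doubling the number of per-channel levels would.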


  4. 6 hours ago, cantsin said:

    "Color depth or colour depth (see spelling differences), also known as bit depth, is either the number of bits used to indicate the color of a single pixel, in a bitmapped image or video frame buffer". -  https://en.wikipedia.org/wiki/Color_depth

    The notion that bit depth is identical to color depth is a common misconception that has apparently made its way into Wikipedia.

     

    The only instances in which bit depth and color depth are the same are when considering only a single photosite/pixel or a single RGB pixel group.  Once extra pixels/pixel groups are added, color depth and bit depth become different properties.  This happens because resolution and bit depth are equally weighted factors of color depth in digital imaging.


  5. 3 hours ago, cantsin said:

    The results are attached here. "8bit" and "10bit" only refer to the color depth of the original video file; both stills are 8bit PNGs.

    Actually,  "8bit" and "10bit" refer only to bit depth -- bit depth and color depth are two different properties.

    26 minutes ago, cantsin said:

    One should not forget though that all my tests are specific to Panasonic's VLog curve - which highly compresses dynamic range

    My understanding is that Vlog doesn't actually change the dynamic range -- it changes the tone mapping within the dynamic range to capture more shades in desired exposure regions.


  6. 1 hour ago, Kisaha said:

    @tupp maybe you can't tell the difference between microphones,

    Almost every mic that I have seen on hand-held booms had an interference tube.  What kind of mic is that?

     

     

    1 hour ago, Kisaha said:

    @IronFilm participate in specialized sound forums, and if we copy-paste your statement here, would bring a lot of laughs, or rage, to the sound professionals around the world.

    They might stop laughing when the director and post production sound supervisor start asking why there is so much extraneous noise in the audio.

     

     

    1 hour ago, Kisaha said:

    Also, I am working 19 years as a sound pro, have worked in 4 countries , and what you said just ain't true!

    I have been in production for a little while.  By the way, I started out in audio recording.

     

    I have worked on all kinds of film productions in various departments, from small corporate gigs to large features on stages and on location.  I am telling you exactly what I see audio pros use on such sets.

     

     

    1 hour ago, Kisaha said:

    Ignorance on a field you clearly do not comprehend ain't a sin, just you do not have to push your wrong perspective on a subject matter that you haven't mastered, especially on a forum that people help other people.

    [snip]

    you are not "specifically" (what that does even mean?) an "audio person", so you can't understand that the critical point here ain't only, how "wide"  the reception is, but other qualities and characteristics of sound capturing, that do not apply to most people's limited knowledge.

    When I was involved in audio recording, I was utterly ignorant about the different types of mics and recording techniques.  I also was completely unaware of certain brands/models that had more desirable response for certain applications.

     

    Please enlighten those of us with limited knowledge with your mastery of the important mic properties in recording.  Specifically, please explain what is more important than the degree of directionality in regards to film audio production, given a quality, professional mic.

     


  7. On 10/23/2017 at 1:12 PM, IronFilm said:

    It seems you might be mistakenly thinking that the only microphone on a boom is a shotgun, which is very *very* wrong :-/

    You should tell that to the pros here in Hollywood.

     

    Almost all the boom operators that I see here on set are using shotguns on their booms, both indoors and outdoors.  These operators are nimble and precise, and they want clean, dry audio.

     

    I am not specifically an audio person, but I always use my boomed shotgun mic, and I have always gotten great results.  I would never want anything with wider reception.


  8. On 10/17/2017 at 6:26 AM, Anaconda_ said:

    There's some funny info in this thread.

    Shotguns are perfectly fine indoors.

    Agreed!  Otherwise, somebody needs to tell all the pro mixers here in Hollywood that they are wasting their money hiring boom operators!

     

    I suspect that this suggestion will sound alien to some here, but never mount your shotgun mic to your camera.  Always boom the shotgun mic just outside of the frame, as close as possible to the subject.  If your subject is static, such as someone sitting for an interview, you can boom from a light stand or C-stand.

     

    Of course, make sure that the shotgun mic is aimed directly at the subject's mouth or aimed directly at the action you want to record.

     

    One important trick that is often overlooked -- position the mic so that there are no air conditioners or other noise sources along the shotgun mic's axis (both in front and in back of the mic) where it is most sensitive.  So, if there is an air conditioner on camera right, boom the mic from slightly to camera right, so that it is aiming a little leftward towards the subject, perpendicular to the noise source on the right of camera.

     

    As @maxotics suggested, it is best to use both a lav and a boomed shotgun, if possible.  In such a scenario, one is always a backup for the other.

     

     


  9. If you want to fix dead/hot pixels in your footage, you can do so in most NLEs.

     

    First, make a duplicate track of your footage.  Then, make a mask that is transparent except for the dead/hot pixels.  Attach that mask to the top video track, so that the hot/dead pixels are the only part of the top track that covers the identical track below.  Then, Gaussian-blur the footage in the top track (but not the attached mask), so that the values/color from the adjacent pixels seep into the masked hot/dead pixels.
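    Outside an NLE, the same duplicate/blur/mask idea can be sketched in a few lines.  This is a pure-Python illustration of the technique, not any NLE's actual feature; the frame is a 2D list of grayscale values, and the Gaussian blur is reduced to a simple neighbor average.

    ```python
    # Replace each flagged dead/hot pixel with the average of its healthy
    # neighbors -- the same effect as masking a blurred duplicate track.
    def patch_dead_pixels(frame, dead):
        h, w = len(frame), len(frame[0])
        out = [row[:] for row in frame]  # work on a copy of the frame
        for (y, x) in dead:
            # gather the surrounding pixels, skipping other dead pixels
            neighbors = [frame[ny][nx]
                         for ny in range(max(0, y - 1), min(h, y + 2))
                         for nx in range(max(0, x - 1), min(w, x + 2))
                         if (ny, nx) != (y, x) and (ny, nx) not in dead]
            if neighbors:
                out[y][x] = sum(neighbors) // len(neighbors)
        return out
    ```

    In the NLE version, the blur radius plays the role of the neighborhood size here: it only needs to be wide enough for adjacent values to cover the masked pixel.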


  10. Make transcription text files with time code references to all spoken lines in your footage/audio.  Then, edit "on paper" before you even get close to sitting down at the edit bay.  It makes the process considerably easier and much more thorough.  You can add b-roll/cut-aways during the paper edit and/or during the NLE session.

     

    If you have a lot of footage/audio (more than ~20-30 minutes), hire a cheap transcription service to make the text files with time code references.  The least expensive transcription services are probably in India and, perhaps, in the Philippines, and they can often transcribe most languages.  These services are worth every penny you spend on them.


  11. 7 minutes ago, Bizz said:

    You beat me to it!  I was also going to suggest a cheap base rig with rails with this one that has adjustable height on the camera platform.

     

    OP, you might as well get a cheap rig such as this one, which additionally gives you rails to attach a matte box or lens hood, a follow focus, etc., if you ever want to use such items.

     

    On the other hand, you could also just slide the camera to its most forward position on the Manfrotto 502.


  12. Consider renting.

     

    A $7000 budget for a camera package sans lenses seems rather excessive for a short.  I've worked on shorts in which the budget for the entire project was a fraction of that of your camera package (sans lenses).

     

    Plus, renting gives you more flexibility/recoverability.  If you rent a camera that has a problem (such as FPN) or if you just don't like it, you can return it and get another camera.

     

    Also, for a narrative short, you probably don't even need 4K, which could greatly reduce your budget.


  13. On 9/10/2017 at 2:45 PM, cantsin said:

    This is true. Rule of thumb is: 90% of c-mount lenses are designed for 2/3" image circles (i.e. half of MFT, equivalent to 4:3 16mm film), 15% are designed for 1/3" image circles (Super 8 equivalent) or smaller, 4% are designed for 1" (or Super 16), 1% is designed for MFT or bigger image circles.

    Might want to check the math and add the percentages.

     

    At any rate, there were tons of C-mount lenses made for larger vidicons/plumbicons, so the above percentages are not a "rule of thumb."


  14. On 9/7/2017 at 7:40 AM, 7 Lakes said:

     

    So yes or no? :) How to check before buying a used one in the internet? Are there different C-Mount lenses with a different angle of view? If I understand it correctly, it will almost cover the sensor with 25mm and above?

    There is no absolute image circle size for a C-mount, nor is there an absolute image circle size for the various focal lengths with a C-mount.

     

    Furthermore, one can use an adapter to mount APS-C and full frame lenses to C-mount cameras.

     

    You just have to determine the image circle of each lens on an individual basis.


  15. On 9/7/2017 at 5:23 AM, Ed_David said:

    It shows me that the f65's mechanical shutter is beautiful - but not really that noticeably better and definitely does not make the extra weight and bulk and size of the f65 worth it.

    I think that the mechanical shutter makes a more noticeable difference with significant movement and with handheld shots.  The mechanical shutter probably also reduces noise.

     

    By the way, Panavision modified a few F65s by removing the mechanical shutter.  They called the modified version the "F65 Mini," and they usually live in France:

    http://panavision.fr/produits/sony-f65-mini/

    www.vimeo.com/197192795

     
