
Posts posted by tupp

  1. On 3/20/2018 at 3:59 AM, Aussie Ash said:

    here is a pic of the IMAX shoulder rig

    it only weighs 54 pounds- about 24 kgs.

    Definitely, some heavy duty IBIS with that rig.

     

    Looks like they forgot the cage with a Rode and shoe mounts!   /s

     

    Jeez!  That's as massive as a Mitchell BNC or a Dalsa Origin.

  2. 12 hours ago, Pavel Mašek said:

    (NX1 is running on Linux-like OS Tizen)

    Not sure that Tizen is "Linux-like" -- I think it actually is a version of Linux.

     

     

    4 hours ago, Arikhan said:

    As the processor of the NX1 is very powerful, I invented a workflow called IDOB - Image Data Optimization Blocks. It can combine 2 RAW images and output a single DNG file (16 bit) with a convenient amount of data (much less than a single RAW) but with a tremendous amount of DR. And it fits in a modern workflow with Premiere or Davinci.....

    This ground principle is not my invention: When putting (stacking) more images together (even shot under same conditions = similar EV) you will get a single superiour image with great DR and tremendous possibilities to work in post.

     

    This sounds like the old Magic Lantern HDR video in which alternating H264 frames have different ISOs  (WARNING!  STROBING/FLASHING FRAMES):

     

    The trick is that you shoot with the ML HDR video feature at a higher frame rate (50fps or 60fps) and then combine every two consecutive images to get HDR video at half the frame rate (25fps or 30fps).  As mentioned in the video, one can encounter motion/ghosting problems with this method.
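    The pairing logic is simple enough to sketch.  Below is a minimal, hypothetical Python sketch of just the bookkeeping (the function name and data layout are my own): frames are assumed to be flat lists of 0.0-1.0 luma values captured at 50/60fps with alternating ISOs, and a plain average stands in for a real exposure-fusion merge, which would be far more sophisticated:

```python
def merge_hdr_pairs(frames):
    """Pair up alternating-ISO frames and blend each pair.

    `frames` is a list of frames (flat lists of pixel values, 0.0-1.0)
    captured at 50/60fps with alternating ISOs; the result has half as
    many frames (25/30fps).  The plain average here is only a
    placeholder for a real exposure-fusion merge.
    """
    return [[(lo_px + hi_px) / 2.0 for lo_px, hi_px in zip(lo, hi)]
            for lo, hi in zip(frames[0::2], frames[1::2])]

# six alternating-ISO frames in -> three merged frames out
frames = [[0.2] * 4, [0.8] * 4] * 3
out = merge_hdr_pairs(frames)
print(len(out), out[0][0])  # 3 0.5
```

    This also makes clear why the motion/ghosting problems arise: each output frame mixes two different capture instants.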

     

    I wonder if the Magic Lantern HDR option works with their 60fps raw video feature on the 5D III (although I am not sure whether this HDR method makes sense when shooting raw).

     

  3. 45 minutes ago, cantsin said:

    The EOS-M is simply not powerful enough for that. Blackmagic is the only affordable option if you need these specs.

    I said that those specs would be "ideal" -- I realize that those exact specs are probably not currently feasible.  On the other hand, 12-bit/10-bit raw at 1800x1024 is working with the EOSM, and that is certainly close enough to what I seek.  However, the Magic Lantern documentation, nomenclature and forum are so confusing that I cannot figure out whether or not those specs are available in a super16 crop.

     

    ML has not been able to get higher raw resolutions from the full sensor, but I am not sure that is due to the EOSM lacking "power."  I assume that this current limitation doesn't involve the EOSM's SD card controller bottleneck, because the EOSM can obviously record full-HD+ resolutions at higher bit depths with crop modes.  So, there might be a way to get raw and "near-full-HD" without a crop.

     

    I am just trying to figure out what is possible right now with the EOSM.

  4. 5 hours ago, Alpicat said:

    I think part of the confusion in crop numbers comes from the fact that the magic lantern menu always states crop factor in relation to 35mm full frame (i.e. same sensor size as Canon 5D), no matter which camera.... 

    That's not the cause of my confusion -- I don't even have Magic Lantern installed, so I am not looking at the menus.  Indeed, if all the crops were simply labeled relative to full frame, there would be a lot less confusion in regard to crop sizes.

     

    I think that a lot of the confusion with Magic Lantern comes from three things:

    1. The official documentation is not current, and a lot of features are left out;
    2. The only mention of some features is scattered over a zillion forum posts, so information is difficult to find;
    3. Since features are released as they appear, they weren't all named together at once, so the naming has been somewhat arbitrary, reactive and haphazard.  Consequently, it is difficult to tell from a feature's name what that feature does (especially relative to other, similar features).

     

     

    5 hours ago, Alpicat said:

    Do RJ do a focal reducer for the EOS M mount directly? I'd definitely be interested in getting one.

    Yes.  I have this one.  There are other brands that have "dumb" EF-to-EF-M focal reducers -- just search on Ebay.

     

    Thanks for the info!

  5. 27 minutes ago, byuri734 said:

    I agree, if you are looking at S16, it is better to go with a BMPCC. You probably can get a used one for just a bit more than an eventually new eos-m speed booster.

    Thanks.  As I mentioned above, I have shot extensively with the BMPCC, but I don't own one nor do I own a BMMCC.

     

    I own an EOSM with an RJ focal reducer.  For the optics that I have, it would be ideal to get a super16 crop, full HD, 10-bit, 4:4:4 (or raw),  or to get the same specs with no crop (APS-C).

  6. 1 hour ago, Matt Kieley said:

    3x + the 1.6x crop of the sensor. If you want s16 and RAW, just get the Blackmagic Pocket or Micro. It reliably shoots Full HD in RAW as well as ProRes, with a s16 sensor.

    Thanks for the confirmation on the 3x crop.  I wonder what resolutions are available at that crop and whether or not I would have to deal with focus pixels.

     

    In regards to the Blackmagic pocket and micro, I have shot a lot with the pocket, and it gives great images.  I own an EOSM, but not any Blackmagic cameras.  So, instead of spending another US$1000, I would rather try similar functionality with my EOSM.

  7. 12 hours ago, Alpicat said:

    I posted a comment on one of their youtube videos where they demonstrate the adapter: https://www.youtube.com/watch?v=GPseT8Ok3EA&lc=

    What happened to Bohus?  Thanks for the link.

     

     

    12 hours ago, Alpicat said:

    I think the only way to get close to a 3x crop (which is approx Super 16 size) is to increase the horizontal resolution to 2.5k (using 5x zoom mode). However at that resolution you only get about 3 seconds record time, and the vertical res is limited to 1080. 

    I am fairly sure that there is a 3x crop mode in Magic Lantern, but I don't know what resolutions are possible, nor whether one can shoot in that mode without suffering pink/green focus pixels.  It is difficult to determine Magic Lantern's capabilities, as the only mention of a feature is sometimes spread over many threads on their forum, with some threads having 60+ pages of posts.

     

     

    12 hours ago, Alpicat said:

    I actually emailed the metabones sales team last week about their plans to make a speed booster for the EOS M mount (since it would be useful for the Canon M50 too), and they say they plan to do it but can't provide further info at this stage. If they do make it, that would be one way of getting super 16mm field of view on the EOS M!

    Well, if Metabones doesn't make a speed booster that mounts on the EOSM, some of their competitors already have the EOSM covered.  Plus, if an EOSM-to-M4/3 adapter gets made, that would allow the use of a lot of M4/3 focal reducers on the EOSM, including Metabones'.

     

    Metabones should actually make their $peedboosters with interchangeable camera mounts, so you only have to spend $500+ for one set of optics.

  8. 46 minutes ago, Alpicat said:

    I've also checked with Fotodiox as they already make a Sony E-mount to M4/3 adapter, as do various other manufacturers

    Wow!  I had no idea that such adapters existed.  Thanks!

     

    Here's the Fotodiox version, by the way.

     

     

    47 minutes ago, Alpicat said:

    The Sony mount is very similar to the EOS M mount with the same flange focal distance so should be easy to produce an adadpter for EOS M to m4/3, even if it lacks electronic contacts.

    Indeed, both mounts have the same flange focal distance (18mm), plus the EF-M has a slightly wider throat diameter at 47mm, compared to the 46.1mm throat of the E-mount.  So, it should be a little easier to make an M4/3 adapter for the EOSM.

     

    50 minutes ago, Alpicat said:

    Fotodiox said they'd talk to the design team and see if they can make such an adapter.

    Can you post your Fotodiox contact info?  I would like to "second" your proposal for such an adapter.

     

    By the way, much of this thread involves shooting with a "Super-8" crop on the EOSM.  I would like to enjoy the same HD resolution and raw benefits, but shoot with a Super-16 crop.  Would that be the equivalent of the 3x crop?  If so, can I shoot that crop on the EOSM with full HD, raw, 24p at 10-bit?

     

    Thanks!

  9. On 3/2/2018 at 5:29 AM, Alpicat said:
    On 3/2/2018 at 4:07 AM, Yurolov said:

    Is there any way I can use my pl mount s16 super speeds on the cam? Any recommendations?

    There is a PL mount to EOS M adapter, but it's from MTF so it's expensive: https://www.srb-photographic.co.uk/pl-mount-to-canon-eos-m-mtf-camera-adaptor-9290-p.asp  I don't know if there are cheaper alternatives. 

    There are PL to EF-M tilt adapters for US$195.  I seem to recall non-tilt versions selling for a little over US$100, but I can't find them anymore.

     

    Another option is to get a PL to EF adapter for ~US$100 and merely attach that to a dummy EF to EF-M adapter (US$9).

     

     

    On 3/2/2018 at 5:29 AM, Alpicat said:

    Funnily enough I was talking to the MTF guys at BVE expo yesterday asking them if they had an EOS M to M4/3 adapter! It would be great to use Micro 4/3 lenses or a BMPCC speed booster on the EOS M. Unfortunately no such adapter exists. 

    The flange focal distance for M4/3 is 19.25mm, while the flange focal distance for EF-M is 18mm.  So, it should be possible to drill holes in an M4/3 mount to match the screw pattern of the EF-M mount and shim the mount ~1.25mm further out.

     

    Of course, making the locking pin work is another consideration.  Also, safe clearance for the EOSM's EF-M electronic contacts should be considered, to prevent damage.

  10. 1 hour ago, kidzrevil said:

    Image resolution also helps with color depth so 4k 8 bit makes the 10 bit argument less and less of a factor hence why sony is sticking hlg curves in 8 bit cameras.

    Glad to know somebody gets it!

     

    1 hour ago, kidzrevil said:

    I have totally abandoned luts

    I don't use luts in grading, but a lut can help with "client preview" on set.

  11. 10 hours ago, Matthew Hartman said:

    One more time in English. :grin:

    Ha, ha!

     

    If you start out with an uncompressed file with negligible noise, the only thing other than bit depth that determines the "bandwidth" is resolution.  Of course, if the curve/tone-mapping is too contrasty in such a file, there is less room to push grading, but the bandwidth is there, nevertheless.

     

    Bandwidth probably isn't the reason that  8-bit produces banding more often than 10-bit, because some 8-bit files have more bandwidth than some 10-bit files.

     

     

  12. 20 hours ago, Matthew Hartman said:

    8bit has very little bandwidth to be pushed around in grade.  8bit cameras are setup by the manufacturer through internal testing to capture the optimum image gradiation for it's respective sensor encoding. Just because an 8bit camera offers log or exposure tools doesn't mean that the image is that mallabe. 

    The 8bit HEVC images coming out of my NX1 are vibrant and brilliant stock, and I notice very little artifacts as is. But I have little room to push channels in Lumentri before I break it and see artifacts, banding, macroblocking, noise, etc.

    Barring compression variables, the "bandwidth" of 8-bit (or any other bit depth) is largely determined by resolution.  Banding artifacts that are sometimes encountered when pushing 8-bit files (uncompressed) are not due to lack of "bandwidth" per se, but result from the reduced number of incremental thresholds in which all of the image information is contained.

     

     

  13.  

     

    Here is a 1-bit image (in an 8-bit PNG container):

    [attached image: radial_halftone.png]

     

    If you download this image and zoom into it, you will see that it consists only of black and white dots -- no grey shades (except for a few unavoidable PNG artifacts near the center).  It is essentially a 1-bit image.

     

    If you zoom out gradually, you should at some point be able to eliminate most of the moire and see a continuous gradation from black to white.

     

    Now, the question is:  how can a 1-bit image consisting of only black and white dots exhibit continuous gradation from black to white?

     

    The answer is resolution.  If an image has fine enough resolution, it can produce zillions of shades, which, of course, readily applies to each color channel in an RGB digital image.

     

    So, resolution is integral to color depth.
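    The same demonstration can be run numerically.  Here is a minimal, hypothetical sketch (the function and image are my own construction, not the attached halftone): a strictly 1-bit image, containing only 0s and 1s, is averaged over 4x4 blocks -- the digital analogue of zooming out -- and yields 17 distinct gray levels:

```python
def block_average(img, n):
    """Average an image (list of rows of 0/1 values) over n x n blocks."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx]
                 for dy in range(n) for dx in range(n)) / (n * n)
             for x in range(0, w, n)]
            for y in range(0, h, n)]

# build a 4x68 1-bit strip: the k-th 4x4 block holds k white pixels,
# for k = 0..16 -- every pixel is strictly black (0) or white (1)
img = [[0] * 68 for _ in range(4)]
for k in range(17):
    count = 0
    for y in range(4):
        for x in range(4):
            if count < k:
                img[y][k * 4 + x] = 1
                count += 1

small = block_average(img, 4)
shades = sorted(set(small[0]))
print(len(shades))  # 17 gray levels out of purely 1-bit pixels
```

    Trading 4x4 of spatial resolution bought 17 tonal levels; finer halftone cells buy correspondingly more.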

  14. 5 hours ago, cantsin said:

    You talk about perceptual color depth,

    No.  I am referring to the actual color depth inherent in an image (or imaging system).

     

     

    5 hours ago, cantsin said:

    created through dithering,

    I never mentioned dithering.  Dithering is the act of adding noise to, or re-patterning, areas of an image.

     

    The viewer's eye blending adjacent colors in a given image is not dithering.

     

     

    5 hours ago, cantsin said:

    And even that can't be measured by your formula, because it doesn't factor in viewing distance.

    Again, I am not talking about dithering -- I am talking about the actual color depth of an image.


    The formula doesn't require viewing distance because it does not involve perception.  It gives an absolute value of color depth inherent in an entire image.  Furthermore, the formula and the point of my example are two different things.


    By the way, the formula can also be used with a smaller, local area of images to compare their relative color depth, but one must use proportionally identical sized areas for such a comparison to be valid.

     

     

    5 hours ago, cantsin said:

    Or to phrase it less politely: this is bogus science.

    What I assert is perfectly valid and fundamental to imaging.  The formula is also very simple, straightforward math.

     

    However, let's forget the formula for a moment.  You apparently admit that resolution affects color depth in analog imaging:

    6 hours ago, cantsin said:

    I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image.  Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

    I am not sure why the same principle would fail to apply to digital imaging.  Your suggestion that the "non-discrete color values" of analog imaging necessitate measuring color in parts of an image to determine color depth does not negate the fact that the same process works with a digital image.

     

    The reason why I gave the example of the two RGB pixels is that I was merely trying to show in a basic way that an increase in resolution brings an increase in digital color depth (the same way it happens with an analog image).  Once one grasps that rudimentary concept, it is fairly easy to see how the formula simply quantifies digital, RGB color depth.

     

    In a subsequent post, I'll give a different example that should demonstrate the strong influence of resolution on color depth.

  15. 57 minutes ago, cantsin said:

    I found exactly three references for this equation, all in camera forums, and all posted by a forum member called tupp...

    So what?

     

     

    57 minutes ago, cantsin said:

    But seriously, I see the point that in analog film photography with its non-discrete color values, color depth can only be determined when measuring the color of each part of the image. Naturally, the number of different color values (and thus color depth) will increase with the resolution of the film or the print.

    It works the same with digital imaging.  However, in both realms (analog and digital) the absolute color depth of an image includes the entire image.

     

    I will try to demonstrate how color depth increases in digital imaging as the resolution is increased.  Consider a single RGB pixel group of size "X," positioned at a distance at which the red, green and blue pixels blend together and cannot be discerned separately by the viewer.   This RGB pixel group employs a bit depth that is capable of producing "Y" number of colors.

     

    Now, keeping the same viewing distance and the same bit depth, what if we squeezed two RGB pixels into the same space as size "X"?  Would you say that the viewer would still only see "Y" number of colors -- the same number as the single pixel that previously filled size "X" -- or would (slightly) differing shades/colors of the two RGB pixel groups blend to create more colors?

     

    What if we fit 4 RGB pixel groups into space "X"?  ... or 8 RGB pixel groups into space "X"?
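    The counting in this thought experiment can be made concrete.  A toy sketch of my own (not anyone's camera pipeline): take a single channel with only 4 code values.  One pixel can present 4 shades, but two side-by-side pixels that blend at viewing distance can present any average of two code values, which already gives 7 distinguishable shades:

```python
# one channel with 4 discrete code values (effectively a 2-bit channel)
levels = range(4)

# shades a single pixel can present
single = {v for v in levels}

# shades two adjacent pixels can present once the eye averages them
blended = {(a + b) / 2 for a in levels for b in levels}

print(len(single), len(blended))  # 4 7
```

    More pixels in the same space means more achievable averages, i.e. more apparent colors at the same bit depth.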

     

     

    57 minutes ago, cantsin said:

    In digital photography and video, however, the number of possible color values is predetermined through the color matrix for each pixel. Therefore, in digital imaging, color depth = bit depth.

    As I think I have shown above, resolution plays a fundamental role in digital color depth.

     

    Resolution is, in fact, weighted equally with bit depth in determining digital color depth.  I would be happy to explain this if you have accepted the above example.

  16. 18 hours ago, cantsin said:

    Do you have any reference for this? I couldn't find a single one online.

    I don't have a reference.  When I studied photography long before digital imaging existed, I learned that resolution is integral to color depth.

     

    Color depth in digital imaging is much more quantifiable, as it involves a given number of pixels with a given bit depth, rather than the indeterminate dyes and grain found in film emulsion (and rather than the unfixed, non-incremental values inherent in analog video).  The formula for "absolute" color depth in RGB digital imaging is:

    Color Depth = (Bit Depth x Resolution)^3
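    Taking the formula at face value (it is the poster's own construction, not a standard metric, and reading "resolution" as total pixel count is my assumption), here is how the numbers compare for two common cases, reported as log2 values since the raw products are astronomical:

```python
from math import log2

def log2_color_depth(bit_depth, width, height):
    # log2 of (bit depth x resolution)^3 = 3 * log2(bit_depth * pixels)
    return 3 * log2(bit_depth * width * height)

hd_10 = log2_color_depth(10, 1920, 1080)   # 10-bit full HD
uhd_8 = log2_color_depth(8, 3840, 2160)    # 8-bit UHD

# by this formula, 8-bit UHD outranks 10-bit HD: the fourfold pixel
# count outweighs the 10:8 bit-depth ratio
print(uhd_8 > hd_10)  # True
```

    That ordering is consistent with the claim elsewhere in the thread that 4K 8-bit weakens the case for 10-bit HD.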

  17. 6 hours ago, cantsin said:

    "Color depth or colour depth (see spelling differences), also known as bit depth, is either the number of bits used to indicate the color of a single pixel, in a bitmapped image or video frame buffer". -  https://en.wikipedia.org/wiki/Color_depth

    The notion that bit depth is identical to color depth is a common misconception that has apparently made its way into Wikipedia.

     

    The only instances in which bit depth and color depth are the same are when considering only a single photosite/pixel or a single RGB pixel-group.  Once extra pixels/pixel-groups are added, color depth and bit depth become different properties.  This is because resolution and bit depth are equally weighted factors of color depth in digital imaging.

  18. 3 hours ago, cantsin said:

    The results are attached here. "8bit" and "10bit" only refer to the color depth of the original video file; both stills are 8bit PNGs.

    Actually,  "8bit" and "10bit" refer only to bit depth -- bit depth and color depth are two different properties.

    26 minutes ago, cantsin said:

    One should not forget though that all my tests are specific to Panasonic's VLog curve - which highly compresses dynamic range

    My understanding is that Vlog doesn't actually change the dynamic range -- it changes the tone mapping within the dynamic range to capture more shades in desired exposure regions.

  19. 1 hour ago, Kisaha said:

    @tupp maybe you can't tell the difference between microphones,

    Almost every mic that I have seen on hand-held booms had an interference tube.  What kind of mic is that?

     

     

    1 hour ago, Kisaha said:

    @IronFilm participate in specialized sound forums, and if we copy-paste your statement here, would bring a lot of laughs, or rage, to the sound professionals around the world.

    They might stop laughing when the director and post production sound supervisor start asking why there is so much extraneous noise in the audio.

     

     

    1 hour ago, Kisaha said:

    Also, I am working 19 years as a sound pro, have worked in 4 countries , and what you said just ain't true!

    I have been in production for a little while.  By the way, I started out in audio recording.

     

    I have worked on all kinds of film productions in various departments, from small corporate gigs to large features on stages and on location.  I am telling you exactly what I see audio pros use on such sets.

     

     

    1 hour ago, Kisaha said:

    Ignorance on a field you clearly do not comprehend ain't a sin, just you do not have to push your wrong perspective on a subject matter that you haven't mastered, especially on a forum that people help other people.

    [snip]

    you are not "specifically" (what that does even mean?) an "audio person", so you can't understand that the critical point here ain't only, how "wide"  the reception is, but other qualities and characteristics of sound capturing, that do not apply to most people's limited knowledge.

    When I was involved in audio recording, I was utterly ignorant about the different types of mics and recording techniques.  I also was completely unaware of certain brands/models that had more desirable response for certain applications.

     

    Please enlighten those of us with limited knowledge with your mastery of the important mic properties in recording.  Specifically, please explain what is more important than the degree of directionality in regards to film audio production, given a quality, professional mic.

     

  20. On 10/23/2017 at 1:12 PM, IronFilm said:

    It seems you might be mistakenly thinking that the only microphone on a boom is a shotgun, which is very *very* wrong :-/

    You should tell that to the pros here in Hollywood.

     

    Almost all the boom operators that I see here on set are using shotguns on their booms, both indoors and outdoors.  These operators are nimble and precise, and they want clean, dry audio.

     

    I am not specifically an audio person, but I always use my boomed shotgun mic, and I have always gotten great results.  I would never want anything with wider reception.

  21. On 10/17/2017 at 6:26 AM, Anaconda_ said:

    There's some funny info in this thread.

    Shotguns are perfectly fine indoors.

    Agreed!  Otherwise, somebody needs to tell all the pro mixers here in Hollywood that they are wasting their money hiring boom operators!

     

    I suspect that this suggestion will sound alien to some here, but never mount your shotgun mic to your camera.  Always boom the shotgun mic just outside of the frame, as close as possible to the subject.  If your subject is static, such as someone sitting for an interview, you can boom from a light stand or C-stand.

     

    Of course, make sure that the shotgun mic is aimed directly at the subject's mouth or aimed directly at the action you want to record.

     

    One important trick that is often overlooked -- position the mic so that there are no air conditioners or other noise sources along the shotgun mic's axis (both in front and in back of the mic) where it is most sensitive.  So, if there is an air conditioner on camera right, boom the mic from slightly to camera right, so that it is aiming a little leftward towards the subject, perpendicular to the noise source on the right of camera.

     

    As @maxotics suggested, it is best to use both a lav and a boomed shotgun, if possible.  In such a scenario, one is always a backup for the other.

     

     
