tupp

Posts posted by tupp

  1. Most editors that I know don't edit camera files -- they use lower-quality, uncompressed proxies and/or transcode the entire project to an NLE-compatible, high quality format.

     

    Not only can using compressed camera files slow down the NLE and cause problems with effects, but grading compressed files can also cause discrepancies with the rendered look.  My editor (and color-grading) friends usually grade with a high quality format after they have edited with proxies.

     

    Another trick when working on narrative projects -- use multiple drives to speed things up.  For instance, in a two-person dialog scene, put all of Character A's shots on one drive, and then put all of Character B's shots on another drive.  This allows cutting between different drives instead of cutting within a single drive.  Likewise, one could put the close-ups on one drive, the medium shots on another drive and all of the wide shots on yet another drive.
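    If one has a lot of camera files, generating the proxies can be scripted.  Here is a minimal Python sketch that merely builds an ffmpeg command line for each source clip -- the ProRes-Proxy settings, file names and output folder are assumptions for illustration, not anyone's actual pipeline:

```python
from pathlib import Path

def proxy_command(src, proxy_dir="proxies", height=540):
    """Build an ffmpeg command that transcodes one camera file
    to a small, edit-friendly proxy (ProRes Proxy here)."""
    out = Path(proxy_dir) / (Path(src).stem + "_proxy.mov")
    return [
        "ffmpeg", "-i", str(src),
        "-vf", f"scale=-2:{height}",             # downscale, keep aspect
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
        "-c:a", "copy",                          # leave audio untouched
        str(out),
    ]

cmd = proxy_command("A001_C003.mov")
print(" ".join(cmd))
```

    One would then run each command with subprocess.run(), or simply print the commands into a shell script.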

  2. 22 hours ago, Eugenia said:

    ML footage comes out a little bit contrasty compared to the VisionColor CineTech picture style I'm using (which is my favorite of all I ever tried, and it has more dynamic range than even Technicolor Cinestyle or Miller's C-Log). Being contrasty, it means it's more difficult to manipulate MLRAW footage.

    Aren't MLV raw files "raw"?  If so, such files have the same initial linear contrast that Canon's h264 footage has before the in-camera processing.  Thus, with the proper post processing, one should be able to duplicate the contrast of the h264 files (combined with a given picture style).

     

    I have dabbled a little with MLV files and MLV-App.   It seems one can output images with an exceedingly flat contrast by merely using one of the log profiles, if one wants to grade in a program other than MLV-App.
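    To illustrate why a log profile flattens the contrast, here is a generic log curve in Python.  It is NOT the actual transfer function of any MLV-App profile -- just the general shape that such curves share, with assumed black and white points:

```python
import math

def log_encode(linear, black=0.001, white=1.0):
    """Map a linear scene value (0..1) to a flat, log-encoded
    value (0..1).  A generic curve for illustration only."""
    linear = max(linear, black)           # clip below the black point
    return math.log(linear / black) / math.log(white / black)

# Mid-grey (18%) lands around 0.75 of the output range instead
# of 0.18 -- the flat, lifted look one sees in log footage.
print(round(log_encode(0.18), 3))
```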

     

    On the other hand, ML instability and full res capability are understandably important concerns.

  3. It's frustrating, because on the E-M10 MkIII's LCD, the battery symbol appears outside of the image area when the camera is in video mode, but, for some reason, the HDMI output includes the battery symbol within the image.  In addition, if one is in manual still mode set to 16:9 aspect ratio, the battery symbol disappears completely from the screen but reappears within the image on the HDMI signal.

    The older E-M10 MkII can output clean HDMI simply by holding down and releasing the "Info" button.

    You could try shooting in still mode with a 3:2 aspect ratio and crop in the top and the bottom, or, perhaps, it would be better to shoot 16:9 (video or still mode) and crop in a little all around.

    By the way, the E-M10 MkIII can't output 4K video when in shooting mode -- it only outputs 4K when in playback mode.

    One can load this modified firmware on it, but I don't think that the hack enables any of the features you need.

  4. @Volumetrik  Nice tests!

     

     

    7 hours ago, Volumetrik said:

    It seems that both 14-bit and 10-bit depths handle high exposure detail very, very well. Both can be "metered" about the same for the highlights.  [snip]

    I had trouble seeing the difference in this scene between 10-bit and 14-bit. To my eye, they both seem equal.

    That's because bit depth and dynamic range are two completely independent properties.

     

    There seems to be a common misconception that bit depth and dynamic range (or contrast) are the same thing or are somehow tied together -- they're not.

     

    Testing the look of various bit depths is more suited to a color depth perception experiment, but we're not viewing your images on a 14-bit monitor, so the results don't give true bit depth (and, hence, true color depth).  Of course, greater bit depths give more color choices when grading, but bit depth doesn't inherently affect contrast.

     

    By the way, another common misconception is that bit depth and color depth are the same property -- they aren't.  Bit depth is merely one of two equally-weighted factors of color depth, with the other factor being resolution.  Essentially,  COLOR DEPTH = BIT DEPTH x RESOLUTION.
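    Taking that formula literally, one can compare the total color information between formats.  This little Python sketch is just my simplified reading of it (bits per pixel multiplied by pixel count):

```python
def colour_depth(bits_per_channel, width, height, channels=3):
    """Illustrate COLOR DEPTH = BIT DEPTH x RESOLUTION:
    per-pixel bit depth multiplied by the number of pixels."""
    bit_depth = bits_per_channel * channels   # bits per pixel
    resolution = width * height               # pixel count
    return bit_depth * resolution             # total bits of color info

# A 10-bit UHD frame carries more total color information than a
# 14-bit HD frame, even though its per-pixel bit depth is lower.
print(colour_depth(10, 3840, 2160) > colour_depth(14, 1920, 1080))
```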

  5. 13 hours ago, Cally said:

    Hi, I bought my camera second hand.  It's a Panasonic Lumix DMC-GX80, and every time I turn it off a giant yellow "!" appears.  I asked my friend about it, and he said it was because he tried and failed to remove the recording limit.  Does anyone know how to factory reset this (other than in the menu, cos I tried that) or to fix it?  I don't care about a longer recording time or removing the limit -- I just want the "!" gone.

    Did you try the procedure, "Reverting back to initial state," given in the first post of this thread?

  6. 9 hours ago, Dustin said:

    I had a cheap diffuser box I had picked up years back but while that helped it didn’t soften the light enough.

    How big is your softbox?

  7. I didn't watch Berg's entire video (43 minutes), but here it is cued to when he starts to give detail on his experiences with trying to record 24P through HDMI.

     

    Here is a guy who has an M6 II with the new firmware and he has (had?) a Ninja V.  I don't know if one can be messaged on YouTube, but, if so, it might be worth asking this guy to test the M6 II at 24P with the Ninja V.

     

    Of course, it would not be too surprising if Canon intentionally crippled the capabilities of these cameras.

     

  8. As a US citizen, I can assure you that it is embarrassing when a fellow countryman ignorantly gloats about the US.  I don't encounter such folk very often in the wild, and I haven't noticed a lot of them here.

     

    However, there were a couple of recent posts that made me (and likely others from the US) cringe every time the forum member capitalized "AMERICAN."

     

    On the other hand, I must confess that I do idolise (BRITISH spelling) Trump:

     

  9. 1 hour ago, BTM_Pix said:

    I frequented it from 1995 to 2001, so I'm not sure what era that would count as.

    Ah, you were there just before its swank "hipsterfication."  The Hollywood area was really nice and pleasant at that time.  It's really bad now.

     

     

    1 hour ago, BTM_Pix said:

    Probably known as its "wilderness era".

    Things were a lot better then.

  10. 5 hours ago, odie said:

    L.A. March 19, 2020

    CBB32E6B-B8A5-4910-B226-83D2A873B90C.jpeg

    FYI, the camera is near the front of Grauman's Chinese Theater, with the sun setting in the background.  Normally at this time, there would be throngs of tourists, actors dressed as superheroes (looking for photo victims), and "rappers" selling their CDs (never buy the CDs nor even talk to the "rappers").

     

    This theater is on Hollywood Blvd., about one mile from me.  I usually avoid the tourist section of Hollywood Blvd., but since there are so few people I might take my walk in that direction today.

     

     

    2 hours ago, BTM_Pix said:

    Many years ago, when I used to have to go over to do jobs in LA quite regularly, I always used to stay at the Roosevelt.

    Did you stay there during its "golden era," or had they already converted the Roosevelt into a "swank" hipster haven?

     

  11. 8 hours ago, thebrothersthre3 said:

    If you combined three photos at three different exposures, you'd have something with more dynamic range than an Alexa, I'd say. I'd imagine it's just a matter of a processor/sensor that can take three photos at the same time, if that's possible.

    Other than the camera/sensor mentioned by @androidlad, scientific and machine vision cameras likely exist that have a greater dynamic range than an Alexa (and an A7S).

     

    The Panavision Dynamax sensor supposedly took three different exposures, which yielded a 120dB DR.
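    For anyone wanting to compare that 120dB figure with the usual "stops" specs, the conversion is straightforward (dB = 20 x log10 of the contrast ratio, and stops = log2 of the same ratio):

```python
import math

def db_to_stops(db):
    """Convert a sensor dynamic-range spec in decibels to
    photographic stops:  dB = 20*log10(ratio),  stops = log2(ratio)."""
    ratio = 10 ** (db / 20)   # recover the linear contrast ratio
    return math.log2(ratio)

# 120 dB corresponds to a 10^6 : 1 ratio -- roughly 20 stops.
print(round(db_to_stops(120), 1))
```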

     

     

    7 hours ago, Andrew Reid said:

    Yes looks every bit as bad as expected... Honestly looks like total muck.  Contrast is your friend.

    I usually don't make qualitative judgements on the results of such early tests, especially since whoever blazed this trail into HDR was a pioneer, lacking the benefit of years of HDR tweaking by others who subsequently jumped onto the HDR bandwagon.

     

    However, I wouldn't call it "bad" nor does it seem to be "muck," lacking in contrast.  It seems exceedingly "Dragan-esque," with an unnatural, tone-mapped feel. It's actually interesting, but not the look into which HDR eventually developed.

     

     

    7 hours ago, Andrew Reid said:

    Look at lighting in Citizen Kane.

    Obviously, they weren't striving to match the look of "Citizen Kane" (which is a phenomenal film, by the way), nor were they trying to apply "ideal" lighting.  They were merely testing a new idea.

     

    Please give them a break. 

  12. 17 hours ago, kaylee said:

    sorry, I don't understand, what do you guys mean by "triple exposure video"...? how would that work, and for what purpose...?

    There is a bit of overlap in terminology, so it's confusing.  By "triple exposure video," I think that the OP meant "HDR video with three different exposure levels."

     

    Historically, the terms "double exposure" and "multiple exposure" meant exposing a single piece of emulsion in-camera, two or more times, to combine different images into something like this:

    relander-6-768x658@2x.jpg

     

    With digital imaging, the multiple exposure process is a little different, because each exposure is a separate image and, thus, a separate file.  Some digital cameras offer various ways to combine files and to create multiple exposure images, such as the Canon 5D mkIII and the Olympus OMD cameras.

     

    Undoubtedly, most who create "multiple exposure" images today are combining the images in post/editing.
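    Combining the exposures in post can be as simple as averaging the pixel values.  Here is a toy Python sketch using tiny lists as stand-in "images" -- real tools offer far more blend modes than a plain average:

```python
def multiple_exposure(images):
    """Blend several same-sized grayscale 'images' (lists of pixel
    rows) by averaging -- one simple way to combine exposures in
    post.  Averaging is just the simplest possible blend mode."""
    blended = []
    for rows in zip(*images):                 # matching rows across images
        blended.append([sum(px) / len(px)     # average matching pixels
                        for px in zip(*rows)])
    return blended

a = [[0, 100], [200, 255]]
b = [[255, 100], [0, 55]]
print(multiple_exposure([a, b]))   # [[127.5, 100.0], [100.0, 155.0]]
```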

     

     

    On 3/18/2020 at 12:00 AM, heart0less said:

    IMHO, it was one of the biggest accomplishments of Magic Lantern.  Canons back then didn't have the best dynamic range, but implementing Dual ISO on them changed that.

    Magic Lantern offers two HDR video methods:  Dual ISO (which you mentioned) and HDR video.  I don't remember which method appeared first.

     

    ML's Dual ISO is a technique in which every other row of pixels is given a different exposure/gain.  So, for example, all "even" rows are given a darker exposure while all "odd" rows are given a brighter exposure.  The separate exposures are "blended" to make a single HDR image.  This method can induce aliasing/moire.
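    The row-interleave idea can be sketched in a few lines of Python.  Note that this toy version is NOT Magic Lantern's actual cr2hdr algorithm (which does sophisticated interpolation) -- the gain value and the simple pair-averaging are assumptions for illustration:

```python
def dual_iso_blend(frame, dark_gain=4.0):
    """Toy sketch of the Dual ISO idea:  even rows (0-indexed) were
    exposed dark, odd rows bright.  Normalize each dark row back up,
    then average each dark/bright pair into one HDR row."""
    hdr = []
    for dark, bright in zip(frame[0::2], frame[1::2]):
        hdr.append([(d * dark_gain + b) / 2
                    for d, b in zip(dark, bright)])
    return hdr

frame = [[10, 20],    # even row: dark exposure
         [45, 70],    # odd row:  bright exposure
         [5, 12],
         [18, 50]]
print(dual_iso_blend(frame))   # [[42.5, 75.0], [19.0, 49.0]]
```

    Notice that the output has half the rows of the input, which hints at why the method can induce aliasing/moire.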

     

    Our own @ZEEK made a video tutorial on setting up a Dual ISO raw video on a Canon EOSM:

     

    Incidentally, "dual ISO" has morphed into a term for more recent cameras featuring a sensor that can be "set" to one of two native ISOs.

     

    On the other hand, ML's HDR video simply gives a different exposure/gain to every other frame.  For instance, every "odd" frame is given a darker exposure while every "even" frame is given a brighter exposure.  This technique doesn't suffer the aliasing/moire of ML's Dual ISO, but I seem to recall reports of motion artifacts.
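    The frame-interleave version can be sketched the same way -- again, just an illustration with an assumed gain, not ML's actual processing:

```python
def hdr_video_blend(frames, dark_gain=4.0):
    """Toy sketch of ML-style HDR video recovery:  the frames
    alternate dark/bright (first frame dark here).  Lift each dark
    frame back up, then average each dark/bright pair."""
    out = []
    for dark, bright in zip(frames[0::2], frames[1::2]):
        out.append([(d * dark_gain + b) / 2
                    for d, b in zip(dark, bright)])
    return out

# four 1-D "frames": dark, bright, dark, bright
frames = [[30, 60], [100, 200], [25, 50], [120, 180]]
print(hdr_video_blend(frames))   # [[110.0, 220.0], [110.0, 190.0]]
```

    Each output frame mixes two different instants in time, which is where the reported motion artifacts come from (and the effective frame rate is halved).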

     

    By the way, prior to ML's Dual ISO, Panavision touted their Dynamax HDR sensor, which was claimed to feature three different, nested pixel arrays, with each array having a different exposure (somewhat similar to ML Dual ISO).  Panavision has remained mysterious about why they abandoned the idea (and subsequently sold their sensor foundry).

     

     

    On 3/17/2020 at 6:36 PM, leslie said:

    just line up three cameras together and blend in post ?

    It was done in 2010 with two Canon 5D mkII's and a beam splitter (probably the mirror type):

     

     

    Here is another early method from 2011 with a single Canon 7D and a post process:

     

     

    For three cameras, one could possibly use a prism beam splitter.  If one could put the beam splitter and three shallow-mount cameras in a light-tight enclosure, it might be possible to use a single "taking" lens.

    There might be ways that a cinematographer and editor could "interpret" a performance.  And, perhaps, lighting could influence a performance, as could lens choice and DOF (if the actor/performer could see those results on set).

     

    However, it is not the cinematographer's job to influence a performance -- that is the director's job.

     

    In regards to an editor or cinematographer "interpreting" a performance, both crew members exist to serve the director's vision.  So, they can suggest ideas and execute their craft, but the director necessarily has the final word.

  14. 5 hours ago, leslie said:

    Anyone done anything with either 8mm film or 16mm?

    Yes.

     

    6 hours ago, leslie said:

    One of the guys at the men's shed has a lot of his own stuff from yesteryear and mentioned that he'd like to get it converted to digital. I don't have an issue with a roll of 35mm, but hundreds of feet or more is a bit daunting.

    That requires a telecine or a film chain.  If it's serious work, it's best to bring the footage to someone who has a decent set-up.

     

     

  15. 20 minutes ago, Andrew Reid said:

    We had lens rehousing, now we need camera rehousing.

    A complete camera rehousing has already been done with a BM camera.

     

    Due to BM's "design aesthetic," there also have been several other mods to their cameras.  Remember the Wooden camera BMPC lens mount mod?  How about their current BMPCC6K lens mount mod kit?

     

     

    29 minutes ago, Andrew Reid said:

    Let's see a Chinese company take the inside out of the Pocket 6K and put it all in a different body... Adding the hinge to the screen, don't forget.

    That would be great!  By the way, a Chinese company has already added the hinged screen.

  16. On 2/7/2020 at 3:39 AM, leslie said:

    Can you point my to a link where industrial design is specified ?

    What is the full name of the company again?...

     

    Blackmagic Design is a textbook example of a small, newer company focusing on form over function.
