Posts posted by tupp

  1. 42 minutes ago, SRV1981 said:

    Thanks @tupp I am using a combo of iPhone and Sona A7III video.

    If the files coming off of those devices are compressed, your machine (and NLE) has to continually decompress those files on the fly while applying effects and filters.  It's a huge demand on the computer's resources.



    42 minutes ago, SRV1981 said:

    Additionally, I am making videos from Keynote for animations etc and layering all of these clips sometimes picture in picture etc.

    "Keynote?"  That sounds like a cute Apple name for a presentation app.


    Render those clips uncompressed, at a low resolution and bitrate.



    42 minutes ago, SRV1981 said:

    I am just not sure if it will take a long time to create proxies...

    How long and how many are your video clips?


    Try creating proxies with a couple of files, and see how long it takes.


    If you have a lot of clips to convert to proxies, you could also build a cheap Linux box and batch render your proxies (with ffmpeg, handbrake, mencoder, etc.) to a fast SSD drive.  Then, edit off of that SSD.  That workflow might pay off if you are doing three videos a week.
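
    For what it's worth, such a proxy batch could be scripted in Python around ffmpeg.  This is just a rough sketch -- the folder names, proxy height, and CRF value are placeholders, and ffmpeg must be installed for the batch function to actually run:

```python
import subprocess
from pathlib import Path

def proxy_cmd(src, dst, height=540, crf=28):
    """Build an ffmpeg command that renders a small H.264 proxy.

    The scale filter keeps the aspect ratio ("-2" picks an even width),
    and a high CRF keeps the proxies tiny and easy to decode.
    """
    return [
        "ffmpeg", "-y", "-i", str(src),
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-crf", str(crf), "-preset", "fast",
        "-c:a", "aac",
        str(dst),
    ]

def batch_proxies(src_dir, out_dir, ext=".mp4"):
    """Render a proxy for every clip in src_dir into out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for clip in sorted(Path(src_dir).glob(f"*{ext}")):
        subprocess.run(proxy_cmd(clip, out / f"{clip.stem}_proxy.mp4"),
                       check=True)
```

    Point it at the folder of camera files, let it grind overnight on the cheap box, and edit the proxies off of the SSD the next morning.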



    42 minutes ago, SRV1981 said:

    ... I am making anywhere from 1-3 videos a week, which is to support my job as a teacher.  

    Evidently, teaching has changed dramatically since I attended school.  You are making more videos per week than a lot of pros make in a month.


    If this is for teaching, why do you need 4K?  Try reducing the resolution to HD and reduce the bitrate wherever possible.


    What happened to chalkboards? 



    42 minutes ago, SRV1981 said:

    Apple has offered $480 for my 2015 MBP and I was looking at a 13" 2020 MBP 2 ghz, 32gb ram, 512 SSD for $2100 or $1600 after giftcard.

    US $2100 for a small 2GHz laptop with a 512GB SSD?!   Before dropping that kind of money for a laptop of questionable power/quality, I would look into streamlining your workflow, as suggested above.


    Again, with any MacBook (or any Mac) that has a T2 chip, make sure that secure boot is disabled.

  2. Don't know much specifically about your gear, but merely using proxies should give a huge performance boost.  Working with compressed camera files can slow things down to a crawl and cause discrepancies in effects and color grading.


    You shouldn't need high quality files until grading and rendering.  Some graders transcode camera files to uncompressed and then work on them.


    Regardless, if you get a new MacBook, Louis Rossmann (who makes a living repairing MacBooks) warns folks to disable secure boot.   If your current MacBook has a T2 chip, you should make sure that secure boot is disabled.

  3. Interesting article and blog post!


    Many folks prefer the look of vintage lenses with digital sensors.  It's good that Cooke has noticed this trend and reacted to it.  Of course, they are not the only lens manufacturer to come out with brand-new "vintage" lines.


    It would be great if someone would test the character of the new Cooke "vintage" lenses against that of their old "Xtal Espress" anamorphics.

  4. On 5/20/2020 at 7:04 AM, Video Hummus said:

    Does the curve adjustment happen before the encoding?

    Not sure how the highlight/shadow control could happen after encoding.


    By encoding, do you mean "conversion to 8-bit?"  If so, I have no clue at what stage in the camera's imaging process the highlight/shadow control is applied, but I would guess that the 8-bit conversion happens early, at a low level, before most other processes.

  5. 6 hours ago, AdrParkinson said:

    How would you say it compares with the old Cinestyle profile for Canon? I always found that while it made grading easier, the bitrate just wasn't there to support it and so there were too many artifacts.

    I never experienced artifacts with Cinestyle.  Are you referring to compression artifacts in the shadows or to posterization/banding?


    At any rate, I haven't noticed problems on the E-M10 III with my highlight/shadow settings (in my brief experience so far with the camera), but, again, I am using a light touch with those settings.


    I don't have any short clips, otherwise I would post them.  When I get a chance, I will try to snip out a few seconds from one of the files for download -- I think that ffmpeg can do so without any transcoding.
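
    For the record, ffmpeg's stream copy mode can cut an excerpt without re-encoding.  Here is a rough Python sketch of the command I have in mind -- the file names and timings are just placeholders:

```python
def snip_cmd(src, dst, start="00:00:00", duration=5):
    """Build an ffmpeg command that cuts a short excerpt without transcoding.

    "-c copy" copies the audio/video streams as-is, so the snip is fast
    and loses no quality; the cut can only land on a keyframe, though,
    so the start point may shift slightly.
    """
    return [
        "ffmpeg", "-y",
        "-ss", str(start),    # seek to the in-point
        "-i", str(src),
        "-t", str(duration),  # excerpt length in seconds
        "-c", "copy",         # stream copy: no re-encode
        str(dst),
    ]
```

    Passing that list to subprocess.run() should spit out a few untouched seconds of the original file.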

  6. One thing that never gets mentioned about the E-M10 III is that, although it cannot employ custom picture profiles, it does share the highlight/shadow control feature found in other OMD cameras.  This attribute allows changes in the camera's contrast curve over a large range of values.  It's a powerful control, and one must use a light touch to avoid pushing the curve too far, as it can look unnatural.  I set the highlights to "-1" and the shadows to "+1," which levels the contrast curve a bit.  Additionally, I enable the "Muted" picture profile, with @TiJoBa's recommended "-2" setting for sharpness and with a "0" setting for saturation.


    This Imaging-Resource review gives examples of how the highlight/shadow control can affect those areas of the contrast curve.  Scroll down to the "Highlight/Shadow Control" section and "mouse over" the different values to see how it changes the detail and brightness in those areas.


    By the way, the OMD highlight/shadow control also allows adjustment of the midrange values (at the "center cross" in the display).  Eventually, I will test setting the shadows to "+2," the midrange to "+1," and the highlights to "0."



  7. 13 minutes ago, rawshooter said:

    The MagicLantern project is pretty much dead - the last nightly builds are from July 2018:  https://builds.magiclantern.fm/  There are only some (or just one?) individual developers who keep hacking individual camera ports and offering their own builds off-site.

    On the contrary, ML is thriving.  You can't go by that nightly build page, as that is not where the action is. Most of the nightly builds that everyone uses are not official.


    To see the current activity, go to the main forum page, and scroll down to the bottom section titled "Recently Updated Topics."  After briefly scanning just a few of the top messages, I see the following active developers:  Danne, masc, cmh, Levas, ilia3101, reddeercity, 2blackbar, critix.


    Our own @ZEEK is active with ML and MLV-App instructional videos.

  8. 2 minutes ago, mr_eight said:

    see attached screenshot taken from Wooden Camera's instruction booklet, so it should be possible to mount a arca-swiss clamp



    Again, you don't necessarily need a separate clamp -- you could just bolt an L-bracket directly to the cage.  Of course, using Arca-Swiss clamps or another quick-release system makes the changeovers faster (and adds height to the camera).

  9. Most editors that I know don't edit camera files -- they use lower-quality, uncompressed proxies and/or transcode the entire project to an NLE-compatible, high quality format.


    Not only can using compressed camera files slow down the NLE and cause problems with effects, but grading compressed files can also cause discrepancies with the rendered look.  My editor (and color-grading) friends usually grade with a high quality format after they have edited with proxies.


    Another trick when working on narrative projects -- use multiple drives to speed things up.  For instance, in a two person dialog scene, put all of Character A's shots on one drive, and then put all of character B's shots on another drive.  This will allow cutting between different drives instead of cutting within a single drive.  Likewise, one could put close-ups on one drive, the medium shots on another drive and all of the wide shots on yet another drive.

  10. 22 hours ago, Eugenia said:

    ML footage comes out a little bit contrasty compared to the VisionColor CineTech picture style I'm using (which is my favorite of all I ever tried, and it has more dynamic range than even Technicolor Cinestyle or Miller's C-Log). Being contrasty, it means it's more difficult to manipulate MLRAW footage.

    Aren't MLV raw files "raw?"  If so, such files have the same initial linear contrast as Canon h264 files, before the in-camera processing.  Thus, with the proper post processing, one should be able to duplicate the contrast of the h264 files (combined with a given picture style).


    I have dabbled a little with MLV files and MLV-App.  It seems one can output images with an exceedingly flat contrast merely by using one of the log profiles, if one wants to grade in a program other than MLV-App.


    On the other hand, ML instability and full res capability are understandably important concerns.

  11. It's frustrating, because on the E-M10 MkIII's LCD, the battery symbol appears outside of the image area when the camera is in video mode, but, for some reason, the HDMI output includes the battery symbol within the image.  In addition, if one is in manual still mode set to 16:9 aspect ratio, the battery symbol disappears completely from the screen but reappears within the image on the HDMI signal.

    The older E-M10 MkII can output clean HDMI simply by holding down and releasing the "Info" button.

    You could try shooting in still mode with a 3:2 aspect ratio and crop-in the top and the bottom, or, perhaps it would be better to shoot 16x9 (video or still mode) and crop-in a little all around.
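
    The crop arithmetic for the 3:2 route is simple.  Here is a quick Python sketch (the example resolutions are hypothetical, and most codecs want even dimensions):

```python
def crop_to_16x9(width, height):
    """Given a 3:2 (or any taller) frame, return the centered 16:9 crop
    as (width, new_height, vertical_offset).

    Keeps the full width and trims the top and bottom equally.
    """
    new_h = int(width * 9 / 16) // 2 * 2  # force an even height
    offset = (height - new_h) // 2        # equal trim, top and bottom
    return width, new_h, offset

# e.g. a hypothetical 5184x3456 (3:2) still frame:
# crop_to_16x9(5184, 3456) -> (5184, 2916, 270)
```

    Those three numbers map directly onto an ffmpeg "crop=w:h:0:y" filter or an NLE's crop controls.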

    By the way, the E-M10 MkIII can't output 4K video when in shooting mode -- it only outputs 4K when in playback mode.

    One can load this modified firmware on it, but I don't think that the hack enables any of the features you need.

  12. @Volumetrik  Nice tests!



    7 hours ago, Volumetrik said:

    It seems that both 14-bit and 10-bit depths handle high exposure detail very, very well. Both can be ''metered'' about the same for the highlights.  [snip]

    I had trouble seeing the difference in this scene from 10-bit and 14-bit. In my eye, they both seem equal.

    That's because bit depth and dynamic range are two completely independent properties.


    There seems to be a common misconception that bit depth and dynamic range (or contrast) are the same thing or are somehow tied together -- they're not.


    Testing the look of various bit depths is more suited to a color depth perception experiment, but we're not viewing your images on a 14-bit monitor, so the results don't give true bit depth (and, hence, true color depth).  Of course, greater bit depths give more color choices when grading, but bit depth doesn't inherently affect contrast.


    By the way, another common misconception is that bit depth and color depth are the same property -- they aren't.  Bit depth is merely one of two equally-weighted factors of color depth, with the other factor being resolution.  Essentially,  COLOR DEPTH = BIT DEPTH x RESOLUTION.
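
    To put rough numbers on that, here is a small Python sketch of the relation as I stated it (the resolutions below are just examples):

```python
def values_per_channel(bit_depth):
    """Number of distinct code values each channel can hold."""
    return 2 ** bit_depth

def color_depth(bit_depth, resolution):
    """The relation stated above: color depth scales equally with
    bit depth and with pixel count (resolution = width * height)."""
    return bit_depth * resolution

# values_per_channel(8)  -> 256
# values_per_channel(10) -> 1024
# values_per_channel(14) -> 16384
```

    Note that, by this relation, an 8-bit UHD image actually carries a greater color depth than a 10-bit HD image, because the extra resolution outweighs the extra bit depth.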

  13. 13 hours ago, Cally said:

    Hi i bought my camera second hand its a Panasonic Lumix DMC-GX80 and every time i turn it off a giant yellow ! appears. i asked my friend about it he said it was because he tried and failed to remove the recording limit. does anyone know how to factory reset this (other then in the menu cos i tried that) or to fix it, i dont care about a longer recording time or removing the limit i just the ! gone 

    Did you try the procedure, "Reverting back to initial state," given in the first post of this thread?

  14. 9 hours ago, Dustin said:

    I had a cheap diffuser box I had picked up years back but while that helped it didn’t soften the light enough.

    How big is your softbox?

  15. I didn't watch Berg's entire video (43 minutes), but here it is cued to when he starts to give detail on his experiences with trying to record 24P through HDMI.


    Here is a guy who has an M6 II with the new firmware and he has (had?) a Ninja V.  I don't know if one can be messaged on YouTube, but, if so, it might be worth asking this guy to test the M6 II at 24P with the Ninja V.


    Of course, it would not be too surprising if Canon intentionally crippled the capabilities of these cameras.


  16. As a US citizen, I can assure you that it is embarrassing when a fellow countryman ignorantly gloats about the US.  I don't encounter such folk very often in the wild, and I haven't noticed a lot of them here.


    However, there were a couple of recent posts that made me (and likely others from the US) cringe every time the forum member capitalized "AMERICAN."


    On the other hand, I must confess that I do idolise (BRITISH spelling) Trump:


  17. 1 hour ago, BTM_Pix said:

    I frequented it during 1995 to 2001 so not sure what era that would count as.

    Ah, you were there just before its swank "hipsterfication."  The Hollywood area was really nice and pleasant at that time.  It's really bad now.



    1 hour ago, BTM_Pix said:

    Probably known as its "wilderness era".

    Things were a lot better then.

  18. 5 hours ago, odie said:

    L.A. March 19, 2020


    FYI, the camera is near the front of Grauman's Chinese Theater, with the Sun setting in the background.  Normally at this time there would be throngs of tourists, actors dressed as super heroes (looking for photo victims), and "rappers" selling their CDs (never buy  the CDs nor even talk to the "rappers").


    This theater is on Hollywood Blvd., about one mile from me.  I usually avoid the tourist section of Hollywood Blvd., but since there are so few people I might take my walk in that direction today.



    2 hours ago, BTM_Pix said:

    Many years ago when I used to have to go over to do jobs in LA quite regularly, I always used to stay the Roosevelt.

    Did you stay there during its "golden era," or had they already converted the Roosevelt into a "swank" hipster haven?


  19. 8 hours ago, thebrothersthre3 said:

    If you combined three photos at three different exposures you'd have something with more dynamic range than an Alexa I'd say. I'd imagine its just a matter of a processor/sensor that can take three photos at the same time, if thats possible. 

    Other than the camera/sensor mentioned by @androidlad, scientific and machine vision cameras likely exist that have a greater dynamic range than an Alexa (and an A7S).


    The Panavision Dynamax sensor supposedly took three different exposures, which yielded a 120dB DR.



    7 hours ago, Andrew Reid said:

    Yes looks every bit as bad as expected... Honestly looks like total muck.  Contrast is your friend.

    I usually don't make qualitative judgements on the results of such early tests, especially since whoever blazed this trail into HDR was a pioneer, lacking the benefit of years of HDR tweaking by others who subsequently jumped onto the HDR bandwagon.


    However, I wouldn't call it "bad" nor does it seem to be "muck," lacking in contrast.  It seems exceedingly "Dragan-esque," with an unnatural, tone-mapped feel. It's actually interesting, but not the look into which HDR eventually developed.



    7 hours ago, Andrew Reid said:

    Look at lighting in Citizen Kane.

    Obviously, they weren't striving to match the look of "Citizen Kane" (which is a phenomenal film, by the way), nor were they trying to apply "ideal" lighting.  They were merely testing a new idea.


    Please give them a break. 
