Everything posted by tupp

  1. You can change the export settings in Keynote. Set it to 1920x1080 (or smaller) and to ProRes 422. Evidently, there is no way to adjust the frame rate (it defaults to 29.97) nor the bitrate in the current version of Keynote.

     First of all, Mac is not the easiest OS to use. By the way, almost all OSes are point-and-click, so even a Mac user should have little trouble working with them 😉. Furthermore, the free transcoding programs that I mentioned (ffmpeg, HandBrake, mencoder) all work the same on any platform, be it Mac, Windows, Linux, BSD, etc.

     I suggested a Linux rendering box for... rendering (the proxies). In such a scenario, you would still use your current MacBook to edit the renders off of the aforementioned external SSD. On the other hand, you could easily do your entire post production (including presentation animation and editing) on the same Linux box, and it could cost you as little as $200 for a used machine that is adequately snappy. The software would all be free and open source.

     You still have not given any information on the number of clips per video that you edit, nor on the length of those clips. If you use 5 clips per video and they are each 5 minutes in length, start the proxies rendering (in FCP?) on your current MacBook, and go make a cup of coffee. If you are making 3 videos per week with 2 hours of camera clips to edit for each video, a cheap, separate rendering box is likely more efficient.

     Your heightened teaching efforts are certainly welcome! However, it is doubtful that 4K is any more engaging to students than regular HD. Great!
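     Since Keynote itself exposes no frame-rate or bitrate controls, one workaround is to conform the export afterward with ffmpeg. Here is a minimal sketch; the file names and the 23.976 target rate are hypothetical, and it assumes ffmpeg is installed and on the PATH.

```python
# Conform a Keynote export to 1080p ProRes 422 at 23.976 fps via ffmpeg.
# File names are hypothetical; assumes ffmpeg is on the PATH.
import subprocess

src = "keynote_export.mov"      # the 29.97 file Keynote produced (assumed name)
dst = "lecture_conformed.mov"

subprocess.run(
    [
        "ffmpeg",
        "-i", src,
        "-vf", "scale=1920:-2",   # resize to 1920 wide, keep the aspect ratio
        "-r", "24000/1001",       # force 23.976 fps on output
        "-c:v", "prores_ks",      # ffmpeg's ProRes encoder
        "-profile:v", "2",        # profile 2 = ProRes 422 (standard)
        "-c:a", "pcm_s16le",      # uncompressed PCM audio, NLE-friendly
        dst,
    ],
    check=True,
)
```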
  2. If the files coming off of those devices are compressed, your machine (and NLE) has to continually uncompress those files on the fly, while applying effects and filters. It's a huge demand on the computer's resources.

     "Keynote?" That sounds like a cute Apple name for a presentation app.

     Make those clips uncompressed at a low bitrate. How long and how many are your video clips? Try creating proxies with a couple of files, and see how long it takes. If you have a lot of clips to convert to proxies, you could also build a cheap Linux box and batch render your proxies (with ffmpeg, HandBrake, mencoder, etc.) to a fast SSD drive. Then, edit off of that SSD. That workflow might pay off if you are doing three videos a week.

     Evidently, teaching has changed dramatically since I attended school. You are making more videos per week than a lot of pros make in a month. If this is for teaching, why do you need 4K? Try reducing the resolution to HD, and reduce the bitrate wherever possible. What happened to chalkboards?

     US $2,100 for a small 2 GHz laptop with a 512 GB SSD?! Before dropping that kind of money on a laptop of questionable power/quality, I would look into streamlining your workflow, as suggested above. Again, with any MacBook (or any Mac) that has a T2 chip, make sure that secure boot is disabled.
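     To make the batch-rendering idea concrete, here is a rough sketch of driving ffmpeg from Python to render ProRes proxies onto a fast SSD. The folder paths and the .MP4 extension are assumptions for illustration, not from the original post.

```python
# Batch-render ProRes 422 Proxy files from a folder of camera clips to an
# SSD, per the ffmpeg workflow suggested above. Paths are hypothetical.
import pathlib
import subprocess

SOURCE = pathlib.Path("/media/camera_cards")     # assumed camera-file folder
PROXIES = pathlib.Path("/mnt/fast_ssd/proxies")  # assumed SSD destination
PROXIES.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE.glob("*.MP4")):        # extension is an assumption
    out = PROXIES / (clip.stem + "_proxy.mov")
    if out.exists():                             # skip already-rendered proxies
        continue
    subprocess.run(
        [
            "ffmpeg",
            "-i", str(clip),
            "-vf", "scale=1280:-2",   # small frame keeps the NLE responsive
            "-c:v", "prores_ks",
            "-profile:v", "0",        # profile 0 = ProRes 422 Proxy
            "-c:a", "pcm_s16le",
            str(out),
        ],
        check=True,
    )
```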
  3. Don't know much specifically about your gear, but merely using proxies should give a huge performance boost. Working with compressed camera files can slow things down to a crawl and cause discrepancies in effects and color grading. You shouldn't need high-quality files until grading and rendering. Some graders transcode camera files to uncompressed and then work on them. Regardless, if you get a new MacBook, Louis Rossmann (who makes a living repairing MacBooks) warns folks to disable secure boot. If your current MacBook has a T2 chip, you should make sure that secure boot is disabled.
  4. In this lens re-greasing video, the vlogger used LiquiMoly LM47 MoS2. On the other hand, the same guy used a different grease in an earlier video.
  5. Interesting article and blog post! Many folks prefer the look of vintage lenses with digital sensors. It's good that Cooke has noticed this trend and reacted to it. Of course, they are not the only lens manufacturer to come out with brand-new "vintage" lines. It would be great if someone would test the character of the new Cooke "vintage" lenses against that of their old "Xtal Express" anamorphics.
  6. Not sure how the highlight/shadow control could happen after encoding. By encoding, do you mean "conversion to 8-bit?" If so, I have no clue as to what stage in the camera's imaging process that the highlight/shadow control is applied, but I would guess that the 8-bit conversion happens early at a low level, before most other processes.
  7. I never experienced artifacts with Cinestyle. Are you referring to compression artifacts in the shadows or to posterization/banding? At any rate, I haven't noticed problems on the E-M10 III with my highlight/shadow settings (in my brief experience so far with the camera), but, again, I am using a light touch with those settings. I don't have any short clips; otherwise, I would post them. When I get a chance, I will try to snip out a few seconds from one of the files for download -- I think that ffmpeg can do so without any transcoding.
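     For reference, the no-transcode snip mentioned above can indeed be done with ffmpeg's stream copy. A minimal sketch, with a hypothetical file name and times:

```python
# Cut a few seconds out of a clip without re-encoding, using ffmpeg's
# stream copy. The file name and the times are hypothetical.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-ss", "30",            # start ~30 s in (stream copy cuts on keyframes)
        "-i", "EM10_clip.mov",  # assumed source file
        "-t", "5",              # keep about 5 seconds
        "-c", "copy",           # copy audio/video streams as-is -- no transcoding
        "snippet.mov",
    ],
    check=True,
)
```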
  8. One thing that never gets mentioned about the E-M10 III is that, although it cannot employ custom picture profiles, it does share the highlight/shadow control feature found in other OMD cameras. This attribute allows changes in the camera's contrast curve over a large range of values. It's a powerful control, and one must use a light touch to avoid pushing the curve too far, as it can look unnatural. I set the highlights to "-1" and the shadows to "+1," which levels the contrast curve a bit. Additionally, I enable the "Muted" picture profile, with @TiJoBa's recommended "-2" setting for sharpness and with a "0" setting for saturation. This Imaging-Resource review gives examples of how the highlight/shadow control can affect those areas of the contrast curve. Scroll down to the "Highlight/Shadow Control" section and "mouse over" the different values to see how it changes the detail and brightness in those areas. By the way, the OMD highlight/shadow control also allows adjustment of the midrange values (at the "center cross" in the display). Eventually, I will test setting the shadows to "+2," the midrange to "+1," and the highlights to "0."
  9. On the contrary, ML is thriving. You can't go by that nightly build page, as that is not where the action is. Most of the nightly builds that everyone uses are not official. To see the current activity, go to the main forum page, and scroll down to the bottom section titled "Recently Updated Topics." After briefly scanning just a few of the top messages, I see the following active developers: Danne, masc, cmh, Levas, ilia3101, reddeercity, 2blackbar, critix. Our own @ZEEK is active with ML and MLV-App instructional videos.
  10. Yep. Again, you don't necessarily need a separate clamp -- you could just bolt an L-bracket directly to the cage. Of course, using Arca-Swiss clamps or other quick-release system makes the changeovers faster (and adds height to the camera).
  11. I think most L-Brackets take 1/4-20 threads on both sides, so you might not even need a separate mounting clamp -- Arca-Swiss or otherwise.
  12. How to edit H265?

    Most editors that I know don't edit camera files -- they use lower-quality, uncompressed proxies and/or transcode the entire project to an NLE-compatible, high-quality format. Not only can using compressed camera files slow down the NLE and cause problems with effects, but grading compressed files can also cause discrepancies with the rendered look. My editor (and color-grading) friends usually grade with a high-quality format after they have edited with proxies.

    Another trick when working on narrative projects -- use multiple drives to speed things up. For instance, in a two-person dialog scene, put all of Character A's shots on one drive, and then put all of Character B's shots on another drive. This will allow cutting between different drives instead of cutting within a single drive. Likewise, one could put the close-ups on one drive, the medium shots on another drive, and all of the wide shots on yet another drive.
  13. Aren't MLV raw files "raw"? If so, such files have the same initial linear contrast that Canon h264 files have before the in-camera processing. Thus, with the proper post processing, one should be able to duplicate the contrast of the h264 files (combined with a given picture style). I have dabbled a little with MLV files and MLV-App. It seems one can output images with an exceedingly flat contrast by merely using one of the log profiles, if one wants to grade in a program other than MLV-App. On the other hand, ML instability and full-res capability are understandably important concerns.
  14. Not familiar with the Wooden Camera cage, but most Arca-Swiss clamps have standard 1/4" or 3/8" female threads or have a countersunk hole. So, if the cage has a way to attach cameras/plates with those threads/hole, then it should be easy to find an Arca-Swiss clamp that works.
  15. It's frustrating, because on the E-M10 MkIII's LCD, the battery symbol appears outside of the image area when the camera is in video mode, but, for some reason, the HDMI output includes the battery symbol within the image. In addition, if one is in manual still mode set to a 16:9 aspect ratio, the battery symbol disappears completely from the screen but reappears within the image on the HDMI signal. The older E-M10 MkII can output clean HDMI simply by holding down and releasing the "Info" button. You could try shooting in still mode with a 3:2 aspect ratio and crop in the top and the bottom, or, perhaps, it would be better to shoot 16:9 (video or still mode) and crop in a little all around. By the way, the E-M10 MkIII can't output 4K video when in shooting mode -- it only outputs 4K when in playback mode. One can load this modified firmware on it, but I don't think that the hack enables any of the features you need.
  16. @Volumetrik Nice tests! That's because bit depth and dynamic range are two completely independent properties. There seems to be a common misconception that bit depth and dynamic range (or contrast) are the same thing or are somehow tied together -- they're not. Testing the look of various bit depths is more suited to a color depth perception experiment, but we're not viewing your images on a 14-bit monitor, so the results don't give true bit depth (and, hence, true color depth). Of course, greater bit depths give more color choices when grading, but bit depth doesn't inherently affect contrast.

     By the way, another common misconception is that bit depth and color depth are the same property -- they aren't. Bit depth is merely one of two equally-weighted factors of color depth, with the other factor being resolution. Essentially, COLOR DEPTH = BIT DEPTH x RESOLUTION.
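     One hedged way to make that formula concrete (my own reading of the post, not an established definition) is to count color depth as the total color bits per frame:

```latex
% One reading of COLOR DEPTH = BIT DEPTH x RESOLUTION: total color bits
% per frame, with bit depth b per channel, c channels, and R pixels.
\[
  \text{color depth}
  \;=\;
  \underbrace{(b \cdot c)}_{\text{bits per pixel}}
  \times
  \underbrace{R}_{\text{resolution}}
\]
% Example: 8-bit RGB (b = 8, c = 3) at 1920 x 1080 (R = 2{,}073{,}600):
\[
  24 \times 2{,}073{,}600 \;\approx\; 49.8\ \text{Mbit per frame.}
\]
% Doubling either the bit depth or the resolution doubles the result,
% i.e. the two factors are equally weighted, as the post asserts.
```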
  17. Did you try the procedure, "Reverting back to initial state," given in the first post of this thread?
  18. Evidently, the Octopus prototype with the Ximea m4/3 module has been up and running for a while. Hopefully, the Octopus folks can get this camera into production.
  19. Lighting Help

    How big is your softbox?
  20. I didn't watch Berg's entire video (43 minutes), but here it is cued to the point where he starts to detail his experiences with trying to record 24P through HDMI. Here is a guy who has an M6 II with the new firmware, and he has (had?) a Ninja V. I don't know whether one can send messages on YouTube, but, if so, it might be worth asking this guy to test the M6 II at 24P with the Ninja V. Of course, it would not be too surprising if Canon intentionally crippled the capabilities of these cameras.
  21. As a US citizen, I can assure you that it is embarrassing when a fellow countryman ignorantly gloats about the US. I don't encounter such folk very often in the wild, and I haven't noticed a lot of them here. However, there were a couple of recent posts that made me (and likely others from the US) cringe every time the forum member capitalized "AMERICAN." On the other hand, I must confess that I do idolise (BRITISH spelling) Trump:
  22. Ah, you were there just before its swank "hipsterfication." The Hollywood area was really nice and pleasant at that time. It's really bad now. Things were a lot better then.
  23. FYI, the camera is near the front of Grauman's Chinese Theater, with the Sun setting in the background. Normally at this time, there would be throngs of tourists, actors dressed as superheroes (looking for photo victims), and "rappers" selling their CDs (never buy the CDs nor even talk to the "rappers"). This theater is on Hollywood Blvd., about one mile from me. I usually avoid the tourist section of Hollywood Blvd., but since there are so few people, I might take my walk in that direction today. Did you stay there during its "golden era," or had they already converted the Roosevelt into a "swank" hipster haven?