
Everything posted by tupp

  1. Thanks for the link! Okay. So, this is an interesting approach! RGBW sensors have appeared before, but not in this arrangement. A 6x6 cell seems a little unwieldy and complex -- there was nothing wrong with the simple 3x1 RGB sensor on the Sony F35. However, if Blackmagic made the filter work and if the results are good, I would like to try it. I just wish that they would offer this type of sensor in a lower resolution, on a less expensive and more versatile camera body.
  2. Newsflash: Rich patent troll defeats underfunded but justified plaintiff in US patent court!
  3. Me, too... like perhaps a shallow lens mount -- is that too much to ask? Some try to stand out from the rest of the pack with unique images. Such individuals value the versatility to be able to use a variety of shallow-mount lenses and shallow-mount creative adapters. Built-in NDs that prevent shallow mounts are not useful when one needs such versatility. Besides, there are plenty of adapters with built-in NDs that can be used on shallow-mount cameras.
  4. Does Blackmagic actually claim that the sensor uses an RGB filter? Also, if it is an RGB filter, wouldn't the matrix be 3x1 (but staggered?) -- similar to the striped RGB filters that we've seen on CCD sensors? If Blackmagic doesn't state that they are actually using an RGB filter, I suspect that they are doing a low-level "twist" on a Bayer filter and using creative marketing.
  5. Plus, the RED patent is weak. There is plenty of prior art in regards to wavelet compression of video.
  6. I have the E-M10 III. I love it, and it has a few advantages (and disadvantages) compared to the GX85. However, if I didn't have the E-M10 III nor any lenses, I would seriously consider this US$498 GX85 bundle with 12-32mm and 45-150mm lenses.
  7. 12k? Meh... I'll keep using my Forza 18K camera from 2014. The resolution and frame rates do absolutely nothing for me. On the other hand, if this camera is using a striped RGB CMOS sensor, that IS notable! The striped RGB CCD sensors always looked good, especially on the Sony F35. Does anyone see where Blackmagic states that the camera actually utilizes a striped RGB filter on the sensor? I suspect that they are being creative with their claims here, and could be using a Bayer filter with a twist on the low-level RGB conversion at the A-D converter.
  8. Press release here. I've never even tried H.265!
  9. On one feature I shot, we used a "Snow White"-themed rolling case, very similar to this: We put a little padding in it, and it was very convenient for carrying the fully built camera. When we wanted the camera on set, we would call, "'Princess Case' on set!" When shooting in crowds, we could walk a fair distance away from the case, and nobody bothered it.
  10. I am concerned about the risk of losing your camera, and even more so about the hazard that the rig poses for motorists behind the hero vehicle. At minimum, replace the open hooks with strong carabiners or with removable chain links. The top carabiner needs to completely encompass that runner on the luggage rack, so that it cannot fall off during a bounce. In addition, to prevent the bounce/wobble, solidity could be added with two extra ratchet straps (or motorcycle straps) -- one strap tensioned between the camera platform and the top of the car, and the other strap tensioned between the camera platform and the bottom of the car. These extra straps would also increase safety.
  11. Not familiar with Glidecam (as much as Steadicam), but there has to be a Glidecam model with enough capacity for a Sony F3. Be aware that Steadicam-style stabilizers are not something that the typical gimbal-kiddie can just pick up and instantly start shooting -- it takes a bit of practice and training. The best Steadicam operators have years of experience. On the other hand, I would bet that gimbals exist that could hold an F3 with an FD prime. I see two alarming problems with the rig pictured. First, that tag line with the hooks should be replaced with a solid strut (or, even better, two "triangulated" struts); every significant bump will cause the camera to bounce up and down, ruining that part of the take. Second, there is no backup "safety" portion of the rig -- and this is a rig that relies on suction cups and a tag line with open hooks. With each bounce, there is a possibility that one of those open hooks could fall off of its pick point, and if that happens, "that's all she wrote" for the camera (and possibly for a motorcyclist following the car). If they had to go with a tag line (instead of a strut), those open hooks should have been carabiners. Regardless, any car rig should have separate, properly-tensioned safety straps, and the pictured rig has nothing in that regard. A typical grip hostess tray with risers and a head would be more secure and easier to rig and adjust. Also, I am not an audio person, but why is that mic mounted like that on the camera for a car rig? I strongly urge you to review several different tutorials on how to properly and safely rig car mounts before trying to do so yourself.
  12. Meant to say, "The darker the image, the more saturation seems conspicuous -- unless, of course, the image gets so dark that it's mostly black."
  13. Generally, the darker the image, the more the saturation. Furthermore, most digital cameras give a lot of saturation in their non-raw files, and Canon cameras additionally boost the reds. Starting with the brightest sample posted, the image below was yielded merely by boosting the gamma/mid-tones, bringing the blacks down to zero, reducing the saturation and backing off the reds (for Canon): If one wants to keep it a little darker (and still have it look like daytime), be more gentle in boosting the mid-tones but further reduce the saturation, and keep the blacks at zero and keep the Canon reds reduced as in the image directly above: By the way, the fringing/chromatic-aberration doesn't look too bad, and a light touch with a CA/fringe filter should take care of it nicely.
  14. tupp

    bmp4k adventures

    Of course, all of the battery plates that I have linked/mentioned already have mounting holes/screws. Furthermore, each of the videos that I have linked shows the plates mounted to cameras and cages.
  15. The first battery plate that I linked (with the BMP4K power cable) is metal. Regardless, here is a video that gives a rundown on the various sizes of NP batteries.
  16. Make sure your power switch is turned off when you plug in your light (or any other device); wear gloves if you are not familiar with how to handle hot lights; only mount gels to the side barndoors or to a gel frame made for the fixture.
  17. Tungsten lights are a good deal right now. In a recent interview, Roger Deakins talked about LEDs vs. tungsten.
  18. NP batteries are a good way to go for most cameras, and there are plenty of options both more expensive and much less expensive than that SmallRig battery plate. Here is an NP battery plate with a BMP4K connector for US$38. If I had a BMP4K and I wanted to use NP batteries, I would just get a cheap plastic battery plate and wire it to a BMP4K connector (with pigtails), which would probably give a total cost of about US$15. Here is Chung Dha's inexpensive NP battery plate video. Keep in mind that some NP battery plates allow slight leeching of current from the battery by the connected equipment and by the plate's own LED indicators and circuitry. Here is The Frugal Filmmaker's video on how he installed a cut-off switch on plastic NP battery plates that prevents that current leeching.
  19. As you may recall, we have previously discussed the Topaz Labs software, including "JPEG to RAW AI". Again, the color depth of the original image cannot be increased unless something artificial is added. Likewise, detail lost in extreme highlights and shadows cannot be recreated unless something artificial is added. It seems that AI can provide more color depth and create details (precisely by adding such artificial information), as shown in the Topaz software. Removing artifacts such as banding does not indicate an increase in bit depth -- it just means that the artifacts have been removed. The resulting image without banding can have the same bit depth as the original image that suffers banding. COLOR DEPTH = RESOLUTION x BIT DEPTH. So, if one can increase the resolution and maintain the same bit depth, then the color depth increases. Similarly, if one can increase the bit depth and maintain the same resolution, then the color depth increases. Of course, merely putting an 8-bit image into a 10-bit container will not increase the color depth of the original image, nor will merely up-ressing an image increase the color depth. Something artificial has to be introduced to increase the color depth of a given image.
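The COLOR DEPTH = RESOLUTION x BIT DEPTH relationship above can be illustrated with a quick back-of-the-envelope calculation. This is only a sketch of the post's formula; the resolutions and bit depths below are example figures, not measurements from any particular camera:

```python
# Sketch of the post's claim: color depth scales with both resolution
# and bit depth. The figures are illustrative examples only.

def color_depth(width, height, bits_per_channel):
    """Per the post's formula: pixel count times bits per channel."""
    return width * height * bits_per_channel

hd_8bit = color_depth(1920, 1080, 8)     # 8-bit HD baseline
uhd_8bit = color_depth(3840, 2160, 8)    # same bit depth, 4x the pixels
hd_10bit = color_depth(1920, 1080, 10)   # same pixels, deeper bit depth

print(uhd_8bit / hd_8bit)    # 4.0  -> quadrupling resolution quadruples the product
print(hd_10bit / hd_8bit)    # 1.25 -> 10/8 gain from bit depth alone

# Note: these numbers only represent real information if real detail
# is captured -- up-ressing or re-containering alone adds nothing,
# which is exactly the post's point.
```

The arithmetic makes the trade-off concrete: either axis (resolution or bit depth) can raise the product, but only when genuine new information backs it up.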
  20. I mentioned the Metabones G-mount Expander in an earlier GFX 100 thread. It gives a 1:1 crop factor with full frame lenses on the GFX 100. So, no vignetting worries. However, the only current Expander model accepts Nikkor lenses.
  21. I am sure. Kdenlive, Cinelerra (GG) or Blender is what I would recommend. I have heard good things about Shotcut and Olive. Openshot would also work for someone who doesn't need anything fancy. One of the great things about Cinelerra is the Blue Banana plugin -- a unique and powerful color grading interface. I wish that somebody would port it to MLV-app. If one is okay with proprietary software, then there is Resolve, Lightworks or Piranha. All of the open source NLEs work on most distros. On the other hand, there are special media distros that are worth considering, such as AV Linux and Ubuntu Studio. I tried Resolve once, and as I recall it was distributed as a tarball, and I had no problem installing it on my Debian-based distro. Not sure where you are from, but, as I recall, my current eBay machine cost US$145. It has an i7-3770 3.4GHz CPU, and it came with 16 gigs of RAM, two mediocre graphics cards and a 500GB drive. I put an SSD in it and loaded a non-systemd distro. It's fairly snappy.
  22. You can change the export settings in Keynote. Set it to 1920x1080 (or smaller) and to ProRes 422. Evidently, there is no way to adjust frame rate (defaults to 29.97) nor bitrate in the current version of Keynote. First of all, Mac is not the easiest OS to use. By the way, almost all OS's are point-and-click, so even a Mac user should have little trouble working with them 😉. Furthermore, the free transcoding programs that I mentioned (ffmpeg, handbrake, mencoder) all work the same on any platform, be it Mac, Windows, Linux, BSD, etc. I suggested a Linux rendering box for... rendering (the proxies). In such a scenario, you would still use your current MacBook to edit the renders off of the aforementioned external SSD. On the other hand, you could easily do your entire post production (including presentation animation and editing) on the same Linux box, and it could cost you as little as $200 for a used machine that is adequately snappy. The software would all be free and open source. You still have not given any information on the number of clips per video that you edit nor on the length of those clips. If you use 5 clips per video and they are each 5 minutes in length, start the proxies rendering (in FCP?) on your current MacBook, and go make a cup of coffee. If you are making 3 videos per week with 2 hours of camera clips to edit for each video, a cheap, separate rendering box is likely more efficient. Your heightened teaching efforts are certainly welcome! However, it is doubtful that 4K is any more engaging to students than regular HD. Great!
  23. If the files coming off of those devices are compressed, your machine (and NLE) has to continually uncompress those files on the fly, while applying effects and filters. It's a huge demand on the computer's resources. "Keynote?" That sounds like a cute Apple name for a presentation app. Make those clips uncompressed at a low bitrate. How long and how many are your video clips? Try creating proxies with a couple of files, and see how long it takes. If you have a lot of clips to convert to proxies, you could also build a cheap Linux box and batch render your proxies (with ffmpeg, handbrake, mencoder, etc.) to a fast SSD drive. Then, edit off of that SSD. That workflow might pay off if you are doing three videos a week. Evidently, teaching has changed dramatically since I attended school. You are making more videos per week than a lot of pros make in a month. If this is for teaching, why do you need 4K? Try reducing the resolution to HD and reduce the bitrate wherever possible. What happened to chalkboards? US$2,100 for a small 2GHz laptop with a 512GB SSD?! Before dropping that kind of money on a laptop of questionable power/quality, I would look into streamlining your workflow, as suggested above. Again, with any MacBook (or any Mac) that has a T2 chip, make sure that secure boot is disabled.
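The batch-proxy pass described above can be sketched as a small helper script. This is a hypothetical example, not a prescribed workflow: the H.264/scale settings, folder layout, and function names are all assumptions, and it only builds the ffmpeg command lines rather than committing to one setup:

```python
import os

def proxy_command(src, dst, width=1280, crf=23):
    # Build (but do not run) an ffmpeg command that renders a small
    # H.264 proxy. "scale=WIDTH:-2" keeps the aspect ratio with an
    # even height; the width and crf values are example choices.
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale={width}:-2",
        "-c:v", "libx264", "-crf", str(crf),
        "-c:a", "aac",
        dst,
    ]

def batch_proxy_commands(src_dir, dst_dir, exts=(".mov", ".mp4")):
    # One proxy command per camera clip found in src_dir; other file
    # types (sidecars, notes) are skipped.
    cmds = []
    for name in sorted(os.listdir(src_dir)):
        if name.lower().endswith(exts):
            stem = os.path.splitext(name)[0]
            dst = os.path.join(dst_dir, stem + "_proxy.mp4")
            cmds.append(proxy_command(os.path.join(src_dir, name), dst))
    return cmds
```

Each returned list can be handed to `subprocess.run()` on the rendering box (assuming ffmpeg is installed); you then edit off the proxy folder on the fast SSD and relink the full-resolution originals only for the final grade and render.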
  24. Don't know much specifically about your gear, but merely using proxies should give a huge performance boost. Working with compressed camera files can slow things down to a crawl and cause discrepancies in effects and color grading. You shouldn't need high quality files until grading and rendering. Some graders transcode camera files to uncompressed and then work on them. Regardless, if you get a new MacBook, Louis Rossmann (who makes a living repairing MacBooks) warns folks to disable secure boot. If your current MacBook has a T2 chip, you should make sure that secure boot is disabled.
  25. In this lens re-greasing video, the vlogger used LiquiMoly LM47 MoS2. On the other hand, the same guy used a different grease in an earlier video.