
Posts posted by tupp

  1. 12K?   Meh... I'll keep using my Forza 18K camera from 2014.


    The resolution and frame rates do absolutely nothing for me.


    On the other hand, if this camera is using a striped RGB CMOS sensor, that IS notable!  The striped RGB CCD sensors always looked good, especially on the Sony S35.


    Does anyone see where Blackmagic states that the camera actually utilizes a striped RGB filter on the sensor?  I suspect that they are being creative with their claims here, and that they could be using a Bayer filter with a twist on the low-level RGB conversion at the A-D converter.

  2. 11 hours ago, Antoine said:

    Also, dummy question: How do you transport these types of cameras for a shoot?  I can put mine in my Pelican 1510 when everything is detached, but it does take some time to put it together, so I guess maybe just put the F3 already built up in the car with a seatbelt and put the lenses/accessories in the case.

    On one feature I shot, we used a "Snow White"-themed rolling case, very similar to this:


    We put a little padding in it, and it was very convenient for carrying the fully built camera.  When we wanted the camera on set, we would call, "'Princess Case' on set!"


    When shooting in crowds, we could walk a fair distance away from the case, and nobody bothered it.

  3. 44 minutes ago, Antoine said:

    Of course it is amateur but at least i can still get some cool shots

    I am primarily concerned about the risk of losing your camera and more so about the hazard that the rig poses for motorists behind the hero vehicle.


    At minimum, replace the open hooks with strong carabiners or with removable chain links.  The top carabiner needs to completely encompass that runner on the luggage rack, so that it cannot fall off during a bounce.


    In addition, to prevent the bounce/wobble, solidity could be added with two extra ratchet straps (or motorcycle straps) -- one strap tensioned between the camera platform and the top of the car, and the other tensioned between the camera platform and the bottom of the car.  These extra straps would also increase safety.

  4. 1 hour ago, Antoine said:

    Have any of you already used it on a Glidecam?  I tried it on my Ronin M and obviously it did not work, but I am wondering if it is possible with a Glidecam in order to get steady shots.

    I am not as familiar with Glidecam as with Steadicam, but there has to be a Glidecam model with enough capacity for a Sony F3.


    Be aware that Steadicam-style stabilizers are not something that the typical gimbal-kiddie can just pick up and instantly start shooting -- it takes a bit of practice and training.  The best Steadicam operators have years of experience.


    On the other hand, I would bet that gimbals exist that could hold an F3 with an FD prime.



    1 hour ago, Antoine said:

    Also, I will try to mount it on my car tomorrow with the Camtree car mount, because I saw a photo of an F3 being rigged with it:


    I see two alarming problems with the rig pictured:

    1. That tag line with the hooks should be replaced with a solid strut (or, even better, two "triangulated" struts).  Every significant bump will cause the camera to bounce up and down (ruining that part of the take).
    2. There is no backup "safety" portion of the rig -- just suction cups and a tag line with open hooks.  With each bounce, there is a possibility that one of those open hooks could fall off of its pick point, and if that happens, "that's all she wrote" for the camera (and possibly for a motorcyclist following the car).  If they had to go with a tag line (instead of a strut), those open hooks should have been carabiners.  Regardless, any car rig should have separate, properly-tensioned safety straps, and the pictured rig has nothing in that regard.


    A typical grip hostess tray with risers and a head would be more secure and would be easier to rig and adjust.


    Also, I am not an audio person, but why is that mic mounted like that on the camera with a car rig?


    I strongly urge you to go review several different tutorials on how to properly and safely rig car mounts, before trying to do so yourself.

  5. Generally, the darker the image, the more apparent the saturation.  Furthermore, most digital cameras apply a lot of saturation in their non-raw files, and Canon cameras additionally boost the reds.


    Starting with the brightest sample posted, the image below was produced merely by boosting the gamma/mid-tones, bringing the blacks down to zero, reducing the saturation and backing off the reds (for Canon):




    If one wants to keep it a little darker (and still have it look like daytime), be more gentle in boosting the mid-tones but further reduce the saturation, while keeping the blacks at zero and the Canon reds reduced, as in the image directly above:




    By the way, the fringing/chromatic-aberration doesn't look too bad, and a light touch with a CA/fringe filter should take care of it nicely.
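
    For anyone who wants to experiment, the grade described above can be sketched in a few lines of numpy.  The values below are illustrative starting points, not the exact settings used for the posted images:

```python
import numpy as np

def grade(img, gamma=0.8, black=0.0, sat=0.85, red=0.95):
    """Rough version of the grade described above, for a float RGB
    image in [0, 1]: lift the mid-tones with a gamma curve, pin the
    blacks, reduce overall saturation and back off the red channel."""
    out = np.clip(img, 0.0, 1.0) ** gamma                    # gamma < 1 lifts mid-tones
    out = np.clip((out - black) / (1.0 - black), 0.0, 1.0)   # bring blacks down to zero
    luma = out.mean(axis=-1, keepdims=True)                  # crude luminance proxy
    out = luma + sat * (out - luma)                          # desaturate toward luma
    out[..., 0] *= red                                       # back off the (Canon) reds
    return np.clip(out, 0.0, 1.0)
```

    With `black` raised above zero, the same function also crushes the shadows slightly, which matches the "keep it a little darker" variation.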

  6. 44 minutes ago, leslie said:

    There probably is a good vlog in buying a cheap battery plate, drilling/tapping and maybe countersinking a couple of holes in it to aid functionality in bolting it to a camera.  However, I'm kinda lazy -- if SmallRig want to do the work for me, I'll pay them $100 😎  I can see the value in that.

    Of course, all of the battery plates that I have linked/mentioned already have mounting holes/screws.


    Furthermore, each of the videos that I have linked shows the plates mounted to cameras and cages.

  7. On 6/12/2020 at 2:54 PM, Oliver Daniel said:

    When I started out, I had barely any budget for lights and bought 3 X 800w redheads for £120. They sparked when I plugged them in, burnt my hands and melted all my gels in like 2 minutes.

    1. make sure your power switch is turned off when you plug in your light (or any other device);
    2. wear gloves if you are not familiar with how to handle hot lights;
    3. only mount gels to the side barndoors or to a gel frame made for the fixture.



  8. 43 minutes ago, leslie said:

    A quick Google search has that SmallRig NP battery holder on preorder for $95 AUD from the Videoguys.

    NP batteries are a good way to go for most cameras, and there are plenty of options both more expensive and much less expensive than that SmallRig battery plate.


    Here is an NP battery plate with a BMP4K connector for US$38.


    If I had a BMP4K and I wanted to use NP batteries, I would just get a cheap plastic battery plate and wire it to a BMP4K connector (with pigtails), which would probably give a total cost of about US$15.  Here is Chung Dha's inexpensive NP battery plate video.


    Keep in mind that some NP battery plates allow slight leeching of current from the battery by the connected equipment and by the plate's own LED indicators and circuitry.  Here is The Frugal Filmmaker's video on how he installed a cut-off switch on plastic NP battery plates that prevents that current leeching.

  9. 3 hours ago, sanveer said:

    I noticed it already in the upressing video of Topaz Labs where 480p was upressed to 4K.

    As you may recall, we have previously discussed the Topaz Labs software, including "JPEG to RAW AI."


    3 hours ago, sanveer said:

    Theoretically, as long as a JPEG (or any other 8-bit photo or video codec) isn't exposed absolutely terribly, it's possible to not just improve (usable) dynamic range, but also increase the bit depth (super useful for post work) as well as increase the resolution.

    Again, the color depth of the original image cannot be increased unless something artificial is added.  Likewise, detail lost in extreme highlights and shadows cannot be recreated unless something artificial is added.


    It seems that AI can provide more color depth and create details, as shown in the Topaz software.



    3 hours ago, sanveer said:

    I noticed it already in the upressing video of Topaz Labs where 480p was upressed to 4K. It removed all kinds of artefacts and banding. Which implies that, intentionally or unintentionally, it was also improving the bit depth.

    Removing artifacts such as banding does not indicate an increase in bit depth -- it just means that the artifacts have been removed.  The resulting image without banding can have the same bit depth as the original image that suffers banding.
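
    A quick numpy sketch of this point -- smoothing away the bands of a coarse gradient yields an image that still fits in the same 8-bit container (illustrative only; this is not Topaz's method):

```python
import numpy as np

# A coarse 8-bit ramp with heavy banding: only every 16th code value is used.
banded = (np.arange(256, dtype=np.uint8) // 16) * 16

# "Deband" by smoothing, then store the result back in an 8-bit container.
smooth = np.convolve(banded.astype(float), np.ones(9) / 9, mode="same")
debanded = np.clip(np.round(smooth), 0, 255).astype(np.uint8)

# The steps are filled in with intermediate code values, yet both
# images have exactly the same bit depth.
assert banded.dtype == debanded.dtype == np.uint8
assert len(np.unique(debanded)) > len(np.unique(banded))
```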


    COLOR DEPTH =  RESOLUTION x BIT DEPTH.  So, if one can increase the resolution and maintain the same bit depth, then the color depth increases.  Similarly, if one can increase the bit depth and maintain the same resolution, then color depth increases.


    Of course, merely putting an 8-bit image into a 10-bit container will not increase the color depth of the original image, nor will merely up-ressing an image increase the color depth.  Something artificial has to be introduced to increase color depth in a given image.
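
    A quick worked example of the formula above (the resolutions and bit depths are just illustrative):

```python
def color_depth(width, height, bit_depth):
    """COLOR DEPTH = RESOLUTION x BIT DEPTH, with resolution taken
    as the total pixel count and bit depth in bits per channel."""
    return width * height * bit_depth

hd_8bit  = color_depth(1920, 1080, 8)    # 16,588,800
uhd_8bit = color_depth(3840, 2160, 8)    # 4x the pixels, same bit depth
hd_10bit = color_depth(1920, 1080, 10)   # same pixels, higher bit depth

# Increasing either factor while holding the other increases color depth.
assert uhd_8bit == 4 * hd_8bit
assert hd_10bit > hd_8bit
```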



  10. 9 hours ago, Mark Romero 2 said:

    I am not sure that the OP could edit on a US $200 linux box.

    I am sure.


    9 hours ago, Mark Romero 2 said:

    What would you propose as an NLE???

    Kdenlive, Cinelerra (GG) or Blender are what I would recommend.  I have heard good things about Shotcut and Olive.  Openshot would also work for someone who doesn't need anything fancy.


    One of the great things about Cinelerra is the Blue Banana plugin -- a unique and powerful color grading interface.  I wish that somebody would port it to MLV-App.


    If one is okay with proprietary software, then there is Resolve, Lightworks or Piranha.



    9 hours ago, Mark Romero 2 said:

    And would one need a particular distro in order to run that NLE?  (For instance, Resolve is coded to work on CentOS and isn't really guaranteed to work on other distros, although I have heard of people using it on Ubuntu).

    All of the open source NLEs work on most distros.  On the other hand, there are special media distros that are worth considering, such as AV Linux and Ubuntu Studio.


    I tried Resolve once, and as I recall it was distributed as a tarball; I had no problem installing it on my Debian-based distro.



    9 hours ago, Mark Romero 2 said:

    Of course, my vision might be warped because around here, people sell their used PC's for WAY MORE than the rest of the USA.

    Not sure where you are from, but, as I recall, my current eBay machine cost US $145.  It has an i7-3770 3.4GHz CPU, and it came with 16GB of RAM, two mediocre graphics cards and a 500GB drive.  I put an SSD in it and loaded a non-systemd distro.  It's fairly snappy.

  11. 6 hours ago, SRV1981 said:

    -Keynote is a cute app and I find it helpful - not sure I can change video export settings but will check!

    You can change the export settings in Keynote.  Set it to 1920x1080 (or smaller) and to ProRes 422.  Evidently, there is no way to adjust the frame rate (it defaults to 29.97) or the bitrate in the current version of Keynote.



    6 hours ago, SRV1981 said:

    -I probably will stick with Mac for ease of use etc., as my free time spent for work doesn't motivate me to learn/work with Linux and creating my own machines.  Granted, this may be the most efficient; I'm seeking the most efficient with Apple's products available to me.

    First of all, Mac is not the easiest OS to use.  By the way, almost all OS's are point-and-click, so even a Mac user should have little trouble working with them  😉.


    Furthermore, the free transcoding programs that I mentioned (ffmpeg, handbrake, mencoder) all work the same on any platform, be it Mac, Windows, Linux, BSD, etc.


    I suggested a Linux rendering box for... rendering (the proxies).  In such a scenario, you would still use your current MacBook to edit the renders off of the aforementioned external SSD.


    On the other hand, you could easily do your entire post production (including presentation animation and editing) on the same Linux box, and it could cost you as little as $200 for a used machine that is adequately snappy.  The software would all be free and open source.


    You still have not given any information on the number of clips per video that you edit nor on the length of those clips.  If you use 5 clips per video and they are each 5 minutes in length, start the proxies rendering (in FCP?) on your current MacBook, and go make a cup of coffee.



    6 hours ago, SRV1981 said:

    Granted, this may be the most efficient; I'm seeking the most efficient with Apple's products available to me.

    If you are making 3 videos per week with 2 hours of camera clips to edit for each video, a cheap, separate rendering box is likely more efficient.



    6 hours ago, SRV1981 said:

    -I appreciate your feedback but education has changed and my individual desire to provide more engaging content to my students doesn't represent anything other than attempting to be a part of a small group of teachers who use technology to reach our students.  Chalkboards exist, I like them.  Whiteboards exist, I like them better.  Smartboards exist, I like them even better. 

    Your heightened teaching efforts are certainly welcome!  However, it is doubtful that 4K is any more engaging to students than regular HD.



    6 hours ago, SRV1981 said:

    -My 2015 macbook pro is not a T2 chip device


  12. 42 minutes ago, SRV1981 said:

    Thanks @tupp I am using a combo of iPhone and Sona A7III video.

    If the files coming off of those devices are compressed, your machine (and NLE) has to continually uncompress those files on the fly, while applying effects and filters.  It's a huge demand on the computer's resources.



    42 minutes ago, SRV1981 said:

    Additionally, I am making videos from Keynote for animations etc and layering all of these clips sometimes picture in picture etc.

    "Keynote?"  That sounds like a cute Apple name for a presentation app.


    Make those clips uncompressed, or at least at a low bitrate.



    42 minutes ago, SRV1981 said:

    I am just not sure if it will take a long time to create proxies...

    How long and how many are your video clips?


    Try creating proxies with a couple of files, and see how long it takes.


    If you have a lot of clips to convert to proxies, you could also build a cheap Linux box and batch render your proxies (with ffmpeg, handbrake, mencoder, etc.) to a fast SSD drive.  Then, edit off of that SSD.  That workflow might pay off if you are doing three videos a week.
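
    As a sketch of that batch step (filenames, codec and scaling choices are placeholders -- adjust for your own footage), something like this would build one ffmpeg command per clip:

```python
from pathlib import Path

def proxy_cmd(src, out_dir="proxies"):
    """Build an ffmpeg command that renders a camera clip down to an
    HD ProRes proxy.  Assumes ffmpeg is on the PATH when run."""
    dst = Path(out_dir) / (Path(src).stem + "_proxy.mov")
    return ["ffmpeg", "-i", str(src),
            "-vf", "scale=1920:-2",                  # downscale to HD width
            "-c:v", "prores_ks", "-profile:v", "0",  # ProRes proxy profile
            "-c:a", "copy",                          # pass the audio through
            str(dst)]

# One command per clip; execute each with subprocess.run(cmd, check=True).
cmds = [proxy_cmd(f) for f in ["A001.mp4", "A002.mp4"]]
```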



    42 minutes ago, SRV1981 said:

    ... I am making anywhere from 1-3 videos a week, which is to support my job as a teacher.  

    Evidently, teaching has changed dramatically since I attended school.  You are making more videos per week than a lot of pros make in a month.


    If this is for teaching, why do you need 4K?  Try reducing the resolution to HD and reduce the bitrate wherever possible.


    What happened to chalkboards? 



    42 minutes ago, SRV1981 said:

    Apple has offered $480 for my 2015 MBP and I was looking at a 13" 2020 MBP 2 ghz, 32gb ram, 512 SSD for $2100 or $1600 after giftcard.

    US $2100 for a small 2GHz laptop with a 512GB SSD?!  Before dropping that kind of money on a laptop of questionable power/quality, I would look into streamlining your workflow, as suggested above.


    Again, with any MacBook (or any Mac) that has a T2 chip, make sure that secure boot is disabled.

  13. Don't know much specifically about your gear, but merely using proxies should give a huge performance boost.  Working with compressed camera files can slow things down to a crawl and cause discrepancies in effects and color grading.


    You shouldn't need high quality files until grading and rendering.  Some graders transcode camera files to uncompressed and then work on them.


    Regardless, if you get a new MacBook, Louis Rossmann (who makes a living repairing MacBooks) warns folks to disable secure boot.   If your current MacBook has a T2 chip, you should make sure that secure boot is disabled.

  14. Interesting article and blog post!


    Many folks prefer the look of vintage lenses with digital sensors.  It's good that Cooke has noticed this trend and reacted to it.  Of course, they are not the only lens manufacturer to come out with brand-new "vintage" lines.


    It would be great if someone would test the character of the new Cooke "vintage" lenses against that of their old "Xtal Express" anamorphics.

  15. On 5/20/2020 at 7:04 AM, Video Hummus said:

    Does the curve adjustment happen before the encoding?

    Not sure how the highlight/shadow control could happen after encoding.


    By encoding, do you mean "conversion to 8-bit?"  If so, I have no clue as to what stage in the camera's imaging process the highlight/shadow control is applied, but I would guess that the 8-bit conversion happens early, at a low level, before most other processes.

  16. 6 hours ago, AdrParkinson said:

    How would you say it compares with the old Cinestyle profile for Canon? I always found that while it made grading easier, the bitrate just wasn't there to support it and so there were too many artifacts.

    I never experienced artifacts with Cinestyle.  Are you referring to compression artifacts in the shadows or to posterization/banding?


    At any rate, I haven't noticed problems on the E-M10 III with my highlight/shadow settings (in my brief experience so far with the camera), but, again, I am using a light touch with those settings.


    I don't have any short clips, otherwise I would post them.  When I get a chance, I will try to snip out a few seconds from one of the files for download -- I think that ffmpeg can do so without any transcoding.
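
    For what it's worth, the stream-copy trim mentioned above looks something like this (filenames and times are hypothetical):

```python
def trim_cmd(src, start, duration, dst):
    """ffmpeg command that snips out a section without transcoding:
    '-c copy' stream-copies video and audio, so nothing is re-encoded.
    With stream copy, cuts land on keyframes, so in-points are approximate."""
    return ["ffmpeg", "-ss", start, "-i", src, "-t", duration, "-c", "copy", dst]

cmd = trim_cmd("clip.mov", "00:00:10", "5", "snippet.mov")
```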

  17. One thing that never gets mentioned about the E-M10 III is that, although it cannot employ custom picture profiles, it does share the highlight/shadow control feature found in other OMD cameras.  This attribute allows changes in the camera's contrast curve over a large range of values.  It's a powerful control, but one must use a light touch, as pushing the curve too far can look unnatural.  I set the highlights to "-1" and the shadows to "+1," which levels the contrast curve a bit.  Additionally, I enable the "Muted" picture profile, with @TiJoBa's recommended "-2" setting for sharpness and with a "0" setting for saturation.


    This Imaging-Resource review gives examples of how the highlight/shadow control can affect those areas of the contrast curve.  Scroll down to the "Highlight/Shadow Control" section and "mouse over" the different values to see how it changes the detail and brightness in those areas.


    By the way, the OMD highlight/shadow control also allows adjustment of the midrange values (at the "center cross" in the display).  Eventually, I will test setting the shadows to "+2," the midrange to +1 and the highlights to "0."
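
    Purely to illustrate the idea (this is not Olympus's actual math), a highlight/shadow adjustment that levels the contrast curve can be sketched as:

```python
import numpy as np

def hl_sh(x, shadows=+1, highlights=-1, strength=0.1):
    """Bend a [0, 1] tone curve: positive 'shadows' lifts the low end,
    negative 'highlights' pulls down the top end, and the mid-point
    stays put -- leveling the contrast curve, as described above."""
    lift = strength * shadows * (1.0 - x) ** 2   # acts mostly on the shadows
    roll = strength * highlights * x ** 2        # acts mostly on the highlights
    return np.clip(x + lift + roll, 0.0, 1.0)
```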



  18. 13 minutes ago, rawshooter said:

    The MagicLantern project is pretty much dead - the last nightly builds are from July 2018:  https://builds.magiclantern.fm/  There are only some (or just one?) individual developers who keep hacking individual camera ports and offering their own builds off-site.

    On the contrary, ML is thriving.  You can't go by that nightly build page, as that is not where the action is. Most of the nightly builds that everyone uses are not official.


    To see the current activity, go to the main forum page, and scroll down to the bottom section titled, "Recently Updated Topics."  After a brief scanning of just a few of the top messages, I see the following active developers:  Danne, masc, cmh, Levas, ilia3101, reddeercity, 2blackbar, critix.


    Our own @ZEEK is active with ML and MLV-App instructional videos.

  19. 2 minutes ago, mr_eight said:

    see the attached screenshot taken from Wooden Camera's instruction booklet, so it should be possible to mount an Arca-Swiss clamp



    Again, you don't necessarily need a separate clamp -- you could just bolt an L-bracket directly to the cage.  Of course, using Arca-Swiss clamps or another quick-release system makes the changeovers faster (and adds height to the camera).
