
Leaderboard

Popular Content

Showing content with the highest reputation on 07/09/2021 in all areas

  1. For anyone with an E-M1 ii, someone has managed to modify the v3.4 firmware to remove the 30 minute video record limit and allow focus-stacking (for stills) with any lens - https://www.mu-43.com/threads/modified-firmware-looking-for-users.99804/post-1449131 I've loaded it on my E-M1 ii and the focus-stacking certainly works with Panasonic lenses. Not yet tested > 30 minute video recording, but the display certainly shows more than 30 minutes available. (There are also modified firmware versions for some other Oly cameras at the beginning of that thread, e.g. for the E-M10 iii and E-PL9, plus earlier E-M1 ii firmware versions - see https://www.mu-43.com/threads/modified-firmware-looking-for-users.99804/post-1184700 )

     I dipped a toe in the Olympus pool by getting a used E-M1 ii about a year ago (I own several Panasonic cameras as well). It's grown on me over time - relatively small, great colours, IBIS and build quality - but the differences/restrictions in how it operates in video versus stills mode get frustrating at times (like no 5x5 focus areas in video, and the single centre focus area is larger). The audio pre-amps are noisy too. It feels like there is a better video camera lurking in there, but Oly have never really been interested/committed enough to video to let it out...
    1 point
  2. I used either my iBook G4 or one of my two Intel MacBooks, all of which have FireWire ports. The supplied cable was a FireWire-to-MiniDV cable connected straight into the camera.
    1 point
  3. Did you use a computer with a FireWire 400 port, or a FireWire-to-USB adapter? I've got 120 MiniDV cassettes I should have archived but have been postponing for years, blaming it on lacking the right connector.
    1 point
  4. True! I thought the Micro matched pretty well to the Alexa? Unless you're talking about 4K+ Alexas and shooting a hyper-modern look with the latest high-end cine lenses? This is a slight aside, but I've been seeing the promo videos they have for their new Signature Zoom lenses and dear god are they going for a sharp look with those videos! I understand why, of course - everyone who can afford to use ARRI knows you can soften lenses up in post - but I was thinking how I wasn't really a fan of that particular treatment! If it was Christmas and my birthday and I won lotto too, I'd wish for a BM Pocket Cine Camera in the P2K form factor with an updated 2.8K sensor with the same colour science, and set to global shutter mode rather than rolling shutter. Still, the P2K and M2K (Micro) are pretty amazing for what they cost and how big they are.
    1 point
  5. That doesn't sound right. I'd maybe reach out to some experts, like on the LiftGammaGain forums perhaps (colourists used to do a lot of DI stuff before everything went digital), and confirm. I hear those guys talking about the older digital formats and there were so many gotchas in there that it's almost amazing anyone got a good result!

     Yeah, because it's a personal project you can really take your time and let your subconscious process the footage in the background as you do other things. I'm always amazed at how a good editor can put little moments together that aren't related, yet compose a thread that wasn't literally there but somehow captures the feeling more than a 'correct' version. Like when Herzog said “Facts do not constitute the truth. There is a deeper stratum.” In a way, you're lucky that it's miniDV because you're less likely to get swayed by things that look good but aren't meaningful. I'm sure that if I had a 1DX2 and fast primes I'd be tempted to include too much 120p B-roll in my edits like McKinnon does, so you've side-stepped these challenges!

     Perhaps calling it degradation was misrepresenting it. Think of it like this... reality doesn't have compression artefacts, it's not 'sharp', it's not grainy, and smooth surfaces are not featureless, but the miniDV footage has introduced these things due to its limitations. Your job is to make the footage look the most like reality was, or the most like how it felt. You've definitely got license to take some creative liberties to make it more like poetry and less like prose!

     If you don't blur the footage at all, you get all the nasties and all the content. If you blur it a lot, you get no nasties and very little of the content, but there will be a little sweet spot where each effect hides much more of the nasties than of the content, and that's what you're aiming for (see the sketch below this post). I'd suggest setting up a bunch of effects and going through them one by one at a sensible viewing distance, fine-tuning their parameters and opacity by eye, trying to make the footage look the most like a window to being there, or like a Hollywood film of being there (or whichever aesthetic you prefer!). By adjusting those parameters while looking at the monitor and not the control panel, you may find that effects that aren't worthy get set to a sweet spot of zero opacity. Make sure you're always turning each effect on/off to check you're improving the image, as it's easy to get lost and get used to something; but when watching things we can also easily get used to quite strong looks, so don't shy away from making it stylised. I'd make a few passes through each effect to optimise each one in combination with the others. A cool point of reference is to look from the screen to the other objects in the room, effectively using reality as a reference. Comparing to reality will quickly sort out how sharp/unsharp things should be, etc.

     Save that as a preset, reset them all and do the same exercise a few days later. Do it a few times. Then you can compare them, see which you liked, and maybe blend them together. You're trying to create a look that looks 'right' and makes the original footage look awful in comparison. When I've done this (like in the example I posted above) I got to a point where the final grade seemed like footage from a Super-16 film camera and the original footage seemed like someone had applied a bunch of awful distortions for some reason. That's a good place to end up.

     It's useful to review some older film captures as a reference too. They were blurrier than you think, but never looked offensive, so it's a good look to go for. It could be due to the interlacing, but it could also be due to any capture issues you're experiencing. Once you've confirmed your capture, just see what's least offensive.
    1 point
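(A quick aside on the 'sweet spot' idea in the post above: in an NLE you'd simply drop the effect on the clip and ride its opacity control, but you can prototype the same blend outside the NLE to see what different opacities do. The sketch below is only an illustration, not anything from the post: it assumes Python with OpenCV installed, uses a Gaussian blur as a stand-in for whatever effect you're testing, and "frame.png" is a hypothetical still exported from the MiniDV capture.)

     # Minimal sketch (assumptions: OpenCV installed, "frame.png" is a hypothetical
     # still exported from the capture). It blends one stand-in "effect" (a Gaussian
     # blur) with the original frame at several opacities, so you can compare the
     # renders by eye and find the sweet spot.
     import cv2

     frame = cv2.imread("frame.png")                 # original captured frame (BGR)
     assert frame is not None, "couldn't read frame.png"
     effect = cv2.GaussianBlur(frame, (0, 0), 1.5)   # blurred "effect" layer

     for opacity in (0.0, 0.25, 0.5, 0.75, 1.0):
         # Same maths an NLE uses for layer opacity: out = (1 - a)*original + a*effect
         blended = cv2.addWeighted(frame, 1.0 - opacity, effect, opacity, 0)
         cv2.imwrite(f"blend_{int(opacity * 100):03d}.png", blended)

Viewing those renders at a sensible distance, and flicking back to the unprocessed frame, is essentially the 'turn each effect on/off' check described above; in practice you'd do the same thing inside the NLE with the effect's opacity control.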
  6. Fair enough. As for VEAI, some models, Artemis for example, tend to overdo it sometimes, especially with really bad source footage. Proteus, I find, is a totally different story and very configurable too. I've never noticed any weirdness with decent-quality source footage. All I can say is I can finally use the otherwise horrible 120fps on my S5, and all of its FHD footage for that matter! It's really that good. But yeah, only you can tell if something like that is for you.
    1 point
  7. Just found this, which is absolutely spectacular. If you think you need to shoot in 4K, think again. Simply wonderful.
    1 point
  8. This right here. I am not enjoying your rants, Andrew, because (a) it’s a losing game, (b) it makes you come across as bitter and (c) it doesn’t seem to be your ‘groove’ and doesn’t accentuate your differentiation and strength. Whether you are right or wrong isn’t the point; you can’t change the rules of the game and it doesn’t make for entertaining content. In my opinion, your strength is the intersection of technology and creativity (I would love to see more of the latter to balance the tech side). You aren’t competing with bloggers, YouTubers or even cinematographers if you create your own category. And that doesn’t mean you can’t voice a strong opinion as part of your concept. I’d just rather see you fight FOR something instead of AGAINST.
    1 point