KnightsFan

Members
  • Posts: 1,139
  • Joined
  • Last visited

About KnightsFan

Contact Methods

  • Website URL
    https://gobuildstuff.com/

KnightsFan's Achievements

Long-time member (5/5)

Reputation: 710

  1. I've been working remotely since pre-pandemic. The question isn't whether I like hopping on a Zoom call, it's whether I prefer it to commuting 50 minutes each way in rush-hour traffic. As for the savings, it depends on who is doing the saving: the huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, to working in an office!
  2. The matte is pretty good! Is it this repo you are using? You mentioned RVM in the other topic: https://github.com/PeterL1n/RobustVideoMatting (quick usage sketch below).

     Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio.

     If I were you, I would ditch the crane if you're over the weight limit: get some wide camera handles and make slow, deliberate movements, and mount some proper tracking devices on top instead of a phone, if that's what you're using now.

     The downside to this approach, compared to the projected background we're talking about in the other topic, is that you can merge lighting much more easily with a projected background, and with this approach you need to synchronize a LOT more settings between your virtual and real camera. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
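     If that is the repo, its torch.hub interface makes a quick test easy. A minimal sketch, assuming the hub API from the project's README and placeholder file names; run on a machine with PyTorch installed (drop .cuda() for CPU):

        # Pull RobustVideoMatting via torch.hub and matte a clip.
        import torch

        model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").cuda()
        convert_video = torch.hub.load("PeterL1n/RobustVideoMatting", "converter")

        convert_video(
            model,
            input_source="input.mp4",       # placeholder: your footage
            output_type="video",
            output_composition="comp.mp4",  # placeholder: composited output
            output_alpha="alpha.mp4",       # placeholder: the matte itself
            seq_chunk=12,                   # frames processed per batch
        )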
  3. I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher-resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control.

     I recommend still passing raw data from the dev board to the PC, and using desktop software to interpret it. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC (sketch below). I like OSC better than MIDI because you aren't limited to low-resolution 7-bit messages; you can send float or even string values. Plus OSC is much more explicit about port numbers, at least in the implementations I've used. Having desktop software interpret everything was a game changer for me compared to sending MIDI directly from the Arduino.
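     A minimal sketch of that serial-to-OSC bridge idea -- not my actual app; the port name, baud rate, and OSC address are assumptions, and it expects the board to print one signed step delta per line:

        # Read raw encoder deltas over USB serial and forward them as OSC.
        # Requires: pip install pyserial python-osc
        import serial
        from pythonosc.udp_client import SimpleUDPClient

        ser = serial.Serial("COM3", 115200, timeout=1)  # assumed port/baud
        osc = SimpleUDPClient("127.0.0.1", 9000)        # assumed OSC target

        position = 0
        while True:
            line = ser.readline().strip()
            if not line:
                continue
            position += int(line)  # raw step delta sent by the dev board
            # All interpretation happens PC-side: scaling, acceleration, remapping.
            osc.send_message("/encoder/1", float(position))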
  4. CineD is measuring at different resolutions. Downscaling 4K to 1080p improves SNR by 0.5-1 stop (rough math below). The log curve on the GH5 probably doesn't take advantage of the sensor's full DR.
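     Rough math behind that: downscaling 4K to 1080p averages about a 2×2 block into each output pixel. For fully uncorrelated noise, that divides the noise by sqrt(4) = 2, i.e. +6 dB of SNR, which is one stop; debayered footage has spatially correlated noise, so the realized gain is less, hence the 0.5-1 stop range.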
  5. This. The main concrete benefit of ProRes is that it's standard. There are a couple of defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers knows exactly what they are working with. Standards are almost never the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers, so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate.

     As has been pointed out, on a technical level 10-bit 4:2:2 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom transform matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings it is actually choosing. Compounding that, many manufacturers use heavier NR and more sharpening for H.264 than for ProRes, not for a technical reason, but based on consumer convention. Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies: speed vs. size.

     As for decode speed, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. And if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with NVIDIA hardware acceleration (example below). But this goes back to the first paragraph: when we talk about differences in software implementation, it is better to just know the exact details from one word: "ProRes".
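     As a concrete example of that ffmpeg path, a hardware-decoded H.265 to ProRes transcode looks something like this (file names are placeholders; -profile:v 3 selects ProRes 422 HQ in the prores_ks encoder):

        ffmpeg -hwaccel cuda -i input_h265.mp4 \
               -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le \
               -c:a copy output.mov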
  6. Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic autofocus of their system. I've considered buying a used F2 (not the BT version), opening it up, and soldering the pins from a Bluetooth Arduino onto the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32-bit recording. Their receiver is nice and bag-friendly, and they've already licensed dual transmit/record technology. AND they have both lav and XLR transmitters.
  7. I was looking at this when it was announced, with the exact same thought about using F2s in conjunction. From what I can tell, though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?
  8. I've seen cameras that scan rooms into 3D for real estate walkthroughs. Product demos, especially real estate, are a great practical use case for VR, since photography distorts space in a way a full, congruent 3D model doesn't. One surprising aspect of VR content creation that I've run into both at work and in hobbies is that you can have a 3D environment that looks totally normal in screen space, and then as soon as you step into that world in VR you immediately notice mismatches in scale between props. By "surprising," I mean it's surprising how invisible scale mismatches are on a computer screen, even when you move freely in 3D. But yes, renderings for VR make a lot more sense to me than a fixed-location image or video; for the latter I'd really rather just have a normal 3D screen than have it "glued" to my head.
  9. 3D porn is last decade, we're way beyond that haha
  10. I'm talking from my experience regarding what VR users typically complain about. Some people have higher tolerances, but for the general public with current tech, a discrepancy between your perception of motion and the motion you see is a great way to get a lot of complaints--especially with rotation. Translation is tolerated slightly better.
  11. Exactly. The lack of head tracking in static content like this makes it really tedious to watch. I could definitely see the appeal for animated films, where you can kind of move around and see under and around things. But I haven't watched any yet, so that's speculation. Games and simulations are definitely the appeal of VR. Nothing more fun than goofing off with a couple of friends in a VR game.

      The dual 4K eyes are probably fine resolution-wise. I think YouTube's VR app is bugged, because there's no difference between 8K and 144p when I switch settings, so I think I'm viewing it at 480p or thereabouts. These static videos are about a 1/10 on the immersion scale. High-quality games like Half-Life: Alyx are more like 7/10.

      Well, like Django said, there's only so much you can do with a 180VR shot. Unless you want to make everyone vomit, you can't:
      - Change focal length
      - Tilt or cant the angle
      - Pan or change height mid-shot
      - Move quickly in any direction
      - Have objects close to the lens
      - Have anything out of focus, and especially don't rack focus
      - Arguably, have quick cuts, though I think tolerance for that is higher

      I'm not saying it's a perfect shot, but within the medium there's not a whole lot of options, and obviously it won't even have the novelty factor if you watch it on a screen. Personally I think static 180VR content like this is a dead-end medium creatively.
  12. Strangely enough, I'd never tried YouTube VR. I searched for some videos (couldn't find that one in particular) but was pretty underwhelmed. I don't know if it was YouTube compression or the video quality itself, but it looked like mush. Switching YouTube's resolution in the app didn't visually change anything, so I'm not convinced it was playing back in full res--maybe the app is just broken and the video is fine. The bigger problem with this style of lens is that without head tracking, it's not very enjoyable to watch. I prefer the "fake 3D cinema" experience for VR movies, where you feel like you're looking at a big 3D screen, but with proper head tracking. Did you watch in VR? It doesn't have the fisheye effect when it matches your own vision. What would you have done differently, considering the medium?
  13. The Aesthetic

     Photo cameras have hovered between 20-30MP as standard for quite some time; 6K sensors have been the norm for hybrids pretty much since the DSLR video revolution started. The 5D mkII would have been 5.6K if it had full pixel readout (quick math below), which means that unless manufacturers had decided to reduce sensor resolution since then, full-readout raw video would always have been above 4K for most hybrids. Even saying that resolution has increased is misleading, imo. Outside of Blackmagic cameras and a couple of niche releases from companies that (tellingly) went out of business, there have never been consumer-priced cinema cameras that had sub-4K sensors and have since moved to larger ones. Even the A7s3 is still 12MP. So, you can complain that Blackmagic specifically now sells a 4K pocket camera instead of an HD one, or that they picked a 4K sensor instead of a nonexistent 2.8K one. But honestly there are very few product lines that actually fit the trend of increasing video resolution, beyond increasing file resolution to match existing sensor resolution.
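     To put numbers on that: a 3:2 sensor with N megapixels is sqrt(N × 10^6 × 3/2) pixels across, so 24MP works out to sqrt(36 × 10^6) = 6000 pixels, i.e. exactly 6K, and the 5D mkII's 21MP sensor (5616 × 3744) is 5616 pixels across, hence 5.6K.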
  14. This. I love VR for entertainment, I have high hopes for it as a productivity tool, and it's a huge benefit in many training applications. But the Metaverse concept--particularly with Facebook behind it--is a terrible idea.
  15. The main conceptual thing was that I (stupidly) left the rotation of the XR controller on in the first one, whereas with camera projection the rotation should be ignored. I actually don't think it's that noticeable a change, since I didn't rotate the camera much in the first test; it's only noticeable with canted angles. The other thing I didn't do correctly in the first test was that I parented the camera to a tracked point but forgot to account for the offset, so the camera pivots around the wrong point. That is really the main thing that is noticeably wrong in the first one.

      Additionally, I measured out the size of the objects so the mountains are scaled correctly, and I measured the starting point of the camera's sensor as accurately as I could instead of just "well, this looks about right." Similarly, I actually aligned the virtual light source with my light. Nothing fancy, I just eyeballed it, but I didn't even do that for the first version. That's all just a matter of putting in the effort to make sure everything was aligned.

      The better tracking is because I used a Quest instead of a Rift headset. The Quest has cameras on the bottom, so it has an easier time tracking an object held at chest height. I have the headset tilted up on my head to record these, so the extra downward FOV helps considerably. Nothing really special there, just better hardware. There were a few blips where it lost tracking--if I were doing this on a shoot I'd use HTC Vive trackers, which are honestly rather annoying to set up, despite being significantly better for this sort of thing.

      Also, this time I put some handles on my camera, and the added stability means that the delay between tracking and rendering is less noticeable. The delay is a result of the XR controller tracking position at only 30 Hz, plus HDMI delay from PC to TV, which is fairly large on my model. I can reduce this delay by using a Vive, which has 250 Hz tracking, and a low-latency projector (or screen, but for live action I need more image area). I think in the best case I might actually get sub-frame delay at 24 fps (rough budget below). Some gaming projectors claim <5 ms latency.
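      A rough latency budget for that best case -- a sketch with assumed numbers, counting only tracking-sample age and display latency (render and transport time add a few ms on top):

         FRAME_MS = 1000 / 24                 # ~41.7 ms per frame at 24 fps

         def budget_ms(tracker_hz, display_ms):
             # Worst-case tracking-sample age is one full tracker period.
             return 1000 / tracker_hz + display_ms

         # XR controller at 30 Hz + a TV with an assumed ~50 ms of input lag:
         print(budget_ms(30, 50) / FRAME_MS)  # ~2.0 frames behind
         # Vive at 250 Hz + a <5 ms gaming projector:
         print(budget_ms(250, 5) / FRAME_MS)  # ~0.22 frames: sub-frame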