Everything posted by KnightsFan

  1. I've been working remote since before the pandemic. The question isn't whether I like hopping on a Zoom call, it's whether I prefer it to commuting 50 minutes each way in rush-hour traffic. Depends on who is doing the saving: the huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, to working in an office!
  2. The matte is pretty good! Is it this repo you are using? You mentioned RVM in the other topic. https://github.com/PeterL1n/RobustVideoMatting Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio. If I were you, I would ditch the crane if you're over the weight limit: get some wide camera handles, make slow deliberate movements, and mount a proper tracking device on top instead of a phone, if that's what you're using now. Of course, compared to the projected background we're talking about in the other topic, this approach has downsides: lighting is easier to merge with a projected background, and here you need to synchronize a LOT more settings between your virtual and real camera. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
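     If the background is rendered in something like Unity, the geometric part of that matching is the easy bit via the physical camera mode. A rough sketch (the focal length and sensor size numbers are placeholders for your actual lens and sensor; exposure, noise, and color still have to be matched by hand or in the grade):

     ```csharp
     using UnityEngine;

     // Rough sketch: match the virtual camera's framing to the real lens using
     // Unity's physical camera mode. The values below are placeholders.
     public class MatchRealLens : MonoBehaviour
     {
         public Camera virtualCamera;
         public float focalLengthMm = 24f;                        // real lens focal length in mm
         public Vector2 sensorSizeMm = new Vector2(23.5f, 15.6f); // e.g. an APS-C sensor

         void Start()
         {
             virtualCamera.usePhysicalProperties = true;
             virtualCamera.sensorSize = sensorSizeMm;
             virtualCamera.focalLength = focalLengthMm;           // zooming means updating this live
         }
     }
     ```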
  3. I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control. I recommend still passing raw data from the dev board to the PC, and using desktop software to interpret the raw data. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low-resolution 7-bit values; you can send float or even string values. Plus OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpreting everything was a game changer for me compared to sending MIDI directly from the Arduino.
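     The bridge app can be tiny. A minimal sketch of the idea in C# (the COM port, baud rate, one-delta-per-line serial format, and OSC address are all placeholders for whatever your board and target software actually use):

     ```csharp
     // Minimal serial -> OSC bridge (placeholder port name, baud rate, and OSC address).
     // Reads one integer delta per line from the dev board and forwards it as an OSC
     // float message over UDP to software listening on port 9000.
     // Note: System.IO.Ports may need the NuGet package of the same name on .NET Core.
     using System;
     using System.IO.Ports;
     using System.Net.Sockets;
     using System.Text;

     class SerialToOsc
     {
         static void Main()
         {
             using var port = new SerialPort("COM3", 115200);    // placeholder: adjust to your board
             using var udp = new UdpClient();
             udp.Connect("127.0.0.1", 9000);                     // placeholder: receiver's OSC port

             port.Open();
             while (true)
             {
                 string line = port.ReadLine().Trim();           // e.g. "+3" or "-1" per detent
                 if (!int.TryParse(line, out int delta)) continue;

                 byte[] packet = BuildOscFloat("/encoder/1", delta);
                 udp.Send(packet, packet.Length);
             }
         }

         // Encodes a single-argument OSC message: padded address, ",f" type tag, big-endian float.
         static byte[] BuildOscFloat(string address, float value)
         {
             static byte[] PadString(string s)
             {
                 int len = s.Length + 1;                         // include the null terminator
                 int padded = (len + 3) & ~3;                    // round up to a multiple of 4
                 var bytes = new byte[padded];
                 Encoding.ASCII.GetBytes(s, 0, s.Length, bytes, 0);
                 return bytes;
             }

             byte[] addr = PadString(address);
             byte[] tags = PadString(",f");
             byte[] arg = BitConverter.GetBytes(value);
             if (BitConverter.IsLittleEndian) Array.Reverse(arg); // OSC argument data is big-endian

             var packet = new byte[addr.Length + tags.Length + arg.Length];
             Buffer.BlockCopy(addr, 0, packet, 0, addr.Length);
             Buffer.BlockCopy(tags, 0, packet, addr.Length, tags.Length);
             Buffer.BlockCopy(arg, 0, packet, addr.Length + tags.Length, arg.Length);
             return packet;
         }
     }
     ```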
  4. CineD measures at different resolutions. Downscaling 4k to 1080p improves SNR by 0.5-1 stop. The log curve on the GH5 probably also doesn't take advantage of the sensor's full DR.
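     Back-of-the-envelope, assuming independent per-pixel noise and a clean 2x2 average:

     ```latex
     \frac{\mathrm{SNR}_{1080p}}{\mathrm{SNR}_{4K}}
       \approx \sqrt{\frac{3840 \times 2160}{1920 \times 1080}}
       = \sqrt{4} = 2 \quad (\approx 1\ \text{stop})
     ```

     Real scalers and partially correlated noise give less than the ideal 1 stop, which lines up with the 0.5-1 stop CineD sees.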
  5. This. The main concrete benefit of ProRes is that it's standard. There are a couple of defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers, knows exactly what they are working with. Standards are almost always not the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate.
     As has been pointed out, on a technical level 10 bit 422 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom transform matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings they are actually choosing. Compounding that, many manufacturers use higher NR and more sharpening for H.264 than ProRes, not for a technical reason, but based on consumer convention.
     Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies: speed vs. size.
     As far as decode speed, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. But if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with NVIDIA hardware acceleration. But this goes back to the first paragraph: when we talk about differences in software implementation, it is better to just know the exact details from one word: "ProRes".
  6. Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic-autofocus of their system. I've considered buying a used F2 (not the BT version), opening it up and soldering the pins from a Bluetooth Arduino onto the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32-bit recording. Their receiver is nice and bag-friendly, and they've licensed dual transmit/record technology already. AND they have both lav and XLR transmitters.
  7. I was looking at this when it was announced with the exact same thought about using F2's in conjunction. From what I can tell though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?
  8. I've seen cameras that scan rooms into 3D for real estate walkthroughs. Product demos, especially real estate, are a great practical use case for VR, since photography distorts space so much more than a full, congruent 3D model does. One surprising aspect of VR content creation that I've run into both at work and in hobbies is that you can have a 3D environment that looks totally normal in screen space, and then as soon as you step into that world in VR you immediately notice mismatches in scale between props. By "surprising," I mean it's surprising how invisible scale mismatches are on a computer screen even when you move freely in 3D. But yes, renderings for VR make a lot more sense to me than a fixed-location image or video; for the latter I'd really rather just have a normal 3D screen than have it "glued" to my head.
  9. 3D porn is last decade, we're way beyond that haha
  10. I'm talking from my experience regarding what VR users typically complain about. Some people have higher tolerances, but for the general public with current tech, a discrepancy between your perception of motion and the motion you see is a great way to get a lot of complaints--especially with rotation. Translation is tolerated slightly more.
  11. Exactly. The lack of head tracking in static content like this makes it really tedious to watch. I could definitely see the appeal of animated films, where you can kind of move around and see under and around things. But I haven't watched any yet, so that's speculation. Games and simulations are definitely the appeal of VR. Nothing more fun than goofing off with a couple of friends in a VR game.
     The dual 4k eyes are probably fine resolution-wise. I think YouTube's VR app is bugged, because there's no difference between 8k and 144p when I switch settings, so I think I'm viewing it in 480p or thereabouts. These static videos are about a 1/10 on the immersion scale. High quality games like Half-Life: Alyx are more like 7/10.
     Well, like Django said, there's only so much you can do with a 180VR shot. Unless you want to make everyone vomit, you can't:
     - Change focal length
     - Tilt or cant the angle
     - Pan or change height mid shot
     - Move quickly in any direction
     - Have objects close to the lens
     - Have anything out of focus, and especially don't rack focus
     - Arguably can't have quick cuts, though I think tolerance on that is higher
     I'm not saying it's a perfect shot, but within the medium there's not a whole lot of options, and obviously it won't even have the novelty factor if you watch it on a screen. Personally I think static 180VR content like this is a creative dead end as a medium.
  12. Strangely enough I've never tried YouTube VR. I searched for some videos (couldn't find that one in particular) but was pretty underwhelmed. I don't know if it was YouTube compression or the video quality itself, but it looked like mush. Switching YouTube's resolution in the app didn't visually change anything, so I'm not convinced it was playing back in full res--maybe the app is just broken and the video is fine. The bigger problem with this style of lens is that without head tracking, it's not very enjoyable at all to watch. I prefer the "fake 3D cinema" experience for VR movies where you feel like you're looking at a big 3D screen, but with proper head tracking. Did you watch in VR? It doesn't have the fisheye effect when it matches your own vision. What would you have done differently considering the medium?
  13. The Aesthetic

    Photo cameras have hovered between 20-30MP as standard for quite some time, and 6k sensors have been the standard resolution for hybrids pretty much since the DSLR video revolution started. The 5D mkII would have been 5.6k if it had full pixel readout, which means that unless manufacturers had decided to reduce sensor resolution since then, full-readout raw video was never going to be below 4k on most hybrids anyway. Even saying that resolution has increased is misleading imo. Outside of Blackmagic cameras and a couple of niche releases from companies that (tellingly) went out of business, there have never been consumer-priced cinema cameras with sub-4k sensors that have since moved to higher resolution ones. Even the A7s3 is still 12MP. So, you can complain that Blackmagic specifically now sells a 4k pocket camera instead of an HD one, or that they picked a 4k sensor instead of a nonexistent 2.8k one. But honestly there are very few product lines that actually follow a trend of increasing video resolution, beyond increasing file resolution to match the sensor resolution that was already there.
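     Rough numbers behind that: a 3:2 sensor with P total pixels is about sqrt(1.5 P) pixels wide, so

     ```latex
     h = \sqrt{\tfrac{3}{2} P}, \qquad
     \sqrt{1.5 \times 21\times 10^{6}} \approx 5600 \ \text{(the 5D mkII's 5.6k)}, \qquad
     \sqrt{1.5 \times 24\times 10^{6}} = 6000 \ \text{(6k)}
     ```

     In other words, a typical 20-30MP stills sensor has been a 5.5k-6.7k sensor all along; the video files just didn't use all of it.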
  14. This. I love VR for entertainment, I have high hopes for it as a productivity tool, and it's a huge benefit in many training applications. But the Metaverse concept--particularly with Facebook behind it--is a terrible idea.
  15. The main conceptual thing was I (stupidly) left the rotation of the XR controller on in the first one, whereas with camera projection the rotation should be ignored. I actually don't think it's that noticeable a change, since I didn't rotate the camera much in the first test; it's only noticeable with canted angles.
     The other thing I didn't do correctly in the first test was that I parented the camera to a tracked point but forgot to account for the offset, so the camera pivots around the wrong point. That is really the main thing that is noticeably wrong in the first one.
     Additionally, I measured out the size of the objects so the mountains are scaled correctly, and I measured the starting point of the camera's sensor as accurately as I could instead of just "well, this looks about right." Similarly, I actually aligned the virtual light source and my light. Nothing fancy, I just eyeballed it, but I didn't even do that for the first version. That's all just a matter of putting in the effort to make sure everything was aligned.
     The better tracking is because I used a Quest instead of a Rift headset. The Quest has cameras on the bottom, so it has an easier time tracking an object held at chest height. I have the headset tilted up on my head to record these, so the extra downward FOV helps considerably. Nothing really special there, just better hardware. There were a few blips where it lost tracking--if I were doing this on a shoot I'd use HTC Vive trackers, but they are honestly rather annoying to set up, despite being significantly better for this sort of thing.
     Also, this time I put some handles on my camera, and the added stability means that the delay between tracking and rendering is less noticeable. The delay is a result of the XR controller tracking position at only 30 Hz, plus HDMI delay from PC to TV, which is fairly large on my model. I can reduce this delay by using a Vive, which has 250 Hz tracking, and a low latency projector (or screen, but for live action I need more image area). I think in the best case I might actually get sub-frame delay at 24 fps. Some gaming projectors claim <5ms latency.
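     In Unity terms, the two fixes amount to something like this (a rough sketch; the transform names and the offset value are placeholders, the real offset is whatever you measure from the controller's origin to the camera's sensor):

     ```csharp
     using UnityEngine;

     // Minimal sketch of driving the virtual camera from a tracked controller:
     // use the controller's position, offset to the real sensor, and drop its rotation.
     public class TrackedCameraRig : MonoBehaviour
     {
         public Transform controller;       // the tracked XR controller mounted on the camera
         public Transform virtualCamera;    // the Unity camera rendering the background

         // Measured offset from the controller's origin to the camera's sensor,
         // in the controller's local space (these numbers are placeholders).
         public Vector3 sensorOffset = new Vector3(0f, -0.08f, 0.05f);

         void LateUpdate()
         {
             // Pivot around the sensor, not the controller: rotate the offset into
             // world space before adding it, instead of parenting the camera directly.
             virtualCamera.position = controller.position + controller.rotation * sensorOffset;

             // For camera projection onto a flat screen the virtual camera's rotation
             // stays fixed (aimed at the screen); the controller's rotation is ignored.
         }
     }
     ```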
  16. ...and today I cleaned up the effect quite a bit. This is way too much fun! The background is a simple terrain with a solid color shader, lit by a single directional light and a solid color sky. With it this far out of focus, it looks shockingly good considering it's just a low poly mesh colored brown! https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest2.mp4 It's a 65" tv behind this one. You can vaguely see my reflection in the screen haha.
  17. Thanks to this topic, I dug a little further into virtual backgrounds myself. I used this library from GitHub, and made this today. It's tracked using an Oculus controller velcro'd to the top of my camera. "Calibration" was a tape measure and some guesstimation, and the model isn't quite the right scale for the figure. No attempt to match focus, color, or lighting. So it's not terribly accurate, but imo it went shockingly well for a Sunday evening proof of concept. https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest.mp4
  18. It's still easy to spot an object sliding around in the frame, especially if it's supposed to be fixed to the ground. And motion capture usually means capturing the animation of a person, i.e. how their arms, legs, and face move, and we're especially adept at noticing when a fellow human moves incorrectly. So for foreground character animation, the accuracy also has to be really high. I believe that even with high end motion capture systems, an animator will clean up the data afterwards for high fidelity workflows like you would see in blockbusters or AAA video games. The repo I posted is a machine learning algorithm that gets full body human animation from a video of a person, as opposed to the traditional method of a person wearing a suit with markers, tracked by an array of cameras and fed through an algorithm someone wrote manually. It has shockingly good results, and that method will only get better with time--as with every other machine learning application! Machine learning is the future of animation imo. For motion tracking, Vive is exceptional. I've used a lot of commercial VR headsets, and Vive is top of the pack for tracking. Much better than the inside-out tracking (using built-in cameras instead of external sensors) of even Oculus/Meta's headsets. I don't know what the distance limit is; I've got my base stations about 10 ft apart. For me and my room, the limiting factor for a homemade virtual set is the size of the backdrop, not the tracking area.
  19. You can't get accurate position data from an accelerometer; the errors from integrating twice are too high. It might depend on the phone, but orientation data in my experience is highly unreliable too. I send orientation from my phone to the PC for real-world manipulation of 3D props, and it's nowhere near accurate enough for camera tracking. It's great for placing virtual set dressing, where small errors actually can make it look more natural. There's a free Android app called Sensors Multitool where you can read some of the data. If I set my phone flat on a table, the orientation data "wobbles" by up to 4 degrees. But in general, smartphones are woefully underutilized in so many ways. With a decent router, you can set up a network and use phones as mini computers to run custom scripts or apps anywhere on set--virtual or real production. Two-way radios, reference pictures, IP cams, audio recorders, backup hard drives, note taking, all for a couple of bucks second hand on eBay.
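     The phone-to-PC link is simple enough. A sketch of the receiving side in Unity, assuming the phone app just sends "pitch,yaw,roll" in degrees as UDP text (the port number and message format here are made up for the example):

     ```csharp
     using System.Net;
     using System.Net.Sockets;
     using System.Text;
     using UnityEngine;

     // Receives "pitch,yaw,roll" lines (degrees) sent by a phone over UDP and applies
     // them to a prop. Fine for set dressing, nowhere near good enough for camera tracking.
     public class PhoneOrientationReceiver : MonoBehaviour
     {
         public Transform prop;
         public int listenPort = 8000;      // placeholder: match whatever the phone app sends to

         UdpClient udp;
         Vector3 latestEuler;               // written from the receive thread, read in Update
                                            // (good enough for a sketch; guard it in real code)
         void Start()
         {
             udp = new UdpClient(listenPort);
             udp.BeginReceive(OnReceive, null);
         }

         void OnReceive(System.IAsyncResult result)
         {
             IPEndPoint from = null;
             byte[] data = udp.EndReceive(result, ref from);
             string[] parts = Encoding.ASCII.GetString(data).Split(',');
             if (parts.Length == 3 &&
                 float.TryParse(parts[0], out float pitch) &&
                 float.TryParse(parts[1], out float yaw) &&
                 float.TryParse(parts[2], out float roll))
             {
                 latestEuler = new Vector3(pitch, yaw, roll);
             }
             udp.BeginReceive(OnReceive, null);  // keep listening
         }

         void Update()
         {
             // Smooth a little to hide the few degrees of sensor wobble.
             prop.rotation = Quaternion.Slerp(prop.rotation, Quaternion.Euler(latestEuler), 0.2f);
         }

         void OnDestroy() => udp?.Close();
     }
     ```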
  20. Sorry I made a typo, I was talking about motion capture in my response to Jay, not motion tracking. The repo I linked to can give pretty good results for full body motion capture. Resolution is still important there, but not in the sense of background objects moving. As with any motion capture system, there will be some level of manual cleanup required. For real time motion tracking, the solution is typically a dedicated multicam system like Vive trackers, or a depth sensing camera. Not a high resolution camera for VFX/post style tracking markers.
  21. How accurate does it need to be? There are open source repos on GitHub for real time, full body motion tracking based on a single camera which are surprisingly accurate: https://github.com/digital-standard/ThreeDPoseUnityBarracuda. It's a pretty significant jump in price up to an actual mocap suit, even a "cheap" one. I wonder how accurate or cost effective it would be to instead mount one of these on your camera: https://www.stereolabs.com/zed/ I keep almost buying a Zed camera just to mess around with, but I've never had a project to use it in. Though if you already have a Vive ecosystem, a single tracker isn't a ton more money. You can make your own focus ring rotation sensor with a potentiometer and an Arduino for next to nothing. (As you might be able to tell, I'm all for spending way too much effort on things that have commercial solutions.) One piece that I haven't researched at all is how to actually project the image: I don't know what type of projector or screen to use, how bad the color cast will be, or what all of that would cost. An LCD would be the cleanest way to go, but at any real size that's a hefty investment. I once did a test with a background LCD screen using Lego stop motion so that the screen would be a sufficient size, but that was long before I had any idea what I was doing. Enjoying the write up so far!
  22. Unity is very dependent on coding but is designed to be easy to code in. There are existing packages on the asset store (both paid and free) that can get you pretty far, but if you aren't planning to write code, then a lot of Unity's benefits are lost. For example you can make a 3D animation without writing any code at all, but at that point just use Blender or Unreal. The speed with which you can write and iterate code in Unity is its selling point. Edit: Also, Unity uses C# so that would be the language to learn
  23. Thanks! I've been building games in Unity and Blender for many years, so there was a lot of build up to making this particular project. All that's to say I'm interested in virtual production, and will definitely be interested to see what BTM or anyone else comes up with in this area.
  24. Last year my friends and I put together a show which we filmed in virtual reality. We tried to make it as much like actual filmmaking as possible, except we were 1000 miles apart. That didn't turn out to be as possible as we'd hoped, due to bugs and limited features. The engine is built entirely from scratch in Unity, so I was making patches every 10 minutes just to get it to work. Most of the 3D assets were made for the show; maybe 10% or so were found free on the web. Definitely one of the more difficult aspects was building a pipeline entirely remotely for scripts, storyboards, assets, and recordings, and then at the very end the relatively simple task of collaborative remote editing with video and audio files. If you're interested you can watch it here (mature audiences): https://www.youtube.com/playlist?list=PLocS26VJsm_2dYcuwrN36ZgrOVtx49urd I've been toying with the idea of going solo on my own virtual production as well.
  25. @kye With that software, does Resolve have to be in a specific layout? For example, if I undock my curves window, can it still find the curves and adjust them, or does it rely on knowing the exact positions of various buttons on screen? Does it work with an arbitrarily sized window, or only full screen?