Everything posted by KnightsFan

  1. Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic-autofocus of their system. I've considered buying a used F2 (not BT), opening it up and soldering the pins from a Bluetooth Arduino onto the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32-bit. Their receiver is nice and bag friendly, and they've licensed dual transmit/record technology already. AND they have both lav and XLR transmitters.
  2. I was looking at this when it was announced with the exact same thought about using F2's in conjunction. From what I can tell though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?
  3. I've seen cameras that scan rooms into 3D for real estate walkthroughs. Product demos, especially real estate, are a great practical use case for VR, since photography distorts space far more than a full, congruent 3D model does. One surprising aspect of VR content creation that I've run into both at work and in hobbies is that you can have a 3D environment that looks totally normal in screen space, and then as soon as you step into that world in VR you immediately notice mismatches in scale between props. By "surprising," I mean it's surprising how invisible scale mismatches are on a computer screen, even when you move freely in 3D. But yes, renderings for VR make a lot more sense to me than a fixed-location image or video. For those I'd really rather just have a normal 3D screen than have the image "glued" to my head.
  4. 3D porn is last decade, we're way beyond that haha
  5. I'm talking from my experience regarding what VR users typically complain about. Some people have higher tolerances, but for the general public with current tech, a discrepancy between your perception of motion and the visual representation of that motion is a great way to get a lot of complaints--especially with rotation. Translation is tolerated slightly more.
  6. Exactly. The lack of head tracking in static content like this makes it really tedious to watch. I could definitely see the appeal of animated films, where you can kind of move around and see under and around things. But I haven't watched any yet, so that's speculation. Games and simulations are definitely the appeal of VR. Nothing more fun than goofing off with a couple friends in a VR game. The dual 4k eyes are probably fine resolution-wise. I think YouTube's VR app is bugged, because there's no difference between 8k and 144p when I switch settings, so I think I'm viewing it in 480p or thereabouts. These static videos are about a 1/10 on the immersion scale. High quality games like Half-Life: Alyx are more like 7/10.

    Well, like Django said, there's only so much you can do with a 180VR shot. Unless you want to make everyone vomit, you can't:
    - Change focal length
    - Tilt or cant the angle
    - Pan or change height mid shot
    - Move quickly in any direction
    - Have objects close to the lens
    - Have anything out of focus, and especially don't rack focus
    - Arguably, have quick cuts (though I think tolerance on that is higher)

    I'm not saying it's a perfect shot, but within the medium there's not a whole lot of options, and obviously it won't even have the novelty factor if you watch it on a screen. Personally I think static 180VR content like this is a dead end medium creatively.
  7. Strangely enough I've never tried YouTube VR. I searched for some videos (couldn't find that one in particular) but was pretty underwhelmed. I don't know if it was YouTube compression or the video quality itself, but it looked like mush. Switching YouTube's resolution in the app didn't visually change anything, so I'm not convinced it was playing back in full res--maybe the app is just broken and the video is fine. The bigger problem with this style of lens is that without head tracking, it's not very enjoyable at all to watch. I prefer the "fake 3D cinema" experience for VR movies, where you feel like you're looking at a big 3D screen but with proper head tracking.

    Did you watch in VR? It doesn't have the fisheye effect when it matches your own vision. What would you have done differently, considering the medium?
  8. The Aesthetic

    Photo cameras have hovered between 20-30MP as standard for quite some time, and 6k has been the standard sensor resolution for hybrids pretty much since the DSLR revolution started (quick arithmetic below). The 5D mkII would have been 5.6k if it had full pixel readout, which means that unless manufacturers had decided to reduce sensor resolution since then, full-readout raw video would never have been as low as 4k for most hybrids. Even saying that resolution has increased is misleading imo. Outside of Blackmagic cameras and a couple niche releases from companies that (tellingly) went out of business, there have never been consumer-priced cinema cameras that started with sub-4k sensors and are now using larger ones. Even the A7s3 is still 12MP. So, you can complain that Blackmagic specifically now sells a 4k pocket camera instead of an HD one, or that they picked a 4k sensor instead of a nonexistent 2.8k one. But honestly there are very few product lines that actually fit the trend of increasing video resolution, beyond increasing file resolution to match existing sensor resolution.
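    A quick sanity check on those numbers (a throwaway C# snippet; exact sensor dimensions vary a bit by model, so treat these as approximations): for a 3:2 sensor, width × height = pixel count and width/height = 3/2, so width = sqrt(pixels × 1.5).

      using System;

      class SensorWidth
      {
          static void Main()
          {
              // For a 3:2 sensor: w * h = pixels and w / h = 1.5, so w = sqrt(pixels * 1.5).
              foreach (double mp in new[] { 21.0, 24.0, 33.0 })
              {
                  double width = Math.Sqrt(mp * 1e6 * 1.5);
                  Console.WriteLine($"{mp} MP (3:2) -> ~{width:F0} px wide");
              }
              // 21 MP -> ~5612 px (the 5D mkII's actual sensor is 5616x3744),
              // 24 MP -> 6000 px, 33 MP -> ~7036 px.
          }
      }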
  9. This. I love VR for entertainment, I have high hopes for it as a productivity tool, and it's a huge benefit in many training applications. But the Metaverse concept--particularly with Facebook behind it--is a terrible idea.
  10. The main conceptual thing was that I (stupidly) left the rotation of the XR controller on in the first one, whereas with camera projection the rotation should be ignored. I actually don't think it's that noticeable a change, since I didn't rotate the camera much in the first test; it's only noticeable with canted angles. The other thing I didn't do correctly in the first test was that I parented the camera to a tracked point but forgot to account for the offset, so the camera pivots around the wrong point. That is really the main thing that is noticeably wrong in the first one (see the sketch below).

    Additionally, I measured out the size of the objects so the mountains are scaled correctly, and I measured the starting point of the camera's sensor as accurately as I could instead of just "well, this looks about right." Similarly, I actually aligned the virtual light source and my light. Nothing fancy, I just eyeballed it, but I didn't even do that for the first version. That's all just a matter of putting in the effort to make sure everything was aligned.

    The better tracking is because I used a Quest instead of a Rift headset. The Quest has cameras on the bottom, so it has an easier time tracking an object held at chest height. I have the headset tilted up on my head to record these, so the extra downward FOV helps considerably. Nothing really special there, just better hardware. There were a few blips where it lost tracking--if I was doing this on a shoot I'd use HTC Vive trackers, but they are honestly rather annoying to set up, despite being significantly better for this sort of thing.

    Also, this time I put some handles on my camera, and the added stability means that the delay between tracking and rendering is less noticeable. The delay is a result of the XR controller tracking position at only 30 Hz, plus HDMI delay from PC to TV, which is fairly large on my model. I can reduce this delay by using a Vive, which has 250 Hz tracking, and a low latency projector (or screen, but for live action I need more image area). I think in the best case I might actually get sub-frame delay at 24 fps (a frame is ~42 ms). Some gaming projectors claim <5 ms latency.
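    For anyone curious, the two fixes amount to just a few lines in Unity. This is a minimal sketch of the idea rather than my actual project code--the XR node choice and the offset vector are placeholders you'd measure for your own rig:

      using UnityEngine;
      using UnityEngine.XR;

      // Drives a virtual camera from a tracked controller: the measured
      // controller-to-sensor offset is applied in the controller's frame
      // (so the camera pivots around the sensor, not the controller), but
      // the controller's rotation is NOT applied to the camera itself,
      // since with camera projection the rotation should be ignored.
      public class TrackedCameraRig : MonoBehaviour
      {
          public Transform virtualCamera;
          // Hypothetical offset from controller to sensor plane, in meters,
          // measured with a tape measure.
          public Vector3 controllerToSensor = new Vector3(0f, -0.05f, 0.10f);

          void Update()
          {
              var device = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
              if (device.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 pos) &&
                  device.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rot))
              {
                  virtualCamera.position = pos + rot * controllerToSensor;
                  // virtualCamera.rotation is deliberately left fixed.
              }
          }
      }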
  11. ...and today I cleaned up the effect quite a bit. This is way too much fun! The background is a simple terrain with a solid color shader, lit by a single directional light and a solid color sky. With it this far out of focus, it looks shockingly good considering it's just a low poly mesh colored brown! https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest2.mp4 It's a 65" TV behind this one. You can vaguely see my reflection in the screen haha.
  12. Thanks to this topic, I dug a little further into virtual backgrounds myself. I used this library from GitHub, and made this today. It's tracked using an Oculus controller velcro'd to the top of my camera. "Calibration" was a tape measure and some guesstimation, and the model isn't quite the right scale for the figure. No attempt to match focus, color, or lighting. So not terribly accurate, but imo it went shockingly well for a Sunday evening proof of concept. https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest.mp4
  13. It's still easy to spot an object sliding around in the frame, especially if it's supposed to be fixed to the ground. And motion capture usually means capturing the animation of a person, i.e. how their arms, legs, and face move, and we're especially adept at noticing when a fellow human moves incorrectly. So for foreground character animation, the accuracy also has to be really high. I believe that even with high end motion capture systems, an animator will clean up the data afterwards for high fidelity workflows like you would see in blockbusters or AAA video games. The repo I posted is a machine learning algorithm that gets full body human animation from a video of a person, as opposed to the traditional method of a person wearing a suit with markers tracked by an array of cameras fed through an algorithm someone wrote manually. It has shockingly good results, and that method will only get better with time--as with every other machine learning application! Machine learning is the future of animation imo.

    For motion tracking, Vive is exceptional. I've used a lot of commercial VR headsets, and Vive is top of the pack for tracking. Much better than the inside-out version (using built-in cameras instead of external sensors) on even Oculus/Meta's headsets. I don't know what the distance limit is; I've got my base stations about 10 ft apart. For me and my room, the limiting factor for a homemade virtual set is the size of the backdrop, not the tracking area.
  14. You can't get accurate position data from an accelerometer; the errors from integrating twice are too high (quick numbers below). It might depend on the phone, but orientation data in my experience is highly unreliable. I send orientation from my phone to PC for real world manipulation of 3D props, and it's nowhere near accurate enough for camera tracking. It's great for placing virtual set dressings, where small errors can actually make it look more natural. There's a free Android app called Sensors Multitool where you can read some of the data. If I set my phone flat on a table, the orientation data "wobbles" by up to 4 degrees.

    But in general, smartphones are woefully underutilized in so many ways. With a decent router, you can set up a network and use phones as mini computers to run custom scripts or apps anywhere on set--virtual or real production. Two way radios, reference pictures, IP cams, audio recorders, backup hard drives, note taking, all for a couple bucks second hand on eBay.
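    To put rough numbers on why double integration fails: a constant accelerometer bias b becomes a velocity error of b·t and a position error of ½·b·t², so the error grows quadratically with time. A quick C# illustration (the bias value here is an assumption, and real phone sensors are usually worse):

      using System;

      class ImuDrift
      {
          static void Main()
          {
              double bias = 0.05; // m/s^2 -- an optimistic constant bias
              foreach (double t in new[] { 1.0, 10.0, 60.0 })
              {
                  // Integrating a constant bias twice: error = 0.5 * b * t^2.
                  double error = 0.5 * bias * t * t;
                  Console.WriteLine($"after {t,4:F0} s: position error ~ {error:F2} m");
              }
              // 1 s -> 0.03 m, 10 s -> 2.5 m, 60 s -> 90 m. Useless for camera tracking.
          }
      }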
  15. Sorry, I made a typo--I was talking about motion capture in my response to Jay, not motion tracking. The repo I linked to can give pretty good results for full body motion capture. Resolution is still important there, but not in the sense of background objects moving. As with any motion capture system, there will be some level of manual cleanup required. For real time motion tracking, the solution is typically a dedicated multicam system like Vive trackers, or a depth sensing camera--not a high resolution camera with VFX/post style tracking markers.
  16. How accurate does it need to be? There are open source repos on GitHub for real time, full body motion tracking based on a single camera which are surprisingly accurate: https://github.com/digital-standard/ThreeDPoseUnityBarracuda. It's a pretty significant jump in price up to an actual mocap suit, even a "cheap" one. I wonder how accurate or cost effective it would be to instead mount one of these on your camera: https://www.stereolabs.com/zed/ I keep almost buying a Zed camera to mess around with, but I've never had a project to use it in. Though if you already have a Vive ecosystem, a single tracker isn't a ton more money. You can make your own focus ring rotation sensor with a potentiometer and an Arduino for next to nothing (rough sketch below). (As you might be able to tell, I'm all for spending way too much effort on things that have commercial solutions.)

    One piece that I haven't researched at all is how to actually project the image. I don't know what type of projector, what type of screen, how bad the color cast will be, or what the cost of all that would be. An LCD would be the cleanest way to go, but at any size at all that's a hefty investment. I once did a test with a background LCD screen using Lego stop motion so that the screen would be a sufficient size, but that was long before I had any idea what I was doing. Enjoying the write up so far!
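    In case anyone wants to try the potentiometer idea, here's a rough sketch of the PC side in C#. It assumes--hypothetically--that the Arduino just prints its raw analogRead value (0-1023) once per loop over serial; the port name and the pot's electrical travel are made up for illustration:

      using System;
      using System.IO.Ports;

      class FocusRingReader
      {
          static void Main()
          {
              // Port name is an assumption; on Linux it'd be e.g. "/dev/ttyUSB0".
              using var port = new SerialPort("COM3", 115200);
              port.Open();
              while (true)
              {
                  int raw = int.Parse(port.ReadLine().Trim()); // 0-1023 from the Arduino
                  // Map to an angle, assuming a pot with ~300 degrees of travel.
                  double degrees = raw / 1023.0 * 300.0;
                  Console.WriteLine($"focus ring at {degrees:F1} deg");
              }
          }
      }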
  17. Unity is very dependent on coding, but it is designed to be easy to code in (see the tiny example below). There are existing packages on the Asset Store (both paid and free) that can get you pretty far, but if you aren't planning to write code, then a lot of Unity's benefits are lost. For example, you can make a 3D animation without writing any code at all, but at that point just use Blender or Unreal. The speed with which you can write and iterate on code in Unity is its selling point. Edit: Also, Unity uses C#, so that would be the language to learn.
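    To give a sense of how little code a working Unity script takes, this is a complete behaviour that spins whatever object it's attached to--drop it on a cube and press Play:

      using UnityEngine;

      public class Spinner : MonoBehaviour
      {
          public float degreesPerSecond = 45f; // editable in the Inspector without recompiling

          void Update()
          {
              // Rotate around the Y axis, framerate-independent.
              transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
          }
      }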
  18. Thanks! I've been building games in Unity and Blender for many years, so there was a lot of build up to making this particular project. All that's to say I'm interested in virtual production, and will definitely be interested to see what BTM or anyone else comes up with in this area.
  19. Last year my friends and I put together a show which we filmed in virtual reality. We tried to make it as much like actual filmmaking as possible, except we were 1,000 miles apart. That didn't turn out to be as possible as we'd hoped, due to bugs and limited features. The engine is built entirely from scratch in Unity, so I was making patches every 10 minutes just to get it to work. Most of the 3D assets were made for the show; maybe 10% or so were found free on the web. Definitely one of the more difficult aspects was building a pipeline entirely remotely for scripts, storyboards, assets, and recordings, and then at the very end the relatively simple task of collaborative remote editing with video and audio files. If you're interested you can watch it here (mature audiences): https://www.youtube.com/playlist?list=PLocS26VJsm_2dYcuwrN36ZgrOVtx49urd I've been toying with the idea of going solo on my own virtual production as well.
  20. @kye With that software, does Resolve have to be in a specific layout? For example, if I undock my curves window, can it still find the curves and adjust them, or does it rely on knowing the exact positions of various buttons on screen? Does it work with an arbitrarily sized window, or only full screen?
  21. I buy used whenever I can: cameras, lenses, microphones, cars, clothes. Mainly to do my part to reduce waste. I sell or give away anything I've upgraded away from, unless it's absolutely unsalvageable. So a nice perk is that often I don't lose much value. In fact, with 2 out of the 3 cameras I've sold, I actually profited from it! In both cases I bought and sold on ebay, which is what I almost always use.
  22. Never heard of it. It's a good idea; there are a number of ad hoc locking connectors out there, including for USB, and it would be nice if they could standardize and gain traction. The other side, though, is that software standardization for network AV isn't there yet. NDI has some promise.

    Yeah, the difficulty of finding an affordable 5.1 receiver for my home theater shows the limitations of digital longevity. The only way these days is with HDMI passthrough, which means finding a receiver that supports all the image formats you want. It's ridiculous that every time you want to upgrade your image you need a new sound system.

    I think anyone serious about audio should stick with analog mics for the reason you mention. Digital mics are a little bit like fixed lens cameras. However, I do think there is a user tier where they make sense and will continue to grow market share.

    Right, but you're talking orders of magnitude more expensive than consumer gear. If you're buying Rode Wireless GO-tier--which is near the quality tier I expect for this Tascam unit--it will be much more reliable to record at the TX. I trust a $150 recorder. I don't trust a $150 wireless set. In terms of what's technologically possible, recording at the TX in extremely high quality with very high reliability is quite affordable. I can extrapolate quite a bit in terms of how early digitization would help the consumer market. If my shotgun mic recorded onto an SD card and sent audio wirelessly to a mixer for monitoring, suddenly I wouldn't need nearly as good a boom pole or booming technique to avoid cable rattle. I'm not saying that's a good pro workflow, but the benefits are pretty tangible for the rest of us.
  23. The problem with USB audio for widespread use is that it's limited to short cables. There are workarounds, but it gets complicated once you get to 5+ meters. XLR cables can easily be 10x that length. So XLR is better unless you know that the DAC is always going to be right next to the recorder. As far as non-USB digital audio goes, it would be interesting if we saw some uptake of RJ45 ethernet ports on cameras, along with the necessary standards to carry video and/or audio. Those have (flimsy) locking connectors and longer run distances, plus there's a PoE standard in place. But as of right now, the only universal standard for digital audio in the consumer world is USB. AES would help pro audio, but would still be useless for most consumers.

    @mkabi I think we're heading in the direction of putting the DAC (digitizer) in the mic housing like you are suggesting. Certainly the consumer world is. The Sennheiser XS Lav Mobile and the Rode VideoMic NTG are examples, and then there are a couple of bodypacks like the Zoom F2 or Deity BP-TRX that are moving there for the wireless world. I think putting the DAC in the mic is especially beneficial for consumer products, because it means that the audio company is solely responsible for the audio signal chain. However, for the foreseeable future, if there is a wire between the mic and recorder, then analog via XLR to a DAC inside the recorder has no downsides compared to digitizing at the other end. If we're talking wireless it's a different story--it would be better to both digitize and record on the TX side, but that's a legal minefield. Also, the reason lavs get away with thin cables and 3.5mm jacks is that they have short runs. You can run unbalanced audio from your shirt to your belt, not 50m across a stage.
  24. The characters in the NPH scene don't look like CG to me.
  25. Sony A7 IV

    I imagine when it says 4k30 oversampled from 33 MP, it means that 4k60 is a crop mode--assuming the rumor is from a reliable source in the first place. Although if Sony does make it a 4k30-max camera now, it will be an interesting data point amid Canon's resurgence in hybrid mirrorless cams.