
Everything posted by KnightsFan
-
Interesting older article about WB and ISO. Alister Chapman
KnightsFan replied to webrunner5's topic in Cameras
He's being obtusely literal, in my opinion. Obviously you can't change the camera's analog gain after the fact. But most people don't judge image quality or workflow by counting which photons and voltages flowed through their equipment; they care about whether the end result matches their expectation. So when people say you can change WB in post, it means the NLE is performing a mathematically correct operation to emulate a different white balance, based on accurate metadata.

Not too long ago, there was no such thing as a color managed workflow in consumer NLEs, which meant that the WB sliders and gain adjustments--beyond the obvious fact that they don't change the camera circuitry's native WB--ALSO produced mathematically incorrect results compared to an in-camera adjustment. So when we got accurate WB and ISO adjustments in raw processors, it was truly revolutionary. Nowadays, as long as it's color managed and the files have sufficient data, you can get the same result even without raw. Neither one is technically changing the camera's WB, but they produce the correct results and that's all that matters.

I'll also point out that I suspect most (all?) sensors don't actually change their analog gain levels based on WB setting. I bet it's almost always a digital adjustment. In that case, Alister would have to also argue that changing WB on the camera doesn't actually change WB. Maybe he wants to argue that shooting at anything other than identical gain on each pixel isn't true white balancing, but I'm not sure that's a useful description of the process. That is why I say it's obtusely literal. Everything I said also applies to ISO on cameras that have a fixed amount of gain.
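To make the "mathematically correct operation" point concrete, here's a minimal numpy sketch (my own illustration, not anyone's actual NLE code) of what a color-managed WB change amounts to: per-channel gains applied to linear-light data, which is why it can be emulated after the fact as long as nothing was clipped. The gain values are made up.

```python
import numpy as np

# Illustration only: on linear data, a "correct" white balance change is
# essentially a per-channel gain, so an NLE or raw processor can emulate it
# in post provided the data hasn't been clipped or crushed.

def apply_wb_gains(linear_rgb, r_gain, b_gain):
    """Scale R and B relative to G on linear-light image data."""
    out = linear_rgb.astype(np.float64).copy()
    out[..., 0] *= r_gain   # red channel
    out[..., 2] *= b_gain   # blue channel
    return out

img = np.random.rand(4, 4, 3)                      # stand-in for linear sensor data
neutral = apply_wb_gains(img, 1 / 1.9, 1 / 1.4)    # remove the as-shot WB gains
rebalanced = apply_wb_gains(neutral, 2.1, 1.2)     # emulate a different (warmer) WB
```
-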
This. Do your own tests and trust your judgement, but here's my opinion. If all you care about is how it looks on YouTube, 50 is perfectly fine. No one can tell the difference between a 50 and 100 Mbps source file on a 7" phone screen, or on a 65" 4k screen 12' away with window glare across the front.

I care more about how my content looks in Resolve than on YouTube. And even then, I use 100 Mbps H.265 (IPB). When I had an XT3, I shot a few full projects at 200 Mbps and didn't see any improvement. I've done tests with my Z Cam and can't see benefits above 100. I'd be happy with 50 in most scenarios. It might be confirmation bias, but I think I have been in scenarios where 100 looked better than 50, in particular when handheld.

Keep in mind also that on most cameras, especially consumer cameras, the nominal rate is the upper limit (it would be a BIG problem if the encoder went OVER its nominal rate, because the SD card requirements would be a lie). So while I shoot at 100, the actual bitrate is usually closer to 70, so it might not be as big a file size increase as you think. But for me, 100 Mbps is the sweet spot when shooting H.265 IPB.
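For what it's worth, the back-of-envelope file size math, using the ~70 Mbps average I see against the 100 Mbps ceiling (your camera's numbers will differ):

```python
# Rough file-size arithmetic; the 70 Mbps average is from my own footage,
# treat it as illustrative rather than a spec.
nominal_mbps = 100          # rate set in camera (an upper bound)
actual_mbps = 70            # what the encoder typically averages for me
mb_per_min = actual_mbps / 8 * 60      # megabits/s -> megabytes per minute
print(f"~{mb_per_min:.0f} MB/min")     # ~525 MB/min, vs ~750 MB/min at a true 100 Mbps
```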
-
Red's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it becomes visually lossless (as with every algorithm). JPEG 2000 has been used for DCPs since 2004 at a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm at a high compression ratio before Red did it. Red didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to Bayer data, the way photo cameras did, instead of RGB data.
-
Honestly the "or more" part is the only bit I really take issue with. Once Elon Musk reaches Mars, he should patent transportation devices that can go 133 million miles or more so he can collect royalties when someone else invents interstellar travel. If he specifically describes "any device that can transport 1 or more persons" that would even cover wormholes that don't technically use rockets! If the patent had listed the specific set of frame rates that they were able to achieve, like 24-48 in 4k and 24-120 in 2k (or whatever the Red One was capable of at the time), at the compression ratios that they could hit, that would seem more like fair play. That leaves opportunity for further technical innovation, Which, by the way, Red might very well have been first at as well.
-
I guess I disagree that anyone should have been allowed to patent 8K compressed raw, or 12k, or 4k 1000 fps--a decade before any of that was possible. I see arguments that the patent is valid because Red were the first to do 4k raw, so to the victor go the spoils... but since we're talking about differences like 23 vs 24, it's a valid point that they patented numbers they could not achieve at the time. And in a broader sense, I don't understand why a patent should be able to prevent other companies from applying known, existing math to data that they generate. Without even inventing an algorithm, Red legally blocked everyone else from using existing compression algorithms.
-
I've been working remote since before the pandemic. The question isn't whether I like hopping on a Zoom call, it's whether I prefer it over commuting 50 minutes each way in rush hour traffic. Depends on who is doing the saving. The huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, to working in an office!
-
The matte is pretty good! Is it this repo you are using? You mentioned RVM in the other topic. https://github.com/PeterL1n/RobustVideoMatting

Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio. If I were you, I would ditch the crane if you're over the weight limit, get some wide camera handles and make slow, deliberate movements, and mount some proper tracking devices on top instead of a phone, if that's what you're using now.

Of course, the downside to this approach compared to the projected background we're talking about in the other topic is that you can merge lighting more easily with a projected background, and with this approach you need to synchronize a LOT more settings between your virtual and real camera. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
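If it is that repo, the torch hub interface makes it pretty painless. This is roughly how I'd run it, going from memory of the README, so double-check the argument names against the repo; the file names are placeholders.

```python
import torch

# Load RobustVideoMatting via torch hub (per the repo's README, as I remember it).
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").cuda()
convert_video = torch.hub.load("PeterL1n/RobustVideoMatting", "converter")

convert_video(
    model,
    input_source="greenscreen_take1.mp4",   # your source clip
    output_type="video",
    output_composition="composite.mp4",     # matted result over the default background
    output_alpha="alpha.mp4",               # the matte itself, handy for compositing later
    seq_chunk=12,                           # frames processed per batch
)
```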
-
I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control.

I recommend still passing raw data from the dev board to the PC, and using desktop software to interpret the raw data. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low resolution 7 bit messages; you can send float or even string values. Plus OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpret everything was a game changer for me compared to sending MIDI directly from the Arduino.
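As a rough sketch of that raw-data-to-OSC bridge (assuming pyserial and python-osc; the serial port, OSC address, and scaling are placeholders, not from any particular setup):

```python
# Minimal "raw data in, OSC out" bridge, as described above.
import serial
from pythonosc.udp_client import SimpleUDPClient

ser = serial.Serial("/dev/ttyUSB0", 115200)   # dev board sending raw encoder ticks
osc = SimpleUDPClient("127.0.0.1", 9000)      # controlled software listening for OSC

position = 0.0
while True:
    line = ser.readline().decode(errors="ignore").strip()
    if not line:
        continue
    try:
        delta = int(line)                 # e.g. "+3" / "-1" ticks from the encoder
    except ValueError:
        continue                          # ignore malformed serial noise
    position += delta * 0.001             # interpretation lives on the PC, easy to tweak
    osc.send_message("/automation/curve1", float(position))   # float, not a 7 bit MIDI value
```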
-
CineD is measuring at different resolutions. Downscaling 4k to 1080p improves SNR by 0.5-1 stop. Probably the log curve on the GH5 doesn't take advantage of full sensor DR.
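The rough math behind that, assuming noise is more or less uncorrelated between neighboring pixels:

```python
# Averaging N uncorrelated pixels cuts noise standard deviation by sqrt(N).
# 4k -> 1080p merges ~4 pixels into 1, so noise drops by ~2x, i.e. up to
# ~1 stop better SNR in the ideal case; real-world scaling gets somewhat less.
import math
pixels_merged = 4
snr_gain_stops = math.log2(math.sqrt(pixels_merged))
print(snr_gain_stops)   # 1.0
```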
-
This. The main concrete benefit of ProRes is that it's a standard. There are a couple of defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers, knows exactly what they are working with. Standards are almost never the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers, so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate.

As has been pointed out, on a technical level 10 bit 422 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom transform matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings they are actually choosing. Compounding that, many manufacturers use more NR and more sharpening for H.264 than for ProRes, not for a technical reason, but based on consumer convention. Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies: speed vs. size.

As far as decode speed, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. And if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with Nvidia hardware acceleration. But this goes back to the first paragraph--when we talk about differences in software implementation, it is better to have the exact details pinned down by one word: "ProRes".
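The decode-speed point is easy to sanity check yourself with ffmpeg's null output. Something along these lines (file names are placeholders, and -hwaccel cuda needs an Nvidia GPU plus a CUDA-enabled ffmpeg build):

```python
# Rough decode-speed comparison: decode each clip to a null sink and time it.
import subprocess, time

def time_decode(args):
    start = time.time()
    subprocess.run(["ffmpeg", "-v", "error", *args, "-f", "null", "-"], check=True)
    return time.time() - start

prores = time_decode(["-i", "clip_prores.mov"])                    # CPU decode
hevc   = time_decode(["-hwaccel", "cuda", "-i", "clip_h265.mp4"])  # NVDEC decode
print(f"ProRes: {prores:.1f}s, H.265 (NVDEC): {hevc:.1f}s")
```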
-
Zoom F3 - Compact 2 channel 32 bit float audio recorder
KnightsFan replied to BTM_Pix's topic in Cameras
Wow great info @BTM_Pix which confirms my suspicions: Zoom's app is the Panasonic-autofocus of their system. I've considered buying a used F2 (not BT), opening it up and soldering the pins from a bluetooth arduino into the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32 bit. Their receiver is nice and bag friendly, and they've licensed dual transmit/rec technology already. AND they have both lav and XLR transmitters. -
Zoom F3 - Compact 2 channel 32 bit float audio recorder
KnightsFan replied to BTM_Pix's topic in Cameras
I was looking at this when it was announced with the exact same thought about using F2's in conjunction. From what I can tell though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right? -
I've seen cameras that scan rooms into 3D for real estate walkthroughs. Product demos, especially real estate, are a great practical use case for VR, since photography distorts space so much, while a full, congruent 3D model doesn't. One surprising aspect of VR content creation that I've run into both at work and in hobbies is that you can have a 3D environment that looks totally normal in screen space, and then as soon as you step into that world in VR you immediately notice mismatches in scale between props. By "surprising," I mean it's surprising how invisible scale mismatches are on a computer screen, even when you move freely in 3D. But yes, renderings for VR make a lot more sense to me than a fixed-location image or video; for that I'd really rather just have a normal 3D screen than have it "glued" to my head.
-
3D porn is last decade, we're way beyond that haha
-
I'm talking from my experience regarding what VR users typically complain about. Some people have higher tolerances, but for the general public with current tech, a discrepancy between your perception of motion and the motion you see is a great way to get a lot of complaints--especially with rotation. Translation is tolerated slightly more.
-
Exactly. The lack of head tracking in static content like this makes it really tedious to watch. I could definitely see the appeal of animated films, where you can kind of move around and see under and around things. But I haven't watched any yet, so that's speculation. Games and simulations are definitely the appeal of VR. Nothing more fun than goofing off with a couple friends in a VR game. The dual 4k eyes are probably fine resolution wise. I think YouTube's VR app is bugged, because there's no difference between 8k and 144p when I switch settings, so I think I'm viewing it in 480p or thereabouts. These static videos are about a 1/10 on the immersion scale. High quality games like Half Life Alyx are more like 7/10.

Well, like Django said, there's only so much you can do with a 180VR shot. Unless you want to make everyone vomit, you can't:
- Change focal length
- Tilt or cant the angle
- Pan or change height mid shot
- Move quickly in any direction
- Have objects close to the lens
- Have anything out of focus, and especially don't rack focus
- Arguably can't have quick cuts, though I think tolerance for that is higher

I'm not saying it's a perfect shot, but within the medium there's not a whole lot of options, and obviously it won't even have the novelty factor if you watch it on a screen. Personally I think static 180VR content like this is a dead end medium creatively.
-
Strangely enough I've never tried YouTube VR. I searched for some videos (couldn't find that one in particular) but was pretty underwhelmed. I don't know if it was YouTube compression or the video quality itself, but it looked like mush. Switching YouTube's resolution in the app didn't visually change anything, so I'm not convinced it was playing back in full res--maybe the app is just broken and the video is fine. The bigger problem with this style of lens is that without head tracking, it's not very enjoyable at all to watch. I prefer the "fake 3D cinema" experience for VR movies where you feel like you're looking at a big 3D screen, but with proper head tracking. Did you watch in VR? It doesn't have the fisheye effect when it matches your own vision. What would you have done differently considering the medium?
-
Photo cameras have hovered between 20-30MP as standard for quite some time. 6k sensors have been the standard sensor resolution for hybrids pretty much since the DSLR revolution started. The 5D mkII would have been 5.6k if it had full pixel readout, which means that unless manufacturers had decided to reduce sensor resolution since then, full-readout raw video would never have been as low as 4k on most hybrids. Even saying that resolution has increased is misleading imo. Outside of Blackmagic cameras and a couple of niche releases from companies that (tellingly) went out of business, there have never been consumer-priced cinema cameras with sub-4k sensors whose successors now use larger ones. Even the A7s3 is still 12MP. So, you can complain that Blackmagic specifically now sells a 4k pocket camera instead of an HD one, or that they picked a 4k sensor instead of a nonexistent 2.8k one. But honestly there are very few product lines that actually follow the trend of increasing video resolution, outside of increasing file resolution to match existing sensor resolution.
-
This. I love VR for entertainment, I have high hopes for it as a productivity tool, and it's a huge benefit in many training applications. But the Metaverse concept--particularly with Facebook behind it--is a terrible idea.
-
The main conceptual thing was that I (stupidly) left the rotation of the XR controller on in the first one, whereas with camera projection the rotation should be ignored. I actually don't think it's that noticeable a change, since I didn't rotate the camera much in the first test. It's only noticeable with canted angles. The other thing I didn't do correctly in the first test was that I parented the camera to a tracked point but forgot to account for the offset, so the camera pivots around the wrong point. That is really the main thing that is noticeably wrong in the first one.

Additionally, I measured out the size of the objects so the mountains are scaled correctly, and I measured the starting point of the camera's sensor as accurately as I could instead of just "well, this looks about right." Similarly, I actually aligned the virtual light source and my light. Nothing fancy, I just eyeballed it, but I didn't even do that for the first version. That's all just a matter of putting in the effort to make sure everything was aligned.

The better tracking is because I used a Quest instead of a Rift headset. The Quest has cameras on the bottom, so it has an easier time tracking an object held at chest height. I have the headset tilted up on my head to record these, so the extra downward FOV helps considerably. Nothing really special there, just better hardware. There were a few blips where it lost tracking--if I were doing this on a shoot I'd use HTC Vive trackers, but they are honestly rather annoying to set up, despite being significantly better for this sort of thing.

Also, this time I put some handles on my camera, and the added stability means that the delay between tracking and rendering is less noticeable. The delay is a result of the XR controller tracking position at only 30 Hz, plus HDMI delay from PC to TV, which is fairly large on my model. I can reduce this delay by using a Vive, which has 250 Hz tracking, and a low latency projector (or screen, but for live action I need more image area). I think in the best case I might actually get sub-frame delay at 24 fps. Some gaming projectors claim <5 ms latency.
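For anyone trying the same thing, the pivot/offset fix boils down to rotating a measured offset by the tracker's orientation before adding it to the tracker's position. A generic sketch, not my actual engine code; the offset numbers are made up:

```python
# The tracked device (XR controller) sits some distance from the camera's
# sensor, so the virtual camera pose must be the tracker pose plus a rotated
# offset, not the raw tracker pose.
import numpy as np
from scipy.spatial.transform import Rotation as R

def camera_pose(tracker_pos, tracker_quat, sensor_offset):
    """tracker_pos: xyz (m), tracker_quat: xyzw, sensor_offset: xyz in tracker space."""
    rot = R.from_quat(tracker_quat)
    cam_pos = np.asarray(tracker_pos) + rot.apply(sensor_offset)
    return cam_pos, rot            # camera shares the tracker's rotation

# e.g. controller velcro'd 4 cm below and 10 cm in front of the sensor plane
pos, rot = camera_pose([0.2, 1.3, 0.5], [0, 0, 0, 1], [0.0, -0.04, 0.10])
```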
-
...and today I cleaned up the effect quite a bit. This is way too much fun! The background is a simple terrain with a solid color shader, lit by a single directional light and a solid color sky. With it this far out of focus, it looks shockingly good considering it's just a low poly mesh colored brown! https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest2.mp4 It's a 65" tv behind this one. You can vaguely see my reflection in the screen haha.
-
Thanks to this topic, I dug a little further into virtual backgrounds myself. I used this library from GitHub, and made this today. It's tracked using an Oculus controller velcro'd to the top of my camera. "Calibration" was a tape measure and some guesstimation, and the model isn't quite the right scale for the figure. No attempt to match focus, color, or lighting. So not terribly accurate, but imo it went shockingly well for a Sunday evening proof of concept. https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest.mp4
-
It's still easy to spot an object sliding around in the frame, especially if it's supposed to be fixed to the ground. And motion capture usually means capturing the animation of a person, i.e. how their arms, legs, and face move, and we're especially adept at noticing when a fellow human moves incorrectly. So for foreground character animation, the accuracy also has to be really high. I believe that even with high end motion capture systems, an animator will clean up the data afterwards for high fidelity workflows like you would see in blockbusters or AAA video games. The repo I posted is a machine learning algorithm that gets full body human animation from a video of a person, as opposed to the traditional method of a person wearing a suit with markers, tracked by an array of cameras and fed through an algorithm someone wrote manually. It has shockingly good results, and that method will only get better with time--as with every other machine learning application! Machine learning is the future of animation imo.

For motion tracking, Vive is exceptional. I've used a lot of commercial VR headsets, and Vive is top of the pack for tracking. Much better than the inside-out version (using built-in cameras instead of external sensors) of even Oculus/Meta's headsets. I don't know what the distance limit is; I've got my base stations about 10 ft apart. For me and my room, the limiting factor for a homemade virtual set is the size of the backdrop, not the tracking area.
-
You can't get accurate position data from an accelerometer; the errors from integrating twice are too high. It might depend on the phone, but orientation data in my experience is highly unreliable. I send orientation from my phone to the PC for real world manipulation of 3D props, and it's nowhere near accurate enough for camera tracking. It's great for placing virtual set dressings, where small errors actually can make it look more natural. There's a free Android app called Sensors Multitool where you can read some of the data. If I set my phone flat on a table, the orientation data "wobbles" by up to 4 degrees.

But in general, smartphones are woefully underutilized in so many ways. With a decent router, you can set up a network and use phones as mini computers to run custom scripts or apps anywhere on set--virtual or real production. Two way radios, reference pictures, IP cams, audio recorders, backup hard drives, note taking, all for a couple bucks second hand on eBay.
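To illustrate the double-integration point above: even a tiny, constant accelerometer bias turns into meters of position drift within seconds, which is why a phone's IMU alone can't do camera tracking. A quick sketch with made-up (and optimistic) numbers:

```python
# Even a small constant bias grows quadratically in position when integrated twice.
import numpy as np

dt = 0.01                         # 100 Hz samples
t = np.arange(0, 10, dt)          # 10 seconds
bias = 0.02                       # m/s^2, a very optimistic accelerometer bias
accel = np.zeros_like(t) + bias   # phone sitting perfectly still

vel = np.cumsum(accel) * dt
pos = np.cumsum(vel) * dt
print(f"apparent drift after 10 s: {pos[-1]:.2f} m")   # ~1 m from a 0.02 m/s^2 bias
```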
-
Sorry I made a typo, I was talking about motion capture in my response to Jay, not motion tracking. The repo I linked to can give pretty good results for full body motion capture. Resolution is still important there, but not in the sense of background objects moving. As with any motion capture system, there will be some level of manual cleanup required. For real time motion tracking, the solution is typically a dedicated multicam system like Vive trackers, or a depth sensing camera. Not a high resolution camera for VFX/post style tracking markers.