KnightsFan

Members
  • Posts: 1,144
  • Joined
  • Last visited

About KnightsFan

Contact Methods

  • Website URL: https://gobuildstuff.com/

KnightsFan's Achievements

Long-time member (5/5)

720 Reputation

  1. He's being obtusely literal, in my opinion. Obviously you can't change the camera's analog gain after the fact. But most people don't judge an image or a workflow by counting which photons and voltages flowed through their equipment; they care about whether the end result matches their expectation. So when people say you can change WB in post, it means the NLE is performing a mathematically correct operation to emulate a different white balance, based on accurate metadata. Not too long ago, there was no such thing as a color managed workflow in consumer NLEs, which meant the WB sliders and gain adjustments--on top of not changing the camera circuitry's native analog WB--ALSO produced mathematically incorrect results compared to setting that WB in camera. So when we got accurate WB and ISO adjustments in raw processors, it was truly revolutionary. Nowadays, as long as it's color managed and the files have sufficient data, you can get the same result even without raw. Neither one is technically changing the camera's WB, but they produce the correct results, and that's all that matters.
     I'll also point out that I suspect most (all?) sensors don't actually change their analog gain levels based on the WB setting; I bet it's almost always a digital adjustment. In that case, Alister would have to argue that changing WB on the camera doesn't actually change WB either. Maybe he wants to argue that shooting at anything other than identical gain on each pixel isn't true white balancing, but I'm not sure that's a useful description of the process. That's why I say it's obtusely literal. Everything I said also applies to ISO on cameras that have a fixed amount of gain.
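As a rough sketch of what that "mathematically correct operation" amounts to (not any NLE's actual code): in a color-managed pipeline, a WB change is essentially per-channel gain applied to linear image data, so the old interpretation can be divided out and a new one applied. All of the gain values below are made-up placeholders.

```python
import numpy as np

# Toy illustration: emulating a white balance change as per-channel gains
# applied to *linear* RGB data. Gain values are arbitrary placeholders,
# not real camera coefficients.

def apply_wb(linear_rgb: np.ndarray, gains: tuple[float, float, float]) -> np.ndarray:
    """Multiply each channel by its gain in linear light."""
    return linear_rgb * np.asarray(gains, dtype=linear_rgb.dtype)

# The camera recorded with one set of gains; we "reinterpret" the frame as if
# it had been shot with a different set by dividing the old gains out and
# applying the new ones.
shot_gains   = (2.0, 1.0, 1.6)   # hypothetical as-shot R/G/B multipliers
target_gains = (1.8, 1.0, 1.9)   # hypothetical gains for the new WB

frame = np.random.rand(4, 4, 3)                 # stand-in for linear image data
reinterpreted = apply_wb(frame / shot_gains, target_gains)
```

The key point is that the gains land in linear light; applied to gamma-encoded footage without color management, the same sliders don't produce the equivalent result, which is the old non-color-managed problem described above.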
  2. This. Do your own tests and trust your judgement, but here's my opinion. If all you care about is how it looks on YouTube, 50 is perfectly fine. No one can tell the difference between a 50 and a 100 Mbps source file on a 7" phone screen, or on a 65" 4K screen 12' away with window glare across the front. I care more about how my content looks in Resolve than on YouTube, and even then I use 100 Mbps H.265 (IPB). When I had an XT3, I shot a few full projects at 200 Mbps and didn't see any improvement. I've done tests with my Z Cam and can't see benefits above 100. I'd be happy with 50 in most scenarios. It might be confirmation bias, but I think I have been in scenarios where 100 looked better than 50, in particular when handheld. Keep in mind also that on most cameras, especially consumer cameras, the nominal rate is the upper limit (it would be a BIG problem if the encoder went OVER its nominal rate, because the SD card requirements would be a lie). So while I shoot at 100, the file size usually corresponds to something closer to 70, so it might not even be as big a file size increase as you think. But for me, 100 Mbps is the sweet spot when shooting H.265 IPB.
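For anyone who wants to sanity-check that, the back-of-envelope arithmetic from average bitrate to file size is simple; the 70 Mbps figure below is just the example average mentioned above, not a spec.

```python
# Back-of-envelope file size from an average bitrate (numbers are examples).
def file_size_gb(avg_mbps: float, minutes: float) -> float:
    bits = avg_mbps * 1e6 * minutes * 60     # total bits recorded
    return bits / 8 / 1e9                    # -> gigabytes (decimal GB)

print(file_size_gb(100, 10))  # nominal 100 Mbps for 10 min -> ~7.5 GB
print(file_size_gb(70, 10))   # ~70 Mbps real-world average -> ~5.25 GB
```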
  3. Red's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it becomes visually lossless (as with every algorithm). JPEG 2000 has been used for DCPs since 2004 at a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm, at a high compression ratio, before Red did it. Red didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to Bayer data, the way photo cameras did, instead of RGB data.
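To put numbers on what a ratio like 12:1 means, here's rough arithmetic comparing an uncompressed data rate to its compressed target; the 4K/12-bit/24p figures are illustrative assumptions, not official Red or DCI specs.

```python
# Rough compression-ratio arithmetic (illustrative numbers, not official specs).
def uncompressed_mbps(width, height, bits_per_sample, samples_per_pixel, fps):
    return width * height * bits_per_sample * samples_per_pixel * fps / 1e6

# Bayer data: one sample per photosite, e.g. a hypothetical 4K 12-bit 24p stream.
bayer = uncompressed_mbps(4096, 2160, 12, 1, 24)   # ~2548 Mbps uncompressed
print(bayer, bayer / 12)                           # 12:1 -> ~212 Mbps

# Fully debayered RGB of the same frame carries 3x the samples before compression.
rgb = uncompressed_mbps(4096, 2160, 12, 3, 24)
print(rgb, rgb / 12)
```

The Bayer-vs-RGB comparison is the point in the post: a Bayer frame carries a third of the samples of fully debayered RGB, so compressing before demosaicing starts from a much smaller pile of data.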
  4. Honestly, the "or more" part is the only bit I really take issue with. Once Elon Musk reaches Mars, he should patent transportation devices that can go 133 million miles or more, so he can collect royalties when someone else invents interstellar travel. If he specifically describes "any device that can transport 1 or more persons," that would even cover wormholes that don't technically use rockets! If the patent had listed the specific set of frame rates they were able to achieve, like 24-48 in 4K and 24-120 in 2K (or whatever the Red One was capable of at the time), at the compression ratios they could hit, that would seem more like fair play. That leaves opportunity for further technical innovation, which, by the way, Red might very well have been first at as well.
  5. I guess I disagree that anyone should have been allowed to patent 8K compressed Raw, or 12K, or 4K 1000 fps--a decade before any of that was possible. I see arguments that the patent is valid because Red were the first to do 4K raw, so to the victor go the spoils... but since we're talking about differences like 23 vs 24, it's a valid point that they patented numbers they could not achieve at the time. And in a broader sense, I don't understand why a patent should be able to prevent other companies from applying known, existing math to data that they generate. Without even inventing an algorithm, Red legally blocked everyone else from applying any compression algorithm.
  6. I've been working remote since pre-pandemic. The question isn't whether I like hopping on a Zoom call, it's whether I prefer it over commuting 50 minutes each way in rush hour traffic. Depends on who is doing the saving, too: the huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, to working in an office!
  7. The matte is pretty good! Is this the repo you are using? You mentioned RVM in the other topic. https://github.com/PeterL1n/RobustVideoMatting Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio. If I were you, I would ditch the crane if you're over the weight limit: just get some wide camera handles, make slow deliberate movements, and mount some proper tracking devices on top instead of a phone, if that's what you're using now. Of course, the downside to this approach compared to the projected background we're talking about in the other topic is that lighting merges more easily with a projected background, and with this approach you need to synchronize a LOT more settings between your virtual and real camera. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
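If it is that RVM repo, the basic recurrent inference loop plus a straight alpha composite looks roughly like the sketch below. This is written from memory of the repo's README, so treat the hub entry point and call signature as assumptions to verify there; the frame and background tensors are dummy placeholders.

```python
import torch

# Rough sketch of RobustVideoMatting inference + compositing (verify the exact
# API against the repo's README; this is from memory, not copied from it).
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").eval()

# Placeholders: swap in your real footage and rendered background,
# as [1, 3, H, W] tensors with values in 0..1.
frames = [torch.rand(1, 3, 288, 512) for _ in range(3)]
background = torch.rand(1, 3, 288, 512)

rec = [None] * 4                     # recurrent state carried across frames
with torch.no_grad():
    for src in frames:
        fgr, pha, *rec = model(src, *rec, downsample_ratio=0.25)
        comp = fgr * pha + background * (1 - pha)   # straight alpha composite
```

The per-frame recurrent state is part of why feeding it a continuous shot, rather than isolated frames, matters for temporal stability of the matte.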
  8. I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control. I recommend still passing raw data from the dev board to the PC and using desktop software to interpret the raw data. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low resolution 7 bit messages; you can send float or even string values. Plus, OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpret everything was a game changer for me compared to sending MIDI directly from the Arduino.
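Not the poster's actual app, but a minimal sketch of that serial-to-OSC bridge idea using pyserial and python-osc; the serial port, baud rate, OSC address scheme, and the "name delta" line format coming from the board are all assumptions for illustration.

```python
import serial                                     # pyserial
from pythonosc.udp_client import SimpleUDPClient  # python-osc

# Minimal serial -> OSC bridge sketch. Assumes the dev board prints lines like
# "enc1 3" meaning encoder 1 moved +3 ticks since the last report.
PORT, BAUD = "/dev/ttyUSB0", 115200               # adjust for your board/OS
osc = SimpleUDPClient("127.0.0.1", 9000)          # host/port the target app listens on

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        name, delta = line.split()
        # Interpret raw ticks on the PC, then send a float (not a 7-bit int) over OSC.
        osc.send_message(f"/controls/{name}", float(delta))
```

Doing the interpretation on the PC side means you can remap an encoder to a new OSC address or rescale its sensitivity without reflashing the board, which is the fast-iteration point above.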
  9. CineD is measuring at different resolutions. Downscaling 4K to 1080p improves SNR by roughly 0.5-1 stop. Probably the log curve on the GH5 doesn't take advantage of the sensor's full DR.
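For what it's worth, the idealized math behind that figure: a 4K-to-1080p downscale averages roughly 2x2 pixels, and averaging N samples of uncorrelated noise improves SNR by sqrt(N). The snippet below is just that arithmetic, assuming a clean average and uncorrelated noise.

```python
import math

# Idealized gain from averaging n uncorrelated samples per output pixel.
n = 4                                       # 4K -> 1080p merges ~2x2 pixels
snr_gain = math.sqrt(n)                     # noise drops by sqrt(n)
print(20 * math.log10(snr_gain), "dB")      # ~6.0 dB
print(math.log2(snr_gain), "stop(s)")       # ~1 stop in the ideal case
```

Real scalers and spatially correlated (already-processed) noise give less than the ideal, which is presumably why measured results land in the 0.5-1 stop range rather than a full stop.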
  10. This. The main concrete benefit of ProRes is that it's a standard. There are a few defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers, knows exactly what they are working with. Standards are almost never the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers, so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate.

      As has been pointed out, on a technical level 10 bit 4:2:2 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom transform matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings they are actually choosing. Compounding that, many manufacturers use heavier NR and more sharpening for H.264 than for ProRes, not for a technical reason, but based on consumer convention.

      Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies: speed vs. size.

      As far as decode speed goes, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. But if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with Nvidia hardware acceleration. This goes back to the first paragraph--when we talk about differences in software implementation, it is better to just know the exact details from one word: "ProRes".
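If anyone wants to sanity-check the decode-speed point on their own machine, ffmpeg's -benchmark flag makes it easy to time a decode-only pass; the wrapper and file names below are just placeholders, and the hardware run needs an ffmpeg build with NVDEC/CUDA support.

```python
import subprocess

# Time a decode-only pass: decode as fast as possible and throw the frames away.
# ffmpeg prints a "bench: ... rtime=..." line at the end when -benchmark is set.
def decode_benchmark(path: str, use_cuda: bool) -> None:
    cmd = ["ffmpeg", "-hide_banner", "-benchmark"]
    if use_cuda:
        cmd += ["-hwaccel", "cuda"]          # requires an NVDEC-enabled build
    cmd += ["-i", path, "-f", "null", "-"]
    subprocess.run(cmd, check=True)

decode_benchmark("clip_h265.mov", use_cuda=True)     # hypothetical filenames
decode_benchmark("clip_prores.mov", use_cuda=False)  # ProRes decodes on the CPU
```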
  11. Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic-autofocus of their system. I've considered buying a used F2 (not the BT), opening it up, and soldering the pins from a Bluetooth Arduino onto the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32-bit recording. Their receiver is nice and bag friendly, and they've already licensed dual transmit/record technology. AND they have both lav and XLR transmitters.
  12. I was looking at this when it was announced, with the exact same thought about using F2s in conjunction. From what I can tell, though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?
  13. I've seen cameras that scan rooms into 3D for real estate walkthroughs. Product demos, especially real estate, are a great practical use case for VR, since photography distorts space so much more than a full, congruent 3D model does. One surprising aspect of VR content creation that I've run into both at work and in hobbies is that you can have a 3D environment that looks totally normal in screen space, and then as soon as you step into that world in VR you immediately notice mismatches in scale between props. By "surprising," I mean it's surprising how invisible scale mismatches are on a computer screen, even when you move freely in 3D. But yes, renderings for VR make a lot more sense to me than a fixed-location image or video; for that, I'd really rather just have a normal 3D screen than have it "glued" to my head.
  14. 3D porn is last decade, we're way beyond that haha
  15. I'm talking from my experience of what VR users typically complain about. Some people have higher tolerances, but for the general public with current tech, a discrepancy between your perception of motion and the visual motion you see is a great way to get a lot of complaints--especially with rotation. Translation is tolerated slightly more.