About KnightsFan

  1. While neither are tools I would likely use personally, I had to stop by to say Blackmagic knocked it out of the park with their FF camera and app. I'm extremely happy to see more L mount cameras in general. Competition is great, but so is standardization of interchangeable parts. I love to see multiple companies adhering to a standard connection between interlocking pieces while each innovates on its core functionality. I think Blackmagic's strength (and their target audience for this form factor) is simplicity. Back when I worked on a lot of student films, Blackmagic cameras were always the #1 choice because of their ease of use. A lot of those students were more artistic than technical. Having a big, built-in screen, a simple UI, and an end-to-end workflow with DaVinci Resolve was really great for them. The iPhone app is a similar advantage. I wouldn't use an iPhone for any serious project personally, but lots of people already do. And all of those people now have a huge reason to use Resolve, and many will buy the full version. Having an end-to-end workflow for phones is really innovative, particularly the direct-to-cloud part. I'm always happy to see smaller companies doing something fresh and original, and I really wish Blackmagic the best! I'd love a box-style camera though... but I guess that's what Z Cam is for
  2. The problem with any effort to stop technology is that it won't work in the long run. Right now, only a handful of companies have the computing power to run an LLM like ChatGPT, so it's somewhat feasible to control. But once the technology can run on your home PC, there is no amount of legislation or unionization that can control its use. And that statement isn't a judgment about good or bad. The reality is simply that we have very limited ability to control the distribution and use of software. Switching to opinion mode, I believe the technology is ultimately a good thing. I think limiting the use of technology in order to preserve jobs is bad in the long run. I believe it's better for humans if cars drive themselves and we don't need to employ human truck drivers. It's better for humans to give everyone the ability to make entire movies simply by describing them to a computer. The big problem is that our economic model won't support it. And I'm not talking about studios and unions--the fundamental problem is that digital goods can be infinitely duplicated at no cost, while every economy is based on exchanging finite goods. The same applies to AI, with the new meta-layer being that the actual, duplicated product of AI isn't a digital good, it's a skillset for producing that digital good. I don't have all the right words to describe exactly what I'm trying to say. The example I give is that right now, self driving cars are not as good as people. But the moment any car can drive itself better than a human, every car will be able to. We have to keep training new truck drivers to do the same task; that is not true of a duplicatable AI skillset. So to bring this back to my original point: we can try to prevent self driving cars in an effort to protect truck drivers, but someday, someone will still achieve it, and at that moment the software will exist, and unlike a physical product, it can be copied all over the world simultaneously.
So instead of preventing technology or its use, we need to adapt our economic model to better serve humans in lieu of our new abilities.
  3. Nice article! My perspective is as a software engineer, at a company that is making a huge effort to leverage AI faster and better than the rest of the industry. I am generally less optimistic than you that AI is "just a tool" and will not result in large swaths of the creative industry losing money. The first point I always make is that it's not about whether AI will replace all jobs, it's about the net gain or loss. As with any technology, AI tools both create and destroy jobs. The question for the economy is how many. Is there a net loss or a net gain? And of course we're not only concerned with the number of jobs, but also how much money each job is worth. Across a given economy--for example, the US economy--will AI generated art cause clients/studios/customers to put more, or less, net money into photography? My feeling is less. For example, my company ran an ad campaign using AI generated photos. It was done in collaboration with both AI specialists to write prompts and artists to conceptualize and review. So while we still used human artists, it would have taken many more people working many more hours to achieve the same thing without AI. The net result was that we spent less money on creative for that particular campaign, meaning less money in the photography industry. It's difficult for me to imagine that AI will result in more money being spent on artistic fields like photography. I'm not talking about money that creatives spend on gear, which is a flow of money from creatives out; I'm talking about the inflow from non-creatives towards creatives. The other point I'll make is that I don't think anyone should worry about GPT-4. It's very competent at writing code, but as a software engineer, I am confident that the current generation of AI tools cannot do my job. However, I am worried about what GPT-5, or GPT-10, or GPT-20 will do. I see a lot of articles--not necessarily Andrew's--that confidently say AI won't replace X because it's not good enough.
It's like looking at a baby and saying, "That child can't even talk! It will never replace me as a news anchor." We must assume that AI will continue to improve exponentially at every task for the foreseeable future. In this sense, "improve" doesn't necessarily mean "give the scientifically accurate answer" either. Machine learning research goes in parallel with psychology research. A lot of machine learning breakthroughs actually provide ideas and context for studies on human learning, and vice versa. We will be able to both understand and model human behavior better in future generations. My third point is that I disagree that people are fundamentally moved by other people's creations. You write that they are; I think that only a very small fraction of moviegoers care at all about who made the content. This sounds like an argument made in favor of practical effects over CGI, and we all know which side won that. People like you and me might love the practical effects in Oppenheimer simply for being practical, but the big CGI franchises crank out multiple films each year worth billions of dollars. If your argument is that the people driving the entertainment market will pay more for carefully crafted art than for generic, by-the-numbers stories and effects, I couldn't disagree more. Groot, Rocket Raccoon, and Shrek sell films and merchandise based on face and name recognition. What percent of fans do you think know who voiced them? 50%, i.e. 100 million+ people? How many can name a single animator for those characters? What about Master Chief from Halo (originally a one dimensional character, literally from Microsoft)--how many people can tell you who wrote, voiced, or animated any of the Bungie Halo games? In fact, most Halo fans feel more connected to the original Bungie character than to the one from the Halo TV series, despite the latter having a much more prominent actor portrayal. My final point is not specifically about AI.
I live in an area of the US where, decades ago, everyone worked in good-paying textile mill jobs. Then the US outsourced textile production overseas and everyone lost their jobs. The US and my state economies are larger than ever. Jobs were created in other sectors, and we have a booming tech sector--but very few laid-off, middle-aged textile workers retrained and started a new successful career. It's plausible that a lot of new, unknown jobs will spring up thanks to AI, but it's also plausible that "photography" shrinks the same way textiles did.
  4. He's being obtusely literal, in my opinion. So obviously you can't change the camera's analog gain after the fact. But most people don't judge an image or workflow by counting which photons and voltages flowed through their equipment; they care about whether the end result is accurate to their expectation. So when people say you can change WB in post, it means that the NLE is performing a mathematically correct operation to emulate a different white balance, based on accurate metadata. Not too long ago, there was no such thing as a color managed workflow in consumer NLEs, which meant that the WB sliders and gain adjustments--on top of not changing the camera circuitry's native WB--ALSO produced mathematically incorrect results compared to an in-camera adjustment. So when we got accurate WB and ISO adjustments in raw processors, it was truly revolutionary. Nowadays, as long as it's color managed and the files have sufficient data, you can get the same result even without raw. Neither one is technically changing the camera's WB, but they produce the correct results, and that's all that matters. I'll also point out that I suspect most (all?) sensors don't actually change their analog gain levels based on the WB setting. I bet it's almost always a digital adjustment. In that case, Alister would have to also argue that changing WB on the camera doesn't actually change WB. Maybe he wants to argue that shooting at anything other than identical gain on each pixel isn't true white balancing, but I am not sure that is a useful description of the process. That is why I say it's obtusely literal. Everything I said also applies to ISO on cameras that have a fixed amount of gain.
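To illustrate what a "mathematically correct operation" means here, a toy sketch of my own (not any NLE's actual pipeline): on linear sensor data, white balance amounts to a per-channel gain, which is why a color managed app with accurate metadata can emulate a different WB after the fact.

```python
# Toy sketch (my own illustration, not any NLE's actual code):
# in linear light, white balance is just a per-channel gain relative to green.

def apply_wb(rgb, r_gain, b_gain):
    """Rescale R and B relative to G, the usual linear-domain WB model."""
    r, g, b = rgb
    return (r * r_gain, g, b * b_gain)

# A gray card under warm light might come off the sensor R-heavy:
warm_gray = (0.60, 0.50, 0.40)
balanced = apply_wb(warm_gray, 0.50 / 0.60, 0.50 / 0.40)
# balanced -> all three channels equal, i.e. neutral gray
```

Real pipelines do this on camera RGB before the color matrix, with gains derived from the illuminant, but the operation itself really is this simple in the linear domain.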
  5. This. Do your own tests and trust your judgement, but here's my opinion. If all you care about is how it looks on YouTube, 50 is perfectly fine. No one can tell the difference between a 50 and 100 Mbps source file on a 7" phone screen, or on a 65" 4k screen 12' away with window glare across the front. I care more about how my content looks in Resolve than on YouTube. And even then, I use 100 Mbps H.265 (IPB). When I had an X-T3, I shot a few full projects at 200 Mbps and didn't see any improvement. I've done tests with my Z Cam and can't see benefits above 100. I'd be happy with 50 in most scenarios. It might be confirmation bias, but I think I have been in scenarios where 100 looked better than 50, in particular when handheld. Keep in mind also that on most cameras, especially consumer cameras, the nominal rate is the upper limit (it would be a BIG problem if the encoder went OVER its nominal rate, because the SD card requirements would be a lie). So while I shoot at 100, the file's actual rate is usually closer to 70, so it might not even be as big a file size increase as you think. But for me, 100 Mbps is the sweet spot when shooting H.265 IPB.
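For anyone curious, the file-size arithmetic behind those numbers is plain unit conversion, nothing camera-specific:

```python
def mb_per_minute(mbps):
    """Convert a video bitrate in megabits/second to megabytes per minute."""
    return mbps / 8 * 60  # 8 bits per byte, 60 seconds per minute

nominal = mb_per_minute(100)  # 750.0 MB/min at the 100 Mbps setting
typical = mb_per_minute(70)   # 525.0 MB/min if the encoder averages ~70 Mbps
```

So the real-world gap between a "50" and a "100" setting can be a lot smaller than the labels suggest.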
  6. Red's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it's visually lossless (as does every algorithm). JPEG 2000 has been used for DCPs since 2004 at a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm at a high compression ratio before Red did it. Red didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to Bayer data, the way photo cameras did, instead of RGB data.
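Back-of-envelope numbers behind that ~12:1 figure (my own arithmetic, assuming 2K three-channel 12-bit frames at 24 fps; actual DCP rates vary by title):

```python
# Uncompressed 2K DCP-style video: 2048x1080, three 12-bit channels, 24 fps,
# and what a ~12:1 JPEG 2000 compression ratio leaves you with.
width, height, fps = 2048, 1080, 24
bits_per_pixel = 3 * 12

uncompressed_mbps = width * height * bits_per_pixel * fps / 1e6  # ~1911 Mbit/s
compressed_mbps = uncompressed_mbps / 12                         # ~159 Mbit/s
# ...which sits comfortably under the 250 Mbit/s ceiling DCPs allow.
```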
  7. Honestly, the "or more" part is the only bit I really take issue with. Once Elon Musk reaches Mars, he should patent transportation devices that can go 133 million miles or more, so he can collect royalties when someone else invents interstellar travel. If he specifically describes "any device that can transport 1 or more persons," that would even cover wormholes that don't technically use rockets! If the patent had listed the specific set of frame rates that they were able to achieve, like 24-48 in 4k and 24-120 in 2k (or whatever the Red One was capable of at the time), at the compression ratios that they could hit, that would seem more like fair play. That leaves opportunity for further technical innovation, which, by the way, Red might very well have achieved first as well.
  8. I guess I disagree that anyone should have been allowed to patent 8K compressed Raw, or 12k, or 4k 1000 fps--a decade before any of that was possible. I see arguments that the patent is valid because Red were the first to do 4k raw, so to the victor go the spoils... but since we're talking about differences like 23 vs 24, it's a valid point that they patented numbers that they could not achieve at the time. And in a broader sense, I don't understand why a patent should be able to prevent other companies from applying known, existing math to data that they generate. Without even inventing an algorithm, Red legally blocked the use of all existing compression algorithms.
  9. I've been working remote since pre-pandemic. The question isn't whether I like hopping on a Zoom call, it's whether I prefer it over commuting 50 minutes each way in rush hour traffic. Depends on who is doing the saving. The huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, than in an office!
  10. The matte is pretty good! Is it this repo you are using? You mentioned RVM in the other topic. https://github.com/PeterL1n/RobustVideoMatting Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio. If I were you, I would ditch the crane if you're over the weight limit: just get some wide camera handles and make slow, deliberate movements, and mount some proper tracking devices on top instead of a phone, if that's what you're using now. Of course, the downside to this approach compared to the projected background we're talking about in the other topic is that lighting is easier to merge with a projected background, and also that with this approach you need to synchronize a LOT more settings between your virtual and real cameras. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
  11. I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control. I recommend still passing raw data from the dev board to the PC, and using desktop software to interpret the raw data. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low resolution 7 bit values; you can send floats or even strings. Plus OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpreting everything was a game changer for me compared to sending MIDI directly from the Arduino.
  12. CineD is measuring at different resolutions. Downscaling 4k to 1080p improves SNR by 0.5-1 stop. Probably the log curve on the GH5 doesn't take advantage of the full sensor DR.
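A toy demonstration of where that downscaling gain comes from (synthetic noise, not real camera data): averaging each 2x2 block of independent noisy samples cuts the noise standard deviation by about sqrt(4) = 2, i.e. roughly one stop of SNR.

```python
import random
import statistics

random.seed(1)  # fixed seed so the numbers are repeatable
noise = [random.gauss(0, 1) for _ in range(40000)]

# Simulate a 2x downscale: average every group of 4 independent samples,
# as if collapsing each 2x2 pixel block of a 4k frame into one 1080p pixel.
binned = [sum(noise[i:i + 4]) / 4 for i in range(0, len(noise), 4)]

ratio = statistics.stdev(noise) / statistics.stdev(binned)
# ratio lands close to 2.0, i.e. ~1 stop less noise after the downscale
```

Real debayered footage has correlated neighboring pixels, which is why measured gains come in below the ideal 1 stop.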
  13. This. The main concrete benefit of ProRes is that it's standard. There are a couple of defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers knows exactly what they are working with. Standards are almost never the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers, so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate. As has been pointed out, on a technical level 10 bit 422 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom transform matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings they are actually choosing. Compounding that, many manufacturers use higher NR and more sharpening for H.264 than for ProRes, not for a technical reason, but based on consumer convention. Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies: speed vs. size. As far as decode speed, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. And if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with NVIDIA hardware acceleration. But this goes back to the first paragraph--when so much comes down to differences in software implementation, it is better to just know the exact details from one word: "ProRes"
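For anyone who wants to run that All-I comparison themselves, here's a rough command sketch for pushing libx264 toward ProRes-like territory. These are flags I believe are current, but check your build: 10-bit output needs an ffmpeg compiled with 10-bit-capable libx264.

```shell
# All-I (-g 1 forces every frame to be a keyframe), 10 bit 4:2:2 H.264.
# The -crf value is illustrative, not a recommendation.
ffmpeg -i input.mov \
  -c:v libx264 -profile:v high422 -pix_fmt yuv422p10le \
  -g 1 -crf 12 \
  output.mov
```

Dropping `-g 1` gets you back to long-GOP IPB, which is where the "philosophies" comparison above kicks in.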
  14. Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic-autofocus of their system. I've considered buying a used F2 (not BT), opening it up and soldering the pins from a Bluetooth Arduino into the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32 bit. Their receiver is nice and bag-friendly, and they've licensed dual transmit/rec technology already. AND they have both lav and XLR transmitters.
  15. I was looking at this when it was announced with the exact same thought about using F2's in conjunction. From what I can tell though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?