Everything posted by KnightsFan

  1. Last time you checked, AI is in its infancy. ChatGPT, arguably our most sophisticated model, just turned 1 year old. However, what you said is already incorrect. Learning models invented their own languages long ago: https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/. It is not what we would call artistic, but those were very early models with extremely limited datasets compared to ours. My argument is that this is the same for humans. We build up prompts over the course of our lifetime. Billions of them. Every time someone told you, as a child, that you can't do something... that's a prompt that you remembered, and later tried anyway. You telling me that AI can't create is a prompt that I am using to write this post. Every original idea you have is based entirely on the experiences you have had in your life. Is that a statement you disagree with? If so, can you explain where else your ideas come from? And if not, can you explain how your experiences lead you to more original ideas than a machine learning model's? We do not have ideas in a vacuum. And obviously our ideas evolve over time as something is incrementally added. But you can't go back 200,000 years to the first humans and expect them to invent something analogous to Haikus, either.
  2. The same applies to humans. We have an information set, and can only create thoughts and inferences from that information. What information do humans have access to that AI does not, which allows them to create where nothing existed before? And I'm not talking about AI today in 2023; I mean the ones we'll have 50 years from now. Perhaps you could try to define what a "new work" is vs. a "mash up" in a formal and abstract sense. We're looking for a definition that shows what humans can do that machine learning can never do.
  3. Unrealistic art is just as technically challenging as realistic art. A lot of research, experiments, and programming went into the shaders and tools used to make Up, which is highly impactful emotionally. The technique of creating emotion is itself technical. As a casual viewer, it's completely fine to have the opinion that the more frames the better. It's your opinion. However, for someone who is creating movies--whether personal art projects, or as part of hundreds of professionals on a blockbuster--one tool cannot be better than another. Different frame rates create different emotions, the same way different focal lengths do. Saying 60p is better is like saying horror movies are better than romances. On the topic of horror, however, it's well known that one of the critical elements of horror is not showing the monster. Obscuring the monster through camera angles and shadows is a key part of scaring people. That's not an artistic note. It's simply scarier. In fact, most effective storytelling is saying just enough that the primary story takes place in the audience's head. If you disagree with that, then I'm not even sure fiction is something you enjoy--which is perfectly okay, but also renders everything else moot! When people say 24p has a dreamy effect, another way to say it is that giving the audience less information allows them to create more in their head. Something else I will add to the discussion about 24p vs 60p is that I have never seen a really good movie shot in 60p. By that I mean, I have never seen a movie that has top-class story, lighting, direction, editing, and acting that is also 60p. It's hard to compare The Lord of the Rings with The Hobbit on the merit of frame rate alone, because I bet that, all else being equal, a 48p Lord of the Rings would still be more enjoyable than a 24p Hobbit.
  4. Do you believe that humans have a non-physical and/or magical ability to innovate using information beyond what we have learned? Human thoughts are also mashups of our experiences. We start with nothing and gradually take in information over our lifetime.
  5. In my opinion as a software engineer at a company extensively using AI, it is a mistake to believe that there is anything humans can do that AI will never be able to do. ChatGPT was launched barely over a single year ago. Midjourney was launched less than 18 months ago. Imagine where they will be next year. Or, more fairly, imagine where these models will be when they are the age of a working professional--then remember that the models will keep learning indefinitely, not tied to a human lifespan. Just like machine learning models, all people--including highly skilled professionals--start with 0 knowledge, and their opinions and artistic vision/instincts are formed from sensory inputs. The building blocks of our brains are not complex, though humans have more training data and a lot more neurons than today's ML models.
  6. And do you prefer the plot, dialog, and action to exclusively represent real life accurately? Or do you ever like characters that are braver, stronger, or more villainous than real life? My whole point in my first post was that deviations from reality are used in every aspect of film and storytelling in general. But Kye's post wasn't about the color grade only; it was about the entire visual design. We don't light real life the same way we light movies. What is the difference, in your opinion, between unrealistic lighting design and unrealistic motion design? There are no wrong opinions, of course; I'm just asking questions to explain my point of view.
  7. Every movie that I really enjoy and watch over and over has elements that are purposely unrealistic, whether in the image, the staging, or the characterization. I'm not talking about technical story unrealism, like elves or warp speeds. ^ Here, it's not the image quality, considering the time it was shot. However, the staging of the actors is unrealistic. The way they pose, the dialog--no one actually poses or speaks that way. One of my top 5 films, and perhaps my favorite opening scene ever. ^ Have you ever seen a toy shop organized like that, with those colors and lights? I picked Hugo because, seeing it in 3D, I was blown away by how they changed the interpupillary distance for different scenes to get different moods, using unrealism as part of the craft. And it's an easy segue into the highly creative movie it revolves around. Even in 1902, they could have made the moon more realistic! Specifically on the topic of frame rate, Spiderverse did a fantastic job using different frame rates to convey different moods. Some of it is explained here: https://www.youtube.com/watch?v=JN5sqSEXxm4 ^ This is another favorite movie (and it's recent--they could have shot digital or HFR if they'd wanted). Everything works because of unrealism, from the costumes, to the sets, to the dialog, sound, and delivery. I would argue that purposely making films look or act realistic results in boring content. I don't necessarily disagree. Good movies transport me to that world with perfect clarity, but the world may not be realistic. When I watch The Third Man, I'm there, in black and white, with the grain, and the film noir corny dialog, and Orson Welles' overacting. That's the world I'm in.
  8. Panasonic S5
     • Is mirrorless full frame, can use my existing lenses
     • Has good video and photo quality (10 bit, good DR)
     • Lightweight enough to bring backpacking
     • Comfortable to carry/hold for viewfinder-based photography
     • Has pixel shift high res for specific VFX uses
     I'm not sure the S5 II has enough benefits to justify the price increase compared to a second-hand S5. The part I'd miss about the S5 II is the solid neck strap rings 😂 In fact I'm planning on getting an S5 to replace the A7RII as my photo camera, and MAYBE for some video... I'm open to other suggestions. The big pieces I would upgrade if I could are rolling shutter and battery life. I guess that's assuming it's truly 1 camera, for photos and videos. If video only, a UMP 4.6K or 12K looks pretty nice: a good balance between quality and ease of use, realistically priced for me (unlike an Alexa 35), and not too annoying for solo operation.
  9. While neither is a tool I would likely use personally, I had to stop by to say Blackmagic knocked it out of the park with their FF camera and app. I'm extremely happy to see more L mount cameras in general. Competition is great, but standardization of interchangeable parts is also valuable. I love seeing multiple companies adhere to a standard connection between interlocking pieces while each innovates on its core functionality. I think Blackmagic's strength (and their target audience for this form factor) is simplicity. Back when I worked on a lot of student films, Blackmagic cameras were always the #1 choice because of their ease of use. A lot of those students were more artistic than technical. Having a big built-in screen, a simple UI, and an end-to-end workflow with DaVinci Resolve was really great for them. The iPhone app is a similar advantage. I wouldn't use an iPhone for any serious project personally, but lots of people already do. And all of those people now have a huge reason to use Resolve, and many will buy the full version. Having an end-to-end workflow for phones is really innovative, particularly the direct-to-cloud part. I'm always happy to see smaller companies doing something fresh and original, and I really wish Blackmagic the best! I'd love a box-style camera though... but I guess that's what Z Cam is for.
  10. The problem with any effort to stop technology is that it won't work in the long run. Right now, only a handful of companies have the computing power to run an LLM like ChatGPT, so it's somewhat feasible to control. But once the technology can run on your home PC, there is no amount of legislation or unionization that can control its use. And that statement is not to say anything is good or bad. The reality is simply that we have very limited ability to control the distribution and use of software. Switching to opinion mode, I believe that the technology is ultimately a good thing. I think limiting the use of technology in order to preserve jobs is bad in the long run. I believe it's better for humans if cars drive themselves and we don't need to employ human truck drivers. It's better for humans to give everyone the ability to make entire movies simply by describing them to a computer. The big problem is that our economic model won't support it. And I'm not talking about studios and unions--the fundamental problem is that digital goods can be infinitely duplicated at no cost, while every economy is based on exchanging finite goods. The same applies to AI, but with the new meta-layer that the actual duplicated product of AI isn't a digital good, it's a skillset for producing that digital good. I don't have all the right words to describe exactly what I'm trying to say. The example I give is that right now, self-driving cars are not as good as people. But the moment any car can drive itself better than a human, every car will be able to. We have to keep training new truck drivers to do the same task; that is not true of a duplicatable AI skillset. So to bring this back to my original point: we can try to prevent self-driving cars in an effort to protect truck drivers, but someday, someone will still achieve it, and at that moment the software will exist, and unlike a physical product, it can be copied all over the world simultaneously. So instead of preventing technology or its use, we need to adapt our economic model to better serve humans in light of our new abilities.
  11. Nice article! My perspective is as a software engineer at a company that is making a huge effort to leverage AI faster and better than the rest of the industry. I am generally less optimistic than you that AI is "just a tool" and will not result in large swaths of the creative industry losing money. The first point I always make is that it's not about whether AI will replace all jobs, it's about the net gain or loss. As with any technology, AI tools both create and destroy jobs. The question for the economy is how many. Is there a net loss or a net gain? And of course we're not only concerned with the number of jobs, but also how much money each job is worth. Across a given economy--for example, the US economy--will AI-generated art cause clients/studios/customers to put more, or less, net money into photography? My feeling is less. For example, my company ran an ad campaign using AI-generated photos. It was done in collaboration with both AI specialists to write prompts and artists to conceptualize and review. So while we still used a human artist, it would have taken many more people working many more hours to achieve the same thing without AI. The net result was that we spent less money on creative for that particular campaign, meaning less money in the photography industry. It's difficult for me to imagine that AI will result in more money being spent on artistic fields like photography. I'm not talking about money that creatives spend on gear, which is a flow of money from creatives out; I'm talking about the inflow from non-creatives towards creatives. The other point I'll make is that I don't think anyone should worry about GPT-4. It's very competent at writing code, but as a software engineer, I am confident that the current generation of AI tools cannot do my job. However, I am worried about what GPT-5, or GPT-10, or GPT-20 will do. I see a lot of articles--not necessarily Andrew's--that confidently say AI won't replace X because it's not good enough. It's like looking at a baby and saying, "that child can't even talk! It will never replace me as a news anchor." We must assume that AI will continue to improve exponentially at every task for the foreseeable future. In this sense, "improve" doesn't necessarily mean "give the scientifically accurate answer," either. Machine learning research goes in parallel with psychology research. A lot of machine learning breakthroughs actually provide ideas and context for studies on human learning, and vice versa. We will be able to both understand and model human behavior better in future generations. My third point is that I disagree with your claim that people are fundamentally moved by other people's creations. I think that only a very small fraction of moviegoers care at all about who made the content. This sounds like an argument made in favor of practical effects over CGI, and we all know which side won that. People like you and I might love the practical effects in Oppenheimer simply for being practical, but the big CGI franchises crank out multiple films each year worth billions of dollars. If your argument is that the people driving the entertainment market will pay more for carefully crafted art than for generic, by-the-numbers stories and effects, I can't disagree more. Groot, Rocket Raccoon, and Shrek sell films and merchandise based on face and name recognition. What percent of fans do you think know who voiced them? 50%, i.e. 100 million+ people? How many can name a single animator for those characters?
What about Master Chief from Halo (originally a one-dimensional character, literally from Microsoft)--how many people can tell you who wrote, voiced, or animated any of the Bungie Halo games? In fact, most Halo fans feel more connected to the original Bungie character than to the one from the Halo TV series, despite the latter having a much more prominent actor portrayal. My final point is not specifically about AI. I live in an area of the US where, decades ago, everyone worked in good-paying textile mill jobs. Then the US outsourced textile production overseas and everyone lost their jobs. The US and my state economies are larger than ever. Jobs were created in other sectors, and we have a booming tech sector--but very few laid-off, middle-aged textile workers retrained and started a new successful career. It's plausible that a lot of new, unknown jobs will spring up thanks to AI, but it's also plausible that "photography" shrinks in the same way that textiles did.
  12. He's being obtusely literal, in my opinion. Obviously you can't change the camera's analog gain after the fact. But most people don't judge an image or workflow by counting which photons and voltages flowed through their equipment; they care about whether the end result is accurate to their expectation. So when people say you can change WB in post, it means that the NLE is performing a mathematically correct operation to emulate a different white balance, based on accurate metadata. Not too long ago, there was no such thing as a color-managed workflow in consumer NLEs, which meant that the WB sliders and gain adjustments--on top of not changing the camera circuitry's analog WB--ALSO produced mathematically incorrect results compared to an in-camera change. So when we got accurate WB and ISO adjustments in raw processors, it was truly revolutionary. Nowadays, as long as it's color managed and the files have sufficient data, you can get the same result even without raw. Neither one is technically changing the camera's WB, but they produce the correct results, and that's all that matters. I'll also point out that I suspect most (all?) sensors don't actually change their analog gain levels based on the WB setting. I bet it's almost always a digital adjustment. In that case, Alister would have to also argue that changing WB on the camera doesn't actually change WB. Maybe he wants to argue that shooting at anything other than identical gain on each pixel isn't true white balancing, but I am not sure that is a useful description of the process. That is why I say it's obtusely literal. Everything I said also applies to ISO on cameras that have a fixed amount of gain.
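To make "mathematically correct" concrete: on linear, unclipped data, white balance is just a per-channel gain, which is why a color-managed NLE can emulate it exactly. Here's a minimal numpy sketch (the gain values are made up for illustration, not any camera's real coefficients):

```python
import numpy as np

# Toy example: white balance on linear RGB data is a per-channel gain.
# Gain values below are illustrative, not real camera coefficients.
def apply_wb(linear_rgb, gains):
    """Multiply each channel of a linear-light image by its WB gain."""
    return linear_rgb * np.asarray(gains)

img = np.random.rand(4, 4, 3)            # stand-in for linear, demosaiced data

shot = apply_wb(img, (2.0, 1.0, 1.5))    # hypothetical "as shot" gains (R, G, B)
target = apply_wb(img, (1.6, 1.0, 1.9))  # hypothetical target white balance

# "Changing WB in post": undo the as-shot gains, apply the target gains.
corrected = apply_wb(shot, (1.6 / 2.0, 1.0 / 1.0, 1.9 / 1.5))

print(np.allclose(corrected, target))    # True -- mathematically identical
```

The point being: as long as the data is linear and nothing clipped, the post adjustment and the in-camera adjustment are the same multiplication, just applied at different times.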
  13. This. Do your own tests and trust your judgement, but here's my opinion. If all you care about is how it looks on YouTube, 50 is perfectly fine. No one can tell the difference between a 50 and a 100 Mbps source file on a 7" phone screen, or on a 65" 4K screen 12' away with window glare across the front. I care more about how my content looks in Resolve than on YouTube. And even then, I use 100 Mbps H.265 (IPB). When I had an X-T3, I shot a few full projects at 200 Mbps and didn't see any improvement. I've done tests with my Z Cam and can't see benefits above 100. I'd be happy with 50 in most scenarios. It might be confirmation bias, but I think I have been in scenarios where 100 looked better than 50, in particular when handheld. Keep in mind also that on most cameras, especially consumer cameras, the nominal rate is the upper limit (it would be a BIG problem if the encoder went OVER its nominal rate, because the SD card requirements would be a lie). So while I shoot at 100, the average file comes out closer to 70, so it might not even be as big a file size increase as you think. But for me, 100 Mbps is the sweet spot when shooting H.265 IPB.
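For anyone doing the math on card space, here's a quick sketch of the arithmetic (the 70 Mbps average is just what I typically observe on my files, not a spec):

```python
# Rough file-size arithmetic for a variable-bitrate encode.
# The nominal rate is an upper bound; the actual average is often lower.
def file_size_gb(bitrate_mbps, minutes):
    """Approximate file size in GB for a given average bitrate."""
    return bitrate_mbps * minutes * 60 / 8 / 1000  # Mbit -> MB -> GB

print(file_size_gb(100, 10))  # nominal 100 Mbps, 10 min: ~7.5 GB
print(file_size_gb(70, 10))   # typical observed ~70 Mbps: ~5.25 GB
```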
  14. Red's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it's visually lossless (as every lossy algorithm has). JPEG 2000 has been used for DCPs since 2004 at a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm at a high compression ratio before Red did it. Red didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to bayer data, the way photo cameras did, instead of RGB data.
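A back-of-envelope sketch of what 12:1 means in data-rate terms (assuming typical 2K DCP image parameters of 12-bit 4:4:4 at 24 fps; the numbers are illustrative):

```python
# Back-of-envelope data rate for a 2K DCP at 12:1 JPEG 2000 compression.
# Assumes 12-bit 4:4:4 RGB at 24 fps (typical DCP image parameters).
width, height, channels, bit_depth, fps = 2048, 1080, 3, 12, 24

uncompressed_mbps = width * height * channels * bit_depth * fps / 1e6
compressed_mbps = uncompressed_mbps / 12  # 12:1 compression ratio

print(round(uncompressed_mbps))  # ~1911 Mbps uncompressed
print(round(compressed_mbps))    # ~159 Mbps -- under the 250 Mbps DCP cap
```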
  15. Honestly, the "or more" part is the only bit I really take issue with. Once Elon Musk reaches Mars, he should patent transportation devices that can go 133 million miles or more, so he can collect royalties when someone else invents interstellar travel. If he specifically describes "any device that can transport 1 or more persons," that would even cover wormholes that don't technically use rockets! If the patent had listed the specific set of frame rates that they were able to achieve, like 24-48 in 4K and 24-120 in 2K (or whatever the Red One was capable of at the time), at the compression ratios they could hit, that would seem more like fair play. That leaves opportunity for further technical innovation--which, by the way, Red might very well have been first at as well.
  16. I guess I disagree that anyone should have been allowed to patent 8K compressed raw, or 12K, or 4K 1000 fps--a decade before any of that was possible. I see arguments that the patent is valid because Red were the first to do 4K raw, so to the victor go the spoils... but since we're talking about differences like 23 vs 24, it's a valid point that they patented numbers that they could not achieve at the time. And in a broader sense, I don't understand why a patent should be able to prevent other companies from applying known, existing math to data that they generate. Without even inventing an algorithm, Red legally blocked all compression algorithms for that data.
  17. I've been working remote since pre-pandemic. The question isn't whether I like hopping on a zoom call, it's whether I prefer it over commuting 50 minutes each way in rush hour traffic. Depends on who is doing the saving. The huge companies that own and rent out offices definitely don't like it. I much prefer working from my couch, 10 feet from my kitchen, than in an office!
  18. The matte is pretty good! Is it this repo you are using? You mentioned RVM in the other topic. https://github.com/PeterL1n/RobustVideoMatting Tracking of course needs some work. How are you currently tracking your camera? Is this all done in real time, or are you compositing after the fact? I assume you are compositing later, since you mention syncing tracks by audio. If I were you, I would ditch the crane if you're over the weight limit, just get some wide camera handles and make slow, deliberate movements, and mount some proper tracking devices on top instead of a phone, if that's what you're using now. Of course, the downside to this approach compared to the projected background we're talking about in the other topic is that you can merge lighting more easily with a projected background, and with this approach you also need to synchronize a LOT more settings between your virtual and real camera. With a projected background you only need to worry about focus; with this approach you need to match exposure, focus, zoom, noise pattern, color response, and on and on. It's all work that can be done, but it makes the whole process very tedious to me.
  19. I have a control surface I made for various software. I have a couple of rotary encoders just like the one you have, which I use for adjusting selections, but I got a higher resolution one (LPD-3806) for finer controls, like rotating objects or controlling automation curves. Just like you said, having infinite scrolling is imperative for flexible control. I recommend still passing raw data from the dev board to the PC, and using desktop software to interpret the raw data. It's much faster to iterate, and you have much more CPU power and memory available. I wrote an app that receives the raw data from my control surface over USB, then transmits messages out to the controlled software using OSC. I like OSC better than MIDI because you aren't limited to low-resolution 7-bit messages; you can send float or even string values. Plus OSC is much more explicit about port numbers, at least in the implementations I've used. But having desktop software interpret everything was a game changer for me compared to sending MIDI directly from the Arduino.
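If anyone wants to try the same raw-data-to-OSC architecture, here's a stripped-down sketch of the bridge (assuming the pyserial and python-osc packages; the serial port name, OSC address, and one-reading-per-line wire format are placeholders for whatever your firmware actually sends):

```python
import serial                                      # pyserial
from pythonosc.udp_client import SimpleUDPClient   # python-osc

# Hypothetical setup: the dev board prints one "name value" pair per line.
ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
osc = SimpleUDPClient("127.0.0.1", 9000)  # the controlled software's OSC port

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    parts = line.split()                  # e.g. "encoder1 1427"
    if len(parts) != 2:
        continue                          # skip blank or malformed lines
    name, raw = parts
    value = int(raw) / 1000.0             # interpret raw counts on the PC side
    osc.send_message(f"/control/{name}", value)  # OSC carries floats natively
```

All the interpretation logic (scaling, acceleration curves, mode switching) lives on the PC side, so you can tweak it without reflashing the board.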
  20. CineD is measuring at different resolutions. Downscaling 4K to 1080p improves SNR by 0.5-1 stop. Probably the log curve on the GH5 doesn't take advantage of the sensor's full DR.
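The math behind that figure: a 2x2 downscale averages 4 pixels, which cuts uncorrelated noise by sqrt(4) = 2x, i.e. roughly one stop of SNR; real scalers and real sensor noise gain a bit less, hence the 0.5-1 stop range. A quick numpy demonstration on idealized Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(2160, 3840))  # idealized 4K noise field

# 2x2 box downscale to 1080p: average each 2x2 block of pixels.
binned = noise.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(noise.std())   # ~1.0
print(binned.std())  # ~0.5 -- noise halved, i.e. roughly +1 stop of SNR
```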
  21. This. The main concrete benefit of ProRes is that it's standard. There are a couple of defined flavors, and everyone from the camera manufacturers, to the producers, to the software engineers, knows exactly what they are working with. Standards are almost never the best way to do something, but they are the best way to make sure it works. "My custom Linux machine boots in 0.64 seconds, so much faster than Windows! Unfortunately it doesn't have USB drivers, so it can only be used with a custom keyboard and mouse I built in my garage" is fairly analogous to the ProRes vs. H.265 debate. As has been pointed out, on a technical level 10 bit 422 H.264 All-I is essentially interchangeable with ProRes. Both are DCT compression methods, and H.264 can be tuned with as many custom options as you like, including setting a custom transform matrix. H.265 expands on it by allowing different block sizes, but that's something you can turn off in encoder settings. However, given a camera or piece of software, you have no idea what settings they are actually choosing. Compounding that, many manufacturers use higher NR and more sharpening for H.264 than for ProRes, not for a technical reason, but based on consumer convention. Obviously once you add IPB, it's a completely different comparison, no longer about comparing codecs so much as comparing philosophies: speed vs. size. As far as decode speed, it's largely down to hardware choices and, VERY importantly, software implementation. Good luck editing H.264 in Premiere no matter your hardware. Resolve is much better, if you have the right GPU. And if you are transcoding with ffmpeg, H.265 is considerably faster to decode than ProRes with Nvidia hardware acceleration. But this goes back to the first paragraph--when we talk about differences in software implementation, it is better to just know the exact details from one word: "ProRes".
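As a concrete illustration of the "interchangeable" point, something like this produces an All-I, 10 bit 4:2:2 H.264 file that is structurally very close to ProRes 422 (a sketch only, assuming an ffmpeg build with 10-bit libx264 support; the filenames and CRF value are placeholders):

```python
import subprocess

# Sketch: All-I, 10-bit 4:2:2 H.264 -- structurally close to ProRes 422.
# Assumes an ffmpeg build with 10-bit libx264; filenames are placeholders.
subprocess.run([
    "ffmpeg", "-i", "input.mov",
    "-c:v", "libx264",
    "-g", "1",                  # GOP of 1: every frame is an I-frame (All-I)
    "-pix_fmt", "yuv422p10le",  # 10 bit 4:2:2, like ProRes 422
    "-crf", "10",               # high quality; tune to taste
    "output.mp4",
], check=True)
```

The catch, as above, is that nothing forces a given camera or encoder to pick settings like these, whereas "ProRes" pins all of them down in one word.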
  22. Wow, great info @BTM_Pix, which confirms my suspicions: Zoom's app is the Panasonic autofocus of their system. I've considered buying a used F2 (not BT), opening it up and soldering the pins from a Bluetooth Arduino onto the Rec button, but I don't have time for any more silly projects at the moment. I wish Deity would update the Connect with 32-bit. Their receiver is nice and bag friendly, and they've licensed dual transmit/rec technology already. AND they have both lav and XLR transmitters.
  23. I was looking at this when it was announced with the exact same thought about using F2's in conjunction. From what I can tell though, the app only pairs with a single recorder, so you can't simultaneously rec/stop all 3 units wirelessly, right?
  24. I've seen cameras that scan rooms into 3D for real estate walkthroughs. Product demos, especially real estate, are a great practical use case for VR, since photography distorts space so much more than a full, congruent 3D model does. One surprising aspect of VR content creation that I've run into both at work and in hobbies is that you can have a 3D environment that looks totally normal in screen space, and then as soon as you step into that world in VR you immediately notice mismatches in scale between props. By "surprising," I mean it's surprising how invisible scale mismatches are on a computer screen, even when you can move freely in 3D. But yes, renderings for VR make a lot more sense to me than a fixed-location image or video; for those I'd really rather just have a normal 3D screen, rather than have it "glued" to my head.
  25. 3D porn is last decade, we're way beyond that haha