
Leaderboard

Popular Content

Showing content with the highest reputation on 02/19/2024 in all areas

  1. Perhaps the critical concept is that AI is a calculator trained only on human input data. This whole thing is like when people learned anatomy. At first, people thought we couldn't possibly understand how the body worked because it was made by God. Over many hundreds of years we've worked out more and more of the organic chemistry and the underlying principles, and now no one familiar with modern medicine would question our ability to understand the physical body. Now comes AI, and we're back to saying that we couldn't possibly understand or replicate what it is to be human, because we're ethereal, magical, special, and knowable only to God. I think that line of thinking will suffer the same fate, and will suffer it at thousands of times the pace.
    2 points
  2. Although you need an external recorder (something I don't like, because it makes the setup too big and defeats the joy of shooting for me). Plus, the IBIS is not to be underestimated. But the Sigma FP does have a nice image.
    2 points
  3. I spent a lot of time testing different configurations to make the FP more ergonomic. I finally found the best cage and drive to keep it relatively compact without going down the Dark Labs rabbit hole... With that said, out of the box the camera isn't much bigger than the original EOS M, so when you add a cage, the SSD, a lens adapter and a lens, it morphs into a much larger camera that isn't much smaller than my 5D3, but much more cumbersome.
     The screen is pretty nice on the FP and, excuse the pun, but with the 5D3 you can see the "magic" of raw on the LCD. With the FP, although it's a better screen, the image has a very flat, two-dimensional look. Battery life isn't that great with the FP either, compared to the 5D3... I'd say you can get about 45 minutes or so.
     In some ways, the FP would probably work better fully rigged out, or with the barebones camera on a gimbal and the SSD attached to the gimbal's handle. With my current setup, I'd say @BTM_Pix 's approach using M mount lenses would probably be the most compact route. I've been looking at some LTM lenses, so we'll see.
     In a perfect world, I'd have the money to buy an R5C or R5ii when it gets released, but that doesn't seem likely, so I'll decide what to do with my FP. To tell the truth, if someone offered me a good price, I'd consider selling it right now. I've got OpenAI for video now.
    1 point
  4. As I'm sure you know and you just misspoke, you need an external drive for 12-bit 4K, not an external recorder. You can shoot internal 12-bit 1080p, but ML Raw has undeniably better IQ, in all respects, than the FP's FHD. I agree about the IBIS, but I don't really see the point of ML Raw on the R5, since you can already shoot internal raw. And all of the Canon IBIS cameras are Digic X, so it will be a long time before they crack that code. It will be interesting to see what they can do with the EOS R and, hopefully, the M50. I'd actually be really happy if they were able to enable continuous MLV raw on the 5D4. But I'm still really impressed with what the 5D3 with ML Raw and a Canon IS lens is capable of handheld. So maybe I'm just easily impressed.
    1 point
  5. 4K raw looks clean, but I prefer the look from the 5D3. The 14-bit color helps with the highlights. But I do like the FP.
    1 point
  6. I suspect you've fallen foul of the "y" being adjacent to the "t" on the keyboard there.
    1 point
  7. How much do you know about South Korean culture? There are many things in that culture that promote such behaviour which are not present in other cultures, or are present to a much less significant degree.
     Seems like a convenient subsample: 90% of the daily users of a platform specifically targeted at people who want to post photos for other people to see. 200 million is 2.4% of the world's population. I'd be amazed if 90% of the top 2.4% of the world's most vain people didn't use it either. Another convenient subsample.
     If you're putting posing for photos or smiling into the lying category, then I can tell you're either trolling or on some serious drugs. The people on Tinder / IG / etc are one of these vanity collections again, like if you surveyed everyone at a Miss World competition and concluded that everyone uses 450g of makeup per day.
     An AI video is a video where:
     - no pixel was ever directly recorded from real life
     - there is no way to go back to the source, because there were unknowable amounts of training data and limited input data (these mysterious GoPros scattered around)
     - there is no way to know how the training data was processed
     - there is no way to know how the AI works
     I think we're a tad beyond sucking in your gut when someone pulls out their phone to take a snapshot.
     On a more general note, I occasionally read members of the forums writing that the world is going down the drain due to one social trend or other, and I just roll my eyes and wonder what the hell they are looking at online. I mean, when you open a browser you don't see anything until you search for it, so the things people say about the world are really just a reflection of whatever they have become fixated on and allowed to distort their world view.
    1 point
  8. Haven't thought about that. @mercer 🙂 How do you like the output and experience compared to the 5DIII? The EOS M50 would also be a fun little raw cam. Looking forward to that one. It has a Digic 8 processor just like the EOS R. @Andrew Reid
    1 point
  9. It's called a Sigma FP and I assume it's due for an update soon.
    1 point
  10. I've had both for at least 2 years each. If your priority is size, I'd go with the GX85. If your priority is features, the E-M1 II blows away the GX85 in almost every regard.
    1 point
  11. It's gonna be way easier to direct the AI than it is to direct shitty actors. AI is improving at lightning speed; in 5 years' time you'll be able to create high-end car commercials from your bedroom, just based off your vision board/storyboard, with no need to gather a crew, fly across the world, lock off roads, rent gear, ... They'll prolly need to introduce new laws, as it's now easy to bypass location-specific laws.
    1 point
  12. I mean, I'm not really trying to argue about all of this. I think we're all gonna cope with a moment like this in different ways. What matters is that we're aware it's happening and that we aren't naive about what it might mean for many of us and/or our peers' livelihoods.
    1 point
  13. It won't always look shitty. Sorry, but anyone saying this looks sh*tty must not have seen the original Will Smith spaghetti one from a year ago. THAT was sh*tty. This new one is nowhere near perfect, but sh*tty is just a ridiculous descriptor for something that is still developing. It is, imo, good enough for us to understand the implications for the art-based labor industry.
    1 point
  14. Reality TV is not even 'reality' and clearly cares little for the tenets of JSP, so I would argue that format will embrace AI with the quickness and you'll have AI-based characters that viewers are rooting for...
    1 point
  15. I don't think a photo-real animation with no back-end labour can be described as just a better animation tool. Current animation tools, critically, take years of practice and hundreds of paid hours to create each individual work. A production going from "writer, director, and 10,000 hours of professional, lifelong technical artists" to "writer, director, and a 2-month subscription to OpenAI" is, in my opinion, something to pay attention to and expect disruption from, whether you categorize it as "just a better tool" or not.
      Switching perspectives a little, these tools are absolutely perfect for hobbyists like me. I'm never going to hire artists, so my productions go from crap CGI to amazing CGI, and no one loses a job. There are no downsides! If that's the angle you're coming from, then I agree with you. However, for anyone making a living off video work, there's a very, very large chance that the amount of money anyone is willing to pay for ANY kind of creative content creation is going to decrease, fast.
    1 point
  16. It won't always look shitty. Remember 30 years ago, when CGI looked like Legos photographed in stop motion against a flickery blue screen? Let's wait 30 years on AI-generated imagery.
      No technology can take away the enjoyment of doing something, though it can take away the economic viability of selling it. Which indirectly affects us, because if fewer cameras are sold, people like you and me will face higher equipment prices.
      Certainly AI is already used in the gaming industry to make assets ahead of time. It will be a bit longer before the computational power exists at the end user to fully leverage AI in real time at 60+ fps. When you have a rendering budget of roughly 13 milliseconds per frame (a rough calculation follows below), it's a delicate balance between clever programming and artistically deciding what you can get away with, and that requires another leap in intelligence. Very few humans are able to design top-tier real-time renderers. AI will get there, but it's a vastly more complex task than offline image generation.
      But yes, AI today already threatens every technical game artist the same way it does the film and animation industries, and will likely be the dominant producer of assets in a couple of years. In the near term, humans might still make hero assets, but every rock, tree, and building in the background will be AI. Human writers and voice actors might still voice the main character, but in an RPG with 500 background characters and a million lines of dialog, it is cheaper and higher quality for AI to write and voice generic dialog.
    1 point
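      A minimal Python sketch of the frame-budget arithmetic behind the "13 millisecond" figure above. The frame rates and the game-logic reserve are illustrative assumptions, not numbers from the post: at 60 fps a whole frame is about 16.7 ms, so a ~13 ms rendering budget corresponds to reserving a few milliseconds per frame for game logic, or to targeting roughly 75 fps.

          # Frame-budget arithmetic: total milliseconds available per frame at a
          # given frame rate, minus an assumed reserve for game logic.
          # The reserve value below is a hypothetical illustration.
          def frame_budget_ms(target_fps: float, logic_reserve_ms: float = 0.0) -> float:
              """Milliseconds left for rendering each frame at target_fps."""
              return 1000.0 / target_fps - logic_reserve_ms

          for fps in (60, 75, 120):
              print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
          # 60 fps -> 16.7 ms, 75 fps -> 13.3 ms, 120 fps -> 8.3 ms
          print(f"60 fps with ~3.7 ms of game logic -> {frame_budget_ms(60, 3.7):.1f} ms for rendering")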
  17. This is vastly underestimating the quality of AI-based video in the near future. You must see that the tech we're seeing and talking about right now will be capable of reproducing imagery that is stunningly lifelike. The only thing being removed from the equation moving forward is our role in the capture process. But even that is not quite true, because this AI tech stands on the sum total legacy of everything humans have captured of the world to date. One thing that is humbling about AI-based video, audio, etc., is that it is telling us that even our physical existence can be reduced to 1s and 0s. There were, and are, human economic systems within which something like AI could be used in non-exploitative ways towards human beings. We unfortunately do not exist within one of those systems at the moment.
    1 point
  18. Agreed. As you say, if people want to use cameras and other traditional forms of real-life capture for home/family use, no one will stop them. But it's unlikely that media/film production companies in the future will be hiring and paying people who offer camera capture, set design, lighting, etc., as a sole and primary service, which is really what we're talking about.
      Also, the AI approach won't be seen as a 'forgery' by mass consumers in most circumstances. The ones intended for insidious deep-fake purposes? Yes, of course. But most AI-based video will be seen and consumed as a valid representation of real life, a la a painting. It will also be impossible to tell the difference in the future. That's just based on how far a company like OpenAI has come in a year. And these distinctions we're making around real vs fake will be irrelevant to the vast majority of humans born into it from here on out.
      All realms of commerce have experienced crushing human labor disruptions in past and present times (car manufacturing being the most obvious example). What makes this stunning and unique is that it is happening to the realm of commerce (i.e. art-based commerce) that we instinctively know humans will continue to pursue whether they are paid for it or not. You can't say the same for a lot of other realms of the human labor economy. So it will be, imo, one of the most poignant blows in the history of human labor to date.
    1 point
  19. ghostwind

    24p is outdated

    We hope it happens on page 24.
    1 point
  20. Footage looks good, and global shutter is a definite plus of course, but I'd like to see a side-by-side with the OG BMPCC or P4K or Sigma FP etc.
      To be honest, the CST in Resolve can convert linear gamma to whatever you want, so that part was done already. Adjusting the primaries can be done with a 3x3 RGB matrix (see the sketch after this post), which isn't easy but also wouldn't be too hard if you knew what you were doing. I commend him for pursuing the project, but it's probably half a day's work for a professional colour scientist at most, maybe even an hour or two if they are set up for it already.
      This is an example of a RAW-shooting camera with a good conversion to LogC and then a nice LogC LUT put onto it, nothing more. This has been done by Juan Melara for a range of cameras including the P4K etc, which are almost indistinguishable in side-by-side comparisons. https://juanmelara.com.au/products/bmpcc-4k-to-alexa-powergrade-and-luts
      This is why I moved to colour grading rather than going from camera to camera: the RAW-shooting cameras are all nearly perfect already, and improving them without improving the colour grading is really a waste of time.
    0 points
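      A minimal Python sketch of the "3x3 RGB matrix" primaries conversion mentioned above. The camera-native-to-XYZ matrix is a hypothetical placeholder (a real workflow would use the measured matrix for the specific sensor, or let Resolve's CST handle it); only the Rec.709 matrix is a published reference.

          # Primaries conversion via a 3x3 matrix: camera RGB -> XYZ -> Rec.709 RGB.
          import numpy as np

          camera_to_xyz = np.array([      # hypothetical camera-native RGB -> CIE XYZ
              [0.52, 0.30, 0.14],
              [0.23, 0.70, 0.07],
              [0.03, 0.12, 0.93],
          ])

          rec709_to_xyz = np.array([      # published Rec.709/sRGB RGB -> XYZ (D65)
              [0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505],
          ])

          # Chain the two into one 3x3 matrix that re-maps the primaries in a single step.
          camera_to_rec709 = np.linalg.inv(rec709_to_xyz) @ camera_to_xyz

          def convert_primaries(rgb: np.ndarray) -> np.ndarray:
              """Apply the 3x3 primaries conversion to an (N, 3) array of linear RGB."""
              return rgb @ camera_to_rec709.T

          # Mid-grey should come out roughly neutral with a sensible matrix.
          print(convert_primaries(np.array([[0.18, 0.18, 0.18]])))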