KnightsFan

Members · 1,190 posts
Reputation Activity

  1. Like
    KnightsFan reacted to eatstoomuchjam in Deciding closest modern camera to Digital Bolex look   
    FWIW, the debate about USB audio may be irrelevant - the write-up on Newsshooter says that they plan to add a dedicated 3.5mm LTC timecode port and 3.5mm mic and headphone jacks.
    Overall, it looks pretty cool to me.  If they really can deliver it at less than $1k, I’ll likely be a customer.  
  2. Like
    KnightsFan got a reaction from kye in Deciding closest modern camera to Digital Bolex look   
    Well yes, they spent the last few years making an 8K full frame camera, so it's clear to me that going with a 16mm-sized sensor was an economic decision, and that their ultimate goal is a larger format. Nothing wrong with that; in fact, many camera companies started with small-sensor, simple cameras (Z Cam E1, BMCC 2.5K) before earning the credibility to sell more expensive, feature-rich models (F6 Pro, BM 12K).
    I read a comment years ago that the 8K LF model was going to be $13k or something. Presumably they realized that their 8K LF dream might not work out on a first model. (I've been checking on Octopus every few months since the announcement in 2019, so at one point or another I've seen most of their posts and comments)
  3. Like
    KnightsFan reacted to QuickHitRecord in Deciding closest modern camera to Digital Bolex look   
    Octopus Cinema is pivoting to focus exclusively on S16 cameras now. Check it out:
  4. Like
    KnightsFan got a reaction from IronFilm in Even the latest Zoom H4 now "supports" timecode!   
    I guess this TC approach is technically better than nothing, but I do not like being vendor-locked into a closed, proprietary system. BTM_Pix's post perfectly encapsulates why.
    LTC is simple, works well, and has open documentation, so in the absolute worst case, you can fix problems yourself. Not so with UltraSync Blue. Buying a dongle for TC is fine, but if Zoom made a dongle that takes normal LTC via a BNC or 3.5mm, that would be sooooo much better.
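    To underline how approachable LTC is, here's a minimal decoder sketch in Python. It's a sketch, not production code: it assumes a clean, full-scale mono recording saved as "ltc.wav" (a hypothetical file name) plus numpy and scipy; a real recording would need band-pass filtering, hysteresis, and error handling.

    # LTC is biphase-mark coded: every bit cell starts with a transition,
    # and a "1" bit has an extra transition mid-cell. So a "0" appears as one
    # long interval between zero crossings, a "1" as two short ones.
    import numpy as np
    from scipy.io import wavfile

    rate, audio = wavfile.read("ltc.wav")          # hypothetical capture
    crossings = np.where(np.diff(np.signbit(audio.astype(np.float64))))[0]
    intervals = np.diff(crossings)
    is_short = intervals < (intervals.min() + intervals.max()) / 2

    bits, i = [], 0
    while i < len(is_short):
        if is_short[i]:                            # two short intervals -> 1
            bits.append(1); i += 2
        else:                                      # one long interval  -> 0
            bits.append(0); i += 1

    # Each 80-bit frame ends with the fixed sync word 0011 1111 1111 1101;
    # the timecode digits are BCD fields, least significant bit first.
    SYNC = [0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1]
    for i in range(len(bits) - 80):
        if bits[i + 64:i + 80] == SYNC:
            f = bits[i:i + 80]
            bcd = lambda lo, n: sum(f[lo + k] << k for k in range(n))
            print("%02d:%02d:%02d:%02d" % (
                bcd(48, 4) + 10 * bcd(56, 2),      # hours
                bcd(32, 4) + 10 * bcd(40, 3),      # minutes
                bcd(16, 4) + 10 * bcd(24, 3),      # seconds
                bcd(0, 4)  + 10 * bcd(8, 2)))      # frames
            break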
  5. Like
    KnightsFan got a reaction from kye in 2023 In Review - How Did Your Year Go?   
    I love seeing the reels that everyone has posted!
    2023 was a fantastic year, just not for film or video! It was the first year since 2012 in which I made zero video projects. My hope is to spend 2024 getting back to narrative films. The difficulty is finding people with the time and resources to make movies for fun.
    I did get to take some fun photos. I've used an A7rII for photos the last couple of years. My primary lens is a 28mm Nikon AI, which it has been since I bought it 6 or 7 years ago--talk about good return on investment! I attached a couple of pictures I took this year. The snowy one is a Canon 24-105, and the others are the Nikon 28. Importantly, my photography kit weighs just 2.28lbs/1.03kg with filter and lens cap, and fits in a pouch on my daypack. Most of my adventures include a lot of hiking and some rock climbing, so I like a camera that is A) small and light, B) not stored in front of me, and C) retrievable without removing my pack.
    I also side/down-graded from a Zoom F4 to an F3 for audio. It's unlikely I'll work on proper productions any time soon. The F3's size and 32-bit recording are great for capturing effects, and for production use I'll velcro it to a boom pole next to an NPF battery sled (and then hope whoever is using it while I run camera aims it well). It's overall a better fit for what I do now.
     
     
    For 2024, my plan is to make videos of all kinds. Narrative films, tutorials on video game design (my other creative hobby), videos about DIY projects, and maybe some animation. I'm also interested in building a projector-based virtual set--I did a proof of concept, but I'd need to invest in good backdrops to make it photorealistic.
    Aside from virtual sets, I have all the gear I need. However, I do plan a couple new items.
    - A couple lights for narrative films. Probably LEDs that can run off batteries when needed.
    - Switching my video camera to full frame. Maybe a Z Cam F6 instead of my M4.
    - Considering lens upgrades. I've never owned high-quality glass; it's always been borrowed or rented by the production. I might get Sigma Arts--I always enjoyed using them. I look at cinema lens sets every now and then, but honestly I won't get much more out of "real" cinema housings vs 3D-printed gears, and you have to go waaay up in price before optical quality rises.



  6. Like
    KnightsFan got a reaction from mercer in 2023 In Review - How Did Your Year Go?   
    (Same post as item 5 above.)
  7. Like
    KnightsFan reacted to IronFilm in 2023 In Review - How Did Your Year Go?   
    I think the demand for juniors/interns in that field is probably going to drop off a cliff soon, but I think the demand for senior software developers will remain strong for years to come.
  8. Like
    KnightsFan reacted to herein2020 in 2023 In Review - How Did Your Year Go?   
    2024 is upon us and I have decided to look back before looking forward. How did everyone else's year turn out?
     
    Work
    2023 has been the busiest year I have ever had: I shot an even wider variety of events than before, as well as small projects such as photoshoots, social media content for clients, etc. This year, even logging into this site was a luxury I rarely had. Next year looks like it will be the same as I continue to build a repeat client base across a wider spectrum of project types.
    Gear
    Each year my goal is to buy nothing, and of course it did not work out that way for me. My biggest purchase this year was a new editing workstation. I was having real problems editing the H.265 10-bit 4:2:2 footage that the R7 and R5 produce, and time is money, as they say; with so many jobs, my workstation became the limiting factor. I ended up building a custom workstation with a 24-core Core i9 CPU, 128GB of memory, NVMe drives, and an RTX 4080 GPU. Most importantly, I made sure that the CPU I selected supports QuickSync, which can hardware-accelerate H.265 10-bit 4:2:2 footage. This setup with DaVinci Resolve finally fixed my Fusion and footage lags once and for all.
    The new CPU also gives me access to AV1, which produces almost lossless footage quality. I thought it was going to be a great new codec to use, until I found out Vimeo does not support it even though they say they do (my first AV1 upload to Vimeo stuttered horribly and was unwatchable), and on YouTube the user has to specifically enable AV1 playback in their profile. I will probably still upload to my personal YT channel using AV1, but for clients I will need to stick with H.265 or H.264.
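    That delivery split is easy to script. Below is a hedged sketch driving ffmpeg from Python; it assumes an ffmpeg build with libx265 and SVT-AV1 (libsvtav1), and the file names and CRF values are illustrative, not tested recommendations.

    import subprocess

    def encode(src: str, dst: str, for_client: bool) -> None:
        """Encode AV1 for personal uploads, H.265 for client deliverables."""
        if for_client:
            # H.265 fallback, since client-side AV1 playback support is spotty.
            video = ["-c:v", "libx265", "-crf", "20"]
        else:
            # AV1 for the personal YT channel.
            video = ["-c:v", "libsvtav1", "-crf", "30", "-preset", "6"]
        subprocess.run(["ffmpeg", "-y", "-i", src, *video, "-c:a", "copy", dst],
                       check=True)

    encode("event.mov", "client_deliverable.mp4", for_client=True)   # hypothetical files
    encode("event.mov", "personal_upload.mkv", for_client=False)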
    As many of you already know, my setup is now 100% Canon: R7 (photo/video), R5 (photos), C70 (video), and I use them in that order from a frequency standpoint. The R7 has exceeded my expectations in every way (battery life, image quality, reliability, photos, video, rigging options). The R5 is a bit of a disappointment: 95% photos, sometimes a B cam for the R7, or a C cam for the R7 and C70. I probably would be better off with two R7s and the C70, but it is too late now.
    C70 
    The C70 is great when locked down on a tripod, and especially for long form with XLR audio as an A cam. I am far from a pixel peeper, but when I look at the footage after a shoot, it's always a bit of a disappointment. I can't quite put my finger on it, but it always seems flatter and duller than the R7 or R5 even though they are all set to CLOG3. I personally think it is a combination of the Canon speedbooster and the Canon EF 24-105 F4 lens that lives on it, as well as the fact that I have little or no control over the lighting for most of the events that I shoot. The 24-105mm is underwhelming in all areas, but I just can't seem to find a better lens for the C70; something like the Tamron 35-150mm F2-2.8 would, I think, be a better fit, but that's not an option in EF mount.
    I think the 24-105mm combined with the speedbooster does something to the contrast and saturation that is hard to recover in post. Also, with the slightest bit of direct light the image washes out very quickly due to the speedbooster; and when I say direct light I don't mean direct sunlight--even shooting concerts and music videos, the image washes out terribly from DJ or music video lights. Needless to say, my setup is not doing the C70 any favors, but as an OMB my setup times are already long as it is. I am thinking about switching to a native RF lens for the C70, but that would be an expensive endeavor. The Canon RF 24-105mm F2.8 is $3K... insanity. It would probably greatly improve the C70's IQ but do nothing for its reach.
    R5
    The 30-minute recording limit pretty much kills it for anything long form, which rules it out as ever being an A cam even if overheating weren't a concern (which it still is to me). I do shoot B-roll clips with it in a pinch, or lock down the C70 and R7 and shoot the R5 handheld, but it's rare that I need all three. The EVF has so much latency that I still miss my 5D4 for photography action shots. Other than the EVF lag, it is great for photography.
    R7
    I've sung its praises enough, but I do find myself thinking a lot about FF equivalency these days when picking up the R7 (more on that in a moment). To my eyes, the IQ out of the R7 and R5 is identical until the R5's second native ISO kicks in, but if the ISO needs to be that high, I am in trouble anyway. These days I use an F7 panel light and pretty much never have to go above ISO 800 with the R7 combined with the Sigma 18-35 F1.8 EF-S lens.
    Stability
    I shoot almost exclusively handheld now. On many projects I don't even bring the gimbal, and the few times that I do, I hate everything about it and either don't use it or only use it for one or two shots (long-form speed ramps, mostly). The R7's IS is great, and I no longer try to emulate fancy YT camera movements; I let the action do the moving and just slightly follow it with the camera work. For very short walks (backwards or forwards) I am stable enough, combined with IS and sometimes DR warp stabilization, to pull it off, but needing to walk really isn't as common as you would think for the projects that I shoot.
    vND Meike Adapter
    The Meike vND adapter has become my favorite accessory of all time for the R7--and for pretty much any camera I have ever owned. No more fiddling with lens ND filters; it is nothing short of amazing. It does add a slight green cast to the footage, but it is an even cast, with no dreaded X pattern or variable cast. In post I just add 20 to the magenta slider in DaVinci Resolve and it's fixed. Sometimes the green is complementary and, IMO, gives it a higher-end look, so depending on the situation I will leave it.
    My only complaint with the filter is that it does not go to zero stops of ND, so if you are running back and forth from inside to outside for an event, you have to accommodate the 2 stops indoors by raising the ISO. It is also very easy to bump the little wheel, and there are no numbers or detents on it to set it precisely back to where it was.
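    For anyone curious what that correction amounts to, here is a toy numpy version of the same idea, assuming the cast really is even across the frame. Resolve's "magenta +20" is a UI control; the sketch approximates it as a small gain reduction on the green channel (the 0.05 amount is an illustrative guess, not a calibrated match).

    import numpy as np

    def shift_toward_magenta(rgb: np.ndarray, amount: float = 0.05) -> np.ndarray:
        """rgb: float image in [0, 1], shape (H, W, 3).
        Reducing green uniformly reads as a shift toward magenta."""
        out = rgb.copy()
        out[..., 1] *= 1.0 - amount
        return np.clip(out, 0.0, 1.0)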
    FF Equivalency
    Over the course of this past year I have spent more time than I would like thinking about FF equivalency, but not in the way most people do. I personally think the FF "look" is a myth; when looking at a shot, I have no clue whether it came from a FF sensor or not.
    When I think about FF equivalency, it is from a lighting perspective. Before arriving at a new venue for a project I always worry that there won't be enough light available: all of my F2.8 FF lenses are effectively no longer F2.8 on the R7, and the Sigma 18-35 F1.8 usually isn't long enough for back-of-the-room long form content (dance recitals, corporate events, holiday shows, etc.), so I usually use the RF 70-200mm F2.8 on the R7 for B cam work. The R7 also does not have a second native ISO, so it gets noisy pretty quickly past ISO 1600.
    Those situations, and the fact that 60FPS is line-skipped (in both the R7 and the R5), sometimes make me wonder if the R6 II would be as good a fit for me as the R7. So far the R7 has delivered on every project and my concerns about lighting were overblown, but I feel like it's the one thing that adds stress to my day when arriving at a new venue for a new project where I have no control over the lighting.
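    The lighting math behind that worry is easy to sanity-check. A quick sketch, assuming the usual equivalence convention (same field of view, same depth of field and total light) and Canon's 1.6x crop:

    CROP = 1.6  # Canon APS-C crop factor (R7)

    def ff_equivalent(focal_mm: float, f_stop: float, iso: float):
        """Full-frame equivalents for field of view, depth of field, and noise."""
        return (focal_mm * CROP,   # equivalent focal length
                f_stop * CROP,     # equivalent aperture (DoF / total light)
                iso * CROP ** 2)   # ISO giving similar noise on full frame

    # A 70-200mm f/2.8 at ISO 1600 on the R7 frames and gathers light like a
    # 112-320mm f/4.5 at roughly ISO 4100 would on a full-frame body.
    print(ff_equivalent(70, 2.8, 1600))   # (112.0, 4.48, 4096.0)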
    2024 Demo Reel
    I also created a new demo reel for 2024. It was shot across many years, projects, and camera ecosystems. No matter how busy you are, you still need to keep those new customers coming in, so I decided to start off the year with a demo reel.
    Below are all of the cameras that I think I used for the footage in this video. I also uploaded this video to YT using AV1.
    Cameras: Canon R7, R5, C70, S5, GH5, C200, R6, GoPro 8
    Drones: DJI P4, Mavic Pro, Autel EVO II
     
     
     
  9. Like
    KnightsFan reacted to QuickHitRecord in 2023 In Review - How Did Your Year Go?   
    As a full-time freelancer, it was a rough year for me. I made about 2/3 of what I did in 2022; it was the worst year I've had since 2018. I'm still trying to understand why. It was probably a lack of focus as I helped my wife through her cancer treatments, the last of which was in April (as far as we know, she is cancer-free). Also, I lost a large account -- a relief for my mental health, but it did account for a double digit percentage of my income over the past two years. And I took two personal trips, which meant turning down quite a bit of work (as they say, "take two weeks off, lose six weeks of work"). But yeah, it just felt really slow, especially the Fall/Winter. I've been watching the rise of AI and big jumps in smartphone technology closely, and I am wondering how much longer my skill set will be relevant. It may be time for a change soon, though I can't think of anything that I would rather do.
    Equipment-wise, I bought a second C70, two RF zooms, a Red Scarlet-X, and an M2 Mac Studio + BenQ monitor. I sold about 50 items that I just haven't been using. I probably have another 10 that I can let go of too.
    My new reel for 2024:
     
  10. Like
    KnightsFan got a reaction from John Matthews in 8-bit REC709 is more flexible in post than you think   
    Lol true. My point with 8 vs 10 was that the difference is readily apparent to the naked eye in most shooting conditions without any color grading (though again, it could just be my camera's implementation). From my experience shooting DNG vs ProRes on old Blackmagic cameras, I can't say I ever saw a difference. So it's all about diminishing returns.
  11. Like
    KnightsFan reacted to John Matthews in 8-bit REC709 is more flexible in post than you think   
    Look how great 8bit is!
    ...10bit is better ...12bit is even better than 10bit ...I think I know where this conversation is going.
    According to ChatGPT, to get the most out of 8-bit:
    - Use a Flat Picture Profile: any suggestions on Panasonic (without adjusting highlights and shadows)?
    - Expose Carefully: I use roughly -0.3 to -0.7 EV comp. Anyone else?
    - Control Contrast: that's part of the Picture Profile (on Panasonic).
    - Use a Lens Hood: good point in general, but it makes my setup bigger. With modern lenses, does it really matter so much?
    - White Balance: seems obvious... for those who don't have multi-camera setups, do you just use auto WB?
    - Avoid Aggressive Color Grading: of course, it's 8-bit, but how far can you go?
    - Shoot in the Best Quality: what's the minimum? 4K 100Mbps-ish? What about on a tripod?
    - Use a Tripod or Stabilization: less movement = better image?
    - Control Lighting: obviously, but does 10-bit even matter in such controlled situations?
  12. Like
    KnightsFan got a reaction from John Matthews in 8-bit REC709 is more flexible in post than you think   
    The biggest difference I notice between 8 and 10 bit footage is that 8 bit has splotchy chroma variation. I believe this is a result of the encoder rather than inherent in bit depth, but it's been visible on every camera that I've used which natively shoots both bit depths. In this quick example, I shot 60 Mbps 4:2:0 UHD Rec 709 in 10 bit H265 and 8 bit H264, and added some saturation to exaggerate the effect. No other color corrections applied. Notice when zooming in, the 8 bit version has sort of splotches of color in places.
     All settings were the same, but this is not a perfectly controlled test--partially because I was lazy, and partially to demonstrate that it's not that hard to show a 10 bit benefit at least on this camera. I do, however, agree with the initial premise, that 8 bit does generally get the job done, and I generally also agree that 8 bit 4k downscales to a better image than native 10 bit 1080p.
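    The banding half of this is easy to reproduce synthetically. Here's a toy sketch of the "add saturation to exaggerate" idea, quantizing a subtle chroma ramp to 8 and 10 bits. Note it shows pure bit-depth banding only; the splotchiness described above also involves how the encoder allocates bits, which this skips.

    import numpy as np

    ramp = np.linspace(0.40, 0.60, 3840)        # subtle ramp across a UHD width
    q8  = np.round(ramp * 255)  / 255           # 8-bit quantization
    q10 = np.round(ramp * 1023) / 1023          # 10-bit quantization

    boost = 8.0                                 # crude "saturation" exaggeration
    def distinct_steps(x):
        return len(np.unique(np.round((x - 0.5) * boost * 255)))

    # 8-bit leaves ~4x fewer distinct steps across the ramp -> visible banding.
    print(distinct_steps(q8), distinct_steps(q10))   # ~52 vs ~206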
     



  13. Like
    KnightsFan got a reaction from kye in 8-bit REC709 is more flexible in post than you think   
    (Same post as item 12 above.)
  14. Like
    KnightsFan got a reaction from kye in 24p is outdated   
    It's a couple people, really. I disagree with you about aesthetics of 24p and about the purpose of art, but agree about AI. I disagree with some others about the nature of art requiring a human origin, but agree with them about 24p and the purpose of art. And a lot of us who disagree have had a decent discussion, between the silliness, so don't give up entirely.
    In my opinion @zlfan has been especially inflammatory, not addressing examples or arguments, and ending every other post with lol. And I'm not interested in @Emanuel 's statements like "art made by machines is not art. Period." I know some of it is a language barrier, but it's not a useful statement or a reasoned argument. I appreciate @kye's detailed posts with actual examples, even when I disagree. I didn't read every post, but he might be the only one other than me who has tried to explain their artistic position with any depth or examples. Saying "24p is better because it's what we've always done" is as inane a position as "more frames is better"--not because of the position taken, but because neither statement contributes to anyone's understanding.
    I haven't posted here in a while because I don't have time for making movies anymore. I don't know if I was included in the previous statement that there are too many engineers in this thread--I would definitely prefer to be tagged if so--but I'm one of the few people who has posted original narrative work here on the forum, back when we had a screening room section (as low budget and poorly made as my work was! I'm certainly not the best filmmaker here). As an artist, I will say that anyone who does not delve into the exact mechanics behind the emotional response that art invokes, particularly in a field that requires huge amounts of collaboration, might be doing their artistic side a disservice.
  15. Like
    KnightsFan got a reaction from zlfan in 24p is outdated   
    In my opinion as a software engineer at a company extensively using AI, it is a mistake to believe that there is anything humans can do that AI will never be able to do. ChatGPT was launched barely over a single year ago. Midjourney was launched less than 18 months ago. Imagine where they will be next year. Or, more fairly, imagine where these models will be when they are the age of a working professional--then remember that the models will keep learning indefinitely, not tied to a human lifespan.
    Just like machine learning models, all people--including highly skilled professionals--start with 0 knowledge, and their opinions and artistic vision/instincts are formed from sensory inputs. The building blocks of our brains are not complex, though humans have more training data and a lot more neurons than today's ML models.
  16. Like
    KnightsFan got a reaction from kye in 24p is outdated   
    Right, if we gave a machine learning model only movies, then it would have a limited understanding of anything. But if we gave it a more representative slice of life, similar to what a person takes in, it would have a more human-like understanding of movies. There's no person whose sole experience is "watching billions of movies, and nothing else." We have experiences like going to work, watching a few hundred movies, listening to a few thousand songs, talking to people from different cultures, etc. That was my point about a person's life being a huge collection of prompts.
    We can observe more limited ranges of artistic output from groups of people who have fewer diverse experiences as well.
    Defining art as being made by a living person does, by definition, make it so that machines cannot produce art. It's not a useful definition though, because
    1. It's very easy to make something where it is impossible to tell how it was made, and then we don't know whether it's art.
    2. We now need a new word for things that we would consider art if produced by humans, but were in fact produced by a machine.
    Perhaps a more useful thing in that case would be for you to explain why art requires a living person, especially taking into account the two points above?
    Jaron Lanier wrote an interesting book 10 years ago about our value as training data, called Who Owns the Future? It's worth a read for a perspective on how the data we produce for large companies increases their economic value.
    I don't disagree, but I believe that learning art is also a process of taking in information (using a broad definition of information) over the course of a lifetime, and creating an output based on that information.
  17. Like
    KnightsFan got a reaction from Jedi Master in 24p is outdated   
    Last time you checked, AI is in its infancy. ChatGPT, arguably our most sophisticated model, just turned 1 year old.
    However, what you said is already incorrect. Learning models long ago invented their own languages. https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/. It is not what we would call artistic, but these are very early models with extremely limited datasets compared to ours.
    My argument is that this is the same for humans. We build up prompts over the course of our lifetime. Billions of them. Every time someone told you, as a child, that you can't do something... that's a prompt that you remembered, and later tried to do. You telling me that AI can't create is a prompt that I am using to write this post. Every original idea that you have is based entirely on the experiences you have had in your life. Is that a statement that you disagree with? If so, can you explain where else your ideas come from? And if not, can you explain how your experiences lead you to more original ideas than machine learning models'?
    We do not have ideas in a vacuum. And obviously our ideas evolve over time as something is incrementally added. But you can't go back 200,000 years to the first humans and expect them to invent something analogous to haikus either.
  18. Like
    KnightsFan got a reaction from IronFilm in 24p is outdated   
    Every movie that I really enjoy and watch over and over has elements that are purposely unrealistic, whether in the image, the staging, or characterization. I'm not talking about technical story unrealism, like elves or warp speeds.

    ^ Here, it's not the image quality, considering the time it was shot. However, the staging of the actors is unrealistic. The way they pose, the dialog--no one actually does that or speaks that way. One of my top 5 films, and perhaps my favorite opening scene ever.
     

    ^ Have you ever seen a toy shop organized like that, with those colors and lights? I picked Hugo because, seeing it in 3D, I was blown away by how they changed the interpupillary distance for different scenes to get different moods, using unrealism as part of the craft. And it's an easy segue into the highly creative movie it revolves around. Even in 1902, they could have made the moon more realistic!

     
    Specifically on the topic of framerate, Spiderverse did a fantastic job using different frame rates to convey different moods. Some of it is explained here:
     
    https://www.youtube.com/watch?v=JN5sqSEXxm4
    ^ This is another favorite movie (and it's recent--they could have shot digital or HFR if they'd wanted). Everything works because of unrealism, from the costumes, to the sets, to dialog, sound, delivery.
     
    I would argue that purposely making films look or act realistic results in boring content.
     
    I don't necessarily disagree. Good movies transport me to that world with perfect clarity, but the world may not be realistic. When I watch The Third Man, I'm there, in black and white, with the grain, and the film noir corny dialog, and Orson Welles' overacting. That's the world I'm in.
  19. Like
    KnightsFan got a reaction from Emanuel in 24p is outdated   
    (Same post as item 18 above.)
  20. Thanks
    KnightsFan got a reaction from PannySVHS in 24p is outdated   
    (Same post as item 18 above.)
  21. Like
    KnightsFan got a reaction from IronFilm in Don't panic about AI - it's just a tool   
    The problem with any effort to stop technology is that it won't work in the long run. Right now, there are only a handful of companies that have the computing power to run an LLM like ChatGPT, so it's somewhat feasible to control. But once the technology can run on your home PC, there is no amount of legislation or unionization that can control its use.
    And that statement is not to say anything is good or bad. The reality is simply that we have very limited ability to control the distribution and use of software.
    Switching to opinion mode, I believe that the technology is ultimately a good thing. I think limiting the use of technology in order to preserve jobs is bad in the long run. I believe it's better for humans if cars drive themselves and we don't need to employ human truck drivers. It's better for humans to give everyone the ability to make entire movies simply by describing them to a computer. The big problem is that our economic model won't support it. And I'm not talking about studios and unions--the fundamental problem is that digital goods can be infinitely duplicated at no cost, while every economy is based on exchanging finite goods. The same applies to AI, but with the new meta-layer being that the actual, duplicated product of AI isn't a digital good; it's a skillset for producing that digital good.
    I don't have all the right words to describe exactly what I'm trying to say. The example I give is that right now, self-driving cars are not as good as people. But the moment any car can drive itself better than a human, every car will be able to. We have to keep training new truck drivers to do the same task. That is not true of a duplicatable AI skillset. So to bring this back to my original point, we can try to prevent self-driving cars in an effort to protect truck drivers, but someday someone will still achieve it, and at that moment the software will exist; unlike a physical product, it can be copied all over the world simultaneously.
    So instead of preventing technology or its use, we need to adapt our economic model to better serve humans in lieu of our new abilities.
  22. Like
    KnightsFan got a reaction from IronFilm in Don't panic about AI - it's just a tool   
    Nice article! My perspective is as a software engineer, at a company that is making a huge effort to leverage AI faster and better than the industry. I am generally less optimistic than you that AI is "just a tool" and will not result in large swaths of the creative industry losing money.
    The first point I always make is that it's not about whether AI will replace all jobs, it's about the net gain or loss. As with any technology, AI tools both create and destroy jobs. The question for the economy is how many. Is there a net loss or a net gain? And of course we're not only concerned with number of jobs, but also how much money that job is worth. Across a given economy--for example, the US economy--will AI generated art cause clients/studios/customers to put more, or less net money into photography? My feeling is less. For example, my company ran an ad campaign using AI generated photos. It was done in collaboration with both AI specialists to write prompts, and artists to conceptualize and review. So while we still used a human artist, it would have taken many more people working many more hours to achieve the same thing. The net result was we spent less money towards creative on that particular campaign, meaning less money in the photography industry. It's difficult for me to imagine that AI will result in more money being spent on artistic fields like photography. I'm not talking about money that creatives spend on gear, which is a flow of money from creatives out, I'm talking about the inflow from non-creatives, towards creatives.
    The other point I'll make is that I don't think anyone should worry about GPT-4. It's very competent at writing code, but as a software engineer, I am confident that the current generation of AI tools cannot do my job. However, I am worried about what GPT-5, or GPT-10, or GPT-20 will do. I see a lot of articles--not necessarily Andrew's--that confidently say AI won't replace X because it's not good enough. It's like looking at a baby and saying, "that child can't even talk! It will never replace me as a news anchor." We must assume that AI will continue to improve exponentially at every task, for the foreseeable future. In this sense, "improve" doesn't necessarily mean "give the scientifically accurate answer" either. Machine learning research goes in parallel with psychology research. A lot of machine learning breakthroughs actually provide ideas and context for studies on human learning, and vice versa. We will be able to both understand and model human behavior better in future generations.
    My third point is that I disagree that people are fundamentally moved by other people's creations. You write
    I think that only a very small fraction of moviegoers care at all about who made the content. This sounds like an argument made in favor of practical effects over CGI, and we all know which side won that. People like you and I might love the practical effects in Oppenheimer simply for being practical, but the big CGI franchises crank out multiple films each year worth billions of dollars. If your argument is that the people driving the entertainment market will pay more for carefully crafted art than for generic, by-the-numbers stories and effects, I can't disagree more.
    Groot, Rocket Raccoon, and Shrek sell films and merchandise based off face and name recognition. What percent of fans do you think know who voiced them? 50%, i.e. 100 million+ people? How many can name a single animator for those characters? What about Master Chief from Halo (originally a one-dimensional character literally from Microsoft)--how many people can tell you who wrote, voiced, or animated any of the Bungie Halo games? In fact, most Halo fans feel more connected to the original Bungie character than the one from the Halo TV series, despite the latter having a much more prominent actor portrayal.
    My final point is not specifically about AI. I live in an area of the US where, decades ago, everyone worked in good-paying textile mill jobs. Then the US outsourced textile production overseas and everyone lost their jobs. The US and my state economies are larger than ever. Jobs were created in other sectors, and we have a booming tech sector--but very few laid-off, middle-aged textile workers retrained and started a new successful career. It's plausible that a lot of new, unknown jobs will spring up thanks to AI, but it's also plausible that "photography" shrinks in the same way that textiles did.
  23. Like
    KnightsFan reacted to kye in Don't panic about AI - it's just a tool   
    Great post.  As a fellow computer science person, I agree with your analysis, especially that it will get better and better--so good that we will learn more about the human condition because of it.  This is also not something new: in the early days of computer graphics, someone wrote a simulation of how birds fly in formation, and it was so accurate that biologists and animal behavioural scientists studied the algorithms; this is how the 'rules' of birds flying in formation were initially discovered.
    I just wanted to add to the above quote by saying that studios have already made large strides in this direction with the comic-book genre films, whose characters are the stars and not the actors that play them.  This is an extension of things like the James Bond films.  These were all films where the character was constant and the actor was replaceable.  
    VFX films are the latest iteration of this, where the motion capture and voice actors and the animators are far less known, and when it's AI replacing those creatives to make CGI characters that will be the next step, and then it will be AI making realistic-looking characters.
    For those reading that aren't aware of the potential success of completely virtual characters and how people can bond with a virtual person, I direct your attention to Hatsune Miku, a virtual pop star:
    Link: https://en.wikipedia.org/wiki/Hatsune_Miku
    She was created in 2007, which in the software world is an incredibly long time ago, and in the pop star world is probably even longer!
    But did it work?
    That's a figure from over a decade ago and equates to just over USD$70,000,000, which is almost USD$100M in today's money.  I couldn't find any reliable more recent estimates, but she is clearly a successful commercial brand when you review the material below.
     
    What does this mean in reality, though? It's not like she topped the charts. Here is a concert from 2016 - she is rear-projected onto a pane of glass mounted on the stage.
    She was announced as a performer at the 2020 Coachella, which was cancelled due to COVID.
    So, while Japan might be more suited to CGI characters than the West is, that is changing - take the Replika story, for example.  Replika is a female virtual AI companion who messages and sends pics to subscribers, including flirty, suggestive ones.  The owners of Replika decided that the flirty stuff should be a separate paid feature and turned it off for the free version - the users reacted strongly.  So strongly, in fact, that it's now an active field of research for psychologists trying to figure out how to understand, manage, and regulate these things.  It's one thing for tech giants to 'curate' your online interactions, but it's another when the tech giants literally control your girlfriend.
    Background: https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257
    There are also other things to take into consideration.  Fans are very interested in knowing as much as possible about their idols, but idols are real people with human psychological needs and limitations; virtual idols will have none.  The virtual idols that share their entire lives with their fans will be even more relatable than the human stars who need privacy and get frustrated and yell at paparazzi.  These virtual idols will be able to be PR-perfect in all the right ways (i.e. just human enough to be relatable but not so human that they accidentally offend people).
    There is already a huge market for personalised messages from stars; virtual idols will be able to create these in virtually infinite amounts.  Virtual stars will be able to perform at simultaneous concerts, make public appearances wherever and whenever is optimal, etc.
    And if you still need another example about how we underestimate technology... 
    "Computers in the future may weigh less than 1.5 tons.” - Popular Mechanics magazine, 1949.
  24. Like
    KnightsFan got a reaction from Katrikura in Don't panic about AI - it's just a tool   
    (Same post as item 22 above.)
  25. Thanks
    KnightsFan got a reaction from IronFilm in RED Files Lawsuit Against Nikon   
    RED's encoding is JPEG 2000, which has been around since 2000 and provides any compression ratio you want, with a subjective cutoff where it becomes visually lossless (as does every algorithm). JPEG 2000 has been used for DCPs since 2004 at a compression ratio of about 12:1. So there was actually a pretty long precedent of motion pictures using the exact same algorithm, at a high compression ratio, before RED did it.
    RED didn't add anything in terms of compression technique or ratios. They just applied existing algorithms to Bayer data, the way photo cameras did, instead of RGB data.
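    To make that idea concrete, here is a sketch of running an off-the-shelf JPEG 2000 encoder over raw Bayer planes instead of demosaiced RGB. It assumes a Pillow build with OpenJPEG and 16-bit grayscale JP2 support, uses a synthetic mosaic in place of real sensor data, and borrows the ~12:1 DCP rate mentioned above.

    import numpy as np
    from PIL import Image

    # Synthetic 16-bit RGGB mosaic standing in for raw sensor data.
    y, x = np.mgrid[0:2160, 0:3840]
    mosaic = ((np.sin(x / 97.0) * np.cos(y / 61.0) * 0.5 + 0.5) * 65535).astype(np.uint16)

    # Compressing the four color planes directly, before demosaicing, is the
    # "existing algorithm applied to Bayer data" idea.
    planes = {"R": mosaic[0::2, 0::2], "G1": mosaic[0::2, 1::2],
              "G2": mosaic[1::2, 0::2], "B": mosaic[1::2, 1::2]}

    for name, plane in planes.items():
        img = Image.fromarray(plane, mode="I;16")
        # quality_mode="rates" with a layer of 12 requests ~12:1 compression.
        img.save(f"plane_{name}.jp2", quality_mode="rates",
                 quality_layers=[12], irreversible=True)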