Everything posted by KnightsFan

  1. That was a Z Cam E2-M4. I saw similar results with the XT3 back when I used that primarily. Fwiw, I also saw similar color issues comparing Canon 5D3 8 bit vs Magic Lantern raw. The color blocks are most obvious in relatively uniform gradients, such as skies. This tree shot isn't the best example since it's so busy with high frequency changes, but you can still see it pretty easily when you zoom to 100% or view on a 4K monitor, especially in motion. Most obvious is the greenish splotch in the bottom area that I highlighted, and the upper highlighted area has red and green splotches. If it's not a big enough deal for what you do, then great! To me, it's a big enough deal, since I have the option of 10 bit. There's no downside: file size is the same, and I've never encountered overheating on any camera ever (shooting narrative, I take relatively short clips with time in between). That's not to say the difference is uber important... I mean, I am splitting hairs about something that has very little bearing on the final product. I'm posting here to show what the difference is, not to tell you that it matters for you. All else being equal, I'll always use 10 bit on the cameras I've tested.
  2. I don't know if this is universal, or just the cameras I've tested, but I've found that recording 8 bit produces blocky color artifacts that are visible even without log recording or color grading. See my example in the other thread, and note that the comparison is with roughly equivalent bitrate (In this particular example, the 10 bit file ended up slightly smaller but within a couple %).
  3. When people talk about motion cadence, I think it's a conflation of many possible sources, where rolling shutter is just one piece and is often not the prominent one. The various motion problems I have identified are below. I don't see a lot of discussion on 4, 5, and 6 on film forums, though I see stuff about 4 every now and then.
     Obvious Settings
     1. Frame rate (24 vs 30 vs etc.)
     2. Shutter speed
     Sensor Tech
     3. Rolling shutter
     4. Weird artifacts almost like double exposures. My camera explicitly has a mode that captures at two shutter speeds simultaneously for higher dynamic range. I've seen artifacts like that on other cameras, but not advertised as a specialty mode.
     Display Issues
     5. Display scan rate, frame rate, ghosting, trails. Some screens scan slowly, like rolling shutter on the display end. Others have ghosting effects, where the previous frame is still slightly visible, or trails. I see a lot more information about displays on video game tech sites than on film tech sites. Refresh rate, as mentioned earlier, brings in pulldowns or judder when the source frame rate doesn't divide evenly into the display rate.
     6. Decode speed (laggy motion with H.265 on older computers, and stutter from high bitrate files on old mechanical drives).
  4. Yup, it was always amusing when people watched 24p on a 60 fps monitor and claimed it looks better than 30p. Maybe some people like the effect of pulldowns or frame blending, but to me it's a strong BS indicator. I am fortunate enough to have a 120 fps monitor, which is a great number because it's divisible by 24, 30, and 60. It's great that high refresh rates on our screens are the norm now! I really dislike rolling shutter. It's one of my least favorite imperfections. I'm not saying "global shutter or bust" but the faster the better, and 10ms is around the cutoff where I'm happy. Quick controlled pans don't bother me so much as the vague wobble when it's on a steadicam, or handheld. My favorite movies to make have plenty of action, running, fighting, etc. so it's way more present to me than for most corporate or wedding shooters. I'd sacrifice a stop of DR and noise from modern full frame sensors to get rs in the 5 ms range instead of the 20ms range. The nice thing is that plenty of budget cinema cameras have fast readouts these days, like the UMP 4.6k G2, FX6, and of course Komodo.
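The divisibility point above (why 120 Hz is "a great number" and why 24p judders on a 60 Hz display) can be sketched numerically. This is a toy illustration, not anything from the post: it counts how many display refreshes each source frame occupies at a given refresh rate, and the function name is my own invention.

```python
# Sketch: how many display refreshes each source frame is held for when a
# clip plays on a fixed-rate monitor. Uneven counts (2, 3, 2, 3, ...) are
# the pulldown/judder described above; uniform counts mean smooth cadence.

def refresh_pattern(source_fps, display_hz, frames=6):
    pattern = []
    for i in range(frames):
        start = int(i * display_hz / source_fps)
        end = int((i + 1) * display_hz / source_fps)
        pattern.append(end - start)
    return pattern

print(refresh_pattern(24, 60))    # [2, 3, 2, 3, 2, 3] -> uneven, judders
print(refresh_pattern(24, 120))   # [5, 5, 5, 5, 5, 5] -> every frame held evenly
```

The same function shows 30p and 60p also divide evenly into 120 Hz, which is the post's point about that refresh rate.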
  5. Theoretically I agree, but in practice no box cameras have built the required ecosystem of accessories to create compact camcorder ergonomics. The two missing pieces are the side handle and the monitor. There are plenty of "dumb" side handles, but to match camcorder ergonomics it needs lots of buttons. The FS7 handle with its multiple function buttons, joystick, etc. is a starting point. There are very few good monitor options under 5", and you need bulky batteries or lots of cables. By that time you've got a cinema rig. We could really do with some more open standards for camera controls, video, and audio. Lots of vendor lock these days in terms of accessories.
  6. I loved old camcorder ergonomics. Large battery on the back, flip out screen on the side, nice slot for your hand on the right. The Sony NEX-VG900 was a neat concept, but I guess it didn't sell because it was a one-off.
  7. A trend towards "fewer wealthier photographers" is generally an accurate description of many industries, and is a cause for economic concern. Often when people talk about what AI can't do, they jump to comparing to the top 0.0000001% of humanity. AI might never achieve what Michelangelo, or Scorsese, or Bach, or Pink Floyd did. But if it can achieve what the bottom 30% of artists can--that's a lot of artists losing money.
  8. If a tool that reduces the cost to produce audiovisual content by 50% isn't game changing, I don't know what is. And this will be a heckuva lot more than 50% reduction over the next 3 years (in my prediction). Same. I think my favorite use case for AI would be something like Wonder Studio, where I can shoot everything with a couple actors, and then replace them with animated characters. That would make something with animatable characters (for example, a Star Wars scene with lots of droids) really easy with a limited cast size. Or if I could go out in the wilderness and shoot with amazing real scenery, and perfectly composite people in later without motion tracking and chroma key artifacts.
  9. I don't think a photo real animation with no back end labour can be described as just a better animation tool. Current animation tools, critically, take years of practice and hundreds of paid hours to create each individual work. A production going from "writer, director, and 10,000 hours of professional, lifelong technical artists" to "writer, director, and a 2 month subscription to OpenAI" is, in my opinion, something to pay attention to and expect disruption from, whether you categorize it as a "just a better tool" or not. Switching perspectives a little, these tools are absolutely perfect for hobbyists like me. I'm never going to hire artists, so my productions go from crap CGI to amazing CGI, and no one loses a job. There are no downsides! If that's the angle you're coming from, then I agree with you. However, for anyone making a living off of video work, there's a very very large chance that the amount of money that anyone is willing to pay for ANY kind of creative content creation is going to decrease, fast.
  10. It won't always look shitty. Remember 30 years ago when CGI looked like Legos photographed in stop motion against a flickery blue screen? Let's wait 30 years on AI generated imagery. No technology can take away the enjoyment of doing something, though it can take away the economic viability of selling it. Which indirectly affects us, because if fewer cameras are sold, people like you and I will face higher equipment prices. Certainly AI is already used in the gaming industry to make assets ahead of time. It will be a bit longer before the computational power exists at the end user to fully leverage AI in real time at 60+ fps. When you have a 13 millisecond rendering budget, it's a delicate balance between clever programming and artistically deciding what you can get away with--and that requires another leap in intelligence. Very few humans are able to design top-tier real time renderers. AI will get there, but it's a vastly more complex task than offline image generation. But yes, AI today already threatens every technical game artist the same way it does the film and animation industries, and will likely be the dominant producer of assets in a couple years. In the near term, humans might still make hero assets, but every rock, tree, and building in the background will be AI. Human writers and voice actors might still voice the main character, but in an RPG with 500 background characters and a million lines of dialog, it is cheaper and higher quality for AI to write and voice generic dialog.
  11. A few years ago I said something here on this forum about how AI could even replace wedding photographers/videographers. My point was that I didn't know what the technology would look like, but it would eventually be possible. My wild brainstorm was something along the lines of setting up a dozen video cameras, then AI uses that information to generate a whole edit, with closeups and clear audio, nicer lighting, etc. The tech doesn't look that far off. Budget weddings won't even need the video cameras, just a couple photos and a description of what happened. It won't be "real," but will the influencers and influencees care?
  12. It's technically trivial to make a timecode input that writes TC metadata in the file header. (Source: I've created timecode readers on arduinos, android apps, and webpages to keep virtual production units synced). I think it's lack of interest among customers. And to be clear, accurate timecode generators require more specialized hardware than your average off the shelf board--I'm talking about timecode input, which is what people mostly use on cameras since the generator is either an audio recorder, or a dedicated TC generator. I see these prices and am reminded that the demand side of supply/demand is more important on tech items like that.
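To make the "technically trivial" claim above concrete, here is a minimal sketch of the arithmetic layer a timecode reader/writer sits on: converting between a running frame count and the HH:MM:SS:FF fields that get written into a file header. This is my own illustrative code, not the author's Arduino or app code, and it covers only non-drop-frame math; the actual LTC signal layer (an 80-bit biphase-mark frame per video frame) and 29.97 drop-frame correction are not shown.

```python
# Timecode arithmetic for a non-drop-frame rate: frame count <-> HH:MM:SS:FF.
# The LTC bitstream decode and drop-frame handling are deliberately omitted.

FPS = 24  # assumed non-drop-frame rate

def frames_to_tc(total_frames, fps=FPS):
    """Format a running frame count as an HH:MM:SS:FF timecode string."""
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = (total_frames // (fps * 3600)) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def tc_to_frames(tc, fps=FPS):
    """Parse HH:MM:SS:FF back into a running frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

print(frames_to_tc(24 * 3600))  # exactly one hour of 24 fps frames
```

Once a decoded LTC value is in this form, writing it into a container's metadata is, as the post says, mostly a matter of format plumbing rather than hard engineering.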
  13. You're right-- I knew I had that in my head from somewhere, but for some reason I missed it when I skimmed the article again yesterday. Apologies for my share of misinformation!
  14. I was referring to audio. Octopus can use any random audio interface with USB, which is also very common. Are you sure? If so then you're right, it's not quite the same. I'd be kinda surprised if it doesn't have internal scratch audio though. I can't remember seeing anything about it not having any--but I might have missed something! Anyway, a USB dongle for a 3.5mm scratch mic is what, $10? So clearly you and many other people need 3.5mm and scratch audio at a minimum and I'm certainly not dismissing that. My point is that I don't. For me it's either true scratch audio, or I'm running a real recorder. I'm happy with adding a dongle and scratch mic if I need to. I'd love timecode input, but there also aren't many (any?) options <$1k that have TC in. Digital Bolex didn't. I'd say wait and see what their API looks like. If you can have some simple code that reads LTC from a USB dongle then I'd say that's already better than almost everything else at the price, including Z Cam (which I say as a Z Cam owner with the ridiculous dongle, haha) I guess I just think you're expecting too many big-production features considering the price they're targeting, while glossing over the unique feature that they have instead--and I mean truly no-one-else-has-it unique. All I'm saying is I'd trade internal audio for USB audio, but I don't expect everyone else to. Yeah 100% this camera is not for the kind of productions you usually work on.
  15. I don't think the target audience is the "film industry" to be honest. Probably more likely to find its way in the hands of tinkerers, auteurs who already have other cameras, and maybe some very specific applications like low budget crash cams when you can't afford Komodos. Z Cam relied on 3rd party monitors from day 1. Their standard was HDMI for video, Octopus' is USB for audio. Both are consumer connectors with extremely wide support among consumer products. It's worth pointing out that this Octopus camera has the same audio and TC solution as the OG BMPCC and most hybrid cams, except that it can also record synced, bit perfect audio from your favorite Zoom recorder (or thousands of low budget audio interfaces) I don't necessarily disagree with you, IronFilm, but it's a different audience and their tradeoff was in favor of smaller size, weight, and price. So I wouldn't use the term "huge middle finger" personally, more like they decided on a different audience and feature set
  16. Well yes, they spent the last few years making an 8K full frame camera, so it's implicit to me that going for 16mm size was for economic reasons and that their goal is larger format. Nothing wrong with that, in fact, many camera companies started with small sensor, simple cameras (Z Cam E1, BMCC 2.5k) before earning the credibility to sell more expensive, feature-rich models (F6 Pro, BM 12k). I read a comment years ago that the 8K LF model was going to be $13k or something. Presumably they realized that their 8K LF dream might not work out on a first model. (I've been checking on Octopus every few months since the announcement in 2019, so at one point or another I've seen most of their posts and comments)
  17. I'm not sure that giving a middle finger is a fair assessment. More like they are expecting/hoping for 3rd party solutions. I expect they'll have a fairly open API. Although--from the early pics, there's a lack of mounting points, and having the screen on the back makes a modular approach difficult. But I actually agree with what Octopus said. I'd rather bolt any decent audio interface onto a camera than rely on a small manufacturer trying to reinvent the wheel and make a quality audio solution. Especially since almost all Zoom recorders are interfaces, so you can have proper recorder-style ergonomics rather than (for example) trying to record on an Audient Evo or something. I guess what I'm saying is that IF someone makes a 3rd party TC solution, then I'm fine with the modular USB approach, since other manufacturers cover the all-in-one option. I hope Octopus finds success. I love what they're doing. I can't see myself going for a small sensor, not with the lenses I have, but maybe if the 16mm is successful they'll make that full frame camera they prototyped as well!
  18. I didn't realize the M4 has LTC. Out of all their products, that's a weird one to have it on when not even the F3 has it. I do like their M-series of recorders in theory, though they aren't quite what I would look for in my uses. True, and a Blue + One is the minimum viable timecode setup for audio/video at this time. Honestly, with how cheap the Blue is, I would sink ~$200 total into a single proprietary device to sync 6 other devices (e.g. camera(s), boom mic, a few DR-10L Pro's). That's a convenient setup, plus an acceptable loss if the ecosystem disappears in 5 years. But when you have to add a One on as well, the lock in gets more expensive and risky. Hopefully Ultrasync is banging on Sony, Canon, and Panasonic's doors to add support.
  19. I guess this TC approach is technically better than nothing, but I do not like being vendor locked into a closed, proprietary system. BTM_Pix's post perfectly encapsulates why. LTC is simple, works well, and has open documentation, so in the absolute worst case, you can fix problems yourself. Not so with UltraSync Blue. Buying a dongle for TC is fine, but if Zoom made a dongle that takes normal LTC via a BNC or 3.5mm, that would be sooooo much better.
  20. I love seeing the reels that everyone has posted! 2023 was a fantastic year, just not for film or video! It was the first year since 2012 where I made 0 video projects. My hope is to spend 2024 getting back to narrative films. The difficulty is finding people with the time and resources to make movies for fun. I did get to take some fun photos. I've used an A7rII for photos the last couple years. My primary lens is a 28mm Nikon AI, which it has been since I bought it 6 or 7 years ago--talk about good return on investment! I attached a couple pictures I took this year. The snowy one is a Canon 24-105, and the others are the Nikon 28. Importantly, my photography kit weighs just 2.28lbs/1.03kg with filter and lens cap, and fits in a pouch on my daypack. Most of my adventures include a lot of hiking, and some rock climbing, so I like a camera that is A) small and light B) not stored in front of me and C) can be retrieved without removing my pack. I did also side/down-grade from a Zoom F4 to an F3 for audio. It's unlikely I'll work on proper productions any time soon. The F3's size and 32 bit are great for recording effects, and for production use I'll velcro it to a boom pole next to an NPF battery sled (and then hope whoever is using it while I run camera aims it well). It's overall a better fit for what I do now. For 2024, my plan is to make videos of all kinds. Narrative films, tutorials on video game design (my other creative hobby), videos about DIY projects, and maybe some animation. I'm also interested in building a projector-based virtual set--I did a proof of concept, but I'd need to invest in good backdrops to make it photorealistic. Aside from virtual sets, I have all the gear I need. However, I do plan a couple new items.
     - A couple lights for narrative films. Probably LEDs that can run off batteries when needed.
     - Switching my video camera to full frame. Maybe a Z Cam F6 instead of my M4.
     - Considering lens upgrades. I never owned high quality glass; it's always been borrowed or rented by the production. I might get Sigma Arts--I always enjoyed using them. I look at cinema lens sets every now and then, but honestly I won't get much more out of "real" cinema housings vs 3D printed gears, and you have to go waaay up in price before optical quality rises.
  21. Lol true. My point with 8 vs 10 was that the difference is readily apparent to the naked eye in most shooting conditions without any color grading (though again, it could just be my camera's implementation). From my experience shooting DNG vs ProRes on old blackmagic cameras, I can't say I ever saw a difference. So it's all about diminishing returns.
  22. The biggest difference I notice between 8 and 10 bit footage is that 8 bit has splotchy chroma variation. I believe this is a result of the encoder rather than inherent in bit depth, but it's been visible on every camera that I've used which natively shoots both bit depths. In this quick example, I shot 60 Mbps 4:2:0 UHD Rec 709 in 10 bit H265 and 8 bit H264, and added some saturation to exaggerate the effect. No other color corrections applied. Notice when zooming in, the 8 bit version has sort of splotches of color in places. All settings were the same, but this is not a perfectly controlled test--partially because I was lazy, and partially to demonstrate that it's not that hard to show a 10 bit benefit at least on this camera. I do, however, agree with the initial premise, that 8 bit does generally get the job done, and I generally also agree that 8 bit 4k downscales to a better image than native 10 bit 1080p.
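The raw bit-depth side of the comparison above can be sketched numerically. This is a toy illustration of my own, and it deliberately models only quantization step count, not the encoder chroma behavior that the post suggests actually causes the visible splotches: it quantizes a subtle sky-like gradient to 8-bit and 10-bit code values and counts how many distinct shades survive.

```python
# Toy illustration: quantize a smooth gradient spanning 5% of the signal
# range to 8-bit vs 10-bit integer code values, then count the distinct
# steps. This models raw quantization only, not encoder chroma decisions.

def quantize(values, bits):
    levels = (1 << bits) - 1          # 255 for 8-bit, 1023 for 10-bit
    return [round(v * levels) for v in values]

# A subtle gradient from 50% to 55% signal level, like a patch of sky
gradient = [0.50 + 0.05 * i / 1999 for i in range(2000)]

steps_8 = len(set(quantize(gradient, 8)))
steps_10 = len(set(quantize(gradient, 10)))

print(steps_8, steps_10)  # 10-bit carries 4x as many distinct shades
```

With 4x fewer shades across the same gradient, 8-bit leaves the encoder much less to work with before visible banding, which is consistent with the post's observation that the artifacts show up most in uniform gradients.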
  23. It's a couple people, really. I disagree with you about aesthetics of 24p and about the purpose of art, but agree about AI. I disagree with some others about the nature of art requiring a human origin, but agree with them about 24p and the purpose of art. And a lot of us who disagree have had a decent discussion, between the silliness, so don't give up entirely. In my opinion @zlfan has been especially inflammatory, not addressing examples or arguments, and ending every other post with lol. And I'm not interested in @Emanuel 's statements like "art made by machines is not art. Period." I know some of it is a language barrier, but it's not a useful statement or a reasoned argument. I appreciate @kye's detailed posts with actual examples, even when I disagree. I didn't read every post, but he might be the only one other than me who has tried to explain their artistic position with any depth or examples. Saying "24p is better because it's what we've always done" is as inane a position as "more frames is better" not because of the position taken, but because neither statement contributes to anyone's understanding. I haven't posted here in a while because I don't have time for making movies anymore. I don't know if I was included in the previous statement that there are too many engineers in this thread--I would definitely prefer to be tagged if so--but I'm one of the few people who has posted original narrative work here on the forum, back when we had a screening room section (as low budget and poorly made as my work was! I'm certainly not the best filmmaker here). As an artist, I will say that anyone who does not delve into the exact mechanics behind the emotional response that art invokes, particularly in a field that requires huge amounts of collaboration, might be doing their artistic side a disservice.
  24. Right, if we gave a machine learning model only movies, then it would have a limited understanding of anything. But if we gave it a more representative slice of life, similar to what a person takes in, it would have a more human-like understanding of movies. There's no person whose sole experience is "watching billions of movies, and nothing else." We have experiences like going to work, watching a few hundred movies, listening to a few thousand songs, talking to people from different cultures, etc. That was my point about a person's life being a huge collection of prompts. We can observe more limited ranges of artistic output from groups of people who have fewer diverse experiences as well. Defining art as being made by a living person does, by definition, make it so that machines cannot produce art. It's not a useful definition though, because 1. It's very easy to make something where it is impossible to tell how it was made, and so then we don't know whether it's art. 2. We now need a new word for things that we would consider art if produced by humans, but were in fact produced by a machine. Perhaps a more useful thing in that case would be for you to explain why art requires a living person, especially taking into account the two points above? Jaron Lanier wrote an interesting book 10 years ago about our value as training data, called Who Owns the Future. Worth a read for a perspective on how the data we produce for large companies is increasing their economic value. I don't disagree, but I also believe that learning art is also a process of taking in information (using a broad definition of information) over the course of a lifetime, and creating an output that is based on that information.
  25. So are you saying that humans have an aspect, known as innate humanity, which is not a learnable behavior, nor is something that can be defined by any programmable ruleset? And that this is the element that allows a human to tell whether its creation is art? I would argue that Midjourney, for example, does a pretty good job even now of determining whether its own output is artistic, before giving that result to you. It would be pretty useless to the many artists who use it, if it could not already determine the value of its output before giving it to the user. Saying it doesn't make it true. Why do you believe that to be the case?