Everything posted by kye
-
A realist... I like it!
-
The key difference is that OIS and IBIS stabilise during the exposure, while digital IS stabilises afterwards. I did a test some time ago to show what this looks like. The test deliberately used a long lens, so it's somewhat exaggerated, but it should be indicative of the issue. If you're using very short shutter speeds then it doesn't matter; it's only at shutter angles of roughly 45-360 degrees that these effects become visible. Once you know how to recognise it, you see it in YT content every so often, so it does happen in real life. The combination of action camera and low light is particularly susceptible. Digital IS is a great tool, and for some people it's sufficient. Use whatever works for you 🙂
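The shutter-angle figures above translate directly to exposure times, which is why longer shutter angles give stabilisation artefacts more time to show up. A quick sketch (the helper name is my own invention):

```python
# Hypothetical helper: exposure time for a given shutter angle at a
# given frame rate. A 360-degree shutter exposes for the full frame
# duration, 180 degrees for half of it, and so on.
def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Return the exposure time in seconds."""
    return (shutter_angle_deg / 360.0) / fps

# At 24 fps: 180 degrees exposes for 1/48 s, 45 degrees for 1/192 s,
# and a full 360 degrees for the entire 1/24 s frame duration.
print(exposure_time(180, 24))
```

At very short exposures (tiny shutter angles or high frame rates) there is almost no time for the sensor to move during the exposure, which is why the OIS/IBIS-vs-digital distinction stops mattering there.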
-
NICE! That sounds like a great way to really help out the folks downstream in the image pipeline without much extra effort. I've never heard of anyone doing that before, and yet it seems so obvious.
-
Semi-recently a certain cat-lover on YT who shall-not-be-named did a video about variable ND filters, the different types, and the issues vNDs can have. I was surprised at how complex the situation is and how many things can go wrong when using them. In the tests he showed, which included a range of vNDs, performance didn't seem to correlate much with price, but that might also have been intentional; I think he was launching his own line and therefore had a vested interest in not showing any of the good ones out there. The take-aways I got were:
- there's a lot happening in there, even within a single vND changing as you turn it
- lots of things can go wrong, depending on the type used and how they're used
- it's better to use fixed NDs, or simply to accept the errors and fix what you can in post
-
Interesting link - I wasn't really that aware of the history of ambient music. I guess many people participate in an art form without really understanding the history and foundational concepts. Thanks for sharing 🙂
-
I'd settle for people that understand:
- The equipment has a purpose other than to have steadily increasing specifications
- That purpose has very little to do with "reality" (whatever the hell that means**)

**I think we'd also all benefit from knowing a bit more about the human visual system, which (to be frank) is so bizarre that it's a wonder we can see at all, and which (also being frank) people seem to demonstrate virtually no understanding of.
-
This is all true, but only looks at the creation side. Without humans to experience it, there is no art. Or, at least, not any kind of art that has existed so far. Maybe the robots will love strings of prime numbers or something, who knows. But this is what I base my comment on - that film-making (and all forms of art) are for human consumption. Humans are more than rational beings, we are emotional beings, and perhaps more than that too. So to reduce things down to what is objective is to throw away the entire purpose of art. Creation without subjectivity is science or engineering, not art.
-
That was from 2012. I've had a number of replacement computers since then! At the time it seemed fine, but that was my first machine with an SSD, so it wasn't hard to impress me back then 🙂
-
To clarify, in the above I meant that science and everything else does a poor job of making sense of the meaning and morality of life. Obviously, science does a pretty good job of a lot of things, which is why we're able to chat on the internet about film-making and are not rural farmers lamenting the fact that half our children died before the age of 5.
-
I would argue that art is in the perception of the beholder. I used to write electronic music and was working with a friend on some hypnotic ambient music. We were sitting in silence, listening to the song we were working on, when all of a sudden next door locked their car and it gave the "boop-boop" sound, and it fit perfectly with the track. I mean, perfectly. We both looked at each other and immediately set out to create a similar sound to put into the song at that point. I don't expect anyone here to appreciate this because you weren't there, you probably don't like that kind of music, you probably don't think it sounds like art, etc, but in that moment a completely non-creative event created a very aesthetically pleasing result for two listeners engaged in the creation of something whose only purpose was aesthetic appreciation. I don't have a good definition of art, but if that isn't art then I would argue that nothing is.

What this means for AI art vs anything else, I have no idea, but from that experience I don't think that art requires intent during creation to be perceived as such by the audience. From a practical point of view though, if an AI generated every possible sequence of 2-minute digital sound, the percentage that would be enjoyed by anyone would be so low that it's simply impractical to approach it with anything resembling a brute-force strategy.

Science does a very poor job of making emotional sense of the experience of life. Science also does a poor job of making sense of the meaning and morality of life (along with everything else).

It doesn't matter that you see the world in a certain way; everyone is entitled to their opinions, but you actively refuse to acknowledge that anyone else is different to you, or if you do then you just assume them to be wrong. This is a form of aggressive behaviour, which is why it makes people disagree with you so much. Even your language in the above uses the word "correct", which implies that everything else is "incorrect" and that anyone who thinks differently to you (which is most people here) is wrong. This is a great way to make people dislike you. The more you post, the more I dislike you. You have a lot of knowledge, but you are needlessly making other people angry - you are not convincing anyone of anything.

As a practical suggestion, I would recommend you speak only from your personal experience, and try not to criticise things you don't like or don't agree with. If this was a thread about ice cream and someone said they liked strawberry, no-one would tell them they're wrong for liking strawberry and that the only correct choice is chocolate. This is basically what you are doing, only you're telling us that we're wrong for wanting film-making to be a certain way. Film-making is a creative process that is designed to be enjoyed by the audience. That is a fact. It CAN be used for factual purposes, but this is obviously not the goal of even the majority of film-making, so to judge it on that basis is ridiculous. Please try to adjust your behaviour to be more tolerant of how others see the world.
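As a back-of-envelope illustration of why brute-forcing 2-minute audio clips is hopeless (assuming standard CD-quality mono audio; the framing of this calculation is my own):

```python
import math

# Count every possible 2-minute clip of CD-quality mono audio:
# 44,100 samples per second, 16 bits per sample.
samples = 44_100 * 120              # samples in two minutes
bits = 16 * samples                 # total bits per clip
digits = int(bits * math.log10(2))  # decimal digits in 2**bits

# 2**84,672,000 distinct clips -- a number with roughly 25 million
# digits, so no enumeration strategy could ever touch a meaningful
# fraction of them.
print(f"possible clips: 2**{bits} (a ~{digits:,}-digit number)")
```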
-
Interesting stuff. I can imagine that 'pure' sources of human-only data will be worth more and more. I guess that each type of model will have its own weaknesses and blind spots, and it won't be until we get a unified AI model fed all the data in all the formats (across all the senses, all the styles, all the topics, etc) that certain integrated elements will be possible for it to understand. We really are going down the rabbit hole. One area that is fascinating to me is that because AI doesn't see the world how we do, it will notice all sorts of patterns that we either miss, or don't pay attention to, or could never have found. Potentially it could bring enormous knowledge gains about the world and about ourselves. It has the potential for destruction as well, of course, but so much upside too.
-
Thanks for sharing. You can really see how the puppy just didn't want to stop!
-
Depending on the DR of the scene and the DR of the FS7, you might find that to create silhouettes you need to pull the shadows down, which creates a new challenge, as you'll be adding contrast and stretching the image. If this is the case then you might want to test an ETTR exposure vs a normal exposure vs underexposing so that the silhouettes are down near where you want them and then you get more headroom. All these things really are situational, and would be difficult to predict. In reality, you've probably got a decent amount of flexibility, especially if you're not afraid to use a bit of NR. Internet camera folks seem to be allergic to any form of noise in the image, but professional colourists use it regularly, and cinema cameras are a lot noisier than internet people would even believe.
-
Unless someone has specific experience of this exact challenge, I'd suggest just doing a test prior. Any high DR environment should do, just shoot both profiles and then bring up the shadows in post and see which is cleaner. If the sun is in shot then it's practically an infinite-DR scene, although if the sun is low enough in the sky then you might be able to expose such that everything except the disc of the sun is below clipping and still get non-zero shadow detail. I've seen the latest high-DR cameras like A74 and FX3 do a passable job of having recognisable humans in the foreground with the sun in shot behind them, but it's still likely a stretch for whatever codec and colour space you're working with.
-
Agreed. Most of the YouTuber crowd I see editing on laptops have some sort of mechanism to attach an SSD to the lid of their machine for editing from. Velcro appears to be popular. I used to edit on my laptop on the train and the seats are often so narrow that any plug sticking out the left or right of the machine would get bumped as people sit down or stand up from sitting next to you. I solved that issue by upgrading to a 1Tb SSD in my machines, which gives enough room to edit a few projects on the go if I need to. I only shoot h264/5 so files are reasonably sized. Most of my hardcore adventures into SSD optimisation were from the first Mac I ever bought - a 64Gb 11" MacBook Air from ~2012. When I bought it I expected to keep my files on external USB sticks but never considered that I'd need more than 64Gb for the OS and applications alone. I ended up buying a slim-fitting USB drive that basically didn't protrude from the chassis and mapped a bunch of system folders to it. I think I even ended up with the entire /Applications folder on it. It worked well but I made sure to buy a larger drive next upgrade!
-
It's an interesting question, and I think it depends. If we fed the AI every film / TV episode / etc and all the data that says how "good" each one is, then I think the AI would only be able to predict whether a newly created work is a good example of a previously demonstrated pattern. For example, if we trained it on every TV episode ever and then asked it to judge an ASMR video, it would probably say that it was a very bad video, because it's nothing like a well-regarded TV show. However, if AI was somehow able to extract some overall sense of the underlying dynamics at play in human perception / psychology / etc, then maybe it would see its first ASMR video and know that although it was different to other genres, it still fit the underlying preferences humans have. I think we are getting AIs that act like the first case, but we are training them like the second (i.e. as general intelligences), so depending on how well they are able to accomplish that, we might get the second one. The following quote contains spoilers for Westworld and the book Neuromancer:
-
Both can be innovative by doing things that diverge from what was already discovered to be "good"; that's definitely true. The difference is that when AI deviates, it can't tell whether the deviation is creative or just mediocre, because the only reference it has is how closely the new thing matches the training data. When a human deviates, they can experience whether it is good according to their own innate humanity. A human can experience something that is genuinely new, and can differentiate something mediocre from something amazing. The AI can only compare with the past. This, I think, is what great artists do. They try new stuff, and sometimes hit upon something that is new and good. This is the innovation.
-
..and the DigitalRev TV channel just reposted about 20 of the videos. Who knows what is happening at their end.
-
Sounds relatively straightforward if you work methodically. I suggest the following:
1. Start with a working DCTL that does similar things
2. Make a copy of it in the DCTL folder, run Resolve, get it loaded up, and confirm it works
3. Open it in a text editor (once you've done this you can save the DCTL file and reload it in Resolve without having to restart)
4. Make a single change to the DCTL, reload it, and confirm it still works
5. If it doesn't work, undo whatever change you made and you should be back to having it work again; then have another go. Depending on your text editor, the save might clear the undo history, so maybe keep a copy of the whole thing in another document ready to paste back in if you hit trouble
6. Keep making incremental changes until you get it how you want it

You should be able to copy/paste functions from other scripts, change the math / logic, add controls to the interface, etc. The key is doing it one change at a time, so that you don't spend ages making lots of changes and then even more time trying to troubleshoot when it doesn't work.
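For reference, a minimal DCTL has a surprisingly small skeleton. This sketch is my own example, not taken from any particular script; the slider name and gain logic are invented for illustration:

```c
// Minimal DCTL sketch: one UI slider applied as a simple gain.
// DEFINE_UI_PARAMS(variable, Label, type, default, min, max, step)
DEFINE_UI_PARAMS(gain, Gain, DCTLUI_SLIDER_FLOAT, 1.0, 0.0, 4.0, 0.01)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    // Multiply each channel by the slider value and return the result.
    return make_float3(p_R * gain, p_G * gain, p_B * gain);
}
```

Dropping something like this into Resolve's DCTL folder and reloading should give you a Gain slider in the DCTL plugin; each incremental change (a new slider, different math) can then be tested the same one-change-at-a-time way.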
-
I have a small amount of experience with them. What are you trying to do?
-
It seems that all the original files for the videos magically appeared on Lok's hard drive...
-
I read the link. It said "Dan Sasaki @panavisionofficial engineered anamorphic 'adapter'". If you look at the setup, there's a pretty large gap between the Panavision component and the body of the camera, easily large enough to accommodate a decent percentage of the 8mm lenses ever made. Plus, the flange distance for 8mm film is very small, so combining those factors, I took it that it might actually be an anamorphic "adapter". I know enough about anamorphic lenses to know this setup is very common and completely plausible. Therefore, a question about the "adapter", and also about what the taking lens might be, seemed warranted. Do you have further information? I haven't seen anything so far that convinces me it's a lens rather than an anamorphic adapter, and even if it were, my original question about what lens it is still stands. Pointing me to a link that obviously doesn't provide any conclusive answers is not very helpful.
-
I absolutely agree with @Ty Harper that with enough data it will be able to differentiate the movies that got nominated for an Academy Award from those that didn't, those that did well at the box office from those that didn't, etc. What it won't be able to do, at least not by analysing only the finished film, is know that the difference between one movie's success and the next one's is that the director of one was connected in the industry and the second movie lacked that level of influence. But if we give it access to enough data, it will know that too, and will tell a very uncomfortable story about how highly nepotism ranks in predicting individual successes...

I also agree with @JulioD that the wisdom will be backwards-looking, but let's face it, how many of the Hollywood blockbusters are innovative? Sure, there is the odd tweak here or there that is enabled by modern production techniques, and the technology of the day changes the environment that stories are set in, but a good boy-meets-girl rom-com won't have changed much in its fundamentals because humans haven't changed in ours.

Perhaps the only thing not mentioned is that while AI will be backwards-looking, and only able to imitate / remix past creativity, humans inevitably use all the tools at their disposal, and like other tools before it, I think AI will be used by a minority of people to provide inspiration for the creation of new things and new ideas. It will also give the creative amongst us an increased ability to realise our dreams. Take feature films, for example. Lots of people set out to make their first feature film, but the success rate for finishing them is stunningly low. Making a feature is incredibly difficult. Then how many that do get made are ever seen by anyone consequential? Likely only a small fraction too. Potentially these ideas might have been great, but those involved just couldn't get them finished, or get them seen.

AI could give everyone access to this. It will also give everyone else the ability to spew out mediocre dross, but that's the current state of the industry anyway, isn't it? YT is full of absolute rubbish, so it's not like this will be a new challenge...
-
No, it's not an echo chamber, and people are free to have whatever perspectives they want. But take this thread as an example. It started off by saying that 24p was only chosen as a technical compromise, and that more is better. Here we are, 9 pages later, and what have we learned? The OP has argued that 60p is better because it's better. What does better even mean? What goal are they trying to achieve? They haven't specified. They've shown no signs of knowing what the purpose of cinema really is. You prefer 60p. But you also think that cinema should be as realistic as possible, which doesn't make any sense whatsoever, and you are also not interested in making things intentionally un-realistic. Everyone else understands that 24p is better because they understand the goal is creative expression, not realism. If we talk about literally any other aspect of film-making, are we going to get the same argument again, where you think something is crap because you have a completely different set of goals to the rest of us?

Also, the entire tone from the OP was one of confrontation and arguing for its own sake. Do you think there was any learning here? I am under no illusions. I didn't post because I thought you or the OP had an information deficit but were keen to learn and evolve your opinions. I posted because the internet is full of people who think technical specifications are the only things that matter and don't think about cameras in the context of the end result; they think of them as some sort of theoretical engineering challenge with no practical purpose. A frequently quoted parallel is that no-one cared about what paintbrushes Michelangelo used to paint the Sistine Chapel except 1) painters at a similar level who are trying to take every advantage to achieve perfection, and 2) people who don't know anything about painting and think the tools make the artist.

I like the tech just as much as the next person, but at the end of the day "better" has to be defined against some sort of goal, and your goal is diametrically opposed to the goal of the entire industry that creates cinema and TV. Further to that, the entire method of thinking is different too - yours is to push to one extreme (the most realistic), while the goal of cinema and TV is to find the optimum point (the right balance between things looking real and un-real).