Everything posted by kye

  1. It will depend on the processing in-camera. Linear is Linear, so no dramas there, but getting the 3x3 matrix right to define the primaries requires a calibrated setup, and might be non-linear or some other thing normally inside the camera and hidden from us. Don't the BM cameras have a defined gamma and colour space? Wouldn't they be BM Film Gen 1?
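     Just to make the 3x3-matrix part concrete, here's a minimal sketch of what a primaries conversion actually is, assuming the standard published (rounded) Rec.2020-to-XYZ and XYZ-to-Rec.709 matrices rather than anything camera-specific - a real camera matrix is exactly the part that needs the calibrated setup:

        import numpy as np

        # Standard published matrices (rounded) -- stand-ins for a real,
        # camera-specific characterisation matrix.
        REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                                   [0.2627, 0.6780, 0.0593],
                                   [0.0000, 0.0281, 1.0610]])
        XYZ_TO_REC709 = np.array([[ 3.2406, -1.5372, -0.4986],
                                  [-0.9689,  1.8758,  0.0415],
                                  [ 0.0557, -0.2040,  1.0570]])

        # Changing primaries is just one 3x3 matrix applied to *linear* RGB
        M = XYZ_TO_REC709 @ REC2020_TO_XYZ

        def rec2020_linear_to_rec709_linear(rgb):
            """rgb: array of shape (..., 3) holding linear Rec.2020 values."""
            return np.asarray(rgb) @ M.T

        print(rec2020_linear_to_rec709_linear([0.18, 0.18, 0.18]))  # neutral grey stays ~0.18

     Linear in, linear out - which is why "Linear is Linear" holds, and the whole argument is really about where that matrix comes from.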
  2. I don't think that's the case, Canon IS is very impressive. I went from using my Canon 700D and 55-250mm kit zoom with ML, sometimes in 1:1 crop mode for sports, to the Canon XC10 with its 24-240mm equivalent lens, and then to the GH5, and the GH5 wasn't stunningly more stable than either of the previous Canon setups. It was better stabilisation overall because IBIS has the two additional dimensions of stabilisation, but I wasn't wowed when I got it, despite people all over the internet being wowed by it, so I consider Canon IS to be very high quality, and I only had the more consumer examples of it - I don't know how good the IS is in the more premium FF lenses.
  3. Maybe I was a bit harsh.. it's great there is an additional camera out there that can perform well, and that someone has done the work to make good colour grading available. I would actually be interested in the side-by-side images, because it may be right up there with these alternative cameras - or might even have some extra mojo.. we know cameras from that time were often hidden gems, and one with global shutter would be even better, maybe giving it double mojo! I think these days I've become aware of the diabolical state of colour grading knowledge out there, and the more I learn the more I realise that the majority of free colour grading resources aren't just useless or misleading, they're often outright lies designed to confuse and disempower people so that you'll give up and just buy the stupid LUT pack the person is selling. Anyway, what are these AJA Cion cameras worth second hand?
  4. Man, of all the places to make a typo! I think the fundamental principles still stand: 1) there is a reality, and if you point a camera at it then you get a direct record of that reality, whereas if you have an AI do a bunch of maths, then no matter how real the result appears, it will never be a direct record of that reality; and 2) reality matters to people, and although they are willing to deviate from it by small amounts, and although people have different desires and tolerances for how large that amount is, there are limits to that distance; therefore: AI will not replace the direct capture of reality in many contexts, and these contexts are a significant part of the moving images industry.
  5. How much do you know about South Korean culture? There are many things in that culture that promote such behaviour which are not present in other cultures, or are present to a much lesser degree.
     Seems like a convenient subsample. 90% of the daily users of a platform specifically targeted at people who want to post photos for other people to see. 200 million is 2.4% of the world's population. I'd be amazed if 90% of the top 2.4% of the world's most vain people didn't use it either.
     Another convenient subsample. If you're putting posing for photos or smiling into the lying category then I can tell you're either trolling or you're on some serious drugs. The people on Tinder / IG / etc are one of these vanity collections again, like if you surveyed everyone at a Miss World competition and concluded that everyone uses 450g of makeup per day.
     An AI video is a video where:
     • no pixel was ever directly recorded from real life
     • there is no way to go back to the source, because there were unknowable amounts of training data and limited input data (these mysterious GoPros scattered around)
     • there is no way to know how the training data was processed
     • there is no way to know how the AI works
     I think we're a tad beyond sucking in your gut when someone pulls out their phone to take a snapshot.
     On a more general note, I occasionally read members of the forums writing that the world is going down the drain due to one social trend or another, and I just roll my eyes and wonder what the hell they are looking at online. I mean, when you open a browser you don't see anything until you search for it, so the things people say about the world are really just a reflection of whatever they have become fixated on and have let distort their world view.
  6. I have read and re-read your post over and over, and I just can't seem to get where you're coming from on the majority of your comments. Perhaps the biggest confusion is over reality, and how desirable it is as a concept.
     • People wear nice clothes and do their hair and makeup, but most wouldn't get plastic surgery (even if it was free and instant and perfect).
     • People are fine with having flattering lighting for portraits, but many people are very opposed to being photoshopped etc.
     • People are happy to read fiction and enjoy "movie magic" and "the magic of the theatre", but outside the realms where fiction is acceptable there are often very strong opinions about how far from the truth we are comfortable going.
     The ninth commandment is literally about lying: "You shall not bear false witness against your neighbor." Even if AI-generated images are perfect beyond what we could even check, it's sort of like someone completely trustworthy testifying in court while refusing to take the oath. It either is the truth, or it isn't.
  7. Perhaps the critical concept is that AI is calculators trained on only human input data. This whole thing is like when people learned anatomy. At first people thought we couldn't possibly understand how the body worked because it was made by God. Over many hundreds of years we've worked out more and more of the organic chemistry and various principles etc, and now no-one familiar with modern medicine would question our ability to understand the physical body. Now comes AI, and we're back to saying that we couldn't possibly understand or replicate what it is to be human, because we're ethereal, magical, special and knowable only to God. I think that line of thinking will suffer the same fate, and will suffer it at thousands of times the pace.
  8. Footage looks good, and global shutter is a definite plus of course, but I'd like to see a side-by-side with the OG BMPCC or P4K or Sigma FP etc. To be honest, the CST in Resolve can convert Linear gamma to whatever you want, so that part was done already. Adjusting the primaries can be done with a 3x3 RGB matrix, which isn't easy but also wouldn't be too hard if you knew what you were doing. I commend him for pursuing the project, but it's probably half a day's work for a professional colour scientist at most, maybe even an hour or two if they're set up for it already. This is an example of a RAW-shooting camera with a good conversion to LogC and then a nice LogC LUT put onto it, nothing more. This has been done by Juan Melara for a range of cameras including the P4K etc, which are almost indistinguishable in side-by-side comparisons. https://juanmelara.com.au/products/bmpcc-4k-to-alexa-powergrade-and-luts This is why I moved to colour grading rather than going from camera to camera - the RAW-shooting cameras are all nearly perfect already, and improving them without improving the colour grading is really a waste of time.
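     For anyone curious what "a good conversion to LogC" actually involves once the image is linear, here's a minimal sketch of the log-encode half, assuming the commonly published LogC3 (EI 800) constants - treat them as illustrative and check ARRI's own documentation before relying on them:

        import math

        # Commonly published LogC3 (EI 800) constants (assumed, not verified here)
        CUT, A, B = 0.010591, 5.555556, 0.052272
        C, D = 0.247190, 0.385537
        E, F = 5.367655, 0.092809

        def linear_to_logc3(x):
            """Encode a scene-linear value (0.18 = middle grey) into LogC3."""
            return C * math.log10(A * x + B) + D if x > CUT else E * x + F

        print(round(linear_to_logc3(0.18), 3))  # middle grey lands around 0.39

     Combine that with a 3x3 primaries matrix like the one in post 1 and you essentially have what the CST is doing, which is why this kind of conversion is a small job for someone who already has the pieces.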
  9. I agree that "reality TV" won't be saved, and I didn't put it in my list. Apart from the fact that it's scripted, or the participants are poked with a stick until they explode, and the editing is practically perjury, in the end the audience doesn't know or care who they are as people - they are simply characters that no-one knows and so could be anyone (or no-one). I think it's pretty easy to look at AI, think that it looks like an awkward and clunky human being, and conclude that it has a long way to go, but I think that's a misleading way to think about it. AI is literally trillions and trillions of incredibly fast calculators. Literally. So, acknowledging that, it's more useful to think that AI technology has managed to go from being a calculator to being a human-a-like, which is a thousand billion miles from where it started. The journey from where it is now to where it will be refined and sophisticated and creative and expressive isn't nothing, but it's short in comparison to the journey it's already taken. ...and in case you're not getting a sense of it, here is how human it was to begin with: 1+1=2, 3>2=true, 8/2=4.
  10. I agree, but I think there is a distinction here between videos that contain people I know/care-about/etc and people I don't. If a movie has Brad Pitt in it, people probably don't care if it was the real Brad Pitt or an AI version of him, and if they go to see a movie they probably don't care if the actors are even real people or AI-generated fictional characters. However, if I watch a video that has anyone I know in it, and it's a depiction of a real-life event, then it matters if it was real footage or not. This might seem an irrelevant detail, but I think it means the following parts of the industry may not be completely gutted:
      • Documentaries
      • Sports videography
      • Engagement/Wedding videography (although some might want a more 'enhanced' version than reality)
      • Event videography (birthdays, bar/bat-mitzvahs and other religious occasions, etc)
      • Corporate videos
      • All live-streamed event videography
      • News and current affairs TV
      • perhaps others?
      These are pretty significant percentages of the entire professional moving images industry. It's easy to start thinking that no-one will pick up a camera professionally any more, but that's just not likely to be the case. Even if you're right that people born from now onwards don't have any special relationship with reality (which I don't think will happen for a very long time), the people who are 10 years old now might live for another 100 years and will probably continue to want to see real-life content, so that will be phased out pretty slowly.
  11. If you're doing the colour grade yourself then here's my advice:
      • If it looks good, it is good (and it doesn't really matter how you got there)
      • If you're not sure what a setting does, and it doesn't make the image clearly better, then probably just leave it at whatever the default was
      That's literally it. As you gradually get better you'll build up a sense of what tools make what situations look better, and you'll get that from trial and error / reading forums / watching tutorials / etc, but as soon as you start using the word "should" a lot then you need to go back to the first bullet point above - if it looks good then it is good. The goal with the tech is to learn it well enough to start thinking about other, more creative aspects.
  12. A resume isn't meant to be a page-turner. It's a document built for a purpose. This is the same. This doesn't mean it shouldn't be interesting, but if the sound design isn't that great then that's not the point of the video, unless you're trying to get hired to do that. IIRC I've seen colourist reels without any sound at all, but I might be wrong about that.
  13. I feel completely safe in doing my own home videos of family and friends. I don't care how photorealistic AI will get (and it will get to be perfect), there will still be a fundamental difference between what something actually did look like vs what something might have looked like. This difference will remain as long as people are attached to a physical reality at all. There are parallels in other mediums as well. Art forgeries are still forgeries, even if they're perfect. If you think that no-one will care, just google "art provenance" and see how much people really do care.
  14. HLG isn't a standard, it's more like a semi-technical marketing phrase, however rec2100 and rec2020 are HDR standards and you can use a CST to convert them to whatever you like. If I shoot a clip in 709 and then in HLG on my GH5, the conversion from rec2100 to 709 is pretty close to the 709 version SOOC. Close enough that the difference is irrelevant, because you're going to want to push and pull the image in post, and those adjustments will easily override the differences out of the camera. It's easily tested - just record a clip in HLG and then in V-Log, use the CST to convert both to rec709 (trying a few input colour space / gamma settings for the HLG clip), and see which comes closest to the V-Log conversion. The better you get at colour grading, the less you care about which camera or colour space was used, as long as it's a robust and efficient codec and has colour management support.
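      If you ever wanted to do the HLG-to-linear step by hand instead of with the CST, the BT.2100 HLG curve itself is simple enough to write down. This is a sketch of the standard OETF and its inverse only, and deliberately ignores where the GH5 actually places middle grey, which is a camera-specific question:

        import math

        # BT.2100 HLG OETF constants
        a = 0.17883277
        b = 1.0 - 4.0 * a                 # 0.28466892
        c = 0.5 - a * math.log(4.0 * a)   # 0.55991073

        def hlg_oetf(E):
            """Scene-linear (0..1) to HLG signal, per BT.2100."""
            return math.sqrt(3.0 * E) if E <= 1.0 / 12.0 else a * math.log(12.0 * E - b) + c

        def hlg_oetf_inverse(Ep):
            """HLG signal back to scene-linear - the 'to linear' step of a manual conversion."""
            return (Ep * Ep) / 3.0 if Ep <= 0.5 else (math.exp((Ep - c) / a) + b) / 12.0

      From linear you can then apply whatever primaries matrix and output gamma you like, which is all the CST is doing anyway.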
  15. I decided with my GH5 that HLG was a better log space than V-Log, because in HLG the number of bits allocated to the midtones and to saturation is higher (and therefore they suffer less from the compression). This comes at the expense of the shadows and highlights, but those are far less important than skin-tones, so I figure I'm making a good trade-off.
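      A rough way to sanity-check that kind of claim is to count how many 10-bit code values a curve spends around middle grey. Here's a sketch using Panasonic's published V-Log curve as the example (constants assumed from the published spec); for the GH5's HLG you'd want to measure where the camera actually puts grey from a recorded grey ramp rather than assume the textbook OETF:

        import math

        def vlog_encode(x):
            """Panasonic's published V-Log curve (scene-linear in, 0..1 signal out)."""
            b, c, d = 0.00873, 0.241514, 0.598206
            return c * math.log10(x + b) + d if x >= 0.01 else 5.6 * x + 0.125

        def codes_between(curve, lo, hi, bit_depth=10):
            """Count of code values the curve spends between two scene-linear levels."""
            steps = (1 << bit_depth) - 1
            return round((curve(hi) - curve(lo)) * steps)

        # Code values spent within +/- 1 stop of middle grey (0.09 to 0.36 scene-linear)
        print(codes_between(vlog_encode, 0.09, 0.36))

      Run the same count on a measured HLG grey ramp and you can see directly which curve gives the midtones more room.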
  16. "24p is outdated": 22 pages on... and nothing has been learned.
  17. The other thing that occurs to me is to watch BTS videos of music video shoots. These will include lots of tricks about how to achieve various looks, as well as showing what the final looks are.
      Yeah, a 58mm F2.0 lens isn't so easy to manually focus, especially on a gimbal, unless you had a remote follow-focus, a very good monitor with excellent focus-peaking controls (that can compensate for the softness when wide-open), and are practiced at operating a gimbal and follow-focus at the same time.
      It's not that difficult to degrade the image in post to achieve at least some of the major optical aberrations that these vintage lenses have. One power grade you could set up for yourself and just pull out when needed might include:
      • a power-window that darkens the edges (ie, a vignette) and also adds a blur towards the edges (bonus points for mixing a normal blur with a radial blur)
      • a lens correction that adds some barrel distortion
      • a glow / haze effect to lower contrast (can easily be done with a blurred copy put over the top of the footage at a low opacity)
      • a very small-radius blur over the whole clip to knock the sharpness down a bit
      When fine-tuned, the above can do a pretty decent imitation of a vintage lens, so it's worthwhile putting into your toolkit if you're a fan of the vintage look (see the sketch at the end of this post for the same recipe outside Resolve).
      Now we have AI there are other things you can play with too. The Resolve Magic Mask is able to identify objects pretty well now (it shows a red overlay over the subject it has found), so you could potentially mask out the main objects and then do things to the background like darken it, blur it, make it a different hue, etc. The Depth Map feature is also interesting. I would suggest that neither of these is good enough for really strong adjustments without visible artefacting - they're roughly like the iPhone DOF simulations - but in music videos the look might be appropriate. Filters and blend modes are definitely your friend with music videos. If all else fails, you can pull a key of the skin-tones and then do whatever you want to the rest of the image.
      With all these things, if the edges aren't great or it doesn't quite stand up then just back off the opacity. You should always apply too much of an effect and then pull it back to something acceptable - then you can A/B it with and without the effect and adjust to taste. If you only nudge the effect up from nothing, your eye stays used to the image without it and even a modest amount will look overdone. Also, most colour grading breakdowns show that it's about applying a number of small and subtle adjustments that add up - it's very rare for any one individual adjustment to be significant, with the exception of colour space transforms and the basics like adjusting exposure, contrast and saturation.
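      Outside Resolve, the same vignette + softness + haze recipe can be sketched in a few lines of Python with OpenCV, just to make the idea concrete. This is not what Resolve does internally, and the file name and parameter values are placeholders to tweak to taste:

        import cv2
        import numpy as np

        def vintage_degrade(img, vignette=0.35, softness=1.5, haze=0.15):
            """Rough vintage-lens imitation: edge darkening, overall softening, low-contrast haze.
            img: float32 BGR image scaled to 0..1."""
            h, w = img.shape[:2]

            # Vignette: radial falloff from the centre (the power-window idea above)
            yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
            r = np.sqrt(((xx - w / 2) / (w / 2)) ** 2 + ((yy - h / 2) / (h / 2)) ** 2)
            mask = 1.0 - vignette * np.clip(r - 0.5, 0.0, 1.0) ** 2
            out = img * mask[..., None]

            # Small-radius blur over the whole frame to knock the sharpness down
            out = cv2.GaussianBlur(out, (0, 0), softness)

            # Glow / haze: a heavily blurred copy mixed back in at low opacity
            glow = cv2.GaussianBlur(out, (0, 0), 25)
            out = out * (1.0 - haze) + glow * haze

            return np.clip(out, 0.0, 1.0)

        frame = cv2.imread("frame.png").astype(np.float32) / 255.0   # placeholder file name
        cv2.imwrite("frame_vintage.png", (vintage_degrade(frame) * 255).astype(np.uint8))

      Barrel distortion is the one piece of the list missing here, and as with the Resolve version, the idea is to apply more than you need and then back it off.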
  18. I've heard anecdotally that tertiary education (universities and technical colleges etc) is being stripped bare by cost-cutting and efficiency programs, to the point where a friend of mine did a Graduate Certificate (which is the first third of a master's degree), wanted advice on which course/units to choose when re-enrolling in the next part, and there was literally no-one he could talk to. All lecturers were only accessible to students enrolled in their specific units and he was ignored or actually turned away, the course controller was also the head of the school and not even contactable, the entire enrolment process was online, and there literally wasn't even a student services desk where you could talk to a human being. This was at a major university here in Australia that appears in lists of top universities world-wide from time to time, so it's no backwater institution, and it's not like the fees aren't putting a truckload of money into their coffers either. Maybe life is different in the marketing department?
  19. Do you mean technical parts like the chips etc? If so, maybe they're out of stock and no longer being manufactured, so would be practically infinite in cost, rather than cheap.
  20. I also thought that the colours were overzealous, but this is actually a good thing - increasing saturation on poor quality footage really reveals weaknesses in the codec and colour science, so the fact that the colours don't reveal such weaknesses is a real plus. Had they posted only desaturated images I would have immediately suspected they were covering up some weakness, but this is obviously not the case.
  21. That's very promising looking footage, and definitely the right sized cameras - 80 x 80 x 50mm and under 400g (3.2" x 3.2" x 2" / 0.89lbs)
  22. Yeah, the sponsorship would have meant the default app and the anaemic codec. I don't know if this one supports RAW, but that would have been the ultimate choice.
  23. Wow.. good to know the Kingstons aren't reliable over the long term. My dad does a lot with Raspberry Pi computers, which use SD cards for storage, and apparently they chew through the cheap SD cards, but the Samsung ones are reliable over time, so they must also be made properly. I haven't heard much discussion about them, but I have a Samsung EVO card that I use with my other cameras and it seems ok. I haven't tested it with the BM cameras though.