Everything posted by kye

  1. I thought it was that they made you look like a complete turnip!
  2. I think this depends on what level of film-making you're doing. The professional colourists over at liftgammagain are likely working with RAW and high-quality ProRes files from high-end cinema cameras, and live in a different world to the one we talk about on here. There was a thread about buying "cheap" client monitors for their grading studios (these aren't monitors to grade with, they're only for the clients to view the grade on) and I just about had a heart attack when someone said their budget was $8000 per monitor - and they were looking to buy half a dozen of them!!
  3. I think it's to do with colour accuracy after a conversion, especially considering that most proxies are much lower quality than the original footage. If you were transcoding from RAW to something like ProRes 4444 then I don't imagine it would be a huge problem, but I could be wrong about that. Another thing I didn't mention is that if you're doing VFX work or any precise tracking then you want to use the original files too, because they'll enable much better tracking accuracy. I've heard VFX people say that for realistic 3D compositing you sometimes need to track to within a single pixel, or even to within half or a quarter of a pixel. I've never done it, but it makes sense: if you're moving the camera and adding a 3D object that's meant to move as if it's part of the environment, then any wobble of the VFX object relative to the real environment you filmed will be quite noticeable if your tracking isn't accurate.
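I've never written a tracker either, but to make the sub-pixel idea concrete, here's a toy sketch (my own illustration, not what any VFX package actually does) of one common way trackers refine an integer match to a fraction of a pixel: fit a parabola through the correlation peak and its neighbours (NumPy assumed).

```python
import numpy as np

def subpixel_peak(corr: np.ndarray) -> tuple[float, float]:
    """Refine the integer maximum of a correlation surface to sub-pixel
    accuracy by fitting a parabola through the peak and its neighbours."""
    y, x = np.unravel_index(int(np.argmax(corr)), corr.shape)
    if not (0 < y < corr.shape[0] - 1 and 0 < x < corr.shape[1] - 1):
        return float(y), float(x)  # peak on the border, no neighbours to fit
    # The vertex of a parabola through (-1, l), (0, c), (+1, r) sits at
    # (l - r) / (2 * (l - 2c + r)) - usually a fraction of a pixel.
    l, c, r = corr[y, x - 1], corr[y, x], corr[y, x + 1]
    dx = 0.5 * (l - r) / (l - 2 * c + r) if (l - 2 * c + r) != 0 else 0.0
    u, d = corr[y - 1, x], corr[y + 1, x]
    dy = 0.5 * (u - d) / (u - 2 * c + d) if (u - 2 * c + d) != 0 else 0.0
    return y + dy, x + dx
```

The point is that the refined match lands between pixel centres - exactly the half- and quarter-pixel territory mentioned above, and exactly where a lower-quality proxy would throw things off.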
  4. Cat overlord!!! Oh, that made me laugh!!
  5. I remember back in the day seeing a little video about a guy who worked for Nokia, whose job it was to travel the world, see what cool things people were doing with phones, and report those things back to Nokia as product and feature development ideas. One of the most interesting ones he mentioned was that villagers in rural Africa were buying a mobile phone as a business for their village. Obviously the other villagers would pay to be able to call people on the phone, and the phone owner would take a percentage, but the big thing was that the phone acted like a bank. In much of Africa there is a very rural-oriented culture where people go into the city to get a job for a few years but send money back to their families in the village. So the city worker buys a pre-paid phone card and sends the card details to the phone owner in the village, who then pays the family most of the value of the card, keeping a percentage as a 'transaction fee'. I thought this was brilliant - Nokia basically crowd-sourcing their R&D. I wonder if Apple or Samsung are doing anything similar, or if perhaps the innovation is now simply in software, not business models? The most interesting thing about the smartphone, to me at least, is that it's basically a generic device: it has no specific interface, no specific display limitations, no specific processing limitations, etc. It's a device that can be adapted to as many different styles of interface, display, or app design as developers care to create. It's kind of the antithesis of how old phones used to work (and how cameras currently work), where you choose a product and you get their hardware, their OS, and their apps all in one.
  6. I agree that the chipset and the requirement for AC power are both severely limiting factors. I watched a Hackintosh video where the guy mentioned something about the compatibility of various chipset manufacturers that surprised me, but the fact they didn't put a higher-powered Radeon in there is a bit strange. Unless it was designed to a spec, perhaps something like "FCPX must be able to play 4K 30 in real-time with 2 LUTs and some basic curves adjustments applied on the nicest Apple display"? The first battery-powered one will be interesting. One thing I learned over at liftgammagain is that you can ingest footage with a slower machine, you can generate proxies with a slower machine (see the sketch below), you can edit with proxies, and you can do sound with proxy video, but you can't grade with proxies - if you are grading in front of a client then your machine needs to be able to play the original footage with all the grades applied in real-time. Grading, however, is done in a controlled lighting situation with calibrated monitors, so AC power is available and an AC-powered solution still makes sense there. Personally I don't need to play graded footage in real-time, so it doesn't matter for me. I'm happy to scrub the playhead around to see how things look, render it out, watch the render back, and make changes if required. For my home videos my 'clients' are my family, so I can play them the mostly-finished project, get their input, and review the output for any strange things that catch my eye at the same time.
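For what it's worth, generating proxies on a slower machine is pretty mundane - something like this rough Python wrapper around ffmpeg's prores_ks encoder would do it (the folder names are invented for the example):

```python
import subprocess
from pathlib import Path

def make_proxy(src: Path, proxy_dir: Path) -> Path:
    """Transcode one clip to a small ProRes Proxy file; the original
    stays untouched for the grade."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    dst = proxy_dir / (src.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes 422 Proxy
        "-vf", "scale=1280:-2",                  # 1280 wide is plenty for cutting
        "-c:a", "copy",                          # leave audio alone for the sound pass
        str(dst),
    ], check=True)
    return dst

# hypothetical card layout - point it at wherever your originals live
for clip in sorted(Path("card01").glob("*.MOV")):
    make_proxy(clip, Path("proxies"))
```

Any old laptop can chew through that overnight; it's the real-time playback of the graded originals that needs the horsepower.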
  7. You forgot to include "flange distance", which after all that discussion in the Nikon FF and Pocket 2 threads must surely be the leading search term in all of photography today!!
  8. My view is that the novelty will wear off, and that's a good thing, but ultimately the future is 3D, because AR (augmented reality) will dominate. The popularity of smartphones is undeniable, and while they are great for having constant access to apps and the internet, they absolutely suck as a user interface because the screens are so small. The future of the smartphone interface will be AR - essentially placing smartphone technology within your field of view, like the heads-up display did for driving. You get a glimpse of it with the Apple Watch, which is essentially a second screen for your phone, but one that is much handier than having to retrieve your phone from your bag / pocket / nightstand - and AR would eliminate that separation between a person and their device. In terms of how well this particular product does in the market, who knows. There's a saying in startup culture - "being early is just like being wrong" - but even if they are early, establishing yourself as one of the early developers, with the knowledge, patents, tech, and company culture, will mean that when demand goes up they can be ready. Capturing 3D won't really take off until there is a decent appetite for consuming 3D, which didn't happen with TVs, but might with VR, and definitely will with AR - though I think AR could well be a long way off, at least in product-lifecycle timescales. Google Glass was obviously a failure (for many reasons), but products like Snapchat Spectacles are bringing wearable tech into the market in ways that Google Glass couldn't, and if Snapchat Spectacles had the functionality of the Apple Watch to display instant messages and other basic info then they would be very popular. I think VR will get some traction before AR, most likely from gaming, but the fact that it blinds you limits how and where people will actually use it. We're not about to see the average commuter put on a pair of VR goggles on the train, for example!!
  9. You may well be right. I have an Apple laptop and I use it for the above, but I chose Apple because of other factors. If someone didn't already have a laptop and wanted one only for video, I wouldn't recommend an Apple laptop unless they were an FCPX user. When I was in the market for a new laptop I did a detailed comparison between the MBP and a couple of non-Apple laptops, and it was things like the integration with the Apple ecosystem that decided it for me - something that had nothing to do with video. Of course, being my only machine, it's what I use for video, and so being able to upgrade it with an eGPU if I want the extra performance provides some flexibility and options to extend my current setup, which is nice to have. I think the average person on this site is oriented around video more than the target users for eGPUs are, or certainly for eGPUs with this processor anyway. If you're looking for a video-first machine then this eGPU isn't the way to go, and I think people who are video-first find it hard to understand that video products might be made for anyone other than video-first people. In the same way that it would be silly for me to hang around on mobile phone forums criticising every smartphone because it doesn't shoot 4K 60 in 16-bit RAW, have XLR inputs, or SDI connections, I find it strange that people who would never buy an Apple laptop are all of a sudden the experts in what to buy for those people who do own one. It's like the film-making industry hasn't worked out that convenience and decent video can now co-exist, that the vast majority of film-makers are amateurs, that the biggest networks aren't broadcasters, and that the majority of video content isn't consumed on projectors - possibly not even on TVs!
  10. Ah! I thought you were saying it couldn't be done. If you're saying it would be a bad idea, then that's a different conversation. I agree that triggering people with epilepsy everywhere you go would be a bad idea, and you can't overpower the sun with anything except very, very bright lights, so that's the ballgame for overpowering lighting for video right there. The alternative would be continuous lighting for a burst, but that would be pretty nasty in power requirements and pretty horrific for the poor people the light is aimed at, too. The answer is probably high ISO and digital relighting - Apple's Portrait mode but with the tech advanced by a dozen or so generations. People say that? Wow! Now that sounds great!! Why didn't I think of that!
  11. You're right about the size / resolution of the output files not changing, but the aspect ratio of the things being filmed has changed (circles aren't round any more).
  12. That makes sense. It sounded like you were suggesting that because some codecs require a lot of processing, everyone should switch from a laptop to a desktop and completely lose all the benefits of having a portable computer. The way I see it, there are many stages of film-making in which a computer is required:
  • On set, ingesting footage
  • Reviewing dailies after shooting
  • Editing
  • Colour correcting
  • VFX / compositing
  • Grading
  • Titles and export
  • Archiving
If you use a laptop you can do all of the above with the same computer. But then 4K H265 codecs appear, and because you can't edit or grade that footage the advice is to switch to a desktop, which means maintaining two complete setups, or losing the ability to do many of the remaining steps. There's an assumption that because grading requires a calibrated monitor and environment, you'll be doing all the other steps in that environment too, which is complete bollocks.
  13. Changing your lifestyle makes sense? So when I bought a 4K camera I should have stopped editing video on the two-hour daily commute to my day job and edited at home instead of spending time with my family? Are you on drugs? How about this - if you can edit on a desktop computer, then why are you even in a thread about an eGPU, which is clearly aimed at laptop users...?
  14. Nice post @jonpais. There are so many different aspects to consider, and for many of us a camera that really stuffs up a single feature is worse than one that is passable at everything but doesn't wow. I think that's the difference between different types of filming - some situations call for a camera to be great at a few things and don't need the rest, whereas other styles need everything to operate above a certain minimum level of performance, even if that minimum is quite modest. A great example is the GH5 vs A7III - 10-bit or 4K60 doesn't matter to me if the AF has failed, yet there are many cinema cameras that don't even have AF. People have told me flat out that I'm expecting too much, but unlike perhaps the vast majority of people on here, I started making films with what I had (a Samsung Galaxy S2 and a ~$300 Panasonic GF3 m43) and only upgraded when I went on a trip, filmed real things, messed up shots all over the place, and then looked for ways to improve.
  15. Whenever I see things like "we can't see in 4K" or "no-one will ever need 8K" I just hear "640K should be enough for anybody".
  16. A really simple example might be the home videos from Minority Report. Ignoring the 3D aspect of it, right now we have the ability to shoot really wide angle and then project really wide angle - all you need is a GoPro and one of those projectors designed to sit close to the screen. Existing tech, right now. If you shoot 4K but project it 8 feet tall and 14 feet wide then most people sure as hell will be able to see it - especially if you've shot H265 at 35Mbps!! Projecting people life-sized is a pretty attractive viewing experience, so we're not talking about some abstract niche - we're talking about something that a percentage of the world's population would see in the big-box store and say "I want that".
I understand that. If you read my post carefully you will notice I mentioned that they might have a 24/25/30fps sync - this is different to continuous lighting. While this isn't currently available at full power, there are strobes that can recycle fast enough (e.g. the Profoto D2 can recycle in 0.03s and can already sync to 20fps bursts). All that is missing is a big enough buffer (capacitor bank) to do full power that fast.
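A quick sanity check on those numbers - nothing more than arithmetic on the recycle time quoted above:

```python
# Recycle time taken from the Profoto D2 spec mentioned above; the rest
# is just arithmetic, not a claim about any shipping product.
recycle_s = 0.03              # fastest quoted recycle time, in seconds
max_sync_fps = 1 / recycle_s  # ~33 flashes per second

for fps in (20, 24, 25, 30):
    frame_gap_s = 1 / fps
    print(f"{fps} fps: {frame_gap_s * 1000:.1f} ms between frames -",
          "recycles in time" if frame_gap_s >= recycle_s else "too fast")
```

So at reduced power, a 24/25/30fps sync is already inside the recycle window - getting there at full power is a capacitor problem, not a physics problem.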
  17. That looks really good - thanks!
  18. My impression was that it was a quick 'fix' to bypass some kind of issue. Considering how important aspect ratios are in both still and moving images, and the fact that everyone has known they're important for almost the entire history of photography, it's unlikely it was a mistake. Of course, regardless of how and why it happened, it's likely they'll fix it quietly and we'll never find out.
  19. I watched that too. He makes a point, BUT he's assuming that we shoot, process, and display at the same resolution, and that we display that resolution in a 16:9 rectangle with a viewing angle aligned with the THX or SMPTE specifications. This is old thinking. We need to shoot higher than 4K if we want visually acceptable results with reasonable cropping, VR, or any kind of immersive projection.
Not if Canikon gives us an early xmas!!
I've gone a little way down this path, and there are a few adjustments. With shutter speed you can either shoot for video (the 180-degree rule), for stills (very short exposures), or a compromise. I shoot a compromise, because I think the motion blur you get from something like 1/100 or 1/150 is appropriate - if something is significantly blurred then it was an action shot, and I like that in a photo - but this is personal taste.
In terms of formal posed moments, video modes aren't currently well suited to them, so you're better off just shooting stills. However, cameras could evolve here (and would need to), perhaps with something like a 'burst video mode' that shoots RAW for a burst - say 0.5s before and after the shutter button is pushed. Combined with high-ISO performance this could give sufficient image quality to work fine with continuous light, or you could run continuous lighting for just the burst, or a 24/25/30fps flash sync for that burst. These technologies are either already with us or pretty close, so it's more a case of them being combined and the market working out what photographers would actually use.
In terms of picking shots, I think it's actually a bit easier than with photos, but it requires a different mindset. When you're taking photographs you're trying to notice everything and then hit the shutter at exactly the right moment to capture it. You then take those moments into post and have to choose between the shot with the nicest groom smile but a slightly forced bride smile, and the better bride smile but lesser groom smile - or spend the time photoshopping the two together. With video, you're capturing every moment, and you get to choose any moment to retrospectively 'hit the shutter button'. In this sense, you should view it as a continuous capture, not individual images. During a moment there are lots of micro-moments happening: the bride smiles, the groom smiles, the trees flutter, a distracting noise happens in the background, the couple hear it, they look puzzled while they process it, then they laugh at slightly different times. You will have captured all of those moments and all of the moments in between, so the task is to scrub the playhead back and forth to find the moment of 'peak smile' (there's a rough sketch of this at the end of this post).
This is kind of how sports photographers choose images from bursts - they don't think "crap, I've got to choose between 3000 images"; they know that in each burst there is a best moment and they just scroll through to pick it. The difference is that even at the fastest Canikon burst rates (around 10-12 fps) the magic moment in sports is often between frames. There's a reason flagship bodies are pushing for higher burst rates - pro shooters aren't saying "oh shit, higher burst rates make my life worse by making me choose between more images". In the future the burst video mode for flagship bodies will be much faster than video - 50, then 100, then higher frame rates.
If you look at the iPhone burst functionality, it captures a burst and then shows you the whole burst as one image; if you tap to edit it you can choose which frames to keep, and there's a button to discard the rest. Apple has worked out that a burst is different to a string of individual images, which is more aligned with this new way of looking at the capability.
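To make the 'retrospective shutter button' concrete, here's a rough sketch (my own illustration - the filenames and timestamps are invented) that dumps every frame from half a second either side of a chosen moment, so you can flick through stills looking for peak smile:

```python
import subprocess
from pathlib import Path

def grab_moment(clip: Path, centre_s: float, out_dir: Path,
                span_s: float = 0.5) -> None:
    """Export every frame from span_s before to span_s after the chosen
    moment as stills - the video-era version of a photo burst."""
    out_dir.mkdir(parents=True, exist_ok=True)
    start = max(0.0, centre_s - span_s)
    subprocess.run([
        "ffmpeg", "-ss", f"{start:.3f}", "-i", str(clip),
        "-t", f"{2 * span_s:.3f}",
        str(out_dir / "frame_%04d.png"),
    ], check=True)

# e.g. the laugh lands around 42.8s into the ceremony clip
grab_moment(Path("ceremony.mov"), 42.8, Path("picks"))
```

At 25fps that's 25 stills to scroll through per moment - the same job as culling a sports burst, just with the moment chosen after the fact.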
  20. Same here. It's just strange when a topic like "how can I boost the power of my laptop" comes up and people answer "buy a 24 ton supercomputer" and you're like "how on earth does that even make sense?"
  21. In case anyone hasn't seen it... The video references a Kickstarter for a battery adapter that looks really useful, as it's both an external battery solution for multiple devices and a charger:
  22. LOL! Yeah, I love the guessing too!
  23. Cool. The more control they have over these things, and the better they are at colour, the closer they should be able to get to matching colours between cameras with different sensors. With tools like Resolve, if you have time and some dedication you should be able to get the look you want from most cameras these days. Matching in a multi-camera setup requires a higher skill level, but I've seen youngsters on YT match many cameras passably well in a very short period of time using only the LGG controls, and if you're willing to dive into colour checkers you can dial things in pretty quickly if you do a bit of homework beforehand about how the colours differ (sketch below).
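On the colour checker point, the underlying idea is simple enough to sketch. This isn't what Resolve does internally - just the textbook least-squares version: shoot the same chart on both cameras, read the patch RGBs, and solve for the 3x3 matrix that maps one camera onto the other (NumPy assumed, patch values invented):

```python
import numpy as np

def fit_colour_matrix(src_patches: np.ndarray, ref_patches: np.ndarray) -> np.ndarray:
    """Least-squares 3x3 matrix mapping camera A's chart patches onto
    camera B's. Both arrays are (N, 3) RGB, ideally in a linear space."""
    M, *_ = np.linalg.lstsq(src_patches, ref_patches, rcond=None)
    return M  # apply to any pixel with: matched_rgb = rgb @ M

# invented readings for three patches; a real chart gives you 24
cam_a = np.array([[0.21, 0.12, 0.05], [0.76, 0.58, 0.26], [0.13, 0.19, 0.33]])
cam_b = np.array([[0.23, 0.11, 0.06], [0.74, 0.60, 0.25], [0.12, 0.21, 0.31]])
M = fit_colour_matrix(cam_a, cam_b)
```

More patches just means the fit averages out noise, and the 'homework' amounts to knowing which patches your two cameras disagree on most.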
  24. So in one corner we have the argument that companies without a cinema line can go all-in and make a non-crippled camera. In the other corner we have the trickle-down argument: companies with a cinema line will have already recouped the R&D costs of the really fancy tech, so the low end of their range will combine high performance with lower cost. Maybe we should just toss a coin to predict these things!
  25. It won't be a direct competitor, no. But for some people who are in between, it might be a toss up between two options that only partially provide what they want. I know this won't be that many people though.