
kye

Everything posted by kye

  1. kye

    Lenses

I guess you'd have to know more about his project to see the links - certainly he can! I posted it because the Takumars are one of the famous vintage lens lines that people like to collect, and the footage from the S1H should be top quality, so assuming Mark did a good job filming (which I would assume he did), the video should be a good example of footage from those lenses. I've got a few Takumars, and although they're not the SMC Takumars, they're still excellent; if I shot FF I think I would have collected a whole set of them. They are beautifully made things and the images are very nice too - a great balance of being high resolution without being too sharp.
  2. I agree. It's easy to get used to the manual nature of SD cards and converting footage, and to stop seeing how antiquated the process really is. Anyone who thinks this is common sense can get a reality check by explaining the workflow to anyone under 20 - but be warned that the 'open and honest' feedback might be a shock! Even if it was via an app, being able to open an app, see thumbnails of what's on the camera, and interact with them in useful ways would be good. It might even work better with the app as a configuration tool, where you link the camera to your social media accounts or file-sharing devices and then do the actual sharing from the camera itself - which might be easier, as the camera has more buttons and could use the screen purely as a display instead of having to show the UI as well. While I'm travelling my wife takes her own photos of the trip on her phone and shares them on Facebook for the relatives to all see. Take the sequence of operations to be:
     • Take a bunch of photos or video
     • Select some images
     • Adjust those so they look nice (cropping, basic colour adjustments)
     • Share on social media
     The workflow on a camera / computer might take 20 times as long, and heaven help you if you're on Mac and want to draw on the image or add a comment - now you're editing the image in Photoshop because there's no equivalent of Paint on OSX! Even if your workflow is about quickly editing together a little highlights reel, the apps are so much faster without having to do all the media transfers etc.
  3. I visited a site the other day that said it was targeted at professionals and amateurs, but the registration process required a link to your showreel or portfolio, which struck me as a bit strange considering I always associated those with being a pro. But it got me thinking - why not create a showreel? Yes, it would be terrible compared to the pros, but I'm not looking for work so in a way, who cares? It could also be a fun project that could be a real learning experience, especially if I post it asking for feedback. Has anyone who isn't looking for work or to attract others to a project ever made a showreel or portfolio? What were your experiences? The more I think about it, the more it seems like a good idea.
  4. Just a follow-up to say thanks, and after a long time I've now pulled the trigger and bought the BeatStep Resolve controller software. It's now only €100 at the normal price, so that's cheaper again. After much more experimentation in grading I've realised that often the best approach is just using lift/gamma/gain controls straight-out on LOG footage, without converters or anything else, but adjusting these is a PITA if you're not using a control surface and therefore able to control two at once to kind of pull against each other. The learning journey continues!
  5. Wow. Points for ambition and experimentation, but yeah, not so great. Is there a fundamental issue of lens size relating to sensor size? i.e. larger sensors mean larger lenses, because the focal length has to get longer as the sensor gets larger in order to keep a consistent FOV. I also don't think it would fit in my pocket with the FD 70-210/4 on it, and the 14/2.5 pancake would only be pocketable in very loose clothing. Maybe the Olympus lens cap lenses would be a good fit? Assuming you could actually get good images from it, of course!
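The focal-length scaling mentioned above is just geometry - a minimal sketch, with the horizontal FOV and sensor widths chosen purely for illustration:

```python
import math

def focal_for_hfov(sensor_width_mm, hfov_deg):
    """Focal length (mm) needed for a given horizontal field of view."""
    return sensor_width_mm / (2 * math.tan(math.radians(hfov_deg) / 2))

# The same ~40 degree horizontal FOV on progressively larger sensors
# needs a progressively longer (and hence larger) lens:
for name, width in [("MFT", 17.3), ("APS-C", 23.5), ("FF", 36.0)]:
    print(name, round(focal_for_hfov(width, 40), 1), "mm")
# MFT ~23.8 mm, APS-C ~32.3 mm, FF ~49.5 mm
```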
  6. H.264/H.265 files are always difficult to deal with, but if you can adapt your workflow to rendering proxies then things get way easier to manage. I do understand that they don't work for jobs where you have to turn around an edit asap though.
  7. I'm not sure about the Sony, but on the GH5 the focus peaking is only calculated at the screen/EVF resolution, not on the full-resolution image being filmed. I've found this a real limitation, and it's one of the reasons that MF is easier with the EVF than the screen - the EVF has a higher resolution. I never thought about it before, but in that sense, extra resolution is useful for MF.
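A toy sketch of why peaking computed at a lower resolution misses fine detail - purely illustrative, using a 1D list of values standing in for a row of pixels:

```python
def peaking_strength(signal):
    """Sum of absolute neighbour differences - a crude 'edge energy' measure,
    similar in spirit to what focus peaking highlights."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:]))

def downsample(signal, factor):
    """Average neighbouring samples, as a screen/EVF display pipeline might."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

# Fine alternating detail (e.g. in-focus texture at the sensor's resolution)
# versus the same signal downsampled 2x for display:
fine = [0, 1] * 8
print(peaking_strength(fine))                 # → 15 (strong edges at full res)
print(peaking_strength(downsample(fine, 2)))  # → 0.0 (detail averages away)
```

The finest detail vanishes entirely at half resolution, so peaking driven from the display image can report "nothing sharp" even when the recorded image is critically focused.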
  8. I agree that AI could do a reasonable job of it if implemented well. I have wondered why there aren't more tools that fix issues in footage like this, but you're right that we've come a long way, and what's yet to come will dwarf everything so far!
  9. One challenge would be how to fill the frame. If you straighten the lines (from this //////// to this ||||||||) then the problem is that you have to crop off the sides a bit (see diagram #2) Then you'll either get black bars on the sides, or you'd have to crop in, but if it just cropped in and out based on horizontal movement then that would be very strange. If you were doing it in post with accelerometer data then you could make good decisions about cropping and other things, so maybe that's the better approach - to save the accelerometer data in the footage then process it afterwards.
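The crop penalty of straightening leaning verticals can be put in rough numbers - a deliberately simplified model (a pure horizontal shear, ignoring lens distortion), with the frame size and lean amount made up for the example:

```python
def usable_width_after_straighten(width_px, lean_px):
    """
    Width remaining after shearing rows horizontally to straighten
    leaning verticals. lean_px is the horizontal offset between the
    top and bottom of a line that should be vertical. Each row shifts
    proportionally, so lean_px columns end up outside the original
    frame and must be cropped (or shown as black bars).
    """
    return width_px - abs(lean_px)

# A 1920 px wide frame whose verticals lean by 100 px loses 100 px:
print(usable_width_after_straighten(1920, 100))  # → 1820
```

Because the lean changes from frame to frame with camera movement, the crop would change too, which is exactly the "very strange" pumping effect described above - hence the appeal of recording the accelerometer data and making cropping decisions in post.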
  10. To be clear, I'd suggest the vast vast majority are good natured team players. It seems that there are a select few who have talent so significant that they are tolerated in the industry, perhaps finding a small cohort that can stand to work with them. In many other industries it doesn't matter how good you are, there are levels of attitude problem that mean you can't really operate in any meaningful way at all.
  11. I agree. Correlation isn't causation, but I do suspect that quite a few on here are also higher-level pros of some kind or other who care about their reputation. I'm not a film-making professional, but everything online is potential fodder for a background search when applying for work, so there is that. Of course, I've noticed that the film-making industry seems to be relatively tolerant of people who simply can't get along with others, whereas they'd have a much harder time in some other industries where things are more about getting along with people rather than your talent eclipsing your attitude problems. For example I can't imagine how Werner Herzog would go in the forums!
  12. Interesting. The liftgammagain.com forums are real names only and are very civil. Other sites even require you to submit a profile and/or folio of work before you can join. It's an interesting idea, and anything that raises the level is worth a try. I'd support it.
  13. The Lensrentals blog talks about how some modern lenses (including my MFT Voigtlanders) are built in such a way that they can't be serviced. I'd imagine it's likely to be things like gluing instead of screwing things together, etc. In a sense, almost everything is repairable, but the thing working against that is the cost of labour. When you buy a $1500 lens and it breaks, if it's going to take 30 hours of labour to take it apart, diagnose the issue, order spare parts, re-assemble, test everything, measure the optics, and send it back to you, and they're charging $50 p/h, then you just paid the cost of a new one in labour alone. You can argue that the one serviced by the technician might be better aligned and set up than one out of a factory, but in high-quality manufacturing environments the equipment may be so specialised that it's hard to replicate those steps manually. For example, machines might have special tools that can exert huge forces onto a part, accurately and without leaving marks, because the tool shape exactly matches the surface they're pushing onto. There's a big push in places like the US for "right to repair" legislation, because you buy a huge $250k tractor and it develops a fault, and in order to diagnose it you have to call out a licensed service technician because the computer port requires proprietary software and is encrypted to stop you fooling with it. So instead of being able to diagnose and fix the tractor in the middle of the field in an afternoon, you have to wait, pay a call-out fee, then have the tech spend 2 minutes working out that a sensor needs to be replaced and another 5 minutes fixing it.
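The repair-vs-replace arithmetic above is simple enough to write down - the figures are the hypothetical ones from the post, not real service quotes:

```python
def repair_vs_replace(labour_hours, rate_per_hour, parts_cost, replacement_cost):
    """Return (repair_cost, worth_repairing) under a cost-only comparison."""
    repair_cost = labour_hours * rate_per_hour + parts_cost
    return repair_cost, repair_cost < replacement_cost

# 30 hours at $50/h on a $1500 lens, before even counting parts:
print(repair_vs_replace(30, 50, 0, 1500))  # → (1500, False)
```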
  14. In terms of hardware acceleration, the new T2 chip in newer Apple computers has some kind of H.265 hardware acceleration, but I'm not clear on whether it's just decoding or also encoding. As the chip also does encryption, it's hard to find references that talk about the H.265 side rather than that, but I've seen it crop up a few times in benchmark tests.
  15. My lens owning goal is to use the lenses I have, and also to not use the lenses I have. We still have a trip booked with return flights to Europe in September and we're really hoping they cancel it, because "Just don't use them, and forfeit the fare" is higher up the preference list than "Fly to Europe". I say goal, because at this point in the global pandemic having a plan is ridiculous. In terms of using the lenses I have, I'd like to do some lens tests of the lineup, and maybe a bit of shooting locally might be nice too.
  16. kye

    Lenses

    Mark Holtze is pitching a Netflix show using his SMC Takumars and this video shows some footage from an advanced test he shot. Footage from the Takumars and S1H.
  17. Haven't watched it yet, but noticed this in my feed:
  18. @Super8 Is there anything you can show us where specifically the GH5 has plastic skin tones or where you can't get the same look? I keep asking because there are people I have spoken to at length, and whom I respect, who believe there is a difference, but I can never get enough information about what they're looking at to be able to see it myself. It's easy to point to a video shot in glorious light with a cast and crew that are on their game and say that a different setup can't do that, because the only evidence against that statement would be a video exactly the same but shot on a different camera, and at sunset with the light and breeze just-so it's not possible to replicate. My question is about what specifically can't be matched. I am literally interested in someone pointing at part of a still frame of a video and saying 'see this thing here.. FF doesn't do that', or 'see that there.. MFT doesn't do that', or 'see how this thing moves here... and now see on the other one how it's different... if you can't see it then watch for the way that X does Y'. One resource that I found very interesting was this: https://www.yedlin.net/NerdyFilmTechStuff/MatchLensBlur.html The basic idea that you can match blur on different crop factors isn't the headline here - it's that Steve Yedlin is saying it (he'd know!), and it's probably the most thorough analysis I've yet found. He doesn't talk about availability of lenses, but he definitely discredits the people who straight-out suggest that you can't get shallow DoF on a smaller sensor. He also doesn't talk about whether there's a 'look' inherent in various sensor sizes, but he rules a whole bunch of variables out.
I'll be the first person to admit that wider lenses with wider apertures that are sharp wide-open aren't available for MFT, and maybe that's the 'look' you're referring to, but that would only apply to shots that need a larger aperture - MFT can easily match a FF 50mm at F4, for example, so in that particular shot it can't be the lack of lenses contributing to it. I'll also be the first to admit I haven't got any glorious images to post that will "prove" the GH5 does hold up, but even if you gave me an Alexa I think I still couldn't do that - the weak link in both setups would be my skills in post! I'm also half-suspecting that it's actually not the camera at all, but everything else. What I mean is that film-making is a very deep and very difficult thing to do well, and by the time you're good enough to do all the other stuff right, you're spending so much money anyway that of course you just rent an Alexa for the shoot. So in a way I question whether it's not that it's impossible on smaller-sensor cameras, but just that it isn't done on them. You're saying you can see the look; I want to be able to see it too.
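The blur-matching idea boils down to keeping the FOV and the entrance-pupil diameter constant across formats. A minimal sketch of that standard equivalence arithmetic (not Yedlin's actual method, just the underlying relationship):

```python
def equivalent_lens(focal_mm, f_number, crop_factor):
    """
    Full-frame lens giving the same FOV and the same blur
    (entrance-pupil diameter) as the given lens on a cropped sensor.
    """
    ff_focal = focal_mm * crop_factor      # same FOV on the larger sensor
    pupil = focal_mm / f_number            # entrance pupil diameter, mm
    ff_f_number = ff_focal / pupil         # works out to f_number * crop_factor
    return ff_focal, ff_f_number

# An MFT (2x crop) 25mm f/2 matches a FF 50mm f/4 for FOV and blur,
# which is the example in the post:
print(equivalent_lens(25, 2.0, 2.0))  # → (50.0, 4.0)
```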
  19. lol about spray tan. I'd be the last person to need one! The Micro sure makes all reds go pink and saturates things pretty strongly. The colour checker is a Datacolor one and I just got it off eBay. They're quite reasonably priced (for a colour checker anyway), and the reverse is a big grey card with a greyscale on the side, so it's useful for setting WB on location if you need to do that. In terms of grabbing Olympus stuff, I'd wait for the prices to all drop and then peruse the bargains.
  20. They were shot as a stress test and thus are very high DR - I'm about 50cm from a naked 150W halogen bulb and there is no fill, in order to stress-test the DR. IIRC the exposure was set up so that the whites were just below clipping, and the blacks are definitely clipping because the scene is >13 stops. You're probably right about the saturation on the first two, but I would have thought that would expose issues in skin tone even more than a more neutrally saturated image? I figured that the test was a good one, as it was the GH5 compared against an uncompressed RAW image in 100% controlled test conditions. I'm simply trying to get a baseline for what you deem as "doesn't hold up", which is a subjective judgement.
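Scene dynamic range in stops is just the base-2 logarithm of the brightness ratio - a quick illustration with a made-up contrast ratio, not a measurement of the scene in the post:

```python
import math

def contrast_ratio_to_stops(ratio):
    """Scene contrast expressed in photographic stops (doublings of light)."""
    return math.log2(ratio)

# A scene with a 10,000:1 brightness ratio spans about 13.3 stops;
# anything beyond a camera's DR at either end will clip:
print(round(contrast_ratio_to_stops(10_000), 1))  # → 13.3
```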
  21. Great conversation. @Super8 I don't really mind if MFT has a 'look', and in retrospect I wouldn't have thought that having a look was a bad thing - after all, the 'FF look' is an often-used phrase, and normally as a desirable thing, not a liability. I have a theory that once a camera is above a certain level of quality, you can colour match it to any other camera, assuming you have enough skill in post. I figure that I'm either right or wrong, and by pursuing it I'll learn a lot either way, so some months ago I bought a BMMCC in order to shoot it side-by-side with the GH5 in identical conditions and try to match them. I chose the BMMCC as it has a reputation for excellent colour and there's no way I can afford an Alexa, so this was the best compromise. People also like the motion cadence and other aspects of it, but I'm not there yet in my comparisons. The project is ongoing (although it got paused during covid times, as thanks to my day job I had less spare time and energy rather than more), but I did manage some colour matching I thought wasn't too bad. I'm curious to get your impressions of the below. The first one is the GH5 in 150Mbps 4K 16:9 HLG mode, the second is the BMMCC in 1:1 RAW graded with WB and CST only. I'm not that pink in real life but I'm probably not too far off it - office worker tan lol. Also, are you seeing the plasticising of the skin in this GH5 shot? Once again, genuine question. I don't doubt that things like skin texture are negatively impacted by compression; the question is how keen our judgement is and how much each of us is willing to tolerate in our images. Personally I'm not that picky, and for my purposes it's totally fine, but getting your impressions might help to calibrate the discussion.
Here's the GH5 shot graded via a WB and CST (although the GH5 HLG doesn't actually correlate to either rec.2100 or rec.2020 so it's not a perfect conversion, which is super annoying) so I don't think anyone would think this is a nice grade (or let's hope not!).
  22. I actually think that the GH5 and MFT vs FF debate throughout the thread is on topic, assuming that's what @Andrew Reid was referencing. Olympus has sold its imaging business; it was one half of the MFT alliance and was responsible for basically half the MFT cameras made (excepting the odd model from BM, Zcam, etc), so one of the biggest impacts might be the death of the MFT system. To that end I'm interested in whether the GH5 really was as bad as @Super8 has made out, and whether MFT does have a fundamental look to it (beyond people not knowing how to choose focal lengths). If the GH5 really was that bad and has a fundamental look to it then it really can't be helped, but if there aren't fundamental issues then 1) what is going on, and 2) why are people mistaken? These feed into the future of MFT and the implications of Olympus selling its imaging business. I'm yet to actually get a straight answer on either of these issues - either on the GH5 or on a given sensor size having a 'look'. Happy to take it offline if people aren't interested, but lots of people were liking / disliking the conversation so I figured I wasn't the only one interested. Thoughts?
  23. Maybe you could help a basement dweller out and reply to the grading questions I asked about the GH5 image not holding up? It was a genuine question and however basement-y you think I am, a gracious individual would realise that for every person who comments, there are dozens more who follow along silently, and we could all do with learning more. It would help us to understand your perspective as well. Lots of people blow through these forums and when they have a different perspective or different requirements or standards then it's easy to get riled up, but it's worth it if they manage to explain their perspective and then the rest of us can understand why they have particular requirements or opinions. Sometimes it even happens that when they share theirs, we can share ours and very very occasionally, we all learn something.
  24. True. Lenses sometimes have T-stops slower than their F-stop due to transmission loss, but when comparing between sensor sizes, and given reasonably modern glass, it's kind of safe to assume that the T-stop of a lens is relatively close to its F-stop.
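The F-stop/T-stop relationship can be sketched as follows - the 90% transmittance figure is a made-up illustrative value, not a measurement of any particular lens:

```python
import math

def t_stop(f_stop, transmittance):
    """
    T-stop from F-stop and overall light transmittance (0-1).
    Transmitted light scales with transmittance, and stops scale with
    the square of the aperture ratio, hence the square root.
    """
    return f_stop / math.sqrt(transmittance)

# An f/2.8 lens passing ~90% of its light is close to T3.0 - i.e. only
# a small fraction of a stop slower, which is why F ≈ T for modern glass:
print(round(t_stop(2.8, 0.90), 2))  # → 2.95
```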
  25. It's difficult to tell if it will or not. The fact it's implemented in commercial products is a good sign that it's possible and someone thought it had enough value to implement, but one of the main challenges would be the parallax error between a depth pixel and a light-sensing pixel. IIRC at the moment the depth camera is separate from the optical camera, so that's a big problem, but it may be almost eliminated once we start seeing sensors with both sets of pixels on the sensor, similar to how we have PDAF pixels embedded on sensors now. Ultimately they'll still have a problem with fine detail and things like flyaways on a backlit portrait, but those will get better with AI, with more pixels, and with less distance between the optical and depth pixels. I wouldn't hold my breath or sell the vintage glass just yet, though!
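The parallax error between the two cameras can be estimated with the standard pinhole/stereo model - the baseline, focal length, and subject distances here are made-up illustrative numbers:

```python
def parallax_px(baseline_mm, focal_px, depth_mm):
    """
    Horizontal offset (in pixels) between where the same point lands
    in two cameras separated by baseline_mm, pinhole model: d = f * B / Z.
    """
    return focal_px * baseline_mm / depth_mm

# A depth camera 10 mm from the main camera, focal length ~1500 px:
print(parallax_px(10, 1500, 1000))  # subject at 1 m → 15.0 px offset
print(parallax_px(10, 1500, 5000))  # subject at 5 m → 3.0 px offset
```

Even a few pixels of misregistration is enough to put the depth edge in the wrong place on fine detail like hair, which is why shrinking the baseline (ideally to on-sensor depth pixels) matters so much.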