Everything posted by kye

  1. One challenge would be how to fill the frame. If you straighten the lines (from this //////// to this ||||||||) then the problem is that you have to crop off the sides a bit (see diagram #2). Then you'll either get black bars on the sides, or you'd have to crop in, but if it just cropped in and out based on horizontal movement then that would be very strange. If you were doing it in post with accelerometer data then you could make good decisions about cropping and other things, so maybe that's the better approach - to save the accelerometer data with the footage and then process it afterwards.
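To make the post-processing idea concrete, here's a minimal sketch in Python/numpy. The per-frame pan speed argument is hypothetical - it stands in for whatever a real accelerometer/gyro logger would give you - and the shear model is the simplest possible one:

```python
import numpy as np

def unskew_rolling_shutter(frame, pan_speed_px_s, readout_s):
    """Shear each scanline horizontally to undo rolling-shutter skew.

    frame:          HxWx3 uint8 array (one video frame)
    pan_speed_px_s: horizontal pan speed in pixels/second, derived from
                    accelerometer/gyro data (hypothetical input)
    readout_s:      time for the sensor to read out top-to-bottom
    """
    h, w = frame.shape[:2]
    row_dt = readout_s / h          # each scanline is exposed this much later
    out = np.zeros_like(frame)
    max_shift = 0
    for r in range(h):
        # later rows were exposed later, so the pan displaced them further
        shift = int(round(pan_speed_px_s * r * row_dt))
        out[r] = np.roll(frame[r], -shift, axis=0)
        max_shift = max(max_shift, abs(shift))
    # the sheared edges are now invalid - this is the black-bars/crop problem
    return out[:, max_shift:w - max_shift] if max_shift else out
```

Because max_shift varies with pan speed, a naive version crops in and out from frame to frame - the "very strange" behaviour above. Having the accelerometer data in post is what would let you smooth the crop over time, or just fix it at a worst-case value.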
  2. To be clear, I'd suggest the vast, vast majority are good-natured team players. It seems that there are a select few whose talent is so significant that they are tolerated in the industry, perhaps finding a small cohort that can stand to work with them. In many other industries it doesn't matter how good you are - there are levels of attitude problem that mean you can't really operate in any meaningful way at all.
  3. I agree. Correlation isn't causation, but I do suspect that quite a few on here are also higher-level pros of some kind or other who care about their reputation. I'm not a film-making professional, but everything online is potential fodder for a background search when applying for work, so there is that. Of course, I've noticed that the film-making industry seems to be relatively tolerant of people who simply can't get along with others, whereas they'd have a much harder time in some other industries where things are more about getting along with people than about your talent eclipsing your attitude problems. For example, I can't imagine how Werner Herzog would go in the forums!
  4. Interesting. The liftgammagain.com forums are real names only and are very civil. Other sites even require you to submit a profile and/or folio of work before you can join. It's an interesting idea, and anything that raises the level is worth a try. I'd support it.
  5. The Lensrentals blog talks about how some modern lenses (including my MFT Voigtlanders) are built in such a way that they can't be serviced. I'd imagine it's likely to be things like gluing instead of screwing things together, etc. In a sense, almost everything is repairable, but the thing working against that is the cost of labour. When you buy a $1500 lens and it breaks, if it's going to take 30 hours of labour to take it apart, diagnose the issue, order spare parts, re-assemble, test everything, measure the optics, and send it back to you, and they're charging $50 p/h, then you just paid the cost of a new one in labour alone. You can argue that the one serviced by the technician might be better aligned and set up than one out of a factory, but in high-quality manufacturing environments the equipment may be so specialised that it's hard to replicate the work manually. For example, machines might have special tools that can exert huge forces onto a part, yet do it accurately and without leaving marks because the tool is exactly the same shape as the surface they're pushing onto.

There's a big push in places like the US for "right to repair" legislation, because you buy a huge $250k tractor and it develops a fault, and in order to diagnose it you have to call out a licensed service technician, because the computer port requires proprietary software and is encrypted to stop you fooling with it. So instead of being able to diagnose and fix the tractor in the middle of the field in an afternoon, you have to wait, pay a call-out fee, then have the tech spend 2 minutes working out that a sensor needs to be replaced and another 5 minutes replacing it.
  6. In terms of hardware acceleration, the T2 chip in newer Apple computers has some kind of H.265 hardware acceleration, but I'm not clear on whether it's just decoding or encoding too. As the chip also handles encryption, it's hard to find references that talk about the H.265 side rather than that, but I've seen it crop up a few times in benchmark tests.
  7. My lens-owning goal is to use the lenses I have, and also to not use the lenses I have. We still have a trip booked with return flights to Europe in September and we're really hoping they cancel it, because "just don't use them, and forfeit the fare" is higher up on the preference list than "fly to Europe". I say goal because, at this point in the global pandemic, having a plan is ridiculous. In terms of using the lenses I have, I'd like to do some lens tests of the lineup, and maybe a bit of shooting locally might be nice too.
  8. Mark Holtze is pitching a Netflix show using his SMC Takumars, and this video shows some footage from an advanced test he shot. Footage from the Takumars and S1H.
  9. Haven't watched it yet, but noticed this in my feed:
  10. @Super8 Is there anything you can show us where specifically the GH5 has plastic skin tones, or where you can't get the same look? I keep asking because there are people whom I have spoken to at length and whom I respect who believe that there is a difference, but I can never get enough information about what they're looking at to be able to see it myself. It's easy to point to a video shot in glorious light with a cast and crew that are on their game and say that a different setup can't do that, because the only evidence against that statement would be a video exactly the same but shot on a different camera - and at sunset, with the light and breeze just-so, it's not possible to replicate. My question is about what specifically can't be matched. I am literally interested in someone pointing at part of a still frame of a video and saying 'see this thing here.. FF doesn't do that' or 'see that there.. MFT doesn't do that', or 'see how this thing moves here... and now see on the other one how it's different... if you can't see it then watch for the way that X does Y'.

One resource that I found very interesting was this: https://www.yedlin.net/NerdyFilmTechStuff/MatchLensBlur.html The basic idea that you can match blur on different crop factors isn't the headline here - it's that Steve Yedlin is saying it (he'd know!), and it's probably the most thorough analysis I've yet found. He doesn't talk about availability of lenses, but he definitely discredits the people who straight-out suggest that you can't get shallow DoF on a smaller sensor. He also doesn't talk about whether there's a 'look' inherent in various sensor sizes, but he rules a whole bunch of variables out.

I'll be the first person to admit that wider lenses with wider apertures that are sharp wide-open aren't available for MFT, and maybe that's the 'look' that you're referring to, but that would only apply to shots where there needs to be a larger aperture - MFT can easily match a FF 50mm at F4, for example, so in that particular shot it can't be the lack of lenses contributing to it. I'll also be the first to admit I haven't got any glorious images to post that will "prove" the GH5 does hold up, but even if you gave me an Alexa I think I still couldn't do that - the weak link in both setups would be my skills in post!

I'm also half-suspecting that it's actually not the camera at all, but everything else. What I mean is that film-making is a very deep and very difficult thing to do well enough to get spectacular results, and by the time you're good enough to do all the other stuff right, you're spending so much money anyway that of course you just rent an Alexa for the shoot. So in a way I question whether it's not that it's not possible to do on smaller-sensor cameras, maybe just that it isn't done on smaller cameras. You're saying you can see the look - I want to also be able to see it.
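For anyone who wants the arithmetic behind that 50mm-at-F4 example: the equivalence Yedlin demonstrates boils down to dividing both the focal length and the f-number by the crop factor (c = 2 for MFT vs FF) to match field of view and depth of field:

```latex
% matching FoV and DoF across formats via the crop factor c = 2
f_{\mathrm{MFT}} = \frac{f_{\mathrm{FF}}}{c} = \frac{50\,\mathrm{mm}}{2} = 25\,\mathrm{mm}
\qquad
N_{\mathrm{MFT}} = \frac{N_{\mathrm{FF}}}{c} = \frac{4}{2} = 2
```

So a 25mm at f/2 on MFT frames and blurs like a 50mm at f/4 on FF. The per-area exposure differs by two stops, which is where the ISO/noise trade-off between the formats lives.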
  11. lol about the spray tan - I'd be the last person to need one! The Micro sure makes all reds go pink and saturates things pretty strongly. The colour checker is a Datacolor one and I just got it off eBay. They're quite reasonably priced (for a colour checker, anyway) and the reverse is a big grey card with a greyscale on the side, so it's useful for setting WB on location if you need to do that. In terms of grabbing Olympus stuff, I'd wait for the prices to all drop and then peruse the bargains.
  12. They were shot as a stress test and thus are very high DR - I'm about 50cm from a naked 150W halogen bulb and there is no fill, in order to stress-test the DR. IIRC the exposure was set up so that the whites were just below clipping, and the blacks are definitely clipping because the scene is >13 stops. You're probably right about the saturation on the first two, but I would have thought that would expose issues in skin tones even more than a more neutrally saturated image? I figured that the test was a good one as it was the GH5 compared against an uncompressed RAW image in 100% controlled test conditions. I'm simply trying to get a baseline for what you deem as "doesn't hold up", which is a subjective judgement.
  13. Great conversation. @Super8 I don't really mind if MFT has a 'look', and in retrospect I wouldn't have thought that having a look was a bad thing - after all, the 'FF look' is an often-used phrase and is normally referred to as a desirable thing, not a liability.

I have a theory that once a camera is above a certain level of quality, you can match its colour to any other camera, assuming you have enough skill in post. I figure that I'm either right or wrong, and by pursuing it I'll learn a lot either way, so some months ago I bought a BMMCC in order to shoot it side-by-side with the GH5 in identical conditions and try to match them. I chose the BMMCC as it has a reputation for excellent colour and there's no way I can afford an Alexa, so this was the best compromise. People also like the motion cadence and other aspects of it, but I'm not there yet in my comparisons. The project is ongoing (although it got paused during covid times, as thanks to my day job I had less spare time and energy rather than more), but I did manage some colour matching I thought wasn't too bad. I'm curious to get your impressions of the below. The first one is the GH5 in 150Mbps 4K 16:9 HLG mode, the second is the BMMCC in 1:1 RAW, graded with WB and CST only. I'm not that pink in real life, but I'm probably not too far off it - office worker tan lol.

Also, are you seeing the plasticising of the skin in this GH5 shot? Once again, genuine question. I don't doubt that things like skin texture are negatively impacted by compression; the question is how keen our judgement is and how much each of us is willing to tolerate in our images. Personally I'm not that picky, and for my purposes it's totally fine, but getting your impressions might help to calibrate the discussion. Here's the GH5 shot graded via a WB and CST (although the GH5 HLG doesn't actually correlate to either rec.2100 or rec.2020, so it's not a perfect conversion, which is super annoying), so I don't think anyone would call this a nice grade (or let's hope not!).
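For anyone curious what "graded with WB and CST only" amounts to mechanically, here's a rough numpy sketch. The matrix values are invented for illustration - a real CST is built from the published camera and target primaries, and HLG isn't a simple power curve (which is part of the annoyance above):

```python
import numpy as np

# Illustrative gamut matrix only - each row sums to 1 so white stays white,
# but the real values depend on the source and target colour spaces.
GAMUT_MATRIX = np.array([[ 1.20, -0.15, -0.05],
                         [-0.10,  1.15, -0.05],
                         [-0.02, -0.18,  1.20]])

def wb_and_cst(rgb, wb_gains=(1.0, 1.0, 1.0), gamma=2.4):
    """White balance + colour space transform, in the order a CST
    effectively works: linearise, per-channel gains, matrix, re-encode."""
    lin = np.power(np.clip(rgb, 0.0, 1.0), gamma)   # crude linearisation
    lin = lin * np.asarray(wb_gains)                # WB = per-channel gains
    lin = np.clip(lin @ GAMUT_MATRIX.T, 0.0, 1.0)   # gamut conversion
    return np.power(lin, 1.0 / gamma)               # back to display gamma
```

The point is just that the whole match is a few gains plus one 3x3 matrix applied in linear light - there's no per-shot hand-massaging of hues hiding in it.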
  14. I actually think that the GH5 and MFT-vs-FF debate throughout the thread is on topic, assuming that's what @Andrew Reid was referencing. Olympus has sold its imaging business; it was one half of the MFT alliance and was responsible for basically half the MFT cameras made (excepting the odd model from BM, Zcam, etc), so one of the biggest impacts might be the death of the MFT system. To that end, I'm interested in whether the GH5 really was as bad as @Super8 has made out, and whether MFT does have a fundamental look to it (beyond people not knowing how to choose focal lengths). If the GH5 really was that bad and MFT has a fundamental look, then it really can't be helped, but if there aren't fundamental issues then 1) what is going on, and 2) why are people mistaken? These feed into the future of MFT and the implications of Olympus selling its imaging business. I'm yet to actually get a straight answer on either of these issues - either on the GH5 or on a given sensor size having a 'look'. Happy to take it offline if people aren't interested, but lots of people were liking/disliking the conversation, so I figured I wasn't the only one interested. Thoughts?
  15. Maybe you could help a basement dweller out and reply to the grading questions I asked about the GH5 image not holding up? It was a genuine question and however basement-y you think I am, a gracious individual would realise that for every person who comments, there are dozens more who follow along silently, and we could all do with learning more. It would help us to understand your perspective as well. Lots of people blow through these forums and when they have a different perspective or different requirements or standards then it's easy to get riled up, but it's worth it if they manage to explain their perspective and then the rest of us can understand why they have particular requirements or opinions. Sometimes it even happens that when they share theirs, we can share ours and very very occasionally, we all learn something.
  16. True. Lenses sometimes have T-stops slower than their f-stops due to transmission loss, but when comparing between sensor sizes, and given reasonably modern glass, it's kind of safe to assume that the T-stop of a lens is relatively close to its f-stop.
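For the record, the relationship (with τ being the lens's total transmittance, and the numbers here made up but plausible) is:

```latex
T = \frac{N}{\sqrt{\tau}}
\qquad \Rightarrow \qquad
\tau = 0.85 \text{ at } f/2.8 \text{ gives } T = \frac{2.8}{\sqrt{0.85}} \approx 3.0
```

So even a lens losing 15% of its light only drifts about a quarter of a stop from its f-stop, which is why the assumption above is usually safe.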
  17. It's difficult to tell if it will or not. The fact it's implemented in commercial products is a good sign that it's possible and someone thought it had enough value to implement, but one of the main challenges would be the parallax error between a depth pixel and a light-sensing pixel. IIRC at the moment the depth camera is separate from the optical camera, so that's a big problem, but it may be almost eliminated once we start seeing sensors with both sets of pixels on them, similar to how we have PDAF pixels embedded on sensors now. Ultimately they'll still have a problem with fine detail and things like flyaways on a backlit portrait, but those will get better with AI, and with more pixels and less distance between the optical and depth pixels. I wouldn't hold your breath or sell your vintage glass though!
  18. I think it's a market strategy - to prioritise new features over being rock-solid. Having said that, when I started on Resolve 12.5 it would crash or need a restart about once every 30 minutes; now that's down to maybe once every two hours or more, and it's mostly stopped crashing outright and instead just needs a restart because something has gone funny - you adjust something and nothing happens. Also, have a look at what the PP users are complaining about - they have unreliable software that isn't adding 15% of new features on an annual basis, and they're paying for their software again and again and again instead of paying for it once and never again. Resolve was nowhere in the market a few years ago and now it's commonly listed as part of the holy trinity - PP, FCPX, Resolve. Given the current trajectory they'll have to slow down new development because they'll run out of things to add, and at that point I think they'll go into more of a refinement mode and it will get even more stable.
  19. Absolutely. One thing I really like about MFT is that I can stop my f0.95 primes down to f1.4 and get a bump in sharpness, an exposure of about T1.4, and a DoF equivalent to FF f2.8. Of course, FF has the advantage with lower ISO noise, but getting a higher FF-equivalent T-stop while keeping a FF-equivalent f-stop makes up for the difference in many ways. I am also a bit in love with a shallower DoF, I'll admit it. However, it's part of a larger creative context, which I'll elaborate on a bit.

There are a number of things that make an image look 'cinematic', or, if you don't like that word (some don't), give an image a higher level of production value. These include things like lighting the talent brighter than the background (or the other way around), creating colour contrast with things like orange/teal grades that provide more subject/background contrast, fog to make distant objects less contrasted and also create rays if desired, using out-of-focus backgrounds, and using subject and camera movement to outline the various planes in the scene. One thing all of these have in common is that they emphasise depth in the image. I think that creating depth in the image is a fundamental goal of the medium, because photography and videography are attempts to replicate a 3D world on a 2D medium.

To this end, Deakins and I operate in very different worlds. Deakins will use all of the above and more to create depth, whereas I operate in completely uncontrolled conditions, with available light, and often without the ability to even move the camera around that much to manage subject-to-background distance. So I am interested in having a slightly shallower DoF in my images to partly compensate for the less ideal other factors, and in situations where I have a greater subject distance (or a higher ratio of camera/subject distance to subject/background distance than Deakins would choose) I want a lens that goes faster, so that I can get the same amount of background defocus in the more challenging situation. This is kind of like when we've talked in other threads about shutter angle, and someone said they like having a >180 shutter angle to compensate for other areas where their image is a bit lacking. I bang on about tech on these forums probably more than the average member, but I do so in the context of the creative output. I do it so I can get nice images, and my learning journey has been one of working out what matters more, what matters less, and what doesn't matter at all - to me, at least.

I'm aware of smartphones getting better in low light, and also of the folded camera modules with longer focal lengths. I had a few long conversations with my dad about the Light L16, as, had it lived up to its claims of being a DSLR replacement (it didn't), he would have bought one. His primary interest was using it in very high-dust environments, for which, being completely sealed, it would have been a great fit. He's killed a number of cameras and has now basically given up because of this. We talked about those P&S 'tough' cameras that have a standard zoom but are completely waterproof, but the image never stacked up. The A7 series are definitely quite small, and I was considering an A7iii + 24-105/4 setup against the GH5 back when I bought the GH5. From memory it was the 10-bit and better IBIS that sealed the deal for me. I was interested in the better low light of the A7iii, but in the end the GH5 with fast primes sees slightly better in the dark than I do, and that's good enough for me. If I can't see it, I won't miss shooting it.

I've said above that I think a mid-sized sensor will hang on. I don't know if it will be MFT or 1-inch, but considering there aren't a lot of 1" ILCs, I think MFT has the edge there, although the RX series sure seems to have made a lot of sales. The advancements in smartphone low-light performance will trickle into the mid-sized sensor format, so in that sense it will benefit from the tiny smartphone sensor market, and from the mum and dad taking photos of junior running around in normal indoor lighting, which places very high but completely practical requirements on low-light performance.
  20. This whole conversation is jumping around. We're simultaneously talking about whether MFT can create professional images, and also about what is being sold in the market. These aren't part of the same conversation, because the market for what can create professional images is very small in comparison to what is getting sold. The percentage of the market made up by cameras approved by Netflix or the EBU etc is very, very small.

You are the one making this personal, not me. Have a read back through our conversation and look for every comment where one of us made a statement about the other one's level of knowledge, character, or capacity for reasoned judgement. Seriously, I encourage you to do so. I did.

Size and weight aren't a cop-out argument; smartphones are a counter-example to both your comments about sensor sizes moving up and about size/weight (which are a proxy for convenience). Size and weight aren't important for productions where camera rigs are larger and typically supported as part of some kind of rig, be it a tripod, shoulder rig, or other, but they are important in the market because lots of cameras are used hand-held or in ultra-portable lightweight setups. My view is that the market is becoming a U-shaped curve, with smartphones at the tiny-sensor end representing the vast majority of consumer photography, and S35/FF/FF+ at the other end representing the high-end photography and cinematography markets. The middle used to be full of pocket cameras for the general public, and that's the part of the market that smartphones decimated. The question is how far the drop in the middle of the curve will go. MFT is almost exactly in the middle of that dip. If we imagine a situation where all that is left is smartphones at one end and FF/FF+ professional stills and cine cameras at the other, would that make sense? Smartphones are terrible with long focal lengths, as they're physically too large for the form factor, and are bad in low light. FF cameras aren't pocketable and the lenses are very large. From this scenario, I suggest that there is a place in the market for a middle-sized sensor. Maybe it's a battle between the 1" sensor and the MFT sensor, but I see a market for a "middle-sized" sensor well into the future.

Of course the GH5 footage seen by colourists is going to be more challenging than Alexa footage. Productions shot on an Alexa are much more likely to be lit well and shot under controlled circumstances. Productions shot on an Alexa are more likely to have a colourist as a matter of course, rather than as a trouble-shooting tactic. Productions shot on the GH5 and many other cameras of this price point and market tier will be graded by the DP or editor as a matter of course, and only brought to a colourist when there is a problem that can't be dealt with by the team. And when I have discussed the GH5 with colourists, the reaction wasn't one of dread - it was 'sure, we see them on a regular basis, no worries'.

I agree with you that the cine market will be fine long-term. What we're seeing is a shake-up of the market, not a decline of the market. Home theatre had an amazing impact on the multiplex, but we're watching multiplex content more than ever on streaming sites. The fact that people can earn a living making high-production-quality content for free on YT or Instagram through their own branding deals means that these platforms are also feeding into the high-end market. Whenever I see a glimpse of a camera rig (eg, a reflection in a window) I try to get a good look at what they're using, and the number of 'normal' YT shows that use a C300 or FS5 is surprising. You see it less on reality shows on streaming, as they're more carefully edited, but you still see reflections from time to time. And, counter to your claims about Netflix-approved requirements, I see other setups too - one show used a GoPro Fusion 360 quite a bit.

Even if this is true, getting RAW, or higher frame rates, or better EIS, or better AF etc are all possible. Getting a better combination of things, like 4k60 10-bit, or even 5k60 12-bit, may also be possible (a quick google suggests that the GH5 uses the IMX272AQK sensor, which can do 5K60 open-gate 12-bit, 5K80 16:9 10-bit, 5K111 2.66:1 10-bit, 2.6K180 2:1 10-bit, etc). It could even implement V-Log instead of V-LogL, or a straight rec2100 or rec2020 implementation instead of whatever the hell it is using now (it's not either, I did tests).

So, genuine question: what are you looking for when you say the image "never held up"? Is this a value judgement in terms of how nice things were, or was there some objective measure? After watching a bunch of 8-bit vs 10-bit comparison videos where people tried to break 8-bit with varying levels of success, I tried to break the 10-bit and couldn't. For example, I found an image with subtle gradations and applied a curve that looked like a square wave, way beyond anything that would occur in real life. Or are you referring to a situation where a more normal colour grade exposed compression artefacts or colour glitches in the image?

I agree that depth and lens choice are very poorly discussed by MFT users. For example, the fact that the default MFT pro lens is the 12-35/2.8, and it's the equivalent of a 24-70/5.6, but people always refer to it as a 2.8 in the same way as a 24-70/2.8 lens for FF, is infuriating to me. The Sigma 18-35/1.8 with a 0.71x SB, which is equivalent to f2.5, is better in this regard, but I agree that it's not spoken about the way it should be. I bought fast primes knowing what depth I was interested in getting, but it's a compromise. On the other hand, I also use a 70-210/4 zoom + 2X TC for sports, and the money I saved from not having to buy a 400mm FF lens paid for my entire setup, so there is that.
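For anyone wanting to replicate the break-the-codec curve test, here's roughly what I mean, sketched in numpy. A steep sigmoid stands in for the square-wave-ish curve, and the numbers are illustrative, not measurements of the GH5:

```python
import numpy as np

def stress_bit_depth(bits, curve_strength=40.0):
    """Quantise a subtle gradient to `bits`, apply an extreme contrast
    curve, and count the distinct output levels that survive."""
    levels = 2 ** bits
    gradient = np.linspace(0.45, 0.55, 4096)           # subtle 10% ramp
    quantised = np.round(gradient * (levels - 1)) / (levels - 1)
    # steep sigmoid centred on mid-grey: the huge slope exaggerates banding
    curved = 1 / (1 + np.exp(-curve_strength * (quantised - 0.5)))
    return len(np.unique(np.round(curved * 255)))      # distinct 8-bit steps

for b in (8, 10):
    print(f"{b}-bit source -> {stress_bit_depth(b)} distinct output levels")
```

The 8-bit ramp comes out as a couple of dozen fat bands after the curve, while the 10-bit one still has roughly a hundred steps - which lines up with my not being able to break the 10-bit files with any sane grade.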
  21. It is Panasonic, so I suppose there's a slim chance that they'll improve it in a future firmware upgrade, but I sure wouldn't buy one betting on that.
  22. So you couldn't just put the same processing on it and have it broadly match the other footage? That's a real issue - it was sold as exactly that, so wasn't that kind of the point?
  23. This is interesting and useful. It shows how Resolve can render and cache the colour grade up to a certain node, but not beyond it. So, for example, if your node tree started with a Deflicker (great for timelapses), then NR, then a Motion Blur (for fake 180 shutter!), you could render those three nodes into a cache and then grade normally after them, and unless you go back and adjust one of them it wouldn't need to re-render that cache. I'd never heard of this before, so thought it was worth sharing.
  24. I agree with @Mustafa Dogan that it obviously isn't well-matched to the marketing spiel, but that we should just consider it on its merits. Remember when half the P4K thread was just people jumping up and down about it having the word "Pocket" in the title? I think this is similar 😂😂😂
  25. Tom Antos did the comparison below, in which he pushed the cameras both under and over, and he shows the results of pulling the exposure back in post, so it's nice and clear. What I'm not so clear on is whether this is the version of the Ursa you're referring to - there are a few and I'm not across all of them.

I think there are two challenges. The first is that if one brand directly compares their product to another brand then they can get sued - which seems odd, but seems to be a thing. That's why comparisons are always with a "leading competitor" instead of naming them directly. The other challenge is that there is no official way to measure DR objectively. When someone says it's 12.1 stops, they're choosing a level of noise, and had they chosen a different level of noise it might have been 12.4 or 11.8. Because there's no standard threshold, you can't compare measurements done by different people against each other. Even if we defined a number and everyone started testing against it, there are other factors to consider too, such as whether the noise is measured as RMS, or peak-to-peak, or some other way. Remember how you used to go to big box stores and there were these tiny little boom boxes that said they were 3000W, but your home theatre system is only 100W and it's way louder? That's a difference of measurement methodology. You can, however, directly compare a given camera against other cameras that you have some kind of reference for, so that's useful. Reading a number for DR is less so.
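To illustrate the threshold problem, here's a toy calculation. The SNR numbers are invented and real measurements aren't a neat straight line, but it shows how the same camera "measures" three different DR figures depending only on where you draw the noise floor:

```python
def dynamic_range_stops(snr_by_stop, snr_threshold_db):
    """Count usable stops below clipping, given measured SNR per stop.

    snr_by_stop:      SNR in dB for each stop below clipping (index 0 =
                      just below clipping); values here are made up
    snr_threshold_db: the arbitrary 'acceptable noise' cutoff - this
                      choice is why two testers get different DR numbers
    """
    return len([s for s in snr_by_stop if s >= snr_threshold_db])

# hypothetical camera: SNR falls 3 dB per stop into the shadows
snr = [50 - 3 * stop for stop in range(16)]
for threshold in (20, 12, 6):   # three equally defensible cutoffs
    print(f"SNR >= {threshold} dB: {dynamic_range_stops(snr, threshold)} stops")
```

Same data, and it comes out as 11, 13, or 15 stops depending only on the cutoff - which is exactly why numbers from different testers can't be compared directly.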