Everything posted by kye
-
It wasn't a miraculous firmware fix - it was them deliberately crippling their product, getting called out for it, and then fixing it without apologising. I wouldn't call you dubious, I'd call you a realist. Now, I'd happily buy a Canon camera if my needs aligned with one and the price was reasonable, but before I parted with any money I'd have to do a borderline forensic analysis to make sure the camera did everything I needed it to do, because history has shown that what you assume will be provided may not be.
-
Good points. Perhaps the first thing I should say is that I've never declared that MF is superior, just better for me, and better for some people, although probably a minority at this point. I'm trying to engage in a more nuanced discussion, rather than it just being one-sided. Nothing is all-good or all-bad, and everything depends on perspective.

In terms of "getting the shot", the situation is more complicated than just AF. The primary factor, for me, is shooting stable hand-held footage. There's no point slightly improving my focusing hit-rate if I'm going to lose more shots to shake than I gain from better focus. I find that shooting MF with my left hand under the lens gripping the MF ring and my right hand on the camera's grip is a pretty good approach if the camera is lower than eye level. If it's above eye level then I'll normally be lifting it high above my head, so I'll close the aperture a bit, pre-focus, and then hold onto something near the end of the lens that doesn't move, to keep my two hands as far apart as possible. I don't normally miss focus on these, and the screen is normally too small for anything except composition, so this setup probably doesn't matter either way. My favourite position is looking through the EVF with my left hand doing MF and my right hand on the grip (like above), giving me three points of contact. Stabilisation isn't normally a big issue in this setup, as I wouldn't normally do this unless I was stationary, whereas the other setups are often used when I'm walking or squatting or holding the camera out at arm's length, etc. There's no good way I've seen to use touch-AF on the screen while also stabilising the camera with both hands, so that doesn't beat MF in terms of getting the most shots. It also doesn't work when using the EVF (pretty sure I'm pressing the screen with my face at that point too!).

I could potentially use the thumb of my right hand on a joystick, but that would really lessen the grip from that hand, as it would limit the amount of pressure I could apply with my thumb, so I wouldn't really be holding the camera much with that hand. Thinking about it now, MF loses me far fewer shots than stabilisation does. If it were the other way around, I could absolutely see my cost/benefit assessment of AF changing substantially.

The other thing to consider is that, at least in my eyes, imperfect MF still has a loose kind of human feel that suits the content I shoot, but when AF misses focus, or is in the process of acquiring focus, the aesthetic just isn't desirable. I tend to miss focus or fail to track a subject when there are lots of things happening and lots of movement, so it suits the vibe; if I have a second or two to adjust aperture and focus then I don't miss those shots. When AF misses a shot, it seems really at odds with the aesthetic that the rest of the image is creating. Watching AF pulse, or rack focus either too fast or too slowly, always seems so artificial, like a robot having a malfunction. Also, I'm not sure how many AF mechanisms are able to ease in? They seem to rack at one speed and instantly stop on the subject, or pulse momentarily (which is worse). This is probably made worse when the focus speed is set too high, but I'd suggest it's still a factor.

"Getting the shot" isn't just making sure it's exposed and focused properly and not too shaky - it's about getting the shot that has the most chance of making the final edit. Getting the technical things right is the bare minimum in this context and only gets a shot through the assembly stage of the edit; it's the subject matter and the aesthetic that determine whether the shot makes the final edit, and if the shot is interesting then technical imperfections can be tolerated.

I can imagine that if you're shooting commercially, the equation between technical imperfection and aesthetic is very different, so it would generate different decisions based on different trade-offs.
-
Well, we're back to the "what is visible" topic again. *sigh*

I set up my office (and the display I'm writing this on) according to the THX and SMPTE standards for cinemas, by choosing a screen-size to viewing-distance ratio that falls right in the middle of their recommendations (the two were a bit different, so I picked a point between them). My setup is a 32" UHD display at about 50" viewing distance. At this distance I cannot reliably differentiate between an 8K RAW video downscaled to UHD and the same video downscaled to 1080p, and at my last eye check I had better than perfect vision. For 8K to be visibly different from 4K, you'd need a viewing angle considerably wider than I have while sitting here, which would (for those billboard screens) make them either incredibly large or quite close to street level. Judging from the angle and the lack of wide-angle lens distortion in those videos, I'd suggest those signs are too far from street level and not large enough to compensate, but maybe I misjudged. It really is a function of viewing angle.

I don't doubt that they're effective, though. HDR is hugely impressive and looks incredibly realistic in comparison to lower-DR displays. I'd imagine they've added more than a modicum of sharpening in post to those images, plus they're CGI, which will be pixel-perfect 4:4:4 to begin with (unlike any image generated by a physical lens and captured 1:1 from a sensor).

In terms of your cinema comparison, I don't doubt that the images are different, although half of each cinema will be sitting closer than the recommended distance - the distance at which the 4K vs 1080p differences are tested. If we assume the middle row is where 4K stops being visibly better than 2K, then the front row is probably less than half that distance from the screen, so it might be past the point where 4K can keep up with 8K. I remember sitting in the front row of a crowded cinema (the last seats available, unfortunately!) and having to turn my head side-to-side during scenes with more than one person - the viewing angle on that must have been absolutely huge!
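For anyone curious, the geometry here is easy to check. This is a rough back-of-envelope sketch (assuming a 16:9 panel and the commonly quoted ~1 arc-minute limit of visual acuity - both assumptions, not measurements) of the 32"/50" setup described above:

```python
import math

def viewing_angle_deg(diag_in, distance_in, aspect=(16, 9)):
    """Horizontal viewing angle of a flat display, in degrees."""
    w, h = aspect
    width = diag_in * w / math.hypot(w, h)   # screen width from the diagonal
    return 2 * math.degrees(math.atan(width / (2 * distance_in)))

angle = viewing_angle_deg(32, 50)            # the setup described above
arcmin = angle * 60                          # total horizontal arc-minutes
print(f"viewing angle: {angle:.1f} deg")
print(f"UHD pixels per arc-minute: {3840 / arcmin:.2f}")
print(f"8K  pixels per arc-minute: {7680 / arcmin:.2f}")
```

At roughly 31 degrees, UHD already puts about two pixels into every arc-minute of vision - past the nominal acuity limit - which is consistent with not being able to tell the UHD and 1080p downscales apart from this seat.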
-
When I wrote that I was only thinking of video, yes. Of course, unless you're shooting sports or wildlife, most AF is pretty good these days. The GH5 AF is quite unreliable for video, but for stills it's pretty good.

In terms of conspiracies, there are none. The common theme of my posts is that manufacturers are shafting their consumers as much as they can to maximise their profits as much as they can - that's not a conspiracy, that's basic economics! I don't like it, but that doesn't make it a conspiracy.

I think you've oversimplified this. I shoot in some of the least controlled situations around and switched to manual focus because the AF didn't choose the right thing to focus on, not because it couldn't focus on the thing it picked. The more chaotic a scene, the more things there are to choose from and the less chance the AF will choose correctly. The way I see it, it's the middle of the spectrum that benefits most from good AF:

- if you have complete control of the scene, then MF is probably fine
- if you have a simple scene - an interview, or a bridal couple standing clear of other objects and people while you film them from a gimbal - then AF is great, because it will be reliable in choosing what to focus on and can track your subject as they move forwards and backwards
- if you have too much chaos, then the AF will lose more shots by choosing the wrong thing than MF would miss by not hitting the target

People keep forgetting that focus is a creative element, one of the elements used to direct the viewer's attention and experience over the course of your final edit, not a purely technical aspect. The GH5 AF is spectacular at focusing; it's crap at choosing which thing to focus on. That's the broader challenge... People talk about Eye-AF, but notice that they test AF with only one person in frame? There's a reason for that, and it's not that camera reviewers on YT don't have any friends 🙂

I'm none of those things - I'm into this thing people don't seem to have heard of, called "getting the shot". I get more shots with MF because I know what I want to focus on. The Canon R3 mode where the AF point moves around by tracking your eye would probably be the only AF system that would meet my needs, and if that were available in a camera within my budget then I'd happily swap to it. I'd probably miss having live control over how fast the focus transition is, but it would still be a better outcome because I'd miss fewer shots. I get more shots with IBIS because I shoot in situations where even OIS can't compensate (OIS doesn't stabilise roll, for example), and those shots are unusable.

If you've somehow concluded that I'm anti-AF then you've (once again) misinterpreted what I'm saying. I'm saying that AF isn't perfect, and that sometimes it gets in the way of the shot. As a member of the "get the shot" club, I'm against that. If you're in situations where AF will help you get the shot, then cool - I'm all for it. If manufacturers want to push AF as the only acceptable way to operate, that's fine with me up until they start limiting MY ability to get the shot, which does happen. Thus my posts talking about the downsides of it. Mistaking some mild criticism of a technical function for trying to "discredit the majority of users out there that value things like AF" is, quite frankly, preposterous.

Well, if that's your argument, then you're agreeing with me. Camera YT, and all the camera groups I can find online, idolise all the things associated with cinema. Colour science is best when it's like film (no-one is talking about getting that VHS colour science), lenses are best when the aperture is fastest (people aren't talking about which lenses go to F32 vs F22 in order to get that deep-DoF fuzzy camcorder look), etc etc. Besides, EOSHD seems to be a rare sanctuary of people who know things. Most other groups only seem to talk about shallow DoF, LUTs, and how to get your camera and the Sigma 18-35 to balance on your gimbal, and never get beyond that.
-
That would be great. "The pixels are just awful, but it's ok because there's a bazillion of them!" was never an attractive concept. Of course, photographers have been shooting RAW for way longer than we have in video, and in video they give you more bitrate for higher resolutions, so it was never about the straight number of pixels anyway. We may get there eventually, but it's not going to be any time soon!
-
I'd also be particularly interested in an FPii if they added:

- a tilt or flip screen
- IBIS (OIS doesn't stabilise rotation, which has ruined shots of mine on many occasions)
- improved compressed codecs (the h264 in the FP was very disappointing compared to the RAW)

I do wonder, though, whether those additions (especially IBIS) would make it a different type of camera altogether. Hollywood really dislikes IBIS (the sensor moves around even when it's switched off), so adding IBIS would kind of eliminate it from the world of cinema cameras. Then again, they didn't really market it as a cinema camera to begin with, despite it being a 4K FF uncompressed RAW camera, so I'm not sure who or what Sigma thinks this camera is for.
-
Considering the economics, I'd suggest that the shift towards AF and native lenses is (at least partially) due to manufacturers taking advantage of naive consumers by pushing these self-serving concepts in order to sell more lenses and lock users into their own ecosystems. I say 'naive' because huge numbers of internet users want AF and cinematic images, despite the fact that cinematic images are generated mostly by adapted manual lenses.
-
I never thought about that, but yes, that makes sense. When I was shooting stills I would shoot exclusively RAW, as the JPG versions always clipped the highlights (which is madness, but there we are), so if those file sizes doubled/tripled/quadrupled then that would potentially be a big deal, and most people wouldn't really want 48MP over 12MP / 16MP / 24MP. I mean, 12MP sounds pretty low-res, but it's the same detail as 4K RAW video, which is plenty good enough for most purposes. Of course, the storage requirements of RAW stills are trivial compared to those of video, but for stills-only shooters it might be a thing.
-
These little things really make me wonder what Canon are doing. I mean, if one Canon development team lead spent one day at CES writing down all the things people suggested, then things like this would be added to every camera without any real challenge - if they already have a function that puts frame guides on the screen and a menu to choose between different ones, then adding more should only be a day or two of work for an engineer. This seems relatively plausible. A WFM requires that the image (or a low-res version of it) be processed and a graph generated from that analysis. My guess is that they might not have a spare chip available to generate the WFM, or (if they can do it in preview mode but not during recording) it might be generated by a chip that is only busy during recording (e.g. for NR or compression). False colour, on the other hand, could simply be a display LUT, which adds no extra processing load, as the functionality to apply a display LUT and a recording LUT is already present.
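To illustrate the difference: a waveform needs a statistical pass over the whole frame, while false colour is a pure per-pixel lookup. Here's a minimal sketch of that lookup idea (the band edges and colour names are illustrative assumptions, not any manufacturer's actual false-colour scale):

```python
# False colour as a per-pixel lookup on luma: no scene analysis or
# graph generation needed, unlike a waveform monitor.
# Band edges are illustrative only.
FALSE_COLOUR_BANDS = [
    (0.00, 0.05, "purple"),   # crushed blacks
    (0.05, 0.40, "blue"),     # shadows
    (0.40, 0.60, "green"),    # mid-grey
    (0.60, 0.95, "yellow"),   # highlights
    (0.95, 1.01, "red"),      # clipped
]

def false_colour(luma: float) -> str:
    """Map a normalised luma value (0..1) to a warning colour."""
    for lo, hi, colour in FALSE_COLOUR_BANDS:
        if lo <= luma < hi:
            return colour
    return "red"  # out-of-range values read as clipped

print(false_colour(0.5))   # mid-grey exposure
print(false_colour(0.99))  # clipping warning
```

Since each output pixel depends only on its own input value, the whole thing can be baked into the same LUT path the camera already uses for display LUTs.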
-
Another way to look at it is that for the same sensor read-out (data rate), you can have:

- 8K with 26.8ms of RS (that's what the A74 gives in full-sensor readout mode), or
- 4K with 6.7ms of RS (roughly on par with the Alexa 35)

I know which of those I'd prefer. Unfortunately, video is so complex that much of the camera-buying public (from parents to professional videographers) simply doesn't know any better, and is therefore subject to "more is better" marketing tactics. In cameras, and also in life, I've come to realise that every statement that is worthwhile begins with "well, it's actually more complicated than that...", but I've also come to realise that most people tune out when they hear those exact words.

There is one thing I am quite puzzled about, which is why they don't use the extra pixels to increase the DR of the camera, especially considering that DR is one of the hyped marketing specs that gets used a lot. For example, if they took an 8K sensor, installed an OLPF that gave ~4K of resolution, and made each colour (RGB) come from a 2x2 grid of photosites of that colour, they could either:

- average the values of each group of four photosites to lower the noise floor by a couple of stops, or
- give each photosite in the 2x2 grid a different level of ND dye, in addition to the RGB dye, potentially capturing that hue at up to four different luminance levels.

If they did the latter, spacing the ND dyes perhaps 3 stops apart (which is lots of overlap, considering each photosite will have at least 10 stops of DR on its own), then the photosite with the most ND would have 9 stops of extra highlight headroom before clipping, potentially giving 20 stops of DR when combined with its neighbours in that 2x2 grid. This wouldn't need two separate readout circuits the way the ALEV/DGO/GH6 sensors work; it would only need a very simple binary processor to merge the 8K readout into a 4K signal with huge DR.

I mean, wouldn't Sony's marketing department love to have a camera with 4K and 20 stops of DR? That's more than ARRI, and it would make headlines everywhere. Plus, it could be done with existing tech and just a single extra chip in the camera. Of course, they'd charge $10K for it, but still.
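The merge logic for that staggered-ND idea really is simple - for each 2x2 group, take the most sensitive photosite that hasn't clipped and scale its reading back up by the ND it sat behind. A toy sketch (normalised values, noise ignored; the 3-stop spacing is the assumption from above, and a real implementation would blend neighbouring readings rather than hard-switch):

```python
ND_STOPS = [0, 3, 6, 9]   # assumed per-photosite ND in the 2x2 group
FULL_WELL = 1.0           # normalised clip point of a single photosite

def merge_2x2(scene_lum: float) -> float:
    """Recover scene luminance from four photosites with staggered ND.
    Uses the most sensitive photosite that hasn't clipped, scaling its
    reading back up by the attenuation it sat behind."""
    for stops in ND_STOPS:
        reading = min(scene_lum / (2 ** stops), FULL_WELL)
        if reading < FULL_WELL:              # this photosite didn't clip
            return reading * (2 ** stops)
    return FULL_WELL * (2 ** ND_STOPS[-1])   # all clipped: best estimate

# A scene value 8 stops over a single photosite's clip point is still
# recoverable via the 9-stop-ND photosite:
print(merge_2x2(2 ** 8 * 0.9))
```

The per-group decision is just a few comparisons and a shift, which is why this could plausibly be done by very simple logic between the 8K readout and the 4K output, rather than dual readout paths.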
-
Those look awesome! (The middle one is geoblocked, but the others are fun.) However, at this kind of viewing angle, I think even an uncompressed RAW 720p image would look detailed and high-quality. Most movie theatres are high-quality 2K, and they look far superior to 4K YT due to the bitrate differences, despite cinemas presenting much larger viewing angles than most YT setups. I think the true feature of those billboards is the HDR, not the resolution. Maybe they are very high-resolution images, and maybe that's visible in person with a telescope / binoculars, but I doubt it's visible to the naked eye at that distance.
-
More grading fun with cheap Lumix FZ47 CCD sensor bridge camera
kye replied to dreamplayhouse's topic in Cameras
Looking really good! I'm no expert, but it looked pretty close to a 16mm film look to me - an earlier 16mm stock, perhaps? My understanding is that 8mm film was pretty noisy at first, but advances in the stock mean that modern 8mm is better than 16mm used to be, and modern 16mm is now better than 35mm used to be, etc. In my brief explorations, a bit of blur, some grain, and a bit of a push to the colours (or a subject with limited hues) really does wonders to knock the digititis out of an image.
-
@PannySVHS @John Matthews @canonlyme I've been meaning to do a codec test on my GX85, so might use this thread to motivate me to do it. Any suggestions for how / what to shoot? I'm thinking a prime lens stopped down so it's nice and sharp, potentially with a moving subject (assuming I can find a time when there's a bit of wind to make the trees nearby move a bit).
-
Just catching up on this thread, and the only thing I can think to say is Damn! Those images from @OleB are just wonderful. Great subject / lighting / lens of course, but the sensor in this thing truly does not disappoint!
-
You've got it backwards.... rumoured cameras are absolutely killer, it's the ones that get announced and released that are disappointing in almost every aspect! Just read any "what camera should I buy" thread - there's always "wait for the X to be released" recommendations 🙂
-
This is how I feel about Sony as well. I suspect if I was much better at colour grading, or had a professional colourist to do it for me then that might change things. Fuji, on the other hand, really delivers in the X-Factor department, despite being technically not as good in a number of ways.
-
What kind of hi-res ads are around? I haven't seen anything like this. And what kind of resolution are we talking about? You can blow things up pretty seriously before the pixels become visible...
-
I firmly believe that almost everything can be quantified, but this one falls far beyond the point of what is practical.

The GH5 tests I saw, with a person stepping into and out of frame, had the AF recognise a face and change focus across a range of reaction times - sometimes it was fast, other times reluctant, and occasionally the person would just stand there being ignored, like a camera nerd at a high-school dance. There's no easy way to quantify this. GH5 testers couldn't even get the test to replicate, providing a number of hilarious examples where the person was walking around saying how bad it was while it tracked them just fine, and other testers saying it was really good while it did quite badly.

One YouTuber who got the C70 when it first came out admitted in a follow-up review that he had to stop using it until it had a firmware update or two, because when it first arrived it had trouble recognising the faces of darker-skinned people. IIRC he had to hire something to use on commercial shoots because the C70 wasn't ready yet. Canon can't even test their AF properly, and it's one of their key brand differentiators!

If you were to quantify AF performance, not only would you need a dozen or so metrics (speed to recognise a face, how out-of-focus a face can be before it's recognised, maximum tracking speed, how much of the face has to be visible, how far around the side of the face it detects, how bad the lighting can be before it fails, etc.), but you'd really struggle to quantify the GH5-style lack of reliability except with an enormous sample size.

Peter McKinnon made a promo video for his new product, and at the 6:12 mark the camera goes from focusing on the object to focusing on his face - the two frames are 2 frames apart. Why did the mighty Canon PDAF randomly choose that moment to change focus to his face from the largest object in the centre of the frame? Heck knows, but there were even earlier frames where more of his face was showing, and it didn't choose those moments to change focus... Here's the video linked to that time - judge it for yourself. That's an AF problem right there.

People tend to think that Canon PDAF or Sony eye-detect PDAF are perfect, but in reality they stuff up from time to time, and they tend to think the GH5 is completely useless when it actually gets things right quite a large percentage of the time. I've seen shots in vlogs where Canon PDAF cameras just randomly focus on the background while the person's face was visible the whole time. They're rare, but I've seen at least two that made it to the final edit - we can only guess how many others ended up being cut.

Any methodology that quantifies AF performance would be useless if it ignored the lack-of-reliability problem (because it would declare the GH5 AF to be great when it's obviously not), and it would be wrong if it gave Canon and Sony a perfect score when they obviously aren't quite there, despite being impressively close. Sure, you can quantify some aspects of AF performance, but to be even remotely useful you'd have to test so many variables, and some of them would require such incredibly large sample sizes, that it just wouldn't be practical.
-
Personally, I'd really appreciate the extra digital zoom capability, but for shooting 1080p (as I do) 6K sensors are just as good, and most cameras don't give you the digital zoom options I'd really like, so the resolution of the sensor is secondary in that sense. Also, if the battery life is crap and it overheats, then it's giving me zero footage rather than merely sub-optimal footage - a pretty fundamental issue.

The other issue is that if you're using the sensor to get a 4x digital crop (1080p from 8K), then your lenses, rather than the sensor, will be by far the limit on quality. You might find that a 6K sensor upscaled to 8K has the same level of fine detail as an 8K sensor without scaling.

That's also assuming you have a fast enough machine. Many people would argue that any decent machine can edit 4K, but with many cameras shifting to IPB, 10-bit, h265, decoding that footage in real-time is no small feat at all.

If I end up with a GH6, which lacks the 2x zoom function from the GH5, then I'd probably just program a mode to be 4K 1:1, swap to that, and crop in post. Hardly ideal, but it would give me a bit more reach and still be downsampled to 1080p.
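The crop arithmetic behind this is straightforward: the punch-in you get while still delivering full 1080p detail is just the sensor's horizontal pixel count divided by 1920. A quick sketch (the horizontal counts below are nominal assumptions - real "6K" sensors range from roughly 5.7K to 6.2K wide):

```python
DELIVERY_WIDTH = 1920   # 1080p timeline

# Nominal horizontal photosite counts; marketing "K" values vary by model.
SENSORS = {"4K": 3840, "6K": 5760, "8K": 7680}

for name, width in SENSORS.items():
    crop = width / DELIVERY_WIDTH   # max 1:1 crop before upscaling kicks in
    print(f"{name}: up to {crop:.1f}x punch-in at full 1080p detail")
```

So an 8K sensor gives a 4x reach advantage on a 1080p timeline, a 6K sensor about 3x - which is why, for 1080p delivery, the gap matters less than the battery life and heat trade-offs.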
-
8K (and more!) seems to be coming, if we want it or not. How important is it to you and what would you use it for?
-
"Unusable" is (normally) a relatively useless word because:

- It says the AF performance falls below some specific threshold that is neither quantified nor disclosed, so there's no way to know if it will be good enough for your needs. I have determined that Canon and Sony AF is unusable for my needs (seriously - I'm not being dramatic) and I use MF because of that.
- The word is normally used by people who mostly shoot their mouths off online, but rarely shoot anything other than camera tests (if they shoot at all!) 🙂

If only! The primary purpose of 8K cameras is to sell 8K TVs, so there's no way they're going to let overheating in 8K MILC cameras get in the way of selling 200 million TVs each year!

Is there any chance they'll release one for this camera? The giveaway might be the little port that it plugs into...

Considering that 8K is 16x the data of 1080p, and 6K is only about 9x, there's quite a substantial difference.
-
My bad! I didn't see the thread when you posted it. Nothing to see here, please move along....
-
Was it Fuji that had a screw-on fan that attached to the back of the camera and took power from the camera? or am I remembering another brand?
-
I think this is an aspect that gets overlooked by people that aren't out filming on their feet all day. My own equivalent is that I have to be able to hold my camera in-hand for a whole day, only resting during breaks for food and bathrooms etc. If it's not in my hand then it's not ready to film and so I miss shots and we all know that even barely usable footage is still better than the shot you didn't get. This is one of the reasons I sold my Sigma 18-35, it was just too heavy.
-
The press release doesn't say a whole lot: https://company-announcements.afr.com/asx/ams/0f93042b-44f6-11ed-810a-2616002949eb.pdf

Atomos states that:

- "it has completed development of a world class 8K video sensor to allow video cameras to record in 8K ultra high resolution"
- they "acquired the intellectual property rights and technical team from broadcast equipment firm, Grass Valley five years ago to develop a leading-edge 8K video sensor"
- they are "actively exploring opportunities for commercialisation and is in discussion with several camera makers who are showing great interest"

Does anyone know anything about this? @androidlad perhaps?

Could this mean that cameras without the Sony sensor look might be forthcoming? Could Atomos be moving into the camera industry? Their external recorders are, what, 70% of a camera already - just lacking a sensor, some mics, and the chips to connect everything up?