Everything posted by kye

  1. Great write-up, and nice to hear a nuanced view of pros and cons with feedback from real use. My advice (for you and everyone getting a new camera) is to shoot and edit some test footage every day until you're familiar with the camera. I find that it's not until I shoot some footage and then look at it on the computer that I learn something. If you have a list of questions then just film a little test to explore each question and gradually work through them... e.g., what do all the focus modes do, and what do the settings for each one do? How far can I overexpose? Underexpose? How far can I push the WB? High ISO? What does every codec look like at every quality setting (find something with consistent movement - waves at the beach is a good one)? How long a focal length can I hand-hold without OIS (film your best handholding at a range of focal lengths, then repeat the same test at the same focal lengths when you're warm, when you're freezing cold, when you're hungry, and when you've had too little sleep and too much coffee)? Etc.

     I've done a lot of these tests and I find myself referring back to them when contemplating things. If using my camera in a certain way means pushing a particular parameter, I just go and look at that test and I'll know what it will look like.

     In terms of the screen, can you design a LUT that is hugely aggressive and can be seen in bright light? E.g., everything from 30 IRE down is 100% pink, 30-60 IRE is 100% white, and above 60 IRE is 100% blue? Or even everything black except 35-65 IRE, red below 50 and blue above 50? Seeing something would be better than nothing. (A rough sketch of generating such a LUT is below.)

     Best of luck and good shooting! 🙂
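
     Not from the thread, but as an illustration of that monitoring-LUT idea, here's a minimal Python sketch that writes a 3D .cube file mapping everything below ~30 IRE to pink, 30-60 IRE to white, and above 60 IRE to blue. The grid size, thresholds, Rec.709 luma weights, and file name are all assumptions - where the IRE bands actually land depends on the camera's gamma curve.

         # falsecolor_lut.py -- sketch of the aggressive monitoring LUT idea above.
         # Thresholds and colours are illustrative only.

         SIZE = 33  # common 3D LUT grid size

         def band_colour(luma):
             if luma < 0.30:
                 return (1.0, 0.0, 1.0)   # pink/magenta: shadows
             elif luma < 0.60:
                 return (1.0, 1.0, 1.0)   # white: mids
             return (0.0, 0.0, 1.0)       # blue: highlights

         with open("falsecolor_monitor.cube", "w") as f:
             f.write('TITLE "falsecolor monitor"\n')
             f.write(f"LUT_3D_SIZE {SIZE}\n")
             for b in range(SIZE):            # .cube order: red index varies fastest
                 for g in range(SIZE):
                     for r in range(SIZE):
                         rn, gn, bn = (x / (SIZE - 1) for x in (r, g, b))
                         luma = 0.2126 * rn + 0.7152 * gn + 0.0722 * bn  # Rec.709 weights
                         f.write("%.4f %.4f %.4f\n" % band_colour(luma))

     Load the resulting .cube as a monitoring LUT and every pixel collapses to one of three flat warning colours, which should survive even direct sunlight on the screen.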
  2. Lenses

    It's too long ago for me to remember accurately, but mostly I think they dried up. Of course, a lens is more complicated because you shouldn't humidify it too much or you'll get fungus. Also, when @PannySVHS says the rubber "dissolved", I think that's literally true - which is very different from rubber drying out. It's probably a case of reading advice from known-good sources, and given the amount of completely wrong info posted all over the internet about this, I'd only trust the lens manufacturers directly.
  3. Panasonic GH6

    So, can you get the 16-bit files out of the camera and see them in an NLE? Or just the 12-bit files? There's nothing stopping them from reading the data off the sensor, changing it in whatever ways they want without debayering it or anything like that, and then saving it to a card. A talented high-school student could write an algorithm to do that without any problems at all, so it's not impossible.

    When light goes into a camera it goes through the optical filters (e.g., OLPF) and the bayer filter (the first element of the colour science, as these filters determine the spectral response of the R, G, and B photosites). Then it gets converted from analog to digital, and then it's data. There's very little opportunity for colour science tweaks there. I've looked at their 709 LUT and it doesn't seem to be there either. I'm seeing things in the colour science of the footage, but I'm just not sure where they are being applied in the signal path, and in-camera seems to be the only place I haven't looked.

    It would be amazing if we were to get that tech in affordable cameras. It would give better DR and may prompt even higher quality files (12-bit LOG is way better than 12-bit RAW; see the sketch at the end of this post).

    It's not a small guy vs corp thing at all. Most of the people pointing Alexas or REDs at something have control of that something. Most of the hours of footage captured by those cameras will be properly exposed at native ISO, will be in high-CRI single-temperature lighting, and will be pointed at something where the entire contents of the frame are within certain tolerances (e.g., lighting ratios and no out-of-gamut colours, etc). Most of the people pointing sub-$2K cameras at something do not have total control of that something, and many have no control at all. A lot of the hours of footage captured by those cameras will not be properly exposed at native ISO (or wouldn't be at 180 shutter), won't be in high-CRI single-temperature lighting, and won't be pointed at something where the entire contents of the frame are within those tolerances.

    You really notice how well your camera/codec handles mixed lighting when you arrive somewhere that looks like completely neutral lighting, look through the viewfinder, and see this: This was a shoot I had a lot of trouble grading but managed to make at least passable, for my standards anyway. There are other shots that I've tried for years to grade and haven't been able to, even through automating grades, because things moved between light-sources. Unfortunately that's the reality for most low-cost camera owners 😕

    The difficult situations I find myself in are:
    - low-light / high ISO
    - mixed lighting
    - high DR

    And when I adjust shots from the above to have the proper WB and exposure and run NR to remove ISO noise, the footage just looks so disappointing. Resolution can't help with any of those. I've shot in 5K, 4K, 3.3K and 1080p, and it's rare that the "difficult situation" I find myself in would be helped by having extra resolution. I appreciate that my camera downsamples in-camera, which reduces noise, and the 5K sensor on the GH5 allows me to shoot in downsampled 1080p and also engage the 2X digital zoom and still have it downsampling (IIRC it's taking that from something like 2.5K), but I'd swap that for a lower resolution sensor with better low-light and more robust colour science without even having to think about it.
    They care about quality over quantity, and realise that one comes at the expense of the other. This is literally what I've been trying to explain to you for (what seems like) weeks now. Interesting stuff. In that thread, the post you linked to from @androidlad says: This idea of taking frames "a few milliseconds apart" sounds like taking two exposures where the exposure times don't overlap. Assuming that's the case then yeah, motion artefacts are the downside. Of course, with drones it's less of a risk, as things are often further away, and unless you put an ND on it the shutter speed will be very short, so motion blur is negligible anyway. We definitely want two readouts from the same exposure for normal film-making.
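
    As an aside on the "12-bit LOG is way better than 12-bit RAW" point above, here's a back-of-envelope Python sketch assuming an idealised pure-log curve and a 14-stop scene: it counts how many of the 4096 code values each stop receives under linear vs log encoding. The stop count and the pure-log assumption are illustrative, not any camera's actual curve.

        # Why 12-bit log can beat 12-bit linear: code values per photographic stop.

        CODES = 2 ** 12   # 4096 values in a 12-bit file
        STOPS = 14        # assumed scene dynamic range

        print("stops below clip | linear codes | log codes")
        for s in range(STOPS):
            linear = CODES / 2 ** s - CODES / 2 ** (s + 1)  # halves every stop down
            log = CODES / STOPS                             # even spread per stop
            print(f"{s:16d} | {linear:12.2f} | {log:9.1f}")

    Linear spends half its codes (2048) on the single brightest stop and leaves the deep shadows a handful of values, while the log curve gives every stop roughly 293 codes - which is the whole argument for log-encoded 12-bit.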
  4. How to circumvent a rigid procurement process... 101. "It was an open tender - honest!"
  5. Olympus OM-1

    That's an interesting possibility and, although hugely processor-intensive, would unlock things not possible with the current setup. For example, if they took a huge number of short exposures, stabilised them, and then combined them, they could simulate a 180 shutter (e.g., a 1/50s exposure) that was stabilised DURING the exposure (which current EIS tech cannot do). They could also do things like fade in the first few images and fade out the last few, creating motion trails that don't stop abruptly. They could even have adjacent frames overlap... i.e., if you imagine a point on a spinning wheel, they could have frame one going from 12 o'clock to 3 o'clock, frame two going from 2 o'clock to 5 o'clock, etc. (There's a rough sketch of the fade-in/fade-out stacking below.) IIRC RED did some things with using an eND filter to fade in and fade out each exposure, but of course they couldn't overlap the exposures like this. The whole research into 24fps being the threshold of continuous motion and the 180 shutter rule being the most natural amount of motion blur could be revisited, as the limitations of the film days would no longer apply.
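
    A minimal sketch of the fade-in/fade-out stacking described above, assuming the short exposures have already been stabilised: it averages N frames with a ramped weighting so the simulated long exposure has soft-edged motion trails. The window shape, fade fraction, and frame data are illustrative.

        # Simulate one long (e.g. 180-degree) exposure from many short ones,
        # with a fade-in/fade-out weighting for soft motion-trail edges.
        # Assumes enough frames that n * fade >= 1.

        import numpy as np

        def simulate_long_exposure(frames, fade=0.2):
            """frames: list of HxWx3 float arrays spanning the shutter window."""
            n = len(frames)
            w = np.ones(n)
            k = max(1, int(n * fade))           # frames that fade in/out
            ramp = np.linspace(0.0, 1.0, k, endpoint=False)
            w[:k] = ramp                        # fade-in at the window start
            w[n - k:] = ramp[::-1]              # fade-out at the window end
            w /= w.sum()                        # keep overall exposure constant
            return sum(wi * f for wi, f in zip(w, frames))

        # e.g. a 24p frame with a 180-degree shutter built from sixteen short
        # sub-exposures (frame contents purely illustrative):
        frames = [np.random.rand(4, 4, 3) for _ in range(16)]
        blended = simulate_long_exposure(frames)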
  6. Panasonic GH6

    I never bought the V-Log update so I haven't tried that. I've never used ACES either. I do use Resolve Colour Management now - I think that's new in R17? When I interpret the GH5 HLG footage as Rec2100 the controls work pretty well. As I've mentioned before, I've tested it against Rec2100 and Rec2020 and it isn't a perfect match with either of those, but Rec2100 works well enough to be useful. I did look at buying V-Log, but once I saw that it also isn't natively supported I figured there was no point - I already have one thing that's useful but not an exact match, so why pay for another one 🙂 All this is in the context of how you're grading, of course, and I'm really liking grading under a PFE LUT (2393 is pretty good), which helps to obscure the GH5's WB / exposure / lacklustre colour science quirks.
  7. Lenses

    I used to be an IT tech and we had to do major services on printers that were old, even if they had low page-counts, because all the rubber in the rollers that push the paper around had dried out and cracked and no longer worked reliably. What was interesting was that we had similar printers of similar ages in lots of different buildings with different types of air-conditioning systems (refrigerative, evaporative), and this really impacted the lifetime of the rubber rollers.
  8. Panasonic GH6

    I'm still not convinced about this. Yes, they do say: "Nothing is "baked" into an ARRIRAW image: Image processing steps like de-Bayer, white balance, sensitivity, up-sampling or down-sampling, which are irreversibly applied in-camera for compressed recording, HD-SDI outputs, and the viewfinder image, are not applied to ARRIRAW. All these parameters can be applied to the image in post." However, immediately before that, they say: "For the absolute best in image quality, for the greatest flexibility in post, and for safest archiving, the 16-bit (linear) raw data stream from the sensor can be recorded as 12-bit (log) ARRIRAW files."

    So in this sense, the "nothing" baked into ARRIRAW still includes the combination of the two ADC streams and also a colour space conversion. It's perfectly possible to do whatever you like to the image and still have the first statement be true in a figurative sense, which is how they have obviously intended it - "nothing you don't want baked in is baked in". The ARRI colour science could well be baked in, no-one would be critical, and the statement they make would still be true in that figurative sense. To me, the proof is in the pudding, and even that Alexa vs LF test included small and subtle shifts between the two images that are unexplained by the difference in lenses.

    That doesn't mean you care about colour as much as I do, or see it the way I do. Being able to hire a colourist means the colour science from the manufacturer matters less, as you can cover any shortfall by hiring a pro. I don't have that luxury, unfortunately, which is why the colour science has a greater impact on me. I suspect most people buying a GH6 also don't have regular access to a colourist to take up any shortfalls. Ironically, the cheaper the camera, the more its owner needs latitude, great colour science, and a solid codec to work with. By having the best image come from a camera that costs $100K, you're giving the most robust image to the very people who need it least, as they can afford to use the camera at its exact sweet spot and not need any of its latitude.

    Agreed, and being skeptical makes sense. Realistically I'd be happy with any log colour space as long as it's a standard supported by ACES or RCM. Like I said, the folks with access to higher-end equipment and colourists for backup and troubleshooting swap to the higher budget stuff when the going gets tough, but those of us without that luxury are stuck with what we have and have to make the best of it without any backup. But when we suggest that we'd prefer things that make our lives easier (robust colour science, codecs, DR, ISO performance, etc) rather than things that don't really help in difficult situations (resolution, etc), somehow that doesn't make sense to the people who aren't in our shoes? I think the top comment on the Alexa vs LF video is the most telling: "Based on image aesthetics I'd go with the 65, but based on my budget I went with my Panasonic G7."

    Expecting ARRI-level images from a GH6 is definitely unrealistic. But I mention it for precisely the same reason you mention wanting more DR - the better they can make it, the better our results will be. Magic Lantern gives options for 14-bit RAW on the 5D and also on my 700D, so I'm assuming the Canon sensors have a 14-bit output. I'd assume that means it's not that difficult to do - after all, the 5D isn't a new camera by any means!
    According to CineD the GH5 gets 9.7/10.8 stops, the OG BMPCC gets 11.2/12.5, the S1H gets 12.7/13.8 and the Alexa gets 14/15.3. I really notice a difference between the GH5 and OG BMPCC, and obviously the S1H is significantly more again. Have you shot with an Alexa? If so, I'm curious whether you noticed any differences from the extra DR when using it. Obviously there's an amount of DR that would be "enough" for almost all situations, and I'm curious to know where that amount is.

    I'm also skeptical about motion artefacts that would come from non-sequential exposures. I guess we'll see. It might be more useful in situations where motion blur isn't significant and extra DR is, for example locked-off shots of architecture and the like. You could easily have it take a set of images around the 1/1000s mark, perhaps at 300fps, and then combine them (there's a back-of-envelope sketch of this below). That would give you great DR to include the highlights in the sky, etc. Based on your logic about 12-bit ADC being a limitation and new sensor tech being required, it will be interesting to see what this new sensor can do. Only a few sleeps left until the formal announcement!
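
    A back-of-envelope sketch of the locked-off stacking idea above, assuming (as is commonly modelled) that averaging N same-exposure frames reduces read noise by sqrt(N), which is where the extra usable shadow range would come from. All numbers are illustrative, not from any real sensor.

        # How much shadow DR a 300fps burst of short exposures might buy
        # for a 24p output frame, under a sqrt(N) noise-averaging model.

        import math

        fps = 300                          # burst rate floated above
        frame_period = 1 / 24              # one 24p output frame
        n = int(frame_period * fps)        # ~12 sub-frames available
        extra_stops = math.log2(math.sqrt(n))  # read noise drops by sqrt(N)
        print(f"averaging {n} frames buys ~{extra_stops:.1f} stops of shadow DR")
        # -> averaging 12 frames buys ~1.8 stops of shadow DR

    The catch, as noted above, is motion: this only holds where the sub-frames line up, which is why locked-off architecture shots are the easy case.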
  9. +1 on that. Also, sometimes you're pushing the ISO in ways you'd prefer not to. Some of the nicest shots I've got of the kids are from when they're on their phones and the available light is so low that they're basically lit exclusively by the phone. That's not a native ISO 100 situation, at least on the single-native-ISO sensors!

     It's relatively typical for colourists (and film-emulation packages like FilmConvert and Dehancer) to apply grain to some parts of the image more than others. IIRC they apply more to the mids and almost none to the highlights and shadows, as I think that emulates the grain from a negative/positive film process, where the roll-offs lessen the strength of the grain. Don't quote me on that logic, but it sounds right.

     If I understand the math properly, you can get basically the same effect by desaturating the brightest highlights in your image. I contemplated writing a DCTL plugin for Resolve to recover highlights on non-RAW footage (as you can't use the option in the RAW panel), and over the course of figuring out the logic this is where I got to (a sketch of that math is below).
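
     For illustration, the highlight-desaturation math described above might boil down to something like this - sketched here in Python/NumPy rather than DCTL, with a made-up knee threshold: above the knee, the pixel is progressively blended toward its luma, rolling saturation off as the max channel approaches clipping.

         # Highlight recovery by desaturation: blend near-clip pixels toward
         # their Rec.709 luma. The knee value is illustrative.

         import numpy as np

         def desat_highlights(img, knee=0.8):
             """img: HxWx3 float array in 0-1. Above `knee`, blend toward luma."""
             luma = (0.2126 * img[..., 0] + 0.7152 * img[..., 1]
                     + 0.0722 * img[..., 2])[..., None]
             peak = img.max(axis=-1, keepdims=True)
             t = np.clip((peak - knee) / (1.0 - knee), 0.0, 1.0)  # 0 below knee
             return img * (1.0 - t) + luma * t

     The effect is that a channel that clipped on its own fades to neutral white rather than a wrong hue, which reads as a film-like highlight roll-off.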
  10. Finally got around to looking at the TV episode I chopped up and, to be honest, I am completely blown away. The show is a long-running and award-winning travel show with a few interview / talking segments per episode and b-roll sections in-between - or so I thought. I've watched dozens and dozens of episodes from this show, so I am quite familiar with it, but on first inspection I have multiplied my understanding of it by perhaps a factor of 100. Some initial impressions:
      - there is far more b-roll than I thought. There are multiple b-roll sequences between most interviews, there are V/O-with-b-roll sequences within the interviews, etc.
      - the sheer quantity of shots is just immense. I have a pretty good intuitive understanding of how much shooting results in how much finished footage, and obviously I'm nowhere near as good as these cinematographers, but even if their hit-rate is 10x mine, they're still shooting spectacular quantities of b-roll.
      - the editors are doing strange things with structure, even "fading" between locations by cutting more and more b-roll into talking sections so that the talking section kind of fades out, then gradually cutting in clips from the new location, so there aren't any clear transitions and questions like "when does this scene end?" become sort of meaningless
      - it seems to be a lot like music, with "call and response" where two or more things are intercut
      - it also seems very technical in terms of rhythm, where any timing cues established (by music or talking or anything) need to be aligned to, but can be doubled, or switched to the "off-beat", etc. It really reminds me of programming break-beats when I was making electronic music.

      I've watched hundreds or even thousands of hours of content at this level, but the editing is so seamless that I really had no clue what was going on until I chopped it up and started to try and categorise and understand it. Perhaps the most significant impression is that this is completely beyond anything I have seen on YT, or even discussed anywhere online - and I mean several orders of magnitude more sophisticated. Admittedly, I haven't found many good resources for editing online, so I'm hoping they are out there and that I'll find them. If anyone is reading this and is tempted to start cutting up great work then I encourage you to do so. Once you start looking you'll start noticing things immediately - it's like opening a window and looking through into another world, and you don't get to see it unless you pull it into an editor.

      Take a look at this: It's obvious from this that there is a series of sections of building intensity with faster and faster cuts, leading up to a change of pace and going much slower. One interesting thing is that the transition actually happens at the playhead (red line), and the two shots prior to that are the release of the tension before it changes. The playhead is where an ad break is placed. You might think this would be obvious, but the image represents about 5 minutes - about 160 cuts - which is very difficult to watch while keeping the overall structure clear in your head. Also, if you look more carefully, it starts off with shorter bursts that find a mid-level of pulsing intensity, then a final push, and then release. Even watching it over and over, you wouldn't consciously notice that pattern of building intensity.
I'd imagine that not everything these masters are doing will be obvious from looking at the edit, but there's so much that is that it's an all-you-can-eat buffet regardless.
  11. The Aesthetic

    Simply wonderful! The tech is needed and is a necessary discipline and skillset. The first challenge we have (especially in forums such as this) is to remember that the tech serves the art. The second challenge is to understand how the tech serves the art: how shutter angle makes the viewer feel; colour; motion; lighting; depth of field; composition; dynamic range; etc. All have tangible emotional impacts on an audience. The third challenge is to understand how to align all the tech to push the art in the same direction, so that the desired aesthetic and emotional experience is clear and strong, supported from all angles and by all factors. Most discussion doesn't acknowledge that these challenges even exist, let alone meet the first one, never mind the others.
  12. Olympus OM-1

    Is there room in an MFT mount for an eND? I have no idea how thick those things are. That would be a great addition for stills as well as video. DJI could really easily make an MFT camera. They've shown they're not afraid of shaking things up - the design process for the 4D seems to have been brainstorming every crazy idea they could think of and then shipping every one they could get working. It's a pity that it's the camera equivalent of carrying around a live duck, but it's a great testbed for the tech.
  13. That (might) indicate it's not a bug but a deliberate decision. It all seems rather odd to me... I get why Canon etc cripple their cameras, but why would Fuji do it? And in such a strange way? My knowledge of X-Trans sensors is limited, but I thought they were simply an alternative layout of the RGB photosites designed to give an advantage when demosaicing. Nothing in that suggests they'd have some sort of horrendous chroma noise issue that would require such brutal chroma NR.
  14. Panasonic GH6

    I'm keen to read more about this - can you link the source? I'm not explaining the entire image pipeline in every post that I make, but even if I did, no-one understands the details anyway, so it wouldn't help. Perhaps instead of just saying I don't understand things, speak to something tangible that I can respond to.

    In terms of having used RAW footage? Sure. Probably more than a dozen. I own three cameras that shoot RAW - one Canon and two BM. What's your point?

    I'm trying to get the best colour possible. For that I have pulled apart the colour science from many brands, trying to understand what they are doing. I talk about ARRI mostly because they give the nicest colour by just applying their official LUT. RED is right up there too, along with BM 2012. BM 2018+ and Canon (RAW) are nice too, but aren't really at the highest level. I own the Emotive Colour matrix and don't talk about it much here, partly because I've pulled it apart and don't want to give away too much, and partly because it's so fragile to use - if the stars don't align you're not getting good results. It doesn't like exposure changes much and basically doesn't deal with mixed colour temperatures at all, which are present in most situations I film in. I have also bought film simulations from Juan Melara, downloaded DCTL plugins from professional colourists, LUTs from post production houses, and many other things.

    Did you see the difference in image between the Alexa and the LF from that guy who did the comparison by splitting the light with a piece of glass to duplicate the camera's position between the two? The other manufacturers may have been closing the gap, but that was ARRI leaping ahead by a mile. You don't seem to really acknowledge the differences, but I read lots of comments from people who are amazed at how much of an improvement ARRI was able to make to what was already a world-leading performer. I suspect colour matters more to me than to you. Sadly, in the blind tests I always pick the most expensive cameras, even if I watch the comparison in 480p on YT.

    You missed my point. If I were a manufacturer including h264 in a camera, I'd know it's going to be targeted at consumers and videographers, so I'd go a bit heavier on the processing because that's what this audience wants. If I then decide to include Prores, I know it's for a different target market who have different expectations about the image, and so I'm likely to apply far less processing when the camera is set to record Prores rather than h264. The anamorphic mode in the GH5 applies less processing than the 16:9 modes, so Panasonic clearly understand that it's for a different target audience. Panasonic would be insane to include consumer amounts of processing and NR when the camera is set to record Prores.

    Cool. Personally I record in a lower resolution than the sensor and don't want the camera to crop. For this, Prores is the winning option.

    Saying that would be plain wrong. Good thing I didn't say it! Thanks for pointing that out?

    I've done tests on a number of RAW cameras comparing the various bit-depths and also comparing the latitude of RAW vs Prores, and mostly I'm OK with 10-bit Prores recording a log profile. I'd prefer 12-bit Prores if it's available, but I'll happily take Prores HQ.

    Colour science quality (and image quality in general) has two factors for me. The first is how good things look under optimal conditions. This is how (you'd hope) most professionals working on controlled sets are working.
    The other main factor of colour science is how robust the image is when conditions are far from perfect. This is probably something you're not very experienced with, but it's the vast majority of clips that I shoot. I've mounted my BMMCC and GH5 together, put them through a number of sub-optimal situations, then pulled the footage into Resolve and graded them side-by-side, and the results are eye-opening. The BM footage just does what you tell it to do, whereas the GH5 footage suffers almost immediately. The sweet spot of the GH5 has pretty good colour - not great, but good - but that very quickly disappears when pushing and pulling the footage; even just adjusting the WB reveals that the small amount of colour magic it does have is pretty fragile. I've graded files from the S1 and the colour in the sweet spot was nicer and the DR was higher, but the magic was still quite fragile and the footage felt like the GH5. I'm hoping that with colour science improvements and Prores, the files will have more colour science magic and that the magic will be more robust when graded.
  15. Panasonic GH6

    Absolutely, and that's perhaps the biggest reason I keep referencing ARRI. Their sensor is great, but it's the processing they do to the image that really sets them apart. In fact, it's so valuable to them that they apply it in-camera rather than in LUTs that can be pulled apart and analysed.

    I'm not sure how much you know about colour science, but I have done many deep dives into pulling apart the colour science of a number of cameras, including Alexas, REDs, BMs, Canons, and Panasonics. I have spent a lot of time on the colourist forums reading their responses, reading the articles they link to, and studying their methods (and when I say "studying" I mean opening Resolve and trying to re-create the methods they explain, then using those methods on my own footage to get a feel for what is going on - like doing assignments at school - literally studying). What I have found is that:
    - most colour science is a long way from neutral, but almost universally pushes the colours in the same ways, typically the ways that film does
    - you can take clips from multiple cameras and match them (and I'm talking about footage with larger colour checkers with lots of patches) so they look the same, but the ARRI or RED will have magic that the others simply won't have

    I have also observed time and time again that the colourists are doing very complicated adjustments (often in alternate colour spaces that work in very different ways) and applying them very subtly. What I conclude from both the comparisons I have done and the little tweaks that the colourists are willing to share (there is a lot they're not willing to share too) is that the magic is in the tiny little adjustments. Like in cooking, where some chefs can add tiny amounts of various seasonings that are so subtle you can't pick them out but they really lift the flavour.

    You are absolutely right that each manufacturer has the opportunity to build these things into their colour science (and not rely on their sensors), but the problem is that they just don't. Year on year they get incrementally better but really aren't closing the gap between their $2K-5K cameras and what the leaders are doing. The end result is that we're getting food made from the same ingredients (Sony sensors), seasoned only with salt and pepper, and it therefore tastes rather bland compared to ARRI/RED, who are demonstrating mastery in their use of spices.

    I don't have V-Log on my GH5 for precisely this reason - it wouldn't get me anything. That's why I've been shooting HLG and testing it (it's not exactly either rec2020 or rec2100, but it's close enough to rec2100 to use that in Resolve). It would be great if the GH6 had real V-Log. I'm very keen to see how they go about using the Prores. Currently the GH5 HLG implementation is 10-bit rec2100-like colour and gamma, which isn't too bad to work with. The extra bit depth of Prores 4444 would be most welcome.

    In-camera NR and sharpening are definitely an issue, and I'd hope that implementing Prores will mean they'll tune the image to that codec and the expectations pros have of it. I don't think the idea that Prores isn't sharpened is true - I read somewhere that, as Prores is compressed, it's best to add a small amount of sharpening to match the look of RAW. I can't remember where I read that, but I remember it coming from a source beyond questioning - perhaps ARRI or RED or the like.
    It makes sense, as does the idea that they would match the perceived sharpness of RAW. In a sense, Prores isn't just a codec but a complete approach to the processing of the image. The flavours of Prores will be interesting to see. It is unfortunate that Prores wasn't included in the GH4 and GH5, but the bitrates might have been more than they could handle. With h264 there's no "right" bitrate, but with Prores there are standards, and it doesn't look so good in marketing if you're only giving people Prores LT - even though the bitrate of 4K Prores LT is 328Mbps, more than most other cameras and almost as much as the headline-grabbing 400Mbps GH5 ALL-I codec. Marketing is real, and often irrational, unfortunately.

    As for saying Prores doesn't matter because other cameras have internal RAW - that's just ridiculous. It's like someone saying their Ferrari doesn't have cupholders and someone else replying that most family sedans now have cupholders. A different camera having a good codec doesn't matter if that other camera doesn't meet your other criteria. I can't go outside and capture images using the sensor of one camera, the colour science of a second camera, and the codecs of a third.

    RAW is also different from Prores in that RAW tends to be a 1:1 sensor read-out, meaning you either have the huge resolution and huge file sizes of the full sensor read-out or cope with some kind of crop, which screws up your whole lens collection. Lots of people shoot with a lower-resolution codec than their sensor and enjoy the benefits of downsampling. I am one of them, so RAW isn't of that much interest. One of the other benefits of Prores is that it was designed to be mostly indistinguishable from RAW under most conditions, so it's a very practical thing. Otherwise, why would most cinema cameras offer it in addition to shooting RAW?
  16. Panasonic GH6

    Well, there's the gradual Sony-fication that has been happening in camera colour science. Hopefully this is a departure from that, more in the direction of manufacturers-that-cannot-be-named.
  17. What a quirky and cool edit. Those people who make model profile videos have a lot to learn in terms of composition and posing! I love the variety of aspect ratios and cropping too 🙂
  18. Panasonic GH6

    I'm not the biggest fan of the GH5 colour science either. Which makes me happy that Panasonic seem to be chasing a better image with the GH6 instead of just chasing endless resolution at the expense of everything else. The colour from the more recent Panasonic cameras has all been incremental improvements over the GH5, so I'm optimistic about what they'll do with a new sensor.

    The other aspect of making images pop is lenses, and there are more and more available all the time now, with third-party manufacturers like 7artisans, TTArtisan, Meike, Mitakon, etc making interesting offerings. To me they're interesting because they offer a perfect combination of features - they have simpler optical designs and simpler coatings reminiscent of vintage lenses (which are now climbing radically in price), but thanks to cheap Chinese manufacturing they are both low-cost and relatively high-quality. Unlike modern high-resolution, high-precision, zero-distortion lenses, which have a very dull and lifeless rendering, these third-party primes tend to exhibit the aberrations that made lenses like the Zeiss 28mm f2 "Hollywood" famous. Plus, with MFT or APS-C lenses you're seeing more of the edges of the image circle than you do with FF lenses, so you get more of those character-providing flaws. I'll be talking more about this in coming weeks - there's a lot to talk about, let's just say that.
  19. Panasonic GH6

    If you've built a FF glass collection then that's definitely a bridge to a mirrorless S35 or FF camera, so in a way you were always keeping your options open. In terms of low-light, I'll be very interested in the GH6, as low light is one of the aspects of the GH5 that I really push. For example, here's a shot I took with the Voigtlander 17.5mm F0.95 wide open - the scene is lit solely by the lights on the river bank: I'd recommend you wait for the sample footage and tests, as the specifics of how they have implemented the sensor tech will really matter.

    The Alexa has definitely become the gold standard within some circles. Those circles are basically people who appreciate great colour. I actually don't want ARRI to have the best colour - I'd prefer the GH6 to end up with the world's best colour; having the best colour come from a camera I can't afford, couldn't carry, and couldn't realistically use would be a completely stupid wish. I just find it odd that people can say "Sony might include the Venice colour science in their next A7S camera" and it's fine, but saying "I wish someone would make colour science approaching ARRI" is somehow crazy talk, as if the Alexa is a magical unicorn instead of a sensor and a processor in a box... just like every other digital camera ever.

    I agree. People talk about high resolutions as a "just in case / when you need it it's there" kind of thing. I see that "ready for anything" aspect as the design brief of the GH line. Do people need GH5-level IBIS on every shot? No. Most of the time a lesser IBIS would suffice, but are there times when you need it? Absolutely. I routinely push the GH5 IBIS past its limits on trips - maybe because I'm cold or low on blood sugar, or I'm filming from a helicopter with the door open at 200kph, or whatever. Do people need 10-bit on every shot? No. 8-bit cameras make gorgeous footage when exposed properly in modest-DR situations, but are there times when you need the flexibility in post? Absolutely. I shot with the XC10 in 8-bit C-Log on a 5-week trip to Italy and have really struggled to clean up the footage because it doesn't have the latitude the GH5 has. Same logic for high bitrates. Every now and then you film in a situation with lots of chaotic movement, like rain or snow or trees moving in the wind. Also, sometimes you want to crop in post a bit and not reveal compression nasties.

    +1 about not wanting an external recorder. People who don't care about camera size seem incapable of understanding that everyone isn't like them; it's rather odd. It's like saying you prefer chocolate ice cream and them saying "no you don't".

    Thanks for your thoughts on the ALEV vs the GH6 tech. Obviously the proof is in the pudding with the images, which I am really looking forward to, but once it arrives I must admit I'd be very interested in learning more about how it works 🙂 If all you care about is specs then you can make a complete comparison from the spec sheet. Some people will be comparing specs until we see the images, but sadly, others here only care about specs and seem uninterested in, or incapable of understanding, the various other considerations that go into making an engaging end product.

    Agreed - Prores is a big deal. One thing that people probably aren't aware of is how good a codec Prores really is.
    For reference, a very large proportion of the movies people saw in the cinema between the late 2000's (when Prores launched) and the mid 2010's probably went through Prores HQ, and a good chunk of those would have been Prores HQ in 1080p. It was the bread-and-butter codec for Hollywood and often still is. People don't seem to appreciate that.
  20. Olympus OM-1

    Man, the GH6 looks so much larger with a huge lens on it than the others do without any lenses at all! The GH5 is 725g, so the GH6 adds basically zero extra weight. For me the size is really the height, with the width a distant second. For example, comparing my GH5 and GX85, the widths are almost the same, but the "look at me and my huge camera" factor comes from the height: I'd definitely trade height for depth.
  21. There are all sorts of image processing algorithms they could be using. Whatever it is though, it's not high quality! It's such a pity as Fuji have such a great reputation for their colour science.
  22. Panasonic GH6

    This is definitely a concern, considering how freakishly huge the S1H is. These pics aren't exactly the same angles, but using the mount as a common reference point should work. (Comparison images: GH6 vs GH5 II, from two angles.) The fan/screen on the GH6 definitely adds some thickness, but it doesn't look that much larger to me. Weight is another thing entirely, so who knows about that. Unless I missed a site with more GH6 details?
  23. All great points, and this reminds me of my other idea for testing stabilisation mechanisms: put the camera on a mount that shakes it in a controlled way. All you need is to have it shake the camera at a few speeds (slow, medium, fast, etc) and gradually ramp up the amount of shake at each speed. If you take a picture of a control chart with a fixed exposure time, you can compare two cameras and see that camera X had perfect images up to strength level 4 while the other was good up to level 7. (A rough sketch of such a shake profile is below.)

      Back to AF, I really agree with you. The DJI robochicken had a spectacular combination of AF with MF help. Really, almost no manufacturers have even tried. I think Olympus did with their focus clutches, and the XC10 let you help its AF by flicking the lens in the right direction when it was completely lost (more common than you'd hope), but these were pretty feeble really. Even down to things like: the GH5 will do face-detect AF and face-detect exposure, but if you turn off AF it stops looking for faces and exposes the frame with a general "put the histogram in the middle". I mean, if I don't have an AF lens then you'll do AE but you won't even look for faces? Grrrr.
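
      A minimal sketch of the shake-rig idea above: a displacement profile that oscillates at a few fixed rates while ramping amplitude within each level, so you could log the level at which each camera's stabilisation gives up. The rates, duration, and amplitudes are all made up.

          # Generate per-level shake profiles for a hypothetical stabiliser test rig.

          import numpy as np

          def shake_profile(rate_hz, seconds=10, sample_rate=1000, max_mm=5.0):
              """Sinusoidal shake at rate_hz, amplitude ramping 0 -> max_mm."""
              t = np.linspace(0, seconds, int(seconds * sample_rate))
              amplitude = max_mm * (t / seconds)          # linear ramp in strength
              return amplitude * np.sin(2 * np.pi * rate_hz * t)

          # three "speeds" (slow / medium / fast), each ramping in strength:
          levels = {f"level {i}": shake_profile(hz) for i, hz in enumerate([1, 3, 8], 1)}

      Feed each profile to the motorised mount, shoot the control chart at a fixed shutter speed, and note the ramp position where the chart first blurs.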
  24. Olympus OM-1

    Yeah, we'll have to see I guess. There's a shot of it with no lens showing the mount, so we should be able to compare it to the GH5 using that as a reference. I might give that a go and post in the other thread.

    I think one of the biggest challenges with cameras is that there are so many features. You can literally tell someone that you want 12 features, they recommend a camera, and you realise you'd eliminated that camera because of a 13th feature or criterion you forgot to mention. I mean, who'd have thought that, along with all the crazy tech stuff, you'd have to specify that the camera shouldn't shut itself down to stop itself catching fire. I also don't want my camera to give off toxic gas, but I wouldn't have thought that needed to be specified! This is one of the reasons I am so critical of almost every camera manufacturer focusing on resolution above 4K - it's a feature that most people don't want, and it comes at the cost of almost every other feature.