Everything posted by kye

  1. kye

    Fuji XT4 in 2023?

    200Mbps should be fine. The only times having more bitrate would help are when there are huge amounts of random movement, so scenes with water / snow falling / rain / trees blowing in the wind / etc. Other things to try are doing a factory reset on the camera, and it might be a good time to update the firmware if you're not on the latest version. Also, check the firmware for each lens you use - lenses have firmware too!
  2. Perhaps you were thinking of @Mako Sports?
  3. kye

    Panasonic G9 mk2

    Obviously the decision between these would be heavily dependent on the situation, but I'd choose the smallest one for the millions of reasons I've already shared. When I look at the choices of S5iiX, Sony, GX85, Nikon, Fuji, I see the choice as being around:
    • Size and weight - for both your own ergonomics as well as how stealthy you wish to be
    • Dynamic range - some of these setups have higher DR than others
    • Stabilisation - some of these setups will have Dual IS, but I'm assuming some won't

    Obviously there are loads of other things going on too, but these are the ones that I see mattering to me. Ones that don't matter to me are:
    • DoF - almost all these lenses are capable of creating the levels of background defocus used in cinema, which is enough for me
    • Codec / resolution - the GX85 with its 100Mbps 4K IPB h264 codec edits fine on my Intel MBP and is sufficient on a 1080p timeline (if it's good enough for Hollywood it's good enough for me), so I don't need extra resolution or ALL-I to edit with
    • Colour science - with colour management any of these cameras can be coloured just as easily as any other, and [production design] > [colour grading ability] > [camera colour science]
  4. This is a great point - if you don't need the portability then a Mac Mini would be spectacular value. They're not powerful enough for the serious colourists, but I think those earlier in their careers use them, and I've heard that large post-houses often use them for background / batch tasks like preparing footage, rendering projects etc. The Mac Studios with the Ultra processors are apparently stunning performers, but more expensive obviously.
  5. In this context IDT and CST are the same thing. I think that IDT is an acronym from ACES (Input Device Transform), which obviously is used to transform from the input device's colour space to ACES, whereas CST is a Colour Space Transform, which can go from any space to any other space, and you might use them in various places in the node graph for various purposes. I've seen colourists put a CST as their first node transforming from camera to working colour space, but label it IDT, and the output one ODT, so it's more of a terminology thing at this point. I've heard colourists say that you can tell that people who use IDT and ODT learned colour management at a certain time period or on certain equipment, so it sort-of dates you in a sense. The other reason you might use the IDT / ODT acronyms is that Resolve only displays very short node labels (if they're too long it chops the end off), so IDT and ODT are short and useful node names.
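     To make the terminology concrete, here's a minimal Python sketch of the idea - matrices only, linear light, and white-point adaptation is omitted for brevity, so it's illustrative rather than production colour science. The Rec.709 matrix is the standard one, and the AP0 matrix is the inverse of the one in the ACES spec (TB-2014-004):

     ```python
     # A CST is "any space to any space"; an "IDT" is just the special
     # case whose destination is pinned to ACES. (Simplified: no
     # white-point adaptation, no transfer functions.)
     import numpy as np

     # Rec.709 (D65) linear RGB -> CIE XYZ (standard matrix)
     REC709_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                               [0.2126, 0.7152, 0.0722],
                               [0.0193, 0.1192, 0.9505]])

     # CIE XYZ -> ACES AP0 linear RGB
     XYZ_TO_AP0 = np.array([[ 1.0498110175, 0.0,          -0.0000974845],
                            [-0.4959030231, 1.3733130458,  0.0982400361],
                            [ 0.0,          0.0,           0.9912520182]])

     def cst(rgb, src_to_xyz, xyz_to_dst):
         """General colour space transform: source -> XYZ -> destination."""
         return rgb @ (xyz_to_dst @ src_to_xyz).T

     def idt_from_709(rgb):
         """'IDT' = a CST whose destination is fixed to ACES AP0."""
         return cst(rgb, REC709_TO_XYZ, XYZ_TO_AP0)

     print(idt_from_709(np.array([0.18, 0.18, 0.18])))  # mid-grey into ACES
     ```

     Same operation either way - which is why the IDT / CST naming is really just a label on the node.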
  6. I don't have V-LOG on the GH5 - I just use the HLG profile and a CST from Rec.2100 HLG, which works fine 🙂
  7. I've been paying mild attention to the relative performance since the M1 came out and it's really difficult to get a sense of the economics vs performance for a couple of reasons:
     • Unless you have infinite money, getting a faster CPU will mean not being able to upgrade the RAM, which is often shared, and depending on the circumstances you're in the RAM might be a bottleneck rather than the processing
     • The price of older Apple silicon products hasn't really dropped significantly, so although the M1 and M2 chips are great performers you're still going to pay decently for them

     I've seen threads of colourists talking about upgrades and what to get, and there are lots of discussions about what trade-offs should be made and which shouldn't. Professional colourists are perhaps at the cutting edge of this stuff because they have to be able to colour grade any footage in full delivery resolution and in real-time with the client sitting there, so there is no possibility of using proxies or performance settings etc.
  8. It's an interesting question, but I think a clue is likely evident from the question itself. If we rephrase the question to be "Why don't they support 24/30p when it's a trivial technical change?" then it becomes obvious it's not a technical decision, and in a capitalist consumerist marketplace, if something isn't technical then it's probably economic.....

     Also, if you just record in 23.976p instead of 24p, put the clips onto a 24p timeline, and have your NLE configured to re-time by selecting the nearest frame, then it will be frame perfect for any clips under ~20s long. If being frame/subframe perfect worries you, then I would suggest you do not ever consider the fact that almost all computer displays are 60p or some other refresh rate that isn't a multiple of 24, so when your video is viewed by almost all viewers, it'll be going through something like a 3:2 pull-down, so most frames will be off by a significant percentage and motion will be stuttering all over the place.

     I've been thinking about this lately, and there are some very interesting things going on. For example, if you record 30p and put it on a 24p timeline, and then display that 24p timeline on a 60p display, almost every frame from the original 30p will be time-perfect with zero time-shift, but there is a repeating pattern of the odd repeated frame, so the feel will still be that of 30p rather than 24p. If you put 24p on a 24p timeline and view that on a 60p display, almost all the frames will be off by some significant percentage, but it will still feel like 24p. I've come to really dislike the feel of 30p - it feels like 60p but only about half as 'slippery', so this stuff matters to me, but might not be visible to others.
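     For anyone who wants to check the ~20s figure, here's a minimal Python sketch. The one assumption is that "nearest frame" retiming means rounding to the closest source frame time, as described above - the arithmetic itself is exact:

     ```python
     # Conform 23.976p footage to a 24p timeline with nearest-frame
     # retiming, and find the first timeline frame that no longer maps
     # 1:1 to the same source frame (i.e. a frame gets repeated).
     from fractions import Fraction

     src_fps = Fraction(24000, 1001)   # 23.976p recording
     dst_fps = Fraction(24, 1)         # 24p timeline

     n = 0
     while True:
         n += 1
         t = n / dst_fps                  # timeline frame n's display time
         nearest = round(t * src_fps)     # source frame the retime picks
         if nearest != n:                 # mapping first diverges here
             print(f"mapping breaks at frame {n}, ~{float(t):.1f}s in")
             break
     # prints: mapping breaks at frame 501, ~20.9s in
     ```

     So a clip stays frame-for-frame identical for about 500 frames, i.e. just under 21 seconds, before the first repeated frame appears.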
  9. I'd imagine it is partly to do with the analog stages and the ADCs, but don't forget that just because modern ICs are very capable doesn't mean the manufacturer will sell you a product for a song. Also, digital processing is another step where the high-end products may well have additional capabilities.
  10. On the colourist forums there's a sample test project and users submit their specs and FPS on the various tests - here are the M1 and M3 results (the numbers are FPS, and each colour is the same test between them). The jump up from the Intel Macs is enormous - mine gets about 4FPS on the light blue test and about 3FPS on the lighter-orange test - but once you're on Apple silicon there seems to be only incremental improvement. Those tests are pretty brutal by the way - the light blue test is a UHD ProRes file with 18 blur nodes, the dark blue is 66 blur nodes, and the orange ones are many nodes of temporal noise reduction! The differences between the Pro, Max, and Ultra chipsets are much more significant though. I found that the Resolve results correlated pretty well with the Metal tests in Geekbench: https://browser.geekbench.com/metal-benchmarks
  11. Preposterous! It would never work!!
  12. I was talking about the location of the hot-shoe, which is where you'd mount the mic on this hypothetical small vlogging camera. No point having a flip-up screen for vlogging if the mic will obscure it. That's why cameras with flip-up screens like the G7X either don't have a hot-shoe and require external accessories, or they have a hot-shoe and a side-flipping screen rather than a tilt-up one. ....and before you start conflating vlogging with a pocket camera and rigging out a cinema camera: by the time you have to put a cage on it, you might as well use a bigger camera that already has a mic port, interchangeable lenses, 4K120p, 15 stops of DR, and all the other crap that the independent-Sony-marketing-affiliates camera reviewers all use.
  13. It probably wouldn't be too hard to adjust the screen tilt mechanism to make it pop up and become a selfie screen too, which would make it infinitely more attractive to that market segment. Of course, the clash between an on-camera mic and a tilt-up selfie screen is a difficult one to reconcile, as many/most vloggers view both as a requirement. They could take notes from other manufacturers and sell a separate mic that uses the hot-shoe (and therefore camera power) and also doesn't get in the way of the flip-up screen - that would be awesome! I did a bit of googling on that Sony HX99 and it looks like an interesting little camera. Once you've gotten familiar with the ZV-1 it would be interesting to see a comparison between them. Of course, we can anticipate that low-light will be a weaker area for it.
  14. BTS: This looks good to me, but I think it could have been shot with almost anything.
  15. Absolutely... the thing is, this has probably been the case for many years now. It has certainly been the case that at least some of the affordable cinema cameras would be indistinguishable to audiences - the OG BMPCC / BMMCC / 5D + ML RAW, for example. To me, the milestone of having at least one affordable cinema camera be good enough is a much more significant event than "every" cinema camera being of that standard - who gives a crap how long it takes for the worst models to catch up?
  16. My condolences about your mother. We're here for you - especially if it involves fighting over irrelevant technical details!

     I've said it previously, but I actually don't have a huge list of wants for an updated GX-line camera, and the GX85 is now my daily driver, as it were. I've heard people ask for a range of improvements but they all seem quite modest actually - things like PDAF, 10-bit, LOG, full-sensor readout, etc. These may well only require a sensor upgrade and a processor upgrade, perhaps to existing chipsets that aren't even the latest generation, which may not actually require that much additional power / cooling / space. At the moment the GX85 already has:
     • IBIS / Dual IS
     • Tilt screen / touch screen
     • EVF
     • half-decent codec
     • etc etc

     Plus, it's right at the limit of actually being too small from an ergonomic perspective. The grip design on it is really quite effective and I enjoy using it, but I probably wouldn't want it to be any smaller. If it had a slightly larger grip then that would actually be an advantage ergonomically, wouldn't make the camera much bigger in practice because it would still be smaller than most lenses, and would allow for a larger battery.

     With proper colour management it is actually a very malleable image too. I did an exposure test with it some time ago, under- and over-exposing shots and bringing them back to normal in post, and while the DR wasn't great, the image was very workable.

     The endless pursuit of megapixels has proven that a feature can be designed and then marketed and people will go nuts over it, despite the fact that it's not of any practical use to most consumers and it also requires upgrading all the associated equipment in turn. I think this shows that even if you did make a perfect camera for the people who shoot, the people who like progress for its own sake will always think that the perfect camera is the next one. Please no!!!!!
  17. I don't really have a clear understanding of how Yedlin manages his pipeline - do you have a link to something I can look at? In some senses I guess he's got one of the most "processed" image pipelines.

     One thing I'm becoming much more aware of is the difference between colour grading and colour science / image science, where the colourist works on individual projects and the colour scientist develops the tools. Obviously some colourists are doing colour science things as well, so there is definitely some crossover.

     After my previous post I was thinking about it more and realised that the colourist often acts as a sort-of central coordinator of visual processing, where they understand the needs of the client / project and then apply a variety of tools as appropriate. Some of these tools can be enormously sophisticated, the well-known examples being the film emulation packages like Dehancer / FilmConvert / FilmBox / etc, but lesser-known things like BorisFX / NeatVideo / etc also get heavy use. I'd suggest that with this increasing level of sophistication these tools are really now a different form of VFX, maybe a 2D VFX? So in that sense the 3D VFX is mostly done pre-colourist, but then the colourist applies a whole bunch of other VFX treatments after that.

     I don't know if this is making sense, but it seems like the workflows and scope of the VFX / DIT / colourist are going to change in interesting ways in the future. The move to the cloud, and having the ability in Resolve to go back and forth between the Edit, Fusion, and Colour pages, certainly supports this idea that it's no longer a one-directional process but a set of interactions with iterations etc. As a one-person setup, the ability to colour grade and edit in an iterative fashion, going back and forth, always made sense to me, and the idea of the linear workflow just seemed restrictive, although in a big production I can see why it would make sense.

     Anyway, hope those thoughts were semi-coherent. It's a fascinating space. The film industry can be incredibly slow to innovate and change, especially in regards to how the different departments work with each other, but in some areas there is definitely innovation, and it seems like this is one of those.
  18. kye

    Panasonic G9 mk2

    Interesting about the sensors (perhaps?) not being Sony - it makes sense though if they developed the dual-gain architecture as a custom design, which I am assuming they did. It also then follows that the colour science would be slightly different too. I suppose they could have made it identical if they'd wanted, but I think if you were designing such a thing you'd constantly be tinkering with it, trying to improve it, adapting to more recent tastes, etc.

    The fact that the G9ii is as good as the GH6 in many ways could be a promising sign for what a potential GH7 could bring. I think the difficulties with the GH6 were unfortunate, as was the fact that it was overshadowed by the PDAF implementations of the next releases. I think Panasonic were being bold and trying to really push forward in doing a dual-gain architecture sensor, but unfortunately it just didn't quite make the kind of difference that you'd hope for from such a bold move. I guess Panasonic might take the "lesson" to just go back to incrementalism and play it safe etc, but I really hope they don't do that, and instead view it as a good-but-not-great project and keep being bold and trying things. If the GH6 and G9ii sensors are from TowerJazz (or any other non-Sony provider) then it might be a sign they're going out on their own and will continue to iterate on the design and improve it over time. That would be a great outcome I think.
  19. Yeah. One distinct advantage of doing this stuff in post is that you can tweak and tune it shot-to-shot if required, and if the results are ok but not great you can often lower the strength so it's not so visible. There's no guarantee that you'll be able to get an acceptable result though, so if you have to rely on it you're still better off doing it manually / properly in-camera.
  20. kye

    Panasonic G9 mk2

    Yes, lots of stuff is done in-camera and cannot be undone, so it's just a case of trying all the tricks we have and seeing how far we get. If you do a custom WB on the G9ii and GH6 on the same scene, does it remove the differences in WB between them?
  21. Just to round up the reason I was talking about MTF and digital having an un-natural resolution response: the primary function of the "look" of a film is to support the content of the subject matter. In most cases this means being slightly sharper or softer than a neutral point, but without standing out and calling attention to itself, unless an artificial feel was deliberately being added, like if a scene was set in a fake reality etc. I would suggest that film was a relatively neutral reference in terms of the aesthetic. We didn't go to theatres and think "oh my god the whole thing is a soft blurry mess!", so by having something massively sharper I would suggest we're diverging from an ideal neutral position. Thus my comment about resolution vs "making a meaningful final film", which is meaningful because of the content.
  22. I was being a bit provocative, mostly just to challenge the blind pursuit of specifications over anything actually meaningful, which unfortunately is a bit like trying to hold back the ocean. I have seen a lot of footage from the UMP12K and the image truly is lovely, that's for sure. Especially looking at the videos Matteo Bertoli shoots with the UMP12K and the BMPCC6K - because it's the same person shooting and grading both, the comparison has far fewer external factors - the 12K has a certain look that the P6K doesn't quite reach. The P6K is a great image too, so that's a high standard to beat.

     The idea of massively oversampling is a valid one, and I guess it depends on the aesthetic you're attempting to create. In a professional situation having a RAW 12K image is a very very neutral starting position in today's context. I say "in today's context" because since we went to digital, the fundamental nature of resolution has changed. In film, when you exposed it to a series of lines of decreasing size (and therefore higher frequencies), at a certain point the contrast starts to decrease as the frequency rises, to the point where the contrast becomes indistinguishable. The MTF curve of film slopes down as frequency goes up. In digital, the MTF curve is flat until aliasing kicks in, where it might dip up and down a bit, and then it falls off a cliff when the frequency reaches the sampling limit of one cycle per two pixels. In audio this would be the Nyquist frequency, and OLPFs are designed to make this a nicer transition from full contrast to zero contrast. While there is no right and wrong, this type of response is decidedly unnatural - virtually nothing in the physical world operates like this, which I believe is one of the reasons that the digital look is so distinctive.

     The resolution at which the contrast starts to decrease on Kodak 500T is equivalent to somewhere around 500-1000 pixels across, so the difference in contrast on detail (otherwise called 'sharpness') is significant by the time you get to 4K and up. So to have a 12K RAW image is to have pixels that are significantly smaller than human perception (by a looong way), so in a sense it takes the OLPF and moire and associated effects of "the grid", as you say, out of the equation, but it also creates an unnatural MTF / frequency response curve. In professional circles, this flat MTF curve would be softened by filters, the lens, and then by the colourist. If you look at how cinematographers choose lenses, their resolution-limiting characteristics are often a significantly desirable trait in guiding these decisions.

     Going in the opposite direction, away from the very high resolutions with deliberately limited MTF properties that Hollywood normally chooses, we have the low resolutions which limit MTF in their own ways. For example, a native 1080p sensor won't appear as sharp as a 1080p image downsampled from a higher resolution source. 1080p is around the limits of human vision in normal viewing conditions (cinemas, TVs, phones). In a practical sense, when people these days are filming at resolutions around 1080p, MTF control from filters and un-sharpening in post is normally absent, and even most budget lenses are sharper than 1080p (2MP!), so this needs some active treatment to knock the digital edge off things. The other challenge is that these images are likely to be sharpened and compressed in-camera, so will have digital-looking artefacts to deal with; these are often best un-sharpened too as they are often related to the pixel size.

     4K is perhaps the worst of all worlds. It isn't enough resolution to be significantly greater than human vision and have no grid effects, but it also has a flat MTF curve that extends waaay further than appears natural. Folks who are obsessed with the resolution of their camera are also more likely to resist softening the MTF curve, so are essentially pushing everything into the digital realm and having the image resemble the physical world the least. I find that "cinematic" videos on YT shot in 4K are the most digital / least analog / least cinematic images, with those shot in 1080p normally being better, and the ones shot in 6K or greater being the best (because up until recently those were limited to people who understand that sharpness and sharpening aren't infinitely desirable). The advantage that 4K has over 1080p is that the compression artefacts from poor codecs tend to be smaller, and are therefore less visually offensive and more easily obscured by un-sharpening in post. Ironically, a flat MTF curve is just like if you filmed with ultra-low-noise film and then performed a massive sharpening operation on it - the resulting MTF curve is the same. I'm happy to provide more info if you're curious. I've written lots of posts around various aspects of this subject.

     Yep, massively overenthusiastic amateur here. I mostly limit myself to speaking about things that I have personal experience with, but I work really hard behind the scenes, shooting my own tests, reading books, doing paid courses, and asking questions of and listening to professionals. I challenge myself regularly, fact-check myself before making statements in posts, and have done dozens / hundreds of camera and lens tests to try and isolate various aspects of the image and how it works. I have qualifications and deep experience in some relevant fields. I also have a pretty good testing setup, do blind tests on myself using it, and (sadly!) rank cameras in blind tests in increasing order of cost! 😂😂😂

     I'm happy to be questioned, as normally I have either tested something myself, or can provide references, or both. Sadly, most people don't have the interest, attention span, or capacity to go deep on these things, so I try and make things as brief as possible, which means they end up sounding like wild statements unless you already understand the topic. Unlike many professionals, I manage the whole production pipeline from beginning to end and have developed understandings of things that span departments and often fall through the cracks, or that involve changing something in one part of the pipeline and compensating for it at a later point in the image pipeline. Anything that spans several departments would rarely be tested except on large budget productions where the cinematographer is able to shoot tests and then work with a large post house, which unfortunately is the exception rather than the norm. Ironically, because I shoot with massively compromised equipment in very challenging situations, I work harder than most to extract the most from my equipment by pushing it to breaking point and beyond and trying to salvage things. Professional colourists are, unfortunately, used to dealing with very compromised footage from lower-end productions, but they are rarely consulted before production to give tips on how to maximise things and prevent issues.
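     To make the film-vs-digital MTF shapes concrete, here's a toy Python sketch. The film curve is an illustrative exponential rolloff (a placeholder constant, not measured Kodak 500T data) and the digital curve is an idealised pixel-aperture response, so treat the numbers as shapes rather than facts:

     ```python
     # Toy illustration of the two MTF shapes described above: film
     # rolls off gradually with spatial frequency, while an ideal
     # digital sampler stays near-flat and then collapses at Nyquist.
     import numpy as np

     nyquist = 0.5                       # cycles/pixel for a digital sensor
     freqs = np.linspace(0.01, 0.6, 13)  # spatial frequencies to sample

     # "Film": smooth exponential rolloff (illustrative constant only)
     film_mtf = np.exp(-3.0 * freqs)

     # "Digital": |sinc| of the pixel aperture - flat-ish until Nyquist,
     # then zero contrast (the detail aliases instead of resolving)
     digital_mtf = np.abs(np.sinc(freqs)) * (freqs <= nyquist)

     for f, fm, dm in zip(freqs, film_mtf, digital_mtf):
         print(f"{f:4.2f} cyc/px   film {fm:4.2f}   digital {dm:4.2f}")
     ```

     The printout shows the point: the digital response is still holding high contrast right up to the cliff, which is exactly the "everything sharp, then nothing" behaviour that nothing in the physical world produces.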
  23. Additionally, it's easy to look at residual noise on the timeline and turn up our noses, but by the time you have exported the video from your NLE, it's been uploaded to the streaming platform, and they've done goodness-knows-what to it before recompressing it to razor-thin bitrates, much of what we were seeing on the timeline is gone. The "what is acceptable and what is visible" discussion needs to be shifted to what is visible in the final stream. Anything visible upstream that isn't visible to the viewer is a distraction IMHO.
  24. I'm actually not that fussed by 8-bit video anymore, assuming you know how to use colour management in your grades. If you are shooting 8-bit in a 709 profile you can transform from rec709 to a decent working space, grade in there, then convert back to 709. Assuming the 709 profile is relatively neutral, this gives almost perfect ability to make significant exposure / WB changes in post, and by grading in a decent working space (RCM, ACES) all the grading tools work the same as with any other footage. The fact you're going from 8-bit 709 capture to 8-bit 709 delivery means that the bit-depth is mostly kept in the same place and therefore doesn't get stretched too much.

     The challenge is when you're capturing 8-bit in a LOG space, or a very flat space. This is what I faced when recording my XC10 in 4K 300Mbps 8-bit in C-Log - I have spoken in great detail about this in another thread. This was a real challenge and forced me to explore and learn proper texture management. Texture management isn't spoken about much online, but it includes things like NR (temporal, spatial), glow effects, halation effects, sharpening / un-sharpening, grain, etc. I found with the low-contrast 8-bit C-Log shots from the XC10 that by the time I applied very modest amounts of temporal NR, spatial NR, glow, and un-sharpening, not only was I left with a far more organic and pleasing image, but the noise was mostly gone.

     It's easy for uninformed folks to look at 8-bit LOG images like the XC10's and think they're vastly inferior to cameras where NR isn't required, but this isn't true - the real high-end cinema cameras are noisy as hell in comparison to even the current mid-range offerings, and professional colourists are expected to know about NR. A recent post in a professional colour grading group I am in was about NeatVideo, and it mentioned that NR is essential on almost every professional colour grading job. I'd almost go so far as to say that if you can't get a half-decent image from 8-bit LOG footage then you couldn't grade high-end cinema camera footage either.

     There are limits though, and things like green/magenta macro-blocking in the shadows were evident on shots where I had under-exposed significantly, but on cameras that have a larger sensor than the 1" XC10 sensor, and if exposed properly, these things are far less likely to be real issues.
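     If you want to see the bit-depth argument in numbers, here's a minimal Python sketch - gamma 2.4 stands in for the full Rec.709 curve and the 'flat profile' is a toy half-range curve, so it's illustrative rather than an exact model of any camera:

     ```python
     # 8-bit 709-ish capture graded in a float working space and
     # delivered back to 8-bit 709 keeps nearly all of its code values,
     # while stretching a flat 8-bit capture to full contrast leaves
     # gaps between codes - which is where banding comes from.
     import numpy as np

     codes = np.arange(256) / 255.0                 # every 8-bit level

     # Case 1: 709-style capture -> linear -> mild lift -> 709 out
     linear = codes ** 2.4                          # decode (stand-in curve)
     graded = np.clip(linear * 1.2, 0.0, 1.0)       # mild exposure lift
     out = np.round(np.clip(graded ** (1 / 2.4), 0, 1) * 255)
     print("unique codes, 709 round-trip:", len(np.unique(out)))

     # Case 2: flat 'log-ish' capture using only half the 8-bit range,
     # stretched back to full contrast in post
     recorded = np.round((0.25 + 0.5 * codes) * 255) / 255  # 8-bit quantise
     stretched = np.clip((recorded - 0.25) / 0.5, 0.0, 1.0)
     out2 = np.round(stretched * 255)
     print("unique codes, stretched flat capture:", len(np.unique(out2)))
     # the round-trip keeps nearly all codes; the stretched flat
     # capture keeps only about half of them
     ```

     Same grade, same 8 bits - the difference is purely where the bit-depth sits relative to the delivery curve, which is the point above.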
  25. kye

    DJI Pocket 3?

    I've been talking about OIS and IBIS having this advantage over EIS for years..... sadly, most people don't understand the differences enough to even know what I'm talking about.