Everything posted by kye

  1. Useful, but I suspect it doesn't solve the issue for most people who want to render out the clips separately.
  2. Panasonic GH6
    Ah yes, sorry, I'd completely forgotten the pulsing. I suspect there's a tradeoff in the code somewhere. My understanding was that the only advantage of PDAF over CDAF is that PDAF knows which direction focus lies in, whereas CDAF only knows what is in focus, not which way to move to improve it. This is why DfD pulses - it deliberately goes too far one way and then too far the other just to keep track of where the focus point is. Maybe Olympus has just tuned their algorithm to be more chill about it, which would result in less pulsing but potentially more time spent slightly out of focus when the subject moves.

    Of course, the reason that eye AF is now a thing is that people want a DoF that isn't deep enough to get the whole face in focus, so they need an AF mechanism that won't focus on someone's nose or ear and leave the eyes out of focus. This makes the job of AF much more difficult, and any errors that much more obvious. I wonder how much of Sony's focus breathing compensation is down to the crazy amount of background blur that people want nowadays. Even if you have perfect focus, when AF tracks the small movements of an interview subject moving their head around, the changes in the size of the bokeh are so large and so distracting that focus breathing becomes a subtle (or not so subtle) pulsing of the size of the whole image.

    I'm glad I don't have to deal with it. Even though I'm moving to AF, I'm using AF-S only and having DoFs that are much more practical (and, TBH, cinematic). Maybe it's time to reverse the 'common wisdom' online and start saying that if you want things to be cinematic then you need to close down the aperture, and that the talking-head-at-F1.4 look is a sign of something being video rather than cinema.

    Wow... So we're back to my iPhone 6 Plus, which had PDAF but didn't use it for certain things! I didn't expect that from Panasonic in 2023. I've seen a number of those "the AF is great, but you have to know how it works and choose the mode and perform integrals in your head to get the most out of it" videos, and I'm glad I'm not using it, TBH.
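    For anyone curious about the mechanics, here's a toy hill-climbing sketch in Python - my own illustration, nothing like Panasonic's actual DfD code - of why a contrast-only system has to hunt: it can only sample sharpness at the current lens position, so it has to step past the peak to know the peak was there, and every reversal is a potential visible pulse.

    # Toy model: sharpness peaks at lens position 50.
    def contrast_at(position):
        # Stand-in for a real sharpness metric (e.g. summed image gradients).
        return -(position - 50) ** 2

    def cdaf_hill_climb(position, step=4):
        # Step until contrast drops, then reverse with a smaller step.
        # The overshoot on every reversal is the visible 'pulse'.
        best = contrast_at(position)
        direction = 1
        while step >= 1:
            candidate = position + direction * step
            if contrast_at(candidate) > best:
                position, best = candidate, contrast_at(candidate)
            else:
                direction, step = -direction, step // 2
        return position

    print(cdaf_hill_climb(20))  # -> 50, but only after stepping past it

    # PDAF instead reads the direction (and a distance estimate) from the
    # phase offset in a single measurement, so it can drive straight there.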
  3. Their only job is to record and legally identify criminals. That doesn't require 15 stops of 10-bit 444 at 800Mbps!!
  4. ...and you can do stuff like put the adjustment layer over the clips, but under the titles. Lots of my early edits had halation applied to the titles, and, well, it's A look, just not the one I wanted! The other alternative is to create a default node tree with a bunch of Shared Nodes at the end, then copy that across all shots prior to grading; then when you change a shared node the change applies to all the shots, but it's still at clip level in the Resolve rendering pipeline. There's probably a handy way to add those after you've graded the images too, but I'm not sure how.
  5. Interesting comment about colour, thanks for sharing. My impression of Vivid on the GX85 was that it boosted the saturation. In theory, this is a superior approach to recording a flat profile, as long as it doesn't clip any colours in the scene. If colours are boosted in-camera, the boost is applied to the RAW data before quantisation and compression errors are introduced; then in post, when you presumably reduce the saturation, any quantisation and compression errors (blocking etc) are reduced along with it. This is why B&W footage from older cameras looks so much nicer than full-colour images.

    I've also noticed that downsampling in post, or adding a very slight blur, can obscure pixel-level artefacts and quantisation issues. I learned this while grading XC10 footage, which is the worst of all worlds - 8-bit 4K C-Log.

    I've also had success with a CST from 709/2.4 to a wider space (like DWG), grading there, then a CST/LUT back to 709/2.4 for output. This has the distinct advantage of applying adjustments in a (roughly) proportional way, so if you change exposure or WB it's applied proportionally across the luma range - much closer to doing it in-camera than operating directly on the 709 image with all its baked-in curves.
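    The quantisation argument can be sanity-checked with a toy example (brightness standing in for saturation here, and nothing like a real camera pipeline): boosting before 8-bit quantisation and pulling back in post preserves roughly twice the distinct levels of recording flat and boosting afterwards.

    import numpy as np

    scene = np.linspace(0.40, 0.45, 1000)  # a subtle gradient, e.g. skin tones

    def quantise_8bit(x):
        return np.round(np.clip(x, 0.0, 1.0) * 255) / 255

    # "Vivid" path: boost in camera (pre-quantisation), pull back in post.
    vivid = quantise_8bit(scene * 2.0) / 2.0
    # "Flat" path: record as-is, boost in post (post-quantisation).
    flat = quantise_8bit(scene) * 2.0

    print(len(np.unique(vivid)))  # -> 27 distinct levels survive
    print(len(np.unique(flat)))   # -> 14; the post boost can't invent levels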
  6. Panasonic GH6
    At this point, even if the CDAF made coffee and told you next week's lotto numbers, no-one would believe it was capable of anything. I've maintained that DfD and AI-based processing would get good enough for CDAF to catch up, but haters gonna hate, and "PD=GOOD CD=BAD" was always a simpler and therefore more desirable view to hold. Plus, anyone over 20 has had a bad experience with a super-cheap CDAF system. Maybe the prejudice will end with Gen Z?

    It's been years since I've seen a camera's AF fail to focus. These days, focus "failures" are where the camera focuses quickly and accurately on the wrong thing - and PDAF does this just as much, because CD vs PD has literally nothing to do with that part of the AF functionality.
  7. Quality is crap, you pay to buy it, and you pay ongoing fees to use it. It's a reasonable solution for security / security theatre, but if you just want a camera that you can watch via wifi then there are cheaper / better ways. Sample footage: and in case you think that's YT compression, here's one uploaded at 4K60:
  8. Not a solution, but maybe a workaround... Maybe just prior to the export you could create a powergrade with the nodes in the adjustment layer, then append those nodes to all the clips and then render that out. I'm not sure if you can grab a still from an adjustment layer, but I know you can't grab one from the Timeline.
  9. Yeah, they're pretty remarkable. Zooming out is an easier task as often the edges of the image don't include people, are often out-of-focus, and are usually of little consequence to the viewer of the image. My take on AI image generation is that it will replace the generic soul-less images that are of people that no-one knows and no-one cares about. It'll be sort-of generated stock images. I think it will be a very very long time before anyone wants AI generated photos or video of people they know rather than the real thing, except in special cases, just because they're not real. AI has been good enough to write fiction for literally decades, but hasn't overtaken people yet. Reviewers thought this 1993 romance novel was better than the majority of human-written books in the genre: https://en.wikipedia.org/wiki/Just_This_Once
  10. No idea about that one. I have a Ring doorbell, and you'd be better off setting up the telescope system in the bunker from Lost rather than using one of those.
  11. Or upgrade it to an eND like Sony have. Do we know if Sony have a patent on those things, or are they an option available to other manufacturers? I have a vague recollection of someone (maybe @BTM_Pix?) talking about RED doing some awesome experiments with an eND. One example was using it as a global shutter, fading it up at the start of the exposure and down at the end so that motion trails had softer edges. I think there were other things they did with it too, but that was the one that stuck in my mind.
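    If I'm remembering the idea right, a toy 1-D simulation shows why it would work (this is my assumption of the technique, not RED's actual implementation): weighting the exposure over time with a ramped window instead of a hard on/off makes the ends of a motion trail taper off instead of cutting off abruptly.

    import numpy as np

    samples = 100                            # time steps within one exposure
    positions = np.linspace(0, 50, samples)  # a point light crossing pixels

    def trail(weights):
        # Accumulate the moving point into a 1-D 'sensor' row.
        row = np.zeros(60)
        for pos, w in zip(positions, weights):
            row[int(pos)] += w
        return row / weights.sum()

    hard = trail(np.ones(samples))           # conventional hard shutter
    window = np.clip(np.minimum(np.linspace(0, 3, samples),
                                np.linspace(3, 0, samples)), 0, 1)
    soft = trail(window)                     # eND fades up, holds, fades down

    print(np.round(hard[:5], 3))  # trail starts at full strength: hard edge
    print(np.round(soft[:5], 3))  # trail ramps up from zero: soft edge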
  12. Ah! When I read "S16 sensor size pocket love" and saw the lovely organic colour grade, I took the word "pocket" to mean the OG BMPCC... You did very well!

    The more I learn about colour grading (and other post-production image manipulations), the more I realise that the potential of these cameras is absolutely huge, but sadly un-utilised to the point where many cameras have never been seen even remotely close to their potential. The typical solo-operator's level of knowledge of cameras/cinematography vs colour grading is equivalent to a high-school teacher vs a hunter-gatherer. I am working on the latter for myself, trying to even up this balance as much as I can.

    As you're aware, I developed a powergrade to match the iPhone with the GX85, and it works as a template I can just drop onto each shot. Unfortunately I am now changing workflows in Resolve and the new one breaks one of the nodes, so it looks like I will have to manually re-construct that node, which I have been putting off.

    I've also been reviewing the excellent YT channel of Cullen Kelly, a rare example of a professional colourist who also puts knowledge onto YouTube, and have been adapting my thinking with some of the ideas he's shared. One area of particular interest is his thoughts on film emulation. To (over)simplify his philosophy: film creates a desirable look and character that we may want to emulate, but it was also subject to a great number of limitations that were not desirable at the time and are likely not desirable now (unless you are attempting to get a historically accurate look), so we should study film in order to understand and emulate the desirable things while moving past the limitations that came with the real thing. I recommend this Q&A (and his whole channel) if this is of interest: As I gradually understand and adopt various things from his content I anticipate I will further develop my own powergrades. I'm curious to see how you're grading your LX15 footage, if you're willing to share.

    Wow, that is small! I'd love something that small... it's a pity the stabilisation doesn't work in 4K or at high frame rates. Having a 36-108mm equivalent lens is a great focal range, similar to many of the all-in-one S16 zooms back in the day. I love the combination of the GX85 + 14mm f2.5 + 4K mode crop, as it gives a FOV equivalent to a 31mm lens. I used to be quite "over" the 28mm focal length, preferring 35mm, but I must admit I did find the 31mm FOV very useful when out and about, and having the extra reach is perfect for general purpose outdoor work. I want to upgrade to the 12-32mm kit lens, which gives the GX85 a 26-70mm FOV in 4K mode (and 52-140mm with the 2x digital zoom for extra reach).
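    For anyone checking the maths, those equivalences come from multiplying by the MFT 2.0x crop and roughly a 1.1x extra crop in the GX85's 4K mode (the exact 4K crop factor is approximate here, hence the small rounding differences):

    MFT_CROP = 2.0
    GX85_4K_CROP = 1.1   # approximate

    def ff_equiv(focal_mm, digital_zoom=1.0):
        # Full-frame equivalent focal length for the GX85 in 4K mode.
        return focal_mm * MFT_CROP * GX85_4K_CROP * digital_zoom

    print(round(ff_equiv(14)))                              # 14mm pancake -> ~31mm
    print(round(ff_equiv(12)), round(ff_equiv(32)))         # 12-32mm -> ~26-70mm
    print(round(ff_equiv(12, 2)), round(ff_equiv(32, 2)))   # 2x zoom  -> ~53-141mm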
  13. RAW is uncompressed. By definition, anything that isn't RAW is compressed with a lossy compression, and anything compressed with a lossy compression is lower quality than RAW. Therefore, RAW is superior in almost all aspects relating to image quality. Your comment: indicates that it's only the ability to colour grade it that is improved, not anything else. In the context of pulling stills out of video files, all image quality aspects are important and relevant, not just the colour grading aspects. Maybe you meant something else, but that's not what you said.
  14. A YouTuber I follow appeared on a TV show and vlogged a bit of BTS content, and also got permission to share a bit of the finished content, so a rare YouTuber vs professional crew comparison moment occurred. I'll preface this by saying that the YouTuber makes great content about Japan and is a talented film-maker. She does have, however, the same approach to YouTube camera equipment as most, based around the old combo of Sony A7S3, DJI drone, and GoPro, and the usual approach to colour grading etc: and the show was shot on Sony cinema cameras, so a same / similar image pipeline to the below, from the TV show: I know which one looks nicer to my eye...

    The other difference I see is that images for cinema get graded a lot darker than for social media. Cinema treats 50% IRE as where the highlights start, and social media colour grading thinks 50% IRE is where skin tones should be!
  15. On the contrary - the vast majority of content on this forum is devoted to the idea that any non-RAW codec is inferior to a RAW one; otherwise, when I shoot with any of my non-RAW cameras, there wouldn't be any loss of image quality compared to RAW. RAW is special for still images for lots of reasons:
    • The frame will have no compression artefacts
    • The frame will have no processing artefacts (sharpening, temporal or spatial NR, etc)
    • As well as being easier to colour
    This matters a lot more for stills, as they're potentially printed, put on walls, and looked at on a regular basis for years or even decades. Far less scrutiny is placed upon an individual frame from a movie or TV show!

    Do people even grade anymore? The modern Sony sensors have so much DR that most of YT looks like C-Log or V-Log from the days when cameras had 11-12 stops. The turning point for me was realising that things shot on film were super contrasty, with clipped highlights and crushed blacks, so it wasn't mandatory to keep all the DR of the image if it didn't serve your purpose. Then, when I started grading to have that level of contrast, I worked out why people don't do it - it's hard to get a great looking image with that much contrast (and therefore saturation) without it looking cheap and digital. Now I don't even bother capturing that much DR unless it's for a specific scene, like a sunset etc - which allowed me to go from a GH5 to the GX85 and shoot lower-DR 8-bit 709 instead of 10-bit HLG.
  16. Did you watch it? Is it a good interview? There's heaps of content on Joker around online - I think they did a bunch of it as PR. Here's an interview with the Joker colourist Jill Bogdanowicz by Cullen Kelly, who is a professional colourist: I highly recommend Cullen's YT channel BTW. He's a real working pro, and the more of his videos I watch, the more embarrassed I get for the other YouTubers that pretend to be colourists.
  17. The BM Camera forums are constantly talking about new cameras. There was a thread that had hundreds of posts and went from March 2022 to April 2023, and the new mount thread simply replaced that thread as the current water cooler. The entire premise of the old thread was that BM hadn't announced a new camera already. https://forum.blackmagicdesign.com/viewtopic.php?f=2&t=157059 It's pretty simple:
    • If BM teases an announcement then people go wild speculating if it's a new camera
    • If BM teases a new camera then people go wild speculating what it will be
    • If BM announces a new camera then people go wild speculating what features it will have
    • If BM doesn't announce anything then people go wild speculating what is going on
  18. "The best camera is the one you have with you" is often translated as "the best camera is the one that doesn't get confiscated by security for looking too professional" 🙂 As I've mentioned previously, on my trip to South Korea earlier this year I favoured the GX85 + 14mm F2.5 pancake lens combo over the GH5, simply due to the size while filming in public and private spaces (like museums, etc). I would have liked more flexibility and since then have worked out that the best two lenses for my purposes are the 12-32mm pancake and the 45-150mm tele - the two kit zooms that were originally paired with the GX85. In terms of the quality of the files, it's adequate for my purposes, but I can understand if it doesn't meet the needs of others. Certainly I would appreciate a few added features if I was given a magic wand, but I'm ok with it how it is, and I certainly wouldn't make it any larger to accommodate any of the additional things I'd ask for.
  19. The industry is both cautiously innovative and hugely traditional, and very slow to change. What I mean by this is that if there is a new type of light source (e.g. LED) then the industry will take its time to evaluate the technology, but once it is understood in terms of strengths, weaknesses, and impacts down the image pipeline etc, then it will adopt it. Moving from film to digital acquisition was another example of this cautious innovation.

    However, if there are any structural changes to how things are done, then they can take literally decades to be adopted. A digital intermediate step in the image pipeline was common practice even before there was digital acquisition and digital distribution, and everyone knew that colour grading and compositing was a huge factor in the image, but it's still not universal practice for the colourist to be involved up front in choosing the camera package and the LUT(s) used to view images on set, despite the image being completely reliant on the colourist being able to deliver the desired look. This inertia is because it involves a change in how the team is structured - you have to involve someone from post-production in pre-production!!

    The Ronin 4D is a less dramatic example of such a structural change. If you use the 4D to shoot, then who operates the camera - is it the steadicam operator? If that's the only camera, then is the steadicam operator the DoP? The number of films that were shot completely on steadicam is pretty low, so this isn't the normal practice. Are there union implications? It's a new lens system - is everyone familiar with the look? Do they like it? What availability is there in rental houses? What test footage is available (important for those who don't have the luxury of camera tests)? It's also worth mentioning that DJI don't currently offer a way to shoot with the 4D that doesn't use the gimbal. Not many people are willing to shoot their whole film on a gimbal, and if you're already using a RED or Alexa there's not much advantage to the 4D over just putting a Komodo-X or Alexa Mini on a gimbal.

    It's easy for one-person operations to see the whole process from start to finish as completely up for grabs, but the industry has been designed as a production line where each step of the process is done in a specific way, by a specific role, requiring a specific skillset. No-one wants to go first, no-one wants to take a risk, and people have enough change happening within the current structure to deal with, so no-one wants to look at things that throw the current approach up into the air.
  20. Lots of cameras have a wifi function that will display a live preview of the image. I've used the GoPro one, the Panasonic one, the Sony X3000 one, the Canon one, etc. Do you have an older spare camera? It might have one. Things like Ring are all-in-one solutions that work via the cloud, but the Ring one requires a paid plan, and the quality is pretty low because the feed goes via the internet and the motion events are stored on their servers etc. Here's the legacy Panasonic app - it works with a million of their older camera models, some of which are probably going for almost nothing on eBay: https://av.jpn.support.panasonic.com/support/global/cs/soft/image_app/index.html
  21. You should be able to use any camera that has a video out and a power in connection. The front door camera that Casey Neistat used in his studio was just a GoPro that was powered via USB and connected to a TV. Surely you'd have an old camera around that has these two connections?
  22. I played with the Ptools hack but could never get it working on my GF3. Unfortunately, lots of the information online seems to have been lost, and the things I did find took a good deal of searching, reading entire threads and following all the links etc. The GF3 would be spectacular with a bit-rate bump - 17Mbps wasn't really enough for 1080p.

    For anyone reading that hasn't seen them, here's a few from the mighty GF3: and here's a random comparison of JPG vs Video with a few lenses / filters: Looking back on the above, I remember the difference in detail between the JPGs and the video being a lot larger - maybe the 1080p YT compression really softened it.

    I've done a lot more videos with it, both tests as well as real edits, and my conclusions were that:
    • If you use it without a stabilised lens, or with any lens that isn't razor sharp, then it's only good for a Super-8 style final product, due to the soft rendering and micro-jitters
    • If you use it in mixed lighting or with low-CRI lighting then you're going to have a very bad time colour correcting in post
    • If you use it in any sort of low-light situation then it might not even get up to the Super-8 level of video AND you're going to have a bad time in post
    • DAMN ITS SMALL!!!!
    When paired with the Olympus 15mm F8 body cap lens it is almost as pocketable as a smartphone, and if used outside when the sun is up, it'll do a great job of Super-8 style videos. It is usable with the 12-35/2.8 lens, but if you're going to have a setup that large, you may as well take the GX85 or P2K.

    Here's a shot of some of my collection - the GF3 is the second smallest, only larger than the mighty (pink) J20 that I entered into the camera challenge. It was great looking back over these GF3 videos - fun times!
  23. You're right about needing the 300/2.8 to get some background defocus for those shots, but it really depends on the subject distances involved. Your shots are of a sports ground that is absolutely enormous compared with a lot of other very common sports venues - basketball, various forms of football, lacrosse, etc. My sports photography was on Australian Football fields, which seem to be significantly larger than almost any other sports field, so I struggled with lens reach and had issues manually focusing etc. This isn't always the case - if you're shooting a basketball game then it's a whole other ballgame... (sorry - couldn't resist!)
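    As a rough back-of-envelope (using the standard thin-lens approximation for a background at infinity, so indicative only): the blur disc on the sensor is roughly the physical aperture (focal length / f-number) times the subject magnification, which is why subject distance dominates.

    def blur_disc_mm(focal_mm, f_number, subject_m):
        # Approximate blur disc diameter on the sensor for a background at
        # infinity: (physical aperture) x (subject magnification).
        aperture_mm = focal_mm / f_number
        magnification = focal_mm / (subject_m * 1000 - focal_mm)
        return aperture_mm * magnification

    # Same 300mm f/2.8, subject on a basketball court vs a huge field:
    print(round(blur_disc_mm(300, 2.8, 15), 2))  # ~2.19mm at 15m - creamy
    print(round(blur_disc_mm(300, 2.8, 60), 2))  # ~0.54mm at 60m - much less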
  24. A lack of glass is definitely a problem. It would be a move that relied on there being more over time, that's for sure. Re-thinking about it now, it would make more sense for them to make a camera with interchangeable mounts, and make ones for EF, PL, and whatever else they could make work. As you said, it's unlikely to be a FF Pocket; it would be a FF Ursa, and I would think it would be their equivalent of the Alexa 65. In cinema circles, S35 is regarded as standard and FF is regarded as "larger than standard", which is why ARRI called it the Mini LF even though the sensor was basically FF.

    I don't know how much a rumour like this would be true either. If you spend any time on the BM forums, the number of rabid camera bros becomes obvious, and every move that BM makes is answered with hundreds of comments asking for everything under the sun as the next upgrade. I've seen people ask for the Pocket cameras to get IBIS via a firmware upgrade! So yeah, is there a rumour that BM will go FF with L-mount? Probably. Is there a rumour that BM will go FF with 17 stops of DR and IBIS and DPAF and a built-in AI drone? Probably!

    My understanding is that they want to create and control the full pipeline - to have a complete ecosystem. If you run a YT channel then you can easily do this: just buy a P4K and use Resolve. I don't know enough about running a TV studio, but from the outside it looks like you could set one up using all BM equipment. In all the BTS stuff I've seen from YT streamers, it's normally BM ATEM switchers they use to manage sharing their screen, one or more cameras, external sound, monitoring, and any graphics. They have studio-only cameras, etc. They even have a film scanner! From this perspective, the more people they get using BRAW the better.

    Something you may not be aware of: Resolve supports almost all professional and broadcast codecs except ProRes RAW. So BM and Apple have extended the FCPX vs Resolve competition into having dedicated input formats. This is another sign it's about ecosystems.

    BM continue to support entry-level cameras like the P4K but are gradually extending up into the high-end cinema line with the UMP12K (and the just-released UMP12K OLPF). The P4K is for making sure that people start with BM in film school, and the UMP12K OLPF is for making sure that your BM user-base doesn't jump ship when they get a big budget.