kye
Everything posted by kye

  1. If you have questions then ask away!
  2. Nice to hear your workflow. If you were doing feature work with one camera then you might be using groups to match lenses! Or if not that, then you could use groups for scenes, which would be handy. I really do like Resolve for how flexible it is, but that really adds to the learning curve! I've had a go at grading this. It's probably terrible and I have no idea what you would consider a good grade in the business, but I've white-balanced on the two windows and faded the shots between those two, so see how you go. Happy to share my settings if they are of interest. (sigh, it's uploading with an ETA of over an hour... I'll post the link when it's done) That would be very handy!! 10-bit is really where it's at for that lovely thick colour look. I find that the look of RAW is wonderful, but to my eye 10-bit video with a half-decent codec and little to no sharpening has most of the advantages of that look. You can take a sledgehammer to the footage in post and it doesn't even bat an eyelid, which is great if you're shooting flat LOG profiles, because converting them to rec709 is the equivalent of a very large hammer.
  3. I used to use those Pre-Clip and Post-Clip groups, but I got a bit frustrated with them because you couldn't have clips in multiple groups. Resolve has now gone one better with Shared Nodes, which means you can combine treatments in any way you might want to. I always think of the example of shooting two scenes with two cameras. You obviously want to grade the cameras differently to match them, so you want all the shots from each camera to share the same processing. Once they all have the same kind of look, you want to apply a creative grade, and you actually want to grade the two scenes differently as they have different dramatic content. Previously you could use grouping to combine the processing of either the cameras or the scenes, but not both. Now with shared nodes you can mix and match them however you like.
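If it helps, shared nodes behave like composable functions: each clip's grade is just the shared treatments applied in order, in any combination. A toy Python sketch (all the grade names and numbers here are invented, purely to show the mix-and-match idea):

```python
# Hypothetical model of Resolve's shared nodes as composable functions.
# Every grade below is made up; the point is that camera-match and
# scene-look treatments combine freely, unlike the old group system.

def camera_a_match(pixel):
    # pretend camera A needs a slight green cast removed
    r, g, b = pixel
    return (r, g * 0.95, b)

def camera_b_match(pixel):
    r, g, b = pixel
    return (r * 1.02, g, b)

def scene_1_look(pixel):
    # warm creative grade for scene 1
    r, g, b = pixel
    return (min(r * 1.1, 1.0), g, b * 0.9)

def scene_2_look(pixel):
    # cool creative grade for scene 2
    r, g, b = pixel
    return (r * 0.9, g, min(b * 1.1, 1.0))

def grade(pixel, *shared_nodes):
    # a clip's node tree is just its shared nodes applied in order
    for node in shared_nodes:
        pixel = node(pixel)
    return pixel

# any camera/scene combination, with no grouping restriction:
shot = (0.5, 0.5, 0.5)
graded = grade(shot, camera_a_match, scene_2_look)
```

Changing a shared node (say, tweaking `camera_a_match`) would ripple through every clip that uses it, which is the whole appeal.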
  4. I bought the GHa LUT pack that @Sage developed and he recommended -5, the lowest available as that's what matched his Alexa. I figured that whatever matches an Alexa is good enough for me! I've just switched to the 5K Open Gate mode though, and I think that has less processing again, but I haven't shot anything real with it yet so I am yet to compare.
  5. I've just had a look over those settings and I'm a little confused. The manual seems to indicate that the colour space and gamma options in the Camera RAW tab specify how the clip was shot, and it specifically says that it supports footage from RED, ARRI, etc, but I see no options for them in the ML RAW project I opened. I suspect footage from those cameras will have metadata and Resolve will recognise them and might add more options. [Edit: oh, hang on, no, those options are what colour space and gamma to convert the files to. I suspect that Resolve just works out what the RAW format is by looking at the files.] Regardless, it looks like it doesn't have many options at all, so your current workflow seems to be the best strategy.
  6. kye

    Lenses

    I'm noticing a bunch of stuff, but it's hard to know what the lens is contributing vs the other variables. The Zeiss shot has a slightly greener tone overall and seems to have slightly less contrast. Any thought of comparing sharpness went out the window with the motion blur. Out-of-focus areas seem similar. The Canon in a way seems wider, but I suspect that's the composition of the frames, as the Zeiss shot has things front-to-back and the Canon has things side-to-side. Interesting comparison though.
  7. The MFT mount has one of the shortest flange distances of any mount, so you can adapt almost any lens onto it. With EF the distance is much larger, and so there are a bunch of lenses you can't adapt to EF. It won't matter to most people, but if you happen to fall desperately in love with any of the lenses in that in-between range then it rules out the whole camera because of the mount. https://en.wikipedia.org/wiki/Flange_focal_distance lists them. [Edit: and specifically because those lenses can't be adapted to EF, and because EF is the standard mount for many people, those lenses in the 'gap' can be massive bargains! ]
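The adapter rule behind that Wikipedia table is simple arithmetic: adapting only works when the lens's native flange distance is longer than the camera mount's, so the adapter can make up the difference. A quick sketch (flange figures in mm, as listed in the linked table):

```python
# Flange focal distances in mm (from the Wikipedia table linked above).
FLANGE_MM = {
    "MFT": 19.25,
    "Canon EF": 44.00,
    "Canon FD": 42.00,
    "Minolta MD": 43.50,
    "M42": 45.46,
    "Nikon F": 46.50,
}

def adapter_thickness(lens_mount, camera_mount):
    """Required adapter thickness in mm, or None if adapting isn't possible."""
    gap = FLANGE_MM[lens_mount] - FLANGE_MM[camera_mount]
    return gap if gap > 0 else None

# Canon FD sits in the 'gap': adaptable to MFT, but not to EF,
# because FD's flange distance is shorter than EF's.
adapter_thickness("Canon FD", "MFT")       # needs a 22.75mm spacer
adapter_thickness("Canon FD", "Canon EF")  # None - can't be adapted
```

This is exactly why an FD lens can be a bargain for an MFT shooter while being useless to an EF shooter.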
  8. Yeah, likely a fixed focus lens with quite a small aperture, which just reinforces the need to get plenty of light into the scene. I'm far from an expert in colour grading, but I found that my problem was not getting the fundamentals right: if you don't get WB right, or don't get your colour space right, then no amount of advanced tools in Resolve will help you. I don't know what your footage looks like, but if you can share a troublesome clip SOOC then maybe we can see what you're looking at? I have heard that Sony colours aren't the easiest to process, but I haven't played with any Sony footage myself so can't really say if that's the problem or not.
  9. Dave Dugdale with thoughts on HDMI cables. Not really any strong conclusions, but might be useful to some?
  10. Any chance of posting that other shot? Seeing them side-by-side would be great and I went back 10 pages and couldn't find it lol. If you go to Insert Other Media -> Insert Existing Attachment then you can choose from every image you've ever attached on the forums - it's quite useful for revisiting previous images.
  11. Makes sense to me. Of course, that's not always a predictor of what Canon will do, so who knows!!
  12. Looks like a GH5 to me. The shot at 0:40 shows the red dot on the record button quite clearly. 10-bit video can help in mixed lighting, but I shoot in mixed lighting all the time and I found that the weak link was my colour correction skills. Knowing how to process the image and how to handle colour spaces in post really upped my game. Here are some random thoughts:
     • If you have a shot where you move from one WB to another, you can create two copies of the shot, one with the first WB and one with the second, then just cross-fade between the two. It saves all kinds of work trying to automate changes etc.
     • Depending on what software you're using, you can try doing a manual WB adjustment before converting from the capture colour space (eg, LOG) vs afterwards. I used to try grading the LOG footage directly, without anything converting between the colour spaces, and I'd create awful-looking images - using a technical LUT or Colour Space Transform to do the conversion made a huge difference for me.
     • I don't know about you, but I often shot in mixed lighting because that was the only lighting, and if the camera wasn't getting the right exposure levels or the ISO was straining (I use auto ISO) then that's another source of awful colours - maybe just use heaps of light.
     In a sense you can either pay the guy or just do a bunch of tests at home and try to figure it out yourself. I'd suggest:
     • Work out how to change the WB in post by shooting a scene at a good WB and then at a wrong WB, then in post work out how to match the wrong one to the good one.
     • Then work out how to go between two scenes with different lighting: do one shot that moves between two rooms with different WB and use the cross-fade technique above to figure that out.
     • Then work out how to deal with mixed lighting by having two different coloured light sources in the same room, moving between them, and working out how to grade that.
     Basically, start simple, then add more difficulty until you're comfortable shooting by the light of a xmas tree. You may find that a middle-of-the-range WB in-camera gives you the best results, but it might also be that one lighting source is the most difficult and you just set it to that and then adjust for the others in post. Experimentation is the key with this stuff. But keep your head up - this stuff is hard. Colour grading well-shot footage in robust codecs is as easy as applying a LUT. Colour grading badly-lit footage from consumer codecs is the real challenge and will test all but the most seasoned colourists, so in a way we're operating at the hard end of the spectrum.
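The cross-fade trick is just a linear blend between two corrected versions of the same frame over the duration of the transition. A toy sketch (simple per-channel gains stand in for a real WB adjustment, and all the numbers are invented):

```python
# Toy model of the WB cross-fade: two WB-corrected copies of a shot,
# blended over time. Real WB correction in an NLE is more involved;
# per-channel gains are a stand-in.

def apply_wb(pixel, gains):
    return tuple(c * g for c, g in zip(pixel, gains))

def crossfade(pixel_a, pixel_b, t):
    # t runs 0.0 -> 1.0 across the transition
    return tuple(a * (1 - t) + b * t for a, b in zip(pixel_a, pixel_b))

shot = (0.5, 0.5, 0.5)
tungsten_fix = apply_wb(shot, (0.8, 1.0, 1.3))   # cool the shot down
daylight_fix = apply_wb(shot, (1.1, 1.0, 0.9))   # warm the shot up

# at t=0 you see the tungsten correction, at t=1 the daylight one,
# and the transition glides between them with no keyframe gymnastics
halfway = crossfade(tungsten_fix, daylight_fix, 0.5)
```

In the NLE this is literally two copies of the clip on adjacent tracks with a dissolve between them, so there's nothing to automate.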
  13. I agree about IBIS. Making a tiny camera and then having to pack a gimbal, shoulder rig, or tripod to go with it sure takes away a lot of the advantages of making it small and light. The XC10 is an invisible camera - people who bought it don't talk online about their C-cameras, and that's what the XC10/15 is. XC10 footage is as good as C300 footage for most shots, so as long as it's used within its design limitations you could be seeing footage from it every day and not have a clue. Canon said they sold more of them than expected and they are seen on Hollywood sets recording BTS and interviews. Interesting observations, and it would probably sell well. They practiced a bit with the XC10/15, so they should know how to combine the C100, the XC10/15, and their lenses into a good product. If it were an RF mount then they'd have to make sure that an adapter was available and worked well. That form factor really lends itself to people using the Sigma 18-35 f1.8 and other similar lenses, so EF support would be mandatory - no-one buying a C50 would want to re-buy all their lenses. The EF-M mount might be the better alternative.
  14. I think it uses the Blackmagic Design Film colour and gamma spaces? This would be for ProRes clips. If you're shooting RAW then you would be using the RAW tab and you wouldn't have to specify what the inputs were. I haven't used this though, so maybe someone can confirm? BM could have a major win with some tools that let you transfer files between media sources in the field. I'm not sure if that's possible, but if you could record to an internal card and then periodically download that card to a HDD to free up the card again, that would be great. This would allow small fast cards to work with large slow HDDs and get the best of both worlds. In the absence of the camera doing it, I recall @BTM_Pix was talking about such a device for transferring files from any media to any other media in the field?
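On the offloading idea: the critical part of any field transfer is verifying the copy before you free up the card. A hypothetical sketch of that copy-then-verify step (paths and function names are all invented):

```python
# Hypothetical field-offload sketch: copy a clip from card to HDD,
# verify the checksum, and only then treat the card copy as safe to
# delete. All names and paths are made up for illustration.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Checksum a file in 1MB chunks so large clips don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def offload(src: Path, dst_dir: Path) -> bool:
    """Copy src into dst_dir; return True only if the copy verifies."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / src.name
    shutil.copy2(src, dst)          # copy2 preserves timestamps
    # never trust a copy you haven't verified against the original
    return sha256_of(src) == sha256_of(dst)
```

Dedicated offload tools do essentially this (often with xxHash instead of SHA-256 for speed), which is why they're trusted over a plain drag-and-drop copy.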
  15. The camera RAW tab is kind of like the CST plugin in a sense, because it forces you to change your RAW clips into some other format that Resolve can work with, and the controls are broadly the same too. You could try decoding a clip straight to LogC, skipping the CST plugin, and see if that has a different effect. In theory it shouldn't, but the RAW tab doesn't have all the nice Tone Mapping and Gamut Mapping rolloff features, so it might be in your interests to keep your current workflow. Good article. Resolve is so flexible that there are lots of ways to do the same thing. I tell people to use the CST plugin because it's the simplest and the most flexible. If you adjust the colour space in the Media Pool then you can't make adjustments before the conversion, and you only get a conversion at the start of the processing and then at the end for viewing or export. Some people prefer workflows where you can adjust the clip before it gets converted (I do WB and exposure adjustments before the conversion), and sometimes you want multiple conversions so you can do different grading steps in different colour spaces. The Film Look LUTs included in Resolve expect a LogC input, but you probably don't want to grade in LogC, so if you've converted the clip to rec709 you need to convert it back before you use those LUTs. You can also hide the transforms within a node by changing the colour space that a single node operates in. These work by changing the colour space, applying the node adjustments, and then changing the colour space back again. They don't take up extra nodes like the CST plugins do, but I think they can only convert from the timeline space and back to it afterwards, so they're not as flexible as the separate plugin. They can also confuse the hell out of you if you lose track of what colour spaces are being used where, as the order of operations isn't obvious.
There are kind of two main types of colour work: the people who just want to convert the colour to something usable and move on with their lives, and those who have the time for complex node trees. You can turn complex node trees into an efficient workflow if you know what you're doing and set everything up with presets etc, so there's no right answer. I think Resolve is showing its age with things like colour management being everywhere - it probably only used to exist in one place, and then they added it to another to make it more flexible, then another, etc. In some ways Resolve is like the plucky startup challenging the big players with new technical offerings, and in other ways it's the technical behemoth that has existed for 15 versions.
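For what it's worth, the gamma half of what the RAW tab or a CST does is just a published curve. Here's a sketch of the ARRI LogC (v3, EI 800) decode, turning a LogC code value back into linear light - the constants are from ARRI's data sheet as I remember them, so double-check against their document before relying on this:

```python
# ARRI LogC3 (EI 800) decode, sketched from ARRI's published curve.
# Constants as commonly quoted; verify against ARRI's own data sheet.
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def logc3_to_linear(t):
    """Map a LogC3 code value (0..1) to scene-linear light."""
    if t > E * CUT + F:          # log segment (threshold ~0.1497)
        return (10 ** ((t - D) / C) - B) / A
    return (t - F) / E           # linear toe segment near black

# 18% mid-grey encodes to roughly 0.391 in LogC3, so decoding that
# should land back near 0.18 linear:
logc3_to_linear(0.391007)
```

This is also why grading LogC directly looks so wrong: the code values are logarithmically spaced, and every tool that assumes linear or rec709 input misbehaves until you run a conversion like this one.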
  16. Ok, here's a thought. We've already found out that fungus spores are in the air and we can't keep them out of our lenses, so the solution is low humidity storage to prevent the fungus from growing. So, this begs the question - if you have a lens with fungus growing in it, why use nasty chemicals to kill all the fungus when it's everywhere anyway? I can't think of a reason to use chemicals at all. Unless I'm missing something really obvious, why not just use soap and water and non-abrasive cloths to physically remove the fungus, then just rinse in distilled water, air dry and reassemble?
  17. I think the DSLR revolution caught most people by surprise, including the manufacturers themselves, who mostly added video as a "why not" kind of feature. Random fact, but SMS messages were originally a technical feature and not designed as a consumer feature, and they were only released because "why not", with no-one anticipating texting, or deaf people being able to use telephones for the first time, etc. Anyway, because the DSLR manufacturers were in the low-margin stills photography business, they created technical architectures with all kinds of limitations that mostly didn't apply to stills photography. When the revolution hit, they started the current race for the video market with legacy architectures and economies of scale that meant everything was severely restricted. In a perfect world they would just re-design from the ground up, but that kind of ground-up redesign is a big part of what makes cinema cameras expensive, and combined with the lack of demand for video quality in this area of the market I suspect the economics don't stack up. There are additional barriers too, like protecting their cinema camera lines, organisational politics and power-broking, and the same lack of vision that meant the DSLR revolution was a surprise in the first place.
  18. Why you should use the Colour Space Transform plugin instead of a LUT (even the built-in ones from BM)...
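One concrete reason behind this: a LUT only samples the transform at a fixed number of points and interpolates between them, and it clips anything outside its input range, whereas the CST evaluates the maths exactly for any input. A toy 1D example (the gamma-2.4 curve is just a stand-in for a real transform):

```python
# Toy comparison: a 17-point 1D LUT versus the exact curve it was
# baked from. The curve is a stand-in; real LUTs are usually 3D.

def exact(x):
    return x ** 2.4              # stand-in transfer curve (a "CST")

SIZE = 17
TABLE = [exact(i / (SIZE - 1)) for i in range(SIZE)]

def lut(x):
    x = min(max(x, 0.0), 1.0)    # LUTs clip out-of-range input
    pos = x * (SIZE - 1)
    i = min(int(pos), SIZE - 2)
    frac = pos - i
    return TABLE[i] * (1 - frac) + TABLE[i + 1] * frac

# worst-case interpolation error across 0..1 - small, but non-zero,
# whereas the parametric transform has none:
err = max(abs(lut(i / 1000) - exact(i / 1000)) for i in range(1001))

# and anything above 1.0 (e.g. HDR highlights) silently clips:
# lut(1.5) returns the same value as lut(1.0)
```

Bigger LUTs shrink the interpolation error but never remove the clipping problem, which is why a parametric transform is the safer tool when your footage can go out of range.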
  19. Just found this thread about how to de-haze a lens: http://www.reduser.net/forum/showthread.php?160423-Lens-fog-haze-what-is-it The summary is that you put the lens in a vacuum chamber under hard vacuum for about a minute at room temperature. This makes sense, as fog/haze is typically stuff that has evaporated from the lens materials and then condensed onto the glass. Putting the lens under vacuum will force those things to evaporate again, and considering that the haze is all stuff that can evaporate (because that's how it got there), once it is evaporated again by the vacuum there will be no residue on the lens elements. It won't fix fungus or any other lens problem, but considering it doesn't require any disassembly, it is a very convenient treatment.
  20. How did you convert from S-Log to the GHa? I'd suggest that an S-Log3 to V-Log conversion in the Colour Space Transform plugin in Resolve should do a pretty good job. Or maybe this LUT calculator? https://cameramanben.github.io/LUTCalc/LUTCalc/index.html
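For the curious, the gamma part of that CST conversion is just Sony's published S-Log3 decode followed by Panasonic's published V-Log encode. A sketch using the constants as I recall them from the two white papers - double-check against the real CST (or the papers) before trusting it:

```python
# Sketch of the gamma half of an S-Log3 -> V-Log conversion.
# Constants as quoted in Sony's and Panasonic's white papers;
# verify against the originals before using in anger.
import math

def slog3_to_linear(t):
    """Sony S-Log3 code value (0..1) to scene-linear."""
    if t >= 171.2102946929 / 1023:
        return (10 ** ((t * 1023 - 420) / 261.5)) * 0.19 - 0.01
    return (t * 1023 - 95) * 0.01125 / (171.2102946929 - 95)

def linear_to_vlog(x):
    """Scene-linear to Panasonic V-Log code value."""
    if x < 0.01:
        return 5.6 * x + 0.125
    return 0.241514 * math.log10(x + 0.00873) + 0.598206

# 18% grey sits at ~0.411 in S-Log3 and ~0.423 in V-Log, so the
# round trip should land mid-grey in the right place:
linear_to_vlog(slog3_to_linear(420 / 1023))
```

A full conversion would also remap the gamut (S-Gamut3 to V-Gamut), which is exactly what the CST plugin handles for you in one step.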
  21. GH6. Who knows what spec it would have, but to get GH5 owners to upgrade would take some serious changes. Maybe 8K, internal RAW, or both!
  22. They may also use the third digit in the serial as an identifier of some kind, like if they had multiple production lines, or something like that. Of course, 100K cameras might be right too. It is a spectacular camera if it suits your needs. The problem with working out how many of these have been made or sold is that the typical buyer for something like this would basically be invisible. They are busy shooting real work, aren't visible on social media, or if they are then it's not to geek out about cameras, and when the footage ends up in something you'd never know. If every person with a RAW-shooting cinema camera bought two P4Ks then they'd sell a bunch of them and there would basically be no ripples to show it. This is the problem with the XC10, the people who wanted it as an A-cam found the fixed lens and high-ISO NR to be too restrictive. The people who use them for C-cameras or as BTS don't go online talking about it a lot, so it seems like they don't get used at all, but Cinematography Database YT channel kept seeing them in BTS pictures of big Hollywood productions and Canon said they sold more than they expected. The cheaper cameras that can create great images could be 10% of all shots in every movie and TV series and we'd never know. The GH5 can be made to look like an Alexa, the P4K should be able to match basically anything.
  23. Ah, that makes a lot of sense. I shot a pre-test-test today with a few of the lenses and it was interesting. I might be different to other people, but I find it difficult to evaluate lenses without having them in a controlled comparison. Other people seem to be able to watch random videos shot with different equipment / different lighting / different grading and kind of triangulate the attributes of lenses and even compare them. I can't seem to see past the dozen or so other variables, at least not enough to spend hundreds of dollars on a lens. So in that sense the direct comparison is useful for me, even if no-one else. I am trying to create a set of lenses for myself, which is why I'm testing the lenses I will definitely keep as well as the other candidates. I may end up choosing a lens I don't own, but then I'd have to work out how to evaluate it without comparative tests, so I'm not sure about that. My test today compared the 14/2.5 Panasonic, 17.5/0.95 Voigtlander, 37/2.8 Mir, and 58/2 Helios on a 0.7x SB. I was curious about the performance of the Mir (it's meant to be apochromatic), how the Voigt compared to a modern lens and a vintage lens, and what character the Mir had. The results were all over the place, with each lens winning outright in some aspect. Both the Voigt and the Mir were modern in some ways, vintage in others, and both had better performance than the 14mm at some things (kind of making them more modern than it), and the Helios is no slouch either, even with my cheap Chinese focal reducer. I think the complete test will be really interesting.
  24. EF mount has a considerable flange distance, and there may well be SLR lens systems with a smaller flange distance than it, which would mean that flawless adapting isn't possible. I'd be surprised if the largest mirrorless flange distance was as large as the smallest SLR flange distance - this is the beauty of mirrorless, basically every SLR lens system can be used. I have non-SB MFT adapters for Minolta MD, Pentax PK, and M42, and an M42 SB 0.7x adapter. The Konica AR to MFT adapter is still in transit, and I bought a Nikon to MFT adapter by mistake because the auction title said M42, but unfortunately all the Nikon lenses have the focus ring the wrong way so I won't have any use for it. I started reading about lenses and worked out that M42 was a common mount, so I got the adapter and then started looking at those lenses because I had worked out how to use them. If you're buying non-SB adapters they're really cheap, so it's an easy way to do it.
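As an aside, the maths behind that 0.7x SB adapter is simple: a focal reducer multiplies both the focal length and the f-number by its factor, which is why a 58/2 Helios on a 0.7x reducer behaves like a roughly 40.6mm f/1.4:

```python
# Focal reducer arithmetic: both focal length and f-number scale by
# the reducer's factor (0.7x here), concentrating the same light
# onto a smaller image circle.

def with_reducer(focal_mm, f_number, factor=0.7):
    return focal_mm * factor, f_number * factor

with_reducer(58, 2.0)    # ~ (40.6, 1.4) - the Helios 44 example
```

That extra stop of speed is the whole appeal of a speed booster on a smaller sensor like MFT.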