Posts posted by Towd

  1. Just wanted to point out that the OP is shooting on a GH5 and not a GH5s.  They have totally different sensors and very different noise patterns.

    I personally really like the GH5 noise.  At 1600 ISO and less it has very little color noise.  I actually really like the way it looks at 400 and 800 ISO... feels organic.

    Also, the video posted above is a perfect example of why I believe you should analyze footage using a view LUT.  V-Log maps black to 128 out of 0-1023, which I believe is higher than any other manufacturer.  You can take any camera, lift its blacks 10%, and see all kinds of garbage.  That said, the fixed pattern and color noise of the GH5s is not ideal.
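    For anyone who wants to sanity-check that 128-out-of-1023 figure, the arithmetic is trivial. A quick sketch (the 64-code legal-range black is the usual 10-bit Rec 709 convention, not something from Panasonic's docs):

```python
def code_value_fraction(code, bit_depth=10):
    """Fraction of the full code range a given code value sits at."""
    return code / (2**bit_depth - 1)

print(f"{code_value_fraction(128):.1%}")   # V-Log black per the post: ~12.5%
print(f"{code_value_fraction(64):.1%}")    # a typical 10-bit legal-range black: ~6.3%
```

    So V-Log parks black roughly twice as far up the signal as a conventional legal-range black, which is why it looks so lifted without a view LUT.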

  2. 4 hours ago, kye said:

    I used to try grading the LOG footage manually, without having anything convert between the colour spaces and I'd create awful looking images - using a technical LUT or Colour Space Transform to do the conversion really made a huge difference for me

    A big +1 on this for myself as well.  Some people seem to get nice results just pulling log curves until they look good, but I find that if I handle all the color transformations properly, I'm reducing the number of variables I'm dealing with and I have confidence that I'm already working in the proper color space.  Once in the proper color space, controls work more reliably, and it is also a big help if you are pulling material from a variety of sources.

    I have not tried the ACES workflow, but since I'm pretty much always delivering in Rec 709, I like to use that as my timeline colorspace.  So I just convert everything into that.

    One feature I also really like about Resolve is the ability to use the "grouping" functionality that opens up "Pre-Clip" and "Post-Clip" grading.  Then I group footage based on different cameras, and I can apply the Color Space Transform and any little adjustments I want to make per camera in the "Pre-Clip" for that group/camera.  That way when I get down to the shot by shot balancing and grading, I already have all my source material happily in roughly the same state and I can begin my main balance and grade with a clean node graph.  On a typical project, I may have material from a variety of Reds with different sensors, DSLRs, GoPros, Drones, and stock footage.  If I had to balance everything shot by shot by just pulling on curves, I think I'd go crazy.

    If you don't work in Resolve, you can do roughly the same thing by using adjustment layers in Premiere and just building them up.  Use the bottom adjustment layer to get all your footage into Rec 709 with any custom tweaks, then build your main grade above that. 

    Even if you are not working from a huge variety of source material, many projects will at least have Drone and DSLR footage to balance.  You can then develop your own LUTs for each camera you use, or just use the manufacturers LUTs to get you into a proper starting place.

    One final advantage if you can use the Color Space Transform instead of LUTs is that LUTs will clip your whites and blacks if you make adjustments pre-LUT and go outside the legal range.  The Color Space Transform node will hold onto your out of range color information if you plan to later bring it back further down the line.
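    If it helps, here's a toy numerical version of that difference. The gamma 2.4 transfer is just a stand-in for whatever your display transform is, and the 1D table is a stand-in for a real 3D LUT, but the clamping behavior is the point:

```python
import numpy as np

transfer = lambda x: np.power(x, 1 / 2.4)      # stand-in display transform

lut = transfer(np.linspace(0.0, 1.0, 1024))    # the same transform baked into a table

def apply_lut(x):
    idx = np.clip(x, 0.0, 1.0) * (len(lut) - 1)  # inputs past 1.0 are clamped away
    return lut[np.round(idx).astype(int)]

hot = np.array([1.5])                # a highlight pushed out of range pre-transform
print(apply_lut(hot))                # pinned to 1.0 -- the detail is gone
print(transfer(hot))                 # ~1.18 -- out-of-range data survives downstream
```

    The functional transform can hand values above 1.0 to the next node, so a later highlight roll-off can still pull them back; the baked table can't.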

  3. I watched the video a few times on my laptop, but the noise didn't look unusual.  I think you're just seeing the noise in the lifted shadows.  Log is not a normal viewing format; that is why view LUTs are typically created for monitoring on set or in camera.  It is supposed to be adjusted to your delivery format (Rec 709, Rec 2020, DCI-P3, film print) before final viewing.

    It's sometimes described as a "digital negative" in that, like a film negative, it holds a wider dynamic range than a final film print.  But when you view it without a display LUT, you are seeing all the gory details that shouldn't be present, or at least should be very suppressed, in your final grade.

  4. If you ever end up with some of that in a shot you want to use, you can suppress it pretty well by using a soft mask around the problem area, keying the purple, and desaturating it.  Best to just avoid it if possible, or use a better lens if you have one when shooting high contrast.

  5. I think the most distracting element in the sample footage is the purple fringing and pulsing focus.

    Log color profiles have very lifted shadows, so you will see a lot of noise in them when looking at the footage without a view LUT or grade.  Normally, you would apply a LUT or an S-curve to push your shadows to near black and roll off the highlights.  Once the shadow area is compressed, you won't see as much noise, if any.  If you are going for a flat look, you will probably want to run some kind of denoise on the shadows to clean them up.
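    To put a rough number on how much an S-curve suppresses shadow noise, here's a little sketch using smoothstep as a stand-in for a real tone curve; the shadow level and noise amplitude are made-up values:

```python
import numpy as np

def s_curve(x):
    """Smoothstep: a stand-in for a real tone curve's shadow compression."""
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)

shadow, noise = 0.15, 0.02            # invented lifted-shadow level and noise excursion
raw_span = 2 * noise                  # noise span straight off the log curve
graded_span = s_curve(shadow + noise) - s_curve(shadow - noise)
print(f"noise span before: {raw_span:.3f}, after: {graded_span:.3f}")
```

    Anywhere the curve's slope is below 1 (i.e. the compressed toe), the noise excursions shrink along with the shadows, which is why the grade hides so much of it.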

    The advantage of log is that you have the option to compress shadows and highlights and have more flexibility in how much you want to lift them.  If you shoot with a standard profile, it's much harder to adjust shadow detail... if there is any at all.

  6. 12 minutes ago, Castorp said:

    I didn’t find that slanted lens comparison of Z6 vs A7iii much better than the rest of the YouTube stuff.

    I’m not sure the Nikon is over-exposing. Isn’t it more that Sony (and Fujifilm) are underexposed? They had to raise the exposure in post of all the Sony images since they weren’t correctly exposed with the lightmeter reading. 

    Looks like they’re shooting the Nikon with its default sharpening (which is too high).

    The Z 6 colour and white balance is to my eyes far nicer than the Sony.

    PS And Canon too for that matter. I never liked Canon colour - much too warm and glossy-feeling.

    The thing about the Slanted Lens comparison for me was just that, at regular exposures, it was obvious the Nikon was clipping whites earlier than the Sony, and the general consensus seems to be that they are using a very similar if not the same sensor.  Maybe the same-sensor thing is wrong though.

    I do think that clipping or overexposing highlights does give a video-ish look.  You may be right though that the Fuji and Sony cameras underexpose on purpose to help protect them.  

    It is interesting in the test how much detail in the shadows could be recovered by the Nikon.  That's been my general experience with Nikon.  I *think* I also remember some extra highlight superwhite detail that could be pulled out of the Nikon video image once it was brought in to grade, so that may be what accounts for the clipping. But I could be totally misremembering.

    I agree Canon footage renders skin too warm and pinkish, but people seem to like that and they seem to give pretty good results out of the box without adjustments.

  7. Ahhhh, "Motion Cadence".  I love it when that old chestnut gets pulled out regarding a cinematic image.  I've spent many days in my career tracking and matchmoving a wide variety of camera footage from scanned film, to digital cinema cameras, to cheap DSLR footage.  So I find the whole motion cadence thing fascinating since I sometimes spend hours to days staring at one shot trying to reverse engineer the movement of a camera so it can be layered with CGI.

    So leaving out subtle magical qualities visible to a select subset of humans who have superior visual perception, or describing it like the tasting of a fine wine, I can only think of a few possible reasons for perceptible motion cadence.  I'll lay them out here, but I'm genuinely curious as to any other factors that may contribute to it, because "motion cadence" in general plays hell with computer generated images that typically have zero motion cadence.

    #1  ROLLING SHUTTER.  The bane of VFX.  Hate it!  Hate it!  Hate it!  For me personally, this must be 90% of what people refer to as motion cadence.  It plays hell with the registration of anything you are trying to add into a moving image.  Pictures and billboards have to be skewed, and fully rendered images slip in the frame depending on whether you are pinning them to the top, middle, or bottom of the frame.  I work extensively with a software package called SynthEyes that tries to adjust for this, but it can never be fully corrected.  For pinning 2D objects into a shot, Mocha in After Effects offers a skew parameter that will slant the tracking solution to help compensate.  This helps marginally.
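    For a feel of the magnitude, here's the back-of-envelope skew calculation I'd use; the pan speed and readout time below are invented numbers, not measurements from any particular camera:

```python
def rolling_shutter_skew_px(pan_px_per_sec, readout_ms):
    """Horizontal slip between the top and bottom scanline of one frame."""
    return pan_px_per_sec * (readout_ms / 1000.0)

# A brisk pan crossing 1000 px/s on a hypothetical 20 ms readout sensor:
print(rolling_shutter_skew_px(1000, 20))   # 20 px of skew from top to bottom
```

    Twenty pixels of slant on every vertical edge is exactly the kind of thing a tracker has to fight, and why a pinned billboard slips depending on where in the frame you pin it.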

    #2 Codec Encoding issues.  I have to think this contributes minimally since I think extreme encoding errors would show up more as noise.  I've read theories about how long GOP can contribute to this versus All-I, but I've never really noticed it bending or changing an image in a way I could detect.  I'd think it would be more influenced by rolling shutter however, so I can only think it would contribute to like 5-10% of the motion cadence in an image.  Would love to know if I'm wrong here and if it's a major factor.  More than just casual observation, anything technical I could read regarding this would be welcome.

    #3 Variable, inconsistent frame recording.  This is what I think most people think they are referring to when they bring up the motion cadence of a camera.  But outside of a camera doing something really bizarro like recording at 30 fps then dropping frames to encode at 24, I can't believe that cameras are that variable in their recording rates.  I may be totally wrong, but do people believe that cameras record frames a few milliseconds faster or slower from frame to frame?  Does this really happen in DSLR and mirrorless cameras?  I find it hard to believe.  I could see a camera waiting on a buffer to clear before scanning the sensor for the next frame, but I can't believe it's all that variable.  If it really is that common, wouldn't it be fairly trivial to measure and test?  At the very least, some kind of millisecond timer could be displayed on a 144 Hz or higher monitor and then recorded at 24p to see if the timer interval varies appreciably.
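    The measurement itself would be a few lines of code once you've read the timer value off each recorded frame. A sketch with invented timestamps:

```python
import statistics

def interval_jitter_ms(frame_timestamps_ms):
    """Mean and spread of frame-to-frame intervals read off the filmed timer."""
    intervals = [b - a for a, b in zip(frame_timestamps_ms, frame_timestamps_ms[1:])]
    return statistics.mean(intervals), statistics.pstdev(intervals)

# Invented readings from a hypothetical perfectly even 24p recording
# (real frames should land ~41.7 ms apart):
mean, jitter = interval_jitter_ms([0.0, 41.7, 83.4, 125.1, 166.8])
print(f"mean interval {mean:.1f} ms, jitter {jitter:.2f} ms")
```

    If a camera really did vary its frame intervals, the standard deviation here would be well above zero; I'd love to see someone run this against real footage.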

    #4 Incorrect shutter angle.  This could be from user error.  I've seen enough of it on Youtube to know it's common.  I'd assume it's also possible that a camera would read the sensor at a non-constant rate for some reason, but I'd think that would show up in rolling shutter anomalies as well.  Dunno about this one, but think it may be more of a stretch.  Should also be visible on a frame level by looking for tearing, warping, or some kind of smearing on the frame.  So, I doubt this happens much, but it should be measurable like rolling shutter with a grid or chart and detectable by matchmoving software the way rolling shutter is.

    That's generally all I can think of, and without any kind of proof, I'm calling bullshit on #3, but I'd be happy to be proven wrong.  I'd be genuinely intrigued to find that some cameras vary their frame recording intervals by any amount visible to the human eye.

    If anyone has any real insight into this, I'd love to read more about it because it directly affects something I deal with frequently.

  8. Regarding a video-ish look to the Z6 footage, I found the following video interesting.  Nikon seems to overexpose by about a stop or so compared to Sony.  I remember when I used to shoot with a D5200 a few years back, I got in the habit of underexposing by a stop or two and pulling up my images in post.  Considering that clipped or blown-out highlights are one of the major contributors to a "video look", my guess is that Nikon's default exposure levels may be to blame for this opinion among some shooters who shoot mostly full auto.

    Luckily the easy solution is to underexpose and balance the exposure in post.  I remember my old D5200 footage graded really well as long as I protected the highlights.  It produced a really nice, thick, cinematic image with some minor corrections in post.  That thing was a gem of a camera for $500.  But yeah, if you shot standard profile and just auto-exposed it, the result was pretty crap.  Nikon even let you install custom exposure curves so you could tweak the rolloff for highlights and shadows.  I remember we intercut it with a $50,000 cinema camera on a few projects and nobody was the wiser.

    While I know there are a lot of really talented filmmakers who experiment, measure, and adjust their workflow to wring every last drop out of the images their cameras make, I think there are a lot of folks who shoot "full auto", drop their video into a timeline, and become disappointed by the result.  Hell, one YouTuber's whole approach is that he's just an average guy who shoots everything at pretty much default settings, and then he does camera reviews.  Nothing wrong with that for the target market, I guess.  Here you can see him blowing out the sky and background in his video while the exposure auto-adjusts.  Nikon needs to work on getting better default settings into their cameras to help support this crowd.  I think it's a weak spot for them from the video reviews I've been seeing, because I know with even just a little effort they can create a superb image.

    Canon cameras also produce great images, but one of Canon's strengths, IMO, is that their default out of the camera results are really exceptional.  Whoever does the final tweaking for their cameras seems to be really good at ensuring the default settings come out of the camera looking really nice.

  9. Yeah, to be honest, now that I think on it more, just because they are hitting 100 IRE on the waveform, who knows what's going on in the camera's color profile.  There could be superwhite data or some other special sauce that can be recovered above 100 IRE.

    7 minutes ago, thebrothersthre3 said:

    If you look at the brightest portion of the chart on all four cameras none of them really match. That said I don't know really how it works. 

    Yeah, I'd expect them to at least try to match the image output a little better.  Still curious....

  10. 12 hours ago, ntblowz said:

    C5D test the DR on S1

    with HLG it got 12.2 stops, Cine-D got 10.8 stops, so hopefully with full vlog it get 14dr

     

    https://***URL not allowed***/panasonic-lumix-s1-review/

     

     

    [Image: DR chart, Panasonic S1, HEVC HLG, ISO 400]

    Maybe I'm a noob, but it doesn't look to me like the leftmost square for the S1 was exposed as brightly as in all the other test examples.  If exposure had been pushed another stop or more, the dark end and noise floor may have fared better.

    Did they not want it to beat the Ursa?  Conspiracy theorists want to know...   But seriously, I'm curious if anyone has experience with these test charts.

    [EDIT]  I checked their waveform plots for the S1 and Ursa, and they look similar for the left box, so it may just be a grading anomaly, but I'm still suspicious....

  11. 3 hours ago, DBounce said:

    You do know The Hurt Locker was shot on film right?

    Of course.  It was also shot on 16mm and not full frame.  That was kind of my point.

    Everything is a trade-off.  Yes, there are varying levels of rolling shutter on different cameras.  Yes, you can get away with some things with moderate rolling shutter.  Yes, you can drop your resolution and improve rolling shutter performance.

    All the mental gymnastics aside, I'm just pointing out that if you are shooting on a camera with large amounts of rolling shutter, you are going to need to modify your shooting style, limit a subject's motion, or possibly make image quality sacrifices.  You can also just stick your head in the sand and pretend rolling shutter is not a crappy artifact because you've spotted it in something you saw on Netflix. 

    I'm not trying to discount the controlled camera motion crowd, the neighborhood walkabout cinematographer, the sleeping cat aficionado, or the YouTube vlogger (although they could really benefit from low rolling shutter).  There are a lot of valid styles and subjects.  I just think that ignoring what is a real issue in many scenarios, and a telltale sign of a cheap camera, is a myopic viewpoint.

  12. 15 hours ago, thephoenix said:

    so i am thinking two options:

    - i shoot on a white background but it will not be large enough and will have to expand it on fusion. has anyone done that before ?

    - i shoot on green background and do chromakey to add a white background i have in my photo files.

    I think there are possible pitfalls to both options that can be solved, but it depends on how you prefer approaching it.

    By shooting on a white background and then expanding it in post, you'll have the benefit of not having to pull a key on your subject.  The trick will be to match your extended background to the foreground element you shot.  I'd personally do this with a soft "garbage mask".  You'll want to frame your foreground subject with enough room so that they obviously don't get cut off by your framing, but you'll also want to leave yourself some room around them for this soft matte.  This will fix any crease between the foreground and background you mentioned above.

    You should be able to do this in just about any editing package and it doesn't require Fusion.  I've done soft masks in Premiere and Resolve successfully.  Just use a lot of edge feathering on the mask and place the foreground on a layer above the background.  Cut the mask around your subject and use at least a few hundred pixels of feathering-- maybe more depending on the resolution.  The tricky part depending on your comfort level with grading will be to bring your background element into a similar range to your foreground so that the transition is nearly invisible.  How close your foreground and expanded background are to each other exposure-wise or visually will also be a factor.  If you are just blowing things out to all white, this should be pretty simple though.  You could then potentially just bring in a bit of a vignette on top after matching the foreground and background.
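    If it helps to see the math, here's a 1D toy version of that soft matte and over-composite; all the widths and values below are arbitrary:

```python
import numpy as np

# One scanline, 400 px wide: foreground plate on the left, extended
# background on the right, with a 150 px feather zone between them.
width, feather_start, feather_end = 400, 150, 300
x = np.arange(width, dtype=float)
matte = np.clip((feather_end - x) / (feather_end - feather_start), 0.0, 1.0)

fg = np.full(width, 0.92)   # shot white background, slightly off pure white
bg = np.full(width, 0.95)   # extended background, graded close to the fg
comp = matte * fg + (1.0 - matte) * bg   # plain "over" with the soft matte

print(comp[0], comp[-1], comp[225])      # ~0.92, ~0.95, ~0.935 mid-feather
```

    The closer you grade the two plates to each other, the less visible that ramp is; with a wide enough feather the seam effectively disappears.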

    [edit]:  I should point out you'll need to have enough of your white background surrounding the subject for this soft transition area of the matte.  If the white background is too tight around your actor, you won't have room for this soft mask to transition to the background smoothly.

    The second option of pulling a key can also yield really nice results, but it will depend a lot on the camera you use, the quality of your greenscreen and how uniformly it is lit, and possibly your comfort pulling keys.  However, with this solution you can just drop in your new background and call it done.  No real matching between foreground and background exposure.  The only downside is that this solution will always have some subtle edge degradation where you pulled the key.  But it can be practically invisible.

    In both cases, just be sure you give yourself enough framing on your subject for the final composition, so you don't suddenly realize you don't have legs on your actor when you pull the camera out in the comp.

  13. Yeah, m43 definitely wins the rolling shutter game.  Different people have different needs for sure, but I'm just pointing out that it's a seriously egregious problem for the way a lot of directors shoot action, and can be very limiting stylistically.   Though it is true, a good solution for almost all current cameras would be to just shoot in 1080p.

  14. Have fun shooting something in this style on a camera with a lot of rolling shutter:

    In my opinion it is one of the most creatively limiting artifacts in low budget cameras.  Just about everything else can be worked around, from limited dynamic range to depth of field to iffy "color science".  But when something forces you to shoot in a certain style, it's very limiting creatively.

    It also can't really be fixed in post, and it's hell on VFX as well.  There is a reason pro cinema cameras all have extremely low rolling shutter.

  15. @DBounce I agree, it would be nice for them to keep the costs under control for these smaller sensor offerings.  Although if the GH6 has better IBIS and some form of internal raw recording, I'd probably spring upwards of $3k for one.

    Of all the cameras released in the last six months, the Fuji XT3 looked like the most interesting to me.  But now that I've become a huge fan of IBIS on these small cameras, it didn't tick all the boxes for me in a way that made me want to try it.  I'm very curious to see what they offer in an X-H2 model though.   However, by the time it comes out, the GH6 may be just around the corner.

    Also, based on your experience and some other stuff I've read, I'm a little wary of Fuji's build quality.  Nikon, Canon, and Panasonic all make bomb proof cameras.

     

  16. 3 minutes ago, DBounce said:

    @Towd I still believe the end days of M43 are drawing nearer. Look at Olympus's latest entry. Also the GH line will seemingly be taking a back seat to the newer S-line. At least it should given that the new S-line is Panasonics new flagship. 

    Well, Olympus seems to be focusing on stills.  But the S1 doesn't seem to offer up much competition to Panasonic's two year old GH5 in regard to video.  So, saying the GH5 is taking a back seat feels premature.  Especially when Panasonic has said the S1 would not be focused on video, and now we've seen the specs that confirm that.

    I understand a lot of people love the full frame look and want to use it.  Personally, I've shot some projects with full frame sensors back in the 5D mII days, and found it a huge PITA in post.  Maybe as autofocus improves it'll find some use for me again.

    I personally just find so many downsides to it right now-- from slow readout speeds, to bad rolling shutter, to crop-sensor recording, to difficulty nailing focus.  Just my observations and experience.

  17. 22 minutes ago, KC Kelly said:

    I'm waiting to see if the S1 gets the full V-log.  Even if you have to pay extra for it.  It would also be nice to have better timecode than the GH-5s.  I don't have a problem with keeping a TC sync on it all the time.  S35 compatibility mode would also be great.

    My understanding was that V-Log L is the same mapping as V-Log on the Varicam, but with less dynamic range.  Panasonic's GH5 V-Log LUT seems to be interchangeable with the regular Varicam LUT.

    Are people just hoping for more dynamic range out of the V-log mapping on the S1?  My impression has always been that V-log L was just marketing gimmickry to help separate the Varicam from the GH5 and emphasize the lower dynamic range on Panasonic's prosumer camera.
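    For reference, here's the V-Log encoding curve as I remember it from Panasonic's V-Log/V-Gamut reference manual; double-check the constants against the manual before relying on them:

```python
import math

# Constants recalled from the V-Log/V-Gamut reference manual -- verify first.
B, C, D, CUT = 0.00873, 0.241514, 0.598206, 0.01

def vlog_encode(x):
    """Linear scene reflectance -> normalized V-Log signal (0..1)."""
    if x < CUT:
        return 5.6 * x + 0.125        # linear toe
    return C * math.log10(x + B) + D  # log segment

# V-Log L would be this same curve, just clipping at a lower maximum input.
print(round(vlog_encode(0.0), 3))    # 0.125 -> code 128 in 10-bit: the lifted black
print(round(vlog_encode(0.18), 3))   # mid grey lands around 0.423
```

    That black level at 0.125 is the same 128-out-of-1024 figure that makes ungraded V-Log shadows look so lifted, and it's one curve shared across the cameras, which supports the same-mapping-less-range reading.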

  18. So now that all the major manufacturers have released their full frame offerings, is it safe to assume that the calls for the early demise of Micro 4/3 have been greatly exaggerated?

    The GH5 was my first Panasonic camera, mostly due to the 10 bit internal recording, but the class leading IBIS and incredible versatility won me over.  I see some complaints about it, but from my experience the warpy edge thing is much more controlled than on any of the full frame offerings.  Sony IBIS and the GH5 are not even comparable.

    I almost wonder if the people suffering from bad IBIS on the GH5 were using adapted lenses without properly setting the focal length before shooting.  The only time it ever got me was when I accidentally forgot to disable it on a tripod shot in heavy wind, and the sensor would compensate for wind gusts.  It's not perfect, but I've found it, dare I say, "game changing" with regard to grabbing shots that wouldn't even be possible without a gimbal or heavy rig.

  19. 3 hours ago, kye said:

    Do you aspire to the "more is more" style of Michael Bay, or the wealth of Michael Bay?

    Either one is fine - no judgements from me - just curious if the style is the goal, or a means to an end.

    Haha, may we all make Michael Bay money some day in our career.

    But I posted the Michael Bay example maybe to be a little provocative to the art house crowd.  Saying you hate Bay is a bit like saying you hate Trump-- it's always a safe response.  (Please nobody attack me.  I promise I didn't vote for the guy.  I'm just saying...)

    Seriously though, I find that I look for "Bayhem" when I'm editing a project.  Is there a shot with more parallax in the foreground?  Or a shot with more motion?  Maybe a whip pan or some other movement.  Those are the shots I seem to gravitate to, so I think that the Bayhem style is something that I'm looking for.  I certainly can't recreate the complexity of some of his shots.  Just that panning background on a telephoto lens with a circular dolly track that he does is technically very hard to recreate on a rushed shoot.  (I've tried with limited success.)  But, I think when even Werner Herzog is acknowledging the style, there is something there worth studying.

  20. 3 hours ago, kye said:

    In terms of being too sharp, I'm curing that with lenses - like many people do.  Although if you're delivering in 1080 I'm not sure how much that really matters, I haven't done much testing to see, and for me it doesn't matter that much for my projects.

    Just as an FYI, I used to shoot Nikon for stills, so my lens collection is primarily old Nikon glass.  Even when delivering at 1080p, I've found I need to defocus my GH5 footage to match the softness of footage coming out of Red cameras.  Once that sharpness creeps into the image, it seems to hang on even after the downrez.

    I think I read a while back that some users like Sage only shoot in 1080p with their GH5 for the more cinematic image.  Being the maximalist that I am, I can't seem to let myself do that when 4k and 5k are available formats.  But yeah, GH5 footage is REALLY sharp compared to other cameras I've matched it to.

  21. I'm always pushing to cut one more frame from a shot while still telling the story, or adding something else into a frame to heighten visual interest.  Maybe I fear the ability of the modern audience to click away, check their email, or start browsing the web.  I find whenever I'm watching a film and a character stares wistfully off into the sunset, I'm going to the timebar to check how much more of the film is left.  I really don't want that to happen on any of my work, so I'm constantly fighting it.

    I don't know if that is a style, but it's something I'm constantly aware of while cutting a project.

    Love it or hate it, this guy seems to have figured out a formula that's made him quite wealthy.  Maybe that is what I aspire to...

    BTW, I love how the narrator relates the style with layers of condescension.  Props to him for recognizing the style and breaking it down, but I doubt Bay is too stupid to really know what he is doing.

  22. I really like shooting with the 5k "open gate" format on the GH5.  I find it to be practically indistinguishable to the eye from the 4k 422 mode, even when I zoom in and pixel peep a shot.  You can then drop it on a 4k timeline as a center extraction, which allows some panning and stabilization possibilities.  This also allows you to deliver in C4k or UHD if you are just out collecting B-roll that may be used in a variety of projects.  All that said, I find that the vast majority of the time I'm still delivering 1080p as a final video format.  But the 5k open gate feels the most future proof down the line.  It'll probably upres to 8k pretty well if we're ever delivering that format in 10 years.  GH5 footage is so sharp-- really too sharp!  I've matched it with 5k and 6k Red footage in projects, and find I always have to use a small amount of the defocus filter on it in post so it is not so crisp.  This is with the GH5's sharpness set to the lowest settings.

    As far as extreme grading, I typically always shoot V-LOG and run my footage through Neat Video first so the 420 vs 422 has not made much of a difference.  I got into this habit while salvaging 5D mark II footage as it really helps the gradability of 8-bit.

    One thing you will notice is that you lose the Ex Tele Converter zoom functionality in 5k, but you can still punch in in post on the image, so the versatility is still there.  Maybe if I was shooting something that I knew was going to be shot entirely with the Ex Tele Converter, I'd just shoot in 4k 422 for the extra chroma sampling, but that's never come up for me.

    The one place where I do find the 4k 422 better is in green screen extraction.  The 422 pulls a finer edge.  Whereas any time I've had to pull a key on 420 footage, I'm left with a 1-pixel outline on my initial matte edge from the chroma subsampling, even after running it through Neat Video.  You can erode the mask to fix the edge, but then you lose fine hair detail, etc.  I have tested the two modes and both are quite usable compared to some old DSLR 420 8-bit footage, but the 422 is marginally nicer.  Obviously an external recorder for green screen footage would be ideal, but in a pinch I've found the 4k 422 recorded internally to give really nice results-- especially for a 2k delivery.  And I don't have or use an external recorder with my GH5.
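    A toy model of why that happens: 4:2:0 halves chroma vertically as well as horizontally, so a hard chroma edge falling between two rows of the same sample pair gets averaged into exactly that kind of one-pixel fringe. Invented 4x4 chroma plane, nearest-neighbour restore:

```python
import numpy as np

# A 4x4 chroma plane with a hard edge between row 0 and row 1,
# e.g. green screen meeting a subject.
chroma = np.zeros((4, 4))
chroma[1:, :] = 1.0

def subsample_rows(plane):
    """The extra vertical chroma halving that 4:2:0 adds over 4:2:2."""
    pooled = (plane[0::2] + plane[1::2]) / 2.0   # average row pairs
    return np.repeat(pooled, 2, axis=0)          # nearest-neighbour restore

restored_420 = subsample_rows(chroma)
print(chroma[0, 0], chroma[1, 0])                # 0.0 1.0 -- clean edge (4:2:2 keeps this)
print(restored_420[0, 0], restored_420[1, 0])    # 0.5 0.5 -- blended fringe rows (4:2:0)
```

    The 0.5 rows are the blended border the keyer has to eat into, which is why eroding the matte costs you hair detail.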

    Obviously, if you are doing an elaborate green screen shoot, use a pro camera with 444 uncompressed color output, but I sometimes just need to shoot a quick element for a composite and the GH5 in 4k 422 works just fine. 

    For my needs GH5 is just a great little workhorse to get footage to support a larger production or to generate high quality footage for any quick small budget or personal project.  I love that I can just throw the camera in a bag without external monitors, gimbals, or other crap and shoot high quality footage from such a tiny form factor.  There really is nothing else like it.

  23.  

    9 hours ago, TurboRat said:

    Yeah the Full Frame cameras are better in Low Light. What do you think are the advantages of GH5s aside from the flip screen and the small lenses?

    A7III has ibis though (not as good as GH5) so I think that's a better choice

    5 hours ago, kye said:

    10-bit is a pretty big advantage of the GH5 over the Sonys.

    The GH5 and GH5s typically give less heat and more reliable operation than the Sonys.  There's also longer battery life, less rolling shutter, and more high speed frame rates combined with better data rates and codecs.
