Towd

Members
  • Posts

    117
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Towd reacted to webrunner5 in How Long Will I Be Waiting?   
    Buy a GH5 and put a speedbooster on it. Problem solved and for 1500 bucks to boot.
  2. Like
    Towd got a reaction from heart0less in The Resolve / Colour Grading resource thread   
    You typically want to put any finishing/sharpening at the very end of your processing, just before rendering it out.  How strong you make the sharpening and the radius you use will depend on the final look you are going for and how soft your original footage is.  Depending on the sharpening filter you are using, there is typically a strength and a radius value. 
    As part of your finishing process, you may also want to add grain into the image.  Opinions vary on whether you add the grain before or after the sharpening, but I find that if I'm dealing with an image that needs a lot of sharpening, I'll add the grain last so I'm not heavily sharpening my grain.  Conversely, if I'm doing a composite and matching grain between elements, the grain will come earlier in the pipeline.  But in that case, any sharpening I'm adding back in is very minimal, just restoring detail lost to encoding.
    If you want to dial in specific values for minimal sharpening to make up for re-encoding your original source to whatever your final format is, you can run a "wedge": try a variety of values, compare the detail at each sharpness setting of your output video against your raw source, and pick the one that matches most closely.  Depending on what your source material is and what your delivery format is, this can vary wildly.  In cases where I'm working from a high-res (4K-6K) source and delivering at 2K, I may not use sharpening at all.
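As a generic sketch of a strength/radius sharpen of the kind described above, here is an unsharp mask "wedged" over a few strength values. This is an illustration only, not any particular grading tool's implementation; a box blur stands in for whatever blur kernel a real filter uses:

```python
import numpy as np

def unsharp_mask(img, strength=0.5, radius=1):
    """Sharpen by adding back the difference between the image and a blurred copy."""
    k = 2 * radius + 1                      # radius controls the blur footprint
    kernel = np.ones(k) / k                 # box blur as a stand-in for a gaussian
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return np.clip(img + strength * (img - blurred), 0.0, 1.0)

# A "wedge" of strengths to render out and compare against the raw source.
frame = np.random.rand(64, 64)
wedge = {s: unsharp_mask(frame, strength=s) for s in (0.25, 0.5, 1.0, 2.0)}
```

In practice you would render each wedge step through your delivery codec and pick the strength whose decoded detail best matches the source.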
    Finally, opinions also vary about where to put any degraining, but I prefer to put it near or at the very beginning of my color pipeline.  If your degrainer is expecting something in rec 709 though, do your color transform into video space and then run your degrainer.  A little testing with your camera's source footage will go a long way here as well.  I find that especially when I'm working with 8-bit source material, degraining can add some pseudo bit depth to the images before I begin pulling on the color. 
    So, I'd typically do:
    1. Raw adjustments to set color balance and exposure (if working with raw footage)
    2. Degrain/NR
    3. Color
    4. Sharpen/grain
  3. Haha
    Towd reacted to BTM_Pix in Cinemartin Fran 8K Global Shutter Camera   
    Z-Cam: "We have produced the most mundane and uninspiring footage that any company could ever possibly publish for promoting their cinema camera"
    Cinemartin: "Hold my beer....."
  4. Haha
    Towd reacted to Vladimir in Meet the GimbalGun™   
    the penis camera looks like a serious product now
  5. Like
    Towd reacted to pauli in Lenses   
    Panasonic Leica DG VARIO-SUMMILUX 10-25mm f/1.7 ASPH
    https://www.ephotozine.com/article/panasonic-leica-dg-vario-summilux-10-25mm-f-1-7-asph-hands-on-33371
    - manual clutch on focus ring
    - clickless aperture ring 
    - ca. 13 cm long, 650 g
  6. Like
    Towd reacted to kye in I need a hug   
    Totally agree, and to take this one step further, I've discovered that in life people tend to project an image of who they want to be rather than who they are.
    People who project confidence are typically very insecure, people who project wealth tend to be spending all their money on showing off and are broke, those who are showing off how happy they are (eg, instagram) are typically miserable.  I saw a documentary on what life is like for most billionaires and it's pretty awful actually - they can't trust anyone because people are after their money, they have these huge houses but are constantly renovating them to try and one-up their other billionaire friends, and spend most of their time driving fancy cars and drinking expensive champagne wishing they had some real friends.
    So @kaylee, when you're looking at friends who are married and having kids and feeling that you're missing something, your friends are being torn apart by trying to have careers as well as families, pay their mortgages, not strangle their kids after the 4th sleepless night with the baby crying and the toddler drawing all over the good couch with their favourite lipstick that isn't made anymore, and wishing they could just live in a small town with their dog and get to have a bit of glitz and glam of the film world.
    I'm out here in the suburbs surrounded by lots of broken people who are now single parents because their relationships failed, and whose kids' phones get hacked and their nudes posted to other kids, so they're getting bullied and coming home in tears and nobody knows WTF to do.
    Never compare your insides with someone else's outsides.
  7. Like
    Towd reacted to thebrothersthre3 in Panasonic GH5 - all is revealed!   
    Panasonic has a long reputation now of being a camera for video, since the GH2 days. Fuji kind of just jumped into the game with the XT3. It just doesn't have the following Canon, Sony and Panasonic have yet.

    You can't go wrong with either camera, man. Pretty much the Panasonic will win in terms of having no record limit and IBIS. Fuji wins in terms of ISO performance, autofocus, color, resolution, and higher frame rates (better quality higher frame rates). It also comes down to whether you have a preference towards M43 or APS-C (maybe you have a lot of M43 lenses or simply like the versatility of them).
  8. Like
    Towd got a reaction from thebrothersthre3 in The peer-to-peer colour grading thread   
    I once worked with a really good compositor at a large VFX house who admitted to me that he was totally colorblind.  His trick was to match everything by reading the code values from his color picker tool, matching the parts of his composite purely from the values he sampled.
    I've always remembered that when I feel I can't trust my eyes, or something is not working for me.  You can color grade just by making sure everything is neutral and balanced.  Later, as you become more comfortable with the process and gain more experience you can start creating looks or an affected grade.
    Generally to start you want to get your white balance correct.  Find something in your shot that you know should be neutral or white.  A wall, a t-shirt, a piece of paper, or anything else that should be white or gray.  After that, check your blacks and make sure they are neutral, then double check your whites.  Finally, check your skin tones and make sure they are correct.  You can do this by using the vectorscope and just getting them on the skin tone line.  Somewhere in this process you'll want to set your exposure.  I generally just make a rough exposure adjustment at the beginning so I can see everything, then dial it in once my balance is set.
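The colorblind compositor's code-value trick boils down to a check you can state in a few lines: a sampled patch reads neutral when its average R, G and B values agree. A minimal sketch, with patch values invented for illustration:

```python
import numpy as np

def is_neutral(patch, tolerance=0.02):
    """A patch reads neutral when its mean R, G and B code values match within tolerance."""
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    return abs(r - g) < tolerance and abs(g - b) < tolerance and abs(r - b) < tolerance

gray_card = np.full((8, 8, 3), [0.50, 0.50, 0.50])   # should pass the check
warm_wall = np.full((8, 8, 3), [0.55, 0.50, 0.47])   # warm cast: red runs high
```

This is exactly what reading the picker values does by eye: you don't need to see the cast, only that the numbers disagree.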
    One thing I do a lot when studying how a film I like is graded is to take screen captures from a Netflix stream or other source and pull them into a project to compare the color values.  Then you'll have a roadmap for what you are trying to match.  
  9. Like
    Towd got a reaction from webrunner5 in The peer-to-peer colour grading thread   
  10. Like
    Towd reacted to kye in HLG explained   
    Great video talking about HLG
    Talks about HLG acquisition, HLG delivery, backwards compatibility, grading, 709 conversions, exposure, HDR10, the HLG curve, cine gammas and commercial opportunities.
  11. Thanks
    Towd reacted to Django in Blackmagic Design Announces New URSA Mini Pro G2   
    Very impressive RS numbers on the G2, thanks to roughly doubled readout speed:
    4.6k - 7.59ms
    4k crop - 6.32ms
    2k crop - 3.16ms
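To put those readout times in perspective: the skew a rolling shutter adds is roughly pan speed times readout time. A back-of-the-envelope sketch, where the pan speed is an assumed example figure:

```python
# The bottom of the frame is exposed readout_ms later than the top, so during
# a pan it is displaced horizontally by pan_speed * readout_time.
def skew_pixels(readout_ms, pan_px_per_s):
    return pan_px_per_s * (readout_ms / 1000.0)

g2_46k = skew_pixels(7.59, 2000)     # G2 4.6K readout during a fast 2000 px/s pan
slow_cam = skew_pixels(30.0, 2000)   # a ~30 ms readout, common on slower sensors
```

At the same pan speed, the G2's 4.6K readout smears about a quarter as far top-to-bottom as a ~30 ms readout would.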
  12. Haha
    Towd reacted to kye in The peer-to-peer colour grading thread   
    I've had two pivotal moments in my colour grading.
    The first was a Lightroom video where a guy edited a photo of a little Italian town with the sunset in the background.  It looked OK, but nothing special.  He did the usual curves and saturation and a graduated filter to bring the sunset down, but what really brought it to life and made it look magical was that he put very yellow power windows on all the little ornate street lights, faked a candle on all the tables with people sitting at them, and did other local adjustments.  In the end it looked like the best romantic-little-town-in-Italy fantasy anyone has ever had.  I'd never seen anyone use 50-100 local adjustments in LR before, so it was a new approach for me.
    The second experience was when I was looking at what editing package to go for, and as I was evaluating them I found out that Resolve could do the same local adjustments but could track them through movement.  Basically a light-bulb went off and I realised that Resolve was LR for video.  After seeing that, PP and FCPX didn't stand a chance, and I realised that the way you darken and lighten things in photo editing to control the composition and the viewer's attention could also be applied to video.
    If only he was shining a torch up from under his chin...   ".........and then they heard a scraping sound coming from the empty cupboard!!"
  13. Like
    Towd reacted to JordanWright in Lenses   
    The Meike 25mm Cine in action; there's a full set coming soon. Some seem to think they are Veydra clones. 
    https://www.reddit.com/r/Filmmakers/comments/atv62q/meike_25mm_is_literally_the_same_as_veydra_25mm/
     
  14. Like
    Towd reacted to Mark Romero 2 in What Camera Is He Using?   
    Thanks to everyone who has replied. Really appreciate all the input and tips.
    Here is the latest video I did over the weekend. I think overall the colors are a bit better, as I am becoming a bit more proficient with the qualifier tool. There are still some color casts, but I think that they are a bit better controlled (for the interiors, at least).
    As for the backyard shots, it appears to me that some of the brightest clouds have a pinkish / magenta tinge to them. Is anyone else seeing that? Mostly in the bright part of the clouds along the horizon, starting at the 1:19 mark. Did I just grade those exterior shots too warm???
     
  15. Like
    Towd reacted to Wild Ranger in GH5 to Alexa Conversion   
    Hello!
    I have new stills from a recent shoot I was DPing. It's another theme, kind of a Game of Thrones thing. I think this is a good point of reference to see how the color looks in this type of forest/swamp with those costumes and production design. Here I used some old vintage Takumar lenses because of the organic look they have; I think they help sell the fantasy theme.
    I used the same procedure I always do, and my style of grade using GHa Daylight to LogC. And diffusion, a LOT of diffusion!

  16. Like
    Towd reacted to kye in can't decide on a new camera   
    The GH5 has a mode called Open Gate where it shoots with the whole sensor: 5K 4:3 video.  This mode has less sharpening than the 4K modes with the sharpening turned all the way down.  You can also soften things up in post quite easily.  Your vintage lenses will also do that in-camera if you're using them for those shots.
    I don't know what editing software you are using, but if you have the time and energy to do it, Resolve has a free version that you can use to basically make the footage look however you want it to look.  There are also LUTs that make the GH5 look like an ARRI Alexa, including the sharpness, so the image coming out of it is very flexible if you want it to be.
    4K can be downsampled to create extra bit depth and colour information; however, it depends on how the codec performs and how much noise there is in the signal (less noise is actually worse here, since noise acts as dither).  
    The explanation is very technical, but basically it works because for every 1080 10-bit pixel you get after the conversion, you've averaged 4x 8-bit pixels, so the average can fall between the 8-bit steps, and those 4 pixels contain all three colours before debayering.  In practice it will be somewhere between 10-bit 4:4:4 and 8-bit 4:2:0 depending on exactly how the camera operates, but the benefit is real.
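The averaging argument can be shown with a toy 2x2 block. This ignores noise and codec behaviour, which is why the real-world result lands somewhere between 8-bit 4:2:0 and 10-bit 4:4:4 rather than at the ideal:

```python
import numpy as np

# Four 8-bit pixels whose mean falls between 8-bit code values: the average
# needs finer quantisation to be represented exactly - that's the extra depth.
block = np.array([[128, 129],
                  [128, 129]])
mean_code = block.mean()          # 128.5 - not representable in 8 bits
as_10bit = round(mean_code * 4)   # 514 on the 0-1023 scale: exact
```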
    If the OP is willing to wait for the computer to render proxies and take longer to render out the final project, then basically any computer can edit any resolution.  We forget that people used to make broadcast TV in SD, which is 27x fewer pixels than 4K.  Resolution pretty much doesn't matter when you're editing (it matters when you're grading or doing other things), so you can render proxies and edit them nicely on a computer that has less than 4% of the performance required for 4K editing.  
    I render proxies at 720p and edit them on my laptop on the train.  My computer is perfectly capable of playing 4K files, but I don't need the extra resolution, and saving space on the internal SSD that I'm editing from is an advantage too.
  17. Like
    Towd got a reaction from IronFilm in What Camera Is He Using?   
    So, I watched the 2Gems Media video a few times, plus some of the other videos on their channel.  It's an interesting modern style that he's obviously using to much success.  I wouldn't call it a cinematic style.  He's not afraid to let his whites clip, and it looks like he degrains his footage and doesn't add any back, just leaves it very clean. 
    The most important thing he does is get a nice neutral white balance.  Also, he seems to push overall exposure into the upper range.  I'm not saying he lifts blacks, but his middle exposure area feels higher than normal.  Conversely, for a cinema look, I'd push everything much darker.  I'm sure this is to make a home feel warm and inviting.
    I also noticed he seems to put a soft glow around his highlights, or he has a filter that does it.  In any case, I put a little glow at the very top of the exposure range.  
    I didn't use any secondaries or keys, or animate any values.  So, I just let the beginning remain a bit green since it's getting the bounce off the walls anyway.  It is a bit of a challenging shot with the mix of light sources and colors, so I just aimed for a fairly neutral white balance that I tweaked a tad after pulling a white sample off the back window frame.
    Anyway, here's my interpretation of his style.  Let me know if you think I got close.  I included my node graph for the order I did stuff, only one operation per node.  Just posting last and first frames, last frame first, since I looked at that the most for the hero look.  I feel I could lift the exposure even a quarter to half stop more, but it bugs the hell out of me to be this bright already, and I did try to protect highlights a little more than I think the 2Gems guy does.  Must... protect... highlights...
     


  18. Like
    Towd got a reaction from Mark Romero 2 in What Camera Is He Using?   
  19. Like
    Towd got a reaction from mercer in What Camera Is He Using?   
  20. Like
    Towd got a reaction from Mark Romero 2 in What Camera Is He Using?   
    Yes, shared nodes are really useful for making a scene adjustment ripple across all shots in a scene.  It's something more useful to me in the main grade after I get everything in my timeline's color space.
    For me, what I like about pre- and post-clips is that I typically have 2 or 3 nodes in my pre-grade, and the purpose of my pre-clip is just to prepare footage for my main grade.  For example, a team I work with frequently really likes slightly lifted shadow detail, so I'll give a little bump to shadow detail and then run the color space transform in my pre-clip.  If one camera is set up really badly one day and I need two different pre-clips for that camera, I'll just make multiple incrementally numbered groups for that camera, so I've never had a reason to put a shot in multiple groups.  The other thing I really like about groups is that you get a little colored icon on the timeline thumbnails of all shots in the current group.  This makes for a nice visual sanity check when I'm scanning through a ton of footage on a long project.  Usually the camera used is fairly obvious from the thumbnail, from A cams to drones to body cams, so the combination of thumbnail and colored group icon is a nice check that I've prepped all my footage correctly.
    I know there is some extra flexibility in putting grading nodes before or after a color space transform, but for me on a large project, the main purpose of the pre-clip is just to get things close and into the proper color space.  If I really need to do more adjustments that have to happen before the color space transform, I'll flip around color spaces in my main grade.  But my goal is to do all my shot-to-shot and scene balancing in my main grade with everything in my delivery color space.  Keeps things sorted for me.
    Ultimately, it all depends on the type of work you are doing.  If I was doing feature work that is all shot on one camera type, my system wouldn't be very useful.  But I do a lot of doc work and outdoorsy adventure stuff that is typically shot on all types of cameras and conditions, so it can be really useful for keeping things organized.
    One last trick with the groups is that if I'm also mixing 6k, 4k, and 2k footage, I can throw a little sharpening or blurring into the post-clip section to match up visual detail between cameras.  Then use the timeline grade to do any overall finishing if needed.
    Ultimately, DaVinci is just a wonderfully flexible system for developing custom workflows that work for you.  I love that there are so many ways to organize and sort through the color process.
  21. Like
    Towd got a reaction from Mark Romero 2 in What Camera Is He Using?   
    A big +1 on this for myself as well.  Some people seem to get good results just pulling log curves until they look good, but I find that if I handle all the color transformations properly, I'm reducing the number of variables I'm dealing with and I have the confidence that I'm already working in the proper color space.  Once in the proper color space, controls work more reliably, and it is also a big help if you are pulling material from a variety of sources.  
    I have not tried the ACES workflow, but since I'm pretty much always delivering in rec 709, I like to use that as my timeline colorspace.  So, I just convert everything into that.
    One feature I also really like about Resolve is the ability to use the "grouping" functionality that opens up "Pre-Clip" and "Post-Clip" grading.  Then I group footage based on different cameras, and I can apply the Color Space Transform and any little adjustments I want to make per camera in the "Pre-Clip" for that group/camera.  That way when I get down to the shot by shot balancing and grading, I already have all my source material happily in roughly the same state and I can begin my main balance and grade with a clean node graph.  On a typical project, I may have material from a variety of Reds with different sensors, DSLRs, GoPros, Drones, and stock footage.  If I had to balance everything shot by shot by just pulling on curves, I think I'd go crazy.
    If you don't work in Resolve, you can do roughly the same thing by using adjustment layers in Premiere and just building them up.  Use the bottom adjustment layer to get all your footage into Rec 709 with any custom tweaks, then build your main grade above that. 
    Even if you are not working from a huge variety of source material, many projects will at least have Drone and DSLR footage to balance.  You can then develop your own LUTs for each camera you use, or just use the manufacturers LUTs to get you into a proper starting place.
    One final advantage of using the Color Space Transform instead of LUTs is that LUTs will clip your whites and blacks if you make adjustments pre-LUT that go outside the legal range.  The Color Space Transform node will hold onto your out-of-range color information in case you want to bring it back further down the line.
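The clipping difference can be demonstrated with a toy 1D LUT against an equivalent float transform. The gamma curve below is a stand-in for illustration, not Resolve's actual CST math:

```python
import numpy as np

lut = np.linspace(0.0, 1.0, 256) ** (1 / 2.2)     # toy gamma LUT, domain 0.0-1.0

def apply_lut(x):
    idx = np.clip((x * 255).astype(int), 0, 255)  # out-of-range input clips here
    return lut[idx]

def float_transform(x):
    return np.sign(x) * np.abs(x) ** (1 / 2.2)    # stays float, nothing discarded

hot_highlight = np.array([1.3])   # pushed above legal range by a pre-LUT adjustment
```

apply_lut flattens the highlight to 1.0 for good, while float_transform keeps a value above 1.0 that a later node can still pull back into range.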
  22. Like
    Towd got a reaction from kye in What Camera Is He Using?   
  23. Like
    Towd reacted to kye in What Camera Is He Using?   
    Looks like a GH5 to me.  The shot at 0:40 shows the red dot on the record button quite clearly.
    10-bit video can help in mixed lighting, but I shoot in mixed lighting all the time and I found that the weak link was my colour correction skills.  Knowing how to process the image and how to handle colour spaces in post really upped my game.
    Here's some random thoughts:
    - If you have a shot where you move from one WB to another, you can create two copies of the shot, one with the first WB and one with the second, then just cross-fade between the two.  It saves all kinds of work trying to automate changes etc.
    - Depending on what software you're using, you can try doing a manual WB adjustment before converting from whatever capture colour space (eg, LOG) vs afterwards.
    - I used to try grading the LOG footage manually, without anything converting between the colour spaces, and I'd create awful looking images - using a technical LUT or Colour Space Transform to do the conversion really made a huge difference for me.
    - I don't know about you, but I often shot in mixed lighting because that was the only lighting, and if the camera wasn't getting the right exposure levels or the ISO was straining (I use auto ISO) then that's another source of awful colours - maybe just use heaps of light.
    In a sense you can either pay the guy or just do a bunch of tests at home and try to figure it out yourself.  I'd suggest:
    1. Work out how to change the WB in post by shooting a scene in a good WB and then in a wrong WB, then in post work out how to match the wrong one to the good one.
    2. Then work out how to go between two scenes with different lighting by doing one shot that moves between two rooms with different WB, and use the cross-fade technique above to figure that out.
    3. Then work out how to deal with mixed lighting by having two different coloured light sources in the same room, moving between them, and working out how to grade that.
    Basically, start simple, then add more difficulty until you're comfortable shooting by the light of a xmas tree.  You may find that shooting in a middle-of-the-range WB in-camera will give you the best results, but it might also be that one lighting source is the most difficult and you just set it to that and then adjust for the others in post.  Experimentation is the key with this stuff.
    But keep your head up - this shit is hard.  Colour grading well shot footage in robust codecs is as easy as applying a LUT.  Colour grading badly-lit footage from consumer codecs is the real challenge and will test all but the most seasoned colourists, so in a way we're operating at the hard end of the spectrum.
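The first exercise, matching a wrong WB to a good one, usually reduces to per-channel gains computed from something that should be gray. A minimal sketch, with tint values invented for illustration:

```python
import numpy as np

def wb_gains_from_patch(patch):
    """Scale R and B so they match G on a patch that should be neutral."""
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    return np.array([g / r, 1.0, g / b])

# A tungsten-tinted frame whose wall should read neutral gray:
frame = np.full((4, 4, 3), [0.60, 0.50, 0.35])
balanced = frame * wb_gains_from_patch(frame)   # wall now reads R = G = B = 0.5
```

Real WB tools work in linear or camera-native space before the colour space conversion, which is the point of the before-vs-after experiment above; this sketch only shows the arithmetic.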
     
  24. Thanks
    Towd got a reaction from IronFilm in The Nikon Z6 will be the firat consumer camera to output 12 bit video   
    Ahhhh, "Motion Cadence".  I love it when that old chestnut gets pulled out regarding a cinematic image.  I've spent many days in my career tracking and matchmoving a wide variety of camera footage from scanned film, to digital cinema cameras, to cheap DSLR footage.  So I find the whole motion cadence thing fascinating since I sometimes spend hours to days staring at one shot trying to reverse engineer the movement of a camera so it can be layered with CGI.
    So leaving out subtle magical qualities visible to a select subset of humans who have superior visual perception, or describing it like the tasting of a fine wine, I can only think of a few possible reasons for perceptible motion cadence.  I'll lay them out here, but I'm genuinely curious as to any other factors that may contribute to it, because "motion cadence" in general plays hell with computer generated images that typically have zero motion cadence.
    #1  ROLLING SHUTTER.  The bane of VFX.  Hate it!  Hate it!  Hate it!  For me personally, this must be 90% of what people refer to as motion cadence.  It plays hell with the registration of anything you are trying to add into a moving image.  Pictures and billboards have to be skewed, and fully rendered images slip in the frame depending on whether you are pinning to the top, middle, or bottom of the frame.  I work extensively with a software package called Syntheyes that tries to adjust for this, but it can never be fully corrected.  For pinning 2D objects into a shot, Mocha in After Effects offers a skew parameter that will slant the tracking solution to help compensate.  This helps marginally.
    #2 Codec encoding issues.  I have to think this contributes minimally, since I'd expect extreme encoding errors to show up more as noise.  I've read theories about how Long GOP can contribute to this versus All-I, but I've never noticed it bending or changing an image in a way I could detect.  I'd think it would be influenced more by rolling shutter, so I can only think it contributes something like 5-10% of the motion cadence in an image.  I'd love to know if I'm wrong here and it's actually a major factor.  Beyond casual observation, anything technical I could read regarding this would be welcome.
    #3 Variable, inconsistent frame recording.  This is what I think most people mean when they bring up the motion cadence of a camera.  But outside of a camera doing something really bizarro like recording at 30 fps then dropping frames to encode at 24, I can't believe that cameras are that variable in their recording rates.  I may be totally wrong, but do people believe that cameras record frames a few milliseconds faster or slower from frame to frame?  Does this really happen in DSLR and mirrorless cameras?  I find it hard to believe.  I could see a camera waiting on a buffer to clear before scanning the sensor for the next frame, but I can't believe it's all that variable.  If it really is that common, wouldn't it be fairly trivial to measure and test?  At the very least, some kind of millisecond timer could be displayed on a 144 Hz or higher monitor and then recorded at 24p to see if the timer interval varies appreciably.
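    The timer test boils down to comparing frame-to-frame intervals against the nominal frame duration.  A rough sketch of that measurement (the timestamps here are fabricated for illustration; real ones would come from the recorded timer readings or from the container's per-frame timestamps):

```python
# Sketch: deviation of each frame interval from the nominal 24p duration.

def interval_jitter(timestamps_ms, fps=24.0):
    """Per-frame deviation (ms) from the nominal frame interval."""
    nominal = 1000.0 / fps
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return [iv - nominal for iv in intervals]

# A perfectly regular 24p clip shows (near-)zero jitter on every interval.
regular = [i * (1000.0 / 24.0) for i in range(6)]
print(max(abs(j) for j in interval_jitter(regular)))  # within float error of 0
```

    If cameras really did vary their capture intervals by visible amounts, this kind of histogram over a few hundred frames would show it immediately.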
    #4 Incorrect shutter angle.  This could be user error; I've seen enough of it on YouTube to know it's common.  I'd assume it's also possible for a camera to read the sensor at a non-constant rate for some reason, but I'd think that would show up in rolling shutter anomalies as well.  Dunno about this one, but I think it may be more of a stretch.  It should also be visible at the frame level as tearing, warping, or some kind of smearing.  So I doubt this happens much, but it should be measurable with a grid or chart, like rolling shutter, and detectable by matchmoving software the way rolling shutter is.
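    For reference, the shutter-angle relationship is simple enough to sanity-check in a couple of lines.  This is just the standard definition, not tied to any particular camera:

```python
def shutter_angle(shutter_seconds, fps):
    """Shutter angle in degrees: the fraction of the frame interval
    during which the shutter is open, times 360."""
    return 360.0 * shutter_seconds * fps

# The classic 180-degree rule: a 1/48 s exposure at 24 fps.
angle = shutter_angle(1 / 48, 24)
assert abs(angle - 180.0) < 1e-9
```

    A much faster shutter (say 1/500 s at 24 fps, about a 17-degree angle) is what produces the strobing, staccato motion people often mistake for a camera problem.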
    That's generally all I can think of, and without any kind of proof, I'm calling bullshit on #3, but I'd be happy to be proven wrong.  I'd be genuinely intrigued to find that some cameras vary their frame recording intervals by any amount visible to the human eye.
    If anyone has any real insight into this, I'd love to read more about it because it directly affects something I deal with frequently.
  25. Like
    Towd got a reaction from IronFilm in The Nikon Z6 will be the first consumer camera to output 12 bit video   
    Regarding a video-ish look to the Z6 footage, I found the following video interesting.  Nikon seems to overexpose by about a stop or so compared to Sony.  When I shot with a D5200 a few years back, I got in the habit of underexposing by a stop or two and pulling up my images in post.  Considering that clipped or blown-out highlights are one of the major contributors to a "video look", my guess is that Nikon's default exposure levels may be to blame for this opinion among some shooters who shoot mostly full auto.
    Luckily the easy solution is to underexpose and balance the exposure in post.  My old D5200 footage graded really well as long as I protected the highlights.  It produced a really nice, thick, cinematic image with some minor corrections in post.  That thing was a gem of a camera for $500.  But yeah, if you shot the standard profile and just auto-exposed it, the result was pretty crap.  Nikon even let you install custom exposure curves so you could tweak the rolloff for highlights and shadows.  I remember we intercut it with a $50,000 cinema camera on a few projects and nobody was the wiser.
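    The underexpose-and-push workflow is just a gain applied in linear light: one stop is a factor of two.  A minimal sketch of the arithmetic (the middle-grey value and stop count are illustrative; real grading tools operate on log- or gamma-encoded values, so the actual transform there differs):

```python
def push_exposure(linear_value, stops):
    """Gain a linear-light value by a number of stops (2x per stop)."""
    return linear_value * (2.0 ** stops)

# Middle grey (0.18) recorded one stop under, then pushed back in post.
captured = 0.18 * 0.5
restored = push_exposure(captured, 1.0)
assert abs(restored - 0.18) < 1e-9  # back to middle grey
```

    The trade-off is noise: pushing shadows up amplifies whatever noise was captured, which is why protecting the highlights without burying the shadows matters.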
    While I know there are a lot of really talented filmmakers who experiment, measure, and adjust their workflow to wring every last drop out of the images their cameras make, I think there are a lot of folks who shoot full auto, drop their video into a timeline, and are disappointed by the result.  Hell, one YouTuber's whole approach is that he's just an average guy who shoots everything at pretty much default settings, and then he does camera reviews.  Nothing wrong with that for the target market, I guess.  Here you can see him blowing out the sky and background in his video while the exposure auto-adjusts.  Nikon needs to work on getting better default settings into their cameras to support this crowd.  I think it's a weak spot for them, judging from the video reviews I've been seeing, because I know that with even a little effort they can create a superb image.
    Canon cameras also produce great images, but one of Canon's strengths, IMO, is that their default out-of-the-camera results are exceptional.  Whoever does the final tweaking for their cameras is really good at ensuring footage looks great at the default settings.