
Posts posted by James_H

  1. I agree with @Mondo.  You should really try to get a proper microphone to record this with.  I really don't have much experience with production audio, so maybe others can chime in on this, but if you're able to get a decent mic that you can place close to the actors (i.e. not attached to the camera, but operated by someone else) then you should greatly improve the quality of this film.

  2. Thanks for this.  I am taking this comment on board and am currently playing with profiles that use more saturation, to see how much of an impact your suggestions have on my footage.  I appreciate your effort here and your logic.

    I'm glad to help.  I've played around with saturation values a lot lately, and it seems to me that only really low saturation levels (i.e. -3), and only in some scenes, have a noticeable impact.  I shot stuff at -1 yesterday and today and it all came out pretty nice.  I'd love to hear the outcome of your tests.

  3. I think the conclusions I have come to are:

    1. Make sure the camera's codec is capturing enough colour information, i.e. don't set the saturation too low in-camera;
    2. Make sure your footage isn't creating invalid RGB values, i.e. keep the saturation within reasonable limits throughout each step of the grade;
    3. Always export using 32 bit processing.

    For my setup, that means setting the saturation to 0 (default) in my 5N and reducing saturation if needed using the FCC in Premiere Pro.  I think this discussion has been very useful, thanks to everyone who shared their input on the matter.

  4. Yes, if you stay in YCbCr then there's no clipping; we just risk making even more invalid RGB values, so that when the conversion to 8-bit RGB does happen, such as at playback on RGB devices, the clipping occurs.

    This is the problem with past advice on dialing down saturation: in 8-bit and probably 16-bit workflows, when the YCC-to-RGB conversion happens somewhere in the chain, channels get clipped, leading to a real, unretrievable loss of data.

     

    So, if I understand this properly, then the best workflow to use (in terms of keeping as much information and precision as possible) would be to shoot with relatively high saturation in-camera, but then tone it down using something like the FCC once it's imported.  That way we keep the precision of shooting high sat, but prevent invalid RGB values by toning it down in post.

  5. It turns out the clipping has to do with the colour space, not the bit depth.  The old premiere effects use RGB colour to perform their operations.  When I create an effect that stays in Y'CbCr, the values don't get clipped.

     

    On another note, I've created an effect that tries to simulate the "Vibrance" effect from Lightroom.  Essentially it boosts the saturation for pixels that are desaturated, but doesn't affect the pixels that are already saturated.  I will test out how this affects footage with high and low saturation and I'll post my results.  Hopefully I'll be able to test it today.
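
    In case it helps to see what I mean, here's a rough Python/numpy sketch of the idea (purely illustrative - made-up names and a made-up weighting curve, not the actual plugin code):

        import numpy as np

        def vibrance(cb, cr, amount=0.5):
            # cb, cr: chroma offsets around 0 (i.e. Cb/Cr with the neutral point removed),
            # roughly in the range -0.5 .. 0.5.  'amount' is the extra gain given to fully
            # desaturated pixels; already-saturated pixels are left (almost) untouched.
            sat = np.sqrt(cb ** 2 + cr ** 2) / 0.5              # 0 = grey, ~1 = fully saturated
            gain = 1.0 + amount * (1.0 - np.clip(sat, 0.0, 1.0))
            return cb * gain, cr * gain

        # A nearly-grey pixel gets the full boost, a strongly saturated one barely changes:
        cb = np.array([0.02, 0.45])
        cr = np.array([-0.01, -0.40])
        print(vibrance(cb, cr, amount=0.5))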

  6. Could you just confirm that flagoff.mp4 was the file you tested the FCC & levels with?  Sorry to fuss, I wasn't very clear before.

     

    Still have to look at your thread on dvxuser but have to register so...

     

    I'd be interested to know what FCP X makes of the test files too.  It's 32-bit, but all the same it is Apple, and previously QT made the 16 & 255 text appear not to clip even though the text didn't actually have RGB values of 16 & 255, skewing the results, due to the stupid gamma issues with color management in OSX and QT - prior to Mountain Lion, that is.

     

    I did indeed use the flagoff.mp4 file.  But I didn't use PP's "Levels" effect because it clips.  It must be an old effect that hasn't been updated to use 32 bit colour.  The FCC, on the other hand, does not clip and clearly shows values above and below 100 & 0.  

     

    I just tried throwing the files in an FCP X timeline and they seemed to work just fine.  On the waveform, I could clearly see the values beyond 0 & 100 for the flagoff.mp4.  And for flagon.mp4, the values were mapped properly to 0 - 100.  This is on FCP X (10.0.4) running on OS X (10.8.2) Mountain Lion.  

     

    One thing that I find bizarre is that PP effects that are only 8 bit clip the levels above and below 100 & 0, even on 8 bit footage.  From what I understand, these values are stored in an 8 bit stream, but the effects can't handle it for some reason.  I've begun learning about making plugins for Premiere and After Effects and I've noticed that when I disable 32 bit, my effect will clip the values.  But when I enable 32 bit, it handles it just fine.  Am I missing something?  I find it quite weird.

     

    EDIT: I think the clipping of 8 bit effects doesn't have to do with bit depth at all.  I have a feeling it has to do with the fact that these effects use the RGB colour space.  It seems that working in Y'CbCr space holds onto the high and low values, even when using only 8 bit values.
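
    To make that EDIT concrete, here's a tiny Python example of what I think is happening (my own numbers, not taken from the actual test files): an 8 bit Y'CbCr stream can happily store "super-white" luma values above 235, but the standard BT.601 conversion to 8 bit RGB clamps everything to 0 - 255, so once an effect works in RGB that headroom is gone for good.

        import numpy as np

        def ycbcr_to_rgb_8bit(y, cb=128, cr=128):
            # Video-range (BT.601) Y'CbCr -> 8 bit RGB, with the usual clamp to 0-255.
            yf = (y - 16) * 255.0 / 219.0
            r = yf + 1.402 * (cr - 128) * 255.0 / 224.0
            g = yf - 0.344136 * (cb - 128) * 255.0 / 224.0 - 0.714136 * (cr - 128) * 255.0 / 224.0
            b = yf + 1.772 * (cb - 128) * 255.0 / 224.0
            return np.clip(np.round([r, g, b]), 0, 255).astype(int)

        # Two different super-white luma values, both perfectly legal in the 8 bit Y'CbCr stream...
        print(ycbcr_to_rgb_8bit(245))   # -> [255 255 255]
        print(ycbcr_to_rgb_8bit(254))   # -> [255 255 255]
        # ...collapse to the same clipped RGB white, so the difference can't be recovered afterwards.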

  7. One question: what about contrast?  Is it the same principle or does it work slightly differently?  I've found that if you dial it all the way down, you help the shadows but you end up sacrificing the highlights - this was really evident in the GH3 footage I graded.

    In theory, the same should apply to the luma channel.  Therefore, if you have a very low contrast scene, and also dial down the contrast in-camera, then you'll introduce banding into the shot when you increase the contrast in post.  Again in theory, lowering the contrast should actually help both shadow and highlight areas, but often at the expense of the mid tones.

     

    As for sacrificing shadows/highlights, it really depends on the camera.  I've never used a Panasonic GH camera.  The only one I've actually taken the time to tinker with is the Sony NEX-5N.  In this camera, lowering the contrast reduced the S-Curve applied to the luma channel, while maintaining the same black and white levels.  In a 32 bit codec, this would have virtually no permanent effect.  But since it's an 8 bit codec, it can be the difference between nice smooth gradients and ugly jarring banding.

  8. To me, using the low bitrate NEX-5N footage (24Mbps), desaturating in camera, and then, if required, bringing back colour using a saturation boost in the 32-bit effects is just the same as shooting without lowered saturation and reducing the saturation in post.  The data is still there.  If anything, if you know you will be doing any type of grading, I find it better to get WB right, then shoot flat (lowest contrast and saturation); monitoring is easier since you are dealing with less information and can focus on framing, exposure etc. without letting an inaccurate representation of the final colour get in the way.  It is also a nice surprise taking your flat footage into post and having a flat canvas you can manipulate.  

     

    The arguments for and against saturated or desaturated in camera are split 50/50 from what I have read.  I certainly don't think my captured footage is limited in colour information due to desaturation in camera - I feel my dynamic headroom in both exposure and RGB colour handling is actually increased using a low saturation profile in 90% of situations.

     

    EDIT: All of what I've written here is to the best of my knowledge, but should also be verified.  As a computer science university student, I feel comfortable with the claims I have made.

     

    @rich101 Indeed, there are some key advantages to both paths.  But for the sake of making decisions, we should understand all the implications of both options.  Unfortunately, I think you've made a mistake in saying that desaturating the footage in-camera and bringing it back in post will deliver the same result as having the saturation high in-camera, even if you work in a 32 bit float colour space.  Let me explain with an example.

     

    Suppose you have a smooth gradient in one of the channels from numerical value 0 to 5.  In 32 bit float, the data might look like this:

    [0.0 - 0.3 - 0.6 - 1.0 - 1.3 - 1.6 - 2.0 - 2.3 - 2.6 - 3.0 - 3.3 - 3.6 - 4.0 - 4.3 - 4.6 - 5.0]

    where each number corresponds to the value of one of the chroma channels of a pixel, i.e. each number in the sequence is a different pixel.

    If we increased saturation on this data, let's say double its original value, it would look like this:

    [0.0 - 0.6 - 1.2 - 2.0 - 2.6 - 3.2 - 4.0 - 4.6 - 5.2 - 6.0 - 6.6 - 7.2 - 8.0 - 8.6 - 9.2 - 10.0]

     

    Now suppose that same gradient were captured using an 8 bit int codec; it would look like this:

    [0 - 0 - 1 - 1 - 1 - 2 - 2 - 2 - 3 - 3 - 3 - 4 - 4 - 4 - 5 - 5]

    8 bit encoding does not allow for decimal places, so the numbers are rounded to their nearest integer.  If we convert these to 32 bit float values after the fact, then we get this:

    [0.0 - 0.0 - 1.0 - 1.0 - 1.0 - 2.0 - 2.0 - 2.0 - 3.0 - 3.0 - 3.0 - 4.0 - 4.0 - 4.0 - 5.0 - 5.0]

    As you can see, converting to 32 bit float does not bring back any information that was lost in the 8 bit encoding.  So, if we apply the same amount of saturation on these clips, i.e. doubling it, we get this:

    [0.0 - 0.0 - 2.0 - 2.0 - 2.0 - 4.0 - 4.0 - 4.0 - 6.0 - 6.0 - 6.0 - 8.0 - 8.0 - 8.0 - 10.0 - 10.0]

     

    As you can see, even though we've converted the 8 bit footage into 32 bit float, we have not gained any information.  Therefore, increasing saturation will still cause banding.  If we set the saturation high in-camera, the colour is affected before being encoded into 8 bit values.  We are still left with 8 bit footage, but the values would look more like this:

    [0 - 1 - 1 - 2 - 3 - 3 - 4 - 5 - 5 - 6 - 7 - 7 - 8 - 9 - 9 - 10]

     

    By starting with greater saturation values in-camera, we maintain a greater degree of colour precision.  And since modern codecs encode in a Y'CbCr colour space, this not only affects the saturation, but also the hue, which IMO is even more important.  Depending on the shot, it might not have a huge impact on the colour accuracy.  But I have definitely experienced some shots that have suffered from this, notably the test shots I did in my DVXuser post.
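
    If anyone wants to play with the numbers, here's the example above as a few lines of Python (purely illustrative - numpy's rounding stands in for whatever the codec actually does):

        import numpy as np

        # The same 16-step gradient from 0 to 5 in one chroma channel, as a float ramp:
        gradient = np.linspace(0.0, 5.0, 16)

        # Path A: shoot flat, let the codec round to 8 bit ints, then double the saturation in post.
        flat_in_camera = np.round(gradient)          # quantised: lots of repeated values
        boosted_in_post = flat_in_camera * 2.0       # 32 bit float post can't re-invent the lost steps

        # Path B: boost 2x in-camera, *before* the 8 bit quantisation.
        boosted_in_camera = np.round(gradient * 2.0)

        print(boosted_in_post)      # [ 0.  0.  2.  2.  2. ...  8. 10. 10.]  -> coarse, banded
        print(boosted_in_camera)    # [ 0.  1.  1.  2.  3. ...  9.  9. 10.]  -> finer steps survive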

     

    Does this make sense?

  9. If you use the full range mp4 in the zip, when added to the timeline it should appear as black and white horizontal bars, but the Y waveform should show levels above and below 0 - 100.  Dropping a levels filter on it and remapping the typical 0 - 255 into 16 - 235 should make the 16 and 255 text appear?

    Disable the levels filter and add a Fast Color Corrector in between the levels filter and the clip, make a slight adjustment to the FCC, and then re-enable the levels filter.  Do you still see the 16 and 255 text?

    If it doesn't then the FCC is clipping.

     

    With the FCC applied, the values above and below 100 & 0 are still maintained in the waveform.  And, just as you predicted, the values mapped perfectly to 16 - 235.  These results are expected since the FCC is a 32-bit float effect.
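
    For reference, the remap that the Levels filter is doing is (as far as I can tell) just the standard linear squeeze of full range into video range, something like this in Python:

        def full_to_video_range(y):
            # Linear remap of full-range 8 bit luma (0 - 255) into video range (16 - 235).
            return 16 + y * (235 - 16) / 255.0

        print(full_to_video_range(0))      # 16.0  -> the "16" text becomes visible on the waveform
        print(full_to_video_range(255))    # 235.0 -> the "255" text becomes visible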

     

    In PP, there's a little tag that appears next to effects to show whether they're 32-bit, hardware accelerated, and whether they maintain the Y'CbCr colour space.  FCC, RGB Curves, Luma Curve and Three-Way Color Corrector all have these tags.  I often use MB Colorista II to correct and grade my footage.  Colorista is 32-bit, but it's not hardware accelerated, nor does it maintain the Y'CbCr space.  So applying the FCC to adjust the saturation before applying Colorista is probably the best way to go.

     

    Which NLE & effects do you use?  (FCP7, FCPX, Avid MC...)

  10. The care needs to be taken not with whether 'chroma clips', as chroma is a YCbCr concept, but with the resultant RGB values generated when doing probably 90% of the color processing in an NLE or grading app; once the luma and chroma planes are combined to generate RGB, that's when 'channels' get clipped.  This is compounded by using a typical 8 or 16-bit mode to do the color conversion: invalid RGB values are created, leading to abnormalities in color, gamma and the overall 'quality' of the resultant image.

     

    That's a good point @yellow.  But I think I might have a solution.  I'm not sure how other NLEs work, but with Premiere Pro, there is an effect called "Fast Color Corrector" that does all its functions within the Y'CbCr colour space.  Therefore, you could reduce the saturation of a shot before applying any other effects.  This prevents you from locking in a saturation value that could potentially be too low and maintains a higher bit depth.

     

    Basically, by keeping the saturation from going too low in-camera, we're protecting the colour information which we could reduce in post if need be.  I've experimented with different saturation values before and I have a post on DVXuser discussing the results.

    http://www.dvxuser.com/V6/showthread.php?300891-Quick-Comparison-of-the-Picture-Profiles-on-the-NEX-5N/page2

    I'm using the NEX-5N and I found that boosting the saturation to its maximum in-camera produced better results than reducing it to its minimum.  Obviously, I wouldn't set the saturation so high in a real life setting, but it helps to exaggerate the results.

     

     

     

    That's for the 5D's FF sensor, & for some reason it seems to behave differently to the APS-C.

     

    I'm pretty sure it varies from sensor to sensor.  I'm assuming that in this case you're talking about Canon APS-C (550D - 650D, 60D, 7D).  I think you're right about having to test it out for each camera.  For example, with my Sony, I couldn't see any added aliasing artefacts when sharpness was set anywhere from -3 to -1.

  11. Also, if the GH series of cameras give better resolution & are sharper, why dial it down all the way just to re-sharpen in post?  That just seems pointless to me, & yes I do understand that they can over-sharpen in-camera.

     

    Stu Maschwitz from prolost.com talks about the implications of in-camera sharpening on his blog.  He brings up some interesting points.  The conclusion that he makes is that in-camera sharpening should be completely avoided (not even one tick above the minimum should be used).

     

    http://prolost.com/blog/2012/4/10/prolost-flat.html

  12. Just think that the 8-bit quality does need a little help & the theory that flat is best is probably the most misguided advice for DSLRs.  You simply can't put back in stuff that isn't there in the first place - this isn't RAW.  So dialing down everything, IMO, isn't helping anything, it's disabling you.  With a DSLR you should try to get it as perfect as possible & then tweak it in post - 8-bit isn't that flexible before it falls apart on you.

     

    I couldn't agree more.  I understand the idea of shooting at low contrast levels to gain DR, but all too often people seem to just leave it as low as it goes without ever changing it.  It's something that should be adjusted for each shot to get the most out of those 8 bits.  One thing I REALLY don't understand is why anyone would dial down the saturation of a clip.  How often do you even come close to clipping the chroma channels?  I believe this is one of the most misunderstood parts of DSLR video.

     

    In this case, a lower contrast was clearly needed and /p/ was right to keep it low.  But it's really not a one size fits all kind of setting.

  13. Here's my go at a simple grade.  My objective was to give it a warm, inviting feeling.  I may have pushed the colours a little too far; I'm not entirely sure.

     

    I used MB Colorista II for this one.  I used a powermask to bring back some of the sky's natural cool tones farther from the sun.

     

    https://vimeo.com/60835279

     

    The password is: /p/

  14. I can't say whether shooting flat will reduce noise in any way; this is one thing I'm not sure of.  What I do know is that shooting flat will help prevent clipping of the shadows and highlights.  In other words, it maximizes your camera's dynamic range.  In addition, from my understanding, shooting in log can help reduce compression artefacts in the darker parts of the image.  If you're shooting in log, you'll want to apply a LUT to the footage in post to get the right colours back.
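
    In case the LUT step sounds mysterious: conceptually a 1D LUT is just a lookup table, where every input code value gets replaced by a stored output value.  A bare-bones Python sketch (the curve below is made up; it's not any real log-to-Rec.709 LUT):

        import numpy as np

        def apply_1d_lut(frame, lut):
            # Replace every 8 bit code value in the frame with its entry in a 256-entry LUT.
            return lut[frame]

        # A made-up brightening curve standing in for a real "log to display" conversion:
        x = np.arange(256)
        lut = np.clip(np.round(255.0 * (x / 255.0) ** 0.6), 0, 255).astype(np.uint8)

        frame = np.array([[10, 60], [128, 240]], dtype=np.uint8)
        print(apply_1d_lut(frame, lut))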

  15. I really enjoyed this article Andrew. This sort of in-depth coverage is really nice to see and you did it very well. Well done! :rolleyes:
    I really believe this camera, with all the coverage it gets, is still underrated. I've been totally blown away with its performance.