
Tone Mapping to achieve sharpening


Sean Cunningham

I posted these images to a topic in the anamorphic forum, since they're an additional example of adding grain in post to enhance DSLR video, but perhaps even more important is that I did not apply any form of traditional sharpening kernel to achieve the improvements in clarity you see here in this Canon 7D footage.

Here are some examples from my most recent project, showing 7D before and after (de-moire, mild tone-mapping for sharpening, simulated high-speed grain from AE, MagicBullet grading)...

[IMG]http://i47.tinypic.com/2nsxv1g.png[/img]
[IMG]http://i50.tinypic.com/2d9p91x.png[/img]
[IMG]http://i49.tinypic.com/2jexyqb.png[/img]
[IMG]http://i50.tinypic.com/dm9snt.png[/img]
[IMG]http://i48.tinypic.com/z8ffp.png[/img]
[IMG]http://i45.tinypic.com/33u75he.png[/img]

...top two scenes were shot with the kit zoom (car interior with the camera mounted via StickyPod) and the bottom CU was shot with the 85mm f/1.2L, which is an amazing, amazing lens.

Anyway, as I said, I used a tone-mapping technique on the luminance channel only.  You'll see that I wasn't pushing the technique so far as to go for its pseudo-HDR look.  This method not only provides sharpening with a much higher threshold for false edging than traditional sharpening, but by processing the chroma separately and then re-combining it with the luminance channel I'm also able to do chroma smoothing/de-moire.
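
Roughly, the split/process/recombine idea looks like this in Python/OpenCV terms (a sketch only, not the actual AE setup: CLAHE stands in for the tone-mapping/LCE operator, a median filter stands in for the chroma smoothing, and the file name is just a placeholder):

[code]
# Sketch of the luma/chroma split described above, using OpenCV.
# CLAHE stands in for the tone-mapping / local-contrast operator, and the
# median filter stands in for the chroma smoothing / de-moire pass.
import cv2

frame = cv2.imread("frame_0001.png")              # one extracted 7D frame (placeholder path)

ycc = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)    # split into luma + chroma
y, cr, cb = cv2.split(ycc)

# Chroma pass: smooth the sub-sampled chroma to reduce moire/rainbowing.
cr = cv2.medianBlur(cr, 5)
cb = cv2.medianBlur(cb, 5)

# Luma pass: mild local-contrast enhancement on Y only, kept well short
# of the pseudo-HDR look (low clip limit).
clahe = cv2.createCLAHE(clipLimit=1.5, tileGridSize=(8, 8))
y = clahe.apply(y)

# Recombine and go back to BGR for output.
out = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("frame_0001_processed.png", out)
[/code]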

These are great techniques; I work this way too, FWIW, especially avoiding typical sharpening methods, a.k.a. USM, and using local contrast enhancement (a kind of tone mapping / LCE) instead. Tone mapping proper is really an HDR-to-LDR process, whereas LCE is about refining edges and tightening gradients to increase acutance, i.e. perceived sharpness.

And I work only on the native YCbCr, i.e. luma and chroma separately, the whole way through.
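
To put numbers on that USM vs LCE distinction: both can be written as the same operation on the luma plane, the difference being radius and gain. A rough Python sketch (the values and file name are arbitrary placeholders):

[code]
# Both USM and LCE can be written as "original + amount * (original - blurred)"
# on the luma plane; what differs is the blur radius and the gain.
import cv2
import numpy as np

def local_boost(y, radius_px, amount):
    """y: float32 luma in 0..1. Returns luma with detail / local contrast boosted."""
    blurred = cv2.GaussianBlur(y, (0, 0), sigmaX=radius_px)   # kernel size derived from sigma
    return np.clip(y + amount * (y - blurred), 0.0, 1.0)

y = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

usm = local_boost(y, radius_px=1.5, amount=1.0)    # small radius, strong gain: edge sharpening
lce = local_boost(y, radius_px=40.0, amount=0.3)   # large radius, gentle gain: acutance / local contrast
[/code]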

What inspired me years ago was listening to Peter Doyle talking about grading in Harry Potter.

The podcasts are here: http://www.fxguide.com/fxpodcasts/color_in_harry_potter/ and http://www.fxguide.com/fxpodcasts/the-art-of-grading-harry-potter-peter-doyle/ for anyone interested; the first of the two, from 2009, I found most inspiring, talking about tone mapping, luma sharpening, etc.

And his quip about '1D look-up table jockeys' still cracks me up today, that is, the general assumption that 'color grading' means Lift Gamma Gain, a 3-wheel color corrector and some preset 'Look' :-) They have their uses and I'm not knocking them, but listening to Peter Doyle's techniques opens up the thinking.

Also worth a look is the great work done by Lowry Digital, now Reliance Mediaworks, on movies like Zodiac http://www.theasc.com/ac_magazine/April2007/Zodiac/page1.php and The Curious Case of Benjamin Button http://www.moviemaker.com/producing/article/lowry_digital_the_curious_case_of_benjamin_button_brad_pitt_20090203/, processing the Thomson Viper source files.

Not owning AE, and for anyone interested in a free route: I use these techniques a lot, mainly on DSLR H.264, but also on MPEG-2 HDV, and even for uprezzing / deinterlacing / luma-chroma processing DV, via Avisynth and the following plugins:

MCTDmod [Motion Compensated Temporal Denoise] :- http://forum.doom9.org/showthread.php?t=139766

MCTDmod bundles various tools into one function / plugin; in no particular order:

Deblock: a compression macroblock deblocker that reinterpolates pixel values within the macroblocks to smash them. http://avisynth.org/mediawiki/Deblock_QED; alternative methods: http://forum.doom9.org/showthread.php?t=164800 and https://sites.google.com/site/jconklin754smoothd2/

Various methods of denoising, with control over strength / areas. There's the choice to denoise only luma, or luma + chroma, in separate passes, and denoising through masks created by the integral motion-analysis plugin MVTools2 http://avisynth.org.ru/mvtools/mvtools2.html + MaskTools http://avisynth.org/mediawiki/MaskTools2 (a rough sketch of this mask-then-denoise idea follows after this list).

Various sharpening methods, again motion-compensated, temporal and/or spatial via MaskTools-generated masks: sharpen only edges if required, luma only or luma + chroma, USM. http://avisynth.org/mediawiki/LSFmod

Reduction of star and bright-point 'tings'.

Antialiasing of edges, edge cleaning, dehalo and deringing.

Temporal stabilizing of flat areas within the frame to avoid shimmer and nervousness.

Debanding: smooth flat areas to remove / reduce banding and blocking. http://avisynth.org/mediawiki/GradFun2DBmod

Adding controlled grain to bright, midtone and dark areas differently depending on the scene, controlling size and texture. This is a more intelligent method than just overlaying a grain scan.
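
As a toy sketch of the mask-then-denoise idea from the denoising item above: MCTDmod / MVTools2 do true motion compensation, whereas this only builds a crude frame-difference mask, and the threshold value is arbitrary.

[code]
# Toy illustration of "denoise through motion masks": average only where
# nothing is moving, keep the original where there is motion.
import cv2
import numpy as np

def denoise_static_areas(prev, cur, nxt, motion_thresh=8.0):
    """prev/cur/nxt: consecutive grayscale frames as float32 arrays (0..255)."""
    # Crude motion mask: pixels that change a lot between frames are "moving".
    motion = np.maximum(np.abs(cur - prev), np.abs(nxt - cur))
    static = (motion < motion_thresh).astype(np.float32)
    # Soften the mask so the treated/untreated transition isn't visible.
    static = cv2.GaussianBlur(static, (0, 0), 2.0)
    # Temporal average only in static areas.
    averaged = (prev + cur + nxt) / 3.0
    return static * averaged + (1.0 - static) * cur
[/code]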

Dither tools, to work on a 16-bit resampled version of the 8-bit source: http://forum.doom9.org/showthread.php?p=1386559#post1386559
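
The point of the 16-bit working space is that adjustments don't posterize. A minimal sketch of the round trip, with plain randomized rounding standing in for the Dither tools' far better dither modes (the gamma tweak is just a placeholder adjustment):

[code]
# Minimal sketch of the 8-bit -> high-precision -> 8-bit round trip.
import numpy as np

def process_with_dither(frame_u8, gamma=1.2):
    x = frame_u8.astype(np.float32) / 255.0          # promote 8-bit to float
    x = np.power(x, 1.0 / gamma)                     # any smooth adjustment (placeholder)
    x = x * 255.0
    x += np.random.uniform(0.0, 1.0, size=x.shape)   # randomized rounding = simple dither
    return np.clip(np.floor(x), 0, 255).astype(np.uint8)
[/code]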

For LCE (Local Contrast Enhancement, a.k.a. tone mapping), an alternative to Unsharp Mask: http://forum.doom9.org/showthread.php?t=161986

Other sharpening methods based on brights, midtones, darks: http://forum.doom9.org/showthread.php?t=165187

SmoothAdjust: http://forum.doom9.org/showthread.php?t=154971 works on the 16-bit version of the 8-bit source created via the Dither tools. It allows adjustments with interpolation of 'missing' data to keep gradients smooth at 16-bit, with encoding options to 16-bit image sequences, 10-bit lossless H.264, or back to 8-bit codecs (including lossless) via numerous dither / noise / grain methods, and includes a 32-point 'S' curve for Cinestyle.
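
The curve mechanism itself is simple enough to sketch. The control points below are placeholders, not the actual Cinestyle curve shipped with SmoothAdjust, and the interpolation here is piecewise-linear rather than SmoothAdjust's smooth interpolation:

[code]
# Sketch of applying a multi-point 'S' curve via interpolation in a
# high-precision working space. Control points are placeholders.
import numpy as np

ctrl_in  = np.array([0.00, 0.15, 0.35, 0.65, 0.85, 1.00])  # placeholder input points
ctrl_out = np.array([0.00, 0.08, 0.30, 0.70, 0.92, 1.00])  # placeholder output points

def apply_s_curve(frame_u8):
    x = frame_u8.astype(np.float32) / 255.0      # work in float, not 8-bit steps
    y = np.interp(x, ctrl_in, ctrl_out)          # piecewise-linear curve between points
    return np.clip(y * 255.0 + 0.5, 0, 255).astype(np.uint8)
[/code]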

I'm not suggesting for one minute that this is as user-friendly as some sliders and plugins in AE, but Avisynth + AVSPmod + plugins provides a powerful, free, open-source option. Yes, it's more manual, although AVSPmod does offer sliders for plugins :-) and http://forum.doom9.org has a wealth of users willing to help.

Once a script is created it can be used in a batch situation, preprocessing many clips automatically rather than manually.

It works as a preprocessing operation, e.g. deblock -> denoise -> 8-bit to 10- or 16-bit gradients -> deband -> encode to intermediate, and/or as a post-processing operation after editing / color correction, e.g. resize to target delivery -> sharpen -> add grain/noise -> levels adjustment to 16-235 -> encode to delivery codec.
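
That final levels adjustment is just a linear rescale of full-range values into video legal range, something like this sketch (luma plane only):

[code]
# The "levels adjustment to 16-235" step: rescale full-range (0-255) luma
# into video legal range before encoding to a delivery codec.
import numpy as np

def full_to_legal(frame_u8):
    x = frame_u8.astype(np.float32)
    y = 16.0 + x * (235.0 - 16.0) / 255.0
    return np.clip(y + 0.5, 0, 255).astype(np.uint8)
[/code]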

That's awesome, thanks for posting the links to the Doyle podcasts!

The thing I really like about a lot of these techniques is the use of the image, and the information in the image, to enhance itself, rather than deforming its values with a static "handle" or "dial" that affects the image in a broad way.

It seems counter-intuitive to me to follow the practically universal advice that electronic sharpening should be turned down or off in-camera and down or off in-monitor/projection, but then consider it okay to apply those same techniques, with the same limitations and artifacts, through slower software applications.  Image-based techniques actually take more horsepower, but the proof is in the pudding... or rather, in the absence of easily spotted artifacts.

[quote author=yellow link=topic=920.msg6700#msg6700 date=1341126681]
...
Also worth a look is the great work done by Lowry Digital, now Reliance Mediaworks, on movies like Zodiac http://www.theasc.com/ac_magazine/April2007/Zodiac/page1.php and The Curious Case of Benjamin Button http://www.moviemaker.com/producing/article/lowry_digital_the_curious_case_of_benjamin_button_brad_pitt_20090203/, processing the Thomson Viper source files.
...
[/quote]

Oh, and this is an especially revealing bit of info.  Here is one of the single best examples of how technique in-camera and technique in-post have [i]radical[/i] implications for the end result of a film.  Here are two films that look like [i]films[/i], shot on a Viper.  Clearly, David Fincher has a more masterful grasp of how to use digital imaging tools than Michael Mann had during a similar period, a director (whose films I've loved even longer than Fincher's) who used the same camera to shoot [b][i]Collateral[/i][/b] and [b][i]Miami Vice[/i][/b], two movies that obviously look like they were shot on some form of video device.

[quote author=BurnetRhoades link=topic=920.msg6704#msg6704 date=1341158480]
That's awesome, thanks for posting the links to the Doyle podcasts!

The thing I really like about a lot of these techniques is the use of the image, and the information in the image, to enhance itself, rather than deforming its values with a static "handle" or "dial" that affects the image in a broad way.

It seems counter-intuitive to me to follow the practically universal advice that electronic sharpening should be turned down or off in-camera and down or off in-monitor/projection, but then consider it okay to apply those same techniques, with the same limitations and artifacts, through slower software applications.  Image-based techniques actually take more horsepower, but the proof is in the pudding... or rather, in the absence of easily spotted artifacts.
[/quote]

No problem, hope you find the podcasts useful.

Regarding using the image itself for information: as this is video and motion is involved, rather than affecting a static image in Photoshop we're working on many image frames per second, so decent techniques require motion analysis that jumps forward and back through what can be hundreds of frames, analysing the image data, building automated masks, and then denoising, sharpening, etc. through those masks.

The Lowry process is largely about analysing each frame and getting a consistent appearance, with regard to motion, noise levels, etc., across what could be numerous camera sources and shooting conditions.

It's possible to create USM (and LCE) in PPro that is 100% GPU-accelerated and runs in real time:

1. Create a copy of the clip and stack it above the original.
2. Apply a Gaussian Blur to the original. Set it to 10 to start.
3. On the upper clip, apply a Brightness & Contrast effect and set Brightness to 63.7 and Contrast to 50. Set the Blend Mode to Difference.

Adjust the Gaussian Blur on the original to control radius, and adjust Opacity on the top clip to control amount. It's not quite the same as the built-in USM; however, it shows how it's done, and that Adobe could include a GPU-accelerated version instead of the slow CPU one.
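
For anyone who wants to prototype the stack outside PPro, here's a rough numpy model of the three steps. The Brightness & Contrast math is approximated here as offset plus gain around mid-grey, not Adobe's exact formula, and the layer composite is simplified, so the look will differ from PPro just as PPro's differs from the built-in USM:

[code]
# Rough numpy model of the PPro layer trick above, for prototyping only.
import cv2
import numpy as np

def ppro_style_stack(frame_u8, blur_sigma=10.0, brightness=63.7, contrast=50.0, opacity=1.0):
    orig = frame_u8.astype(np.float32)

    bottom = cv2.GaussianBlur(orig, (0, 0), blur_sigma)      # step 2: blurred original (lower layer)
    gain = 1.0 + contrast / 100.0                             # approximation of Contrast 50
    top = (orig - 128.0) * gain + 128.0 + brightness          # approximation of Brightness 63.7

    diff = np.abs(top - bottom)                               # step 3: Difference blend mode
    out = (1.0 - opacity) * bottom + opacity * diff           # top-layer Opacity controls the amount
    return np.clip(out, 0, 255).astype(np.uint8)
[/code]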

Also try adding an Adjustment Layer (CS6) or Nesting (CS5.x), and applying a (convolution) Sharpen to the Layer or Nested clip.

I tested this technique in CS6 with a 1280x720@60 clip and it brings detail up significantly (and runs in real time); it can easily cut with 1080p material.

It's too bad PixelBender effects can't run in PPro (you must create them in AE and then use Dynamic Link, which is slow). With general GPU-shader support, PPro would be a much better package, providing a means for many more real-time GPU effects.

VirtualDub + Avisynth et al. are powerful tools and run pretty fast, but they require round-tripping footage. As a time saver, I'm limiting my effects/tools palette to what runs in real time directly in PPro (Neat Video and Warp Stabilizer in CS6 are the only two exceptions).

Sorry guys, I've been moving, I've been out of town, and all sorts of stuff has been distracting me, so I haven't been here in a while.

I agree it's more of an LCE technique, but I attribute that to using it with a small filter size (set by the blur filter applied to the layer doing the enhancement). Larger filters bring out more of a broad enhancement that achieves a pseudo-HDR look with single images. It's essentially the same methodology, used in a very specific way. I think the LCE is more of a secondary effect for people trying to create the pseudo-HDR look, whereas in my case it's the primary focus and the broader tone enhancement is little more than gravy, still very minimal.

I made the connection to tone mapping myself, and perhaps in error, after picking up a book on HDR photography and reading its chapter on achieving an HDR look with a single exposure.

The above images were created with four layers in After Effects. Three of them are duplicates of the comp and the fourth is my grain layer. I chose to render out a loopable clip of my hero grain settings because that ended up being faster than computing the grain filter every frame. YMMV.

The top copy of the comp contains my chroma filtering and the chroma portion of my color correction. Depending on the footage I'll either do a horizontally bound blur of only a few pixels or a median filter, which smooths the chroma sub-sampling in any non-raw footage. With the 7D footage I found that I got the best results by actually doing a median filter. This filtering also took care of the moire and noise that was most evident in the fine, blonde hair of our lead actress. The blending mode for this layer is COLOR.

The next layer down is where the LCE/TM techniques were used, on a luma-only copy of the comp. This layer is essentially comp'd with itself internally using a gaussian blur technique like the one outlined above. The method I went with was based on a technique I found for building a high-pass filter in After Effects, since there isn't a drag-and-drop version of the filter like in Photoshop. The blending mode for this layer is OVERLAY.

The next layer down is just a luma-only copy of the comp. The effect of these three together is that all the contrast enhancement happens where our eye actually picks up on detail and edges: in the luma content. The net result is that contrast and details are enhanced without muddying or adversely altering colors, while still allowing potentially stronger manipulation of the colors on their own.

The film grain layer I've used either on top of or below the color layer, with a blending mode of ADD, SCREEN or OVERLAY. You get subtle variations in the final look based on your grain source, so I don't think there's a "right" answer here; it all depends on the look you're going for. In the case above I used OVERLAY, because I wanted most of the grain to be visible in the mid-tones. This helped to dither the sub-sampled, filtered chroma and make it feel as organic as possible.
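
For anyone curious about the math, here's a compact sketch of the OVERLAY blending used by the high-pass and grain layers (normalised 0..1 values, arbitrary blur/grain settings, placeholder file name). It also shows why grain blended this way lands mostly in the mid-tones: overlay leaves pure black and white untouched and has its strongest effect around mid-grey.

[code]
# Sketch of the OVERLAY blend behind the high-pass luma and grain layers.
import cv2
import numpy as np

def overlay(base, top):
    # Standard overlay blend: multiply in the darks, screen in the brights.
    return np.where(base < 0.5, 2.0 * base * top, 1.0 - 2.0 * (1.0 - base) * (1.0 - top))

y = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# High-pass layer (detail centred on 0.5 grey), overlaid onto the luma copy.
high_pass = np.clip((y - cv2.GaussianBlur(y, (0, 0), 3.0)) + 0.5, 0.0, 1.0)
sharpened = overlay(y, high_pass)

# Grain layer centred on 0.5 grey, overlaid on top: strongest in the mid-tones.
grain = np.clip(0.5 + np.random.normal(0.0, 0.04, size=y.shape).astype(np.float32), 0.0, 1.0)
final = np.clip(overlay(sharpened, grain), 0.0, 1.0)
[/code]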