Multi-spectral Detail Enhancement


jcs

Sometimes we need to enhance the detail of a shot: a very soft lens, slightly out of focus, slow motion, post cropping (for story/emotion or after stabilization), and so on. Most are familiar with the sharpen effect and the unsharp masking effect. We can combine both, and also use unsharp masking to create a local contrast enhancement effect.

Canon 1DX II with a Canon 50mm f/1.4 at f/1.4, 1080p (Filmic Skin picture style):

[Attached image: 1DXII_Soft.jpg]

 

Multi-spectral Detail Enhancement (let's call it MSDE, based on the physics of Acutance)

  1. Fine noise grain: adds texture and increases perception of detail (Noise effect: 2%, color, not clipped)
  2. High frequency sharpening: in PP CC this is called Sharpen (as a standalone effect) or via Lumetri/Creative/Sharpen (as used here: 93.4)
  3. Mid frequency sharpening: Unsharp masking effect with amount 41 and a radius of 5
  4. Low frequency sharpening (Local Contrast Enhancement or LCE): Unsharp masking effect with amount 50 and a radius of 300
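The four layers above can be sketched in code. This is a minimal NumPy/SciPy approximation: the `unsharp_mask` helper and the exact amounts are my own stand-ins for the Premiere effects (which Adobe doesn't document at this level), so treat the numbers as illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, amount, radius):
    """Classic unsharp mask: add back the difference from a blurred copy."""
    return img + amount * (img - gaussian_filter(img, sigma=radius))

def msde(img, seed=0):
    """Multi-spectral detail enhancement sketch, on a float image in [0, 1]."""
    rng = np.random.default_rng(seed)
    out = img + rng.normal(0.0, 0.02, img.shape)      # 1. fine noise grain (~2%)
    out = unsharp_mask(out, amount=0.9, radius=0.7)   # 2. high-frequency sharpen
    out = unsharp_mask(out, amount=0.41, radius=5)    # 3. mid-frequency sharpen
    out = unsharp_mask(out, amount=0.5, radius=300)   # 4. low-frequency LCE
    return np.clip(out, 0.0, 1.0)
```

Each pass is the same operation at a different radius; only the blur scale changes which "spectral band" gets its contrast amplified.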

[Attached image: 1DXII_MS_Sharpen.jpg]

While this may be a bit too sharp/detailed for some, it illustrates MSDE, and one can add detail to taste using this technique. Note we didn't use a contrast effect or curves to achieve this look.

MSDE can also be used to improve HD-to-4K upscales: apply it after upscaling. It's also a great way to use Canon's soft-ish 1080p along with DPAF (which isn't currently available in any other cameras on the market). The GH5 is the new kid on the block with excellent detail; however, Canon still looks more filmic to me and has excellent AF :) 

Someday Adobe will GPU-accelerate their Unsharp Mask effect (it's a trivially easy effect to code, too!), so this can easily run in real time while editing.



That's a pretty incredible result! Guessing this was inspired by the mention of Finisher in the other thread :) A couple of questions:

  • Do the radii change in proportion to resolution? I.e., with 4K, would Unsharp Mask be 20 for mid and 1200 for low?
  • Do the standard sharpen effects in other programs (FCP, AE, etc.) work the same as Premiere's?

@EthanAlexander thanks - yeah, I've been mentioning LCE for a while; thought I'd expand on the concept and show how it's done.

I'd use the same settings for 4K and do minor tweaks as desired.

Sharpen is typically implemented as a convolution, where surrounding pixels are used to increase high-frequency detail (by enhancing differences). Unsharp Masking works by subtracting a blurred copy (attenuating the low frequencies: effectively a high-pass boost), and has a variety of uses depending on the radius. FCPX seems to have a hybrid sharpen; a plugin that provides PP/AE-style Sharpen and a separate Unsharp Mask is probably needed for FCPX.
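The convolution approach can be written down concretely. Here is a sketch using one common 3x3 sharpen kernel; this is a textbook choice, not necessarily the exact kernel PP/AE uses:

```python
import numpy as np
from scipy.ndimage import convolve

# A common 3x3 sharpen kernel: weights sum to 1, so flat areas pass through
# unchanged, while differences between a pixel and its neighbours are amplified.
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=float)

def convolution_sharpen(img):
    """Convolution sharpen: boost high-frequency detail with a fixed kernel."""
    return convolve(img, SHARPEN_KERNEL, mode='nearest')
```

On a hard edge this produces the characteristic overshoot/undershoot halos, which is exactly the "enhancing differences" behaviour described above.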

AE CC test: same soft 1DX II 1080p file (no stabilization, so a wider shot), in a 4K comp. Resized to 2K before upload (I tried uploading the 4K version, but the website resized it to 2K, so I uploaded a Photoshop resize from 4K to 2K instead):

 

[Attached image: MSDE_AE_2K.jpg]

AE Settings:

  1. Noise: 2%, color, no clipping
  2. Sharpen: 50
  3. Unsharp Mask: Amount: 41, Radius 5
  4. Unsharp Mask: Amount 50, Radius 300

AE's Sharpen appears to have a bug: edges are sharpened without edge repeat, leaving a thin line around the border. Apply Transform/Scale to fix it (in this case 200% became 201% for the 2K-to-4K scale).

 


12 hours ago, jcs said:

Sometimes we need to enhance the detail of a shot: a very soft lens, slightly out of focus, slow motion, post cropping... [opening post quoted in full above]

Cool. 

I'm in no way disregarding your results using a Canon DSLR; however, in my experience with sharpening tools, I've found that Sony and Panasonic images respond far better - any reason why this may be? 

Also, I've only found Finisher and DaVinci Resolve's built-in sharpening tools to be any good among native and third-party plugins. Some of them do something hideous to the image, like drawing black extruded lines around the edges of the subject.

I've been using Finisher since 2012 and it literally transforms the perceived resolution of the image; soft detail appears pin sharp. Fantastic for slow-motion modes, where soft images are common. 

I tried Finisher on the 4k image from the A6500 for a laugh. :astonished:


9 hours ago, Oliver Daniel said:

 

Cool. I'm in no way disregarding your results using a Canon DSLR; however, in my experience with sharpening tools, I've found that Sony and Panasonic images respond far better - any reason why this may be? [...]

Regarding Sony & Panasonic vs. Canon for post-sharpening: Canon DSLRs are simply softer, with the finest detail gone, either from the OLPF or from sensor binning/processing. You can see this when studying resolution charts. Canon C-series cameras (except the 1DC) are all very sharp; however, they also alias like crazy on high-frequency subjects (fabric, brick, etc.). The 5D3 has practically no aliasing, but it's very soft. Part of the 5D3 H.264 softness comes from low-quality in-camera processing, as we see much more detailed results when using ML RAW with post-de-Bayer and sharpening. Part of the Canon DSLR video softness may also be related to business reasons: protecting the C-line.

As was noted in the Netflix 4K thread, only the 8K F65 produces true(-ish) 4K; everyone else is cheating (except perhaps Red 8K). This is easily observed by examining the test charts and aliasing. Cameras that are 'cheating' can appear sharp, but that's partially from aliasing (note the F65 is razor sharp and detailed with no visible aliasing!). Trying to sharpen soft, aliased footage in only the high frequencies can look ugly, as you've noticed. So what I did with MSDE (not a standard term; I made it up) was sharpen the high frequencies only so much (beyond that it gets ugly), then move to a lower frequency and sharpen more, then finally to an even lower frequency for the final sharpen. Here sharpen means contrast enhancement: amplifying differences between pixels and groups of pixels. Sharpening in the normal sense of the word is contrast enhancement at the highest frequencies only.

Adding pixel-level noise first creates the highest possible frequency information for texture. I tried adding noise later and it didn't work as well: even more noise was required to see the effect. When we add noise we are increasing acutance (not real resolution), but we are also reducing the signal-to-noise ratio, so we want to use as little as possible. To my eye, the results of this test look a lot like film, vs. the typical Sony & Panasonic video look, don't you agree? I think one of the reasons film looks the way it does is the acutance that results from the chemical process plus zero aliasing, where grain provides texture, so it has that somewhat soft yet detailed look at the same time.

MSDE is based on spatial-domain transforms; it's also possible to perform detail enhancement in the frequency domain using the discrete cosine and wavelet transforms (DCT & DWT): https://link.springer.com/chapter/10.1007%2F978-3-642-01209-9_13. It's not clear why this isn't used more; it could be patent related. New technologies based on feature extraction (generative processing) will be able to figure out generative structures and re-render them at any resolution. Genuine Fractals made progress in this area a few years ago: https://blog.codinghorror.com/better-image-resizing/. While the results were 'sharper', they weren't a big enough improvement over bicubic, Lanczos, etc.
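As a toy illustration of frequency-domain enhancement (a simplified DCT version, not the DWT method from the paper), one can boost the high-frequency coefficients of a 2D DCT directly; the cutoff and gain values here are arbitrary choices of mine:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_band_boost(img, cutoff=8, gain=1.5):
    """Amplify DCT coefficients above a cutoff frequency, then invert the transform."""
    coeffs = dctn(img, norm='ortho')
    mask = np.ones_like(coeffs)
    mask[cutoff:, :] = gain   # rows past the cutoff hold higher vertical frequencies
    mask[:, cutoff:] = gain   # columns past the cutoff hold higher horizontal frequencies
    return idctn(coeffs * mask, norm='ortho')
```

With `gain=1.0` the mask is a no-op and the DCT/IDCT pair is an exact round trip; raising the gain sharpens only the chosen band, which is the frequency-domain analogue of what the unsharp mask radii do spatially.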

The MSDE technique only needs noise, convolution sharpen (or perhaps Unsharp Mask with a radius around 0.5-1.0), and Unsharp Mask. It will work in any NLE where these effects are available (native or via plugin), and it's free :) 


1 hour ago, EthanAlexander said:

I'm now wondering if this wasn't the upscaling algorithm that Yedlin was using in his comparison video. What do you think, @jcs ?

I wouldn't be surprised- upscaling, adding noise, and sharpening alone work pretty well. I'm pretty sure PP CC (and AE?) use bicubic scaling. Yedlin's tools (Nuke?) may also provide more advanced (and expensive) scalers such as lanczos-3, which along with sharpening performed best in this 2014 state-of-the-art study: https://hal.inria.fr/hal-01073920/document, surprisingly performing better than super-resolution (which creates real extra detail from aliasing information).
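The study's winning recipe (lanczos-3 plus sharpening) can be approximated in a few lines; note that SciPy's `zoom` uses cubic-spline interpolation rather than a true lanczos-3 kernel, so this is only a stand-in for the approach, not the exact scaler:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def upscale_and_sharpen(img, factor=2, amount=0.5, radius=1.0):
    """Upscale with cubic-spline interpolation, then apply a light unsharp mask."""
    up = zoom(img, factor, order=3)  # order=3: cubic spline (bicubic-like)
    return up + amount * (up - gaussian_filter(up, sigma=radius))
```

The post-upscale sharpen matters because interpolation alone can only smooth between samples; the unsharp mask restores some perceived edge contrast afterwards.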

I think Yedlin could have made a better point by shortening the videos dramatically: way too long and rambly, especially if the goal was to appeal to studio execs and producers with short attention spans ;) 

I jumped around his videos and didn't see the application of Local Contrast Enhancement or any mention of acutance (perhaps I missed it?), which should have been paramount in such a test. It really felt like an advert/defense tutorial for ARRI's low-res sensors :) (including the Alexa 65, which is still not capable of true 4K: only ~6.6K Bayer photosites, when 8K is needed). Love ARRI's color, but the F65 wins for ultimate color and real 4K detail (see Lucy, Oblivion).

From https://hal.inria.fr/hal-01073920/document:

Quote

In this study, we have evaluated the performance of several upscaling algorithms by upscaling the video sequences from 720p, 1080p to 4K resolution. The results indicated that in general conditions, the current state of the art upscaling algorithms could not achieve as good perceptual quality as the original UHD versions. In addition, the best upscaling algorithm may be not the state-of-the-art computationally expensive super resolution algorithms but the less costly one, for example, lanczos-3 eventually with added sharpening. Due to the different viewing conditions on UHD and the corresponding viewing behavior of observers, improvement may be expected if the upscaling algorithm is particularly adapted to UHD.

This is my experience as well, so I fundamentally disagree with Yedlin. True 4K capture, with real detail, will always be better than upscaling and fancy tricks like MSDE. Will most people notice or care? That's a different argument :) 


36 minutes ago, jcs said:

I think Yedlin could have made a better point by shortening the videos dramatically: way too long and rambly, especially if the goal was to appeal to studio execs and producers with short attention spans ;)

He's a DP not an editor ;) ... (I agree)

37 minutes ago, jcs said:

Love ARRI's color, F65 wins for ultimate color and real 4K detail (see Lucy, Oblivion).

I personally prefer the Alexa 65 from everything I've seen, whether or not it's true 4K. Just preference. Don't get me wrong though: I'm a huge Sony fan and have watched Oblivion several times on Blu-ray through a Sony BD player to a Sony 4K HDR TV, and the F65 is no doubt a FANTASTIC camera. (Supposedly Sony uses the same algorithm to upscale to 4K as they do to downscale the movie to HD BD. Can't remember where I saw that, but it was definitely an ad for the F65.)

43 minutes ago, jcs said:

This is my experience as well, so I fundamentally disagree with Yedlin. True 4K capture, with real detail, will always be better than upscaling and fancy tricks like MSDE. Will most people notice or care? That's a different argument :) 

I totally get where you're coming from, and I've learned a lot from what you post here on EOSHD just from exploring the idea of "true 4K." My only thing, and the reason I am glad Yedlin made the video, is that I'd much rather consumers (professional, commercial, hobbyists alike) be aware of how much goes into an image than just resolution and start demanding things like better compression and color space. This is especially true because at a certain point the captured pixels are surpassing the resolving power of all but the most detailed lenses.


12 hours ago, Oliver Daniel said:

 

I'm in no way disregarding your results using a Canon DSLR however in my experience of using sharpening tools I've found that Sony and Panasonic images respond far better - any reason why this may be? 

 

Probably because they have higher resolution to work from. You will always get better results with good native resolution rather than synthesized resolution. You can't enhance information that is not there to start with, so that will always be the limiting factor in terms of what you can do with the image.


1 hour ago, EthanAlexander said:

He's a DP not an editor ;) ... I personally prefer the Alexa 65 from everything I've seen, whether or not it's true 4K. [...] I'd much rather consumers be aware of how much goes into an image than just resolution and start demanding things like better compression and color space. [...]

Overall ARRI is my favorite cinema camera brand; however, I haven't seen anything from ARRI that blew me away like Lucy and Oblivion did with the F65. The F65 beats ARRI on test charts; would love to see a people/skin-tone head-to-head test.

Imagine 8K => 4K processing in a small camera with IBIS, 10-bit 4:2:2, and DPAF - that's all possible today, without a fan! It's held back for business reasons. The only way to demand anything is to not give them money for crappy products.


After scanning this thread, this method reminds me of the wavelet decompose plug-ins for the GIMP, a functionality that has been in open-source software for years. Basically, these plug-ins separate detail "frequencies" into their own layers. I use wavelet decompose mostly for skin retouching, but some have been using it for sharpening for quite a while.

 

One of the wavelet decompose plug-ins can separate an image into 100 different frequency layers, but I can't imagine why that many separate frequencies would ever be needed.

 

I don't think that proper wavelet decompose functionality has yet appeared in proprietary imaging software.  Often, advanced features such as this show up in Photoshop years after the GIMP, and these "new" features are usually much trumpeted by the Adobe crowd.


42 minutes ago, tomekk said:

Isn't wavelet decompose in GIMP called frequency separation technique in Photoshop? 

Yes. Essentially, frequency separation is wavelet decomposition with just two layers -- the residual layer and the high-frequency layer. However, in Photoshop it probably still has to be done manually (similar to the manual procedure given by the OP).

 

Two-layer frequency separation sets up a little more quickly in the GIMP, thanks to the grain extract and grain merge features. Of course, it is even faster to get two-layer frequency separation in the GIMP with either of the wavelet decompose plug-ins, but setting it up manually probably gives one more control over the "frequency."

 

I don't know if Photoshop currently has a wavelet decompose plug-in (it didn't have one four years ago). If it doesn't, manually making five wavelet scale layers plus a residual layer would probably be a long, arduous process in Photoshop.


@tomekk @tupp frequency separation is very similar to what MSDE does. It uses a high-pass filter and Gaussian blur in the spatial domain: https://fstoppers.com/post-production/ultimate-guide-frequency-separation-technique-8699
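The frequency separation split in that guide boils down to a Gaussian blur and a subtraction; a minimal sketch, where sigma is the "frequency" knob:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_separation(img, sigma=4.0):
    """Split an image into a low-frequency layer and a high-frequency residual."""
    low = gaussian_filter(img, sigma=sigma)   # blurred copy: color/tone layer
    high = img - low                          # high-pass residual: texture/detail layer
    return low, high
```

Adding `low + high` reconstructs the original exactly, which is what makes the two layers safe to retouch independently.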

Wavelets operate in the frequency domain (like Fourier-family transforms (DFT/DCT), with different pros/cons). For wavelet frequency filtering, an image is converted into wavelet coefficients, the desired frequencies are filtered, then the coefficients are converted back into an image. Note that no compression or decompression takes place (same with a DCT and inverse DCT, which can also be used for frequency filtering). The GIMP plugin performs a wavelet transform, which allows frequencies to be decomposed (not decompressed) and filtered before converting back into an image.

Yeah, it's puzzling that Photoshop doesn't provide frequency-based filtering options and that no one seems to have made plugins for Photoshop/Premiere/AE/FCPX/Resolve, etc.

For retouching stills, I haven't used frequency separation since I got Portrait Pro: http://www.portraitprofessional.com/.


@EthanAlexander (from PM): you can do Unsharp Mask in Resolve with 2 nodes: the first node "doubles" the image intensity and saturation, then the next node subtracts a Gaussian-blurred copy from the "doubled" image. Mathematically it's 2*(original RGB pixels) - (blurred original RGB pixels). If you don't blur the 2nd copy, 2*original - original = original: you can use that to make sure the operation is set up correctly. Then start blurring the 2nd copy to see the results: a small blur is the traditional unsharp mask, and larger amounts perform LCE.

I don't use Resolve very often (only to test it every now and then); here's what I came up with in a couple minutes of experimenting (maybe Resolve experts have a better way):

  1. In Color, add a Serial Node. Set the Gain to 2.0
  2. Add a Layer Node to this Serial Node
  3. Right-click and set the Layer Mixer Composite Mode to Subtract
  4. The output should appear normal
  5. Add a Box Blur to the node that was created along with the Layer Node
  6. Set Iterations to 6, Border Type: Replicate, and turn up strength and see how it works: you need a lot of blur for LCE
  7. Use Gaussian blur and less strength instead of box blur for traditional unsharp masking sharpening

That should get you started (fully GPU accelerated too)!
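The node math above can be checked numerically; assuming float pixels, `2*x - blur(x)` behaves exactly as described (this sketch only verifies the arithmetic, not Resolve itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_node_unsharp(img, sigma):
    """Node 1 doubles the image; node 2 subtracts a blurred copy."""
    return 2.0 * img - gaussian_filter(img, sigma=sigma)
```

With no blur the result is the original image unchanged (the setup check from the post); with blur, edges gain the overshoot/undershoot halos of an unsharp mask, and a large sigma turns the same math into LCE.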


10 hours ago, jcs said:

@EthanAlexander (from PM): you can do Unsharp Mask in Resolve with 2 nodes... [full node setup quoted above]

I'm getting to step 3, and setting the Composite Mode to Subtract is giving me black:

[Attached screenshot: Screen Shot 2017-08-14 at 12.19.28 PM.png]

Do you see anything I've done wrong so far?

Also, I've only ever done blur/sharpen by dragging the RGB radius sliders. How do I differentiate between box and Gaussian?

Also, thank you

