
Educate me please: What is down-scaling?



Guest Ebrahim Saadawi

What is down scaling? 

If I have a 4K image, and I scale it down with an NLE timeline, what does it do?

1- Throw away the information beyond 1080p and keep only a full 1080p bucket of pixels?

2- Squeeze the pixels closer together so the image shrinks to fit a 1080p window, thus making use of all the extra information?

I've noticed there are different algorithms for down-scaling, called Lanczos, Bicubic, Nearest and so on. What is the difference, and what do our NLEs use? I ask because I tested this and I am afraid NLEs (Vegas at least) are losing us quality based on their down-sampling technique.

 

Here's a test between downscaling algorithms.

-Original, pure 4K frame [image]

-Lanczos down-sampling [image]

-Vegas's own down-sampling to 1080p when rendering to 1080 from the 4K image [image]

 

 

Let's take a closer look (zoomed in around 600% just for demonstration):

Original, pure 4K frame [image]

Lanczos down-sampling [image]

Vegas down-sampling [image]

_______

1- There isn't that MUCH difference between the 4K frame and the Lanczos-downsampled 1080p image, which is eye-opening. It suggests this kind of downscaling really does squeeze nearly all of the 4K information into 1080p, with barely visible loss. BUT there IS a loss, so it's not simply squeezing in ALL of the 4K data. Even with the best downsampling method some data is lost, so delivering in 4K will give the best result no matter what.

So I guess that answers my first question about whether downscaling shrinks the image while keeping all the data. It doesn't; the answer is that it both shrinks and throws away. There are a few algorithms, and some of them squeeze in more information than others, but even the most sophisticated downsampling algorithm never fits 100% of the information into a 1080p window.

2- I've been wasting A LOT of information using my NLE's downscaling method, which seems to really just discard a whole lot of data to shrink the size. A shame. 

 

So my three questions are:

-Is that conclusion correct?

-Do the other NLEs (Premiere) use a better downsampling algorithm like Lanczos, or have we all been losing quality?

-I need to continue using Vegas while shooting 4K and delivering 1080p, so I need separate software to down-sample my final 4K master to 1080p using a good algorithm (like Lanczos). Does such software exist? Maybe I'll get lucky and find Lanczos in Vegas, or as a separately purchased plug-in?

 

Help needed. I am losing quality yet can't do anything about it. And I urge people who shoot 4K for 1080p delivery to test their downsampling method; it's obviously an important matter.

Any information regarding downscaling would be very helpful, and would help many of us understand and produce better images. What does downscaling do? 

 

 

 

Link to comment
Share on other sites


I think you're overthinking this. But, in some ways, especially the original, I think the Vegas downsample looks the best... Even better than the 4K. I notice it in the textures of the concrete balcony, specifically. The whole image looks weightier as well. If anything, I think you helped me decide that 4K downsampled to 1080 is the best way to deliver 4K footage. 

Link to comment
Share on other sites

Hi Ebrahim. I agree with mercer, but just to have fun, I'll argue you're not over-thinking it enough ;)

Is an image a collection of perfect data points?  If each pixel recorded the color and saturation perfectly do you end up with a perfect image?  

I'll argue that the answer is 'no', because no matter how well a pixel records data there is space between each pixel that does NOT record data.  A digital image is really a black canvas populated with color dots that never touch each other.  What this means is that if you image a field of tall grass, there will be parts of the small grass blades that, when their light makes its way through the optics, fall on dead space on the sensor.  This is true, AFAIK, of every sensor made, no matter how high the resolution, 1K or 4K.  Obviously, the higher the resolution, the less noticeable this problem is.

If a blade of grass "breaks apart", so to speak, between the pixels of your 4K image, downscaling cannot create data that isn't there.  Just as your image processing system (camera or post) must deal with that (aliasing) in the original image, the downscaler must deal with it when combining the larger set of pixels into a smaller one.

The chief difference between all these algorithms, AFAIK, is how many pixels around the hypothetical center pixel the software looks at to determine the best value (though much of it is subjective; some will like one algo, others another).  Take the blade of grass.  If the algo only looks at the pixels above, it would never see the disconnect with the pixel to the left.  An algo that looks at, say, 16 pixels to calculate 1 pixel can often do it better than one that looks at 4, because it can "see" more of the image and make a good decision about what to create.

The more pixels the algo looks at, the better, in my experience.  Though, like Mercer says, this isn't a problem that yields significant improvement in most footage.
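To see what the kernel difference looks like outside any NLE, here's a minimal sketch, assuming Python with a recent Pillow installed (older Pillow versions use constants like Image.LANCZOS instead of Image.Resampling); the file names are hypothetical and you'd point it at an exported 4K still:

```python
# Downscale one exported 4K frame to 1080p with several resampling kernels
# and save the results for side-by-side comparison (hypothetical file names).
from PIL import Image

src = Image.open("frame_4k.png")      # e.g. a 3840x2160 still
target = (1920, 1080)

# Roughly: NEAREST samples 1 source pixel, BILINEAR a 2x2 neighbourhood,
# BICUBIC 4x4, LANCZOS 6x6 -- and the support widens further when downscaling,
# so the wider kernels "see" more of the image per output pixel.
kernels = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
    "lanczos": Image.Resampling.LANCZOS,
}
for name, kernel in kernels.items():
    src.resize(target, resample=kernel).save(f"frame_1080_{name}.png")
```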

So my answer to your question is that running your footage through the most sophisticated algorithm before you edit in your NLE should deliver the best results.  Most likely, the NLE would be sluggish running such an algorithm in real time (which is why you would process the footage beforehand).

For example, Amaze is a great debayering algorithm, but I doubt most PCs could use it to render RAW in real time.

Hope this makes sense!

 

 

Link to comment
Share on other sites

Guest Ebrahim Saadawi

Thank you guys for the valuable information. 

I am surprised you find the Vegas algorithm better; when you analyze it technically, it simply has less information and detail. Perhaps you like the slightly softer image, but I find the loss of resolution disturbing. Lanczos kept nearly all the information (though not quite all); if I want it softer I'll do that in post with a Gaussian blur filter, or by turning sharpness on the camera all the way down.

So that Amaze software can downscale my final 4K file to 1080p using Lanczos (or something of the same quality)? 

I'd absolutely love that. 

Link to comment
Share on other sites

Ebrahim- I'll bite here and go a little further on Maxotics' point. No matter what algorithm you use to downscale, it's impossible to preserve all the data. A better descriptor would be transformation vs. preservation, from one facsimile of reality to another, smaller one. By using different algorithms to downsample, you aren't getting better accuracy, you're getting different results, which are more like creative choices. Some algos like Lanczos look sharper when resizing images because they keep areas of transition in contrast (edges) sharper than, say, Cubic. Edges are how we see shapes, but that's not a more accurate method, it's just sharper looking. There are even sharper algos than Lanczos if that's what you're looking for, but the only way to really preserve data is to keep the source 4K files. Beyond that, choices with reformatting are subjective. For me recently, I needed to stick with Cubic when reformatting 4K because I was matching HD Alexa background plates; it's also fast. But Nuke's help page on its available reformatting algos is pretty useful, check it out.

 

http://help.thefoundry.co.uk/nuke/9.0/Default.html#comp_environment/transforming_elements/filtering_algorithm_2d.html?Highlight=Lanczos
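As a rough illustration of the point above about why Lanczos reads as "sharper" than a cubic filter, here is a small numpy sketch of the two kernels' weights. This is not Nuke's or any NLE's actual code, just the textbook kernel formulas; the deeper negative lobe in Lanczos is what exaggerates edge contrast and can produce halos on pre-sharpened material:

```python
# Compare the weights of a Lanczos-3 kernel with a Catmull-Rom-style cubic.
import numpy as np

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a), zero outside |x| < a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def cubic(x, B=0.0, C=0.5):
    """Mitchell-Netravali cubic; B=0, C=0.5 is the common Catmull-Rom variant."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1, m2 = x < 1, (x >= 1) & (x < 2)
    out[m1] = ((12 - 9*B - 6*C) * x[m1]**3 + (-18 + 12*B + 6*C) * x[m1]**2 + (6 - 2*B)) / 6
    out[m2] = ((-B - 6*C) * x[m2]**3 + (6*B + 30*C) * x[m2]**2
               + (-12*B - 48*C) * x[m2] + (8*B + 24*C)) / 6
    return out

xs = np.linspace(-3, 3, 13)         # sample positions in steps of 0.5
print(np.round(lanczos(xs), 3))     # negative lobe between |x| = 1 and 2, about -0.135 at 1.5
print(np.round(cubic(xs), 3))       # shallower negative lobe, about -0.062 at 1.5 -> softer look
```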

 

 

Link to comment
Share on other sites

So that Amaze software can downscale my final 4K file to 1080p using Lanczos (or something of the same quality)? 

Sorry, Amaze is an algorithm for RAW Bayer sensor data.  It won't work with H.264 4K.  I was just using it as an example of other types of calculations where a bunch of pixels are "looked at" to come up with a representative pixel.  In Amaze's case, it is looking at red, green and blue pixels and trying to figure out the best 3-color pixel.

Link to comment
Share on other sites

 

Ebrahim, I think this might be what you're looking for: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm

 

Also, to help answer your question regarding scaling in PP, here is Todd Kopriva from Adobe:

When Premiere Pro is just using the CPU for the processing of scaling operations, it uses the following scaling methods:

The variable-radius bicubic scaling done on the CPU is very similar to the standard bicubic mode in Photoshop, though the Premiere Pro version is multi-threaded and optimized with some SSE instructions. Even with these optimizations, it is still extremely slow. For high-quality scaling at faster-than-real-time processing, you need to use the GPU.

When Premiere Pro is using CUDA or OpenCL on the GPU to accelerate the processing of scaling operations, it uses the following scaling methods:

 

I agree with sunyata: interpolation is very subjective (although different algorithms may be better suited than others for specialized purposes such as up-scaling).

 

 

Link to comment
Share on other sites

You are downscaling 4K to 1080p, then upscaling the 1080p back up to compare it to the 4K. Downscaling is not a reversible operation. Why? Because different groupings of pixels can result in the same output pixel once downscaled. When upscaling back, the software can't create the original pixels out of thin air, because there are many possibilities for those pixels. In the end, you are comparing a 1080p image to a 4K image, and of course the 4K image will have more resolution.

So a proper downscaling is losing you resolution and gaining you bitdepth/color precision at the lower resolution, in the sense that each new pixel is essentially "quantized" at a higher bitdepth.
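A toy numpy example of the "not reversible" point above (the pixel values are made up):

```python
# Two different 2x2 blocks of source pixels average to the same downscaled
# pixel, so an upscaler has no way of knowing which original it came from.
import numpy as np

block_a = np.array([[100, 100], [100, 100]], dtype=np.float64)
block_b = np.array([[ 90, 110], [110,  90]], dtype=np.float64)

print(block_a.mean(), block_b.mean())   # both print 100.0 -> identical output pixel
```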

Link to comment
Share on other sites

So a proper downscaling is losing you resolution and gaining you bitdepth/color precision at the lower resolution, in the sense that each new pixel is essentially "quantized" at a higher bitdepth.

What I would like to know is whether there is agreement on how much colour depth is gained when downsampling 8-bit 4K to 1080p. Is it comparable to, say, the 10-bit 422 that the BMPCC outputs?

Link to comment
Share on other sites

What I would like to know is whether there is agreement on how much colour depth is gained when downsampling 8-bit 4K to 1080p. Is it comparable to, say, the 10-bit 422 that the BMPCC outputs?

Well, I wrote this last year: http://www.shutterangle.com/2014/shooting-4k-video-for-2k-delivery-bitdepth-advantage/

Downscaling 4:2:0 8-bit video from 4K to 2K will give you 4:4:4 video with 10-bit luma and 8-bit chroma. But keep in mind that these are the theoretical limits. In practice, what you gain depends on pixel variation and the compression used on the source image. The more detailed the 4K image, the more true color precision you gain in the downscaled image.
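To make the luma/chroma bookkeeping concrete, here's a toy numpy sketch (hypothetical values, simple box averaging rather than any particular NLE's filter) of one 2x2 block of 8-bit 4:2:0 being collapsed into a single 2K pixel:

```python
import numpy as np

# One 2x2 block of 4:2:0 video: four 8-bit luma samples, one shared chroma pair.
y_block = np.array([100, 101, 102, 103], dtype=np.uint8)
cb, cr = 128, 130

# Averaging four 8-bit luma values produces quarter-steps, i.e. up to 2 extra
# bits of luma precision if the result is stored at a higher bit depth.
y_out = y_block.astype(np.float64).mean()   # 101.5 here

# The single chroma pair becomes the chroma of the one output pixel, so every
# output pixel now carries its own chroma sample (4:4:4), but still 8-bit values.
print(y_out, cb, cr)
```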

In any case, downscaling 4K in post delivers the best looking 2K/1080p from the current gen 4K cameras.

Link to comment
Share on other sites

What I would like to know is whether there is agreement on how much colour depth is gained when downsampling 8-bit 4K to 1080p. Is it comparable to, say, the 10-bit 422 that the BMPCC outputs?

It depends on whether you define color depth more as color accuracy OR more as dynamic range (levels of brightness).

Because each pixel is NOT full-color when it is imaged (it's red, green or blue), it must borrow color information from the pixels around it.  That means, in 1080p, you really only have 25% red information, 25% blue and 50% green.  When you down-scale 4K, you get 100% of each color value, more or less.  So the color accuracy is greatly improved (which is partly why you see fewer color moire issues in downscaled 4K).  As an aside, this is why the Canon C100 is no slouch of a camera: it is creating 1080p from 4K internally.  Bottom line, to get really good 1080p, you need to start with 4K somewhere.

However, averaging two 8-bit values does NOT give you a 10-bit value in terms of dynamic range.  4K is not a RAW equivalent and never will be.   When you try to adjust your exposure in that 8-bit footage it won't go far.  The 10-bit BMPCC footage will, because you essentially have 2 more bits of brightness information to work with.   You have more latitude to change the "color" (because you have 2 more bits of color information to work with) and don't have to depend on the camera's internal decision making.

Link to comment
Share on other sites

@cpc @maxotics thank you both for those very interesting answers (and the article link). 

Speaking as someone who's slightly disappointed that the current technological thrust in image acquisition is towards ever greater resolution, but not greater colour depth, I'm very interested in this question of the extent to which downsampling 8-bit 4k yields the extra colour information I'd like to see in my footage. I think for the money, the BMPCC is still impossible to beat for colour.

Link to comment
Share on other sites

Guest Ebrahim Saadawi

Thank you very much for all the great and valuable information. 

I'll share some test results for anyone interested.

I've been doing more testing after reading here and around the web.

Sony Vegas 13 uses a bicubic downsampling algorithm, not Lanczos. What I've found after long testing between the two is that the bicubic one produces softer images. Noticeably softer, and not just in sharpness but in resolution; there's a slight loss compared to Lanczos.

Lanczos preserves more resolution, but the problem is that it adds a bit of sharpening to the final image as edge enhancement, and in some situations produces halos.

None of them induces aliasing. 

So I've decided to go with Lanczos; it works perfectly and gives the sharpest results GIVEN that you downsample a non-sharpened image. In the example above, the 4K image was already sharpened digitally and Lanczos introduced halos.

But after testing a completely non-sharpened 4K image scaled down with Lanczos, it's really the best image: organic resolution with no visible sharpening effects, while bicubic just leaves an ever-so-slightly less resolved image.

From now on I am not going to sharpen my 4K images before downscaling, not in-camera and not in post; just scaling down with Lanczos is best.

I will produce the final graded 4K master with no sharpening and ingest it into VirtualDub, a free program that allows downsampling with the Lanczos algorithm: fast, easy and simple. I tested MPEG Streamclip, which also allows it, but it changes the gamma and saturation for some reason.

I'll shoot 4K with zero sharpening, edit and grade in 4K, add no post sharpening, render a high-quality 4K master, ingest it into VirtualDub and render to HD using Lanczos. This gave me hugely improved 1080p compared to what I was doing before: just dumping the 4K on a 1080p timeline using bicubic and adding sharpening, which sometimes introduced ugly edges and aliasing. Lanczos hits the sweet spot for scaling non-sharpened 4K into HD. Very natural and organic, without a hint of artefacting.

For Premiere users, you're lucky to have it downsample with that algorithm in the render, but I'd still remember to edit, grade and work in 4K and add NO sharpening before downscaling with Lanczos; it takes care of that, unless you want the overly sharpened edges that Lanczos gives to any pre-sharpened material.

Link to comment
Share on other sites
