
Anamorphic squeeze/stretch and its effect on color space & bit depth


Recommended Posts

Most forum users have probably already read Andrew's article on the effects of scaling 8-bit 4:2:0 4K footage down to 10-bit 4:4:4 1080p footage.


We in the anamorphic world are constantly rescaling our footage, though only on one axis. What I am curious to know is whether this can affect (positively or negatively) the color space and bit depth of the resulting footage. Also, can we better preserve footage by using wider pixels (as opposed to using square pixels + resizing)? This is new territory for me, and probably for a lot of others as well.


Note: I posed the question in the thread above, but I think that this deserves its own thread.


Interesting, but surely we aren't doing the same type of up/down scaling (4K to 1080p); we're just stretching or squeezing, not changing resolution to such extremes.

With DSLR footage I don't think I could ever notice, just because it was so crap to begin with.

Furthermore, with the Anamorphic BM Pocket footage I've filmed, there's been no noticeable difference either.


However, with FCPX I don't think I get the option any more concerning square pixels. But surely the pixels would not be square until you unsqueezed them? So perhaps we should really be thinking about unsqueezing anamorphic footage first, getting the true resolution, and then downscaling afterwards. I don't think it will give us any more bit depth/colour space, but it might better preserve it.


Or do you mean by unsqueezing we might actually lose something?


Damn! I think I've just confused myself!


Quick, get an expert in here...


For anamorphic, particularly if your final conform is to letterbox on a standard 1080P frame, you're effectively oversampling the image in a non-uniform way if nothing else.  You'll be filtering some of the effects of compression.  I would think this calls for some experimentation to see just how much mileage you get out of it.


I'm willing to bet you get the best, cleanest results pulling keys on footage un-squeezed in the vertical direction (i.e. squashed) versus the squeezed footage or footage un-squeezed horizontally. I'm less sure what sort of difference you might see when you go to sharpen. For Canon AVCHD shooters I wonder how much or how little un-squeezing helps with moiré, or for Blackmagic for that matter.


You'll want to work in a high-bit-depth project, and the type of filter used both for the scale and for transcoding the 8-bit 4:2:0 YUV footage into an RGB colour space will play a big part (some applications let you choose the filter and some don't). With the same piece of video you could very well get entirely different results depending on the application used.


For a 2x squeeze: if you were to squash your footage rather than stretch it, then maybe you could pick up 1 extra bit, but probably not 2 like the 4K-to-HD scaling.


Since 4K is HD x 2 in both dimensions, you are squashing it in x and y, so you can use 4 pixels to make 1 pixel, i.e. 4 data points per output pixel. Those 4 data points are worth 2 extra bits.


With anamorphic you will end up with 2 data points per pixel, since you are making 2 pixels into 1, giving you 1 extra bit. So I would think a 9-bit image would be possible.
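The "2 data points per pixel" argument can be sketched numerically. A minimal toy example (my own numbers, assuming the squash simply averages each neighbouring pixel pair):

```python
import numpy as np

# A 2x horizontal squash builds every output pixel from two 8-bit samples.
row = np.array([100, 101, 200, 201], dtype=np.uint16)  # 8-bit luma values

# Average each horizontal pair at higher precision:
squashed = row.reshape(-1, 2).mean(axis=1)  # [100.5, 200.5]

# The half-steps (100.5) are levels an 8-bit integer cannot represent;
# keeping them needs one extra bit. The sum of two 8-bit values is a
# 9-bit value in the range [0, 510]:
nine_bit = row.reshape(-1, 2).sum(axis=1).astype(np.uint16)  # [201, 401]
```

Whether you actually keep that extra bit depends on the project bit depth, as noted above: if the result is rounded back to 8 bits, the half-steps are thrown away.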




Suppose you recorded 4:2:0 at 1080x1280 with a 2x anamorphic in CinemaScope. After stretching horizontally you get 4:2:0 at 1080x2560.


Then after further scaling by 0.75x to fit a 810x1920 box, the colour sampling would increase by 1.33x to somewhere between 4:2:0 and 4:4:4.



Of course, you would only see this benefit if you did the edit/scaling in a 4:4:4 codec.
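The 1.33x figure above comes out of simple arithmetic. A rough sketch (numbers from the post, assuming a 2x anamorphic and that the chroma plane scales with the image):

```python
# 4:2:0 stores chroma at half the luma resolution (ratio 0.5).
recorded_w = 1280                  # recorded luma width
stretched_w = recorded_w * 2       # 2560 after the 2x unsqueeze
chroma_w = stretched_w // 2        # 4:2:0 chroma width: 1280
final_w = int(stretched_w * 0.75)  # 1920 after fitting the 810x1920 box

ratio = chroma_w / final_w         # 1280/1920 = 0.666..., vs 0.5 for plain 4:2:0
gain = ratio / 0.5                 # ~1.33x denser chroma relative to luma
```

So the final frame sits between 4:2:0 (ratio 0.5) and 4:4:4 (ratio 1.0), as stated.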


If you recorded with 5D MLRaw then you would get very close to 4:4:4: full-height, 14-bit, true CinemaScope.


Also see A1ex's reply in this thread for custom crops



Probably, though I am not sure what it is. Here is one that will tell you how much you need to scale an image down to get a desired number of extra bits. If you can solve for bits you will have your equation; my math is a bit rusty in that area.


scale = 1/(sqrt(2^bits)), where scale is a value between 0 and 1 and bits is the number of extra bits you want.




0.5 = 1/(sqrt(2^2)) is UHD to HD.


4K to 480P would give about 12 bits in total. I am considering 480P to be 864 pixels wide with square pixels, even though that is not the case.


0.25 = 1/(sqrt(2^4))


3840 x 0.25 = 960.


Since bits come in whole numbers, not every resolution change is going to result in an extra bit being added.
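Solving the formula above for bits is straightforward: inverting scale = 1/sqrt(2^bits) gives bits = -2 * log2(scale). A quick sketch checking it against the examples in this post:

```python
import math

def extra_bits(scale: float) -> float:
    """Invert scale = 1/sqrt(2**bits): bits = -2 * log2(scale).
    Returns the extra bits gained on top of the recorded bit depth."""
    return -2 * math.log2(scale)

print(extra_bits(0.5))         # 2.0 -> UHD to HD gains 2 bits
print(extra_bits(0.25))        # 4.0 -> 3840 * 0.25 = 960 wide
print(extra_bits(864 / 3840))  # ~4.3 -> 8 + ~4.3, about 12 bits for "480P"
```

As noted above, the fractional results (like ~4.3) show why not every resolution change buys a whole extra bit.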

