jeliza

Unsqueezing/Stretching Software


Hey Guys,

 

I am about to shoot some footage on my T3 with my Singer 16d. What do you use to unsqueeze the footage? Premiere Pro? I don't have Final Cut...

 

Also, what's this about using Magic Lantern? What's the advantage?


Most editing software can desqueeze anamorphic footage. I don't have a Canon camera, but the advantage of Magic Lantern (specific to anamorphic) is its aspect-ratio features. ML lets you shoot 4:3 instead of 16:9, which helps keep the unsqueezed ratio from being too extreme (with a 2x anamorphic on 16:9, you end up with a 3.55:1 image).
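The arithmetic behind those numbers is easy to check; a quick sketch in plain Python (the ratios are the ones quoted above):

```python
# Desqueezed aspect ratio = sensor aspect ratio * anamorphic squeeze factor.
def desqueezed_ratio(width, height, squeeze):
    return (width / height) * squeeze

print(round(desqueezed_ratio(16, 9, 2.0), 2))  # 3.56 (the ~3.55:1 quoted)
print(round(desqueezed_ratio(4, 3, 2.0), 2))   # 2.67 (classic 2.66:1 scope)
```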


Okay,

 

 

So I have shot both video and stills with my Singer 16D lens... I'd like to know how to unsqueeze the footage in After Effects or Premiere Pro.

 

Also, for stills, do I change the Pixel Aspect Ratio from Square to Anamorphic in Photoshop? Or should I change the image size by a percentage, i.e. multiply the width by the anamorphic squeeze factor and increase the size by that percentage?

 

Help!! :)



 

I generally change the pixel aspect ratio when working with footage and have my final output be square pixels. Not everything handles pixel aspect ratios properly, so I find it best to get rid of them in anything I consider a final product. Using pixel aspect ratios during color correction does let me keep my full resolution and minimize memory: a 2.0 pixel aspect changes the displayed size of the image and unsqueezes it, but doesn't change the number of stored pixels, unlike scaling the footage by 2.
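The memory point can be sketched with a toy calculation (1920 x 1080 is an assumed example frame size, not from the post):

```python
# With a 2.0 pixel aspect ratio the stored frame stays 1920x1080 but is
# *displayed* at 3840x1080; scaling by 2 instead doubles the stored pixels.
stored_w, stored_h = 1920, 1080
par = 2.0

display_w = int(stored_w * par)             # width the viewer sees
par_pixels = stored_w * stored_h            # pixels kept with PAR
scaled_pixels = (stored_w * 2) * stored_h   # pixels if scaled by 2 instead

print(display_w)      # 3840
print(par_pixels)     # 2073600
print(scaled_pixels)  # 4147200, twice the memory
```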

 

For images in Photoshop I would scale the image rather than use a pixel aspect ratio; I imagine image viewers are much less likely to support pixel aspect ratios.
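As a sketch of that scaling approach, for a 2x lens you would enlarge the width by 200% and leave the height at 100% (the pixel dimensions below are made-up example values, not from the post):

```python
# Desqueezing a still by resizing: multiply only the width by the squeeze
# factor; the height is unchanged. Example dimensions are hypothetical.
width, height, squeeze = 5184, 3456, 2.0
print(int(width * squeeze), height)  # 10368 3456
```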

 

For AE or Premiere I always make the sequence/comp exactly the same height as my raw footage and use pixel aspect ratios to preview the unsqueezed footage; both apps have PARs for 1.33, 1.5, and 2.0. I then set the width of the comp/sequence so the final aspect is 2.39 or 2.67 (usually 1440 x 1080) and re-frame my footage during the edit. A 2x anamorphic shot in 16:9 is way too wide, so I like to get it back to a standard aspect at the beginning and frame horizontally as I go, rather than keeping it ultra-wide and re-framing the whole thing at the end, since I like having the option to frame each shot individually.
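The comp width for a given target aspect falls out of a one-line calculation; a small sketch (target aspect and PAR as described above):

```python
# Comp width in stored pixels needed for a target display aspect,
# given the comp height and the pixel aspect ratio applied to it.
def comp_width(height, target_aspect, par):
    return round(target_aspect * height / par)

print(comp_width(1080, 8 / 3, 2.0))  # 1440 -> displays as 2880x1080 (2.67:1)
```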

 

Once all editing and CC are done, I export a master in a high-quality codec, 10-bit Blackmagic at this point, since I am on Windows and DNxHD doesn't allow resolutions like 1440 x 1080. From there I use the master to make my final square-pixel version.

 

As a workaround for a 2x squeeze with a 1440 x 1080 comp and DNxHD, I have found I can set my footage to a 1.33 stretch and render out 1920 x 1080, since 1440 * 1.333 = 1920. Then, when working with the DNxHD files, I assign them a 1.5 pixel aspect ratio. That gives a 2x final stretch, since 1.5 * 1.333 = 2, and still keeps some of the benefits of using pixel aspect ratios instead of scaling.
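That multiplication can be checked directly:

```python
# The workaround: render the 1440x1080 comp stretched to 1920x1080
# (a DNxHD-legal size), then assign a 1.5 PAR downstream; the two
# stretches multiply back to the full 2x anamorphic desqueeze.
render_stretch = 1920 / 1440   # 1.333...
assigned_par = 1.5
print(round(render_stretch * assigned_par, 3))  # 2.0
```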


It's often a good idea to shoot some footage or stills of a circular shape or object with your setup. That lets you stretch the footage against your own reference, helps you understand the stretch, and gives you a guide to what looks good versus the actual required stretch. Hope this helps.
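Measuring the reference is simple arithmetic: the ratio of the circle's true (vertical) diameter to its squeezed (horizontal) diameter in the frame is the stretch to apply. A sketch with made-up measurements:

```python
# A circular object photographs as an ellipse through an anamorphic lens;
# the height/width ratio of that ellipse is the desqueeze factor.
# The measurements below are hypothetical example values.
measured_h = 400   # ellipse height in pixels (unsqueezed axis)
measured_w = 267   # ellipse width in pixels (squeezed axis)
print(round(measured_h / measured_w, 2))  # 1.5 -> a 1.5x anamorphic
```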


Hi - I'm working with Magic Lantern footage from the 5DIII shot with an Iscorama. My editing system is a 2009 Mac Pro, and I'm using FCPX 10.1.2. I'd like to put the viewer window on a second computer monitor and also output HDMI to a 10-bit monitor; I have an Intensity Pro card. Everything works OK except that the output from FCPX to the HDMI monitor bypasses the anamorphic setting, so it is not unsquished (a technical term ;)
I can output from DaVinci Resolve Lite to the monitor in the anamorphic aspect ratio.

I am using transform in FCPX to anamorphically stretch each clip.

Any thoughts on this? I've checked everything I can think of and it's driving me nuts - I would think I could output whatever is happening in the viewer window.

Thanks - any thoughts appreciated!!
Toni

