Robot dolly + projection mapping = magic


Andrew Reid

[media]http://vimeo.com/75260457[/media]

Here's a new technique executed to perfection...

Amazingly, what you see isn't CGI added to the scene in post but a live set of visuals.

The technique uses robotics, motion control and projection, all filmed in-camera.

[url=http://www.eoshd.com/content/11220/robot-dolly-projection-mapping-magic]Read the full article here[/url]


What's being projected is (mostly) CG; it's just not added through post-production techniques. Because the system knows the position and orientation of the robotic arm at all times relative to the camera POV (much like how the realtime line-of-scrimmage and yards-to-go lines are drawn for live sports broadcasts), the orientation of the projection surfaces can be derived directly rather than tracked in post.
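
To make that concrete, here's a minimal sketch (Python with numpy, not Bot & Dolly's actual software) of why no tracking is needed: with the camera and the surface both driven by robots reporting poses in one shared frame, the surface corners can be transformed straight into camera space and projected with an assumed pinhole model. All numbers here are hypothetical.

[code]
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical telemetry: world-from-camera and world-from-surface transforms,
# reported by the motion-control system for the current frame.
world_T_cam  = pose(np.eye(3), np.array([0.0, 0.0, -2.0]))
world_T_surf = pose(np.eye(3), np.array([0.5, 0.0,  0.0]))

# Corners of a 1 m x 1 m projection surface in its own frame (homogeneous coords).
corners_surf = np.array([[-0.5, -0.5, 0.0, 1.0],
                         [ 0.5, -0.5, 0.0, 1.0],
                         [ 0.5,  0.5, 0.0, 1.0],
                         [-0.5,  0.5, 0.0, 1.0]]).T

# Chain the known transforms: camera <- world <- surface. No matchmoving needed.
cam_T_surf  = np.linalg.inv(world_T_cam) @ world_T_surf
corners_cam = cam_T_surf @ corners_surf

# Assumed pinhole intrinsics give the corners' pixel positions for this frame.
K  = np.array([[1500.0,    0.0,  960.0],
               [   0.0, 1500.0,  540.0],
               [   0.0,    0.0,    1.0]])
uv = K @ corners_cam[:3]
uv = uv[:2] / uv[2]
print(uv.T)  # where each surface corner lands in the camera image
[/code]

Do that every frame from the telemetry and you always know where the surfaces sit in the shot, which is exactly the information a post tracker would otherwise have to recover.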

 

You could, additionally, sync a second motion-controlled camera to photograph a completely different live scene, instead of using pre-rendered or on-the-fly CGI, and project that onto the surfaces. After The Fifth Element we built a system within Houdini that interfaced with the Digital Domain motion-control stage and let animators design physically accurate moves for motion-control miniatures, rather than the screen-based methodology that was standard in the industry and often introduced "skating" and other motion or performance artifacts that have plagued flying miniatures for decades. (Miniatures were phased out shortly afterwards, so it was a neat proof of concept that never really got used.) This adds a projection component to that sort of realtime telemetry and view-dependent spatial awareness.

 

I didn't see specific info on the website or on Vimeo, but I'm wondering whether the projector is on-axis with the composite camera via a beam splitter, or whether they're pre-distorting the projection so the projector can stay fixed, up out of frame, and not attached to the composite camera.
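
In case they went the fixed-projector route, here's a minimal sketch of the pre-distortion idea, assuming OpenCV and a calibrated projector: if the telemetry says where the surface's four corners fall in the projector's frame buffer, a per-frame homography warps the content so it lands squarely on the surface without a beam splitter. The corner coordinates and file names are made up for illustration.

[code]
import cv2
import numpy as np

content = cv2.imread("frame.png")        # the image intended for the surface
h, w = content.shape[:2]

# Content corners in their own, undistorted space.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Where those corners currently sit in the projector's frame buffer, derived
# from the known surface pose and projector calibration (hypothetical values).
dst = np.float32([[212, 148], [1690, 95], [1754, 930], [180, 1002]])

# Warp the content into the projector's frame buffer; recomputed every frame
# as the robots move the surface.
H = cv2.getPerspectiveTransform(src, dst)
projector_frame = cv2.warpPerspective(content, H, (1920, 1080))
cv2.imwrite("projector_frame.png", projector_frame)
[/code]

With a beam splitter you'd skip this step entirely, since the projector shares the camera's optical axis and the perspective comes for free.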

 

It's neat looking, however it's done.


Amazing,

but executing this will take more time than shooting it and doing it in post!

 

Not really. The imagery being projected is most likely being generated in realtime. It's really simple graphics.

 

It's likely more expensive than doing it in post, because of the large robots, the motion-controlled camera (which is a third, unseen robot), and the bespoke software needed to integrate the telemetry, motion control and imagery.

 

Setting this up means calibrating the three robots to the host software so that it knows, at minimum, the position and orientation of the tip of each: the tips controlling the position and orientation of the projection surfaces (which have known dimensions) and the camera (which also has known dimensions). MoCo software that calibrates itself to a fixed origin has been around forever and takes minutes, if that. This is doing that at least three times, and then you're set for the shoot. The rest of the effort mostly goes into generating the realtime imagery being projected.
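
As a rough sketch of that calibration step (an assumed approach, not their actual software): touch the tool tip of each robot to a few surveyed fiducial points on the stage, then solve a rigid fit (the classic Kabsch/SVD method) for the stage-from-robot-base transform. Done once per robot, it puts the camera and every surface in one shared coordinate system; the fiducial numbers below are invented.

[code]
import numpy as np

def rigid_fit(points_robot, points_stage):
    """Least-squares rigid transform mapping robot-base coords into stage coords."""
    cr, cs = points_robot.mean(axis=0), points_stage.mean(axis=0)
    H = (points_robot - cr).T @ (points_stage - cs)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a mirrored solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cs - R @ cr
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical data: the same fiducials measured via the robot's forward
# kinematics (left) and surveyed in stage coordinates (right).
p_robot = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.5]])
p_stage = np.array([[2.0, 1.0, 0.0], [2.0, 2.0, 0.0], [1.0, 1.0, 0.0], [1.0, 2.0, 0.5]])

stage_T_base = rigid_fit(p_robot, p_stage)
# Repeat for the camera robot and each surface robot; after that, any tool pose
# any robot reports can be chained into the one shared stage frame.
[/code]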


Not really. The imagery being projected is most likely being generated in realtime. It's really simple graphics.

 

 

Hope they'll have a making-of eventually...

 

I actually think the images are pre-rendered. Why do it in realtime with a games-quality renderer (they would probably use Processing, as the creative designer is Gmunk and he is used to Processing) when you can have high-end CG with global illumination, ambient occlusion, area lights, etc.?

 

Pretty sure the camera is mounted on one of those 6-axis robots. That way, everything is tight to the millisecond and to the millimeter. No need to build and test a tracking system, and anyway you couldn't make a tracking system that precise work in the dark. Everything is animated in 3D, from the cam to the surfaces, so you can be sure every visual is perfectly tight in 3D.

Link to comment
Share on other sites

 

I actually think the images are pre-rendered. Why do it in realtime with a games-quality renderer (they would probably use Processing, as the creative designer is Gmunk and he is used to Processing) when you can have high-end CG with global illumination, ambient occlusion, area lights, etc.?

 

 

All of those things are doable in realtime with realtime engines: area lights, ambient occlusion and GI effects. Reduced to black-and-white, it's even faster. The graphic simplicity of most of what was projected here served an artistic purpose, but it also represents a wouldn't-break-a-sweat level of imagery for a modern game engine.

 

A realtime solution allows them the freedom of performance and isn't just about playback, calibration and registration.  It may very well be pre-rendered.  There's just no reason for this particular level of imagery to be pre-rendered except by choice.

 

Being able to see a virtual representation on set, giving actors, directors and cameramen better reference than dots, X's or "picture this" motivations to go on, often by leveraging realtime gaming technology, has already been used on several films. Avatar used it to create camera moves with actors in virtual sets. The same sort of technology was used on fully animated films like The Polar Express, with cameramen operating familiar dollies and other camera gear to create virtual camera moves in a virtual space, all driven, essentially, by mapping the mechanics of real equipment.

 

Then you have the realtime overlay of fighting robots for Real Steel that let the camera crew create realistic, hand-held camerawork for the boxing matches, shooting virtual characters fighting in real spaces. You also have the app created for the filming of the massive jaeger hangar in Pacific Rim that let anyone on set hold up their iPhone or iPad and look around at where they were standing and how the set related to what would eventually be seen all around them.

 

Perhaps something like Google Glass will allow actors to see a representation of the non-human characters they're interacting with, overlaid on their view of a stage space along with a representation of the virtual world they'll eventually be standing in. Projection like this is largely view dependent, so it's not quite as useful in that regard. The main issue with something like Google Glass, however, is how any kind of eyeglasses or head-mounted display could potentially disrupt facial performance capture, especially since getting the eyes right is one of the most important components.


If you think jazz musicians "improvise" in real-time, then you can believe this short was done "real-time". Jazz musicians must study music and their instrument for years before they can improvise. To make this short "real-time", they must have worked for days, probably weeks. Programming movements can't be improvised.

I agree a making-of would be great!


If you think jazz musicians "improvise" in real-time, then you can believe this short was done "real-time". Jazz musicians must study music and their instrument for years before they can improvise. To make this short "real-time", they must have worked for days, probably weeks. Programming movements can't be improvised.

I agree a making-of would be great!

 

Real-time in this case means that the CG isn't pre-rendered; the rendering happens in real-time, just like 3D graphics are rendered in real-time in most computer games. Not real-time would be like visual effects in movies, where rendering the CG for just one frame can take from minutes to days on a single processor core.
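
A toy illustration of the distinction (plain Python, not any particular engine): a real-time renderer has to finish each frame inside a fixed budget, about 16.7 ms at 60 fps, while an offline renderer can spend minutes or hours per frame and play the result back later.

[code]
import time

FRAME_BUDGET = 1.0 / 60.0            # seconds available per frame at 60 fps

def render_frame(frame_index):
    """Stand-in for drawing one frame of the projected graphics."""
    pass

def realtime_loop(num_frames=300):
    for i in range(num_frames):
        start = time.perf_counter()
        render_frame(i)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:
            print(f"frame {i} blew the budget: {elapsed * 1000:.1f} ms")
        # sleep off whatever budget remains so output stays locked to 60 fps
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))

realtime_loop()
[/code]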


Yes, I understand that. But from the video, it all seems to be one long sequence, no cuts. So I really think a lot of programming work was needed! A making-of is utterly necessary! ;-DDD

Video mapping isn't new, and it can be wonderful, even if most examples I have seen are more like "presets" than custom, well-thought-out material...

