My Journey To Virtual Production


BTM_Pix

  • Super Members

They say "A journey of a thousand miles begins with a single step" and that describes exactly where I am with Virtual Production.

The reason for setting off on this journey?

Curiosity about the process is one aspect - and something I'm hoping will sustain me when the expected big boulders block the route - but I also think VP is going to trickle down to lower budget projects faster than people might expect, so it's something I'm keen to get involved in.

What aspects of VP am I looking to learn about and get involved with?

In the first instance, it will be simple static background replacement compositing, but it will move on to more "synthetic set" 3D territory, both for narrative work and for multi-camera virtual studio applications.

There will also be a side aspect of creating 3D assets for the sets, but that won't happen until I've achieved some comfort level with how to actually use them!

And the ultimate objective?

I'm not expecting to be producing The Mandalorian in my spare bedroom but I am hoping to end up in a position where a very low-fi version of the techniques used in it can be seen to be viable. I'm also hopeful that I can get a two-camera virtual broadcast studio setup working for my upcoming YouTube crossover channel Shockface, which is a vlog about a gangster who sells illicit thumbnails to haplessly addicted algorithm junkies.

I'm fully expecting this journey to make regular stops at "Bewilderment", "Fuck Up", "Disillusionment" and "Poverty" but if you'd like to be an observer then keep an eye on this thread as it unfolds.

Or more likely hurtles off the rails.

Oh and from my initial research about the current state of play, there may well be a product or two that I develop along the way to use with it too...



I too have looked a bit into it, but my Gray Matter is beginning to turn into Black Matter at my age and I see total defeat, and no real application in my life for the ass whipping I would take, not counting the money it would cost.

But yes, I agree it is coming and is going to be another trick in your bag you are going to need down the road.


  • Administrators
1 hour ago, BTM_Pix said:

There will also be a side aspect of creating 3D assets for the sets, but that won't happen until I've achieved some comfort level with how to actually use them!

I have always been curious to check out the Unreal Engine.

Emulating a lens, a camera, lights, all so advanced now in game engines.

Will be interested to see what you discover, software and hardware wise.

There are some big studios in Australia now that really push the boat out and make all sorts. I dare say they can sell a lot of their virtual assets to other studios as well!

Virtual production asset marketplace anyone?


  • Super Members
2 hours ago, webrunner5 said:

I too have looked a bit into it, but my Gray Matter is beginning to turn into Black Matter at my age and I see total defeat.

You and me both.

1 hour ago, Andrew Reid said:

I have always been curious to check out the Unreal Engine.

Emulating a lens, a camera, lights, all so advanced now in game engines.

Will be interested to see what you discover, software and hardware wise.

 

Like most things these days, the amount of kit and graft required to get systems set up with "real" cameras, just to reach the same starting point for real time compositing etc. as some phone applications, is quite mad!

The interesting thing is how it may impact the role/expense of the camera and lenses in such productions. From a practical, if simplistic, point of view, the onus becomes simply achieving a sharp capture of the live element, as all of the background elements - where the character/mojo/hype of real cameras and lenses kicks in - will be synthetically created.

It offers the unwanted potential for some horribly synthetic "portrait mode" stuff like some smartphones of course.

But with much more expensive cameras and lenses.

And tons more computing power required!

 


Last year my friends and I put together a show which we filmed in virtual reality. We tried to make it as much like actual filmmaking as possible, except that we were 1,000 miles apart. That didn't turn out to be as possible as we'd hoped, due to bugs and limited features. The engine was built entirely from scratch in Unity, so I was making patches every 10 minutes just to keep it working. Most of the 3D assets were made for the show; maybe 10% or so were found free on the web. Definitely one of the more difficult aspects was building an entirely remote pipeline for scripts, storyboards, assets and recordings, and then, at the very end, the relatively simple task of collaborative remote editing with video and audio files.

If you're interested you can watch it here (mature audiences) https://www.youtube.com/playlist?list=PLocS26VJsm_2dYcuwrN36ZgrOVtx49urd

I've been toying with the idea of going solo on my own virtual production as well.


  • Administrators
3 hours ago, KnightsFan said:

Thanks! I've been building games in Unity and Blender for many years, so there was a lot of build up to making this particular project. All that's to say I'm interested in virtual production, and will definitely be interested to see what BTM or anyone else comes up with in this area.

How dependent on coding is Unity? Do you need to learn a language, or is it more of a lego kit of existing libraries and engines that you can build a game or immersive 3D environment with?


46 minutes ago, Andrew Reid said:

How dependent on coding is Unity? Do you need to learn a language, or is it more of a lego kit of existing libraries and engines that you can build a game or immersive 3D environment with?

Unity is very dependent on coding but is designed to be easy to code in. There are existing packages on the asset store (both paid and free) that can get you pretty far, but if you aren't planning to write code, then a lot of Unity's benefits are lost. For example, you can make a 3D animation without writing any code at all, but at that point you may as well just use Blender or Unreal. The speed with which you can write and iterate code in Unity is its selling point.

Edit: Also, Unity uses C#, so that would be the language to learn.
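To give a feel for how little code a working behaviour takes, here's a minimal sketch (the component name and speed value are just illustrative):

```csharp
using UnityEngine;

// Minimal Unity component: attach to any GameObject and it spins at a
// constant rate. The engine handles rendering, input and the frame loop;
// the C# only describes the behaviour.
public class Spinner : MonoBehaviour
{
    public float degreesPerSecond = 90f; // editable in the Inspector, no recompile needed

    void Update()
    {
        // Called by the engine once per rendered frame.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}
```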


  • Super Members

Step 1 - The Pros And Not Pros

So, to look at what we want to end up with conceptually, this is the most widely discussed example of VP at the moment.

Fundamentally, it's about seamlessly blending real foreground people/objects with synthetic backgrounds, and it's a new-ish take on the (very) old Hollywood staple of rear projection, as used with varying degrees of success in the past.

The difference now is that productions like The Mandalorian are using dynamically generated backgrounds created in real time to synchronise with the movement of the real camera that is filming the foreground content.

Even just for a simple background replacement like in the driving examples from the old films, the new technique has huge advantages in terms of being able to sell the effect through changeable lighting and optical effect simulation on the background etc.

Beyond that, though, the scope for the generation and re-configuring of complex background sets in real time is pretty amazing for creating series such as The Mandalorian.

OK, so this is all well and good for a series that has a budget of $8.5m an episode, shot in an aircraft-hangar-sized building with more OLED screens than a Samsung production run and a small army of people working on it, but what's in it for the rest of us?

Well, we are now at the point where it scales down and, whilst not exactly free, it's becoming far more achievable at a lower budget level, as we can see in this example.

And it's that sort of level I initially want to head towards, while possibly spotting opportunities to add some enhancements of my own in terms of camera integration.

I'm pretty much going from a standing start with this stuff, so there won't be anything exactly revolutionary in this journey as I'll be leaning pretty heavily on the findings of the many people who are way further down the track.

In tomorrow's thrilling instalment, I'll be getting a rough kit list together and then having a long lie down after I've totted up the cost of it and have to re-calibrate my "at a budget level" definition.


For highly detailed & inexpensive virtual sets, check out DAZ 3D.

I rendered these with a DAZ model by Jack Tomalin (with two different texture sets), imported into Maxon Cinema 4D. It's a great way to experiment with lighting setups:

[Images: two renders of Jack Tomalin's Baroque Grandeur DAZ set]
Stonemason is another great DAZ modeler for virtual interiors/exteriors:

https://www.daz3d.com/the-streets-of-venice

These models are generally less than $50 and are getting more detailed every year.


Lately I’ve been looking at low cost motion capture systems.

Anyone tried the Perception Neuron 3 system? - $2,400

https://www.neuronmocap.com/perception-neuron-3-motion-capture-system

 


  • Super Members

Step 2 - Holy Kit

OK, so we've established that there is no way we can match the financial, physical and personnel resources available to The Mandalorian, but we do need to look at just how much kit we need, and how much it will cost, to mimic the concept.

From the front of our eyeball to the end of the background, we are going to need the following:

  1. Camera
  2. Lens
  3. Tracking system for both camera and lens
  4. Video capture device
  5. Computer
  6. Software
  7. Video output device
  8. Screen

No problem with 1 and 2, obviously, but things start to get more challenging from item 3 onwards.

In terms of the tracking system, we need to be able to synchronise the position of our physical camera in the real world to the synthetic camera inside the virtual 3D set.

There are numerous ways to do this but at our level we are looking at re-purposing the Vive Tracker units mounted on top of the camera.

[Image: HTC Vive Tracker]

With the aid of a couple of base station units, this will transmit the physical position of the camera to the computer. Whilst there may be some scope to look at different options, this is a proven solution and, initially at least, the one I'll be looking at getting.

Price wise, it's €600 for a tracker and two base stations, so I won't be getting the credit card out for these until I've got the initial setup working - in the initial stages it will be a locked-off camera only!
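Just to sketch what the software side of that looks like - in Unity C# terms rather than Unreal, purely because Unity came up earlier in the thread, and with the tracker-to-sensor offsets as assumptions that a real rig would have to calibrate:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Rough sketch, not a tested rig: mirror a Vive Tracker's pose onto the
// virtual camera each frame. Assumes a SteamVR/OpenXR runtime is active and
// the tracker is paired; the offsets below are placeholders.
public class TrackerToVirtualCamera : MonoBehaviour
{
    public Vector3 positionOffset;                          // tracker-to-sensor translation (metres)
    public Quaternion rotationOffset = Quaternion.identity; // tracker-to-sensor rotation

    void Update()
    {
        var trackers = new List<InputDevice>();
        InputDevices.GetDevicesAtXRNode(XRNode.HardwareTracker, trackers);
        if (trackers.Count == 0) return;

        if (trackers[0].TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 pos) &&
            trackers[0].TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rot))
        {
            // Drive the virtual camera with the physical camera's pose.
            transform.SetPositionAndRotation(pos + rot * positionOffset, rot * rotationOffset);
        }
    }
}
```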

For the lens control, there also needs to be synchronisation between the focus and zoom position of the physical lens on the physical camera and that of the virtual lens on the virtual camera.

Again, the current norm is to use additional Vive trackers attached to follow focus wheels, translating their movements to focus and zoom positions, so that will require a further two of those at €200 each.

Of course, these are also definitely in the "to be added at some point" category, like the camera tracker.

The other option here is lens encoders, which attach to the lens itself and transmit its physical position to the computer, which then uses a calibration process to map the values to distance or focal length. These are roughly €600 each so, well, not happening initially either.
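The calibration idea itself is conceptually simple: record the raw encoder value at a handful of marked focus distances, then interpolate between them at runtime. A minimal sketch (the sample values are invented for illustration):

```csharp
using System;

// Hedged sketch of encoder calibration: piecewise-linear interpolation
// between measured (encoder count, focus distance) pairs. The numbers
// below are made up; a real lens would be measured against its markings.
public static class LensCalibration
{
    // (encoder count, focus distance in metres), sorted by count
    static readonly (int count, float metres)[] samples =
    {
        (120, 0.45f), (900, 1.0f), (2100, 2.0f), (3300, 5.0f), (4000, 1000f) // ~infinity
    };

    public static float CountsToDistance(int raw)
    {
        if (raw <= samples[0].count) return samples[0].metres;
        for (int i = 1; i < samples.Length; i++)
        {
            if (raw <= samples[i].count)
            {
                float t = (raw - samples[i - 1].count) /
                          (float)(samples[i].count - samples[i - 1].count);
                return samples[i - 1].metres + t * (samples[i].metres - samples[i - 1].metres);
            }
        }
        return samples[^1].metres;
    }
}
```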

Moving on to video capture, that can range from €10 HDMI capture dongles up to BM's high end cards, but in the first instance, where we are just exploring the concepts, I'll use what I have around.

The computer is the thing that is giving me a fit of the vapours.

My current late 2015 MacBook Pro isn't going to cut it for running this stuff full bore, and I'm looking to get one of the new fancy dan M1 Max ones anyway (just not till the middle of the year), but I'm not sure the software support is there yet. Again, just to get going, I'll see if I can run it on what I have as a proof of concept, then defer the decision on whether I'm taking it far enough to hasten the upgrade.

What I'm most worried about is the support to fully utilise the new MacBooks not being there, and having to get a Windows machine instead.

Eek!

At least the software part is sorted, as obviously I'll be using Unreal Engine to do everything, and with it being free (for this application at least) there's no way of going cheaper!

The video output and screen aspects are ones that, again, can be deferred until (or if) I want to get more serious than just tinkering around.

At the serious end, the output will need a card such as one of BM's and a display device such as a projector, so that live in-camera compositing can be done.

At the conceptual end, the output can just be the computer screen for monitoring, with a green screen behind the subject and the compositing saved for later.

To be honest, even if I can get my addled old brain to advance things to the stage where I want to use it, it's unlikely that the budget would stretch to full-on in-camera compositing, so it will be green screen all the way!

OK, so the next step is to install Unreal Engine and see what I can get going with what I have to hand before putting any money into what might be a dead end/five minute wonder for me.


22 hours ago, Jay60p said:

Lately I’ve been looking at low cost motion capture systems.

How accurate does it need to be? There are open source repos on github for real time, full body motion tracking based on a single camera which are surprisingly accurate https://github.com/digital-standard/ThreeDPoseUnityBarracuda. It's a pretty significant jump in price up to an actual mocap suit, even a "cheap" one.

 

39 minutes ago, BTM_Pix said:

Price wise, it's €600 for a tracker and two base stations, so I won't be getting the credit card out for these until I've got the initial setup working - in the initial stages it will be a locked-off camera only!

I wonder how accurate or cost effective it would be to instead mount one of these on your camera. https://www.stereolabs.com/zed/ I keep almost buying a Zed camera to mess around with, but I've never had a project to use it in. Though if you already have a Vive ecosystem, a single tracker isn't a ton more money.

 

42 minutes ago, BTM_Pix said:

The other option here is lens encoders, which attach to the lens itself and transmit its physical position to the computer, which then uses a calibration process to map the values to distance or focal length. These are roughly €600 each so, well, not happening initially either.

You can make your own focus ring rotation sensor with a potentiometer and an Arduino for next to nothing. (As you might be able to tell, I'm all for spending way too much effort on things that have commercial solutions).
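The PC side can be just a few lines - a sketch only, assuming the Arduino prints one analogRead() value (0-1023) per line over USB serial; the port name and baud rate are assumptions that vary per machine:

```csharp
using System;
using System.IO.Ports; // built into .NET Framework; a NuGet package on .NET Core/5+

// Reads raw potentiometer values streamed by an Arduino and normalises
// them to a 0..1 focus ring position.
class FocusRingReader
{
    static void Main()
    {
        using var port = new SerialPort("COM3", 115200); // port name is machine-specific
        port.Open();
        while (true)
        {
            if (int.TryParse(port.ReadLine().Trim(), out int raw))
            {
                float normalized = raw / 1023f; // 0.0 = one end of the focus throw, 1.0 = the other
                Console.WriteLine($"focus ring position: {normalized:F3}");
            }
        }
    }
}
```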

 

50 minutes ago, BTM_Pix said:

At the serious end, the output will need a card such as one of BM's and a display device such as a projector, so that live in-camera compositing can be done.

One piece that I haven't researched at all is how to actually project the image. I don't know what type of projector or screen to use, how bad the color cast will be, or what all of it would cost. An LCD would be the cleanest way to go, but for any size at all that's a hefty investment. I once did a test with a background LCD screen using lego stopmotion, so that the screen would be a sufficient size, but that was long before I had any idea what I was doing.

 

Enjoying the write up so far!


  • Super Members
40 minutes ago, KnightsFan said:

I wonder how accurate or cost effective it would be to instead mount one of these on your camera. https://www.stereolabs.com/zed/ I keep almost buying a Zed camera to mess around with, but I've never had a project to use it in. Though if you already have a Vive ecosystem, a single tracker isn't a ton more money.

 

Yeah, I have seen some people looking to integrate them and similar devices (like the now sadly discontinued Intel RealSense cameras), and they do have Unreal Engine integration.

At $499 for the new Zed 2i, they work out a fair bit cheaper than a start-from-scratch Vive system.

For now, the conservative option is the Vive: so many people use it that it's easier to find resources for, and it will likely be easier for me to integrate, but I'll see where I'm at when the time comes.

46 minutes ago, KnightsFan said:

You can make your own focus ring rotation sensor with a potentiometer and an Arduino for next to nothing. (As you might be able to tell, I'm all for spending way too much effort on things that have commercial solutions).

The lens control aspect is the product development project I'm targeting to emerge from this journey with.

Well, the first one anyway.

49 minutes ago, KnightsFan said:

One piece that I haven't researched at all is how to actually project the image. I don't know what type of projector or screen to use, how bad the color cast will be, or what all of it would cost. An LCD would be the cleanest way to go, but for any size at all that's a hefty investment. I once did a test with a background LCD screen using lego stopmotion, so that the screen would be a sufficient size, but that was long before I had any idea what I was doing.

The idea of having the enormous OLED walls is obviously a pipe dream, but I think the peak at this level for me would be one of the 4K ultra short throw laser projectors like the LG CineBeam etc.

Sat 38cm off the wall, they'll give a 2.2 metre screen size, going up to just under 4 metres at about 50cm.

Price is roughly €2-2.5K for that type of projector (plus screen), so I'd have to be very keen on this to get one, but in an era when no one bats much of an eyelid at €6K hybrid cameras it's not really that eye watering a sum to invest.

Particularly as, if it all turns to shit, you can always have fantastic movie nights with it in even the smallest of spaces!

Until then, I'll have a mess about experimenting with my current little projector.

At this point, not to mention for many points afterwards, I'm more concerned with the functional aspect than any semblance of quality!


On 1/24/2022 at 11:32 PM, Andrew Reid said:

There are some big studios in Australia now that really push the boat out and make all sorts. I dare say they can sell a lot of their virtual assets to other studios as well!

 

I can rent you some of my back yard for cheap. 🤔


9 hours ago, KnightsFan said:

How accurate does it need to be? There are open source repos on github for real time, full body motion tracking based on a single camera which are surprisingly accurate https://github.com/digital-standard/ThreeDPoseUnityBarracuda. It's a pretty significant jump in price up to an actual mocap suit, even a "cheap" one.

The big drive for resolution in Hollywood comes from the VFX teams, who require the resolution for getting clean keys but also for tracking purposes. I've heard VFX guys talk about sub-pixel accuracy being required for good trackers, as by the time you use that information to composite in 3D elements, which could be quite far into the background, the errors can add up. Obviously each technical discipline wants to do its job as well as it can, and people do over-engineer things for margins of safety, but I got the impression that sub-pixel accuracy was in the realm of what was required for things to look natural.

The human visual system and its spatial capability are highly refined and not to be underestimated, though of course this is context-dependent. If you were doing a background replacement on a hand-held closeup shot involving relatively in-focus mountains, then a tiny amount of rotation would cause a large offset in the background and be quite visible. Altering the background of a shot with moderately shallow DoF, where the camera only moves on a slider, would be a far less critical task.
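As a rough back-of-envelope of that sensitivity (illustrative numbers only): for a distant background, the shift is approximately (angular error / horizontal FOV) x image width.

```csharp
using System;

// Back-of-envelope: how many pixels a distant background shifts per
// degree of untracked pan. All figures are illustrative, not measured.
class RotationOffset
{
    static void Main()
    {
        double fovDeg = 40.0;    // horizontal field of view
        double widthPx = 3840.0; // image width in pixels
        double errorDeg = 0.05;  // pan tracking error in degrees

        double offsetPx = errorDeg / fovDeg * widthPx;
        Console.WriteLine($"{errorDeg} deg of pan error ~ {offsetPx:F1} px of background shift");
        // => 0.05 deg ~ 4.8 px: far above the sub-pixel target for an in-focus background
    }
}
```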


8 hours ago, kye said:

The big drive for resolution in Hollywood comes from the VFX teams, who require the resolution for getting clean keys but also for tracking purposes.....If you were doing a background replacement on a hand-held closeup shot involving relatively in-focus mountains, then a tiny amount of rotation would cause a large offset in the background and be quite visible.

Sorry, I made a typo: I was talking about motion capture in my response to Jay, not motion tracking. The repo I linked to can give pretty good results for full body motion capture. Resolution is still important there, but not in the sense of background objects moving. As with any motion capture system, there will be some level of manual cleanup required.

For real time motion tracking, the solution is typically a dedicated multicam system like Vive trackers, or a depth sensing camera. Not a high resolution camera for VFX/post style tracking markers.


  • Administrators
21 hours ago, BTM_Pix said:

From the front of our eyeball to the end of the background, we are going to need the following:

  1. Camera
  2. Lens
  3. Tracking system for both camera and lens
  4. Video capture device
  5. Computer
  6. Software
  7. Video output device
  8. Screen

No problem with 1 and 2, obviously, but things start to get more challenging from item 3 onwards.

In terms of the tracking system, we need to be able to synchronise the position of our physical camera in the real world to the synthetic camera inside the virtual 3D set.

There are numerous ways to do this but at our level we are looking at re-purposing the Vive Tracker units mounted on top of the camera.

[Image: HTC Vive Tracker]

Could a simple accelerometer, gyro and lidar module on top of the camera feed the same info into the computer?

If the camera always begins from the centre of a big room, you could measure the distance to all 4 walls to see where the camera has moved.

Smartphone sensors could measure tilt, orientation, etc.

Or programme a lidar sensor (or 4) to keep track of a pattern on the camera, tracking its movement and calculating its location within the physical space?

21 hours ago, BTM_Pix said:

To be honest, even if I can get my addled old brain to advance things to the stage where I want to use it, it's unlikely that the budget would stretch to full-on in-camera compositing, so it will be green screen all the way!

The budget version would have to be a smartphone camera, wouldn't it!

They have the sensors, wireless communications, software side already in one device.

It all just needs bridging and feeding into the computer.

21 hours ago, BTM_Pix said:

OK, so the next step is to install Unreal Engine and see what I can get going with what I have to hand before putting any money into what might be a dead end/five minute wonder for me.

Maybe try Unity on a Mac as well.

Not as graphically spectacular, but it might be less daunting to get going.

Are you going to hire a space and paint the walls green? 🙂

Or just a proof of concept tech wise?


You can't get accurate position data from an accelerometer; the errors from integrating twice are too high.
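A quick simulation shows the scale of the problem (the bias figure here is an assumption, and an optimistic one for a consumer IMU):

```csharp
using System;

// Why double integration fails: a tiny constant accelerometer bias
// (0.01 m/s^2, roughly 1 milli-g) integrated twice grows quadratically
// into position error within seconds.
class DriftDemo
{
    static void Main()
    {
        double bias = 0.01; // m/s^2 of unmodelled bias
        double dt = 0.005;  // 200 Hz sample rate
        double v = 0, x = 0;

        for (double t = 0; t <= 10.0; t += dt)
        {
            v += bias * dt; // first integration: velocity drifts linearly
            x += v * dt;    // second integration: position drifts quadratically
        }
        Console.WriteLine($"position error after 10 s: {x:F2} m"); // ~0.5 m
    }
}
```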

It might depend on the phone, but orientation data in my experience is highly unreliable. I send orientation from my phone to my PC for real-world manipulation of 3D props, and it's nowhere near accurate enough for camera tracking. It's great for placing virtual set dressings, where small errors can actually make things look more natural. There's a free Android app called Sensors Multitool where you can read some of the data. If I set my phone flat on a table, the orientation data "wobbles" by up to 4 degrees.

But in general, smartphones are woefully underutilized in so many ways. With a decent router, you can set up a network and use phones as mini computers to run custom scripts or apps anywhere on set - virtual or real production. Two-way radios, reference pictures, IP cams, audio recorders, backup hard drives, note taking - all for a couple of bucks second hand on eBay.

