My Journey To Virtual Production


BTM_Pix
1 hour ago, Andrew Reid said:

Could a simple accelerometer, gyro and lidar module on top of the camera feed the same info into the computer?

If the camera always begins from the centre of a big room, you could measure the distance to all 4 walls to see where the camera has moved.

Smartphone sensors could measure tilt, orientation, etc.

Or programme a lidar sensor (or 4) to keep track of a pattern on the camera, track the movement and calculate its location within the physical space?

Yes, there are a number of possibilities to do it differently with integrations of different bits and pieces.

The advantage of the Vive for most people is that it is an off the shelf solution.

Some of us have shelves with different things on though 😉 
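
(As an aside, the "measure the distance to the walls" idea reduces to very simple arithmetic in a rectangular room, but it also shows why you'd want the IMU: the moment the camera pans away from square-on, the distances stop adding up. A rough illustrative sketch of that idea only:)

```python
# Rough sketch of the "distance to the walls" idea for a rectangular room.
# Purely illustrative: assumes four rangefinders aimed squarely at each wall,
# and a room measured as width x depth in metres.
ROOM_WIDTH = 6.0   # left wall to right wall
ROOM_DEPTH = 4.0   # front wall to back wall

def camera_position(d_left, d_right, d_front, d_back, tolerance=0.1):
    """Estimate (x, y) from the four distances, with a basic sanity check.

    If the rangefinders aren't perpendicular to the walls (camera panned or
    tilted), opposite distances won't sum to the room size, so flag it
    rather than return a bogus position.
    """
    if abs((d_left + d_right) - ROOM_WIDTH) > tolerance:
        raise ValueError("left/right distances inconsistent - camera not square to the walls?")
    if abs((d_front + d_back) - ROOM_DEPTH) > tolerance:
        raise ValueError("front/back distances inconsistent - camera not square to the walls?")
    return d_left, d_front

print(camera_position(2.0, 4.0, 1.5, 2.5))  # -> (2.0, 1.5)
```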

1 hour ago, Andrew Reid said:

Budget version would have to be a smartphone camera wouldn't it!

They have the sensors, wireless communications, software side already in one device.

Just all needs bridging up and feeding into the computer.

Not with my ancient steam powered Samsung it wouldn't !

But yeah, this is what I was referring to about how much shit you need to cobble together to reach the start line whereas the new generations of smartphones have all of the components built in.

Including the background removal so you don't have to use chroma key.

1 hour ago, Andrew Reid said:

Maybe try Unity on a Mac as well.

Not as graphically spectacular but might be less daunting to get going.

Not sure about the depth of support for it in Unity.

Or, more importantly, the ratio of people that would be using it for VP versus those using Unreal Engine which means less support materials and user experience to help the journey along.

1 hour ago, Andrew Reid said:

Are you going to hire a space and paint the walls green? 🙂

Or just a proof of concept tech wise?

Fuck that.

This entire project is just a thinly veiled excuse to sneak a 4K ultra short throw laser projector into the house under the premise of essential R&D.

Until then, it'll be one of those pop-up green screens that are more difficult to get back in their enclosure than a ship in a bottle !

7 hours ago, KnightsFan said:

Sorry I made a typo, I was talking about motion capture in my response to Jay, not motion tracking. The repo I linked to can give pretty good results for full body motion capture. Resolution is still important there, but not in the sense of background objects moving. As with any motion capture system, there will be some level of manual cleanup required.

For real time motion tracking, the solution is typically a dedicated multicam system like Vive trackers, or a depth sensing camera. Not a high resolution camera for VFX/post style tracking markers.

I'm not really familiar with the difference between motion tracking and capture (am now - I just looked it up!) but obviously I'm at the edge of my knowledge 🙂

One thing that stands out about the difference would be that the accuracy of motion tracking the camera/lens/focus ring would have to be spectacularly higher than for motion capture of an object in the frame.  Unless the sensors for these things were placed very close to the camera, which would limit movement considerably.

I guess we'll see though - @BTM_Pix usually delivers so I wouldn't be surprised to see a working prototype in a few days, and footage a few days after that!


16 hours ago, kye said:

One thing that stands out about the difference would be that the accuracy of motion tracking the camera/lens/focus ring would have to be spectacularly higher than for motion capture of an object in the frame.  Unless the sensors for these things were placed very close to the camera, which would limit movement considerably.

It's still easy to spot an object sliding around in the frame, especially if it's supposed to be fixed to the ground. And motion capture usually means capturing the animation of a person, ie how their arms, legs, and face move, and we're especially adept at noticing when a fellow human moves incorrectly. So for foreground character animation, the accuracy has to also be really high. I believe that even with high end motion capture systems, an animator will clean up the data afterwards for high fidelity workflows like you would see in blockbusters or AAA video games.

The repo I posted is a machine learning algorithm to get full body human animation from a video of a person, as opposed to the traditional method of a person wearing a suit with markers tracked by an array of cameras fed through an algorithm someone wrote manually. It has shockingly good results, and that method will only get better with time--as with every other machine learning application! Machine learning is the future of animation imo.

For motion tracking, Vive is exceptional. I've used a lot of commercial VR headsets, and Vive is top of the pack for tracking. Much better than the inside-out tracking (using built-in cameras instead of external sensors) of even Oculus/Meta's headsets. I don't know what the distance limit is; I've got my base stations about 10 ft apart. For me and my room, the limiting factor for a homemade virtual set is the size of the backdrop, not the tracking area.


On 1/25/2022 at 1:23 PM, BTM_Pix said:

The idea of having the enormous OLED walls is obviously a pipe dream but I guess the peak at this level for me would be to use one of the 4K Ultra Short Throw laser projectors like the LG CineBeam etc.

Sat off the wall at 38cm they'll give a 2.2 metre screen size and go up to just under 4 metres at about 50cm.

Price is roughly €2-2.5K for that type of projector (plus screen) so I'd have to be very keen on this to get one, but in an era when no one is batting much of an eyelid at €6K hybrid cameras it's not really that eye-watering a sum to invest.

Great topic! I have been researching projectors for just such an application. Here are some of my findings.

1. Laser projectors have better contrast ratios than lamp-based projectors, so laser projector blacks are less grey, which is preferred.

2. Laser projectors have better color consistency, degrade predictably and uniformly over time, and are typically good for 20,000 hours, so laser is preferred over lamp as a projector light source.

3. You can damage your vision by looking into a laser projector lens (when it is on, of course), so the safest configuration when shooting human or animal subjects is a short throw projector placed behind the subject, or rear projection. Some laser projectors have proximity sensors and dim when an object gets too close to the lens; others do not.

4. DLP vs LCD. DLP can produce a rainbow effect when filming the projected image so LCD is preferred.

5. DLP vs LCD. LCD has better color brightness and saturation over DLP so LCD is preferred.

6. Home vs Commercial Projectors. Some commercial projectors offer "projection mapping" capabilities, which is a sophisticated means of combining the images of multiple projectors, blending the overlapping image edges and compensating for geometric issues as well as offering image calibration across multiple projectors. If you plan to use more than one projector to make a single large image, you will want commercial projectors with projection mapping capabilities.

7. Lens shift and throw distance. Short throw projectors seem to be designed with a narrow throw distance range and limited (if any) optical/physical lens shift capability. Many commercial grade projectors have removable lenses with remotely operated motorized lens shift, zoom and focus. For creative and projection mapping applications, a commercial projector with a detachable lens provides greater placement and screen size options.

8. "4k" projection . True 4k LCD Laser projectors are far more expensive than 4k enhanced LCD projectors that use LCD or DPL imaging element shifting to create the perception of about half the resolution of true 4k (like the LG Cinebeam or Epson LS500 do).

Here's what a 3LCD (1920x1200 native) ultra short throw laser projector with 4,000 lumens of light output looks like under trade show conditions, front projected and rear projected. It is the Epson PowerLite 700U and is about $1,400 refurbished.

For those who would like to consider using multiple projectors and doing projection mapping, the Epson PowerLite L1100U (6,000 lumens; it can take a 4K signal but does not do the 4K enhancement when projection mapping) would be a good entry point at about $2,300 each refurbished. Here's a video of how a room looks mapped with multiple projectors.

I imagine an ALR (ambient light rejecting) screen would help in trying to achieve a useable ratio of screen light to subject light along with higher ISO cameras.
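
As a rough back-of-the-envelope on that screen-light-to-subject-light point (my own illustrative numbers, assuming a gain-1.0 matte screen behaving roughly as a Lambertian surface, and no ambient spill):

```python
# Back-of-envelope screen brightness from projector lumens, assuming a
# gain-1.0 matte (roughly Lambertian) screen and ignoring ambient light.
import math

def screen_nits(lumens, screen_width_m, aspect=16 / 9, gain=1.0):
    screen_area = screen_width_m * (screen_width_m / aspect)  # m^2
    illuminance = lumens / screen_area                        # lux falling on the screen
    return gain * illuminance / math.pi                       # cd/m^2 (nits) reflected back

# 4,000-lumen UST projector filling a 2.2m-wide image
print(round(screen_nits(4000, 2.2)))   # ~470 nits

# The same projector stretched to a 4m-wide image
print(round(screen_nits(4000, 4.0)))   # ~140 nits
```

So the bigger the image, the more the subject lighting and camera ISO have to meet the screen somewhere in the middle, which is exactly where an ALR screen and a faster camera help.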

@BTM_Pix such a great topic. Thank you! I will be following this thread closely.


Fascinating stuff! As an occasional gamer who recently acquired a PS5, I can only say games using the UE engine and other next-gen engines (Detroit, God of War etc) are simply mind-blowing with the realism of the environments and real-time rendering of bokeh, rack focusing, lens flare, lighting etc.

Also having recently produced a filmed project with augmented 3D assets done in Blender/Unity, this topic is of interest. It was fascinating to me how seamlessly my VFX unit pulled the metadata from the RAW filmed footage to recreate the scene environments respecting focal length, aperture, WB etc. 

VP is completely next-level; here's another ILM promo from season 2 where they took things even further:

Mega-million budget for sure but it would sure be interesting to see if/how this tech can somewhat trickle down.. 

 


Step 3 - Start your engines..

So, a few days on and where are we up to ?

I'm happy to say that we have moved forward but, obviously, it's a very long journey so the extent of that movement is all relative.

I'm happy to see though that @majoraxis has put together a good primer regarding the projectors for when (or if 😉 ) I get that far.

First step forward was installing Unreal Engine and the first decision there was whether to use the current version 4 (4.27 to be precise) or the early access version of the upcoming version 5.

Both versions are free so the choice for me really was to stick with the current v4.27 release version, as it's more of a known entity and the learning resources are more plentiful, which is important whilst I'm just paddling in the shallows.

There is also less of a resource requirement in terms of the machine to run it on which, as I'm using a MacBook Pro that is really long in the tooth, is a big factor.

Going into this, I am well aware of the limitations of the 1.5GB built-in Intel graphics of my MacBook so there is no point trying to explore what v5 brings to the table as, judging by the hovercraft noises coming from it when I'm running v4.27, I'm guessing running v5 would require a fire extinguisher to be on hand.

You can have both versions installed on your machine though so if you have a suitable machine then go for it.

Just to briefly touch on why v5 is a big deal, the primary aspects are two elements called Nanite and Lumen which offer huge advances in terms of detail and lighting control as discussed here :

 

Obviously, v5 will be the ultimate destination but that's some way off yet, particularly considering that a wallet-damaging computer upgrade will be required.

As what I'm interested in doing first is applicable to both versions, it can wait anyway and I'll be working in v4.27.

OK, so just circling back to the fundamentals of what the hell Unreal Engine is and using very broad brush strokes to discuss it... (apologies for the baby steps)

It started as the engine used to produce a game called Unreal back in the late 90s (the clue was always in the name 😉 ) and was then licensed to other game developers over the subsequent years under various licensing models, to its current status where it is royalty-free for products grossing less than $1m.

Which means it's definitely completely free for us !

You can read more about its history here 

https://en.wikipedia.org/wiki/Unreal_Engine

Such is its longevity, ubiquity and support of all gaming platforms, it's fairly likely that if you have played any games more graphically challenging than Pong in the past few decades, you have already experienced something created with Unreal Engine.

If, like me, you have been playing more recent titles such as those that have had you running around Midgar lashing Materia at anything with a Shinra logo on it then you will have noticed how much more cinematic everything is.

There have always been pre-rendered cut scenes that looked cinematic of course, but now the real-time in-game content is also taking on an increasingly cinematic look, thanks to the simultaneous advancement of the machines we are playing on and of the tools in Unreal Engine to simulate the aesthetic.

Bringing this down to two very basic elements, Unreal Engine offers you the ability to build and light the set and then provides you with the camera with which to capture it.

In real time.

And of course, as this emulates the exact paradigm of real world production, the advancement in those two elements is what has sparked the interest in using it for virtual production.

From my point of view, the creation of the set is secondary at this point as I don't have the skills or the time at the moment to be creating the assets but, fortunately, I don't have to as there is plenty of starter content that is freely available from the Marketplace for Unreal Engine that we can use to get going.

For me, this is all about the virtual camera as this is what we will see the scene through and what we will be looking to match to a real camera so all the initial work I've been doing is using very simple sets and seeing what I can do in terms of operating the virtual camera.

The virtual camera has all the same elements of a real camera in that you can not only move its position, change the lens focal length and aperture etc but also control its processing elements such as white balance, ISO and shutter speed.

In the parlance of Unreal Engine, this virtual camera is referred to as a Cine Camera Actor and here is an example of how you see it within the editor.

[Screenshot: a Cine Camera Actor in the Unreal Engine editor, with its generated viewport shown in the bottom right]

So as you can see in this example, we have the Cine Camera Actor pointing at the figure inside the set that has been created and in the bottom right you can see its generated viewport of the scene based on its current settings.

As with the real camera it emulates, any changes you make to its position, lens or processing settings are reflected in that generated viewport in real time.

So, if we change the focal length to be wider then we will get the matching field of view, if we move the position of the camera we will see a different part of the scene, if we open the aperture we will get a shallower depth of field and so on, exactly as it would with a real camera.
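
(As a side note for anyone who prefers scripting to clicking around the Details panel, the same settings can be poked at through the editor's Python scripting. This is just a rough sketch of mine, assuming UE 4.27 with the Python Editor Script Plugin enabled; property names can differ slightly between engine versions.)

```python
# Rough sketch only: spawning and configuring a Cine Camera Actor from the
# editor's Python console (UE 4.27, Python Editor Script Plugin enabled).
import unreal

# Drop a Cine Camera Actor into the level, 1.5m up at the origin
cam_actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor,
    unreal.Vector(0.0, 0.0, 150.0),   # Unreal units are centimetres
    unreal.Rotator(0.0, 0.0, 0.0))

# The camera/lens settings live on the actor's CineCameraComponent
cam = cam_actor.get_component_by_class(unreal.CineCameraComponent)

# "Lens" side: shorter focal length = wider field of view,
# larger aperture (smaller f-number) = shallower depth of field
cam.set_editor_property("current_focal_length", 24.0)   # 24mm
cam.set_editor_property("current_aperture", 2.8)        # f/2.8

# Exposure and white balance (ISO, shutter speed, colour temperature) live in
# the post-process settings on the same component and are editable the same way.
```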

How we make those changes in real time as we would with that real camera will be covered in the next enthralling episode 😉 

 


On 1/24/2022 at 4:16 PM, Jay60p said:

....

Stonemason is another great DAZ modeler for virtual interiors/exteriors:

https://www.daz3d.com/the-streets-of-venice

These models are generally less than $50 and are getting more detailed every year.

 

Right now Stonemason's models are on sale at 55% off, or 65% off if you buy 3 or more, which makes them about $15 each.

Some of his models can use the free DAZ Studio "Bridge to Unreal Engine" (or Unity or Blender etc) translation.

Stonemason is the best modeler at DAZ for 3D exterior scenes, from historical to futuristic.

These sales are brief.

55 minutes ago, Django said:

@BTM_Pix You've surely seen this already but I guess it deserves being posted as it's basically an identical project (VP with Vive/UE and short-throw projector):

Yes, funnily enough it was a toss up between that one and the one in the original post as regards an example of the destination.

Although it's the steps I take to get there, and whether I branch off somewhere else, that remain to be seen.

My ultimate end point with this may not be using it myself creatively at all, but rather ending up having developed tools for others to do it.

Incidentally, the posts I've made so far are summary recaps and are about two weeks behind real time so there has been some decent progress on that front already that will be coming up soon.


Thanks to this topic, I dug a little further into virtual backgrounds myself. I used this library from GitHub, and made this today. It's tracked using an Oculus controller velcro'd to the top of my camera. "Calibration" was a tape measure and some guesstimation, and the model isn't quite the right scale for the figure. No attempt to match focus, color, or lighting. So not terribly accurate, but imo it went shockingly well for a Sunday evening proof of concept.

https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest.mp4


...and today I cleaned up the effect quite a bit. This is way too much fun! The background is a simple terrain with a solid color shader, lit by a single directional light and a solid color sky. With it this far out of focus, it looks shockingly good considering it's just a low poly mesh colored brown!

https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest2.mp4

It's a 65" tv behind this one. You can vaguely see my reflection in the screen haha.


7 minutes ago, KnightsFan said:

...and today I cleaned up the effect quite a bit. This is way too much fun! The background is a simple terrain with a solid color shader, lit by a single directional light and a solid color sky. With it this far out of focus, it looks shockingly good considering it's just a low poly mesh colored brown!

https://s3.amazonaws.com/gobuildstuff.com/videos/virtualBGTest2.mp4

It's a 65" tv behind this one. You can vaguely see my reflection in the screen haha.

When you say "I cleaned up the effect quite a bit", is that in reference to making the movement of the background more fluid?  If so, if you could explain basic concept of how to you when from the back ground motion of BGTest to the smooth background motion of BGTest 2, that would be appreciated. Thanks!


27 minutes ago, majoraxis said:

When you say "I cleaned up the effect quite a bit", is that in reference to making the movement of the background more fluid?  If so, if you could explain basic concept of how to you when from the back ground motion of BGTest to the smooth background motion of BGTest 2, that would be appreciated. Thanks!

The main conceptual thing was I (stupidly) left the rotation of the XR controller on in the first one, whereas with camera projection the rotation should be ignored. I actually don't think it's that noticeable a change since I didn't rotate the camera much in the first test. It's only noticeable with canted angles. The other thing I didn't do correctly in the first test was that I parented the camera to a tracked point but forgot to account for the offset, so the camera pivots around the wrong point. That is really the main thing that is noticeably wrong in the first one.
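
(To put that offset point into a sketch of my own, rather than KnightsFan's actual code: the tracker's orientation is still needed to rotate the tracker-to-sensor offset into world space, even though the orientation itself isn't applied to the virtual camera when projecting onto a fixed, flat screen. The offset vector and quaternion below are made-up example values.)

```python
# Illustrative sketch of applying a tracker-to-sensor offset. Assumes the
# tracker reports position plus orientation (as an x,y,z,w quaternion) in
# the same world frame as the virtual scene.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Measured once with a tape measure: where the camera's sensor sits relative
# to the tracker, in the tracker's local axes (metres). Example values only.
TRACKER_TO_SENSOR = np.array([0.0, -0.04, -0.11])

def virtual_camera_position(tracker_pos, tracker_quat_xyzw):
    """Return the position to drive the virtual camera with.

    The offset is rotated into world space using the tracker's orientation,
    but the orientation itself is NOT applied to the virtual camera: with a
    fixed, flat background screen the projection depends only on where the
    sensor sits relative to the screen.
    """
    world_offset = R.from_quat(tracker_quat_xyzw).apply(TRACKER_TO_SENSOR)
    return np.asarray(tracker_pos) + world_offset

# Example: tracker at chest height, tilted slightly downward
print(virtual_camera_position([0.2, 1.3, -0.5], [0.13, 0.0, 0.0, 0.99]))
```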

Additionally, I measured out the size of the objects so the mountains are scaled correctly, and I measured the starting point of the camera's sensor as accurately as I could instead of just "well this looks about right." Similarly I actually aligned the virtual light source and my light. Nothing fancy, I just eyeballed it, but I didn't even do that for the first version. That's all just a matter of putting the effort in to make sure everything was aligned.

The better tracking is because I used a Quest instead of a Rift headset. The Quest has cameras on the bottom, so it has an easier time tracking an object held at chest height. I have the headset tilted up on my head to record these, so the extra downward FOV helps considerably. Nothing really special there, just better hardware. There were a few blips where it lost tracking--if I was doing this on a shoot I'd use HTC Vive trackers, but they are honestly rather annoying to set up, despite being significantly better for this sort of thing.

Also this time I put some handles on my camera, and the added stability means that the delay between tracking and rendering is less noticeable. The delay is a result of the XR controller tracking position at only 30 Hz, plus HDMI delay from PC to TV, which is fairly large on my model. I can reduce this delay by using a Vive, which has 250 Hz tracking, and a low-latency projector (or screen, but for live action I need more image area). I think in the best case I might actually get sub-frame delay at 24 fps. Some gaming projectors claim <5ms latency.
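
(Putting rough numbers on that sub-frame claim, with the TV latency being an assumed figure rather than a measurement of this setup:)

```python
# Quick latency budget for tracked-camera background rendering (rough figures).
frame_24fps_ms = 1000 / 24            # ~41.7 ms budget for one frame at 24 fps
print(frame_24fps_ms)

tracking_30hz_ms = 1000 / 30          # worst-case age of a 30 Hz tracking sample
tracking_250hz_ms = 1000 / 250        # worst-case age of a 250 Hz (Vive) sample

display_tv_ms = 40                    # assumed: a typical TV outside game mode
display_gaming_projector_ms = 5       # what low-latency gaming projectors advertise

print(tracking_30hz_ms + display_tv_ms)                  # ~73 ms: nearly two frames behind
print(tracking_250hz_ms + display_gaming_projector_ms)   # ~9 ms: comfortably sub-frame
```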


  • 2 weeks later...

This topic has made me mad... 

Unreal Engine is very complicated to understand, and there are issues every 2 minutes with everything you want to do if you are not an expert... 

 

I've watched many tutorials on YouTube, starting with the Oculus Quest (which I have) and others, but today I've maybe found my personal workflow... 

 

Unreal 4.27 and the Virtual Plugin for Android ($80), very easy to set up, but at the moment there are issues with anything other than 24p (I want 25p and there's a bug). 

 

I really don't want to use Composure to merge the virtual and the real, so I have chosen to add two tracks in DaVinci Resolve (the Unreal world and the real subject with a matte), synced using the music waveform. 

 

The matte will come from RunwayML (€15 a month for 1080p) and I'm still forced to use some green screen in the background (without lights) to help the rotoscoping... This way I can move around my subject (my 5-year-old son) in my living room without divorcing my wife. 

 

In the next few days maybe I will be able to post some samples... 


This is a super simple example I'm working on at the moment.

 

I want to understand how well the real world and virtual world can match without any effort, like in this case... sorry for the low resolution and bad mask but it's a very quick job.

 

The Unreal camera is set to 21mm and my camera is 12mm on APS-C (18mm equivalent); it's pretty similar but not perfect...
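
(A quick way to see how close the two fields of view really are is to compare horizontal FOV, which depends on the filmback/sensor width as well as the focal length. The numbers below are illustrative; the Unreal figure depends on whatever filmback the Cine Camera Actor is set to, and adjusting that is usually the quickest way to make the two match.)

```python
# Quick field-of-view comparison; illustrative numbers only.
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(23.5, 12))    # real camera: 12mm on a ~23.5mm-wide APS-C sensor, ~89 degrees
print(horizontal_fov_deg(23.76, 21))   # virtual camera: 21mm on a 23.76mm-wide filmback, ~59 degrees
```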

 

Does anyone know how to stabilise the footstool to a fixed point in Fusion, then track the Unreal world and assign its coordinates to the footstool?


 

In this example I used a Python script that gives you perfectly outlined images without any green screen. The camera must be placed on a tripod, and then a background image without the subject is enough and the script does everything else... not 100% reliable, but when it works it's really perfect...
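
(For reference, the general idea behind that kind of script is plain background subtraction from a clean plate. This is a minimal OpenCV sketch of the concept, not the actual script used here; real results need a locked-off camera, stable lighting and some mask clean-up.)

```python
# Minimal background-subtraction matte from a clean plate (illustrative only).
import cv2
import numpy as np

background = cv2.imread("empty_room.jpg")        # plate shot without the subject
frame = cv2.imread("frame_with_subject.jpg")     # frame to matte

diff = cv2.absdiff(frame, background)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)

# Clean up speckle and small holes before using the mask as a matte
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("matte.png", mask)
```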

I then used the "Composure" plugin in Unreal to try to put the real and the virtual into the same perspective...

 

Now I'm experimenting with special effects inserted directly into Unreal...


On 1/24/2022 at 11:59 AM, BTM_Pix said:

In the first instance, it will be about simple static background replacement compositing but will move on to more "synthetic set" 3D territory both for narrative but also multi camera for virtual studio applications.

There will also be a side aspect of creating 3D assets for the sets but that won't be happening until some comfort level of how to actually use them is achieved !

And the ultimate objective ?

I'm not expecting to be producing The Mandalorian in my spare bedroom but I am hoping to end up in a position where a very low-fi version of the techniques used in it can be seen to be viable. I'm also hopeful that I can get a two camera virtual broadcast studio setup working for my upcoming YouTube crossover channel Shockface which is a vlog about a gangster who sells illicit thumbnails to haplessly addicted algorithm junkies.

I'm fully expecting this journey to make regular stops at "Bewilderment", "Fuck Up", "Disillusionment" and "Poverty" but if you'd like to be an observer then keep an eye on this thread as it unfolds.

Or more likely hurtles off the rails.

Oh and from my initial research about the current state of play, there may well be a product or two that I develop along the way to use with it too...

I'm at an intermediate level with Unreal in general, though I've not used VP templates yet, but I have made motion graphics pieces and developed small games (including entirely custom player controllers built from the ground up etc) so I'm fascinated by VP as well.

As far as I understand it, it's something where cost spirals very, very quickly as you scale up to multiple screens. With one screen, let's say you buy a 90-inch 4K OLED or something, you can use a consumer GPU and get away with it.

Once you want to sync walls of video screens you'll need to use a Quadro Sync and have colossal amounts of VRAM available. https://www.nvidia.com/en-gb/design-visualization/solutions/quadro-sync/

Like most Unreal things it's WIP and you benefit if you know someone who is good with C++ and Unreal, which is hard to find. For example, the Movie Render Queue in 4.27 is much improved compared to previous versions, but it still has some really hacky and opaque aspects, such as the need to enter arbitrary CVARs to make certain things look right, which you just have to go and find out; it doesn't hold your hand. It's no Resolve...

In general that's the hardest part. I've been on the Unreal journey for 3 years and it has never gotten easier, mainly because there is always something new to learn and stuff just keeps changing. It's very rewarding though, good luck and please keep us updated!


In case you were wondering where I'm up to...

Currently at the integration of some LIDAR, some Bluetooth, some WiFi, some Serial, some 9 axis IMU, some camera, lens and imaging control, some electrical tape and more than some coffee stage of proceedings.

[Photo: the work-in-progress hardware]

It's weeks ahead of the last update so I'll do a proper update at some point about the twists and turns of the journey up to this point.

 

