Nathan Solomon

Everything posted by Nathan Solomon

  1. I don't know if it's of interest to anyone, but I took the information here as a starting point for exploring the use of D-mount lenses. I tried to document the process as well as I could, so it's probably pretty boring to anyone not specifically interested, but I hope it's of use to those who are. By the end of the month I should be done with the build, and I'll probably have two more posts to write to cover the completion. http://coldmaceration.com/camera-conversion-project/
  2. Thanks, IronFilm. I'm familiar with photogrammetry, and I know that up to this point, using multiple cameras in this way has not been practical or worth the effort; and yes, the concept for multiple cameras is for VFX. Also, I was a cameraman for long enough to know the reasons for shooting multicamera, even if it was long enough ago that I only shot 35mm film. 😂 I've been doing projects using Unreal Engine since 2014, including full MR greenscreens compositing 3D interactions before they were enabled by the engine. The basis of this exploration is that advances in the Unreal Engine change the viability of using cameras in this way. I'm just not sure they change it enough to be worthwhile, so the question is more subtle than whether it has been done or whether it was viable under past conditions. It may well not be worth doing, but I don't think you've yet gotten closer to answering that.
  3. Thanks, BTM! It'll take me a while to pull this together, but I'll definitely post when I get it to the next step. I may seek out those Panasonics, or I may swap out the lenses on a few action cameras (I have an adapter design I made for that being fabricated now).
  4. Thanks! Those are some great answers. I really appreciate your insights. The Media Division clip is inspiring; their take on the potential is pretty weird, which is probably appropriate given the level of creative excellence in how they carried out the test. It would not have occurred to me to keep a lens static and move only the sensor in quite that way. Perhaps that was driven by their decision to merge the elements in two dimensions within After Effects? I think I can test the concept by building a frame that will hold six inexpensive 1080p cameras, and then working with my collaborators to develop a workflow that merges and corrects their output in real time (there's a rough sketch of that merge step after this post list), in contrast to the way Media Division simply ensured that the camera maintained perfect alignment. Does anyone have suggestions for a camera to use for this? I'll freely share here whatever I learn from it. Ultimately, I think it would be useful to create a structure similar to an analog (5"x7") view camera, using a number of large, inexpensive sensors, with automated controls moving the front and back panels as well as the lens. Two things make this model especially appealing to me. First, while it would be large, it could be quite lightweight, since the lenses required for this configuration can be very small, and the lenses' coverage of the sensor panel(s) would never be the issue it is with conventional digital cinema cameras; the full control of the optical planes would also be fun, especially in integration with real-time 3D SFX. Second, it would be a lot cheaper than a conventional 6K+ camera.
  5. I'm new to digital cinema cameras (I started my working life as a cinematographer, but have been focused on game and immersive tech for a while), so this may be a dumb question, but I can't find an answer anywhere, and this seems like the place to ask. In projection, there are a number of ways to use arrays of projectors in combination to get a higher-resolution or better-quality image, correcting for anamorphosis, parallax, etc. I've worked with Unreal Engine on projects doing this. It's definitely technically possible to do the same thing with cameras, using the same tools we use for projection, either employing multiple cameras or a single lens/camera with an array of inexpensive sensors. It seems like it could be quite useful, especially in SFX. Is there a reason why this isn't done, or is it in fact being done and I just can't find it? Thanks very much, Nathan
  6. This is the initial design, sent off now for SLA printing (high-res, tough-durable) to test the form and basic dimensions. In the image, I have it unscrewed to show the components, but essentially, I'm thinking the space between the D-mount lens base (screwed atop the red ring) and the bottom of the M12 blue ring is 10mm, with another 2.3mm from the bottom of that to the sensor (a quick arithmetic check of those numbers is after this post list). If anyone already knows that this will fail, feel free to tell me why. Otherwise 🤞🤞🤞
  7. I'm going to dive into some 3D CAD for this and design an M12-to-D-mount adapter. I figure the space savings of it being smaller than an M12-to-C-mount adapter (and tailored to this use) will allow it to be less destructive of the housing. I assume I'll first print it in MJF for form testing, then in metal or carbon resin. If anyone is interested, I'd be glad to have your insights while I'm designing, and I'll give you a final print at whatever it costs me. You can see some of my other projects at http://coldmaceration.com
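
Here's a rough sketch of the real-time merge step I described in posts 4 and 5, assuming OpenCV, six cameras on device indices 0-5, placeholder identity homographies (a real calibration pass would fill these in), and an assumed output canvas size. It's meant to show the shape of the workflow, not to be a finished pipeline.

```python
import cv2
import numpy as np

# Open the six inexpensive 1080p cameras (device indices are assumptions).
captures = [cv2.VideoCapture(i) for i in range(6)]

# Homographies mapping each camera's view onto a shared output plane.
# These are placeholders; in practice they would come from a calibration
# step (e.g. shooting a checkerboard or matching features in the overlaps).
homographies = [np.eye(3, dtype=np.float64) for _ in range(6)]

OUT_W, OUT_H = 5760, 2160  # assumed size of the merged canvas

def merge_frames(frames, homographies, out_size):
    """Warp each frame onto the shared plane and average the overlaps."""
    acc = np.zeros((out_size[1], out_size[0], 3), dtype=np.float32)
    weight = np.zeros((out_size[1], out_size[0], 1), dtype=np.float32)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, out_size)
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        acc += warped.astype(np.float32) * mask
        weight += mask
    return (acc / np.maximum(weight, 1.0)).astype(np.uint8)

while True:
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    if len(frames) < len(captures):
        break
    merged = merge_frames(frames, homographies, (OUT_W, OUT_H))
    cv2.imshow("merged", merged)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

for cap in captures:
    cap.release()
cv2.destroyAllWindows()
```

The averaging in the overlaps is the crudest possible blend; a real version would want lens-distortion correction before the warp, proper feathering or multiband blending across the seams, and GPU processing to hold production frame rates.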
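And a quick arithmetic check of the spacing from post 6, assuming the usually quoted D-mount flange focal distance of 12.29 mm (worth verifying against the actual lens before committing to metal):

```python
# Spacing from post 6 versus the nominal D-mount flange focal distance.
D_MOUNT_FFD_MM = 12.29        # lens seat to image plane (nominal)
ring_stack_mm = 10.0          # D-mount base to the bottom of the M12 blue ring
ring_to_sensor_mm = 2.3       # bottom of the blue ring to the sensor

total_mm = ring_stack_mm + ring_to_sensor_mm
print(f"total: {total_mm:.2f} mm, "
      f"offset from nominal FFD: {total_mm - D_MOUNT_FFD_MM:+.2f} mm")
# total: 12.30 mm, offset from nominal FFD: +0.01 mm
```

So on paper the stack comes out about 0.01 mm long, which is small compared to typical print tolerances.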