Everything posted by KnightsFan

  1. I've been saying for some time (including earlier in this topic) that the roadblock to mainstream VR is bulky equipment. Facebook just announced controller-free hand tracking for the Quest, coming this week. They're talking about the resolution of the recorded 360 video. Mapping an image around for a 360x180 panoramic stream really benefits from 8k since you're only seeing a small portion of it at any time. What platform do you target in VR? My day job involves developing for the Quest, so we have pretty strict hardware limitations. I imagine developing for a traditional headset hooked up to a gaming PC gives you more room for better assets and rendering.
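A quick back-of-the-envelope sketch of the point above about 360 video resolution: only a small slice of an 8K equirectangular frame is in view at once. The ~90-degree FOV figures are assumptions for a typical current headset, not specs of any particular device.

```python
# Estimate how much of an equirectangular 360x180 video a headset shows at
# once. The 90-degree horizontal/vertical FOV values are assumptions.
def visible_fraction(h_fov_deg, v_fov_deg):
    # An equirectangular frame maps 360 degrees horizontally, 180 vertically.
    return (h_fov_deg / 360.0) * (v_fov_deg / 180.0)

def visible_pixels(width, height, h_fov_deg=90, v_fov_deg=90):
    return int(width * height * visible_fraction(h_fov_deg, v_fov_deg))

# An "8K" 7680x3840 pano: only 1/8 of the frame is in view at any moment,
# so the viewer effectively sees just under 4 MP of source pixels.
print(visible_pixels(7680, 3840))  # 3686400
```

That is why 8K acquisition still ends up looking like roughly 1440p to the viewer, and why anything less degrades fast.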
  2. I haven't seen any measurements of the DR of the 5D3 raw, but that seems about right based on my comparisons to the BM 2.5K. I tried that ISO mode a bit, but I didn't like it and never used it on a project; I can't remember exactly why.
  3. Magic Lantern shoots native 14-bit DNG image sequences and puts each sequence into a .MLV container. If you mount the container as a drive, you literally copy/paste DNG frames out of the container--you don't actually have to convert anything. For 12- and 10-bit recording, Magic Lantern truncates the data words, which is why there is a DR penalty (or at least that was the case last time I used it, a few years ago). I can dig up some of my old footage if you can't find any samples online.
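As a rough illustration of the truncation mentioned above: dropping the least significant bits of each raw sample discards shadow precision. This is a deliberately simplified "one bit = one stop" model, not a measurement of what Magic Lantern actually achieves on a real noise-limited sensor.

```python
# Simplified model of the DR cost of truncating raw data words.
# Real sensors are noise-limited, so this is an upper bound, not a measurement.
def truncate(sample_14bit, target_bits):
    # Drop the least significant bits; shadow detail below the new LSB is lost.
    shift = 14 - target_bits
    return sample_14bit >> shift

def max_stops(bits):
    # One bit of precision ~ one stop of encodable range in this naive model.
    return bits

print(max_stops(14) - max_stops(10))      # 4 -- up to 4 stops of shadow range lost
print(truncate(0b00000000001011, 10))     # 0 -- a deep-shadow value collapses
print(truncate(0x3FFF, 10))               # 1023 -- highlights survive intact
```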
  4. Here is the third and final promo video. The first season should come out on Dec. 20.
  5. There are a lot of free SFX packs, including swishes and impacts, on https://opengameart.org/ I also have a hobby of recording sounds and putting them up on my site, I'd be happy to make some specific sounds for you if you'd like.
  6. The first short promo video is out. It's just under 1 minute long. Enjoy!
  7. I'm not sure what you are disagreeing with, my post was about how much physical space a TV set takes up compared to a headset. A large TV is always going to take up a huge amount of wall space, whereas a VR headset can be tiny but give the same impression of a large screen set a reasonable distance away. With a headset, I can experience a virtual 85" TV set 10' away from me, while sitting in the cramped back seat of a car. Our headset technology just isn't there yet.
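The virtual-TV claim above is easy to put in numbers: what matters is the angle the screen subtends, which a headset can reproduce regardless of physical space. This sketch assumes standard 16:9 panel geometry.

```python
import math

# Angular size of a TV screen, in degrees of horizontal field of view --
# the figure a headset would need to reproduce. 16:9 geometry assumed.
def screen_width_in(diagonal_in, aspect_w=16, aspect_h=9):
    return diagonal_in * aspect_w / math.hypot(aspect_w, aspect_h)

def angular_size_deg(diagonal_in, distance_in):
    w = screen_width_in(diagonal_in)
    return math.degrees(2 * math.atan(w / (2 * distance_in)))

# An 85" set viewed from 10 feet (120 inches):
print(round(angular_size_deg(85, 120), 1))  # ~34 degrees of horizontal FOV
```

About 34 degrees is comfortably inside any modern headset's FOV, so the "85-inch TV from the back seat of a car" scenario is limited by display quality, not by geometry.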
  8. I don't know about supplant, but I think VR has huge potential once the physical limitations of current headsets are overcome. Have you ever done anything in VR, and if so, how was that experience for you? The big problem with VR now is that headsets are cumbersome. When I first tried it, you had something like 10 wires running from headset to PC and headset to controllers, and you had to place sensors around the room. Now with the Quest, everything is wireless, but it's still big and heavy to wear on your head for long periods, the refresh rate and resolution aren't quite convincing, and the low specs of the device mean that, at least for game content, graphics aren't great. We aren't going to have 4K fully ray-traced environments on the Quest anytime soon. In the near future, these problems will be solved. Headsets will be wireless, lightweight, and cheap. One problem with flat screens is sheer size: technology is making a 60" 4K TV thinner and lighter, but there is no way around the fact that it takes up 60" of wall space. Even a projector, which itself takes up little physical space, requires space to show the image. On the other hand, if we can get a headset the size and weight of sunglasses, you could watch a 2D movie in a "virtual theater" anywhere, any time. Imagine wearing ordinary-looking glasses that convincingly show a 60" TV at the proper viewing distance, playing traditional film content. You could fold them up and put them in your pocket. You could watch theater-quality images on a plane. You could watch 3D movies with no extra effort or equipment, or watch them lying down. I think that's when we'll see the paradigm shift away from screens towards headsets. Once people already have headsets, I think they will show more interest in actual VR content.
  9. I'm grateful for the abundance of cheap equipment and software. But I'm also grateful to the thousands of people who spend their time freely sharing knowledge on sites like this, from the retired pros with decades of industry experience, to experimental newcomers with more ideas than experience, and everyone in between.
  10. First, 8K is better than 4K for every application if you don't sacrifice frame rate and quality. It's just a question of whether it's worth the extra cost, and at this point the vast majority would agree that 8K's expenses outweigh its benefits compared to 4K, both in cameras and on screens. Exactly. And not just refresh rate, but latency as well. You can feel the lag between moving your head quickly and seeing the change with current Oculus headsets. If I recall correctly, it's something along the lines of 1K resolution per eye on the Quest, and while you can definitely see the pixelation, it is not that distracting when playing games. Yeah, I see higher resolution as more important on the creation side than the viewing side. I'd rather physically move my head to look at different parts of a video timeline than click and scroll back and forth with a mouse.
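Why "1k-ish per eye" looks pixelated can be framed as pixels per degree. The panel resolution and FOV below are nominal assumptions for a Quest-like headset, not measured specs.

```python
# Rough pixels-per-degree comparison between a headset and a TV.
# The 1440 px / 95 degree figures are assumed Quest-like values.
def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

headset_ppd = pixels_per_degree(1440, 95)  # ~15 ppd
# For comparison, a 4K TV filling ~34 degrees of the viewer's vision:
tv_ppd = pixels_per_degree(3840, 34)       # ~113 ppd
print(round(headset_ppd, 1), round(tv_ppd, 1))
```

An order-of-magnitude gap in angular resolution is why pixelation is visible in a headset even when the raw pixel counts sound respectable.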
  11. #millcore is a comedy web series I worked on with some friends several years ago, which we are at long last putting on the actual web. There's a lot of content coming, so I'll keep this updated as the episodes roll out. There is mild language. Hope you enjoy, and feel free to share your thoughts! This was definitely a fun and educational process. It was done with a $0 budget and mostly equipment we owned. There wasn't really a crew-- it was basically written and acted by the people on screen, and I was the crew. If I recall, there were only 2 out of 12 days where there was another person on set who wasn't acting. I shot the entirety of it on my NX1 using a couple prime lenses. We were fortunate enough to borrow lav mics and LED lights.
  12. RED Komodo: Definitely out of my budget, but I am glad to see another foray into global shutter. It sounds like it will be global/rolling switchable, based on the language Land used. I don't think any other cameras have that? Blackmagic tried with the Ursa 4.6K and cancelled it (though their later G2 update has such low rolling shutter it's global in all but name). I wonder if it is a true global shutter sensor, or whether they use some other technology like their old motion mount.
  13. Yes, that makes perfect sense. With most codecs you can losslessly trim a portion out. It can be done with ffmpeg; I've done it for archiving GoPro footage from long events. I don't know whether any mainstream editing software can do this--there is an option to "only re-encode when necessary" in Resolve, but I haven't tried it. On All-I codecs such as ProRes, this should be possible with cuts on any frame, but with inter-frame (long-GOP) codecs, you have to begin your trim on an I-frame. Unlike an uncompressed format like your TIFFs, cutting a portion out of a compressed codec is only lossless if you don't touch the image data. For example, you can't color correct it and maintain complete fidelity with this method. On the other hand, most codecs will not show any visible degradation from a single re-encode. Avoid it when possible, but it's usually not the end of the world if you only do it a few times.
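The I-frame restriction above can be sketched in a few lines. This is a simplified model with a hypothetical fixed GOP length; real encoders vary GOP size, and tools like ffmpeg handle the snapping for you during a stream copy.

```python
# Why long-GOP trims must start on an I-frame: frames inside a GOP only
# decode relative to the I-frame that opens it, so a lossless trim snaps
# the in-point back to the nearest preceding I-frame. Fixed GOP length is
# a simplifying assumption.
def snap_to_i_frame(requested_frame, gop_length=12):
    # I-frames sit at multiples of the GOP length in this simplified model.
    return (requested_frame // gop_length) * gop_length

# Asking to cut at frame 50 with a 12-frame GOP actually starts at frame 48:
print(snap_to_i_frame(50))                 # 48
# All-I codecs behave like gop_length == 1: every frame is a valid in-point.
print(snap_to_i_frame(50, gop_length=1))   # 50
```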
  14. There are lossless formats, including TIFF image sequences. The downside is that file sizes will be astronomical. What type of intermediates are you using, and can you use a proxy workflow instead? Using proxies instead of uncompressed intermediates will have the same fidelity and be a lot less taxing on hard drives.
  15. Right, what I said was that if the screen "doesn't support" a given aspect ratio as a native file, you add the black bars manually and the problem is solved. The idea that a 2.39:1 movie can be shown on a 1.85:1 screen but not vice versa is incorrect.
  16. Any screen can show any aspect ratio. There will be either pillarboxing or letterboxing if the movie doesn't match the screen. If for some reason a screen does not support your file's aspect ratio (e.g. if you picked something non-standard), then you can manually add the black bars yourself. There is no technical reason why a movie should be incompatible with a theater because of aspect ratio.
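The letterbox/pillarbox logic above is just arithmetic; this sketch computes the bar size for any movie-on-screen combination (DCI container dimensions used in the examples).

```python
# Fitting a movie of one aspect ratio onto a screen of another only requires
# black bars; nothing is ever "incompatible". Returns bar type and thickness.
def fit_with_bars(screen_w, screen_h, movie_aspect):
    screen_aspect = screen_w / screen_h
    if movie_aspect > screen_aspect:
        # Movie is wider than the screen: letterbox (bars top and bottom).
        image_h = round(screen_w / movie_aspect)
        return "letterbox", (screen_h - image_h) // 2
    elif movie_aspect < screen_aspect:
        # Movie is narrower: pillarbox (bars left and right).
        image_w = round(screen_h * movie_aspect)
        return "pillarbox", (screen_w - image_w) // 2
    return "exact fit", 0

# A 2.39:1 movie on a 1998x1080 (1.85:1) screen, and the reverse case:
print(fit_with_bars(1998, 1080, 2.39))  # ('letterbox', 122)
print(fit_with_bars(2048, 858, 1.85))   # ('pillarbox', 230)
```

Both directions work, which is the whole point: the wider movie letterboxes, the narrower one pillarboxes, and nothing is cropped.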
  17. If you are sure you won't be slowing it down, then shoot at the frame rate you will display at. Light loss depends on shutter speed as well, so 60p at 1/60 will have identical exposure to 30p at 1/60. On many cameras, higher frame rates are lower quality due to binning, compression, or a different sensor readout. 60p has no advantage if you are just going to convert to a lower frame rate.
  18. The difference would mainly come down to your shutter speed. E.g., if you shoot 60p with a 1/60 shutter, converting to 30p will be pretty similar to shooting 30p with a 1/60 shutter. That assumes your software drops every other frame instead of attempting to blend or interpolate frames, which will look wonky. A good compromise, imo, is shooting 4K60 with something like a 1/100 shutter. If you slow it down, you get slightly more motion blur than normal, and if you speed it up, it's just a little choppier.
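Shutter angle makes the relationship in the two posts above explicit: exposure depends only on shutter speed, while motion blur per frame depends on the angle (shutter speed relative to frame rate).

```python
# Shutter angle = fraction of each frame's duration the shutter is open,
# expressed in degrees (180 is the classic "film look").
def shutter_angle(fps, shutter_speed_denominator):
    return 360.0 * fps / shutter_speed_denominator

# 60p and 30p at 1/60 expose identically (same light per unit time),
# but the blur per frame differs:
print(shutter_angle(60, 60))   # 360.0 -- maximal blur at 60p
print(shutter_angle(30, 60))   # 180.0 -- the classic look at 30p
# The 4K60 + 1/100 compromise from the post above:
print(shutter_angle(60, 100))  # 216.0 -- slightly more blur than 180
```

Dropping every other frame of 60p/1/60 yields 30p frames each blurred over 1/60 s, i.e. a 180-degree look at 30p, which is why the conversion looks natural.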
  19. The X-T3 shoots 4K60 internally at 10-bit as well. But I agree, shoot 60p. I prefer HLG personally.
  20. Sensors of any size can produce sharp images when paired with the right lenses. Arri and Red cameras mostly have APS-C size sensors also. We could fill up a couple forums with sharpness discussions. A lot of narrative films make liberal use of softening/diffusion filters to soften the image before it enters the lens. On the digital end, the more resolution, the softer and smoother your image can be. "Can" is the keyword, because as you've seen, digital images can be sharpened, which subjectively looks bad. You could, for example, add a diffusion filter to the lens, but then add a lot of sharpening to the digital image. This will be an unnatural effect, though it may appear sharp at first. I recommend turning down sharpness in the picture profile of your camera. Big cameras like Arri and Red are very sharp, but cinematographers use that sharpness to smoothly describe organic shapes (which are created with lighting, filters, and lenses), instead of turning the sharpening up to produce jagged digital contrast the way a lot of consumer devices do out of the box.
  21. @Llaasseerr making linear gain adjustments is easy in Resolve; just use a CST node to convert to linear. It's very explicit. Or am I mistaken about what you are trying to do?
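For anyone wondering why the conversion to linear matters: a multiply is only a true exposure change on linear data. The power-law curve below is a stand-in assumption for whatever encoding the footage actually uses; a CST node performs the equivalent conversion inside the grading software.

```python
# Doubling linear values is exactly a one-stop exposure change; doubling
# gamma-encoded values is not. GAMMA = 2.4 is an assumed encoding curve.
GAMMA = 2.4

def to_linear(encoded):
    return encoded ** GAMMA

def to_encoded(linear):
    return linear ** (1.0 / GAMMA)

mid_gray = 0.5
# Correct: +1 stop applied in linear, then re-encoded for display.
one_stop_up = to_encoded(to_linear(mid_gray) * 2.0)
# Incorrect: the same multiply applied directly to the encoded value.
naive = mid_gray * 2.0
print(round(one_stop_up, 3), naive)  # 0.667 1.0 -- the naive gain clips
```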
  22. I believe it specifies the ratio of photosites for a color array with 3 different colors. So that would cover most traditional CFAs, including Bayer, quad pixel, and X-Trans. I think Foveon sensors would not be covered by Red's patents. RGBW patterns would also be fair game, I assume, not that anyone uses those. I do wonder if anyone has considered recording an extra line of pixels to throw off the ratio of colors--like recording 3840x2161 with the last line containing only green and blue. Not that it would be useful, just thinking out loud here.
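Counting photosite ratios per pattern tile illustrates the quantity being discussed. The Bayer tile is standard; the 6x6 X-Trans-style tile below is reproduced from memory and may differ in exact arrangement from Fuji's, though the 20:8:8 green-heavy ratio is the characteristic feature.

```python
from collections import Counter

# Photosite color ratios for a CFA, computed from one pattern tile
# (the sensor just repeats the tile, so one tile gives the ratio).
def ratio(tile):
    counts = Counter(ch for row in tile for ch in row)
    total = sum(counts.values())
    return {ch: counts[ch] / total for ch in sorted(counts)}

BAYER = ["RG",
         "GB"]
X_TRANS_LIKE = ["GBGGRG",   # arrangement from memory; ratio is 20G:8R:8B
                "RGRBGB",
                "GBGGRG",
                "GRGGBG",
                "BGBRGR",
                "GRGGBG"]

print(ratio(BAYER))         # {'B': 0.25, 'G': 0.5, 'R': 0.25}
print(ratio(X_TRANS_LIKE))  # G ~0.556, R and B ~0.222 each
```

Adding one odd line of green/blue pixels to a 3840x2161 frame, as speculated above, would indeed perturb these ratios slightly, for whatever that's worth legally.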
  23. That makes sense. It might have helped your cause earlier if you'd said you couldn't share because the images were private. Considering the number of spambots and such lately, I can't really blame people for being suspicious of trolls. The way I look at it, even if someone is trolling, maybe other people will end up here from an honest Google search. If there is a difference in motion blur at the same shutter speed, then either the Fuji or the Canon is reporting its shutter speed incorrectly. Edit: either that, or the "motion blur" on the Canon is partly compression blur. More motion strains interframe compression, so lots of movement makes for a less detailed frame overall. The Canon might be suffering there.
  24. Yes. Or record with a width of less than 2k pixels. Or use a CFA with different RGB ratios. There are lots of ways around the patent which imply that it really isn't the thing holding companies back from implementing raw of some kind in hybrid cameras. (Note that HD is less than 2000 pixels wide.)