Everything posted by KnightsFan

  1. I wonder how the low light would stack up with a speed booster and some aggressive noise reduction? I mean if you are tearing it down anyway, it would be worth looking into switching to MFT or E mount.
  2. That's the kind of project I would be all in for. I actually really considered doing that a few years ago.
  3. A proxy workflow is what you make of it. Essentially, you create an exact copy of your entire project in a format that plays back smoother on your system. At any time, you can toggle between using the "online" media and the "offline" media. If you wish, you can do ALL your post production on the proxies (editing, color, sound), and only toggle back to online media for the final export. Or, you can do some editing, switch to online media, do some color, switch back to proxies, do more editing, switch back to online media, edit more--etc. Or you could use proxies to edit, switch back to online media for color correction and export, and never use the proxies again. At the other extreme, you can do a proxy workflow where you never switch back to the online media, but then it's called transcoding.

     You can make proxies from any format, and the proxies themselves can be in any format. You can make HD ProRes proxies for 8K 10-bit H.265 footage, or 4K H.265 proxies for an HD ProRes project (not sure why you would want to, though...).

     That's all quite abstract, though. In practice, proxy capabilities depend on which software you are using, especially if you are using the built-in proxy generator. In Resolve, if you use Optimized Media (essentially, proxies) you can switch between optimized and original media at any time, like I described. However, you are limited in the formats you can make proxies in, and I've regularly found that Resolve "loses" optimized media. Personally, I make proxies in ffmpeg and manually switch out which files my timeline uses. That way I have maximum control over the proxy format and can easily troubleshoot problems. A decent workflow should allow you to do crops/reframing and variable frame rates without issue, but it depends on the software you use.

     In general, the only pro is smoother playback while editing. However, proxies are also a huge benefit if you have a remote editor and need to send files over slow internet connections. My 500 GB project is only 10 GB in proxy form. I can use Google Drive to send the entire project to an editor, and all they have to send me in return is an XML. Cons are a messier workflow, files becoming unlinked if your workflow is not perfect, and plenty of headaches. But all of those problems can be avoided if you know what you are doing and rigorously test your workflow before using it on a project.
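     To make the ffmpeg part concrete, here is a rough sketch of the kind of batch script I mean (assuming ffmpeg is on your PATH; the folder names and encode settings are only placeholders, so use whatever proxy format suits your project):

     ```python
     # Minimal sketch: batch-generate proxies with ffmpeg, keeping the original
     # filenames so the timeline can be relinked later by simply pointing the
     # NLE at the proxy folder instead of the online folder.
     import subprocess
     from pathlib import Path

     SOURCE_DIR = Path("footage/online")   # original camera files (example path)
     PROXY_DIR = Path("footage/proxy")     # smaller copies, same filenames
     PROXY_DIR.mkdir(parents=True, exist_ok=True)

     for clip in sorted(SOURCE_DIR.glob("*.mov")):
         proxy = PROXY_DIR / clip.name
         subprocess.run([
             "ffmpeg", "-i", str(clip),
             "-vf", "scale=1280:-2",            # downscale for light playback
             "-c:v", "libx264", "-crf", "23", "-preset", "fast",
             "-c:a", "aac", "-b:a", "192k",
             str(proxy),
         ], check=True)
     ```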
  4. @BTM_Pix Nope, the global shutter versions have a 1" sensor. Smaller sensor, less DR, much lower frame rates and significantly higher price. Very tough pills to swallow for that sweet global shutter.
  5. I hope it translates into a lot of sales; I am very eager for global shutter to become a standard feature.
  6. There you go! And you get to feel like a secret agent every time you use it.
  7. The GH5 is a lot more expensive than the E2C, even used. The E2C is like if Panasonic had put 4K 10 bit into their G85. Same thing that already happens when you get a phone call during a shoot... you ignore it and don't ruin the take. Haha, I kid--it's a valid point. But to be fair, people already use their iPhones to shoot serious videos. It's easier to pop the monitor off and make a call on it than to get it out of one of those fancy rig/cage things with the lens on front and everything lol.
  8. The E2C is certainly an interesting new camera. My guess is it's the same sensor as the E1, but with a better processor and updated I/O to fit in with the E2 family. If so, there was minimal R&D involved, just putting together pieces Z Cam was already familiar with. I'm not really interested in buying one, but it seems like a brilliant option for live streamers. PoE can really cut cable clutter. For $800 you can get a brand new camera that shoots 10 bit internally. Amazing how far we've come.

     On a more speculative note, a future that I've imagined is using a tiny PC as a recorder. Run Ethernet from the camera to the PC, affix an audio interface to the top, and record multi-track audio in sync with the video using your favorite software. No timecode or synchronization needed, just one record button for everything. You can use any codec you want, any hard drive you want. Review footage and edit metadata with a full-blown OS.

     It might be, but the E2C is significantly cheaper for a brand new camera, even compared to a used GH4 and a new Ninja V, plus the E2C is a significantly smaller setup. If you already have an iPhone, your monitoring needs are already covered. With the E2C you get H.265, and I assume it will eventually have internal ProRes. It seems there are advantages to both.
  9. Sharpness -7, Contrast -8. I use the full 0-255 range with the master black level at 0.
  10. I use Gamma DR and set the RGB sliders to 1.88, 1.85, and 1.95. I pretty much always leave them there; it seems to get me the results I want. It's important to either white balance to a known setting (such as Daylight) or to a grey card exposed exactly in the middle, otherwise the white balance is way off. I tend to underexpose just a little, as I find that the color response is better at the low end of the range than at the high end.
  11. Looks great! I used Resolve for a couple of simple effects on a recent video. I tried using the Fusion tab, but the performance was terrible, even compared to Fusion standalone. So I ended up just using OFX plugins on the timeline and was pretty happy with it.
  12. True, that is a bit semantic. I tried to address the meatier argument in my initial post. I don't see how the ability to match in post implies that color science is BS.
  13. Engineering is just applied science. To use my tomatoes example, you can genetically engineer bigger tomatoes by applying the scientific theory behind it.
  14. I'm still unclear on how adjusting gain can change the FWC. Doesn't gain happen to the signal AFTER the well?
  15. To anyone who says "color science is BS": I'm curious what your definition of color science is.

      From the CFA, to the amplifier, to the ADC, to the gamma curve and the mathematical algorithm behind it, to the digital denoising and sharpening, to the codec--someone has to design each of those with the end goal of making colors appear on a screen. Some of those components could be the same between cameras or manufacturers. Some are not. Some could be different and produce the same colors. Even if Canon and Nikon RAW files were bit-for-bit identical, that wouldn't negate the fact that science and engineering went into designing exactly how those components work together to produce colors. As it turns out, there usually are differences. The very fact that you have to put effort into matching them shows that they weren't identical to begin with.

      And if color science is negated by being able to "match in post" with color correction, how about this: you can draw a movie in Microsoft Paint, pixel by pixel. There is no technical reason why you can't draw The Avengers by yourself, pixel for pixel, and come up with the exact same final product that was shot on an Arri Alexa. You can even draw it without compression artifacts! Compression is BS! Did you also know that if you give a million monkeys typewriters, they will eventually reproduce Shakespeare? He wasn't a genius at all! The fact that it's technically possible to match in post does not imply equality, whether it's a two minute adjustment or a lifetime of pixel art.

      Color science is the process of using objective tools to create colors, usually with the goal of making the color subjectively "good." If you do color correction in post, then you are using the software's color science in tandem with the camera's. Of course, saying one camera's color science produces better results is a subjective claim... but subjectivity in evaluating results doesn't contradict science at all. If I subjectively want my image to be black and white, I can use a monochrome camera that objectively has no CFA, or apply a desaturation filter that objectively reduces saturation. If you subjectively want an image to look different, you objectively modify components to achieve that goal. The same applies to other scientific topics: if I subjectively want larger tomatoes, I can objectively use my knowledge of genetics to breed larger tomatoes.
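      Just to make that desaturation example concrete, here is a toy sketch of what I mean by an objective operation in service of a subjective goal (the Rec. 709 luma weights and the "amount" parameter are only one common convention, nothing camera-specific):

      ```python
      # Toy illustration: desaturation is an objective, repeatable operation,
      # even though the decision to use it is subjective. The Rec. 709 luma
      # weights are an assumption here, just one common convention.
      import numpy as np

      def desaturate(rgb: np.ndarray, amount: float = 1.0) -> np.ndarray:
          """Blend an RGB image (H x W x 3, values 0-1) toward its luma."""
          luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 weights
          grey = np.repeat(luma[..., None], 3, axis=-1)     # back to 3 channels
          return (1.0 - amount) * rgb + amount * grey       # amount=1 -> fully B&W
      ```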
  16. According to Z Cam, regarding this particular test: the rolling shutter is worse in WDR mode, and should be on par with the GH5S in normal mode.
  17. Yeah, people have complained about that on the Facebook group. Apparently Z Cam is putting a no-sharpening mode into a firmware update, as well as an option for less noise reduction. I'm not sure whether that firmware update has been released yet, to be honest.
  18. The real problem with IoT devices is internet security. This happened while I was in an internet protocols class and made for some good discussion: https://www.techtimes.com/articles/183339/20161024/massive-dyn-ddos-attack-experts-blame-smart-fridges-dvrs-and-other-iot-devices-why-your-internet-went-down.htm How good is the security on your coffee machine? It might be making coffee one day, being leveraged to attack Twitter the next, and providing a backdoor into your home network the day after. It's worth being wary of cheap, internet-enabled devices with little to no security.
  19. I don't think you can use a CPU to increase the saturation point of a photosite. We're talking about the full well capacity of the photosites. Not the dynamic range of the image, or how the image is processed, or whether there is a computer-controlled variable ND filter in front of the photosite. Because none of those affect the FWC.
  20. @webrunner5 using the CPU to do HDR doesn't explain how they achieve a higher full well capacity with such a small pixel.
  21. Someone on Sony Rumors speculated that the pixel actually fills several times to make one exposure. Sort of like combining several short exposures into one longer one, but before the ADC and perhaps on a pixel level. I'm not sure if that's how it's actually achieved.
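      Roughly the idea, as a toy model (all the numbers here are made up for illustration, and this is not Sony's actual design):

      ```python
      # Toy model of "filling the well several times per exposure." Each
      # sub-exposure clips at the per-fill capacity on its own, but the summed
      # result can hold several times that before clipping. Numbers are
      # illustrative only.
      FULL_WELL = 10_000       # electrons one fill can hold (made-up figure)
      SUB_EXPOSURES = 4        # hypothetical number of fills per frame

      def effective_signal(photon_rate: float, exposure_s: float) -> float:
          sub_time = exposure_s / SUB_EXPOSURES
          per_fill = min(photon_rate * sub_time, FULL_WELL)  # each fill clips alone
          return per_fill * SUB_EXPOSURES                    # summed before readout

      # A single 1/50 s exposure at 1,000,000 e-/s would clip at 10,000 e-;
      # four summed fills record the full 20,000 e- and top out at 40,000 e-.
      print(effective_signal(1_000_000, 1 / 50))
      ```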
  22. One thing no one talks about is audio for UHD. I have a 4K TV and a 4K graphics card, so I should be able to watch 4K content, right? Nope. My receiver can only take 1080p over HDMI, and routing HDMI through the receiver is the only way to get 5.1 audio. So to get 4K, I have to settle for stereo sound via the aux input. I have no desire to buy a whole new receiver just to get 4K, let alone 8K, and 5.1 is more important to me than high resolution. It's frustratingly ironic that wanting decent audio is the reason I can't watch 4K right now and have absolutely no interest in 8K content.
  23. True, what I meant was that all it does is ask you to purchase. It doesn't stop working or cripple any features. It's such a refreshingly non-invasive install that I gladly paid, despite it being the easiest thing in the world to steal. No background processes or separate licensing/update software automatically draining resources every time you boot (hello Apple and Adobe).
  24. I use Reaper as a DAW. It has a free trial that never actually expires, and the full version is like $50. I've been using it for maybe 4 years and absolutely love it. I mainly use it for mixing for films, but I've used it occasionally for recording simple stuff (which I do in a closet with heavy sleeping bags hanging around for isolation, if you are looking for budget ideas). I use my Zoom F4 as an audio interface, and monitor with the classic MDR-7506 headphones.