Everything posted by KnightsFan

  1. Yes, brightness will match, because ISO is designed to produce the same brightness across devices. It's crucial that the reading of your light meter matches ANY camera you use, so equal ISO should always give the same exposure in the final image. However, ISO does not equal gain. That is, if you take a ten-year-old sensor with terrible low light performance, you might need 12 dB of gain to get ISO 800, whereas a modern sensor might only need 3 dB of gain for ISO 800. And that new sensor will probably have a higher SNR, despite producing the same "brightness". For example, if you compare a GH5s to a fictional hacked-together camera consisting of four GH5s sensors in an array, you would expect the array to have a higher SNR. After calibrating for ISO, though, both will produce the same "brightness" at equivalent ISO--it's just that the array will have a lower noise floor. As far as quality, yes, that's entirely dependent on the product. Some small sensors are extraordinarily good (the GH5s, for example), and some FF sensors are pretty bad. I think one of the main reasons for that is that new tech arrives in the smaller sensors first: it's a much larger investment to improve FF sensors than cell phone chips. Like I said, you can find some MFT sensors that outperform some FF sensors, and sensor performance isn't the ONLY factor in choosing a camera. I'm just trying to add some perspective on this specific "light gathering ability vs sensor size" debate. There's a reason NASA uses massive CCDs in their telescopes--I'm sure it would have been easier to launch Hubble with a 2/3" sensor! So if the photosites are the same size, then the FF camera has MORE photosites. This means you will be downsampling, which improves SNR (quick sketch below).
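(A rough sketch of that downsampling point, in Python with made-up numbers: photon shot noise is Poisson, so averaging four photosites, whether by binning, downscaling, or ganging sensors in an array, buys you roughly 6 dB of SNR.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean photon count per photosite per exposure.
mean_photons = 1000

# Photon shot noise is Poisson distributed.
single = rng.poisson(mean_photons, size=1_000_000)

# Array / downsampled case: average 4 photosites per output value.
binned = rng.poisson(mean_photons, size=(1_000_000, 4)).mean(axis=1)

def snr_db(x):
    # Signal-to-noise ratio in dB: mean level over noise standard deviation.
    return 20 * np.log10(x.mean() / x.std())

print(f"single photosite:   {snr_db(single):.1f} dB")
print(f"average of 4 sites: {snr_db(binned):.1f} dB")  # ~6 dB better
```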
  2. Put a full frame 50mm f1.8 on a FF camera. You capture all of the photons that land on the 24x36mm sensor area. If you put that same lens on M43, the same photons exit the lens, but only 1/4 of them land on the sensor; the other 3/4 are reflected or absorbed by the area around the sensor. It's immediately apparent that at an equivalent f-stop on the same lens, a smaller sensor gathers fewer photons. There is no "spreading" of light unless you add a teleconverter--which works out exactly the same as using a smaller sensor. The teleconverter analogy explains why a M43 camera gathers less light: taking FF as a starting point, expanding the image circle 2x with a teleconverter has the same effect as shrinking the sensor 2x. In both cases, only 1/4 of the original photons land on the sensor, meaning 2 stops less light. Of course, it's absolutely correct that "sensor quality, image processor quality, and size of pixels" affect ISO performance. We can easily find at least one example of a smaller sensor outperforming a larger one for noise. To use sound as an analogy, it's easy to find two amplifiers that output a different SNR even when given the same signal. But the fact that some small sensors outperform some larger sensors does not change the simple fact that, with the same lens, fewer photons land on a smaller sensor (rough numbers below).
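(Rough numbers as a quick sanity check -- sensor dimensions are approximate, and the lens and f-stop are assumed identical.)

```python
import math

# Approximate active sensor dimensions in mm.
full_frame = (36.0, 24.0)
micro_four_thirds = (17.3, 13.0)

def area(dims):
    width, height = dims
    return width * height

ratio = area(full_frame) / area(micro_four_thirds)

print(f"area ratio: {ratio:.2f}x")                 # ~3.8x more area on FF
print(f"difference: {math.log2(ratio):.2f} stops") # ~1.9 stops of gathered light
```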
  3. The EOS R's rolling shutter isn't much worse than the competition's. But the competition does full pixel readouts from 24 MP sensors, while the EOS R does a 1:1 sample.
  4. I did see an interview with Atomos, but I don't remember that. I hope you're right, it would be a really phenomenal twist for Nikon. If they are that committed to video in their Z series, I'd love to see them tackle some of the other lesser-requested video features like LTC.
  5. @mercer I can see a difference between 10 and 8 bit once I start grading, but it's not a huge difference. I'm pretty much in the same boat as you with the whole diminishing returns view. That's why, for me, a camera can't just be good or even a good value--it's got to be better than what I have. My NX1 is ridiculously good in 4k, fits my hand like a glove, and has ergonomic and simple controls. That's why I keep going back to things like rolling shutter, 4k60, full frame, 10 bit, etc. It's not that I "need" any of those, but anything less and I've already got it covered. I sort of agree, sort of disagree. I agree straight image quality has peaked. Framerates... well, the E2 has 4k120, so we've hit a peak there. 10 bit is almost mainstream in prosumer cameras. However, rolling shutter hasn't really improved much--and while 10 bit and 8 bit aren't a huge difference, global shutter is pure magic on a glidecam. I'd also love to see more cameras that can sync to LTC.
  6. At this point it seems that anything that isn't debayered is considered Raw. So conceivably a 10 bit compressed log format could provide great image quality at a relatively low data rate, like the RawLite you mention. Not true RAW, but it could be a really nice format. I don't really think that's going to happen, though I'd love to be proven wrong.
  7. I don't recall any word from Nikon on internal Raw. Does anyone have a source on that? @DBounce brought it up earlier.
  8. This is really unscientific and based only on having used a 5D3 with Magic Lantern extensively in the past, and recently doing a handful of tests and one job with the XT3. But my impression is that the XT3 has more dynamic range in F-Log than a 5D3 with ML. You'll have a tough time trying to find an objective number. While C5D has extensively tested cameras, I don't believe they have a number for the 5D3, certainly not with ML, and it's unlikely they will test a 6 year old camera. And afaik no one else even tries to do objective numerical measurements between cameras. In F-Log the minimum ISO is indeed 640, higher than in photo mode. I believe you can use lower ISO values in other picture modes if you want. The difference is where Fuji decided to place middle grey, not a difference in sensor sensitivity. I don't think that's what webrunner was saying, and it's not true in any case. DR will even be different between different picture profiles, let alone photo vs video mode. However, DR has nothing to do with bit depth except within a specific gamma curve. That is, one gamma curve might put 20 stops into 10 bits, and another might put 10 stops into 10 bits (simplified example below).
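(To illustrate the bit depth point with a deliberately simplified, hypothetical encoding -- this is not Fuji's actual F-Log math -- the same 10-bit container can hold 10 or 20 stops; what changes is how many code values each stop gets.)

```python
# Hypothetical: gamma curves that spread their stops evenly across a 10-bit range.
CODE_VALUES = 2 ** 10  # 1024 code values in a 10-bit file

for stops in (10, 20):
    per_stop = CODE_VALUES / stops
    print(f"{stops} stops into 10 bits -> ~{per_stop:.0f} code values per stop")

# Squeezing more stops into the same container means coarser tonal steps per stop,
# but the bit depth itself does not set the dynamic range.
```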
  9. I agree. It's on par with Sony's codecs and bitrates, with an added 4k60p mode. So really scraping the bottom of the bitrate barrel. It's not dealbreaker bad--I've never taken issue with 80 Mbps HEVC on the NX1--but it's not a thrilling list like the XT3 had. Let's wait and see what that paid firmware update brings, and for how much extra.
  10. You could be right. I'm still hoping for 10 bit HEVC--even if that spec sheet had a typo and it's only 420 internally.
  11. @Oliver Daniel yeah, I saw that at PDN earlier this morning. That S1 price is competitive, considering it is the first full frame camera with 4k60p, and it has that sweet pixel shift mode. As has been mentioned, lenses seem scarce at the moment, but I'm sure that will change. It will be interesting to see what format the 10 bit 422 is in, and the price of the upgrade. Surely they won't have internal ProRes? That would be insane. My guess is still HEVC.
  12. No reason why Nikon shouldn't be first, but when Atomos first announced Raw over HDMI, I predicted that Panasonic would be the first to take advantage of it. My second guess would have been that Blackmagic would make a Raw video camera that takes good enough photos to be technically called a hybrid.
  13. I agree with pretty much everything you said, but I especially agree with this. I have no doubt Fairlight and Fusion will continue to improve. A fully integrated, single post production application would be absolutely heavenly for quick turnaround and light work. It will be interesting to see if they can make it flexible and stable enough over the next few versions. That sound software you screenshotted looks like the kind of intuitive UI that I think Fairlight lacks: one glance and I understand how it works. But yeah, it would get cluttered very quickly with large projects.
  14. Yeah, but the one built into Resolve lacks a lot of features and it's really slow. It hangs and freezes doing simple comps, which run fine in the standalone version. At least that's my experience so far. There are more features in standalone Fusion than in the Resolve built-in version, and even more in the paid version, some of which are looking really tempting--specifically the tracking features. Also, I did some 4096x4096 comps in Fusion recently, and since the free version maxes out at UHD I actually had to export as four 2K images and then stitch them in ffmpeg (roughly as sketched below). It's an easy workaround but it's kind of annoying. Huh, interesting discussion. So far I've found fewer bugs in Resolve than in Premiere, but I do avoid the "bleeding edge" features like Fairlight and Fusion. Back when Fairlight was first integrated, I had one issue crop up in the rendered audio file, but that was a few versions ago. Hopefully that's fixed. Still, not being able to reliably hear what you're working on makes things annoying. True. But it's all so big and blocky. If I pop open my effects tab it takes up 1/4 of my screen, so I have to close it back down just to see what I'm doing. I'm constantly finding myself adjusting the windows. I believe you still can't freely undock or rearrange panels, except for things like the color scopes--which makes it really hard to take advantage of dual screens. Re: the rest of your post... I agree, and one reason I like Reaper so much is that it's scalable. If I want to record a sound, I open Reaper and hit record. If I want to EQ a single clip, it's a couple of clicks. If I need to mix a surreal narrative project in both 2.0 and 5.1, with crazy effects, and dialog and ambience being piped through a vocoder and all kinds of crazy stuff, it scales really well. In Fairlight you've got to dive through menus to set up sends, and the UI honestly feels very uncreative and inefficient. It just doesn't want to do things that it wasn't consciously designed to do by the developers.
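(For what it's worth, one way to do that kind of 2x2 stitch with ffmpeg's hstack/vstack filters -- the tile filenames here are placeholders, not my actual project files.)

```python
import subprocess

# Four hypothetical 2048x2048 tiles: top-left, top-right, bottom-left, bottom-right.
tiles = ["tl.png", "tr.png", "bl.png", "br.png"]

# Stack each row horizontally, then stack the two rows vertically.
subprocess.run(
    ["ffmpeg", "-y",
     "-i", tiles[0], "-i", tiles[1], "-i", tiles[2], "-i", tiles[3],
     "-filter_complex",
     "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bottom];[top][bottom]vstack",
     "stitched_4096.png"],
    check=True,
)
```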
  15. It says the thread doesn't exist... Yeah, I use Resolve as my primary editor and of course for color correction. I got the full version of that, too, mainly for noise reduction. Certainly worth the price just for editing and color. At this point I'm strongly considering getting a Fusion license when we hit post on my next project. Blackmagic really has phenomenal software. However, I tried using Fairlight a few times and really disliked it. I regularly have issues with the audio cutting out while editing, or popping, or suddenly getting really quiet for a few seconds. So at this point I don't trust Fairlight for real use. The only time I've ever used it outside of testing was to add some compression to an audio track for a rough cut render. Just that once. I suppose, like many DAWs, Fairlight probably gets a lot of mileage out of plugins like the RX pack. I think Reaper has a better approach in that regard, though, where there is no "built-in" EQ or dynamics--it's all plugins. Not only does it come with a phenomenal library, it's just as easy to use a 3rd party EQ as it is to use Reaper's EQ. Fairlight seems to have built-in stuff just sitting there taking up screen space even if you don't need it. Instead of memorizing lots of little functions, once you understand the broad design philosophy in Reaper, you can figure out the rest intuitively, which was the opposite of my experience with Fairlight.
  16. To be fair, though, you are supposed to pay for Reaper after the trial period. It doesn't lock you out, but I discourage taking advantage of that. I am happy to pay $60 if only to show my support for non-intrusive software--and it also happens to be a killer program that I use daily. @kye tongue in cheek aside, Reaper is significantly better than Fairlight in my experience. It has a cleaner interface that is much friendlier to limited screen sizes and dual monitor setups, in my opinion. Also, Resolve still doesn't support 44.1 kHz exports, I think? Usually when I do a music video the audio file is 44.1, and not only does Resolve/Fairlight force a conversion, it is a bad sounding conversion (one possible workaround sketched below).
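(A possible workaround, sketched with placeholder filenames: do the sample rate conversion yourself with a resampler you trust before bringing the audio into Resolve, so Fairlight never has to convert it.)

```python
import subprocess

# Resample a 44.1 kHz music track to 48 kHz with ffmpeg before import.
subprocess.run(
    ["ffmpeg", "-y", "-i", "song_44k1.wav", "-ar", "48000", "song_48k.wav"],
    check=True,
)
```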
  17. Have you tried plugging the camera into the monitor? Do you get an image?
  18. Totally agree. I'd rather see the E2 become a polished product before they announce a GS version. But hey, maybe the industrial market will snatch them up? True. But... last weekend I went back to some of my REALLY old projects. (The .wmv files no longer play audio in VLC, so I wanted to re-export them in a modern codec. Success, btw.) They were shot on an old photo camera: 640x480, 15 fps. But it was a CCD with intraframe MJPEG in 4:2:2! The motion was night and day compared to modern cameras. Really makes me want a GS camera.
  19. In one of their videos, a Blackmagic representative said they want other camera manufacturers to use it in their products, but I'd have to look for that video. I posted it in one of the BM Raw topics here a few months ago. However, I think it's extremely unlikely that Z Cam will put it in the E2. I think they'd have mentioned it on their FB group if that was at all an option.
  20. Yup, sounds like simple compression artifacts to me. Downscaling might make it subjectively more pleasant to look at, but won't recover any details that have been mushed. The only real solution would be to use a higher bitrate or better codec to begin with. Raw of course solves every problem, but is usually overkill. High bitrate intraframe codecs are ideal for fast pans, but even using a higher bitrate interframe codec would improve the image. However, a better acquisition format doesn't fully solve the problem. If you distribute on a streaming service, then it will be compressed into mush, whether you shot in Raw or not. The delivery format is often the limiting factor these days.
  21. Downscaling is essentially adding a blur to your image. It might help reduce compression artifacts--if that is indeed what you are seeing; that was just my first guess without having seen the image itself. You might get better mileage out of a directional blur rather than downscaling (sketch below).
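(A minimal sketch of both options in Python with Pillow, on a hypothetical exported frame -- a downscale/upscale round trip versus a crude horizontal blur that only smears along the pan direction.)

```python
from PIL import Image, ImageFilter

# Hypothetical frame export showing blocky artifacts on a fast pan.
frame = Image.open("pan_frame.png").convert("RGB")
w, h = frame.size

# Option 1: downscale then upscale -- effectively a uniform blur over the frame.
softened = frame.resize((w // 2, h // 2), Image.LANCZOS).resize((w, h), Image.LANCZOS)

# Option 2: a simple directional (horizontal) blur via a 5x5 kernel whose
# weights sit on the middle row only.
horizontal = ImageFilter.Kernel((5, 5), [0] * 10 + [1] * 5 + [0] * 10, scale=5)
blurred = frame.filter(horizontal)

softened.save("downscaled_then_upscaled.png")
blurred.save("directional_blur.png")
```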
  22. Does the monitor have an HDMI input? Is the camera running the latest firmware, or Magic Lantern? If both of those are a yes, it will probably work. The only issue would be if somehow the monitor doesn't support any of the frame rates or resolutions the camera outputs. We would need to look at your specific model to know for sure.
  23. When using a constant bitrate interframe codec, any movement in the frame will cause quality degradation. The more motion, the more quality loss. Shooting Raw will eliminate such issues. An intraframe codec would also eliminate most motion-related compression artifacts.
  24. That's a fair assessment of whether it is worth the money to you. In fact, I passed on the E2 because I will be using an XT3 on my next project. But that's a similar argument to saying the A7R3 isn't worth more than the A73 because you only need 42 megapixels for specialty purposes. And as @androidlad points out, there are other benefits. If you want ProRes on an XT3 you are looking at another $500 for an external recorder anyway. I completely see your point, though, and agree that if you don't need the extra features, or if you want those beautiful 26 MP photos, the XT3 is a better camera for you. The truth is, everything is now a product in development. Most major software companies are switching to rolling updates and away from major versioning. Tesla rolls out software updates for their cars; ten years ago it would have been unthinkable to have to update your car! It's just how the world works now. The good news is Z Cam listens to their buyers and is willing to shape their product to fit evolving needs.
  25. That's true. Still, it seems a reasonable price. How much do you have to pay for a reliable platform that shoots 4k 120 at 10 bit? How much for a camera that supports all that in a synced array of cameras? I'm not sure it's just for tinkerers and beta testers. LG used E2s for their massive OLED Falls display, for example. Again, maybe those aren't the features you need. But it does not seem overpriced at all to me.