Everything posted by KnightsFan

  1. That sensor certainly makes great advancements for M43 video. The problem is, any advancement that M43 makes can be done on FF with 4x the resolution or 4x the pixel area. The only way M43 can be competitive with FF in both resolution and lowlight simultaneously is with lenses that let in 4x the light (two stops faster) and have 2x the linear resolution (lp/mm) of their FF counterparts.
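That scaling can be sketched with the standard crop-factor arithmetic (a back-of-the-envelope sketch; the crop factor of 2 for M43 vs FF is the usual figure, and f/2.8 and 50 lp/mm are arbitrary example values):

```python
import math

# Back-of-the-envelope M43 vs full-frame equivalence.
crop = 2.0                               # M43 linear crop factor vs FF
area_ratio = crop ** 2                   # FF gathers ~4x total light at equal f-stop
stops_advantage = math.log2(area_ratio)  # 2 stops

# To match FF total light, the M43 lens must be 2 stops faster:
ff_f_number = 2.8
m43_f_number = ff_f_number / crop        # f/1.4

# lp/mm is a linear measure, and the M43 image circle is half the size,
# so matching FF total detail requires 2x the lp/mm:
ff_lpmm = 50
m43_lpmm = ff_lpmm * crop                # 100 lp/mm

print(stops_advantage, m43_f_number, m43_lpmm)
```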
  2. No-- ISO measures exposure (though as said before, different manufacturers measure differently), not gain. So the same ISO, shutter, and T-stop should give identical exposure, no matter what camera or lens you use. Larger sensors gather more total light, and thus require less gain to reach an equivalent output brightness compared to a smaller sensor. A FF sensor at 0 dB of gain should produce the same brightness (and thus report the same ISO) as an MFT camera at 12 dB. You should see less noise on FF compared to MFT under the same conditions, because less gain is applied. In the real world, there are other factors with sensor tech and processing, so a given FF camera might not have exactly the 2 stop advantage we'd expect over a given MFT camera. Yes... except they all fudge their ISO numbers.
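The 0 dB vs 12 dB figure follows from the usual gain convention: sensor gain is typically quoted as voltage gain (20·log10 of the linear factor), so one stop is about 6 dB. A quick sketch:

```python
import math

def stops_to_db(stops):
    # Gain is usually quoted as voltage gain: 20*log10 of the linear factor.
    # One stop doubles the signal, so 1 stop is ~6.02 dB.
    return 20 * math.log10(2 ** stops)

# FF gathers ~4x (2 stops) the light of MFT, so MFT needs ~12 dB more gain
# to reach the same output brightness:
print(round(stops_to_db(2), 1))
```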
  3. @mercer totally understand about budget, I'm often in the same boat. I've never used the DR-40 or H5, and I've only briefly used the H6, so I don't have any sense of the quality difference. However, there is really no reason to stick with the same brand over time; it isn't like cameras, where color science might keep you loyal. The only benefit I can think of for going with Zoom now is if you plan to get any of those interchangeable modules.
  4. According to the comparison between the free and Studio versions, HEVC is not supposed to be supported at all in the free version. So I think the "proper" solution is to get the Studio version, unfortunately. You can find it very cheap on eBay if you absolutely need it.
  5. I did a few scenes as the only crew person not too long ago. I had the camera on a tripod, and used a friction arm to position my F4 just under the monitor. One glance to check framing and levels. It worked pretty well overall, and I didn't have any issues with the size. It seems like you're set against the F4, but as a non-pro-audio guy I can tell you the quality difference between the F4 and DR60 II is quite apparent. And you can use the F4 as a USB interface, so you won't need a separate device for that--which, to be fair, applies to the H6 as well I believe. If price is an issue, can you get by with just two bodypack recorders, and switch them between actors as necessary?
  6. A P4K with the ergonomics of an XC15, including a removable grip would be an interesting middle ground. If they could stick a Sony NP battery under the screen that would help. You can hold it like a DSLR, or remove the grip for a minimal, boxy design. Include that nice remote port, and the stripped down version could replace a BMMCC very nicely.
  7. Yeah, this is one of the few cameras where you NEED a rig. Forget the smug "you need a rig to use the BMPCC effectively" line, where every point applies equally to any DSLR--you will have a very difficult time using an E2 without a rig. I still prefer H.265. For example, I shot ~450GB on my NX1 for a single project last summer, 4k at 80 Mbps. ProRes would be over 6x that rate, so I would have needed a 3TB drive to store the project. Add another for backup, and that's at least $160 on drives for a single project. After a couple projects, the expensive processing power of a GTX 1080, which can edit 4k H.265 with minimal lag, becomes a comparable investment.
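The storage arithmetic works out as follows (the ~480 Mbps ProRes figure is an assumption taken from the "6x" multiple; real ProRes rates vary by flavor and frame rate, and 1 GB = 1000 MB is assumed for simplicity):

```python
# Storage arithmetic behind the H.265 vs ProRes comparison.
h265_gb = 450        # footage actually shot, per the post
h265_mbps = 80

seconds = h265_gb * 1000 * 8 / h265_mbps   # 45,000 s of footage
hours = seconds / 3600                     # 12.5 hours

prores_mbps = 6 * h265_mbps                # 480 Mbps (assumed "6x" rate)
prores_gb = seconds * prores_mbps / 8 / 1000
print(hours, prores_gb)                    # 12.5 hours, 2700 GB (~2.7 TB)
```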
  8. I don't know, the E2 has a lot of unique features. They want to have ProRes at some point, apparently it's just a matter of waiting for licensing. Once that's implemented, the more legit productions will be much more interested. The E2 would occupy the unique position of having the most efficient codec, and the easiest to edit codec in the same camera. Unlike any of the competition, the E2 is built for livestreaming, 3D rigs (they have a "pixel level" sync for multicams), and has a deep learning engine. They had some demo I think where the camera was on a mechanical arm and automatically followed a person as they walked around a room, all the panning and focusing and everything was automatic. I wouldn't be surprised if they pull off some update with AI-based AF or something. They're targeting every video market at once, from cinema to industrial. Maybe none of that will translate to actual sales though.
  9. To each his own... for me, those are both benefits! 90% of the time my camera is rigged such that I can't see the screen, and I use an external monitor anyway. H.265 packs so much information into so little space, it's a dream come true for my hard drive. Yeah, it's realistic, but I would have done it differently. It's not a problem, I've downloaded enough SOOC files from this camera to prove that I can grade it to my taste. Rolling shutter on this camera is something like 11ms if I recall correctly, so yeah it should have nice pans. Agree about price. Is it really worth $700 more than the P4K? Is it really worth $500 (plus resale on my current camera) more than the XT3? Since all my lenses are full frame, I'd need a speed booster. And CFast is expensive. The E2 is my favorite camera of the year, but it's really hard justifying double or triple the gross cost, compared to the P4K or XT3.
  10. That first one I saw the other day. I'm not in love with the color grade, but it looks good. The second one is the first clip I've seen from the global shutter variant. I wonder how the DR and low light compares? Isn't it also an even smaller sensor?
  11. Typically, lower megapixel versions are better for video, because fewer pixels means less processing for a full sensor readout. So it's likely the S1 will have an advantage over the S1R for video.
  12. I remember that in the Z Cam thread. Since I wanted to get to the bottom of this and had some time, I gathered every Cinema5D dynamic range test I could find. I found that they are actually very consistent*, within a half stop of every other measurement for a given camera. For example, the Ursa Mini 4K is pegged at 8.5 in one chart and 9 in another. In one post, they mention a drastic difference after an Imatest update--this was in 2014, on their a5100 test. I didn't find any other tests from 2014 or earlier.
      * EXCEPT: there's something really screwy going on with their A7sII tests. Every camera is consistent, except the A7sII. What could be going on? The numbers I found are:
      Slog2: 11.9, 11.6, 10.6, and 10.6
      Slog3: 12.4, 12.1, 12, 12, 12, 10.6
      Perhaps someone at some point typed 10.6 instead of 11.6, and that number was carried into the most recent lab result? Perhaps the last Slog3 number is also mistakenly the Slog2 number? But so far that camera is the only anomaly I know of. I've included the Excel doc I recorded the data in. If I've missed any articles, or made any mistakes, let me know! C5D DR Tests.xlsx
      EDIT: Sometimes, I pulled numbers from within the article where they'd say "just over 12" or "just under 12." In these cases, I would add or subtract a tenth of a stop, e.g. 12.1 or 11.9 in my chart.
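The A7sII anomaly is easy to see if you compute the spread of each set of measurements (using the numbers quoted in the post; every other camera stays within about half a stop):

```python
import statistics

# C5D dynamic range figures quoted in the post, plus the Ursa Mini 4K pair.
measurements = {
    "A7sII Slog2": [11.9, 11.6, 10.6, 10.6],
    "A7sII Slog3": [12.4, 12.1, 12, 12, 12, 10.6],
    "Ursa Mini 4K": [8.5, 9],
}

# Spread (max - min) flags the outlier: only the A7sII exceeds a half stop.
spreads = {name: max(v) - min(v) for name, v in measurements.items()}
for name, vals in measurements.items():
    print(name, round(statistics.mean(vals), 2), round(spreads[name], 2))
```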
  13. @wolf33d I'm not saying Sony protects their cine line. I'm just refuting that Sony has been "ahead of the competition past 5 years" in specs (I'm not talking about sales). While Sony clearly dominates with sensors, their processors always seem to be behind. Maybe it's intentional crippling, but I think it's more likely that they simply have not invested in that area as much. In 2014, the NX1 did a 28MP readout in 30ms. The A7S II, a year later, took the same amount of time to read out 12MP. The a6300 had even worse rolling shutter in 2016 -- almost 40ms! I'm not saying the NX1 is "better," but it's an example of how Sony's processors lag behind their sensors. Perhaps their weak codecs, past overheating issues, and continued lack of 10 bit are all the result of poor processors as well?
      Sony leads in some areas, but not all. Panasonic has been outputting 10 bit HDMI for 4 years now. Fuji, Nikon, and even Canon have beaten Sony to 10 bit HDMI, with Fuji getting bonus points for having it internally with a modern codec. I'd say that Panasonic has really been the one leading on specs, with Sony getting a pass because they were the ONLY FF mirrorless.
      I also expect the new Sony APS-C to be close to the XT3. 4k60-- probably. 10 bit-- probably externally, but I think it will still be 8 bit internal. H.265 and 20ms rolling shutter-- I highly doubt these. But we'll see! I certainly hope Sony knocks it out of the park. I'd switch to Sony if they deliver.
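The readout figures quoted above imply a large throughput gap. A rough sketch using only the numbers from the post:

```python
# Readout throughput implied by the rolling-shutter figures above.
def mp_per_second(megapixels, readout_ms):
    # Megapixels read out per second of readout time.
    return megapixels / (readout_ms / 1000)

nx1_2014 = mp_per_second(28, 30)    # ~933 MP/s
a7s2_2015 = mp_per_second(12, 30)   # ~400 MP/s
# The NX1 reads roughly 2.3x the pixels in the same readout window:
print(round(nx1_2014 / a7s2_2015, 2))
```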
  14. Canon, certainly not! The Z6 and Z7 look like even competitors to the A7 III and A7r III. And next year Panasonic will join the fray, with what I assume to be competitive specs. I'm not talking overall quality of the system, just who is providing the best specs. The A7S II is still alone as a low megapixel FF video camera. I argue that it doesn't really have competitive specs, though, it's just got a unique sensor. Whatever their shortcomings, Sony makes great sensors.
  15. Not really. In 2014, Sony made the a6000, and Samsung made the NX1--which still tops all Sony APS-C for specs. In fact, the XT3's extraordinary specs are THE reason why people think it's using Samsung tech! Sony's "best specs" really only applies to FF video, where they have enjoyed an almost complete lack of competition until 2018. Sony has created some amazing FF sensors, but their specs haven't been that good. Hopefully, the competition kicks them into high gear with their next APS-C lineup!
  16. So if you shoot HFR, you've got to use a FF lens. For 4k, you've got to use EF-S lenses for native FOV. And if you want that ND adapter, you've got to use adapted lenses. It's almost as if Canon wants you to buy 3 sets of lenses! What's next, 10 bit only when using FD lenses?
  17. Global shutter is the future I want. It's always nice to see progress in that area.
  18. Exactly. I can't see any in-camera algorithm to de-roll the shutter ever appearing. You'd need to do frame interpolation, but at a different amount for each line of pixels. It's probably easier, cheaper, and more accurate to just use a faster processor.
  19. The question has a different meaning depending on what you already own. Personally, I'm completely satisfied with the 24p quality of my current camera. Do I need (insert any feature from @kye's list here)? Not at all! But if the primary features are covered by my current gear, it'll take an improvement in the secondary features for me to consider an upgrade. So IBIS or AF are not required for the camera that I use in 2018, but they might be a requirement for the camera that I buy as an upgrade in 2018--if I feel the need to upgrade at all!
  20. NX1 is around 8ms in 1080p, according to that DVXuser thread, which is right near the maximum you can have for 120p. I bet that's a really difficult problem to solve. It might be straightforward on whip pans across static scenery, but I don't see how a camera would be able to correct a fast moving object across the frame, in real time. Or correct for a fast zoom, where you can see zooming on half the image before the other half.
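The "right near the maximum for 120p" claim checks out: at 120 fps a new frame starts every 1/120 s, so a rolling-shutter readout must finish within that window. A quick sketch:

```python
# Why ~8 ms readout is close to the limit for 120p: the readout must
# complete before the next frame's readout begins.
fps = 120
frame_period_ms = 1000 / fps      # ~8.33 ms between frame starts

nx1_readout_ms = 8                # figure quoted from the DVXuser thread
headroom_ms = frame_period_ms - nx1_readout_ms
print(round(frame_period_ms, 2), round(headroom_ms, 2))
```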
  21. I checked out Media Player Classic Home Cinema (MPC-HC). It's an improvement over VLC, but it does not match my other color-managed applications (yet!). Would you guys mind sharing your settings or any insights? Currently, I have found:
      A) DaVinci Resolve, Krita, RawTherapee, and GIMP all look identical, and all are manually set to use my calibrated ICC profile. In the case of Resolve, this includes generating a 3D LUT from the ICC profile using DisplayCAL, as per the tutorial here (though I am using sRGB instead of Rec709).
      B) MPC-HC looks identical to the default Windows Picture and Photo Viewer, but appears to have slightly more contrast than the programs listed in A), or maybe just a slightly lower black level. I adjusted every setting I could find, but could only get worse results, not better. Reading through Wikipedia, I came across a relevant passage, and I notice that in the color management section of MPC-HC, you can choose between Gamma 2.2, 2.35, and 2.4. So perhaps MPC-HC is actually using a single gamma value? That would explain why it looks just BARELY off from the A) programs.
      C) VLC is on its own planet. Changing renderers will change the colors around, but they don't explain what is actually happening, so it's anyone's guess which settings are most accurate.
      I couldn't agree more. Every day, lots of people here on EOSHD discuss the color science of various cameras. I'm assuming they are all viewing on calibrated monitors? If so, I hope they will chime in and help us out!
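If a player really is applying a single power-law gamma instead of the piecewise sRGB curve, the mismatch would be largest near black, which fits a "slightly lower black level / more contrast" observation. A sketch comparing the two encode curves (these are the standard published sRGB constants):

```python
# sRGB's piecewise transfer function vs a pure 2.2 power law.
# The curves nearly agree in the midtones but diverge near black.
def srgb_encode(x):
    if x <= 0.0031308:
        return 12.92 * x              # linear toe near black
    return 1.055 * x ** (1 / 2.4) - 0.055

def gamma22_encode(x):
    return x ** (1 / 2.2)

for x in (0.001, 0.01, 0.18, 0.5):
    print(x, round(srgb_encode(x), 4), round(gamma22_encode(x), 4))
```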
  22. Keep in mind that the VCGT (video card gamma table) is universal, while the ICC profile is not, so you should have the correct gamma everywhere already. Assuming QuickTime uses the calibrated ICC profile, yes, that is correct (as a Windows user, I have no experience with QuickTime). If an application such as QuickTime is verifiably correct, then yes, you will probably have to use a LUT to make your other software match. I would try to confirm QuickTime's accuracy using image editors, as they usually have better ICC profile support than video applications (after all, ICC profiles are used to calibrate printers for printing photos). If 4-5 different photo editors all use the calibrated ICC profile and match each other and QuickTime, you can be reasonably certain those are all correct. However, gamma should be correct already (if my understanding is correct... I'm still figuring this out!), so my suspicion is that it's a problem with actual colors, or with video vs. data levels, rather than a gamma problem. I don't know how color management works in AE. Which software do you use for calibration? I use DisplayCAL. It has a lot of nice tools, like generating a LUT from the ICC profile, and the forums there seem like a good place to ask questions. Final note: VLC absolutely SUCKS for color. Afaik, there is no way to use an ICC profile, and even a simple thing like specifying 16-235 vs. 0-255 levels is unreliable. You can be reasonably certain that VLC's color is not accurate.
  23. I'm no expert, but I have been researching color calibration as well. If you are using a monitor plugged into your GPU, calibration happens via an ICC profile, which tells software what color shifts to apply to compensate for the hardware's inaccuracies. Unfortunately, ICC profiles are implemented at the application level, which means some applications might use them, while others might not. So it may be the case that QuickTime uses the ICC profile created by your i1Display Pro, while VLC ignores it (or maybe vice versa). Most video editing software appears to ignore the ICC profile. However, just to make matters worse, the gamma table is implemented system-wide. So you should theoretically have universally calibrated gamma, while the color is application specific.
      As for which to trust, I've found that many picture editors, such as Krita or RawTherapee, let you manually specify an ICC profile to use. Since I can be sure they use the calibration data, I trust them more than any of my video software. So after calibrating, I export a still frame, open it in my image editors, and compare that to the video software to determine which is most accurate.
      The best solution is of course to use a LUT box or a monitor that supports LUTs, so that ALL color from your computer is calibrated, not just color from certain applications. If that is not an option, make sure you use the hardware RGB sliders on your monitor to get as close as possible during calibration.
      Hope that helps, and if you find anything that contradicts what I said, let me know! Color calibration appears to be a dark art, and I'm not sure I've got it all right.