
Everything posted by KnightsFan
-
Not really. In 2014, Sony made the a6000, and Samsung made the NX1--which still tops all Sony APS-C for specs. In fact, the XT3's extraordinary specs are THE reason why people think it's using Samsung tech! Sony's "best specs" really only applies to FF video, where they have enjoyed an almost complete lack of competition until 2018. Sony has created some amazing FF sensors, but their specs haven't been that good. Hopefully, the competition kicks them into high gear with their next APS-C lineup!
-
Canon EOS R first impressions - INSANE split personality camera
KnightsFan replied to Andrew Reid's topic in Cameras
So if you shoot HFR, you've got to use a FF lens. For 4k, you've got to use EF-S lenses for native FOV. And if you want that ND adapter, you've got to use adapted lenses. It's almost as if Canon wants you to buy 3 sets of lenses! What's next, 10 bit only when using FD lenses? -
Are cameras without IBIS and AF useless for shooting video in 2018?
KnightsFan replied to A_Urquhart's topic in Cameras
The question has a different meaning depending on what you already own. Personally, I'm completely satisfied with the 24p quality of my current camera. Do I need (insert any feature from @kye's list here)? Not at all! But if the primary features are covered by my current gear, it'll take an improvement in the secondary features for me to consider an upgrade. So IBIS or AF are not required for the camera that I use in 2018, but they might be a requirement for the camera that I buy as an upgrade in 2018--if I feel the need to upgrade at all! -
NX1 is around 8ms in 1080p, according to that DVXuser thread, which is right near the maximum readout time you can have for 120p. I bet that's a really difficult problem to solve. It might be straightforward on whip pans across static scenery, but I don't see how a camera would be able to correct a fast-moving object across the frame in real time, or correct for a fast zoom, where you can see zooming on half the image before the other half.
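To put numbers on "right near the maximum" (a quick sketch; the 8 ms readout figure is the one from the DVXuser thread):

```python
# At 120 fps, each frame slot lasts 1/120 s. The rolling-shutter
# readout has to finish within that slot, or frames overlap.
FPS = 120
frame_interval_ms = 1000 / FPS  # ≈ 8.33 ms

nx1_readout_ms = 8.0  # reported NX1 1080p readout time

headroom_ms = frame_interval_ms - nx1_readout_ms
print(f"Frame interval: {frame_interval_ms:.2f} ms")
print(f"Headroom:       {headroom_ms:.2f} ms")
```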
-
Standalone video players & colour reproduction
KnightsFan replied to Dimitris Stasinos's topic in Cameras
@Stathman what player do you use? -
Standalone video players & colour reproduction
KnightsFan replied to Dimitris Stasinos's topic in Cameras
I checked out Media Player Classic Home Cinema (MPC-HC). It's an improvement over VLC, but it does not match my other color-managed applications (yet!). Would you guys mind sharing your settings or any insights? Currently, I have found:

A) DaVinci Resolve, Krita, RawTherapee, and GIMP all look identical, and all are manually set to use my calibrated ICC profile. In the case of Resolve, this includes generating a 3D LUT from the ICC profile using DisplayCAL, as per the tutorial here (though I am using sRGB instead of Rec.709).

B) MPC-HC looks identical to the default Windows Picture and Photo Viewer, but appears to have slightly more contrast than the programs listed in A), or maybe just a slightly lower black level. I adjusted every setting I could find, but could only get worse results, not better. Reading through Wikipedia, I came across this: AND I notice that in the color management section of MPC-HC, you can choose between Gamma 2.2, 2.35, and 2.4. So perhaps MPC-HC is actually using a single gamma value? That would explain why it looks just BARELY off from the A) programs.

C) VLC is on its own planet. Changing renderers will change the colors around, but they don't explain what is actually happening, so it's anyone's guess which settings are most accurate.

I couldn't agree more. Every day, lots of people here on EOSHD discuss the color science of various cameras. I'm assuming they are all viewing on calibrated monitors? If so, I hope they will chime in and help us out! -
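One way to see why a single-gamma player could look "just BARELY off" from a properly color-managed app: the sRGB transfer function is not a pure power curve. A quick sketch of my own (not MPC-HC's actual code):

```python
def srgb_to_linear(c):
    """Official sRGB EOTF: linear segment near black, 2.4-power above."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(c):
    """Naive single-gamma 2.2 decode, as a simple player might apply."""
    return c ** 2.2

# The two curves agree closely in the midtones but diverge in the
# shadows -- which would show up as a slightly different black level.
for c in (0.02, 0.1, 0.5, 0.9):
    print(f"{c:4.2f}: sRGB={srgb_to_linear(c):.5f}  gamma2.2={gamma22_to_linear(c):.5f}")
```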
Standalone video players & colour reproduction
KnightsFan replied to Dimitris Stasinos's topic in Cameras
Keep in mind that the VCGT (video card gamma table) is universal, while the ICC profile is not. So you should have the correct gamma everywhere already.

Assuming QuickTime uses the calibrated ICC profile, yes, that is correct. (As a Windows user, I have no experience with QuickTime.) If an application such as QuickTime is verifiably correct, then yes, you will probably have to use a LUT to make your other software match. I would try to confirm QuickTime's accuracy using image editors, as they usually have better ICC profile support than video applications (after all, ICC profiles are used to calibrate printers for printing photos). If 4-5 different photo editors all use the calibrated ICC profile and match each other and QuickTime, you can be reasonably certain those are all correct. However, gamma should be correct already (if my understanding is correct... I'm still figuring this out!), so my suspicion is that it's a problem with the actual colors, or with video vs. data levels, rather than a gamma problem. I don't know how color management works in AE.

Which software do you use for calibration? I use DisplayCAL. It has a lot of nice tools, like generating a LUT from the ICC profile. The forums there seem like a good place to ask questions.

Final note: VLC absolutely SUCKS for color. Afaik, there is no way to use an ICC profile, and even a simple thing like specifying 16-235 vs. 0-255 levels is unreliable. You can be reasonably certain that VLC's color is not accurate. -
Standalone video players & colour reproduction
KnightsFan replied to Dimitris Stasinos's topic in Cameras
I'm no expert, but I have been researching color calibration as well. If you are using a monitor plugged into your GPU, calibration happens using an ICC profile, which tells the software which color shifts to apply to compensate for the hardware's inaccuracies. Unfortunately, ICC profiles are implemented at the application level, which means some applications might use them and some might not. So it may be the case that QuickTime uses the ICC profile created by your i1 Display Pro, while VLC ignores it (or maybe vice versa). Most video editing software appears to ignore the ICC profile. However, just to make matters worse, the gamma table is implemented system-wide. So you should theoretically have universally calibrated gamma, while the color is application specific.

As for which to trust, I've found that many picture editors, such as Krita or RawTherapee, let you manually specify an ICC profile to use. Since I can be sure they use the calibration data, I trust them more than any of my video software. So after calibrating, I export a still frame, open it in my image editors, and compare that to the video software to determine which is most accurate.

The best solution is of course to use a LUT box, or a monitor that supports LUTs, so that ALL color from your computer is calibrated, not just color from certain applications. If that is not an option, make sure you are using the hardware RGB sliders on your monitor to get as close as possible during calibration.

Hope that helps, and if you find anything that contradicts what I said, let me know! Color calibration appears to be a dark art, and I'm not sure I've got it all right. -
A few people have received theirs. There are only one or two user reviews on YouTube so far, but there is a pretty active Facebook group. Huh? Everything I read says the XT3's 120p does a 1.29x crop on top of the 1.5x, which means ~2x crop compared to full frame.
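The ~2x figure is just the two crops multiplied together (a quick sketch; the 1.29x number is the one reported for the X-T3's 120p mode):

```python
aps_c_crop = 1.5   # Fuji APS-C vs. full frame
mode_crop = 1.29   # additional crop reported for the X-T3's 120p mode

total = aps_c_crop * mode_crop
print(f"Total crop vs. full frame: {total:.2f}x")  # ≈ 1.94x, i.e. ~2x
```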
-
It's only ~14% larger volume than the XT3. It's half the weight of the 1dx2. It's the same weight (and smaller volume!) compared to the GH5. It's true the E2 is not a photo/video hybrid with DSLR ergonomics like the others, but it's certainly right in there in terms of cost and size.
-
I don't have a GH5 or GH5s so I can't confirm how 4:3 photos or the desqueeze are implemented, sorry. With the 5D3, you can crop the photo to 4:3 and it will have the same FOV as your video. Haha, we've all got our list! You have to jump up a few price brackets to get those, unfortunately.
-
The Z Cam E2, which I believe also uses the GH5S/P4K sensor, has a 4:3 mode, but afaik the E2 is still in a "beta" stage where you can get one, but only directly from their website. Very few reviews exist for it. I don't believe there are any others. I'm not familiar with current ML capabilities--maybe some other ML cameras can shoot 4:3 as well. The GH5S/P4K/E2 are all very low resolution, primarily designed for video. I wouldn't recommend taking photos with them. A speed booster is not required, no. To answer your question from before... No. Instead of thinking of the speed booster as widening your lens, think of it as enlarging your sensor. If you put a speed booster on an APS-C camera, you effectively have a full frame camera. If a lens vignettes on full frame, it will vignette on APS-C + speed booster. In fact, it might even vignette MORE with wide lenses with large apertures.
-
To clarify, the P4K has the same sensor as the GH5S, not the GH5. Also, you can use the Speed Booster XL on a M43 sensor to get a (roughly) APS-H field of view, which is slightly larger than APS-C. While it uses the same sensor, the P4K does not currently have any 4:3 recording modes.
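The crop math behind the APS-H claim, as a quick sketch (assuming the Speed Booster XL's published 0.64x magnification):

```python
m43_crop = 2.0           # Micro Four Thirds crop factor vs. full frame
xl_magnification = 0.64  # Metabones Speed Booster XL

effective_crop = m43_crop * xl_magnification
print(f"With the XL:    {effective_crop:.2f}x")   # 1.28x, close to APS-H (~1.3x)

# For comparison, the regular 0.71x Speed Booster:
print(f"With the 0.71x: {m43_crop * 0.71:.2f}x")  # 1.42x, near APS-C
```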
-
It's always valid to point out testing conditions that are not controlled. That said, most of us agree that equivalent dof is NOT important in this test, and are satisfied with the conclusions we can draw from watching it. Feel free to conduct your own, and focus on the comparison points you find most important.
-
And they all have different lenses, too! You have a valid criticism, but not every camera test needs to test the same thing.
-
But this isn't a sensor size comparison test--he's just doing a quick real world comparison between several cameras. I didn't even look at DOF when judging; I only looked at skin tones, and it was clear to me which looked better. It was very good in that regard. If anything, it also highlights the fact that you need prohibitively expensive glass to get equivalent DOF on M43.
-
That is exactly what I thought, both in ranking order and that B had less flattering lighting. I didn't try to guess which was which, I just ranked based on the images. A was by far the best, C was good and B was close behind, and D was clearly the worst for my taste.
-
Nikon Z7 is at EOSHD HQ - better video than Sony?
KnightsFan replied to Andrew Reid's topic in Cameras
Yeah, I'm talking about a camera company making a first party solution. -
Nikon Z7 is at EOSHD HQ - better video than Sony?
KnightsFan replied to Andrew Reid's topic in Cameras
I think Basilisk means without any external motors and gears at all, similar to how the Aputure DEC works with EF lenses. I'm surprised devices like the DEC aren't more common. Any camera company could make a follow focus wheel (or rocker) that plugs straight into the camera body, and sends electronic signals to control the AF motor built into the lens. Hard infinity stops, custom A/B points, repeatable throws--all it would take is some simple hardware and some code. If such a feature materialized, lens ergonomics would cease to matter! -
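As a sketch of how simple the control logic could be (all names and numbers hypothetical--real focus-by-wire protocols are proprietary, so this only illustrates the A/B-point idea):

```python
def wheel_to_focus(wheel_pos, a_point, b_point):
    """Map a normalized wheel position (0.0-1.0) to a lens focus motor
    position, constrained between user-set A and B points. Hypothetical
    sketch; a real device would translate this into the lens's own
    focus-by-wire commands."""
    wheel_pos = min(max(wheel_pos, 0.0), 1.0)  # hard stops at the ends
    return a_point + wheel_pos * (b_point - a_point)

# Example: A/B points stored as motor steps 1200 and 3400.
print(wheel_to_focus(0.0, 1200, 3400))  # 1200.0 (hard stop at A)
print(wheel_to_focus(0.5, 1200, 3400))  # 2300.0 (repeatable mid-throw)
print(wheel_to_focus(1.2, 1200, 3400))  # 3400.0 (clamped at B)
```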
The last project I worked on, we had a remote editor and we were mainly transferring via the internet. I wrote a simple Python script to batch all the videos to TINY proxy files to send over--plenty of quality for basic editing--and got an XML back. Very friendly upload and download times. Resolve's reconform system was very unintuitive for me. It was pretty much an all-day task to figure out how to get it working the way I wanted. Importing XMLs was even worse. True, but for some of us it is the best option. Not everyone wants to buy terabytes of hard drives every month shooting ProRes. It would be really nice if at least ONE company did both. Right now, you either get a consumer camera shooting HEVC, or a pro camera shooting ProRes (I know I'm simplifying to make a point). It would be ideal if someone like Atomos made a recorder that could switch between HEVC and ProRes--suddenly, every camera would be able to scale between saving space and saving processing power.
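A minimal version of that kind of batch script (my own sketch, not the exact one from the project; assumes ffmpeg is on your PATH and the clips are .mov files):

```python
import subprocess
from pathlib import Path

def proxy_cmd(src: Path, dst: Path) -> list:
    """Build an ffmpeg command line for a small 1080p H.264 proxy."""
    return [
        "ffmpeg", "-i", str(src),
        "-vf", "scale=1920:-2",            # 1920 wide, keep aspect ratio
        "-c:v", "libx264", "-b:v", "1500k",
        "-c:a", "aac", "-b:a", "128k",
        str(dst),
    ]

def make_proxies(src_dir: str, dst_dir: str) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for clip in sorted(Path(src_dir).glob("*.mov")):
        # Keep the original filename so Resolve can reconform by name.
        subprocess.run(proxy_cmd(clip, out / clip.name), check=True)
```

Keeping the filenames identical between the proxy and online folders is what makes Resolve's "Reconform from Bins" workflow painless later.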
-
90% of my shots are 28mm on APS-C (42mm on FF). I think I'd use a 24mm more often if I had a good one--my zooms in that range have no character. 28mm is great for wide shots if you've got enough space, but really shines for medium shots and close ups. It's not exactly "flattering," but really makes a face jump out from the background in a way that a longer lens can't. I love the DOF at f4: your eye immediately knows what's in focus, but you can still tell what's behind the blur. It guides your eye, but maintains the scene's context. Some years ago I read this article, and I still agree with it 100%.
-
@Henry Ciullo When editing H.265 files, I use FFmpeg to make 1.5Mbps 1080p H.264 proxies first. They edit very easily in Resolve. If you keep your proxies and online media in different bins inside Resolve, you can easily switch between using proxies and online media by using the "Reconform from Bins" option. Not sure if that's what you're trying to do--I can go into more detail if you'd like.
-
Which Will Be Adopted Sooner: 4K or Rec. 2020 / HDR Broadcasting???
KnightsFan replied to Mark Romero 2's topic in Cameras
To be clear, that's just because the content itself was mastered in P3. The P3-based image data needs a transformation to look correct if it is displayed in the Bt.2020 space. So it's not "encoded" in P3, it's encoded in Bt.2020, but doesn't use the parts of the Bt.2020 gamut that are outside the P3 gamut. Right?
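A rough way to picture how much of the container goes unused (a sketch; the xy chromaticities are the published primaries, and triangle area in the xy plane is only an approximate gauge of gamut size):

```python
# xy chromaticity coordinates of the RGB primaries
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]
DCI_P3 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

def area(tri):
    """Triangle area in the xy chromaticity plane (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# P3-mastered content only reaches about 72% of the Bt.2020 triangle,
# so a lot of the container gamut simply never gets used.
ratio = area(DCI_P3) / area(BT2020)
print(f"P3 covers ~{ratio:.0%} of the Bt.2020 triangle's area")
```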