Everything posted by KnightsFan

  1. A few people have received theirs. There are only one or two user reviews on YouTube so far, but there is a pretty active Facebook group. Huh? Everything I read says the XT3's 120p does a 1.29x crop on top of the 1.5x, which means ~2x crop compared to full frame.
  2. It's only about 14% larger in volume than the XT3. It's half the weight of the 1DX2. It's the same weight as the GH5 (and smaller in volume!). It's true the E2 is not a photo/video hybrid with DSLR ergonomics like the others, but it's certainly right in there in terms of cost and size.
  3. I don't have a GH5 or GH5s so I can't confirm how 4:3 photos or the desqueeze are implemented, sorry. With the 5D3, you can crop the photo to 4:3 and it will have the same FOV as your video. Haha, we've all got our list! You have to jump up a few price brackets to get those, unfortunately.
  4. The Z Cam E2, which I believe also uses the GH5S/P4K sensor, has a 4:3 mode, but afaik the E2 is still in a "beta" stage where you can get one, but only directly from their website. Very few reviews exist for it. I don't believe there are any others. I'm not familiar with current ML capabilities. Maybe some other ML cameras can shoot 4:3 as well. The GH5s/P4K/E2 are all very low resolution, primarily designed for video. I wouldn't recommend taking photos with them. A speed booster is not required, no. To answer your question from before... No. Instead of thinking of the speed booster as widening your lens, think of it as enlarging your sensor. If you put a speed booster on an APS-C camera, you now have a full frame camera. If a lens vignettes on full frame, it will vignette on APS-C+speed booster. In fact it might even vignette MORE with wide lenses with large apertures.
  5. To clarify, the P4K has the same sensor as the GH5S, not the GH5. Also, you can use the Speed Booster XL on a M43 sensor to get a (roughly) APS-H field of view, which is slightly larger than APS-C. While it uses the same sensor, the P4K does not currently have any 4:3 recording modes.
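To put rough numbers on the speed booster math in the two posts above, here is a minimal Python sketch; the reducer ratios are the commonly published approximations (about 0.71x for a standard Speed Booster, 0.64x for the XL), not exact figures.

# Back-of-the-envelope crop factor math; all ratios are approximate.
def effective_crop(sensor_crop: float, reducer_ratio: float) -> float:
    """Effective crop factor versus full frame with a focal reducer mounted."""
    return sensor_crop * reducer_ratio

print(effective_crop(1.5, 0.71))  # APS-C + 0.71x booster -> ~1.07x, close to full frame FOV
print(effective_crop(2.0, 0.64))  # M43 + Speed Booster XL -> 1.28x, roughly APS-H (~1.3x)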
  6. It's always valid to point out testing conditions that are not controlled. That said, most of us agree that equivalent dof is NOT important in this test, and are satisfied with the conclusions we can draw from watching it. Feel free to conduct your own, and focus on the comparison points you find most important.
  7. And they all have different lenses, too! You have a valid criticism, but not every camera test needs to test the same thing.
  8. But this isn't a sensor size comparison test; he's just doing a quick real world comparison between several cameras. I didn't even look at dof when judging, I only looked at skin tones, and it was clear to me which looked better. It was very good in that regard. If anything, it also highlights the fact that you need prohibitively expensive glass to get equivalent dof on m43.
  9. That is exactly what I thought, both in ranking order and that B had less flattering lighting. I didn't try to guess which was which, I just ranked based on the images. A was by far the best, C was good and B was close behind, and D was clearly the worst for my taste.
  10. Yeah, I'm talking about a camera company making a first party solution.
  11. I think Basilisk means without any external motors and gears at all, similar to how the Aputure DEC works with EF lenses. I'm surprised devices like the DEC aren't more common. Any camera company could make a follow focus wheel (or rocker) that plugs straight into the camera body, and sends electronic signals to control the AF motor built into the lens. Hard infinity stops, custom A/B points, repeatable throws--all it would take is some simple hardware and some code. If such a feature materialized, lens ergonomics would cease to matter!
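Purely to illustrate how little logic that feature would need, here is a hypothetical Python sketch; none of these names correspond to a real camera API, and the wheel-to-lens mapping is invented for the example.

# Hypothetical sketch: map a focus wheel angle to a normalized lens focus position.
def wheel_to_focus(wheel_angle_deg: float,
                   throw_deg: float = 270.0,
                   point_a: float = 0.0,
                   point_b: float = 1.0) -> float:
    """throw_deg sets the custom throw; point_a/point_b are user-set A/B marks
    (0.0 = close focus, 1.0 = infinity)."""
    t = max(0.0, min(1.0, wheel_angle_deg / throw_deg))  # hard end stops
    return point_a + t * (point_b - point_a)

# A 270-degree throw mapped between two rehearsed marks; 400 degrees just hits the hard stop.
for angle in (0, 135, 270, 400):
    print(angle, round(wheel_to_focus(angle, point_a=0.25, point_b=0.8), 3))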
  12. On the last project I worked on, we had a remote editor and we were mainly transferring via the internet. I wrote a simple Python script to batch all the videos to TINY proxy files to send over--plenty of quality for basic editing--and got an XML back. Very friendly upload and download times. Resolve's reconform system was very unintuitive for me. It was pretty much an all-day task to figure out how to get it working the way I wanted. Importing XMLs was even worse. True, but for some of us it is the best option. Not everyone wants to buy terabytes of hard drives every month shooting ProRes. It would be really nice if at least ONE company did both. Right now, you either get a consumer camera shooting HEVC, or a pro camera shooting ProRes (I know I'm simplifying to make a point). It would be ideal if someone like Atomos made a recorder that could switch between HEVC and ProRes--suddenly, every camera would be able to scale between saving space or saving processing power.
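For flavor, here is a minimal sketch of that kind of batch-proxy workflow, not the actual script; the folder names and bitrates are placeholders, and it assumes ffmpeg is on the PATH.

# Batch every clip in a folder down to a tiny H.264 proxy for remote editing.
import subprocess
from pathlib import Path

SRC = Path("footage")   # placeholder source folder
DST = Path("proxies")
DST.mkdir(exist_ok=True)

for clip in sorted(SRC.glob("*.mp4")):
    out = DST / (clip.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(clip),
        "-vf", "scale=-2:540",            # small frame size keeps uploads tiny
        "-c:v", "libx264", "-b:v", "1M",  # low bitrate, plenty for cutting
        "-c:a", "aac", "-b:a", "96k",
        str(out),
    ], check=True)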
  13. 90% of my shots are 28mm on APS-C (42mm on FF). I think I'd use a 24mm more often if I had a good one--my zooms in that range have no character. 28mm is great for wide shots if you've got enough space, but really shines for medium shots and close ups. It's not exactly "flattering," but really makes a face jump out from the background in a way that a longer lens can't. I love the DOF at f4: your eye immediately knows what's in focus, but you can still tell what's behind the blur. It guides your eye, but maintains the scene's context. Some years ago I read this article, and I still agree with it 100%.
  14. @Henry Ciullo When editing H.265 files, I use FFmpeg to make 1.5 Mbps 1080p H.264 proxies first. They edit very easily in Resolve. If you keep your proxies and online media in different bins inside Resolve, you can easily switch between using proxies and online media by using the "Reconform from Bins" option. Not sure if that's what you're trying to do--I can go into more detail if you'd like.
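Roughly what those proxy settings translate to as a single FFmpeg call, sketched in Python with placeholder filenames (not necessarily the exact flags used):

import subprocess

subprocess.run([
    "ffmpeg", "-i", "clip.mov",          # H.265 source (placeholder name)
    "-vf", "scale=-2:1080",              # downscale to 1080p, keep aspect ratio
    "-c:v", "libx264", "-b:v", "1500k",  # ~1.5 Mbps H.264
    "-c:a", "aac",
    "clip_proxy.mp4",
], check=True)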
  15. To be clear, that's just because the content itself was mastered in P3. The P3-based image data needs a transformation to look correct if it is displayed in the Bt.2020 space. So it's not "encoded" in P3, it's encoded in Bt.2020, but doesn't use the parts of the Bt.2020 gamut that are outside the P3 gamut. Right?
  16. I think the difference may be that for 4K, commercials can zoom in and say "if you had a 4K screen, you would see THIS much detail!" and that demonstration works pretty well, even on an HD screen. HDR is literally something you can't display on your current screen, so marketing is like "well, we can't show you what it is unless you buy the hardware." It's way too abstract unless you either see it yourself, or have some prior knowledge about how displays work. The hurdle that I see with HDR is that Rec 709 and sRGB are so entrenched, not just for pro/semi-pro web and TV broadcast, but for desktops, documents, video games, and everything else we see on screens. Scaling an HD image (whether it's a film or Windows Explorer) to a 4K screen is simple. I'm not sure how easy it is to coordinate all the moving parts for the switch to HDR. For example, I've got some old photos from ten years ago. If I get an HDR/WCG monitor, will those photos display properly? I don't know if they even have the necessary metadata to identify them as sRGB. Will my video games from the early 2000s look correct? How about my DVD collection? It seems like a bigger mess for backwards and/or forwards compatibility to go to HDR, compared to 4K.
  17. Please correct me if I'm wrong, but I thought Rec.2020 only specifies 4K and 8K resolutions, using standard dynamic range, not HDR. Perusing Wikipedia, I'm finding:
      - Rec.709: standard dynamic range, standard color gamut, 1080p, 8 or 10 bit
      - Rec.2020: standard dynamic range, wide color gamut, 2160p/4320p, 10 or 12 bit
      - Rec.2100: high dynamic range, wide color gamut, 1080p/2160p/4320p, 10 or 12 bit
      So perhaps Alister Chapman was referring to Rec.2100? (Not trying to be pedantic, just making sure I understand the alphabet soup here!) Back on topic, I think 4K is easier to market for whatever reason, so we will see mass adoption of 4K before HDR. The public seems to "understand" 4K better than they do HDR. Moreover, we're all agreed on what 4K is, whereas HDR is still in a kind of format war from what I can see, between HLG and PQ.
  18. If it were me, I'd probably use a variety of programs to use the strongest tools of each. I'd animate the text in Blender (or the 3D package of your choice), as well as some of the other solid, static "hero" elements the camera mainly just circles around. For example, the room and chain at 0:16 in the embedded YouTube video. I'd prefer to do objects like this in a legit 3D package, because they can be modelled easily, have few moving parts, and I don't want to fight a software layout designed primarily for compositing. After rendering out the 3D parts--possibly in a few layers or maybe a separate Z-depth render--I'd bring those into After Effects or another compositor. There are some 2D elements which I'd do in the compositor, on top of the 3D renders. Elements that either don't require much perspective change, or are particle based (fireworks, smoke), are usually easier to fake in 2D than to simulate fully. The asteroid field from the dailymotion link, and the planet in the background, would also be in AE. Some of the foreground elements, such as the trees at 0:30 in the YouTube video, can be sourced from real photos of trees and then composited in. If I was feeling adventurous, I'd use Fusion instead of After Effects. I've never used Fusion on anything complex, but AE is an unintuitive mess, so I'd love to give Fusion a spin. Remember that little things, such as proper motion blur, will help sell it. Depending on how well you want it to match camera footage, you could compress it in H.264, add noise, or something like that before use.
  19. When I said I wanted better rolling shutter than the NX1, I meant by more than 0.45ms...
  20. Well the first two people who did guess both got the brand correct, so there's that. If you want to prove that people can't tell the difference, next time share some high quality files, not an 8 Mbps YouTube video. One of my pet peeves is people pretending to see a camera's banding, compression, dynamic range, macro blocking, motion cadence (whatever that is), etc. from YouTube videos. Additionally, the vast majority of discussion I see about cameras is about:
      - ergonomics--shape of the camera (for ease of use. As you said, "more convenient")
      - NDs, XLRs, battery, HDMI size (again, ease of use, not much impact on final output)
      - crop factor (for lens compatibility)
      - bitrate and codec (moot by YouTube, the destroyer of all images)
      - stills capability (not applicable)
      - color science (2/3 of us recognized Sony)
      - low light capability (we don't have $20k in lights available, due to budget or type of shooting)
      - rolling shutter (didn't have a strong feeling either way on this video)
      Anyway, great job on the music video! It's very nicely done.
  21. Cropping for 120 doesn't mean it has less processing power than the NX1. The NX1's 120fps is not a full sensor readout either; it's line skipping vs. 1:1 cropping, so similar processing power (and if the XT3 is oversampling at all in 120fps, then it is using more processing power--the NX1's 120fps looks like a 1920x1080 readout at BEST). Edit: also, adjusted for inflation, the XT3 is cheaper than the NX1 was.
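To make the processing-power point concrete: readout and processing cost scale roughly with pixels read per second, so a tight 1:1 crop lands in the same ballpark as a full-width line-skipped readout. A sketch with purely illustrative numbers (the cropped window size is hypothetical):

def pixels_per_second(width: int, height: int, fps: int) -> int:
    return width * height * fps

print(pixels_per_second(1920, 1080, 120))  # ~249M px/s: full-width but line-skipped readout
print(pixels_per_second(2048, 1152, 120))  # ~283M px/s: a hypothetical 1:1 cropped window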
  22. Looked Sony to me too; my guess would be A6500. Pretty impossible to tell after grade and compression though.
  23. I knew this would eventually come up. Perspective distortion is only due to distance between camera and subject. Focal length has nothing to do with it.
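A quick worked example of that point: the projected size of an object is proportional to focal length divided by distance, so for two same-sized objects the focal length cancels out and only their distances matter. The numbers below are arbitrary.

# Two same-sized objects render with a size ratio equal to the ratio of their
# distances from the camera; focal length scales the whole frame uniformly.
def size_ratio(near_m: float, far_m: float) -> float:
    return far_m / near_m

print(size_ratio(0.5, 1.0))  # 2.0: shot up close, the near object looks twice as big
print(size_ratio(3.0, 3.5))  # ~1.17: step back (and zoom in) and the exaggeration flattens out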
  24. Of course! I suppose I was getting a bit off topic as I wasn't directing that part of my post towards the OP.
  25. I had a chance to play with an FS7 recently, and I felt something was off with exposure. So I did a comparison with my NX1. I set my NX1 to ISO 800 and the FS7 at its "base ISO" of 2000. In Resolve, I applied the built-in SLOG to Rec709 conversion to the FS7 footage, and it was darker than the NX1. ISO 2000 vs ISO 800. So I checked with my light meter, which unsurprisingly agreed with the NX1. So in what world was it ISO 2000? One where you make middle grey from the washed out SLOG3 file remain middle grey after grading? In other words, one where you either keep the washed-out look, or clip the top four stops of highlights. Whether you want to call it "overexposing" on set or "underexposing" in post doesn't matter. The issue is that it's universally accepted (except by the manufacturer) that you should pretend that your camera's ISO reading is actually a stop lower than it says. Hence: Exactly... That was basically my thought. Changing to log doesn't change the analog gain that Sony (or Panasonic, etc.) is using, it just lets them put up some far-fetched number for "low light" performance. The truth is, everyone should test their camera extensively and find out what exposure works best, regardless of what the numbers say.
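For reference, the stop arithmetic behind "rate it a stop lower," using only the ISO numbers quoted above; this is just math on the ratings, not a measurement.

import math

def stops_between(iso_a: float, iso_b: float) -> float:
    """Exposure difference, in stops, implied by two ISO ratings."""
    return math.log2(iso_a / iso_b)

print(stops_between(2000, 800))   # ~1.3 stops: how much brighter ISO 2000 should meter than ISO 800
print(stops_between(2000, 1000))  # 1.0 stop: what rating the camera a stop lower works out to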