Everything posted by KnightsFan

  1. I am curious if anyone has experience with the new Lowel Tota LED? I haven't seen many reviews. It seems to output a lot of light for relatively cheap. I have fond memories of tungsten Tota lamps exploding in college.
  2. This is a bit nitpicky, but these numbers are MB/s, not Mb/s. The XT3 goes up to 400 Mb/s, which is equal to 50 MB/s. You should wind up with 4:4:4 actually, so even better!
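Since the two units trip people up constantly, here is a one-line sanity check of the conversion described above (a trivial sketch; the helper name is my own):

```python
def mbps_to_MBps(megabits_per_second: float) -> float:
    """Convert megabits per second to megabytes per second (8 bits = 1 byte)."""
    return megabits_per_second / 8

print(mbps_to_MBps(400))  # 50.0 -- the XT3's 400 Mb/s peak equals 50 MB/s
```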
  3. Ah, I see now, I wrongly assumed 35mm to be FF since you mentioned the 5D II days. I'm jealous of your lens collection, for sure! With that budget, I recommend the XT3. It's a phenomenal camera; I got to use it quite a bit earlier this year. I don't think you'll find anything else in your budget that has 10 bit. Unfortunately, it doesn't have a crop mode. But as for the competition, none of the Sonys output 10 bit even over HDMI, and the GH5 and P4K both have sub-S35 sensors. And then maybe wait a year to pick up a used OG BMPCC for peanuts, just for your 16mm lenses. There is actually also the Z Cam E2C, the $800 little brother to the E2. It can record 10 bit 4:2:0 to an SD card, or 10 bit 4:2:2 via USB-C. It's M43 and I haven't heard about a 16mm crop mode, so not an ideal sensor size--but it's affordable for 10 bit and ProRes. Again, I haven't used it, but you can try the Z Cam E2 Facebook group for info if you are interested.
  4. I usually just use ffmpeg by itself, it's worth learning if you do a lot of unconventional media manipulations.
  5. Here's my blind opinion. In order of preference, best to worst: G, C, D/E/F, B, H, A.
     - A: The only "bad color" to me; way too red/pink.
     - B: Maybe a touch more saturation would make it perfect, but he looks a bit too pale and dead.
     - C: Pretty good. The only reason it's behind G is that it's lacking a touch of richness.
     - D: Highlights are ugly. Not bad, though.
     - E: Could use more contrast. It might also be slightly out of focus? His lips don't stand out from the other skin enough.
     - F: Highlights are ugly. I think it's just overexposed a little. Also kinda soft, maybe out of focus. Not enough richness, really.
     - G: This one's good. Nice, rich color, not too much red/pink. This and C are the only ones that look really well exposed.
     - H: Possibly overexposed, but it's just ugly. Pink lips and white skin.
     Overall, I think G is good, the rest are kinda OK but mainly just overexposed a teeny bit for my taste, and A is not good.
  6. Have you looked into the Z Cam E2? It has a multi-aspect M43 sensor like the GH5s, and it has a S16 crop mode. With a speed booster, you can get full frame or S35, or you can use the crop mode for S16. And it shoots 10 bit 4:2:2 internally. It's $2k for the body, so more budget friendly than an FS7 or URSA. The downside is that it requires an external monitor, though you can use a smartphone for wireless monitoring and control via wifi. I do not have first hand experience with one yet, unfortunately. I can't think of anything else in the price range that can go up to 35mm with a speed booster, and has a dedicated 16mm crop mode, and 4:2:2 10 bit internally. I would stay away from ML for pro work. It was too finicky when I used it extensively on the 5D3. It was a phenomenal image, but I wouldn't be confident in it working 100% of the time (even due to user error, cards filling up too quickly, etc).
  7. That's a good idea. A few versions ago, Resolve's H265 output seemed to be poor quality, so I would export DNxHD and then convert to H265 using ffmpeg.
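The DNxHD-intermediate-then-ffmpeg workflow mentioned above can be sketched as a small command builder. The filenames and CRF value are placeholders of mine, and this assumes an ffmpeg build compiled with libx265:

```python
import subprocess

# Sketch of the two-step workflow: export a DNxHD master from Resolve,
# then re-encode it to H.265 with ffmpeg. Filenames and CRF are placeholders.
def dnxhd_to_h265_cmd(src: str, dst: str, crf: int = 20) -> list:
    """Build the ffmpeg argument list for a DNxHD -> H.265 re-encode."""
    return [
        "ffmpeg", "-i", src,    # DNxHD intermediate from Resolve
        "-c:v", "libx265",      # H.265/HEVC encoder
        "-crf", str(crf),       # quality target; lower = higher quality
        "-c:a", "copy",         # pass the audio through untouched
        dst,
    ]

cmd = dnxhd_to_h265_cmd("master.mov", "delivery.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run the transcode
```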
  8. For Vimeo, I usually export 2k H265 for projects shot in DCI 4k. That gives the most quality per MB, which is good because you get limited upload space on Vimeo. I'm not sure if you can export H265 in the free version, though. If you wanted 4k, you could export UHD. You'll get small black bars on the top and bottom, but significantly higher resolution than 2k.
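The "quality per MB" point above comes down to simple arithmetic against Vimeo's limited upload quota. A rough size estimator (the 20 Mb/s figure below is purely illustrative, not a recommended export setting):

```python
# Rough upload-size arithmetic for budgeting a limited upload quota.
def file_size_gb(bitrate_mbps: float, minutes: float) -> float:
    """Approximate file size in GB (decimal units: 1 GB = 8000 Mb)."""
    return bitrate_mbps * minutes * 60 / 8000

print(file_size_gb(20, 10))  # 1.5 -- a 10-minute short at 20 Mb/s is ~1.5 GB
```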
  9. You need the full version to export 4K DCI. You can go up to UHD in the free version. You will get black bars if your delivery resolution is different from your timeline resolution. What is the resolution of your vimeo delivery?
  10. Last I heard, the S6, F6, and F8 will have H264, H265, ProRes, and a proprietary RAW format. ProRes would be pending Apple's approval (no reason it wouldn't be approved; it was for the E2, after all). Raw could be converted to CineForm RAW in post using Z Cam's converter.
  11. If you're coming from layers, it might help to conceptualize nodes as being similar to analog audio systems. Say you have a mic going to the input on an EQ, out to a compressor, and then to a speaker, like: Mic -> EQ -> Compressor -> Speaker You can stick headphones in at any stage and you hear the audio signal at that point. You aren't applying EQ to the mic, you're placing an EQ on the signal between the mic and compressor. That's how nodes work, too. With layers it's often like you are "applying color correction to an image," whereas with nodes you're "piping the image signal through a color correction operation." You can use the viewers in Fusion like you'd use headphones on that audio path, to see what the signal looks like at any given point in the chain. I don't know if that helps at all.
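The signal-chain analogy above can be made concrete in a few lines: treat each "node" as a function and the chain as function composition. The stage names mirror the Mic -> EQ -> Compressor -> Speaker example (a toy sketch, not how any real compositor is implemented):

```python
# Each "node" is just a function; the chain pipes one node's output
# into the next, exactly like patching signal through outboard gear.
def eq(signal: str) -> str:
    return signal + " -> EQ"

def compressor(signal: str) -> str:
    return signal + " -> Compressor"

# "Sticking headphones in" mid-chain is just inspecting an intermediate value:
after_eq = eq("Mic")
print(after_eq)              # Mic -> EQ
print(compressor(after_eq))  # Mic -> EQ -> Compressor
```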
  12. Nodes are certainly much better for composites and fine-tuned control of effects that don't change much over time. Layers are generally easier for motion graphics, or effects that take place over time. Just for fun, I did a recent motion graphics bit in Fusion which I would normally do in AE. It wasn't too bad at first, but it was a nightmare to retime things. I think that with a few small changes, Fusion would be almost as good as AE for motion graphics, while keeping all the benefits of a node-based compositor. Some of these may be possible already, but I just haven't learned about them yet:
      - Easily make collapsible node groups / nested node graphs. You could then sort your "layers" into collapsed node graphs. This would solve the problem of a massive, un-navigable node graph.
      - Merge node with unlimited inputs. PixaFlux has this awesome merge node with unlimited inputs. Each layer is just composited onto the next. It saves SO much space on the graph.
      - Sensible, intuitive merge options. What does an Apply Mode of "Normal" with an Operator of "In" mean? I have no idea, so I have to memorize what different things do. It would be so much easier to just have "Add", "Subtract", and "Multiply".
      - Nodes with multiple outputs. How about an RGBA split and combine? Or keyer nodes that output the image and the matte as separate outputs?
      - I still haven't figured out how to manually adjust tracking markers, or how to redo a portion of a track.
  13. Is it fusion specifically or node workflows in general that is baffling? I continue to find fusion's nodes unintuitive for very basic operations (compared to nodes in other software).
  14. I never notice a difference between 4k and 1080p for distribution, even on a large 4k TV. However, with every camera that I have used or edited footage from, downscaled 4k for a 1080p delivery is significantly better than natively shooting 1080p. For capture, I think 4k is without a doubt "worth it" in terms of SD card and disk space, and processing power--unless you need a very quick turnaround and don't care much about image fidelity. 4k is also very useful for green screening, motion tracking, and other information-hungry VFX processes, especially if you downscale to 1080p afterwards. On the other hand, I can't honestly say I see a difference between downscaled 4k and native 1080p after YouTube compression. But FWIW, it's a fairly well-known trick to upscale 2k to 4k just to get a higher bitrate on YouTube, if the bandwidth supports it at the streaming end.
  15. Krita is also great and does seem to have an active development community. It seems more geared towards creating artwork from scratch, with a phenomenal brush engine and great graphics tablet support. I find Gimp quicker and more intuitive for simple photo touchup with the clone and heal tools, but it could also just be that I have used it more.
  16. You're welcome! Reaper is an incredibly flexible tool that has more features and capabilities than you see at first glance. Kenny Gioia's Reaper tutorials on YouTube are a fantastic place to start, too.
  17. AE: Fusion. It's not as nice for motion graphics, but much better for compositing. It's still usable for motion graphics. Fusion's tracker is phenomenal, though I still haven't quite gotten the hang of manually adjusting tracking points yet. Audition: Reaper. Reaper completely replaces Audition's multitracking, and mostly replaces the audio editor. Reaper's built-in noise reduction plugin isn't as good as Audition's, but that's really the only downside I've found. Reaper works very well with VST (and other) plugins, so a third-party noise reduction plugin could even out the difference. RawTherapee is fantastic for RAW photo processing. Gimp is my go-to for photo editing.
  18. I do accept that, what I'm saying is that his signature style makes me less likely to enjoy the film.
  19. Ah, the old "it's bad, but intentionally so." Solo was too dark in many scenes, and it wasn't a question of not calibrating my TV, or watching on an iPad, or even compression. I saw it on Blu-ray on a 60" TV that is within reasonable calibration, in a dark room. It's funny because The Godfather, a movie famous for "underexposure," is so much easier on the eyes--because it's not about darkening a scene, it's about lighting it so that the viewer gets the impression of darkness. ...is exactly right.
  20. I don't watch GoT so maybe the few images I've seen online are misleading, but it looks unnecessarily dark to me. Compare the screenshot at the top of this page https://www.theverge.com/tldr/2019/4/30/18524679/game-of-thrones-battle-of-winterfell-too-dark-fabian-wagner-response-cinematographer With another nighttime battle: https://images.app.goo.gl/3wP5kc7T9JKKCi7m7 The LotR image is obviously nighttime: it's dark, bluish, and moody, and yet the faces are bright enough to see without any trouble, and there are spots of actual pure white in the reflections. It's the job of the cinematographer to give the impression of darkness while keeping the image clear and easy to understand. If it were an end-user calibration problem, everyone would be complaining about every other movie as well. It seems like something was different here.
  21. I've read that FF is more expensive than APS-C because the same number of defects on a wafer will mean a lower percentage yield if you are cutting larger sensors off of that wafer. In other words, 5 defects on a wafer that will be cut into 100 sensors could mean 5 defective sensors and 95 good ones, 95% yield in the worst case. If you are only cutting 4 sensors off of that wafer, those same 5 defects give you a 75% yield at best. The conclusion is that a large sensor is actually more expensive per sq. mm than a smaller sensor. I'm not sure what the actual numbers are--maybe it is just a $150 difference as you read.
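The yield argument above can be checked with a few lines, under the same simplified model the post uses (each defect ruins at most one die; real foundry yield models are more involved):

```python
# Best/worst-case die yield under the simple model above: each defect
# can ruin at most one sensor die on the wafer.
def yield_bounds(dies_per_wafer: int, defects: int):
    """Return (worst_case, best_case) fraction of good dies."""
    ruined_worst = min(defects, dies_per_wafer)  # every defect hits a different die
    ruined_best = min(1, defects)                # all defects cluster on one die
    return ((dies_per_wafer - ruined_worst) / dies_per_wafer,
            (dies_per_wafer - ruined_best) / dies_per_wafer)

print(yield_bounds(100, 5))  # (0.95, 0.99): small dies keep 95% even in the worst case
print(yield_bounds(4, 5))    # (0.0, 0.75): large dies top out at 75%
```

This matches the post's figures: 95% worst-case yield for 100 small sensors, at most 75% for 4 large ones.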
  22. That makes sense, because Long GOP 100 Mbps should be better quality than All-I 400 Mbps for a static shot. For example, in something like a medium shot of an interview subject, about 75% of the frame is completely static and will be encoded once per group of pictures, instead of once every frame as with All-I. That's significantly more of the bitrate that can be allocated to the 25% of the frame that is moving. There are very few circumstances in which I would choose to record All-I. I don't see 140 Mbps in the Z6 as a downside. Even 4:2:2 vs. 4:2:0 makes no visual difference, though it's better for chroma keying. The real advantage of the GH5 is 10 bit. 8 bit falls apart almost immediately, particularly when used with log curves. But despite its codec fidelity, GH5 footage is not fun to work with, so I'd take a Z6 myself. I do wish Nikon had included a 10 bit HEVC codec, though.
  23. What do you mean by "data"? Because in terms of digital data collected, the GH5 records more pixels (in open gate mode), and stores them in more bits. So no, the Z6's larger sensor does not record more data. True, more photons hit the Z6 sensor at equivalent aperture, due to its larger size. That's why the low light is significantly better. I agree, the Z6 has better color. Like I said, I'd rather have the Z6 for many reasons, color being one of them. All I am saying is that the GH5 holds up much better to an aggressive grade before showing compression artifacts, not that it looks subjectively "better."
  24. The GH5 also totally destroys the A6500. I believe the A6500 is one of the main reasons there is a general negativity towards Sony. It has odd color (to be generous) and the worst possible rolling shutter. I think we all know how big sensors are. Size does not necessarily mean more detail, or a better image; a lot of that is determined by the processing done afterwards. And sensor size has nothing to do with the amount of data unless you keep pixel pitch the same. I think the GH5's 4:3 open gate mode records more pixels/data than the Z6 in 16:9 UHD, and certainly packs it into a beefier codec. From what I have seen, the Z6 doesn't have a particularly good image compared to a GH5 at native ISO. Maybe that's because I've actually worked with GH5 footage on real projects, whereas I've only downloaded test clips of the Z6. But the GH5 holds up better to grading, with its high data rates. I haven't tried to work with any externally recorded Z6 footage yet, but as soon as you require an external recorder attached with a feeble HDMI cable, you've really lost my interest. It's a workaround to the fact that many DSLR-style cameras don't come with the video codecs and tools we need. That isn't to say the Z6 is bad. I'd rather have a Z6 than a GH5, but it's not because the video image fidelity is better at native ISO. Low light is much better, and full frame means I'd use vintage lenses at their native FOV. But I really think it's a trade-off with the current batch of smaller-sensor cameras that have more video-oriented features.
  25. Thank you. As someone who has to listen to the audio while I edit it, I have a great appreciation for boom ops. I think it's more likely that we use deep learning to clean bad audio than we invent a robot that can hold a physical boom pole properly.