Everything posted by KnightsFan

  1. If you're coming from layers, it might help to conceptualize nodes as being similar to analog audio systems. Say you have a mic going to the input on an EQ, out to a compressor, and then to a speaker, like:
     Mic -> EQ -> Compressor -> Speaker
     You can stick headphones in at any stage and you hear the audio signal at that point. You aren't applying EQ to the mic, you're placing an EQ on the signal between the mic and compressor. That's how nodes work, too. With layers it's often like you are "applying color correction to an image," whereas with nodes you're "piping the image signal through a color correction operation." You can use the viewers in Fusion like you'd use headphones on that audio path, to see what the signal looks like at any given point in the chain. I don't know if that helps at all.
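To make the "piping the signal through an operation" idea concrete, here is a toy sketch; the plain functions below are stand-ins for nodes, not Fusion's actual API:

```python
# Toy model of a node graph: the image is a signal flowing through
# operations, and a "viewer" can tap the chain anywhere, like headphones
# on the audio path above. These functions are stand-ins, not Fusion's API.

def eq(signal):
    return signal + " -> EQ'd"

def compressor(signal):
    return signal + " -> compressed"

signal = "mic"
signal = eq(signal)
print("viewer tap:", signal)    # inspect the signal between the two nodes
signal = compressor(signal)
print("speaker:", signal)
```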
  2. Nodes are certainly much better for composites and fine-tuned control of effects that don't change much over time. Layers are generally easier for motion graphics, or effects that take place over time. Just for fun, I did a recent motion graphics bit in Fusion which I would normally do in AE. It wasn't too bad at first, but it was a nightmare to retime things. I think that with a few small changes, Fusion would be almost as good as AE for motion graphics, while keeping all the benefits of a node-based compositor. Some of these may be possible already, and I just haven't learned about them yet:
     - Easily make collapsible node groups / nested node graphs. You could then sort your "layers" into collapsed node graphs. This would solve the problem of a massive, un-navigable node graph.
     - A Merge node with unlimited inputs. PixaFlux has an awesome merge node with unlimited inputs: each layer is just composited onto the next. It saves SO much space on the graph.
     - Sensible, intuitive merge options. What does an Apply Mode of "Normal" with an Operator of "In" mean? I have no idea, so I have to memorize what the different combinations do. It would be so much easier to just have "Add," "Subtract," and "Multiply" (see the sketch after this list).
     - Nodes with multiple outputs. How about an RGBA split and combine? Or keyer nodes that output the image and the matte as separate outputs?
     - I still haven't figured out how to manually adjust tracking markers, or how to redo a portion of a track.
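For what it's worth, here is a minimal sketch of what those simple merge modes and an unlimited-input merge could look like on float RGB arrays; these are the standard blend formulas, not Fusion's actual Merge node:

```python
import numpy as np

# Standard blend formulas on float RGB arrays in [0, 1]; generic
# compositing math, not Fusion's Merge node behavior.

def add(a, b):
    return np.clip(a + b, 0.0, 1.0)

def subtract(a, b):
    return np.clip(a - b, 0.0, 1.0)

def multiply(a, b):
    return a * b

# An "unlimited input" merge is then just a fold over the layer stack.
def merge_stack(layers, op=add):
    out = layers[0]
    for layer in layers[1:]:
        out = op(out, layer)
    return out

layers = [np.full((2, 2, 3), v) for v in (0.2, 0.3, 0.4)]
print(merge_stack(layers)[0, 0])  # [0.9 0.9 0.9]
```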
  3. Is it Fusion specifically or node workflows in general that is baffling? I continue to find Fusion's nodes unintuitive for very basic operations (compared to nodes in other software).
  4. I never notice a difference between 4k and 1080p for distribution, even on a large 4k TV. However, with every camera that I have used or edited footage from, downscaled 4k for a 1080p delivery is significantly better than natively shooting 1080p. For capture, I think 4k is without a doubt "worth it" in terms of SD card space, disk space, and processing power--unless you need a very quick turnaround and don't care much about image fidelity. 4k is also very useful for green screening, motion tracking, and other information-hungry VFX processes, especially if you downscale to 1080p afterwards. On the other hand, I can't honestly say I see a difference between downscaled 4k and native 1080p after YouTube compression. But FWIW it's a fairly well known trick to upscale 2k to 4k just to get a higher bitrate on YouTube, if the bandwidth supports it at the streaming end.
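One reason downscaled 4k holds up so well is noise averaging in the scale-down. A minimal sketch, assuming a naive 2x2 box-filter average (real scalers use better kernels):

```python
import numpy as np

# Each 1080p pixel becomes the mean of a 2x2 block of 4k pixels, which
# averages away roughly half the per-pixel sensor noise (1/sqrt(4) = 0.5).

rng = np.random.default_rng(0)
uhd = 0.5 + rng.normal(0, 0.05, size=(2160, 3840))    # flat gray + noise
hd = uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))  # 2x2 box downscale

print(uhd.std())  # ~0.050 noise at 4k
print(hd.std())   # ~0.025 noise after the downscale
```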
  5. Krita is also great and does seem to have an active development community. It seems more geared towards creating artwork from scratch, with a phenomenal brush engine and great graphics tablet support. I find Gimp quicker and more intuitive for simple photo touchup with the clone and heal tools, but it could also just be that I have used it more.
  6. You're welcome! Reaper is an incredibly flexible tool that has more features and capabilities than you see at first glance. Kenny Gioia's Reaper tutorials on YouTube are also a fantastic place to start.
  7. AE: Fusion. It's not as nice for motion graphics, but much better for compositing. It's still usable for motion graphics. Fusion's tracker is phenomenal, though I still haven't quite gotten the hang of manually adjusting tracking points yet. Audition: Reaper. Reaper completely replaces Audition's multitracking, and mostly replaces the audio editor. Reaper's built-in noise reduction plugin isn't as good as Audition's, but that's really the only downside I've found. Reaper works very well with VST (and other) plugins, so a third party noise reduction plugin could even out the difference. RawTherapee is fantastic for RAW photo processing. Gimp is my go-to for photo editing.
  8. I do accept that, what I'm saying is that his signature style makes me less likely to enjoy the film.
  9. Ah, the old "it's bad, but intentionally so." Solo was too dark in many scenes, and it wasn't a question of not calibrating my TV, or watching on an iPad, or even compression. I saw it on Blu-ray on a 60" TV that is within reasonable calibration, in a dark room. It's funny because The Godfather, a movie famous for "underexposure," is so much easier on the eyes--because it's not about darkening a scene, it's about lighting it so that the viewer gets the impression of darkness. ...is exactly right.
  10. I don't watch GoT so maybe the few images I've seen online are misleading, but it looks unnecessarily dark to me. Compare the screenshot at the top of this page https://www.theverge.com/tldr/2019/4/30/18524679/game-of-thrones-battle-of-winterfell-too-dark-fabian-wagner-response-cinematographer with another nighttime battle: https://images.app.goo.gl/3wP5kc7T9JKKCi7m7 The LotR image is obviously nighttime: it's dark, bluish, and moody, and yet the faces are bright enough to see without any trouble, and there are spots of actual pure white in the reflections. It's the job of the cinematographer to give the impression of darkness while keeping the image clear and easy to understand. If it were an end-user calibration problem, everyone would be complaining about every other movie as well. It seems like something was different here.
  11. I've read that FF is more expensive than APS-C because the same number of defects on a wafer will mean a lower percentage yield if you are cutting larger sensors off of that wafer. In other words, 5 defects on a wafer that will be cut into 100 sensors could mean 5 defective sensors and 95 good ones, 95% yield in the worst case. If you are only cutting 4 sensors off of that wafer, those same 5 defects give you a 75% yield at best. The conclusion is that a large sensor is actually more expensive per sq. mm than a smaller sensor. I'm not sure what the actual numbers are--maybe it is just a $150 difference as you read.
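A quick sketch of that yield arithmetic, under the simplifying assumption that defects are the only cause of rejects and each defect ruins whichever die it lands on:

```python
# Simplified wafer-yield model: defects are the only rejects and each
# defect ruins whichever die it lands on.

def yield_range(defects, dies_per_wafer):
    """Returns (best case, worst case) fraction of good dies.
    Best case: all defects cluster on a single die.
    Worst case: every defect lands on a different die."""
    best = (dies_per_wafer - 1) / dies_per_wafer if defects else 1.0
    worst = (dies_per_wafer - min(defects, dies_per_wafer)) / dies_per_wafer
    return best, worst

print(yield_range(5, 100))  # (0.99, 0.95): small dies, 95% even in the worst case
print(yield_range(5, 4))    # (0.75, 0.0): same 5 defects on 4 large dies
```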
  12. That makes sense, because Long GOP 100 Mbps should be better quality than All-I 400 Mbps for a static shot. For example, in something like a medium shot of an interview subject, maybe 75% of the frame is completely static, and will be encoded once per group of pictures instead of once every frame as with All-I. That's significantly more of the bitrate that can be allocated to the 25% of the frame that is moving. There are very few circumstances in which I would choose to record All-I. I don't see 140 Mbps in the Z6 as a downside. Even 4:2:2 vs. 4:2:0 makes no visual difference, though it's better for chroma keying. The real advantage of the GH5 is 10 bit. 8 bit falls apart almost immediately, particularly when used with log curves. But despite its codec fidelity, GH5 footage is not fun to work with, so I'd take a Z6 myself. I do wish Nikon had included a 10 bit HEVC codec though.
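A crude back-of-the-envelope model of that allocation argument, assuming 25 fps, a 30-frame GOP, and that the static region's cost is simply amortized across the GOP (real encoders are far subtler):

```python
# Crude model: a 100 Mbps Long GOP stream can rival 400 Mbps All-I on a
# mostly static shot, because the static 75% of the frame is only paid for
# once per GOP instead of once per frame. Assumed: 25 fps, 30-frame GOP.

def moving_region_mbps(total_mbps, fps=25, gop=1, static_fraction=0.75):
    frame_budget = total_mbps / fps                      # Mb per frame
    static_cost = frame_budget * static_fraction / gop   # amortized over GOP
    return (frame_budget - static_cost) * fps            # Mbps left for motion

print(moving_region_mbps(400, gop=1))   # All-I: 100.0 Mbps for the moving 25%
print(moving_region_mbps(100, gop=30))  # Long GOP: 97.5 Mbps -- nearly equal
```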
  13. What do you mean by "data"? Because in terms of digital data collected, the GH5 records more pixels (in open gate mode), and stores them in more bits. So no, the Z6's larger sensor does not record more data. True, more photons hit the Z6 sensor at equivalent aperture, due to its larger size. That's why the low light is significantly better. I agree, the Z6 has better color. Like I said, I'd rather have the Z6 for many reasons, color being one of them. All I am saying is that the GH5 holds up much better to an aggressive grade before showing compression artifacts, not that it looks subjectively "better."
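A rough sanity check of the "more data" claim; the resolutions and bit depths below are my assumptions (GH5 4:3 open gate at 4992x3744 in 10 bit vs. Z6 UHD at 3840x2160 in 8 bit internal), and counting three channels per pixel is a simplification:

```python
# Rough check of the "more data" claim; resolutions/bit depths are
# assumptions, and 3 channels per pixel is a simplification.

gh5_bits = 4992 * 3744 * 3 * 10  # width * height * channels * bit depth
z6_bits = 3840 * 2160 * 3 * 8

print(gh5_bits / z6_bits)  # ~2.8x more raw bits per frame for the GH5
```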
  14. The GH5 also totally destroys the A6500. I believe the A6500 is one of the main reasons there is a general negativity towards Sony. It has odd color (to be generous) and the worst possible rolling shutter. I think we all know how big sensors are. Size does not necessarily mean more detail or a better image; a lot of that is determined by the processing done afterwards. And sensor size has nothing to do with the amount of data unless you keep pixel pitch the same. I think the GH5's 4:3 open gate mode records more pixels/data than the Z6 in 16:9 UHD, and certainly packs it into a beefier codec. From what I have seen, the Z6 doesn't have a particularly good image compared to a GH5 at native ISO. Maybe that's because I've actually worked with GH5 footage on real projects, whereas I've only downloaded test clips of the Z6. But the GH5 holds up better to grading, with high data rates. I haven't tried to work with any externally recorded Z6 footage yet, but as soon as you require an external recorder attached with a feeble HDMI cable, you've really lost my interest. It's a workaround for the fact that many DSLR-style cameras don't come with the video codecs and tools we need. That isn't to say the Z6 is bad. I'd rather have a Z6 than a GH5, but it's not because the video image fidelity is better at native ISO. Low light is much better, and full frame means I'd use vintage lenses at their native FOV. But I really think it's a trade-off with the current batch of smaller-sensor cameras that have more video-oriented features.
  15. Thank you. As someone who has to listen to the audio while I edit it, I have a great appreciation for boom ops. I think it's more likely that we use deep learning to clean bad audio than we invent a robot that can hold a physical boom pole properly.
  16. https://www.newsshooter.com/2019/04/19/premiere-pro-version-13-1-1/ Hmm, I wonder if the editor updated to 13.1, which caused the crashes to start, and then today we went to 13.1.1, which fixed the crashing but also made our footage incompatible. Anyway, this is why it is great that Blackmagic releases betas. I have seen people deride it as "releasing incomplete software," but having public betas is a great idea as long as people aren't stupid enough to assume a beta is for anything other than testing. And it's infinitely better than having to patch a "non beta" release a week later because it crashes every 2 seconds.
  17. We were hours away from finishing an edit in Premiere yesterday on a fairly big project, and randomly Premiere stopped working. Just crashes. Backup projects did the same thing. It was working one minute, and stopped the next. So we updated Premiere, only to find that the new version doesn't load any of our clips at all (simple H.264 files). So tonight I am batch converting all of our footage to a new format so we can finish the edit tomorrow, before Adobe decides to update Premiere again. This is eerily similar to another story we had here. So yeah, I hope they are in damage control, controlling the damage it does to end users. Needless to say, my longstanding suggestion of switching to Resolve has been accepted. I'm excited to use 16 for our next project!
  18. That's the danger. With our current economy, the more we employ robots, algorithms, and artificial intelligence, the more economic disparity we'll see. So if we don't figure out how to value humans before poverty rates go through the roof, we'll have war. Also, side note, I saw this just now. It will be interesting to see how this kind of service evolves over the next 5, 10, and 20 years. https://petapixel.com/2019/04/11/this-robot-photographer-just-shot-her-first-wedding/?fbclid=IwAR0lbBvyZ-dMj7DHrwqGsQuaTXEFAjUX7uup-AkpWisfvTTV8qzn_Iqry1k
  19. @heart0less I get that in 15.3 when doing Fusion composites sometimes, even on a 2048x1080 timeline. It's one of many reasons I still use Fusion standalone for anything that uses more than like 2 nodes. For reference I've got a GTX 1080 with 8GB VRAM.
  20. That is exactly what machine learning is for. Studying human nuances and internally creating an abstract model based on real world patterns, instead of a human-programmed algorithm. It is exactly what a human wedding videographer does: use experience to govern future actions.
  21. Depends on how old you are. A lot of the stuff I mentioned are real things that we can do now. Here are some really interesting things to look into: Many of us have probably already seen this, where they generate a 3D map of the entire soccer match from an array of cameras. That was a tech demo from over a year ago. Here is a nice overview of where we currently are with machine learning as it relates to 3D modeling. It includes some links to tools you can go try out right now. In the later half of the video he shows off some text-to-image generators, and photoreal facial generators. Certainly worth a watch. Speaking of which, there's the amazing deepfake engine. We've already seen the beginning of machine learning creating screenplays or even entire movies. And before you point out that these aren't anywhere near the quality that humans can produce, look at the timeline. According to Wikipedia, deep learning "became feasible" in the 2010s. In 2018, Nvidia announced the Turing chips with Tensor cores, which use machine learning for denoising--the first real integration of machine learning into consumer vocabulary that I have seen. It's used for real-time raytracing in video games. Just in the past month, both Adobe and Blackmagic have announced integrating AI into their NLEs. We've barely begun with AI and machine learning. Where do you think we'll be in 20 years? As for your thing about robots taking over jobs, that is exactly right, which is why we need to figure out what an economy that no longer requires human input will look like, before it's too late--which comes full circle back to the original post. What will be the monetary value of work when the end for which that work is a means is unnecessary? Edit: Couldn't resist adding this one: an AI found a glitch in the video game Q*bert to get an obscenely high score. In 35 years, no human had found the glitch.
  22. First of all, a lot of the video/editing jobs aren't art. Analyzing a billion ads and creating something similar for a new product is EXACTLY what machine learning does best. And it's not like it's just a black box--an AI can spit out a dozen, a hundred, or a thousand samples, let a human pick what they like best, and refine, and with each iteration the machine creates a slightly better algorithm. Instead of hiring a motion graphics artist, a business owner who wants a commercial can just sit down with an AI and pick which ads they like out of a never-ending stream. Second, I disagree entirely. How do human artists work? They build a knowledge of art history, change a few things, and build off of feedback. That is exactly what machine learning does. Instead of C-3PO wandering about shooting a wedding, picture this: a robot scouts the venue ahead of time and sets up a few dozen small cameras to film the wedding from all angles, and then uses those cameras to reconstruct the entire ceremony in 3D. It then picks the best angles based on the knowledge of every single wedding video ever shot, taking into account the satisfaction ratings of the couples (using videos of the couples' faces when they see their video). With each video, it experiments slightly by changing a few things. It composes music for the wedding based on knowing what songs the couple plays, and knowledge of all music ever written. It does all of this by the next morning. No one sees the robots at any stage--completely discreet. With each wedding it shoots, this system improves slightly. And since it's a machine, it can shoot virtually unlimited weddings every day, thus quickly becoming the best wedding videographer on the planet. Obviously this isn't going to happen tomorrow, but there is no way to stop it from becoming a reality in the near future.
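A toy sketch of that pick-and-refine loop; a "design" as a list of numbers and Gaussian mutation are made up purely for illustration:

```python
import random

# Toy pick-and-refine loop: the "AI" proposes variants, a human picks a
# favorite, and the next batch mutates the winner.

def propose(parent, n=10, spread=1.0):
    return [[g + random.gauss(0, spread) for g in parent] for _ in range(n)]

def refine(parent, rounds=5, pick=lambda batch: batch[0]):
    for _ in range(rounds):
        parent = pick(propose(parent))  # 'pick' stands in for the human choice
    return parent

print(refine([0.0, 0.0, 0.0]))
```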
  23. @kye Oh, okay, I must have misunderstood what you were getting at when you said: ...since the F6's 32 bit mode should be useful in pretty much any scenario that dual channel is--which, as you say, is "all the time." But yeah, the F6's dynamic range doesn't increase the microphone's dynamic range. You still can't get leaves rustling and a jet engine 50m away in the same file with the same mic, unfortunately.
  24. @kye Have you ever used dual channel recording? How is what you are explaining different from dual channel recording and using the peaks from the lower track to replace the clipped portions of the higher track? Because I do it all the time and it works, especially when mixed in (surprise) 32 bit space, because then you can match the relative levels of the two tracks without distortion. I don't know all the maths, but I know from experience that dual channel recording is a life saver at times. I assume the F6 basically does the same thing, combining two input gains based on peaks, but automatically and internally, saving time and file space (1x 32 bit file is smaller than 2x 24 bit files).
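A minimal sketch of that splice, assuming two float sample arrays: 'hot' recorded at high gain (clipping at +/-1.0) and 'safe' recorded 12 dB lower; doing the math in float space avoids re-clipping the repaired peaks:

```python
import numpy as np

# Dual-channel rescue: where the hot track clips, substitute the
# gained-up safety track so the peaks are reconstructed in float space.

def merge_dual_channel(hot, safe, pad_db=12.0, clip_thresh=0.999):
    gain = 10 ** (pad_db / 20.0)          # linear gain to match track levels
    clipped = np.abs(hot) >= clip_thresh  # samples where the hot track hit 0 dBFS
    return np.where(clipped, safe * gain, hot)

hot = np.clip(np.array([0.2, 0.9, 1.2, 1.5, 0.4]), -1.0, 1.0)
safe = np.array([0.05, 0.225, 0.3, 0.375, 0.1])  # same signal, 12 dB lower
print(merge_dual_channel(hot, safe))  # clipped samples restored to ~1.2, ~1.5
```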
  25. @SR It's absolutely a big deal, but primarily for non-pros. I believe what @kye is saying is that Zoom didn't do anything particularly difficult--it's not like they completely redesigned how the circuitry works. It's similar to the dual channel recording feature many recorders have had for years, except that it merges the two files automatically into a 32 bit file, instead of giving you two 24 bit files that you can manually splice if you so desire. But it's absolutely a useful feature, especially for one-man bands who don't have enough eyes to watch the camera and the audio meters at the same time, or for ultra low budget projects (like mine) that employ non-pros without much experience. The dynamic range of the audio file should increase dramatically. 32 bit 48kHz is exactly twice the file size of 16 bit 48kHz, not including the negligibly small amount for metadata.
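The data-rate arithmetic behind that file-size claim, for mono PCM and ignoring the small header/metadata:

```python
# Uncompressed mono PCM data rates, ignoring the WAV header/metadata.

def bytes_per_second(sample_rate=48_000, bit_depth=16, channels=1):
    return sample_rate * (bit_depth // 8) * channels

print(bytes_per_second(bit_depth=16))      # 96,000 B/s
print(bytes_per_second(bit_depth=32))      # 192,000 B/s -> exactly twice
print(2 * bytes_per_second(bit_depth=24))  # 288,000 B/s for two 24 bit safety files
```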