
Towd

Everything posted by Towd

  1. Very cool! Interesting to see how they are sampling the Bayer pattern for the 4:4:4 color. Some pretty extensive binning going on. With this level of sampling, the speed-optimized binning may still produce a really nice image with just a tad more noise. It'll be interesting to see the final results.
  2. Color-aware binning that doesn't throw out data. Sounds cool. Would love to read the white paper on how they bin the data without throwing anything away. In the meantime, my point stands that a lower-megapixel sensor with the same capabilities would be more useful.
  3. Yeah, but in the spec you posted it says it does 6K readout through pixel binning/sub-sampling. So it's throwing away data that could have been preserved. It just looks more like a chip designed by a marketing department than something to serve the needs of filmmakers. Granted, it's for consumer devices and megapixels sell cameras, so it's not like I'm surprised. My experience has been that engineers often have little insight into end-user needs, and that was my point.
  4. I know that Sony's sensor architects don't deliver movies. But I can see some value in a 100-megapixel sensor for stills. I have to agree with @Nikkor, though; I'd love to see them continue to improve dynamic range and rolling shutter in their consumer line more than resolution.
  5. You typically want to put any finishing/sharpening at the very end of your processing, typically just before running it out. How strong you make the sharpening and the radius you use will just depend on the final look you are going for and how soft your original footage is. Depending on the sharpening filter you are using, there is typically a strength and a radius value (see the sketch after this list). As part of your finishing process, you may also want to add grain into the image. Opinions on whether you add the grain before or after the sharpening can vary, but I find that if I'm dealing with an image that
  6. I once worked with a really good compositor at a large VFX house who admitted to me that he was totally colorblind. His trick was that he just matched everything by reading the code values from his color picker tool and matching the parts of his composite purely from the values he sampled. I've always remembered that when I feel I can't trust my eyes, or something is not working for me. You can color grade just by making sure everything is neutral and balanced (there's a rough sketch of checking this by the numbers after this list). Later, as you become more comfortable with the process and gain more experience, you can start creating looks or an effect
  7. It's a separate piece of software you run alongside After Effects. It looks like they still offer an evaluation version, so you can try it out. After Effects will load a sequence using your raw settings, and LR Timelapse takes care of the exposure variations. It's been years since I've done it, so I don't remember the exact workflow, but it's something along those lines. Also, I never did it this way, but I imagine you could run out a clip using a Photoshop timeline. After Effects is just nice because you get its workflow with things like Lumetri color adjustments you can layer
  8. I used to process a lot of time lapse footage for an old job. The standard we used was LR Timelapse. My memory is sketchy, but I think the process involved grading one raw image for the look you wanted and then running the sequence through the LR Timelapse software, and it would calculate exposure variations for you and try to correct for them (there's a rough sketch of the general idea after this list). You would then export that exposure data into After Effects and run out the sequence using your raw settings. Overall, it produced very nice results even if there was a lot of flickering in the original sequence. It used to offer a trial versi
  9. Here's my take on this footage: First one is my neutral grade... but with the exposure pushed way down just to add drama and get the image to pop... at least to me. Also, threw a power window over the right pillar, so it competed less as the subject of the shot. Second one is my crack at a grade for a ghost story (since Kye is looking for themes?). In this story our little friend here is selling haunted curiosities.
  10. You know @kye, when I was reviewing the 2 Gems Media stuff, I got generally a similar impression. His newer stuff is better than some of his early stuff. I also suspect he may just shoot the "standard" color profile on the GH5 and white balance off a wall in the room he's in. (Or off a card.) His post process may not be much more than degraining, tweaking exposure, adding glow, and adjusting skin tones (if that). That would explain the clipped highlights. Plus the standard (non-log, Cine-D) profiles can deliver really nice results with minimal work. This way he can just cr
  11. Hey Mark, I'm really glad you found it helpful! It was kind of fun to take a crack at this type of grading since it's not a look I normally do, but it's nice and certainly has some utility. I don't know if you use Resolve, but I'll attach the powergrade that you are free to use. Otherwise, I'll go through my basic approach and thought process on this. Finding a general system that works for me helps me compartmentalize what I'm doing. Overall though, I try to keep things simple and generally avoid secondaries or masks unless I'm trying to fix problems in footage. I laid everyth
  12. So, I watched the 2Gems Media video a few times and some of their other videos on their channel. It's an interesting modern style that he's obviously using to much success. I wouldn't call it a cinematic style. He's not afraid to let his whites clip, and it looks like he degrains his footage and doesn't add any back, but just leaves it very clean. The most important thing he does is get a nice neutral white balance. Also, he seems to push overall exposure into the upper range. I'm not saying he lifts blacks, but his middle exposure area feels higher than normal. Conversely, for a c
  13. Yes, shared nodes are really useful for making an adjustment ripple across all shots in a scene. It's something more useful to me in the main grade, after I get everything into my timeline's color space. For me, what I like about pre- and post-clips is that I typically have 2 or 3 nodes in my pre-clip, and its purpose is just to prepare footage for my main grade. For example, a team I work with frequently really likes slightly lifted shadow detail, so I'll give a little bump to shadow detail and then run the color space transform in my pre-clip. If one camera is set up rea
  14. Just wanted to point out that the OP is shooting on a GH5 and not a GH5S. They have totally different sensors and very different noise patterns. I personally really like the GH5 noise. At 1600 ISO and below it has very little color noise. I actually really like the way it looks at 400 and 800 ISO... feels organic. Also, the video posted above is a perfect example of why I believe you should analyze footage using a view LUT. V-Log maps black to 128 out of 0-1023 (roughly 12.5% of full scale), which I believe is higher than any other manufacturer. You can take any camera and lift its blacks 10% and see all kind
  15. A big +1 on this for myself as well. Some people seem to get nice results just pulling log curves until they look good, but I find that if I handle all the color transformations properly, I'm reducing the number of variables I'm dealing with, and I have the confidence that I'm already working in the proper color space. Once in the proper color space, controls work more reliably, and it is also a big help if you are pulling material from a variety of sources. I have not tried the ACES workflow, but since I'm pretty much always delivering in Rec.709, I like to use
  16. I watched the video a few times on my laptop, but the noise didn't look unusual. I think you're just seeing the noise in the lifted shadows. Log is not a normal viewing format. That is why view LUTs are typically created for monitoring on a set or in camera. It is supposed to be adjusted to your delivery format (Rec.709, Rec.2020, DCI-P3, film print) before final viewing. It's sometimes described as a "digital negative" in that, like a film negative, it holds a wider dynamic range than a final film print. But when you view it without a display LUT, you are seeing all the gory details that
  17. If you ever end up with some of that in a shot you want to use, you can suppress it pretty well by putting a soft mask around the problem area, keying the purple, and desaturating it (there's a rough sketch of the idea after this list). Best to just avoid it if possible, or use a better lens if you have one when shooting high contrast.
  18. Yeah, the purple stuff around the trees when the focus is changing looks distracting.
  19. I think the most distracting elements in the sample footage are the purple fringing and the pulsing focus. Log color profiles have very lifted shadows, so you will see a lot of noise in them when looking at the footage without a view LUT or grade. Normally, you would apply a LUT, or an S-curve, to push your shadows to near black and roll off the highlights (there's a simple S-curve sketch after this list). Once the shadow area is compressed, you won't see as much noise, if any. If you are going for a flat look, you will probably want to run some kind of denoise on the shadows to clean them up. The advantage of log is that you have the option to c
  20. The thing about the Slanted Lens comparison for me was just that it was obvious that, at regular exposures, the Nikon was clipping whites earlier than the Sony, and the general consensus seems to be that they are using a very similar if not the same sensor. Maybe the same-sensor thing is wrong, though. I do think that clipping or overexposing highlights gives a video-ish look. You may be right, though, that the Fuji and Sony cameras underexpose on purpose to help protect the highlights. It is interesting in the test how much detail in the shadows could be recovered by the Nikon. That's been my gene
  21. Ahhhh, "Motion Cadence". I love it when that old chestnut gets pulled out regarding a cinematic image. I've spent many days in my career tracking and matchmoving a wide variety of camera footage from scanned film, to digital cinema cameras, to cheap DSLR footage. So I find the whole motion cadence thing fascinating since I sometimes spend hours to days staring at one shot trying to reverse engineer the movement of a camera so it can be layered with CGI. So leaving out subtle magical qualities visible to a select subset of humans who have superior visual perception, or describing it lik
  22. Regarding a video-ish look to the Z6 footage, I found the following video interesting. Nikon seems to overexpose by about a stop or so compared to Sony. I remember when I used to shoot with a D5200 a few years back, I got in the habit of underexposing by a stop or two and pulling up my images in post. Considering that clipped or blown-out highlights are one of the major contributors to a "video look", my guess is that Nikon's default exposure levels may be to blame for this opinion among some shooters who shoot mostly full auto. Luckily, the easy solution is to underexpose and balanc
  23. Yeah, to be honest, now that I think on it more, just because they are hitting 100 IRE on the waveform, who knows what's going on in the camera's color profile. There could be superwhite data, or some other special sauce, that can be recovered above 100 IRE. Yeah, I'd expect them to at least try to match the image output a little better. Still curious....
  24. Maybe I'm a noob, but it doesn't look to me like the leftmost square for the S1 was exposed as brightly as in all the other test examples. If exposure had been pushed another stop or more, the dark end and noise floor may have fared better. Did they not want it to beat the Ursa? Conspiracy theorists want to know... But seriously, I'm curious if anyone has experience with these test charts. [EDIT] I checked their waveform plots for the S1 and Ursa, and they look similar for the left box, so it may just be a grading anomaly, but I'm still suspicious.... (For reference, there's a rough sketch of how a waveform plot is built after this list.)
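
A rough sketch of the finishing pass mentioned in post 5 above, assuming a float RGB frame in NumPy. The unsharp-mask radius and strength stand in for the radius and strength controls I was talking about, and the grain is plain Gaussian noise; the names and values here are just illustrative, not any particular NLE's filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(img, radius=1.5, strength=0.5):
    """Unsharp mask: add back the difference from a blurred copy."""
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))
    return np.clip(img + strength * (img - blurred), 0.0, 1.0)

def add_grain(img, amount=0.02, seed=0):
    """Mild Gaussian grain as the very last step."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, amount, img.shape), 0.0, 1.0)

# illustrative frame: float RGB in [0, 1]
frame = np.random.default_rng(42).random((270, 480, 3))
finished = add_grain(sharpen(frame, radius=1.5, strength=0.5), amount=0.02)
```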
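
And a tiny sketch of the "grade by the numbers" habit from post 6: sample a patch that should be neutral and check how far each channel sits from gray. The patch location and size are hypothetical; a color picker in any grading tool gives you the same readout interactively.

```python
import numpy as np

def neutral_offsets(img, y, x, size=8):
    """Average RGB over a small patch and report each channel's offset from the patch's gray value."""
    patch = img[y:y + size, x:x + size].reshape(-1, 3).mean(axis=0)
    return patch - patch.mean()  # positive = channel too hot, negative = too weak

# illustrative frame; near-zero offsets mean the patch reads neutral
frame = np.random.default_rng(1).random((270, 480, 3))
print(neutral_offsets(frame, y=100, x=200))
```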
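
Regarding posts 7 and 8: LR Timelapse's flicker correction is its own (proprietary) thing, but the general idea, as I understand it, is to smooth out frame-to-frame exposure swings. Here is a very rough NumPy sketch under that assumption; the real tools work on the raw data and are far more sophisticated.

```python
import numpy as np

def deflicker(frames, window=7):
    """Scale each frame so its mean luminance follows a smoothed trend."""
    means = np.array([f.mean() for f in frames])
    kernel = np.ones(window) / window
    smoothed = np.convolve(means, kernel, mode="same")  # crude moving average (edges droop a bit)
    return [np.clip(f * (s / m), 0.0, 1.0) for f, m, s in zip(frames, means, smoothed)]

# illustrative sequence with random exposure flicker
rng = np.random.default_rng(2)
frames = [np.full((270, 480), 0.4) * rng.uniform(0.9, 1.1) for _ in range(60)]
stabilized = deflicker(frames)
```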
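
The fringe suppression from post 17, sketched with NumPy and Matplotlib's color helpers: build a soft mask where the hue falls in a purple range, then pull saturation down inside it. The hue bounds and amounts are guesses for illustration; in practice you'd do this with a qualifier plus a soft power window in your grading tool.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
from scipy.ndimage import gaussian_filter

def suppress_purple(img, hue_lo=0.7, hue_hi=0.85, desat=0.7, softness=5.0):
    """Key purple hues, soften the mask, and desaturate inside it."""
    hsv = rgb_to_hsv(img)
    mask = ((hsv[..., 0] >= hue_lo) & (hsv[..., 0] <= hue_hi)).astype(float)
    mask = np.clip(gaussian_filter(mask, sigma=softness), 0.0, 1.0)  # soft edges
    hsv[..., 1] *= 1.0 - desat * mask                                # pull saturation down
    return hsv_to_rgb(hsv)

# illustrative frame
frame = np.random.default_rng(3).random((270, 480, 3))
cleaned = suppress_purple(frame)
```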
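
And the S-curve idea from post 19, in a minimal form. A smoothstep between a toe and a shoulder is used here purely because it's easy to write down; it's not any camera maker's recommended transform, just a stand-in that pushes lifted log shadows toward black and rolls off the highlights.

```python
import numpy as np

def s_curve(x, toe=0.1, shoulder=0.9):
    """Map [toe, shoulder] onto [0, 1] with a smoothstep S-shape.
    Values below the toe crush to black; values above the shoulder roll to white."""
    t = np.clip((x - toe) / (shoulder - toe), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)  # flat near the ends, steep through the mids

# lifted log-style shadows around 0.12 end up near black
values = np.array([0.05, 0.12, 0.5, 0.95])
print(s_curve(values))  # approx. [0.0, 0.002, 0.5, 1.0]
```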
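
Finally, for post 24: a waveform plot is basically a per-column histogram of luma. A rough sketch, assuming a float RGB frame and Rec.709 luma weights; the scopes in grading software add IRE calibration, scaling, and color on top of this.

```python
import numpy as np

def waveform(img, levels=256):
    """Rows = luma level (bright at the top), columns = image columns."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])           # Rec.709 luma weights
    bins = np.clip((luma * (levels - 1)).astype(int), 0, levels - 1)
    wf = np.zeros((levels, img.shape[1]), dtype=np.int64)
    for col in range(img.shape[1]):
        wf[:, col] = np.bincount(bins[:, col], minlength=levels)
    return wf[::-1]                                           # flip so bright values sit at the top

# illustrative frame
frame = np.random.default_rng(4).random((270, 480, 3))
wf = waveform(frame)
```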