
Posts posted by KnightsFan

  1. @kye Yes, your H.264 files are 8 bit. So in your initial run, were all your tests generated with either Resolve or ffmpeg from your reference file, or were any made directly from the source footage?

    1% of the file size is not remotely true, unless you start with a tiny H.265 file and then render it into ProRes, which will increase the size without adding quality. I wouldn't trust much that comes from CineMartin.
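    To put a rough number on it (going from memory of Apple's published targets, roughly 700 Mb/s for ProRes 422 HQ at UHD/30p): 1% of that is about 7 Mb/s, which is well below the H.265 bitrate of basically any camera, so matching quality at that size isn't plausible.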

    Of course, the content matters a lot for IPB efficiency. So I guess you might be able to engineer a 1% scenario if your video isn't moving, and you use the worst ProRes encoder that you can find. (Stuff like trees moving in the wind is actually pretty far on the "difficult for IPB" spectrum, though it looks like only about half your frame is that tree).

    Just btw, in my experience Resolve does a lot better with encoding when you use a preset rather than a defined bitrate. Emphasis, a lot better.

  2. 1 minute ago, kye said:

    Any RAW file will require de-bayering in order to compare to anything else.  The only way to avoid softness from de-bayering is to downscale.  

    I used 4K raw files, and exported them as uncompressed 10 bit 444 HD files and did all my tests in HD, so yes I downscaled. And that's pretty similar to my workflow, where I always work on an HD timeline using the original files with no transcoding.
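    Not the exact tool I used, but for anyone wanting to replicate that workflow, ffmpeg can make a lossless 10 bit 4:4:4 HD intermediate. FFV1 here is just one lossless option, and the filenames are placeholders:

    "ffmpeg -i input4k.mov -vf scale=1920:1080 -pix_fmt yuv444p10le -c:v ffv1 reference.mkv"

    Uncompressed would work too; it's just much larger for the same reference quality.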

     

    You can run "ffprobe -i videofile.mp4" and it'll tell you the stream format. There will be a line that goes something like:

    "Duration: 00:04:56.28, start: 0.000000, bitrate: 20264 kb/s
        Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 4096x2160 [SAR 1:1 DAR 256:135], 20006 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)"

    So the example above is 8 bit 4:2:0 (yuv420p); a 10 bit 4:2:2 file would show yuv422p10le.
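    If you just want the pixel format on its own, ffprobe can print that single field directly (this should work on any reasonably recent build):

    "ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=noprint_wrappers=1:nokey=1 videofile.mp4"

    That prints just yuv420p, yuv422p10le, etc.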

  3. Great tests! One thing I will say is that when I did my ProRes vs H265 tests, I tested on a larger-than-HD raw file in order to maximize the quality of the reference file and avoid softness and artifacts from debayering. Additionally, my reference file was 4:4:4 rather than 4:2:2... fwiw.

    The other thing that would be nice is some files to look at, since while SSIM is great to have, it's not the only way to evaluate compression.
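    If anyone wants to reproduce the SSIM numbers, ffmpeg can compute them directly against a reference (filenames here are placeholders):

    "ffmpeg -i encoded.mp4 -i reference.mov -lavfi ssim -f null -"

    It prints per-channel and overall SSIM at the end of the run.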

    Also quick question: Are your H.264/H.265 files 4:2:0 or 4:2:2? 10 bit or 8 bit?

  4. It looks so much better now, so it was indeed the workflow that was causing issues. There's still some noise in low light, but the mushiness is gone and the color has life in it. Also, I'm not sure if you slowed it down before or had a frame rate mismatch, but that opening shot of the front gate used to have jerky, unnatural movement and looks normal now.

    One thing, though, is that you've got some overexposure that wasn't present before. You might want to adjust the curves in Resolve, or experiment with the raw settings. The shot of the bust at 0:44 in the original is properly exposed, but the shot of the bust at 0:48 in the new one is blown out. But yeah, it definitely looks like it was captured in raw now, which the first video did not.

  5. I played it at 1440p and I have to agree, it's very noisy and not sharp. When you watch the version on YouTube, how does it compare to your H265 intermediates? How about to your original MLV files, if you have a way to play those back? I haven't watched any other EOS M footage so I can't say whether this is typical of that camera, but I checked against the X-T3 footage I'm editing right now (which is on a 1080p timeline) and the clarity and noise level are night and day--not to force a comparison, but just to make sure I wasn't going in without an immediate reference.

    In any case, you've got some great stuff to film and practice on at that museum! Tinkering with ML was always fun.

  6. Those new capsules look fairly interesting. It would be neat if Zoom transitioned to a modular recorder system without compromising quality, durability, or form factor. Being able to just add 4 more XLRs with phantom power, or an ambisonic capsule, is actually pretty cool. They're in a position to make a Zoom F1 sequel that is small enough to be a beltpack recorder and has remote triggering over Bluetooth, but can also turn into a recorder with 4-5 XLR inputs when needed. That would be great for people at my budget/skill level who need a swiss army knife for lots of different uses.

    Since their new capsule can do 4 tracks apparently, it would be nice to see some capsules with, say, two XLRs plus stereo mics, though I don't know how many people would ever use those. I wonder if it's bi-directional? Could they make an output module for live mixing?

    It seems like they've put some effort into improving their OS as well. Touchscreen control is... not ideal, but moving towards a modern UI with more functionality is a good direction. When they say it's app based, I assume there are no 3rd party apps--but it would be REALLY interesting if they could get a Tentacle Sync app.

  7. The other part of HEVC decoding is that it's not a simple question of whether your GPU supports it. Last year I upgraded my CPU from an i7 4770 to a Ryzen 3600. My GPU remained the same, and yet my HEVC decoding performance in Resolve jumped from barely usable to really smooth. This was editing the same exact project with the same exact files on the same version of Resolve studio. I don't know if it was the CPU, the new motherboard/chipset, whether Resolve just doesn't like 7-year-old systems, or what--but it's certainly false that the GPU is the only factor, even on GPU accelerated performance in Resolve.

  8. I think Intel supports 10 bit 4:2:2 decoding, but I have AMD so I'm not sure. @gt3rs was the one who started our attempts at smooth playback with 4:2:2 files, and I'm not sure if they found anything else about CPU decoding. Fwiw, I don't see any options in Resolve for CPU decoding on my Ryzen 3600, so if it has any they aren't implemented in Resolve.

    Edit: actually, I did try 1Dx3 4:2:2 files on an 8th gen Intel i7 processor. With QuickSync enabled, the files didn't even show up, and with QuickSync disabled they played at ~10fps. So either Intel doesn't support 4:2:2 on that processor, or Resolve doesn't implement it.
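    One way to sanity-check hardware decode outside of Resolve, if you have ffmpeg installed (clip.mp4 is a placeholder for one of the 4:2:2 files):

    "ffmpeg -hwaccel auto -benchmark -i clip.mp4 -f null -"

    If the decoder has to fall back to software, ffmpeg warns about it and the difference shows up immediately in the reported benchmark times.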

  9. 17 hours ago, thebrothersthre3 said:

    I don’t think the noise pattern is subjective in this case. It’s unusable on the URSA. Do you not think the pocket is keeping the highlights better tho ? My face can pretty much be recovered with the P6k where it’s clipped to white on some parts with the URSA. I want to do another test on the URSA to see where the weird vertical lines start happening. I would agree the overall noise is slightly less. But those vertical lines just aren’t removable with NR. The pocket and ursa do better in shadows at 400 iso but I prefer highlight to shadow info usually. 

    The pocket is keeping the highlights better, yes. That doesn't necessarily equate to DR because we don't know the distribution. I do think the Ursa is better in the underexposure strictly in terms of the amount of noise--not that you would use it, because the fixed pattern is very unpleasant. The Ursa is also keeping the sharpness in the underexposure just slightly better, though that could be a slightly sharper lens. But of course that's splitting hairs, as you wouldn't use either one at that point.

    If you do any more tests, what I would do is get a gradient of light and ensure that they clip at the same point, and then check the shadows. That would avoid any variables with them distributing DR differently around middle grey, or any discrepancies between what the two cameras use as middle grey to begin with. Maybe I'll do a similar test with my cameras just for fun.

  10. Back when I did a ProRes vs H265 comparison, the only person to actually take a guess pointed out that the ProRes version had significant artifacts compared to H265. So in terms of plain quality, H265 can do very well with most scenes. It broke down a little bit more vs. ProRes HQ on a scene with extreme artifacting--even at very high bitrates, H265 couldn't get to ProRes HQ levels, though it did look favorable compared to ProRes 422 to me.

    Of course, actual camera footage will vary, so I'm looking forward to your tests. Another test of just the codecs can be done on a Z Cam, which shoots 10 bit H265 and a flavor of ProRes internally.

    While H265 is harder on the PC, I actually have very little trouble editing it. I'm editing 4K IPB Fuji H265 in a project at the moment. I'm even doing motion tracking and VFX in Fusion, directly on the timeline, plus color grading. I'm not really having any issues with a GTX 1080, Ryzen 3600, and 32GB RAM. It's not buttery smooth, but it's not holding me back at all for the edit or VFX. The color page is very slow if I have clip thumbnails enabled and am using groups, I guess because every change re-renders all the thumbnails for each clip in the group. Seems like a place Blackmagic could optimize Resolve with some simple tweaks. So I just turn off clip thumbnails and then it's fast again.

    So yeah, while ProRes is easier to work with and definitely has better compatibility when working with others, H265 isn't bad at all.

  11. @thebrothersthre3 Thanks for the files. I've played around with them a little bit. It's so close, I can't definitively say one looks to have more DR than the other. I do think that, although the FPN is subjectively displeasing in motion, the actual amount of noise looks slightly less on the Ursa... though it's obviously hard to say. One really interesting thing is just how different the colors are. The Ursa is very magenta. I can definitively say that from these images, I like the P6K version better.

    Just to go back to the other measurement: C5D puts the P6K at 11.8 stops in 6K at ISO 400, and the Ursa 4.6K at 12.5 when the 4.6K is downscaled to UHD at ISO 800. That's gotta be really close if you also downscale the P6K to UHD. Also worth noting that they measured the P6K at 400 instead of 800. I don't know which way that would change things--you'd expect worse DR at a non-native ISO, but it could also have different noise reduction. So overall I'd caution against saying this disproves the other measurement, but it is evidence against the Ursa having more DR than the P6K at any given setting.

     

    The other thing I will say about latitude is that while latitude does not equal DR, it is true that if Camera A has both more over- and under-exposure latitude than Camera B, then Camera A also has more dynamic range (discounting subjective opinions on one having a "nicer" noise pattern).
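    As a made-up illustration: if both cameras are metered identically, and Camera A clips half a stop later while also holding one more stop in the shadows, that's 1.5 stops more total range, regardless of how each camera distributes its range around middle grey.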

  12. @thebrothersthre3 Raw files would be great, along with some information about your test. What ISO? How much over/under are these? What did you use as a baseline for "correct" exposure? How are you measuring stops (aperture or shutter speed)? How did you process the images?

    Did you process the images differently? Because in the underexposed images, the P6K clips more outside the window than the Ursa.

    Not trying to say you're wrong here, just trying to understand what I'm looking at.

  13. 26 minutes ago, thebrothersthre3 said:

    I set both cameras to the same ISO, aperture, and shutter speed and used a 300w LED to clip the subjects face and see which could pull back more highlights. Then we did a rough shadow test. The URSA seemed to be on par with the Pocket shadow wise but you got the vertical fixed noise, which ruined it.

    I'd love to see tests! So just to clarify, you didn't test them both at native ISO, which I believe is 800 for the Ursa Mini, but 400 for the P6K?

  14. 17 hours ago, barefoot_dp said:

    I'm actually starting to get interested in the Z-Cams. Since covid-19 smashed my dreams of buying an URSA Mini Pro G2 this year I've been thinking about other options and the Z-Cam is the only other camera that ticks most of the boxes I want for my next camera. Here's the checklist of things I want:

    - 4K @ 100fps
    - Internal ProRes (or similar edit-friendly 10-bit codec)
    - V-Mount batteries (or easily adapted without making a horrible rig) to power monitor, FF, etc.
    - Better Image quality / colour than the FS700 4K
    - Dual XLR
    - Native EF mount so I can do away with Metabones adapters.

    It seems the Z Cams can handle all of this with the exception of XLR - though with the newest update it at least makes it possible to record and adjust two channels independently. I'm researching exactly what I need to make a working rig at the moment.

     

    You can get a 3rd party cable for stereo XLR on the E2. I have not tried it, but I have heard that there are audio issues, such as possible clipping at -3 dB or something like that. Definitely something to research first if you need reliable XLRs.

    V mount batteries are quite easy to attach, though personally I prefer using NP batteries for size and simplicity. I use an old iPhone SE as a monitor, which is charged over the USB cable, so it's still a single battery solution.

    10 hours ago, Mike Mgee said:

    Yes. Was in a SmallRig cage with a wooden side handle. As much as the BMPPC4K is poo poo'd on because of the DSLR form factor, it is very convenient. 

    I can keep one hand on the camera and scroll through apertures on the Pocket4k with the other hand. On the Zcam you need to remove your grip from the handle and press the squishy buttons on the camera.

    That's my main complaint tbh. You need to release your grip on the side and readjust your hands to interact with the camera. They really need a native Lanc/BT handle to interact with the camera. And I am not talking about a $400 third party option.

    Same thing with the menu. BM has spoiled a lot of people with its design. ZCAM is reminiscent of a Sony menu system. They should release a first party monitor to interact with the camera. 

     

    Can't you keep your right hand on the handle and use your left to press buttons? Or vice versa if you have a left-handed grip. But yeah, the menu sucks, and on the original E2 the buttons are... not great. It's certainly not designed for handheld at all. A proper handle would go a long way, particularly one that can navigate the menus. I get the impression that their original design was for people to use the app, which is what I use. So IF your camera is rigged with an iPhone, then it has great touchscreen controls.

    The other thing that is really annoying on the original E2 is the CFast slot is essentially underneath my handle, and is hard to open to begin with. I wrapped a piece of tape around it with a tab to open, so it's not terrible, but the card slot should have been on the left from the beginning (as it is on the new bodies).

  15. 8 hours ago, majoraxis said:

    My next move is to put a VR headset on my camera to rotate my green screen set when my camera pans and tilts 😉 ... does this already exist in the form of capturing gyroscopic data as it relates to the camera position (like steady xp) and then being able to drive the rotation/pan and tilt of a 360 degree video or virtual rendered set in the computer? I have not done the research. Someone please school me on this topic if you know about it. Thanks!

    I've been working on this. It's pretty tough. I have mounted an Oculus controller onto my camera and tried to simply record the movement and rotation. I think that it will take some sitting down and doing some math to figure out the exact X, Y, and Z axes of the controller because it doesn't line up quite right "out of the box". I think those "puck" things from the HTC Vive would be better suited, but the Rift is what I have so I'm trying to get it to work with that. I'm not even sure the tracking is accurate enough anyway. The Rift only samples movement at 30 FPS.

  16. 19 hours ago, majoraxis said:

    I watched the trailer and I did not see any obvious CA so my eyes must not be trained at a high enough level.  I think if I wanted a look that was epic I would shoot on the Zeiss Master Anamorphics, or the Panavision lens that Tarantino shot the Hateful 8 on.  That said I appreciate the focus on the optics over camera body, I hope this trend will continue.

    Final note: I have kids and I really enjoyed the original animated Mulan. I hope they can beat it, because the original had humor and heart. I wonder who is going to play the Eddie Murphy Mushu role in the live version.

    There's definitely some hefty CA in the trailer I just watched, example:

    [screenshot from the trailer showing the chromatic aberration]

    Speaking of which, that's a pretty awesome shot. I haven't paid any attention to Mulan based on the quality of the other live action remakes, but that is a great shot, both for content and for optical quality. It reminds me of the showdown with the Wild Bunch at the end of My Name is Nobody. It also shows the impact that vintage styling can make.

  17. @FranciscoB x264 is a specific library that encodes H.264 files, so it's not an either/or. I'm not sure what library Resolve uses under the hood when you export H.264 with the internal encoder. You can also use Nvidia's NVENC. To clarify, NVENC and x264 are different encoders that both produce an H.264 video. I haven't compared them myself.

    In the past, I have seen better results from ffmpeg compared to Resolve when exporting to H.264 and H.265. One thing that I have noticed is that Resolve does much better if you give it a quality preset rather than a max bitrate. Use the dropdown instead of a number and you'll have fewer artifacts even at comparable bitrates.
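    For reference, the ffmpeg equivalent of a quality target is CRF rather than a bitrate cap. Something like this (CRF 18 and the slow preset are just starting points to experiment with):

    "ffmpeg -i input.mov -c:v libx264 -crf 18 -preset slow -c:a copy output.mp4"

    Lower CRF means higher quality and bigger files; the encoder spends bits where the image needs them instead of chasing a fixed number.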

    Yes, you can still get better quality by upscaling to 4K and uploading that instead of HD--but only if the viewer watches in 4K. I never watch YouTube in 4K, so if I were your target audience it wouldn't make any difference. My phone doesn't even stream 1080p most of the time, and honestly I watch as much on my phone as on a PC.
