Posts posted by KnightsFan

  1. I played at 1440p and I have to agree, it's very noisy and not sharp. When you watch the version on YouTube, how does it compare to your H265 intermediates? How about to your original MLV files, if you have a way to play those back? I haven't watched any other EOS M footage so I can't say whether this is typical of that camera, but I checked against the XT-3 footage I'm editing right now (which is on a 1080p timeline) and the clarity and noise level are night and day--not to force a comparison, but just to make sure I wasn't going in without an immediate reference.

    In any case, you've got some great stuff to film and practice on at that museum! Tinkering with ML was always fun.

  2. Those new capsules look fairly interesting. It would be neat if Zoom transitioned to a modular recorder system without compromising quality, durability, and form factor. Being able to just add 4 more XLRs with phantom power, or an ambisonic capsule, is actually pretty cool. They're in a position to make a Zoom F1 sequel that is small enough to be a beltpack recorder and has remote triggering over Bluetooth, but can also turn into a recorder with 4-5 XLR inputs when needed. That would be great for people at my budget/skill level who need a swiss army knife for lots of different uses.

    Since their new capsule can do 4 tracks apparently, it would be nice to see some capsules with, say, two XLRs plus stereo mics, though I don't know how many people would ever use those. I wonder if it's bi-directional? Could they make an output module for live mixing?

    It seems like they've put some effort into improving their OS as well. Touchscreen control is... not ideal, but moving towards a modern UI with more functionality is a good direction. When they say it's app based, I assume there are no 3rd party apps--but it would be REALLY interesting if they could get a Tentacle Sync app.

  3. The other part of HEVC decoding is that it's not a simple question of whether your GPU supports it. Last year I upgraded my CPU from an i7 4770 to a Ryzen 3600. My GPU remained the same, and yet my HEVC decoding performance in Resolve jumped from barely usable to really smooth. This was editing the same exact project with the same exact files on the same version of Resolve studio. I don't know if it was the CPU, the new motherboard/chipset, whether Resolve just doesn't like 7-year-old systems, or what--but it's certainly false that the GPU is the only factor, even on GPU accelerated performance in Resolve.

  4. I think Intel supports 10 bit 4:2:2 decoding, but I have AMD so I'm not sure. @gt3rs was the one who started our attempts at smooth playback with 4:2:2 files, and I'm not sure if they found anything else about CPU decoding. Fwiw, I don't see any options in Resolve for CPU decoding on my Ryzen 3600, so if it has any they aren't implemented in Resolve.

    Edit: actually, I did try 1Dx3 4:2:2 files on an 8th gen Intel i7 processor. With QuickSync enabled, the files didn't even show up, and with QuickSync disabled they played at ~10fps. So either Intel doesn't support 4:2:2 on that processor, or Resolve doesn't implement it.
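    The QuickSync-vs-software question is easy to probe outside Resolve. A minimal sketch, assuming ffmpeg is on your PATH; "clip.mov" and the accelerator names are placeholders you'd swap for your own file and hardware:

```python
# Sketch: build ffmpeg commands to benchmark hardware vs software HEVC
# decoding on a given machine. Assumes ffmpeg is installed; "clip.mov"
# is a placeholder for one of your camera files.

def decode_benchmark_cmd(path, hwaccel=None):
    """Return an ffmpeg command that decodes `path` as fast as possible,
    discards the frames, and prints timing via -benchmark."""
    cmd = ["ffmpeg", "-hide_banner", "-benchmark"]
    if hwaccel:                       # e.g. "qsv" (QuickSync), "cuda", "vaapi"
        cmd += ["-hwaccel", hwaccel]  # must come before -i
    cmd += ["-i", path, "-f", "null", "-"]
    return cmd

software = decode_benchmark_cmd("clip.mov")
quicksync = decode_benchmark_cmd("clip.mov", hwaccel="qsv")
```

    Running both variants on the same 4:2:2 clip and comparing the reported times would at least tell you whether the hardware path exists at the driver level, independent of what Resolve implements.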

  5. 17 hours ago, thebrothersthre3 said:

    I don’t think the noise pattern is subjective in this case. It’s unusable on the URSA. Do you not think the pocket is keeping the highlights better tho ? My face can pretty much be recovered with the P6k where it’s clipped to white on some parts with the URSA. I want to do another test on the URSA to see where the weird vertical lines start happening. I would agree the overall noise is slightly less. But those vertical lines just aren’t removable with NR. The pocket and ursa do better in shadows at 400 iso but I prefer highlight to shadow info usually. 

    The pocket is keeping the highlights better, yes. That doesn't necessarily equate to DR because we don't know the distribution, and I do think the Ursa is better in underexposure strictly in terms of the amount of noise--not that you would use it, because the fixed pattern is very unpleasant. But the Ursa is also keeping sharpness in underexposure just slightly better, though that could be a slightly sharper lens. Of course, that's splitting hairs, as you wouldn't use either one at that point.

    If you do any more tests, what I would do is get a gradient of light and ensure that they clip at the same point, and then check the shadows. That would avoid any variables with them distributing DR differently around middle grey, or any discrepancies between what the two cameras use as middle grey to begin with. Maybe I'll do a similar test with my cameras just for fun.
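    The gradient test could even be scored numerically rather than by eye. A minimal sketch, assuming both cameras are exposed to clip at the same point and the frames are loaded as normalized grayscale numpy arrays (all names and values here are hypothetical):

```python
import numpy as np

def shadow_noise(frame, patch):
    """Standard deviation of pixel values in a shadow patch of a gradient
    chart. `frame` is a 2-D float array in [0, 1]; `patch` is a pair of
    slices selecting a region that should be uniform."""
    return float(np.std(frame[patch]))

# Hypothetical frames from two cameras, highlight-matched beforehand:
rng = np.random.default_rng(0)
cam_a = np.clip(0.05 + rng.normal(0, 0.010, (100, 100)), 0, 1)
cam_b = np.clip(0.05 + rng.normal(0, 0.015, (100, 100)), 0, 1)
patch = (slice(0, 50), slice(0, 50))
# Lower value = cleaner shadows once the clip points are matched.
```

    It wouldn't capture how objectionable a fixed pattern looks in motion, but it removes the middle-grey-placement variable from the comparison.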

  6. Back when I did a ProRes vs H265 comparison, the only person to actually take a guess pointed out that the ProRes version had significant artifacts compared to H265. So in terms of plain quality, H265 can do very well with most scenes. It broke down a little bit more vs. ProRes HQ on a scene with extreme artifacting--even at very high bitrates, H265 couldn't get to ProRes HQ levels, though it did look favorable compared to ProRes 422 to me.

    Of course, actual camera footage will vary, so looking forward to your tests. Another test of just the codecs can be done on a Z Cam, which shoots 10 bit H265 and and flavor or Prores, internally.

    While H265 is harder on the PC, I actually have very little trouble editing it. I'm editing 4K IPB Fuji H265 in a project at the moment. I'm even doing motion tracking and VFX in Fusion, directly on the timeline, plus color grading. I'm not really having any issues with a GTX 1080, Ryzen 3600, 32GB ram. It's not buttery smooth, but it's not holding me back at all for the edit or VFX. The color page is very slow if I have clip thumbnails enabled and am using groups, I guess because every change re-renders all the thumbnails for each clip in the group. Seems like a place Blackmagic could optimize Resolve with some simple tweaks. So I just turn off clip thumbnails and then it's fast again.

    So yeah, while ProRes is easier to work with and definitely has better compatibility when working with others, H265 isn't bad at all.

  7. @thebrothersthre3 Thanks for the files. I've played around with them a little bit. It's so close, I can't definitively say one looks to have more DR than the other. I do think that, although the FPN is subjectively displeasing in motion, the actual amount of noise looks slightly less on the Ursa... though it's obviously hard to say. One really interesting thing is just how different the colors are. The Ursa is very magenta. I can definitively say that from these images, I like the P6K version better.

    Just to go back to the other measurement, C5D puts the P6K at 11.8 stops in 6k at ISO 400, and the Ursa 4.6K at 12.5 when the 4.6k is downscaled to UHD at ISO 800. That's gotta be really close if you also downscale the P6K to UHD. Also worth noting that they measured at 400 instead of 800. I don't know which way that would change things--you'd expect worse DR at a non-native ISO, but it could also have different noise reduction. So overall I'd caution against saying this disproves the other measurement, but it is evidence against the Ursa having more DR than the P6K at any given setting.

     

    The other thing I will say about latitude is that while latitude does not equal DR, it is true that if Camera A has both more over- and under-exposure latitude than Camera B, then Camera A also has more dynamic range (discounting subjective opinions on one having a "nicer" noise pattern).
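    That implication can be stated mechanically: if both latitude figures (in stops) are strictly larger, the total has to be larger too. A trivial sketch:

```python
def implies_more_dr(over_a, under_a, over_b, under_b):
    """If camera A has strictly more latitude on BOTH ends than camera B
    (all values in stops), A's total range must also be larger, since
    (over_a + under_a) > (over_b + under_b). Mixed results prove nothing
    either way, so return None in that case."""
    if over_a > over_b and under_a > under_b:
        return True
    return None
```

    For example, implies_more_dr(4, 8, 3, 7) returns True, while implies_more_dr(5, 6, 3, 7) returns None, because trading one end against the other says nothing about the total.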

  8. @thebrothersthre3 Raw files would be great, along with some information about your test. What ISO? How much over/under are these? What did you use as a base line for "correct" exposure? How are you measuring stops (aperture or shutter speed)? How did you process the images?

    Did you process the images differently? Because on the underexposure images, the P6K clips more outside the window than the Ursa.

    Not trying to say you're wrong here, just trying to understand what I'm looking at.

  9. 26 minutes ago, thebrothersthre3 said:

    I set both cameras to the same ISO, aperture, and shutter speed and used a 300w LED to clip the subjects face and see which could pull back more highlights. Then we did a rough shadow test. The URSA seemed to be on par with the Pocket shadow wise but you got the vertical fixed noise, which ruined it.

    I'd love to see tests! So just to clarify, you didn't test them both at native ISO, which I believe is 800 for the Ursa Mini, but 400 for the P6K?

  10. 17 hours ago, barefoot_dp said:

    I'm actually starting to get interested in the Z-Cams. Since covid-19 smashed my dreams of buying an URSA Mini Pro G2 this year I've been thinking about other options and the Z-Cam is the only other camera that ticks most of the boxes I want for my next camera. Here's the checklist of things I want:

    - 4K @ 100fps
    - Internal ProRes (or similar edit-friendly 10-bit codec)
    - V-Mount batteries (or easily adapted without making a horrible rig) to power monitor, FF, etc.
    - Better Image quality / colour than the FS700 4K
    - Dual XLR
    - Native EF mount so I can do away with Metabones adapters.

    It seems the Z Cams can handle all of this with the exception of XLR - though with the newest update it at least makes it possible to record and adjust two channels independently. I'm researching exactly what I need to make a working rig at the moment.

     

    You can get a 3rd party cable for stereo XLR on the E2. I have not tried it, but I have heard that there are audio issues such as possible clipping at -3 dB or something like that. Definitely something to research first if you need reliable XLRs.

    V mount batteries are quite easy to attach, though personally I prefer using NP batteries for size and simplicity. I use an old iPhone SE as a monitor, which is charged over the USB cable, so it's still a single battery solution.

    10 hours ago, Mike Mgee said:

    Yes. Was in a SmallRig cage with a wooden side handle. As much as the BMPPC4K is poo poo'd on because of the DSLR form factor, it is very convenient. 

    I can keep one hand on the camera and scroll through aputures on the Pocket4k with the other hand. On the Zcam you need to remove your grip from the handle and press the squishy buttons on the camera.

    That's my main complaint tbh. You need to release your grip on the side and readjust your hands to interact with the camera. They really need a native Lanc/BT handle to interact with the camera. And I am not talking about a $400 third party option.

    Same thing with the menu. BM has spoiled a lot of people with its design. ZCAM is reminiscent of a Sony menu system. They should release a first party monitor to interact with the camera. 

     

    Can't you keep your right hand on the handle and use your left to press buttons? Or vice versa if you have a left handed grip. But yeah, the menu sucks, and on the original E2 the buttons are... not great. It's certainly not designed for handheld at all. A proper handle would go a long way, particularly one that can navigate the menus. I get the impression that their original design was for people to use the app, which is what I use. So IF your camera is rigged with an iPhone, then it has great touchscreen controls.

    The other thing that is really annoying on the original E2 is the CFast slot is essentially underneath my handle, and is hard to open to begin with. I wrapped a piece of tape around it with a tab to open, so it's not terrible, but the card slot should have been on the left from the beginning (as it is on the new bodies).

  11. 8 hours ago, majoraxis said:

    My next move is to put a VR headset on my camera to rotate my green screen set when my camera pans and tilts 😉 ... does this already exist in the form of capturing gyroscopic data as it relates to the camera position (like steady xp) and then be being able to drive the rotation/pan and tilt of a 360 degree video or virtual rendered set in computer.  I have not done the research. Someone please school me on this topic if you know about it. Thanks!

    I've been working on this. It's pretty tough. I have mounted an Oculus controller onto my camera and tried to simply record the movement and rotation. I think that it will take some sitting down and doing some math to figure out the exact X, Y, and Z axes of the controller because it doesn't line up quite right "out of the box". I think those "puck" things from the HTC Vive would be better suited, but the Rift is what I have so I'm trying to get it to work with that. I'm not even sure the tracking is accurate enough anyway. The Rift only samples movement at 30 FPS.
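    For what it's worth, the axis mismatch amounts to a fixed rotation between the controller's frame and the camera's frame, which you can measure once and then apply to every tracked sample. A minimal numpy sketch; the 90-degree mounting example is made up:

```python
import numpy as np

# Sketch: re-express tracked controller rotations in the camera's frame.
# r_mount is the fixed (assumed, measured-once) rotation from controller
# axes to camera axes; conjugating by it converts each sample.

def to_camera_frame(r_controller, r_mount):
    """All inputs are 3x3 rotation matrices; returns the controller
    rotation expressed in the camera's coordinate frame."""
    return r_mount @ r_controller @ r_mount.T

# Hypothetical mounting: controller rotated 90 degrees about its Z axis.
r_mount = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
```

    In practice you'd estimate r_mount by recording a few known camera moves (pan only, tilt only) and solving for the rotation that lines them up, rather than eyeballing the offsets.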

  12. 19 hours ago, majoraxis said:

    I watched the trailer and I did not see any obvious CA so my eyes must not be trained at a high enough level.  I think if I wanted a look that was epic I would shoot on the Zeiss Master Anamorphics, or the Panavision lens that Tarantino shot the Hateful 8 on.  That said I appreciate the focus on the optics over camera body, I hope this trend will continue.

    Final note: I have kids and I really enjoyed the cartoon original Mulan.  I hope they can beat it because it the original had humor and heart.  I wonder who is going to play the Eddy Murphy Muchu role in the live version.

    There's definitely some hefty CA in the trailer I just watched, example:

    [screenshot from the trailer showing the chromatic aberration]

    Speaking of which, that's a pretty awesome shot. I haven't paid any attention to Mulan based on the quality of the other live action remakes, but that is a great shot, both for content and optical quality. It reminds me of the showdown with the Wild Bunch at the end of My Name is Nobody. It also shows the impact that vintage styling can make.

  13. @FranciscoB x264 is a specific encoder library that produces h.264 files; it's not an either/or. I'm not sure what library Resolve uses under the hood to encode h.264 if you use the internal encoder. You can also use Nvidia's NVENC. To clarify, NVENC and x264 are different encoders that both end up with an H.264 video. I haven't compared them myself.

    In the past, I have seen better results from ffmpeg compared to Resolve when exporting to h.264 and h.265. One thing that I have noticed is that Resolve does much better if you give it a quality preset rather than a max bitrate. Use the dropdown instead of a number and you'll have fewer artifacts even at comparable bitrates.

    Yes, you can still get better quality by upscaling to 4k and uploading that instead of HD--but only if the user watches in 4k. I never watch YouTube in 4k, so if I were your target audience it wouldn't make any difference. My phone doesn't even stream 1080p most of the time, and honestly I watch as much on my phone as on a PC.
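    The quality-preset-vs-bitrate point maps directly onto ffmpeg's CRF mode. A sketch of the kind of command I mean; the paths are placeholders, and CRF 18 is just a commonly cited "visually transparent" starting point, not a recommendation:

```python
# Sketch: quality-targeted (CRF) x265 encode via ffmpeg, the ffmpeg
# analogue of picking Resolve's quality dropdown instead of a max bitrate.
# The encoder spends bits where the scene needs them rather than capping.

def x265_crf_cmd(src, dst, crf=18, preset="slow"):
    """Build an ffmpeg command for a CRF-based libx265 encode.
    Lower crf = higher quality/larger file; preset trades speed for
    compression efficiency. Audio is passed through untouched."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx265", "-crf", str(crf), "-preset", preset,
            "-c:a", "copy", dst]
```

    The same idea applies to libx264 with its own -crf scale; either way, a quality target tends to artifact less than a bitrate cap at comparable file sizes.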

  14. You can read YouTube's upload guidelines here.

    1 hour ago, FranciscoB said:

    I did a slow encoding with 18 on the quality bar. From a 50gb master file I got a 700mb x265 one. The quality is OK but compared with the master, you notice the difference right away. 

    I'll need to up the quality for better final file. 

    YouTube reencodes everything to a very low bitrate (~8 Mbps for HD). So while you do want to maximize the quality that you upload, keep in mind that viewers watching on YouTube will see worse quality than the file you created. So for example, if you are worried about a quality difference between 50 Mbps and 25 Mbps on your HD file before uploading, well, that's a moot point since the actual stream will be much lower than either of them.
     

    I always upload in H.265. I haven't uploaded much to YouTube in the past year or two, but when I did I couldn't see any benefit or downside between H.264, H.265, and DNxHD uploads. So I picked the one that was easiest for me, which is a high quality H.265 file.
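    To put that ~8 Mbps figure in perspective, the arithmetic is simple; the durations and bitrates below are just illustrative:

```python
# Rough arithmetic: constant-bitrate stream size in megabytes.
# Bitrate is in megabits/second, duration in seconds; divide by 8
# to convert megabits to megabytes.

def stream_size_mb(bitrate_mbps, duration_s):
    """Approximate file size in MB for a constant-bitrate stream."""
    return bitrate_mbps * duration_s / 8

upload  = stream_size_mb(50, 600)  # a 50 Mbps, 10-minute upload
watched = stream_size_mb(8, 600)   # what YouTube's HD stream delivers
```

    A 10-minute clip uploaded at 50 Mbps is roughly 3.75 GB, while the ~8 Mbps stream viewers actually receive is around 600 MB--which is why small differences in upload bitrate wash out after the re-encode.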

  15. I use the NX1 with an external monitor all the time and all buttons continue to function. At one time I had a hack installed and didn't have any issues there; I don't remember which one it was, and I uninstalled it long ago.

    Have you tried with other field monitors? TVs or PC monitors? Is the USB actually connected to anything while you're trying this? I previously found issues when using HDMI and USB at the same time. I'd try without a USB cable attached, if you haven't already.

     

  16. 33 minutes ago, Andrew Reid said:

    In the future digital will be so far gone... so far manipulated with computational photography... that documentary filmmakers will use film, to prove what they shot was real. Digital cameras are going to be reality distortion fields.

    One option is to use hashing to determine whether a digital file has been altered. The one technology I saw worked by directing the user to make certain camera movements, which are hashed together with the content of the video. That makes it virtually impossible to alter the video without it being immediately detectable, since the hash would change. So that technique won't help you immediately determine whether a video you see on YouTube is real, but it would let the filmmaker provide source clips to prove they are real, if necessary.
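    The file-integrity half of that scheme is plain cryptographic hashing, which the Python standard library already provides. A minimal sketch (the camera-movement binding described above is a separate layer this doesn't show):

```python
import hashlib

# Sketch: publish a hash of the source clip at capture time; any later
# edit to the file changes the hash, so a re-supplied "original" can be
# checked against the published value.

def file_sha256(path, chunk=1 << 20):
    """SHA-256 hex digest of a file, read in 1 MB chunks so large video
    files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()
```

    Changing even one byte of the clip produces a completely different digest, which is what makes silent edits detectable--assuming the original digest was published somewhere the editor can't also rewrite.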

  17. 13 hours ago, kye said:

    I still see focus issues regularly on youtube channels that feature vlogging type content.  Sure, we have DPAF and PDAF, but if the camera decides to do a beautifully-smooth speed-controlled phase-detect focus-pull to the wrong f*cking thing, then it's still a wasted shot.  Watch some of the reviews of the new Sony vlogging camera to see people talking about how it has a new focus mode that is far better for things like product shots, and remember that when someone says that a new product is better then it means that the old one had issues that they wanted to be fixed.

    Luckily, if you're holding the camera while filming yourself, manual focus combined with an aperture setting that isn't too ridiculous can fix those focus issues!

    Besides, at this point if you're a vlogger who only cares about content and want the filming process to essentially leave you alone, then there are many smaller sensor cameras to just crank out content.  The vloggers that want to have larger sensors and larger apertures are also somewhat likely to be interested in the MF aesthetic.

    Locking your focus while vlogging can be useful, but I have yet to see anyone vlog with a follow focus.

  18. 6 hours ago, josdr said:

    Thank you for this KnightsFan. Your experience with the F4 and you kindly commenting on its use is something that I am interested in. I will strive for a used  zoom F4 I think since sourcing a mixpre-3 mk1 will be way more difficult. Thank you.

    If I may a last question. Is the zoom F4 easy enough to use on the go or does it take too much fiddling through its UI (hunting around for a low limiter cut for example in submenus :) ).

     

    P.S As for the fact vs opinion I am always weary of people just quoting numbers and proclaiming as undisputed facts... I see that all the time in my professional capacity and also see the resultant retractions and mental gymnastics in trying to defend a rapidly sinking ship.. Snarky comments are easy to make, making a well structured case for something less so.

    I find the F4 relatively easy to use on the go. I think that audio equipment in general has UI/UX issues since the general convention is to use a little knob to twiddle your way through expansive menus. You kinda have to memorize where things are, but it's not worse than other recorders. (I think the DSLR convention of a wheel built on top of 4 directional buttons is simply easier to use).

    But the main UI of being able to toggle channels on/off with 1 click, adjusting gain directly on a knob, and soloing a channel with 1 click are nice. I like having all the controls on the front panel so that it's easy to see when hanging from a neck strap. The one thing I miss from the DR60 Mk2 is that little Mic/Line/Phantom switch. You have to enter the menu on the F4 to get to that. But you can press a button on the F4 to immediately access a channel's options, so day-to-day I don't have to hunt around. It's things like switching from 48kHz to 96, or changing the routing matrix, that are a pain because I only do them once every few months and forget where the options are.

    The biggest issue F4 has for me is that battery life is quite bad using 8 AA batteries. I affixed a large 12V battery on top, but it's an ugly, heavy solution. However, most recorders have pretty bad battery life in my experience, especially older ones. The F6 is very attractive to me for the simple reason of having a built in NPF battery sled and (supposedly) really good battery life, along with 32 bit recording of course.

  19. 2 hours ago, Andrew Reid said:

    How is EF the future? Give me mirrorless lenses all day long

    It's not like people are short for good EF lenses

    My EF and F lenses follow me from camera to camera, so it will be a long time before I buy anything other than EF. I'd love to see Rokinon get some competition for affordable EF cine style lenses. It's great that they also have mirrorless mounts as well.

    Gotta love the #vlogger on Meike's tweet about manual focus cinema lenses though.
