Everything posted by KnightsFan

  1. Yes, I'm aware of that. That's why it's surprising that a 32 GB card with a max read speed of 80 MB/s has a slower write speed than a 16 GB card with a max read of 45 MB/s, both from SanDisk. It's a glitch of some kind, yes. It's strange that it is only present in that one specific white balance mode, though.
  2. Amazing. So is this the first hybrid camera to natively offer RAW video? I would never have guessed that Nikon would be first.
  3. @DaveBerg Haha, I totally understand! I once tried gaming with a 4k monitor, but it was limited to 30 fps and I couldn't stand it! It really is just that we are used to "cinema" being really slow. I sometimes shoot whitewater kayaking at 120 as well. The motion is beautiful, really a night and day difference compared to 30 or 24. Although, I'm not sure I'm ready for that smoothness in a narrative film yet.

     I have issues like this too. I have an "80 MB/s" 32 GB SanDisk that is too slow for 120 fps (though it works perfectly for 4k/24). Then I've got a pair of "45 MB/s" 16 GB SanDisks that are somehow fast enough for 120. Doesn't really make sense to me.

     Actually, my NX1 is on its way out, to be honest. It freezes from time to time; it even did so while shooting tests yesterday. Sometimes the screen is half black, or the menu won't come up. So far, battery pulls have fixed every problem... but I don't think I'll use it on real projects anymore.

     No, but I've seen other mentions of this glitch. My NX1 sometimes has weird color in single frames when using a Kelvin white balance. It's like I'm shooting in Gamma DR and then one frame will be in Gamma Normal. It only happens with a sudden change in exposure, like if I quickly close the aperture while shooting video. And only in Kelvin white balance mode. I never had any color issues in any white balance presets, or using a grey card, or even in AWB.
  4. I think a motorized gimbal will give the best results with the least amount of attention paid to the rig, which seems to be the critical element here. Benefits of a motorized gimbal: it is significantly easier to keep level, which will be HUGE if you are walking on rough terrain or climbing over things, and it is significantly easier and faster to balance, which is crucial if you are in a hurry.

     The glidecam is not a bad option at all, and would be my pick if you can be sure of your footing. A few benefits of the glidecam: no batteries, no firmware, very weather resistant. It's significantly easier to pan and tilt naturally with a glidecam; you just touch it and it moves. Much more intuitive and tactile than twiddling with joysticks or touchscreen apps. You can also carry it easily when not shooting. A motorized gimbal will flop around if it is off, so you need a way to secure it when not in use.

     If neither of those will work, I would go for a fig rig. It's essentially just two handles shoulder-width apart, and a top handle. I know people who have made their own out of old bike wheels. Using one will keep your horizon more level than just holding the camera, and you can operate it one handed if you need to clamber up stuff. Again, no batteries or firmware to worry about. A fig rig won't do MUCH stabilization, but it's better than nothing.

     My last pick would be the shoulder rig. I can't imagine doing a strenuous hike with one of those on me. Maybe other people have better technique, but for me to get decent results I have to keep my back and neck pretty straight, which is impossible hiking up or down any sort of incline or when scrambling up and over things. And if you need to swing your arm out suddenly to maintain balance... goodbye camera.

     Whatever you go with, I definitely recommend having a quick and secure way to put the rig away and leave both hands free, just in case.
  5. @DaveBerg No problem, I'm really curious to find out what exactly is going on. Earlier today I did a number of tests. I tried PAL 25, 50, NTSC 29.97, and 24. First I compared with an old Olympus camera I had lying around. I almost came back and said I saw what you meant about jitter, because the Olympus has a "smoother" look on pans. It may be due more to a softer image/more compression, though, because I also tested against my friend's XT3 and there I don't see a difference. I put this together as a quick example: https://drive.google.com/open?id=1zdEhB_CPPQyC612tR5aElDlxnMd9WXNd

     How does that video compare with what you observe in your footage? I used the settings you said, except that both the XT3 and NX1 are shooting 24.00 fps with a 1/50 shutter. The Fuji has a Flog/Fgamut to 709/Eterna LUT thrown on. I did notice that the Fuji looked smoother until I put the LUT on; the washed-out appearance falsely smoothed it over in my eyes. If I lower the contrast on the footage, it does appear to have smoother motion. In other words, lower dynamic range makes the sharp boundary between indoor and outdoor extra apparent at a low frame rate, almost "jittery". I don't know if this is the same "jitter" you are describing.

     50 fps with a 1/60 shutter (which is slowed down 2x) will have much more motion blur than 25 fps with a 1/50 shutter.

     I know what you mean! I use Gamma DR, but I spent a lot of time figuring out how to color grade it to where I want. It goes from yuck to fantastic with some tweaking.
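     To put rough numbers on that 50 fps vs 25 fps comparison, here is a quick shutter-angle sketch (plain Python; the helper function is mine, not anything taken from the cameras):

         # Shutter angle = 360 * frame rate * exposure time.
         # 180 degrees is the usual "film look" reference point.
         def shutter_angle(fps: float, shutter: float) -> float:
             """Equivalent shutter angle in degrees for a frame rate and exposure time (seconds)."""
             return 360.0 * fps * shutter

         print(shutter_angle(25, 1 / 50))  # 180.0 -> standard motion blur
         print(shutter_angle(50, 1 / 60))  # 300.0 -> much heavier blur per frame

     So relative to its own frame interval, the 50 fps / 1/60 clip has a 300-degree shutter, which is why it looks noticeably more blurred once it's conformed to 25 fps.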
  6. @DaveBerg In your footage I see virtually no motion blur, so I think that particular clip is a fast shutter. To be honest, I don't see the jitter you are talking about. It's possible that you have a more critical eye than me, or even that I am so used to an NX1 that I am blind to it. If you get a chance, can you do a side by side with another camera that doesn't have the jitter? Preferably in 4k so it's super obvious. If you don't have the camera with you then no hurry, we can always keep investigating this later.

     Another possibility is that the player you are using has issues with its HEVC decoder, which would explain why you see it and I don't. Here on EOSHD we've already discovered problems using the built-in Windows 10 HEVC decoder inside DaVinci Resolve Free. (This seems unlikely to be the cause, but worth mentioning as we haven't ruled it out yet.) If this is the case, then transcoding your jittery files on my end to, say, ProRes should make the jitter disappear. If the jitter survives the transcode, then it's either a problem with your settings or a genuine glitch/broken part in your camera. (Hopefully the former!) However, if you are viewing transcoded footage already, then it could still be a decoder issue.

     What firmware do you have? Any hacks currently or previously installed? Does it occur with all lenses? To rule out lens issues, have you tested it with an adapted manual lens? If you tell me a specific set of settings (exposure, picture profile, white balance), I'm happy to shoot some tests on my end and send the files for you to look at.
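     If you want to try the transcode test yourself, something like this should work (a minimal sketch that calls ffmpeg's prores_ks encoder from Python; the file names are placeholders and ffmpeg needs to be on your PATH):

         import subprocess

         # Transcode a suspect H.265 clip to ProRes so playback no longer
         # depends on an HEVC decoder.
         subprocess.run(
             [
                 "ffmpeg",
                 "-i", "suspect_clip.mp4",    # original H.265 file from the camera
                 "-c:v", "prores_ks",         # ffmpeg's ProRes encoder
                 "-profile:v", "3",           # ProRes 422 HQ
                 "-c:a", "pcm_s16le",         # uncompressed audio
                 "suspect_clip_prores.mov",
             ],
             check=True,
         )

     If the jitter is gone in the ProRes file, the decoder is the prime suspect.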
  7. @Rick_B I appreciate you taking the time to ensure that I see that. Always happy to give my opinion on gear!
  8. Yes. I suppose it depends on the format you choose for your proxies, but for any reasonable proxy format, the answer is yes.
  9. What is your shutter speed? It looks like a lack of motion blur. If your shutter is significantly faster than 180 degrees, low frame rate videos will appear "jumpy" as your eye sees individual frames rather than being fooled into seeing motion. The increased sharpness of 4K would exacerbate the problem. Check out https://frames-per-second.appspot.com/ and try the presets to compare images with and without motion blur.
  10. What software are you using? Tbh, I would be surprised if a slow HDD were the root of your problems if it tells you the GPU is incapable of an effect. Can you tell which frame is causing issues? i.e. is it always crashing on a specific part of the render? If so, can you preview that particular frame, or even export it as a single image?

      If so, then my guess (and this is only a guess!) is that it's a bug--maybe Sapphire has a memory leak such that when rendering, it eventually runs out of RAM and bogs down. If that's the case, then no amount of hardware changes would solve anything. However, you may be able to get around it.

      In my experience, effects-heavy sequences fail to export more often than not, so I usually export as an image sequence. That way, if it crashes on frame 30,000, you only need to debug one frame, and you won't have to render the first 29,999 frames again. And if it's a memory leak bogging down a long render, then you won't have any issues at all. At the end, you can stitch the image sequence together with minimal system strain. Of course, for a long project this would require a LOT of disk space, but if you're considering a new SSD anyway it may be worth it.
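      As an illustration of the stitch step, here is a minimal sketch assuming a zero-padded PNG sequence named frame_000001.png onward at 24 fps (names, rate, and codec are placeholders; ffmpeg must be on your PATH):

          import subprocess

          # Stitch an already-rendered image sequence into one video file.
          # This step is light on the system: every frame already exists.
          subprocess.run(
              [
                  "ffmpeg",
                  "-framerate", "24",       # playback rate of the sequence
                  "-i", "frame_%06d.png",   # zero-padded numbering pattern
                  "-c:v", "prores_ks",      # or whatever delivery codec you prefer
                  "stitched_master.mov",
              ],
              check=True,
          )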
  11. You have OIS and DIS off, right? Sometimes the DIS flips out on me, especially with ultrawide lenses. I've never had any issues when DIS is off. I'd be interested in seeing a clip that shows the problem if you have any.
  12. It might not be very easy to do this if you've cut the files before generating proxies. When I do this for my proxy workflow in Resolve, I'm relying on the fact that each proxy file has exactly the same filename and length as its corresponding online media. How would Premiere know that Proxy A corresponds to time 0:05 - 0:15 of file A? If you have unique timecodes for each file, and have properly carried those over into the proxies, you may be able to do it painlessly, since timecode would indicate which proxies correspond to which segments of the original. I think Resolve's conform abilities might be able to handle this; no clue about Premiere.

      In my experience, there won't be much degradation. It might be visible if you pixel peep, but it certainly will not be immediately noticeable to the average viewer. It may be worth the cost, especially since at this point the H.264 files are really holding up your ability to finish editing. So I wouldn't worry too much about it--it's a non-issue at best, and a necessary evil at worst. But I'd keep those original files around anyway, just in case you find that you need them.
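      If you do go the same-filename route, a quick sanity check like this can catch mismatches before you relink (a sketch; the folder names are placeholders):

          from pathlib import Path

          # Verify that every original has a proxy with the same file stem,
          # since same-name matching is what the relink relies on.
          originals = {p.stem for p in Path("online_media").iterdir() if p.is_file()}
          proxies = {p.stem for p in Path("proxies").iterdir() if p.is_file()}

          print("originals without a proxy:", sorted(originals - proxies) or "none")
          print("proxies without an original:", sorted(proxies - originals) or "none")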
  13. Yes, it is big enough if you are close enough to be operating (i.e. not 10 feet away). I downsized from a 7" to a 5" because the 7" was so unwieldy on a DSLR rig. Much happier now.
  14. I have the Lilliput A5. For the price, it is excellent. I can use it in bright sunlight, it has peaking and false color, and it was cheap even brand new. It has mounting points on three sides, and runs off 9V power from the same brick that powers my camera. The color is NOT accurate; if you want to judge color on your external monitor, you will need to spend more money. But if you mainly want a monitor for flexibility and focusing, I highly recommend the A5. The main problem with it is that it only has one customizable button.

      I previously owned a 7" Feelworld and a 3.5" Feelworld EVF. Both were inferior to the Lilliput. The 7" was just too big and heavy; it really dwarfed my rig. The EVF was just bad in every way. My impression is the new Feelworld monitors are better than the ones I had, probably on par with the A5 for the similarly priced ones.

      I have used some SmallHD monitors briefly. They are certainly better build quality. I can't say they would be worth 5x the cost to me, since I have never broken any of my cheap gear and I have no need for great color accuracy on set. Your needs may differ, of course.
  15. @fuzzynormal It was not working perfectly as of a month ago. It worked pretty well throughout editing and color, but it seemed to lose optimized media when I started adding OFX plugins. Regenerating the media would work, but it would lose it again the next time I loaded the project.
  16. I wonder how the low light would stack up with a speed booster and some aggressive noise reduction? I mean if you are tearing it down anyway, it would be worth looking into switching to MFT or E mount.
  17. That's the kind of project I would be all in for. I actually seriously considered doing that a few years ago.
  18. A proxy workflow is what you make of it. Essentially, you create an exact copy of your entire project in a format that plays back more smoothly on your system. At any time, you can toggle between using the "online" media and the "offline" media. If you wish, you can do ALL your post production on the proxies (editing, color, sound), and only toggle back to online media for the final export. Or you can do some editing, switch to online media, do some color, switch back to proxies, do more editing, switch back to online media, edit more--etc. Or you could use proxies to edit, switch back to online media for color correction and export, and never use the proxies again. On the other extreme, you can do a proxy workflow where you never switch back to the online media, but then it's called transcoding.

      You can make proxies FOR any format, and you can make your proxies BE in any format. You can make HD ProRes proxies for 8k 10 bit H.265 footage, or 4k H.265 proxies for an HD ProRes project (not sure why you would want to, though...).

      That's all quite abstract, though. In practice, proxy capabilities depend on which software you are using, especially if you are using the built-in proxy generator. In Resolve, if you use Optimized Media (essentially, proxies) you can switch between optimized and original media at any time, like I described. However, you are limited in the formats that you can make proxies in, and I've often found that Resolve "loses" optimized media all the time. Personally, I make proxies in ffmpeg (see the sketch below), and manually switch out which files my timeline uses. That way I have maximum control over the proxy format, and can easily troubleshoot problems. A decent workflow should allow you to do crops/reframing and variable frame rates without issue, but it depends on the software you use.

      In general, the only pro is smoother playback while editing. However, proxies are also a huge benefit if you have a remote editor and need to send files over slow internet connections. My 500 GB project is only 10 GB in proxy form. I can use Google Drive to send the entire project to an editor, and all they have to send me in return is an XML. Cons are a messier workflow, having files become unlinked if your workflow is not perfect, tons of headaches. But all of those problems can be avoided if you know what you are doing and rigorously test your workflow before using it on a project.
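      Here is a minimal sketch of that ffmpeg proxy pass, driven from Python (folder names, the scale, and the codec are my own assumptions--adjust to taste):

          import subprocess
          from pathlib import Path

          # Batch-generate ProRes Proxy copies of every clip, mirroring each
          # original's filename so the NLE can swap files one-for-one.
          SOURCE = Path("online_media")
          PROXY = Path("proxies")
          PROXY.mkdir(exist_ok=True)

          for clip in sorted(SOURCE.glob("*.mp4")):
              out = PROXY / (clip.stem + ".mov")  # same name, proxy-friendly container
              subprocess.run(
                  [
                      "ffmpeg",
                      "-i", str(clip),
                      "-vf", "scale=1280:-2",     # downscale for smooth playback
                      "-c:v", "prores_ks",
                      "-profile:v", "0",          # ProRes Proxy
                      "-c:a", "pcm_s16le",
                      str(out),
                  ],
                  check=True,
              )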
  19. @BTM_Pix Nope, the global shutter versions have a 1" sensor. Smaller sensor, less DR, much lower frame rates and significantly higher price. Very tough pills to swallow for that sweet global shutter.
  20. I hope it translates into a lot of sales; I am very eager for global shutter to become a standard feature.
  21. There you go! And you get to feel like a secret agent every time you use it.
  22. Even used, the GH5 is a lot more expensive than the E2C. The E2C is like if Panasonic had put 4k 10 bit into their G85. Same thing that already happens when you get a phone call during a shoot... you ignore it and don't ruin the take. Haha, I kid--it's a valid point. But to be fair, people already use their iPhones to shoot serious videos. It's easier to pop the monitor off and make a call on it than to get it out of one of those fancy rig/cage things with the lens on front and everything lol.
  23. The E2C is certainly an interesting new camera. My guess is it's the same sensor as the E1, but with a better processor and updated I/O to fit with the E2 family. If so, there was minimal R&D involved, just putting together pieces Z Cam was already familiar with. I'm not really interested in buying one, but it seems like a brilliant option for live streamers. PoE can really cut cable clutter. For $800 you can get a brand new camera that shoots 10 bit internally. Amazing how far we've come.

      On a more speculative note, a future that I've imagined is using a tiny PC as a recorder. Run ethernet from the camera to the PC, affix an audio interface to the top, and record multi-track audio in sync with the video using your favorite software. No timecode or synchronization needed, just one record button for everything. You can use any codec you want, any hard drive you want. Review footage and edit metadata with a full-blown OS.

      It might be, but the E2C is significantly cheaper for a brand new camera, even compared to a used GH4 and a new Ninja V, plus the E2C is a significantly smaller setup. If you already have an Apple phone, your monitor needs are already covered. With the E2C, you get H.265, and I assume it will eventually have ProRes internally. It seems there are advantages to both.
  24. Sharpness -7, Contrast -8. I use the full 0-255 range with the master black level at 0.