Everything posted by herein2020

  1. Truly and without a doubt the best camera demo video I have ever seen. Sony may have finally figured out that having random YouTubers who call themselves cinematographers shoot demo cat video footage with their flagship cameras wasn't doing them any justice. I hope they do a BTS on the making of this video; I'd love to see how much rigging was needed. I remember the C300 III BTS: they used about $200K worth of rigging and a full production team for their demo video, and it didn't come close to comparing to this.
  2. I'm a hybrid shooter, but I'm not going anywhere near it either, and not just because it is Sony. You could literally get a great stills camera AND a great video-focused camera for the price of this one. I think this camera is targeted at the hybrid sports shooter, if such a thing exists. I'm with you; I think the quest for the perfect hybrid camera might be overrated, and dedicated video cameras are more realistic. Personally, I'm starting to give up on the idea of a hybrid camera. I keep thinking that a hybrid camera would make my life easier on a hybrid shoot (less gear to carry), but then when I'm on a shoot there are so many differences between the photography portion and the video portion (lighting/audio/camera settings/movement/etc.) that I feel like I will always need two camera bodies. Even when it comes to the body: for photography I need a remote flash trigger attached and a battery grip with a vertical shutter release button so that I can shoot portraits; for video I need an audio module attached and can't use a battery grip because of the camera cage.
  3. Probably multiple reasons, not the least of which is the size. The S1H is huge, and I get the feeling that the sales numbers were pretty low, probably mainly due to the new lens mount and AF, but the size probably didn't help (the S5 is proof that people are willing to accept limitations as long as the body is smaller). Also, these cameras are marketed as hybrids; a fan in a hybrid would make it less a hybrid and more a dedicated video body. Let's not forget as well that every other maker still needs to protect their cinema line in some way...only Panasonic seems to really throw every single thing they can into their bodies. Last but not least, fans add complexity, drain the battery, and affect the weather sealing, none of which hybrid shooters want to deal with.
  4. I definitely do that already; 29.97FPS and 60FPS are the only frame rates I use. It's just incredible to me that so many people release YouTube videos with that kind of stutter and either don't notice it or don't care; the rest of the footage is great (color grade, camera movements, content, etc.), and they had to have seen the stuttering prior to uploading. I'm with @Video Hummus; I would think by now it would all be standardized and something like 30FPS vs 24FPS wouldn't even need to be a consideration.
  5. I don't think so; Canon is already working on the R1, and this is in 1DX III price territory, so it's not a direct competitor to the R5; the A7SIII is more of a competitor to the R5. The R1 will be Canon's answer to this camera in both price and functionality. I thought the exact same thing...except the MFT part. Panasonic is dependent on Sony for their sensors, so if they are able to use this sensor in a next-gen S1H (and change their AF system)...oh never mind, who am I kidding, Panasonic will never change their AF system; but anyway, if they were to make an S1H II with this sensor that would be interesting...but probably at least $5500.
  6. The funny thing is, I see that literally all the time in online videos; not as much in Hollywood productions, but even some of them have some really bad scenes where it is jittery. I think the biggest problem is that the typical camera's higher frame rates aren't multiples of 24; they are mostly multiples of 30 (i.e. 30FPS, 60FPS, 120FPS) rather than multiples of 24 (48FPS, 72FPS), so anytime you record at a higher frame rate using a multiple of 30 (just talking NTSC here), getting it to conform properly to a 24FPS timeline is going to require some additional configuration and thought (there's a little frame-cadence sketch at the bottom of this page that shows the math). So years ago I concluded...why do any of that? Why worry about panning speeds, reconforming clips in post, optical flow, frame blending, etc....just to achieve extra motion blur which my clients will never know or appreciate? Is all that work and careful planning really worth it in the end for anything less than high-end commercial and Hollywood work? It gets even better...want motion blur? You can add that back in post using DaVinci Resolve. I mean come on...24FPS is at the very edge of what the human eye perceives as smooth motion...why not step back from that edge and give yourself a little more breathing room? Is motion blur really that important or even noticeable when 90% of your viewers will be watching your creation on their cell phones? Years ago, when I started questioning my frame rate choice again because I kept seeing people recommending 24FPS, I watched an excellent video from Gerald Undone, and after that I never questioned my 30FPS decision again.
  7. I think maybe it is a combination of frame rate and shutter speed when filming in 24FPS (see the shutter-angle sketch at the bottom of this page), but the thing is, I've never noticed jittering at 30FPS. Here is an example: a great looking video until the 0:28 mark, where it goes from being very nice to a jittery mess on my screen. Every single time a video I'm watching starts becoming unwatchable due to stutter and I go check the frame rate, it is always 24FPS; and in my own testing with the proper shutter angle at 24FPS, the footage just didn't look as good to me, so years ago I decided to never film in 24FPS unless a client specified it. The 60FPS was just the capture frame rate; the final delivery is something lower like 30FPS or 24FPS. My point was, I wonder if some of the jitter at 24FPS in some of the videos I have seen comes from shooting at 60FPS then delivering at 24FPS. I guess for me, I've never noticed a motion blur difference between 30FPS and 24FPS, but I notice the stuttering all the time when 24FPS is used.
  8. I know this is probably very controversial, but I ask myself this question every time I see a video shot in the USA at 24FPS instead of 30FPS: why did they do that? I am mainly talking about the USA because I know overseas there is PAL, 50Hz refresh rates, and some other things involved in that formatting which I know nothing about. I will assume that if my TV were set to PAL and the frame rate were 25FPS, it would look the same to my eyes as my TV set to NTSC with the frame rate set to 30FPS. So back to shooting at 24FPS or 23.976FPS in the USA...I just don't get it; I have never seen 24FPS footage (that I am aware of) anywhere other than Hollywood that does not look like it is stuttering badly. If there is no motion, or it's a talking head, then sure, I can't tell the difference; but most of the time the footage looks great...except it is stuttering along due to the frame rate when there is fast motion. To me, on the TVs and monitors that I use to view YouTube and online content, I can almost always tell when it's not 30FPS, and there's nothing "cinematic" about it. I even researched the history of frame rates, and I know they started out that way to save film stock, but those days are long gone. Motion simply isn't smooth if it is not shot at 29.97FPS (30FPS), in my opinion. Somehow Hollywood gets away with it, maybe it's their post processing, their camera equipment, etc., but all other footage at that frame rate is just a stuttering mess to me if it's fast action or a lot of things change between frames. I have also watched a lot of videos on frame rates, and they describe the problems that occur when you shoot in 59.94FPS then try to slow the footage down to 50% on a 24FPS timeline...let alone conform to 23.976FPS. Even with Hollywood, playing a movie straight from a DVD, there have been scenes that were hard for me to watch because the frames appeared to be stuttering. So am I the only one that thinks this way? Is it something with H.265/H.264, YouTube compression, LongGOP compression, bitrates, or something else that makes 24FPS look so terrible most of the time when motion is involved?
  9. I notified the developers...they said it's a known issue and to just not open anything other than Resolve...go figure. The way I can trigger it instantly is to have Lightroom open, then in DR right click > Generate Optimized Media. PS isn't as bad, as long as you aren't saving out to something right when DR is rendering.
  10. Guess I celebrated too soon...I needed that Fusion Render in Place feature yesterday...so I right clicked the clip > Render in Place...nothing happened. Went and watched the YouTube video just to be sure...went back into DR > Render in Place...nothing. It will probably work by the time it's out of beta, but at least for now I'm still watching endless Fusion caching progress bars. Fortunately I never use that; I just put the clips on the timeline and color grade from there. Speaking of bugs...another nasty one is that if you use Resolve while Photoshop, Lightroom, or other GPU-accelerated apps are open, Resolve will crash every time, to the point that it takes Windows with it. The only way to get the display back is to reboot.
  11. I have a 14-core Core i9 7940X CPU, 64GB of memory, NVMe project and OS drives, dedicated SSD cache drives, and an RTX 2080 Ti video card; I can edit nearly anything in Resolve without proxies unless I speed up the footage past 200%; at that point it's easier to create a proxy for smooth playback. I work mainly with H.265 and H.264 10 bit 4:2:2 and 4:2:0 LongGOP footage. I can add color grades, Fusion effects, layers, multicam, etc. without proxies. I personally will never use Premiere again, but it's bad enough that I have no doubt it would still be unacceptably slow on my box. To spec my current box I based my HW choices on Puget Systems' recommendations at the time. I skipped the motherboard/memory/PSU/case research and got the HP Z4; the reasons and build process I listed in this thread. Long story short, I decided years ago my time is worth more than the time it would take to catch up on current HW standards or troubleshoot/avoid HW incompatibilities. You mentioned your budget is up to $3K...if so inclined you could also skip the custom build headache and get a pretty nicely outfitted Z4 for right at $3K: https://store.hp.com/us/en/pdp/hp-workstation-z4-g4-tower-9vd55ut
  12. I loved the idea of the Glidecam, but it just never worked for me. I could get it balanced, and it was great since it didn't need batteries, calibration, apps, etc....you just grab it and go; but I work outdoors mostly and it can get very windy, and I found it pretty much impossible to keep a level horizon in the wind; so many of the moves that I can effortlessly make with a gimbal seemed impossible to do with the Glidecam. I still want to master it one day to have more options on shoots, but it will probably never happen. Instead I've gotten much better with the monopod and handheld; thanks to the S5's IBIS and DaVinci Resolve's stabilization options I use the gimbal less and less.
  13. I agree, all of these clamping solutions look like an accident waiting to happen, or an expensive bill if you damage the railing of the venue. To get enough clamping force, most clamps will damage any surface softer than their own teeth, and hotels/venues aren't too happy with that kind of damage. As you mentioned, glass is twice as bad. I go out of my way not to attach anything to anything I don't own if at all possible, so that's why I use the travel tripod; the tag line is a good idea, I might add that if I ever get to travel again. When I do a setup like that, I have the camera lens literally inches from the top of the railing and my heaviest bag hanging from a hook on the center pole to lower the center of gravity as much as possible; as much as I don't like hauling around a tripod, I'd still rather do that than use one of these clamping solutions. I do tend to use mine for more than just the balcony though, especially as it gets dark, when I can use it for longer exposures; when I travel it's mostly for nature and landscapes, so a tripod is pretty important. My biggest gripe usually is that I don't have a fluid head on the tripod for video, but hauling around a good fluid head makes hauling around the tripod even less like a vacation, so I stick to a normal photography head.
  14. My 45mm showed up too, about 2 months after buying the S5, but it is literally still sitting on my desk without ever having been mounted on the S5. The 20-60mm is sitting there beside it. Once a base plate is on the camera, the EF adapter will not come off unless you remove the base plate. One day I'm going to get around to trying out the 45mm and continuous AF. It will be interesting to see if your AF tests with the 45mm yield better results than with the other lenses.
  15. Call me paranoid, but in your situation I'd rather just take a travel tripod; I just can't imagine trusting my camera setup to something like a clamp. Induro, Manfrotto, and other brands like that make some great travel tripods that are very light and collapse using 4 joints instead of the usual 3, so they get really small. What I do is put the two front legs against the balcony, extend the third leg until the tripod is fully braced against the balcony, then typically hang my camera bag or some kind of weight from a hook at the bottom of the tripod's center pole. I've shot quite a few time lapses like that, and the nice thing is I can also use the tripod for other things, like long exposures. Of course lugging around a tripod can take the fun out of any vacation, but travel tripods, in my opinion, are a reasonable compromise.
  16. Haha, I doubt they'd let me change my username. It will remind me what year I signed up if I'm still here talking about 8bit vs 24bit or whatever we are up to by then 🙂 I have that problem too; I've never done a real side by side test with the exact same sensor, color profile, and scene where the only change is the bit depth, partly because up until now I didn't have a camera that would let me test something like that all internally. Going from 8bit internal to 10bit external is 3 changes (external recorder, external over HDMI, and going from 8 bit to 10 bit), so I don't feel that would be a true test of just 8bit vs 10bit out of the same camera.
  17. I tried that before as well; the problem with every method that I have seen is that you cannot change the camera angle later if the client wants to replace that angle or if you decide to replace it later. The easiest way that I've found is to just flatten it and put it on top of the multicam track; before I do that, I create a gallery still of the color grade and apply that to the flattened clip. It really only takes a few seconds...but it's still a clunky workaround for a problem that does not exist in Premiere Pro. They also fixed the problem where, if you added a picture on top of another picture or a video clip and the picture didn't fill the frame, everything around the picture would be black instead of transparent, which prevented you from seeing the clip underneath. The workaround was to move the image left/right/up/down 1px, which would suddenly turn the border transparent like it should be.
  18. Yess...thanks Mark, I hadn't found that one yet; that's exactly what I meant. I'm sure I wasn't the only one, but I emailed that feature request directly to the development team back in Resolve 16, one day when I got fed up with Fusion constantly caching...and they did it properly, meaning you can go back to the Fusion comp later if you need to make a change; fortunately it's not a one-way trip. I have that problem with multicam edits: certain things you can't do to a multicam sequence, such as changing the crop factor, so you first have to flatten the clip, but then it loses the color grade and you can't switch camera angles afterwards. My workaround is to copy the clip to a new layer first, flatten the copy, then edit that. If I need to go back later, I delete the clip and copy the underlying multicam clip to the top layer again. Who knows...maybe they improved that in DR17 as well; I need to watch the full DR new features video or read the new features list one day.
  19. I think the biggest thing that I have discovered is that 10 bit lets you push the mids around more than 8 bit without losing color quality. Everyone typically talks about recovering the highlights, preventing noise in the lows, and dynamic range; for me the biggest problem is usually recovering exposure on the skin in difficult lighting situations, when the highs and the lows are all well within the WFM but the skin tones are underexposed, which you can't control because you can't properly light the subject. I run into that situation all the time, and 10 bit lets you fix the exposure on the skin tones without losing color quality; 8 bit used to fall apart every time, and typically it was better to just leave the skin tones underexposed than to try to fix them without fill lighting. Even with 10bit there's only so much you can do to the mids before you start losing color (that's where the latitude limitations come in), but I will take 10 bit over 8 bit any day for this situation. A good example is a sunset behind a subject with no fill lighting: with 8 bit you have no choice but to severely underexpose the subject or completely blow out the sunset; with 10 bit (and the benefits of VLOG and the S5's sensor) I am able to recover the skin tones almost to the bottom edge of the proper IRE while retaining the highlights and not washing out the lows. One day I may shoot the exact same sunset scene back to back, once in 4:2:0 8bit and once in 4:2:2 10bit, and demonstrate how the mids fall apart when recovering the subject's exposure (there's a rough code-value sketch of the idea at the bottom of this page), but I typically don't have that kind of time during a shoot.
  20. So far this year I have already picked up the Rokinon 10mm EF-S lens for my S5 for real estate shoots, two more ExpoDiscs so that I don't have to keep trying to remember to pack one in the right bag, a few more odds and ends from SmallRig to tweak my camera cage options, and 3 Falcon Eyes F7s for those times when you need just a little extra light or a 3-point lighting setup on the go for a single subject. If DJI releases a Mavic 3 this year I may pick one up, but beyond that this year will probably be more of the same (small stuff), or maybe lighting...it seems like you can never have enough lighting or audio gear. I am all set with white light (3 Godox VL300s and GVM LED panels), but I wouldn't mind two more GVM RGB lights for backlighting/RGB lighting without gels. I also now need to sell my GH5 and all of my MFT lenses without losing too much on the deal. I keep feeling like I need to replace my 5DIV for photography, but only when I'm sitting here on the Internet or browsing Amazon; when I'm actually on a photo shoot there's no other camera I'd rather have in my hands. If the R5 hadn't turned out the way it did, my gear bag would probably look a lot different (two R5s and maybe even a C70).
  21. DR's history is actually quite fascinating, and it is nothing short of incredible that the software coders have been able to modernize it to where it is today. It is no surprise that some features you take for granted in every other software application took DR 17 revisions to add; there are still some weird quirks, like not being able to take a screen grab from anywhere except the Color page, but each version just gets better. I really think Adobe going to the subscription model was the best thing they could have done for Blackmagic Design; all of the new revenue from users looking for anything that's not subscription based had to have helped them hire more coders and make it more user friendly. I always feel like I've only scratched the surface of what DR can do, especially when it comes to Fusion. If they would just improve the performance of Fusion, or at least let you right click > convert to video clip after all of the Fusion edits are done so that it's not in a constant state of caching for simple things like Text+ titles, it really would be the perfect NLE. I have more Fusion-related performance and stability problems than anything video clip related.
  22. Yes, I am aware of that method as well; I guess I should have been more specific in my original post about the functionality they added. With the attributes option you have to select the ones you want, and you cannot select individual nodes that way. The most intuitive way to me has always been to simply select the nodes and copy/paste; sometimes you also only want, say, 3 out of the 5 nodes or something like that. Everywhere else in the OS you just select what you want and copy and paste it; in DR it has always been some other method which was less intuitive and less customizable. I do wish on the Edit page you could just right click > copy color grade, then Ctrl+V to paste it to the next clip, or at least let the Shift + = keyboard shortcut work on the Edit page, because I like to color grade each clip as I place it on the timeline. In DR's world you put everything on the timeline first and then color grade, which is why it is a different page; but I shoot in such terrible lighting conditions that I like to see if I can fix a clip with a color grade before going to the next clip, so that I can replace it if it is not salvageable. The new feature, combined with adjustment clips (like Premiere's adjustment layers), will be a huge time saver and is long overdue.
  23. I can tell you that I'm a believer. I have the S5, have had to recover from some really horrible lighting, and have had to push the WB to the other side of the spectrum...and the footage held up flawlessly; I am able to recover from situations where I had no control over the lighting in ways that I simply was not able to when shooting 8 bit. I am the first one to say when I think something is overrated...like I personally think ALL-I is just a waste of storage space, RAW footage for YouTube and Vimeo is an even bigger waste of storage space (unless your editing machine really can't handle LongGOP), external recorders are a waste of money for most people, etc. I have reached the conclusion that for anything short of TV-quality commercial work, documentaries (maybe), and feature length films, LongGOP H.264 and H.265 are good enough. But after shooting 8 bit for years and fighting with the color grade, especially when the WB was off, I am a believer in 10 bit. I also think 10 bit handles highlight roll-off better than 8 bit, which is something that even YouTube viewers will notice if you don't manage to get it under control during editing; but anything beyond that (ProRes, BRAW, etc.), as I mentioned before, I think is a waste of space, time, and money. External recorders for certain cameras do let you record higher resolutions (like 6K for example), but unless you really do need to crop and recompose that much, I think anything over 4K is also a waste. There is a video on YouTube that shows 8 bit vs 10 bit side by side, I don't remember the title now, but in his side by side tests he reached the same conclusions that I reached on my own: there is no noticeable difference in the final footage, but 10 bit handles highlights and WB correction better. I think 8 bit is the better way to go if you have the time to properly WB and expose the footage, but when you are shooting hectic events like I typically shoot, or real estate where the lighting is always some weird mixture that you can't control, the extra editing latitude that 10bit provides is very noticeable.
  24. Yes, that is correct; that has always been possible in DR16. You can also use remote grades, create a gallery image and apply the grade from there, hit Shift + = to copy a grade from a previous clip, store a color grade in a memory slot, etc....NONE of those ways are intuitive, nor do they work the way EVERY other program works, which is a simple Ctrl+A > Ctrl+C > Ctrl+V to copy anything to anything. Only with DR17, and even then only with the latest build, can you finally simply go into the color grade node tree, select all nodes, Ctrl+C, go to the clip where you want to apply the grade, and Ctrl+V. With every previous build you could only use this method to copy the first node, which made no sense.
  25. I finally got the Rokinon 10mm EF-S lens. I feel like I'm buying more and more glass for a dead mount, but it's still better than the alternative. I also wasn't sure that the Rokinon would work, since the Canon 10-22mm EF-S lens that I use on my crop sensor Canon body does not work on the S5 (the lens barrel is visible); but the Rokinon works with no problem. I had a real estate shoot yesterday, and hauling around the GH5 just to do the interior was a pain, not to mention color grading it afterwards and trying to match the S5 and drone footage. It looks like it's finally time to sell the GH5 and all of my MFT lenses. The S5 is so impressive I am actually thinking about using the money to either get another S5 or an S1. I feel like the minute I do, though, Panasonic will announce something even better...you know...like a working AF system that you can only get in their next version.
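
A few quick sketches to go with the posts above. First, the conform math behind post 6: a minimal illustration in Python (the helper is my own hypothetical, not how any NLE is actually implemented) of why 60FPS footage lands cleanly on a 30FPS timeline but unevenly on a 24FPS one:

```python
# Hypothetical sketch: pick the nearest-earlier captured frame for each
# timeline frame and look at the spacing between the picks.
def source_steps(capture_fps, timeline_fps, n_frames=8):
    """Gaps (in source frames) between consecutive timeline frames."""
    ratio = capture_fps / timeline_fps
    picks = [int(i * ratio) for i in range(n_frames + 1)]
    return [b - a for a, b in zip(picks, picks[1:])]

print(source_steps(60, 30))  # [2, 2, 2, 2, ...] -> even cadence, smooth motion
print(source_steps(60, 24))  # [2, 3, 2, 3, ...] -> uneven cadence, i.e. judder
```

The alternating 2-3 spacing is the judder; avoiding it is why conforming 60 to 24 needs optical flow or frame blending, while 60 to 30 is a clean 2:1 drop.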
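Second, the shutter-speed side of post 7. This is just the textbook 180-degree shutter rule; the function name is made up for illustration:

```python
# Exposure time per frame for a given shutter angle
# (180 degrees = the classic rule of thumb for "natural" motion blur).
def shutter_seconds(fps, shutter_angle=180):
    return (shutter_angle / 360.0) / fps

print(1 / shutter_seconds(24))  # 48.0 -> 24FPS wants ~1/48s (usually set as 1/50)
print(1 / shutter_seconds(30))  # 60.0 -> 30FPS wants 1/60s
```

So 24FPS and 30FPS carry nearly the same per-frame blur (1/48s vs 1/60s), which fits never noticing a motion blur difference between the two.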
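Last, a back-of-the-envelope version of the 8 bit vs 10 bit midtone argument from post 19. The band numbers below are assumptions picked purely for illustration (an underexposed skin-tone band being stretched upward in the grade), not measurements from any camera:

```python
# How many distinct code values live inside a normalized signal band.
def levels_in_band(bit_depth, band_lo=0.20, band_hi=0.35):
    return int((band_hi - band_lo) * 2 ** bit_depth)

for bits in (8, 10):
    print(f"{bits}-bit: {levels_in_band(bits)} code values in the band")
# 8-bit:  38 code values  -> stretch them and you get banding/blotchy color
# 10-bit: 153 code values -> 4x finer steps, so the grade holds together
```

Same stretch, four times the tonal steps to spread across it; that is the extra latitude being described.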