Everything posted by HockeyFan12

  1. I believe you, I just don't see how that would be a problem under normal circumstances. Does it crop more than the other software solutions you listed? I feel like I'm missing something. I'll definitely hold off until reviews are in, though.
  2. That makes sense. If the oscillation is happening at a frequency greater than 24 Hz you'd definitely struggle with isolating rolling shutter. I'd think the gyroscope could theoretically be synced with the rolling shutter so it could target different parts of the frame differently, but I doubt that's been implemented, or that it's even close to precise enough in practice. For my hobby stuff, I use a C300 (and Ronin with DJI Focus) and iPhone X (and a DJI Osmo Mobile that isn't working yet...). The C300 has bad rolling shutter, but I've never noticed anything too crazy with it that can't be fixed in post. The job I might apply for would be 6k Red but would require extra stabilization on dolly moves, etc. That wouldn't be something I'd be on the camera department for, so it's just curiosity whether something like this would be useful. I was interested in this device mostly for a travel kit that's better than the iPhone. (Probably an XC10 or X7S.) Gimbals are cool, but I'm not a good enough operator to eliminate bounce and I wanted something extra to bring things to the next level. Then again, Warp Stabilizer doesn't do a bad job with gimbal footage... just not always great. Would the crop with the SteadXP be worse or better than Warp Stabilizer? Do you know how it deals with parallax, if at all? Adobe's subspace warp is pretty bad in terms of how it distorts footage.
  3. What are micro vibrations and why can't they be reduced in post? I'm not familiar with this phenomenon, but I'm also not used to using smaller cameras. I have seen artifacting when OIS on an iPhone works against a gimbal, for instance. Are micro vibrations a phenomenon limited to smaller cameras? This is the first I've heard that term. Why is it more difficult to reduce these than other rolling shutter artifacts or camera shake artifacts? Does it have to do with the frequency of the motion being above a certain threshold (presumably 12 shakes per second) or is it related to motion blur obscuring information (which in theory wouldn't be a problem at a high shutter speed)? I'd think having gyroscope data would allow for rolling shutter correction that's superior (at least in terms of camera movement, not figure movement) to any current alternative, but the Nyquist theorem suggests any shakes at >12 oscillations per second would be difficult to solve, regardless of shutter speed (see the sampling sketch after this list). Potentially, if the frequency at which the device (not the camera) records is great enough, it could get around these limitations with deconvolution, but that's way out of my league. What's andrgl? What post workflows do you recommend for stabilization? To my eye SteadXP is the best in the above video and I don't see it cropping any more than the others, at least in that example. The gig I might apply for is for a team that stabilizes in post, rather than in camera, and they shoot with significant look-around room (6k for 4k center extraction). Their current workflow as I understand it is one I've used for years and which is very limited (Mocha stabilization, then hand-animating curves between the first and last keyframes of the same data to remap the motion with approximately correct parallax), so I'm intrigued by devices like this and, ultimately, by reprojection workflows that would record that stabilization data more precisely and in 3D in order to create a point cloud from which to fill in missing and occluded detail. Of course, I doubt the current software does this, but it's something I've only seen possible in the past with match moving (extremely difficult on the kinds of moves this product is designed for). Furthermore, for match moving generally, the data seems extremely useful as a guide. I don't know much about this thing, so maybe I'm excited about the potential of a similar product, not the product itself. But to my eye that video looks good and I don't see it cropping substantially more severely than the other software. To be fair, other than Mocha (which I use most often), I've only really fooled around with Warp Stabilizer and Mercalli, and have had better but still pretty poor results with Warp Stabilizer. I'm considering applying for this gig because I want to learn more about stabilization techniques outside the very limited stock packages, so I'm researching stabilization workflows and this caught my eye.
  4. Just out of curiosity, what is "sensor jarring" and what in the video above looks better to you than the other stabilization solutions? I'm pretty new to stabilization and was considering applying for a gig involving post stabilization, so I'm trying to brush up on the current state of the art.
  5. The Alexa is a great camera for handheld if it's rigged up right. I just don't see something like this being used with it... for a wide variety of reasons. But for like a GH5 or 1DC, definitely. Maybe I'm easily impressed. It wouldn't be the first time. If the other stabilizing solutions you mention crop significantly less, I can see your argument. Otherwise, I don't understand. It's clearly doing the best job in that comparison and the crop doesn't seem to be more extreme (given the increased stability). But if the other software (which I have had quite poor luck with, consistent with the comparison in this video) crops much less, I suppose you have a point. Although I think in these cases you shoot with a wide lens and kind of "find" the shot in post (which is the main reason I don't see this on an Alexa: producers want to see the shot, not the raw material). It is clearly designed for action cam work, I agree. But there's action cam footage in even the highest end films these days. Just off the top of my head I remember a lot of GoPro and/or dSLR footage in Mad Max, the Transformers series, Marvel, and a lot of prestige TV, even Twin Peaks (which surprised me and stood out as looking pretty ugly). Again, I'm a hobbyist. As I mentioned above in this same reply, I don't see this being used with Alexas, and I can understand why you wouldn't want to use it with an Alexa. But I'm not at the level where I get to shoot with one very frequently. And although I use IS for video, it's also completely inappropriate for "cinema" use for similar reasons to why this is. No serious camera op I know or have worked with would go near in-camera IS because the motion correction is unpredictable. They'd probably hesitate to go near this, too, to be fair to your point, but a post solution at least lets you rein in the algorithm a bit by hand, and the crop can be treated as look-around room, as Fincher and many others do.
  6. So cool. Absolutely incredible results. How does the mic jack work to record metadata? Does this mean it only works MOS? I think there's an even bigger market for this kind of device on the extreme high end but for different purposes (motion matching), and one more tolerant of buggy workflows. Even if the data isn't good enough for 3D solves it would be interesting to see PFTrack use this data as a guide for solves. I wonder if team Fincher will pick this up or keep using their current workflow (Mocha stabilize). I believe the iPhone is already using similar technology. The hyperlapse results with iPhones are spectacular. The loss in resolution seems mostly irrelevant to me, but that's just me (mostly hobbyist). Gimbals still suffer from translational instability (three axes only cover rotation), so I could see this device complementing a stabilizer, except that it might cause issues in terms of the device bumping into something, and I'm curious how (or if) it accounts for parallax, which is an issue with translational bounce but not rotational (see the parallax sketch after this list). (I invented something mechanical to fix translational bounce in the y axis but am too lazy to prototype it.) Does it work with any hot shoe? How do they account for the distance between the sensor and the nodal point? I don't see the utility with an Alexa, tbh. Alexa Mini on Ronin on a Steadicam seems to be the industry standard already and there's no way this will disrupt that. Once you're bringing out the Alexa you're not really run and gun anymore. (Though I've tried it and the Amira and Mini are way more usable than the original IMO.) But it's still cool enough that I want to buy an XC10 and leave the gimbal at home.
  7. C300 vs F3

    In my experience the difference is very small between the two, with Canon having slightly better looking colors (the F3 beats the F5's SLOG2 for sure) and the F3 having about a half stop more highlight detail. The C300's codec falls apart in high contrast scenes with a lot of foliage. The F3's codec is only suitable for Rec709. Both are great in low light. The Canon is sharper, but also noisier in the shadows, slightly on both counts. Both are close enough to each other but far enough from the Alexa (which is worse in low light, to be fair) that I'd just take a C100 Mk II any day for the ergonomics. But if a big camera is okay for you, the F3 rig offers 60p and a slightly higher quality image technically. It would not be my choice, but I still recommend it to others. I disagree that DPX files are functionally any different from video files, but that's a digression. The difference between uncompressed DPX and 444XQ or even 444, in my experience, is completely invisible, and I get irritated when clients demand DPX. I can send some footage I shot with both if you're interested, but can't post it publicly. Both great cameras at great prices. You can't go wrong!
  8. I have terribly shaky hands, but that sounds brilliant. Maybe I'll order a set from Amazon and just pick the best one. I suppose a bit large isn't a problem, or does it need to be perfect? Will this help at all with the corners getting bright when hit by a direct light? Seems like an internal reflection there, too, but I've seen this phenomenon even with Panavision C Series, I believe. Those lens tests are a bit humbling. I like the Iscorama (it is stunningly sharp) but the others do have more character.
  9. What is the diameter of the O-ring? 30mm I'm assuming?
  10. Fair enough. I don't think anyone gets into this industry for the money. But I wouldn't even care which camera I used. $2k/day is quite good for corporate work.
  11. Option 2. For sure. Y'all must have good rates to turn $2k/day down, that's pretty good for non-union camera ops.
  12. Believe it or not, it has the potential to. But if you're stabilizing and cropping in, then the C500 has an advantage. BOTH will produce sharp enough 1080p footage. I'm just saying that oversampling might not be as useful as you think unless you're cropping or stabilizing. Both are gonna be very very sharp. But you'll get better tonality with the C500. If you can tolerate the form factor and media (and figure out what settings will mitigate chroma clipping or just shoot raw, not raw to ProRes) then it will be better. My friends made Blue Ruin! They had a good grade on that but it was actually a fairly guerrilla shoot from what I understand. They wanted an Alexa but the C300 let them use a tiny crew and smaller gear.
  13. I agree that the highlights are intentionally blown in order to gauge dynamic range and rolloff, but I don't agree with you at all that chroma clipping isn't a serious issue. Especially on a camera like the C500, which has less dynamic range than an Alexa or Dragon (but still good, and better than its reputation), you can't always expose for the highlights. There are going to be traffic lights, headlights, practicals, blown out skies, etc. in some scenes, and avoiding them at all costs or underexposing horribly isn't a viable option. IMO, you cannot make all camera systems look good, otherwise they would look good more often. Most digitally acquired content, even on the high end, doesn't look as consistently good as film, even with the same crew. Only the Alexa seems to get close imo, though I have seen some good looking content shot with other cameras, of course, and some "intentionally digital" looks that work. A friend of mine had a piece graded by Stefan Sonnenfeld, and I remember he mentioned that chroma clipping was Stefan's biggest bugaboo re: camera systems. I won't get into the details because I don't want to put words in someone's mouth, but if the greatest colorist in history struggles to wrangle chroma clipping, it's a problem, and you'd better hope you're the greatest DP in history to never blow out a single source. Or just use a camera that handles chroma clipping properly. (Fwiw, I don't find hard luma clipping problematic if one grades the knee nicely, and even film appears to hard clip rather fast when processed photochemically, so this is a discussion about color space and rolloff, not dynamic range.) And there is a massive difference between how the Alexa handles chroma clipping and how the C500/Q7 (as set up there) and F5 or pre-IPP2 Red etc. do. Sure, you can make an Alexa look bad if you're wildly incompetent. But I'd argue you can't light a scene where someone is lit by a flaring practical on an F5 or C500/Q7 (at least with the settings above, and the ones in the C500 footage I've worked with) without it looking too terrible to really fix in post, because the camera will blow out the highlights to red, or to red and yellow, not to white as the Alexa does (which holds saturation at its maximum at 30 IRE and then slowly reduces it over its extremely wide dynamic range). With the Alexa, lighting that same scene well is as trivial as exposing roughly correctly. Of course you can to SOME extent avoid that kind of situation, or white balance to your practicals so they blow out more nicely, assuming nothing else is blown out (dicey workflow, though). And if you record raw and process correctly this likely isn't an issue even with the C500. I'm just surprised that Canon Log has this problem far less severely than the C500/Q7S combo does, though I imagine there are settings that handle chroma clipping better. Some of the newer film emulation LUTs and even the SLOG3 colorspaces for the F5/55/FS7 are fine in this regard, too, to be fair. And IPP2 is a huge improvement over Red's original pipeline. Canon Log, weirdly, has always been kind of good... there's the appearance of chroma clipping, but detail is almost never lost and the knee can be graded smoothly. Not so with any of the footage in the test above. All that said, I think most operators overexpose the CX00 series pretty substantially, and in practice this isn't a huge issue under normal circumstances.
  14. Sounds and looks right to me. Re: that test, look at the chroma clipping with both cameras. The C500 is worse, big color shift in the red, and I have seen that before with it for sure. Canon Log does not hard clip like that (even if it looks like it does, it's always recoverable), which is why I'm a little wary of the C500/Q7 combo because I've seen the exact same thing in practice. Red has apparently fixed this with IPP2. Took them long enough, but better late than never. Log C has always been perfect.
  15. The Ursa 4k I'd put below almost anything, including the cheapest 4k dSLRs. Super clippy and slow with lots of fixed noise. Fairly soft image, too. But the 4.6k I'd put above both Canons (except in low light). The C300 and C500 do have the same sensor, and the C300 has the sharpest 1080p I've seen. It has to do with the sampling: instead of traditional Bayer interpolation, it groups the photosites into a faux-Foveon-type array, so it's just insanely sharp looking (see the binning sketch after this list). Sharper than the Epic or Alexa or F3 or F5 or F55 at 1080p, and noticeably so. From what I've seen, the C500 has a razor-thin OLPF and the Q7 has aggressive debayering, so the 4k image from the C500 is sharper looking than a 5k or 6k Red image but with significantly more aliasing, though not objectionable. Both cameras have similar DR. RAW doesn't seem to provide much improvement there over ProRes, but it does give better shadows than the internal codec for complex scenes. The C500 is basically a C300 with extra features if you use a raw recorder, so if money is no issue and you WANT to use an external recorder (I hate them), get the C500 instead. If you plan to crop or stabilize, 4k could be useful for 1080p delivery, though personally I'd (almost) never shoot 4k, and if you don't crop or stabilize the 1080p output will actually look sharper, shockingly. But maybe not in a good way. The Alexa is softer, but... "smooth." But you gotta experiment with the Q7 workflow when you shoot raw. When it's set up wrong to record ProRes FROM raw, it can induce chroma clipping and aliasing you wouldn't get in the C300 or C500 alone. And shooting actual raw IMO is not worth the trouble (then again I don't think 4k is either). I dunno. Rent for sure, but I think the 4.6k Ursa Mini Pro sounds like the camera for you. It can alias, even worse than the C500, but in practice I haven't seen much of it. Maybe there's less sharpening to make the aliasing pop. Maybe I just haven't worked with it much. Dunno.
  16. Yeah not an FS700 fan, really. Nor a C500 fan. But that's a long story.
  17. The tripod is what I had my eyes on. Canon claims publicly it's the same image pathway, which makes sense given that they use the same processors. But apparently that's not the case and it's actually very much dumbed down. I have shot the two side-by-side, though not at the same focal length, so I couldn't A/B them exactly, and I couldn't see any difference. But what else would account for how tiny the C100 is? Anyone who's getting that kind of damning inside information from tight-lipped Canon must be pretty high up. I know of other people who got technical information early or directly from Canon about unreleased cameras and chromaticities or developments with matrices or gammas etc. They didn't leak any information to me or break NDA (frustrating for me), but after the fact they mentioned that they were privy to it ahead of time, or that controversies online were going on behind the scenes for the camera manufacturers as well during development. They got surprisingly little info, even given their inside status. The one constant is that all of them are top brass at top post companies, or are working for the ACES board or Dolby or similar. So if someone is getting that kind of inside info from Canon, especially anything that contradicts the party line, they must be pretty top of the line pros. Customer service reps know next to nothing, and those who know something rarely talk publicly. That said, from what I've seen the image is just as good as the C300's. But it's an interesting factoid nonetheless.
  18. That is a good deal. And to be fair, the IPP2 pipeline Red introduced FINALLY looks good. They got the chroma clipping stuff right I think (assuming the magenta highlights are actually, finally gone)... took them more than long enough, but better late than never. Mixed feelings on the C100. Nearly ideal ergonomics (for the dSLR crowd) but the viewfinder is awful and the lack of genlock sucks... The only deal breaker for me is that, according to someone on RedUser, Canon badly crippled the image processing compared with the C300, and the image is actually much worse even before it's transcoded to AVCHD (so you can't save it with a Ninja), without as much color processing to get the famous Canon colors. The one time I shot them side-by-side I couldn't tell any difference whatsoever. That said, I have to trust the pros at RedUser over my own amateur eyes.
  19. The issue is that SLOG looks like garbage recorded internally. It's not that the internal codec is terrible (it is pretty bad) so much as that the internal codec isn't nearly good enough for SLOG, which all but requires 10-bit ProRes or better. If you shoot with another picture style, I think internal recording is good enough. I do have a serious soft spot for the C300, but that soft spot has something to do with the EF mount model having a good enough image in a tiny body. For a dSLR fan, it's what I recommend. Brilliant step by Canon into cinema from the 5D2 etc. Probably the bane of half this site, because its success is why Canon ignored dSLR video afterward. But if you're gonna kit your camera out with an external recorder and PL glass... just get the F3. I would SO rather have an F3 than a Red MX.
  20. I guess so, but I've always found the Alexa keyed a lot better than the Epic and F55. And that the cost of a fiber network plus storage and render times to support 4k pushed budgets up insanely high even for 2D work. That's true, though. That's a fair point. On high end stuff the cost of the network and storage must become irrelevant.
  21. I assume most people buying 4k TVs are doing it because they own a 4k camera phone or something, not to watch commercial 4k content, of which there is almost none. The transition will happen, though. I think Fincher makes good use of a 4k+ workflow. Netflix is building a library of 4k HDR content. I totally agree with his comment that 4k vfx etc. are prohibitively expensive. What's weird is that online it always seems to be vfx artists (or companies pushing cameras as designed for vfx) asking for higher resolution, whereas in my (very limited) experience, and in agreement with that tweet, 4k is the LAST thing vfx artists want. But I read a post on Canon Rumors, presumably from an artist on Guardians of the Galaxy 2 (which is the only thing I can think of shot at 8k), who was discussing how great 8k is for vfx. A lot of conflicting narratives there.
  22. I haven't used the A7SII, but if it's anything like the original A7S (and the F5/F55) it's not for me. The F35 and F3 I liked quite a bit. The F65, too. But something went wrong with SLOG2. There are tricks to improve A7S footage in post (gradually dialing back the saturation past 35 IRE in the luma vs. sat curve, for instance, to get rid of those nuclear greens; see the desaturation sketch after this list), but if it's not working for you, just don't rent it next time. I can't shoot decent footage with it, either. We might be idiots who don't know how to get the best out of our tools, but that camera is too hard to get the best out of for me to want to use it. So I'd be doubly an idiot for wanting to use it: I already know I can't get a decent image out of it and that I hate the ergonomics, so trying to make it work anyway would just be extra dumb. That said, it's a miracle for low light and has great DR. For an experimental low light project I would rent one before anything else. But yeah, it's not for me either!
  23. Even as a C300 fanboy (I love the ease of use), I agree that the F3 is a better cinema camera with a technically stronger image. The image isn't as sharp, but it's more than sharp enough, and otherwise it's better in every way except codec... and color is subjective. It has slightly better DR and 10-bit 444 out vs. 8-bit 422 out (though 8-bit 422 is all Canon Log really needs). Same pixel pitch etc. as the Alexa I believe (very similar at least). But rent it and try it out if you can. Coming from a dSLR, I couldn't deal with F3 ergonomics, especially when the C300 had 90% of the image in a much smaller form factor. I also liked the Canon colors a bit more, but the F3 has good color, unlike later Sony cameras, which are a very mixed bag. If you're considering both, try both; I'm a Canon fanboy who did not like the F5, but I still think the F3 is a steal. Plus, 60p! I also like the F35 but it's more trouble.
  24. The F3 is really nice. I remember being so let down by the F5 after using the F3. The ergonomics and menus are really bad, though. But yeah, the F3 is nice. (I don't fully trust that video, but I sort of agree with its conclusions.) I remember using it as a B cam for the Epic, and I preferred it to the Epic, and the low light blew me away. And then someone else used it to shoot promos for an Alexa/Epic/C300 feature I was on, and it looked kind of great when we saw those promos. Where the C300 wins is ease of use and ergonomics and workflow. But it's also a great camera: the image is a bit sharper and it's just such an easy camera to use, with a gorgeous daylight matrix. I think the image might be worse than the F3's technically, and it does have worse DR, but not by a lot. The apparent chroma clipping in that video is deceptive; that's not lost detail, and it can be graded out (unlike with the F5's default SLOG2 gamut, which clips hard). This might sound weird, but I'd probably recommend the C300 with an EF mount over the F3 for most people because of ergonomics. But if you're shooting with a "cinema" kit and PL lenses, and don't mind the large body (and the required external recorder: the XDCAM codec is worse than the C300's internal codec, and even worse than the C100's), the F3 is a winner. But wow, did I not like the ergonomics. It's not a good camera for the "dSLR" type (me).
  25. I don't think that's what people are saying. I think it's sort of the opposite. If you're working professionally, you want to make sure you can justify the cost of the gear, or else it's bad business. So there's no reason to buy an Alexa unless you can afford the crew to support it (and the roughly $100k investment up front)... and your revenue warrants it. Whereas if you're buying a camera as a hobbyist, your bank account (and credit) is the limit. I'd say spend more on other areas of the production, but if you have the money for a camera package and the rest of the production, go for it! I do think a bigger problem is that a dSLR or C300 or FS7 is a camera you can operate comfortably with a crew of one or two, whereas in my experience it's harder to shoot on Red or Alexa without a bigger camera department and a much bigger G&E department (the sensors are light-hungry), particularly if you want to introduce more camera movement. And so that can add many thousands a day for the additional crew and support gear, unless you can get people to work for free and have access to cheap G&E and camera support rentals. Frankly, I find the Red MX and Epic to be a nightmare on fast-moving sets and very difficult to use as owner/op cameras, though to be fair I was using both of them a month or two after they first came out and they are a lot better now. This isn't meant as an anti-Red screed. The Alexa is just as bulky and requires a lot of battery swapping, too. It's also pretty slow, but the Amira is faster. I love the image from the Mini but not the ergonomics. They're all built for a bigger crew. So I think the hidden expense is the added cost in workflow and crew and time with some of those cameras, and that can easily add an additional five figures to the budget of a short... that is, if you're paying your crew and post team. But if you have that worked out, you've got it worked out. That's my only argument against getting a big expensive camera; if you have that part worked out, go for it. On the basis of image quality I'd put the 4.6k BM well above the MX, and the MX well above the original 4k BM, which has a significantly worse image than the 2.5k. The 4k has the thinnest dynamic range and the most fixed pattern noise I've seen in a cinema camera. But if you're used to shooting on film and lighting for film, you won't need any more light than that with any of these cameras, though you will need to be very careful managing your highlights on the 4k BM and to some extent on the MX. What lens set do you own? Some lenses have strange corner performance with some cameras (I believe the Cooke Speed S2/S3s do with some Reds). I would rent all three cameras before buying anything. Anything with very oblique light rays in the corners can perform dramatically differently on different sensors, and that won't be as apparent on a 2.5k converted to PL because the smaller sensor only receives the more direct rays. But as for spending money on your hobby, do it! 99% of Porsche owners aren't professional race car drivers. Just get whatever is gonna make you happy. They're all nice cameras. You sound like you know what you want, and to that extent you can't really go wrong. I might envy you for having gobs of disposable money, but I think any of us would spend it just as irresponsibly if we had it.
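
A quick illustration of the Nyquist point raised in posts 2 and 3 above: shake faster than half the frame rate can't be recovered from the frames alone, which is the argument for logging gyro data alongside the video. This is a minimal numpy sketch; the 500 Hz gyro rate and the 14 Hz shake are made-up numbers for illustration, not anything from SteadXP's published specs.

    import numpy as np

    FRAME_RATE = 24.0   # video sampling rate in Hz
    GYRO_RATE = 500.0   # assumed gyro logging rate in Hz (placeholder)
    SHAKE_HZ = 14.0     # a wobble above the 12 Hz Nyquist limit of 24p video

    t = np.arange(0.0, 2.0, 1.0 / FRAME_RATE)          # the instants the frames sample
    seen_by_frames = np.sin(2 * np.pi * SHAKE_HZ * t)  # the true 14 Hz shake, sampled at 24 fps
    ten_hz_wobble = np.sin(2 * np.pi * (FRAME_RATE - SHAKE_HZ) * t)

    # At 24 samples per second the 14 Hz shake is numerically identical (up to sign)
    # to a 10 Hz one, so a frames-only stabilizer cannot tell them apart...
    print(np.allclose(seen_by_frames, -ten_hz_wobble))   # True

    # ...while a dense gyro log still resolves the true frequency, since 500/2 > 14.
    print(GYRO_RATE / 2.0 > SHAKE_HZ)                     # True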
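
Post 6 asks how (or whether) the device accounts for parallax, since a gimbal's three axes only cover rotation. A toy pinhole-camera sketch of that geometry, with arbitrary numbers: under pure rotation about the lens, two points at different depths along the same ray stay coincident in the image, so one global warp can undo the motion, while a small sideways translation separates them, which is exactly the parallax that makes translational bounce hard to remove in post.

    import numpy as np

    f = 1000.0  # focal length in pixels (arbitrary)

    def project(P):
        # Pinhole projection of a 3D point given in camera coordinates.
        X, Y, Z = P
        return np.array([f * X / Z, f * Y / Z])

    # Two scene points on the same ray from the lens, but at different depths.
    near = np.array([0.5, 0.0, 2.0])
    far = np.array([2.5, 0.0, 10.0])   # 5x the distance along the same ray

    print(project(near), project(far))          # identical image positions

    # Pure rotation about the optical centre (2 degrees of yaw): both points land on
    # the SAME new image position, so a single depth-independent warp undoes it.
    a = np.deg2rad(2.0)
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    print(project(R @ near), project(R @ far))  # still identical to each other

    # Pure translation (10 cm sideways): the near point moves far more than the
    # distant one, so no single warp can restore both -- that's parallax.
    t = np.array([0.1, 0.0, 0.0])
    print(project(near - t), project(far - t))  # now different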
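
Post 15 describes the C300's 4k-sensor-to-1080p pathway as grouping photosites rather than Bayer-interpolating. Here's a toy numpy sketch of that general idea, collapsing each 2x2 RGGB cell into one output pixel; this is only my guess at the principle being described, not Canon's documented pipeline, and a real camera does far more than average the two greens.

    import numpy as np

    def bin_rggb(raw):
        # Collapse an RGGB Bayer mosaic (shape HxW, H and W even) into an
        # (H/2, W/2, 3) RGB image by grouping each 2x2 cell instead of demosaicing,
        # so no chroma is interpolated from neighbouring cells.
        r = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b = raw[1::2, 1::2]
        return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

    raw = np.random.rand(2160, 3840)   # stand-in for a 4k Bayer readout
    rgb = bin_rggb(raw)
    print(rgb.shape)                   # (1080, 1920, 3): each output pixel gets its own R, G and B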
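
Posts 13 and 22 both describe rolling saturation off as luma climbs: in-camera in the Alexa's case, and as a luma vs. sat grading move for A7S footage. A minimal sketch of that kind of curve; the 35 IRE knee and the ease-out shape are placeholders picked for illustration, not Arri's or anyone else's published math.

    import numpy as np

    def sat_multiplier(luma_ire, knee=35.0, ceiling=109.0):
        # Saturation multiplier as a function of luma: 1.0 below the knee,
        # easing toward 0 as luma approaches the clip point, so blown highlights
        # roll off toward white instead of clipping to a saturated hue.
        t = np.clip((luma_ire - knee) / (ceiling - knee), 0.0, 1.0)
        return 1.0 - t * t   # gentle ease-out rather than a hard corner

    for ire in (20, 35, 60, 90, 109):
        print(f"{ire:>3} IRE -> saturation x {sat_multiplier(ire):.2f}")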