Everything posted by Sean Cunningham

  1. The quality difference isn't a "myth" with respect to chroma re-sampling. You're shooting to a sub-sampled format, and when you import that into something like After Effects there are multiple ways in which this can be handled (and quality influenced). This goes back to the DV codec wars, where DV quality was not simply DV quality and not all codecs or applications were equal in their handling. Whether to use Y/C cables versus composite for older video equipment is also semi-relevant (*).
When moving from a sub-sampled, low-precision form like 8-bit 4:2:0 YUV to half or full sample 10-bit, or straight into RGB at 16-bit or 32-bit float, the low-precision chroma can be preserved exactly, with no interpolation, until you apply some effect that forces a re-sampling; or that low-precision chroma can be filtered as it's pushed into a higher precision space, smoothing over the steps. On 1080p footage you're at a resolution where this distinction maybe isn't as big a deal as when this was being done (or not done) with DV footage, but there is a difference.
Avid and a few other companies liked to play up how certain color situations looked better when customers used their DV codec versus Apple's. The real difference was that they filtered their chroma, so you got smoother-looking first-generation color around high-frequency edges than you did with Apple's codec, which did no chroma filtering at all (this was codec dependent, not host-application dependent). But come to find out, Apple's codec held up better to the next generation where Avid's did not. Of course, back then folks were generally working in and mastering to some kind of compromised format, working in 8-bit color, etc., whereas now there aren't downsides to re-sampling the chroma. In the case of 5DtoRGB and something like After Effects you have multiple transcoding steps and subtle possibilities for differences in quality (influenced by both codec and host application).
The quality of transcoding from AVCHD to full-bandwidth, high-precision color is not guaranteed to be the same as transcoding ProRes to the same space, or DNxHD for that matter (regardless of what each actually contains). All three instances depend on completely different libraries to hand After Effects the data. We know, for a fact, that the MainConcept MPEG libraries are questionable, or were for CS6. The MainConcept libraries were high performance, but I wouldn't trust them from a quality standpoint now, and the only way to remove them from the equation is to transcode and up-sample to a professional codec using an app that does not use the MainConcept libraries, prior to use in any application that does. You cannot count on a given MP4 playing back at the same quality between any two browsers or video players. The difference comes down to which host is using the better decoding libraries to transcode pixels to an RGB space for viewing. Even assuming MainConcept fixes their code and does a proper job, there will still be the question of who does a better first-step job of filtering on the up-sample.
The folks who are simply dismissive are more or less reducing the debate to whether or not the transcoding turns 8-bit into 10-bit. That's off the real mark for sophisticated users, in an attempt to give a simple answer to noobs. They're making the mistake of assuming view quality is dependent solely on the MP4 itself, that all applications handle the viewing and processing of MP4 equally, or that what might be true for their application is true for all. Their desire for an easy answer diminishes the discussion rather than enhances it. Some footage of mine (Flowmotion) was definitely improved by doing a transcode step in 5DtoRGB so that After Effects imported ProRes 4:2:2 instead of AVCHD.
What's not clear is whether this was limited to long-GOP footage and strictly in the "digital rain" department, or if there is any improvement, based on who and where any chroma subsampling is occurring, with an All-I codec.
(*) I mentioned the Y/C video (S-Video) versus composite scenario for older video equipment. Most folks just assumed that when they got a laserdisc, S-VHS, or DVD player with Y/C-out and a TV with Y/C-in, this was the one and only best way to connect their components, but they'd be wrong. The video itself (until DVD) was always stored as a composite signal. When and if to use a Y/C cable was dependent on the quality of the comb filter in your player versus your TV. If your player was the higher quality component, like a Pioneer Elite player, and you had a so-so TV, then you would use Y/C. If you had a so-so, mass-consumer player but an XBR Sony TV monitor, you would connect composite. Whether to do a transcode step before your main application or in your main application asks a similar question: which is doing a better job of filtering and decoding?
edit (3/11): I now know that transcoding All-Intra AVCHD footage to ProRes via 5DtoRGB produces better results, thanks to the chroma filtering, than what you get simply importing the MTS directly into CS6, either Premiere or After Effects. I don't know yet if this is also the case with Premiere/AE CC. Moon Trial 7 (All-Intra) MTS in Premiere: ProRes 422 HQ via 5DtoRGB in Premiere, same frame:
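The filtered-versus-unfiltered chroma distinction above can be sketched numerically. This is a toy illustration in Python/NumPy, not any particular codec's actual resampler: a half-resolution chroma plane goes up to full resolution either by point-sampling, which preserves the original steps exactly, or through a simple box filter, which smooths them over.

```python
import numpy as np

def upsample_nearest(chroma):
    """Point-sample (no filtering): each chroma value is simply repeated 2x2."""
    return np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)

def upsample_filtered(chroma):
    """Filtered up-sample: nearest first, then a 3x3 box blur to smooth the steps."""
    up = upsample_nearest(chroma).astype(np.float64)
    padded = np.pad(up, 1, mode="edge")
    out = np.zeros_like(up)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + up.shape[0], dx:dx + up.shape[1]]
    return out / 9.0

# A hard vertical chroma edge, as you'd get around a saturated object
chroma = np.zeros((4, 4), dtype=np.uint8)
chroma[:, 2:] = 200

hard = upsample_nearest(chroma)
soft = upsample_filtered(chroma)

print(np.unique(hard))           # only the original two chroma values survive
print(len(np.unique(soft)) > 2)  # filtering introduces intermediate values
```

The unfiltered path keeps the edge's original two chroma values untouched; the filtered path spreads the transition across intermediate values, which is the smoother first-generation look described above, at the cost of no longer being able to recover the original samples.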
  2. For anamorphic, particularly if your final conform is to a letterbox on a standard 1080p frame, you're effectively oversampling the image in a non-uniform way, if nothing else. You'll be filtering out some of the effects of compression. I would think this calls for some experimentation to see just how much mileage you get out of it. I'm willing to bet you get the best, cleanest results pulling keys on footage un-squeezed in the vertical direction versus from the squeezed footage or from imagery un-squeezed horizontally. I'm less sure what sort of difference you might see when you go to sharpen. For Canon AVCHD shooters I wonder how much or how little un-squeezing helps with moire, or Blackmagic for that matter. You'll want to be in a high bit depth project, and the type of filter used in both the scale as well as in transcoding the 8-bit 4:2:0 YUV footage into an RGB colorspace will play a big part (in some applications you can choose how you filter and in some you can't). With the same piece of video you could very well get entirely different results depending on the application used.
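The intuition that un-squeezing by scaling down in one direction filters compression noise, while scaling up merely invents pixels, can be shown with a toy sketch. This is Python/NumPy with the crudest possible scalers (duplication versus pair-averaging), not any host application's actual filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "squeezed" frame: flat grey plus compression-like noise
frame = 128.0 + rng.normal(0.0, 4.0, size=(64, 64))

# Un-squeeze A: double the width by duplicating columns
# (no new information; the noise comes along untouched)
wide = np.repeat(frame, 2, axis=1)

# Un-squeeze B: halve the height, each output pixel averaging two samples
# (a real scaler uses a better filter, but any downscale averages like this)
tall = (frame[0::2, :] + frame[1::2, :]) / 2.0

# Averaging two noisy samples cuts the noise deviation by ~1/sqrt(2);
# duplication leaves it exactly where it was
print(round(frame.std(), 2), round(wide.std(), 2), round(tall.std(), 2))
```

The direction you shrink is the direction that gets the free noise reduction, which is why un-squeezing vertically (down) rather than horizontally (up) is plausibly the better starting point for pulling keys.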
  3. For factory and long-GOP footage I've had to transcode to ProRes because CS6 didn't decode MTS properly for certain GOP lengths. All-Intra seems to be fine. I've seen improved results in some cases going this route (skies & skin) because of the chroma filtering that happens in the transcode from 5DtoRGB, but I can't say for sure whether I ever saw that with All-Intra or just in footage that CS6 was screwing up anyway reading straight from MTS.
  4. If you're not talking gear and technique, the technical stuff, then the "content" is something you have to write or otherwise go do. Creation of content is pretty mutually exclusive from spending time places like this. Discussion boards are for the spaces in between creating content or executing other people's content. There's commentary and theory, dissecting "content" to understand what works and what you or an audience responds to. Maybe something like that is what you're talking about? Or, say you want to do a scene that will have a certain mood or you're wanting to convey a specific subtext and you're not quite sure how, is that what you mean?
  5. Heh, if I hadn't been so obsessed with them in my 20s I likely wouldn't have noticed. I think I have some thirty-five or so of their albums, which doesn't cover everything that's available for them from just the early seventies to the mid-to-late 1980s, which I think is their best work. What's amazing is how they made this music live, without computer sequencers and tracks ready to roll. You go see someone like The Crystal Method and their live performances are mostly a sham, with equipment that's not even plugged in sometimes and a lot of pantomime. Tangerine Dream, especially the Edgar Froese, Peter Baumann and Chris Franke lineup, were "space rock" gods. When they started switching to digital synths I tuned out. But check out Edgar Froese's solo album "Stuntman" from 1979 if you're into their sound from films.
  6. There wasn't any TD in this film. They provided fantastic scores for a couple of Mann's earlier films, like Thief and The Keep, which featured a mix of original material with album material (Tangram, if I remember right, for Thief, or perhaps it's Force Majeure, as somewhere between those two are most of their tracks for Thief and Risky Business, and LOGOS Live for a key piece in The Keep, "The Silver Cross", though the opening track, a TD remix of a Brian Eno song, was rarely included and used to be impossible to find). The soundtrack to Manhunter used to be insanely, insanely hard to get a hold of, especially here in the States, where I believe it was only ever originally released on cassette tape, and likely incomplete at that. Manhunter OST playlist Michel Rubini scored the very TD-sounding track "Graham's Theme". Kitaro did maybe my favorite piece from the entire film, also very TD-sounding with its echoed, prancing ambiance. Not included on some versions is "Seiun", which is featured in the scene where Graham is dreaming on the plane, with the beautiful slow-motion photography of him working on his boat, his wife walking down the docks, and his obsessive stare. Amazingly beautiful electronica used in Mann's signature style. Other very TD-sounding tracks were from The Reds, and then there were the haunting vocal pieces by Shriekback, notably "Coelocanth" and "Big Hush", which were used effectively in scenes giving Mr. Dollarhyde surprising humanity and sympathy. Through most of the 90s I was obsessed with finding some of the tracks from this film, but it was a lot harder back then, with stuff being traded around in newsgroups as multiple uuencoded walls of text. Sometime in the early '00s a friend got me a bootleg CD made of tracks pulled from a European album and possibly cassette material. Thankfully the internet caught up, and the re-mastered re-release spawned renewed interest, I'm sure. I watched this film so many times, growing up, and later.
I love Dante Spinotti's compositions and very '80s use of color. I've read it undeservedly maligned as too "Miami Vice" or "music video". I love the palette though and Mann's singular, long-lens style. Tangent complete ;)
  7. Thanks, it's getting a little skinny jeans, keffiyeh and non-prescription black-rimmed glasses up in here.
  8. skiphunt, yeah, Magnolia is on my list, and in my top 10 films as films. It has my favorite anamorphic steadicam shots, particularly the one that starts out in the rain and then takes you all the way through the TV studio. My jaw was on the floor. So many wonderful shots in there. And Elephant just had me stunned, even though I'm not a fan of non-widescreen. The simple, natural beauty in the photography introduced me to the work of William Eggleston, Harris Savides' greatest influence during his final years. I so wish there was some way to really nail the look of his dye transfer prints and, likewise, the look of print ad photography that used the technique. Go into it with an open mind. I've never, not once, read a negative review where I thought the critic had really gotten it. And it's such a simple, solidly done revenge and redemption tale under the stylized presentation. It's a samurai-western wrapped in an art film. If you enjoyed Valhalla Rising and Bronson you should dig the hell out of it. But besides that, the photography is just dynamite, and doubly impressive when you read how simply and cheaply it was done. There's only one scene in the whole film, I believe, where they used a big movie light. Lots of practicals, the equivalent of El Mariachi style consumer-grade fixtures bought on location by the DP, etc.
  9. Yeah, the Hawks are similar in their lack of flare since the bent glass isn't up front.
  10. Also in no particular order:
01) Blade Runner ...interactive light, caustics, eye-light, beam splitters, atmosphere, smoking, rain, scale
02) 2001 ...static composition, scale, control, juxtaposition
03) Christine ...anamorphic flares, anamorphic composition
04) The Underneath ...color themes, composition, Texas photography true to locale
05) Matchstick Men ...daylight interiors, sets + lighting indistinguishable from location, indirect light, side-y
06) Manhunter ...composition in widescreen, jump cuts, color themes
07) The Fog ...nighttime anamorphic photography, the witching hour sequence, driving-to-lighthouse sequence
08) Magnolia ...anamorphic steadicam, anamorphic composition, depth, LA photography true to locale, top-y
09) Drive ...night driving, driving, duo-tone, color, horizontal movement, Alexa, true to LA
10) Elephant ...naturalism, William Eggleston, long take, tracking, verite, beauty in banality
Films would go in and out of this list, with honorable mentions like Deep Cover and King of New York if I was in a Bojan Bazelli frame of mind, or perhaps a few more Harris Savides films like The Game. If I was in the mood for wide and really crisp I'd have to make room for something like Year of the Dragon. I know, I know, I hate when other folks don't follow the rules and go over the list...
edit: +1 to folks mentioning Only God Forgives, Legend, Heat and The Place Beyond the Pines. There were some "as expected" entries I simply had to add that I knew would be on most lists, like Blade Runner and 2001, but some of the not-so-expected choices tend to be more interesting.
edit2: for what they mean to me, how come they be here
  11. Well I'm glad, I feel like I'm an asshole half the time. I really don't want that to be the case. This is a dense subject that I know I'm no expert in. But I see folks facing issues now, with these new cameras, that parallel what I've seen in my own industry since the beginning of the A-to-D era and there's really no place to go but up from here.
  12. Hurlbut explained that he liked the C500's response to electric light at night. In his tests it just looked more video-like to me than the Alexa but that's what they responded to.
  13. Some of these do look nice. They do, however, look like combinations of film emulation plus a graded look in one LUT, looking at the OSIRIS package specifically. VisionLOG looks like they're recreating what the Academy Color Encoding System (ACES) is doing, but they're doing it in a universal LOG space rather than a universal, high-gamut linear space. Yes, it could be a viable alternative for some folks. It looks well worth checking out.
  14. Exactly. You're doing work before doing your work, I'm assuming pretty often. Ideally you should never have to look at flat, log footage and begin (it's not meant for human consumption, not at all) or, on your own, push it around until it's a correct representation of what you shot for your display. The notion some folks have that when you shoot raw the image is totally up to you and arbitrary, and it's in there and you just have to find it, is nonsense. They've been sold a bill of goods, and cameras, before software was really ready to handle the footage properly. The process that's been standard up to now is incomplete and not fully realized. That will change, though. It takes getting all the camera manufacturers on board and the software companies moving towards a standard. And, unfortunately, a lot of re-education of some working professionals and enthusiasts. Until then we have to make the best of the tools we have and ride out the one-step-forward-two-steps-back reality that comes with certain advancements where software is often left behind by hardware.
What's going on now reminds me of the early days of scanning film. You'd get your 10-bit log .CIN files that needed to be linearized so that they were actually useful and looked like a real image. You'd get a match clip from surrounding footage and then someone would have to painstakingly, going back and forth from monitor to loupe and light table, re-create something meaningful out of the scan. In a facility of a hundred or more artists you might have one, two, or maybe three people who could adequately perform this task. Flash forward ten years and the process had only gotten a little better, but you still had a similar process happening that created a "show LUT" that might vary even if show-to-show scans were being made of the same film stock. It was terrible.
Labs don't have to re-learn how to process the same film over and over again but the process would be changed project to project with digital files because it was never truly right or ideal. Thankfully those days are coming to a close and when you load a CinemaDNG or R3D or .CIN your software will already know how to display it properly and you can concentrate then on how you want it different from how it was shot.
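For reference, the linearization step described for those 10-bit log .CIN scans usually starts from something like the standard Cineon conversion. This is one common form of it, sketched with the usual Kodak reference black/white code values (95 and 685) and a 0.6 display gamma; shows routinely tweaked these numbers, which is part of why no two show LUTs ever quite matched:

```python
import numpy as np

def cineon_to_linear(code, ref_black=95, ref_white=685, display_gamma=0.6):
    """One common form of the 10-bit Cineon log-to-linear conversion.
    0.002 is the density step per code value; 95/685 are the usual Kodak
    reference black/white points, though productions often tweaked them."""
    code = np.asarray(code, dtype=np.float64)
    gain = 10.0 ** ((code - ref_white) * 0.002 / display_gamma)
    black = 10.0 ** ((ref_black - ref_white) * 0.002 / display_gamma)
    return (gain - black) / (1.0 - black)

# Reference white maps to 1.0 and reference black to 0.0
print(float(cineon_to_linear(685)))  # 1.0
print(float(cineon_to_linear(95)))   # 0.0
```

The point of a standard like ACES is that this mapping, and the camera-specific equivalents for raw formats, ships with the footage instead of being rediscovered by hand per show.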
  15. The GH2 has about six stops when shooting video, and about four of them are useful. Similarly, Kodak Vision stock has a reported 15 stops, but someone like David Mullen shoots knowing he has about ten stops of useful range in the end. Shian Storm has a very informative video on the DR of the GH2 and DSLRs in general: ...of course the Pocket has more DR than a DSLR. I don't think I've ever heard anyone dispute that. Even in ProRes mode you've got far more range. Assuming it's as-advertised and based on the same dynamics as the BMCC, it's very close to shooting on an Epic in RAW mode and has been demonstrated to offer slightly better highlight retention than the Epic. But the French blogger is off claiming the GH2 only has three stops, if that's in fact what he said.
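The "reported stops versus useful stops" distinction above is just a ratio measurement: engineering dynamic range counts every stop between clipping and the noise floor, while usable range stops where noise becomes objectionable. A minimal sketch with hypothetical sensor numbers (not measurements of any real camera):

```python
import math

def stops(full_well_e, floor_e):
    """Dynamic range in stops: log2 of the brightest recordable
    signal over the chosen floor."""
    return math.log2(full_well_e / floor_e)

# Hypothetical sensor: 16000 e- full well, 250 e- read-noise floor...
print(round(stops(16000, 250), 1))   # engineering DR: 6.0 stops
# ...but if noise is only tolerable above ~1000 e-, the *useful* range shrinks
print(round(stops(16000, 1000), 1))  # usable DR: 4.0 stops
```

Marketing quotes the first number; a DP budgets exposure around the second.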
  16. Except that most of that work isn't actually enhancing the footage, it's getting to a reasonable place to start. You can't intelligently grade your footage until you see it for what it is. That's not what you really start out with most of the time when shooting raw, or if you've shot linear with a baked-in LOG curve. If I was a client and I'd shot film, and I got rushes back that were different every time or, worse, I was at a lab timing my film and having to sit there and watch the colorist poke around and try this or that to get my footage to simply look as filmed, and as good as I would expect from rushes with house lights, I would take my film and go someplace else. But somehow this is how folks are expected to work with the digital equivalent of a film negative. It's awful. Once you're really grading, I wholeheartedly agree: methodical, logical order. But it's easy enough to see in upload after upload that most folks aren't starting their grade from a good place before trying to get to their look. Just look at Shane Hurlbut's BMD tests. They're no better looking than the YouTube uploads of any Joe Six Pack who bought himself a BMCC. I've got this fellow's entire short shot on an Epic sitting on a drive. If I look at the footage as-is in After Effects, it looks just as bad as it does in Redcine-X. Just awful. Unusable for my purposes. So I'm having to linearize it all to a meaningful place to work that's appropriate for display on an sRGB monitor. That extra work is eating into time that might otherwise be spent doing my actual job on this fellow's film, and it's going to be the same waste when he goes to do his final grade in Chicago, because the colorist is dealing with it like everyone has always had to deal with it. At least most of these cameras seem to have decent monitoring now, even if you're going to have to work to get it back to looking like what you remember from set once it's shot.
One of the most-repeated phrases from Bryan Singer on the set of Superman Returns, the first motion picture shot on the first-model Sony Genesis: "it's not going to look like that, right?" We have come at least that far.
  17. Do you mean for film or for cameras? There are a bunch of camera profiles in their download section, so if the trial didn't have yours, check there. As for the film presets, yeah, there aren't that many, but there really wouldn't be. You have a few Kodak stocks, some Fuji stocks, and then there are some stills stocks. There are a couple of Kodak stocks that I wish they would do, contemporary and still being used, but they haven't profiled them yet. Most of what is there I've ignored unless I was just seeing what something more "oddball" might look like. It's virtually impossible for them to profile stocks that are now discontinued. And since it isn't an Instagram-like product, they've kept it to fairly practical choices. There weren't that many choices ten years ago, and there are fewer today if you were going to shoot a film on film. You have a couple of choices from Kodak and a couple of choices from Fuji. Looks like Agfa just offers a couple: a B&W neg and a color print stock. They have, however, needed to have Kodak Vision3 5219 500T since the beginning, as it's one of the most-used stocks today. Fresno Bob made one of the best demos I've seen of the available stocks they've profiled: Here's a typical progression for how I work, from straight GH2 (in this case it was the Flowmotion patch), Film Convert (Fuji Pro 160s), ColorGHear grade (enhance saturation, play up the neon in the scene), final recipe for micro-contrast to bring out skin and tiny black details:
  18. Canon does the same thing with the C300, for instance. On the C500 they're well over the size of Super 35, not quite as big as RED but they're still calling it a Super 35 CMOS. Sony does it on the FS700. And the F3. And the F5. And the F55. And the F65. Even Arri calls the Alexa a Super 35 sensor at 23.76mm across (though they have a new "Open Gate" mode on some models that can open up beyond the viewfinder's view to record 3.5k with the full 28.17mm sensor width). So, don't be so hard on BMD. Everybody's doing it.
  19. I haven't yet. Examples online look fantastic though, some of them. I haven't had much of a lens budget the last couple years but I plan to get a DSO modified Helios as soon as I can. There's a longer, portrait lens that I can't recall at the moment that I've seen quite a bit of anamorphic coupling with that looks fantastic too.
  20. It also, at least when you're at full size, has a kind of dreamy curved bokeh that almost seems to burst from the center of frame. Some people hate that so it's something to also be aware of. Some samples from shooting film on my Nikon n90s (at the Cockrell Butterfly Center, Houston Museum of Natural Science):
  21. In such hyper-mundane cases you wouldn't apply something like Film Convert last, because its result is based on a pre-grade profile. If you're pushing everything so far out of whack that values that once existed no longer exist, the calibrated beginning profile is now meaningless.
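Why a profile calibrated against the camera's ungraded output breaks when applied after a destructive grade can be sketched in a few lines. This is Python/NumPy with a made-up stand-in curve, not Film Convert's actual data:

```python
import numpy as np

# Hypothetical "film profile": a fixed 1D curve calibrated against the
# camera's ungraded output (toy numbers, not a real stock's response)
xs = np.linspace(0.0, 1.0, 11)
profile = xs ** (1.0 / 1.8)  # stand-in response curve

def apply_profile(v):
    """Look up camera values through the calibrated curve."""
    return np.interp(v, xs, profile)

scene = np.array([0.2, 0.5, 0.8])

# Applied at the calibrated starting point: three distinct tones stay distinct
print(np.round(apply_profile(scene), 3))

# A heavy grade that clips highlights *before* the profile feeds it values
# its calibration never saw: 0.5 and 0.8 both land on 1.0, so the "film
# response" for them is now identical and the profile is meaningless there
graded = np.clip(scene * 2.0, 0.0, 1.0)
print(np.round(apply_profile(graded), 3))
```

Hence profile first, grade after: the profile only means something while the values it was calibrated against still exist.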
  22. Ahhhh, that might also explain why I haven't seen as much of its ills on my GH2. When I pulled it out I mostly shot daylight footage. I originally bought it as part of building up a small Nikkor set for use with my Redrock Micro M2 adapter and DVX100b but at the same time I bought my GH2 I got my Century Optics anamorphic adapter. I ended up not touching my 35mm again for the longest time because I had to be too stopped down on it with the anamorphic and I liked the look of my 24mm f/2 better with the adapter (which I could shoot at f/2.8). Not until pulling it out for this spec industrial/commercial spot had it left my bag and all of that was daytime footage. I opened up some of it to maybe f/1.4 but it was never in a really high contrast or night scenario. For that stuff, it was gorgeous, back in the service department of this Honda dealership. They had this great, even, daylight balanced light all through the shop.