Everything posted by Sean Cunningham

  1. I cannot say for certain, in all cases, that FC is good for getting digital LOG footage out of its flat state. I haven't experimented enough with LOG footage to know the extent of its capabilities in this regard, or bothered to see what flat/log profiles they've created. I have brought some very LOG looking, still very flat, green, "typical" RED Epic footage into FC and it did what appeared to be a very good looking transformation, turning it into something instantly recognizable as "film like". I do foresee an ACES workflow as being the more general purpose means to get a better log->lin conversion than what is currently standard, and I drew parallels to how that system works and what's going on under the hood of Film Convert. Those six steps and that poking around to get a decent looking image from BMD cameras (or RED, or any raw), either RAW or with that log curve baked into linear ProRes, is what the ACES workflow is designed to remedy, because it's the result of an incomplete, inefficient and error-inducing workflow (no judgement on you, it's been that way for everyone).

But, back to FC: yeah, you can do your conversion and then consider this a new starting point from which to actually grade your footage, or you can just be happy with the primary conversion. That choice is up to you and your desired look. If you want your footage to look DI'd and heavily modified, then it would be best to do the conversion with FC within Resolve or After Effects and then do all the grading that you would normally do to any footage. Since the whole DI process began with scanned film anyway, not digital camera footage, you are essentially turning your digital footage into some approximation of a scanned negative, and now you're free to grade to the look you want like anything else. All you're doing is altering your starting point.

Or, you just do the primary conversion and minor balancing for any exposure and color temp inconsistencies. This would be analogous to what you see in film prior to DI. Watch a movie from the '80s or the '70s, or even all through the '90s, when there weren't a lot of ways to modify the look of film: no such thing as secondary CC for feature film (though there was for commercials), just basic color timing and exposure before involving an optical house for some kind of effect. There are thousands of films that are basically just lightly modified, processed film...basically everything before O Brother, Where Art Thou? That's an approximation of what you have just running FC and not doing a lot of grading afterwards. There's nothing wrong with either way of using it.

I will say, it's kind of refreshing going back to watch pre-DI films, where folks weren't doing heavy-handed modifications just because they could. The opposite would be something like that new Liam Neeson film where he's an air marshal. Hoo-ah, that duo-tone taken to absurdity looks like hammered shit, pardon my French.

Again though, download the software and play with it in demo mode for free. It gives you film-like results like nothing else you can currently buy, and better than any attempt that's come before it. One of the negatives I've read about it that I wholeheartedly agree with is that their grain is too strong. It doesn't bother me that it comes up by default with 100% application; the only thing that makes sense is that it would be either all on or all off, and complaining about a program's defaults is silly to me.
That said, the scale of their grain is set by selecting a gauge of film, from 8mm up to various 35mm formats, and their 35mm options seem too strong. I believe their grain files are something like 4K, but I don't think they're doing an appropriate supersampling of the highest resolution file and transforming it to be appropriate for the frame size. At the very least, it's all too big for 1080 footage. It almost looks like they're doing a 1080 crop out of the middle of a 4K grain scan; it would be more appropriate to scale down and filter, as in the sketch below. Having worked with scanned 35mm film since 1993, not long after film scanning became common, I can tell you that grain is bordering on sub-pixel in a high quality 2K scan. You might get something pixel-sized in the blue record, but the grains in FC are huge by comparison and, if nothing else, grain just needs to be "kissed" in. I'd suggest doing this after everything else, since you can apply a second copy of FC with all of the color and film settings dialed down to pass-thru and just apply grain, or use some of the grain files that have been posted by other professionals for other folks to use.
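To illustrate the scale-down-and-filter idea (a minimal sketch, not FilmConvert's actual pipeline; Pillow is assumed and the file names are hypothetical):

```python
# Minimal sketch: scale a 4K grain plate down for 1080p delivery with
# proper filtering, instead of center-cropping 1080 out of the middle.
# "grain_4k.png" is a hypothetical file name; Pillow is assumed.
from PIL import Image

grain = Image.open("grain_4k.png")          # e.g. a 4096x2160 grain scan

# Supersample-and-filter: Lanczos resampling low-pass filters the plate,
# shrinking grain toward the sub-pixel scale seen in good 2K film scans.
scaled = grain.resize((1920, 1080), Image.LANCZOS)
scaled.save("grain_1080_filtered.png")

# The objectionable alternative: a naive 1920x1080 center crop keeps the
# grain at its native 4K size, so it reads far too large on 1080 footage.
left = (grain.width - 1920) // 2
top = (grain.height - 1080) // 2
cropped = grain.crop((left, top, left + 1920, top + 1080))
cropped.save("grain_1080_cropped.png")
```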
  2. As I said, it's not a Blackmagic phenomenon. And do you think "consumers" of the Sony F65 are smarter? It's close but no cigar, and perhaps making it exact just isn't possible. Worse, they claim it's more like shooting 65mm. Marketing departments are often without shame for all of these cameras. The spec is based on horizontal size, not vertical, at least in a motion picture context. It may not have even been in their mind, but my point is it's a spec with a lot of slop, "consumers" seem very used to and okay with this, and more of these consumers than high end shooters are who the camera is being marketed to. It's within a decimal of the Canon 7D's sensor size: ...so, it doesn't matter what you consider the size to be, it's an APS-C sized sensor in reality. In that world, if 35mm film were as loosey-goosey, they would be perfectly fine claiming Super 35. It's maybe a little pedantic calling them out on it, but that kind of "close enough" would have made any product DOA if it were designed for film production. Film standards are rigid to hundredths of a millimeter. They don't change for decades, and when they do it's a big deal because it affects more than just the fellow shooting the pictures. The GH2 is an actual "beefed up" MFT sensor; that one mm makes a difference when planning on certain purchases. The 8mm of variation in APS-C is a lot. Whether the 22.1mm figure means anything depends on who's talking. Someone who's spent a lot of time working with motion picture film should be quite comfortable with it: say it to them and they'll know, "oh, it's close to Super 35". When you say something is Super 35 sized, that says to them, "oh, it's 24.89mm" or "oh, it's .980 inches", and that gives them the wrong impression. Telling someone who isn't familiar with film production that it's Super 35 will either send them to google or, if they're more initiated, they'll assume, "oh, it's APS-C sized". That's the trap. Those that really understand what Super 35 is know what it is specifically and what it has always been.
  3. It's unfortunately a transposition of the common acceptance of loose or sloppy definitions found in the DSLR world, making the mistake of applying it to an actual standard. Super 35, the buzzword for the full silent aperture, is 24.89mm across, period. It's more accurate to say it's an APS-C sensor, but that's not sexy and it doesn't make an immediate connection to movie making. Their marketing department makes the leap that since Super 35 falls smack in the middle of the 8mm of slop, er, variation present in the APS-C spec, and is therefore an APS-C sized format, APS-C must also equal Super 35, which is wrong. A goose and a duck are both waterfowl, but a goose is not a duck. BMD isn't alone or unique in making this improper correlation. Almost none of the chips with the label "Super 35" are actually the proper size; they're all within the APS-C "zone", however. They should just say "35mm", but it doesn't have the same impact without the "Super" attached.
  4. The thing to also be aware of, besides coma, is a noticeable lack of contrast wide open versus a stop up on that 35mm. My 50mm F.Zuiko has the same issues wide open, but the effect virtually disappears with a little micro-contrast adjustment while grading.
  5. Yeah, those observations regarding what is "normal" go back to pre-digital days and SLR shooting. The problem moving forward is people read something, some rule of thumb or what have you, and they might want to carry it over without fully appreciating the context of its origination. They might not fully understand the concept being discussed or how their situation might be different but they have a desire to do things "the right way" and aren't always aware of when something is applicable or not. Jumping from film to something like the Alexa or RED or one of the contemporary Sony cameras doesn't introduce a radical change in FOV expectations. Lens selection becomes more about color and sharpness as seen by the digital sensor, which might be entirely different than the same lens and film.
  6. I can concur, having seen this in my own copy of the 35mm Nikkor. It's slightly less of an issue on MFT, but I've shot film with it and its issues fully open are easy enough to see if you look. I still love the lens a stop or so off the floor, though, on my GH2. I fully entertain the notion that I might like it less paired with a Speedbooster, because its issues are more pronounced as you get further from center.
  7. Partly true. Motion picture 35mm being roughly equivalent to APS-C is true (it's more like a 1.45x crop versus stills 35mm), because film runs through a motion picture camera vertically instead of horizontally like an SLR. When shooting with a 50mm lens on a 35mm motion picture camera (shooting Super 35; this won't always be the case) you see the field of view of a 72mm lens on a full size SLR; the arithmetic is sketched below. That distinction is only meaningful if you're using a full size SLR as your main point of reference. Motion picture cinematographers tend to think of and plan for what a given focal length looks like on a motion picture camera and don't really need to do conversions between motion picture FOV and SLR field of view. It's a pointless exercise unless they've scouted a location with a still camera and established the framing they wish to have in the film, and if they've done this scouting with a director's viewfinder or one of several apps that lets you preview framing and lens selection, there is no need for these sorts of gymnastics. The equivalency is often nothing more than trivia.
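For anyone who wants the arithmetic behind those figures, a quick sketch (using the 24.89mm Super 35 width discussed above):

```python
# Quick arithmetic behind the ~1.45x figure: the crop factor is computed
# from horizontal gauge, since that's how motion picture formats are spec'd.
FULL_FRAME_WIDTH = 36.0    # stills 35mm frame width, mm
SUPER_35_WIDTH = 24.89     # Super 35 camera aperture width, mm

crop = FULL_FRAME_WIDTH / SUPER_35_WIDTH
print(f"crop factor: {crop:.2f}x")          # -> 1.45x

focal = 50.0               # mm, lens on the motion picture camera
print(f"50mm on Super 35 frames like {focal * crop:.0f}mm on a full size SLR")
# -> ~72mm, matching the figure in the post
```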
  8. This advice makes the assumption that you're doing things wrong, or at least not as well as you should. It's also making the assumption you're applying some bull-shitted "film curve" that applies a uniform transformation without respect for the digital file's origination. It's good advice for people doing things wrong, which I don't get if it's coming from that Australian colorist. Resolve doesn't clamp or integer-ize intermediate results at the output of a modification node; that would make no sense.

You will lose information if you read in 8-bit footage, apply a LUT and then save it back to an 8-bit file. Don't do that. That's bad. You will lose information if you think because you shot 8-bit footage you can grade in an 8-bit project. Don't do that. That's bad. (I don't even like working 16-bit any more.) If you are working in a floating point project, there is nothing lost after applying a LUT. There is no removal of anything. Nothing destructive is happening. I can crush an image down to practically nothing and expand it back out to its original form and everything is still there (see the sketch below). That's true in After Effects, so I would be really, really surprised if Resolve did something boneheaded like clamp or integer-ize all intermediate steps between nodes. You should really be operating in a floating point project and saving both intermediate and final imagery to a high quality, uncompressed format with enough precision to not be destructive.

edit: he's also referring to a "print LUT" which, while highly anachronistic and a dubious proposition at best in any case, is very different from what FC is. A "print LUT", if you were going to do such a thing, would definitely be the last step, if you were really after an old telecine look (but if you work without having it on most of the time, you are in for headache and heartache doing your grade, being happy with it, and then applying something like this). It's analogous to AE's View->Simulate Output->Kodak 5218 to Kodak 2383 (which would only really look right if you were working with scanned 5218 negative, though they also have a "universal camera film" to 2383 as well).

When you watch a movie on TV or BD or DVD or LD, ideally you're not seeing the influence of a print stock. Likewise with commercials shot on film: they scan or otherwise transfer from negative. If a movie has had a DI, odds are no video representation you have or will ever see has any printing influence. Ideally you always want to be looking at some representation of the IP and not something that's been "stepped on" because, yeah, you don't grade/time a film once you've printed it. Maybe a "print LUT" would be useful if you wanted to simulate the very specific look of old ('80s and earlier) telecine (transfers from print rather than negative material), or something like Technicolor's ENR or other silver retention processes applied to the printing of a film, creating an image that you would only ever see in a theater from select, expensive prints and not your typical release prints. That's a look you would not get seeing the same movie digitally projected or on a home video release.
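A toy demonstration of the crush-and-expand point, in NumPy rather than Resolve or AE internals (purely illustrative, not a claim about either application's actual code):

```python
# Sketch of the precision argument: crush values hard, expand them back,
# and compare what survives in uint8 versus a float pipeline. NumPy assumed.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((4, 4)).astype(np.float32)    # stand-in for footage, 0..1

def crush_expand(x):
    return (x * 0.01) / 0.01                   # crush to 1% range, expand back

# Float pipeline: intermediate values keep their precision.
float_result = crush_expand(img)
print(np.allclose(float_result, img))          # True - nothing destroyed

# 8-bit pipeline: the intermediate quantization throws the detail away.
crushed_u8 = np.round(img * 0.01 * 255).astype(np.uint8)   # stored as 8-bit
expanded = crushed_u8.astype(np.float32) / 255 / 0.01
print(np.allclose(expanded, img))              # False - banded to ~4 levels
```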
  9. You should really only do it, though, if you're going for that look. If you aren't really unhappy with the inherent look you're getting from your camera, then it's maybe an investment that would be better spent (not that it's that expensive) on a video card upgrade, if you don't have a compatible card for using something like Resolve, or some other tool. I love what it does when I want that look, but it's not hard to find example work out there, shot on nearly any decent camera, where I don't think the footage or film would have been improved by pushing it through Film Convert. It would have been different, but maybe not really better. Please don't misinterpret my desire to keep things fair with regard to its purpose or value as any kind of blanket advocacy for it being used all the time on all things. I've seen, for instance, lots of video shot using various Driftwood patches for the GH2 that has a great look as-is. It's often hard to judge the stuff being shot on various BMD cameras because it's so often poorly graded (or worse, un-graded). I've seen RAW 5D footage that I don't think needs FC at all to look film-like enough for me. You can download a trial copy, and you should play around with it before buying. I also suggest getting the plug-in version, not the stand-alone, which is tuned for speed and not precision. If you don't want to spend the cash for their one-price-buys-all option, you must be very careful about which product you buy, because they're all licensed separately and licenses are not transferable between products or platforms, because they use a really kludgy style of local key.
  10. Fundamentally, not much. What's different from most of the LUTs that get passed around is that their transformation is based on real film. They do this by studying and sampling the results of control images passed through digital cameras, and the same control images photographed on various film stocks. Calculating the difference between reference value and color lets you push one towards the other (a toy version of the idea is sketched below). Of course it has limits, but what you get as a result is an approximation of the result of doing a telecine of negative, or of scanning negative on a Datacine, where the resulting digital file is an inter-positive. The quality and accuracy of your "scan" will be directly proportional to the quality of your digital footage, in terms of dynamic range, color gamut and (and here's an important part) its exposure.

Some critics don't seem to understand the "inter" part and assume this is, is designed to be, or should be your "grade". It can be, the same way you can shoot film, send it to the lab and get a print back. This won't be graded; it will be at a baseline such that, if you shot the film properly exposed, you get a proper, representational image back. If you're on a budget and can't afford to pay for a colorist, then there you go, that's your film. Otherwise, based on seeing this baseline, you can now decide if it needs modification to match surrounding footage, and you can decide how, and how much, you would like to change the photography to achieve a different look for artistic effect, etc. Similarly, once you've applied Film Convert you can either decide you like the basic transform or you can then grade and modify further. Some have claimed doing Film Convert somehow limits further modification, but that's absolute nonsense.
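A toy, one-channel illustration of the control-image idea (the patch values are made up, and real tools sample far more densely in 3D; this just shows the principle):

```python
# Toy illustration of the chart-based approach described above (not
# FilmConvert's actual data or algorithm): photograph the same control
# chart on a digital camera and on a film stock, then build a per-channel
# mapping that pushes the digital response toward the film response.
import numpy as np

# Hypothetical measured red-channel values for the same gray-ramp patches,
# normalized 0..1: what the digital camera recorded vs. the film scan.
digital_patches = np.array([0.00, 0.18, 0.38, 0.60, 0.80, 1.00])
film_patches    = np.array([0.02, 0.15, 0.40, 0.66, 0.85, 0.97])

def digital_to_film(channel):
    # Interpolate between measured patch pairs; real systems use dense
    # 3D sampling, this 1D version just demonstrates the idea.
    return np.interp(channel, digital_patches, film_patches)

red = np.array([0.25, 0.50, 0.75])             # some digital pixel values
print(digital_to_film(red))                    # remapped toward the stock
```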
  11. That is not entirely true. Reference values and hues used to calibrate film and television equipment since the dawn of time are fixed values. A properly exposed image, be it film or video, will arrive at some approximation of that value. You want to know how off or on your lab is? Put a Macbeth chart or some other known quantity in your slate. Color cast or exposure variance can be quantified and corrected for; how off or on is not a matter of opinion. The values coming off the sensor relative to what they represent can also be quantified and are not subjective. This shit is math, son, and it's all very much non-subjective (see the sketch below). This testing of known quantities passing through a camera or sampling system will reveal how it "sees", and how it "sees" can be quantified and measured against the original value. That's objective, not subjective. Likewise, since the dawn of the color display we have had similar means to judge the output of a monitor against known control values and hues. The better the monitor and test equipment, the closer you can get it to display known, fixed, reference values. This response can also be quantified and is also not subjective. How do you not get this basic concept, which in some form or fashion is a part of every camera and every display device, and is why we have meters and chip charts, color bars and densitometers? Seriously. RAW does not wipe away the fact that real, measurable hue and value are being recorded. After that, the process is subjective. Without the objective part being dealt with in a meaningful and thoughtful way, the result is just operator-induced oscillation. If you'd rather work like that, knock yourself out.
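A minimal sketch of that objectivity in practice, assuming a synthetic frame and made-up patch coordinates:

```python
# Sketch of the "this is math, son" point: measure a known chart patch
# and quantify the cast objectively. Frame, cast and patch coordinates
# are all synthetic stand-ins.
import numpy as np

REFERENCE_GRAY = np.array([0.18, 0.18, 0.18])   # known 18% gray value

frame = np.full((1080, 1920, 3), 0.2, dtype=np.float32)
frame[..., 2] *= 0.85                            # simulate a blue deficit

patch = frame[500:540, 940:980]                  # pixels covering the chip
measured = patch.reshape(-1, 3).mean(axis=0)

gain = REFERENCE_GRAY / measured                 # per-channel correction
print("measured:", measured.round(3))            # how far off the lab is
print("gain to neutralize:", gain.round(3))      # not an opinion, a number
```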
  12. Fast for anamorphic is in the f/2 - f/2.8 range. You can go faster with certain lenses on the SLR Magic Anamorphot. How fast you can go and still achieve a sharp enough image with the Anamorphot depends on the optical design of the taking lens and how that sauce mixes with the Anamorphot. A Speedbooster might or might not affect that relationship. For a given focal length, one lens might be usable at f/1.7 while another might require f/2.8 as maximum (prior to breaking out the diopters).
  13. Perhaps you were being too subtle with your Ms. Lima example. Here's Vanessa Williams: ...then you have actors like Michael Ealy from Almost Human. Some theorize all blue eyes come from a single ancestor who lived about 8,000 years ago on the coast of the Black Sea. There is a rare genetic disorder that can give light blue eyes to a small number of folks of African descent, but there are also non-Waardenburg Syndrome instances of Africans born with blue eyes where it's not the product of interbreeding with someone of European descent.
  14. You're suggesting that supersampling introduces aliasing. Mmm-kay, dubious, but playing devil's advocate, perhaps bad downscaling can introduce aliasing. Yeah. So do it well. Do it right. Use quality software. Work at a high bit depth and use high quality scaling. Folks wanting to cut corners and finish in their editing software, yeah, they might have some issues. Not only does supersampling not introduce aliasing, it contributes to noise reduction, as well as artifact reduction in the case of AVCHD footage. That's not theory, that's experience. Finishing in something like After Effects from a 32-bit linear light workspace does not introduce the artifacts you're describing during downsampling (a sketch of a gamma-aware downscale follows below). Nothing negative is introduced. I would hope that something like Resolve would be as capable.

Upsampling horizontally is also very forgiving, even at SD resolutions. This is the defining exploit in our visual system that made the 16:9 anamorphic DVD the highest quality home release you could get of any film until BD. It's what's behind the color sub-sampling present in all broadcast video formats past, present and, looks like, future. With full sampling in the vertical field, Sony was able to pull off their amazing con of 2001, convincing certain filmmakers that the original HDCAM, in all of its 3:1:1, 135Mbit, 8-bit codec glory, was somehow the death-blow for film.

I will agree that optical does a better job, especially if all you're doing by comparison is a naive digital scale. Saying what you're doing is a viable alternative for folks with 36mm sensors is all fine and good. I've suggested the same, given that, besides a few of the available adapters, large format cameras tend to have more caveats associated with compatible lenses, etc. But the provocative title of this thread, and several other statements within it, go above and beyond claims of having a viable alternative.
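A sketch of what "high bit depth, linear light, quality filtering" can look like in practice (a simplified gamma 2.2 stand-in for the real transfer functions; Pillow and NumPy assumed, file names hypothetical):

```python
# Sketch of a gamma-aware downscale, the kind of high-quality path argued
# for above (simplified gamma 2.2, not exact sRGB or a full 32-bit linear
# compositing pipeline).
import numpy as np
from PIL import Image

src = Image.open("frame_4k.png").convert("RGB")   # hypothetical 4K frame

# Decode to linear light at float precision before any filtering.
linear = (np.asarray(src, dtype=np.float32) / 255.0) ** 2.2

# Filter/scale in linear light, one 32-bit float band at a time.
bands = [Image.fromarray(np.ascontiguousarray(linear[..., c]), mode="F")
             .resize((1920, 1080), Image.LANCZOS)
         for c in range(3)]
small = np.stack([np.asarray(b) for b in bands], axis=-1)

# Re-encode for display and save.
out = (np.clip(small, 0.0, 1.0) ** (1 / 2.2) * 255.0).round().astype(np.uint8)
Image.fromarray(out).save("frame_1080.png")
```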
  15. Please re-read what I'm saying. What the sensor sees, what the relative powers of light were in the scene as shot, are not opinions. Please quote where I say it would come up "done". In fact, I specifically say it should come up as a correct representation of the scene as it was photographed. From there you grade. RAW isn't arriving at chicken salad out of chicken shit.
  16. To achieve a 2.40:1 aspect ratio with the 5D you crop 1080 lines to 800 lines. The 1080 lines of the original 16:9 footage correspond to 20.25mm of sensor height, and after cropping this means an effective vertical sensor size of 15mm. After cropping, a "scoped" 5D represents a 36mm x 15mm sensor. Anamorphic 35mm, however, is 21mm x 17.5mm at the negative, which makes it a larger format, by more like 16% (*), in the most important dimension with respect to lens selection, framing, scale and focus distance. (The arithmetic is spelled out below.) Besides the elongated bokeh and horizontal flares (for bent glass up front, versus the boring middle and rear anamorphic designs), the fact of it being a larger format plays into its overall aesthetic. This is also a factor in its favor when shooting on smaller format cameras, like MFT + 1.33x, versus shooting spherical and cropping in the same format (putting aside the oversampling and enhanced detail captured when conforming to flat 1080p). Anamorphic allows the use of the entire sensor height of the 16:9 camera, which puts you closer to your subject than you would be when cropping for "scope" format. It's vertical framing that determines your distance to subject, after all, not horizontal, when you're dealing with widescreen. Focusing on a subject that is closer to camera means enhanced bokeh (hence why anamorphic of any stripe is bokehlicious on the 5D and similar cameras, even if the optics are sometimes harder to wrangle). Yeah it is. Doubly true if you're shooting AVCHD. (*) With only a single decimal place given for anamorphic, and the cropped 5D being a whole number, the math is a little "roundy" on this point, given 15/17.5 = 0.857 --but-- 17.5/15 = 1.166.
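The arithmetic from the post, spelled out with the figures as given:

```python
# The 2.40:1 crop math from the post, using the figures given above.
SENSOR_W, SENSOR_H = 36.0, 24.0        # 5D sensor, mm

h_169 = SENSOR_W * 9 / 16              # 16:9 capture height
print(h_169)                           # 20.25 mm

h_scope = h_169 * 800 / 1080           # crop 1080 lines to 800 for 2.40:1
print(h_scope)                         # 15.0 mm -> "scoped" 5D is 36 x 15

ANA_W, ANA_H = 21.0, 17.5              # anamorphic 35mm negative, mm
print(round(ANA_H / h_scope, 3))       # 1.167 -> ~16% taller, the (*) note
print(round(h_scope / ANA_H, 3))       # 0.857 -> the ~.86x "crop" figure
```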
  17. How ironic. Shooting 2.40:1 on the 5D turns it into a "crop" sensor camera (.86x) compared to anamorphic 35mm.
  18. They get to a look, or a representation, but they're still arriving at a subjective guess as to what it should be. There should be no guessing and no series of mouse moves besides loading your clips. When you first see your footage you should be actually seeing your footage properly transformed for viewing and then grading it from there. That's why Academy Color Encoding System is such an exciting development for RAW and why, hopefully sooner rather than later, there will be no reason to not be working this way. Smarter, not harder.
  19. And I'm in no way advocating using FC on everything or conversion to a filmstock look as something that needs to be done on everything or by everyone. I love what it does but it's just another tool in the toolbox.
  20. There is another tool that is being, or will be, marketed by a professional colorist, and Shian Storm is also, in a more limited way, doing some empirically derived film stock transformations in his ColorGHear suite. It may not always be FC, but there is a lot of "film conversion" going on, even in professional circles. Film, and the way it renders tonality and color, will haunt digital acquisition for a long time to come. They've yet to build a digital camera that can hold a candle to the best work being done in celluloid even today. Anyway, yeah, people asked for "raw", but people didn't ask for the highly imperfect and incomplete log-to-lin setup that's most commonly being used. There is too much effort spent simply trying to coax out an honest and true rendition of the light the camera saw and the relative values present in a scene. Getting to an appropriate linear representation, so that you can then grade from a place of awareness and creativity, a place where you can make meaningful decisions that are more than just trying to get it to not look like shit anymore, shouldn't be an ordeal; it should just be. It should be the first thing you see when you look at your footage. Seeing a meaningful representation of what you shot doesn't mean baking anything in or losing anything in the raw data. I got my first taste of that swapping over to ACES on this short I'm doing VFX for, which was shot on the Epic. It's shocking how a manufacturer IDT that's universally regarded as "horrible" is still so much better than their own ability to meaningfully render their own native footage to the monitor. Anyway, the type of techniques that make FC possible are what will make mixing multiple cameras from multiple manufacturers on the same film as close to seamless as you will get, until hitting their relative limits.
  21. They sound like they're working over their head then. That's the downside to democratized tools.
  22. I think that's my answer. And this would mean 960x1280 would be 18mm x 24mm, if you were so inclined. That the height is variable under normal, non "crop mode" shooting is good to know. Thanks!
  23. This isn't correct. You're thinking of something like Magic Bullet Looks. Film Convert transforms the measured response of a digital camera to the measured response of several actual film stocks. It's essentially doing a very specific type of transform that you could do in an ACES color pipeline if you had an IDT for a given film stock. The science being used is extremely contemporary and relevant; it's the kind of thing that's going to eventually make all this sloppy, subjective "LUT jockey" business that's so endemic to raw photography an unfortunate little footnote of the past. You could do a conversion and then consider this your "look", the same as filmmakers have for a hundred years or more shooting film and having it processed. In the digital world, the result of doing Film Convert is your inter-positive. If a filmmaker then has more money, they can time their film to have a different look, instead of having just "one light" processing. That is the step that comes after Film Convert: FC gives you your IP, and now you can give it an artistic grade, or not. They have some very basic controls for doing this, but of course a dedicated grading tool will be better. The ideal solution is to do your conversion and then grade. Whether that is done is up to the sophistication of the end user, both their technical ability to grade and their ability to make the distinction between the two operations. How it fits into a RAW workflow is another matter. Their standalone tool, presently, I don't view as useful for more than a hobbyist who just wants to play around with it. It doesn't even process AVCHD footage with sufficient accuracy, or the quality you get by using the plug-in for After Effects and doing your conversion in a 32-bit float environment. For RAW you would definitely want to be using the plug-in and not the standalone, letting the host application handle the log-to-lin conversion (of course, successful use of Film Convert depends on their "IDT" being based on the same linearized data).
  24. Okay, but resolution wasn't really my interest. It's evident in all the testing that you get good enough resolution with a 5D; what's not clear to me is where this is being pulled from on the sensor. Since the sensor is 36mm x 24mm (1.5:1), it stands to reason that standard 16:9 motion video is derived by lopping off the top and bottom, so shooting standard 16:9 means you're pulling image from ~20.25mm of sensor height. What I'm really wanting to know (and perhaps I just didn't phrase my question right) is: when you define a non-16:9 aspect ratio, is the vertical dimension always pulled from the same ~20.25mm area as standard 16:9 imagery? Between 1.78:1 and 1.5:1 it makes more logical sense to grow the crop region vertically until eventually you're pulling from the entire 24mm sensor height and not cropping any more. Then, between 1.5:1 and 1.33:1 (or 1.2:1 or 1:1), you'd maintain the full 24mm sensor height and just keep cropping in the sides, using less and less of the 36mm sensor width. But perhaps that isn't the way it's done, for some technical reason, and ~20.25mm is always read for the vertical dimension, with all sub-16:9 aspect ratios created by cropping from the 36mm dimension. I'm just curious which way it works; the two possibilities are formalized in the sketch below.
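To make the question concrete, here are the two hypotheses as code (pure conjecture about the firmware's behavior, just formalizing the alternatives):

```python
# Sketch of the two possible crop behaviors asked about above. This is
# conjecture, not a description of what the camera actually does.
SENSOR_W, SENSOR_H = 36.0, 24.0        # 5D sensor, mm

def crop_grow_vertically(aspect):
    # Hypothesis A: widen the vertical window until the full 24mm is used
    # at 1.5:1, then trim the sides for ratios squarer than that.
    if aspect >= SENSOR_W / SENSOR_H:              # 1.5:1 and wider
        return SENSOR_W, SENSOR_W / aspect
    return SENSOR_H * aspect, SENSOR_H             # squarer than 1.5:1

def crop_fixed_height(aspect):
    # Hypothesis B: always read the same ~20.25mm used for 16:9, and make
    # every squarer ratio by cropping the 36mm width alone.
    h = SENSOR_W * 9 / 16                          # 20.25 mm
    if aspect >= 16 / 9:                           # wider: crop the height
        return SENSOR_W, SENSOR_W / aspect
    return h * aspect, h                           # squarer: crop the width

for ar in (2.40, 1.78, 1.5, 1.33):
    print(ar, crop_grow_vertically(ar), crop_fixed_height(ar))
# The two hypotheses diverge at 1.5:1 and 1.33:1, which is the question.
```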