
Leaderboard

Popular Content

Showing content with the highest reputation on 11/29/2013 in all areas

  1. You'd better start by learning your CC software, with the manual or, better yet, a complete video tutorial course. Beyond that, here is some very general advice on requirements and workflow.

Requirements:
1. Calibrate your monitor(s) ...
2. Backlight them with 6500K-balanced lights.
3. Paint the wall behind them a dark, neutral grey.
4. Exclude all strongly colored objects from your FOV.
5. Don't allow reflections or mixed light temperatures.

Workflow:
1. Optimize your clips (neutralize them). Don't trust your eyes; use the scopes. This is primary CC. (See the sketch after this list.)
2. Apply changes to selected areas of your images. This is secondary CC.
3. Find a look for the most enigmatic shot in one sequence. Now your taste and your good eye are called for; don't start this step when you're tired or not relaxed. This (on top of steps 1 & 2) is a grade. Copy the grade.
4. Apply the grade to the rest of the sequence and fine-tune it. You will find that by now, clips from different cameras, lenses or exposures look about as congruent as possible, say 98%.
5. Finally, iron out the missing 2% by comparing all your shots with the tools your software provides (split screen, light table, whatever). After grading for look, grade for consistency.
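Since step 1 of the workflow above is about neutralizing by the numbers rather than by eye, here is a minimal, hypothetical Python sketch of that principle: a gray-world white balance that equalizes the channel means. Real primary CC is done with lift/gamma/gain against waveform and vectorscope readings; this only illustrates the idea of pushing a color cast toward neutral.

import numpy as np

def neutralize(frame):
    """frame: float RGB image in [0, 1], shape (H, W, 3)."""
    # A neutral image has roughly equal R, G and B averages.
    means = frame.reshape(-1, 3).mean(axis=0)
    # Scale each channel so its mean matches the overall mean.
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(frame * gains, 0.0, 1.0)

# Synthetic warm-tinted frame as a stand-in for a real clip:
rng = np.random.default_rng(0)
frame = rng.random((270, 480, 3)) * np.array([1.0, 0.9, 0.7])
print("before:", frame.reshape(-1, 3).mean(axis=0).round(3))
print("after: ", neutralize(frame).reshape(-1, 3).mean(axis=0).round(3))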
    4 points
  2. Eizo CS or CX, as mentioned above. Images look like eye candy on the iMac screen but flatter and darker on my Eizo. You want a true representation for correct grading.
    2 points
  3. The iMac monitor is brighter and punchier than a proper grading monitor, so I use a second, flatter monitor alongside it. I think you can probably get by with the current top-level iMac with a fully upgraded graphics card, or one of the more powerful MacBook Pros with Retina.
    2 points
  4. Quote: [Dan, I can only ask you for better tuning of the highlight handling. I saw Cineform samples from the S35 model (an Asian woman in the forest against the sun) and they handle highlights in a more delicate way, I believe because they use the Protune Cineform curve.]

There are some issues that people seem not to understand about how monitoring-to-end-result matching works in the KineRAW cameras.

1) You cannot apply grading directly to the Cineform recordings, because no matrix has been applied to the raw Bayer data (you cannot cross-mask spatially separated colors). If the 3D LUTs the camera makes are not applied, the relative saturation of red, green and blue will be wrong when you use a simple saturation increase in grading. This seems to be the source of many of the complaints, but in fact it is people NOT matching the camera monitoring that upsets the relative saturation of the various colors.

2) Most people using the DNG frames seem not to be using the 3D LUT the camera makes for matching the raw data to the monitoring, so the results do not match anything seen in the camera; and since no matching matrix was applied, you get the same problems as in case #1, except this time you are grading from unbalanced red, green and blue saturation taken from linear 12-bit sensor data.

3) In case #1 the data is "LOG90"-encoded Cineform data, which gives equal weight to each stop of the brightness range. That is the wrong starting point to view on a Rec.709 or monitor-gamma display: you get far too much code range in the shadows, which makes the results show too much grain and leads to heavy underexposure (not to mention that no matrix has been applied, so the colors are skewed).

4) If you process the DNG frames without using the 3D LUT that converts the linear sensor data to "LOG90" data, and then apply the second 3D LUT that converts the LOG90 data (the same one used for the Cineform recordings), your results will not match anything to do with the in-camera monitoring. How could they? There is no DNG header metadata that can exactly match the in-camera curves; that is why the camera generates the 3D LUTs. So your workflow needs to be set up to ignore the XYZ-to-RGB DNG tag and only interpolate to linear, gamma 1.0, greenish results; those then pass through the two mating 3D LUTs (the linear-to-log one is constant, while the monitoring-curve one mates only with the shot in whose folder it was written). A generic sketch of such a two-LUT chain follows point 5 below.

5) You are NOT working with an RGB recording. The DNG data is the SAME no matter what monitoring table is used; aside from the exposure and the analog gain applied to the sensor preamps that feed the ADC, the signals after the ADC are simply recorded, and no changes to the camera settings have ANY impact on the results, other than the making of the 3D LUTs that are put into each shot's folder. If you do not use those 3D LUTs, you are on your own, and your results will be the same no matter which look group you use: Kine709, KineCOLOR and KineLOG have no impact on the recorded data (other than their shortcuts that have the ISO setting force analog gain when you are NOT using the so-called "expert" mode).
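To make point 4 concrete, here is a minimal Python sketch of applying a two-LUT chain (linear-to-log, then log-to-monitor) with trilinear interpolation. The grid size and the curves below are invented stand-ins; the actual LOG90 curve, the camera's LUT file format, and the per-shot monitoring LUTs are Kinefinity-specific and not reproduced here.

import numpy as np

def apply_3d_lut(rgb, lut):
    """rgb: (..., 3) floats in [0, 1]; lut: (N, N, N, 3) table indexed [r, g, b]."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                          # fractional position inside the cell
    out = np.zeros_like(pos)
    for corner in range(8):               # blend the 8 surrounding lattice points
        bits = [(corner >> k) & 1 for k in range(3)]
        idx = [hi[..., k] if bits[k] else lo[..., k] for k in range(3)]
        w = np.prod([f[..., k] if bits[k] else 1 - f[..., k] for k in range(3)], axis=0)
        out += w[..., None] * lut[idx[0], idx[1], idx[2]]
    return out

# Hypothetical stand-in LUTs baked into 17^3 grids (real ones come from the shot folder):
N = 17
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, N)] * 3, indexing="ij"), axis=-1)
lin_to_log = np.log1p(1023 * grid) / np.log(1024)                  # toy log encode
log_to_mon = (np.expm1(grid * np.log(1024)) / 1023) ** (1 / 2.4)   # decode + display gamma

pixel = np.array([[0.18, 0.18, 0.18]])    # linear 18% gray
print(apply_3d_lut(apply_3d_lut(pixel, lin_to_log), log_to_mon))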
6) Operation of the camera would be more understandable if it had buttons labeled ANALOG GAIN, EI/ISO CURVE and K/LIGHT_TYPE. Instead of making the controls clear, they have combined those functions to "simplify" the camera for the Asian market. You have to request the "expert" mode; then, on the S35, you use the ISO and F2 buttons to set the analog gain and the EI/ISO curve independently. I have not used the Mini yet to see how they implemented it there, but you need to understand what is going on to figure out how the menus affect the 3D LUTs that get made versus the burned-in aspects affecting the recorded data. Their setting the analog gain above 1x for green does tend to make the highlight range shorter; it depends on where they put the origin point so that the EI/ISO number reads right for 18% gray cards, and whether you scale from the black bias point or from code value 0, etc.

7) The curves I made do not clip anything, not the red, green or blue channels, because the three channels are balanced using the analog gain. The only way to prevent clipping, as shown by the camera's 90% zebras on the raw data, is to reduce the exposure or use a circular polarizer to cut reflections by a stop. Nothing can be gained by changing the curves; KineCOLOR is soft enough for normal use if you set the exposure right and use EI/ISO 1280! without additional analog gain.

8) The amount of highlight range when the analog gain for green is set to 1x depends on the EI/ISO curve used. The maximum highlight range is when you see 2560! displayed; higher speeds are not useful for 12-bit data, because there would not be enough data above the FPN and below the 18% gray code value. At speed 1280! the full Cineon range is filled, so you get about the same range as a 35mm film scan; at 2560! I added a small shoulder curve so that NO data would be clipped. At speeds under 1280! the sensor does not have enough highlight range to fill the "super white" Cineon data range above code value 685/1023 in DPX log files. You cannot compensate for that lack of highlight detail, other than by controlling the contrast range of what you are pointing the camera at: use fill lighting to lift the shadows so you can reduce the overall exposure and keep the highlights you need under sensor clip.

9) KineCOLOR currently defaults to having the 3D LUT output full-range data; you cannot ingest those results into a Rec.709 (or perhaps DCI P3) workflow without clipping both the highlights and the shadows. In addition, the internal signals in the camera's monitoring are full range; only the output-range settings reduce that, making the 3D LUT output the sub-range ITU-601 limits for Rec.709 use, etc. If you shoot using KineCOLOR, you can change the output range limits to the ITU-601 "HD limits" to avoid this issue; that makes the output of the 3D LUT limited, like the default state of Kine709, so you can ingest the (RGB) output of the 3D LUT into Rec.709 workflows. If your DCI P3 workflow is also ITU-601 limited, keep that in mind as well: that is, do you expect the DPX frames to have a range of 64/1023 to 940/1023? KineCOLOR has a range of 0/1023 to 1023/1023 by default, with the 18% midtone around 470/1023. (A small remap sketch follows point 10 below.)

10) It's not possible to have "softer" highlights than what is there, because the current looks do not clip the highlight data: you get everything the sensor puts out. If the highlights are clipped in the DNG data, that is the result of not using the 90% raw zebras when shooting.
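Point 9's range issue is simple arithmetic; here is a minimal Python sketch of remapping 10-bit full-range code values into ITU-601 legal range. The 64..940 limits are the standard ones; where Kinefinity applies them in the signal path is their implementation detail.

import numpy as np

def full_to_legal_10bit(code):
    """Map 10-bit full-range codes 0..1023 into legal range 64..940."""
    return np.round(64 + code.astype(np.float64) * (940 - 64) / 1023).astype(np.uint16)

codes = np.array([0, 470, 1023])     # black, KineCOLOR's ~18% midtone, white
print(full_to_legal_10bit(codes))    # -> [ 64 466 940]

Ingesting full-range KineCOLOR into a workflow that assumes these limits, without this remap, clips everything below 64 and above 940, which is exactly the complaint described above.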
11) I could lower the 90% white card point so that it sits closer to the 470/1023 signal level in KineCOLOR. I suggested having a softer look group, but Jihua did not want that, as he thought too many choices would confuse people; the camera supports about 36 look groups and only three are being used, so there should be room for more. You cannot, however, put 90% white TOO close to 18% gray in the EI/ISO curve, because that puts a kink in the curve that does not look good on skin tones where light is modeling the highlights and shadows (a quick calculation of that separation follows below). It is better to have as smooth a curve as possible and to adjust the exposure and lighting when shooting so that the light or dark side of the face aligns with the 18% gray reference tick mark on the histogram display. You see a gray and a white vertical line on the histogram; those are the calibrated code levels for the ADC output when the exposure is set right and you are shooting 18% gray and 90% white cards. Faces should have their peak between those lines. You can set the viewfinder zoom to 800% and point the camera at the actor's face to use the camera as a spot meter, adjusting the shutter and iris to get the face exposure into the correct range. If things sit too far under the 18% gray tick mark, you will end up with dark shots showing heavy noise, EVEN THOUGH THEY DO NOT LOOK DARK when you display the Cineform LOG90 without the corresponding 3D LUT, because the "raw" Cineform recordings are NOT meant to be graded from directly: they lack the correct color matrix and have the shadows expanded far too much, leading to chronic underexposure and excess grain noise, as well as FPN showing up later when people grade the shots several stops above the EI/ISO rated at the time of shooting.

I can't add highlight detail that is not within the sensor's range. The current curves don't clip anything; if there are clipping problems, it is probably from not mating the workflow to the output range of the 3D LUT, or the exposure was wrong, or they have fiddled with something I don't know about. You also probably need to set the monitoring path's monitor range limits to 16/255 and 235/255, or the camera's LCD monitor may show clipping IN THE MONITOR, because HD monitors are NOT full range. I don't know how they implemented the HDMI interface, but it should already be applying ITU-601 limits; not that all LCD monitors can display that range anyway...

Additionally, Kinefinity.com (sm) changed the camera's design AFTER I calibrated the look groups, invalidating the color matrix settings and everything else. Their engineer Cheng seems to have fiddled with the analog settings by adding a translation table to try to match the previous white balance, but as far as I know they have not compensated for the saturation difference in the Bayer filter dyes between the 1st-generation sensor I was using and the current production sensor (or for changes to the IR glass in the OLPF filter, etc.). I will have to talk to Cheng to get him to remove the translation table so I can re-calibrate everything back to the native sensor output. Hopefully he can do that; people using the new, revised look groups will probably need to upgrade the camera firmware to clear out vestiges of the current issues.
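As an aside on point 11's warning about pulling 90% white too close to 18% gray: in scene terms those two references are a fixed distance apart, so compressing them on the curve necessarily kinks it. A quick check in Python:

import math
stops = math.log2(0.90 / 0.18)       # scene-referred separation of the two cards
print(f"90% white sits {stops:.2f} stops above 18% gray")   # ~2.32 stops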
If Cheng is scaling the 12-bit data to correct the white balance (I would hope not), that could introduce histogram gaps and clipped color channels, something that was not going on in the prototype S35 I used for the calibrations. Hopefully I can get some straight answers if they really want me to untangle the current issues and get the Minis working as intended...
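A minimal demonstration of the histogram-gap concern in that last paragraph: digitally scaling integer 12-bit data by a white balance gain leaves unused code values ("comb" gaps) and piles clipped codes at 4095. Whether the production firmware actually does this is exactly the open question; the 1.25x gain here is just an example.

import numpy as np

data = np.arange(4096, dtype=np.uint16)                  # every 12-bit code once
scaled = np.clip(np.round(data * 1.25), 0, 4095).astype(np.uint16)

hist = np.bincount(scaled, minlength=4096)
print("empty bins (gaps):", int((hist == 0).sum()))      # ~819 unused codes
print("codes clipped to 4095:", int(hist[4095]))         # ~820 piled at the top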
    1 point
  5. NVidia GTX 770 4GB, any make; on a budget, maybe a Zotac. For Socket 2011, an Asus P9X79 Pro mobo, which gives 2x 16x PCIe 3.0; on a budget, a basic quad-core processor; 16GB or 32GB of HyperX (4x 8GB sticks). An Eizo CS-series monitor plus a cheapo monitor for the GUI and scopes. The SDI card and 3D LUT box are really a step too far on a budget. Most important is GPU power with as much VRAM as you can afford (4GB really, particularly if using any temporal filters); second in importance is RAM, 16GB minimum, 32GB better; least important is the processor, which is just used for encoding. Personally I'd not waste cash on an 8-core or greater processor, on an SSD, or on 2x mediocre Dell monitors; perhaps put that cash towards one decent entry-level reference monitor like the Eizo CS range, or the CX range that'll take a 3D LUT, if you can afford it.
    1 point
  6. In fact, just the difference between 23.98 (what Final Cut uses) and 23.976 (what everything else uses) is quite big. When you have timecode burned into your footage and you start your edit at 10:00:00:00, you can end up with frame offsets here and there. I had this problem two years ago when a client who worked in FCP (23.98) gave me the project to import into AE (at 23.976)... I lost a few days of work when I saw everything was out of sync (a 1h30 feature film with hundreds of cuts).
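For scale, here is the drift if FCP's "23.98" really ran at exactly 23.98 fps while AE used the true NTSC film rate of 24000/1001. (In practice both labels usually mean 24000/1001; this models the mismatch the poster describes.)

duration_s = 90 * 60                      # a 1h30 feature
ntsc_film = 24000 / 1001                  # 23.976023976... fps
drift = duration_s * 23.98 - duration_s * ntsc_film
print(f"difference over 1h30: {drift:.1f} frames")   # ~21.5 frames, nearly a second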
    1 point
  7. I much preferred that natural flare version above to the vivid blue one. It is pretty distracting in many situations to have royal blue lines across your image, but a natural, context-sensitive flare can do wonders. Still, it'll work for hip-hop and Blade Runner-style vids! I think it may turn away a fair few people, though... a naturally colored flare can settle into any genre, even realistic narrative.
    1 point
  8. You asked for opinions, but you sound like someone who already knows. You say you are a photographer at heart. Most photographers I know want RAW, as RAW as they can get. Naturally, you can't shoot hours of documentary interviews in RAW, but you weren't clear about what you're looking for. Everyone was polite about that. But since you think I, at least, can read your mind: no, I can't. Exactly what do you want to use the camera for? What things would make a difference between the cameras you mentioned? They're all basically the same thing; get the cheapest. And when you say, "I bought a BMCC and sold it, not useful for my work at all": why? Other people here have that camera. Maybe you know something that would be helpful to them.
    1 point
  9. Here are more examples, all taken at full aperture, f/1.6.
    1 point
  10. You should also keep a lookout for a cheap 50D and run Magic Lantern RAW on it. I've owned a NEX-5N and a NEX-7, both great cameras, but in the end, to me, H.264 looks pretty much the same from all these cameras; there's only so much you can do with 3-6 MB/s of bandwidth (see the rough numbers below). Another camera to consider is the EOS-M, which I've worked with a bit. Here's a RAW shooter's guide: http://www.magiclantern.fm/forum/index.php?topic=8825.msg82944#msg82944 If you have any Canon glass below 20mm, you can get, in my opinion, awesome high-dynamic-range 720p out of the camera. No internal audio. ML is in alpha and, though buggy, is very reliable if you stick to using one card for RAW. On the NEXes, the 7 had higher resolution, but I couldn't see any difference, which makes sense because you're taking such a small part of the image. Some things about Sony are irritating, but overall I have to say they show a greater love of consumer imaging equipment than the other manufacturers. On the full-frame question, I don't get that. You can get shallow DOF using, say, a $30 c-mount TV lens, and if you don't mind the edge softness, create stunning images, I think.
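Rough arithmetic behind that bandwidth point, in Python. The resolution and frame rate are illustrative, not exact ML figures, but they show why 14-bit raw dwarfs a 3-6 MB/s H.264 stream:

width, height, bits, fps = 1280, 720, 14, 24      # illustrative 720p-ish raw stream
raw_MBps = width * height * bits * fps / 8 / 1e6
print(f"raw: ~{raw_MBps:.0f} MB/s vs. H.264 at 3-6 MB/s")   # ~39 MB/s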
    1 point
  11. For real. Like we've never had the "content is king" rant here before. Thanks for stopping by, bro. We had no idea... lol
    1 point
  12. Hello friend, you seem lost. This is a thread about nuances in cinematography: it's an "In Depth Test" of Camera A vs. Camera B vs. Camera C vs. Camera D.
    1 point