Everything posted by Axel

  1. @garypayton Yes. It is keying, like greenscreen, but more intricate, because you use a [u]range[/u] of tones (in [b]h[/b]ue, [b]s[/b]aturation and [b]l[/b]uma; the color picker is therefore called an [i]HSL qualifier[/i] in Apple's Color) instead of only one shrill Kermit color. You have to fine-tune the key very precisely. Secondary CC tools also always have a softening filter integrated to feather the matte. Google for [i]Sin City[/i], [i]Pleasantville[/i] or [i]Schindler's List[/i] tutorials; with the extreme effect of completely desaturating everything but one color, you see it best. @yellow The GH2's AVCHD and the 5D's H.264 both use YCbCr, and both are treated in an RGB environment (the colorspace of your NLE, your monitors). No differences are to be expected from that side. Correct me.
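If it helps to see the mechanics behind the GUI: here is a minimal sketch (Python with OpenCV and NumPy; the file name and threshold values are only placeholders, not from any real grade) of what an HSL-style qualifier plus a feathered matte amounts to, with the Pleasantville-style desaturation as the example:
[code]
# Sketch of an HSL-style qualifier: mask a range of hue/sat/value,
# feather the matte, then desaturate everything outside the selection.
import cv2
import numpy as np

img = cv2.imread("frame.png")                      # BGR, 8-bit
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)         # H: 0-179, S and V: 0-255

# Qualify a range of tones, e.g. something red (red hue wraps around 0 in OpenCV)
lo1, hi1 = np.array([0, 80, 40]),   np.array([10, 255, 255])
lo2, hi2 = np.array([170, 80, 40]), np.array([179, 255, 255])
matte = cv2.inRange(hsv, lo1, hi1) | cv2.inRange(hsv, lo2, hi2)

# The built-in "softening filter": feather the matte so the key doesn't tear
matte = cv2.GaussianBlur(matte, (15, 15), 0).astype(np.float32) / 255.0
matte = matte[..., None]                           # broadcast over the 3 channels

# Pleasantville effect: desaturate everything *outside* the matte
grey = cv2.cvtColor(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR)
out = (img * matte + grey * (1.0 - matte)).astype(np.uint8)
cv2.imwrite("frame_graded.png", out)
[/code]
The same feathered matte, inverted, is what you would use to push everything except the qualified tones towards another color.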
  2. [quote author=christianhubbard link=topic=290.msg4263#msg4263 date=1334777814] so you're saying after I grade my footage I should mark skin tone as the secondary and grade it so that it lines up with that line? [/quote] You can color correct with primary CC, which actually means optimizing the contrasts and balancing the colors (like a white balance in post). You can do so by neutralizing neutral grey (or white in shade), but you can also check whether prominent skin tones hit the skin line of the vectorscope. I didn't put it well. What I wanted to say is that natural-looking skin is more important than anything else. If there is a cast, you see it first on the skin. With secondary CC, you only change a selection of the pre-corrected image. Either this is the inside of a soft vignette to relight a scene (comparable to Photoshop's dodge tool, which mimics the photographer's wagging hand in the darkroom), or it is a [i]color[/i]. Or all colors. To pimp the mood of an image, you deliberately change all colors. [u]Except skin tones.[/u] You exclude skin tones (as they did in [i]Matrix[/i]), because it doesn't look cool if the faces are green. An example by Laforet ([i]Reverie[/i] and [i]Mobius[/i]), graded by Stu Maschwitz, is [i]Nocturne[/i]: http://vimeo.com/7152063 Note how the skin stays normal in most scenes, whereas the shadows have that greenish (cyan) cast. Only when Death, in the disguise of a young girl, teases the skater does the skin become green too. EOS 1D, but so heavily graded that this would be quite easy to do with a GH2. A good way to understand secondary CC is to follow a [i]Pleasantville[/i] tutorial like [url=http://www.proapptips.com/proapptipsvideotutorials/879F6B61-CFF9-4FD1-8D43-FDF89605611A/5FB37E30-CA68-42CF-9492-E8E89FAFE83F.html][u]this[/u][/url]. Final Cut Express is not a very powerful grading application, but its secondary CC works in principle the same way as in other programs. [quote author=yellow link=topic=290.msg4267#msg4267 date=1334779674] Interpretation and subjective generalisation. With regard to Canon (any model) vs GH2 skin tone comment & video is actually YCbCr not RGB comment. [/quote] I am sorry. English is not my native language. Are you saying that using an RGB working space means misinterpretation and results in bad colors?
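Neutralizing grey in post is simple enough to show in a few lines. A minimal sketch (Python with NumPy and OpenCV; the file name and the patch coordinates are placeholders): pick a patch that should be neutral and scale the channels until it is.
[code]
# Primary CC as "white balance in post": scale B/G/R so that a picked
# neutral-grey patch really becomes neutral, keeping overall brightness.
import numpy as np
import cv2

img = cv2.imread("frame.png").astype(np.float32)   # BGR
patch = img[400:420, 600:620]                      # a patch that *should* be neutral grey
b, g, r = patch.reshape(-1, 3).mean(axis=0)

target = (b + g + r) / 3.0                         # keep the overall brightness
gains = np.array([target / b, target / g, target / r])
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
cv2.imwrite("frame_balanced.png", balanced)
[/code]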
  3. [quote author=christianhubbard link=topic=290.msg4103#msg4103 date=1334631728] What about skin tone makes it so important? can you explain what "bad" skin tone is and how I can avoid it in my videos?[/quote] Stu Maschwitz, the author of [i]The DV Rebel's Guide[/i], says that skin tones belong to the "memory colors". No matter where the picture was taken, we have an idea of how skin tones look. The same is true for the blue sky or the green grass. White objects, on the other hand, may be tinted by the light, by the color temperature of the light, to be precise. So our natural [i]white balance[/i] is really a [i]skin balance[/i]. But technically we need to find the color of the light, and with our camera we find it best by manually white-balancing on a neutral grey object or a white object in the shade (because if actual white is out of balance, at least one RGB value must be clipped, so you'd get no correct WB; the GH2 then tells you the object is too bright). As Sara said, all three color channels (the RGB model of our software is actually dealing with YCbCr video, but forget that for now) are used by skin tones, and Caucasian skin will only have values on the right side of the histogram. She is right that you shouldn't underexpose too much, but just as bad, if not worse, is [i]over[/i]exposure. I loaded a web pic of Sarah Palin into Synthetic Aperture's Color Finesse (you can do this with Color, Photoshop or probably any other application that deals with color) and color-picked her left cheek. These are the values: [img]https://public.bay.livefilestore.com/y1pWCoqxOp3ht-jTK9oM15Id0BUzbJdyjOc5FSI_BUmpkP3BRBkRRCKMdY_vtt2NnNs0oHzMslOnFqJ01f_cocnXA/RGBvalues.jpg[/img] Red very prominent (250 of 255), with green and blue around 180, is typical for skin (Maschwitz calls the color [i]porange[/i], pink and orange). If you have darker skin, it's just that: the same mixture with more melanin in the skin cells. The blood that colors our cheeks is always red. But of course Sarah Palin's face has more than one hue, and to preserve them all through correct exposure is very important. What if a person is filmed during sunset, when there is an orange filter (by nature) over everything? Then it is natural that the skin would also look more orange, [u]but not completely[/u]. We have an internal AWB in our perception. We expect skin tones, and we see them! Even in [i]Matrix[/i], where everything seems to be filmed through a greenish filter made of rotten fish, the skin tones are protected. You protect them by selecting them (usually with color pickers: you qualify their hue, saturation and luma values and play with the tolerances), inverting the selection and shifting all other values towards cyan, in the so-called secondary color correction. [img]http://data.motor-talk.de/data/galleries/0/18/3395/15341920/203244673-w500.jpg[/img] To check in your color grading software whether the skin tones are right, you should use the vectorscope. Here is Sarah Palin's face as analyzed by the color scopes of Color Finesse: [img]https://public.bay.livefilestore.com/y1prOBgEKL00xv62NTVyKrgQPzCFgDU8lHwLsrIbZryT5HNhsGTXy9Rz7-imikQAoiNKlj616287iPWMKyweTiXPA/Skinline5.jpg[/img] If you read the vectorscope's circle as a clock face, the so-called skin line points to almost eleven o'clock, just off the "R" (red) target. You always get this line with correct skin tones. In Avatar, Cameron deliberately ignored the rule that the skin tones need to be left out.
The humans reflect computer displays and become blue, they reflect foliage in the jungle and become green, and the blue Na'vi become orange when they stand around a fireplace. EDIT: The skin tone values cannot be limited to "the right side of the histogram". Sarah Palin's cheek reads 78% luma. But color-picked beneath the chin, the value reads: [img]https://public.bay.livefilestore.com/y1p12N5ttDkpN_OBUseretOzXuO25Lzv9zWBrmSbTbzAOliJ84Tr_qk39AiURpCee7roHWA5qTGvZEfuJ3mM-C4VA/Chin.jpg[/img] 26%! This is the darkest spot, and still clearly part of her skin. The brightest is 83%.
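If anyone wants to check the skin-line claim numerically rather than on the scope, here is a rough sketch (Python, BT.601 coefficients; the RGB triple is only the approximate cheek value quoted above, and the clock-face reading of course depends on how the scope is drawn):
[code]
# Where does a picked skin tone land on the vectorscope?
# Convert RGB to Cb/Cr (BT.601) and look at the angle and length of the vector.
import math

def vectorscope_angle(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return math.degrees(math.atan2(cr, cb)), math.hypot(cb, cr)

# Approximate cheek value from the screenshot: R 250, G and B around 180
angle, saturation = vectorscope_angle(250, 180, 180)
print(round(angle), round(saturation))   # ~109 degrees: up and to the left, towards the skin line
[/code]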
  4. Thank you, Sara, your information is priceless. And it is a good argument against the often-heard claim that the GH2 makes bad colors. Anyway, the skin tones of the 5D [i]are[/i] very good, even without its owner knowing all that background. [quote author=Sara link=topic=290.msg1877#msg1877 date=1329713657] Second, skin tone is comprised of nearly all the colors of the rainbow (no matter what race).[/quote] I don't want to imply anything, but there [i]are[/i] no human races. Breeders select certain physical characteristics (e.g. skin tones) arbitrarily and prevent or control interbreeding; by this they cause de-gen-eration, eugenics and racism. Biologically there are only species, and their defining mark is that their individuals can mate and reproduce. Excuse the OT; we should be exact about these things.
  5. [quote author=AndrewP link=topic=570.msg3945#msg3945 date=1334374811]The short films that I've seen done with the FS100 just don't seem convincing in a "cinematic" way... I don't necessarily mean "filmic" or any of the other annoyingly ambiguous terms that constantly get thrown around. It evokes a very grounded in reality, behind the scenes-esque , News Style, Travel Channel HD Expose, Top Chef / Cooking Show, Reality TV / Fear Factor vibe.[/quote] In German, with its many anglicisms, we say "Filmlook". This refers only to the image itself, not to the motion-emotion part of "filmic" and "cinematic". A kind of Filmlookology has developed, and the rules and characteristics are concrete and available:
1. Filmlook can only be applied to video, not film. In the German counterpart to the magazine [i]The Cinematographer[/i] ([i]Der Kameramann[/i], #4/2012) there was an article about how modern digital cameras are used in a way that makes their videos look like analogue film. In other words: Filmlook is desired where Videolook becomes apparent and annoying. Audiences hated the look of Michael Mann's [i]Public Enemies[/i] (filmed in 30i). Since [i]Cloverfield[/i] was supposed to look like video, it was okay there. So it's not just limited to amateur movies.
2. Because video cameras traditionally had small sensors, you'd use shallow DoF. Needless to say.
3. Because video cameras traditionally had interlaced video, you'd use progressive video. Also obvious. The artifacts of "i", though perceptible, are a minor issue. What is more important is the [i]temporal resolution[/i]. 24 fps has become a viewing habit for us; it tells us subliminally that the time of the film is [i]narrated[/i] time and not real time. 25i/30i, but also 48p/50p/60p, signal real time, which is good for docs, news, porn and our baby. Watch Laforet's [i]Reverie[/i] again, with its 30p (the 5D before the first firmware update): those six extra frames are enough to make it look less, uhm, [i]cinematic[/i]. Time in a film is too important to allow it to look real. I think nobody disagrees.
4. Films we watch in a cinema use style to be instantly distinguishable from reality. To suspend our disbelief, to draw us into the movie, a lot of techniques are used to create a suggestible mood. Trance-invoking techniques. Ironically, these have nothing to do with more detail (not [i]at all[/i]) or high-fidelity colors (not [i]at all[/i]). Nobody even cares about those attributes.
The Videolook of the FS100 comes from people who approached it like a classic video camcorder, resulting in a clean and neutral look. "No style" is seen as realistic style, as the recording of real events: meaningless, uninteresting. A style that fits the emotion you want to express with the image is seen as fictional, meaningful, interesting. You get the feeling that this might lead to something. To make these FS100 users understand all the ambiguous terms connected to what is exciting about DSLR filmmaking, they should read EOSHD's articles. Let Andrew ("R") test some adapted lenses on the FS100, and the magic will come. With the native Panasonic MFT lenses the GH2's videos also don't look particularly sexy. EDIT: What you described for the FS100 is also true for every video with higher framerates. Okay, higher framerates are one factor that can cause Videolook, but just because it has always been like this doesn't mean it has to stay true for all time. One can easily imagine an action sequence that looks shockingly hyperrealistic at 48p or more. What we see instead are stills, nearly motionless.
4k has the same effect. It adds nothing to the image but cleanness, and so cleanness becomes the content of the demo. Incredibly boring.
  6. @Bruno You are right. So many changes. It will take time to see through them all.
  7. [quote author=eoskoji link=topic=580.msg3916#msg3916 date=1334337703]Patched FW do contain the support for peaking, right?[/quote] AFAIK, no. There is peaking (but not colored) in the viewfinder and on the display already, with or without the patch. [quote author=eoskoji link=topic=580.msg3916#msg3916 date=1334337703]but for which loupe should I go?[/quote] The loupe magnifies the display, which has a poorer resolution than the viewfinder. On "Musgo" they probably used a loupe. I made a screenshot of the making-of to find out: [img]https://public.bay.livefilestore.com/y1pgE4jWiGl8Fy0oF-Vj4WUPMqs3Tolv6n4ei1u7QotL2dGF5T_jl7KV6gqQDk1QCz7-T6ACGZlCWiWBX3_Cgw9fA/Bild2.png[/img] To me it doesn't look comfortable. My recommendation is to buy an ancient Super 8 camera on eBay with a big rubber eyepiece (most old Nizos have the right size). Here you can see it on my DIY shoulder mount: [img]https://public.bay.livefilestore.com/y1pX535SzjwkcY17nLc1G_pPNiZ6mG07xkOyohSRTvTcC_zqqzKunYB0FBHEADIvWz1doFGfDuB6TtEgvXw9djKQw/Rig_m_Cam3.jpg[/img] Of course you need to adjust the height of the camera very precisely (I use the shoulder pad to raise and lower the camera), but you also need to do that with a loupe. If you don't want to rip apart a Super 8 camera, you can also buy a rubber part normally used to seal a bathroom sink (excuse my terribly awkward English) and cut the viewfinder shape out of its backside: [img]https://public.bay.livefilestore.com/y1pMjcS8ai04ZyD-b6Bv8AvgdPCcf4raizYPvCwS-cXrlFNDAntzkj8HWgB2JWsUOOjo_Fcv2GaEGdDEcPYmENZNA/Handgriff.jpg[/img] (grip not recommended, search for "pistol grip") EDIT: I also tried an external monitor via HDMI. It works well; the output is clean and HD. But you forgo the advantages of a compact setup. And then you can neither film in 720p (it becomes 1080i automatically once HDMI is connected) nor use EX Tele.
  8. Very thorough thoughts on where our low budgets won't be spent. To some of the EOS HD members $15,000 (actually more) may be no issue, but surely that is not the majority. A pity, because it all started so promisingly, with the EOS VDSLRs, then with Scorsese praising the C300 as the perfect tool for indie filmmakers. I believe without proof that the C500 or the new 1D are fine cameras, but not for me. My envy (and that of other former Canon purchasers) may make Canon proud. Will Hollywood sell their Epics and Alexas now? Another point: all these cameras are prototypes in a way (the 4k thing). Prototypes are always expensive, and the next generation is always technically [i]much[/i] better and [i]much[/i] cheaper. This is like buying a 4k TV panel now, when we struggle to receive 720p or 1080i broadcasts (and will continue doing so for at least another decade). Or like buying a hydrogen fuel cell car right now with no hydrogen stations around. Hollywood? Yes, Peter Jackson bought 48 Epics, and he shot in 5k. If that teaches you that one 4k camera will pay off - for cinemas that are and will remain 2k, for films that need to amortize within two weeks and then disappear - then you must buy these beta cams.
  9. Hi Andrew, thank you very much for the article. Ever since I saw the first video by F-Stop Academy a year ago, I have been in love with the very interesting design, but it was all just a little bit too expensive for me. For a few years now we have lived with megapixel sensors that far surpass the target resolution, and we curse all the negative aspects. Despite all efforts to reach a resolution close to full HD (more intelligent demosaicing algorithms, AA filters and so forth), there was no success. Now there's a new hype: you buy 4k, although you need less than 2k (your target resolution), and you do the interpolation in your software. A downsampled resolution is not "real" or "true", but who am I to tell them! Let the lemmings jump over the cliff, and let the FS100 become affordable. Thank you, Sony!
  10. [quote author=JayVex link=topic=568.msg3736#msg3736 date=1334179533]Sharpness is not too much of an issue for me, I am more interested in obtaining a shallower DOF.[/quote] You would of course get a shallower DoF with f/1.6, if only marginally. There are some direct comparisons between the two on YouTube, and one may say the SLR Magic is softer and has a nicer bokeh. At f/1.6 it seemingly tends to produce CA (it was recommended to start at f/2.0), and in backlight it produces a lot of flare, whereas the Olympus is better coated.
  11. [quote author=nahua link=topic=548.msg3735#msg3735 date=1334174667]Don't know why there's so much hate for M4/3 on this thread really.[/quote] I don't see hate. Take the Pentax (now Ricoh) "Pentax Q" with its proprietary Q-mount, designed to make an EVIL (Electronic Viewfinder with Interchangeable Lens) camera smaller and lighter. In the past few decades there have been a lot of examples of such systems that were never developed any further. A cul-de-sac. Some believe that this is the fate of M4/3. But not because the Lumix or Olympus bodies were shelf warmers as photo cameras for amateurs - on the contrary. It's only the idea of putting a diminutive plastic lens that weighs as much as a candy bar on a brick-like chunk of camcorder like the AF100 that makes the, uhm, serious pros frown. The form factor is optimized for small size. Any smaller and the sensor would no longer count as large; any larger and the camera bodies would become larger and heavier too. There are a lot of advantages to small size that are not understood by wannabe pros who need monumental rigs to show off their professionalism and distinguish themselves from the amateurs. I am curious how a device like this for the 5D will do: http://www.youtube.com/watch?v=PiqT4gVEa0s Assuming it is any good in quality, I nevertheless doubt that it will sell well, because it looks wrong to take a camera that - properly rigged up - [i]can[/i] look like a Panavision and make it look like a boring, [i]reasonable[/i] camcorder. So I guess I made a few friends among the GH2 fanboys now. But why all the pettiness? M4/3 is for the GH2, E-mount for the NEX, APS-C for the Rebel T2i and so on. As you wrote, nahua, you can mix everything. I found out that the GH2 is a missing link between the Sony EX camcorders and the 5D MkII. There is proof enough that this works fine. For guys who are judged by their clients by the size of their equipment it may be wise to follow the motto "size matters". If you are not hampered by this misconception, understatement is your friend. Remember Keyser Söze from [i]The Usual Suspects[/i]? Sometimes it is smart to feign modesty; this might get you to places that impostors never reach.
  12. [quote]As in the past, Zacuto will also have 35 theatrical screenings of the test footage (...) To enhance the web experience, each screening will be recorded so web viewers can hear the opinions given by those at the test in 2K. In contrast to previous Shootouts, viewers will watch the test [b]blind[/b], meaning that the footage from each camera will not be labeled. Viewers will be required to write down their favorite shot on paper before being informed what footage belongs to which camera. A discussion will ensue. The scenes will then be played again and Zacuto will reveal which camera is which. A second discussion will follow.[/quote] Nothing that has a prominent aesthetic component can ever be judged objectively. The best you can do is to gather as many subjective reactions as possible and then draw conclusions. People will indeed be very surprised by some of the results - but perhaps mostly those who expected [i]others[/i] to be surprised.
  13. Sara already posted a link to this. To me this is a call to approach filmmaking in this basic way. It didn't start as glossy slideshows, but as moving pictures. It has little to do with the camera used. As the [i]Shut Up And Shoot[/i] book has as its subtitle: Any Budget. Any Camera. Any Time.
  14. [quote author=gene_can_sing link=topic=548.msg3626#msg3626 date=1334000839]For the pro market (many of whom are very particular), Panasonic shot themselves in the foot with M43.[/quote] What kind of pros? Independent filmmakers who shoot image videos for homepages? Freelancers who do just about anything to be able to pay the next rent, and who are asked by the band's manager whether half the fee would be in order? I am an amateur, all right. That doesn't make me blind. When did you last see DoF as shallow as a full-frame sensor's in a big film? The first films that used this intentionally were [i]Die Hard[/i] and [i]Alien3[/i]. Before that, it was considered unprofessional. Most DOPs who work for cinema stop down to f/4, or so it looks. They isolate the foreground, but not extremely. But of course you are right. The whole MFT standard is not professional. Who cares? (btw: I like my ambivalent signature) In a way the EOS 5D beats all the competitors because it lets all the so-much-desired detail swim away in a romantic blur, [i]because[/i] of the big sensor. And the island of so-called sharpness in the middle glows with heavenly flesh tones. That's the worst thing about the Panasonics: they make people look dead. Everywhere the wrong questions.
  15. Again I am out of my depth, but I try my best ("act as if you know!"). Shouldn't you have a LUT for the colorspace to judge it correctly? Or to get it presented in the right way in the first place? (Which of the two sentences is more wrong?) It seems it isn't just "RGB", or there would be no shift. The QT gamma shift doesn't change the values of the pixels; it shifts the gamma only when the video is played. Isn't it something similar with the colorspaces? All you know for sure is that the result looks bad. There could also be a filter, saved once and for all, to compensate for the shift (and it would be easier to apply it to a ProRes master than earlier or later in the chain). Best of course would be to find out why this happens at all.
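Just to illustrate what such a saved compensation filter would amount to, a minimal sketch (Python/NumPy), assuming the shift really is the classic 1.8 vs 2.2 display gamma difference - an assumption for illustration, not a diagnosis of this particular case:
[code]
# Sketch of a one-time gamma compensation, assuming a 1.8 vs 2.2 mismatch.
# Playback raises the stored values to target_gamma; pre-raising them to
# source_gamma/target_gamma cancels the difference: (v**(s/t))**t == v**s.
import numpy as np

def compensate_gamma(frame, source_gamma=1.8, target_gamma=2.2):
    """frame: float array with values in 0..1."""
    return np.clip(frame, 0.0, 1.0) ** (source_gamma / target_gamma)
[/code]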
  16. [quote author=jaybirch link=topic=548.msg3613#msg3613 date=1333990839]From my understanding, the P2 is, in essence, an array of the highest quality SD cards (4 cards, if I remember rightly).[/quote] In 10/2010 there was this test with a prototype of the AF100 feeding a P2 recorder via HD-SDI: http://vimeo.com/15765280 They recorded as AVC-Intra, which is 10-bit 4:2:2 @ 100 Mbps. Everybody raved: fantastic, awesome! Only later did they compare it with the AVCHD version from the SDHC card, 24 Mbps VBR, we know it. There was almost no difference (it seems the HD-SDI output simply wasn't 10-bit 4:2:2). Last year I visited a trade show, and there was a stand from Panasonic with the AF100. The two guys showed me the camera. One said: you can even record in 10-bit 4:2:2 on P2 cards! I don't mind being lied to, but only as long as I know better. If I had fallen for this one ... I would opt for the alternative.
  17. [quote author=weltenbummler link=topic=534.msg3607#msg3607 date=1333987309] Manual video mode - HBR - 24p Cinema - variable video mode. What's behind those?[/quote] The first one, when chosen, lets you go one step down in the menu bar (the film-camera icon without the "M") to "video". There you can choose between 1080i (don't), AVCHD 720p (take this) or Motion JPEG (no idea). You can further select everything else, whether you like P, A, S or M. Take this as an example, not as my recommendation: you might wish to have a fail-safe mode for dark, difficult situations. You want it to be 720p, because you want the higher framerate for whatever reason*. Then you might choose "P" (program, i.e. auto exposure). In dark places the shutter won't go much over 1/50. But the automatic doesn't know that the place is [i]supposed[/i] to be dark, that there are black shadows, as there are in most dark places. The ISO shouldn't rise too much. So to correct the automatic, you click on the wheel to highlight the "-0+" in the viewer (exposure shift) and set it to, say, minus one point five (stops). You choose the Nostalgic film mode, because in dark places you should [i]always[/i] use Nostalgic (dialed down as recommended in Andrew's book). You perform the WB shift (Andrew's book). You assign the Fn functions. You choose the tungsten symbol for WB. EDIT: *... for whatever reason ... If you feel you can't yet judge exposure, 720p with the higher framerates is indeed easier to use with automatic exposure, since the side effects of inappropriate shutter times don't look so weird. But as I said, it's only an example. And after you've done all this, you enter the menu, go down to the custom menu (icon: a "C" and a wrench), take the first entry "save custom settings" and choose "C3". The next time you are in the nightclub, you dial "C3" on the mode dial, you have 720p auto exposure (corrected down), Nostalgic and so forth, and you are ready to shoot. You can nevertheless change everything during the shoot. If you go back to manual exposure (the fastest way is with the Quick button), the exposure shift is deactivated, as is auto ISO. The film modes (and other things) can be assigned to the Fn buttons. If you don't have to intercut your footage with PAL camera material, I recommend you forget the HBR mode and use only 24p Cinema for 1080.
  18. Very cool. I'm looking forward to it. It doesn't sound too much like a fairytale any more that we might see affordable 4:2:2 10-bit. Surely 1080 50/60p. A GH2-and-a-half with P2 cards? I guess not (but who knows?). But this is very exciting. History repeats itself. Sony brought us HD as HDV. Panasonic (with their very successful DVX for SD in the background) announced the HVX200 with a better codec and so on. People waited (and waited and waited), and suddenly the Canon XH A1 appeared, a cheaper compromise. The HVX turned out to have its problems (pixel shift, a shoddy display, very expensive cards), as the AF100 does today (beaten in resolution not only by its Sony counterpart, the FS100, but by Panasonic's own ridiculous consumer toy, the GH2). This strife is only good for us.
  19. I honestly never understood the logic behind these shifts. So these are not answers, but more questions: > You transcoded to ProRes for editing. I understand the codec is not accepted by Log & Transfer? Is that the reason why you used Compressor for the transcoding? And then you transcoded again with another batch-transcoding application named 5D2rgb? > Since you edit in a ProRes sequence, why don't you export a master in ProRes as a "self-contained movie"? Because if this is a problem of a colorspace shift (the notorious QuickTime gamma shift only affects gamma, hence the name) and the colors look right in ProRes, this may be a matter of checking a box before you encode to mp4. My x264 plugin module has a lot of options. I export with MPEG Streamclip, because with QuickTime 7 Pro (residing in [i]Utilities[/i]; I saw from the screenshots that you don't use it as a player) there are considerably fewer options. > Doesn't the colorspace change to YUV again with mp4? Why haven't you experienced the same problems with the MkII?
  20. [quote author=Lenslover link=topic=545.msg3569#msg3569 date=1333919276] I wonder if you would mind elaborating on your comment.. "But there is a magic to full frame, as there was with the old Mark II. It is all in the rendering of the lenses". I find no-one being able to pin this down properly. Apart from the crop factor's effect on focal length, how is the same lens 'rendering' differently on a full-frame to give it 'magic' rather than if it is on a crop sensor body? Anyone's thoughts on this encouraged as I find discussions on this always tend to blur into vagaries. [/quote] Do you remember the Bloom Christmas shootout? The screenshots for the color comparison, where the unsharp candle highlights in the background were the biggest with the 5D? This is because if the sensor size doubles, the bokeh doubles (well, not exactly, but clearly). And what is more, the colors in the background are softer. Beautiful shots. The EOS stays ahead of all the DSLR cameras, at least for video. CBR seems to be a good idea for a master. It sometimes happens that the Vimeo or YouTube encoder produces ugly pixel clouds in your shadows in spite of your efforts. This can be prevented with a very fine "film grain" that you put on the difficult parts of your video (shadows, gradients, blurry parts; the worst are blurry, moving, dark shadows) before you upload. This is called [url=http://en.wikipedia.org/wiki/Dither][u]dithering[/u][/url].
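For anyone who wants to try the grain trick outside a grading app, a minimal sketch (Python with NumPy and OpenCV; the file name and grain amounts are only placeholders to illustrate the idea, not tested recommendations):
[code]
# Add fine monochrome grain, weighted towards the shadows, before encoding,
# so the web encoder doesn't collapse smooth dark gradients into blocks.
import numpy as np
import cv2

frame = cv2.imread("frame.png").astype(np.float32) / 255.0   # BGR, 0..1
luma = frame.mean(axis=2, keepdims=True)

# More grain where the image is dark, almost none in the highlights
strength = 0.01 + 0.03 * (1.0 - luma)
grain = np.random.normal(0.0, 1.0, luma.shape)               # monochrome grain
dithered = np.clip(frame + strength * grain, 0.0, 1.0)

cv2.imwrite("frame_dithered.png", (dithered * 255).astype(np.uint8))
[/code]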
  21. [quote author=Sara link=topic=301.msg1908#msg1908 date=1329817385] As to what we "could" (or want) expect from a GH3 * Sensor with greater dynamic range * Increased color gamut * 60p or 120p at 1080 resolution * Raw - like a CinemaDNG file output (dream) * Sensor with improved low light performance * Higher Res OLED viewfinder and or LCD screen * Clean HDMI out * 100% weatherproof design (dream) * Access to flat color profile picture style[/quote] You know that with these features it would beat every camera under 20 grand, if not 40 grand. Very improbable. The clean HDMI out we already have. Yes, it remains 8-bit 4:2:0, but it can render all the high-bitrate intra hacks obsolete, because it skips the compression stage. Instead of playing with new bitrates, the expert testers should fine-tune the colors. Magic Lantern edited the picture styles, and the same should happen with the film modes. Isn't there a way to get access to the curves and test them thoroughly? Let's stop barking up the wrong tree.
  22. [quote author=PAVP link=topic=541.msg3558#msg3558 date=1333862945]Now we can explore workarounds to some of the problems with the Look of the footage from the DSLR's.[/quote] If you took Shane Hurlbut's [i]Last Three Minutes[/i] (he was the DOP) and showed it in a cinema, nobody would think of video, although his camera was only capable of a resolution around 720p (the Canon 5D MkII). He didn't count on detail; he counted on the power of an enigmatic composition (EDIT: I meant emblematic. But I'm afraid my English is terrible today, I hope you *get the picture*). Or, as one of my favourite filmmakers put it: the film image is a recording of an image, or else it is only a recording. In the cinema where I work as a projectionist there was once a little festival with student films. One could deliver on DCP, BD or DVD. The most cinematic look came from a Sony EX-1 with a 35mm adapter. The latter provided the soft roll-off. Probably this cumbersome DoF machine cost resolution too (the resolution of the recording, not the resolution of the image), but it's hard to say, since it came on DVD and was projected onto a 50-foot-wide screen. One would think that this must have looked terrible. It didn't. The scaler (an interface that adapts different image sizes from external sources to the fixed 2k chip) did a good job with the interpolation, but had the DOP relied on detail, this wouldn't have helped. And he [i]had[/i] a lot of long shots (the set was a big restaurant). Videolook is produced by the wrong concept. There are films that want to overwhelm you with their aesthetic richness (very often precisely the films that lack the surplus of high dynamic range, more-than-sufficient colour depth and colour resolution). Even the technically perfect examples never become successful films (find an IMAX theatre to watch them). We should not try to follow this path. Our images should be recordings of images, not just recordings.
  23. [quote author=TimeZone link=topic=541.msg3540#msg3540 date=1333824997] I was kind of surprised by some of the comments by Shane on his blog that the c300 still looked like video and that he didn't really like it.(...)   Lower resolution is probably more flattering on actors though and the smoothing gives a more cinematic look perhaps.[/quote] [quote author=MattH link=topic=541.msg3542#msg3542 date=1333826673]I Think the difference is between resolution and sharpness.  Both cameras are capable of true 1080 resolution, but the c300 in this shoot is perhaps set with added sharpness on.  I think it is this added sharpness that creates the video look he was referring to[/quote] It may not be just a matter of taste: a beautiful image is never described as crisp and punchy, but it could be described as softer, richer and finer in definition. How can it be softer and better defined at the same time? Because in direct comparison to analogue film, or to digital cinema packages with their 12-bit JPEG 2000, the colours in our videos look like hand-coloured b&w. They are poorly defined and not rich in any sense. They have no depth, no vibrancy. They look like cheap video. You should test it. Actually, you don't need a direct comparison: just watch your video on a very big screen as a DCP with the standard widescreen presets. Of course, if you feel an image is simply too sharp, you can put a Tiffen LCF on the lens.
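To put rough numbers on "poorly defined", a back-of-the-envelope comparison (Python; it assumes the usual 8-bit 4:2:0 for our camera files and 12-bit 4:4:4 for a DCP, stated here as assumptions rather than a claim about any particular camera):
[code]
# Code values per channel and chroma samples per 1920x1080 frame:
# 8-bit 4:2:0 video vs a 12-bit 4:4:4 DCP.
bits_video, bits_dcp = 8, 12
print(2 ** bits_video, "vs", 2 ** bits_dcp, "code values per channel")   # 256 vs 4096

# 4:2:0 keeps one chroma pair per 2x2 block of pixels,
# a 4:4:4 DCP keeps full colour resolution for every pixel
w, h = 1920, 1080
print("chroma samples per frame:", (w // 2) * (h // 2), "vs", w * h)     # 518400 vs 2073600
[/code]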
  24. Very clear results. And what about the rolling shutter? As Shane Hurlbut says, rolling shutter becomes annoying if you don't know how to move your camera gently.