
HurtinMinorKey

Members
  • Posts

    798
  • Joined

  • Last visited

Reputation Activity

  1. Like
    HurtinMinorKey got a reaction from Marino215 in Beautiful 4K Blackmagic Production Camera footage from James Miller   
    I'd really like to see a comparison of the BMPC to the BMCC.
  2. Like
    HurtinMinorKey got a reaction from Ernesto Mantaras in Beautiful 4K Blackmagic Production Camera footage from James Miller   
    I was trying to keep them all to four letters, dammit! :D But I think BMPCC is probably better (and certainly used more), you win.
  3. Like
    HurtinMinorKey reacted to FilmMan in Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4   
    Here's some more readings.  Cheers.
    http://www.dvxuser.com/V6/showthread.php?300214-Canon-1DC-crop-modes-surprise!
  4. Like
  5. Like
    HurtinMinorKey reacted to austinmcconnell in Beautiful 4K Blackmagic Production Camera footage from James Miller   
    I'd really like us to settle on an acronym for the thing. It's the Blackmagic Production Camera. BMPC. All this nonsense about BMCPC, BM4K, BM 4K, BMPCC4K. Blah.  :lol:
  6. Like
    HurtinMinorKey reacted to thedest in Beautiful 4K Blackmagic Production Camera footage from James Miller   
    Let's talk about art.
     
    The example posted by Miller is too soft. Making the entire video soft is not beautiful, it's a flaw. What's the point of using expensive lenses to deliver a soft video? You can soften skin texture, but making the entire scene soft is wrong. The video will look like a low-resolution, out-of-focus video. With high-end codecs, like raw, you can make the skin look soft and sharpen the eyes, for example. Eyes need to look sharp; otherwise the person will look lifeless and the video will look like a distant dream. He is also throwing away good dynamic range. He is blowing out the face of the main subject. She is like a pin-up girl, but he graded her like a zombie shot on a low-dynamic-range camera. Her skin tone is green. It is one thing to add a graded color to the scene to create a mood; it is another to add a color cast that makes the skin tone look horrible.
     

     
    I said it once and I will say it again: applying preset looks like Mojo, FilmConvert, LUTs, etc. is not the way to do it. Grading is REALLY easy. You don't need those tricks.
     
    And here is my simple grading for commercial use.
     
    What did I do?
     
    - Made her skin look soft
    - Gave her a pin-up skin tone
    - Increased the sharpness and contrast of her eyes, to make them stand out
    - Changed her eye color, because I like blue
    - Added a lot of saturation in her lips, to create a sexy look
    - Increased the sharpness and the saturation of the yellow, to make her hair stand out
    - Added some split toning (blue and purple) while maintaining her skin tone
     
    Now I can see myself winning the contract for the commercial, while James' grading would only be usable for an underground alien movie.
     

     
     
     
    COMPARE THE 3 VERSIONS:
  7. Like
    HurtinMinorKey reacted to thedest in Beautiful 4K Blackmagic Production Camera footage from James Miller   
    I understand you, Andrew, and I know your style.
     
    As I said, if I were grading it for commercial purposes I would do it differently. I would add some split toning, I would fix her skin, maybe enhance the color/clarity of her eyes. But not when testing a camera. As I said, I can see James' grading being used on a vodka commercial; mine wouldn't be accepted. But I can also see my correction being used on a documentary, while James' grading wouldn't.
     
    The problem with artistic grading is that taste is very subjective. And when we try to make too much art when reviewing a camera, we can make people think that the camera has some kind of problem. It's not hard to find people who hate Blackmagic cameras because they think all Blackmagic videos look surreal, have bad colors, no contrast and color casts, and look like old damaged movies. And that's because that kind of "art" won't impress a lot of people.
     
    There are lots of people who like the impact of the "you are there" look. My girlfriend hates it when I take pictures of her and don't hide her pimples. I hate that "Sweet 16 Photo Book" soft glowing look. I like to see details, and I'm not alone in this world.
     
    When reviewing a camera, it's always nice to show the full potential of the camera in recreating a natural scene.
     
    I'm not against art, though. I can appreciate graded videos sometimes, but not every time.
  8. Like
    HurtinMinorKey reacted to Michael Ma in Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4   
    I think the free version of Cineform Studio (now called GoPro Studio 2.0) can do this. In step 1, convert your file to CFHD at the same resolution; it is inflated to a 10-bit file. In step 2, select the file. In step 3, export with custom settings: choose the MOV container, scale down to 1080p, and pick Film Scan 2, the highest quality setting, which preserves the detail of 4:4:4 footage.
     
    Update: Just tried this with my Galaxy Note 3 which shoots 4K 4:2:0 8-bit at 50Mbps AVC.  Looks a million times better than native 1080p.  I also tried dropping the same 4K file into After Effects CC working in 16bpc.  Created a sequence and exported to DNx 350X or 440X 4:4:4 10-bit...Looks different than Cineform (CFHD) workflow above.  Not sure which is better to be honest.
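    The 8-bit-to-10-bit part of the claim comes down to simple arithmetic: summing a 2x2 block of 8-bit samples produces a value with a 10-bit range. A minimal sketch in Python with NumPy, using toy random data rather than real camera output:

    ```python
    import numpy as np

    # Toy 8-bit 4K luma plane (real footage would come from the camera file).
    rng = np.random.default_rng(0)
    plane_4k = rng.integers(0, 256, size=(2160, 3840), dtype=np.uint16)

    # 2x2 block sum: four 8-bit samples (0-255 each) give a 0-1020 result,
    # i.e. a 10-bit range at 1080p.
    blocks = plane_4k.reshape(1080, 2, 1920, 2)
    plane_1080 = blocks.sum(axis=(1, 3))
    ```

    Note this only helps detail that was sampled at full resolution; the 4:2:0 chroma planes of a 4K file are already at 1920x1080, so at 1080p they simply line up 1:1, which is where the 4:4:4 part of the thread title comes from.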
  9. Like
    HurtinMinorKey got a reaction from Peter Rzazewski in Jittery footage from 5D Mark II   
    It's a combination of an unsmooth pan and judder from panning too fast for the distance to your subject.
     
    Explanation here:
     
    http://kb2.adobe.com/community/publishing/908/cpsid_90843.html
     
    See the "seven second rule"
  10. Like
    HurtinMinorKey got a reaction from maxotics in Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4   
    Let's make this even simpler and use a dynamic-range example to show why you can't always resurrect higher bit depth (even with error diffusion):
     
    Assume there are two types of A/D conversion:
     
    1-bit (taking on the values 0 or 1): 2 levels
    2-bit (taking on the values 0, 1, 2, 3): 4 levels
     
    Let's assume that analog values are:
     
    (0.1) (1.2) (2.0) (2.1) (3.0) (4.1)
     
    and that A/D conversion assigns the closest possible digital value.
     
    1 bit A/D conversion becomes:
     
    (0) (1) (1) (1) (1) (1)
     
    2 bit A/D conversion becomes
     
    (0) (1) (2) (2) (3) (3)
     
    At half resolution (averaging adjacent pairs) you get either:
     
    (0) (2) (3) or (1) (2) (3)
     
    Either one represents 3 levels of light, which you cannot represent in just 1 bit.
     
    Is this a contrived example? Yes. But the point is to show that they are not mathematically equivalent.
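    The worked example above, run as code (NumPy, using the same six analog values from the post):

    ```python
    import numpy as np

    analog = np.array([0.1, 1.2, 2.0, 2.1, 3.0, 4.1])

    def quantize(x, levels):
        """A/D conversion: snap each sample to the closest level in 0..levels-1."""
        return np.clip(np.rint(x), 0, levels - 1).astype(int)

    one_bit = quantize(analog, 2)  # [0 1 1 1 1 1]
    two_bit = quantize(analog, 4)  # [0 1 2 2 3 3]

    # Half resolution: average adjacent pairs of the 2-bit signal.
    half = two_bit.reshape(-1, 2).mean(axis=1)  # [0.5 2. 3.] -- three levels
    ```

    The downsampled 2-bit signal still carries three distinct levels, which is more than a 1-bit signal could ever represent, matching the argument in the post.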
  11. Like
    HurtinMinorKey reacted to pablogrollan in Rumoured Canon 7D Mark II specs   
    Could it be that the new 7D becomes the first DSLR to implement the new h.265 codec in video mode?
     
    That way it would not be 422 nor super-high-bitrate and certainly would be aimed at a different crowd than the Cinema EOS line... Same video file size but double the quality, slightly improved dynamic range and low light performance and maybe a headphone jack.
     
    Sure they could add peaking and zebras (costs nothing really) + some other firmware related features, but delivering the same concept of "non-gradable" video with the much improved quality of h.265 would make it competitive in certain segments.
  12. Like
    HurtinMinorKey got a reaction from Sean Cunningham in Why Do Some Cameras Create More of a Film Look?   
    I still consider making the best use of natural or available light under the realm of "lighting". So we are not in disagreement.
  13. Like
    HurtinMinorKey reacted to maxotics in Hands-on preview of the powerful 4K shooting Panasonic GH4!   
    I too was confused before working with the actual bits of RAW video data. When the sensor is exposed to light, a chip reads the values from each sensel (which is eventually combined into a pixel value). The sensels basically just hold voltages. The camera can read the values to as much precision as the electronics allow. You can liken this to measuring the voltage of your household electrical supply: a simple meter would show 220 volts in the U.K., but a fancy one might show 219.556345 volts.
     
    The very first sensel, top left, is usually a red-filtered sensel (keep in mind the sensor is, at heart, monochromatic). A Canon camera, for example, reads that value and stores it as a 14-bit number, 0 to 16,383. That means it records 16,384 shades of red. It does the same for the next pixel, a green one. Below that green one, on the second row, will be a blue sensel; again the same for that. Both Magic Lantern RAW and Blackmagic save these values (Blackmagic with some lossless data compression, not to be confused with visual compression).
     
    Each of these values is a number between 0 and 16,383. If you construct a full-color pixel from the three, you'd end up with around 4 trillion possible values; say white sits at the top and black at 0. Unfortunately, our eyes can only distinguish about 16 million variations in color, so the 4-trillion-value white would actually only look like a 16-million-value white. For the most part we can only see, and our displays and printers can only produce, color values within about 16 million shades. So why do we want 4 trillion colors when we can only see 16 million?
     
    EXPOSURE!
     
    First, let's turn to the GH4 (or any H.264 tech). In those cameras, each sensel's value is saved as one of 256 (8-bit) values. Those 3 values are then combined to create a 16-million-value (24-bit) full-color pixel. Assuming you've exposed the scene perfectly, and the colors are what you want, you will be as happy as Andrew with a gold-plated G6 :) Also, 4:2:2 and all similar nomenclature are about color compression for motion; they degrade each still image.
     
    But what if you didn't expose correctly?  What if you made a mistake and exposed +2.  Now your scene is washed out.  When you try to get lighter values at the bottom you can't get them.  The detail (shading) just isn't there.
     
    But much of it WAS there in the RAW data from the sensor!  Let's say with the GH4 you exposed perfectly, and it happened to be the 16 million color values between 2 trillion and 2.16 trillion.  Then, you expose +2 and it saves the values from 2.74 to 2.9 trillion.  With the RAW data you can pick up those values and CONSTRUCT your 16 million color H.264 video data.  Naturally, the sensor works better when you expose in the middle of its range, so you can't fix the exposure perfectly in real life.
     
    The only benefit of I-frame-only recording is that it doesn't try to compress the 24-bit values of a pixel from one frame to the next, which can create artifacts. It does not, in any way, solve the problem of available color depth if you're comparing 8-bit to 14-bit.
     
    It sounds like the GH4 will provide fantastic resolution in low-dynamic range shooting conditions.  It will not do what the Blackmagic cameras do in high-dynamic range shooting conditions (save enough data for you to fix exposures in post, or recover details from shadows, etc)
     
    Filmmakers who don't understand the difference between the cameras may end up with the wrong camera, and that would be a shame. They are both fantastic technologies. You can no more have both than you can have a Range Rover and a Jaguar in the same car ;)
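    The exposure-recovery argument above can be sketched numerically. This is a toy model, not any camera's actual pipeline: the scene values, the x4 factor for +2 stops, and the linear quantization are all illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    scene = rng.uniform(0.0, 0.25, size=100_000)  # "true" linear scene values
    captured = scene * 4.0                        # overexposed +2 stops (x4 linear)

    raw14 = np.round(captured * 16383)            # 14-bit sensor quantization
    vid8 = np.round(captured * 255)               # in-camera 8-bit quantization

    # Pull exposure back down 2 stops in post:
    fixed_from_raw = np.round(raw14 / 4)          # thousands of levels survive
    fixed_from_8bit = np.round(vid8 / 4)          # only ~64 levels survive
    ```

    After the correction, the raw path still carries on the order of 4,000 distinct levels, while the 8-bit path keeps about 64, which is the banding people see when pushing compressed footage in post.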
  14. Like
    HurtinMinorKey reacted to Sean Cunningham in Vimeo Scaling on IE 11   
    VIMEO playback is very different from browser to browser because on a given OS they don't all use the same libraries to decode the stream.  I see a marked difference in playback quality on my Mac in Safari versus Chrome, for instance.  Safari looks bad while the same stream on the same computer looks as I expect with Chrome.  It's the same phenomenon with players.  You will often see a different look playing back the same file in QTPro, VLC, MPlayer, etc., etc.
     
    I haven't used IE since the Win98 days so what happens in IE stays in IE as far as I'm concerned ;)
  15. Like
    HurtinMinorKey reacted to Hans Punk in guess camera???   
    RED Epic Dragon Sensor Shot at 6K.
     
    Credits:
    Production Co: Brunswick Studios
    Creative Agency: The One Off
    Director: Dan Smith
    Producer: Rohan Tully
    DOP: Steve Marshall
    Steadicam: James Davis
    Grip: Carl Dunn
    DIT: Valentine Rocha
    Post: MyTherapy
  16. Like
    HurtinMinorKey reacted to soundguy in RED cameras absent from all Oscar cinematography and best picture nominees   
    Funny that most of you here in the forum explain the absence of RED from the Oscar nominees with technical picture details (pros and cons).
     
    May I give a hint at a possible reason not yet mentioned here:
     
    The RED Epic is a quite noisy camera (sound-wise) and degrades the quality of the actors' dialogue to a huge degree. If I were a producer paying millions of dollars to get the best actors in the world to act in my movie, I would not work with a camera which makes half of their work (the dialogue) go directly into the trash bin!
     
    Even ten years after their first ONE camera, RED hasn't managed to produce silent cameras, a big obstacle for scenic movies (if that's the right expression?). The Epic fans only go down to 30% speed while recording - not 0% - a mess!
     
    Yes, there have been movies shot with RED. But either the directors/producers didn't mind ADR for the actors, or it was such an SFX movie that the dialogue couldn't be captured at all, as most of the other SFX equipment is even noisier than the camera.
     
    But if you want to shoot an indoor scene with a big portion of dialogue and actor presence, then RED is not the choice for a good movie. Maybe that's a reason.
     
    Best regards,
    Andreas
     
     
  17. Like
    HurtinMinorKey reacted to Sean Cunningham in RED cameras absent from all Oscar cinematography and best picture nominees   
    Captain Phillips seemed to have the most disparate use of cameras.  Quickly skimming the Nov'13 ACM it appears they used Super-16mm (Aaton) shooting on the water, especially for the Somali only parts of the film, in the skiffs.  Being so remote without support they didn't trust digital for this kind of shooting.
     
    As soon as the Somalis step onto the boat and Tom Hanks' portion of the story starts, it switches to 3-perf 35mm. For the aerial stuff showing the extremities of scale they shot Alexa. GoPros were used to capture the SEAL parachute drop. VistaVision was used for VFX plates. They don't mention the C300 at all in the article, nor do they list it at the end, so I'm betting its inclusion above is mostly about marketing.
     
    Wolf of Wall Street was split between 4-perf 35mm and Alexa to ArriRaw. Here again the filmmakers went with different shooting styles for different stages of the narrative, mainly with different optics, lighting and color. Depending on the character's state of mind they shot either spherical Arri Master Primes or the Hawk anamorphics, heightening DiCaprio's mania by shooting a lot with the 35mm and 28mm anamorphics, switching to spherical when his state of mind is more clear and precise.
     
    For DiCaprio's "quaalude look" they shot a 20mm on an Alexa at 12fps with a 360-degree shutter, then step-printed to resample back to proper time. And a prototype C500 was used by the second unit/VFX to shoot aerial photography. It was small enough that they could rig it to the nose of an RC octocopter. The RC copter was necessary because the location on Long Island didn't allow full-size choppers, and it also let them get shots that would have been impossible with a full-size chopper.
  18. Like
    HurtinMinorKey reacted to Sean Cunningham in 4k frenzy and BMPCC   
    Especially since he doesn't light, generally speaking.  Being able to see where he can use available light to artistic effect is some kung fu he's really strong in.
  19. Like
    HurtinMinorKey got a reaction from maxotics in Why I am going with 4K and why you should too   
    Things like bit depth really matter for post. I can approximate higher bit depth by downsampling from higher resolution, but the individual precision of each pixel remains the same. So in certain cases, you won't pick up subtle changes in tone.
     
    And all else being equal, down-sampling does not afford you better dynamic range. And 4K raw is still way too much data, so you will lose something by going to 4K.
     
    As for content delivery, almost everyone is still stuck below 1080p (besides Blu-ray). iTunes, Netflix, cable: almost all of their HD looks like heavily compressed 720p or 1080i.
     
    The Canon C300 is a special case, because it uses its 4K sensor not just to down-sample intelligently, but to minimize motion artifacts. Most of the 4K cameras that are going to come out won't use the same process, and therefore won't get the same benefit from a 4K sensor.
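    The point about per-pixel precision limiting what downsampling can recover can be illustrated with a toy case: a gradient so gentle that every pixel quantizes to the same 8-bit code value (assuming no noise or dithering, which is why the loss only bites "in certain cases"):

    ```python
    import numpy as np

    # A gradient spanning less than half an 8-bit code value: every pixel
    # quantizes to the same number, so no amount of downsampling gets it back.
    gradient = np.linspace(100.1, 100.4, 3840)       # subtle analog tonal ramp
    quantized = np.round(gradient).astype(np.uint8)  # every sample becomes 100

    halved = quantized.reshape(-1, 2).mean(axis=1)   # 2:1 downsample by averaging
    ```

    With real sensor noise acting as a natural dither, averaging can recover some sub-step tone, which is why clean gradients such as skies are where the banding tends to show.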
  20. Like
    HurtinMinorKey got a reaction from Aussie Ash in Why I am going with 4K and why you should too   
    Things like bit depth really matter for post. I can approximate higher bit depth by downsampling from higher resolution, but the individual precision of each pixel remains the same. So in certain cases, you won't pick up subtle changes in tone.
     
    And all else being equal, down-sampling does not afford you better dynamic range. And 4K raw is still way too much data, so you will lose something by going to 4K.
     
    As for content delivery, almost everyone is still stuck below 1080p (besides Blu-ray). iTunes, Netflix, cable: almost all of their HD looks like heavily compressed 720p or 1080i.
     
    The Canon C300 is a special case, because it uses its 4K sensor not just to down-sample intelligently, but to minimize motion artifacts. Most of the 4K cameras that are going to come out won't use the same process, and therefore won't get the same benefit from a 4K sensor.
  21. Like
    HurtinMinorKey got a reaction from Harrison Geraghty in Would this iMac be capable of editing raw 2.5k footage off the BMCC and using Davinci Resolve?   
    Yes, easily. But your editing experience might be a bit choppy if you try to edit directly off your internal HD. You should consider faster external storage over Thunderbolt.
  22. Like
    HurtinMinorKey reacted to Axel in BMPCC Lens choice for Videoclips and Short Films starting career.   
    With the Sigma 18-35 and a Speed Booster, you cover all FOVs equivalent to 30-60 mm full frame (I know this is an awkward way of translating it, but people are not used to thinking in FOV angles yet; perhaps in a few years). This will be sufficient for 80% of situations. What is missing, for about 15%, is a moderate tele. With a Novoflex Nikon-to-MFT adapter (~150 bucks), you'd add 52-100 mm with the same lens. For the rare occasions when you plan steadicam-like shots or a landscape, you'd need a very wide lens.
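    The 30-60 mm and 52-100 mm figures are consistent with the BMPCC's roughly 2.88x crop factor and the 0.58x BMPCC Speed Booster; both constants are my assumptions, not stated in the post. A quick sanity check:

    ```python
    # Full-frame-equivalent focal length on the BMPCC (assumed constants).
    CROP = 2.88      # assumed BMPCC crop factor vs full frame
    BOOSTER = 0.58   # assumed BMPCC Speed Booster magnification

    def ff_equiv(focal_mm, booster=BOOSTER, crop=CROP):
        """Full-frame equivalent focal length for a given native focal length."""
        return focal_mm * booster * crop

    print(ff_equiv(18), ff_equiv(35))                            # ~30 mm to ~58 mm
    print(ff_equiv(18, booster=1.0), ff_equiv(35, booster=1.0))  # ~52 to ~101 mm
    ```

    The second line, with no booster (a plain adapter like the Novoflex), reproduces the 52-100 mm range mentioned for the same lens.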
  23. Like
    HurtinMinorKey reacted to Michiel78 in Fingerprint on sensor - what now?   
    Personally, Jurgen, I wouldn't risk further harming your sensor; go to the store where you bought the camera for advice. Many specialized camera stores are experienced in cleaning cams/sensors and will do it for a small fee or sometimes free of charge. Otherwise they will ship it to an official repair center.
     
    I don't know, but if you're using a small microfiber cloth (for cleaning lenses or glasses and such), I think it has to be absolutely clean, because any impurities on it could scratch your sensor...
     
    But... found some useful links: http://www.mu-43.com/showthread.php?t=21725 
    and http://photosol.com/product/sensor-swab-plus-4-pack-type-1/
     
    It seems the SensorSwab is pre-moistened with cleaning solution, so I assume it would remove fingerprints as well, and it's packed sterile, so there are no impurities that could damage your sensor.
     
    I don't have any experience with SensorSwabs myself so use it at your own risk :)
  24. Like
    HurtinMinorKey got a reaction from Jeroen de Cloe in Export incoming footage from Resolve, which Codec?   
    You'll still be able to make adjustments to white balance, but you'll lose a bit of freedom. I suggest doing a rough color balance and exposure adjustment in Resolve (applying it to all similar clips using the appropriate button in the raw tab), and then exporting to ProRes. This way you get the best of both worlds.
  25. Like
    HurtinMinorKey reacted to maxotics in Camera Bit Depth   
    Not sure about your question, but in general bit depth is computer (bits) + photo (depth) lingo for how wide a range of shading of any given color/brightness is sampled by the electronic equipment.
     
    You can store 256 shades (or depth values) in an 8-bit memory slot (or byte). 
     
    When working with Canon RAW, the camera saves each sensel/pixel as a 14-bit value. That gives you a range from 0 to 16,383.
     
    Once you save those values in smaller bit values you have to reduce the precision.
     
    I tried to explain the "precision" problem on the ML forum like so:
     
    One can think of it this way. You have a palette of 14 scales of gray. You need to convert them into 8 scales for something else. So 1-2 becomes 1, 3-4 becomes 2, 5-6 becomes 3, 7-8 becomes 4, 9-10 becomes 5, 11-12 becomes 6, 13-14 becomes 7 (and we throw out the 8, for example).

    Let's say you have two gray colors in what you shot, and they are 2 and 6.  You want to reduce the exposure by 1 (increase contrast).   

    They were converted to 1 and 3, so now they become 0 and 2. You went from a 300% difference (in 14-bit) to 200% (in 8-bit). More importantly, you went from some gray (1, in 14-bit) to no gray (0, in 8-bit).
     
    What people don't understand, especially about H.264, is that you can't just take 8-bit values, put them in a 14-bit space, and get that 14-bit precision back.
     
    Does this make sense?  You seem to know your stuff, so maybe you're asking something else?
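    The palette example above can be written out as a tiny sketch (the helper name to_small_palette is mine):

    ```python
    # Toy version of the 14-gray -> 7-gray palette reduction described above.
    def to_small_palette(g14):
        """Map grays 1..14 onto 1..7 by pairing: 1-2 -> 1, 3-4 -> 2, ..., 13-14 -> 7."""
        return (g14 + 1) // 2

    a14, b14 = 2, 6                                        # two captured grays
    a8, b8 = to_small_palette(a14), to_small_palette(b14)  # -> 1 and 3

    # Reduce exposure by 1 step in each palette:
    print(a14 - 1, b14 - 1)   # 1 5 -> both remain distinct, non-black grays
    print(a8 - 1, b8 - 1)     # 0 2 -> the darker gray has clipped to black
    ```

    The same grade that leaves shadow detail intact in the wide palette pushes the reduced-precision version straight to black, which is the banding/clipping risk of grading 8-bit material.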