
Eric Calabros


Posts posted by Eric Calabros

  1. EV = stop. Same thing. EV 2 is one stop away from EV 1. Etc.

    Straight from DXOMark: "Maximum dynamic range is the greatest possible amplitude between light and dark details a given sensor can record, and is expressed in EVs (exposure values) or f-stops, with each increase of 1 EV (or one stop) corresponding to twice the amount of light."

    I've shot with the D800 and it definitely has more than 12 stops of DR just by eyeballing it. It has DR to spare (at least at ISO 100). It makes no sense to just "ignore" measurements.


    Check this chart from Imaging Resource:

    Highest detected range: 13.3 f-stop
    Where is that 14.4 number?

    And this one is even more realistic (it's also closer to my guess):

    http://home.comcast.net/~NikonD70/Charts/PDR.htm#D800E
  2. You confused EV with stop. 14.4 EV on the D800 is equal to 13.5 stops, so 12.8 on the GH4 is much lower. However, DxO measures DR down to a 1:1 signal-to-noise ratio, which is not practical (an image with as much noise as it has signal is useless to you, no matter how much DR is there). I usually ignore 3 stops of the score, so 10.5 stops for a D800 (still) image could be a safe bet.
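Taking DXOMark's definition quoted above (1 EV = 1 stop = a doubling of light), the contrast ratios involved can be sketched quickly. This is my own arithmetic, not a figure from the thread:

```python
# Each EV (stop) of dynamic range doubles the light ratio between
# the brightest and darkest recordable detail.
def dr_contrast_ratio(ev: float) -> float:
    """Contrast ratio corresponding to a dynamic range given in EV/stops."""
    return 2.0 ** ev

# DxOMark's D800 score: 14.4 EV
print(round(dr_contrast_ratio(14.4)))   # ~21619:1

# Discarding the ~3 noisiest stops (where SNR approaches 1:1)
# leaves a "usable" range of about 11.4 EV:
usable_ev = 14.4 - 3
print(round(dr_contrast_ratio(usable_ev)))   # ~2702:1
```

Whether those bottom stops count as "real" dynamic range is exactly what the two posts above disagree about; the doubling itself is not in dispute.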

  3. I remember arguing with you, Andrew, about something on your site a couple of years ago (with another account, under another name I think). I don't remember whether it was about RED or something else, but I was literally impolite in our email conversation. I had a bizarre feeling about it, because I'm not an impolite person, so why did I say those unreasonable, unacceptable words? In cyberspace, we forget that we are dealing with humans. Humans like ourselves.

    This post brought that memory back to mind, which was quite necessary.

    I sincerely apologize.

  4. Did you read my post?

    My whole post was about the difference of "being able to process" and "be able to process in proper high quality".
    Here, take a look at the aliasing and crap you'll find in an iPhone 5S video: vimeo.com/77162053 (removed the http 'cause I didn't see it necessary for this thread to have the embedded video here)

    Why is there aliasing? Because the iPhone isn't up to doing the processing in high quality, so it has to take shortcuts. The video quality of the iPhone 5s is great for a phone, but my post was about processing it properly for far better footage (as in a possible upcoming GH4): processing it properly to get rid of aliasing and moire and to get the most possible resolution out of the footage while keeping the color information, all whilst outputting it to a suitable compressed format tailored for high-quality output.

    If you do it with proper downsampling algorithms optimized for quality, you'll need beefy hardware if you want it done in realtime, especially when we're talking more than 24 fps, 10-bit.

    Also, I'm perfectly well aware of Qualcomm chips. Our company uses those for hardware-encoding video on smartphones.


    Aliasing and moire in iPhone video are caused by the sensor itself and the method of readout. If you line-skip, even an Intel Xeon CPU can't clean up the mess. There is plenty of powerful hardware out there that can handle this so-called proper processing; even a hacked 5D Mark III can do it: read a large amount of data from the sensor, properly process it, and output it as DNG. And if its raw frames are not actually full frame, that's because of the damn buffer, not the CPU. The Nikon V1 can output 60 x 10 MP NEF raw files per second, and I've never heard any pro say V1 raw files are not properly processed!
  5.  

    The power & processing issues can be solved of course. But it will require more processing, more cooling, more power = bigger, more expensive camera. Want it done on a budget - get a decent 4k 10-bit and do the processing on a computer.

     

    The iPhone 5s is already doing 120 720p frames per second. That's actually 110 megapixels/sec of processing power, with a chip not really optimized for a specific task like downscaling. 30 x 2160p = 248 MP/s. I don't know why a $2000 camera shouldn't handle 2.25x the job a phone is flawlessly doing.

     

    From the Qualcomm website:

     

    "Snapdragon 805 processors also enable users to take, edit and share higher quality photos in low light conditions. The world’s first commercial mobile 1GPixel/s (Giga-pixel per second) ISP (image signal processor) packs a large increase in ISP and CPP (camera postprocessor) speed and throughput, empowering users to take sharper, higher resolution photos with advanced post-processing features for low light conditions."
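The 110 MP/s vs. 248 MP/s figures earlier in this post come from simple pixel-throughput arithmetic, sketched here (frame sizes are the usual 720p and UHD dimensions, my assumption):

```python
# Pixel throughput: pixels read and processed per second, in megapixels.
def throughput_mp_per_s(width: int, height: int, fps: int) -> float:
    return width * height * fps / 1e6

iphone_720p120 = throughput_mp_per_s(1280, 720, 120)   # ~110.6 MP/s
uhd_30p        = throughput_mp_per_s(3840, 2160, 30)   # ~248.8 MP/s

print(f"{iphone_720p120:.1f} MP/s vs {uhd_30p:.1f} MP/s "
      f"(ratio {uhd_30p / iphone_720p120:.2f}x)")
```

The ratio works out to exactly 2.25x, matching the figure in the post; note this counts raw pixels only, not the per-pixel cost of a particular downscaling or encoding algorithm.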

  6. I understand that 4k gives better resolution, dynamic range and colour than 1080 - and that these benefits are transferred when 4k is downscaled to 1080 in post. But I don't understand why this is an argument for 4K for people who don't need 4K output. Surely those people would benefit much more if the greater processing power and bitrate was put into a better 1080p codec (e.g. like the Pocket's prores). They would then get the benefit of the larger files in the form of grading latitude, rather than just chucking away information from their very full cards as soon as they got home. Wouldn't they? Personally I'd rather chuck that information away after I've done something useful with it, like a bit of colour correction.

     

    I'm not hearing a lot of people complaining that the Pocket isn't 4K. I am however hearing a lot of people complaining that the Pocket is a pain in the a**e to use. Imagine if Panasonic put BMPCC-like tech inside a GH3. We'd all go completely wild. Why isn't that the immediate future? With 4K it just seems to me like we'll be starting the whole H264 journey again, just at a higher level. Why not make HD the best it can be before moving on to 4K? The whole thing smells a lot like the megapixel race to me and, to be honest, the ugly side of capitalism.

    Anyway, this is my question: Leaving aside reframing options, why is compressed 4K better than high bitrate 1080 for those who don't need 4K output?

     

    Just to be clear, this is a genuine question. I am genuinely hoping to learn something. I am not being pointlessly antagonistic in the hope of rubbing someone up the wrong way. That's just the card I was dealt at birth - to forever write forum posts that elicit the wrath of Hades.

     

    For fully resolving a pair of lines (one black, one white), two rows of pixels should be enough, right? Nope: four rows are needed because of the Bayer pattern (blue, green, blue, green). So in terms of resolution, 4K is not actually 4K, let alone 1080p. A 2-megapixel Sigma Foveon-like sensor (three stacked color layers) has the potential to equal Bayer 4K. BUT it would be insane to make a 2-megapixel full-frame sensor, because every single pixel would have a massive area, almost 17um x 17um; it would heavily suffer from electron overflow, and you'd need a thick, dark ND filter for every outdoor shot.

    What we lack right now is not resolution. We lack acuity. Colors are not correct (much of that is mathematical guesswork, thanks to demosaicing), edges are soft for exactly the same reason, and there's lots of moire and false detail. 4K isn't going to solve all of this, but downscaling, a decent downscaling, can give us some of the acuity we are lacking right now. The problem is that no camera is equipped with a "built-in, best-in-class, hardware-accelerated downscaler"; all the processing is on your own PC's shoulders. Otherwise, why should I care that a gazillion pixels have been read out to give me my sweet 1080p?
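The "massive pixel" claim above is easy to sanity-check. Assuming a hypothetical 2 MP full-frame sensor laid out as a 1920 x 1080 grid on a 36 x 24 mm die (my assumption; the post's "almost 17um" is the same ballpark):

```python
# Back-of-envelope pixel pitch for a hypothetical 2 MP full-frame sensor.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

def pixel_pitch_um(cols: int, rows: int) -> tuple[float, float]:
    """Horizontal and vertical photosite pitch in micrometres."""
    return SENSOR_W_MM / cols * 1000, SENSOR_H_MM / rows * 1000

w_um, h_um = pixel_pitch_um(1920, 1080)
print(f"{w_um:.1f} x {h_um:.1f} um")   # 18.8 x 22.2 um
```

For comparison, the D800's pitch is around 4.9 um, so each of these hypothetical photosites would collect well over an order of magnitude more light, which is why the post worries about well overflow and ND filters.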

  7. [quote name='EOSHD' timestamp='1354039266' post='22468']
    Again if we take the view that it is JUST content that is important, there's no motivation for filmmakers to put any artistry into their camera work and cinematography. A disaster.

    If we take the view that it is JUST image quality that matters and that the camera is the most important thing, you lose the motivation to work on the content and just churn out pretty timelapses.

    Is this balance so difficult for people to grasp?

    Why is every argument in 2012 polarised, be it about cameras, politics, music, anything...
    [/quote]

    Why do you limit "artistry" to resolution? And why not strike a balance in resolution too? I think a 48 fps real 2K image is more balanced than a 24 fps (maybe fake) 4K one. Why not interpret "more is better" as bigger pixel area, which leads to more dynamic range and a better S/N ratio?
    I don't have any problem with 4K or 6K or 8K. Good for Hollywood; they have no issue buying hundreds of terabytes of storage for their raw multimillion-dollar projects. But for me, if 99% of my content is going to be viewed on phablets and tablets, 1080p is enough. I prefer whining about deficiencies like DR and color noise ;)
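The 48 fps 2K vs. 24 fps 4K trade-off above can be sanity-checked with pixel-throughput arithmetic (frame sizes are my assumptions: DCI 2K = 2048x1080, DCI 4K = 4096x2160):

```python
# Compare total pixels per second for the two modes discussed above.
def mp_per_second(width: int, height: int, fps: int) -> float:
    return width * height * fps / 1e6

real_2k_48p = mp_per_second(2048, 1080, 48)   # ~106.2 MP/s
dci_4k_24p  = mp_per_second(4096, 2160, 24)   # ~212.3 MP/s

# 48p 2K reads exactly half the pixels per second of 24p 4K,
# while doubling the temporal resolution.
print(real_2k_48p, dci_4k_24p)
```

So on pure readout load, the 48p 2K mode is actually the cheaper of the two, which supports the "more balanced" framing in the post.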