
Posts posted by HurtinMinorKey

  1. It's nonsense and you only need to open your eyes to see that. The GH3 sensor was labeled "Sony", with no crippling there; it was better than Sony's own interchangeable-lens video camera (the NEX-VG series), for heaven's sake!

     

    The GH3 is M43, a different market, so there's less direct competition with Sony's prime-time lineup.

     

    And I'm not saying Sony is preventing their licensees from having great video features; I'm just saying they'll make them pay for them. In other words, Nikon can probably get Sony sensors cheaper if they agree to strings attached.

  2. I don't think it works like this in business.

     

    Take Samsung: they have been making many parts for Apple's iPhones, yet they don't dictate anything to Apple. Samsung doesn't cripple the iPhone. I think it's nonsense that Sony would dictate to Nikon what they can do with their sensors. Nikon is just not interested in video.

     

    You can't generalize; it depends on the bargaining power. In this case Nikon doesn't really have a video market to protect, and Sony does.

  3. It's a f**king joke: a few lines of code and a small heatsink would have made this a professional cinema camera (image-quality-wise), certainly the best for the buck.

     

     

    If they don't come up with a D4c it just doesn't make any sense to me.

     

    A D4 with 4K ProRes and compressed raw, plus a D400 with the same specs but in APS-C, would have made a great sports, wildlife and cinema lineup for years to come, only surpassed by double-exposure sensors.

     

    I mean, everybody has Nikkor glass, and only Nikkor glass fits on them. They could build a nice market to make up for the losses.

     

     

    I've said it once, and I'm sure I'll say it again: licensing agreements. Nikon uses Sony sensors, which means Sony gets to tell them what they can and cannot do with them.

     

    It's the only sensible explanation.

  4. I think the Black Magic Pocket Cinema Camera (that's the full name, right?) goes by BMPCC. So that's one more C than the Production Camera.
    Actually, I thought I was seeing it wrong and my mind was closing the C. I think BMPCC is easier to tell apart on the spot than BMPO, which at first sight might make some people (or maybe just me) read it as BMPC.

    Well, I'm nitpicking; just saying, since it's the first time ever I've read "BMPO" anywhere.

     

    I was trying to keep them all to four letters, dammit! :D But I think BMPCC is probably better (and certainly more widely used); you win.

  5. So let's start with your four 8-bit pixels example and compare it to a single 10-bit observation. Imagine our stored 8-bit values are all max readings:

    (255)  (255)

    (255)  (255)

    Now suppose we want to approximate a single, native 10-bit observation in the middle of all four of them ("X"):

    (255)   (255)

            (X)

    (255)   (255)

    What 10-bit value do we assign? 1020 (that is, 255 × 4) seems logical, right? But what if the true value of the native 10-bit observation is 1023? (1021, 1022 and 1023 all get mapped to 255 in the 8-bit observations.) Then we have made a mistake in our mapping and underestimated the value. Similarly, we cannot be sure about assigning anything above 1020, because 1020 itself might be the true value of the native 10-bit observation!

     

    In this case, I used an example where the dynamic range is truncated from above in the 8-bit encoding, but it doesn't matter where this lack of precision occurs.

     

    All that being said, error diffusion complicates the matter. It may be possible to diffuse the errors (the remainders left over when going from the native bit depth to 8-bit), which takes this beyond a simple proof. The core ambiguity is sketched below.
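
    A minimal Python sketch of that ambiguity, assuming the usual truncating 10-bit-to-8-bit mapping (v // 4, clipped at 255); the mapping is my own illustration, not something from the thread:

    def to_8bit(v10):
        """Quantize a 10-bit value (0..1023) down to 8 bits (0..255)."""
        return min(v10 // 4, 255)

    # Every 10-bit value from 1020 to 1023 collapses onto the same 8-bit code:
    for v10 in (1020, 1021, 1022, 1023):
        print(v10, "->", to_8bit(v10))   # all four print 255

    # So four neighbouring 8-bit readings of 255 cannot tell us whether the
    # native 10-bit value was 1020, 1021, 1022 or 1023.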


  6. 00:31 blown-out, mushy highlights on the BMCC, not on the Canon

    00:58 high crop factor on the BMCC

    01:10 terrible shadow performance on the BMCC (if you want to see how bad it truly is, watch in full screen)

    01:27 global shutter on the BMCC better than rolling shutter on the Canon

     

    They differ in highlight performance, crop factor, shadow performance, and global vs. rolling shutter.

    Completely different.

     

    I agree with your assessment of this video, but I've seen Cinema5D act as Canon shills before, so I can't trust that their experiment was 100% objective.

  7. I have to believe that David is talking about "in practice," because the math is not perfect.

     

    Here's a simple example that illustrates what I'm saying:

     

    Take an image with a perfectly uniform shade of grey that falls between two 8-bit values, but closer to the higher value. Let's say it's between 100 and 101, but closer to 101. Well, it's going to be encoded as 101.

     

    But let's say you take the same perfectly uniform shade of grey and sample it with 10-bit precision. So it falls between 400 and 404, but lines up perfectly with 403.

     

    There is no way that four 8-bit values of 101 are going to mathematically sum to 403. They are going to sum to 404. And 404 <> 403.

     

    While I'm sure the down-rezzing helps the bit depth considerably, the math is not perfectly reversible (the arithmetic is sketched below).

     

    We've said this in the thread a bunch now. I am sure this is true without error diffusion. But with error diffusion I think there might be a way to store the information in 8 bit. 
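
    Here is the grey-patch arithmetic above as a minimal Python sketch (the levels are just the illustrative numbers from the example, not real sensor data):

    # A uniform analog grey sitting between two 8-bit codes, nearer the upper one.
    analog = 100.75           # analog level, measured in 8-bit code units

    v8 = round(analog)        # 8-bit encoding rounds to the nearest code -> 101
    v10 = round(analog * 4)   # 10-bit sampling has 4x finer steps        -> 403

    # Four 8-bit samples of 101 sum to 404, which can never equal 403:
    print(4 * v8, "vs", v10)  # -> 404 vs 403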

  8. OK, let's put this in context please, HurtinMinorKey. What effect does your theory have on the end result? Are we arguing here over a tiny technicality / mathematical proof, or is it a serious issue which will mean we get nowhere near a higher bit depth, as David and others are suggesting?

     

    I'm not sure exactly about the practical implications. The math seems to indicate that we cannot get 10-bit DR benefits out of 8-bit by downsampling.  

     

    Maxotics, I was wondering when you'd show up! You probably have as good a handle on this as anyone. 

     

    In Andrew's defense, I think he has been led astray by a software guy at GoPro.

     

    But the idea of error diffusion got me thinking. While we cannot resurrect the full DR of 10 bit, perhaps we can do a decent job of approximating this precision within the DR limits. 

  9. Let's make this even simpler and use a dynamic-range example to show why you can't always resurrect higher bit depth (even with error diffusion):

     

    Assume there are two types of A/D conversion:

     

    1-bit (taking on the values 0 or 1): 2 choices

    2-bit (taking on the values 0, 1, 2, 3): 4 choices

     

    Let's assume that analog values are:

     

    (0.1) (1.2) (2.0) (2.1) (3.0) (4.1)

     

    and that A/D conversion assigns the closest possible digital value.

     

    1-bit A/D conversion becomes:

     

    (0) (1) (1) (1) (1) (1)

     

    2-bit A/D conversion becomes:

     

    (0) (1) (2) (2) (3) (3)

     

    at half resolution you get either:

     

    (0) (2) (3) or (1) (2) (3)

     

    Either one represents 3 levels of light, which you cannot represent in just 1 bit.

     

    Is this a contrived example? Yes. But the point is to show that they are not mathematically equivalent; the sketch below runs the numbers.
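
    A minimal Python sketch of the same numbers (my own illustration):

    analog = [0.1, 1.2, 2.0, 2.1, 3.0, 4.1]

    def quantize(x, levels):
        """Assign the closest available digital code in 0..levels-1."""
        return min(range(levels), key=lambda code: abs(x - code))

    one_bit = [quantize(x, 2) for x in analog]   # -> [0, 1, 1, 1, 1, 1]
    two_bit = [quantize(x, 4) for x in analog]   # -> [0, 1, 2, 2, 3, 3]

    # Halve the resolution of the 2-bit stream by keeping every other sample:
    print(two_bit[::2])                          # -> [0, 2, 3]

    # Three distinct levels survive the downsample; a 1-bit stream can only
    # ever hold two, so the two encodings are not equivalent.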

  10. Yes HurtinMinorKey,

     

    But if error diffusion is employed in the 8-bit codec, the accumulated error (0.1 + 0.2 + 0.3 + 0.4 = 1.0) will be applied to the last 8-bit encoded value, resulting in an encoded sequence of (1) (1) (1) (2), which in turn down-samples (averaging each pair) to the sequence (1) (1.5), the same result you got from the 10-bit example. Of course, I'm assuming the down-sampled data is stored at 10 bits, but that's the whole point...

     

    J

     

    Error diffusion? How does it know where to place the error? Is this standard? Either way, I don't think it matters.

     

    10-bit can capture more DR than 8-bit, so error diffusion or not, there have to be some 10-bit values of the brightest white or the darkest dark that cannot be represented in 8-bit (while maintaining the contrast in the rest of the picture).
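
    For what it's worth, here is a minimal Python sketch of the 1-D error diffusion J describes, run on the (1.1) (1.2) (1.3) (1.4) gradient from the earlier example (Decimal just keeps the toy arithmetic exact):

    from decimal import Decimal

    analog = [Decimal("1.1"), Decimal("1.2"), Decimal("1.3"), Decimal("1.4")]

    encoded, carry = [], Decimal(0)
    for x in analog:
        v = int(x + carry)   # truncate to the coarse integer grid
        carry += x - v       # carry the rounding remainder into the next sample
        encoded.append(v)

    print(encoded)           # -> [1, 1, 1, 2]

    # Averaging each pair recovers the finer half-resolution levels:
    print([(a + b) / 2 for a, b in zip(encoded[::2], encoded[1::2])])  # -> [1.0, 1.5]

    Note that this only adds precision inside the representable range: a value clipped at the top of the 8-bit scale leaves no remainder to diffuse, which is the DR point above.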

  11. This is how to do it....

     

    1 In the NLE, place clip on timeline and overlay 3 copies.

    2 Normalise the result to 10 bit to prevent overexposure

    3 Shift copy1 vertical by 1 pixel

    4 Shift copy2 vertical by 1 pixel and horizontal by 1 pixel

    5 Shift copy3 horizontal by 1 pixel

     

    Output to 1080p 444 10 bit (even though it is 11 bit)

     

     

    ta da

     

    If I understand you correctly, you're showing why you can resurrect the chroma, but not the bit depth.

     

    Here is my "proof by color" of why I think you should be able to get 4:4:4 1080 from 4:2:0 4K.

     

     

     

    [Attached image: Slide1.jpg]

     

    The bottom left is 1080 4:4:4, and the top right is 4K 4:2:0.
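
    Here is that picture as a minimal Python sketch on a toy 4x4 frame (my own illustration, not any real codec's layout). In 4:2:0, each 2x2 block of luma shares one chroma sample, so a 2x downscale leaves every output pixel with its own chroma sample, i.e. 4:4:4:

    # 4x4 luma plane (full resolution) and its 2x2 chroma plane (4:2:0):
    luma   = [[ 10,  20,  30,  40],
              [ 50,  60,  70,  80],
              [ 90, 100, 110, 120],
              [130, 140, 150, 160]]
    chroma = [[1, 2],
              [3, 4]]

    # Downscale luma 2x by averaging each 2x2 block:
    small = [[(luma[2*r][2*c] + luma[2*r][2*c + 1] +
               luma[2*r + 1][2*c] + luma[2*r + 1][2*c + 1]) / 4
              for c in range(2)] for r in range(2)]

    # Every downscaled pixel (r, c) now pairs with its own chroma sample:
    for r in range(2):
        for c in range(2):
            print((r, c), "luma", small[r][c], "chroma", chroma[r][c])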

  12. So bit depth stays around 8-bit, but sampling is indeed 4:4:4 after conversion?

     

    3rd edit. 

     

    I think 4:2:0 carries only 25% of the chroma information of 4:4:4. So you should be able to reconstruct 1080 4:4:4 from 4K 4:2:0, because 4K has 4 times the resolution of 1080p; the sample counts are checked below.

     

    I'm much more certain about the bit depth than I am about the chroma, though.
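
    A quick sanity check on those sample counts (assuming UHD 3840x2160, which has 4 times the pixels of 1920x1080):

    chroma_444_1080 = 1920 * 1080                 # one chroma sample per pixel
    chroma_420_4k   = (3840 // 2) * (2160 // 2)   # one sample per 2x2 pixel block

    print(chroma_444_1080, chroma_420_4k)         # 2073600 2073600
    print(chroma_444_1080 == chroma_420_4k)       # True: the counts match exactly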

  13. Here's a proof by counterexample for why it is not always possible to interpolate extra bit depth from more pixels of information.

     

    Imagine 4 pixels of digital information. Imagine the actual analog information (let's just use an arbitrary measure of brightness) is:

     

    (1.1) (1.2) (1.3) (1.4), so it's a smooth gradient of increasing brightness.

     

    Now assume that 8-bit maps everything less than 1.5 to 1, but 10-bit will map more precisely, so that anything less than 1.25 is mapped to 1, but everything above 1.25 is mapped to 1.5. 

     

    So after analog to digital conversion, our 8 bit info is stored as

     

    (1) (1) (1) (1)

     

    and our 10 bit info is stored as

     

    (1) (1) (1.5) (1.5)

     

    Now let's assume that we only have half the pixels from the 10 bit version:

     

    (1) (1.5)

     

    In this simple example, you can see how no amount of subsampling will resurrect the contrast information that was lost in the 8-bit conversion (see the sketch below).

     

    Does this make sense?
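
    As a minimal Python sketch, using the deliberately contrived thresholds from the example above:

    analog = [1.1, 1.2, 1.3, 1.4]   # smooth analog gradient

    as_8bit  = [1 if x < 1.5 else 1.5 for x in analog]    # -> [1, 1, 1, 1]
    as_10bit = [1 if x < 1.25 else 1.5 for x in analog]   # -> [1, 1, 1.5, 1.5]

    print(as_10bit[::2])   # half the 10-bit pixels still show the step: [1, 1.5]
    print(as_8bit[::2])    # the 8-bit stream is flat; the step is gone: [1, 1]

    # No subsampling of the flat 8-bit stream can bring the gradient back.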

  14. Storage: I'd go with the 256GB internal flash for editing, and then augment with external drives for storage. Spinning drives are slow and affect the performance of almost everything you do.

     

    I'm not a big fan of the Fusion Drives (and I have one).

  15. Do you plan to use Resolve? I think it's a nightmare. I'd never build a machine in an attempt to stay compatible with Resolve. There must be other software out there that is more friendly.

     

    If you plan to use Resolve, that should dictate what card to use. That's a starting point.

     

    When working with raw, Resolve is not the issue. It's just a ton of data, and no program is going to handle it well on an under-specced machine.
