Posts posted by chauffeurdevan

  1. My understanding is that the downsampling can yield 10-bit luma, because four 8-bit luma samples are getting combined into one, not full 10-bit color sampling. Anyone else get that?

     

    4K 8-bit 4:2:2 downsampled to 1080p:

    Luma : 10 bits (256+256+256+256 = 1024 possibilities)

    Chroma : 9 bits (256+256 = 512 possibilities)

     

    4K 8-bit 4:2:0 downsampled to 1080p:

    Luma : 10 bits (256+256+256+256 = 1024 possibilities)

    Chroma : 8 bits (256 possibilities)
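    A quick sketch of the summing argument above (Python just for illustration; a real scaler's filtering may differ):

```python
# Combine four 8-bit luma samples (0..255) from a 2x2 block
# into one value. The sum spans 0..1020, which needs 10 bits;
# the extra precision is only real if the four samples differ
# (from detail or noise), not if they are identical.

def downsample_luma(block):
    """Sum a 2x2 block of 8-bit samples into one 10-bit value."""
    assert len(block) == 4 and all(0 <= s <= 255 for s in block)
    return sum(block)  # range 0..1020, fits in 10 bits

print(downsample_luma([255, 255, 255, 255]))  # 1020
print(downsample_luma([100, 101, 100, 102]))  # 403
```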

     

     

    let's make this even simpler, and use a toy dynamic-range example to show why you can't always resurrect higher bit depth (even with error diffusion):

     

    Assume there are two types of A/D conversion:

     

    1-bit (taking on the values 0 or 1) - 2 choices

    2-bit (taking on the values 0, 1, 2, 3) - 4 choices

     

    Let's assume that analog values are:

     

    (0.1) (1.2) (2.0) (2.1) (3.0) (4.1)

     

    and that A/D conversion assigns the closest possible digital value.

     

    1 bit A/D conversion becomes:

     

    (0) (1) (1) (1) (1) (1)

     

    2 bit A/D conversion becomes

     

    (0) (1) (2) (2) (3) (3)

     

    at half resolution you get either:

     

    (0) (2) (3) or (1) (2) (3)

     

    Either one represents 3 levels of light, which you cannot represent in just 1 bit.

     

    Is this a contrived example? Yes. But the point is to show they are not mathematically equivalent.
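    The toy example above can be run directly (the 2-bit converter clamps 4.1 down to 3, as in the post):

```python
# Quantize six analog values with a 1-bit and a 2-bit A/D
# converter, then halve the resolution of the 2-bit signal by
# averaging adjacent pairs: three distinct levels survive,
# which one bit cannot represent.

def quantize(x, levels):
    """Round to the nearest code in 0..levels-1, clamping."""
    return max(0, min(levels - 1, round(x)))

analog = [0.1, 1.2, 2.0, 2.1, 3.0, 4.1]

one_bit = [quantize(v, 2) for v in analog]  # [0, 1, 1, 1, 1, 1]
two_bit = [quantize(v, 4) for v in analog]  # [0, 1, 2, 2, 3, 3]

half_res = [(a + b) / 2 for a, b in zip(two_bit[::2], two_bit[1::2])]
print(half_res)  # [0.5, 2.0, 3.0] -> three levels of light
```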

     

    Your A/D conversion example is not right: you are mixing up dynamic range and encoding.

     

    If you have a 12-stop DR sensor (let's use a regular single-amp sensor, not a dual-amp one like on the BMCC/Arri), you will get a 12-bit linear signal: with each stop you double your value, just as with each bit you double your value.

     

    However, when there is encoding (not RAW), your signal is processed. When it is converted to 10 or 8 bit, the signal is first converted to a log/gamma curve (e.g. 2.2), so your 6th stop is no longer the value 64 (2*2*2*2*2*2) out of 4096 but around 2048 out of 4096. After that you divide your value by 4 (10-bit) or 16 (8-bit). In a log encoding, each stop gets the same number of steps, whereas in linear you have one step for your first stop, 2 steps for your second stop, and so on...
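    A sketch of that code-allocation argument, for a 12-bit, 12-stop linear signal, with a uniform per-stop split standing in for a real log curve:

```python
# In a 12-bit linear encoding, stop n spans codes 2^(n-1)..2^n,
# so the darkest stop gets a single step while the brightest
# stop gets half of all 4096 codes. A log curve spreads the
# codes roughly evenly across the stops instead.

BITS = 12
STOPS = 12

linear_steps = [2 ** (n - 1) for n in range(1, STOPS + 1)]
print(linear_steps)            # [1, 2, 4, ..., 2048]

log_steps_per_stop = (2 ** BITS) // STOPS
print(log_steps_per_stop)      # ~341 codes for every stop
```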

  2. But then why is the BMCC 12-bit raw? Do they just concatenate the extra data? The only thing I could want from the BMCC is a slightly bigger sensor, and the accompanying low-light performance.

     

    @eoshd, remember, you said yourself there is no guarantee that these are the sensors going into BMCC updates. Plus, they could always upgrade the SSD controller and do crazy things like 16-bit raw, or 12-bit raw at 4K (pushing up against the write speed of the faster SSDs).

     

    This is what I wrote last May:

     

     

    The sensor sampling is done by two ADCs: one with a high-gain amplifier, one with a low-gain amplifier. Each ADC outputs 11 bits. Two 11-bit values don't give 22 bits (unless they were MSB and LSB, which is impossible in this case, since the amplifiers/ADCs are linear). The only linear signal that could be created (over 11 bits) is a 12-bit signal: an 11-bit unsigned integer spans 2048 values, and 2048 + 2048 = 4096 values = 12 bits. Beyond that, it is not linear anymore but logarithmic. If the high gain was +18 dB, you need to scale its ADC output by a factor of 8 (three doublings). I will not go deeper, but I hope you understand...

     

    However, from the measured specs, a ratio of 16,000:1 (around 13-14 stops) is best represented by a 14-bit linear value. Still, I think Blackmagic made the better choice in going with 12-bit log instead of 14-bit linear.

  3. Hey guys! I have some questions regarding some upgrades we plan to make very soon. The production house is looking into pre-ordering the Blackmagic Production Camera, which will be our first 4K-ready camera, and we're excited about it. But my questions are really on the post side of things. Right now we are running on 2 older Mac Pros and they aren't getting the job done like we would like them to. We just recently switched to Media Composer as our NLE and have started to incorporate DaVinci Resolve 10. We started shooting in raw for some of our productions and need computers that can handle it. We are looking at the new Mac Pros but feel a little lost with the options on these machines. I'm going to be honest: I am not a computer geek, I am simply the camera man, and I am looking for answers for our post side of things.

     

    So some of my questions are:

     

    - What are the advantages of a 12-core system?

     

    - Can Avid and Resolve take advantage of all cores?

     

    - Is it smarter to get fewer cores and focus more on a GPU upgrade?

     

    Pretend money isn't an issue, but you don't want to spend money on something you won't benefit from in this situation. And please don't make this a PC-versus-Mac battle. We are a Mac house and are staying that way, and a Hackintosh is out of the question.

     

    Thank You

     

     

    First, there is currently no 12-core configuration of the newest Mac Pro. Depending on the configuration of your old Mac Pros (probably dual X5650s or better), the new one will not be much faster...

     

    Second, are you sure Avid MC is able to do 4K?

     

    Third, the BM 4K is far from being in your hands...

  4. In fact, I am not sure if it was FCP actually working at 23.98, or FCP writing 23.98 in the XML instead of the real rate at full precision, that messed everything up. (I read that Resolve also has difficulty with FCP's 23.98, not just Automatic Duck.)

     

    So I just created a 02:00:00:00 comp in AE, once at 23.976 and once at 23.98:

    23.976 = 172799 frames

    23.98  = 172655 frames

     

    If someone can create a timeline in FCP and confirm one or the other, it would be nice...

  5. Lucky I work with FCP then!

    So do you think I should use/stick to 24 fps when I know the projects will go somewhere else?

     

    Love this place, loads of info just being handed around!

    Thanks!

    In fact, 23.98 (as FCP uses) is really wrong; the real rate is 23.976 (and a bit more...). At first, I thought it was only a rounded number being displayed, but it is not... If you ever have to convert to another editor, make sure you triple-check your edit against a rendered version...

     

    I ran into this problem with Automatic Duck; if I remember right, there is a free AE script somewhere that correctly interprets the 23.98 as 23.976.

     

    As for 24p vs 23.976, it really depends on the final destination. TV: 23.976; cinema: 24; everything else: you choose...

  6. I mean, 0.02 fps can't generate that much of a difference.

     

    In fact, just the difference between 23.98 (what Final Cut is using) and 23.976 (what everything else is using) is quite big. When you have timecode burned into your footage and you are starting your edit at 10:00:00:00, you can get a lot of frame differences here and there.

     

    Had this problem 2 years ago when a client working in FCP (23.98) gave me the project so I could import it into AE (at 23.976)... I lost a few days of work when I saw everything was off-sync (a 1h30 feature film with hundreds of cuts).
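    The rough arithmetic behind that off-sync (assuming 24000/1001 is the exact rate AE calls 23.976):

```python
# Frames counted over a 1h30 timeline at FCP's rounded
# 23.98 fps vs. the exact NTSC film rate of 24000/1001 fps.
from fractions import Fraction

seconds = 90 * 60                        # 1h30
exact = Fraction(24000, 1001) * seconds  # ~129470.5 frames
rounded = Fraction(2398, 100) * seconds  # exactly 129492 frames

drift = float(rounded - exact)
print(round(drift, 1))  # ~21.5 frames of offset by the end
```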

  7. Not really. The imagery being projected is most likely being generated in realtime. It's really simple graphics.

     

     

    Hope they'll have a making-of eventually...

     

    I actually think the images are pre-rendered. Why do it in realtime with a game-quality renderer (they would probably use Processing, as the creative designer is Gmunk, and he is used to Processing) when you can have high-end CG with global illumination, ambient occlusion, area lights, etc.?

     

    Pretty sure the camera is mounted on one of those 6-axis robots. That way, everything is timed to the millisecond and positioned to the millimeter; no need to build and test a tracking system, and anyway you could not make a tracking system that precise in the dark. Everything is animated in 3D, from the camera to the surfaces, so you are sure all the visuals line up perfectly in 3D.

  8. Excerpt from http://blogs.adobe.com/aftereffects/2012/09/cinemadng-in-after-effects-cs6-and-elsewhere.html

     

    One question that we’ve been seeing a lot–especially since the recent announcements of a couple of cameras–is why Premiere Pro doesn’t import CinemaDNG files. The answer is simply that we have not been satisfied with the performance that we have been able to achieve with CinemaDNG files in Premiere Pro, in which real-time playback is crucial. If it’s important to you that we add native import of CinemaDNG footage into Premiere Pro, please let us know with a feature request so that we can get a sense of whether this is an area where we need to put more effort.

     

    Adobe Feature Request page :

    http://www.adobe.com/go/wish

  9. This is almost exactly what I wanted from Apple for many years, but not as a high-end Mac Pro: as a mid-range desktop with an i7 in the 1.5k-2.5k range, something that was always missing, as I never had any interest in those iMacs.

     

    However, I don't think it is a good idea for Apple to replace the old tower Mac Pro with this. I really don't like the idea of a high-end computer having everything external. People I know who have Mac Pros have tons of internal hard drives, a few PCIe cards (Pro Tools, Avid, AJA), etc. Going fully external, you increase noise, add a lot of cables and external power supplies, and at the same time increase the price of every component.

     

    I don't call that evolution. In this case, I really think the visual design of the box reduces the usability of the computer for pro/high-end users. Like I said, it should have been a mid-range computer for people with no or a single hard drive, maybe a printer, nothing else.

  10. Seriously? It's already established that the original BMCC sensor captures 16-bit linear; this is reduced to 12-bit log (unpacked back to 16-bit once loaded into Resolve).

     

     

    edit: sorry, my bad, the sensor captures more than 16 bits. It takes that, reduces it to 16-bit linear, then from that writes a 12-bit log file to disc.

     

     

    I read the sCMOS whitepaper in more detail, and I was wrong. But so are you.

     

    The sensor sampling is done by two ADCs: one with a high-gain amplifier, one with a low-gain amplifier. Each ADC outputs 11 bits. Two 11-bit values don't give 22 bits (unless they were MSB and LSB, which is impossible in this case, since the amplifiers/ADCs are linear). The only linear signal that could be created (over 11 bits) is a 12-bit signal: an 11-bit unsigned integer spans 2048 values, and 2048 + 2048 = 4096 values = 12 bits. Beyond that, it is not linear anymore but logarithmic. If the high gain was +18 dB, you need to scale its ADC output by a factor of 8 (three doublings). I will not go deeper, but I hope you understand...

     

    However, from the measured specs, a ratio of 16,000:1 (around 13-14 stops) is best represented by a 14-bit linear value. Still, I think Blackmagic made the better choice in going with 12-bit log instead of 14-bit linear.
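    A hypothetical sketch of how a dual-gain readout could be merged onto one linear scale (the function name, the +18 dB figure and the clip-based switch are my assumptions for illustration, not Blackmagic's or the sensor vendor's actual pipeline):

```python
# Merge two 11-bit ADC codes from high- and low-gain readouts.
# +18 dB is roughly a factor of 8 (three doublings) between the
# two scales; the merged range tops out near 2047 * 8 = 16376,
# i.e. about 14-bit linear, as discussed above.

GAIN = 8                 # +18 dB ~= x8 in signal
ADC_MAX = 2 ** 11 - 1    # 11-bit full scale = 2047

def merge_dual_gain(high_code, low_code):
    """Prefer the fine-grained high-gain sample; fall back to
    the rescaled low-gain sample once the high-gain ADC clips."""
    if high_code < ADC_MAX:
        return high_code        # shadows: 1-code precision
    return low_code * GAIN      # highlights: 8-code steps

print(merge_dual_gain(500, 62))      # 500 (high-gain path)
print(merge_dual_gain(2047, 1800))   # 14400 (low-gain x8)
```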


  11.  

     Well, the codec got obliterated by the irregular flashing lights, making 80% of the footage unusable.

     

     

    Was it a horizontally split frame, with the top half having the lights on and the bottom half off?

     

    Then no luck, this hack will do nothing for you. This is a rolling-shutter problem, not a codec problem. Get a global-shutter camera.

  12. They probably never will. The dynamic range figure they quote is a computed value, not an observed one as "X stops" would imply. It's derived with the following equation:

     

    dynamic range in dB = 20 × log10(full-well capacity ÷ readout noise)

     

    As you can probably guess, this is a theoretical figure, not an observed one gotten by looking at a chart. And the "X stops" figure we toss around here is derived from that figure (i.e., by dividing by 6). That's why I'm a little suspicious of BMD's 12-stop claim for the 4K Pro camera, as the theoretical limit is 10 stops without the CMV12000's HDR tricks. Of course, the CMV could in practice be somewhat better than its specs suggest, but it's unlikely to be two stops better.

    I tried finding articles or theory to prove my point, but still can't find any.

     

    Since the measurement is per channel (R, G and B), and the blue filter is about 2 stops darker than the green (red about -1 stop), whenever there is white light (at, say, 5000K or 6500K), the blue channel will clip 2 stops later (probably 3 stops later in tungsten).

     

    I think that's why we have a lot of people measuring charts in their garage and getting 14 stops of DR (15 if measured in tungsten light) even when the sensor's theoretical dynamic range is 72 dB (12 stops). (Measured should come out lower than theoretical DR, because no analog-to-digital converter is 100% efficient.)
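    The quoted formula, run with made-up sensor numbers chosen to land at 72 dB (illustrative values, not any specific sensor's specs):

```python
# dynamic range in dB = 20 * log10(full-well / readout noise),
# and one stop is a doubling, i.e. 20 * log10(2) ~= 6.02 dB.
import math

def dynamic_range_db(full_well_e, read_noise_e):
    return 20 * math.log10(full_well_e / read_noise_e)

def db_to_stops(db):
    return db / (20 * math.log10(2))

db = dynamic_range_db(30000, 7.5)  # a 4000:1 ratio
print(round(db, 1))                # 72.0 dB
print(round(db_to_stops(db), 1))   # 12.0 stops
```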

  13. I thought DPX was a format used by video editors to transport raw in an uncompressed way from NLE to NLE. Is that not right? Can anyone clarify? I've never used this format before in my work.

     DPX is derived from the Kodak Cineon file format. Cineon was a film scanner system, so its format was a kind of "RAW" scanner file format. However, it doesn't support Bayer-pattern data or stuff like that. http://en.wikipedia.org/wiki/Cineon

     

    But yes, it is uncompressed.
