Everything posted by chauffeurdevan

  1. 4K 8-bit 4:2:2 downscaled to 1080p: Luma: 10 bits (256+256+256+256 = 1024 possibilities), Chroma: 9 bits (256+256 = 512 possibilities). 4K 8-bit 4:2:0 downscaled to 1080p: Luma: 10 bits (256+256+256+256 = 1024 possibilities), Chroma: 8 bits (256 possibilities).
     Your A/D conversion is not right; you are mixing up dynamic range and encoding. If you have a 12-stop DR sensor (let's use a regular single-amp sensor, not a dual-amp one like on the BMCC/Arri), you will get a 12-bit linear signal: each stop doubles the value, so each extra bit covers one extra stop. However, when there is encoding (not RAW), your signal is processed. When it is converted to 10 or 8 bit, the signal is first run through a log/gamma curve (e.g. 2.2), so your 6th stop is no longer at value 64 (2×2×2×2×2×2) out of 4096 but up around 2048 out of 4096. After that, you divide the value by 4 (10-bit) or 16 (8-bit). In a log encoding, each stop gets the same number of steps, whereas in linear you have one step for your first stop, 2 steps for your second stop, and so on.
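     To make the linear-versus-log point concrete, here is a rough Python sketch (it assumes an idealized pure-log curve spread evenly over exactly 12 stops, which is not any real camera's transfer function) counting how many code values each stop gets:

        # Code values per stop for a 12-stop scene stored in 12 bits,
        # linear versus an idealized evenly-spread log curve.
        BITS = 12
        STOPS = 12
        codes = 2 ** BITS

        # Linear: stop n (counting up from black) covers codes 2**(n-1) .. 2**n
        linear_steps = [2 ** (n - 1) for n in range(1, STOPS + 1)]

        # Idealized log: every stop gets an equal share of the code range
        log_steps = [codes // STOPS] * STOPS

        for n in range(STOPS):
            print(f"stop {n + 1:2d}: linear {linear_steps[n]:5d} codes, log {log_steps[n]:4d} codes")

     The bottom stops get almost no codes in linear, which is exactly why the log conversion happens before bits are thrown away.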
  2. FYI, it was Chinese New Year.
  3. This is what I wrote last May: > The sensor sampling is done by two ADCs, one behind a high-gain amplifier and one behind a low-gain amplifier. Each ADC outputs 11 bits. Two 11-bit values don't give 22 bits (unless they were the MSBs and LSBs of a single word, which is impossible in this case, since the amplifiers/ADCs are linear). The only linear signal that can be built from them (beyond 11 bits) is a 12-bit signal: an 11-bit unsigned integer has 2048 values, and 2048 + 2048 = 4096 = 12 bits. Beyond that, it is not linear anymore... but logarithmic. If the high gain was +18 dB, that is 3 stops, so its ADC output has to be shifted by 3 bits relative to the other before the two can be combined. I will not go deeper, but I hope you understand... However, from the measured specs, a ratio of 16,000:1 (roughly 14 stops) is best represented by a 14-bit linear value. That said, I think the BMCC made the better choice by using 12-bit log instead of 14-bit linear.
  4. Possibly a UV filter could fix that, depending on the type of light and the UV emission of those lights. Also, a warming filter (81C or 85) will reduce the blue saturation, but it could in turn cause problems with the red lights...
  5. First, there is currently no 12-core version of the newest Mac Pro. Depending on the configuration of your old Mac Pro (probably dual X5650 or better), the new one will not be much faster... Second, are you sure Avid MC is able to do 4K? Third, the BM 4K is far from being in your hands...
  6. In fact, I am not sure if it was FCP actually working at 23.98, or FCP writing 23.98 into the XML instead of the real rate at full precision, that messed everything up. (I read that Resolve also has difficulty with FCP's 23.98, not just Automatic Duck.) So I just created two 02:00:00:00 comps in AE, one at 23.976 and one at 23.98: 23.976 = 172799 frames, 23.98 = 172655 frames. If someone can create a timeline in FCP and confirm one or the other, it would be nice...
  7. In fact, 23.98 (like FCP uses) is really wrong; the real rate is 23.976 (and a bit more...). At first, I thought it was only a rounded number being displayed, but it is not... If you ever have to convert to another editor, make sure you triple-check your edit against a rendered version... I ran into the problem with Automatic Duck; if I remember correctly there is a free AE script somewhere that reinterprets the 23.98 as 23.976. As for 24p vs 23.976, it really depends on the final destination: TV: 23.976, cinema: 24, everything else: you choose...
  8. In fact, just the difference between 23.98 (what Final Cut uses) and 23.976 (what everything else uses) is quite big. When you have timecode burned into your footage and you start your edit at 10:00:00:00, you can end up with a lot of frame differences here and there. I had this problem 2 years ago when a client working in FCP (23.98) gave me the project so I could import it into AE (at 23.976)... I lost a few days of work when I saw everything was out of sync (a 1h30 feature film with hundreds of cuts).
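     For scale, a quick Python back-of-the-envelope (assuming the cut really is interpreted at a flat 23.98 instead of the true 24000/1001 rate):

        # Drift between FCP's 23.98 label and the real NTSC film rate over a 1h30 feature.
        true_rate = 24000 / 1001   # ~23.976, what most apps actually use
        fcp_rate = 23.98           # what FCP writes in the XML
        seconds = 90 * 60          # 1h30 feature

        drift = seconds * (fcp_rate - true_rate)
        print(f"{drift:.1f} frames of offset after 1h30")   # about 21.5 frames

     Twenty-odd frames is close to a second, more than enough to throw every cut off.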
  9. So $2999 for the quad-core, $3999 for the six-core.   Where is the announced 12-core and how much will it be?   So who's getting a Mac Pro?
  10. It is really clear from Gmunk's description page of the project that everything was pre-rendered from Maya, Houdini and Cinema 4D.   http://work.gmunk.com/BOX-DEMO   Really awesome work.
  11. Hope they'll have a making-of eventually...   I actually think the images are pre-rendered. Why do it in real time with a game-quality renderer (they would probably use Processing, as the creative designer is Gmunk and he is used to Processing) when you can have hi-end CG with global illumination, ambient occlusion, area lights, etc.?   Pretty sure the camera is mounted on one of those 6-axis robot arms. That way, everything is timed to the millisecond and positioned to the millimeter; no need to build and test a tracking system, and anyway you could not make a tracking system that precise in the dark. Everything is animated in 3D, from the camera to the surfaces, so you are sure all the visuals line up perfectly in 3D.
  12. Excerpt from http://blogs.adobe.com/aftereffects/2012/09/cinemadng-in-after-effects-cs6-and-elsewhere.html   One question that we’ve been seeing a lot–especially since the recent announcements of a couple of cameras–is why Premiere Pro doesn’t import CinemaDNG files. The answer is simply that we have not been satisfied with the performance that we have been able to achieve with CinemaDNG files in Premiere Pro, in which real-time playback is crucial. If it’s important to you that we add native import of CinemaDNG footage into Premiere Pro, please let us know with a feature request so that we can get a sense of whether this is an area where we need to put more effort.   Adobe Feature Request page : http://www.adobe.com/go/wish
  13. This is almost exactly what I have wanted from Apple for many years, but not as a hi-end Mac Pro: as a mid-range desktop with an i7 in the $1.5k-2.5k range, something that has always been missing, as I never had any interest in those iMacs.   However, I don't think it is a good idea for Apple to replace the old tower Mac Pro with this. I really don't like the idea of a hi-end computer having everything external. The people I know who own Mac Pros have tons of internal hard drives, a few PCIe cards (Pro Tools, Avid, AJA), etc. Going fully external, you add noise, a lot of cables and external power supplies, and at the same time increase the price of every component.   I don't call that evolution. In this case, I really think the visual design of the box reduces the usability of the computer for pro/hi-end users. Like I said, it should have been a mid-range computer for people with no hard drive or a single one, maybe a printer, nothing else.
  14. I read the sCMOS whitepaper in more detail, and I was wrong; so are you.   The sensor sampling is done by two ADCs, one behind a high-gain amplifier and one behind a low-gain amplifier. Each ADC outputs 11 bits. Two 11-bit values don't give 22 bits (unless they were the MSBs and LSBs of a single word, which is impossible in this case, since the amplifiers/ADCs are linear). The only linear signal that can be built from them (beyond 11 bits) is a 12-bit signal: an 11-bit unsigned integer has 2048 values, and 2048 + 2048 = 4096 = 12 bits. Beyond that, it is not linear anymore... but logarithmic. If the high gain was +18 dB, that is 3 stops, so its ADC output has to be shifted by 3 bits relative to the other before the two can be combined. I will not go deeper, but I hope you understand...   However, from the measured specs, a ratio of 16,000:1 (roughly 14 stops) is best represented by a 14-bit linear value. That said, I think the BMCC made the better choice by using 12-bit log instead of 14-bit linear.
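     To make the combining step concrete, here is a rough Python illustration (this is not the actual sCMOS readout pipeline; the 8x factor for +18 dB and the simple switch-over are assumptions for the sketch):

        # Merging a high-gain and a low-gain 11-bit readout of the same pixel.
        GAIN = 8            # +18 dB is roughly a factor of 8 in signal
        MAX11 = 2**11 - 1   # 2047, the highest code an 11-bit ADC can output

        def combine(high_gain_code, low_gain_code):
            """Return one linear value, expressed in low-gain units."""
            if high_gain_code < MAX11:
                # High-gain path not clipped: use it for better shadow precision.
                return high_gain_code / GAIN
            # High-gain path clipped: fall back to the low-gain readout.
            return low_gain_code

        print(combine(40, 5))        # deep shadow, resolved to fractions of a low-gain code
        print(combine(2047, 1500))   # highlight, comes from the low-gain path

     The merged range runs from fractions of a code up to 2047, far more than 11 or 12 linear bits can hold, hence the log curve.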
  15. Was it a horizontally split frame with the lights on in the top half and off in the bottom?   Then no luck, this hack will do nothing for you. This is a rolling-shutter problem, not a codec problem. Get a global-shutter camera.
  16. The BMCC's sCMOS sensor has a dynamic range quoted at >16,000:1, and 14-bit is 16,384. So it is not 16-bit unless there are 2 bits of noise.   http://www.scmos.com/files/high/scmos_white_paper_8mb.pdf
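     The arithmetic behind that, as a quick check in Python:

        import math
        print(math.log2(16_000))   # ~13.97, so a 14-bit linear value (16,384 steps) just covers it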
  17. Whoa! This is not PCI Express, this is PCMCIA, not at all the same thing. However, with this CF-to-PCMCIA interface you could connect a PCMCIA-to-SATA adapter like this one: http://www.miniinthebox.com/ake-pcmcia-to-sata-serial-ata-cardbus-card-for-laptop_p183408.html Will it work? I don't know.
  18. I misread, I thought you said 8 Mpx crops. It was just weird to have a picture cropped to megabytes instead of megapixels...   8 Mpx at 8 bits would be 8 MB.
  19. But maybe you'll want something better than 8-bit? So, double that RAM.
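     Roughly, assuming one value per pixel and no packing or compression:

        # Approximate per-frame size for an 8-megapixel image at a few bit depths.
        pixels = 8_000_000
        for bits in (8, 10, 12, 16):
            print(f"{bits:2d} bit: {pixels * bits / 8 / 1_000_000:.0f} MB per frame")

     So 8-bit lands on the 8 MB figure above, and 16-bit doubles it.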
  20. Is it possible to get an original clip from that HyperDeck recorder? Because from that 4:2:0 MP4 on Vimeo, it is IMPOSSIBLE to say whether it is 4:2:2.
  21. http://support.apple.com/downloads/Apple_ProRes_QuickTime_Decoder_1_0_for_Windows   You can read, but you can't write... You could, however, compress to ProRes using ffmpeg.
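     For example, an invocation along these lines usually does the job (prores_ks is ffmpeg's ProRes encoder, profile 3 is HQ; the filenames are just placeholders):

        ffmpeg -i input.mov -c:v prores_ks -profile:v 3 -c:a pcm_s16le output.mov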
  22. I tried to find articles or theory to prove my point, but still can't find any.   Since the measurement is done per channel (R, G and B), and since the blue filter is about 2 stops darker than the green (red about -1 stop), whenever there is white light (probably at 5000K or 6500K) the blue channel will clip 2 stops later (probably 3 stops later under tungsten).   I think that's why a lot of people measure charts in their garage and get 14 stops of DR (15 if measured under tungsten light) even when the sensor's theoretical dynamic range is 72 dB (12 stops) (the measured value should be lower than the theoretical DR, because no analog-to-digital converter is 100% efficient).
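     The dB-to-stops conversion behind that 72 dB figure, in Python:

        import math
        db = 72
        stops = db / (20 * math.log10(2))   # one stop is a doubling, ~6.02 dB
        print(f"{db} dB is about {stops:.1f} stops")   # ~12 stops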
  23. DPX is derived from the Kodak Cineon file format. Cineon was a film scanning system, so it was kind of a "RAW" scanner file format. However, it doesn't support Bayer patterns or stuff like that. http://en.wikipedia.org/wiki/Cineon   But yes, it is uncompressed.
  24. Any 4K into the Odyssey7Q seems to be conventional video. http://www.xdcam-user.com/2013/04/convergent-design-odyssey-7q-to-work-with-fs700-raw/#comment-33298   Don't think so.