Everything posted by chauffeurdevan

  1.   Watch out, you're mixing Pocket specs with Production 4K specs.   Pocket: lossless DNG compression. Production 4K: lossy DNG compression.   John Brawley confirmed that the "visually lossless" compression on the 4K is actually lossy compression. However, it should be 12-bit, so an update to Adobe's DNG specs should follow!
  2. Let's hope he's right.   Where did you find the info on the website? I looked everywhere and was not able to find anything. The only hint I get is the VISUALLY lossless wording they use for the BMPC and not for the other camera; this is pure marketing wording.   Christine Peterson from Blackmagic on the forums: "It's visually lossless, a very slight compression."   So right now, I prefer to believe Adobe's DNG specs over Blackmagic's wording, and the only "visually lossless" compression in the spec that is not lossless is the 8-bit JPEG.   So, like I said, a new spec with a 12-bit log "visually lossless" compression could always be published soon. Right now, there isn't one.   As I wrote on the Adobe forum 6 months ago, I really hoped Adobe would update the spec to include something better than 8-bit JPEG, and that was before the BMPC. I now hope even more that Adobe will do it.
  3. Like I wrote in another thread, I asked Adobe about DNG lossy compression and they answered that it is 8-bit JPEG; you can also look in Adobe's DNG documentation.   So for now, our only hope is that a new DNG specification version with better lossy compression is coming soon and that BM has access to it.
  4.   Lossless on the Pocket, visually lossless on the 4K.   So damn, the 4K CinemaDNG is only 8-bit lossy JPEG compression, not true RAW. Hopefully the ProRes 422 HQ is 10-bit, as I would not want to pay for a high-end 4K SDI external recorder.
  5.   It could very well be that the JPEG example is kind of high quality, and the lossy one is highest quality.   It is not 10-bit, but 8-bit.   From what I understand, the JPEG inside the DNG is created from the raw data before all the transform operations (sharpening, white balance matrix, DR adjustment, etc.), so you get a clean JPEG whose white balance you can still change, and which still has the data in the highlights and shadows, so you can do the same recovery as with RAW. The only problem, like I said in the DNG forum, is exactly that: all the operations are done after the conversion to 8-bit. So if your shot needs a lot of color correction/grading (in fact, just going to a really warm color temperature), you'll get a lot of banding/posterization, maybe more than with a regular MJPEG video file, where the white balance matrix (and other operations) is applied to the 12/14-bit data.
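The banding argument above can be sketched numerically. This is a toy model, not the actual DNG pipeline: the gains and ranges are made-up numbers, and "12-bit vs 8-bit pipeline" is simulated just by where the quantization step happens. Counting the distinct 8-bit codes that survive a strong warm correction followed by a compensating grade shows why re-quantizing at 8-bit between operations posterizes.

```python
# Toy sketch (hypothetical gains): count distinct 8-bit output codes when a
# strong blue-channel white-balance cut and a compensating grade boost are
# applied either on high-precision data (quantized once at the end) or on
# data already quantized to 8 bits between every step.

def q(x, bits=8):
    """Clamp to [0, 1] and quantize to the given bit depth."""
    m = (1 << bits) - 1
    return round(min(max(x, 0.0), 1.0) * m) / m

gain_down, gain_up = 0.55, 1.8  # made-up "warm WB" cut and grading boost

# High-precision pipeline: both gains applied to the 12-bit value,
# one final quantization to 8 bits.
hi = {q(v / 4095 * gain_down * gain_up) for v in range(4096)}

# 8-bit pipeline (as with a lossy 8-bit JPEG source): re-quantized to
# 8 bits after every operation.
lo = {q(q(q(v / 4095) * gain_down) * gain_up) for v in range(4096)}

# The 8-bit pipeline ends up with far fewer distinct levels: visible banding.
print(len(hi), len(lo))
```

The exact counts depend on the made-up gains, but the high-precision path always keeps more distinct levels than the path that quantizes between steps.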
  6. Right now, people should not be too excited about compressed CinemaDNG.   I asked a few months ago about the DNG 1.4 lossy compression bit depth, and the answer I got is that it is 8-bit. In fact, lossy DNG is an 8-bit JPEG. http://forums.adobe.com/thread/1078247
  7. Not sure if it is this sensor: http://www.aptina.com/products/image_sensors/mt9j003i12stcv/ From what I see here, and from the recent Toshiba sensor in the D5200, the sensor is responsible for the video capability more than the actual camera architecture is. For example, the downscaling method is actually implemented in the sensor, not in the camera. Nikon probably didn't change much between the D5200 and the D5100, but just changing the sensor gives totally different results. I really think the Sony/Canon "crippling" is not crippling, but just what the actual sensor architecture is able to give. You could give the 5D3 sensor to Blackmagic, and they probably would not be able to get more out of it.
  8. As Olympus m43 lenses are "focus-by-wire", the energy you put into rotating the focus ring is not mechanically transferred directly to the lens. It is converted to an electronic signal, sent to the camera, and back to the lens, where an electric motor mechanically focuses the lens.   So the noise you hear is that motor. I suppose the loudness changes from lens to lens.
  9.   0.95 on full frame is extremely narrow, as you know: about 4 times narrower than what most people here have ever experienced, if not more. You probably had a lot of difficulty focusing while also operating the camera; almost every shot is really soft. Maybe an experienced follow-focus puller could drive it by doing just that.   Even though the Leica Noctilux is the best extremely fast lens, I would stay far away from it for video, apart from some visual effects you may want to create.   You should use the Summilux, or even better, the Summicron, most of the time.
  10. I think the best performance/money ratio is in the GTX x70 models (570 or 670). The GTX x60 cards are far less powerful for not that much less money, and the GTX x80 cards are not much better for twice the price. In fact, the x70 shares the same chip as the x80 but has only 1 of its 16 SMs disabled (or, as some of you like to call it, crippled).
  11.   In fact, no. For the same bitrate, IPB should be better on most scenes (those with low to mid action or little camera movement).   Like I said, with All-I you have a small size for every frame. With IPB you have a bigger I-frame, and this is important: after that, the other frames are computed as differences from that I-frame. The difference between frames is generally small, so it is like having much more data per frame with IPB.   Personally, I would shoot IPB most of the time and switch to All-I when there is a lot of action going on.
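The I-frame-plus-differences idea can be sketched with a toy two-frame example. This is not real H.264 (no motion estimation, no DCT); zlib on raw bytes just stands in for "a compressor", and the frame contents are made up. The point it illustrates is only that a low-motion frame stored as a delta compresses far better than the same frame coded independently.

```python
# Toy sketch: independent coding of two frames vs. I-frame + delta.
import random
import zlib

rng = random.Random(0)
# Detail-heavy I-frame: pseudo-random bytes stand in for texture.
i_frame = bytes(rng.randrange(256) for _ in range(64 * 64))

# "Low action" next frame: identical except for a small changed patch.
p_frame = bytearray(i_frame)
for k in range(200):
    p_frame[k] = (p_frame[k] + 10) % 256
p_frame = bytes(p_frame)

# All-I style: every frame compressed independently.
all_i_size = len(zlib.compress(i_frame)) + len(zlib.compress(p_frame))

# IPB style: second frame stored as its difference from the I-frame;
# the delta is mostly zeros, so it compresses to almost nothing.
delta = bytes((b - a) % 256 for a, b in zip(i_frame, p_frame))
ipb_size = len(zlib.compress(i_frame)) + len(zlib.compress(delta))

print(all_i_size, ipb_size)
```

At the same total budget, the bits the delta saves can go into a bigger, better I-frame, which is the advantage described above.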
  12.   Hope you'll look at medium format to full frame. It would be awesome to have a Hasselblad Zeiss to Canon EF adapter, or even Mamiya 645 to Canon EF. Those lenses would blow away any 35mm and smaller lens, and they get cheaper and cheaper as film disappears (fewer and fewer film rolls available on eBay each week).
  13. Hi,   I find the video overexposed by at least half a stop. Bringing it down would give you much richer color, which matters even more on a sunset. You also need to decide which is more important: seeing all the details in the bush at the bottom right and behind the rock, or all the details in the sky.   Also, I haven't personally tried the GH series, but I would not turn down the saturation; you should not gain anything, so I would keep it at zero. Increasing it, however, should reduce the dynamic range, so it is not a good idea.
  14.   No, you don't throw away extra green; it is just part of the Bayer process.   What I meant is that to have 4:4:4 at 1920x1080 on a Foveon, for example, or a three-CCD design, you need a 1920x1080 x 3 sensor, providing 6,220,800 photosites, or 6 Mpx.   On a Bayer sensor, you don't have that 1-1-1 red-green-blue ratio, but 1-0.5-0.5. So you need to double the resolution in each dimension to get at least one sample of each color per final pixel. That is 8,294,400 photosites, or 8 Mpx, for 1920x1080.
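The photosite arithmetic above checks out as a two-line calculation (sketch only; the 1-0.5-0.5 ratio per pixel is the RGGB mosaic described in the post):

```python
# Photosite counts for full-color 1920x1080 capture.
width, height = 1920, 1080
pixels = width * height

# Three-chip / Foveon-style: one R, one G and one B sample per pixel.
three_chip_photosites = pixels * 3
print(three_chip_photosites)  # 6220800, the "6 Mpx" figure

# Bayer: only 1 G, 0.5 R, 0.5 B per pixel, so doubling each dimension
# (3840x2160 mosaic) restores at least one sample per color per pixel.
bayer_photosites = (width * 2) * (height * 2)
print(bayer_photosites)  # 8294400, the "8 Mpx" figure
```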
  15. To be noted:   In my post about Bayer and 4:2:2, everything I wrote is valid for a 1:1 Bayer-to-final resolution, meaning a 1920x1080 Bayer sensor for a 1080p final resolution.   If you have a higher-resolution Bayer sensor, this changes. Ideally, a Bayer sensor should double (or quadruple) the final resolution, e.g. 3840x2160 for 1080p video. From there, you can get a nice 4:4:4 (slightly oversampled in the green).
  16. Personally, I find the resulting grade awful (I hope this is graded, not the source): too much contrast, saturation all the way up.   The hair of the girl dancing is the best thing in this video. Shot at 1/2000, I like the resulting detail, and you feel the action more.   Personally, I find shooting only at 180 degrees overrated. I much prefer adapting the shutter speed to the actual content/action/subject.
  17.     In fact, this is false headroom; there is no magic here. There is no more range in the C300's data than in the Nikon's (not talking about DR, but the data).   Like Yellow said in a different discussion, Nikon is just using the entire 8-bit range (BT.470), as opposed to Rec.709, which records 16-235 luma.   When you go superwhite (and sometimes superblack), they record full-range 8-bit data (at least in luma) and flag it as Rec.709. Are they using a BT.470 matrix and wrongly flagging it as Rec.709? I don't know.   Maybe Yellow would be able to confirm that, or has a better explanation.
  18. Let's try.   For each photosite in a Bayer pattern, you capture luminosity for a certain region of the spectrum. But this is not luminosity as humans see it (think of putting a red filter on b&w film). So for each block of 4 photosites (RGGB), some luminosity is extrapolated as if it were white light, using (in Rec.709) this formula: Y = 0.2126 R + 0.7152 G + 0.0722 B (http://en.wikipedia.org/wiki/Luma_%28video%29). So the luminosity you captured is not perfect.   What you have captured contains 4 samples of color information.   So for the resulting 4 pixels, we have 4 luma samples (approximated) and 4 chroma samples.   If we wanted to capture real 4:4:4 RGB, we would need, for a block of 4 pixels, 4 greens, 4 reds and 4 blues. As each one provides its part of the spectrum as luminosity, we have 12 samples of luminosity (remember: 4 luma samples for Bayer). (Converted to 4:4:4 YUV, that is 4 samples of luma and 8 of chroma.)   Now let's go from 4:4:4 to 4:2:2. To get 4:2:2, we have to convert to YCbCr, and for both chroma channels we discard half the information. So we get 4 samples of luma and 4 samples of chroma.   4:2:2 YCbCr is 8 samples of combined luma/chroma; Bayer is 4 samples. So Bayer is half as good as 4:2:2.   So why is 4:2:2 from the uncompressed HDMI out worse than RAW from the BMCC? First, because the 4:2:2 you get comes from a Bayer pattern, so all the data has already been converted. But mainly because the raw data has been processed into some preformatted preset from Nikon, Arri, Sony or Hasselblad; it is baked in, with far less control than even the dumbest guy using a RAW converter would have.   So now, is 4:2:2 better than Bayer? Yes, much better. I would not convert any of my work back to a Bayer-encoded file. It is that bad.   Look at what it is without any demosaicing:   What would be the best output, in my opinion, that every camera should have?
Not HDMI or SDI, but a digital raw output protocol, something like S/PDIF or AES/EBU for audio, where you output RAW and do whatever you want with it from your external recorder. First, a 16-bit RAW Bayer stream needs about the same bandwidth as 8-bit 4:2:2 uncompressed. You would also be able to convert on the fly using a standard or proprietary algorithm (think Fujifilm), and to whatever codec you need...
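The sample counting in the post above can be written out as a short sketch, together with the Rec.709 luma formula it quotes (the per-2x2-block counts are just the ratios from the post, nothing measured):

```python
# Rec.709 luma weighting: Y = 0.2126 R + 0.7152 G + 0.0722 B.
def rec709_luma(r, g, b):
    # Green dominates the luma; blue contributes least.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Samples captured/stored per 2x2 block of output pixels:
bayer_samples = 4            # one filtered value per photosite (R, G, G, B)
yuv444_samples = 4 + 4 + 4   # 4 Y + 4 Cb + 4 Cr
yuv422_samples = 4 + 2 + 2   # chroma halved horizontally

print(bayer_samples, yuv422_samples, yuv444_samples)  # 4 8 12

# The weights sum to 1, so pure white maps to full luma,
# and green alone already carries 0.7152 of it.
print(rec709_luma(0, 1, 0))
```

The 4 vs 8 comparison is exactly the "Bayer is half as good as 4:2:2" claim, at 1:1 sensor-to-output resolution.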
  19.  A few clarifications are needed.   Uncompressed by no means implies RGB and full resolution. Uncompressed is a codec with raw data and without lossless compression (zip or similar); think a regular text file, a .bmp, or a .wav. An uncompressed video codec can be YUV 4:2:2. If you are using an uncompressed codec, you can re-encode it millions of times without ever losing any information from the source.   Then there are lossless codecs. As with uncompressed codecs, you never lose information. The difference? Lossless compression. Think a zipped file, a .flac audio file, etc.   And finally, lossy compression: every time you compress with it, you lose a generation.   Just to be clear, if you have a 4:4:4 video file and you encode it to uncompressed 4:2:2 (or lossless 4:2:2), some information is discarded, but it is like saying you lose information by exporting a 5.1 24-bit 96 kHz audio file to 16-bit 44.1 kHz stereo: you choose to discard some data. But from there you can open the file, resave it as AIFF in the same format, feed it as PCM through S/PDIF, and you still don't lose information.   Like I wrote in a previous post, 4:2:0 is not a half-resolution blue channel and a quarter-resolution red channel. Both chroma channels, neither of which is blue or red, have the same sampling: a quarter of the resolution.   As for the channels, in YCbCr, Cb is a yellow/purple saturation channel and Cr is a kind of turquoise/pink saturation channel. Here is an example from http://en.wikipedia.org/wiki/YCbCr   As for practical fixes, I would suggest:
- Never sharpen in RGB; sharpen only the luma channel. Sadly, most NLEs do not include a luma-only sharpener.
- Slightly blur or smooth the chroma channels, or just the problematic region. That is an easy way to get rid of color moire without affecting resolution.
- Grade mostly in luma/chroma. Much better results.
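The "touch luma and chroma separately" advice above can be sketched per pixel. A sketch under assumptions: this uses the plain full-range JFIF-style BT.601 matrix for simplicity, whereas real footage would need its actual matrix and range; the two sample pixel values are made up.

```python
# Convert RGB <-> YCbCr so luma and chroma can be processed independently.
# Full-range JFIF-style BT.601 matrix, for illustration only.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b

# Smoothing chroma (here: averaging Cb/Cr with a neighbour) leaves luma
# untouched, so perceived sharpness survives while color moire averages out.
y1, cb1, cr1 = rgb_to_ycbcr(0.8, 0.2, 0.2)   # reddish pixel
y2, cb2, cr2 = rgb_to_ycbcr(0.2, 0.8, 0.2)   # greenish neighbour
smoothed = ycbcr_to_rgb(y1, (cb1 + cb2) / 2, (cr1 + cr2) / 2)
print([round(c, 4) for c in smoothed])
```

Sharpening would work the same way in reverse: apply the kernel to Y only, then convert back.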
  20.   Image compression is nothing like what you described.
  21.   Sampling from the sensor is not done in 4:2:2 or anything like that. It is sampled from RGB cells in a Bayer pattern. From demosaicing you get 4:4:4 RGB (even though you only have 2 G, 1 R, 1 B; it would be more like a "4:2:2 RGB", but that doesn't exist).   From that demosaiced 4:4:4 RGB, they matrix it to YUV (in fact YCbCr) to get 1 luma channel and 2 chroma channels. This is what is held in those codecs, not RGB.   After that, they may downsample the chroma channels to 4:2:2, 4:2:0, 4:1:1, or something else.   And after that come block splitting, DCT, etc...
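The chroma-downsampling step in that pipeline can be sketched in a few lines. This is a minimal illustration of 4:4:4 to 4:2:0 by 2x2 averaging (one common choice; real encoders may use different filters and siting), on a made-up chroma plane:

```python
# 4:4:4 -> 4:2:0: average each 2x2 block of a chroma plane.
def subsample_420(plane):
    """plane: list of equal-length rows with even dimensions."""
    out = []
    for y in range(0, len(plane), 2):
        row = []
        for x in range(0, len(plane[0]), 2):
            block = (plane[y][x] + plane[y][x + 1] +
                     plane[y + 1][x] + plane[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

cb = [[0, 4, 8, 12],
      [0, 4, 8, 12],
      [2, 2, 2, 2],
      [2, 2, 2, 2]]
print(subsample_420(cb))  # [[2.0, 10.0], [2.0, 2.0]]
```

Luma is left at full resolution; only the two chroma planes go through this, which is why the sampling is written 4:2:0.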
  22.   Let's not go very far: JPEG can be 4:4:4, or can use chroma subsampling (http://en.wikipedia.org/wiki/JPEG ). Is JPEG 4:4:4 lossless? You tell me.   ProRes 4:4:4 is not lossless either.   You're mixing up chroma subsampling with compression. Two different things.
  23.     There is a lot mixed up here. Chroma subsampling has nothing to do with uncompressed or lossless... You can have a lossy 4:4:4 codec and a lossless 4:2:0 codec.
  24.   Thanks, I looked at it at home.   I had to convert the ProRes with ffmpeg to uncompressed 4:4:4, as the ProRes decoding in After Effects smoothed the chroma.   It really looks like 4:2:2: sharp 2x1-pixel blocks in the U and V channels. It did not look like 4:2:0 upscaled to 4:2:2.   Too bad I didn't have time to take some grabs.   So unless another expert on the forum can prove otherwise, the HDMI out of the D5200 is 4:2:2. This is really nice.
  25.   Just opened your image in After Effects and changed the channels to YUV.   The internal H.264 recording is clearly 4:2:0. As for the ProRes version, it is difficult to tell, as the chroma changes on every pixel. I would not be surprised if Apple is cheating by applying some chroma smoothing in their ProRes decoding.
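The eyeball test described in these last two posts can be sketched as code: look at a decoded chroma plane and check whether values repeat in 2x2 blocks (4:2:0) or only in 2x1 pairs (4:2:2). A sketch on made-up planes; real footage would need the chroma extracted without any decoder smoothing first, which is exactly the After Effects problem mentioned above.

```python
# Heuristic: a chroma plane upsampled from 4:2:0 without filtering has one
# value per 2x2 block; from 4:2:2, one value per 2x1 pair only.
def looks_like_420(plane):
    """True if every 2x2 block of the chroma plane holds a single value."""
    for y in range(0, len(plane) - 1, 2):
        for x in range(0, len(plane[0]) - 1, 2):
            block = {plane[y][x], plane[y][x + 1],
                     plane[y + 1][x], plane[y + 1][x + 1]}
            if len(block) > 1:
                return False
    return True

plane_420 = [[5, 5, 9, 9],   # constant over 2x2 blocks
             [5, 5, 9, 9]]
plane_422 = [[5, 5, 9, 9],   # constant over 2x1 pairs only
             [7, 7, 3, 3]]
print(looks_like_420(plane_420), looks_like_420(plane_422))  # True False
```

Any decoder-side chroma smoothing breaks the clean block edges this relies on, which is why the ProRes case above was hard to call.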