P337

  1. That's interesting, I didn't realize how destructive h.264 could be to color depth. Sorry to put you on the spot, but I'd like to see some links or examples that back that claim (I'll also google it). Thanks hmcindie!
  2. Ok, good question, so let's figure this out. Say you've resampled the 4K/UHD to HD, cropped to 720, then resampled back up to HD. Scaling from 1280x720 to 1920x1080 adds 1 new pixel after every 2 pixels (both horizontally and vertically), which means every 3rd pixel is interpolated in post rather than "recorded" (but remember every pixel from a bayer sensor is interpolated at some point; what matters is how accurate the interpolation method is). The good news is that with a decent scaling algorithm this will still be a 4:4:4 image, since you aren't simply copying pixels but sampling from a group of 4:4:4 source pixels, which gives them a very high chance of accuracy. I'm not saying there will be no quality loss vs source 4:4:4 HD footage, because this still requires every 3rd pixel in the final image to be estimated, which means it could get things wrong from time to time, but any artifacts or softness would be minimal. Compare this method to how a "recorded" pixel is created from a bayer pattern sensor: in the camera a 3x3 block (9 pixels) of Red, Green OR Blue "pixels" (sensels, really) is sampled to interpolate one pixel, while the pixels being created here in post are interpolated by sampling a 4x4 block (16 pixels) of full RGB pixels. (See the sketch below.)
     Now instead let's say you've cropped 4K/UHD to 75% (2560x1440, aka 2.5K) and then resampled that to HD. (First off, if you do any reframing in 2.5K before resampling to HD, try to do it in steps of 4 pixels (4, 8, 12, 16, 20, etc.) from the center, because you're in a 4:2:0 environment that you plan to scale down; if not, your results could be slightly worse, though probably not noticeably.) Cropping UHD to 2.5K makes no change to chroma subsampling or bit depth, since all you've done is delete the outer pixels, but when you resample 2.5K to HD you're removing one pixel out of every four (both horizontally and vertically). There isn't enough data here to sample up to a 4:2:2 subsampling pattern, so the chroma will be resampled back to 4:2:0 and will likely pick up extra aliasing. The same goes for bit depth: it might allow up to 9bit precision for every other pixel, but that isn't enough to call the entire image 9bit, though I guess you could call it 8.5bit if you like :)
     In the end what you're talking about here is an "8.5bit" 4:2:0 1080p image (with extra aliasing) vs a slightly inaccurate 10bit 4:4:4 1080p image.
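Here's a minimal numpy sketch (my own illustration with made-up values, not output from any real scaler) of that "two recorded, one estimated" pattern on a single scanline. A real Lanczos/Mitchell kernel weighs a wider neighbourhood of full-RGB pixels, but the ratio of estimated to recorded pixels is the same:

```python
import numpy as np

# 1.5x upscale of one scanline (1280 -> 1920): keep every recorded
# pixel and insert one estimated pixel after each pair. The estimate
# here is a simple neighbour average standing in for a real kernel.
src = np.linspace(0, 255, 1280)          # stand-in for one 720p scanline
out = np.empty(1920)
out[0::3] = src[0::2]                    # recorded pixel 1 of each pair
out[1::3] = src[1::2]                    # recorded pixel 2 of each pair
neighbours = np.append(src[2::2], src[-1])   # next pair's first pixel (edge clamped)
out[2::3] = (src[1::2] + neighbours) / 2     # every 3rd pixel is estimated
# out now holds: recorded, recorded, estimated, recorded, recorded, estimated, ...
```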
  3. No, unfortunately when you crop an image you're straight up throwing away pixels and cutting down resolution. Once you crop the image past 3840x2160 you lose that 2:1 pixel scale rate to 1920x1080, which negates any gains found in resampling "4K"/UHD to HD. You should resample the UHD to HD with Lanczos first (you want that improved image, so make sure to render it in 10bit), then crop that down to 1280x720. That would get you perfect 720p footage with the ability to reframe. Then, if you're feeling lucky, you can try to up-res that back to HD with a Mitchell interpolation algorithm (render this in 14bit if you can); at least that would keep the artifacts to a minimum, but I'd be happier with the reframed 720p image. (A rough sketch of this order of operations is below.)
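If it helps, here's that order of operations sketched with Pillow (my stand-in tool, with hypothetical file names; Pillow works on 8-bit RGB and has no Mitchell filter, so BICUBIC stands in for the last step, and an NLE would keep the higher bit depths):

```python
from PIL import Image

uhd = Image.open("frame_uhd.png")                           # hypothetical 3840x2160 frame
hd = uhd.resize((1920, 1080), Image.Resampling.LANCZOS)     # 2:1 downsample first

left, top = (1920 - 1280) // 2, (1080 - 720) // 2           # centered reframe
p720 = hd.crop((left, top, left + 1280, top + 720))         # crop the HD result to 720p

back = p720.resize((1920, 1080), Image.Resampling.BICUBIC)  # optional up-res back to HD
back.save("frame_hd_reframed.png")
```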
  4. Right, well a proper de-squeeze stretches the image horizontally, which "duplicates" pixels (1920x1080 to 3840x1080). However, you could also just "squash" the image vertically (1920x540), which "resamples" the vertical pixels. The "squashing" technique should "technically" result in a 4:2:2 9bit image, that is, if you're averaging the pixels down rather than just deleting them; use a sinc filter like Lanczos.
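A quick numpy sketch of the "squash by averaging" idea (random values standing in for a real frame):

```python
import numpy as np

# "Squash" 1920x1080 to 1920x540 by averaging each pair of rows.
# Two 8-bit values average onto half-steps, which is where the ~9bit
# precision claim comes from; keep the result in float/16-bit or the
# extra precision is rounded away again.
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
squashed = frame.reshape(540, 2, 1920).astype(np.float32).mean(axis=1)
print(np.unique(squashed % 1))  # [0.  0.5] -- half-step (9bit-ish) values
```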
  5. Nah, an anamorphic de-squeeze actually adds pixels rather than replacing or resampling them, and since it's essentially duplicating the already recorded pixels over to the right it would still have the same bit depth/color depth, color space, and dynamic range/latitude. Someone might make the argument of a perceived difference, but the numbers are the numbers and nothing "new" is created in the process. Actually, instead of simply duplicating, you could interpolate the "new" pixels with something like a Mitchell filter (which samples the surrounding 16 "source pixels" to make one new pixel); "technically speaking", an 8bit source would then have a chance of achieving UP TO "14bit per channel accuracy" (more likely 11bit) for every other horizontal pixel LOL; but hey, it could make a difference to the perceived bit depth and dynamic range :D
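To see the difference between the two approaches, a toy example (a neighbour average stands in for a real Mitchell kernel; the values are made up):

```python
import numpy as np

# 2x horizontal de-squeeze: duplication (np.repeat) creates no new
# values, so bit depth and colour information are unchanged, while
# interpolating the inserted column creates in-between values for
# every other horizontal pixel.
row = np.array([10, 20, 40, 80], dtype=np.float64)      # toy 8-bit samples
dup = np.repeat(row, 2)                                 # [10 10 20 20 40 40 80 80]
interp = np.empty(8)
interp[0::2] = row                                      # recorded pixels
interp[1::2] = (row + np.append(row[1:], row[-1])) / 2  # estimated pixels
print(dup)     # no new values anywhere
print(interp)  # [10. 15. 20. 30. 40. 60. 80. 80.] -- new in-between values
```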
  6. Ah, that's interesting tosvus. So if the GH4 is like the GH3 it will use a 12bit A/D converter, and if its video processing is anything like most cameras the algorithm takes 12bit values from the sensor and then "rounds" those to 8bit values. The question then is: what's the "round up" threshold in Panasonic's algorithm?
     To do what you suggest would require Panasonic engineers to use a variable threshold. (**WARNING TECHNICAL RANT AHEAD**) First they would have to break the pixels into 2x2 blocks, then round the 12bit values of (at least) every "top left pixel" to 10bit before they can set the "round up" threshold for the rest of the pixels (every "top right pixel" and the two "bottom left and right pixels") in relation to the 10bit value of the "top left pixel" for every 2x2 block of pixels; only then can it round everything to 8bit. (**/RANT**) Basically that means they would have to do a lot of extra work when programming the algorithm, specifically to reconstruct 8bit to 10bit when downsizing 4K to HD in post. I think it's far more likely that they just set the "round up" threshold to a static .5 or higher, which would mean we're back to the 25-100% chance of 10bit accuracy, though the statistical law of large numbers would say we can expect a 50% chance of accuracy, which could be considered similar to 9bit; that's still twice the accuracy of 8bit. (See the sketch below.)
     tupp, are you talking about "perceived" color depth? That would factor in viewing distance and resolution. I agree that the perceived color depth would have broad characteristics, but from a technical standpoint Bit Depth is bits per channel and Color Depth is bits per pixel. For example, I would say an image's bit depth is 8bits per color, which would make it a 24bit color depth image, because the 8bits for each color (RGB) would be 8bits x 3 channels for each pixel. From this technical standpoint Bit Depth will not always equal Color Depth; for example, you could have an image with the same 8bit Bit Depth but a 32bit Color Depth when using an Alpha Channel, because then the equation changes to 8bits x 4 channels (RGBA). Either way, the bottom line is how many possible shades the image can contain; by that definition anything capable of 1,024 shades per channel would be considered a 10bit image (even if it isn't using the same precision as a true 10bit source), so 8bit 4K would "technically" scale to 10bit HD, but it's unlikely it would reproduce colors as accurately as a proper 10bit image.
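Here's a rough simulation of the static-threshold case (my own toy model with random values; real neighbouring pixels are correlated, which changes the odds, so treat the number as illustration only):

```python
import numpy as np

# 12-bit sensor values are rounded straight to 8bit in camera, then
# 2x2 blocks are averaged in post. The averages land on 1/4 steps
# (10bit spacing), but only some of them match what a true 10bit
# recording of the same block would have given.
rng = np.random.default_rng(0)
true12 = rng.integers(0, 4096, (1080, 1920)).astype(np.float64)

rec8 = np.round(true12 / 16)                                    # in-camera 8bit rounding
avg10 = rec8.reshape(540, 2, 960, 2).mean(axis=(1, 3)) * 4      # 2x2 average, 10bit scale
ref10 = (true12 / 4).reshape(540, 2, 960, 2).mean(axis=(1, 3))  # ideal 10bit reference

print(np.mean(np.abs(avg10 - ref10) <= 0.5))  # ~0.6 here: better than 8bit, short of true 10bit
```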
  7. I like the analogy of bit depth being like rulers for this argument. I see a "10bit HD" source as one person having to produce measurements in quarter centimeters, while "8bit 4K to HD" is like 4 different people measuring in whole centimeters and then averaging their measurements. For example, let's say they have to measure a subject that is 3.25cm. The guy measuring in quarter centimeters easily and accurately produces the measurement of 3 1/4cm. The four guys measuring in whole centimeters would each have to choose between 3cm and 4cm; if three of the four measured it as 3cm and one guy measured it as 4cm, then their averaged result would be accurate at 3 1/4cm. But if they ended up with a different set of initial measurements, they could produce an inaccurate result of 3 1/2cm or 3 3/4cm.
     So it is possible to downsample 8bit 4K and gain color depth similar to 10bit HD, but you will be introducing an opportunity for slightly inaccurate results compared to an actual 10bit source; in fact each pixel has only a 1 in 4 chance of being accurately sampled to 10bit, and this would be noticed on the edges of colored objects and in gradients. I don't think it's worth it as a 10bit replacement, but it's definitely worth it for 4:4:4, and hey, it's better than 8bit, so if you're stuck with it I would downsample it.
     Bit-rate compression is another story, as 100Mbps is not enough for 4:4:4 HD. Luckily we are bloating to 4:4:4 after recording, so we are able to reset the bit-rate, allowing adequate room for the extra information to keep the same detail as the original; I would suggest at least 400Mbps, up to 1.6Gbps if you really think it's 10bit.
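Out of curiosity, here's a quick Monte Carlo of that ruler analogy (my own toy model; the exact odds depend entirely on how much the four readings vary, so don't read too much into the number):

```python
import numpy as np

# Four "measurers" see a 3.25cm subject with slight per-pixel
# variation (like a gradient), each rounds to whole centimetres,
# and the average is checked against the true quarter-centimetre value.
rng = np.random.default_rng(1)
trials = 100_000
true = 3.25
samples = true + rng.uniform(-0.5, 0.5, (4, trials))  # slight variation per measurer
rounded = np.round(samples)                           # whole-centimetre readings
avg = rounded.mean(axis=0)                            # quarter-centimetre result

print(np.mean(avg == true))  # ~0.4 with this toy spread; the rest land on 3.0, 3.5, etc.
```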
  8. http://www.magiclantern.fm/forum/index.php?topic=2764.0
     This is interesting (to me): the encoder feed is 1720x974 (550D) and 1904x1072 (5D3). For the 600D, sizes are 1728x972 (crop) and 1680x945 (non-crop).
     So the T3i was actually only as good as the T2i in its 3x crop mode?? WOW. I'm afraid to hear the numbers for the newer Rebels. And isn't that 1680x945 number very familiar (the 7D's HDMI out image resolution)? I bet the 7D was just as bad as the T3i.
  9. What needs to be understood is that these "2K" DNGs actually have about 1920x1080 pixels of "actual image"; the rest is black bars. So far each model seems to have a set width for the "actual image", but the image height can be changed by setting the camera's recording resolution in Canon's menus. Also, in the 5x and 10x modes the width can be increased but the height is about cut in half. Furthermore, the 1080p and 720p modes seem to use the same "actual image size" of about 1920x1080, but the DNGs also mimic the frame-rate, so in 720p mode you get a usable 1080p60. However, the memory buffer to your storage card is limited to about 30 frames, so that's about 1 second for most frame-rates and half a second for 60fps.
     Here are a few examples of "usable image" sizes:
     60D in 1080/720 = 1736 x 1154
     60D in 480p = 1736 x 694 (almost 720p but needs a little up-resing in post)
     60D in 5x = 2520 x 1764
     60D in 10x = 2520 x 1080
     5D3 in 1080 = 1931 x 1288
     5D3 in 5x = 3592 x 1320 (a bit of a crop factor though, Andrew said about 1.3x)
     5D2 in 1080 = 1880 x 1248
     There has been very little advancement. 1% tried 720p 24fps on the 6D and only got 54 frames; that's about 20 more frames (roughly 2 seconds of 24fps) and so far the longest 24fps recording on record. They have also found that turning the camera to "JPEG" (not RAW+JPEG or RAW) increases the frame buffer to about 50 as well, but won't compress the DNGs. The problem is that the buffer simply isn't fast enough and, as I said before, they will have to compress the DNGs much like BlackMagic does.
     Alex just threw in a quick DNG converter to see if it would work. It does, but it was designed by CHDK some time ago and based on a previous version that didn't support compression; if they can implement the newer DNG then they might be able to add compression, but even at a basic 3:1 compression ratio that's only 3-6 seconds before the buffer fills again.
  10. They should try the 7D; it has the biggest/fastest buffer.
  11. Actually the 5D2 and 5D3 have the same buffer sizes, but the 5D3 supports UDMA 7 (1000x CF cards). They are currently performing active tests on the 5D3, 5D2, 6D, 60D and 600D.
  12. Just to clear some stuff up here:
      1. Only about a 1920x1080 crop of each 2040x1428 file is the actual image; the rest is black bars. (This is on the 5Ds; the others are different, e.g. the 6D seems to be less and the 60D in 5x mode is a usable 2520x1080.)
      2. DNG is not debayered. I don't know about CHDK's DNG converter that Alex has implemented into Magic Lantern, but I bet that, like Adobe CinemaDNG and DNG, it needs debayering in post. "Images from most modern digital cameras need to be debayered to be viewed..."
      3. Magic Lantern's YUV422 video recorder/silent pic is different from their new 14bit DNG RAW silent pics (though they are both based on saving an image from the live view buffer).
      4. H.264, YUV422 and HDMI are all debayered and interpolated in camera; that's why they are no longer RAW.
      5. Right now 14bit DNG can record a 1080p usable resolution at all frame rates (24p-60p), but only about 30 of these 5.09MB DNG files (142.7MB) can be written before the buffer to the CF or SD card (which is supposedly 150MB) fills; that's 1 second for 24p-30p or half a second for 60p. The answer seems to be in cropping the file down to just the usable image before sending it to the card and/or adding some relatively light compression to get a stable 24p video. 1% said that he got up to 50 frames at 720p resolution and 24fps, which is the longest on record so far for 24p.
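The buffer math checks out; a quick back-of-envelope using the 5.09MB-per-frame and ~150MB figures above:

```python
# How many frames fit in the buffer, and how long that lasts per frame rate.
frame_mb, buffer_mb = 5.09, 150.0
frames = int(buffer_mb // frame_mb)
for fps in (24, 30, 60):
    print(f"{fps} fps: {frames} frames = {frames / fps:.1f} s")
# ~29 frames -> about 1.2 s at 24p and 0.5 s at 60p, matching the
# ~30-frame / 1-second figure quoted above.
```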
  13. It's true that the difference between 4:2:2 and 4:2:0 is hard to see in the end result, but it helps keep everything together in post. However, you should see more of a difference in an uncompressed signal vs 24-50Mbps AVC (IPB). What was the recording bit-rate for this video? Though in most situations I would still prefer the smaller AVC file sizes.
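For scale, rough uncompressed bit-rates for 8-bit 1080p25 (my own arithmetic: 1.5, 2 and 3 samples per pixel for the three subsampling schemes):

```python
# Uncompressed luma+chroma bit-rates, to compare against 24-50Mbps AVC.
width, height, fps, bits = 1920, 1080, 25, 8
for name, spp in (("4:2:0", 1.5), ("4:2:2", 2.0), ("4:4:4", 3.0)):
    mbps = width * height * spp * bits * fps / 1e6
    print(f"{name}: {mbps:.0f} Mbps")  # ~622 / 829 / 1244 Mbps
```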
  14. If this ever gets to 24fps it's obviously going to have bandwidth and storage issues; its only use would be for quick "b-roll"/"cut away" shots for any type of filming. I doubt it would be as useful as a BlackMagic for any other situation. I wonder if this would work better on a camera with a bigger buffer, like the 7D or 1D mk4.
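Just to put a number on "bandwidth and storage issues", assuming the ~5.09MB DNGs discussed earlier:

```python
# Sustained throughput and storage that 24fps continuous RAW would demand.
frame_mb, fps = 5.09, 24
rate = frame_mb * fps  # MB/s the card would need to sustain
print(f"{rate:.0f} MB/s, {rate * 60 / 1024:.1f} GB per minute")
# ~122 MB/s and ~7.2 GB/min -- well past these cameras' write speeds.
```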
  15. I agree, it looks like the G6 hardware has the potential to do better than the GH2 and maybe even the GH3 (moire, mostly), but that doesn't mean Panasonic won't hold it back in some other way. I'm trying to stay reasonable and resist the urge to dream about 4:2:2 clean HDMI or a high bit-rate hack within a year of its release, but despite what I've seen in those auto/poorly-exposed youtube "reviews" I'm hopeful that we will get decent quality with built-in focus peaking :D