Everything posted by P337

  1. That's interesting, I didn't realize how destructive H.264 could be to color depth. Sorry to put you on the spot, but I'd like to see some links or examples that back that claim (I'll also Google it). Thanks hmcindie!
  2. Ok, good question, so let's figure this out. Say you've resampled the 4K/UHD to HD, cropped to 720p, then resampled back up to HD. Scaling from 1280x720 to 1920x1080 needs to add one new pixel after every two pixels (both horizontally and vertically), which means every third pixel is interpolated in post rather than "recorded" (but remember, every pixel from a Bayer sensor is interpolated at some point; what matters is how accurate the interpolation method is). The good news is that with a decent scaling algorithm this will still be a 4:4:4 image, since you aren't simply copying pixels but sampling from a group of 4:4:4 source pixels, which gives them a very high chance of accuracy. I'm not saying there will be no quality loss vs source 4:4:4 HD footage, because this still requires every third pixel in the final image to be estimated, which means it could get it wrong from time to time, but any artifacts or softness would be minimal. Compare this method to how a "recorded" pixel is created from a Bayer-pattern sensor: in the camera, a 3x3 block (9 pixels) of red, green OR blue "pixels" (sensels, really) is sampled to interpolate one pixel, while the pixels being created here in post are interpolated by sampling a 4x4 block (16 pixels) of full RGB pixels.

     Now instead let's say you've cropped the 4K/UHD to 2560x1440 (aka 2.5K, about two thirds of the frame) and then resampled that to HD. (First off, if you did any reframing in 2.5K before resampling to HD, try to move in multiples of 4 pixels (4, 8, 12, 16, 20, etc.) from the center, because you're in a 4:2:0 environment that you plan to scale down; if not, your results could be slightly worse, but probably not noticeably.) Cropping UHD to 2.5K makes no change to the chroma subsampling or bit depth, since all you've done is delete the outer pixels, but when you resample 2.5K to HD you are now removing one pixel in every four (both horizontally and vertically). There isn't enough data there to sample up to a 4:2:2 subsampling pattern, so the chroma will be resampled back to 4:2:0 and will likely pick up extra aliasing. The same goes for the bit depth: it might allow up to 9-bit precision for every other pixel, but that isn't enough to call the entire image 9-bit; I guess you could call it "8.5-bit" if you like :) In the end what you're talking about here is an "8.5-bit" 4:2:0 1080p image (with extra aliasing) vs a slightly inaccurate 10-bit 4:4:4 1080p image.
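     Here's a minimal sketch of the clean 2:1 case (assuming numpy; the frame is random stand-in data, not real footage): four 8-bit source pixels are averaged into one output pixel, so the result can land on quarter-steps between 8-bit codes (up to 10-bit precision), and every output pixel carries full RGB (4:4:4).

        import numpy as np

        # Stand-in 8-bit UHD frame (random data, just for illustration).
        uhd = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)

        # Average each non-overlapping 2x2 block in float to keep the fractions.
        hd = uhd.astype(np.float32).reshape(1080, 2, 1920, 2, 3).mean(axis=(1, 3))

        # The quarter-steps map exactly onto 10-bit codes 0..1023.
        hd10 = np.round(hd * 4).astype(np.uint16)
        print(hd10.shape, hd10.max() <= 1023)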
  3. No, unfortunately when you crop an image you're straight up throwing away pixels and cutting down resolution. Once you crop the image past 3840x2160 you lose that 2:1 pixel scale ratio to 1920x1080, which negates any gains found in resampling "4K"/UHD to HD. You should resample the UHD to HD with Lanczos first (you want that improved image, so make sure to render it in 10-bit), then crop that down to 1280x720. That would get you perfect 720p footage with the ability to reframe. Then, if you're feeling lucky, you can try to up-res that back to HD with a Mitchell interpolation algorithm (render this in 14-bit if you can); at least that would keep the artifacts to a minimum, but I'd be happier with the reframed 720p image.
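     A minimal sketch of that order of operations, assuming Pillow (any scaler with a Lanczos filter will do; Pillow has no Mitchell filter, so BICUBIC stands in for the final up-res, and "uhd_frame.png" is a hypothetical source still):

        from PIL import Image

        uhd = Image.open("uhd_frame.png")              # 3840x2160 source
        hd = uhd.resize((1920, 1080), Image.LANCZOS)   # clean 2:1 downsample first

        # Reframe: crop any 1280x720 window out of the HD image (centered here).
        left, top = (1920 - 1280) // 2, (1080 - 720) // 2
        hd720 = hd.crop((left, top, left + 1280, top + 720))
        hd720.save("reframed_720p.png")

        # Optional, if you're feeling lucky: up-res 720p back to HD.
        hd_again = hd720.resize((1920, 1080), Image.BICUBIC)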
  4. Right, well a proper de-squeeze stretches the image horizontally, which "duplicates" pixels (1920x1080 to 3840x1080). However, you could also just "squash" the image vertically (to 1920x540), which "resamples" the vertical pixels. The "squashing" technique should "technically" result in a 4:2:2 9-bit image, that is, if you're averaging the pixels down rather than just deleting them; use a sinc filter like Lanczos.
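     A minimal numpy sketch of the squash (plain pair-averaging as a crude stand-in for a proper sinc/Lanczos filter; the frame is random stand-in data): averaging vertical pixel pairs lets values fall on half-steps between 8-bit codes, which is where the 9-bit figure comes from.

        import numpy as np

        hd = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

        # Average the rows in non-overlapping pairs: 1920x1080 -> 1920x540.
        squashed = hd.astype(np.float32).reshape(540, 2, 1920, 3).mean(axis=1)

        # The half-steps map exactly onto 9-bit codes 0..510.
        squashed9 = np.round(squashed * 2).astype(np.uint16)
        print(squashed9.shape)  # (540, 1920, 3)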
  5. Nah, an anamorphic de-squeeze actually adds pixels rather than replacing or resampling them, and since it's essentially duplicating the already-recorded pixels over to the right it would still have the same bit depth/color depth, color space and dynamic range/latitude. Someone might make the argument of a perceived difference, but the numbers are the numbers and nothing "new" is created in the process. Actually, instead of simply duplicating, you could interpolate the "new" pixels with something like a Mitchell filter (which samples the surrounding 16 "source pixels" to make one new pixel), which, "technically speaking", gives an 8-bit source a chance of achieving UP TO "14-bit per channel accuracy" (more likely 11-bit) for every other horizontal pixel LOL; but hey, it could make a difference to the perceived bit depth and dynamic range :D
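     A minimal numpy sketch contrasting the two options (random stand-in data; simple linear interpolation of the in-between columns stands in for a proper Mitchell filter, which would sample a 4x4 neighbourhood):

        import numpy as np

        anamorphic = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

        # Option 1: duplicate every column -> 3840x1080, same bit depth as source.
        duplicated = np.repeat(anamorphic, 2, axis=1)

        # Option 2: interpolate each new column from its two neighbours.
        f = anamorphic.astype(np.float32)
        between = (f[:, :-1] + f[:, 1:]) / 2           # the new in-between columns
        stretched = np.empty((1080, 3839, 3), np.float32)
        stretched[:, 0::2] = f                         # original columns
        stretched[:, 1::2] = between                   # interpolated columns
        print(duplicated.shape, stretched.shape)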
  6. Ah, that's interesting tosvus. So if the GH4 is like the GH3 it will use a 12-bit A/D converter, and if its video processing is anything like most cameras, the algorithm takes 12-bit values from the sensor and then "rounds" those to 8-bit values. The question then is what the "round up" threshold in Panasonic's algorithm is. To do what you suggest would require Panasonic engineers to use a variable threshold; (**WARNING TECHNICAL RANT AHEAD**) first they would have to break the pixels into 2x2 blocks, then round the 12-bit values of (at least) every "top left pixel" to 10-bit before they can set the "round up" threshold for the rest of the pixels (every "top right pixel" and the two "bottom left and right" pixels) in relation to the 10-bit value of the "top left pixel" for every 2x2 block, and only then round everything to 8-bit. (**/RANT**) Basically that means they would have to do a lot of extra work when programming the algorithm, specifically so that 8-bit reconstructs to 10-bit when downsizing 4K to HD in post. I think it's far more likely they just set the "round up" threshold to a static .5 or higher, which would mean we're back to the 25-100% chance of 10-bit accuracy, though the statistical law of large numbers would say we can expect a 50% chance of accuracy, which could be considered similar to 9-bit and is still twice the accuracy of 8-bit.

     tupp, are you talking about "perceived" color depth? That would factor in viewing distance and resolution. I agree that the perceived color depth would have broad characteristics, but from a technical standpoint Bit Depth is bits per channel and Color Depth is bits per pixel. For example, I would say an image's bit depth is 8 bits per color, which would make it a 24-bit color depth image, because the 8 bits for each color (RGB) would be 8 bits x 3 channels for each pixel. From this technical standpoint, though, Bit Depth will not always equal Color Depth: you could have an image with the same 8-bit Bit Depth but a 32-bit Color Depth when using an alpha channel, because then the equation changes to 8 bits x 4 channels (RGBA). Either way, the bottom line is how many possible shades the image can contain; by that definition anything capable of 1,024 shades per channel would be considered a 10-bit image (even if it isn't using the same precision as a true 10-bit source), so 8-bit 4K would "technically" scale to 10-bit HD, but it's unlikely it would reproduce colors as accurately as a proper 10-bit image.
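     A minimal numpy sketch of the static-threshold guess (toy random 12-bit values; nothing here is Panasonic's actual pipeline, just the "round half up" idea):

        import numpy as np

        sensor12 = np.random.randint(0, 4096, (4, 4), dtype=np.uint16)  # toy 12-bit values

        # Static "round half up" threshold: scale 12-bit down to 8-bit codes.
        eight = np.clip(np.floor(sensor12 / 16 + 0.5), 0, 255).astype(np.uint8)

        # Averaging a 2x2 block of those 8-bit codes in post can land between
        # 8-bit steps again, which is where the ~9-bit expectation comes from.
        block_avg = eight[:2, :2].astype(np.float32).mean()
        print(eight, block_avg)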
  7. I like the analogy of bit depth being like rulers for this argument. I see a "10-bit HD" source as one person producing a measurement in quarter centimeters, while "8-bit 4K to HD" is like four different people measuring in whole centimeters and then averaging their measurements. For example, let's say they have to measure a subject that is 3.25 cm. The guy measuring in quarter centimeters easily and accurately produces the measurement of 3 1/4 cm. The four guys measuring in whole centimeters each have to choose between 3 cm and 4 cm; if three of the four measure it as 3 cm and one guy measures it as 4 cm, then their averaged result would be accurate at 3 1/4 cm. But if they ended up with a different set of initial measurements, they could produce an inaccurate result of 3 1/2 cm or 3 3/4 cm. So it is possible to downsample 8-bit 4K and gain more accurate color depth similar to 10-bit HD, but you are introducing an opportunity for slightly inaccurate results compared to an actual 10-bit source; in fact each pixel has only a 1 in 4 chance of being accurately sampled to 10-bit, and this would be noticed on the edges of colored objects and in gradients. I don't think it's worth it as a 10-bit replacement, but it's definitely worth it for 4:4:4, and hey, it's better than 8-bit, so if you're stuck with it I would downsample it. Bit-rate compression is another story, as 100Mbps is not enough for 4:4:4 HD. Luckily we are bloating to 4:4:4 after recording, so we are able to reset the bit-rate, allowing adequate room for the extra information to keep the same detail as the original; I would suggest 400Mbps at least, up to 1.6Gbps if you really think it's 10-bit.
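     The ruler analogy in plain numbers (a toy illustration, nothing more):

        # Four "whole centimeter" (8-bit-like) measurements of a 3.25 cm subject.
        measurements_good = [3, 3, 3, 4]   # three round down, one rounds up
        measurements_bad = [3, 4, 4, 4]    # a different, equally plausible split
        print(sum(measurements_good) / 4)  # 3.25 -> the accurate "10-bit" result
        print(sum(measurements_bad) / 4)   # 3.75 -> plausible but inaccurate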
  8. http://www.magiclantern.fm/forum/index.php?topic=2764.0 This is interesting (to me): the encoder feed is 1720x974 (550D) and 1904x1072 (5D3); for the 600D, the sizes are 1728x972 (crop) and 1680x945 (non-crop). So the T3i was actually only as good as the T2i in its 3x crop mode?? WOW. I'm afraid to hear the numbers for the newer Rebels, and isn't that 1680x945 number very familiar (the 7D's HDMI-out image resolution)? I bet the 7D was just as bad as the T3i.
  9. What needs to be understood is that these "2K" DNGs actually have about 1920x1080 pixels of "actual image"; the rest is black bars. So far each model seems to have a set width for the "actual image", but the image height can be changed by setting the camera's recording resolution in Canon's menus. Also, in the 5x and 10x modes the width can be increased but the height is about cut in half. Furthermore, the 1080p and 720p modes seem to use the same "actual image" size of about 1920x1080, but the DNGs also mimic the frame rate, so in 720p mode you get a usable 1080p60. However, the memory buffer to your storage card is limited to about 30 frames, so that's about 1 second for most frame rates and half a second for 60 fps.

     Here are a few examples of "usable image" sizes:
     60D in 1080/720 = 1736 x 1154
     60D in 480p = 1736 x 694 (almost 720p but needs a little up-resing in post)
     60D in 5x = 2520 x 1764
     60D in 10x = 2520 x 1080
     5D3 in 1080 = 1931 x 1288
     5D3 in 5x = 3592 x 1320 (a bit of a crop factor though, Andrew said about 1.3x)
     5D2 in 1080 = 1880 x 1248

     There have been very few advancements: 1% tried 720p 24fps on the 6D and only got 54 frames. That is about 20 more frames, a bit over 2 seconds, and so far the longest 24fps recording on record. They have also found that setting the camera to "JPEG" (not RAW+JPEG or RAW) increases the frame buffer to about 50, but it won't compress the DNG. The problem is that the buffer simply isn't fast enough, and as I said before, they will have to compress the DNG much like BlackMagic does. Alex just threw in a quick DNG converter to see if it would work; it does, but it was designed by CHDK some time ago and is based on a previous version of the spec that didn't support compression. If they can implement the newer DNG spec they might be able to add compression, but even at a basic 3:1 compression ratio that's only 3-6 seconds before the buffer fills again.
  10. They should try the 7D; it has the biggest/fastest buffer.
  11. Actually, the 5D2 and 5D3 have the same buffer sizes, but the 5D3 supports UDMA 7 (1000x CF cards). They are currently performing active tests on the 5D3, 5D2, 6D, 60D and 600D.
  12. Just to clear some stuff up here:
     1. Only about a 1920x1080 crop of each 2040x1428 file is the actual image; the rest is black bars. (This is on the 5Ds; the others are different, e.g. the 6D seems to be less, and the 60D in 5x mode is a "usable 2520x1080".)
     2. DNG is not debayered. I don't know about CHDK's DNG converter that Alex has implemented into Magic Lantern, but I bet that, like Adobe CinemaDNG and DNG, it needs debayering in post. "Images from most modern digital cameras need to be debayered to be viewed..."
     3. Magic Lantern's YUV422 video recorder/silent pic is different from their new 14-bit DNG RAW silent pics (though they are both based on saving an image from the live view buffer).
     4. H.264, YUV422 and HDMI are all debayered and interpolated in camera; that's why they are no longer RAW.
     5. Right now 14-bit DNG can record a 1080p usable resolution at all frame rates (24p-60p), but only about 30 of these 5.09MB DNG files (142.7 MBs) can be written before the buffer to the CF or SD card fills (the buffer is supposedly 150MB); that's 1 second for 24p-30p or half a second for 60p. The answer seems to be in cropping the file down to just the usable image before sending it to the card and/or adding some relatively light compression to get stable 24p video. 1% said that he got up to 50 frames at a 720p resolution at 24fps, which is the longest on record so far for 24p.
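     A quick sanity check of the buffer math in point 5, using the post's own approximate numbers:

        buffer_mb, frame_mb = 150, 5.09
        frames = int(buffer_mb / frame_mb)   # ~29 frames before the buffer fills
        for fps in (24, 30, 60):
            print(fps, "fps ->", round(frames / fps, 2), "seconds")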
  13. It's true that the differences between 4:2:2 and 4:2:0 are hard to see in the end result, but it helps keep everything together in post. However, you should see more of a difference in an uncompressed signal vs 24-50Mbps AVC (IPB). What was the recording bit-rate for this video? Though in most situations I would still prefer the smaller AVC file sizes.
  14. If this ever gets to 24fps it's obviously going to have bandwidth and storage issues; its only use would be for quick "b-roll"/"cut away" shots for any type of filming. I doubt it would be as useful as a BlackMagic for any other situation. I wonder if this would work better on a camera with a bigger buffer, like the 7D or 1D mk4.
  15. I agree, it looks like the G6 hardware has the potential to do better than the GH2 and maybe even the GH3 (moire, mostly), but that doesn't mean Panasonic won't hold it back in some other way. I'm trying to stay reasonable and resist the urge to dream about 4:2:2 clean HDMI or a high bit-rate hack within a year of its release, but despite what I've seen from those auto/poorly-exposed YouTube "reviews" I'm hopeful that we will get decent quality with built-in focus peaking :D
  16. Panasonic apparently pixel-bins 2x2 pixels to reduce the sensor's full resolution to 1/4 before reading the sensor (approximately 12Mp to 3Mp for the GH3), then further reduces that when processing to "FullHD" (about 2Mp). - http://www.imaging-resource.com/news/2012/09/28/qa-with-panasonic-the-story-behind-the-new-gh3-and-compact-system-tech

     Also, Panasonic claimed disabling the multi-aspect feature of the sensor helps them improve their sensor reads, enabling 1080p60 as well as a cleaner image. This makes sense, since the GH2 had to down-res approximately 3.5Mp to 2Mp with an older processor, while the G6 will be converting 3Mp to 2Mp with a newly developed processor designed to further increase signal-to-noise, preserve detail in noise reduction and widen dynamic range. Check out the G5, since it also used the GH2 sensor with supposedly better results, and it did it without this new processor. (I'm still trying to track down where I read all this, so take it with a grain of salt.)

     Can't help but think this could be Panasonic trying to show off what they can do with their own hardware technology vs the Sony sensor in the GH3; judging by the specs and price, Panasonic seems to be OK with "cannibalizing" some GH3 sales to make this point.
  17. Everyone seems to be preoccupied bickering over YouTube videos, so I did some digging, and one thing I found is that the GH3 incorporates 8x8 and 4x4 pixel macroblocks into all of its intra- and inter-frame compression, whereas the GH1 and GH2 only used 4x4 on everything. As I understand it, this helps the GH3 be more efficient with bandwidth and supposedly helps retain fine-detail areas, so if this was carried over, then the G6 should have the potential for slightly better detail than the GH2.
  18. @Andrew may be in the best position to assess this, since he just finished the video section of DPReview's GH3 review, where he compared it to the GH2, but I leave this question open to anyone's speculation. Since no one who actually had hands on this camera did any proper video tests (DPReview should have sent it to EOSHD lol), we are forced to estimate quality until those tests are available. But if we assume it has a sensor just as good as the GH2/G5 and a processor as good as the GH3, which do you expect the video quality to be closer to: GH3 or GH2? Specifically, my worries are going back to the noise levels of the GH2, seeing the banding return, and all this stuff. Let me put this question another way: do you think the issues of the GH2 compared to the GH3 mostly lie in the sensor or the processor of the camera? Aside from that, wouldn't an increase of bit-rate solve everything? Since they both use H.264 in (I assume) IPB, does the GH2 at 50Mbps equal the GH3 at 50Mbps? And isn't there an Intra (ALL-I) hack for the GH2 as well?
  19. I made a huge post about this in the original Pocket thread: http://www.eoshd.com/comments/topic/2461-blackmagic-pocket-cinema-camera/page-3
  20. Nice! Thanks for this, Julian. Also, anyone with access to questionable lenses and a Canon Rebel T3i (600D) can check them in that camera's 3x digital movie crop mode, which supposedly crops the actual sensor down to about a Super 16 size; it might not be exactly the same as the PocketCC though.
  21. The peaking looks good, hope it's adjustable though, and hopefully BlackMagic will enable CinemaDNG on their external HDMI recorder. Or is the HDMI out no longer considered RAW? I know it's uncompressed over HDMI, but will it be 12-bit 1080p30, and do they have to de-Bayer the signal?
  22. One recently popular example of Super 16 is The Walking Dead on AMC. Cinematographer David Boyd said they chose Super 16 for the increased speed over 35mm and to give it a reality/documentary feel. They also considered digital but decided they wanted the S16 film's "texture" and grain, which are two things we wouldn't get out of the PocketCC but can easily be added in post. So far I've noticed, when watching the show, the actors seem to have a lot of "walking/running around space"; I assume with S16 you can keep the filming crew back out of the way while still getting a tight shot more easily. They also do a lot of what look like outdoor on-location scenes in daylight with a lot of running around, which I also assume is easier with the smaller S16 cameras. Also, I've noticed most cine prime sets (which are meant for "Super 35" ~28.2mm) typically run 24mm-35mm-50mm-85mm, which would give a similar view to a 12mm-17mm-24mm-40mm set on "Super 16" ~14.3mm.
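     The focal-length math behind that last point, as a quick sketch (just scaling by the ratio of the gate widths quoted above):

        # Rough Super 16 equivalents for a Super 35 cine prime set.
        S35, S16 = 28.2, 14.3        # gate widths in mm, from the post
        for f in (24, 35, 50, 85):
            print(f, "mm on S35 ~", round(f * S16 / S35, 1), "mm on S16")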
  23. The good thing is that the PocketCC has a Micro HDMI port, which I think means it has HDMI 1.4 or later. That supports up to a 16-bit signal, 4K@24p, QHD@30p and even 3D FullHD@24p; the newest 1.4b even supports FullHD@120p. So we should have no trouble getting a completely uncompressed 12-bit 1080p30 signal as well. But does it de-Bayer the image for HDMI output? Because that would mean it is no longer RAW. Either way, I still think internal ProRes and waiting for a cheaper card is the way to go. Also, that 3D spec is interesting; doesn't Panasonic have a 3D MFT lens too? I wonder how that works.
  24. Grabbed this list of C-mount lenses for the PocketCC from another forum:
  25. Adding to this and summarizing my personal assessment: right now I use DSLRs, and it costs me about $700 (cards, batteries, accessories and a lens) to get each one up and running, but getting the PocketCC to run at an equal level, to replace one of my A or B cams, is proving to be quite a bit more expensive...

     First, I figured getting a PocketCC up and running for my work as an A cam or B cam would cost me about $150 in cards (since none of my current SD cards are fast enough).

     Second, I would need to buy a new "ultra wide" lens, as my 24mm will now be a 70mm; I've narrowed my choices to either a $500 Tamron 17-50/2.8 VC (and hope a "smart" M4/3-to-EF adapter comes out soon) or the $1300 Panasonic 12-35/2.8.

     Third, I would need more batteries; to last about 2 hours of on time I would need at least one more EN-EL20 battery, which is around $50.

     Fourth, none of my LCD loupes fit a 3.5" 16:9 screen; the only one I found that covers 3.5" is the Varavon MultiView, but it's $300 and huge! ...and ugly.

     Fifth, I might need a new "project storage system", since my current setup is meant for 100GB projects and it looks like those projects could potentially be at least 2.5x larger with the ProRes HQ option alone; that will likely be another $100.

     So to incorporate it I'm likely looking at another $1,100 - $2,000, potentially the same cost range as adding a 6D or D600 or GH3 or NEX VG30 or D7100, if not more... but the PocketCC forces me to get ready for a "RAW/low compression" workflow, which I think was BlackMagic's point and not really a bad idea for me. These numbers may not apply to everyone, and of course you don't need to buy all this stuff with the purchase of a PocketCC; technically it will "work" with as little as a $13 SD card and a $100 lens, but that just makes it a suitable toy rather than a tool, which is perfectly fine if this is for a hobby or you're using it to learn on, but if you need this to make you money, prepare to spend a bit more than $995.