Exploring Nikon D5200 HDMI output - review update


Andrew Reid


I don't even understand why the question is so difficult to answer. If the 'capture format' is known, that defines the outer limit of the color space, with 4:4:4 being 100% uncompressed. The minute you apply compression, you can bet that color compression is going to be one of the losses, hence 4:2:2, 4:2:0 etc.

 

One of the reasons it's difficult is that the human eye is not very sensitive to color. You can spot the difference when doing greenscreen work, because the green channel is suddenly 1/4 the resolution of the luma channel; 4:2:2 brings that up to half resolution. But you won't spot the difference easily otherwise, except in motion graphics with clean lines and strong chroma, where it can actually be quite a distraction. DVDs suffered badly from 4:2:0 artifacting; The Abyss, with its huge reds and blues, really suffered from it on DVD. But in HD the effect is quite a bit smaller, because when you increase the resolution from 720x576 to 1920x1080, you also increase the chroma resolution.

 

You can also get 4:4:4 from just scaling down a high-res 4:2:0 file. Though basically EVERYTHING on the internet, on Blu-ray and anywhere you project stuff is only 4:2:0. So this just helps in grading (if you do masks based on chroma, and even then it only helps on the edges) or greenscreen.

 

So a lot of people have misconceptions about 4:2:2 and the like, because the difference is not that easy to spot. Also, if you have an original 4:4:4 file and just convert it to 4:2:0, it will have no benefit at all compared to shooting 4:2:0 originally, unless you do specific chroma-related work.
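The downscaling trick mentioned above can be sketched in a few lines. A minimal illustration with NumPy (the frame sizes are hypothetical, and a real resizer would use proper filtering rather than 2x2 averaging):

```python
import numpy as np

# Hypothetical 4K (3840x2160) 4:2:0 frame: luma at full resolution,
# chroma planes at half resolution in each dimension.
Y  = np.zeros((2160, 3840), dtype=np.uint8)
Cb = np.zeros((1080, 1920), dtype=np.uint8)
Cr = np.zeros((1080, 1920), dtype=np.uint8)

# Downscale luma 2x by simple 2x2 averaging.
Y_small = Y.reshape(1080, 2, 1920, 2).mean(axis=(1, 3)).astype(np.uint8)

# At 1080p every pixel now has its own chroma sample: effectively 4:4:4.
assert Y_small.shape == Cb.shape == Cr.shape
```

The point is purely about sample counts: after halving the luma resolution, the existing chroma samples line up one-per-pixel.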


The greenscreen footage is not the best to judge if it's real 4:2:2 unfortunately...

There simply isn't much detail in the color channels and it is hard to distinguish the detail provided by better subsampling from detail that is added by better compression :/

A more complex outdoor scene comparison would be useful!

 

By the way, does the HDMI output have sound?


"You can also get 4:4:4 from just scaling down a high res 4:2:0 file."
But that's the whole point: it's FAKE 4:4:4. Once you toss the data at capture to record 4:2:0, the color gamut is compressed and the original gamut is NOT retrievable. It's the same as up-rezzing file resolutions. FAKE.

This is why the only guaranteed format for recording 4:4:4 is RAW UNCOMPRESSED.
All this 'gobbledygook' shell game that manufacturers play with consumers is ridiculous.


I'm beginning to wonder if folks understand the principles behind file compression.

No time for a long tutorial here but basically:

  • Full-resolution capture files are RAW UNCOMPRESSED at certain bit depths.
  • Color compression involves applying algorithms that consolidate the gamut of a full 24-64-bit image, essentially minimizing the nuances by lumping pixel colors together by like kind using a CLUT (Color Look-Up Table) to reduce the gross file size via color differentiation (think of a bucket of colored blocks, where you toss out the burnt umber and sienna ones and replace them with blocks that are a brown mix of the two).
  • Resolution compression is a straight-line reduction in pixel density.
  • Exotic compression schemes like JPEG, MJPEG, MPEG, GIF and TIFF involve a combination of color and resolution compression, with some exotic edge compressions as well.
  • The fundamental law is that you cannot really retrieve data that has been eliminated during capture of a digital image, in post or at any point in the DSP chain, ONCE IT HAS BEEN ELIMINATED. You can fake a partial retrieval of color or resolution density, but it's never that close to the original. It just looks good in product feature declarations. That's why the best chroma key demands RAW capture or a data stream that's parsed to an authentic 4:4:4 file.
  • That's also why so many tests I've seen of HDMI vs 'native' capture do not demonstrate a quality difference, when you bury your nose into the images, that is commensurate with the cost of the external recorders out there. The difference is simply not there for movies.
  • Basically, the recorder makers are taking us for a ride and they know it. The camera companies are also fooling us by playing this semantic game with terms like "Clean Output over HDMI", even "Uncompressed HDMI" and now fake color spaces.
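The CLUT idea described above can be sketched as simple palette quantization. A minimal, hypothetical example (the 4-entry palette and the sample pixels are invented for illustration; this is how GIF-style palette compression works, which is a different mechanism from chroma subsampling):

```python
import numpy as np

# A tiny 4-entry CLUT (palette); entries are made up for illustration.
clut = np.array([[0, 0, 0],
                 [128, 64, 32],     # a "brown mix" entry
                 [200, 180, 160],
                 [255, 255, 255]], dtype=np.float64)

pixels = np.array([[130, 60, 30],
                   [250, 250, 250]], dtype=np.float64)

# Map each pixel to its nearest palette entry (Euclidean distance in RGB).
dists = np.linalg.norm(pixels[:, None, :] - clut[None, :, :], axis=2)
indices = dists.argmin(axis=1)
quantized = clut[indices]
print(indices)  # each pixel is now stored as a small palette index
```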
     

"You can also get 4:4:4 from just scaling down a high res 4:2:0 file."
But that's the whole point: it's FAKE 4:4:4. Once you toss the data at capture to record 4:2:0, the color gamut is compressed and the original gamut is NOT retrievable. It's the same as up-rezzing file resolutions. FAKE.

This is why the only guaranteed format for recording 4:4:4 is RAW UNCOMPRESSED.
All this 'gobbledygook' shell game that manufacturers play with consumers is ridiculous.

 

You are quite right.

 

I doubt even the D800 is true 4:2:2, as that requires the sensor to sample far more data. It can't even sample data fast enough to avoid line skipping, so how can it sample colour at 4:2:2?

 

Also confusing is that the original GH1 "did 4:2:2" in MJPEG mode. The file metadata said 4:2:2. Can that really be true?


Please explain a lossy 4:4:4 sampling of the color data in a file.
Especially in reference to this excellent reference pulled from Wikipedia.

 

Let's not go too far. JPEG can be 4:4:4, or can use chroma subsampling (http://en.wikipedia.org/wiki/JPEG). Is JPEG 4:4:4 lossless? You tell me.

 

ProRes 4:4:4 is not lossless either.

 

You're mixing up chroma subsampling with compression. Two different things.


I doubt even the D800 is true 4:2:2, as that requires the sensor to sample far more data. It can't even sample data fast enough to avoid line skipping, so how can it sample colour at 4:2:2?

 

Sampling from the sensor is not done in 4:2:2 or anything like that. It is sampled from RGB cells in a Bayer pattern. From demosaicing you get 4:4:4 RGB (even though you only have 2G, 1R, 1B; you could think of it as something like a 4:2:2 RGB, but that doesn't exist).

 

From that demosaiced 4:4:4 RGB, they matrix it to YUV (in fact YCbCr) to get one luma channel and two chroma channels. This is what is held in those codecs - not RGB.

 

After that, they may downsample the chroma channels to 4:2:2, 4:2:0, or 4:1:1, or something else.

 

And after that some block splitting, DCT, etc...
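The pipeline just described (demosaiced RGB, matrixed to YCbCr, chroma downsampled) can be sketched roughly as follows. This is a simplified illustration, not any camera's actual firmware; the full-range Rec. 709 matrix and the 2x2 averaging are assumptions:

```python
import numpy as np

def rgb_to_ycbcr_709(rgb):
    """Matrix full-range Rec. 709 R'G'B' (values 0..1) to Y'CbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # scaled so Cb spans -0.5..0.5
    cr = (r - y) / 1.5748
    return y, cb, cr

def subsample_420(c):
    """The chroma-downsampling step of 4:2:0: average each 2x2 block."""
    h, w = c.shape
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.rand(4, 4, 3)            # stand-in for a demosaiced frame
y, cb, cr = rgb_to_ycbcr_709(rgb)
cb420, cr420 = subsample_420(cb), subsample_420(cr)
# All 16 luma samples survive; each chroma plane drops to 4 samples.
```

The block splitting and DCT stages mentioned above would then operate on these planes.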


Let's not go too far. JPEG can be 4:4:4, or can use chroma subsampling (http://en.wikipedia.org/wiki/JPEG). Is JPEG 4:4:4 lossless? You tell me.

 

ProRes 4:4:4 is not lossless either.

 

You're mixing up chroma subsampling with compression. Two different things.

Mixing them up is exactly what I'm not doing. Show me lossy 4:4:4 COLOR. The resolution of an image may suffer from all manner of block compression (JPEG/MPEG/TARGA etc.), but if the RAW (sampled) color space is 4:4:4, then there is NO COLOR loss. That's my whole point. Color compression and dimensional compression are separate if you maintain bit depth.
When the manufacturers start messing with 'wrappers', that's a polite way of saying they are scamming us into buying a delivered product that's been mangled to keep file sizes down and throughput high.
I spent too much time working on MPEG2 and DSL protocols to put up with all this nonsense.
You'll notice that Andrew is scratching his head as to why 'Clean HDMI' to a recorder is only very, very marginally better than internal encoding.
The reason is that the color space call-outs are lies.


For those looking for simplification, a summary of what I've learned so far about the destructive aspects of video compression follows.

 

This is from the perspective of someone who likes to shoot but loses interest at the level of coding and mathematical formulas, or where the complexity of information outweighs the practical benefits gained from the knowledge outside of the lab/software or hardware development.

 

If you're the same, you'll probably like it. 

 

---

 

Bit-depth = number of possible shades.

 

8-bit allows 256 different levels of colour for each channel. 10-bit allows 1,024 different levels. And so on.

 

Side effects of limited bit depth such as 8-bit include banding in areas with subtle gradients such as sky and smoke, and "plastic"-looking skin tones.

 

Practical fix: shoot your footage as close to your final look as possible. If you shoot flat, colour grade in After Effects, DaVinci, or another application with a 32-bit processing mode.
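The banding effect of limited bit depth can be seen numerically. A minimal sketch (the gradient range is arbitrary; each distinct quantized value corresponds to one visible "band" across a smooth ramp):

```python
import numpy as np

# A subtle 5% luminance ramp, like a patch of sky, quantized two ways.
gradient = np.linspace(0.40, 0.45, 1000)

q8  = np.round(gradient * 255) / 255     # 8-bit: 256 levels per channel
q10 = np.round(gradient * 1023) / 1023   # 10-bit: 1024 levels

# Fewer distinct output values = coarser, more visible bands.
print(len(np.unique(q8)), len(np.unique(q10)))
```

The 10-bit version has roughly four times as many steps across the same ramp, which is why subtle gradients survive grading much better.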

 

---

 

Sub sampling = spatial resolution of colour channels. There are three colour channels in digital video.

 

Uncompressed is R G and B, all at full resolution.

 

Sub-sampled is Y (black and white, or luminance), Cb (blue) and Cr (red). 4:4:4 all channels are full resolution. 4:2:2 the colour channels are half resolution. 4:2:0 the blue channel is half resolution, the red channel is quarter resolution.

 

This is not a mathematically perfect way of describing it, but it's conceptually sound for most of us in practice. It's as much as we need to know.

 

Side effects of 4:2:0 sub sampling include jagged pixelation and edges to red areas such as red glow from lights or red clothing. 

 

Practical fix: use a cooler white balance and bring your red back in post using a finishing application like After Effects.
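The subsampling schemes above translate directly into plane sizes. A minimal sketch, using the conventional definitions (4:2:2 halves chroma horizontally; 4:2:0 halves it in both dimensions):

```python
def plane_sizes(width, height, scheme):
    """Return (luma, chroma) plane dimensions for a frame."""
    factors = {
        "4:4:4": (1, 1),   # chroma at full resolution
        "4:2:2": (2, 1),   # chroma halved horizontally
        "4:2:0": (2, 2),   # chroma halved in both dimensions
    }
    fx, fy = factors[scheme]
    return (width, height), (width // fx, height // fy)

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, plane_sizes(1920, 1080, s))
```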

 

---

 

Bit rate = the amount of data used for video encoding measured over time.

 

50mbps is 50 megabits per second = 6.25 megabytes per second, for example.

 

This data rate alone does not necessarily reflect visual and aesthetic quality directly, as compression algorithms and implementations are extremely complex and varied. Some fall within standards, others do not.
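As a quick sanity check on the arithmetic above, a tiny sketch (the function names are mine):

```python
def mbps_to_mb_per_s(mbps):
    """Megabits per second -> megabytes per second (8 bits per byte)."""
    return mbps / 8

def bits_per_pixel(mbps, width, height, fps):
    """Rough bit budget per pixel per frame at a given bit rate."""
    return (mbps * 1_000_000) / (width * height * fps)

print(mbps_to_mb_per_s(50))                # 6.25, as stated above
print(bits_per_pixel(50, 1920, 1080, 25))  # under 1 bit per pixel
```

The per-pixel budget is why the encoder has to throw so much away before compression tricks even start.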

 

I-frame codecs encode each frame individually. I-frame allows the most film-like motion "cadence".

 

GOP encoding uses Groups Of Pictures. The longer the GOP, the more the codec can struggle with lots of movement. Long GOPs can contribute to a digital video "feel".

 

Side effects of limited bit rates include pixelation in high-motion shots; very little data in under-exposed or dark areas, leading to blockiness and an inability to recover shadow detail; and general masking of natural sensor noise (grain) with unattractive pixelation.

 

Practical fix: a low bit rate is very destructive even with an advanced codec like AVCHD. This is why Canon use AVCHD for the C100, and their implementation of MPEG2 for the C300.

 

If your camera has a "black level" or "pedestal" or "Cinestyle" or "DRO", you can shift this up a little to prevent data from being encoded where there is very little priority given to it by the codec. This does spread data more thinly though, also remember your 8 bits...  Hack your camera if you can ;)

 

---

 

In short, working with compressed footage is a bit of a balancing act. A huge amount of data is thrown away in order to make files small and to separate markets.

 

The process is destructive, and cannot be reversed, though being intelligent on set and in post can help a lot. 

 

The ideal is something like Red R3D: visually lossless compression that maintains raw processing capabilities. It is a joy to work with.

 

Ironically, it's actually more important  that you get your shot right with a cheaper consumer camera than with a RAW camera, as you can't do so much in post production. Though users of lower end DSLRs are the least likely to use a light meter, for example, they are actually the most likely to benefit from it.

 

Practice makes perfect.


jgharding finally nails it, where I seem to have failed.
"Sub sampling = spatial resolution of colour channels. There are three colour channels in digital video.

Uncompressed is R G and B, all at full resolution.

Sub-sampled is Y (black and white, or luminance), Cb (blue) and Cr (red). 4:4:4 all channels are full resolution. 4:2:2 the colour channels are half resolution. 4:2:0 the blue channel is half resolution, the red channel is quarter resolution.

This is not a mathematically perfect way of describing it, but it's conceptually sound for most of us in practice. It's as much as we need to know."

Let's get the DSLR makers to stop repackaging or wrapping their compressed streams and we'd have something you could grade in post. Or at the very least, they must abandon the misleading terminology.


Sub sampling = spatial resolution of colour channels. There are three colour channels in digital video.

 

Uncompressed is R G and B, all at full resolution.

 

Sub-sampled is Y (black and white, or luminance), Cb (blue) and Cr (red). 4:4:4 all channels are full resolution. 4:2:2 the colour channels are half resolution. 4:2:0 the blue channel is half resolution, the red channel is quarter resolution.

 

This is not a mathematically perfect way of describing it, but it's conceptually sound for most of us in practice. It's as much as we need to know.

 

Side effects of 4:2:0 sub sampling include jagged pixelation and edges to red areas such as red glow from lights or red clothing. 

 

Practical fix: use a cooler white balance and bring your red back in post using a finishing application like after effects.

A few clarifications are needed.

 

Uncompressed by no means implies RGB at full resolution. Uncompressed is a codec with raw data and no compression at all, not even lossless (zip or similar) - think of a regular text file, a .bmp, or a .wav. An uncompressed video codec can be YUV 4:2:2. If you are using an uncompressed codec, you can re-encode it millions of times without ever losing any information from the source.

 

There are also lossless codecs. As with uncompressed codecs, you never lose information. The difference? Lossless compression. Think a zipped file, a .flac audio file, etc.

 

And finally, lossy compression. Every time you compress with it, you lose a generation.

 

Just to be clear, if you have a 4:4:4 video file and you encode it to an uncompressed 4:2:2 (or lossless 4:2:2), some information is discarded. But it is like saying that you lose information by exporting a 5.1 24-bit 96kHz audio file to 16-bit 44.1kHz stereo: you choose to discard some data. From there you can open the file, resave it as AIFF in the same format, feed it as PCM through S/PDIF - you still don't lose information.

 

Like I wrote in a previous post, 4:2:0 is not a half-resolution blue channel and a quarter-resolution red channel. Both chroma channels - which are neither blue nor red - have the same sampling: a quarter of the resolution.

 

As for the channels - in YCbCr - Cb is a yellow/purple saturation channel and Cr is a kind of turquoise/pink saturation channel. Here is an example from http://en.wikipedia.org/wiki/YCbCr :

 

[Image: YCbCr component separation of a barn photo (Grand Tetons), from the Wikipedia YCbCr article]

 

 

As for practical fixes, I would suggest:

- Never sharpen in RGB. Sharpen only the luma channel. Sadly, most NLEs do not include a luma-only sharpener.

- Slightly blur or smooth the chroma channels - or just the problematic region. That is an easy way to get rid of color moiré without affecting resolution.

- Grade mostly in luma/chroma. Much better results.
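The "sharpen only luma" tip can be sketched as a crude unsharp mask on the Y plane. A minimal illustration (a real tool would use a proper Gaussian blur instead of a 3x3 box filter):

```python
import numpy as np

def sharpen_luma_only(y, cb, cr, amount=1.0):
    """Unsharp-mask the luma plane only; pass chroma through untouched."""
    h, w = y.shape
    pad = np.pad(y, 1, mode="edge")
    # 3x3 box blur as a crude low-pass filter.
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    y_sharp = np.clip(y + amount * (y - blur), 0.0, 1.0)
    return y_sharp, cb, cr   # chroma is returned unchanged
```

Because chroma is untouched, edge halos stay in luminance where the eye tolerates them, and no extra color fringing is introduced.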


^Yet still no one has explained why a Bayer pattern with a 4:2:2 photosite ratio doesn't implicitly limit you to 4:2:2 chroma.

Let's try.

 

 

 

For each photosite in a Bayer pattern, you capture luminosity for a certain region of the spectrum. But this is not luminosity as humans see it (think of putting a red filter on B&W film). So for each block of 4 photosites (RGGB), some luminosity is extrapolated as if it were white light, using (in Rec. 709) this formula: Y = 0.2126 R + 0.7152 G + 0.0722 B (http://en.wikipedia.org/wiki/Luma_%28video%29). So the luminosity you captured is not perfect.

 

What you have captured contains 4 samples of color information.

 

So for the resultant 4 pixels, we have 4 luma samples (approximated) and 4 chroma samples.

 

If we wanted to capture real 4:4:4 RGB, we would need, for a block of 4 pixels, 4 greens, 4 reds and 4 blues. As each one provides its part of the spectrum in luminosity, we have 12 samples of luminosity (remember: 4 luma samples for Bayer). (Converted to 4:4:4 YUV: 4 samples of luma, 8 of chroma.)

 

Now let's go from 4:4:4 to 4:2:2. To get to 4:2:2, we have to convert to YCbCr. For both chroma channels we discard half the information.

So we get 4 samples of luma and 4 samples of chroma.

 

4:2:2 YCbCr is 8 samples of combined chroma/luma. Bayer is 4 samples. So Bayer is half as good as 4:2:2.
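The sample counting above, as quick arithmetic:

```python
# Samples in one 2x2 pixel block, per the counting above.
bayer     = 4           # one filtered photosite per pixel (R, G, G, B)
rgb_444   = 4 * 3       # 12: full R, G and B at every pixel
ycbcr_444 = 4 + 4 + 4   # 12: Y plus two full-resolution chroma planes
ycbcr_422 = 4 + 2 + 2   # 8: chroma halved horizontally
ycbcr_420 = 4 + 1 + 1   # 6: chroma quartered

print(ycbcr_422 / bayer)  # 2.0 -- "Bayer is half as good as 4:2:2"
```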

 

 

So why is 4:2:2 from the uncompressed HDMI out not as good as raw from the BMCC? First, because the 4:2:2 you get comes from a Bayer pattern, so all the data has already been converted. But mainly because the raw data has been processed into some preformatted preset from Nikon, Arri, Sony or Hasselblad - nowhere near the control that even the least experienced person gets from a raw converter.

 

So, is 4:2:2 better than Bayer? Yes, much better. I would not convert any of my work back to a Bayer-encoded file. It is that bad.

 

Look at what it is without any demosaicing :

 

[Image: a normal image next to its undemosaiced Bayer-filter version, from Wikipedia]

 

 

What would be the best output, in my opinion, that every camera should have? Not HDMI or SDI, but a digital raw output protocol, something like S/PDIF or AES/EBU for audio, where you output raw and do what you want with it from your external recorder. First, it's about the same bandwidth for 16-bit raw Bayer as for 8-bit 4:2:2 uncompressed. You would also be able to convert on the fly using some standard algorithm or a proprietary one (think Fujifilm), and to whatever codec you need...


A few additional comments, some specific to Nikon and Canon DSLRs.

Raw is 'single channel' 12- or 14-bit data in linear light values and has no color gamut (i.e. Rec. 709, ProPhoto, ACES) defined. So there is no skewing of values by gamma encoding and no restricted color gamut other than the limitations of the camera hardware. Both of those become user choices in the raw development process.

Canon's DIGIC processor handles this: it takes in the raw, does the raw development process like demosaicing and line skipping (in pre-Mk III models), applies various processing steps including the camera response curve and picture style, and outputs 4:2:2 YCC (let's leave analog YUV in the gutter). Not RGB. The 4:2:2 is aimed at the LCD display, which takes YCC input.

Canon added an H.264 encoder chip to its line of cameras, tapped into that 4:2:2 and sent a feed to the H.264 encoder and the JPG image writer.

The 4:2:2 YCC has been calculated slightly differently from the Rec. 709 mentioned above. For many models of Nikon and Canon, and for GH3 MOVs, the luma/chroma mix is based on BT.601 luma coefficients (i.e. the color matrix), uses a Rec. 709 transfer curve to go from linear light to gamma-encoded YCC, and declares Rec. 709 color primaries in order to define the color gamut. The Nikon uses a BT.470 transfer curve, not Rec. 709.

The result is not Rec. 709 4:2:2 in camera but JFIF - think JPG - where chroma is normalized over the full 8-bit range and mixed with full-range luma.

That normalized 4:2:2 gets fed to the LCD screen and the H.264 encoder, and soon to the 5D Mk III's HDMI out - though no doubt converted to Rec. 709.

YCC 4:4:4 and RGB are not interchangeable in discussion; they belong to two different color models and need to be handled accordingly, especially getting the luma coefficients and luma range correct in the conversion to RGB for display. Otherwise undue clipping of channels will occur, artifacts will be created and colors will be wrong: pinks skewed toward orange, blues toward green.
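To make the coefficient difference above concrete, here is a minimal sketch (variable names are mine) comparing BT.601 and BT.709 luma, plus the full-range to limited ("video") range mapping:

```python
# Luma coefficients differ between standards; decoding YCC with the
# wrong set skews colors exactly as described above.
COEFFS = {
    "BT.601": (0.299,  0.587,  0.114),
    "BT.709": (0.2126, 0.7152, 0.0722),
}

def luma(rgb, standard):
    """Weighted sum of R', G', B' (values 0..1) per the chosen standard."""
    kr, kg, kb = COEFFS[standard]
    return kr * rgb[0] + kg * rgb[1] + kb * rgb[2]

def full_to_limited(y8):
    """Map full-range luma (0..255) to limited 'video' range (16..235)."""
    return 16 + y8 * (235 - 16) / 255

# Pure green lands at very different luma values under each standard.
print(luma((0.0, 1.0, 0.0), "BT.601"))  # 0.587
print(luma((0.0, 1.0, 0.0), "BT.709"))  # 0.7152
```

Mixing these up in either direction (encode with one, decode with the other, or treat full-range as limited) produces exactly the clipping and hue shifts the post describes.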

Great info CD.


A few clarifications are needed.

 

Uncompressed by no means implies RGB at full resolution. Uncompressed is a codec with raw data and no compression at all, not even lossless (zip or similar) - think of a regular text file, a .bmp, or a .wav. An uncompressed video codec can be YUV 4:2:2. If you are using an uncompressed codec, you can re-encode it millions of times without ever losing any information from the source.

 

Like I wrote in a previous post, 4:2:0 is not a half-resolution blue channel and a quarter-resolution red channel. Both chroma channels - which are neither blue nor red - have the same sampling: a quarter of the resolution.

 

As for the channels - in YCbCr - Cb is a yellow/purple saturation channel and Cr is a kind of turquoise/pink saturation channel. Here is an example from http://en.wikipedia.org/wiki/YCbCr :

 

Yes I should've said "when I've come across uncompressed it's usually RGB 444" to be more precise! I did make a 4:2:2 uncompressed file a while ago, but most uncompressed I've dealt with has been RGB+alpha. 

 

I threw out the info on resolution and yellow purple etc because it's a bit conceptually messy and I was trying to simplify. People find it easier to understand as red and blue and black & white than the odd shades that represent the reality of how the computer deals with it!

 

And if you want, you can think of it as red and blue, to make it easy. And I like to make it easy!

 

[Image: the CbCr color plane at 50% luma, from the Wikipedia YCbCr article]

 

I'll add a few edits to the post, cheers for the in depth knowledge my man!


@jgharding why are you talking about RGB 444 and then showing a YCbCr color model diagram?

 

"Odd shades that represent the reality of how the computer deals with it", it's not how the computer deals with it, it's how the image is represented and the nature of the encoding by the camera, it's the color model. Computer displays are RGB based so a conversion from YCC to RGB has to be done to give us back the RGB?

