
Exploring Nikon D5200 HDMI output - review update

75 posts in this topic

Posted · Report post

Let's not go too far. JPEG can be 4:4:4, or it can use chroma subsampling (http://en.wikipedia.org/wiki/JPEG). Is JPEG 4:4:4 lossless? You tell me.

 

ProRes 4:4:4 is not lossless either.

 

You're mixing up chroma subsampling with compression. Two different things.

Mixing them up is exactly what I'm not doing. Show me lossy 4:4:4 COLOR. The resolution of an image may suffer from all manner of block compression (JPEG/MPEG/TARGA etc.), but if the raw (sampled) color space is 4:4:4, then there is NO COLOR loss. That's my whole point: color compression and dimensional compression are separate if you maintain bit depth.
When the manufacturers start messing with 'wrappers', that's a polite way of saying they are scamming us into buying a delivered product that's been mangled to keep file sizes down and throughput high.
I spent too much time working on MPEG-2 and DSL protocols to put up with all this nonsense.
You'll notice that Andrew is scratching his head as to why 'clean HDMI' to a recorder is only very marginally better than internal encoding.
The reason is that the color space call-outs are lies.


Posted · Report post

For those looking for simplification, a summary of what I've learned so far about the destructive aspects of video compression follows.

 

This is from the perspective of someone who likes to shoot but loses interest at the level of code and mathematical formulas - the point where the complexity of the information outweighs its practical benefit outside the lab or software/hardware development.

 

If you're the same, you'll probably like it. 

 

---

 

Bit-depth = number of possible shades.

 

8-bit allows 256 different levels of colour for each channel. 10-bit allows 1024 different levels. And so on.
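As a quick sketch of the arithmetic (the function name here is mine, purely illustrative):

```python
# Number of distinct tonal levels per channel at a given bit depth.
def levels(bit_depth):
    # Each extra bit doubles the number of values per channel.
    return 2 ** bit_depth

print(levels(8))   # 256
print(levels(10))  # 1024
```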

 

Side effects of limited bit depth such as 8-bit include banding in areas with subtle gradients such as sky and smoke, and "plastic"-looking skin tones.

 

Practical fix: shoot your footage as close to your final look as possible. If you shoot flat, colour grade in After Effects, DaVinci Resolve, or another application with a 32-bit processing mode.

 

---

 

Sub sampling = spatial resolution of colour channels. There are three colour channels in digital video.

 

Uncompressed is R G and B, all at full resolution.

 

Sub-sampled is Y (black and white, or luminance), Cb (blue) and Cr (red). 4:4:4 all channels are full resolution. 4:2:2 the colour channels are half resolution. 4:2:0 the blue channel is half resolution, the red channel is quarter resolution.
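Here is a toy sketch of halving a chroma plane's resolution in both dimensions - one common reduction; real encoders use a variety of filters, and `subsample_2x2` is just an illustrative name of mine:

```python
# Toy sketch: reduce a chroma plane by averaging each 2x2 block of samples,
# i.e. half the samples in each dimension, a quarter overall.
def subsample_2x2(plane):
    h, w = len(plane), len(plane[0])
    return [
        [(plane[y][x] + plane[y][x + 1] +
          plane[y + 1][x] + plane[y + 1][x + 1]) // 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

cb = [[100, 104],
      [108, 112]]            # a 2x2 patch of chroma samples
print(subsample_2x2(cb))     # one averaged sample per 2x2 block: [[106]]
```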

 

This is not a mathematically perfect way of describing it, but it's conceptually sound for most of us in practice. It's as much as we need to know.

 

Side effects of 4:2:0 sub sampling include jagged pixelation and rough edges in red areas, such as the red glow from lights or red clothing.

 

Practical fix: use a cooler white balance and bring your red back in post using a finishing application like After Effects.

 

---

 

Bit rate = the amount of data used for video encoding measured over time.

 

50 Mbps is 50 megabits per second = 6.25 megabytes per second, for example.
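The conversion as a one-line sketch (function name is mine):

```python
# Megabits per second to megabytes per second: divide by 8 bits per byte.
def mbps_to_megabytes(mbps):
    return mbps / 8

print(mbps_to_megabytes(50))  # 6.25
```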

 

This data rate alone does not necessarily reflect visual and aesthetic quality directly, as compression algorithms and implementations are extremely complex and varied. Some fall within standards, others do not.

 

I-frame codecs encode each frame individually. I-frame encoding preserves the most film-like motion "cadence".

 

GOP encoding uses Groups Of Pictures. The longer the GOP, the more the codec can struggle with lots of movement. Long GOPs can contribute to a digital video "feel".

 

Side effects of limited bit rates include pixelation in high-motion shots; very little data in under-exposed or dark areas, leading to blockiness and an inability to recover shadow detail; and a general masking of natural sensor noise (grain) with unattractive pixelation.

 

Practical fix: a low bit rate is very destructive even with an advanced codec like AVCHD. This is why Canon use AVCHD for the C100, and their own implementation of MPEG-2 for the C300.

 

If your camera has a "black level", "pedestal", "Cinestyle" or "DRO" setting, you can shift this up a little to prevent data being encoded where the codec gives it very little priority. This does spread data more thinly, though - also remember your 8 bits... Hack your camera if you can ;)

 

---

 

In short, working with compressed footage is a bit of a balancing act. A huge amount of data is thrown away in order to make files small and to separate markets.

 

The process is destructive, and cannot be reversed, though being intelligent on set and in post can help a lot. 

 

The ideal is something like Red R3D: visually lossless compression that maintains raw processing capabilities. It is a joy to work with.

 

Ironically, it's actually more important that you get your shot right with a cheaper consumer camera than with a RAW camera, as you can't do as much in post-production. Though users of lower-end DSLRs are the least likely to use a light meter, for example, they are actually the most likely to benefit from one.

 

Practice makes perfect.


Posted · Report post

So it seems if you've got one of these new Nikons and an external recorder, you may as well use it. Just don't expect miracles...


Posted · Report post

jgharding finally nails it, where I seem to have failed.
"Sub sampling = spatial resolution of colour channels. There are three colour channels in digital video.

Uncompressed is R G and B, all at full resolution.

Sub-sampled is Y (black and white, or luminance), Cb (blue) and Cr (red). 4:4:4 all channels are full resolution. 4:2:2 the colour channels are half resolution. 4:2:0 the blue channel is half resolution, the red channel is quarter resolution.

This is not a mathematically perfect way of describing it, but it's conceptually sound for most of us in practice. It's as much as we need to know."

Let's get the DSLR makers to stop repackaging or wrapping their compressed streams and we'd have something you could grade in post. Or at least, they should abandon the misleading terminology.


Posted · Report post

^Yet still no one has explained why a Bayer pattern with a 4:2:2 photosite ratio doesn't implicitly limit you to 4:2:2 chroma.


Posted · Report post

Sub sampling = spatial resolution of colour channels. There are three colour channels in digital video.

 

Uncompressed is R G and B, all at full resolution.

 

Sub-sampled is Y (black and white, or luminance), Cb (blue) and Cr (red). 4:4:4 all channels are full resolution. 4:2:2 the colour channels are half resolution. 4:2:0 the blue channel is half resolution, the red channel is quarter resolution.

 

This is not a mathematically perfect way of describing it, but it's conceptually sound for most of us in practice. It's as much as we need to know.

 

Side effects of 4:2:0 sub sampling include jagged pixelation and rough edges in red areas, such as the red glow from lights or red clothing.

 

Practical fix: use a cooler white balance and bring your red back in post using a finishing application like After Effects.

A few clarifications are needed.

 

Uncompressed by no means implies RGB at full resolution. Uncompressed is a codec with raw data and no compression, not even lossless compression (zip or similar) - think a regular text file, a .bmp, or a .wav. An uncompressed video codec can be YUV 4:2:2. If you are using an uncompressed codec, you can re-encode it millions of times without ever losing any information from the source.

 

There are also lossless codecs. As with uncompressed codecs, you never lose information. The difference? Lossless compression - think a zipped file, a .flac audio file, etc.

 

And finally, lossy compression. Every time you compress with it, you lose a generation.

 

Just to be clear, if you have a 4:4:4 video file and you encode it to uncompressed 4:2:2 (or lossless 4:2:2), some information is discarded. But it is like saying that you lose information by exporting a 5.1 24-bit 96 kHz audio file to 16-bit 44.1 kHz stereo: you choose to discard some data. From there you can open the file, re-save it as AIFF in the same format, feed it as PCM through S/PDIF - you still don't lose information.
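The generation point can be sketched with any lossless compressor; here zlib from the Python standard library stands in for a lossless video or audio codec:

```python
import zlib

# Round-trip a buffer through a lossless codec repeatedly:
# however many generations, the data stays bit-identical.
original = bytes(range(256)) * 100
data = original
for _ in range(10):
    data = zlib.decompress(zlib.compress(data))

print(data == original)  # True
```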

 

Like I wrote in a previous post, 4:2:0 is not a half-resolution blue channel and a quarter-resolution red channel. Both chroma channels - neither of which is blue or red - have the same sampling: a quarter of the resolution.

 

As for the channels - in YCbCr - the Cb is a yellow/purple saturation channel, and the Cr is a kind of turquoise/pink saturation channel. Here is an example (from http://en.wikipedia.org/wiki/YCbCr ):

 

[image: YCbCr channel separation of a sample photo, from Wikipedia]

 

 

As for practical fixes, I would suggest:

- Never sharpen in RGB. Sharpen only the luma channel. Sadly, most NLEs do not include a luma-only sharpener.

- Slightly blur or smooth the chroma channels - or just the problematic region. That is an easy way to get rid of color moire without affecting resolution.

- Grade mostly in luma/chroma. Much better results.


Posted · Report post

^Yet still no one has explained why a Bayer pattern with a 4:2:2 photosite ratio doesn't implicitly limit you to 4:2:2 chroma.

Let's try.

 

 

 

For each photosite in a Bayer pattern, you capture luminosity for a certain region of the spectrum. But this is not luminosity as humans see it (think of putting a red filter on B&W film). So for each block of 4 photosites (RGGB), some luminosity is extrapolated as if it were white light, using (in Rec. 709) the formula Y = 0.2126 R + 0.7152 G + 0.0722 B (http://en.wikipedia.org/wiki/Luma_%28video%29). So the luminosity you captured is not perfect.
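The quoted Rec. 709 weighting as a sketch in code; note the coefficients sum to 1, so equal R, G and B give a luma equal to that grey level:

```python
# Rec. 709 luma from linear RGB (values in 0.0-1.0).
def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(round(luma_709(1.0, 1.0, 1.0), 6))  # 1.0 (coefficients sum to 1)
print(luma_709(0.0, 1.0, 0.0))            # 0.7152: green dominates brightness
```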

 

What you have captured contains 4 samples of color information.

 

So for the resultant 4 pixels, we have 4 luma samples (approximated) and 4 chroma samples.

 

If we wanted to capture true 4:4:4 RGB, we would need, for a block of 4 pixels, 4 green, 4 red and 4 blue photosites. As each one provides its part of the spectrum in luminosity, we have 12 samples of luminosity (remember: 4 luma for Bayer). (If we converted to 4:4:4 YUV: 4 samples of luma, 8 of chroma.)

 

Now let's go from 4:4:4 to 4:2:2. To get to 4:2:2, we have to convert to YCbCr. For both chroma channels we discard half the information.

So we get 4 samples of luma and 4 samples of chroma.

 

4:2:2 YCbCr is 8 samples of combined chroma/luma. Bayer is 4 samples. So Bayer is half as good as 4:2:2.

 

 

So why is the 4:2:2 from the uncompressed HDMI out less good than raw from the BMCC? First, because the 4:2:2 you get comes from a Bayer pattern, so all the data has already been converted. But mainly because the raw data has been processed through some preformatted preset from Nikon, Arri, Sony or Hasselblad - with far less control than even the least experienced person using a raw converter would have.

 

So now, is 4:2:2 better than Bayer? Yes, much better. I would never convert any of my work back to a Bayer-encoded file. It is that bad.

 

Look at what it is without any demosaicing:

 

[image: a Bayer-filtered image before demosaicing, from Wikipedia]

 

 

What would be the best output, in my opinion, that every camera should have? Not HDMI or SDI, but a digital raw output protocol - something like S/PDIF or AES/EBU for audio - where you output the raw data and do what you want with it from your external recorder. First, a 16-bit raw Bayer stream needs about the same bandwidth as 8-bit 4:2:2 uncompressed. You would also be able to convert on the fly using a standard or proprietary algorithm (think Fujifilm), and to whatever codec you need...


Posted · Report post

A few additional comments, some specific to Nikon and Canon DSLRs.

Raw is 'single channel' 12- or 14-bit data in linear light values and has no color gamut (ie: rec709, ProPhoto, ACES) defined. So there is no skewing of values by gamma encoding and no restricted color gamut other than the limitations of the camera hardware. Both of those become user choices in the raw development process.

Canon's DIGIC processor handles this: it takes in the raw, does the raw development process - demosaicing, line skipping (in pre-Mk III models) - applies various processing steps including the camera response curve and picture style, and outputs 4:2:2 YCC (let's leave analog YUV in the gutter). Not RGB. The 4:2:2 is aimed at the LCD display, which takes YCC input.

Canon added an h264 encoder chip to its line of cameras, tapped into that 4:2:2 and sent a feed to the h264 encoder and the jpg image writer.

The 4:2:2 YCC is calculated slightly differently to the rec709 mentioned above. For many models of Nikon, Canon and GH3 MOVs, the luma/chroma mix is based on BT601 luma coefficients (ie: the color matrix), uses a rec709 transfer curve to go from linear light to gamma-encoded YCC, and declares rec709 color primaries in order to define the color gamut. The Nikon uses a BT470 transfer curve, not rec709.
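The difference between the two sets of luma coefficients is easy to see in a sketch (the values below are the standard published coefficients; the helper names are mine):

```python
# Luma coefficients (Kr, Kg, Kb) for the two standards discussed above.
COEFFS = {
    "BT.601":  (0.299, 0.587, 0.114),
    "Rec.709": (0.2126, 0.7152, 0.0722),
}

def luma(r, g, b, standard):
    kr, kg, kb = COEFFS[standard]
    return kr * r + kg * g + kb * b

# A saturated red lands on a visibly different luma under each matrix,
# which is why decoding with the wrong coefficients skews colors.
print(luma(1, 0, 0, "BT.601"))   # 0.299
print(luma(1, 0, 0, "Rec.709"))  # 0.2126
```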

The result is not rec709 4:2:2 in camera but JFIF - think jpg: chroma is normalized over the full 8-bit range and mixed with full-range luma.

That normalized 4:2:2 gets fed to the LCD screen and the h264 encoder, and soon to the 5D Mk III's HDMI out - though converted to rec709, no doubt.

YCC 4:4:4 and RGB are not interchangeable in discussion; they belong to two different color models and need to be handled accordingly - especially getting the luma coefficients and luma range correct in the conversion to RGB for display. Otherwise undue clipping of channels will occur, artifacts will be created and colors will be wrong: pinks skewed toward orange, blues toward green.

Great info CD.


Posted · Report post

A few clarifications are needed.

 

Uncompressed by no means implies RGB at full resolution. Uncompressed is a codec with raw data and no compression, not even lossless compression (zip or similar) - think a regular text file, a .bmp, or a .wav. An uncompressed video codec can be YUV 4:2:2. If you are using an uncompressed codec, you can re-encode it millions of times without ever losing any information from the source.

 

Like I wrote in a previous post, 4:2:0 is not a half-resolution blue channel and a quarter-resolution red channel. Both chroma channels - neither of which is blue or red - have the same sampling: a quarter of the resolution.

 

As for the channels - in YCbCr - the Cb is a yellow/purple saturation channel, and the Cr is a kind of turquoise/pink saturation channel. Here is an example (from http://en.wikipedia.org/wiki/YCbCr ):

 

Yes, I should've said "when I've come across uncompressed it's usually RGB 4:4:4" to be more precise! I did make a 4:2:2 uncompressed file a while ago, but most uncompressed I've dealt with has been RGB+alpha.

 

I left out the info on resolution and yellow/purple etc. because it's a bit conceptually messy and I was trying to simplify. People find it easier to understand as red and blue and black & white than as the odd shades that represent the reality of how the computer deals with it!

 

And if you want, you can think of it as red and blue, to make it easy. And I like to make it easy!

 

[image: the CbCr color plane at constant luma, from Wikipedia]

 

I'll add a few edits to the post. Cheers for the in-depth knowledge, my man!


Posted · Report post

@jgharding why are you talking about RGB 444 and then showing a YCbCr color model diagram?

 

"Odd shades that represent the reality of how the computer deals with it" - it's not how the computer deals with it; it's how the image is represented and the nature of the encoding by the camera - it's the color model. Computer displays are RGB-based, so a conversion from YCC to RGB has to be done to give us back RGB.


Posted · Report post

To be noted :

 

On my post about Bayer and 4:2:2: all I wrote is valid for a 1:1 ratio of Bayer resolution to final resolution, meaning a 1920x1080 Bayer sensor for a 1080p final resolution.

 

If you have a higher-resolution Bayer sensor, this changes. Ideally, a Bayer sensor should double (or quadruple) the final resolution, eg. 3840x2160 for 1080p video. From there, you could get a nice 4:4:4 (slightly oversampled in the green).
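A quick sketch of the oversampling arithmetic (function name is mine):

```python
# How many Bayer photosites feed each final pixel at a given oversampling.
def photosites_per_pixel(sensor_w, sensor_h, out_w, out_h):
    return (sensor_w * sensor_h) / (out_w * out_h)

# Doubling both dimensions gives a full RGGB block per output pixel:
# at least one red, one blue and two green samples each.
print(photosites_per_pixel(3840, 2160, 1920, 1080))  # 4.0
```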


Posted · Report post

The chroma sampling X:Y:Z is just a reference for ratios of information. The use of "4" for "X" is just convention. So in other words, to get 4:4:4 from a Bayer sensor, you need to downsample, then throw away some green, right?

 

Thanks for everyone's input.


Posted · Report post

The chroma sampling X:Y:Z is just a reference for ratios of information. The use of "4" for "X" is just convention. So in other words, to get 4:4:4 from a Bayer sensor, you need to downsample, then throw away some green, right?

 

Thanks for everyone's input.

 

No, you don't throw away extra green. It is just part of the Bayer process.

 

What I meant is that to have 4:4:4 at 1920x1080 on a Foveon, for example, or a 3-CCD, you need a 1920x1080 x 3 sensor, providing 6,220,800 photosites, or about 6 Mpix.

 

 

On a Bayer sensor, you don't have that 1-1-1 green-red-blue ratio, but 1-0.5-0.5. So you need to double the resolution to get at least one sample of each color for every final pixel. That is 8,294,400 photosites, or about 8 Mpix, for 1920x1080.
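Checking the photosite numbers above:

```python
# Full-color 1080p needs three samples per pixel (Foveon / 3-CCD)...
full_color_photosites = 1920 * 1080 * 3
# ...while a Bayer sensor at double the linear resolution supplies
# one sample per photosite.
bayer_photosites = 3840 * 2160

print(full_color_photosites)  # 6220800 (~6 Mpix)
print(bayer_photosites)       # 8294400 (~8 Mpix)
```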


Posted · Report post

@jgharding why are you talking about RGB 444 and then showing a YCbCr color model diagram?

 

I was addressing two different things in that post. I think we're all confusing each other now, which was never the point of my post, so I shall bow out!


Posted · Report post

Chauffeurdevan or anyone who knows :)

 

 

Could you explain

 

Why does the BMCC's 2.5K have to be downsampled to 1920x1080?

 

What I think I understand

The sensor is 2.5K and not 4K, so it has to be downsampled to maintain its integrity because of Bayer processing.

 

 What I want to do

 

Use Super 16mm lenses and either crop, or debayer down to a resolution that means I don't need to crop. So what is the best option for maintaining the highest 1920 resolution?

1) Debayer the 2.5K down to 1920 and crop that.

2) Crop the 2.5K image to Super 16mm size, then debayer that to 1920.

 

I have a feeling the answer will be 2), but how much of a resolution or quality hit would there be?


Posted · Report post

There is a serious amount of disinformation in this thread... both regarding how chroma subsampling schemes are laid out, and how normal Bayer-based sensors record video images in consumer-grade DSLRs.

 

Now, in order from capture to recording:

 

Each manufacturer (and even separate camera models!) has its own way of reading the sensor to create the initial image from which to "build" the video image.

*Some cameras line-skip, since that's a very easy (and bandwidth-economical) way to read a sensor FAST. Unfortunately, this gives lots of noise (much of the information actually recorded on the sensor is simply thrown away unused) and lots of orientation-dependent aliasing. The exact aliasing depends on how the manufacturer chooses to scale the image from the original resolution of the sensor.

*Some cameras have other means of restricting the number of pixels per second they have to read. These can include true binning, patterned subsampling and many other schemes. Most of these schemes can also be reverse-engineered if you know what you're doing.

 

Almost ALL consumer-oriented cameras do this initial sifting of information, and most of the quality loss - apart from compression and chroma subsampling - occurs here! At the second leg of the image pipeline you have a complete RGB 4-4-4 image at some (smaller) pixel scale. This depends on the original resolution of the sensor vs the subsampling method chosen, but it's often around 1200-1350 pixels on the X axis and 800-950 pixels on the Y axis.

 

Those are true 4-4-4 RGB images! But they aren't true HD resolution... which is why most DSLR images are quite a lot softer and less detailed than true 1080p video.

 

The video compression engine accepts RGB as input; doing a YCbCr (YUV) transform before sending the image stream to the encoder would just be a big waste of effort. At the encoder input, the 4-4-4 RGB image is resampled into 1920x1080 Y-channel data and [some] resolution CbCr (UV) data before it's sent into the compression encoding.

 

So, no - you don't need an area of 4x4 Bayer-coded pixels to make 4-4-4 video. You need 2x2 pixels to get full-resolution images for ALL CHANNELS - the definition of 4-4-4. And if you do a Bayer interpolation before encoding, you only need ONE pixel to get 4-4-4 video... see the Nikon D4's 1:1 crop video mode, next.

 

One instance is the 1:1 pixel-crop video you can get from a Nikon D4. The video image in that mode is made from a 1920x1080 crop of the central part of the sensor. That image needs to be Bayer-interpolated before it's a full RGB image, but at least it's a full-resolution image. That's why it's so much better than the large modes on that camera - the large crop modes use line skipping and a lower actual image resolution. The loss from that is much bigger than the loss from having to do a Bayer interpolation.

 

If you want to read more about chroma subsampling and the exact layouts used, I'd recommend this:

http://dougkerr.net/pumpkin/articles/Subsampling.pdf


Posted · Report post

Have you tried setting HDMI to 1080p instead of auto?

When I do this I get a true progressive feed but it is 29.97 frames (30p) even though the camera is set to PAL / 25p.

 

What about trying it in NTSC and trying 24p from 60i?

I am very surprised to learn that you can get any "clean" HDMI out of a D5200.

I am also surprised that the Video quality is so good.

 

I am using a V1, D7100, D600, and D800 for my "Home Movies".

 

With the D7100 I had to cycle with the info button to get clean output that filled my Camax monitor

and recorded to my Shuttle II without any info showing.

What you see on the external monitor is what you get.

 

That camera does not have a 1080p setting for the HDMI output, so I have to use Auto.

This works fine if the in-camera video settings are set for 1080p at 30fps

and I record to DNxHD 220 (I had to download the free codec pack from Avid to use the files on my PC).

With ProRes the recordings are at 60fps, slightly lower quality, and much more difficult to deal with in PowerDirector 11.


Posted · Report post

Hi everyone,

 

I have a new D5200, and am having an issue getting FULL FRAME 1080 out of the HDMI socket - I get a cropped 1080 image which needs post-processing to bring it to full frame, thereby losing quality.

 

I've read all of the above thread about camera settings; I'm simply not able to get a full-raster image.

 

I spoke to Nikon; they say the camera has this as a limitation - so, can someone explain this to me:

 

Does the Nikon D5200 output 1920x1080 via the HDMI socket at full raster? Yes or No, and how did you achieve this?

 

thanks 

 

Paul :-)


Posted · Report post

After again speaking with Nikon, and further testing with my camera (firmware C1.00 L1.006), it appears impossible to externally record full HD from the D5200.

 

I've connected the camera to a Blackmagic Intensity Pro HDMI capture card - result: a 1920x1080 file with black bars around a size-reduced image.

 

Connected to a Grass Valley HD Storm card yields exactly the same results.

 

Viewing on an external monitor produces the same results.

 

It looks like this entire thread is based on a myth - unless of course you like working with silly cropped images.

 

I'm in conversation with Atomos, who make both the Ninja and Samurai Blade capture devices. They say that they are having success, but maybe this is at 720... as they quote the D800 as a working source, which does say in its specification that it has full, clean HDMI output expressly for external recording.

 

Paul :-)


Posted · Report post

You have to press the "info" button (next to the record button) a few times to get the full picture - not the <i> button.

I also had to RTFM to find out.

Page 98 of the reference manual.


Posted · Report post

@Andrew: Also, if you find the time, I'd like to learn more about shooting with an external recorder. Never done this before. What are the options, prices, drawbacks etc.? What would you suggest for a budget shooter in combination with the D5200?

 

The BlackMagic Shuttle seems to be the cheapest option. I really like the Atomos Ninja 2 with built in monitor, but it's pricey... It would be nice if there was something in between with monitoring options.

I use a CaMax 5.6" HD monitor mounted on my tripod leg and plugged into the Shuttle II.

I get a superior view and a cheaper recording solution.


Posted · Report post

Thanks for posting this. Maybe someone has a definitive way of finding out whether this is true 4:2:2 or not.

 

What are your impressions of this camera for video?

Open the file with MediaInfo


Posted · Report post

Hi guys! 

 

I have a D5200, and love it for video.

My question: does the HDMI output send signal when you are in live mode,

and not recording? This would mean I could use a Hyperdeck (or other) to

shoot continuously! 

 

This would be... awesome!

 

Thanks in advance, 

 

Simon

