ProRes vs ProRes - A first look at uncompressed HDMI with the Nikon D5200 vs the Blackmagic Cinema Camera


Andrew Reid

Again, your solutions can be good in some practical situations (not in others, like real-life documentaries), but that's not the purpose of this topic, where we are talking about tests to prove a camera's qualities.

 

Most documentaries that I see, produced with much more expensive cameras, have to roll with their limitations in uncontrollable circumstances. I fail to see the point in being so obtuse about this, or which cameras would even be viable test candidates, somehow accounting for the unpredictable with magical properties that overcome the inherent limitations of 4:2:2 encoding to a bitstream that can only be so big, through a shitty, consumer-grade digital interface.

 

 

And I'm not talking about magic, I'm talking about normal things allowed by floating point colour data.

 

Line up the cameras that record floating point colour, sent out through HDMI (the BMCC has a leg up here), to be recorded by... something. Let's see 'em. I'm really curious now.

 

 

edit: of course this is all academic though; given the BMCC, the obvious choice is "shoot raw".

Link to comment
Share on other sites


Didn't realise that you could use PPs on a Nikon - can you use the ones for Canon on a Nikon, or are they specific to each make?

If you can, there are many better PPs out there.

 

I've used the Flaat PPs on a Canon & they aren't great (10 is the best of their bunch) - their claims of more DR aren't exactly true (what you gain in some areas, you lose in others).

 

In fact, any claim that a PP is going to improve things is a little off the mark - 8-bit isn't going to get [dramatically] better because of a PP; it'll only give you a bit more wiggle room.
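To put a rough number on that "wiggle room", here's a back-of-envelope sketch (not from the thread; the gamma values are illustrative stand-ins, not Nikon's actual curves) counting how many of the 256 8-bit codes a contrasty curve vs a flatter one spends on the brightest stop:

```python
# Back-of-envelope: how many of the 256 8-bit codes land in the brightest
# stop (linear 0.5..1.0) for a contrasty curve vs a flatter one.
# Gamma values are illustrative assumptions, not Nikon's actual curves.

def codes_in_top_stop(gamma):
    """Codes spent on the top stop when encoding with code = 255 * L**(1/gamma)."""
    lowest = round(255 * 0.5 ** (1.0 / gamma))
    return 255 - lowest

print(codes_in_top_stop(2.2))  # contrasty curve: 69 codes in the top stop
print(codes_in_top_stop(4.0))  # flatter curve: 41 codes, more left for shadows
```

Either way the total is still 256 codes; a flat profile only redistributes them, which is why it buys latitude rather than genuinely more bit depth.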


Would really love an answer to this (posted 25 February 2013 - 12:16 AM): What's the cheapest/easiest way to monitor audio via the mini HDMI output on the D5200? I assume a simple crossover cable/adapter isn't an option, given it's digital coming out via the mini HDMI while you need to monitor in analogue via headphones.

...

 

It is possible to simply turn off the fullrange flag in the stream, which is something I personally do to avoid the scaling of levels into restricted range at import into the NLE, so you have access to super whites and blacks. Then work at 32bit float, which holds onto RGB values beyond the 0.0 to 1.0 range (this needs care), and finally scale into strict rec709 for encoding out.

...

May I ask how you are stripping/changing the VUI flag? Thank you so much.


Has anyone ever used the Flaat Picture Profile for Nikon?

 

http://www.similaar.com/foto/flaat-picture-controls/index.html

 

I'm wondering if you have any comments and whether you think this would work with the D5200.

 

Thanks!

 

I've used it on my D800. I would use it if I really needed lots of dynamic range above all, but only then. I prefer the way Neutral or my own custom color profile looks. My own custom profile has the same settings as the Standard profile, but a bit less contrast.

 

According to the D5200 manual, loading custom color profiles works just as it does on the D7000 and D800: http://www.nikonusa.com/pdf/manuals/dslr/D5200RM_NT(11)01.pdf (see page 90 onwards, Custom color profile).


Didn't realise that you could use PPs on a Nikon - can you use the ones for Canon on a Nikon, or are they specific to each make?

If you can, there are many better PPs out there.

 

Specific ones.

 

On the D800, D7000 & D7100 it works like this: 

- A few simple color profiles can be created and edited through in-camera menus.

- Custom color profiles downloadable from the net can be placed in a special folder on the memory card, which the camera will search for when you get into the menus. From there you can then load the custom color profile.

- The Flaat color profiles are not the same as the simple ones you can create in-camera. They are more advanced custom profiles which can be created with Nikon's own software, View NX 2 & Capture NX 2. If you're interested in finding out more about how to load or create these, there's some info in the D800 manual on page 172: http://www.nikonusa.com/pdf/manuals/dslr/D800_EN.pdf

 

I doubt the D5200 is much different with regard to custom color profiles. Maybe a bit dumbed down, judging by the in-camera menu options.

 

Here's a test of Flaat picture styles on the Nikon D800 (the test is not mine; it's just one I looked through when I was figuring out the picture styles myself):

https://vimeo.com/52227075


No problem - the combination of BT.709 primaries and BT.601 color matrix coefficients is typical of JPEG/JFIF.

Make sure you use a 32bit workflow with these full range files (FCPx, Premiere CS5.5 onwards), that any intermediate process like exporting to image sequences goes via OpenEXR, or that you stay YCbCr, and manipulate levels in the grade into the restricted 16 - 235 luma range for encoding to delivery.
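For anyone wondering what that scaling into the restricted range actually looks like numerically, here's a minimal sketch of the full-range-to-legal-range mapping at 8bit integer precision (an NLE would do this in 32bit float; the function name is mine, not from any API):

```python
# Minimal sketch of the full-range -> restricted-range scaling described
# above, at 8bit integer precision for clarity (an NLE does this in float).
# Luma maps 0-255 into 16-235; chroma maps 0-255 into 16-240 around 128.

def full_to_legal(y, cb, cr):
    y_out  = round(16 + y * 219 / 255)
    cb_out = round(128 + (cb - 128) * 224 / 255)
    cr_out = round(128 + (cr - 128) * 224 / 255)
    return y_out, cb_out, cr_out

print(full_to_legal(0, 128, 128))    # (16, 128, 128): full-range black -> 16
print(full_to_legal(255, 255, 255))  # (235, 240, 240): peaks land on legal max
```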

Has anyone ever used the Flaat Picture Profile for Nikon?

 

http://www.similaar.com/foto/flaat-picture-controls/index.html

 

I'm wondering if you have any comments and whether you think this would work with the D5200.

 

Thanks!

 

I use Flaat 10p and 11p on my D5200 right now. (Yes, I switched after seeing Andrew's review.) I've used the 'TassinFlat' picture profile on my D7000 some time before, which I liked the most. On the D5200 I sometimes see blotchy (compression?) artifacts in the shadows, even in daylight. But it is still splendid nevertheless! A must-have for production work. We're going to shoot a corporate film soon on D5200s.

 

You should see this video. They tested the Flaat picture profile on the D5200: https://vimeo.com/60629733


I have one more question about the VUI video_full_range_flag.

 

This flag is just used for viewing, right? It has no relationship to how the camera records?

 

I ask this because sometimes I hear it mentioned that a particular camera records H.264 full range, as if recording "full range" were an advantage. But isn't H.264 always going to map the luma between 0 and 1, and the chroma between -0.5 and +0.5?

 

FWICT, all the video_full_range_flag does is tell the decoder whether to map YCbCr to video levels or to 0 - 255. It doesn't actually add any more dynamic range to the camera. Correct?

 

And cameras that record H.264 "full range" don't have more dynamic range because of it, right?

 

Thank you so much!


I have one more question about the VUI video_full_range_flag.

 

This flag is just used for viewing, right? It has no relationship to how the camera records?

 

Hi, you're correct. The VUI options (Video Usability Information) signal to a decompressing codec that the h264 source is full range; if that codec respects the flag, the output luma is scaled into 16 - 235 (16 - 240 for chroma), so as soon as a Canon, Nikon or GH3 (MOV) h264 file is transcoded, levels are scaled into 'restricted' range. QT-based applications like FCPx, DaVinci Lite, MPEG Streamclip, and codec implementations like ffmpeg, CoreAVC and MainConcept for Premiere CS5 onwards all respect the flag.
 

I ask this because sometimes I hear it mentioned that a particular camera records H.264 full range, as if recording "full range" were an advantage.

 

Internally the camera uses JFIF (jpeg) raw 4:2:2 and sends it to the h264 encoder, a bit like encoding a jpg image sequence to h264 but specifying encoding over the full range of YCbCr levels (which can certainly be done with x264). The VUI option then flags it full range for the scaling at playback/transcode, but the actual camera data is encoded and quantized over the full available 8bit range in YCC.

JFIF normalizes chroma over the full range too, so to maintain the relationship between luma and chroma, all levels are scaled at playback/transcode. Converting full range luma and chroma to 8bit RGB for display or processing on the assumption that the levels are 'restricted' range, when they're not, will result in clipped RGB channels. Which is why I mentioned previously that care needs to be taken.
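Here's a small sketch of the clipping described above (the helper function is illustrative, not from any library): decode a full-range pixel with the standard restricted-range BT.601 assumption and the RGB channels clip.

```python
# Sketch of the clipping described above: decoding full-range (JFIF-style)
# YCbCr with the restricted-range BT.601 assumption clips the RGB output.
# The helper is illustrative, not from any library.

def ycbcr_to_rgb_restricted(y, cb, cr):
    """Assumes 16-235 / 16-240 input; full-range input overshoots and clips."""
    yy = (y - 16) * 255 / 219
    r = yy + 1.402 * (cr - 128) * 255 / 224
    g = yy - 0.344136 * (cb - 128) * 255 / 224 - 0.714136 * (cr - 128) * 255 / 224
    b = yy + 1.772 * (cb - 128) * 255 / 224
    # an 8bit display clamps anything outside 0-255
    return tuple(min(255, max(0, round(v))) for v in (r, g, b))

# A full-range super white (Y = 250) is flattened to the same value as Y = 235,
# so the highlight detail between them is gone:
print(ycbcr_to_rgb_restricted(250, 128, 128))  # (255, 255, 255)
print(ycbcr_to_rgb_restricted(235, 128, 128))  # (255, 255, 255)
```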

 

But isn't H.264 always going to map the luma between 0 and 1, and the chroma between -0.5 and +0.5?

 

Yes, many cameras shoot Display Referred video, for example rec709; in 32bit RGB speak that's 0.0 - 1.0, in 8bit it's RGB 0 - 255. But that is at the point it's converted to RGB. In theory 8bit YCbCr can hold more data than display-referred 8bit RGB. It's not that h264 does any mapping; the receiving application does that, along with the color space conversions.

 

It's all an interpretation which can vary somewhat, but ultimately the conversion to RGB for display will be based on the restricted-range assumption: 16 - 235 / 240 YCC to 0 - 255 RGB. In a 32bit workspace, however, the conversion to RGB will not clip highlights or crush shadows: values below 0.0 and above 1.0 are maintained, evident in the YCC waveform or by sampling pixels with a color picker. We'd be mistaken to think the shadows are crushed and the highlights clipped if we rely solely on the NLE's 8bit display-referred preview, rather than viewing the scopes and sampling highlights and shadows with a color picker, where we find values > 1.0 and < 0.0.

 

Take for example Sony cameras like the NEX-5n, FS100, FS....: they are all able to encode over 16 - 255 YCC. Sure, perhaps a camera operator shouldn't allow that, but with now-common 32bit processing it's not so critical if manipulation/grading is going to happen anyway, compared to the old 8bit integer RGB processing in older NLEs.

 

But there's also nothing stopping a conversion that does 0 - 255 YCC to 0 - 255 RGB at 8bit, by using a different luma + chroma mix calculation and just getting a slightly different gamma output.

 

FWICT, all the video_full_range_flag does is tell the decoder whether or not to map YCbCr to video levels or 0 to 255. It doesn't actually add any more dynamic range to the camera. Correct?

And cameras that record H.264 "full range" don't have more dynamic range because of it, right?

 

There's maybe a bit of a stop extra, but no, it's not about DR. There are many manipulations that are 'better' done in the camera's native YCC space than in RGB.

 

If we want to do a gentle denoise in YCC space with Neat Video, for example, to blend pixels and fill some of the in-between float values in a 32bit workspace before grading, why would we first insist on scaling those luma gradients into restricted range at decompression, at 8bit precision?

 

It's not like we're expanding the range into 0 - 255; the h264 has been encoded and quantized that way, in YCC space, with luma and chroma on separate planes. Unlike in the RGB color model, we can manipulate luma more easily if the tools are available.

 

Yes, we need to scale into restricted range before encoding for delivery, so the final 'correct' conversion to 8bit RGB for display is done by non-color-managed playback, where everything is assumed to be correct-to-specification restricted range.

 

But whilst in a 32bit workspace, personally I don't think it's necessary, aside from ensuring the NLE/grading application preview is correct via whatever color management method is available, LUT'd or otherwise.


Hi yellow,

 

Thanks very much for the description. After looking at Annex E of the H.264 specification, it looks like we may have the handling of the video_full_range_flag in reverse, LOL! It seems that if the flag is set to 0, the output is scaled to the video legal range, and if the flag is set to 1, the output is not scaled. Here's a quote from that section:

 

 

video_full_range_flag

 

indicates the black level and range of the luma and chroma signals as derived from E′Y, E′PB, and E′PR analogue component signals, as follows.

 

- If video_full_range_flag is equal to 0,

 

Y = Round( 219 * E’Y + 16 )

Cb = Round( 224 * E’PB + 128 )

Cr = Round( 224 * E’PR + 128 )

 

- Otherwise (video_full_range_flag is equal to 1),

 

Y = Round( 255 * E’Y )

Cb = Round( 255 * E’PB + 128 )

Cr = Round( 255 * E’PR + 128 )

 

When the video_full_range_flag syntax element is not present, video_full_range_flag value shall be inferred to be equal to 0.

 

What do you make of this? Thanks so much again! 
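The Annex E formulas quoted above can be sanity-checked in a few lines of Python (my own sketch, not the spec's reference code):

```python
# The two Annex E mappings quoted above, implemented directly.
# E'Y runs 0.0..1.0, E'PB / E'PR run -0.5..+0.5 (analogue component signals).

def annex_e(ey, epb, epr, video_full_range_flag):
    if video_full_range_flag == 0:   # restricted / 'legal' range
        y  = round(219 * ey + 16)
        cb = round(224 * epb + 128)
        cr = round(224 * epr + 128)
    else:                            # full range: no head/footroom
        y  = round(255 * ey)
        cb = round(255 * epb + 128)
        cr = round(255 * epr + 128)
    return y, cb, cr

print(annex_e(0.0, 0.0, 0.0, 0))  # (16, 128, 128) - flag 0 puts black at 16
print(annex_e(1.0, 0.0, 0.0, 0))  # (235, 128, 128) - and white at 235
print(annex_e(0.0, 0.0, 0.0, 1))  # (0, 128, 128) - flag 1 puts black at 0
print(annex_e(1.0, 0.0, 0.0, 1))  # (255, 128, 128) - and white at 255
```

Flag 0 yields the 16 - 235 / 16 - 240 'legal' placement; flag 1 spreads levels over the whole 8bit range.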


hi,

 

Yes, that makes sense. If the flag is 1, assume jpeg levels, i.e. full range, and therefore scale levels for 'valid' YCbCr, evident in any transcode to DNxHD or ProRes etc. If the flag is 0, treat the stream as restricted-range YCbCr levels; values outside of 'legal' range just get 'clipped' at 0 and 1.

 

The following link includes two MP4s derived from the same full range x264 encoding. I created them by simply remuxing and setting the flag on each using Komisar's MP4Box.

 

http://dl.dropbox.com/u/74780302/fullrangetest.zip

 

With the flag 'on' you'll see that the 16 & 255 text is visible, i.e. out-of-range values originally encoded by x264 have been scaled into 'view', and the NLE waveform in a 32bit project will show nothing over 100.

 

With the flag 'off' you'll see that the 16 & 255 text is not visible, just horizontal black and white bars, i.e. out-of-range values originally encoded by x264 have been clipped in the 'view', BUT the NLE waveform in a 32bit project will show values over 100, which can be scaled in the grade with a simple 32bit levels effect or similar, and the text brought into view. At 8bit the values will generally be lost completely.

 

The full range x264 encoding with no flag set does not show the 16 & 255 text either, i.e. it's treated as flag = 0.


I am not at my editing machine right now, but I will certainly check out the files. Thanks so much. ;)

 

I think there is a larger issue that I'm not sure I've grasped completely. Does the H.264 specification look at decoding from the perspective that correctly decoded video should be 16 to 235? If so, then it would make sense that full range video should be scaled to 16 - 235 when decoded.

 

Thanks again!


h264 is just the chosen vehicle to transport the moving image; like any codec, it's up to us to use it 'correctly'. For final display, rec709 16 - 235 is required.

But h264 supports not only rec709 but also xvYCC, where the image is encoded over the 1 - 254 range, providing an extended gamut of theoretically 1.8x the colors of rec709 while still using the rec709 color primaries. A Sony PS3 and hdmi support xvYCC, and modern TVs can support the extended gamut, but it hasn't taken off. :-)

But that's an aside: if we can encode over the full levels range from camera to edit, handle 'invalid' RGB in 32bit grading, and then scale at the point of encoding for delivery (or flag full range), why not use it in the intermediate stages?

See, this is exactly the crux of the matter I'm trying to get perspective on. I primarily use Avid Media Composer. Unlike many other NLEs, MC doesn't use any under-the-hood trickery to make the image look correct on the computer screen. When editing in MC, 0 appears as true black, 16 as a very dark grey, 235 as almost pure white, and 255 as pure white.

 

Now people may think this is crazy for an application designed with broadcast in mind. But it's actually wonderful, because there are no surprises from remappings going on behind the scenes. And if you want to see how your footage will look with 16 as black and 235 as white, you can use an external broadcast monitor or the Full Screen Playback feature with the remap-luminance-for-computer-screens option selected.

 

But here's where it gets messy in Avid land. When using Avid's file-linking feature, AMA, to link to a full-range-flagged H.264 file, Media Composer remaps 0 - 255 to 16 - 235. Now maybe that's what should be done. But Avid's AMA has a setting called "Do not modify levels." At first glance, you would think that with that selected, a full range file would display inside Media Composer as full range. But it doesn't; it's remapped to broadcast legal.

 

So if the "accepted" way to handle the video_full_range_flag is to remap, then Avid's probably doing what it should. But After Effects and Premiere, for example, do not seem to remap when the full range flag is on and the source file's color management profile is used. So I'm trying to figure out which behavior is right.

 

So with your help, I think I understand what MC is doing. What I still don't understand is if it should be considered "correct."

 

Thanks again!!


See, this is exactly the crux of the matter I'm trying to get perspective on. I primarily use Avid Media Composer. Unlike many other NLEs, MC doesn't use any under-the-hood trickery to make the image look correct on the computer screen. When editing in MC, 0 appears as true black, 16 as a very dark grey, 235 as almost pure white, and 255 as pure white.

I wouldn't say it was trickery exactly - 'correct' YCbCr to RGB should involve the remapping in the color space conversion, and our screens are RGB. That said, I frequently transfer the full range YCbCr levels to the same levels in RGB when processing at 8bit, to ensure levels are not clipped. Does MC offer a true 32bit float linear environment, or just 32bit with regard to precision?

 

Some remapping must occur when mixing YCbCr and RGB sources on the same timeline, as black and white levels will differ - so I assume the user applies a levels filter or similar?

 

So if the "accepted" way to handle the video_full_range_flag is to remap, then Avid's probably doing what it should. But After Effects and Premiere, for example, do not seem to remap when the full range flag is on and the source file's color management profile is used. So I'm trying to figure out which behavior is right.

 

Depends on what version of Premiere and AE. Anything prior to CS5 will be suspect with regard to YCC to RGB, but from CS5.5, with the Mercury Engine offering a 32bit workspace, the flagon.mp4 is scaled and the flagoff.mp4 isn't, evident at 108% on the YC waveform. I've found that round-tripping full range requires OpenEXR to retain <0.0 & >1.0 levels, though DPX won't.

