Posts posted by see ya

  1. So this is a 3D 'Look' LUT then, if you're saying to add it to the end of the chain created with the Resolve dpx image? Rather than a Canon camera space to Log or rec709 3D transform, created by shooting a target under controlled conditions and using something like Argyll CMS to generate a high-precision 3D LUT in numerous formats like eeColor, cube etc.?

    Your write-up kind of blurs the line between the different purposes of a 3D LUT.

    A 3D LUT is destructive on its own, but if a shaper LUT is used first that can be avoided.
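
    To make the shaper point concrete, here's a rough numpy sketch (names and numbers are purely illustrative, not any particular tool's implementation): a 1D shaper LUT redistributes code values, e.g. lin to log, before the sparse 3D grid is sampled, so the cube's points land where the image data actually lives.

        # Minimal sketch: 1D shaper applied per channel, then a 3D cube lookup.
        # Nearest-neighbour sampling for brevity; real tools interpolate
        # (trilinear / tetrahedral).
        import numpy as np

        def apply_shaper(rgb, shaper):
            x = np.linspace(0.0, 1.0, shaper.size)
            return np.interp(rgb, x, shaper)

        def apply_3d_lut(rgb, cube):
            m = cube.shape[0]
            idx = np.clip(np.round(rgb * (m - 1)).astype(int), 0, m - 1)
            return cube[idx[..., 0], idx[..., 1], idx[..., 2]]

        # hypothetical log-ish shaper into an identity 33-point cube
        shaper = np.log1p(10 * np.linspace(0, 1, 1024)) / np.log1p(10)
        g = np.linspace(0, 1, 33)
        cube = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
        out = apply_3d_lut(apply_shaper(np.random.rand(16, 3), shaper), cube)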

  2. Suggest using the raw scopes ML provides for ETTR. Otherwise you're looking at the camera's jpeg-processed, mangled feed, which doesn't tell you what your raw really contains.

    Depending on the color of the light you're shooting under, the other RGB channels are multiplied up to get you your white balance; that's what's really under the bulls**t color temperature / tint sliders.

    So the more you ETTR and risk clipping one of the channels (which one depends on the color of your light source), the more the 'weaker' channels are dragged up, along with their noise.

    The well-abused term 'recovering' highlights is bulls**t: if you clip a channel, it's clipped. I think many use the term because in reality you have, say, a 7-stop or greater source and you're trying to view it through a 5-stop sRGB / rec709 gamma curve.

    The data is there as long as we don't clip a channel; it doesn't need recovering, just massaging into the limited viewing window.

    If we do clip a channel then very possibly some sort of highlight-handling strategy is required, whether that's clipping to white, fudging to neutral grey (which in turn loses the specular highlights) or blending from the other channels.
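
    As a toy illustration of the multiplier point (all numbers made up): white balance is per-channel gain on the raw data, so once a channel hits sensor full scale it's gone, and the 'weaker' channels get multiplied up, noise included.

        import numpy as np

        raw = np.random.rand(4, 4, 3) * 0.9        # hypothetical normalized raw, 1.0 = sensor clip
        wb_gains = np.array([2.1, 1.0, 1.6])       # e.g. R and B dragged up under warm light
        balanced = raw * wb_gains

        clipped = balanced > 1.0                   # anything pushed over full scale is unrecoverable
        print("clipped samples per channel:", clipped.sum(axis=(0, 1)))
        balanced = np.clip(balanced, 0.0, 1.0)     # any 'recovery' from here is guesswork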
  3. OK... so I'm a little unclear here. What's causing the pink magenta cast in Premiere? Is it the Raw Magic conversion process that's adding it? Or is it the original raw file created by the Magic Lantern software?


    My guess: Premiere assumes too low a sensor saturation point, so the R, G & B channels can't be scaled correctly to suit the camera raw source, similar to getting green shadows from assuming the wrong black level.
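
    To illustrate the mechanism behind that guess (numbers invented; this isn't Premiere's actual code): raw decodes as (x - black) / (white - black) followed by per-channel WB gains. If the assumed saturation point doesn't match the sensor's real clip point, clipped neutral highlights stop landing exactly at 1.0 before the gains, so the channels clip unevenly and whites skew pink/magenta; a wrong black level skews the shadows the same way.

        import numpy as np

        def decode(raw, black, white, gains):
            lin = (raw - black) / (white - black)
            return np.clip(lin * gains, 0.0, 1.0)

        pixel = np.array([15000.0, 15000.0, 15000.0])  # sensor-clipped neutral, 14-bit-ish
        gains = np.array([1.9, 1.0, 1.5])              # typical daylight multipliers

        print(decode(pixel, black=2048, white=15000, gains=gains))  # [1. 1. 1.]  neutral white
        print(decode(pixel, black=2048, white=16383, gains=gains))  # [1. 0.9 1.] pink highlight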
  4. On the spec you chose: for Resolve, at least 3GB of VRAM on the GTX 770; for the money a Zotac 4GB would be good, with a 5-year warranty too.

    RAM: I'd watch those jerk-off heat sinks possibly clashing with your CPU cooler if you were to choose a fan rather than water cooling. The CAS latency is high, and only dual channel? If it's 2x 8GB, maybe consider some low-CAS, low-voltage Kingston HyperX or similar.

    16GB is entry level for any RAM cache / playblast work.

    Does your mobo support dual 16x PCIe 3.0 if you add a second GPU? If you're going for Resolve 10b with dual monitors then think about adding a second GPU just for the GUI, as 10b supports multi-GPU, preferably on a mobo supporting dual 16x, not 16x dropping to 8x and 8x.

    Blackmagic recommend the Asus P9X79 Pro for that: socket 2011 Ivy Bridge, quad-channel RAM etc. Have you done a cost comparison with socket 2011 and a quad core?

    For the PSU I'd suggest the 850W to cover things like a second GPU and a small disk RAID. RAID 0 for that, just as workspace; RAID 1 for backup to a Synology NAS; and an eSATA dock for bare drives for off-site. No RAID 5 or 6 unless you're using enterprise-grade drives.
  5. Poor Blanche, you guys have not answered her question at all. She's trying to find out the best size "variable" ND to get for her GH3, with a view to using other lenses in the future, such as Nikons or Canons, or Samyangs or whatever...

     

    The biggest you can afford, then step rings down to the smaller lens threads to suit. :-)

     

    Had a Lightcrafts II 62mm VariND and a set of step rings for the usual 49mm, 52mm, 55mm, 58mm to 62mm. Covered the Meyer Optiks, Olympus and Pentax lenses I use.

  6. I've always steered clear of Cokins for video / moving camera, as the filter holder is not enclosed like a matte box, so light can hit the filter from all angles, including from behind, between the lens and the back of the filter. I used to have a nice collection of them, grads and all that, used for stills but not a moving camera.

     

    Had a Lightcrafts II 62mm VariND and a set of step rings for the usual 49mm, 52mm, 55mm, 58mm to 62mm. Covered the Meyer Optiks, Olympus and Pentax lenses I use.

     

    But I've recently gone back to fixed NDs, as all the lenses are declicked; chose the 2 and 3 stop versions of the German-made B+W ones, brass frames and hard coatings.

     

    Slightly cheaper alternatives are the Hoya Digital Pro 1s, but in the past I found them a bit difficult to keep clean.

     

    Whatever manufacturer you look at for filters, make sure you're comparing and buying the multicoated versions.

     

    No need to stack filters like UV + ND; you'll risk light bouncing about and ghosting.

  7. It's funny to read people mentioning 'Prolost' Neutral as if it were a 'Picture Style'.

     

    I'm pretty sure Stu named it that as a tongue-in-cheek sideways swipe at all the fancy-pants picture profiles being knocked up and the 'slightly' exaggerated claims made about them.

     

    He's long since suggested the 'flat' approach on prolost.com, and before that in The DV Rebel's Guide, and the point is that just flattening Neutral down a bit can be done without the need for a laptop to load a Picture Style out in the 'field'; it can be set on numerous cameras when a group gets together to shoot ad hoc.

     

    There'll be many out there with dialed-down Neutral not even knowing they were using 'Prolost', and not only on Canon DSLRs but on the forerunners in budget indie shooters' hands, such as the Canon HV range. :-)

  8. As far as I can see, the captured ProRes has a slight shift to green :(. ALL-I looks well white balanced. Definitely an issue with the new firmware, so white balance correction in post would be needed to fix this little shift.

     

    Green color shift? Pinks to orange as well? Nothing to do with WB. Use the right color matrix luma coefficients: BT601 instead of BT709, which will be the assumption for an HD source, based on pixel count, if BT601 is not declared in the stream. Or the other way round if the ProRes is flagged BT601 and the source is BT709.

     

    Also, is the source full-range luma or not? Not sure if ProRes even does full-range luma, but that's what the Canons output.
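
    The coefficient mix-up in numbers, as a rough sketch (full-range maths for simplicity, arbitrary sample values): the same YCbCr triple decoded with the wrong matrix comes out a different color, greens and skin tones shifting exactly as described.

        import numpy as np

        def ycbcr_to_rgb(y, cb, cr, kr, kb):
            kg = 1.0 - kr - kb
            r = y + 2 * (1 - kr) * cr
            b = y + 2 * (1 - kb) * cb
            g = (y - kr * r - kb * b) / kg
            return np.array([r, g, b])

        y, cb, cr = 0.5, -0.1, 0.15
        print(ycbcr_to_rgb(y, cb, cr, kr=0.2126, kb=0.0722))  # decoded as BT709
        print(ycbcr_to_rgb(y, cb, cr, kr=0.2990, kb=0.1140))  # decoded as BT601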

  9. This isn't a new technique.  It can't replace green screen entirely, otherwise it would have done so already.  It's neat though.

     

    It's not about replacing the chroma key technique. It was the chosen option for this shot on this project, partly due to the chosen camera and a decision to risk potentially more expensive VFX rework if the shot needed, say, a sky replacement, since too much gets baked in compared to chroma key.

     

    What's truly impressive is that we have projectors finally bright enough that the effect can be more or less invisible now, not to mention able to contribute to a shooting stop.

     

    It was the sensitivity of the camera used, a Sony F65, and the fact the shots were on fast lenses wide open, that meant a 'reasonable' number of projectors could be used on this occasion instead of a stupid number, cost- and stability-wise, rather than projectors being brighter now.

     

    http://www.fxguide.com/fxpodcasts/fxpodcast-248-oblivion/

  10. Yeah, jesus, huh. Yes, it's a great bit of kit for some shots, not all shots; it's another tool, BUT it isn't a 'Game Changer'. It doesn't warrant the build-up it's had, and it doesn't warrant comparison as a 'game changer' to what the 5D Mk II did to the 'industry'. Shot for shot in a movie, how many times would you use this device, versus how many times you'd use a 'game-changing' camera? Every shot.

     

    I'm not judging the device based on the videos put up; I'm commenting on the overuse, if anything. Yes, it's to show off the device, and better will follow, but considering they had the heads-up, I'm surprised at what's been put up, that's all.

  11. OK, so Mr Laf has updated his site with some bullshit about flying f--kin' cameras. If this is the bullshit-bingo 'game changer' he's been touting the last few days then he's had his head up his a$$ far too long. If not, then the nonsense continues.

    I would have commented over there, but his site constantly crashes my iOS device. :-)
  12. See, this is exactly the crux of the matter that I'm trying to get perspective on. I primarily use Avid Media Composer. And unlike many other NLEs, MC doesn't use any under-the-hood trickery to make the image look correct on the computer screen. When editing in MC, 0 appears as true black, 16 as a very dark grey, 235 as almost pure white, and 255 as pure white.

    I wouldn't say it was trickery exactly. 'Correct' YCbCr to RGB should involve that remapping in the color space conversion, and our screens are RGB. That said, I frequently transfer full-range YCbCr levels to the same levels in RGB when processing at 8bit, to ensure levels are not clipped. Does MC offer a true 32bit float linear environment, or is the float just for precision?

     

    Some remapping must occur when mixing YCbCr and RGB sources on the same timeline, as black and white levels will differ; so I assume the user applies a levels filter or similar?

     

    So if the "accepted" way to handle the video_full_range_flag is to remap, then Avid's probably doing what it should. But After Effects and Premiere, for example, do not seem to remap when the full range flag is on and the source file's color management profile is used. So I'm trying to figure out which behavior is right.

     

    Depends on what version of Premiere and AE. Anything prior to CS5 will be suspect with regard to YCC to RGB, but from CS5.5, with the Mercury Engine offering a 32bit workspace, the flagon.mp4 is scaled and the flagoff.mp4 isn't, evident in the 108% excursion on the YC waveform. I've found that round-tripping full range requires OpenEXR to retain the <0.0 & >1.0 levels; DPX won't.
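
    The remap in question is just this arithmetic (a sketch, not Avid's or Adobe's actual code): full-range Y' scaled into the legal window, and a full-range white read as if it were legal-range showing up as superwhite on a waveform, (255 - 16) / 219 ≈ 109%, which is the excursion above 100 mentioned above.

        def full_to_legal(y):                   # 0-255 -> 16-235
            return 16.0 + y * 219.0 / 255.0

        def legal_to_ire(y):                    # waveform percentage, 16 = 0, 235 = 100
            return (y - 16.0) / 219.0 * 100.0

        print(full_to_legal(255))   # 235.0
        print(legal_to_ire(255))    # ~109.1, the superwhite excursion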

  13. h264 is just the chosen vehicle to transport the moving image; like any codec, it's up to us to use it 'correctly'. For final display, rec709 16 - 235 is required.

    But h264 supports not only rec709 but also xvYCC, where the image is encoded over the 1 - 254 range, providing an extended gamut of theoretically 1.8x the colors of rec709 while still using the rec709 color primaries. The Sony PS3 and HDMI support xvYCC, and modern TVs can support the extended gamut, but it hasn't taken off. :-)

    But that's an aside. If we can encode over the full levels range from camera to edit, handle 'invalid' RGB in 32bit grading, and then scale at the point of encoding for delivery (or flag full range), why not use it in the intermediate stages?
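
    For the 'scale at the point of encoding for delivery' step, one hedged example: recent ffmpeg builds can do the full-to-limited conversion with the scale filter's range options (check them against your build's docs):

        ffmpeg -i full_range_master.mov -vf scale=in_range=full:out_range=limited -c:v libx264 delivery.mp4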
  14. Hi,

     

    Yes, that makes sense: if the flag is 1, assume jpeg levels, i.e. full range, and therefore scale levels for 'valid' YCbCr, evident in any transcode to DNxHD or ProRes etc.; if the flag is 0, treat them as standard YCbCr levels, and those outside the 'legal' range just get 'clipped' at 0 & 1.

     

    The following link includes two mp4s derived from the same full-range x264 encoding. I created the two mp4s by simply remuxing and setting the flag on each using Komisar's MP4Box.

     

    http://dl.dropbox.com/u/74780302/fullrangetest.zip

     

    With the flag 'on' you'll see that the 16 & 255 text is visible, i.e. out-of-range values originally encoded by x264 have been scaled into 'view', and the NLE waveform in a 32bit project will show nothing over 100.

     

    With the flag 'off' you'll see that the 16 & 255 text is not visible, just horizontal black and white bars, i.e. out-of-range values originally encoded by x264 have been clipped in the 'view', BUT the NLE waveform in a 32bit project will show values over 100, which can be scaled in the grade with a simple 32bit levels effect or similar and the text brought into view. At 8bit the values will generally be lost completely.

     

    The full-range x264 encoding with no flag set does not show the 16 & 255 text, i.e. it behaves as if the flag were 0.
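
    For anyone wanting to reproduce the flagon/flagoff pair without hunting down Komisar's MP4Box build: newer ffmpeg versions can rewrite the VUI flag losslessly (stream copy, no re-encode) with the h264_metadata bitstream filter, assuming your build includes it:

        ffmpeg -i source.mp4 -c copy -bsf:v h264_metadata=video_full_range_flag=1 flagon.mp4
        ffmpeg -i source.mp4 -c copy -bsf:v h264_metadata=video_full_range_flag=0 flagoff.mp4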

  15. I have one more question about the VUI video_full_range_flag.

     

    This flag is just used for viewing, right, it has no relationship to how the camera records?

     

    Hi, you're correct. The VUI options (Video Usability Information) signal to a decompressing codec that the h264 source is full range, and if that codec respects the flag, the output luma is scaled into 16 - 235 (16 - 240 for chroma), so as soon as a Canon, Nikon or GH3 (MOV) h264 file is transcoded, levels are scaled into 'restricted' range. QT-based applications like FCPx, DaVinci Lite, MPEG Streamclip and codec implementations like ffmpeg, CoreAVC and MainConcept for Premiere CS5 onwards all respect the flag.
     

    I ask this because sometimes I hear it mentioned that a particular camera records H.264 full range, as if recording "full range" is an advantage.

     

    Internally the camera uses JFIF (jpeg) raw 4:2:2 and sends it to the h264 encoder, so it's a bit like encoding a jpg image sequence to h264 but specifying encoding over the full range of YCbCr levels, which can certainly be done with x264. Then the VUI option flags it as full range for the scaling at playback / transcode etc., but the actual camera data is encoded and quantized over the full available 8bit range in YCC.

    JFIF normalizes chroma over the full range too, so to maintain the relationship between luma and chroma, all levels are scaled at playback / transcode. Converting full-range luma and chroma to 8bit RGB for display or processing on the assumption that the levels are 'restricted' range, when they're not, will result in clipped RGB channels, which is why I mentioned previously that care needs to be taken.
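
    For reference, the x264 encode mentioned above can be done from the command line; roughly (option names per current x264 builds, check --fullhelp):

        x264 --input-range pc --range pc -o fullrange.264 input.y4m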

     

    But isn't H.264 always going to map the luma between 0 and 1, and the chroma between -.5 and +.5?

     

    Yes, many cameras shoot display-referred video, for example rec709; in 32bit RGB speak it's 0.0 - 1.0, in 8bit it's RGB 0 - 255. But that is at the point it's converted to RGB. In theory 8bit YCbCr can hold more data than display-referred 8bit RGB. It's not that h264 does any mapping; it's the receiving application that does that, along with the color space conversions.

     

    It's all an interpretation, which can vary somewhat, but ultimately the conversion to RGB for display will be based on the restricted-range 16 - 235 / 240 YCC to 0 - 255 RGB assumption. However, in a 32bit workspace the conversion to RGB will not clip highlights and crush shadows: values below 0.0 and above 1.0 are maintained, evident in the YCC waveform or by sampling pixels in a frame with a color picker. We'd be mistaken for thinking shadows are crushed and highlights clipped if we relied solely on the NLE's 8bit display-referred preview rather than viewing the scopes and sampling highlights and shadows with a color picker, finding values > 1.0 and < 0.0.
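
    As a sketch of that 32bit behaviour (BT709 limited-range maths in plain float, not any particular NLE's code): the conversion itself doesn't clip, so a superwhite Y' of 255 lands at about 1.09 and stays addressable for the grade.

        def ycbcr709_to_rgb_float(y, cb, cr):
            yf = (y - 16.0) / 219.0
            cbf = (cb - 128.0) / 224.0
            crf = (cr - 128.0) / 224.0
            r = yf + 1.5748 * crf
            g = yf - 0.1873 * cbf - 0.4681 * crf
            b = yf + 1.8556 * cbf
            return r, g, b                      # deliberately not clipped to 0.0 - 1.0

        print(ycbcr709_to_rgb_float(255, 128, 128))  # (~1.09, ~1.09, ~1.09)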

     

    Take for example a Sony camera like a NEX5n, FS100, FS.... they're all able to encode over 16 - 255 YCC. Sure, perhaps a camera operator shouldn't allow that, but with the now common 32bit processing it's not so critical if manipulation / grading is going to happen anyway, compared to the old 8bit integer RGB processing in older NLEs.

     

    But there's also nothing stopping a conversion that maps 0 - 255 YCC to 0 - 255 RGB at 8bit by using a different luma + chroma mix calculation and just accepting a slightly higher-gamma output.

     

    FWICT, all the video_full_range_flag does is tell the decoder whether or not to map YCbCr to video levels or 0 to 255. It doesn't actually add any more dynamic range to the camera. Correct?

    And cameras that record H.264 "full range" don't have more dynamic range because of it, right?

     

    There's maybe a bit of a stop extra but no, it's not about DR. There are many manipulations that are 'better' done in the camera's native YCC space rather than in RGB.

     

    If we want to do a gentle denoise in YCC space, with Neat Video for example, to blend pixels and fill in some of the in-between float values in a 32bit workspace before grading, why would we insist on first scaling those luma gradients into restricted range at decompression, at 8bit precision?

     

    It's not like we're expanding the range into 0 - 255; the h264 has been encoded and quantized that way. In YCC space, with luma and chroma on separate planes, unlike the RGB color model, we can manipulate luma more easily, if the tools are available.
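
    A toy version of that luma-plane point (a crude 3x3 box blur standing in for a proper denoiser, invented data): in YCC the operation touches only the Y plane and leaves chroma alone, which has no direct equivalent in interleaved RGB.

        import numpy as np

        ycc = np.random.rand(32, 32, 3)            # hypothetical float Y, Cb, Cr planes
        pad = np.pad(ycc[..., 0], 1, mode="edge")
        ycc[..., 0] = sum(pad[i:i + 32, j:j + 32]
                          for i in range(3) for j in range(3)) / 9.0
        # Cb / Cr untouched; the 'denoise' acted on luma only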

     

    Yes, we need to scale into restricted range before encoding for delivery, so the final 'correct' conversion to 8bit RGB for display can be done by non-color-managed playback, where everything is assumed to be to-spec restricted range.

     

    But whilst in a 32bit workspace, personally I don't think it's necessary, aside from ensuring the NLE / grading application preview is correct via whatever color management method is available, LUT'd or otherwise.

  16. No problem, the combination of BT709 primaries and BT601 color matrix coeffs is typical of jpeg / JFIF.

    Make sure you use a 32bit workflow with these full-range files (FCPx, or Premiere CS5.5 onwards), that any intermediate process like exporting to image sequences goes via OpenEXR, or that you're staying YCbCr, so you can manipulate levels in the grade into the restricted 16 - 235 luma range for encoding to delivery.
  17. Correct, 5D2RGB isn't required for adjusting levels before import into a 32bit workspace.

    I'd suggest the only thing 5D2RGB is for is transcoding to ProRes, for the sole reason of playback performance, but then anything that goes to ProRes would do.

    No gain using 5D2RGB imho if 32bit on the GPU is available. OpenCL or CUDA.
  18. No, RunGunShoot's highlights are above 100, so as he says, he needs to pull them down so the strongest highlights are at or just below 100.

    At the shadow end the levels don't need adjusting; they're at the lowest 'legal' level already. Maybe adjusted up, depending on the look RGS is after.