Posts posted by Devon

  1. 3 hours ago, paulinventome said:

    Unbound RGB is a floating-point version of the colours where they have no upper and lower limits (well, infinity or max(float), really). If a colour is described by a finite set of numbers, 0...255, then it is bound between those values, but you also need to be able to say: what Red is 255? That's where a colourspace comes in; each colourspace defines a different 'Redness'. So P3 has a more saturated Red than 709. There are many mathematical operations that need bounds, otherwise the maths fails - divide, for example - whereas addition can work on unbound values.

    I think I’m getting there! Thanks so much Paul! I’ll keep reading about color management and  scene referred workflows!
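    Paul's bounded-vs-unbounded distinction can be made concrete in a few lines of Python (a sketch with made-up pixel values, assuming float-per-channel, scene-referred data):

```python
# Bounded 8-bit RGB: every channel lives in 0..255, and what "255 red"
# means depends on the colourspace (P3's red is more saturated than 709's).
bounded_red = (255, 0, 0)

# Unbounded (floating-point, scene-referred) RGB: no upper limit.
# A specular highlight can sit far above "display white" (1.0).
highlight = (4.7, 3.9, 3.2)

# Addition is safe on unbounded values...
lifted = tuple(c + 0.1 for c in highlight)

# ...but operations that assume a usable range can break. Normalising by
# the maximum channel, for example, divides by zero on a pure-black pixel:
def normalise(px):
    m = max(px)
    return tuple(c / m for c in px) if m > 0 else px  # guard needed

print(normalise((2.0, 1.0, 0.0)))  # (1.0, 0.5, 0.0)
print(normalise((0.0, 0.0, 0.0)))  # guard avoids ZeroDivisionError
```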

  2. On 3/31/2020 at 5:08 AM, paulinventome said:

    Actually you don't really want to work in a huge colourspace because colour math goes wrong and some things are more difficult to do.  Extreme examples here:

    https://ninedegreesbelow.com/photography/unbounded-srgb-as-universal-working-space.html

    Thanks Paul! Thankful for you all here to help clear up questions! 

    One more question regarding your link: I can’t seem to find out what the heck “unbounded sRGB” is. Is it a Linux color space? Interpolated sRGB? And why would anyone interpret a RAW file and immediately convert to sRGB for editing?

    So confused 😭

  3. 23 hours ago, CaptainHook said:

    Blackmagic Design Film for colour space/gamut (Colour Science Gen 1) in a DNG workflow is 'sensor space'. So it's really how the sensor sees colour. This is not meant to be presentable on a monitor, you're meant to transform to a display ready space but to do so requires the proper transforms for that specific camera/sensor (so you can't use the CST plugin for non BM cameras for example) in which the transform varies depending on the white balance setting you choose.

    If you're happy to grade from sensor space, that's fine but I would also join in recommending Rec.709 or an ACES workflow in this case since neither approach will clip any data and will also do the heavy lifting of transforming the sensor response no matter the wb setting.

    What’s a “universal” color space to work in? Let’s say I wanted to deliver to major movie theaters, but also export for YouTube. Can’t we just work in the largest color space and, while exporting, convert the colors for each destination (i.e., YouTube vs. movie theater)?

    Also, is Adobe DNG Converter any use right now for making the DNG files from the FP smaller?

    What does Resolve do with Adobe’s Smart Previews?
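    For what it's worth, the gamut half of a colour-space transform is just a 3×3 matrix multiply on linear RGB (plus the right transfer functions on either side). A sketch using the published linear Rec.709-to-XYZ (D65) matrix:

```python
# Standard published matrix: linear Rec.709 RGB -> CIE XYZ (D65 white).
M_709_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def apply_matrix(m, rgb):
    """Multiply a 3x3 colour matrix by a linear RGB triple."""
    return tuple(sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3))

# Rec.709 reference white (1,1,1) lands on the D65 white point in XYZ:
print(apply_matrix(M_709_TO_XYZ, (1.0, 1.0, 1.0)))
# roughly (0.9505, 1.0000, 1.0890)
```

    Tools like Resolve's CST chain matrices like this (working space -> XYZ -> display space), which is why the matrices have to match the actual source, e.g. a specific sensor.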

  4. I totally agree @Bioskop.Inc

    If you're using spherical lenses, you're limited to 1920x1080. If you use anamorphic, and stretch the footage, you're doubling horizontal resolution. So if we decide to crop in post, it's hard to think of it as a loss in resolution when you just doubled your horizontal resolution. 

    Depending on the horizontal crop I choose from a 2x lens, it also produces the same horizontal FOV the other anamorphic options (1.5x, and 1.33x) would produce (assuming the taking lens doesn't change.) Is this true too?
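    The arithmetic behind this checks out. A sketch, assuming a 16:9 sensor and that horizontal FOV scales with the fraction of frame width you keep (exact for field width at a given distance, approximate for angle of view):

```python
SENSOR_AR = 16 / 9   # ~1.778:1

def desqueezed_ar(squeeze):
    """Aspect ratio after horizontally stretching anamorphic footage."""
    return SENSOR_AR * squeeze

def hfov_gain_after_crop(squeeze, target_ar):
    """Horizontal FOV multiplier (relative to the taking lens) after
    cropping the desqueezed frame down to target_ar. The crop keeps
    target_ar / full_ar of the width, and the same fraction of the FOV."""
    return squeeze * (target_ar / desqueezed_ar(squeeze))

print(round(desqueezed_ar(2.0), 2))   # 3.56 -- the "3.55:1" frame

# Crop the 2x frame to what a 1.5x or 1.33x lens would natively give:
print(hfov_gain_after_crop(2.0, desqueezed_ar(1.5)))    # 1.5
print(hfov_gain_after_crop(2.0, desqueezed_ar(1.33)))   # 1.33
```

    So cropping a 2x frame to the 1.5x (2.67:1) or 1.33x (2.37:1) aspect ratio reproduces both the composition and the horizontal FOV those lenses would give on the same taking lens.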

  5. So I had a thought today. WARNING: I tend to ramble, so bear with me :) 

    I have been thinking about buying an anamorphic lens lately. I shoot with a Sony A7s, and am therefore restricted to 16:9. I don't have a large budget, and have settled on buying a 2x anamorphic lens on eBay.

    One of my concerns is the ultra-wide aspect ratio 2x produces on a 16:9 sensor (3.55:1).

    If I stretch the anamorphic footage in post, rather than squeeze, I double my horizontal resolution.

    But that horribly-wide aspect ratio (3.55:1) it produces is just too wide for my taste.

    So, after doing a bit (a lot) of math, I realized that if I just crop the image horizontally in post to fit a 2.66:1 or 2.39:1 composition (correcting the distortion by horizontally stretching the footage), there is essentially NO need to buy a 1.33x or 1.5x lens. Depending on the horizontal crop I choose, a 2x lens essentially has all 3 types of anamorphic lenses (1.33x, 1.5x, and 2x) "built in."

    Depending on the horizontal crop I choose from a 2x lens, it also produces the same horizontal FOV the other anamorphic options (1.5x, and 1.33x) would produce. 

    So after all this rambling, my question is as follows...

    For those of you familiar with anamorphic shooting, is this theory essentially correct?

    Thank you for sticking with me :) I know this is a lot of info to take in. 

  6. Hello all!

    I was curious if anyone knew of a fancy trick to have a video file recorded at 60fps play back at 24fps without opening it in After Effects, Premiere, etc. and re-rendering the whole file. Could this be done by altering the video file's "code" in Mac's Terminal app, or something fancy like that? Please let me know if you guys are completely lost as to what I'm asking.

    Thanks!
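    This is exactly what a "conform" does: the container's frame timestamps are rescaled by 60/24 = 2.5 and the encoded frames themselves are left untouched, so no re-render is needed (command-line tools such as ffmpeg can rewrite timestamps while stream-copying, though any audio will no longer line up). The timestamp arithmetic, as a sketch:

```python
SHOT_FPS = 60
PLAY_FPS = 24

scale = SHOT_FPS / PLAY_FPS   # 2.5: every timestamp gets 2.5x longer

def retime(pts, scale):
    """Rescale presentation timestamps (in seconds); frames are untouched."""
    return [t * scale for t in pts]

# Three frames shot 1/60 s apart...
pts_60 = [0.0, 1 / 60, 2 / 60]
# ...end up 1/24 s apart, i.e. playing at 24 fps:
print(retime(pts_60, scale))

# A 10-second 60fps clip becomes a 25-second slow-motion clip:
print(10 * scale)   # 25.0
```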

    I'm having trouble understanding what After Effects' "Linearize Working Space" does and where it can benefit me. I shoot all my footage in S-Log2, and as soon as I check "Linearize Working Space" and "Color Management" in After Effects, I cannot seem to recover highlights and shadows. The image just looks bad.

    If I don't use this feature, can my video files look different from computer to computer when watched on YouTube (assuming I color manage my footage in sRGB)? Do the gamma values change? Once you render to H.264, isn't the footage linearized during the rendering process anyway?

    Sorry for the lack of detail here. I have read posts about this here http://prolost.com/blog/2009/9/30/passing-the-linear-torch.html and here http://prolost.com/blog/2006/6/4/know-when-to-log-em-know-when-to-lin-em.html

    Even after reading these posts, I still cannot determine if I should be using "Linearize Working Space" when editing S-Log2 encoded footage.
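    The reason "Linearize Working Space" changes the look: After Effects then does its math on linear light instead of gamma-encoded values, and even something as simple as averaging two pixels gives a different answer. A sketch using a plain 2.2 power curve for brevity (the real sRGB curve has a small linear toe near black):

```python
GAMMA = 2.2

def to_linear(v):
    """Gamma-encoded [0,1] value -> linear light (simplified 2.2 power)."""
    return v ** GAMMA

def to_gamma(v):
    """Linear light -> gamma-encoded value."""
    return v ** (1 / GAMMA)

a, b = 0.9, 0.1   # two gamma-encoded pixel values

# Blend directly in gamma space (unmanaged):
gamma_blend = (a + b) / 2
# Blend in linear light, then re-encode (linearized working space):
linear_blend = to_gamma((to_linear(a) + to_linear(b)) / 2)

print(round(gamma_blend, 3))    # 0.5
print(round(linear_blend, 3))   # ~0.66 -- noticeably brighter
```

    Neither result is "wrong"; they are just different math, which is why flipping the checkbox shifts your highlights and shadows.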

    @EyeSoul thanks for getting us back on track. When you say it lacked the "pop" of your Nikon, are you referring to the color and contrast rendition? Or are you referring to the character of the barrel distortion?

    Wow that focal reducer looks great! I would never guess you shot those on a crop!

    That's a cute pair of kids btw :)

  9. http://www.bhphotovideo.com/c/product/1174425-REG

    This is so tempting. I shoot with a Sony a7s and legacy film lenses are my "go-to." Most of my film lenses are a bit soft compared to modern lenses. With that in mind - I've been looking for a fast 35mm lens to adapt to my Sony, and I'm thinking the image quality from this would closely match a legacy 35mm lens. 

    This lens is also cheaper than most legacy lenses.

    What do you guys think? Would it be a waste of money to buy it?

    Thanks iaremrsir! I understand S-Log2 and how the curve squeezes the maximum dynamic range into the image. I'm wondering, though, how black level, black gamma, and knee affect the profile as a whole? (And I guess I am looking for a simple definition of what they are.)

    Please correct me when I am wrong in the following:

    Black level: Essentially sets the black point at 0 IRE?

    Black gamma: confused. Is this a gamma inside a gamma?!?! (gamma inception)

    knee: shifts where the "hump" in the log curve sits? Does adjusting the knee raise or lower this "hump?"

     

    I realize leaving these settings at default would be easy, but I have this horrible obsession about fully understanding my camera.

    Can I alter the settings (black gamma, level, and knee) to squeeze even more dynamic range into the image?

     

    Sorry I am further confusing everyone! Thanks again!
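    On the knee specifically: it's a highlight compressor. Below the knee point the transfer curve is untouched; above it, the slope is reduced so values that would otherwise clip get squeezed in under the ceiling. A toy sketch (the knee point and slope values here are made up for illustration, not Sony's):

```python
def apply_knee(v, knee_point=0.8, knee_slope=0.25):
    """Toy knee: identity below knee_point, compressed slope above it.
    knee_point and knee_slope are illustrative values, not Sony's."""
    if v <= knee_point:
        return v
    return knee_point + (v - knee_point) * knee_slope

print(apply_knee(0.5))   # 0.5 -- midtones untouched
print(apply_knee(0.8))   # 0.8 -- the knee point itself
print(apply_knee(1.6))   # 0.8 + 0.8*0.25 = 1.0 -- just reaches white
```

    Lowering the knee point or flattening the slope trades highlight contrast for more retained highlight range, which is the sense in which these settings can "squeeze more in."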
