
sunyata
Members · 391 posts

Reputation Activity

  1. Like
    sunyata reacted to tupp in Linux everything! Who's Interested?   
    Not really.
     
    Yes. This is the typical FUD scenario: an early adopter of Red Hat who then lost interest. I've never used Red Hat.
     
     
    That's fine.  I would rather have open source and free software.
     
     
    Disagree wholeheartedly.  With open source and free software, I can do almost anything that can be done with proprietary software.  Furthermore, open source software often can do more than proprietary software, as a lot of the innovation occurs in open-source code.
     
    I would rather use software from a coder who is enthusiastic than from one who is merely drawing a paycheck.
  2. Like
    sunyata got a reaction from Nick Hughes in Blue Color Clipping   
    Yeah, I tried to help Kristopher with this issue, but the problem was basically unrecoverable. I did make him a LUT, though, which just turned the blue spot down to look less saturated while trying not to affect skin tones. Others have suggested avoiding the problem by shooting with white balance above 5000K and turning PP off. Assuming you were also using S-Gamut, my best guess is that your problem lies there: the gamut is very wide in the blue corner, so when it gets scaled into sRGB with a lot of blue LED light in the signal, things fall apart (see image below). If you look at the problem on a scope, it's not actually blue that's clipping; it's red and green that drop to zero past a certain threshold, while blue still has data. That makes sense when you consider how wide S-Gamut is in blue. I also suspect it's the reason people are having other color problems shooting log on the A7S. Another, more analog solution would be to use a blue-blocker (orange) filter to avoid overrunning the threshold, then color correct for the orange in post.
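A minimal sketch of that mechanism, assuming a made-up wide-gamut-to-sRGB matrix. The numbers below are NOT the real S-Gamut conversion (which goes through XYZ); they just have the same shape, with negative off-diagonal terms that pull R and G down hard for strongly blue inputs:

```python
# Illustrative wide-gamut -> sRGB matrix (invented for this sketch,
# not the real S-Gamut conversion): negative off-diagonal terms drag
# R and G below zero when the input is dominated by blue.
WIDE_TO_SRGB = [
    [ 1.20, -0.10, -0.10],
    [-0.05,  1.20, -0.15],
    [-0.02, -0.08,  1.10],
]

def wide_to_srgb(rgb):
    # plain 3x3 matrix multiply
    return [sum(WIDE_TO_SRGB[i][j] * rgb[j] for j in range(3)) for i in range(3)]

def clip01(rgb):
    # what the pipeline does with out-of-range values
    return [min(1.0, max(0.0, c)) for c in rgb]

blue_led = [0.05, 0.10, 0.85]   # a strong blue LED as wide-gamut RGB
out = wide_to_srgb(blue_led)
print(out)          # R and G come out negative
print(clip01(out))  # ...and clamp to 0.0, while blue keeps its data
```

That matches what the scope shows: blue survives, red and green hit the floor.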
     

     
     
  4. Like
    sunyata reacted to Zak Forsman in Great Deal on X-Rite ColorChecker Passport   
    I've been keeping an eye out for a deal on one of these. I don't know how long it will last, but the coupon code PSWBH15 takes $40 off the $99 price tag at B&H Photo. DaVinci Resolve users will find it particularly useful. I'm hoping that with enough testing and tweaking, I can come up with a custom LUT that will neutralize the GH4's unique color issues.

    http://www.bhphotovideo.com/c/product/651253-REG/X_Rite_MSCCPP_ColorChecker_Passport.html

    How to use it in DaVinci Resolve to balance colors: the video below gives a general idea, despite the fact that the guy screws things up a bit (like setting his ProRes footage to sRGB instead of Rec. 709). And I think he's comparing an unbalanced tungsten shot to a balanced fluorescent shot; if he balanced both, I think they'd be close in the end.
    https://www.youtube.com/watch?v=onom8tpiof8 
     
  5. Like
    sunyata got a reaction from IronFilm in Linux everything! Who's Interested?   
    And Lightworks is available for Ubuntu or Fedora/RHEL/CentOS with a $25-per-month pro license option and no obligation; just quit when the job is done. I just used it for a show (on CentOS 6) where I had to go through a season of episodes, primarily to make batch lists for pulls. It lets you create rolling cue points with name and timecode, then use those cues to create subclips or export a spreadsheet, which was exactly what I needed for editorial. I was also able to create custom clip overlays as templates with my reference name, source timecode in h:m:s, subclip runtime in frames, source reel name, etc. Very intuitive interface.
  6. Like
    sunyata got a reaction from IronFilm in Linux everything! Who's Interested?   
    Hey Jonesy, I'm primarily a Linux user. Since the death of IRIX, most post-production and VFX software that used to run on SGI workstations has long since been ported over.
    Commercial post software includes:
    Maya, Houdini, Flame, Lustre, Nuke, Shake, the PF* apps, Mocha Pro, Mari, Mudbox, Arnold, RenderMan, Lightworks
    Useful free software:
    GIMP, Inkscape, FFmpeg, MPlayer, MEncoder, dvdauthor (for screeners), VLC, OpenColorIO, and tons of VFX utility apps; see a list here: http://opensource.mikrosimage.eu/index.html
    Free communication and f-off related:
    LibreOffice, Steam, Pidgin, Thunderbird, Chrome, Banshee, Rhythmbox, Spotify, Renoise, Bacula (for project/footage backups)
    And of course more programming tools than you could list; I prefer GNU Emacs.
    I still need a dual-boot option, though, for certain things: taxes, incoming Photoshop files, certain games, commercial audio like Ableton and Max... and listening to my old DRM files (thanks, Apple).
  7. Like
    sunyata got a reaction from Xavier Plagaro Mussard in Who will break the interenal 10 bit hybrid barrier?   
    I get the feeling you want to see some sexy slo-mo footage as a test, but that really wouldn't tell you much, empirically, about how bit depth, chroma subsampling and compression affect grading. This test was done with a linear radial gradient that simply changes color; the color part is there to test certain colors that don't do well with compression. It also tests resizing 4K to HD converted to float, to see what that gets you, in addition to various color spaces and codecs. It goes fast, so you need to pause frequently and watch fullscreen. (The preview window lets you see more closely what the artifacts look like.)
     
     
  12. Like
    sunyata reacted to jax_rox in De-mystifing Log and other things   
    Looks, picture profiles, LUTs and Log.
    For those who were recently talking about understanding log and how it works, here's a pretty good article from Newsshooter that breaks it down:
    http://www.newsshooter.com/2015/07/27/looks-picture-profiles-luts-and-log-why-when-and-how-you-should-use-them/
     
  13. Like
    sunyata got a reaction from Nick Hughes in Why recording LOG with an 8bit codec is most probably going to get you in trouble.   
    Lots of different issues in this thread. 1) Does log in 8-bit ruin colors? 2) Does much power come with log, and is it magic? 3) Are wide gamuts the real problem? 4) Does Kodak choosing 10-bit for Cineon mean that 10-bit is the least you can get away with before you see artifacts? 5) Is it really all about the camera as a package: the sensor, codec, even the lens? 6) Should we be more concerned with chroma subsampling? 7) Is this debate incredibly boring and useless?
    4) Trick question: Cineon was designed to re-print a film negative, not based on digital-to-digital tests, and it was also R'G'B'. So unfortunately I don't think it can be used as a fair comparison, even though it is how this workflow started.
    7) Not at all; I just wasted several minutes.
    I think log has the same advantages at 8-bit that it does at 10-bit or any other depth, though I disagree that 8-bit is indistinguishable from 10-bit unless you're doing keys. When you combine 4:2:0 subsampling with 8-bit you get a negative reinforcing effect (the blocks get much larger in dark areas because they have fewer code values to use), which shows up when doing lifts in particular. But all that is slightly separate from the initial log color question, of which log is the easy part to untangle; figuring out all the other stuff is the real challenge. So in that sense I think the "other" things that affect color, such as everything in questions 3 and 5, are really the problem.
    The thread started with 8-bit log and color, but subsampling and bit depth came up, so I thought I'd re-post this old video I did. It's exaggerated but hopefully useful. Hit spacebar (pause) when the description changes.
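To put rough numbers on the "fewer code values in dark areas" point, here's a small sketch that counts how many codes land in the darkest stop of a toy pure-log curve spanning 10 stops (an assumption for illustration, not any camera's actual transfer function), at 8-bit versus 10-bit:

```python
import math

def codes_in_range(bits, lo, hi, steps=100_000):
    """Count distinct quantized codes produced by scene-linear values in
    [lo, hi), after a toy 10-stop log encode that maps 1.0 to the top code."""
    max_code = 2**bits - 1
    stops = 10
    seen = set()
    for i in range(steps):
        x = lo + (hi - lo) * i / steps
        enc = (math.log2(x) + stops) / stops if x > 0 else 0.0
        seen.add(round(max(0.0, enc) * max_code))
    return len(seen)

# darkest stop of the curve's range: scene-linear 1/1024 .. 1/512
print(codes_in_range(8, 1 / 1024, 1 / 512))   # 26 codes at 8-bit
print(codes_in_range(10, 1 / 1024, 1 / 512))  # 103 codes at 10-bit
```

Roughly 4x the codes in the same shadow stop at 10-bit; starve the 8-bit case further with 4:2:0 macroblocks and a lift exposes it quickly.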
     
     
  16. Like
    sunyata got a reaction from Daniel Acuña in ARRI Alexa 65 in new trailer!   
    Looks strikingly like his photos: https://instagram.com/chivexp/
  19. Like
    sunyata reacted to cpc in Learning time: What's a Log Gamma? S-Log, C-Log, V-log, Log-C...   
    sunyata's analogy is quite good. One small correction: prints don't represent scene light linearly, not at all. Prints are heavily gamma-corrected for projection in dark environments, much more so than material meant to be shown on emitting displays. Print-through film curves (that is, scene-to-projection) typically have a gamma in the range of 2.5-2.8.
    First, it's important to see where the "log" comes from. Humans perceive exponential light changes as linear changes: a logarithmic relationship. Hence, log. Log curves mimic this: exponential scene-light changes are recorded as linear changes. In other words, each increase of exposure by a stop (a doubling of the light) takes the same number of code values to encode, not double the values of the previous stop (as with linear encodings).
    There are a couple of technical benefits:
    1) Much more effective and economical utilization of the available coding space. This is why log curves encode wide dynamic ranges effectively at a smaller bit depth. Cineon was developed to capture the huge DR of negative film in only 10 bits.
    2) (Related to 1) Increased tonal precision in the dark parts of the picture, compared to a physically correct linear encoding using the same coding space.
    Since sensors work linearly, purely logarithmic curves would waste some coding space in the blacks, because there is not enough density there. That's why practically all log curves are pseudo-log, with some compression at the black end. Arri's Log-C is probably the closest to pure log; Canon's C-Log is the furthest away. The other reason is, as mentioned, mimicking Cineon. This is also, I believe, one of the main reasons all log curves have a raised pure-black level: it mimics the base density (D-min) of film, as encoded in the Cineon curve to accommodate scanning film densities.
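The "same number of code values per stop" idea can be sketched numerically. This assumes a toy pure-log curve over 10 stops into 10-bit codes (not any camera's actual curve): each doubling of light gets roughly the same number of codes, while a linear encoding spends half the entire code space on the top stop alone:

```python
import math

MAX_CODE = 1023  # 10-bit
STOPS = 10       # toy curve spans 10 stops

def linear_code(x):
    # scene-linear 0..1 mapped straight into a 10-bit code
    return round(x * MAX_CODE)

def log_code(x):
    # toy pure-log encode: 1.0 -> top code, 10 stops down -> 0
    if x <= 0:
        return 0
    enc = max(0.0, (math.log2(x) + STOPS) / STOPS)
    return round(enc * MAX_CODE)

for top, bottom in [(1.0, 0.5), (0.5, 0.25), (0.25, 0.125)]:
    lin = linear_code(top) - linear_code(bottom)
    lg = log_code(top) - log_code(bottom)
    print(f"stop {bottom:>6} .. {top:<5}: linear {lin:>3} codes, log {lg:>3} codes")
```

The log curve hands each stop about 102 codes, where linear gives the top three stops 511, 256 and 128: the "economical utilization" described above.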
  20. Like
    sunyata reacted to maxotics in Learning time: What's a Log Gamma? S-Log, C-Log, V-log, Log-C...   
    Here's my crack at it. There's a little man in your camera. He sees everything through the sensor: God-like vision. However, he is only given 255 paints for each color: red, green and blue. Each color ranges from very dark to very light. He uses this combination of 255 x 255 x 255 reds, greens and blues to create a full color image for you. The problem for our little camera-man is that he often sees colors, say a blue, that sit between two of his blue paints. It might be a 243.5: a little brighter than 243 and a little darker than 244. Indeed, he believes he really needs 1,000 paints per color to render a good image.
    But, and this is the first KEY thing, HE ONLY HAS 255 PAINTS TO WORK WITH IN EACH COLOR.
    You go to the beach with your camera and you take an image of your wife. The man in your camera says, it's a shame I don't have more lighter colors, because there's a fantastic twinkle in your wife's eyes and nice colors in those clouds. I have all these dark colors and I don't need any of them.
    So what if you found a way to take his palette of 255 colors, throw out half of the dark colors, and give him double the amount of light colors? So you have, say, 1, 3, 5, 7 at the low end and then 225, 225.5, 226, 226.5 at the high end? What if you did that, but spread it out evenly (curved it); that is, gave him only a few paints for dark colors but more and more paints as you got lighter, KEEPING IN MIND YOU HAVE A MAXIMUM OF 255?
    You DO NOT END UP WITH MORE RECORDED DYNAMIC RANGE. Rather, you have REDUCED dynamic range where you AESTHETICALLY don't care about it, and INCREASED dynamic range where you do. But it is a judgment call. The total dynamic range is still 255 colors.
    I got into a lot of trouble with these logs on the GH4 because I don't have enough experience to know when it's better to shift the recorded dynamic range. I'd rather have RAW, because you can apply curves AFTER the fact. If you shoot S-Log in an evenly lit scene you'll end up with muddy darks, because you didn't give them the same paints as you gave the lights.
    Hope this helps!
     
  21. Like
    sunyata got a reaction from maxotics in Learning time: What's a Log Gamma? S-Log, C-Log, V-log, Log-C...   
    Ebrahim, I'm not sure if this will help, but one non-mathematical way to think about encoding log gamma is to imagine spraying paint against a wall. If the wall is flat, the density of the spray should be even when it dries. If you curve the wall (log), with a knee and shoulder etc., and do the same experiment, then when the wall is straightened back out, the distribution of the paint will have areas that are dense and other areas that are thin. It's the same total amount of information, just re-distributed to encode areas where more detail is needed: for example, setting middle gray at 18% (what we see as 50%) and moving it toward the middle to allow more code values underneath. This was essentially Kodak's scheme for preserving film print density in a low-bit-depth workflow, and digital cameras today use the same technique. But it was not meant to be the final look of the gamma; the print that was eventually created went back to linear. All the different Log-X types are just variants for different proprietary workflows, borrowing from Cineon.
  22. Like
    sunyata got a reaction from jcs in Audio Sample Rate   
    Ever since the whole Neil Young thing I've been wondering about this too, and I was first inclined to agree with Neil, who I'll always agree with on Fender amp selections. But in this respect it seems he might be wrong: 16-bit/44.1kHz is about all the human ear can hear; 24-bit/48kHz is necessary for recording and mastering headroom, but not for final delivery. The container (WAV vs AIFF etc.) is not as important as the codec, i.e. pcm_s16le etc. In a terminal (if you have FFmpeg installed) you can see the audio codec options by typing "ffmpeg -codecs | grep DEA", which will give you a list of supported encoding and decoding audio codecs.
    The best reference I've found on this topic, which I think was also a response to "Pono": http://xiph.org/~xiphmont/demo/neil-young.html
    Specifically with respect to the bit rate issue from the article link above:
    When does 24 bit matter?
    Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.
    16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.
    An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
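The bit-depth numbers above can be sanity-checked with the standard full-scale-sine approximation for quantization SNR, roughly 6.02·N + 1.76 dB (the formula, not any particular gear, is the assumption here):

```python
def quant_snr_db(bits):
    # theoretical quantization SNR for an ideal full-scale sine:
    # about 6.02 dB per bit, plus a 1.76 dB constant
    return 6.02 * bits + 1.76

print(f"16-bit: {quant_snr_db(16):.2f} dB")  # 98.08 dB
print(f"24-bit: {quant_snr_db(24):.2f} dB")  # 146.24 dB
```

About 98 dB at 16-bit is the "room to spare" over the span of human hearing; the extra ~48 dB at 24-bit is the headroom and noise floor margin the article reserves for recording and mixing.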
     
  23. Like
    sunyata reacted to maxotics in Educate me please: What is down-scaling?   
    Hi Ebrahim. I agree with mercer, but just to have fun, I'll argue that you're not over-thinking it enough.
    Is an image a collection of perfect data points? If each pixel recorded the color and saturation perfectly, would you end up with a perfect image?
    I'll argue that the answer is no, because no matter how well a pixel records data, there is space between the pixels that does NOT record data. A digital image is really a black canvas populated with color dots that never touch each other. What this means is that if you image a field of tall grass, there will be parts of the small grass blades whose light, when it makes its way through the optics, falls on dead space on the sensor. This is true, AFAIK, of every sensor made, and it doesn't matter how high the resolution, 1K or 4K. Obviously, the higher the resolution, the less noticeable the problem.
    If a blade of grass "breaks apart", so to speak, between the pixels of your 4K image, downscaling cannot create data that isn't there. Just as your image processing system (camera or post) must deal with it (aliasing) in the original image, the downscaler must deal with it when combining the larger set of pixels into a smaller one.
    The chief difference between all these algorithms, AFAIK, is how many pixels around the hypothetical center pixel the software looks at to determine the best value (though much of it is subjective; some will like one algo, others another). Take the blade of grass. If the algo only looked at the pixels above, it would never see the disconnect with the pixel to the left. An algo that looks at 16 pixels to calculate 1 pixel can often do better than one that looks at 4, because it can "see" more of the image and make a good decision about what to create.
    The more pixels the algo looks at, the better, in my experience. Though, like mercer says, this isn't a problem that yields significant improvement in most footage.
    So my answer to your question is that running your footage through the most sophisticated algorithm before you edit in your NLE should deliver the best results. Most likely, the NLE would be sluggish running it in real time (which is why you would process it beforehand).
    For example, AMaZE is a great debayering algorithm, but I doubt most PCs can use it to render RAW in real time.
    Hope this makes sense!
     
     
  24. Like
    sunyata got a reaction from maxotics in Educate me please: What is down-scaling?   
    Ebrahim, I'll bite here and go a little further on Maxotics' point. No matter what algorithm you use to downscale, it's impossible to preserve the data. A better descriptor would be transformation rather than preservation: from one facsimile of reality to another, smaller one. By using different algorithms to downsample, you aren't getting better accuracy, you're getting different results, which are more like creative choices. Some algos, like Lanczos, look sharper when resizing images because they keep areas of transition in contrast (edges) sharper than, say, cubic. Edges are how we see shapes, but that's not a more accurate method; it's just sharper-looking. There are even sharper algos than Lanczos if that's what you're after, but the only way to really preserve data is to keep the source 4K files. Beyond that, choices in reformatting are subjective. Recently, I needed to stick with cubic when reformatting 4K because I was matching HD Alexa background plates; it's also fast. Nuke's help page on its available reformatting algorithms is pretty useful; check it out.
     
    http://help.thefoundry.co.uk/nuke/9.0/Default.html#comp_environment/transforming_elements/filtering_algorithm_2d.html?Highlight=Lanczos
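For the curious, the Lanczos kernel itself is just a windowed sinc, and a minimal sketch (a = 3 lobes, pure Python) shows where the extra sharpness comes from: the kernel has negative lobes, so neighboring pixels get subtracted, which steepens edges (and can ring):

```python
import math

def lanczos(x, a=3):
    # windowed sinc: a * sinc(pi*x) * sinc(pi*x/a), nonzero for |x| < a
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# weights for resampling halfway between two source pixels
# (6-tap support when a = 3):
ws = [lanczos(-2.5 + i) for i in range(6)]
norm = [w / sum(ws) for w in ws]
print([round(w, 4) for w in norm])  # note the negative taps
```

A cubic kernel is all-positive (or only mildly negative), which is why it looks softer but rings less; that's the trade-off Nuke's filtering page walks through.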
     
     
  25. Like
    sunyata got a reaction from jcs in Driving Mr. Tarantino:   