Everything posted by sunyata

  1. This is just an idea, I don't use Premiere so... It might be possible to desaturate to grayscale in rgb in 3 different gamma ranges, low, med and high.. then use those clips or however Premiere does it, as inputs for a HSL adjustment, adding one of these luma clips to "S". Might look strange though. You could also try applying saturation to rgb using a luma range clip as input to a saturation node, if such a thing exists. First option would require being able to convert color spaces in a single workflow.
  2. There is a custom in-camera profile recommendation to be used with this custom gamma post adjustment that has a color grade, all baked into a single 3D LUT, which tries to bring the gamma and color in range for 3rd party *log LUTs - with a little hyperbole thrown in about "gives ... log capability" and "all the other benefits of log". You're correct with respect to the impossibility of increasing detail. A LUT that is designed to adjust only gamma can also be described as 1D, it's sometimes referred to (when used in conjunction with a 3D lut) as a pre-LUT. Separating gamma from color adjustments also allows you the flexibility to edit conversions, offsets, illuminants or grades separately from intensity. 1D's are invertible because they have no crosstalk, although that's more of a workflow pipeline issue, not a big deal for quick looks. * log is generic but obviously there are several different popular versions, some very different. sRGB and rec709 by comparison do have standardized gamma.
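To make the 1D-vs-3D distinction concrete, here's a minimal sketch (pure Python, made-up values, not any real LUT format) of a gamma-only 1D LUT: one curve applied to each channel independently, which is exactly why it has no crosstalk and stays invertible:

```python
# Illustrative sketch: a 1D LUT is just one curve applied per channel,
# so it can adjust gamma but can't mix channels ("no crosstalk").
# LUT size and gamma value here are arbitrary, for demonstration only.

def make_gamma_lut(gamma, size=256):
    """Build a 1D LUT encoding a simple power-law gamma curve."""
    return [(i / (size - 1)) ** gamma for i in range(size)]

def apply_1d_lut(rgb, lut):
    """Apply the same curve to R, G and B independently (values in 0..1)."""
    size = len(lut)
    out = []
    for c in rgb:
        idx = min(int(c * (size - 1)), size - 2)
        frac = c * (size - 1) - idx
        out.append(lut[idx] * (1 - frac) + lut[idx + 1] * frac)  # linear interp
    return out

# Because each output channel depends only on its own input, a monotonic
# 1D LUT can be inverted by swapping the curve's input and output.
lut = make_gamma_lut(1 / 2.2)
print(apply_1d_lut([0.5, 0.5, 0.5], lut))
```

A 3D LUT, by contrast, indexes a cube with all three channels at once, so any channel can influence any other; that crosstalk is what makes it able to carry a color grade, and also what makes it generally non-invertible.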
  3. We're mixing different topics, again. If you're comparing free software to commercial software for graphic design, then what you get for free with GNU/Linux, or any other OS, is clearly not going to be on par with Photoshop or Illustrator in terms of viability in a commercial setting, but that's a silly way to judge all of free vs commercial software and GNU/Linux in general. In science and engineering, open source is often better than commercial software and is constantly winning new ground among students and teachers who want to be free to do research. For example, in statistical computing, data mining and genetic research, Julia and its growing list of free packages are gaining huge ground over MATLAB and R; benchmarks are comparable to C even though it's dynamic, and it's released under the MIT/GPL v2 license. In web development, obviously, GNU/Linux and open source are ubiquitous; the point has already been made about Oracle above, which has just been found to have a fresh SQL injection exploit. As for non-free commercial software for video editing, Lightworks is now at parity across all platforms including Ubuntu and Fedora. Take a look at a professional editor that has been using it for many years now and see if there is a comparison to Premiere (the comparison would likely be to Avid). I didn't mention this either because it's prohibitively expensive, but Flame and Lustre run on RHEL or CentOS exclusively. So I think we're comparing just the free graphic design stuff and consumer-level NLEs above, but that's not the whole picture.
  4. I think jcs is speaking from the perspective of writing commercial desktop software and trying to sell copies, which is not the only way to make a living as a programmer. You could use GNU/Linux and work for Google for example and retire at 30 with stock options? Okay, maybe 40.
  5. not to be too cryptic but i couldn't care less about 4k digital, other than when a major client tells everyone that all shows going forward are going to be DCI P3 and UHD delivery.. which just happened unfortunately.
  6. ha.. you referenced the a7s video and I figured you were dealing with the same camera. What is the gamut and mode you are shooting in?
  7. always use your fastest disk for cache since most software now is using disk cache for playback, other than that it's most likely your video card for RT effects.
  8. Yea, I tried to help Kristopher with this issue, but the problem was basically unrecoverable. I did make him a LUT though, which just turned the blue spot down to look less saturated while trying not to affect skin tones. Others have suggested avoiding the problem by shooting white balance higher than 5000K and turning PP off. Assuming you were also using SGamut, my best guess is that your problem lies there, specifically that it's too wide in the blue corner when scaling into sRGB, and with a lot of blue LED light in your signal -- see image below -- If you look at the problem with a scope, it's not actually clipping blue; it's red and green that are dropping to zero past a certain threshold, while blue still has data. That makes sense if you consider how wide SGamut is in B. I also suspect it's the reason people are having other color problems shooting log with the A7s. Another more analog solution could be to use a blue blocker (orange) filter to avoid triggering or overrunning the threshold and then color correcting for the orange in post.
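For illustration of the mechanism only (the matrix below is invented, not the real S-Gamut-to-sRGB matrix), here's a sketch of how a saturated wide-gamut blue can push red and green negative in the destination space, which the pipeline then clamps to zero while blue keeps data:

```python
# Sketch of the mechanism, not real S-Gamut math: converting a saturated
# blue from a wide gamut into a smaller one can push R and G negative,
# which the pipeline then clamps to zero -- blue keeps its data.
# This matrix is invented for illustration (rows sum to 1 to keep white).

WIDE_TO_SMALL = [
    [ 1.20, -0.10, -0.10],
    [-0.10,  1.30, -0.20],
    [-0.02, -0.08,  1.10],
]

def convert(rgb, m):
    return [sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3)]

def clamp01(rgb):
    return [min(max(v, 0.0), 1.0) for v in rgb]

blue_led = [0.05, 0.10, 0.90]           # lots of blue, little R/G
raw = convert(blue_led, WIDE_TO_SMALL)
print(raw)           # R and G come out negative
print(clamp01(raw))  # after clamping, R and G sit at exactly 0 -- the "clip"
```

This matches what the scope shows: it isn't blue hitting a ceiling, it's red and green falling through the floor once the source chroma passes a threshold.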
  9. And Lightworks is available for Ubuntu or Fedora/RHEL/CentOS as a $25 per month pro license option with no obligation, just quit when the job is done. I just used it for a show (using CentOS 6) where I had to go through a season of episodes, primarily to make batch lists for pulls. It allows you to create rolling cue points with name and timecode, then you can use those cues to create subclips or export a spreadsheet, which was exactly what I needed for editorial. I was also able to create custom clip overlays as templates with my reference name, source timecode in h:m:s, subclip runtime in frames, source reel name etc. Very intuitive interface.
  10. i was just referring to chromaticity. https://en.wikipedia.org/wiki/8-bit_color, or chroma separate from luma per channel. from above link on 8-bit color "The maximum number of colors that can be displayed at any one time is 256." i checked after effects which i don't use that much anymore, sorry, they do refer to 8-bit as millions of colors (for some reason i thought they called it thousands), my bad, big brain fart (referring to me).
  11. to calculate number of possible colors per bit depth, the formula is 2^n where n=bit depth. 1bit would be 2 colors, 4bit would be 16 and so on. photoshop will call 8-bit "thousands of colors" but that's like marketing-speak, they're including all intensity levels, so 256*256, not only chromaticity values. 8-bit will always be 256 colors and 10-bit will be 1024.
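The 2^n formula as a one-liner, plus the "millions of colors" count that includes all three channels together:

```python
# Codes per channel at a given bit depth: 2**n values.
def codes(bit_depth):
    return 2 ** bit_depth

print(codes(1))   # 2
print(codes(4))   # 16
print(codes(8))   # 256
print(codes(10))  # 1024

# Marketing terms like "millions of colors" count all three
# channels combined: 256 * 256 * 256 for 8-bit RGB.
print(codes(8) ** 3)  # 16777216
```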
  12. yes, the thing with banding is that it does help to have noise or grain to promote dithering, but this was done with 32bit float radial gradients, so oddly, the 8bit RGB uncompressed holds up nicely because of its inherent dithering.
  13. I get the feeling you want to see some sexy slo-mo footage as a test, but that really wouldn't tell you much empirically about bit depth, chroma sub-sampling and compression as it affects grading. this test was done with a linear radial gradient that simply changes color.. the color part to test certain colors that don't do so well with compression. it also tests resizing 4k to HD converted to float, to see what that gets you, in addition to various color spaces and codecs. it goes fast so you need to pause frequently and watch fullscreen. (the preview window lets you see more closely what the artifacts look like)
  14. lots of different issues in this thread.
      1) does log in 8bit ruin colors?
      2) does much power come with log and is it magic?
      3) are wide gamuts the real problem?
      4) does Kodak choosing 10bit for Cineon mean that 10bit is the least you can get away with before you see artifacts?
      5) is it really all about the camera as a package and the sensor, codec, even the lens for example?
      6) should we be more concerned with chroma sub-sampling?
      7) is this debate incredibly boring and useless?
      4) trick question, Cineon was designed to re-print a film negative, not based on digital-to-digital tests, and it was also R',G',B'. so unfortunately i don't think it can be used as a fair comparison, even though it was how this workflow started.
      7) not at all, i just wasted several minutes.
      i think log has the same advantages with 8bit that it does with 10bit or any other depth, even though i disagree that 8bit is not distinguishable from 10bit unless doing keys. when you combine 4:2:0 compression with 8bit you get a negative reinforcing effect (the blocks get much larger in dark areas because they have fewer codes to use), which shows up when doing lifts in particular. but all that is slightly separate from the initial log color question, of which log is the easy part to untangle; figuring out all the other stuff is really the challenge. so in that sense i think the "other" things that affect color, such as everything in question 3 and 5, are really the problem. the thread started with 8bit log and color, but sub-sampling and bit depth came up so i thought i'd re-post this old video i did, it's exaggerated but hopefully useful. hit spacebar (pause) when the description changes.
  15. If you look at his photography you can see that he's able to achieve a very similar aesthetic with different cameras (according to nofilmschool and some digging he's mentioned using an iPhone 5 (LOL), Ricoh GR, and a Nikon D800, shooting with a 28mm or 24mm lens). https://instagram.com/chivexp/
  16. Looks strikingly like his photos.. https://instagram.com/chivexp/
  17. Hey Jonesy, I'm a Linux user primarily. Since the death of IRIX, most post production and vfx software that used to run on SGI workstations has long since been ported over.
      commercial post software includes: Maya, Houdini, Flame, Lustre, Nuke, Shake, PF* apps, MochaPro, Mari, Mudbox, Arnold, Renderman, Lightworks
      useful free software: Gimp, Inkscape, ffmpeg, mplayer, mencoder, dvdauthor (for screeners), VLC, OpenColorIO, and tons of vfx utility apps, see a list here: http://opensource.mikrosimage.eu/index.html
      free communication and f-off related: LibreOffice, Steam, Pidgin, Thunderbird, Chrome, Banshee, Rhythmbox, Spotify, Renoise, Bacula (for project/footage backups)
      and of course more programming tools than you could list; I prefer GNU Emacs. Still need a dual boot option though for certain things like taxes, incoming Photoshop files, certain games, commercial audio like Ableton and Max.. listening to my old DRM files (thanks Apple).
  18. Thanks and yes, I wasn't referring to the print gamma profile, I meant film (the substance), sorry if that wasn't clear. In particular I think it's good to get out there that Log is part of a DI workflow (assuming low bit depth) and not the look of film or the film stocks that people still use as a reference. Also agree with Maxotics' point; it's good to point out that you aren't encoding more total data, just shuffling around what you want to preserve.
  19. Ebrahim. I'm not sure if this will help, but one non-mathematical way to think about encoding log gamma is to imagine spraying paint against a wall. If the wall is flat, the density of the spray should be even when it dries. If you curve the wall (log) with a knee and shoulder etc. and do the same experiment, when the wall is straightened out, the distribution of the paint will have areas that are dense and other areas that are thin. It's the same total amount of info, just re-distributed to encode areas where more detail is needed. For example, setting middle gray at 18% (what we see as 50%) and moving it toward the middle to allow more code values underneath. This was essentially Kodak's scheme to preserve film print density in a low bit depth workflow, and digital cameras today are using the same technique, but it was not meant to be the final look of the gamma; the print that was eventually created went back to linear. All the different Log-X types are just variants for different proprietary workflows, borrowing from Cineon.
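A rough sketch of the redistribution idea, using a generic illustrative log curve (not Cineon or any camera's actual transfer function): count how many 8-bit codes land below middle gray before and after the log encode:

```python
import math

# Illustrative only: a generic log-style encode, not any real camera curve.
# It maps linear 0..1 into 0..1 while lifting the shadows, so more of the
# 256 available 8-bit codes end up describing dark values.

C = 100.0  # curve strength, arbitrary for the demo

def log_encode(x):
    return math.log10(1 + C * x) / math.log10(1 + C)

def codes_below(threshold, encode):
    """How many 8-bit codes represent linear values below `threshold`."""
    return round(encode(threshold) * 255)

linear_codes = codes_below(0.18, lambda x: x)  # straight/linear encode
log_codes = codes_below(0.18, log_encode)      # log encode
print(linear_codes, log_codes)  # log assigns far more codes to the shadows
```

Same number of codes overall, just sprayed thicker where the detail matters, exactly the paint-on-a-curved-wall picture above.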
  20. She needs to tweet to exxon and bp about how they need to try harder to not spill oil.
  21. Ever since the whole Neil Young thing I've been wondering about this too and was first inclined to agree with Neil, who I'll always agree with on Fender amp selections, but in this respect it seems that he might be wrong: 16-bit 44.1kHz is about all the human ear can hear; 24-bit 48kHz is more necessary for recording and mastering overhead, but not for final delivery. Format (wav vs aiff etc.) is not as important as the codec, i.e. pcm_s16le etc. In a terminal (if you have ffmpeg installed) you can see the audio codec options by typing "ffmpeg -codecs | grep DEA" - this will give you a list of supported encoding and decoding audio codecs. Best reference I've found on this topic, which I think was also in response to "pono": http://xiph.org/~xiphmont/demo/neil-young.html Specifically with respect to the bit depth issue, from the article linked above:
      "When does 24 bit matter? Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons. 16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording -- risking clipping if you guess too high and adding noise if you guess too low -- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with. An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits."
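The "16 bits spans the hearing range" claim comes from the rule of thumb that each bit adds about 6 dB of dynamic range (20·log10(2) ≈ 6.02 dB per bit), which a couple of lines can verify:

```python
import math

# Rule of thumb behind the quote: each bit adds ~6.02 dB of dynamic
# range, so 16-bit PCM spans roughly 96 dB and 24-bit roughly 144 dB.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
```

Around 96 dB already exceeds the span between a quiet room and the threshold of pain for most listening situations, which is the article's point about delivery.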
  22. Ebrahim- I'll bite here and go a little further on Maxotics' point. No matter what algorithm you use to downscale, it's impossible to preserve data. A better descriptor would be transformation vs preservation, from one facsimile of reality to another smaller one. By using different algorithms to downsample, you aren't getting better accuracy, you're getting different results, which are more like creative choices. Some algos like Lanczos look sharper when resizing images because they are keeping areas of transition in contrast (edges) sharper than say Cubic, through iteration. Edges are how we see shapes, but that's not a more accurate method, it's just sharper looking. There are even sharper algos than Lanczos if that's what you're looking for, but the only way to really preserve data is to keep the source 4k files. Beyond that, choices with reformatting are subjective. For me recently, I needed to stick with Cubic when reformatting 4k because I was matching HD alexa background plates, it's also fast. But nuke's help page on their available reformatting algos is pretty useful, check it out. http://help.thefoundry.co.uk/nuke/9.0/Default.html#comp_environment/transforming_elements/filtering_algorithm_2d.html?Highlight=Lanczos
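To see why Lanczos reads as sharper, here's a sketch of the two kernels themselves (standard textbook definitions; the actual resize loop is omitted): Lanczos is a windowed sinc whose negative lobes are larger than Catmull-Rom cubic's, and those lobes are what create the overshoot at edges:

```python
import math

# Standard resampling kernel definitions. Lanczos has deeper negative
# lobes than Catmull-Rom cubic, which is what produces edge overshoot
# ("sharper looking", not "more accurate").

def lanczos(x, a=3):
    """Lanczos windowed sinc, support of a taps."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def catmull_rom(x):
    """Catmull-Rom cubic (the common 'Cubic' filter), support of 2."""
    x = abs(x)
    if x < 1:
        return 1.5 * x**3 - 2.5 * x**2 + 1
    if x < 2:
        return -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2
    return 0.0

# Both pass through 1 at the center and 0 at integer offsets, but
# between samples Lanczos dips further below zero than Catmull-Rom.
print(lanczos(1.5))       # deeper negative lobe
print(catmull_rom(1.5))   # shallower negative lobe
```

The deeper the lobes, the more contrast gets pumped into transitions, which is also why ringing/halos show up with the sharper filters; none of it recovers data that the downscale threw away.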
  23. Sorry Ebrahim, that sucks. It's a Michael Apted movie so it'll probably show up on Netflix at some point.
  24. Attn glass fanatics, this documentary is targeting you (us). Hopefully Vimeo will get more content like this to help kick-start their VOD business model. A quick search on Amazon and Netflix didn't return anything. From the owners of the Vimeo page: "Instrum International was born out of the desire to see festival favorites get placed in the international marketplace."