
sunyata

Everything posted by sunyata

  1. i think the net takeaway here is that the quality of the image is what matters and not the resolution.
  2. he seemed to think the tungsten looked true to reality; i guess there is no way to know if it was adding too much red unless you see the shirt for yourself. you're correct though, he didn't test it. anyway, i'd also like to see your test, good luck. accurate color matters a lot in photography and pre-press, or when calibrating monitors. in post production (depending on what you're doing), while it of course helps to have good* footage, most of the work in flame (for example) is very creative and often subjective. clients generally have a low level of interest in spot-on color. if it doesn't look right to someone, good luck arguing that the color is accurate based on a color checker. for one scene in American Horror Story, the DP shot 16mm film, overdeveloped it 2 stops, then transferred it to U-matic tape and back to a 2k digital master, all to re-create the look of a 1980s television documentary. *for the look you need to achieve.
  3. remember the shirt was still blue in their resolve color checker tool example? at that point you have to make manual adjustments. but yes, the 3-way color tool would perform precisely 3 wb transforms (for each channel), blended based on the low, mid and high points (this was just a premiere alternative). i rarely strive for automated perfection in color correction; you get within a baseline from which you can make manual adjustments. that's why it's an art and a science. but why do the final color in premiere using exported LUTs and not just do a roundtrip to resolve? i think the variation in environments and each camera's different color response to each scene will be the main problem with creating a Canon 3D LUT (or any camera's) for an A7s with the accuracy you desire. seems like you would need references from both cameras for nearly every lighting scenario (as you speculated), and internal camera settings would also need to be somehow at parity. even within that context, your distribution of colors could still be very different based on the camera's internal characteristics, creating tone imbalances when doing an automated color checker profile match (the blue shirt scenario) or amplifying the quality limitations of the A7s' 8bit 4:2:0 format. you could spend a lot of time trying to automate something only to find that your eye is still the fastest and best tool.
  4. yep, it's all true, but maybe instead of making a 3D LUT in resolve for use in premiere with that auto macbeth tool (for each lighting scenario?), it might be faster to use an X-Rite 3-tone grayscale checker (or use the ones on the passport) and then the eyedropper on a 3-way color corrector directly in premiere to set the wb calibration (see the gray-patch sketch after this list). it should be fast, fairly accurate, and then you can make your adjustments. or you could just leave it all for resolve and use the macbeth tool after you do your edits; just export an XML file from premiere.
  5. just came across the "DSLR Slate" app with a tiny horizontal checker? no idea if you could calibrate at all off your iPad screen; maybe you could tape a horizontal macbeth chart to the top of an iPad and use the rest of the slate's functionality. and this one is more full featured, $30: https://itunes.apple.com/us/app/movieslate-clapperboard-shot/id320315888?mt=8 and the iPhone CineMeter? got pretty high marks from users on-set..
  6. per the iFixit teardown here, the panel is by LG.
  7. i tested these 3 gamma profiles and also 3 custom color matrix transforms (no channel offsets) on different A7s S-Log2 / S-Gamut ungraded footage.. one preset obviously won't work for all the different ev, lighting and wb settings. the first gamma profile is traced from the F5/F55 forum's cube file, with the baked-in color grade removed. nuke is exporting 1D LUTs with 3 columns, so i'm working* on a group node with an ocio wrapper to generate a correct 1D file (see the LUT-rewrite sketch after this list). * slowly
  8. might have to render an intermediate file with something like ffmpeg, which supposedly supports using LUTs, although i haven't tried it.. and it's freeeeee! (note: prores wants a 10bit 4:2:2 pixel format, so yuv422p10le rather than yuv420p) ffmpeg -r 24 -y -i my_sgamut_video.mov -vf lut3d="/path/to/my_downloaded_lut.3dl" -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le /path/to/render/drive/i_hope_it_works.mov
  9. I was thinking the same thing, the histogram looks like a scaling problem. Has a test been done to make sure that just scaling the gamma into the correct range won't solve the problem? a differently shaped gamma curve when using HDMI seems unlikely, but who knows... a custom 1D shaper lut might be a fix if scaling is the problem (see the range-scaling sketch after this list).
  10. tupp- wow, responding to almost every sentence, even the ones you don't have a problem with! well, since i hope to eat dinner sometime tonight, i'll just address some of the highlights:

i said, "so the 1bit camera example was theoretical, not in fact how you would implement a 1bit camera in the real world for best results etc.. in which case you would be leaving out too much info really to cover." and you respond, "What is the meaning of this statement? Given enough resolution and barring noise, a 1-bit camera could certainly provide the same amount of color info as an Alexa 65." you misunderstood. i meant that if you take a simple thought experiment and literally start describing algos and pixel-blocking techniques, you introduce too many variables for the thought experiment to be useful.

i said, "in order to get that ... black and white image you see in the newspaper, you often need to do several things. first, start with a much higher resolution and generally 8bit image." you said, "One could start with an analog image that has 0-bit depth. Regardless of the original bit depth and resolution (or color depth), the image is rendered within the color depth limits of the printing system." obviously the printer will limit the printed color depth; what i was saying is that you have to start with a high resolution (and color depth) image to create a low resolution monotone image with fine details. anyone who has made an indexed gif or line-art understands this principle.

i say, "anyway, what we see is the direct result of a series of changes to a much more detailed image, which we can't then recreate from the printed image, no matter what the resolution." and you say, "Not necessarily -- not if the printing system is higher quality than that of the original image... there could be no essential loss in quality -- even though the final image is not exactly the same as the original." then it's not the original image, it's something else.

i said, "this is often misunderstood because of the 'resolution influences color depth as much as bit depth...' type of statement." you said, "I fail to see how the fundamental relationship between resolution, bit depth and color depth causes misunderstanding, since most are completely ignorant of that actual relationship." it's an oversimplification.. i'll just refer you to the posts and discussions about generating "true 10bit 2k 4:4:4 from 8bit 4k 4:2:0" for some of the confusion.

i say, "an 8bit file is considered low dynamic range for this (and other) reasons." you respond, "Not by those who know the difference between bit depth and dynamic range." in this context, dynamic range is referring to the ability to record light info w/o visible steps. we understand that it's not referring to an empirical definition of dynamic range but rather a low or high usable potential.

a couple more -- you are very adamant that dynamic range and bit depth are "two different properties", even though i keep stating that i'm not saying they are the same, but nevertheless: you said, "Dynamic range and bit depth are two completely different and independent properties. By the way, dynamic range also exists in analog systems, but bit depth exists ONLY in digital systems." this is understood by anybody with a little EE and CS knowledge.. but more importantly, i keep stating that what i'm referring to is "usable dynamic range". there is no point in encoding a gazillion stops of dynamic range into a 2bit file format. you can't use it.

extensive tests have been done on the human visual system and the minimum contrast steps that go unnoticed; refer to JND, or just-noticeable difference, with respect to contrast perception. so again, this is not saying dynamic range = bit depth; this is saying that the usable dynamic range encoded into a digital file can be limited by too few bits per pixel. terms like HDR and LDR workflows have emerged out of that reality.

and lastly, you say, "don't believe everything that you read on Wikipedia. The fact is that a 4-bit digital system can have over thirty stops of dynamic range, while a 16-bit system can have only one 'stop' of dynamic range." refer to the above point about encoding over 30 stops into 16 colors. anyway, you aren't just saying don't believe Wikipedia about the term "dynamic range" with respect to image formats; you're also saying don't believe the Academy (see ACES), ILM (see EXR), ImageWorks (see OpenColorIO), UC Berkeley (Paul Debevec) or Radiance; that the .hdr file extension should be renamed; and that the Wikipedia page titled "High Dynamic Range File Formats" should be taken down... and more. Anyway, i gotta stretch now.. :( thanks for the distraction from actual work.
  11. You and I have kinda discussed this in the past.. I think because we are coming at this from different backgrounds, you think that I'm saying dynamic range is the same as bit depth. more below on this -- so the 1bit camera example was theoretical, not in fact how you would implement a 1bit camera in the real world for best results etc.. in which case you would be leaving out too much info really to cover. but even summing the results of each pixel group with infinite resolution would not account for the loss of information. in order to get that (you called it 1bit) black and white image you see in the newspaper, you often need to do several things: first, start with a much higher resolution and generally 8bit image.. if it's a newspaper like the LA Times for example, you need to convert to grayscale, increase contrast, possibly add unsharp mask and print to a 100 line-per-inch screen velox (that's probably really dated info). anyway, what we see is the direct result of a series of changes to a much more detailed image, which we can't then recreate from the printed image, no matter what the resolution. this is often misunderstood because of the "resolution influences color depth as much as bit depth..." type of statement.

"usable dynamic range and bit depth" -- you have argued that bit depth and dynamic range are not related, but i noticed in one of your other posts you asked why the banding in a blue sky was so bad with the A7s and wanted to know if there was a solution. an 8bit file is considered low dynamic range for this (and other) reasons: you are trying to encode too much contrast variation into too few code values, all noise and dither strategies aside. if you only had a narrow range of light intensity or gamma, particularly in the shadows, and no compression etc, it would be less noticeable; so would a higher bit depth encoding. image formats below 16bit are often referred to as low dynamic range in a render pipeline.. even 10bit is technically considered LDR. for this, the bit depth and the format are both important.. i.e. integer, float or 1/2 float (exr). from Wikipedia on OpenEXR: "It is notable for supporting 16-bit-per-channel floating point values (half precision), with a sign bit, five bits of exponent, and a ten-bit significand. This allows a dynamic range of over thirty stops of exposure." another context for the term.. and obviously bit depth is important with respect to the dynamic range potential (see the half-float sketch after this list).
  12. yep i agree, but there is another possible end user: people who want to make or modify their own camera and would love all the info :)
  13. tupp- "there are a lot of misconceptions" is probably an understatement, especially with what we're referring to when we say "dynamic range".. because that can describe the scene, the sensor, the codec, and the bit depth of the end file format (which determines usable dynamic range).. but with respect to your 1bit example: if you had a camera that could shoot infinite resolution, but could only determine a white or black pixel based on relative scene luminance, say setting middle at an 18% gray reflector, so anything above or below that would quantize to white or black, then regardless of resolution you would never have the details that you see in the newspaper example.. which has images that started out as photographs and then went through a line screen, essentially a dithering process, or as an artist's engraving. in other words, they started with the additional dynamic range data in order to know which pixels to throw away, or it was in the mind's eye of an artist who was imagining that information. but if you are capturing a wider dynamic range with the sensor and then encoding into 1bit, with dithering etc, then that would be more analogous to the newspaper example, although information is still lost that cannot be recreated by downsampling (see the threshold-vs-dither sketch after this list). i'm also sorry to harp on this point..
  14. yes, you are always losing information, from the start of the signal chain to the end; the question is what you can get away with (for what target end use). but extensive human tests actually have been done on the visual system, i'm always saying this.. e.g. the just-noticeable difference, with respect to noticeable contrast levels and color changes. most of the movies you've watched projected in a theater were probably scanned at 2k 10bit Cineon log (which set the 18% middle gray card to code value 470).. for me it's probably only around half -- I'm assuming you're a little younger than me :( . Even the Alexa, which is 16bit internal and usually recording to 10bit 2k Log-C, uses a very similar gamma curve to Cineon (that's what the "C" stands for). This is from Arri's Alexa FAQ page: "If there are not enough steps between the brightest and the darkest part of the image, you will see banding artifacts, where, rather than seeing a gradual change in lightness, you will see distinct bands of lightness. However, 10 bit images usually have enough steps to avoid such artifacts."
  15. Well, to be fair, this project has been in development for several years.. they completed many milestones before this beta phase started, and I think the goal is admirable. I do have concerns though, mainly over the profitability and their need to spend a lot of time designing the firmware for each sensor (this is largely FPGA design). Then they will have to deal with a very demanding crowd of users (understatement). I think it's going to be for a niche market within a niche market, but they say they are committed to keeping all technologies open source, so we all can benefit from the ongoing research.
  16. Ha, speaking of the late 90's, I still have 18GB purple SCSI shuttles at home. I want to fire up the E8 and see what's on them, but that will probably never happen... it's going to be stuff like an interlaced 720x480 dancing taco that will just make me sad.
  17. And now the new Lightworks 12.x is the same across Windows, MacOS and Linux.. on a current job I used it to take 4 low-res screeners with burned-in timecode, each a 1hr-long episode, and make all my searchable annotated cue points (rolling when it didn't need to be so accurate) to be used for pull requests from a D5. You can modify the cue point timestamps to match a master tape just by slipping the clip in a new timeline (this was a workaround, the modify option wasn't cutting it), then export your cue points for any timeline as a csv file; it includes timecode as the first field, which can be imported into a spreadsheet template (what we needed for the FCP batch list). Saved me tons of time copying and pasting, and this function works with the free version. Got out of the office, took it to a kinko's masquerading as a Coffee Bean.
  18. Very talented people, horrible management.
  19. After looking at Kristoffer's footage, I noticed that in the bright blue areas, the channels that were clipping are actually red and green (pinned at zero), creating an area of pure saturated blue, but there was still a little detail left in the blue channel highlights. It makes me wonder if a yellow filter might be enough to prevent this blue threshold from triggering? You would then have to color correct.. not ideal, but it might work for blue LED lights and lasers!
  20. Really more about light types and their characteristics. Lots of good info in the other talks too. Introduction to lighting:
  21. watermark it with the name of the person who signed the NDA and give it to them in low resolution? that's how i get screener material that hasn't been released yet. if it gets out, someone gets sued, heads roll, people jump off balconies etc.
  22. the key word is "discernible". *********** technobabble below *************** in a 10bit file you have 2^10 or 1024 shades.. you could interpret that as holding 10 stops if you calculate it as a contrast ratio of 1:1024, but that has nothing to do with what the eye really notices in terms of steps of contrast. So the 13-stop formula above is not what you need for the number of shades required to encode 13 stops of light from a scene. the just-noticeable difference for levels of contrast is actually based on other factors, including resolution, intensity and refresh rate. you can also encode light intensity as linear, or as a log curve, which has a major impact on the number of shades necessary for unnoticeable contrast steps (see the linear-vs-log sketch after this list). some tests indicate 12bit is the minimum, but it's standard when working with film or 2k digital footage (which adds grain and/or noise) that you can encode 13-14 stops into 10bit log gamma w/o discernible steps. **************** end babble ********************** translation.. 10bit log is good enough for 13-14 stops.
  23. i like your 3D jax illustration! i would repeat my previous comments though about the benefit of keeping gamma adjustments separate from your color space transforms. we use 1D LUTs for gamma because they can be inverted, while 3D LUTs can't. color space transforms are typically implemented (and documented) as a 3x3 matrix; they can be converted in one direction, back again, or to another space when necessary (see the matrix sketch after this list).
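
a few rough sketches for the posts above that pointed forward to them, all in python and all illustrative rather than definitive. first, re: post 4 -- a minimal sketch of what the gray-patch / eyedropper white balance is doing under the hood, assuming you've already sampled the RGB of a gray chip (the numbers below are made up):

import numpy as np

patch_rgb = np.array([0.52, 0.47, 0.39])       # hypothetical warm-ish sample of a gray card, 0-1 floats

gains = patch_rgb[1] / patch_rgb               # per-channel gains keyed to green; green gain stays 1.0

def white_balance(image_rgb):
    # multiply every pixel by the gains and clip back to 0-1
    return np.clip(image_rgb * gains, 0.0, 1.0)

img = np.array([[[0.52, 0.47, 0.39], [0.80, 0.75, 0.60]],
                [[0.10, 0.09, 0.07], [0.30, 0.28, 0.22]]])
print(white_balance(img))                      # the sampled patch comes out neutral (equal R, G, B)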
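
re: post 7 -- a sketch of the nuke 1D LUT cleanup, assuming the export is one "r g b" triple per line, that the three columns are identical because it's a gamma-only LUT, and that the target app wants a single value per line (file names are hypothetical):

def collapse_lut(in_path, out_path):
    # rewrite a 3-column 1D LUT text file as one value per line
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            parts = line.split()
            if len(parts) != 3:
                continue                                  # skip headers / blank lines
            r, g, b = (float(p) for p in parts)
            # sanity check that this really is a channel-identical (gamma-only) LUT
            if abs(r - g) > 1e-6 or abs(r - b) > 1e-6:
                raise ValueError("columns differ -- this isn't a plain gamma LUT")
            dst.write(f"{r:.6f}\n")

collapse_lut("nuke_export_3col.lut", "gamma_1col.lut")    # hypothetical file names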
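
re: post 9 -- a sketch of a 1D shaper LUT that only does a linear range stretch, assuming the HDMI recording is sitting in 16-235 video levels (my assumption, not something verified from the footage); the 1D .cube layout here is what resolve reads, but check your target app:

SIZE = 1024
LO, HI = 16.0 / 255.0, 235.0 / 255.0           # assumed legal-range black and white points

with open("legal_to_full_shaper.cube", "w") as f:          # hypothetical output name
    f.write("LUT_1D_SIZE %d\n" % SIZE)
    for i in range(SIZE):
        x = i / (SIZE - 1)                     # input value 0..1
        y = (x - LO) / (HI - LO)               # linear stretch to full range
        y = min(max(y, 0.0), 1.0)              # clip anything outside
        f.write("%.6f %.6f %.6f\n" % (y, y, y))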
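
re: post 11 -- a quick numeric check of the "over thirty stops" half-float claim, using numpy's float16, which has the same 1 sign / 5 exponent / 10 significand layout as exr's half:

import numpy as np

info = np.finfo(np.float16)
stops = np.log2(float(info.max) / float(info.tiny))   # largest value vs smallest *normal* value
print("half float max:", info.max)                    # 65504.0
print("smallest normal:", info.tiny)                  # ~6.1e-05
print("dynamic range in stops: %.1f" % stops)         # ~30 stops (more if you count denormals)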
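
re: post 13 -- a toy illustration of the hard-threshold "1bit camera" vs the newspaper line-screen idea, using a tiny gray ramp and a 2x2 bayer dither (both chosen just for illustration):

import numpy as np

grad = np.tile(np.linspace(0.0, 1.0, 16), (4, 1))     # 4x16 gray ramp, 0..1

# the "1bit camera" case: everything above/below 18% gray becomes white/black
hard = (grad > 0.18).astype(int)

# the "newspaper" case: ordered dither -- the threshold varies per pixel, so tones survive as patterns
bayer2 = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0     # 2x2 bayer threshold map
thresh = np.tile(bayer2, (2, 8))                      # tile it to the image size
dith = (grad > thresh).astype(int)

print(hard)   # a single hard edge -- all the tonal information is gone
print(dith)   # the density of ones follows the ramp -- tones preserved as dot patterns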
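
re: post 22 -- a back-of-envelope table of how 1024 code values spread across 13 stops with linear vs pure-log encoding; real camera curves (cineon, s-log2, log-c) aren't pure log and noise/dither change what's actually visible, so treat the numbers as illustrative:

STOPS = 13
CODES = 1023                                          # usable steps in a 10bit file

print("stop (from brightest) | linear codes | log codes")
for k in range(STOPS):
    linear = CODES * (2.0 ** -k - 2.0 ** -(k + 1))    # fraction of full scale that lands in this stop
    log = CODES / STOPS                               # pure log: every stop gets the same share
    print("%2d | %7.1f | %5.1f" % (k + 1, linear, log))
# linear: ~511 codes in the brightest stop, well under 1 in the last few -> banding in the shadows
# log: ~79 codes per stop everywhere, which is roughly the "10bit log holds 13-14 stops" point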
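
re: post 23 -- a sketch of why a 3x3 color space matrix round-trips cleanly while a sampled 3D LUT doesn't; the matrix is the commonly quoted linear sRGB/Rec.709-to-XYZ (D65) matrix, included only as an example:

import numpy as np

RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)                # the reverse transform is just the matrix inverse

rgb = np.array([0.25, 0.50, 0.75])                    # some linear RGB pixel
xyz = RGB_TO_XYZ @ rgb                                # forward transform
back = XYZ_TO_RGB @ xyz                               # and back again
print(np.allclose(rgb, back))                         # True -- nothing lost in the round trip

# a 3D LUT, by contrast, is a sampled table; once values are clipped or two inputs land on the
# same lattice point, there is nothing left to invert back to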