Everything posted by sunyata

  1. jcs- the main point is abstraction, and here we're summing left to right across each row, not also top to bottom. if you apply this transform to a black and white gradient, or a plot scanline, you won't see color offsets (added or subtracted per channel). it's your basic color space transform, and it can also be inverted.

     SGamut to sRGB
      1.87785  -0.79412  -0.08373  = 1
     -0.17681   1.35098  -0.17417  = 1
     -0.02621  -0.14844   1.17465  = 1

     sRGB to SGamut
     0.566069   0.342758  0.0911729  = 1
     0.0769662  0.799066  0.123967   = 1
     0.0223549  0.108626  0.869019   = 1

     since there seems to be a lot of variation with SGamut, camera to camera, ev to ev, handling gamma and color space separately lets you mix and match custom gamma profiles / color spaces / offsets in post. you could of course choose to wrap everything into a single baked 3D LUT... whatever you prefer.
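     as a minimal sketch of what "apply and invert" means here (numpy assumed, matrix values copied from the SGamut to sRGB rows above):

       # sketch: applying and inverting a 3x3 color space transform
       import numpy as np

       M = np.array([[ 1.87785, -0.79412, -0.08373],   # SGamut -> sRGB
                     [-0.17681,  1.35098, -0.17417],
                     [-0.02621, -0.14844,  1.17465]])

       print(M.sum(axis=1))                  # each row sums to ~1, so gray stays gray
       gray = np.array([0.18, 0.18, 0.18])   # a neutral pixel, linear light
       srgb = M @ gray                       # no per-channel offset appears
       back = np.linalg.inv(M) @ srgb        # near-lossless round trip back to SGamut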
  2. not sure, it's an imageworks format that has a 3x3 and an offset.. but that's going towards mixing in all color decisions. it looks like this internally (values are fpo):

     0.880608     0.155864     0.017461313  0.0
     0.033194133  0.96680668   0.0          0.0
     0.013515     0.070699     0.933638605  0.0
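     for illustration only, a rough sketch of reading that kind of file, assuming the usual layout of three rows of four values (3x3 matrix plus an offset column, offsets assumed to already be normalized):

       # rough sketch: load a 3x3 + offset matrix and apply it as an affine transform
       import numpy as np

       def load_mtx(path):
           vals = np.loadtxt(path).reshape(3, 4)
           return vals[:, :3], vals[:, 3]     # 3x3 matrix, per-row offset

       def apply_mtx(rgb, m, offset):
           return m @ rgb + offset            # matrix times pixel, plus offset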
  3. hey jcs! it should be noted first off that a 3D LUT is not really used for the same purpose as a 3x3 transform matrix. a 3D LUT can include nearly all your non-temporal top-level color grades, so it's often used as a one-way, baked-in final color look or a pre-vis generator. this 3x3 matrix is a color space transform: each row should sum to 1 (rounded to a precision of around 4-6 decimal places), it should be a near-lossless transform, and it should be invertible, appropriate for cases where you need to go from one space to another w/ minimal information loss. so they're apples and oranges really. in the case of what we're trying to do here, it makes sense to start with the gamma.. isolate that issue, then apply a 3x3 color matrix transform to get into the proper working color space, whatever that may be for you; then you can start tweaking your final grade, but from a (hopefully correct) baseline. so it's really about workflow flexibility and abstraction. if you go from SGamut to XYZ with the 3x3 matrix above, you can then apply any number of XYZ to sRGB, XYZ to Adobe Wide, XYZ to ACES.. whatever you need. it might make sense to make a .spimtx file though?
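     as a small sketch of that chaining (the SGamut to XYZ values are the ones quoted elsewhere in this thread, the XYZ to sRGB matrix is the standard D65 one, and the gamma step is a placeholder for your own 1D LUT):

       # sketch: 1D gamma first, then chain 3x3 matrices through XYZ
       import numpy as np

       SGAMUT_TO_XYZ = np.array([[ 0.706493,  0.128799,    0.115157 ],
                                 [ 0.270984,  0.786595,   -0.0575787],
                                 [-0.009678,  0.00459997,  1.09399  ]])

       XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],   # swap for XYZ to ACES, etc.
                               [-0.9689,  1.8758,  0.0415],
                               [ 0.0557, -0.2040,  1.0570]])

       def gamma_to_linear(code):
           return code                        # placeholder: your 1D gamma LUT goes here

       def sgamut_to_srgb_linear(rgb_code):
           lin = gamma_to_linear(rgb_code)
           return XYZ_TO_SRGB @ (SGAMUT_TO_XYZ @ lin)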
  4. yep, but that statement wasn't a literal question, it was just for the sake of discussion, i.e. rhetorical. rather than a 4k vs 2k war, they seem more concerned about dynamic range and the image quality of the end format. they mention that they don't really care about resolution, they want _____ blank, which sometimes comes across as nice colors, at other times they call it dynamic range, but i think we know what they're getting at: it's the quality of the image not determined by resolution alone. and one person does mention that none of his female talent (but let's be honest, guys too) are eager for 4k.. that's an age-old problem, the fear of the close-up. i hear what ebrahim is saying about never getting enough resolution.. i'm gonna fall on the side that says there is such a thing as too much resolution.. unless we're trying to publish for massive screens that we don't currently have.
  5. do a lot of people get lighting confused with exposure? i'm not sure about that, that would assume people think they are interchangeable terms. anyway, the thing about Citizen Kane is that it was shot between f/8 and f/16, it used fast film, custom lenses, lots of artificial light, all to get the deepest focus i've ever seen, still to this day, w/o compositing.. but i'd venture to guess that the Kodak Super XX which they used still printed > 13 stops once it was processed.. unfortunately we'll never get to see the original print quality. in agreement with your fantasy comment, that's why i can watch this movie over and over.
  6. actually, the bmpcc is not the solution to low cost HDR 2k, the sensor has flaws, such as pretty severe moire.
  7. Yes, lighting is the key, but how can you accurately represent that lighting w/o wide dynamic range?
  8. it's not necessarily required that a camera manufacturer make TVs in order to see a profit in selling 4k cameras. if there is a demand, then they will jump on board.. but i still believe the initial push was to provide a low-cost incentive to refresh the product cycle.
  9. thanks for posting this, it's what i've been feeling/saying here since my first post. with digital pro-sumer cameras, i believe we need more usable dynamic range, not resolution. personally, 2k at 24p seems to work fine for me: if it isn't broke, why fix it? i think LDR 4k is really a ploy to sell more TVs.
  10. wait, i thought that's what 4k was supposed to do..
  11. so is this using a dichroic beam splitting prism and 3 monochrome sensors?
  12. I'm not really into 4k, or 6k? But I'm glad Arri is making the right moves in the digital space.. would hate to see them go extinct. I guess this is good news for IMAX? i see a little spike after hours, probably coincidence.
  13. i researched this too and found these 2 color transform matrices (the first looks to be the same):

      SGamut to sRGB
       1.87785  -0.79412  -0.08373
      -0.17681   1.35098  -0.17417
      -0.02621  -0.14844   1.17465

      also, this may be useful for someone who needs to go to other color spaces from a neutral XYZ:

      SGamut to XYZ
       0.706493   0.128799    0.115157
       0.270984   0.786595   -0.0575787
      -0.009678   0.00459997  1.09399

      it's a good starting point but the colors can still be too saturated, especially in the blues. jcs- i don't think S-Log3 will make it into the A7s because of its steeper log curve and wider highlights... for a prosumer camera with 8bit output, that could increase the complaining about artifacts, but you never know. dhessel- not baking your 1D gamma LUT and your color transform into a single 3D LUT is smart: the 1D LUT is invertible, you can modify it w/o changing color, and the 3x3 transforms (which jcs is calling affine) can be inverted and are more precise for linear transforms than 3D LUTs, which are bound by their sample resolutions and not reversible.
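      to make the "keep the 1D LUT separate and invertible" point concrete, a small sketch with a placeholder gamma curve (numpy assumed, not a real camera curve):

        # sketch: separate 1D gamma LUT + 3x3 matrix, both invertible
        import numpy as np

        lut_in  = np.linspace(0.0, 1.0, 1024)   # code values
        lut_out = lut_in ** 2.2                 # stand-in for a measured gamma curve
        M = np.array([[ 1.87785, -0.79412, -0.08373],
                      [-0.17681,  1.35098, -0.17417],
                      [-0.02621, -0.14844,  1.17465]])

        def forward(code_rgb):
            lin = np.interp(code_rgb, lut_in, lut_out)   # 1D LUT: gamma only, no color change
            return M @ lin

        def inverse(rgb):
            lin = np.linalg.inv(M) @ rgb
            return np.interp(lin, lut_out, lut_in)       # monotonic LUT inverts by swapping axes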
  14. oh no, not this thread.. you can just scale in your NLE of choice (stated above), or ffmpeg can do this and more for free. for example, i wrote this expression for my batch conversions to dpx sequences, which uses bash, sed, and ffmpeg (you only need to install ffmpeg). it takes a folder of clips (mov extension), converts them all to 10bit 4:2:2 dpx, and saves the sequences into folders named after the clips' filenames, minus their extensions. this command will work on mac and linux using the bash shell, which i think is still the default in osx.

      for i in `ls *.mov`; do mkdir "`echo $i | sed 's/\(.*\)\..*/\1/'`"; ffmpeg -r 23.98 -i "$i" -s 1920x1080 -pix_fmt gbrp10le "`echo $i | sed 's/\(.*\)\..*/\1/'`/$i.%04d.dpx"; done

      if you're on a mac and don't have ffmpeg installed, google how to install it for mac, tons of walk-throughs out there. also, this is just an anti-alias thing, you're not getting true HD (as in the same as if recorded at) 10bit 4:2:2 w/o banding etc. from a 4k 8bit 4:2:0 file.. if you see chromatic compression in your shadows, or banding, it's still going to be there, just a little smaller and mushier from the resample.
  15. Grading

      hey nahua- i didn't bother to adjust levels of the bottom layer, but of course you could do that to match the black diffusion filter. also, the blur is exaggerated; you'd probably want to drop the gain a little to make the glows less opaque. this is really more of a glow effect than a blur when you build it manually.
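      for reference, a rough numpy/scipy sketch of that manual glow (blur a copy, drop its gain, screen it over the base; the radius and gain values are just placeholders):

        # rough sketch of a manual glow: blur a copy, lower its gain, screen over the base
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def glow(img, radius=8.0, gain=0.5):
            # img: float RGB array in 0..1, shape (h, w, 3)
            blurred = gaussian_filter(img, sigma=(radius, radius, 0))
            layer = np.clip(gain * blurred, 0.0, 1.0)
            return 1.0 - (1.0 - img) * (1.0 - layer)   # screen blend keeps the glows translucent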
  16. I'm gonna go with a more general statement because I think once you define something too clearly, you start to lose your creativity, but in general, I think a viewer's state of mind is what defines "cinematic": If you know you're being told a story and your mental state is that of understanding you are watching a facsimile of reality, then I think it could be considered cinematic.
  17. Sounds like you need to set up some internal QC (as said above) of the final export before you deliver. You have to know what your work will look like when it's aired so you can troubleshoot internally and fix the "smooth" problem. You can also try recording the spot when it airs and calibrate against the file you exported for delivery... and throw a crap TV set into the test list too, since that's really what people will see it on.
  18. I believe those screenshots came from an F65 or F55.
  19. The Slog2/SGamut LUTs that I've found included with most software are designed for the F55 and don't work for the A7s. The 3D Slog2/SGamut LUTs posted to Sony's CineAlta forum have color transforms burned in that also look specific to the F55. But actually, it's in the blacks to dark grays that banding and macroblocking are most visible, not in the mids to whites, so this might account for how surprised he is. Our eyes are more attuned to changes in dark contrast. This is why log gamma usually places middle gray around 18% instead of 50%, which would look close to pure white.
  20. guys, this applies to engines, whiskey, amplifiers, blu-ray players, vfx, lenses, monitor panels.. it's your job as DIYers to find these imbalances, take the labels off, solder off the capacitors, hack the firmware, de-restrict the exhaust, snap the aperture rings... and keep it a secret: it's a good thing.
  21. Andrew, do you have the above flowers in S-Log2 / SGamut and rec709, ungraded?? I'm trying to get a gamut conversion matrix finished but I don't own the camera, the flower shot would be perfect for the reds.
  22. I didn't know about this lens, or the connection with the lens designer, and Zeiss' attempt to get the lens coatings, I must be an idiot too!
  23. Depending on what your work environment is, the video card requirements can actually be more expensive than the computer, i.e. working online or offline. For example, a primary display card with SDI output can run over $4k, and that's not including Decklinks. Also, the Quadro and GeForce have different architectures, and that does become an issue with smooth playback. I've tried to go the GeForce route only to find horizontal lines and artifacts while trying to play back at a locked framerate. The card can be very important to a complete editing suite depending on the demands; it's always been that way, unfortunately.
  24. CPU speed and video card aside, heed my advice... you need to think about backups! I'd go with a used / tested LTO-4 or 5, on ebay. If you're serious about editing, you're gonna need a searchable library of footage to keep offline, not just a stack of firewire drives and dead computers. Just my 2 cents.