sunyata

Members
  • Posts

    391
  • Joined

  • Last visited

Everything posted by sunyata

  1. So if your idea of good-looking is film, and even the digital camera companies use that as a benchmark, people need to remember that film is scanned for a digital intermediate at 10bit log-encoded gamma to preserve the density information of the film negative, and so that it can be reprinted without any loss. When working on 10bit log, converted to linear so that it looks correct on your screen, you can bring out the details you want in post in a non-destructive way. So going back to the original theme of this post: is 4k 8bit 4:2:0 to 2k 10bit 4:4:4 really going to yield "true 10bit 2k"? The answer is a clear no, and if you are someone who wants to shoot flat and have latitude in post, you should probably consider shelling out for the Shogun if you use a GH4 and want to capture 10bit.
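As an aside, for anyone curious about the 10bit log to linear conversion mentioned above, here is a minimal sketch of the standard Cineon curve using the commonly published constants (black at code 95, white at 685, 0.002 density per code step, 0.6 display gamma). This is generic Cineon math, not any specific app's implementation:

```python
# Standard Kodak Cineon 10-bit log parameters (commonly published values).
BLACK, WHITE = 95, 685
DENSITY_PER_CODE, GAMMA = 0.002, 0.6

def cineon_to_linear(code):
    """Convert a 10-bit Cineon log code (0-1023) to scene-linear light."""
    offset = 10 ** ((BLACK - WHITE) * DENSITY_PER_CODE / GAMMA)
    gain = 1.0 / (1.0 - offset)
    return gain * (10 ** ((code - WHITE) * DENSITY_PER_CODE / GAMMA) - offset)

print(cineon_to_linear(95))          # black point -> 0.0
print(cineon_to_linear(685))         # reference white -> 1.0
print(cineon_to_linear(1023) > 1.0)  # codes above 685 keep highlight headroom
```

Note how codes above 685 map past 1.0 in linear: that is the density headroom of the negative that an 8bit video encode throws away.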
  2. Yep, you're right, 2k cineon scans looked great. Better than most digital does today.
  3. Okay JCS-- I'm not sure it is to everyone, but I approve this message. Sorry, I've been getting a lot of robocalls lately. And Tupp, very clearly stated too. That's 3 people, 3 people can't be wrong. But don't get too excited about the fine noise part..
  4. jcs- When you say "nobody was able to come up with an example" you must be referring to another post that I missed, because I keep seeing this as an open debate. That's great if it was tested and de-bunked, but I haven't gotten that, especially from the title of this thread. I'll address your comment about 8bit monitors. I work on an 8bit monitor (hopefully I'll change that soon), but the jobs I work on are delivered at 10bit or higher, and the footage is also 10bit or higher. I can see details of the higher source material just fine. If I'm doing a luma matte and I push the whitepoint and blackpoint so that they practically touch, I need the extra dynamic range. It also allows more stops with a color correct, it helps with tracking, pretty much everything. As I'm making adjustments I can see the changes happen fine; the fact that you work on an 8bit monitor does not inhibit your ability to deliver a 10bit+ job. It's really only in the blacks that you're sometimes flying blind; what is brighter than 255 white is usually not an issue. A couple of times there have been some gotchas in the very low end that I couldn't see on my monitor, but you learn how to avoid that. It is almost always with computer generated elements, which could be 3d elements or 2d effects, and in that case film grain is helpful, but also motion blur and just avoiding symmetry in general.
  5. jcs- Yes, but as per the title of this thread, people are confused as to whether or not you can convert 4k 8bit 4:2:0 to 2k 10bit 4:4:4 and get the grading and smoothness of detail that 10bit acquisition can give. They "want to get to the bottom of it", which I appreciate. By bringing up adding noise to reduce banding, algorithms for dither, the limitation of what we see with 8bit monitors (which makes it seem irrelevant for anyone to want anything higher than 8bit unless you're viewing on a 10bit monitor), and the fact that the chromatic channels don't add up to 10bit 4:4:4, although I don't doubt you mean well, you've helped to confuse the issue even more. Andrew is now posting an article on how to "cure banding", almost within minutes of learning that the technique exists. We should really just stay focused on the flaws in the bits-to-pixels = depth workflow. I think you understand its fundamental flaw: it does not represent the scene more accurately or improve the stops that you can push before you see artifacts. Can we just agree that all the conversion from 4k to 2k does is anti-alias and shrink? Then the discussion about dithering algos and 8bit monitors can be an aside. I have my thoughts about all that but I don't want to get off topic.
  6. Nope.. it is not 10bit luma from the scene and therefore not really relevant. Noise is inherent in all cameras, film and digital, and I wouldn't advise adding more just to remove artifacts. Banding is very specific to symmetrical lights or shadows and it's usually only an issue with computer generated images. The much bigger problem you face with your footage is the 8bit 4:2:0 artifacts. They will not go away by adding noise. They might become less visible, but at the expense of your footage; that's really taking out the sledgehammer. The reason I did a little zoom window in that animation is so that you can see what 8bit 4:2:0 looks like up close vs 10bit 4:2:2. That is your enemy, not banding. There have been a couple of red herrings thrown out here: 1) 8bit monitors, 2) adding noise to remove banding, and 3) the fact that you only get true 10bit in the luma channel, which implies that this concept would work if you could have a few more bits, for example by starting with 8k. This is not the case. All you are accomplishing by reformatting 4k to 2k is anti-aliasing and shrinking the image; there is no bit depth conversion that is akin to capturing in 10bit or greater.
  7. JCS-- Noise is a tried and true trick to prevent banding with CG and I've used it many times when delivering dark luminous glows that were created in post, also adding a matching film stock based on whatever plate I'm trying to match, but it can only do so much. This is even considering rendering 10bit log cineon from a float comp, and delivery to film prints or dcp.. adding noise on top of log can be insurance against banding. The 8bit monitor issue is an old one too, if you're working on an 8bit monitor it is common to sample the image to make sure any artifacts you are seeing are not in your monitor. You have to float over the image with some sampler and make sure the numbers are changing. In fact, if I had to get a new monitor, it would be HD 10bit over 4k 8bit for all these reasons. I have monitors I can test on but I'm working on an 8bit IPS that I'm used to. But you can tell the difference between 8bit from your monitor and an 8bit file that has 4:2:0 compression on it. The chromatic subsampling is very blocky and distinct.
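The "float over the image with a sampler" check described above can be sketched in code: if the stored values are actually changing across a region that looks flat or banded on screen, the artifact is in the 8bit monitor, not the file. A toy version on a synthetic ramp (not real footage):

```python
import numpy as np

def values_are_changing(row, start, end):
    """Return True if the stored pixel values vary across the span.

    If they vary but the region looks flat on screen, the artifact
    is in the display, not the file.
    """
    span = row[start:end]
    return np.unique(span).size > 1

# A 16-bit ramp: values change smoothly, so any banding you see
# in this region would be a display artifact.
smooth = np.linspace(0, 1000, 2048).astype(np.uint16)

# The same ramp truncated to a handful of levels: values really are
# flat in places, so the banding is baked into the file itself.
banded = (smooth // 200 * 200).astype(np.uint16)

print(values_are_changing(smooth, 100, 110))  # True
print(values_are_changing(banded, 100, 110))  # False
```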
  8. HA, this has turned into the breaking product news thread.. Going a little back on topic, just did this for FPN noise recorded in film gamma mode. Sorry, don't want to "overpost" this. I made a static image of a BMPC's noise fingerprint and then subtracted it. Gonna do some background footage soon hopefully.
  9. First you need to record noise from the camera you are trying to de-noise. The technique used was recording with the lens cap on at 600 iso, film gamma, 10bit. Then 150 frames of the noise were averaged into a static image. That was done in Nuke with the FrameBlend node, which created the "fingerprint" for that specific camera. Then a subtract was done in flame using the generated graphic with the clip, but most NLEs have some way of taking 2 clips and doing a math operation. By adjusting the levels of the fpn graphic, you can control how much of the pattern you are taking out of the clip. Hopefully this can help fix some footage out there while everyone waits for a firmware update.
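The fingerprint-and-subtract steps above can be sketched with NumPy arrays standing in for the Nuke FrameBlend and flame subtract; the 150-frame count is from the post, but the frame size and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the camera: a static per-pixel offset (the fixed
# pattern) plus fresh random noise on every frame.
H, W = 64, 64
pattern = rng.normal(0.0, 10.0, size=(H, W))           # fixed pattern
frames = pattern + rng.normal(0.0, 4.0, (150, H, W))   # lens-cap frames

# "FrameBlend": average the frames. The random temporal noise cancels
# toward zero, leaving the camera's fingerprint.
fingerprint = frames.mean(axis=0)

# Subtract the fingerprint from a fresh clip from the same camera.
clip = 100.0 + pattern + rng.normal(0.0, 4.0, (H, W))
cleaned = clip - fingerprint

raw_err = np.abs(clip - 100.0).mean()
clean_err = np.abs(cleaned - 100.0).mean()
print(raw_err, clean_err)  # the cleaned frame sits much closer to the signal
```

Scaling the fingerprint before subtracting (the "adjusting the levels" step) controls how much of the pattern is removed.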
  10. No problem.. just want to help with explaining these issues of colorimetry and color correction, it's already confusing enough w/o adding to it. One thing about my test that might not be clear: I'm not just converting already-pushed footage, I'm converting a clean gradient first, and then pushing it, in both the 4k and 2k tests. The gradient source has only been converted to n-bit depth and chroma, with no heavy grade until after it was reformatted to 2k. Here are some 4k 8bit 4:2:0 tiffs that have not been "pushed" yet, if you want to test on them. The key is to make the gradient very dark, which will show the limited range of 8bit; then I did a 4:2:0 after that using ffmpeg's "-pix_fmt yuv420p" option and saved as high quality tiff, to prevent more compression from entering the test. I'm also not using studio swing range, so this is a little bit better than rec709. http://www.collectfolder.com/420test.zip Also, as a side note: RGB 8bit uncompressed grades really well. It dithers nicely and doesn't really show heavy banding, I was surprised. I think the real challenge is with the chroma subsampling.
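A rough sketch of building such a test gradient in code: quantize a dark ramp to 8bit, then simulate 4:2:0 by averaging a chroma plane over 2x2 blocks (the actual test used ffmpeg and tiffs; this only shows the idea, with made-up sizes):

```python
import numpy as np

# A very dark luma ramp: 0..16 out of 255, where 8-bit steps are coarse.
w = 256
ramp = np.linspace(0.0, 16.0 / 255.0, w)

# Quantize to 8-bit: the dark ramp collapses to only 17 distinct codes.
ramp8 = np.round(ramp * 255).astype(np.uint8)
levels = np.unique(ramp8).size
print(levels)  # 17

# Simulate 4:2:0 on a 2D chroma plane: average each 2x2 block and
# repeat it back up to full size.
chroma = np.tile(ramp8.astype(np.float64), (16, 1))
blocks = chroma.reshape(8, 2, w // 2, 2).mean(axis=(1, 3))
subsampled = np.repeat(np.repeat(blocks, 2, axis=0), 2, axis=1)

# Each 2x2 block is now flat: the chroma detail inside it is gone.
print(np.unique(subsampled[:, :2]).size)  # 1
```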
  11. So you will notice that nobody wanted to use your image in their tests, Ebrahim.. but what you posted is on the right track as to how to test this theory (although that level of banding is not necessary): you need to try to fix something with clear 8bit 4:2:0 artifacts and then look at it up close when grading at 2k. Andrew, I could not really see clear artifacts in your source 4k image, so to me, it just looks like a little anti-aliasing and a zoom out of 50% is what you are thinking is a bit depth conversion improvement. Keep in mind that nobody at Cineform, or even the person that wrote the conversion app (which is just adding 4 pixels), has claimed that the technique will give you the same latitude in post as footage recorded at 10bit depth natively; they have only confirmed the math, not what you can do with the results. This animation I did in Nuke mathematically/visually should suffice, but if you want to use the conversion app, it will have to be from someone on a mac. The second half of the video is 4k reformatted to 2k in 32bit float space. All the 8bit gradients look the same, but the gradients that started in 10bit or higher still grade vastly smoother. In other words, the 4k 8bit 4:2:0 or 4:2:2 gradients did not turn into 10bit gradients when reformatted to 2k, and did not prevent banding when graded.
  12. This is a myth.. you have to record at 10bit to get 10bits of light information from the scene, or latitude in post. It's another case of being fooled by math. Say you had 2 bits, which is 4 colors, at 100k.. you start to get the idea. You could never recreate all the details of the scene if the information has been truncated to 4 colors, no matter how many pixels you recorded and resampled in post. (You are correct arya44, even though you say you are new to this.) It is incorrect to think that there is an inverse relationship between bit depth and pixel count with respect to depth info from the scene. Since a camera is not just a pixel calculator, but is more importantly a light recording device, the ability to map 4 pixels to 1 and increase the bit depth per pixel is not that relevant if you still can't see the scene any clearer. It also will not be that much more grade-able, and this I just know from experience. You will get anti-aliasing, which is not always what you want, and the noise will look smaller because you just shrunk the picture, but artifacts will still come out within a couple of stops, unlike working in true 10bit log that has been converted into linear space. Gotta capture at 10bit to get that latitude in post.
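The point can be checked numerically: quantize a dark ramp to 8bit, average 2x2 blocks down to half resolution in float (as the conversion apps do), then push the levels. The reformatted 8bit ramp picks up a few in-between half-codes from the averaging, but nowhere near the distinct levels a native 10bit capture keeps. A sketch on a synthetic ramp, with made-up sizes:

```python
import numpy as np

h, w = 64, 512
# The "scene": a smooth dark ramp occupying the bottom 4% of the range.
scene = np.tile(np.linspace(0.0, 0.04, w), (h, 1))

# 8-bit capture truncates the ramp to 11 codes; 10-bit keeps 42.
cap8 = np.round(scene * 255) / 255
cap10 = np.round(scene * 1023) / 1023

# "4k to 2k": average each 2x2 block in float space.
down8 = cap8.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Push the grade 20x and count distinct output levels (= visible bands).
bands_8_downscaled = np.unique(np.round(down8 * 20 * 255)).size
bands_10_native = np.unique(np.round(cap10 * 20 * 255)).size

print(bands_8_downscaled, bands_10_native)  # 10bit keeps far more levels
```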
  13. These are the results of trying to isolate fixed pattern noise from a noisy signal and then subtract it from footage. Brad Bell was cool enough to record some 4k noise from the BMPC for me. This requires capturing noise from the camera you're trying to de-noise. The pattern seems to be pretty fixed over time, i.e. it was working with older footage from the same camera. Noise was shot with the lens cap on at 600 iso, 4k, 10bit, prores, 25fps, film gamma, and only 150 frames. Nuke was used to average all 150 4k frames into a single image. A subtract was done on the clips in the NLE using the produced 4k 10bit dpx file. The FPN image was de-saturated and levels were adjusted to knock out the white spots and leave the mids. This was done with an expand and clamp on the graphic. Dots seemed to line up perfectly, even after resizing and downloading re-compressed footage, so not ideal conditions. This technique should work on any FPN but the BMPC's is one of the clearest. Still plan to test on scenery and live action but for now, this is what I've been able to get. Hope it helps someone out in Black Magic firmware limbo.
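The "expand and clamp" levels operation described above, sketched with NumPy; the in/out points here are made-up values, not the ones used on the real BMPC fingerprint:

```python
import numpy as np

def expand_and_clamp(img, lo, hi):
    """Remap [lo, hi] to [0, 1] and clamp everything outside that range.

    Expanding around the mids stretches the pattern you want to keep,
    while values above `hi` all flatten to the same 1.0 ceiling, so the
    hot white spots stop standing out as distinct features.
    """
    out = (img - lo) / (hi - lo)
    return np.clip(out, 0.0, 1.0)

# A toy fingerprint: a mid-level pattern plus a few hot white spots.
fp = np.full((8, 8), 0.4)
fp[2, 3] = fp[5, 6] = 0.95  # hot pixels

out = expand_and_clamp(fp, 0.3, 0.6)
print(out[0, 0])  # mids remapped to ~0.33
print(out[2, 3])  # hot spots flattened against the 1.0 ceiling
```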
  14. I'll be brutally honest here, I started watching this critique and stopped when I realized it was "The Dark Knight". This is like having Werner Herzog talk about Transformers 3. If you want to see a movie that uses the long take to full effect, check out Abbas Kiarostami's "Certified Copy", or his newer "Like Someone in Love".
  15. I'd like to see her blending with dinosaurs!
  16. I think what they are saying with the "sensor RAW output" thing is that they have increased the dynamic range processed in the FPGA from the raw sensor data, which could give you more low light information in your final file, but still linearized within the same 8bit 4:2:0 or 4:2:2 color space. It would be awesome to get that raw data out of the camera somehow.
  17. Grading

     That's what a lot of people say: shoot in rec709 studio swing, i.e. legal ranges. But I think the problem is happening on the read-in and not actually in the file. Panasonic put it in there for a reason, to allow more range out of the 4:2:0 settings, so something like a "rec709 full-range" color space profile to choose from in your software might be a better solution in the future.
  18. I've read that there are several settings you need to consider that have to do with in-camera gamma levels, and you have to watch out for software that does not have a proper color space profile if you shot in full-range 255 white levels rec709, as this can cause essentially an expand to happen automatically on import. I guess that is in fact the default mode? 255 whites in rec709. Here is someone's recommended settings for low contrast: On the shots that seem to have too much contrast, you could try "clamping" between 16 and 235 and see if that helps; it could be the full range not being properly interpreted. Better yet would be to use the full range without clamping by creating a custom color space profile. This could prevent the super-white problem on import.
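The clamp-vs-reinterpret options above can be sketched using the standard rec709 studio-swing mapping (luma 16-235 for full-range 0-255): if full-range footage is wrongly expanded on import, whites get pushed past the rails, and clamping is the blunt workaround:

```python
import numpy as np

def full_to_legal(y):
    """Map full-range luma (0-255) into legal/studio swing (16-235)."""
    return 16.0 + y * 219.0 / 255.0

def legal_to_full(y):
    """The expand software applies when it assumes studio swing."""
    return (y - 16.0) * 255.0 / 219.0

def clamp_legal(y):
    """The blunt 'clamping' workaround: rail everything to 16-235."""
    return np.clip(y, 16.0, 235.0)

# Camera wrote full-range whites at 255; software assumes legal range
# and expands on import, pushing whites into super-white territory.
shot = np.array([0.0, 128.0, 255.0])
misread = legal_to_full(shot)
print(misread)            # 255 gets blown past the rails (~278)
print(clamp_legal(shot))  # the workaround keeps values in 16-235
```

A proper "rec709 full-range" profile would instead pass 0-255 straight through (or apply `full_to_legal` only when an actual legal-range delivery is required).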
  19. Animated color correction cheat sheet using plot scanlines. This source footage was shot on Arri Alexa and taken off the 2k 10bit 4:2:2 master. For their surveillance footage they used: Canon 7D, GoPro HD Hero, Sony PMW-EX1.
  20. On streaming revenues: http://thetrichordist.com/2013/06/24/my-song-got-played-on-pandora-1-million-times-and-all-i-got-was-16-89-less-than-what-i-make-from-a-single-t-shirt-sale/ and here, about the corporations that make money distributing music for free. http://thetrichordist.com/2012/06/05/artists-know-thy-enemy/ ".. today we find ourselves in a battle with an enemy few of us understand. If we were to believe the writings and ramblings of the tech blogosphere, than they would have us believe that our enemy is our fans. This is simply not true." Apple + Beats != $
  21. So you're using the BMPCC at 00:00:11, 00:00:14, 00:00:21 etc, with the back-lighting shots? Nice work!
  22. Hahah... you're driving ME crazy. First of all, criticizing digital projectors is fine, but a Xenon projector refers to the bulb, and it's the most common bulb used in movie theaters today. I guess you're thinking of the contrast on a consumer digital projector? I don't know. Look, I'm kinda tired, I don't want to go on a long technical smackdown, just believe me, I work in flame on a finishing station, as well as do other sundry TD things, and you don't finish with 8bit files in a professional environment and res up for a master.. not unless you have a client that doesn't care? I guess you are telling me that it's standard now, or for the last 2 years, to use 8bit as your digital intermediate format? Is this what I'm missing? The fact that you could swing this swapping of an 8bit proxy in place of missing Red Epic footage does not mean that it's now okay for everyone that works in post to start finishing with 8bit Prores. As for who will notice, on this topic of "high-end theatrical release films" etc, an account rep at some place like E-Film or Deluxe will get the word from a colorist or technician, if it's visible, assuming it gets past your own internal QC. Colorists are trained to look for quality issues; they have the best tools to see the final product. I'm not a director so I'm not familiar with deciding to slide in whatever footage I want at the last minute. What a luxury. Projectors can emit whites brighter than a computer monitor or HDTV, and at a wider gamut; it has required at least 10bit log gamma to encode the range for film, a pretty good baseline, which is more like 12bit linear (converted). 8bit in rec709 often looks like TV because it was designed for TV. Again, not saying 8bit 4:2:0 is useless, just responding to some of the wild claims.
  23. What is the point of 15Mbs 4k? Show of hands, how many people are sitting at a 4k screen right now?
  24. I kinda disagree with people like Douglas Trumbull when it comes to the future of cinema, I don't think bigger and faster is better. Maybe a new technology will come around that is more revolutionary than 3d in terms of full immersion, but as for 2d movie/video content, I like 2k at high quality (higher than I can get right now would be ideal), I don't want a screen 2x larger on my desk and I don't need to see a movie 2x wider and taller. I don't like IMAX for that reason, gives me a neck cramp. Sony lost over a billion dollars for fiscal 2014/2015 and I think they are going to be disappointed by 4k sales. We need faster DSL in America btw before we can start using this content anyway, our speeds are terrible.