Everything posted by KnightsFan

  1. How are you judging highlight and shadow info from the histogram?
  2. So I did some tests and the results did not line up exactly with my prediction; however, I think there is a reasonable explanation. All were shot in normal DR using the Custom 1 profile. All clips are 1/50 shutter, f6.3, and ISO 800, except the last one. I chose 0.5 for the RGB boost because halving is really easy to evaluate.

     RGB1.0toLinearMult0.5: Shot with sliders set to 1.0. It was transformed from Rec.709 to linear space, and then multiplied by 0.5.

     RGB0.5toLinear: Shot at RGB 0.5 and then converted to linear. I expected this to be exactly 1/2 the value at each point. Obviously, this is not the case, most noticeably in the highlights. One possible explanation is that, like most cameras, the NX1 doesn't shoot a strictly Rec.709 image, but implements a soft highlight rolloff to make the image more pleasing. Unfortunately, I don't know of any way to eliminate this source of error.

     RGB0.5DoubleISOtoLinearMult0.5: This is the interesting result. I shot at ISO 1600 to compensate for the RGB drop to 0.5. I converted from Rec.709 to linear, and then multiplied by 0.5 the same way as the first shot. Notice how it is almost identical to the first shot, except that it clips sooner. This is what I'd expect from an ISO invariant camera: higher ISO simply reduces highlight headroom without providing any exposure benefit (except that digital gain is done pre-compression, which is a HUGE benefit for an 8 bit compressed image). As a side note, the reason we don't see the massive fluctuation in the highlights from test 2 is that the recorded image is essentially the same exposure (half RGB, double ISO).

     I think this is more evidence that the RGB boost occurs before the gamma curve, which is where the rolloff would be implemented. This correlates with my findings from many months ago, which were that an RGB boost closer to 2.0 provides a DR benefit, simply because the camera is tricked into using a lower ISO and thus keeps that headroom, at the expense of shadow detail. What I have not tested is how much the shadow quality differs, since that's pretty subjective and I don't know how to test it objectively. By eye, I don't see any shadow detail loss, but again, that's really subjective and anecdotal.

     It's worth mentioning that my tests from way back when were in Gamma DR. It may be possible to extend DR in other modes using an RGB <1.0, since they clip at a different point. But if I was right earlier about Gamma C clipping before 0.7, then you may as well just shoot in Gamma DR if you want to maximize DR.
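
     For anyone who wants to poke at this themselves, here's a rough sketch of the kind of comparison I'm describing (the file names are placeholders, and it assumes the footage is plain Rec.709, which the rolloff means it isn't exactly):

```python
# Compare two frames after converting Rec.709 to linear (approximation: the
# NX1's highlight rolloff means it is not strictly Rec.709). File names are
# placeholders for frames exported from the two clips.
import numpy as np
import imageio.v3 as iio

def rec709_to_linear(v):
    # Inverse of the Rec.709 OETF, applied per pixel on 0..1 values
    return np.where(v < 0.081, v / 4.5, ((v + 0.099) / 1.099) ** (1 / 0.45))

a = rec709_to_linear(iio.imread("rgb_1.0_frame.png") / 255.0)  # sliders at 1.0
b = rec709_to_linear(iio.imread("rgb_0.5_frame.png") / 255.0)  # sliders at 0.5

# If the RGB slider were a clean pre-gamma multiplier, this ratio would sit
# near 0.5 everywhere except clipped or near-black pixels.
mask = (a > 0.05) & (a < 0.9)
print(np.median(b[mask] / a[mask]))
```
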
  3. Are you doing photos or video? It seems to me that the RGB sliders essentially do the same thing as an ISO adjustment, but after the actual ISO adjustment clips something. I've seen a lot of claims that the NX1 doesn't use analog gain ("ISO invariant"). So ISO adjustments only adjust digital gain--essentially a multiplier, probably in linear space. If that is the case, then, assuming we avoid clipping, the RGB sliders have no effect other than tricking the camera into thinking it's at a different ISO, which could change the automatic noise reduction and sharpness. That was sort of what my conclusion was, way back last time I posted here. Basically, I can shoot at ISO 400 with the boost, and get an ISO 800 exposure without such aggressive noise reduction. IF my analysis is correct, then lowering your RGB to 0.75 will both lower your highlights and shadows, pre-gamma curve. So you gain some highlight DR, and lose some shadow detail--basically, the same thing as lowering your ISO. We can test this by measuring like you did in Premiere on their 0-255 scale, but instead we transform the image into linear space first. My prediction is that every pixel will drop in value by a factor of 0.75 at that point. Thus, the >255 values drop down into range, and values near 0 will have a lot of noise.
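
     To make the prediction concrete, here's the arithmetic for a single pixel (again assuming plain Rec.709 encoding, which the NX1 only approximates):

```python
# Worked example for one 8-bit code value. The 0.75 drop shows up in linear
# space; on the 0-255 scope it looks like a smaller drop.
v = 200 / 255                                   # encoded value read off the 0-255 scale
lin = ((v + 0.099) / 1.099) ** (1 / 0.45)       # Rec.709 to linear
lin_scaled = 0.75 * lin                         # what an RGB slider at 0.75 should do
v_scaled = 1.099 * lin_scaled ** 0.45 - 0.099   # back to encoded, to compare on the scope
print(round(v * 255), "->", round(v_scaled * 255))   # roughly 200 -> 173, not 150
```
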
  4. Well, I went to grab some pics to show what I mean, and I may have caused more problems for us. Turns out the clipping happens in Gamma DR but not Gamma C. These were done with RGB set to 0.7 ish. In Gamma C white still peaks, while in DR it has a hard clip. But maybe we can use this info to estimate the processing order for the picture. My initial guess is that this shows that the RGB sliders multiply the R,G,B channels on a linearly encoded image before the gamma curve is applied. (That would make sense, because linear multiplication on a gamma-encoded image is a recipe for disaster!) Then, after the RGB multiplier, the gamma curve is applied. Gamma DR retains all/most dynamic range from the linear image, and thus you see the clipping, whereas Gamma C clips the highlights somewhere before the 0.7 value, so we don't see that hard clip on the scope. More testing required, I guess! As per my post above, which Gamma setting are you on? In Gamma DR, I see sub-white clipping almost immediately. Zebras disappear entirely by 0.87. You can still see some noise above the hard clip on the waveform, but I think that's due to HEVC compression estimations and does not represent any meaningful highlight information.
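
     Here's a tiny sketch of why the processing order would show up this way, using a plain Rec.709 curve and a 0.7 multiplier as stand-ins for whatever the NX1 actually does:

```python
# If the RGB multiply happens BEFORE the gamma curve, pure white can never
# reach 1.0, which would appear as a hard sub-white clip on the waveform.
def oetf(lin):
    # Rec.709 OETF (a stand-in; the NX1's real curve will differ)
    return 4.5 * lin if lin < 0.018 else 1.099 * lin ** 0.45 - 0.099

print(oetf(0.7 * 1.0))   # multiply before the curve: white tops out around 0.84
print(0.7 * oetf(1.0))   # multiply after the curve: white lands at exactly 0.7
```
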
  5. In my experience, reducing RGB below 1.00 makes it so that you can't have anything actually be pure white. None of your test shots have pure white to begin with, so you won't see those negative side effects. Stick a lamp in the background behind the color checker and you should see the 0.75 getting clipped on the scopes. One thing I do notice is the white balance shifting, which is an unfortunate feature of the NX1: color is not uniform across its dynamic range.
  6. No problem, always glad to see some nice DIY stuff. I use various TalentCell batteries which have 9v and 12v outputs. Though they are unregulated, so I don't know how safe they would be to use with a dummy battery in a camera.
  7. Voltage is probably the issue. I believe the official power supply is here https://www.bhphotovideo.com/c/product/820410-REG/Nikon_27055_EH_5B_AC_Adapter_for.html and it outputs 9V at 4.5A. You could get a 5V to 9V step-up voltage converter. Your power bank outputs 5.8A, so with 100% efficiency in a step-up converter you're looking at 9V at about 3.2A, which is less current than the Nikon power supply gives. It might be enough; I don't know how much power the BMPCC draws (a 5V 2.4A USB port was sufficient for my GH3 back in the day, so it'll probably work... but I don't know for sure). The other problem is that I doubt any single outlet on that power bank can actually supply 5.8A by itself; that's probably the total if you plug three different things in simultaneously. You'd need to know the maximum current any single USB port on that power bank can supply, and how much power the BMPCC actually draws.
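
     The back-of-envelope math, for what it's worth:

```python
# Rough power budget. Assumes a lossless step-up converter and that the bank
# can actually deliver its full rated current from one output, both optimistic.
watts_from_bank = 5.0 * 5.8          # 5 V at the rated 5.8 A = 29 W
amps_at_9v = watts_from_bank / 9.0   # ~3.2 A available at 9 V
print(amps_at_9v)                    # vs. the 4.5 A the official 9 V supply provides
```
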
  8. @thebrothersthre3 What voltage are you supplying? Are the screws tightly clamped onto the actual wire itself (it almost looks like they're screwed onto the insulation instead of the metal, but it's hard to see in the pic)? Do you have any tools to test conductivity?
  9. I always liked the LS300, but the price vs. recording options is very hard to justify for my needs, much more so now that the XT3 exists. If they made a new version with 10 bit and HEVC, I'd be very interested. Imagine the VSM capabilities if they used a 26MP sensor.
  10. That depends on your software and hardware. Resolve's performance with XT3 footage is significantly better than Premiere's performance with GH5 footage on my computer. That is my opinion as well, though to be fair my comparison comes from personally using an XT3 vs. editing projects shot by other people on a GH5.
  11. I am a huge fan of the XT3. You mentioned the lack of IBIS is a downside. Could you afford a used gimbal on top of the XT3? Or would warp stabilizer work well enough for your purposes? It would be sketchy for walking shots (though to be fair, so would IBIS), but warp stabilizer can do wonders for a relatively steady handheld shot.
  12. That's fascinating; I'll have to try it out to see whether I can replicate his results. I'm not really sure yet how mismatched chroma resolution scaling can produce the luma artifacts that he points to as evidence: "If you take a look around high contrast edges in a 4:2:0 encoded image you will see noticeable chroma artefacts, often appearing as a lighter or darker halo around the edge of objects." (emphasis added). Anyway, it seems like a potentially easier option is to edit in 4K (with proxies if needed), then do the YUV downscale after exporting.
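
     The basic shape of the downscale trick, as I understand it (just an array-shape sketch with placeholder data, not his actual method):

```python
# In UHD 4:2:0, the chroma planes are already 1920x1080, so a 2x downscale of
# the luma lines everything up as effectively 4:4:4 at 1080p.
import numpy as np

y_uhd = np.zeros((2160, 3840))   # full-resolution luma
u_uhd = np.zeros((1080, 1920))   # chroma planes are half resolution in each axis
v_uhd = np.zeros((1080, 1920))

# Naive 2x box-filter downscale of the luma, just to show the shapes line up
y_hd = y_uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(y_hd.shape, u_uhd.shape, v_uhd.shape)   # all (1080, 1920)
```
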
  13. Unfortunately, I don't believe Resolve has V-Log L, only V-Log. I'm not sure how different it is, to be honest; I've never worked with it myself.

     Generally, using a color space transformation (CST) is the best and most accurate way to translate between color spaces. However, if you want to keep your old workflow, you can always use LUTs in Resolve the same way as in Premiere. Whether you use a LUT or a CST, you can apply it in different places. In the color tab, you can right click on a clip and set it there, or you can set it inside a node. I prefer to do it in a node, because then it is much clearer what the order of operations is. I can never keep the order of operations straight, but with nodes there's a handy graph with arrows that makes it all so simple!

     With Resolve, you don't have adjustment layers, but you can add clips to Groups. Right click a clip (or clips) in the color tab and Add into a New Group. Now you have a separate node graph for group pre-clip and group post-clip, which, as the names imply, are applied before or after the clip's own graph. So you could add all your V-Log shots into a group, add the CST/LUT or whatever you want in the group pre-clip section, and then all those clips will start from that baseline. (You also have a timeline node graph, which applies to everything in the timeline after all the other graphs are applied.)

     I've read that it's best to do white balance adjustments in linear space, so my node graph usually goes:
     1. CST plugin to linear gamma (V-Log to Linear, in this case)
     2. White balance
     3. CST plugin from linear to Rec.709 color and gamma
     4. Whatever else
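
     And a tiny illustration of why the white balance step sits between the two CSTs; the gamma 2.4 curve and the 1.2 red gain here are made-up stand-ins, not V-Log math:

```python
# A white balance gain on linear light is not the same as the same gain on
# gamma- or log-encoded values, which is why steps 1-3 above are ordered that way.
encoded_red = 0.5
red_gain = 1.2

in_linear = (red_gain * encoded_red ** 2.4) ** (1 / 2.4)  # decode, gain, re-encode
in_encoded = red_gain * encoded_red                       # gain applied to the encoded value

print(in_linear, in_encoded)   # ~0.54 vs. 0.60: noticeably different results
```
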
  14. @mirekti Apparently production was on break because of Chinese New Year, and they sold out of existing stock. The Z cam people say it will be back in stock soon. The E2 has been in and out of stock a number of times at B&H.
  15. Wait, does Resolve support F-Log as an input space now?
  16. For adapting! And JVC had that neat feature where you put a M43 lens on and it just uses a M43 crop of the sensor, or you could put a S35 lens on and use the whole thing. Very flexible. To be honest, though, I'd go for any mount that you can get an EF lens onto. The only real downside to higher MP is more rolling shutter, if the processor isn't upgraded as well. High MP with downscaling is great for low light. I'm sure all near-future Z Cam products will have 4k60 ProRes in addition to H.265. I don't think they'll have 4k120 though; they seemed to imply that was a unique feature of the E2. It will be interesting to see if Canon makes a decent cinema camera. Their latest photo cameras have been disappointing for video, to say the least. It would be quite forgivable in my eyes if they came out with a good low budget RF video camera as a complement to the R and RP.
  17. Hopefully, yeah. I don't see why low light would necessarily be worse, if they do a full readout and have a fair comparison (4k vs 4k). I find the XT3 has good enough low light for me, so as long as it isn't much worse than that I'd be happy. I wonder what mount they would go with for S35. I'd love to see MFT a la that JVC camera, or if they surprised everyone and threw in with the L mount lot. 3k would be nice, but my gut says more like 4k. Fingers crossed though!
  18. My prediction is 8k S35 in a slightly larger body than the E2.
  19. While we are talking about audio, it would be cool to have a video recorder that can function as a usb audio interface, thereby recording sync sound from an external mixer without introducing any loss or levels issues.
  20. Yeah, exactly. It will take a few years, but we'll get there, and Atomos doesn't want to be caught without a business model when that time comes. Only a few years ago, we used Atomos devices just to bypass 24 Mbps IPB compression. Cameras didn't output 10 bit even through HDMI. I think this time next year, and certainly by the year after that, we'll have a number of decent options that shoot ProRes and/or RAW, like we already have with the P4K and the Z Cam E2. Just recently we got our first 10 bit full frame hybrid. Now, a couple of cameras have even abandoned SD in favor of faster and more reliable storage options, eliminating yet another barrier to ProRes and RAW. All of this points to the external recorder business slowing--not dying completely, but slowing considerably over the next couple of years.
  21. I wonder if Atomos is seeing the end of mainstream external recorders in the coming years. For a while, Atomos had a niche market as the best option for people on a budget to record higher quality footage. Back around 2014, I had discussions about external recorders, and my position was that they were a bad long term investment, since I thought it was only a matter of time before internal recording was higher quality. Now we've got several photo/video hybrids and budget video cameras that shoot 10 bit, and even a couple that do 422, essentially removing the benefit of an external recorder as far as quality goes. I know there are still benefits to external recorders, and there will be for the near future, but as internal codecs and formats continue to improve, I predict that fewer people will buy external recorders. I think it's smart for Atomos to both pursue higher quality with ProRes RAW and to expand into dedicated monitors for the market that is satisfied with improved internal recording. As for this product, it has some cool features. I'm sure it's a fantastic screen, and the Analysis tool looks amazing. I would prefer more physical buttons and less reliance on the touchscreen, though.
  22. I've done a number of codec tests, some of which I posted on EOSHD. With H.264, you can easily find a situation where an IPB encoding performs as well as or better than All-I at 5x the bitrate. Static shots, for example. As you move the camera more and more, the IPB advantage goes away.

     I do not buy this. In my tests, I have found that increasing bit depth can increase quality without increasing the bitrate. The more information the encoder has to work with, the more efficiently it can decide what to keep, what to throw out, and what to fudge. The result: you actually need less data to get a better image when using 10 bit. This is true whether the source is 8 or 10 bit. But to answer your real question: I think that, controlling for all other encoder settings, your knee jerk reaction is correct--but in your examples the difference of All-I vs. IPB isn't being controlled, and it will be the biggest factor depending on the amount of motion in the scene.

     Yeah, I get the impression that the publicity for the 10 bit 422 paid update made some people miss the fact that even without the update, the S1 shoots 10 bit 420 internally, out of the box.
  23. Bane voice

     I heard way back when the movie came out that Bane's voice was 100% dubbed. However, I would definitely record audio on set, even if you plan to dub it all. First of all, you can give your actor nice recordings to listen to when dubbing. Second, you will probably want to process the dubbed audio to actually sound like it came from the real space, and the original recordings will be an excellent reference tool. Third, in the event that your actor is not available to dub and you can't find someone else suitable, you have a backup plan. I might not go all out and be super picky about the sound. Like, don't redo the perfect take because a car two miles away backfired during a line, but put some minimal effort into getting usable audio.