Everything posted by KnightsFan

  1. If the signal is lower, the same noise will be more apparent. Or perhaps Nikon is doing some automatic corrections as suggested above.
  2. I don't know about holding back, but it's all about priorities, I think. Z Cam is a bleeding-edge company. They make 360 cameras and are actively working on integrating AI into their cameras. Z Cam has a smaller community, and many owners at this point expect bugs and workarounds--it's the price of using the bleeding edge. Blackmagic is focused on bringing cinematic imagery at a low cost: they care more about color science and integration into pro workflows than about high frame rates and next-gen tech. Their target audience is more likely to have learned on film than Panasonic's or Z Cam's. The P4K is also significantly cheaper than the other cameras; the fact that it even competes spec-wise is impressive. Panasonic is orders of magnitude larger than either company and needs to compete with the other giants (Canon, Sony, Nikon), both today and tomorrow. Their products need near-100% reliability and have to be easy for consumers to use. A single bug could kneecap initial reactions to a product, permanently damaging their reputation. Why spend the R&D money on 4K 120 if Sony isn't, when that money could go towards QA? Of those three companies, it seems to me that Z Cam has the most incentive to innovate with technology. It wouldn't surprise me one bit if Z Cam ends up with the best specs.
  3. I haven't seen any third-party tests, but my guess is that they are exaggerating by 2 stops, like most manufacturers. Or, to look at it another way, Arri understates by 2 stops. It's kind of like Canon claiming 15 for the C300, although Canon's 15 is measurably less than Arri's 14. Also, a lot of the footage I've seen of the WDR mode has bad motion artifacts, so it's probably unusable except for really static shots.
  4. Not officially. They are still waiting for licensing, but a few users have found a way to turn on ProRes via a sort of hack. Those independent sources do confirm the camera's abilities, though. It seems pretty unlikely to me that they are using another camera for their test footage. They are planning on implementing Raw in the E2 alongside ProRes and H.265. Granted, the P4K will have BRaw, which I predict will be better than Z Cam's Raw format.
  5. You can. I thought it was only in All-I, but if you can do Long GOP at 400 Mbps, then even better.
  6. ProRes 4K is 500 Mb/s, more than double the 200 Mb/s on the X-T3, which looks just as good to me. I may do some tests to see what the actual difference in accuracy is. The X-T3's 400 Mbps mode is All-I, which is significantly less efficient.
  7. Give H.265 the same or even half the data rate as ProRes and then compare. 10-bit 4:2:2 H.265 at a high bitrate looks REALLY good to me.
  8. I shot with an X-T3 and an NX1 on a recent project, and while the X-T3's footage was visibly better in 4K 24p when I pixel-peeped, I can't say it really made a difference for my project. Of course, having that quality in higher frame rates and a faster readout are real benefits, but even the NX1 hit diminishing returns in terms of color, compression, and dynamic range. At this point, the biggest upgrades I want are for workflow: timecode, ergonomics, false color, and such.
  9. I haven't used Adobe since CC 2015, so I have no idea how Resolve's encoder stacks up against theirs. I suspect their H.264 encoder just isn't great, and that they don't really focus on it since it isn't a "pro" codec like ProRes or DNxHR. To be honest, I don't know much about ProRes in general, but my impression is that there are fewer options, whereas H.264 is a massive standard with many parts that may or may not be implemented fully. You'd have to do your own tests, but I doubt that Resolve's ProRes encoder is as bad as their H.265 one. Actually, to be fair, their H.265 encoder isn't even their own product: you have to use the native encoder in your GPU if you have one, and if you don't, I don't think you can export H.265 at all.
  10. No converter ever seems to have the options I'm looking for. I first started using ffmpeg to create proxies. After shooting, I run a little Python script that scans input folders and creates tiny 500 kb/s H.264 proxies with metadata burned in. I tried other converters, but I had so many issues: not being able to preserve folder structure, not being able to control frame rate, not being able to burn in the metadata I want, etc. I've also had issues with reading metadata--sometimes VLC's media information seems off, but with ffprobe I can get a lot of details about a media file. I also use ffmpeg now to create H.265 files, since Resolve's encoders are not very good. I get SIGNIFICANTLY fewer artifacts if I export to DNxHR and then use ffmpeg to convert to H.265 than if I export from Resolve as H.265. And recently I did a job that asked for a specific codec for video and audio, a combination which wasn't available to export directly, so I exported video once, then audio, then used ffmpeg to losslessly mux the two streams. Another little project required me to convert video to GIF. It's become a real Swiss Army knife for me. Yes, Resolve has a batch converter: put all your files on a timeline, then go to the Deliver page. You'll want to select "Individual Clips" instead of "Single Clip." Then, in the File tab, you can choose to use the source filename.
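A sketch of that kind of proxy workflow (not the poster's actual script; the paths, bitrate, and scale here are assumptions) -- it builds the ffmpeg commands for tiny 500 kb/s proxies while mirroring the source folder structure, plus the lossless video/audio mux mentioned above:

```python
import subprocess
from pathlib import Path

def proxy_cmd(src: Path, src_root: Path, proxy_root: Path) -> list[str]:
    """Build an ffmpeg command for a tiny ~500 kb/s H.264 proxy,
    mirroring the source folder structure under proxy_root."""
    dst = proxy_root / src.relative_to(src_root).with_suffix(".mp4")
    dst.parent.mkdir(parents=True, exist_ok=True)
    return [
        "ffmpeg", "-y", "-i", str(src),
        "-c:v", "libx264", "-b:v", "500k",  # tiny proxy bitrate
        "-vf", "scale=-2:540",              # 540p; -2 keeps aspect ratio
        "-c:a", "aac", "-b:a", "96k",
        str(dst),
    ]

def mux_cmd(video: str, audio: str, out: str) -> list[str]:
    """Losslessly mux a video-only and an audio-only export
    into one file (-c copy re-encodes nothing)."""
    return ["ffmpeg", "-i", video, "-i", audio,
            "-map", "0:v:0", "-map", "1:a:0", "-c", "copy", out]

def make_proxies(src_root: Path, proxy_root: Path) -> None:
    """Scan an input folder tree and transcode every .MOV it finds."""
    for src in sorted(src_root.rglob("*.MOV")):
        subprocess.run(proxy_cmd(src, src_root, proxy_root), check=True)
```

Metadata burn-in would be added via ffmpeg's drawtext filter, which needs a build compiled with libfreetype, so it's left out of this sketch.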
  11. So the current situation is: (1) you have H.264 footage, but it is not linking properly in Premiere; (2) you can convert H.264 to ProRes with Adobe Media Encoder, but it bakes a black bar into the bottom; (3) you can convert H.264 to ProRes with Compressor, but there is a color shift; (4) if you could convert to a properly scaled ProRes file, you could get it to work properly in Premiere. Are all of those correct, and am I missing anything major? If not, one option is to use ffmpeg for conversion. It's my go-to program for any encoding or muxing task, and I've never had any issues with it encoding poorly or shifting colors. Is that an option?
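For reference, such a conversion is a one-liner (filenames here are hypothetical; `prores_ks` profile 3 is ProRes 422 HQ, and since no scaling filter is applied, the frame size passes through untouched -- no black bars):

```python
import subprocess

def prores_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command that transcodes to ProRes 422 HQ."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "3",  # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed PCM audio
        dst,
    ]

# To actually run the conversion:
# subprocess.run(prores_cmd("input.mp4", "output.mov"), check=True)
```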
  12. I don't think that's a fair assessment of Blackmagic. ProRes is, for whatever reason, an industry standard. Blackmagic includes it because that's what standard workflows require. If Blackmagic were responsible for making ProRes standard, then you could say they were just trying to market an inferior product. Moreover, as much as I believe that more efficient encoding is better, it is significantly easier on a processor to edit lightly compressed material. Editing 4K H.265 smoothly requires hardware that many people simply don't have yet, such as the computer lab at a university I recently used. ProRes was simply easier for me to work with. But for the most part, you are right. Processors seem to be a limiting factor for cameras at this point. Even "bad" codecs like low-bitrate H.264 can look good if rendered with settings that are simply unattainable for real-time encoding with current cameras. It's great to see Sony making bigger and better sensors, but with better processors and encoders, last-generation sensors could have better output.
  13. Have you looked at the file in other programs? Can you confirm whether it's exclusively a Premiere problem, or is the problem with the files themselves?
  14. Because downsampling decreases noise, thus giving more DR in the shadows (at the expense of resolution, of course).
  15. My point was that eventually DR and sensitivity will be good enough, and the convenience of global shutter vs. mechanical will take over. 14 stops vs. 17, 14 vs. 140, same thing if we don't have screens that can reproduce it anyway. Global shutters are already sought after for industrial uses, which means there is always going to be some innovation even if consumers aren't interested. When GS sensors get good enough to put in consumer devices and make satisfactorily good photos at a low cost, then we'll see them proliferate, and us video people will benefit as well. I don't think the DSLR video market is large enough to push towards global shutters on its own. I think global shutters are more likely to be pushed in the photography world. Just my prediction.
  16. If global shutter technology gets reasonably good, I imagine it will find its way into photo cameras in order to do away with mechanical shutters. If 10 years from now we have a choice between 20 stops of DR with 6 ms rolling shutter vs. 14 stops with global shutter, I'm sure a lot of people would choose the latter.
  17. Exactly. Sensor specs are the upper limit of what a camera can achieve. I'd bet that the first few generations of cameras using these new sensors won't come close to maxing their data rates. Still, it's nice to see tech advancements.
  18. Oh, ok, my mistake. I thought you were responding to the second paragraph you quoted, about 8k making up for 8 bit h264.
  19. Actually, you can. For DR, downscaling reduces noise because for each pixel in the downscaled image, you combine four signal values that are (almost always) highly correlated, and combine four noise values that are not correlated. Thus, your noise will be lower, and with a lower noise floor you have more dynamic range in the shadows. You can verify this with dynamic range testing software, or by a simple calculation: imagine 16 adjacent pixels that each have an ideal value of 127 out of 255 (e.g. you are filming a grey card). Add a random number between -10 and 10 to each one. Calculate your standard deviation. Now combine the pixels into groups of four, each of which has an ideal value of 508 out of 1020. Calculate the standard deviation again. The standard deviation of the downscaled image will be lower if the random number generator is, in fact, random and evenly distributed. (This works because in the real world, the signal of each pixel is almost always correlated with its neighbors. If you are filming a random noise pattern where adjacent pixels are not correlated, you should expect no gain in SNR.) As for color fidelity, a 4:2:0 subsampled image contains the same color information as a 4:4:4 image that is 25% of the size. Each group of 4 pixels, which had one chroma sample in the original image, becomes 1 pixel with 1 chroma sample.
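That thought experiment runs directly in a few lines; here it's scaled up from 16 pixels to 4096 so the statistics are stable. Uniform noise of ±10 has a standard deviation of 10/√3 ≈ 5.77, and averaging four uncorrelated samples should roughly halve it:

```python
import random
import statistics

random.seed(0)  # reproducible run

# A "grey card": every pixel ideally reads 127, plus uniform noise of +/-10.
N = 4096
pixels = [127 + random.uniform(-10, 10) for _ in range(N)]

# 4:1 downscale by averaging each group of four adjacent pixels.
downscaled = [sum(pixels[i:i + 4]) / 4 for i in range(0, N, 4)]

sd_full = statistics.pstdev(pixels)      # ~5.77, i.e. 10 / sqrt(3)
sd_down = statistics.pstdev(downscaled)  # ~half of sd_full
```

Since the "signal" (127) is identical across each group, only the uncorrelated noise averages down, which is exactly the correlated-signal / uncorrelated-noise argument above.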
  20. Yeah, I'm sure the 16-bit refers to the linear raw file. Applying any curve, whether a creative color grade in post or a simple gamma on a SOOC JPEG, could benefit from a higher bit depth ADC. An extra 2 bits at the digital quantization stage wouldn't even translate to larger files on a 10-bit video output, but could improve the dynamic range and such.
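One way to see why extra linear ADC bits matter once a curve is applied -- this is a toy model, not any camera's real pipeline: count how many distinct 10-bit output codes the deep shadows (bottom 1/64 of the linear range) produce after a simple power-law gamma. A deeper ADC yields more distinct shadow codes for the curve to work with, even though the output stays 10-bit:

```python
def shadow_codes(adc_bits: int, out_bits: int = 10,
                 gamma: float = 1 / 2.2, frac: float = 1 / 64) -> int:
    """Count distinct out_bits codes produced by the bottom `frac`
    of the linear ADC range after a power-law gamma curve."""
    max_in = 2 ** adc_bits - 1
    max_out = 2 ** out_bits - 1
    limit = int(max_in * frac)  # deep-shadow slice of the linear range
    return len({round((v / max_in) ** gamma * max_out)
                for v in range(limit + 1)})
```

With a 12-bit ADC, that shadow slice contains only 64 linear levels, so at most 64 output codes exist there no matter how the curve stretches them; a 16-bit ADC has 1024 levels in the same slice and fills in far more of the output codes.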
  21. It is a Sony sensor. @DBounce that's correct: two models, one for 30p and one for 60p.
  22. @sanveer yes, it's been talked about a few times on the Facebook group. It should do 4K60 at 10-bit with a native ISO of 250.
  23. I agree. I'm a vocal global shutter supporter, but Z Cam is a small company and might be better off making one really good camera. And as much as I want global shutter, sensor tech isn't there yet without major compromises. @DBounce yeah, it is 1".
  24. The GS variant will probably have even fewer takers. It has a 1" sensor, lower frame rates, and less dynamic range. Basically, the RS version will be better in every way except that one feature, so I doubt it will influence sales of the RS version much.