Everything posted by KnightsFan

  1. I use Gamma DR and set the RGB sliders to 1.88, 1.85, and 1.95. I pretty much always leave them there; that seems to get me the results I want. It's important to either white balance to a known setting (such as Daylight) or to a grey card exposed exactly in the middle, otherwise the white balance is very bad. I tend to underexpose just a little, as I find that the color response is better on the low end of the range than on the high end.
  2. Looks great! I used Resolve for a couple of simple effects on a recent video. I tried using the Fusion tab, but the performance was terrible, even compared to Fusion standalone. So I ended up just using OFX plugins on the timeline and was pretty happy with it.
  3. True, that is a bit semantic. I tried to address the meatier argument in my initial post. I don't see how the ability to match in post implies that color science is BS.
  4. Engineering is just applied science. To use my tomatoes example, you can genetically engineer bigger tomatoes by applying the scientific theory behind it.
  5. I'm still unclear on how adjusting gain can change the FWC. Doesn't gain happen to the signal AFTER the well?
  6. To anyone who says "color science is BS": I'm curious what your definition of color science is. From the CFA, to the amplifier, to the ADC, to the gamma curve and the mathematical algorithm behind it, to the digital denoising and sharpening, to the codec--someone has to design each of those with the end goal of making colors appear on a screen. Some of those components could be the same between cameras or manufacturers. Some are not. Some could be different and produce the same colors. Even if Canon and Nikon RAW files were bit-for-bit identical, that wouldn't negate the fact that science and engineering went into designing exactly how those components work together to produce colors. As it turns out, there usually are differences. The very fact that you have to put effort into matching them shows that they weren't identical to begin with.

     And if color science is negated by being able to "match in post" with color correction, how about this: you can draw a movie in Microsoft Paint, pixel by pixel. There is no technical reason why you can't draw The Avengers by yourself, pixel for pixel, and come up with the exact same final product that was shot on an Arri Alexa. You can even draw it without compression artifacts! Compression is BS! Did you also know that if you give a million monkeys typewriters, they will eventually make Shakespeare? He wasn't a genius at all! The fact that it's technically possible to match in post does not imply equality, whether it's a two-minute adjustment or a lifetime of pixel art.

     Color science is the process of using objective tools to create colors, usually with the goal of making the colors subjectively "good." If you do color correction in post, then you are using the software's color science in tandem with the camera's. Of course, saying one camera's color science produces better results is a subjective claim... but subjectivity in evaluating results doesn't contradict science at all. If I subjectively want my image to be black and white, I can use a monochrome camera that objectively has no CFA, or apply a desaturation filter that objectively reduces saturation (see the sketch below). If you subjectively want an image to look different, you objectively modify components to achieve that goal. The same applies to other scientific topics: if I subjectively want larger tomatoes, I can objectively use my knowledge of genetics to breed larger tomatoes.
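     Since I brought up desaturation as an example of an objective tool: here's a minimal sketch of what such a filter does under the hood, assuming Rec. 709 luma weights and a float RGB image (the function name and the choice of weights are mine, not from any particular camera or NLE):

         import numpy as np

         # Rec. 709 luma weights -- the "objective" knob inside a desaturation filter.
         REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

         def desaturate(rgb, amount=1.0):
             """Blend an (H, W, 3) RGB image toward its luma; amount=1.0 is full B&W."""
             luma = rgb @ REC709_LUMA                            # per-pixel luminance
             return rgb * (1.0 - amount) + luma[..., None] * amount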
  7. According to Z Cam, regarding this particular test, the rolling shutter is worse in WDR mode and should be on par with the GH5S in normal mode.
  8. Yeah, people have complained about that on the Facebook group. Apparently Z Cam is putting a no-sharpening mode into a firmware update, as well as an option for less noise reduction. I am not sure whether the firmware update has been released yet, to be honest.
  9. The real problem with IoT devices is internet security. This happened while I was in an internet protocols class and made for some good discussion: https://www.techtimes.com/articles/183339/20161024/massive-dyn-ddos-attack-experts-blame-smart-fridges-dvrs-and-other-iot-devices-why-your-internet-went-down.htm How good is the security on your coffee machine? Making coffee one day, being leveraged to attack Twitter tomorrow, and providing a backdoor into your home network the day after. It's worth being wary of cheap, internet-enabled devices with little to no security.
  10. I don't think you can use a CPU to increase the saturation point of a photosite. We're talking about the full well capacity of the photosites. Not the dynamic range of the image, or how the image is processed, or whether there is a computer-controlled variable ND filter in front of the photosite. None of those affect the FWC (back-of-envelope below).
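      A rough sketch of why gain applied after the well leaves the FWC (and, in this idealized model, the photosite's DR) alone -- all numbers here are hypothetical:

          import math

          fwc_e = 30000.0      # hypothetical full well capacity, in electrons
          read_noise_e = 3.0   # hypothetical noise floor, in electrons

          # Ratio of saturation signal to noise floor, expressed in stops.
          dr_stops = math.log2(fwc_e / read_noise_e)       # ~13.3 stops

          # Gain applied downstream of the well scales signal and noise together
          # (idealized: all noise sources sit upstream of the gain stage), so
          # neither the saturation point nor the ratio changes.
          gain = 4.0
          same_dr = math.log2((gain * fwc_e) / (gain * read_noise_e))  # still ~13.3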
  11. @webrunner5 using the CPU to do HDR doesn't explain how they achieve a higher full well capacity with such a small pixel.
  12. Someone on Sony Rumors speculated that the pixel actually fills several times to make one exposure. Sort of like combining several short exposures into one longer one, but before the ADC and perhaps at the pixel level. I'm not sure if that's how it's actually achieved, but the arithmetic of the idea is sketched below.
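      A toy model of that speculation -- the FWC and signal numbers are invented:

          # One long exposure clips at the well; k short reads summed do not.
          fwc = 10000        # hypothetical full well capacity (electrons)
          signal = 35000     # electrons the scene would deliver over the exposure

          single_read = min(signal, fwc)                            # clips at 10000
          k = 4
          multi_read = sum(min(signal / k, fwc) for _ in range(k))  # 4 x 8750 = 35000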
  13. One thing no one talks about is audio for UHD. I have a 4k TV and a 4k graphics card, so I should be able to watch 4k content, right? Nope. My receiver can only take 1080p HDMI, and that is the only way to get 5.1 audio. So to get 4k, I have to do stereo sound via the aux input. I have no desire to buy a whole new receiver just to get 4k, let alone 8k, and 5.1 is more important than high resolution to me. It's frustratingly ironic that wanting decent audio is the reason I can't watch 4k right now and have absolutely no interest in 8k content.
  14. True, what I meant was that all it does is tell you to purchase. It doesn't stop working or cripple any features. It's such a refreshingly non-invasive install that I gladly paid, despite it being the easiest thing in the world to steal. No background processes or separate licensing/update software automatically draining resources every time you boot (hello, Apple and Adobe).
  15. I use Reaper as a DAW. It has a free trial that never actually expires, and the full version is like $50. I've been using it for maybe 4 years and absolutely love it. I mainly use it for mixing for films, but have used it occasionally for recording simple stuff (which I do in a closet with heavy sleeping bags hanging about for isolation, if you are looking for budget ideas). I use my Zoom F4 as an audio interface, and monitor with the classic MDR-7506 headphones.
  16. @GiM_6x transcoding 4K NX1 footage to H.264 is many times faster than real time using hardware decoding and encoding on a GTX 1080, running ffmpeg (example below). The 1080 is a high-end card for sure, but it's last-generation consumer hardware. I think the problem is that editing software doesn't use hardware de/encoders to their fullest. Even Resolve, which uses the GPU a lot, does most video decoding on the CPU, and then processes effects on the GPU. That's why my 6-year-old i7 struggles with real-time editing. If Resolve used my GPU decoder, it would be buttery smooth.
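      Something like this is what I mean -- file names and bitrate are placeholders, but the decoder/encoder flags are standard ffmpeg options:

          import subprocess

          # Decode HEVC on the GPU (NVDEC) and re-encode to H.264 on the GPU (NVENC),
          # so the CPU barely touches the pixels.
          subprocess.run([
              "ffmpeg",
              "-c:v", "hevc_cuvid",     # NVDEC hardware HEVC decoder
              "-i", "nx1_clip.mp4",     # placeholder input
              "-c:v", "h264_nvenc",     # NVENC hardware H.264 encoder
              "-b:v", "80M",            # arbitrary example bitrate
              "-c:a", "copy",           # pass the audio through untouched
              "out.mp4",
          ], check=True)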
  17. Sounds like a video vs data levels issue? You can specify whether imported clips in Resolve use data or video levels, and you can also specify which your export uses. You can also specify which to use in VLC. Neither one is universally correct; just check how it looks on whatever platform you distribute on. (The math of the mismatch is sketched below.)
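      For reference, the 8-bit convention: "video" levels put black at code 16 and white at 235, while "data" (full-range) levels use the whole 0-255 range. A sketch of the expansion from video to full range, assuming 8-bit values (the helper name is mine):

          import numpy as np

          def video_to_full(y):
              """Expand 8-bit video levels (16-235) to full range (0-255)."""
              y = (y.astype(np.float64) - 16.0) * 255.0 / 219.0
              return np.clip(np.round(y), 0, 255).astype(np.uint8)

      Interpret a clip with the wrong assumption and you get the classic symptoms: washed-out, milky blacks in one direction, crushed shadows and clipped highlights in the other.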
  18. @GiM_6x The X-T3 can encode 4k60 at 100, 200, or 400 Mbps in HEVC 10 bit. The E2 encodes real-time 4k120 10 bit HEVC at 230 Mbps. That technology is already here for the prosumer market--whether Sony uses it is up to them. As for 8k, I don't know whether that XEVC list is actually for the A7S3, or just the family of formats that Sony's entire lineup will pull from. I think that real-time 4k HEVC editing is not far off, and I think that part of the problem is just that our favorite software doesn't leverage existing hardware decoders yet, because as a "non-pro" codec there wasn't much incentive for developers. Resolve Lite doesn't officially support HEVC, and even Resolve Studio only added HEVC export very recently, and it's still slow and not very good. Meanwhile, I can use ffmpeg to generate proxies at significantly faster than 24 fps on the same hardware (proxy command below).
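      The proxy generation looks something like this -- paths and bitrate are placeholders:

          import subprocess

          # Half-res H.264 proxies from HEVC masters; GPU decode keeps the CPU free.
          subprocess.run([
              "ffmpeg",
              "-hwaccel", "cuda",        # hardware-accelerated decode
              "-i", "master_hevc.mov",   # placeholder input
              "-vf", "scale=1920:-2",    # downscale, preserve aspect, even height
              "-c:v", "h264_nvenc",      # NVENC hardware H.264 encoder
              "-b:v", "20M",             # arbitrary proxy bitrate
              "-c:a", "aac",
              "proxy.mp4",
          ], check=True)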
  19. It's a good sign that Sony is making a good lineup of HEVC codecs with 4:2:2 and 10 bit being favored. That 2k lossless version looks very interesting--it's about time someone leveraged HEVC to make a lossless acquisition format. I am still skeptical about which of these, if any, will be in the A7S3.
  20. Nice find! Interestingly, I did look at the article NoFilmSchool links to in my original search, but that chart is nowhere to be seen. That is actually the article from 2014 that mentions that they changed their testing methods. NoFilmSchool quoting C5D: "Here we tested usable dynamic range of the given cameras. With 14.1 stops the usable dynamic range of the A7S comes surprisingly close to the Arri Amira with its legendary Alexa sensor." C5D (probably edited): "We tested usable dynamic range of the given cameras. With 12 stops the usable dynamic range of the A7S comes surprisingly close to the Arri Amira (13.1 stops) with its legendary Alexa sensor."
  21. I feel your pain. Between Premiere instability and Flash vulnerabilities, I have a strong distrust of Adobe myself. I use Resolve for editing these days. My biggest project is ~5,000 files / 500 GB, and Resolve is handling it just fine. Each file is actually imported twice, in full res and as a proxy, so there are closer to 10,000 imported files. Not sure how that scales against your needs...
  22. The article date is in the leftmost column. I think that they often use old test numbers (e.g. on the Fuji page, they include existing numbers from Sony for comparison, rather than re-test the Sony). I didn't find any Sony cameras where they claimed 14 stops, but I would not be surprised if they did in some very old articles. In this article (https://***URL not allowed***/lab-review-sony-a5100-video-dynamic-range-power/) from 2014, they mention a lot of changes to their testing in various updates. The a5100 was "updated" from 13 to 10.5 stops. If you find any, I'll add them to the list anyway. At the bottom of this 2014 article (https://***URL not allowed***/dynamic-range-sony-a7s-vs-arri-amira-canon-c300-5d-mark-iii-1dc-panasonic-gh4/) it says: "Note: We have at one point in 2014 updated our dynamic range evaluation scale to better represent usable dynamic range among all tested cameras. This does not affect the relation of usable dynamic range between cameras." So I would be cautious comparing numbers from pre-2014 with modern numbers.
  23. Though, discussing C5D's past accuracy is slightly off topic. The real question is, if this test is bogus, what is the actual dynamic range of the Ursa 4.6k?
  24. C5D DR Tests.xlsx Feel free to check. I haven't added the 4.6k article from this topic yet.
  25. @sanveer Last time we talked about this, I actually did that. I looked up every single C5D article that mentioned a dynamic range and compiled them into a spreadsheet. Turns out the only discrepancy was with the A7s2. I contacted C5D about that, and was told which articles had used a 4k to 2k downscale. The 4k downscale increases DR on the A7s2 from about 10.6 to 12 stops, which is roughly what the noise averaging predicts (see below). After that was cleared up, I couldn't find any other significant problems. Now, it appears that they always list the resolution and whether any downscaling was done.
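      Back-of-envelope on why a 4k-to-2k downscale buys roughly a stop, assuming the downscale is approximately a 2x2 average of pixels with uncorrelated noise:

          import math

          n = 4                                 # pixels averaged per output pixel
          noise_drop = math.sqrt(n)             # random noise falls by sqrt(N)
          stops_gained = math.log2(noise_drop)  # = 1.0 stop at the noise floor
          print(10.6 + stops_gained)            # ~11.6, near the 12 stops C5D reports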