Everything posted by no_connection

  1. Why does pretty much every sample video I have seen (if not all of them) crush the blacks? Even the ones from people I respect most, who I thought would know better, all had crushed blacks.
  2. To be fair, I got excited about the FF Canon ever since I played with the M50 in a store; something about it seemed nice when used with the EVF. If only Canon didn't refuse to fix all the issues people have been complaining about since the a7s. @Django Sure, most Hollywood productions will choose this camera for its amazing AF ability in crop mode on a tripod. If you remove the AF, I don't see why you would choose this over anything else out there. From a video perspective I think its only strength is (if it can pull it off) nailing perfect AF in handheld shots, but that needs lenses that can deal with the 4K crop. And they would probably need to be stabilized too.
  3. If that is what you use it for, then I really don't think it adds anything whatsoever compared to previous offerings. Granted, AF is the only thing that would save it, but no Eye AF and a limited choice of lenses that work with the crop kinda limits the success. I still say 10-bit is wasted on it if all you do is static shots with a heavy crop factor.
  4. With that bad rolling shutter it's pretty much dead on arrival; what's the point of 10-bit if you can't use it? "– Still a huge ass crop in 4K recording. Looks the same as 5D IV. Heavy rolling shutter as well. (info via Jordan Drake)" And no sign of any speed booster to help with the crop factor either (at least not an electronic one); at best maybe Metabones could hack something unofficial. No 3rd party RF lenses either. "– Canon will not be opening up the RF mount specs to third parties (info via Jordan Drake)"
  5. It should be no surprise that ISO 1600 is noisier than ISO 200. And by exposing two stops over and pushing it back, the noise is kinda the same. So try exposing like you normally would with a normal profile at ISO 100, then flip to S-Log3 and the native ISO 1600, check for clipping, and done.
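The ISO gaps being discussed are just powers of two; a tiny sketch of the stop math (the function name is mine):

```python
import math

def stops_between(iso_a, iso_b):
    """Number of exposure stops between two ISO settings."""
    return math.log2(iso_b / iso_a)

# ISO 200 -> ISO 1600 is a 3-stop sensitivity jump;
# metering at ISO 100, then switching to S-Log3's native
# ISO 1600, is a 4-stop jump to account for.
print(stops_between(200, 1600))  # 3.0
print(stops_between(100, 1600))  # 4.0
```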
  6. I jumped in when someone was accused of being wrong when their statement was correct, but I guess you'd rather have people be wrong than argue about what's right? If you intentionally misrepresent what I wrote as an example, and intentionally use it to discredit me, then I guess I really don't have any more to say.
  7. Wrong. But you could happily change the colors to fit inside if you want, which is why people get paid to correct and manage color. The same could be said about the opposite: no need for log with enough bits. That happens simply by viewing REC709 content on a BT2020 monitor without transforming it. And if you convert SD to HD, I'm pretty sure that is someone's workflow. But then you missed the point.
  8. That is not a color space, nonlinear or not. A color space defines where the primary components are: how "saturated" a pure color can be, if that makes sense. It's usually a triangle, since three components are usually enough. No matter how you dress it up, it makes an unbreakable wall that you can't move outside of. Sure, you could stretch REC709 to BT2020 on a monitor, but the colors won't be accurate and the result will be oversaturated and too "rich". Shove any normal content onto a quantum dot display and you will see what I mean. What you are talking about is giving different parts of the luminosity range different amounts of accuracy in where colors can be addressed. That is always used, since vision is nonlinear and it does not make sense to spend accuracy where it's not needed. Now, the use of log does create another problem: codecs are built around our vision and throw away details we don't see or find important, but since the data has been pushed around so much, the codec now throws away data that we would see and keeps data we would miss once it's transformed back. It might not be as big of a thing today, but it's something to be aware of. There is always a trade-off, and you have to choose whether to take it depending on the need.
  9. No, it's correct. If you don't transform the data when you go from one color space to another, you are changing the colors. If you transform from a larger color space into a smaller one and keep the colors, you will clip the colors outside it. Your analogy to 16-235 is 100% wrong, as there is no way to address colors outside the color space: past 235 the red channel only gets brighter, not redder, for example. V-Log has its own color space that happens to fit most other color spaces inside it. If you convert that to REC709, it's going to clip unless you change the colors. One way to change them to fit would be to look at the footage on a normal monitor: the colors would look very flat and desaturated, but they would not clip.
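To make the clipping concrete, here is a minimal sketch using the Rec.709 to BT.2020 RGB conversion matrix from ITU-R BT.2087 (linear light assumed; the little inverse helper is only for the demo):

```python
# Rec.709 -> BT.2020 linear RGB conversion matrix (ITU-R BT.2087).
M = [[0.6274, 0.3293, 0.0433],
     [0.0691, 0.9195, 0.0114],
     [0.0164, 0.0880, 0.8956]]

def matvec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate (demo helper)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

# Pure Rec.709 red, correctly transformed, sits well inside BT.2020:
red_709_in_2020 = matvec(M, [1.0, 0.0, 0.0])
print(red_709_in_2020)  # roughly [0.63, 0.07, 0.02]

# Going the other way, pure BT.2020 red has no in-gamut Rec.709
# representation: the inverse transform produces negative components.
red_2020_in_709 = matvec(inv3(M), [1.0, 0.0, 0.0])
print(red_2020_in_709)  # R > 1, G and B negative
```

Showing an untransformed [1, 0, 0] on a BT2020 monitor simply drives the panel's far more saturated primary (the oversaturation case), while converting BT2020 red down to REC709 needs negative components, so it clips unless you change the colors first.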
  10. I think a lot of these tests kind of miss the point that a certain bitrate will look fine until it doesn't. Flicking hair in 1% of the frame does not really stress the amount of data needed, so it can sit at an almost lossless level, but when everything starts to move around, at some point it will run into a brick wall; an external recorder can be set much higher (think lossless). On the flip side, when everything is moving and the codec is really pushed, so much is happening anyway that unless you freeze-frame you will never manage to see it, provided the codec is sufficiently good at picking which parts to trash. I would assume someone has done studies on what you can get away with for most usage and used that to balance the bitrate.
  11. Just record externally uncompressed, then compare it with the internal recording and show the actual difference in the image. Comparing bitrate to ProRes isn't even beginning to be useful, especially since ProRes is, how do I say this nicely, kinda crap when it comes to bitrate efficiency. ProRes shoves bitrate at the problem to compensate for being easy on the CPU to decode. In H.264 there is a huge difference in the result depending on how much effort you put into encoding: at the same bitrate, a fixed-function encoding chip can produce a kinda crappy image where a CPU using maximum effort can produce a nice-looking one.
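A rough way to see how much headroom ProRes throws at the problem: compare average bits per pixel, using Apple's published ~220 Mb/s target for ProRes 422 HQ at 1080p29.97 against a typical 50 Mb/s internal H.264 (the 50 Mb/s figure is illustrative):

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average coded bits spent per pixel per frame."""
    return bitrate_bps / (width * height * fps)

# ProRes 422 HQ's published target rate at 1080p29.97:
prores = bits_per_pixel(220e6, 1920, 1080, 29.97)
# A typical 50 Mb/s internal H.264 at the same resolution:
h264 = bits_per_pixel(50e6, 1920, 1080, 29.97)
print(round(prores, 2), round(h264, 2))  # ~3.54 vs ~0.80
```

ProRes spends roughly 4.4x the bits per pixel, yet an H.264 encoder given real effort (motion search, inter-frame prediction) can do far more per bit, which is why raw bitrate comparisons across codecs say so little.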
  12. That would be the problem with it: enough contrast and it will be "in focus" even though it's not. Granted, I have not used peaking on many cameras, but on those I have, it sounded useful and then about 5 minutes later I turned it off because it gave false indications of being in focus. Maybe peaking is good enough to know the difference nowadays, but you really need to test it on the subject you shoot. You can use peaking as an aid: rack focus, see where it drops off, and get a feel for where the middle, the real focus, is. That is how I do it without peaking.
  13. If they designed the sensor and processor together, readout included, to make it "the best", why is the rolling shutter so bad? Did they wake up on launch day and go "oops, we forgot about that"? It would make sense for them to be all about image quality and dynamic range, but dropping the ball so badly on the other parts makes you wonder if they just don't care about the moving part of moving pictures.
  14. Front element size together with focal length determines the maximum aperture of the lens. Divide the focal length by the f-number and you get the entrance pupil diameter; the front element has to be at least that big, give or take some tolerance. That diameter, together with the field of view and how you view the resulting image, determines the depth of field. It's like magic how they all fit together.
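The arithmetic in question, as a sketch (the function name is mine):

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """Entrance pupil diameter; the front element must be at least this big."""
    return focal_length_mm / f_number

print(entrance_pupil_mm(50, 1.8))  # ~27.8 mm for a 50mm f/1.8
print(entrance_pupil_mm(532, 8))   # ~66.5 mm at the long end of a 532mm f/8
```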
  15. I think the read error rate is exaggerated. My 4x 4TB Reds in RAIDZ2 get scrubbed every month and the pool is almost full. After a few years of service I have yet to encounter a single read error from a disk, or corrupt data that ZFS picked up. That is a lot of TB read just for scrubs, and still no error. However, if you are at the point where one drive has already failed, there is a very high chance the other drives are close behind, so the extra safety is, at least for me, well spent. BTW, ZFS resilvers on the fly whenever there is a checksum error; a scrub is only "needed" for data that is not touched regularly. HDDs do the same internally too and detect spots where the magnetic strength is weak, so it is very likely that scrubs actually help and prevent data from becoming corrupt.
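A back-of-envelope sketch of why the quoted spec looks exaggerated; the ~14 TB read per scrub and the three-year window are assumed illustrative numbers, not measurements:

```python
# Consumer drives are commonly spec'd at fewer than one unrecoverable
# read error (URE) per 1e14 bits read. Assume a scrub of a mostly full
# 4x 4TB RAIDZ2 pool reads about 14 TB (data plus parity), monthly,
# for three years.
tb_per_scrub = 14
scrubs = 36
bits_read = tb_per_scrub * 1e12 * 8 * scrubs
expected_errors = bits_read * 1e-14
print(round(expected_errors, 1))  # ~40 errors "expected" at spec
```

At the spec-sheet rate you would expect dozens of UREs over that much reading; observing zero suggests the quoted figure is a worst-case ceiling rather than a typical rate.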
  16. I think there is a decent article about this on the FreeNAS forum somewhere, but the short story is that with larger HDDs, the chance of data being corrupt, or of another disk failing while you rebuild the replaced one, is too big. And since a RAID 5 rebuild gives you no way to correct for errors or corruption, it might very well fail or hand you broken data.
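That argument can be sketched numerically. This assumes the commonly quoted consumer-drive rate of one unrecoverable read error per 1e14 bits (which, per the previous post's experience, is pessimistic in practice, but it is the number the RAID 5 warnings are built on):

```python
import math

def p_read_error_during_rebuild(data_read_tb, ber=1e-14):
    """P(at least one unrecoverable read error) while reading data_read_tb."""
    bits = data_read_tb * 1e12 * 8
    return 1 - math.exp(bits * math.log1p(-ber))

# Rebuilding a 4-disk RAID 5 of 4TB drives means reading ~12 TB
# from the three surviving disks, with no redundancy left:
print(round(p_read_error_during_rebuild(12), 2))  # ~0.62 at the spec'd rate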
  17. Unless you intend to edit straight off a beefy NAS, you could run some local SSD storage for editing and use the NAS for actual storage or "safe keeping". A RAID 0 of, say, four M.2 drives in a PCIe 16x card doesn't have to get crazy expensive. The downside is not having all the projects in the same place at the same time, so only the current project is "online". I don't think many use RAID 5 any more, at least not in servers; you end up always using two parity disks or more, so RAID 6. If you are going to spend some real cash, hop over to the FreeNAS forum and have a look around. If noise is not a huge issue, grabbing a used server and some disks for it might not be a bad deal if you find the right one, and when you have 64GB+ of RAM for intelligent caching you get decent performance. The upside of using ZFS is data integrity and snapshots: no more corrupt data or virus damage.
  18. Kinda hard to get anywhere close. Think the bulb clipping throws it off a bit.
  19. This is exactly why it's wrong to trick people into thinking it's a 24-3000mm f/2.8-8. If it were, the front element would be 375mm in diameter. If you use equivalency to put things into perspective, you can't just ignore or leave out the other parts of it.
  20. This probably bugs me more than it should, but they always leave out the f/45 part of the equivalence. Don't large numbers sound equally impressive there? 24-3000mm f/15-45 suddenly sounds really stupid when it comes to light gathering. 4.2-532mm f/2.8-8 is the real lens, though I'm starting to think that is a stretch too. It is an impressive zoom range, but why?
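The equivalence math in the two posts above, sketched out with the lens figures as quoted:

```python
# Quoted figures: real lens 4.2-532mm f/2.8-8, marketed as a
# 24-3000mm full-frame equivalent.
actual_focal_tele = 532.0
actual_f_tele = 8.0
equiv_focal_tele = 3000.0

crop_tele = equiv_focal_tele / actual_focal_tele   # ~5.64x crop factor
equiv_f_tele = actual_f_tele * crop_tele           # ~f/45 equivalent

# The front element a true 3000mm f/8 would actually need:
pupil_true_3000_f8 = 3000 / 8                      # 375 mm
print(round(crop_tele, 2), round(equiv_f_tele, 1), pupil_true_3000_f8)
```

The same crop factor that turns 532mm into "3000mm equivalent" turns f/8 into roughly f/45 equivalent for light gathering and depth of field, which is the part the marketing leaves out.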
  21. I'm no expert on Apple, but that is the 2009 model, right? $100 would be generous to pay, and you will pay that yearly in electricity alone. And it is absolutely not worth upgrading; the RAM alone draws about 60W. *edit* Just to be clear, that's about the first one you posted. What CPU does the 2nd one have? OK, a quick look at the model history and it seems you need a 2010+ model with at least the better X5675 to get anywhere. And it's still a stretch, since you don't have hardware acceleration to fall back on.
  22. How does the EVF compare to other cameras you have tested?
  23. Since when have Bayer sensors been a problem? I have taken lots of pictures with my Nikon D80 that I have come back to and gone wow, that looks amazing, and back then I didn't shoot raw because I only had a 2GB SD card. He goes on about how numbers don't mean anything, and yet he completely dissed anything "modern" with a high "pixel count" as not good enough, a step backward? I thought he meant to make the point that content was the important part, but that was kinda lost when he pretty much went spec war, film vs modern, which on paper had the better spec(k). (pun intended) Oh, and you also need to shoot on Cooke. Not cookie or cake.
  24. Do you have a short clip of the above straight from the camera with CineLikeV?