Everything posted by no_connection

  1. Don't know who that is and I don't think I have seen him. None of my YouTube recommended content has any of what you are talking about. What you watch is what you get, I guess.
  2. We know at least someone on the forum has it, so we should get the right kind of information when it's announced.
  3. Anyone have any good low light tests? While this one suffers from YouTube compression, it does show some extreme temporal noise reduction. Also some pretty noticeable amp glow in full frame mode.
  4. Well, this is a huge topic and kinda hard to explain in a small post. I am by no means an expert but I'll try. In short, you trade noise for bit depth. https://en.wikipedia.org/wiki/Ordered_dithering https://en.wikipedia.org/wiki/Dither So you cram a higher bit depth from the sensor into the more limited depth you see in the camera file. But that is not the whole story: by doing it right you noise-shape the signal for the intended codec so that no matter what, you are not going to get any banding, and depending on the output format that might be more or less noisy. I was impressed when I examined the X-T2 files in Fusion and saw just enough noise to avoid banding but not much more. Do note that it does require some bandwidth, but so does higher bit depth. If you compress too much you will have neither bit depth nor noise, so yes, that will cause banding, but emulating 10bit in 8bit with dithering is not much of a problem, and the noise needed is going to be less than the sensor noise anyway, by my guess. You are still going to want that 10bit to be dithered as well. You can blame banding on the codec, but it's really a bad implementation of all of the above. So in short, noise is the solution to banding, but it takes proper implementation to get it right. BTW, if you want to experiment with dithering for video, the madVR renderer (using for example MPC-HC) has an option to dither the output in real time at different intended bit depths. *edit* guess the bit depth of the attached images
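
For anyone who wants to play with the idea outside madVR, here is a minimal numpy sketch of ordered dithering a 10-bit source down to 8-bit; the Bayer matrix and the test ramp are textbook illustrations, not what any camera pipeline actually uses:

```python
# Ordered (Bayer-matrix) dither from 10-bit values down to 8-bit.
import numpy as np

# Classic 4x4 Bayer threshold matrix, normalized to [0, 1).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dither_10_to_8(img10):
    """img10: uint16 array of 10-bit values (0..1023), shape (H, W)."""
    h, w = img10.shape
    thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    # One 8-bit step spans four 10-bit steps; adding a sub-step offset
    # before truncating turns the quantization error into fine noise
    # instead of bands.
    return np.clip((img10 / 4.0 + thresh).astype(np.uint16), 0, 255).astype(np.uint8)

# A shallow 10-bit gradient that would band badly if simply truncated.
ramp = np.tile(np.linspace(0, 64, 1024, dtype=np.uint16), (256, 1))
print(dither_10_to_8(ramp))
```

Ordered dither is just the simplest to show; error diffusion, or the noise shaping madVR does, spreads the error in smarter ways.
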
  5. The only reason you get banding is improper dithering. Nothing more, nothing less. At some point you run into excessive noise, at which point you need higher bit depth. A 4bit log file would avoid banding, but it would be extremely noisy doing so. A 10bit file can avoid banding with less noise than an 8bit one. At some point you could get rid of noise and accept that the banding is so small that it's not a problem. But you would still have to reintroduce dither to avoid banding when viewing it. Just look at animated movies.
  6. Good to know how much Sony's "weather sealing" is really worth. I'm saying that based on the details of the sealing, not because it broke from overexposure to salt. Which I guess is bad too.
  7. It took 3 reinstalls to get Compressor to not output silent mp3 files. And the Final Cut UI now has zoom controls hidden in a menu, which is dumb. Sure, I get that they wanna sell the touch strip thing, but seriously. *edit* Oh, and it's still impossible to install to a Samsung 850 SSD.
  8. Well, to beat the GH5 it would have to be way better in low light, and it probably is, but I would say that global shutter would be the video thing to do. More refined overall, sure, but if it's meant to be video oriented, having the option to go global shutter would be the biggest reason to go with a lower res sensor. Also, it's the 16th and no big thing.
  9. Well, if you look at the a7rII vs the a7sII, you see there is not a huge difference in low light performance despite a 1:4 difference in per-pixel area (comparing the full sensor, not video, as it can't use all pixels then). I had trouble finding a raw converter tho, so I can't really show what I mean, but if you scale an a7rII JPEG to 50% they are not far off. Yes, the pixels themselves do get noisier, but when you combine 4 of them it reduces the noise of the resulting pixel, so you get that back, and you can even use it to more intelligently reduce noise. The difference in pixel size between the GH5 and GH5s sensors is less than the above mentioned. Trading resolution for noise is done all the time and is not a new concept; take bitstreaming for example, it's just 100% "noise" but averaged out it makes sound. At some point we are going to get down to counting photons with "quantum sensors", where resolution is very high but you construct an image out of it. Photon energy would determine the color.
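
To put numbers on the "combine 4 of them" part, here is a minimal numpy sketch with purely synthetic values (not real sensor data): averaging four independent noisy pixels cuts the noise standard deviation roughly in half, since it scales as 1/sqrt(N).

```python
# 2x2 binning of a noisy "high-res" frame into a quarter-res frame.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
small_pixels = signal + rng.normal(0, 10, size=(2000, 2000))  # sigma = 10

# Average each 2x2 block into one bigger pixel.
binned = small_pixels.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print("per-pixel noise:", small_pixels.std())  # ~10
print("binned noise:   ", binned.std())        # ~5
```
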
  10. Well, bigger pixels with the same count would mean a bigger sensor. For the same sensor size, the megapixel count does not change the noise performance much unless you go to the extreme; you can check this by downloading raw files from cameras online and scaling them properly to the same size. Go rescale an a7r image to a7s size and you won't get that big of a difference in noise. The fact that the a7r doesn't do full pixel readout is a different issue. Or a7rII vs a7sII might be a better comparison. Yes, there will be some difference, and some sensors manage readout better or worse when they cram as many pixels as they can into a small area. And RGBW or other methods can't really be done with a 1:1 readout ratio without fun problems like aliasing and other debayer headaches, so oversampling benefits from them more than the "bigger pixel" approach does. I'm just saying that slapping a lower resolution on it won't magically make it a lot better in low light if the "higher res" version of it was done decently enough.
  11. Unless they have some new magic way to collect light, low light still depends on sensor size, not pixel count. Well, that and the ability to process the data.
  12. That does not look like aliasing; you can see strands rendered just fine right next to the other strands. You can also see that those "aliases" have motion blur in them. Hair can look that way when there is a very small light source. You can also see some "glitter" through the hair. Granted, it could be a bit of both; I did not check the video to verify.
  13. MPC-HC for life. Also, it has a portable option, so I can keep 32 and 64bit versions and also several different versions with different settings. Do delete the automatic crash report submitter thingy folder tho. That thing is annoying if it ever crashes.
  14. They need to get rid of the Sony-level rolling shutter for it to be of any use in video.
  15. Hmm, instead of cinemascope we need a catchy name for real vertical videos. Maybe vertiscope, hmm nonono we need better. Maybe GravityWell cause it's so deep. Or Astrospect as it will reach the stars if you film the horizon. Oh Oh Oh, Highroller to get that analog connection to film.
  16. https://www.youtube.com/user/jdbastro/videos This guy has a bunch of videos, most with night vision attached to a telescope, but also some normal ones. *edit* He uses both the a7s and the GH3.
  17. I was getting excited until I saw that rolling shutter. However this is a huge step in the right direction and bodes very well for the upcoming mirrorless lineup.
  18. I was looking for some footage to see how good/bad RS was on this camera and found this: https://***URL removed***/videos/3141149812/olympus-om-d-e-m10-iii-4k-sample-video I noticed it's choppy and not fluid, so after some analyzing I see every 5th frame is dropped. Overlaying several frames, you can see gaps where the edge should have been filled in: you see 4 edges and one gap where the 5th should have been. (It's from the container on the train part, if you want to screenshot and drop it into Photoshop to verify.) I would be hesitant to point a finger before I have checked the ProRes file in about 2h, but considering the logo and transitions seem fine, I would guess that either the camera is seriously messed up or dpreview messed the file up. If it's a frame rate conversion done by the reviewer, then that is at best very negligent and at worst a deliberate act to downplay the camera. Converting 30fps to 24fps by dropping frames would drop every 5th frame, btw.
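
The arithmetic behind that last sentence, as a quick sketch with illustrative numbers: 30/24 = 5/4, so a naive conversion keeps 4 of every 5 source frames.

```python
# Which frames survive a naive 30 -> 24 fps drop-frame conversion.
src_fps, dst_fps = 30, 24
period = src_fps // (src_fps - dst_fps)            # one drop every 5 frames
kept = [f for f in range(src_fps) if f % period != period - 1]
print(len(kept), "frames kept per second:", kept)  # 24: [0, 1, 2, 3, 5, ...]
```
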
  19. I could maybe post the Fusion file if anyone wants it. The method is pretty simple: start by making a saturation map, then detect edges on that, so sharp changes in saturation make up the bright parts and no change becomes black. Blur that slightly and convert it to an alpha channel. Use that to blend a desaturated version with the original. That way any sharp color transitions (like fringes) get dialed down while the overall color remains unchanged. Slow change also remains unaffected, though that depends on edge size. The same technique can also be used to bring a busy area down a notch.
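
For anyone who would rather read it as code than open Fusion, here is a rough numpy/scipy sketch of the same idea; this is my own illustration, not the actual Fusion comp, and the blur size and weights are guesses:

```python
# Defringe by desaturating only where saturation changes sharply.
import numpy as np
from scipy import ndimage

def defringe(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3)."""
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    sat = (mx - mn) / (mx + 1e-6)                  # saturation map

    # Edge-detect the saturation map: sharp saturation transitions
    # (fringes) go bright, flat or slowly varying color stays black.
    edges = np.hypot(ndimage.sobel(sat, axis=1), ndimage.sobel(sat, axis=0))

    # Blur slightly and normalize into an alpha channel.
    alpha = ndimage.gaussian_filter(edges, sigma=1.5)
    alpha = np.clip(alpha / (alpha.max() + 1e-6), 0.0, 1.0)[..., None]

    # Blend a desaturated (luma-only) version in where alpha is high.
    luma = (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1]
            + 0.0722 * rgb[..., 2])[..., None]
    return rgb * (1 - alpha) + luma * alpha
```

The blur sigma is the main knob: a bigger blur widens the zone around each fringe that gets desaturated.
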
  20. Can you post a screengrab straight from the camera file? Great care has to be taken when scaling down by that high an amount to not introduce aliasing and moire.
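
As an aside, a minimal Pillow sketch of what that care means in practice, assuming a recent Pillow and a hypothetical input file: a filtered resample low-passes before decimating, while nearest-neighbour just picks every Nth pixel and aliases on fine detail.

```python
# Careful vs careless 4x downscale of a framegrab.
from PIL import Image

img = Image.open("framegrab.png")                 # hypothetical file name
w, h = img.size
bad = img.resize((w // 4, h // 4), Image.Resampling.NEAREST)   # aliases/moire
good = img.resize((w // 4, h // 4), Image.Resampling.LANCZOS)  # prefiltered
good.save("framegrab_small.png")
```
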
  21. Or how I lost my fringe. OK, puns aside, using old or sometimes new glass can be both interesting and challenging. And sometimes they give results that are very pleasing just due to their defects. But that can also have a downside, with lenses showing strong aberrations where you really don't want them. So I wanted to see if I could take some of the edge off the worst part.
  22. I find it a little hard on the eyes, so much white everywhere. Maybe bring the intensity of the background down a tiny bit?
  23. I have used a Hasselblad with a non-live-view Phase One digital back, and it's interesting how easy it is to nail focus the first time, even for product photography where I had to recheck focus in Capture One anyway (shooting tethered).
  24. I have played with De-clip way back when and it could do pretty neat stuff, so I imagine it would work well now. Not sure if any free alternatives exist, tho. https://www.izotope.com/en/community/blog/tips-tutorials/2016/02/using-the-de-clip-plug-in-to-fix-clipped-vocals.html
  25. This is a complicated topic that cannot be answered by the question you asked. Here is a video that shows some pitfalls of LEDs and dimmers. So in short: if you get an LED light that includes a dimming function, for filming you want constant-current dimming and not PWM dimming, as PWM works by rapidly turning the LED on and off at a high frequency and can still cause problems (the Sony A9 with LED screens comes to mind as an example of what can happen).
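
A quick illustration of why PWM can still bite on camera, with made-up numbers: the brightness a frame captures is only stable when the exposure covers a whole number of PWM cycles.

```python
# A PWM-dimmed LED at an assumed 1 kHz against two shutter speeds.
# 1/250 s spans exactly 4 PWM cycles, so every exposure collects the same
# light; 1/320 s spans 3.125 cycles, so captured brightness drifts frame
# to frame (flicker) or line to line with a rolling shutter (banding).
pwm_hz = 1000
for shutter_s in (1 / 250, 1 / 320):
    cycles = pwm_hz * shutter_s
    print(f"1/{round(1 / shutter_s)} s exposure -> {cycles:.3f} PWM cycles")
```
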