


Posts posted by no_connection

  1. On 12/20/2017 at 2:23 AM, webrunner5 said:

    I am not too sure dithering and banding go together? Dithering and noise yes. And noise in blocks not bands. Banding as I know it is based on the compression codec, ergo not enough bits.

    Well, this is a huge topic and kinda hard to explain in a small post. I am by no means an expert, but I'll try to explain it.

    In short you trade noise for bit depth.


    So you cram a higher bit depth from the sensor into the more limited depth you see in the camera file. But that is not the whole story: by doing it right, you noise-shape the signal for the intended codec so that, no matter what, you are not going to get any banding, and depending on the output format that might be more or less noisy. I was impressed when I examined the X-T2 files in Fusion and saw just enough noise to avoid banding, but not much more. Do note that it does require some bandwidth, but so does higher bit depth.

    If you compress too much you will have neither bit depth nor noise, so yes, that will cause banding. But dithering 10-bit down to 8-bit is not much of a problem, and the noise needed is going to be less than the sensor noise anyway, by my guess. You still want that 10-bit to be dithered as well.

    You can blame banding on the codec, but it's really a bad implementation of all of the above.

    So in short, noise is the solution to banding, but it takes proper implementation to get it right.

    BTW, if you want to experiment with dithering for video, the madVR renderer (used with, for example, MPC-HC) has an option to dither the output in real time at different target bit depths.

    *edit* guess the bit depth of the attached images
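    The trade described above can be sketched in a few lines, assuming a simple TPDF (triangular) dither; the numbers are illustrative, not taken from any particular camera:

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant 10-bit level that falls between two 8-bit codes:
# 10-bit code 514 maps to 128.5 in 8-bit, which rounding can't represent.
level10 = np.full(100_000, 514.0)

# Naive rounding to 8-bit: every pixel becomes the same code -> a flat band.
banded8 = np.round(level10 / 4).astype(np.uint8)

# TPDF dither: add about one 8-bit LSB of triangular noise before
# quantizing, so pixels flicker between 128 and 129 in the right ratio.
tpdf = rng.uniform(-0.5, 0.5, level10.shape) + rng.uniform(-0.5, 0.5, level10.shape)
dithered8 = np.clip(np.round(level10 / 4 + tpdf), 0, 255).astype(np.uint8)

print(banded8.mean())    # 128.0 -- the in-between level is lost
print(dithered8.mean())  # ~128.5 -- the average recovers the 10-bit level
```

    Averaged over area (or over frames), the dithered version still carries the original 10-bit level; the truncated one cannot.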



  2. The only reason you get banding is improper dithering, nothing more, nothing less. At some point you run into excessive noise, at which point you need higher bit depth: a 4-bit log file would not get banding, but it would be extremely noisy to achieve that. A 10-bit file can be less noisy than an 8-bit one and still avoid banding. At some point you could get rid of the noise and accept that the banding is so small that it's not a problem, but you would still have to reintroduce dither to avoid banding when viewing it. Just look at animated movies.
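    The "4-bit would be extremely noisy" point follows from how big one code step is at each bit depth; a quick back-of-the-envelope check (illustrative, not tied to any codec):

```python
# Proper dither needs on the order of one LSB of noise, and the LSB
# grows fast as bit depth drops: as a fraction of full scale it is
# 1 / (2**bits - 1).
for bits in (4, 8, 10):
    lsb = 1 / (2**bits - 1)
    print(f"{bits}-bit LSB = {lsb:.4%} of full scale")
```

    So the dither needed at 4 bits is dozens of times stronger than at 10 bits, which is why a low-bit-depth log file looks noisy even when it doesn't band.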

  3. It took 3 reinstalls to get Compressor to not output silent mp3 files.

    And the Final Cut UI now has zoom controls hidden in a menu, which is dumb. Sure, I get that they wanna sell the touch strip thing, but seriously.

    *edit* Oh, and it's still impossible to install to a Samsung 850 SSD.

  4. 7 hours ago, webrunner5 said:

    Hell maybe we are saying the same thing, but there is no way in heck you can have unbelievable low light using a sensor with say 50mb on a FF sensor.. Unless it is a MF sensor with HUGE sensors. And that is why MF cameras are so good. Large pixels per sq inch to gather more light..

    Well, if you look at the a7R II vs the a7S II, you see there is not a huge difference in low-light performance despite a 1:4 difference in per-pixel area (comparing the sensors for stills; in video the a7R II can't use all its pixels). I had trouble finding a raw converter though, so I can't really show what I mean. If you scale a7R II JPEGs to 50%, they are not far off. Yes, the pixels themselves do get noisier, but when you combine four of them, the resulting pixel's noise goes down, so you get that back, and you can even use it to reduce noise more intelligently.

    The difference in pixel size between the GH5 and GH5S sensors is smaller than the one mentioned above.

    Trading resolution for noise is done all the time and is not a new concept; take bitstream audio, for example: it's just 100% "noise", but averaged out it makes sound.

    At some point we are going to get down to counting photons with "quantum sensors", where resolution is very high but you construct an image out of the counts. Photon energy would determine the color.
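    The binning argument is easy to sanity-check numerically; here is a minimal sketch, assuming simple Gaussian read noise and a flat grey scene (the numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated high-res sensor: flat grey scene plus per-pixel read noise.
scene = 0.5
hi_res = scene + rng.normal(0, 0.05, (1000, 1000))

# 2x2 binning: average each block of four pixels into one "big" pixel,
# quartering the pixel count (the a7R II -> a7S II style comparison).
binned = hi_res.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(hi_res.std())  # ~0.05
print(binned.std())  # ~0.025 -- averaging 4 pixels halves the noise
```

    Halving the noise per output pixel is exactly the sqrt(4) gain you expect from averaging four independent samples, which is why small pixels plus downscaling can keep up with big pixels.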

  5. 4 hours ago, webrunner5 said:

    Well it is not the pixel count, it is the pixel size!


    Well, bigger pixels with the same count would mean a bigger sensor.

    For the same sensor size, the megapixel count does not change the noise performance much unless you go to extremes. You can check this by downloading raw files from cameras online and scaling them properly to the same size. Go rescale an a7R image to a7S size and you won't get that big of a difference in noise. The fact that the a7R doesn't do full-pixel readout is a different issue; a7R II vs a7S II might be a better comparison.

    Yes, there will be some difference, and some sensors manage readout better or worse when they cram as many pixels as they can into a small area.

    And RGBW or other methods can't really be done with a 1:1 readout ratio without fun problems like aliasing and other debayer headaches. So oversampling benefits from them more than the "bigger pixel" approach does.

    I'm just saying that slapping a lower resolution on it won't magically make it a lot better in low light if the higher-res version of it was done decently enough.

  6. 7 hours ago, Don Kotlos said:


    That does not look like aliasing; you can see strands rendered just fine right next to the other strands. You can also see that those "aliases" have motion blur in them. Hair can look that way when there is a very small light source. You can also see some "glitter" through the hair. Granted, it could be a bit of both; I did not check the video to verify.

  7. MPC-HC for life. Also, it has a portable option, so I can keep 32- and 64-bit versions, and also several different versions with different settings.
    Do delete the automatic crash report submitter thingy folder though. That thing is annoying if it ever crashes.

  8. I was looking for some footage to see how good/bad the rolling shutter was on this camera and found this:

    https://***URL removed***/videos/3141149812/olympus-om-d-e-m10-iii-4k-sample-video

    I noticed it's choppy and not fluid, so after some analysis I saw that every 5th frame is dropped.


    Overlaying several frames, you can see gaps where the edge should have been filled in: you see four edges and one gap where the fifth should have been. (It's from the container in the train part, if you want to screenshot it and drop it into Photoshop to verify.)

    I would be hesitant to point a finger before I have checked the ProRes file in about 2 hours, but (considering the logo and transitions seem fine) I would guess that either the camera is seriously messed up or dpreview messed it up. If it's a frame rate conversion done by the reviewer, then that is at best very negligent and at worst a deliberate act to downplay the camera.

    Converting 30 fps to 24 fps by decimation would drop every 5th frame, btw.
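    A quick way to see why 30 -> 24 fps decimation kills every 5th frame; this is just the arithmetic, not anything dpreview actually ran:

```python
# Map 24 fps output frames back to 30 fps source frames by nearest index.
# The source advances 30/24 = 1.25 frames per output frame, so one source
# frame in every five never gets picked.
def kept_frames(n_source, src_fps=30, dst_fps=24):
    step = src_fps / dst_fps
    n_out = int(n_source * dst_fps / src_fps)
    return [round(i * step) for i in range(n_out)]

print(kept_frames(10))  # [0, 1, 2, 4, 5, 6, 8, 9] -- frames 3 and 7 dropped
```

    The repeating keep-four-drop-one cadence is what makes the motion look choppy: every fifth motion step is twice as big.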

  9. I could maybe post the Fusion file if anyone wants it.

    The method is pretty simple: start by making a saturation map, then detect edges on it, so that sharp changes in saturation make up the bright parts and no change becomes black.

    Blur that slightly and convert it to an alpha channel. Use that to blend a desaturated version with the original.

    That way, any sharp color transitions (like fringes) get dialed down while the overall color remains unchanged. Slow changes also remain unaffected, though that depends on the edge size.

    The same technique can also be used to bring a busy area down a notch.
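    The steps above can be sketched in NumPy; this is my rough reconstruction of the idea, not the actual Fusion comp, and the gradient edge detect and box blur are stand-ins for whatever nodes you'd actually use:

```python
import numpy as np

def defringe(rgb, blur=1, strength=1.0):
    """Desaturate only where saturation changes sharply (e.g. fringes)."""
    # Saturation map: spread between max and min channel per pixel.
    sat = rgb.max(axis=-1) - rgb.min(axis=-1)

    # Crude edge detect on the saturation map: gradient magnitude.
    gy, gx = np.gradient(sat)
    edges = np.hypot(gx, gy)

    # Slight box blur, then normalize into a 0..1 alpha matte.
    k = 2 * blur + 1
    pad = np.pad(edges, blur, mode="edge")
    h, w = edges.shape
    blurred = sum(
        pad[i:i + h, j:j + w] for i in range(k) for j in range(k)
    ) / k**2
    alpha = np.clip(strength * blurred / (blurred.max() + 1e-8), 0, 1)[..., None]

    # Blend a fully desaturated (grey) version in where alpha is high.
    grey = rgb.mean(axis=-1, keepdims=True)
    return rgb * (1 - alpha) + grey * alpha
```

    Pixels far from any saturation edge get alpha near 0 and pass through untouched; the fringe itself gets blended toward grey.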

  10. Or how I lost my fringe

    Ok, puns aside, using old or sometimes new glass can be both interesting and challenging. And sometimes they give results that are very pleasing just due to their defects. But that can also have a downside, with lenses showing strong aberrations where you really don't want them. So I wanted to see if I could take some of the edge off the worst part.



  11. This is a complicated topic that can't be answered as simply as the question you asked.

    Here is a video that shows some pitfalls of LEDs and dimmers.


    So, in short: you want an LED light that includes a dimming function, and in the case of filming lights you want constant-current dimming, not PWM dimming. PWM works by rapidly turning the LED on and off at a high frequency, and it can still cause problems. (The Sony A9 with LED screens comes to mind as an example of what can happen.)
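    A rough rule of thumb for when PWM flicker shows up on camera (illustrative numbers; actual fixture frequencies vary):

```python
# If the exposure covers many PWM periods, the flicker averages out;
# if it covers only a fraction of a period, each row of a rolling-shutter
# sensor can catch a different part of the on/off cycle -> banding.
def pulses_per_exposure(pwm_hz, shutter_s):
    return pwm_hz * shutter_s

print(pulses_per_exposure(1000, 1 / 50))    # ~20 pulses: mostly averages out
print(pulses_per_exposure(1000, 1 / 4000))  # ~0.25 pulse: rows can miss it
```

    Constant-current dimming sidesteps the whole problem, because the light output never pulses in the first place.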
