
Camera resolution myths debunked


cantsin

Recommended Posts

3 minutes ago, jcs said:

Who is "grossly overestimating this factor" in 2017?

Although I have written for years now that resolution has nothing to do with quality, I frequently fall for this myself. I sold my Pocket mainly because I could see its resolution limits even on my then-1080p monitors. I tested raw, but there were too many artifacts (moire, color fringing). I absolutely loved the ProRes footage, but it obviously wasn't true HD. Being a wedding videographer, I liked the early shots of the Ursa Mini. This was shot in 1080p ProRes, and I had little to complain about quality-wise:

Eventually I bought the A6500 (price & size). Thought that its 4K would free me from worrying about the whole resolution issue. Does it? Of course not. Naive fallacy.

Link to comment
Share on other sites

11 minutes ago, bunk said:

Of course you can.
The content was created to be watched on a 1080 screen, and the term used was 'noticeable difference'. The difference is not noticeable watched at 100%. It starts to get noticeable at 200% and even more noticeable at 400%. ...however at 100% you will not be able to see the differences. On a 4K screen that means using only a quarter of your screen.
So if you saw a difference, you either were looking fullscreen (and now your screen starts to create 4 new pixels for every one pixel of the content, for two already-different pictures) or your screen has pixels that are somehow better than the pixels of a 1080 screen, and in that case I for one would really like to see a screenshot of the differences you saw on the pixels of your screen, as I don't think that is even possible.

 

I'm using calibrated 4K Dell displays (27" P2715Q and 32" UP3216Q). Part 1 was 1920x1080, watched 1:1, part 2 was 1920x1200, watched 1:1. The difference is even more pronounced watching full screen! It's not that more detail is created when displaying full screen, as the pixels are interpolated (would be same effect with a 1080p display), it's just easier to see the differences, just like blowing up the stills until you can see the pixels.

I've done lots of experiments with resolution and perceived detail, including showing that the 5D3 can look decent at 1080p when post-sharpened. And more recently showing that soft 1080p can look OK when scaled to 4K then sharpened and grained, and will easily cut with 4K native material (important sometimes when slomo is only 1080p).
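For anyone curious, here's roughly what that workflow looks like (a minimal sketch, assuming Pillow and numpy; the file names and the radius/amount/grain numbers are illustrative only, not anyone's actual recipe):

```python
import numpy as np
from PIL import Image, ImageFilter

def upscale_sharpen_grain(path_in, path_out, grain_sigma=2.0):
    """Scale a 1080p frame to UHD, unsharp-mask it, then add fine grain."""
    img = Image.open(path_in).convert("RGB")
    # Lanczos upscale from 1920x1080 to 3840x2160
    img = img.resize((3840, 2160), Image.LANCZOS)
    # Mild unsharp mask to restore apparent edge contrast after scaling
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=60, threshold=2))
    # Fine monochrome grain: texture at a higher frequency than the source detail
    arr = np.asarray(img).astype(np.float32)
    grain = np.random.normal(0.0, grain_sigma, arr.shape[:2])[..., None]
    arr = np.clip(arr + grain, 0, 255).astype(np.uint8)
    Image.fromarray(arr).save(path_out)

upscale_sharpen_grain("frame_1080p.png", "frame_uhd.png")
```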

You didn't see a difference at 1:1 when he A/B'd, especially in the eyes/eyelashes? It's not dramatic, but it is visible. This is for 1080p at 1:1. For a 4K display, it would be clearly more pronounced when full screen (3840x2160 video on a 4K display), right? So what's Yedlin's point?

 

Link to comment
Share on other sites

7 minutes ago, jcs said:

So what's Yedlin's point?

That the absolute number of pixels stored in the video file tells you nothing about the actual spatial resolution. Nothing at all, in fact. And that for perceived resolution, other factors contribute more than spatial resolution, because in the real world footage is practically never displayed at its native resolution, but always scaled up or down.

Link to comment
Share on other sites

34 minutes ago, Axel said:

Eventually I bought the A6500 (price & size). Thought that its 4K would free me from worrying about the whole resolution issue. Does it? Of course not. Naive fallacy.

I find 4K useful for post zoom/crop even for 1080p target content. A7S II 4K looks really nice scaled to 1080p (the GH5 has more real detail in 4K). Are you saying the A6500 hasn't freed you from 'worrying about the whole resolution issue'? I'm not sure I understand your point.

For medium/close-up shots, using e.g. a Black Pro Mist filter on the lens can actually look better for 4K content and display (or adding a little blur in post). For wide shots, the extra detail is typically helpful as long as there are no digital artifacts (aliasing, oversharpening, etc.). Target detail/resolution depends on the needs of the content. More source resolution/detail is always useful and desired if storage costs aren't an issue, right? Would I choose an 8K Red over a 3.4K Alexa? It would depend on the desired final product. For skintones, highlights, DR and overall color the choice is clear :) For a nature shoot, where lots of post punch-ins and crops are required and final delivery is 4K, the 8K Red might make more sense (I would need to test it first, probably with their appropriate 'detail' OLPF).

19 minutes ago, Axel said:

That the absolute number of pixels stored in the video file tells you nothing about the actual spatial resolution. Nothing at all, in fact. And that for perceived resolution, other factors contribute more than spatial resolution, because in the real world footage is practically never displayed at its native resolution, but always scaled up or down.

Isn't that obvious? Blurry 1080p does not provide as much perceived detail as true 1080p. Over-sharpened blurry 1080p starts to look like video and gets worse as sharpening artifacts create 'aliasing pixel crawl'. True 1080p requires oversampling sensor photosites (Nyquist again). The C300 I had killer 1080p from a "4K" sensor (only green was averaged, but that's the primary component of luminance (~0.6)), thus the C300 I had very good perceived 1080p (it did alias though). Did you check out the F65 4K example shooting the test chart? The rings were super detailed and alias-free - amazing! ("8K" sensor with a 4K target.. Nyquist).
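The ring-chart behaviour is easy to reproduce synthetically if anyone wants to play with it (a sketch assuming numpy/scipy; the zone-plate frequency and blur sigma are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Zone plate: rings whose spatial frequency rises toward the edges,
# approaching the pixel Nyquist limit, like a resolution test chart.
n = 2048
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
chart = 0.5 + 0.5 * np.cos(1300.0 * (x**2 + y**2))

# Undersampled: keep every 2nd pixel with no filtering. Frequencies above
# the new Nyquist limit fold back as coarse false rings (moire/aliasing).
naive = chart[::2, ::2]

# Oversampled: low-pass first, then decimate (what a good 4K-to-1080p
# downsample does), which suppresses what the half-res grid can't represent.
clean = gaussian_filter(chart, sigma=1.2)[::2, ::2]
```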

Perceived resolution is all about contrast at various spatial frequencies. Starting near the pixel level for convolution sharpening, then going lower frequency with unsharp masking, then lower still with local contrast enhancement (still using the unsharp mask operation, only now we're calling it contrast vs. sharpening), and then with curves and LUTs, and finally simple per-pixel multiplication and subtraction (the simplest form of contrast). Adding in noise/grain which is higher resolution than the source video can also help improve perceived resolution: it adds higher-resolution texture to surfaces.
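Since it's the same unsharp-mask operation all the way down, one small sketch covers the whole continuum (assuming numpy/scipy; the radii and gains are arbitrary): only the blur radius changes as 'sharpening' turns into 'local contrast'.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(img, radius, amount):
    """out = img + amount * (img - blur(img)): contrast at one frequency band."""
    return img + amount * (img - gaussian_filter(img, sigma=radius))

def perceived_detail_boost(img):
    """img: 2D float array in [0, 1]."""
    img = unsharp(img, radius=1.0, amount=0.7)   # near-pixel convolution sharpening
    img = unsharp(img, radius=8.0, amount=0.4)   # classic unsharp masking
    img = unsharp(img, radius=50.0, amount=0.2)  # 'local contrast enhancement'
    return np.clip(img, 0.0, 1.0)
```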

Isn't all of this well known? If this was Yedlin's point these concepts could be shown in a 30s video!

Link to comment
Share on other sites

5 minutes ago, jcs said:

I find 4K useful for post zoom/crop even for 1080p target content. A7S II 4K looks really nice scaled to 1080p (the GH5 has more real detail in 4K). Are you saying the A6500 hasn't freed you from 'worrying about the whole resolution issue'? I'm not sure I understand your point.

Pocket HD is noticeably better than A6500 HD, and I'd like to know why. Sure, low data rates, the way it is downsampled in-camera. Indeed the A6500's UHD at 100 Mbps provides better HD, but on closer look there are still artifacts. I have footage from a borrowed C300 to compare, and its 1080p is pristine in comparison. I also have 4K from a borrowed BMPC and UHD from my buddy's FS7, and that's a whole different league. No wonder, you say? Yeah, but doesn't this confirm the statement that "Ks" don't tell you anything?

Link to comment
Share on other sites

21 minutes ago, Axel said:

Pocket HD is noticeably better than A6500 HD, and I'd like to know why. Sure, low data rates, the way it is downsampled in-camera. Indeed the A6500's UHD at 100 Mbps provides better HD, but on closer look there are still artifacts. I have footage from a borrowed C300 to compare, and its 1080p is pristine in comparison. I also have 4K from a borrowed BMPC and UHD from my buddy's FS7, and that's a whole different league. No wonder, you say? Yeah, but doesn't this confirm the statement that "Ks" don't tell you anything?

I haven't used the A6500, however if the Pocket's HD looks like it has more real resolution vs. the A6500, that means the Pocket is storing more real detail in the output file; e.g. the A6500 is doing a form of binning etc. that results in a loss of detail (information), and thus lower resolution is captured. As you've seen with the C300 I, a 4K sensor with a decent oversample to 1080p results in excellent-resolution 1080p. The C300 I is a special case: it averages the two sets of greens to get 1080p, and takes R & B as-is, thus no deBayering takes place, and the results speak for themselves. So 'Ks' on the sensor side, and how those Ks are processed into the desired output, do indeed make a huge difference. Best results for 1080p on the A7S II and GH4 were to shoot 4K and downsample in post (probably the A6500 too, example here*; what artifacts are you seeing, and is in-camera sharpening turned off?).
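Here's that C300-style mapping as I understand it (a sketch only, based on the description above, not Canon's documented pipeline; assumes a standard RGGB Bayer mosaic in a numpy array): each 2x2 sensor block becomes one output RGB pixel, with the two greens averaged and R & B taken as-is, so no deBayering is needed.

```python
import numpy as np

def bayer_rggb_to_rgb_half(mosaic):
    """Map each 2x2 RGGB block to one RGB pixel: half resolution, no deBayer.

    mosaic: 2D float array, e.g. shape (2160, 3840) in -> (1080, 1920, 3) out.
    """
    r  = mosaic[0::2, 0::2]        # top-left sample of each block
    g1 = mosaic[0::2, 1::2]        # top-right
    g2 = mosaic[1::2, 0::2]        # bottom-left
    b  = mosaic[1::2, 1::2]        # bottom-right
    g  = 0.5 * (g1 + g2)           # average the two green samples
    return np.stack([r, g, b], axis=-1)

frame = bayer_rggb_to_rgb_half(np.random.rand(2160, 3840))
print(frame.shape)  # (1080, 1920, 3)
```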

Hasn't it been clear for years that some cameras provide much more detailed 1080p (and now 4K) than others? The standard way to test real resolution to validate (or invalidate) camera manufacturer resolution claims is to shoot boring (and controversial, lol) test charts! Like Geoff Boyle does here: https://vimeo.com/geoffboyle

* this 4K video looks much better - very detailed! He's using a tripod, so that helps with compression artifacts. That's one way to work around low bitrates: keep motion to a minimum. Another way is to record externally at a higher bit rate.

Link to comment
Share on other sites

35 minutes ago, jcs said:

I haven't used the A6500 ...

* this 4K video looks much better - very detailed! He's using a tripod, so that helps with compression artifacts. That's one way to work around low bitrates: keep motion to a minimum. Another way is to record externally at a higher bit rate.

I agree. For certain cameras, 4K is just the better HD. It should be labelled as such: UHD. And I am about to borrow a Shogun in order to pixel-peep the differences. Limiting myself to static tripod shots is not the answer.

Link to comment
Share on other sites

In the ASC Mag interview, Yedlin readily acknowledges there is nothing new in the video for technically minded people. It is deliberately narrated in language targeting non-technical people. His concern is that decision makers who lack technical skills nevertheless impose technical decisions, easily buying into marketing talk. Also, in the video he explicitly acknowledges the benefits of high-res capture for VFX and cropping.

Link to comment
Share on other sites

4 hours ago, AaronChicago said:

I think he is targeting Netflix for making a hard-and-fast rule of 4K camera acquisition only. It's easy to sell 4K as top quality to a consumer, but that is only a fraction of the variables. It's much easier than advertising "Shot with Arri Master Primes" or "Shot with Cooke Speed Panchros but mastered in 6K so it all equals out".

Good one! :)

 

Link to comment
Share on other sites

23 hours ago, jcs said:

As embedded computing power increases, the use of RAW will rapidly diminish. There are still breakthroughs ahead based on generative mathematics (fractals, DNA, etc.) that will encode information independent of resolution, allowing any desired output to be rendered.

Here's one example based on machine learning (not clear if they can generate arbitrary output resolution, however the compression quality is very high): 

 

Speaking as someone with a background in the field: balloon juice.

>> however the compression quality is very high): 

However, the image quality of the cherry-picked example is very poor.

Question: if a render farm can't produce realistic skin and hair even from the most optimal of descriptions, how do you expect a fraction of the processing power, using a much cruder description, to do a better job??? It defies common sense.

...There are possible (emphasis on possible) application areas for this technology, but replacing RAW isn't one of them. The tech is possibly workable where extreme levels of compression are needed; the problem is that when you need higher quality, the demands on the algorithm rise exponentially.

Link to comment
Share on other sites

On 7/25/2017 at 11:55 AM, HockeyFan12 said:

Sure, but at a normal resolution they still look pretty similar. Remember, he's punched in 4x or something by that point, and you're watching on a laptop screen or iMac with a FOV equal to a 200" TV at standard viewing distance. So yeah, on an 800" TV I would definitely want 6K; heck, even on a 200" TV I would. But the biggest screen I've got is a 100" short-throw projector or something, so the only place where I can see pixels with it is with graphics and text.* I've also been watching a lot of Alexa65-shot content on dual-4K 3D IMAX at my local theater, and tbh I can never tell when they cut to the B cam unless it's like a GoPro, and then I can REALLY tell. :/

 

A projector is no match for a real screen when it comes to resolution, due to the limitations of the optics they typically have.

I can tell the difference easily on a 64" TV at normal viewing range when watching HD productions compared to UHD productions. Of course they will look the same if the source material is shot at effective HD (or less, as is often the case) resolution, but with real UHD source footage you can see the difference quite easily.

Link to comment
Share on other sites

2 hours ago, tugela said:

I can tell the difference easily on a 64" TV at normal viewing range when watching HD productions compared to UHD productions.

In that case I think I can see the difference as well on my 1080 TV, and if so, it only shows us the piss-poor quality of the HD productions.

Could be wrong though.

Link to comment
Share on other sites

8 hours ago, tugela said:

A projector is no match for a real screen when it comes to resolution, due to the limitations of the optics they typically have.

I can tell the difference easily on a 64" TV at normal viewing range when watching HD productions compared to UHD productions. Of course they will look the same if the source material is shot at effective HD (or less, as is often the case) resolution, but with real UHD source footage you can see the difference quite easily.

I'm not an expert on projector optics and I don't have a TV that big, so I can't say. My projector seems about as sharp as my plasma at a similar FOV, but I believe plasmas are actually softer than most LCDs, so it makes sense that both would be less sharp than big TVs are.

You must have very good eyes if you can see the difference at a normal viewing distance. For me, I can't tell a big difference, and I still have better eyes than most (I think they're still 20/20), so I just assume most others can't tell either.

Truth be told, I bet most people with exceptional (15/20 or better) vision can see a significant difference in some wide shots, and a very significant difference in text, animated content, graphics, etc., even at a normal viewing distance and with a TV as small as 64", if not smaller. I wonder if I could tell the difference with white-on-black text. I bet I could, even in a double-blind test. So to that extent, it's just my preferences in terms of diminishing returns that lead me to the opinion that 4K isn't necessary or interesting. If I still had such good vision and had a great TV and money to burn, I might get into the 4K ecosystem, too.
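For what it's worth, the usual back-of-envelope check supports that (a rough sketch; it assumes 20/20 vision resolves about 1 arcminute, which is a simplification):

```python
import math

def max_resolving_distance(diagonal_in, h_pixels, aspect=16 / 9):
    """Farthest distance (inches) at which a 1-arcminute eye can still
    separate adjacent pixels on a 16:9 screen."""
    width = diagonal_in * aspect / math.hypot(aspect, 1)  # screen width
    pixel = width / h_pixels                              # pixel pitch
    return pixel / math.tan(math.radians(1 / 60))         # 1 arcmin subtense

for px in (1920, 3840):
    print(f"{px} px across a 64-inch TV: ~{max_resolving_distance(64, px) / 12:.1f} ft")
# -> HD pixels blend beyond ~8.3 ft, UHD pixels beyond ~4.2 ft, so at typical
#    couch distances the difference is marginal for 20/20 eyes.
```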

Link to comment
Share on other sites

10 hours ago, meanwhile said:

Speaking as someone with a background in the field: balloon juice.

>> however the compression quality is very high): 

However, the image quality of the cherry picked example is very poor.

Question: if a render farm can't produced realistic skin and hair even from the most optimal of description, how do you expect a fraction of the processing power using a much cruder description to do a better job??? It defies common sense.

...There are possible (emphasis on possible) application areas for this technology, but replacing RAW isn't one of them. The tech is possibly workable where extreme levels of compression are needed; the problem is that when you need higher quality then the demands of the algorithm rise exponentially. 

Balloon juice? Do you mean debunking folks hoaxing UFOs with balloons, or something political? If the latter, do your own research and be prepared to accept what may at first appear unacceptable: ask yourself why you are rejecting it when all the facts show otherwise. You will be truly free when you accept the truth, more so when you start thinking about how to help repair the damage that has been done and help heal the world.

Regarding generative compression and what will someday be possible: have you ever studied DNA? Would you agree that it's the most efficient mechanism of information storage ever discovered in the history of man? Human DNA can be completely stored in around 1.5 gigabytes, small enough to fit on a thumb drive (6×10^9 base pairs per diploid genome × 1 byte per 4 base pairs = 1.5×10^9 bytes, or 1.5 GB). Through generative decompression, those 1.5 GB accurately reconstruct about 150 zettabytes (1.5 GB × ~100 trillion cells = 1.5×10^23 bytes = 150×10^21 bytes, i.e. 150 ZB). These are ballpark estimates, however the compression ratio is mind-boggling. DNA isn't just encoding an image, or a movie; it encodes a living, organic being. More info here.
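The arithmetic, spelled out (just a ballpark sanity check of the numbers above):

```python
base_pairs = 6e9                        # diploid human genome
genome_bytes = base_pairs * 2 / 8       # 2 bits per base pair (4 symbols)
cells = 1e14                            # ~100 trillion cells
total_bytes = genome_bytes * cells

print(f"{genome_bytes / 1e9:.1f} GB per genome")        # ~1.5 GB
print(f"{total_bytes / 1e21:.0f} ZB across all cells")  # ~150 ZB
```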

Using machine learning, which is based on the neural networks of our brains (functioning similarly to N-dimensional gradient-descent optimization methods), it will someday be possible to get far greater compression ratios than the state of the art today. Sounds unbelievable? Have you studied fractals? What do you think we could generate from this simple equation:

z(n+1) = z(n)^2 + c, where z is a complex number? Or, written another way, Znext = Znow*Znow + C. How about this:

Mandel_zoom_08_satellite_antenna.jpg

From a simple multiply and add, with one variable and one constant, we can generate the Mandelbrot set. If your mind is not blown by this single image from that simple equation, it gets better: it can be iterated and animated to create video:
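If you want to see it for yourself, here's a minimal sketch (numpy and matplotlib assumed; the resolution and iteration count are arbitrary) that renders the set from nothing but that multiply-and-add:

```python
import numpy as np
import matplotlib.pyplot as plt

w, h, max_iter = 800, 600, 100
x = np.linspace(-2.5, 1.0, w)
y = np.linspace(-1.25, 1.25, h)
c = x[None, :] + 1j * y[:, None]      # one constant C per pixel
z = np.zeros_like(c)
escape = np.zeros(c.shape, dtype=int)

for i in range(max_iter):
    mask = np.abs(z) <= 2.0           # points that haven't escaped yet
    z[mask] = z[mask] ** 2 + c[mask]  # Znext = Znow*Znow + C
    escape[mask] = i                  # iteration count used for shading

plt.imshow(escape, cmap="magma", extent=(-2.5, 1.0, -1.25, 1.25))
plt.title("Mandelbrot set from z -> z^2 + c")
plt.show()
```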

And of course, 3D (Mandelbulb 3D is free):

Today we are just learning to create more useful generative systems using machine learning, for example for efficient compression of stills and, in the future, video. We can see how much information is encoded in z^2 + c, and in nature, where 1.5 GB of DNA data encodes a complete human being (150 zettabytes), so somewhere in between we'll have very efficient still-image and video compression. Progress is made as our understanding evolves, likely through advanced artificial intelligence, allowing us to apply these forms of compression and reconstruction to specific patterns (stills and moving images) and, at the limit, to achieve complete understanding of DNA encoding for known lifeforms, and beyond!

Link to comment
Share on other sites

2 hours ago, tomekk said:

I think he meant that this is unlikely to happen in the commercial world before we die? Therefore, raw will be with us for the foreseeable future.

I meant that there was as much truth in the claims that this tech can replace raw as there is juice in a balloon. (What with balloons being notably juiceless if operated correctly..)

4 hours ago, jcs said:

Today we are just learning to create more useful generative systems using machine learning,

This is true in the sense that we can't do very much, yes. However, by your phrasing you've tried to imply inevitable progress. This is not how competent people reason or argue: if they believe there is a definite reason to expect progress in a field, they state it.

In this case, you are expecting something especially ridiculous: that ML can come up with a compact description thousands of times more efficient than those currently used for generative computer graphics. Which, honestly, is silly.

Quote

From a simple multiply and add, with one variable and one constant, we can generate the Mandelbrot set.

Ok: I've told you I've actually done research in this field and you expect me to have my mind changed by the most childish possible example. Does this make sense? (Hint: the answer is NOT "yes"...)

...Arguing that because we can generate pretty fractals we can use ML to replace raw is like saying "I can grow pretty blue crystals in copper sulphate solution! Therefore if I collect the juice from the Sunday roast for a few weeks I can clone a human being in it!!!" One thing has nothing to do with the other. Thinking that they do requires a level of intellectual resolution so low that - well, so low that I actually can't come up with a useful phrase you'd be likely to understand.

...You've also responded to an argument that ML systems can't surpass human generative graphics by showing examples of human generative graphics. This is also silly. And it's even sillier because another point I made was that human-created generative graphics don't do hair or skin well, and you've shown examples of robots(?)

It's fine to be excited by things, but speaking as someone who actually knows this field - for the benefit of everyone else, not you - this ain't gonna happen!

Link to comment
Share on other sites

2 minutes ago, jonpais said:

I was under the impression that the clients of most members here weren't demanding 4K anyhow. Or am I mistaken?

My impression was that 4K matters to pros less as a delivery format than as a shooting one. It gives you room to crop in post, stabilise, supersample - even to extract two POVs. But perhaps the people I talked to weren't typical?

Link to comment
Share on other sites

3 hours ago, jonpais said:

I was under the impression that the clients of most members here weren't demanding 4K anyhow. Or am I mistaken?

Unfortunately, I've been running into more and more clients who think that if I don't shoot 4K they won't get a nice image. One recent example was a client who had been doing the company's YouTube videos herself with a ~$600 camcorder that had "4K" plastered on the side... I didn't bother explaining that I could get a better image with 10-bit 4:2:2 etc., for the same reason that having a physically larger camera already primes clients to think my work will look good. So I just shot 8-bit 4K on my "large" camera (FS5) and everyone loved it (thanks, Pro Colour!).

On the other hand, I've still got plenty of clients who trust me to deliver the best image based on my past work and never complain if something isn't 4K. I love these clients ;)

Link to comment
Share on other sites
