
NETFLIX: Which 4K Cameras Can You Use to Shoot Original Content? (missing F5! WTH?!?)



@HockeyFan12, I think you misunderstood: I'm not saying you have to oversample by 2x for the monochrome case. Let's take the perfect example again:

[Image: f55-no-aliase.jpg — test chart lines perfectly aligned with the photosite grid]

The test chart lines and the sensor line up perfectly, with no OLPF. Real-world chart lines are transformed into digital lines: pixels. The 1920x1080 sensor has captured 960x540 line pairs, where each pair is made of 2 pixels, giving us 1920x1080 output. Perfect!

Ok, let's cause some trouble and shift the sensor a tiny bit:

[Image: f55-aliasin-all-grey.jpg — the same chart shifted half a pixel, sampling to uniform gray]

Sh*t!!! Where'd all our lines go, dude?! It's just a gray mass! Now we have 0 line pairs: mush. What if we slowly moved the sensor sideways; what would that look like as video? Black and white lines alternating with flashing gray: terrible aliasing! How can we fix it? When we add the OLPF, both the first (aligned) and second (misaligned) cases produce a gray mass. However, there's no more aliasing, because the OLPF is filtering out frequencies above the Nyquist limit. The thinnest lines we can now capture without aliasing will always take up at least two pixels instead of one (for this test case). A 1920x1080 mono sensor with no OLPF can (sometimes!) capture 960x540 line pairs, but suffers from terrible aliasing. A 1920x1080 mono sensor with a proper OLPF can capture half the frequency, so 960x540 line pairs become 480x270 line pairs, with each white line taking 2 pixels (blurred) and each black line taking 2 pixels (blurred): 480x4 = 1920, 270x4 = 1080.
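Here's the same thought experiment as a quick numpy sketch (my own toy code, not anything from a real camera pipeline), integrating the light each photosite sees as the chart shifts:

```python
# Toy model: a chart of 1-pixel black/white lines, integrated over each photosite.
import numpy as np

PIXELS = 16
OVER = 100  # sub-steps per photosite, used to integrate the light over each site

x = np.arange(PIXELS * OVER) / OVER
chart = (np.floor(x) % 2).astype(float)  # alternating lines, each 1 pixel wide

def sample(shift_px):
    """Average the chart over each photosite, shifted sideways by shift_px pixels."""
    shifted = np.roll(chart, int(shift_px * OVER))
    return shifted.reshape(PIXELS, OVER).mean(axis=1)

print(sample(0.0))   # aligned: 0,1,0,1,... full-contrast lines
print(sample(0.5))   # half-pixel shift: all 0.5 -- the uniform gray mass
print(sample(0.25))  # in between: partial contrast; sweep the shift and it flickers
```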

I borrowed Alister Chapman's images above; here's another write-up of his explaining the same thing in more words (also read the comment section): http://www.alisterchapman.com/2011/03/10/measuring-resolution-nyquist-and-aliasing/. It's true that OLPFs aren't perfect, and they are rarely tuned to prevent all aliasing: they accept slight aliasing in exchange for a little extra apparent sharpness.

 

 


@jcs 

While the systems you identify function more or less as you claim and most of what you're writing is correct, you're making a few mistakes, most significantly dividing by two an extra time.

Rather than dissect every aspect of your arguments in this thread, I'll focus only on a few points that are problematic, and we can go from there. IMO, Alister Chapman's images are bad and misleading examples anyway, so I won't address that post in as much detail as your post before it.

I struggle to be concise, but I’ll try. For now, let’s stay focused on monochrome sensors without AA filters because Bayer interpolation just confuses things. You state:

Maximum capture resolution possible for a monochrome sensor is W x H pixels, or W/2 x H/2 line pairs in terms of frequency

Maximum possible capture resolution for (1) without aliasing is: (W/2)/2 x (H/2)/2 line pairs in terms of frequency, or W/2 x H/2 in terms of pixels

Where did the second division by two come from in part 2? Not from Nyquist. You already applied Nyquist in converting pixels to line pairs. What is correct is simpler and needs no part 2:

Maximum capture resolution possible without aliasing for a monochrome sensor is <W x H lines, or <W/2 x H/2 line pairs in terms of frequency (assuming sinusoidal cycles in the input signal)

Or, basically just the Nyquist theorem. 

Alister Chapman's first image is misleading. I suppose if you could map each pixel individually and perfectly, that is about what would happen. But the chance of those patterns mapping exactly onto one another is vanishingly small (not to mention I think he's wrong about how a Bayer sensor would read that input, based on how debayering algorithms work, but that's an unrelated tangent and why we're focusing on monochrome sensors for now). Both of his example images depend on a vanishingly small probability. >99.9999999999% of the time, a square wave input at a frequency equal to the frequency of the sensor will just give you crazy aliasing, as you correctly state. Not that a true square wave could even hit the sensor once passed through a lens. So we see here only the exceptions which prove the rule, and which are only tangentially related to the actual rule, which concerns sine waves and not square waves (except to the extent that square waves contain infinite-order harmonics of sine waves).

So let's entirely forget Chapman's confusing examples for now and start with more useful ones. Let's map <N sine waves onto 2N pixels. (There are some caveats to the examples I'm posting, but we can discuss those later.)

What we can see here is that when the frequency of the sensor and signal are aligned, we can resolve <2N lines with high contrast. When they're offset by exactly 1/2, the signal approaches gray (it would be pure gray if exactly offset and if we had exactly N waves, but for this example we're talking <N, and N needn't be an integer, so instead it's infinitely close to gray). And when the grid is offset by an arbitrary amount we get… a lower contrast image. Not aliasing!

[Images: three screenshots of the sine pattern sampled aligned with the grid, offset by exactly 1/2, and offset by an arbitrary amount]
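The screenshots came out of an image editor, but the same point can be made with a few lines of numpy (my sketch, with an assumed N of 512), reading the detected frequency back with an FFT:

```python
# Below Nyquist, shifting the grid never creates false detail; above it, it does.
import numpy as np

N = 512  # photosites along one axis
x = np.arange(N) / N

def detected(cycles, offset=0.0):
    """Sample `cycles` sine cycles across N photosites; read back the peak FFT bin."""
    samples = np.sin(2 * np.pi * (cycles * x + offset))
    return np.abs(np.fft.rfft(samples))[1:].argmax() + 1  # skip the DC bin

print(detected(250))       # < N/2 cycles: reads back 250 -- no aliasing
print(detected(250, 0.3))  # any grid offset: still 250, just different pixel values
print(detected(300))       # > N/2 cycles: reads back 212 (512 - 300), a false frequency
```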

Nyquist holds true. We don't need to divide by two again to get rid of aliasing! We just need to think in terms of sine waves rather than square waves in the input domain.

Let’s reexamine your claims:

1. Maximum capture resolution possible for a monochrome sensor is W x H pixels, or W/2 x H/2 line pairs in terms of frequency

Part 1 is basically true, except for the lack of a < symbol and some semantics (pixels and line pairs aren't quite the same thing).

2. Maximum possible capture resolution for (1) without aliasing is: (W/2)/2 x (H/2)/2 line pairs in terms of frequency, or W/2 x H/2 in terms of pixels

Part 2 contains an extra division by two. It should read:

Maximum possible capture resolution for (1) without aliasing is: <(W/2) x (H/2) sinusoidal line pairs in terms of frequency, or <W x H in terms of pixels. Those sampled pixels may approach but will not reach zero contrast (in theory, not practice).

The reason there's so much confusion is that most resolution charts and most zone plates are printed as square waves, which have infinite higher-order harmonics. I would also contend that oversampling by a factor of two not only guarantees no aliasing but also guarantees that no signal is reduced to approaching gray (taking gray as zero) and that every signal is allowed to reach its full amplitude. But that is separate from Nyquist, and I'm not even sure oversampling (itself a Nyquist-bound process) preserves that detail anyway.

What's really crazy to consider is that a high resolution sensor is still prone to aliasing when shooting a single high contrast edge, no matter the frequency of the fundamental pattern. It just usually doesn't show up much.

Furthermore, the < is a bit of a silly bugaboo, because even without the < you won't get aliasing; you just also won't get a recoverable signal, because it will be reduced to gray (zero) when misaligned perfectly with the grid. But since any sensor has noise and quantization isn't infinitely sensitive anyway, the <'s presence is fairly arbitrary in practice, and the exact limit of recoverable resolution is even lower than Nyquist predicts, promoting the practice of oversampling.

Anyhow, you're dividing by two an extra time and using square waves and sine waves interchangeably, even though square waves are of infinite frequency regardless of their fundamental, while sine waves are of their fundamental frequency and no higher.


11 hours ago, gelaxstudio said:

Why is the iPhone 7 missing too?

Lack of wide gamut BFA and 10 bit high bitrate codec, for one. I'm sure they'd let you shoot the occasional insert on an iPhone, though, and would purchase an iPhone-derived project as an original on its other merits. They just likely wouldn't finance and produce one. So I wouldn't sweat it until you're in a position to discuss it with them personally.


15 hours ago, HockeyFan12 said:

Lack of wide gamut BFA and 10 bit high bitrate codec, for one. I'm sure they'd let you shoot the occasional insert on an iPhone, though, and would purchase an iPhone-derived project as an original on its other merits. They just likely wouldn't finance and produce one. So I wouldn't sweat it until you're in a position to discuss it with them personally.

Uh...actually that was a joke~:flushed:

It is just a phone!

 


@HockeyFan12,

  1.  Maximum capture resolution possible for a monochrome sensor is W x H pixels, or W/2 x H/2 line pairs in terms of frequency
  2.  Maximum possible capture resolution for (1) without aliasing is: (W/2)/2 x (H/2)/2 line pairs in terms of frequency, or W/2 x H/2 in terms of pixels

(1) is maximum resolution with aliasing, (2) is maximum resolution without aliasing, and thus we must perform the extra divide by 2 as per Nyquist. Without the extra divide by 2, (2) is the same as (1). Since most OLPFs and camera systems allow a small amount of aliasing, I stopped using > to simplify the statement. Nyquist is >2x if we want zero aliasing (vs. the =2x written above, simplified for camera systems which allow a slight amount of aliasing).

If one argues that (1) & (2) above are invalid, they are stating that Nyquist sampling theory is invalid.

We don't need to complicate the statement with fundamentals and harmonics, especially with a sine wave. What are the harmonics of a pure sine wave beyond the fundamental? ;) 

Nyquist applies also to the purely digital domain, and is used in computer graphics and video games to reduce or eliminate aliasing: http://cs.boisestate.edu/~alark/cs464/lectures/AntiAliasing.pdf
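Supersampling is the classic graphics fix, and it's easy to sketch (my own example, not code from the slides): render a pattern finer than the output grid with one sample per pixel, then again with many averaged samples per pixel:

```python
# Point-sample a fine pattern vs. supersample-then-average (box-filter) it.
import numpy as np

def scene(u, v):
    return (np.floor(u * 97) + np.floor(v * 97)) % 2  # fine checkerboard "detail"

OUT, SS = 32, 8  # output resolution and supersampling factor

u, v = np.meshgrid((np.arange(OUT) + 0.5) / OUT, (np.arange(OUT) + 0.5) / OUT)
aliased = scene(u, v)  # one sample per pixel: Moire-style false detail

u2, v2 = np.meshgrid((np.arange(OUT * SS) + 0.5) / (OUT * SS),
                     (np.arange(OUT * SS) + 0.5) / (OUT * SS))
smooth = scene(u2, v2).reshape(OUT, SS, OUT, SS).mean(axis=(1, 3))  # box-filtered

# The supersampled render sits near flat gray; the point-sampled one is noisy junk.
print(aliased.std().round(3), smooth.std().round(3))
```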


I've already explained myself, and I'm not going to keep explaining if it doesn't add up for you. I had this explained to me by a sensor engineer at one of the leading cinema camera manufacturers, so while I may be using layman's terms incorrectly, I'm sure the theory behind it is right, and the illustrations I've offered prove it fairly simply. Maybe what I'm expressing isn't clear. But I used to think what you're arguing, and a lead engineer at a camera manufacturer corrected me. I don't want to refer to others' explanations so that people take things on faith; I've already explained how it works, just maybe not as well as he can. If you want, I can refer you to his explanation via PM.

Long story short, you're still dividing by two twice, once unnecessarily.

Of course there are no harmonics of a pure sine wave beyond the fundamental, by definition. But Chapman's images show square waves, which do have higher-order harmonics. So what I contend is that if you take Alister Chapman's images (his explanation is wrong, btw) but swap sine waves for the square waves he places in the input domain, you'll find that any frequency equal to or less than 2k line pairs (4k lines) doesn't alias, and any frequency less than 2k line pairs (less than 4k lines) not only doesn't alias but leaves a recoverable trace, however low contrast. Yes, the contrast of the recorded pattern varies based on alignment with the sensor, but no false detail is generated.

I'm quoting the first website I find: 

Nyquist rate -- For lossless digitization, the sampling rate should be at least twice the maximum frequency responses. Indeed many times more the better.

You'd agree that fits the definition of Nyquist? So let's say we have 2k sinusoidal line pairs, cycling between black and white, and we want the minimum resolution sensor needed to record them without aliasing. A sine wave hits zero (gray) twice per cycle, 1 (black) once per cycle, and -1 (white) once per cycle. So that's one black and one white line (really, they'll show up as shades of gray) per cycle. So if a 4k sensor can resolve up to 2k line pairs without aliasing, it can resolve up to 4k lines. Nyquist basically does nothing more than convert lines to line pairs. Yes, you still need a low pass filter. And yes, misaligned by exactly 1/2 and sampled at the same frequency as the input, the image will resolve as gray, as Chapman illustrates. But that's where the < comes from: if the input is < that number, there will always be at least a trace in the recording, however faint, from which to recover the signal's frequency.

So yes, let's think in sine waves and monochrome sensors exclusively. Refer to the images I posted above. You don't need the second division by two, as I illustrate, under those circumstances.

Maximum possible capture resolution without aliasing is: (W/2) x (H/2) line pairs in terms of frequency, or W x H in terms of pixels (although those pixels may be shades of gray, and almost certainly won't be black and white lines unless there's a lot of false detail or sharpening)


1 hour ago, HockeyFan12 said:

Maximum possible capture resolution without aliasing is: (W/2) x (H/2) line pairs in terms of frequency, or W x H in terms of pixels (although those pixels may be shades of gray, and almost certainly won't be black and white lines unless there's a lot of false detail or sharpening)

If (W/2) x (H/2) = frequency, where's the >2x frequency oversampling to prevent aliasing per Nyquist? That's what the divide by 2 does. If we define frequency as W x H, then the max lines possible are W/2 x H/2 (actually fewer, because of >). The latter definition makes the most sense to me, since the individual photosites are doing the sampling.

Taking the optics out of the picture and looking at what ends up in the sensor after sampling, we see that we need at least 2 pixels to represent the thinnest line possible without aliasing:

[Image: image013.png — a one-pixel line (aliased) next to a two-pixel line (anti-aliased)]

Putting the optics back in the picture, the OLPF slightly blurs the input to give us the anti-aliased example in the sensor.
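Here's a rough 1-D sketch of that OLPF idea (my toy model: a simple box blur standing in for the real birefringent filter), showing that pre-blurring to about 2 photosites makes the worst-case misalignment stop mattering:

```python
# Blur to ~2 photosites before sampling and the worst-case shift stops flickering.
import numpy as np

N, OVER = 64, 100
x = np.arange(N * OVER) / OVER
scene = (np.floor(x) % 2).astype(float)  # 1-pixel lines: right at the sensor's pitch

def capture(signal, blur_px):
    if blur_px > 0:
        k = int(blur_px * OVER)
        signal = np.convolve(signal, np.ones(k) / k, mode="same")  # crude "OLPF"
    return signal.reshape(N, OVER).mean(axis=1)

mid = slice(N // 2, N // 2 + 6)        # look at the middle, away from edge effects
shifted = np.roll(scene, OVER // 2)    # worst case: half-pixel misalignment
print(capture(scene, 0)[mid], capture(shifted, 0)[mid])  # lines vs. gray: aliasing
print(capture(scene, 2)[mid], capture(shifted, 2)[mid])  # both gray: soft but stable
```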


2 hours ago, jcs said:

If (W/2) x (H/2) = frequency, where's the >2x frequency oversampling to prevent aliasing per Nyquist? That's what the divide by 2 does.

To avoid aliasing and ensure a recoverable signal when recording at a given frequency, you need to sample at more than twice that frequency. That, more or less, is the Nyquist theorem; I think we can agree?*

So you need ≤1/2 as many cycles in a given direction as you have pixels in that direction in order to record a signal without aliasing.

Cycles of a given frequency are equal to line pairs (sinusoidal line pairs)

So that's ≤(W/2) x (H/2) line pairs without aliasing

Line pairs contain two lines (half sine waves)

So ≤(W/2) x (H/2) line pairs x 2 lines per pair = W x H lines without aliasing

Thus, if you are recording a sinusoidal zone plate, a 4k sensor can record anything up to 2k sinusoidal cycles (4k lines) without aliasing.

Your extra division by two comes out of nowhere. It's an attempt to apply Nyquist to Nyquist.

The pictures you posted are again of hard-edged lines, not of sinusoidal gradients. All of your visual examples so far are of such lines, which have infinite higher-order harmonics and thus are of effectively infinite frequency (irrespective of the fundamental frequency), so they'll alias at their edges unless you apply a low pass filter. Your latest example correctly demonstrates this, but that's all it demonstrates. You keep posting these incredibly basic examples that don't relate to the math, just to the broad concept that things can alias. I'm not arguing against that. I'm arguing that you're continuing to add an extra division by two, and continuing to ignore that Nyquist applies to the highest harmonic frequency of a wave, not to its fundamental frequency (the two are only the same for sine waves).

*≥2 times will prevent aliasing. >2 further guarantees the signal doesn't gray out when it's exactly out of phase.


Moving back from monochrome sensors to Bayer sensors: because of the undersampling due to the Bayer pattern, Sony is legit in saying the F65 is the only true(-ish) 4K camera, with full 4K sampling in green (important for luminance) and 2x color sampling in R and B vs. a 4K Bayer sensor. To make a 4K Bayer sensor not alias at 4K output, the OLPF would have to be very strong, resulting in a soft image. This matches Geoff Boyle's real-world results: the F65 shows no aliasing while providing the most detailed image, with progressively lower resolution Bayer sensors providing less detail and more aliasing:

F65 (20 megapixels, sufficiently sampled to provide high detail, alias-free 4K).

C300 Mark II (8.85 megapixels, undersampled, producing less detail and a lot of aliasing).

C700: slightly higher resolution than the C300 II, with less color aliasing but still some luminance aliasing.

More tests here: https://vimeo.com/geoffboyle

It's possible that 8K Red could produce 4K output similar to the F65's; it's not shown on Geoff's page.
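The sampling-density half of this is easy to sketch without modeling debayering at all (my own toy example: one row of photosites, with assumed counts): the same scene detail reads back correctly on the higher-resolution row and folds into false detail on the lower-resolution one.

```python
# The same scene detail read by a high-res and a lower-res row of photosites.
import numpy as np

OVER = 8
cycles = 900  # scene detail: 900 cycles across the frame width
scene = np.sin(2 * np.pi * cycles * np.arange(4096 * OVER) / (4096 * OVER))

def detected(n_sites):
    """Integrate the scene over n_sites photosites; read back the peak frequency."""
    sites = scene.reshape(n_sites, -1).mean(axis=1)
    return np.abs(np.fft.rfft(sites))[1:].argmax() + 1

print(detected(4096))  # Nyquist at 2048 cycles: reads 900 -- real detail
print(detected(1024))  # Nyquist at 512 cycles: reads 124 (1024 - 900) -- false detail
```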

 

17 minutes ago, HockeyFan12 said:

To avoid aliasing and ensure a recoverable signal when recording at a given frequency, you need to sample at more than twice that frequency. That, more or less, is the Nyquist theorem; I think we can agree?*

Why overcomplicate things with line pairs, sinusoids, harmonics, fundamentals, and so on? We can also drop the > for this discussion, as slight aliasing is typically OK in the real world (actually, much worse is allowed, as shown above). Pixels (photosite outputs) are samples, not line pairs, right? I think line pairs were used to reflect Nyquist, and thus 2 pixels vs. 1: the thinnest lines require 2 pixels, hence the term line pair.

With W x H sample sites, we need to oversample by 2, and thus max W/2 x H/2 lines are possible, along with an OLPF to cut frequencies higher than that (thinner lines). This math matches the real-world tests shown above with Bayer sensors: if we start with 8K (F65) we've got max detail and no aliasing, and as we go down in resolution, detail drops and aliasing rises.


37 minutes ago, jcs said:

Why complicate things with line pairs, sinusoids, harmonics, fundamentals and so on? We can also drop the > for this discussion as slight aliasing is typically OK in the real world (actually much worse is allowed as shown above). Pixels are samples, not line pairs, right? I think they used line pairs to reflect Nyquist, and thus 2 pixels vs. 1: the thinnest lines require 2 pixels, and thus the term line pair.

With W x H sample sites, we need to oversample by 2, and thus max W/2 x H/2 lines are possible along with an OLPF. This math matches real-world tests as shown above with Bayer sensors.

No. Nyquist applies to sine waves, full stop. Once you put square waves (black and white lines) in the input domain, you are immediately adding infinite higher-order harmonics, so in the absence of a low pass filter you're increasing the input frequency to infinity (because of the overtones).

Again, Nyquist concerns sine waves. Square waves have overtones of infinite order, so while Nyquist still concerns them, it theoretically requires an infinite number of pixels to sample a square wave. So anyhow, it concerns them differently...

When we discuss Nyquist, we have to use sine waves in the input domain, whether it concerns sound or image. Or, if we use square waves, we must accept that square waves in the input domain are of theoretically infinite frequency at certain localities.
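The overtone claim is easy to check numerically; a quick sketch (mine) of a square wave's spectrum:

```python
# A square wave's spectrum: odd harmonics 3f, 5f, 7f... with amplitudes near 4/(pi*k).
import numpy as np

N = 4096
t = (np.arange(N) + 0.5) / N                 # half-sample offset avoids exact zeros
square = np.sign(np.sin(2 * np.pi * 8 * t))  # square wave, fundamental = 8 cycles
mag = np.abs(np.fft.rfft(square)) / (N / 2)  # normalize to sine-amplitude units

for k in (1, 3, 5, 7, 9):
    print(8 * k, mag[8 * k].round(3), round(4 / (np.pi * k), 3))  # measured vs theory
```

The harmonics only stop where the sampling itself runs out; an ideal square wave's series never does.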

That said, in practical terms, oversampling by 2 with a Bayer sensor seems to work really well! And, in practical terms, the low pass filter knocks away most of those higher order harmonics. As does diffraction, etc. As do the limits of the printed image, etc.

Yes, oversampling helps get a better image! But oversampling by an extra factor of 2 is fairly arbitrary, and not exclusively or specifically derived from Nyquist. IMO, it's probably more related to getting full raster in R and B, compensating a bit for the low pass filter.


30 minutes ago, HockeyFan12 said:

No. Nyquist applies to sine waves, full stop. Once you put square waves (black and white lines) in the input domain you are immediately adding infinite higher-order harmonics. With square waves, the frequency is theoretically infinite (or practically, whatever the lowest frequency the low pass filter and lens pass through) because of the overtones. Nyquist concerns sine waves. Square waves have overtones of infinite order.

When we discuss Nyquist, we have to use sine waves in the input domain. Or accept that square waves in the input domain are of theoretically infinite frequency.

That said, in practical terms, oversampling by 2 with a Bayer sensor seems to work really well!

It's just not because of the same 2 that's in Nyquist. It's because of many other factors working together. And to that extent the strength of the OLPF is fairly arbitrary. More tuned to taste than to math. 

Nyquist applies to all continuous (analog) signals and even purely computer generated images(!), not just sinusoids: https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

Quote

In the field of digital signal processing, the sampling theorem is a fundamental bridge between continuous-time signals (often called "analog signals") and discrete-time signals (often called "digital signals"). It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.

A camera sensor with an OLPF never sees a square wave anyway (no infinite bandwidth), and aliasing happens with fine fabric, brick walls etc., not just black and white test charts. We could get into Fourier analysis if you want to talk about harmonics, but that's really off topic for this discussion.

Again:

With W x H sample sites, we need to oversample by 2, and thus max W/2 x H/2 lines are possible along with an OLPF to cut frequencies higher than that (thinner lines). This math matches real-world tests as shown above with Bayer sensors: if we start with 8K (F65) we've got max detail and no aliasing, and as we go down in resolution detail drops and aliasing rises.

Without the divide by 2, we're not using Nyquist, which was your original statement in this thread: that Nyquist doesn't apply to image sensors (it does). It also applies to computer graphics in general, so we can talk about these concepts using just graphics, with no optics or sensors at all.

The OLPF strength is fairly arbitrary? Not really: it's absolutely tuned to the sensor resolution, with some cameras providing an OLPF swap when capturing at lower vs. higher resolutions. Sure, there is some minor taste tuning regarding sharpness/detail vs. aliasing for all the camera manufacturers who are cheating in calling their cameras 4K ;). The F65 OLPF is precisely tuned to the math, and it shows!

 


I don't know what more to say except that the Nyquist theorem specifically concerns sine waves. Always has, always will. It concerns other wave shapes only to the extent that they are the product of sine waves. By frequency, Nyquist means frequency of the highest order sine wave. (Which in the case of an unfiltered square wave is infinite.)

I don't think we're going to make any progress in this discussion until we can agree upon this point, and if we can't we won't. Which is okay! Everyone will draw their own conclusions and luckily this discussion is more academic than practical (to the extent that neither of us are designing sensors and both of us agree the F65 looks awesome).

By fairly arbitrary, I only meant tuned partially subjectively. That's my bad for articulating it poorly. And yes, Nyquist does apply to computer graphics, and to that extent the fact that all the graphics you present are of square wave functions (infinite order so far as Nyquist is concerned) is very relevant. That's why they alias like crazy when you downscale them. A few posts back I downsampled some sine wave zone plates (quantized sine waves, granted, and I suppose a binary quantized sine wave resembles a square wave, which further complicates things…) and they didn't alias in the same way! Not nearly as much aliasing. Not any, in fact, until they hit the Nyquist limit.

So yes, square waves and sine waves do have different frequencies as concerns Nyquist. 


43 minutes ago, HockeyFan12 said:

I don't know what more to say except that the Nyquist theorem specifically concerns sine waves. Always has, always will. It concerns other wave shapes only to the extent that they are the product of sine waves. By frequency, Nyquist means frequency of the highest order sine wave. (Which in the case of an unfiltered square wave is infinite.)

I don't think we're going to make any progress in this discussion until we can agree upon this point, and if we can't we won't. Which is okay! Everyone will draw their own conclusions and luckily this discussion is more academic than practical (to the extent that neither of us are designing sensors and both of us agree the F65 looks awesome).

By fairly arbitrary, I only mean tuned partially subjectively. That's my bad for articulating that poorly. And yes, Nyquist does apply to computer graphics, and to that extent the fact that all the graphics you present are of square wave functions (infinite order so far as Nyquist is concerned) is very relevant. That's why they're aliasing like crazy. A few posts back I downsampled some sine wave imagery (quantized sine waves, granted) and it didn't alias in the same way! (At least until it hit the Nyquist limit.)

Writing that Nyquist concerns specifically sine waves is misleading, as is creating examples based on sine waves (smooth gradients) to demonstrate your point, when the real world does not look only like that. Nyquist is about sampling continuous (analog) signals and accurately reconstructing them (it also applies to discrete signals: digitized sound, computer graphics, and any other signal for that matter). Understanding Nyquist requires Fourier analysis (FFT, DFT, DCT), where we break any signal down into component frequencies, which are indeed sine waves (displayed as discrete pixels in computer graphics!). But we don't need to go that deep; one can trivially draw lines to understand aliasing vs. anti-aliasing and the 2-pixel factor:

[Image: image013.png — the one-pixel vs. two-pixel line example again]

From: https://www.microscopyu.com/tutorials/spatial-resolution-in-digital-imaging

Quote

The numerical value of each pixel in the digital image represents the intensity of the optical image averaged over the sampling interval. Thus, background intensity will consist of a relatively uniform mixture of pixels, while the specimen will often contain pixels with values ranging from very dark to very light. The ability of a digital camera system to accurately capture all of these details is dependent upon the sampling interval. Features seen in the microscope that are smaller than the digital sampling interval (have a high spatial frequency) will not be represented accurately in the digital image. The Nyquist criterion requires a sampling interval equal to twice the highest specimen spatial frequency to accurately preserve the spatial resolution in the resulting digital image. An equivalent measure is Shannon's sampling theorem, which states that the digitizing device must utilize a sampling interval that is no greater than one-half the size of the smallest resolvable feature of the optical image. Therefore, to capture the smallest degree of detail present in a specimen, sampling must occur at a rate fast enough so that a minimum of two samples are collected for each feature, guaranteeing that both light and dark portions of the spatial period are gathered by the imaging device.

By eliminating the divide by 2, you are back to your original position that Nyquist doesn't apply to sensors. If you do agree that Nyquist applies to sensors and computer graphics, where do you apply the divide by 2 (oversampling) in the math?

Thanks BTW for this discussion, I have a better understanding of sensors, cameras, and why the F65 looks so amazing! :) 

 


There's no need to oversample beyond the 2x conversion from line pairs to lines. If we have N/2 line pairs (cycles), a sensor with >N photosites in that axis can resolve them all. And N/2 line pairs translates to N lines. So we can resolve lines up to the limit of the sensor's resolution (not only up to half that limit), and a 4k sensor can indeed resolve up to 4k lines on a sinusoidal zone plate without aliasing. Practically, all Nyquist really does is recognize that a sine wave has two humps: one positive hump that goes from 0 to A (amplitude) to 0, and one negative hump that goes from 0 to -A to 0. We need to sample twice per sine wave, picking up each of the two humps once, or else we will get aliasing. But if those humps are exactly out of phase with the sampling, we'll get a phase-cancelled signal (gray, or 0), so we need to sample at more than twice the frequency to leave a recoverable trace of the signal.
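That phase-cancellation point is simple to verify (a sketch of mine, brute-forcing the worst-case offset for an assumed N of 512):

```python
# At exactly fs/2 the recorded amplitude depends on phase; just below, it never dies.
import numpy as np

N = 512
n = np.arange(N)

def worst_case_amplitude(cycles):
    amps = []
    for phase in np.linspace(0, np.pi, 181):
        samples = np.sin(2 * np.pi * cycles * n / N + phase)
        amps.append(np.abs(np.fft.rfft(samples))[cycles])
    return min(amps)

print(worst_case_amplitude(255))  # just under Nyquist: ~256 at every phase
print(worst_case_amplitude(256))  # exactly Nyquist: ~0 at the worst phase -- gray
```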

Again, this entire discussion is predicated on whether or not you accept that by "frequency," the Nyquist theorem means "frequency of the highest order sine wave." And it does. The Nyquist theorem refers specifically to sine waves. 

Nyquist applies to computer graphics, sound (yes, a sine wave through a synthesizer has a different frequency than a square wave of the same fundamental, at least so far as Nyquist is concerned), and images. But in each case Nyquist applies to sine waves specifically. By frequency, the theorem refers to the frequency of the highest order sine wave. If we can't accept that as a starting point we'll get nowhere. 


33 minutes ago, HockeyFan12 said:

There's no need to oversample beyond the 2x conversion from line pairs to lines. If we have N/2 line pairs (cycles), a sensor with >N photosites in that axis can resolve them all. And N/2 line pairs translates to N lines. So we can resolve lines up to the limit of the sensor's resolution (not only up to half that limit), and a 4k sensor can indeed resolve up to 4k lines on a sinusoidal zone plate without aliasing. Practically, all Nyquist really does is recognize that a sine wave has two humps: one positive hump that goes from 0 to A (amplitude) to 0, and one negative hump that goes from 0 to -A to 0. We need to sample twice per sine wave, picking up each of the two humps once, or else we will get aliasing. But if those humps are exactly out of phase with the sampling, we'll get a phase-cancelled signal (gray, or 0), so we need to sample at more than twice the frequency to leave a recoverable trace of the signal.

Again, this entire discussion is predicated on whether or not you accept that by "frequency," the Nyquist theorem means "frequency of the highest order sine wave." And it does. The Nyquist theorem refers specifically to sine waves. 

Nyquist applies to computer graphics, sound (yes, a sine wave through a synthesizer has a different frequency than a square wave of the same fundamental, at least so far as Nyquist is concerned), and images. But in each case Nyquist applies to sine waves specifically. By frequency, the theorem refers to the frequency of the highest order sine wave. If we can't accept that as a starting point we'll get nowhere. 

Sorry man, that's so far off, with no basis in math, science, or real-world evidence, that I think you're right: there's no point discussing this further. In science you've got to provide equations which predict a result; when real-world results match the predicted outcome, we are reasonably sure we understand how something works. Here is another source saying the same thing regarding 2x sampling, which you have completely ignored:

https://www.microscopyu.com/tutorials/spatial-resolution-in-digital-imaging

Quote

The numerical value of each pixel in the digital image represents the intensity of the optical image averaged over the sampling interval. Thus, background intensity will consist of a relatively uniform mixture of pixels, while the specimen will often contain pixels with values ranging from very dark to very light. The ability of a digital camera system to accurately capture all of these details is dependent upon the sampling interval. Features seen in the microscope that are smaller than the digital sampling interval (have a high spatial frequency) will not be represented accurately in the digital image. The Nyquist criterion requires a sampling interval equal to twice the highest specimen spatial frequency to accurately preserve the spatial resolution in the resulting digital image. An equivalent measure is Shannon's sampling theorem, which states that the digitizing device must utilize a sampling interval that is no greater than one-half the size of the smallest resolvable feature of the optical image. Therefore, to capture the smallest degree of detail present in a specimen, sampling must occur at a rate fast enough so that a minimum of two samples are collected for each feature, guaranteeing that both light and dark portions of the spatial period are gathered by the imaging device.

All the sources I have provided in this thread are saying the same thing, as does simple inspection of a drawing of trivial lines:

[Image: image013.png — the one-pixel vs. two-pixel line example]

which shows that we need at least two pixels to draw the smallest feature without aliasing (the same thing stated by microscopyu.com and all the other sources provided in this thread). Your response is that you have secret knowledge from a lead camera engineer which states that Nyquist doesn't apply (no need for 2 pixels, 1 pixel is sufficient), yet you're only willing to share it in a private PM. Do you see the absurdity of that argument in an open, science-based discussion?

Since Nyquist states >2x, microscopyu.com correctly states that for image sampling we really need 2.5x or 3x for best results:

Quote

To ensure adequate sampling for high-resolution imaging, an interval of 2.5 to 3 samples for the smallest resolvable feature is suggested.

You haven't provided a test chart showing a 4K sensor resolving 4K lines without aliasing because it's impossible. We need at least 2 pixels to represent the smallest line or feature possible without aliasing. Here it is again:

[Image: image013.png — the one-pixel vs. two-pixel line example, once more]

The line on the left is what you are proposing; the line on the right uses at least 2 pixels to prevent aliasing. I can't think of anything simpler or clearer.

Audio signal sampling is the same as image signal sampling: a 44.1kHz ADC converts the analog voltage from a sound pressure transducer, and each audio sample represents the voltage at 1/44,100 of a second. We don't clump the samples into pairs. The input signal passes through an analog low pass filter which cuts frequencies as per Nyquist: 44.1kHz/2 = 22,050Hz. So the highest frequency we can record without aliasing is 22kHz (slightly below, because of >). And in the real world we oversample much more than 2x, as analog filters aren't perfect.
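Here's the audio case as a sketch (standard CD-rate numbers, my example): a tone above fs/2 sampled without a low pass filter folds down to an alias.

```python
# A 30 kHz tone sampled at 44.1 kHz with no low pass filter folds down to 14.1 kHz.
import numpy as np

fs = 44_100
t = np.arange(fs) / fs                 # one second of samples, so FFT bins = Hz
tone = np.sin(2 * np.pi * 30_000 * t)  # input above the 22,050 Hz Nyquist limit

print(np.abs(np.fft.rfft(tone))[1:].argmax() + 1)  # 14100 = 44100 - 30000: an alias
```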

In graphics and sensors, each pixel is a sample. For a 4K sensor, we use an optical low pass filter to cut frequencies above 4K/2 = 2K. So we can represent at most 2K lines, as each line needs at least 2 pixels, as shown above, to avoid significant aliasing (3 pixels would be better; 2 pixels is good enough in practice). It's that simple.

There's also temporal aliasing. In order to move a minimum-sized dot smoothly across the screen, we need at least 2 pixels, with those 2 pixels varying in brightness as the dot moves smoothly across the pixel grid. Just one pixel will alias in time, as it jumps discontinuously across pixel boundaries.

This 2x (2 pixel factor) also applies to features captured by elements moving across an image sensor.
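That dot is easy to sketch too (my toy rendering, 12 pixels wide): snap it to the nearest pixel and it jumps; split its light across 2 pixels and it crossfades smoothly.

```python
# A dot snapped to one pixel jumps; light split across 2 pixels moves smoothly.
import numpy as np

WIDTH = 12

def snapped(pos):
    row = np.zeros(WIDTH)
    row[int(pos + 0.5)] = 1.0  # all of the dot's light lands on one pixel
    return row

def filtered(pos):
    row = np.zeros(WIDTH)
    i = int(np.floor(pos))
    row[i] = 1.0 - (pos - i)   # split the dot's light across two pixels
    row[i + 1] = pos - i
    return row

for pos in (3.0, 3.25, 3.5, 3.75, 4.0):
    print(pos, snapped(pos).argmax(), filtered(pos)[3:6].round(2))
```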

That's all I got brother, peace out! :) 

 


From my perspective, the only math you're doing that I'm not is dividing by two an unnecessary extra time. (That, and we disagree on how the frequency of square waves relates to the Nyquist theorem.) But I understand your reasoning, and most of what you've written closely mirrors what I believe to be a widespread misunderstanding, one that I used to share and one that I see online a lot.

So I agree we can agree to disagree. Which is fine.

But I think we can also agree that the basis of our disagreement is whether Nyquist applies to sine waves specifically, or whether it concerns only the fundamental frequency of the wave and ignores overtones. IMO, regardless of whether it's audio or an image, a square wave is of effectively infinite frequency at its rise and fall regardless of the fundamental, whereas a sine wave is of its fundamental frequency throughout its sweep. These are the frequencies I contend Nyquist applies to, and I think this is what our disagreement is mostly about.

That said, I agree that in practice oversampling is very beneficial! There's no arguing with those test charts and the F65's performance! (Those charts are still binary, thus square wave, btw ;) .)

FWIW, I did post an example (with images) a few posts above showing that what I've written is fully consistent with real-world performance, given a monochrome sensor and a sine wave in the input field. Again, we're back to the same fundamental (pun intended) disagreement.

