Everything posted by jcs

  1. Looks like a combination of unsharp mask with a normal (somewhat small) radius and LCE (local contrast enhancement), which is unsharp mask with a very large radius (e.g. 300). It's too bad PP CC doesn't GPU-accelerate unsharp masking; however, it is possible to achieve the effect with GPU acceleration by copying the video track, blurring the copy (Gaussian Blur: 300), scaling the original (ProcAmp: Contrast & Saturation: 200) and subtracting the Gaussian-blurred copy (Blend: Subtract); see the sketch after this post. I've been using 1DX II 1080p24 to get full-frame shallow DOF and smaller files with nice results (sharpen around 50 in Lumetri/Creative in PP CC). Combined with a little fine noise grain, sharpening plus LCE can look very filmic on the 1DX II, even with its initially soft 120fps 1080p. Regarding Andrew's tests: every one of those 120fps cameras produces acceptable results. I was surprised to see the Alexa and Phantom missing though.
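For anyone who wants to play with the blur-and-subtract idea outside Premiere, here's a minimal NumPy/SciPy sketch (my own illustration; the sigma and amount values are arbitrary placeholders, not the actual Lumetri/ProcAmp math):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma, amount):
    """Add back the difference between the image and a blurred copy of it.
    A small sigma gives conventional sharpening; a very large sigma
    (the Photoshop 'radius ~300' case) gives local contrast enhancement (LCE)."""
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Dummy grayscale frame with values in [0, 1]
frame = np.random.rand(1080, 1920).astype(np.float32)

sharpened = unsharp_mask(frame, sigma=1.5, amount=0.8)   # normal-radius unsharp mask
lce = unsharp_mask(sharpened, sigma=100.0, amount=0.3)   # very wide radius = LCE
```

Note that the Premiere layer stack described above (contrast 200 on the original, subtract the blurred copy) works out to roughly img + (img - blurred), i.e. an amount of about 1.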
  2. You'll probably get more play/interaction on Instagram (limited to 1 minute, need to use Google Drive or Dropbox to save to and upload from phone). Put content on all platforms (including Vimeo), and check out Patreon.com too.
  3. Built-in DMT dispenser included, LSD & Mushroom cartridges available as accessories (Amanita Muscaria available in September along with Super Mario AR).
  4. Sorry man, that's so far off with no basis in math, science, or real-world evidence that I think you're right: there's no point discussing this further. In science you've got to provide equations which predict a result; when real-world results match the predicted outcome, we are reasonably sure we understand how something works. Here is another source saying the same thing regarding 2x sampling, which you have completely ignored: https://www.microscopyu.com/tutorials/spatial-resolution-in-digital-imaging
All the sources I have provided in this thread are saying the same thing, and it can be verified simply by drawing lines: we need at least two pixels to draw the smallest feature without aliasing (the same thing stated by microscopyu.com and all the other sources provided in this thread). Your response is that you have secret knowledge from a lead camera engineer which states that Nyquist doesn't apply (no need for 2 pixels, 1 pixel is sufficient), yet you're only willing to share it in a private PM. Do you see the absurdity of that argument for an open, science-based discussion? Since Nyquist states >2x, microscopyu.com correctly states that for image sampling we really need 2.5x or 3x for best results. You haven't provided a test chart showing a 4K sensor resolving 4K lines without aliasing because it's impossible: we need at least 2 pixels to represent the smallest line or feature without aliasing. Here it is again: the left line is what you are proposing; the right line requires at least 2 pixels to prevent aliasing. I can't think of anything simpler or more clear.
Audio signal sampling is the same as image signal sampling: a 44.1kHz ADC converts analog voltage from a sound pressure transducer, and each sample represents the voltage at 1/44,100 of a second. We don't clump the samples into pairs. The input signal passes through an analog low-pass filter which cuts frequencies as per Nyquist: 44.1kHz/2 = 22,050Hz. So the highest frequency we can record without aliasing is 22kHz (slightly below 22,050Hz, because Nyquist requires strictly more than 2x). And in the real world we oversample by much more than 2x, as analog filters aren't perfect (see the sampling sketch after this post).
In graphics and sensors, each pixel is a sample. For a 4K sensor, we use an optical low pass filter to cut frequencies above 4K/2 = 2K. So we can represent at most 2K lines, as each line needs at least 2 pixels, as shown above, to avoid significant aliasing (3 pixels would be better; 2 pixels is good enough in practice). It's that simple.
There's also temporal aliasing. In order to move a minimum-sized dot smoothly across the screen, we need at least 2 pixels which vary in brightness as the dot moves smoothly across the pixel grid. Just one pixel will alias in time, jumping discontinuously across pixel boundaries as it moves. This 2x (2-pixel) factor also applies to scene features moving across an image sensor. That's all I got brother, peace out!
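To make the audio analogy concrete, here's a quick NumPy sketch (my own numbers, nothing camera specific): sample a tone above Nyquist with no low-pass filter in front of the ADC and it folds back as a lower, false frequency.

```python
import numpy as np

fs = 44100              # sample rate in Hz; Nyquist limit is fs/2 = 22050 Hz
t = np.arange(fs) / fs  # one second of sample times

def dominant_frequency(signal, fs):
    """Return the strongest frequency (Hz) in the sampled signal via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return freqs[np.argmax(spectrum)]

ok = np.sin(2 * np.pi * 10000 * t)    # 10 kHz tone: below Nyquist, recorded correctly
bad = np.sin(2 * np.pi * 30000 * t)   # 30 kHz tone: above Nyquist, no analog filter

print(dominant_frequency(ok, fs))     # ~10000.0 Hz
print(dominant_frequency(bad, fs))    # ~14100.0 Hz: 44100 - 30000, a false (aliased) tone
```

The same folding happens spatially on a sensor when detail finer than the photosite pitch reaches it unfiltered.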
  5. Framing Nyquist as being specifically about sine waves is misleading, as is creating examples based on sine waves (smooth gradients) to demonstrate your point, when the real world does not look only like that. Nyquist is about sampling continuous (analog) signals and accurately reconstructing them (it also applies to discrete signals such as digitized sound, as well as computer graphics and any other signal for that matter). Fully understanding Nyquist requires Fourier analysis (FFT, DFT, DCT), where we can break any signal down into component frequencies, which are indeed sine waves (displayed as discrete pixels with computer graphics!); see the sketch after this post. But we don't need to go that deep; one can trivially draw lines to understand aliasing vs. anti-aliasing and the 2-pixel factor. From: https://www.microscopyu.com/tutorials/spatial-resolution-in-digital-imaging By eliminating the divide by 2, you are back to your original position that Nyquist doesn't apply to sensors. If you do agree that Nyquist applies to sensors and computer graphics, where do you apply the divide by 2 (oversampling) in the math? Thanks BTW for this discussion; I have a better understanding of sensors, cameras, and why the F65 looks so amazing!
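Since Fourier came up, here's a tiny NumPy sketch (my own illustration) showing that a hard-edged square wave, unlike a smooth sine, carries energy far above its fundamental frequency, which is exactly the kind of content that has to be filtered before sampling.

```python
import numpy as np

n = 1024
x = np.arange(n) / n
square = np.where((x * 8) % 1 < 0.5, 1.0, -1.0)   # square wave, fundamental = 8 cycles

amplitudes = np.abs(np.fft.rfft(square)) / (n / 2)
significant = np.nonzero(amplitudes > 0.05)[0]

# A pure 8-cycle sine would show a single spike at bin 8; the square wave shows
# the fundamental plus odd harmonics (8, 24, 40, ...) stretching far above it.
# Those harmonics are what alias if they reach the samples unfiltered.
print(significant)
```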
  6. Nyquist applies to all continuous (analog) signals and even purely computer-generated images(!), not just sinusoids: https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem A camera sensor with an OLPF never sees a square wave anyway (no infinite bandwidth), and aliasing happens with fine fabric, brick walls, etc., not just black and white test charts. We could get into Fourier analysis if you want to talk about harmonics, but that's really off topic for this discussion. Again: with W x H sample sites, we need to oversample by 2, and thus at most W/2 x H/2 lines are possible, along with an OLPF to cut frequencies higher than that (thinner lines); see the sketch after this post. This math matches real-world tests, as shown above with Bayer sensors: if we start with 8K (F65) we've got maximum detail and no aliasing, and as we go down in resolution, detail drops and aliasing rises. Without the divide by 2, we're not using Nyquist, which was your original statement in this thread: that Nyquist doesn't apply to image sensors (it does). It also applies to computer graphics in general, so we can talk about these concepts using just graphics, with no optics or sensors at all. The OLPF strength is fairly arbitrary? Not really, it's absolutely tuned to the sensor resolution, with some cameras providing an OLPF swap when capturing at lower vs. higher resolutions. Sure, there is some minor taste tuning regarding sharpness/detail vs. aliasing for all the camera manufacturers who are cheating by calling their cameras 4K. The F65 OLPF is precisely tuned to the math and it shows!
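Here's a small 1-D NumPy sketch of that W/2 point (my own illustration, using a Gaussian blur as a crude stand-in for an OLPF): throw away every other sample of a too-fine stripe pattern and a confident but false pattern comes back; low-pass first and the unresolvable detail simply goes soft.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

w = 240
x = np.arange(w)
stripes = 0.5 + 0.5 * np.sin(2 * np.pi * x / 3)   # stripe period 3 px = 1/3 cycles/px

def dominant_cycles_per_pixel(row):
    """Strongest spatial frequency in the row, in cycles per pixel."""
    spectrum = np.abs(np.fft.rfft(row - row.mean()))
    return np.fft.rfftfreq(len(row))[np.argmax(spectrum)]

naive = stripes[::2]                                               # halve resolution, no filtering
filtered = gaussian_filter(stripes, sigma=1.5, mode='wrap')[::2]   # "OLPF", then halve

print(dominant_cycles_per_pixel(stripes))               # ~0.333: the real pattern
print(dominant_cycles_per_pixel(naive), np.ptp(naive))  # ~0.333 at near-full contrast, but in the
                                                        # half-res image this is a FALSE pattern:
                                                        # 2/3 cycles/px folded back below Nyquist
print(np.ptp(filtered))                                 # tiny: the detail is removed, not faked
```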
  7. Moving back from monochrome sensors to Bayer sensors: because of the undersampling due to the Bayer pattern, Sony is legit in saying the F65 is the only True(-ish) 4K camera (full 4K sampling in green, important for luminance, and 2x color sampling in R & B vs. a 4K sensor; see the photosite arithmetic after this post). In order to make a 4K Bayer sensor not alias with 4K output, the OLPF would have to be very strong, resulting in a soft image. This matches the real-world results by Geoff Boyle, with the F65 showing no aliasing while at the same time providing the most detailed image, and progressively lower resolution Bayer sensors providing less detail and more aliasing. F65 (20 megapixels): sufficiently sampled to provide highly detailed, alias-free 4K. C300 Mark II (8.85 megapixels): undersampled, producing less detail and a lot of aliasing. C700: slightly higher resolution than the C300 II, with less color aliasing but still some luminance aliasing. More tests here: https://vimeo.com/geoffboyle It's possible 8K Red could produce results similar to the F65 for 4K output; not shown on Geoff's page. Why overcomplicate things with line pairs, sinusoids, harmonics, fundamentals and so on? We can also drop the > for this discussion, as slight aliasing is typically OK in the real world (actually much worse is allowed, as shown above). Pixels (photosite output) are samples, not line pairs, right? I think they used line pairs to reflect Nyquist, and thus 2 pixels vs. 1: the thinnest lines require 2 pixels, hence the term line pair. With W x H sample sites, we need to oversample by 2, and thus at most W/2 x H/2 lines are possible, along with an OLPF to cut frequencies higher than that (thinner lines). This math matches the real-world tests above with Bayer sensors: if we start with 8K (F65) we've got max detail and no aliasing, and as we go down in resolution, detail drops and aliasing rises.
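Quick arithmetic on the Bayer undersampling point (just counting photosites on a standard RGGB mosaic, not the F65's actual photosite layout):

```python
# On a W x H RGGB Bayer sensor, each 2x2 block holds 1 red, 2 green and 1 blue photosite:
#   red  samples sit on a (W/2) x (H/2) grid
#   blue samples sit on a (W/2) x (H/2) grid
#   green gets W*H/2 samples (a checkerboard / quincunx pattern)
def bayer_grids(w, h):
    return {"red_grid": (w // 2, h // 2),
            "blue_grid": (w // 2, h // 2),
            "green_samples": w * h // 2}

print(bayer_grids(3840, 2160))  # a "4K" Bayer sensor samples red/blue on only a 1920x1080 grid
print(bayer_grids(7680, 4320))  # an 8K sensor samples red/blue on a full 3840x2160 grid
```

That's why chroma tends to alias before luma, and why roughly doubling the photosite count over the delivery resolution helps so much.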
  8. If (W/2) x (H/2) = frequency, where's the >2x frequency oversampling to prevent aliasing per Nyquist? That's what the divide by 2 does. If we define frequency as W x H, then the max lines possible are W/2 x H/2 (actually fewer, because of >). The latter definition makes the most sense to me, since the individual photosites are doing the sampling. Taking the optics out of the picture and looking at what ends up on the sensor after sampling, we see that we need at least 2 pixels to represent the thinnest line possible without aliasing. Putting the optics back in the picture, the OLPF slightly blurs the input to give us the anti-aliased example on the sensor.
  9. @HockeyFan12: (1) The maximum capture resolution possible for a monochrome sensor is W x H pixels, or W/2 x H/2 line pairs in terms of frequency. (2) The maximum possible capture resolution for (1) without aliasing is (W/2)/2 x (H/2)/2 line pairs in terms of frequency, or W/2 x H/2 in terms of pixels (worked numbers after this post). (1) is the maximum resolution with aliasing, (2) is the maximum resolution without aliasing, and thus we must perform the extra divide by 2 as per Nyquist. Without the extra divide by 2, (2) is the same as (1). Since most OLPFs and camera systems allow a small amount of aliasing, I stopped using > to simplify the statement. Nyquist is >2x if we want zero aliasing (vs. = 2x as written above, simplified for camera systems which allow a slight amount of aliasing). If one argues that (1) & (2) above are invalid, they are stating that Nyquist sampling theory is invalid. We don't need to complicate the statement with fundamentals and harmonics, especially with a sine wave: what are the harmonics of a pure sine wave beyond the fundamental? Nyquist also applies in the purely digital domain, and is used in computer graphics and video games to reduce or eliminate aliasing: http://cs.boisestate.edu/~alark/cs464/lectures/AntiAliasing.pdf
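Plugging numbers into (1) and (2) for a couple of common sensor sizes (just the arithmetic from the statements above, nothing new):

```python
def mono_sensor_limits(w, h):
    """(1) absolute maximum line pairs (will alias), (2) alias-free line pairs,
    and the alias-free resolution expressed in pixels, per the statements above."""
    max_lp = (w // 2, h // 2)
    alias_free_lp = (w // 4, h // 4)
    alias_free_px = (w // 2, h // 2)
    return max_lp, alias_free_lp, alias_free_px

print(mono_sensor_limits(1920, 1080))  # ((960, 540), (480, 270), (960, 540))
print(mono_sensor_limits(3840, 2160))  # ((1920, 1080), (960, 540), (1920, 1080))
```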
  10. This could be fun: create a bunch of different shots which can be easily edited together to tell a short story (really short, less than a minute). People can download the clips, add their own dialog (narration and ADR), then assemble them into something entertaining. Basic minimal story format: show the hero's goal/desire/problem; show the hero overcoming obstacles to reach the goal (drama); celebration and close. A way to get people creating content vs. obsessing over gear.
  11. Looks great, thinking about what the dialog might be... SATT: Shaving, A Trial of Thought. VO in gruff voice: "Man. I've got to shave. But how? A razor? Then I'll need shaving cream. Too messy. Too complicated. Something simpler maybe. Electric razor? Charging. Where to get power? Too complicated. Maybe I'll just grow a beard." END. Here's something to click which inspired the dialog just from watching the ads (didn't buy it, but who needs it, right?): https://www.masterclass.com/classes/aaron-sorkin-teaches-screenwriting
  12. People have argued and debated on the internet since the beginning, on USENET (and even earlier on dial-up BBSs). One reason people are in a much more dramatic space is the constant negative, divisive programming coming daily from mainstream media and the frequent reminders of World War III starting with N. Korea. India is now in a standoff with China. At the same time the US government is under assault from the so-called deep state, run by the bankers looking to hold onto power after the impending global financial reset. Consciously and unconsciously this results in an increase in negative behavior online. We can keep our heads down and fiddle with our cameras and gear, or we can get out and make films and videos designed to bring unity: show how we can work together, how we can survive when the financial reset happens, how we can survive if WW3 happens, and even better, how we can all stand up and voice our global desire that war is not an acceptable solution to the financial crisis. If this is one of the last forums where useful information is shared and people get along, then the pattern is an example to apply to other forums, news site comment sections, Facebook and so on. Now more than ever is the time to figure out how to work together non-competitively and with kindness, for the survival of everyone.
  13. While I agree with you that the information posted on that site is not based on science and the points made are not valid, when one focuses most of their energy on attacking the character of the person vs. what they say, it vastly diminishes one's credibility and projects a lack of self-esteem: perception is projection. With our planet on the verge of another world war (possibly nuclear), it is in our best interest at every level to figure out how to be kind to each other and work together, to help each other non-competitively. Tough times are ahead: the cat's out of the bag regarding global banking and its current state (cryptocurrency is a reaction to the problem and may not solve it). It's in our best interest to prevent more division and instead promote unity and helping each other, as that's the best chance of preventing the next big disaster, or of helping us survive and heal if it cannot be prevented. While it's important to be aware of negative things such as war and global financial collapse, it's more important to be thinking about living in harmony, as our thoughts do indeed change reality in ways that are currently beyond our scientific understanding: http://www.princeton.edu/~pear/implications.html Here's a challenge for everyone: don't put anyone down or project negativity for a week. Smile at everyone you see. Meditate at least 10 minutes every day (it's easy once you start doing it regularly). See how people treat you differently and how your life and outlook on life changes. I'm not perfect, and this post is a reminder to myself to practice the same. On the topic of contrast, micro contrast, and resolution: it's all the same thing, the ability to measure differences / deltas / changes at multiple frequencies, from low to high. MTF charts and shooting lens charts are useful tools for showing lens performance without pseudoscience. The lens effect you mention regarding parallel light is collimation: https://en.wikipedia.org/wiki/Collimator, https://oceanoptics.com/product-category/collimating-lenses/. For aesthetic and artistic appeal, these kinds of tests are helpful: http://www.thehurlblog.com/lens-tests-leica-summicron-c-vs-cooke-s4-film-education/, http://www.thehurlblog.com/cinematography-online-why-do-we-want-flat-glass/. We can see that defects and artifacts can be appealing, depending on the intended use of the lenses and projects.
  14. $23,100,000, on sale, limited stock. Film your own moon landing movie, just add sand and front projection. https://www.premiumbeat.com/blog/10-incredible-camera-lenses/
  15. https://en.m.wikipedia.org/wiki/Carl_Zeiss_Planar_50mm_f/0.7
  16. Rent or borrow an Alexa, shoot something great, problem solved? You'll include camera and gear rental costs in higher end gigs.
  17. @HockeyFan12, I think you misunderstood: I'm not saying you have to oversample by 2x for the monochrome case. Let's take the perfect example again: the line chart and sensor line up perfectly, no OLPF. Real-world test chart lines are transformed into digital lines: pixels. The 1920x1080 sensor has created 960x540 line pairs, where each pair is made of 2 pixels, giving us 1920x1080 output. Perfect! OK, let's cause some trouble and shift the sensor a tiny bit: Sh*t!!! Where'd all our lines go, dude? It's just a gray mass! Now we have 0 line pairs: mush. What if we slowly move the sensor sideways, what would that look like as video? Black & white lines alternating with flashing gray: terrible aliasing (see the simulation after this post)! How can we fix it? When we add the OLPF, both the first (aligned) case and the second (misaligned) case produce a gray mass. However, no more aliasing! That's because the OLPF is filtering out frequencies above the Nyquist limit. The thinnest lines we can now capture without aliasing will always take up at least two pixels, instead of one (for this test case). A 1920x1080 mono sensor with no OLPF can (sometimes!) capture 960x540 line pairs, but suffers from terrible aliasing. A 1920x1080 mono sensor with a proper OLPF can capture 1/2 the frequency, so 960x540 line pairs become 480x270 line pairs, with each white line taking 2 pixels (blurred) and each black line taking 2 pixels (blurred), so 480 x 4 = 1920 and 270 x 4 = 1080. I borrowed Alister Chapman's images above; here's another write-up of his explaining the same thing with more words (also read the comment section): http://www.alisterchapman.com/2011/03/10/measuring-resolution-nyquist-and-aliasing/. It's true that OLPFs aren't perfect, and they are rarely tuned to prevent all aliasing: they accept slight aliasing in exchange for a little extra apparent sharpness.
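Here's a little 1-D simulation of exactly that thought experiment (my own sketch; a Gaussian blur stands in for the OLPF and each sensor pixel just averages the light falling on it): stripes at one line per pixel flip between full contrast, gray mush, and inverted as the sensor shifts, while the pre-filtered version stays put.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

oversample = 100     # sub-pixel resolution of the "analog" test chart
pixels = 32          # sensor pixels across a 1-D slice

# White and black lines, each exactly one sensor pixel wide (period = 2 pixels)
chart = np.tile([1.0] * oversample + [0.0] * oversample, pixels // 2 + 2)

def expose(scene, offset_subpixels):
    """Average the scene over each sensor pixel's footprint at the given shift."""
    out = np.empty(pixels)
    for p in range(pixels):
        start = p * oversample + offset_subpixels
        out[p] = scene[start:start + oversample].mean()
    return out

# Crude stand-in for an OLPF: blur the chart by about one pixel before sampling
olpf_chart = gaussian_filter(chart, sigma=oversample, mode='wrap')

for shift in (0, 50, 100):   # aligned, half-pixel shift, full-pixel shift
    raw = expose(chart, shift)
    filt = expose(olpf_chart, shift)
    print(shift, round(float(np.ptp(raw)), 2), round(float(np.ptp(filt)), 2))

# raw:  contrast 1.0 when aligned, 0.0 at the half-pixel shift, 1.0 (inverted) at a
#       full pixel -- the flashing-gray aliasing described above
# filt: essentially flat gray at every shift -- soft, but stable, with no flicker
```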
  18. The box stays up in MF mode (using native lens). Searching for Direct Focus Area found this: https://***URL removed***/forums/thread/3677576 See last post here; might help: http://www.dvxuser.com/V6/showthread.php?325388-Turning-off-focus-pinpoint
  19. Goes away with the menu option as noted, and when in AFC mode (when enabled in the menu).
  20. Have you tried cycling through the active display options, or the MF/AF switch?
  21. Add the D800E to the no-OLPF still cameras. Also, the OLPF in the prior post is likely tuned to the Bayer photosite size (not to green); red and blue are 1/4 resolution with green at 1/2 resolution relative to the full sensor (not red & blue at 1/4 relative to green). So all 3 colors are undersampled in a typical Bayer sensor (not the F65).
  22. That's for the GH4 (no GH5 here): have you gone through all the menu settings and searched the PDF manuals?
  23. (C300 II & C500 vs. Alexa, Epic, F55 and false detail: the C300 II & C500 have lower resolution Bayer sensors, so they compensate with weaker OLPFs to gain apparent sharpness vs. the much higher resolution competitors. The tradeoff is more aliasing (I wasn't happy when I saw it after upgrading from an alias-free 5D3!). Thinking about the limit, as sensor resolution increases the need for an OLPF decreases, and at the limit we don't need an OLPF at all (e.g. A7R II, 5DSR). So it makes sense that the higher resolution cameras have less false detail, especially in the case of the Alexa capturing 2.8K and delivering 2K: capture ultra high resolution without an OLPF, then provide a filtered/blurred downsample to the target resolution.)
Two ideas: (1) the maximum resolution capture possible from a camera sensor, and (2) aliasing.
I agree a 1920x1080 monochrome sensor can indeed capture a maximum resolution of 1920x1080 pixels. In terms of frequency, we need an up and a down so we get zero crossings to form cycles (2 pixels), so that's 960x540 line pairs in terms of frequency. Nyquist says if we want to eliminate aliasing, we must sample at > 2x the desired frequency. If we line up a test chart and a 1920x1080 camera sensor perfectly, we can capture 960x540 LP without aliasing (1920x1080 pixels). However, as soon as we start moving around, it will alias like crazy. We fix it by applying an optical low pass filter (OLPF), so that the 'input' pixels are now effectively >2x wider and taller. Starting with the initial case of lining the sensor up perfectly with the test image, we've got a sharp, 1920x1080 pixel capture. Now we apply the OLPF and the image appears less sharp and somewhat blurry, now capturing 480x270 LP in terms of frequency, or 960x540 pixels (it's still 1920x1080 pixels, just slightly blurry). However, if we move the sensor around, the non-OLPF capture will alias like crazy, while the OLPF version will look nice, without aliasing (to the limits of the OLPF). This holds for any image with high-frequency information beyond Nyquist, not just test charts (brick walls and fabrics are common problems). Which means: (1) the maximum capture resolution possible for a monochrome sensor is W x H pixels, or W/2 x H/2 line pairs in terms of frequency; (2) the maximum possible capture resolution for (1) without aliasing is (W/2)/2 x (H/2)/2 line pairs in terms of frequency, or W/2 x H/2 in terms of pixels. An HD (1920x1080) monochrome sensor can capture up to 480x270 line pairs (960x540 pixels) without aliasing when using a proper OLPF; a 4K (3840x2160) monochrome sensor can capture up to 960x540 line pairs (1920x1080 pixels) in the same way.
Another way to think about it: can you draw a black 1-pixel-wide vertical line in Photoshop without aliasing? Now draw the line again with a slight angle (not perfectly diagonal). It's going to have discontinuous breaks now, which is aliasing. How can we fix it? We need to add grayscale pixels in the right places, so now the line must be at least 2 pixels wide in places to provide anti-aliasing (see the sketch after this post). If we create a grid of alternating black & white 1-pixel-wide lines, which is the maximum possible frequency, we can't rotate it and still have the original pattern: if we don't antialias, it will be a random-looking jumble of pixels; if we antialias, it will be a gray rectangle. Either way the original pattern is gone. As we make the grid lower resolution, so that we can antialias it as we rotate it, the pattern remains visible.
The problem with Bayer sensor cameras (except apparently the F65) is that the OLPF is tuned such that luminance is effectively antialiased (mostly from green), but since red and blue are 1/4 resolution vs. green, we end up with color aliasing. Some cameras of course alias in both luminance and chrominance: a weaker OLPF or no OLPF is being used. If we want alias-free 4K, we need an 8K sensor and a tuned OLPF. It's 2x again, now due to the undersampling from the Bayer sensor in R & B. This is one of the factors in the 'film look': there must be zero aliasing, and the image can even look a bit soft. However, the noise grain can be much higher frequency, providing an illusion of more texture and detail. Right now it appears only Sony's F65 and maybe Red's 8K sensors can provide True 4K, so Netflix ought to get busy and update their camera list! I believe this could stand up in a court of law, so excluding the Alexa because it's only (max) 3.4K is BS, since only the F65 and maybe 8K Red provide real, actual 4K; all the others are undersampling in R & B (including the Alexa 65). The marketing angle should be that any camera which provides over-HD resolution and detail requires a 4K display, and will thus look better than HD when streamed on Netflix's 4K subscription plan and viewed sufficiently closely on a 4K display.
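And here's a tiny NumPy sketch of the Photoshop angled-line experiment (my own illustration): snapping the line to the nearest pixel keeps it 1 px wide but makes it stair-step, while coverage-weighted anti-aliasing spreads it across two columns on most rows, exactly the "at least 2 pixels" point.

```python
import numpy as np

height, width = 16, 16
slope = 0.15   # slight tilt: the line drifts 0.15 px sideways per row

aliased = np.zeros((height, width))
antialiased = np.zeros((height, width))

for y in range(height):
    x = 4 + slope * y                     # true (continuous) horizontal position
    # Aliased: snap to the nearest pixel -- the line stays 1 px wide but
    # jumps a whole column at once, producing stair-stepped breaks
    aliased[y, int(round(x))] = 1.0
    # Anti-aliased: split the intensity between the two columns the line overlaps,
    # so it occupies up to 2 pixels with graded gray values
    left = int(np.floor(x))
    frac = x - left
    antialiased[y, left] = 1.0 - frac
    antialiased[y, left + 1] = frac

# How many columns carry ink on each row?
print((aliased > 0).sum(axis=1))       # all 1s: hard-edged, stair-stepped line
print((antialiased > 0).sum(axis=1))   # 2 on almost every row: the smooth line
                                       # needs two pixels across, as argued above
```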