Everything posted by maxotics

  1. @JCS's comments are very interesting, as always. I've been looking for something similar for a family podcast I'm doing. I've been using the Tascam 70D, which works well; however, I would like whatever I use to double as a USB interface (the 70D doesn't). In that vein, the Yamaha MG10XU 10-Input Stereo Mixer with Effects is pretty interesting. I watched a YouTuber set it up, so I'm throwing that into the ring...sorry! Those Sound Devices units are very interesting (thanks JCS) and seem like a good investment. Anyway, one aspect you might also consider is the size of the knobs and meters, and the ability to listen to isolated channels, etc. Those are the things I'm really starting to appreciate, especially after using the 70D. Although I can live without analogue preamps, I'll just taser anyone who talks too loud. Reading about all the professionals, like JCS, who can't live without the finer details that Sound Devices takes care of makes me wonder if this is a place to go cheap, or to make the investment.
  2. Between the two of us, we're going to confuse a lot of people about HDR. In reference to the issues with S-LOG, I want to point out that ISO is, again, just a reference point for where an image should be properly exposed in the data stream. As far as I know, ISO has no functional effect on the sensor's work; it is only AFTER the sensor is exposed that it is used to set the gain, so to speak, on the signal. And this is why it isn't as simple as extending the ISO, as in HDR, to increase dynamic range: there is a component of perception in dynamic range, which is why HDR can easily look artificial or plastic. When you set ISO, you're also setting your center point of exposure; brightness and dimness extend out from it in a way that should look natural to us. When you have two frames of data, one at ISO 100, say, and one at ISO 800, you actually have two perceptual expectations of how light goes from dark to bright. When you try to fill in clipped areas in either direction, dark or bright, our brain can quickly see when something doesn't make sense. This relates to S-LOG because it, too, records a range of light beyond our normal expectations. Mostly we don't notice if there is little color, because we're more sensitive to basic contrast. In both HDR and S-LOG, the more color fidelity you go after in normal DR, the more these issues fight back.
  3. It's related! You can't change the physics of a sensor. 100 ISO essentially means the sensor can get a perfectly accurate color, with no noise, at f/1 for 1 second, or something like that. The further you move away from that strength of light, the less accurately the sensor can read it, to the point where it is "guessing" so badly it looks like noise. Let's talk about light at the 7th stop of DR in a physical scene of 15 stops, where the main exposure captures the 3rd to 9th stops of the scene (that is, exposes for the 6th stop of physical brightness). In 8-bit photo/video, the 10th stop of physical light becomes white because it lands at the 6th stop, the end of the 8-bit space. How do you get that data? In HDR, the camera takes a different exposure where it exposes for the 7th stop of physical brightness, meaning the 6th stop of the 8-bit recording is now capturing that extra stop in the scene. If you shoot RAW, however, you're getting that 7th stop anyway, because you're getting 12 stops of dynamic range. HOWEVER, the raw values at the 7th stop aren't as good as the main exposure; again, the further you move from your main exposure, the lower the color quality. So which is better: using the 7th stop of RAW, or using a 7th stop where the amount of light into the camera has been changed? I haven't seen any analysis of that question, but it's a super one in my book! Although ML has an HDR video feature, I don't believe it takes different exposures; it just takes a different reading of the RAW data. If the manufacturers could figure out how to change aperture or shutter speed 30 times a second for video, well, that would be interesting indeed!
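The stop arithmetic in the post above can be sketched as a toy model. The 8-stop recordable window and the specific stop numbers are assumptions for illustration, not any camera's spec:

```python
def captured_stops(expose_for, window=8):
    """Toy model: which integer scene stops land inside the camera's
    recordable window when we expose for scene stop `expose_for`.
    `window` (8 stops here) is an assumed recording range, not a spec."""
    half = window // 2
    return set(range(expose_for - half, expose_for + half))

main = captured_stops(6)   # expose for the 6th stop of scene brightness
hdr  = captured_stops(7)   # second HDR exposure, one stop brighter

# The second exposure picks up the 10th scene stop, which clipped
# to white in the main exposure.
print(10 in main, 10 in hdr)  # → False True
```

With the window centered on the exposure stop, exposing for stop 6 records scene stops 2-9, so the 10th stop clips; re-exposing for stop 7 records stops 3-10 and recovers it, which is all HDR bracketing really does.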
  4. Let me try again. You're in a well-lit room where you can shoot 100 ISO (or whatever allows the camera to get the best saturation of colors--lowest noise). The brightest part of the room is 6 stops greater than the dimmest part. There are 16 million colors one can discern on the objects. The camera records 256 values from each pixel, either R, G, or B, and puts them together as a 24-bit value, 1 to 16 million. These values combine both color information (what filter is over the pixel) and brightness information. However, they are just data points, and in our 24-bit full color space we can fit them all in, but no more. When we view the image, our display superimposes a GAMUT over these values, so value 255 of RED, say, is almost white, as bright as that red can be, and value 1 is almost black, or as dark as the red can be. In a sense, we have 2.6 million colors per stop of gamut. If we extend the GAMUT on our display by even 1 stop, which is possible on some high-end models, we are not getting any more color information because, again, we only have 16 million values. It will look different, and we may like that look, but it isn't actually giving us more color-wise, or in hues. When you shoot S-LOG you're using a GAMUT that is not really visible to us in real life--that's my understanding now--because you always have to bring that gamut down to 6 stops for the reality of our equipment (and viewing preference). When we talk about "compressing" that information down, we're not recognizing that when we record into that extended gamut in 8 bit, there are a lot of color holes. Those holes are empty for good. A LUT can't replace a color that isn't there with a better color, no matter how nice that color might be.
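The "colors per stop" figure in the post above is plain division; a quick sketch (the 6-stop gamut is the post's assumption, and real gamma curves don't spread codes evenly):

```python
# 8-bit RGB: 256 code values per channel, three channels.
total_colors = 256 ** 3           # 16,777,216 distinct 24-bit values
stops = 6                         # assumed display gamut, per the post
per_stop = total_colors // stops  # ~2.8 million ("2.6 million" above rounds from 16M)

# Stretching the SAME 16.7M codes over a wider gamut adds no new colors;
# it just leaves fewer codes per stop:
per_stop_wider = total_colors // 7

print(total_colors, per_stop, per_stop_wider)
```

This is the core of the argument: extending the gamut redistributes a fixed budget of code values, it doesn't enlarge it.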
  5. How do they have nothing to do with the color space recorded? What is your definition of color? Gamut is applied to color values after they are recorded. Tell me if I'm missing something. So gamut has nothing to do with the amount of colors you record in an 8-bit space. Or it has everything to do with it, because when you record into an extended gamut, what happens? @EthanAlexander, don't get me wrong. This was extremely confusing to me for a long time! Maybe it still is! What I didn't understand is that although the camera records brightness, it doesn't end up as brightness in the video data; it only ends up as colors, which have a gamut applied to them later to give them brightness.
  6. I see that 10-bit is better than 8-bit, certainly; I'm only pointing out that it doesn't bring one closer to RAW in color depth (in my experience). Can we see a wider color gamut than REC709? We can certainly see one in nature, because things can be brighter than 6 stops, say, and our pupils dilate, but in the real world our displays and needs don't exceed it. I want to make the point again that you can get beautiful footage from S-LOG; I'm not talking about the subjective benefits of S-LOG. Or to put it another way, if all cameras started with S-LOG and today they introduced standard profiles, everyone would be like "Wow, look at all those rich colors!"
  7. Oliver, you know I love you to death and greatly admire your work. But I have to push back a bit here. When you say "S-LOG is totally worth it, if you know how to use and treat it," you're insinuating that S-LOG is better than standard profiles, that everyone should shoot it. Right? It is ONLY extended dynamic range in the sense that it picks up extended brightness values in the scene; it is NOT extended COLOR range--in fact, quite the opposite. The extended brightness range you get is TRADED OFF against color fidelity. Now, you may only want to see those 10 million colors, and I respect that, but to say that everyone should shoot in a way that degrades the camera's FULL color fidelity I don't understand. As for highlight rolloff, yes, I can see that, but again, it ignores what you lose in the mid-tones. Please think about what I wrote more carefully and tell me why S-LOG does not trade off "brightness contrast" in a scene against color fidelity in the full gamut we expect. Tell me why what you're saying is an objective statement that S-LOG is better than standard profiles.
  8. I believe I know what you're saying when you say "compressed to REC709's color gamut." But isn't compressed a bad word? Compression usually means doing a bunch of math to represent a complex image in a smaller data file because a lot of the data is repeated. That is, it's easy to imagine how compression could remove a lot of data from an image of the ocean and sky, but not from an image of a fine tapestry. Isn't it more like truncated? No matter what kind of LUT you use, there is missing color information. I'm glad you brought up 10-bit S-LOG. I don't see much of a difference between 8-bit S-LOG and 10-bit S-LOG (on Sony cameras) because the 9th and 10th bits are used AFTER the sensor has been read. That is, I believe many people think the higher bit resolution approaches the 14 bits of single-channel RAW, when it remains an apples-to-oranges comparison. I haven't done any tests using 10-bit, but I suspect I would still find nearly 30% of color lost in an aggressive S-LOG, as I found in 8-bit. Is there anything remotely simple about S-LOG? Let me explain a different way. Your monitor can display 16 million colors. If it's 1080 it will only show about 2 million at a time, so you need about 8 images to show the full capability of your display. If you were taking a video of 8 drawings you created on your computer, and the 8 drawings added up to 16 million colors, you'd want your camera to record every color, right? If you shoot a normal profile and count up each pixel color in your video, you will get 16 million. If you shoot S-LOG, you will get, say, 10 million.
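The "count up the colors" test described above can be run literally on decoded frames; a minimal sketch (the sample "frame" here is made up for illustration):

```python
def count_unique_colors(pixels):
    """pixels: iterable of (r, g, b) tuples, 0-255 each.
    Packs each triple into one 24-bit int and counts distinct values."""
    return len({(r << 16) | (g << 8) | b for r, g, b in pixels})

# A synthetic 'frame': a red ramp repeated twice still has only 256 colors.
frame = [(v, 0, 0) for v in range(256)] * 2
print(count_unique_colors(frame))  # → 256
```

Running this over every frame of a normal-profile clip versus an S-LOG clip of the same scene is one way to put a number on the claimed color loss.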
  9. Sorry, let me clarify. First, my understanding is that S-LOGs are designed to capture a wider dynamic range of scene brightness outdoors, and it is understood by the cinematographer that they will lose color fidelity. Although image data can be compressed, is there such a thing as a "compressed CODEC"? That is, whether shooting 8-, 10-, or 12-bit H.264 or 14-bit-per-channel RAW, you are limited by your largest data "word" value. This is where I scratch my head. Many people talk about it in a way that makes me think either I'm missing something or there is a lot of misconception out there about what CODECs do. That is, no amount of compression can change the number of color values possible in the data space. I wouldn't put it that REC709 wastes precious color information on S-LOG; isn't it that S-LOG wastes precious color information when it ends up in REC709, due to 1) sensor limitations and 2) the fixed size of your color space (8-bit, 10-bit, whatever)? That's my question.
  10. I did a video on this (2nd try), but it's very dull (link below). My question to the forum is this: what am I missing in this logic?
     1. 8-bit video captures roughly 16 million color values.
     2. Human vision is around 12 million colors (but let's call it 16 million).
     3. Human vision is around 5-7 stops of dynamic range (without pupil change).
     CONCLUSION ONE: 8-bit color can deliver a complete color representation to a human; that is, 16 million colors over a 6-stop gamut, let's say.
     4. In S-LOG, one is widening the dynamic range to 10 stops, let's say, and spreading color information across the fixed 8-bit data space, which means we're losing saturation compared to the 16 million colors over 6 stops?
     5. Compounding the above's theoretical question, a sensor becomes noisy or erratic a few stops above and below the range where it can accurately do #4.
     Therefore, doesn't one trade 16 million colors of better saturation for fewer colors and noise? (That's my finding after doing some experiments.) My current conclusion is that S-LOG is not really about dynamic range, unless you interpret that as capturing only brightness values in the wild. S-LOG is a "look". I'm personally tired of it, but that has no bearing here. The question is, isn't it a misnomer to say LUTs put the color back in? When one shoots S-LOG, aren't they walking away with more noise than accurate color values on which to apply the LUT? Will S-LOG look very dated and washed out in the future? Why isn't there more discussion of the destructiveness of S-LOGs on color data? Thoughts?
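One way to make the saturation trade-off in the post above concrete: the per-channel code values are fixed by bit depth, so spreading them over more stops leaves fewer per stop. The stop counts are the post's round numbers, and this even spread ignores the actual log curve:

```python
def levels_per_stop(bit_depth, stops):
    """Toy model: evenly spread per-channel code values across `stops`
    stops of scene brightness (real log curves are not this even)."""
    return 2 ** bit_depth / stops

print(levels_per_stop(8, 6))   # standard profile over ~6 stops → ~42.7 levels/stop
print(levels_per_stop(8, 10))  # S-LOG over ~10 stops → 25.6 levels/stop
```

Under these assumptions, each stop of an 8-bit S-LOG recording has roughly 40% fewer tonal steps per channel to work with than a standard profile, before any noise is considered.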
  11. Yes, I agree with you. A1ex is brilliant. My understanding is that 12-bit is really just the 14-bit values with the top 2 bits cut off, which are usually over-exposed data that isn't used. Now that I have a C100 I can see the difference between it and Canon's 80D H.264 and 7D RAW. The 80D video, as you point out, has a nice color feel, but looks soft next to the C100 AND the 7D RAW. The 7D RAW has a grainy, organic, film-grain look that, for my money, can match the clean image of the C100. The only thing that stops me from using the 7D lately is the 4-gig-a-minute recording. I just want to say again that the 7D RAW camera is the best deal out there! Canon doesn't make their own sensors. Is Sony squeezing them? Maybe it's a sensor issue that is preventing Canon from offering 4K on full-frame cameras. Yet, if that's the case, all the more reason for them to compete with Sony using firmware, like RAW. I don't get it.
  12. I went to the ML forum to get more specifics, to flesh out my memory. Until 2013 there was a lot of discussion about 10/12-bit RAW, then things went dead. On May 11, 2016, A1ex, one of the lead devs, posted some findings based on "@d". He refers to the raw_twk thread, which itself died in 2014 but picked up again in 2016. I assumed "@d" came and went in 2016, but he was active in 2013, made only 6 posts, then disappeared. So it looks like he had a good insight into 10/12-bit video back in 2013, but it was forgotten until A1ex revisited it in 2016. "d's" future game-changer post is here: http://www.magiclantern.fm/forum/index.php?topic=5601.msg38946#msg38946 Maybe he re-signed onto ML using a different name and is active today. I don't know. I shouldn't have speculated that he was a Canon insider, because it doesn't look like I can have any fun here. The main point is that Canon cameras can do a lot more than they do from the factory, and 10/12-bit RAW is more evidence of that. I was supporting the argument of the original story.
  13. I wasn't trying to prove anything. What's silly about it? Who would join ML, then give a very specific technical hint beyond the knowledge of most people, then disappear? Also, I wasn't saying RAW development was dead, seriously? I said "that effort" and meant the effort to do 10 and 12-bit RAW.
  14. To add to the mystery, there is the possibility that someone within Canon leaked the feasibility of 10- and 12-bit RAW video to the ML forum. That effort had been dead as a doornail for a long time. If it weren't for that anonymous tip, which is VERY technical in nature, we'd never know that Canon cameras can essentially record a kind of compressed RAW video; that is, much less than the current 4 gigs a minute. Even without 12-bit RAW, I don't understand why Canon doesn't allow 4-minute RAW recording on their 5Ds. The only real worry is temperature, and they can easily fix that with a limit. Bottom line, Canon could put 12-bit RAW on their cameras today; the ML devs have recently proven that. As for the C100/300/200 line, I no longer see a threat from their consumer cameras even if they had 4K. The cinema sensors are built for cinema, that is, video resolution with large pixels, so you wouldn't get the same low-light video with your 80D 4K, say, that you would from the C100. Then, of course, there are all the buttons, XLR, etc. I've never bought the argument that Canon would hurt their cinema business no matter how powerful they made their consumer cameras. Of course, that might not be how they see it inside Canon. In fairness to Canon, Sony can't seem to do 4K in-camera downscaling to 1080 well (at least on my 6300). However, I can't see why they can't output 4K in the 6D II and let those who are inclined downscale on their PC. So I see some merit in the argument that Canon is very dismissive of enthusiast video users. They seem to take the position that unless it can be done in camera, they're not interested in what happens to video downstream. The 4K coming out of the A6300 and A6500 is incredible, to me. And they're great cameras, period. Sony keeps releasing more lenses. Once you look at 4K out of those cameras next to Canon video, it's hard not to see the difference.
  15. Great review. To bring it in line with your DSLR reviews: the benefit of the Sony line is that their professional and prosumer video equipment dovetails nicely into their consumer mirrorless line. This is Canon's real problem, if you ask me. You buy a C100 II as your main camera--then what would match it as a b-cam? For example, I bought a PXW-X70, a professional 1-inch-sensor cam. I decided I wanted a second camera to match, but I didn't want to lay out another $1,700 (for another used X70), so I found a used RX10 for $500. I can see the difference between the cameras if I look REAL HARD, but few others would. I end up getting the same image from 2 cameras, where 1 can handle all the XLR audio, ergonomics, etc., and the other can get different POVs. Yes, Canon has some great camcorders, but their sensor size is still small, so they won't match well with their DSLRs. I love the feel of the Canon XC10, but what other camera does it match to, in the Canon line, sensor-wise? If I want to go real small and light, I can get an RX100. If I want to get real cinema-looking, the FS5 seems like a great value for the money, as you point out. And then there's the A7S for full-frame shallow DOF, if wanted. In short, Canon has no consumer/young-filmmaker answer to the full-frame A7S or the 1-inch RX100.
  16. I was torn between the Canon XC10 and the Sony PXW-X70 and ultimately chose the X70 because of the 10-bit CODEC and the fact that most of my other cameras are Sony. I also got it used for $1,700, and the XC10 was still new (expensive). However, if I had seen an XC10 for $1,700, there's a good chance I would have bought it. In any case, both cameras are similar to me. I've been shooting with the LX100 a lot and I'm really impressed. If you asked me how close its image is to the X70's, I'd say very close. Then I started doing some shoots with them side by side. Whoa. The X70 produces a much lower-contrast image than the LX100. I keep coming up against this in consumer cameras: on their own, they look very nice to me, but the more money you spend, it seems, the cleaner, lower-contrast image you get (not taking away from the LX100, BTW!). As soon as I shot RAW with the Canon 50D (from a guide I bought on this site), everything changed. I now spend a lot of effort getting to that image with ease of use, great audio, etc. The BMPCC has been great, but it is NOT a camera I could hand to an actor and say "shoot," say. As a photography nut, I value shallow DOF, especially for portraits. However, I've been noticing that really shallow DOF in video is a bit distracting. That has been my recent conclusion: too little DOF in video can make the footage look artificial. I have to work a bit to get shallow DOF on the X70 (which would be the same on the XC10), but it's just enough to isolate the subject and give the viewer a sense of place. Further, I lose focus shooting really shallow DOF in anything but the most controlled scene ("stand here and do that"). So, in most cases, I want the camera to auto-focus 1st, get DOF later. That isn't to say I don't want to manually focus when I can. And on that, again, the X70 puts other cameras I've had to shame. I can press the focus-zoom button and toggle between views WITHOUT shaking the camera.
In fact, I can adjust almost every important camera setting WHILE shooting, without taking my eye away from the viewfinder (which is very difficult with a DSLR). It's been an eye-opening experience working with a modern, low-end professional camcorder.
  17. The XC10 is a much-maligned and misunderstood camera. First, on the 4K: MrNieto, if you don't understand how bayer sensors work, I'd look it up. The simple fact is that BEFORE you can look at any image from these cameras, the pixel readouts must be de-bayered (colors combined) to create full-color pixels. So a 1080 image is actually built from pixel readouts that are only 25% blue, 25% red and 50% green. Then it adds video compression, which further reduces color information, usually to 4:2:0 in consumer cameras. With 4K downsampled to 1080 you're getting full color information for each pixel. That's why 1080 derived from a 4K readout looks a lot better than 1080 from 1080 sensor pixels. To put it simply, you must sample a 4K image for every 2K image to get really good 1080. That can either be done in camera (like the PXW-X70 I have) or in post, as with the GH4. I would also recommend the LX100 to you. It has 4K that creates super-nice 1080 (I just run it through ffmpeg before dealing with it, and would suggest the same to you). However, as Fuzzynormal said, the GX7 is a fantastic deal and, if you're new to this, a safer choice. However, if you want to shoot for broadcast TV you probably want to get the XC10, because it is made as a professional's travel video camera. As others have said, the specs say one thing, but when you look at images from professional equipment there IS a difference. The difference is that Canon made the XC10 to do one thing only: provide small-camera b-roll on TV shoots. It's not made to shoot family portraits or soccer games. I won't be surprised if the camera goes on to develop cult status.
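A minimal sketch of the "colors combined" step described above, for a single RGGB cell. Real debayering interpolates across neighboring cells; this toy merge of one 2x2 block is essentially what a clean 4K-to-1080 downsample buys you: one full-color pixel per cell instead of interpolated guesses.

```python
def merge_rggb_cell(cell):
    """cell: 2x2 sensor readouts laid out as RGGB:
       [[R, G],
        [G, B]]
    Each photosite saw only one channel; green is sampled twice,
    so the two green readings are averaged."""
    r = cell[0][0]
    g = (cell[0][1] + cell[1][0]) // 2
    b = cell[1][1]
    return (r, g, b)

print(merge_rggb_cell([[100, 50], [70, 200]]))  # → (100, 60, 200)
```

Four sensor pixels in, one fully-sampled RGB pixel out, which is why 2x oversampled footage has full chroma where native-resolution footage has to interpolate.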
  18. Don't waste money on shareware that usually just wraps around ffmpeg. I just created this to export Sony Long-GOP into MP4s. Slight variation for the C300, I think; you probably have more audio channels, etc. Anyway, I run this .BAT file in a folder where I have my files, and it creates another copy in \mp4_1080 (you can call it something different). I use a variation with my LX100 where I shoot 4K, then want it downscaled. Get a script that works for you, then no more worries. That is, take some time to learn ffmpeg. It will SAVE YOUR BACON one day.

    echo OFF
    SETLOCAL
    REM title Converting...
    REM ***** CONFIG *****
    REM like F:\Files2015_Maxotics\Video\
    REM fold = current folder
    set fold=%~dp0
    set ext=*.MXF
    REM dest = current folder + mp4_1080
    REM like F:\Files2015_Maxotics\Video\mp4_1080\
    REM change to whatever you like...
    set dest=%~dp0\mp4_1080\
    REM ***** END CONFIG *****
    :: This makes the target folder
    ::MD "%fold%"
    MD "%dest%"
    REM make sure you have the correct path to ffmpeg.exe
    for %%f in (%fold%%ext%) do "C:\Files2013_VidPhotoSoft\ffmpeg\64bit\bin\ffmpeg.exe" -i %%f -map 0:0 -filter_complex "[0:1] [0:2] amerge" -acodec aac -b:a 256k -strict -2 -vcodec copy -sn "%dest%%%~nf.mp4"
    REM if errors, pause so command window doesn't close
    REM pause
  19. I just bought a Sony PXW-X70 as my main video camera. DSLRs became too bulky and unwieldy for flexible video work; I've tried for a long time to make them work, and I can't do it. Without moving my hands I now have instant access to record, focus peaking, iris, shutter, gain (ISO), AF/MF, and zoom. Better audio controls. Aggressive image stabilization if I need it. A 10-bit CODEC. So whatever you get for a main cam, I recommend a C100 or something like it. You can use your other cameras on tripods for long shots, or other POVs. I too love a great image (especially RAW-based video), but any image that is out of focus, shaky, at a bad angle, or with bad audio is unusable. In short, what worked for me having fun, standing around with my DSLR or mirrorless, didn't work when I needed to move around.
  20. It's an interesting test in that it proves just how misleading camera specifications can be. Based on that test, one would feel a fool to spend $5,000 for an FS5 body instead of $1,000 for an NX1. The test shows that, resolution-wise, both cameras output similar footage in a static scene with objects lit within (my guess) 2 stops of each other (if that!), between the white of the photograph's paper border and the black on the wall depicted in the picture. There's a reason Andrew puts lamps, strings of lights and shadow-producing objects in his test scenes. It is difficult to shoot a scene of wide dynamic range using reflected light only. In the real world, where the sky is in the scene, or even coming through the windows, there will be 10-20 stops of dynamic range--brightness from dark to light where each stop at the high or low end would show more or less detail (contrast). So when a filmmaker looks at Andrew's test scenes, for example, he will look to see how much detail is preserved in the string of lights, or in a shadow. The book on the table is mostly there to show color at middle gray. Point both cameras at a person sitting on a couch, say, with light streaming through the window and a mix of highly reflective and light-absorbing objects, and then let's see the difference between the FS5 and the NX1.
  21. For me, their new MFT cameras that don't record internally are professional/industrial niche products. I was hoping for a 4K version of the BMPCC, or one with photo capability, or a better screen, etc. But hey, I'm the last one who wants to give an opinion about something I know NOTHING about! So I may be more than silly, I may be dead wrong. And to clarify, I don't believe the future of MFT is dependent on BM in any way. Panasonic and Olympus are still innovating.
  22. It seems Blackmagic is moving away from MFT; I doubt Panasonic will. I'm with you, I'd rather have Blackmagic RAW 1080 than Panasonic 1080. However, Panasonic 4K against Blackmagic 1080 RAW is a difficult decision. I wish Andrew would cover this more than the rumor stuff--I'm not making a dig here, I'm serious. The consumer film-making world is going down two paths (leaving Sony out). I have a BMPCC and an LX100 at the moment. They are both the same size. With the BMPCC, I get really nice color nuance, dynamic range, skin tones, etc. With the LX100 I get moire-less, chroma-sharp 1080p (downscaled from 4K). Crudely put, the LX100 image is much sharper than the BMPCC's. And it's much easier to use and has lots of cool features, like wireless remote, 4K photo mode, etc. So what I really want now is 4K RAW, so I can get the same chroma sharpness as Panasonic 4K. Yet those file sizes are no joke; budget-wise, still outside the affordability of most young filmmakers. And again, missing all the useful features. Okay, so why would I want 8K? Because, again, 8K will provide chroma-perfect (moire-less) 4K--which means you can do NLE panning, zooming, etc., without image degradation. And photos will be much better! Again, Panasonic 4K Photo mode video is very useful, but still a tad too low in resolution. 8K would compete harder against large DSLRs for stills in many situations.
  23. Same boat. I'd love to standardize around Sony. At about 50mm, the LX100 has really nice bokeh, better than the smaller-sensor RX100 IV, of course. I wish it had HDMI out during recording, but mine doesn't even do playback anymore. I believe the HDMI connector is poorly designed, because the pins that are supposed to snug up against the male input have now pushed all the way out on mine, preventing me from inserting a cable. So watch out for that. I'd send it back to Panasonic, but they take forever and often just want $300. One of the things I wish Panasonic had done on the focus peaking is make the distance bar on the screen accurate through the entire zoom range. I notice that if I focus at 3 feet, say, when it is between red and gray, it changes when zoomed in. In short, I can't use the scale to mentally calculate where my focus is. On the face color, I find any changes to the dynamic-range curves create havoc, as they should, because LOGs essentially borrow from Peter (dark tones) to pay Paul (bright tones). I now use Standard, Neutral or Portrait--which are perfectly fine. On my video yesterday: I realized today that hitting a coin against the Tascam or camera would do the trick. No need for silly tweezers.
  24. I was shooting some stuff with some Azden dual wireless mics (330LT) and kept getting popping on one set. Dave Dugdale gave up on the Azdens after a few minutes, so in frustration I ordered two sets of the Audio-Technica System 10 (chosen over the Rode and Sony systems because they output both balanced and unbalanced--which I might want for a professional camcorder situation). While I was on B&H I figured I'd get the Tascam DR-70D, so I might get one channel from the Azden and two from the Audio-Technica. I was about to put the Azdens up for sale, then figured I'd give them one more fiddle. I moved the frequency on one to the opposite end from the other and, voila, they now work perfectly. I know, why didn't I do that in the beginning? Lying in bed I wondered, how difficult would it be to slate the LX100 to the Tascam? Turns out, it isn't difficult at all. I've half-heartedly tried to sell the LX100; something keeps drawing me back to that little camera. The 4K from that small size downsamples to crystal-clear 1080. Anyway, the short of it is that the Tascam DR-70Ds have internal mics and, with the LX100 (or any camera) firmly screwed into it, tapping on one will create a SLATE sound in both. The Tascam also has a slate tone you can send to any camera with an external mic input. I plan to use that with the A7S. BTW, B&H got me the stuff in a day. Amazing company. They often include free stuff with purchases; with this, I received some Sony audio software. Another interesting thing I learned in doing this is that with the Sony XLR-K1M, you can set the gain to one thing (say low) in one channel and (high) in another channel coming from the one shotgun mic. How cool is that! You can do a similar thing with the Tascam. Once you set the gain of one mic properly, you can record another "low," so that if something really loud happens, you have some usable audio that isn't distorted. Thank you to Andrew for running this blog where I can share this tip!
  25. The problem the Leica M solves is getting full-spectrum brightness at each pixel, in contrast to the bayer sensor, which samples either R, G, or B at each pixel. However, when a bayer sensor downsamples 4 pixels, it has the same information as 1 pixel in the M. Therefore, the real advantage of the M is getting HIGH-RESOLUTION black-and-white images without spectrum distortions, essentially. Using the M as a video camera, which must sample down to 2K, gives it no real advantages. Indeed, the footage shot looks very soft to me. Taking photos and putting them into a film would have given much better results. Video compression favors motion over single-image IQ. You might do another video compiled from photos taken as stills. I'd love to see it!