Everything posted by RichST

  1. It may or may not be; the wording for the RX10 is "Sony brings out the full potential of Full High Definition (Full HD) movie recording... by utilising every pixel from the image sensor at accelerated speed". For the A7s they say this about the video: "The world's first full-frame sensor capable of full pixel readout*1 without pixel binning for movies and 4K*2 HDMI video output". If the RX10 were generating video the same way the A7s does, I think they'd be bragging about it, but they're not. I think they're just using every pixel in the binning process on the RX10.
  2. Given the very large buffer of the E-M1, I'm wondering if the "4K" mode is just a short burst video, sort of like Nikon's Motion Snapshot. It wouldn't require extra hardware; it would just throw 4K footage into the buffer until it fills up, then process it as it would a burst of stills, just compiling the frames into a video instead. It would essentially be a gimmick, and even with a huge buffer it would fill up in a few seconds, but it does get you a little bit of 4K footage without worrying about hardware or heat. That could be where this rumor is coming from.
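Rough numbers make the "few seconds" claim concrete. A minimal sketch, assuming a hypothetical 1 GB buffer and 12-bit raw 4K frames (illustrative values only, not E-M1 specs):

```python
# Back-of-envelope: how much burst "4K" fits in a still-photo buffer.
# All numbers below are assumptions for illustration, not E-M1 specs.

def burst_seconds(buffer_bytes, frame_pixels, bits_per_pixel, fps):
    """Seconds of raw burst footage before the buffer fills."""
    frame_bytes = frame_pixels * bits_per_pixel / 8
    return buffer_bytes / frame_bytes / fps

# Hypothetical 1 GiB buffer, 3840x2160 frames, 12-bit raw, 24 fps.
secs = burst_seconds(1 * 1024**3, 3840 * 2160, 12, 24)
print(f"{secs:.1f} s of buffered 4K")  # ~3.6 s
```

So even a generous buffer holds only a handful of seconds of uncompressed 4K, which matches the "gimmick" reading above.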
  3. I had a feeling the rolling shutter was going to be bad; if Sony can't get it right on a smaller 1" sensor, then asking more from a full-frame sensor is out of the question. It's all about how fast the camera can scan the rows, and the data coming off a 1" sensor will get scanned faster than that coming off a full-frame sensor, all other factors being equal; that's just simple physics. Sony will really have to improve their scan rates for good 4K. The GH4 looks better all the time. I'm wondering what's keeping other companies from using Aptina's 1" sensors for 4K?
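The scan-rate point can be put in rough numbers. A sketch with made-up per-row readout times (not measured specs): rolling-shutter skew is just rows times time-per-row, so a taller readout window skews more (and in practice a full-frame row is also wider, so its per-row time tends to be longer too):

```python
# Rolling-shutter skew grows with row count and per-row readout time.
# The row times used here are illustrative assumptions, not measured specs.

def skew_ms(rows, row_read_us):
    """Top-to-bottom readout time (rolling-shutter skew) in milliseconds."""
    return rows * row_read_us / 1000

# Same hypothetical per-row time; the taller window skews more.
print(skew_ms(rows=2160, row_read_us=7))   # smaller-sensor window: ~15 ms
print(skew_ms(rows=3648, row_read_us=7))   # full-frame window: ~26 ms
```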
  4. From the way they're explaining it, they'll get 1080p in APS-C mode by scanning all the pixels in the APS-C window and then downscaling, just like a camera would take a JPEG picture, downscale it, and save it at 1920x1080. I don't think it will use the RX10 method, which I believe is binning, albeit binning without column or row skipping. If you read the fine print of Sony's description of the RX10's video mode, it says they "use" every pixel for their video; that's carefully and cleverly worded. They've never said the RX10 takes a full sensor scan and then downsamples it for video. What I think the RX10 does is use a fine binning mode that incorporates every pixel, but at the end of the day it's still binning. For instance, a red pixel would probably get binned with 3 surrounding reds to get a summed readout, etc. It's faster and easier than getting a discrete reading from every single pixel, and if the RX10 were doing it the way the A7s does, Sony would have been shouting it from the rooftops like they are now.
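As a toy illustration of like-color binning (my reading of the marketing wording, not a confirmed Sony design): take one color plane already extracted from the Bayer mosaic (e.g. all the red samples) and sum each 2x2 block into one output value, so every pixel is "used" but the result is still binned:

```python
# Sketch of like-color binning on a single color plane extracted from a
# Bayer mosaic. Speculative illustration, not a documented readout mode.

def bin2x2(plane):
    """Sum 2x2 blocks of a single-color plane (list of equal-length rows)."""
    out = []
    for r in range(0, len(plane) - 1, 2):
        row = []
        for c in range(0, len(plane[0]) - 1, 2):
            row.append(plane[r][c] + plane[r][c + 1]
                       + plane[r + 1][c] + plane[r + 1][c + 1])
        out.append(row)
    return out

reds = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
print(bin2x2(reds))  # [[14, 22]]
```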
  5. Reading the fine print some more, it does indeed sound like Sony is only using every pixel in the binning process. In their interviews I never hear them say that they read every pixel as a discrete value like a still frame and then downsample it; rather, they just "use" every pixel to generate video. Technically this is probably what the 5DIII does to get its video mode, so I don't see much to write home about other than reduced moire and aliasing. Resolution will still be poor.

For "true" 1080p there are pretty much two ways to do it: A) sample the entire sensor, then use the JPEG processor to downscale to 1080, or B) take a cluster of 4 pixels, bin the two greens, and read the red and blue values to get a kind of "superpixel" that has R, G and B values. This is what Canon did for their Cx00 cameras.

I find B the more unlikely, since that method works best with about a 10-12mp sensor. One advantage of it, though, is that the processing power is already there if you use chips from camcorders with 3MOS sensors; I think Sony would be shouting from the rooftops if that were what their new camera does. And I don't think they're using A either, since the number crunching involved is still staggering, much higher than a simpler, fully binned readout would require.

What you might expect to see in the first consumer "4K capable" camera is one that doesn't work in realtime; i.e. about 5-15 seconds of 4K footage is taken from a full sensor scan (24 or more full-size images per second), thrown into a buffer, and then processed into a video file. It may lock the camera up for a few seconds while it's converting to video, but the quality would be fantastic. This is basically what the Nikon One's so-called "4K" mode is; it just spits all the frames out as individual JPEGs instead of a single movie file, but that's trivial to change.
I've tried it in the store, and the V1 can do the whole burst shooting, downscaling, JPEG conversion and writing to the card very quickly, especially if you select 1080 as the resolution. When the One debuted I had high hopes that this was what Motion Snapshot was going to do, but unfortunately it didn't; I guess Nikon didn't want to spend the extra few dollars (if that) for a larger buffer, or it was protecting its lucrative video camera market segment (sarcasm). If its buffer could have held, say, 240 images, you could have had 10 seconds of 24p footage, 4 seconds of 60p footage, 8 seconds of 30p, etc.
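The buffer arithmetic is just frames divided by frame rate (note that 240 frames at 30p works out to 8 seconds):

```python
# Seconds of footage a fixed-size frame buffer holds at various frame rates.
def buffered_seconds(buffer_frames, fps):
    return buffer_frames / fps

for fps in (24, 30, 60):
    print(f"{fps}p: {buffered_seconds(240, fps):g} s")
# 24p: 10 s, 30p: 8 s, 60p: 4 s
```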
  6. I'll believe it when I see it. If it does do a true 5K readout to the image processor and then uses the downscaling engine most cameras use when set to lower resolutions, the video should be razor sharp; the difference would have jumped out at Dave and he would have been gushing all over the look of the video. But he didn't, because I don't think the camera can do it. The dead giveaway is that a camera doing this should offer a limited 60 fps burst mode like the Nikon One cameras, but all I see in the specs is 10 fps. Now maybe it's using a binning technique that utilizes every pixel on the sensor, but I don't think it's discretely sampling every pixel for its video mode.

A bit disappointing, but hopefully we won't have to keep waiting too long; Aptina says it will have 1" sensors with accompanying processing engines capable of 4K ready next year. It's not that I want 4K - I don't - it's that I want a 1080 mode derived from an entire 4K image sample. JVC's PX10 was the first camera to do this, but it threw away so much fine detail along the way that it ruined the usefulness of it; their conversion processor probably wasn't up to snuff. The Hero did better; I don't know about Samsung's Note.

The reason you're seeing these fast smaller sensors is that it's a lot easier to bus the incredibly high data rates required for video off of smaller sensors than off large ones; that's just physics. That's why the A7's video is probably not going to be anything to write home about. For the smaller sensors, I imagine the bottleneck right now is getting processors fast enough to take 5K worth of data and downsample it to 1080 in realtime, like you would for a smaller JPEG image, then encode that to mp4.
  7. Here's another interesting snippet from the article about its video: "Perhaps more intriguingly, Panasonic says that the better image sensor means that it need only bin four pixels to create each pixel in the final movie, rather than six pixels as in the G6. The mixing is performed in 1 x 4 pixel lines, rather than 2x2 blocks, and the image processor performs low-pass filtering on the resulting data as it comes off-chip." I'm not sure how they would bin in 1x4 lines; I'm wondering if it improves horizontal resolution in video. I'm also not sure how they could bin 6 pixels off a 2x2 block. The low-pass filtering of the data coming off the chip may be how Panasonic has been getting relatively moire-free video for years now.
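One way to picture the "1 x 4 line" mixing, purely as speculation on my part: since same-color Bayer samples sit on every other column, four like-color samples span eight sensor columns, and the mixing could be an average over each run of four along a row:

```python
# Guess at "1x4 line" mixing: average consecutive groups of four like-color
# samples along a row. Speculative, not Panasonic's documented design.

def bin_1x4(samples):
    """Average each group of 4 values in a 1-D list of like-color samples."""
    return [sum(samples[i:i + 4]) / 4 for i in range(0, len(samples) - 3, 4)]

greens = [10, 12, 14, 16, 20, 20, 20, 20]
print(bin_1x4(greens))  # [13.0, 20.0]
```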
  8. We've suspected all along that their cinema DSLR used the 1Dx sensor; if you look at the speed it can do at full frame and crop it just a bit, you get to the magic 24fps number. What I didn't think, though, was that the 1Dx had the extra horsepower to handle 24fps coming off the sensor and convert all that data to MJPEG in real time; that's a serious workload. I had assumed all that extra processing power needed to pull off such a feat was what made the camera expensive. Hmmm
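To put that workload in rough numbers (assuming an 18 MP sensor read at 24 fps with 14-bit samples; illustrative figures, not Canon's documented pipeline):

```python
# Raw data rate of a full-sensor readout, before any MJPEG encoding happens.
megapixels, fps, bits_per_sample = 18, 24, 14
pixels_per_second = megapixels * 1e6 * fps
gb_per_second = pixels_per_second * bits_per_sample / 8 / 1e9
print(f"{pixels_per_second / 1e6:.0f} MP/s, ~{gb_per_second:.2f} GB/s")
```

That's on the order of 0.75 GB/s of raw sensor data to move and compress every second, which is why the "serious workload" framing above is fair.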
  9. Very interesting. I wonder just how Sony will improve its video capture ability? The best-case scenario is that they do a full sensor scan and downscale it, but I just don't think that's possible at that price point. A C300-type readout would also yield excellent results, but again I think that's too high-end. Perhaps they've figured out how Panasonic grabs the video capture from its GH sensors and will use something like that.
  10. Hmm, the app was probably written more for the N8, since I haven't had any problems with it crashing.
  11. Andrew, I just dropped the $3 for the CameraPro app for the N8 (which is supposed to work for the 808 as well) and it allows for much more control, including bitrate for video, audio, frames per second (any number from 1-30), etc. A summary of all it can do is here: http://www.tequnique.com/wb/downloads/Manual_CameraPro1_4.pdf
  12. I really wanted to get an 808 to replace the 1st-generation iPhone that I've had for 5(!) years, but considering how hard I am on phones (I drop them every couple of months) I opted for a used N8 for less than $200. It's only 12mp with no oversampling feature, but I've been very happy with the quality, and now that I've gotten used to the different user interface (it's certainly no iPhone in that regard) I can definitely see myself moving up to an 808 in the future. IMHO Nokia would do well specializing in high-end cameras with phones on them (oops, I said that backwards... or did I?). Future improvements could include features like a one-shot HDR mode that makes use of the oversampling feature. Also, while putting in an optical zoom might be impractical, I wonder if two lenses could be put in the body and moved by a slide or wheel in front of the sensor. The sample videos I've seen looked pretty good, with virtually no moire and good detail; do you have stabilization enabled on yours? I wouldn't be surprised if someone releases an app to increase the bitrate; there is an older app called CameraPro, I think, that lets you change the compression settings for the JPEGs, including an uncompressed mode. Oh, and by the way, get the Symbian version of Opera for the camera, much better than the included browser. I'm still trying to find a good alternative to Google Maps, which only works when you're connected to wifi (like that does you any good on the road).
  13. [quote name='EOSHD' timestamp='1342920630' post='14294'] Will the new mount be the basis of a C100 affordable digital cinema camera, following the strategy of Panasonic and Sony? [/quote] That's a joke, right? I mean, this IS Canon we're talking about. I'll be surprised if they even fix the moire issues that are apparently still there on the t4i. But hey, at least they got the sensor size right, which is more than I can say about Nikon. But don't get me started about the One system; that has probably been the biggest blown opportunity in a long time. That camera could have been insanely great given the speed of its sensor, but they either completely blew it or, more likely, the One team is purposely being deprived of oxygen Saturn-style (to use a car analogy) by the rest of the company. So many of us are desperately waiting for something to come along that is a game-changer - the technology is there - but I just don't think the established camera companies have the least interest in thinking outside the box. You're also dead-on about smartphones; the tiny, cheap, slow-lensed pocket camera's days are definitely numbered. In fact, it wouldn't surprise me if a smartphone maker were to come out with the next breakthrough camera (you know digicams are stagnating when Nokia (!) rolls out one of the most interesting products of the year). Smartphones have the advantage of being more like a "real" computer, and specific apps can be written for them.
  14. Looking at comparable videos, it does seem like the EM5 is as sharp as the GH2, just with more moire. I don't know how Panasonic filters out the moire, but however they do it, it works well. Congrats to Sony for the excellent sensor; I don't know why they haven't used a similar video implementation on their APS-C sensors. If I had to do it over again, I'd definitely get an EM5 over a GH2, since I do more still photography these days and have never been enamored with the GH2's stills. And most videos I take now are handheld, another reason to choose the EM5.
  15. I really doubt the camera could be doing a full sensor scan 24 times a second, let alone 60. Look at the Nikon One cameras: their sensors can do full readouts 60 times a second AND take simultaneous stills, yet their video mode is just above average at best. There are pretty severe technical limitations to doing full read scans of even 8mp sensors for realtime video, let alone 24mp (much less at consumer prices); it's probably more realistic to expect sharpness similar to the NEX-7 with this new camera. The problem lies in the hardware outside of the sensor (though I understand the physically larger the sensor gets, the harder it is to do). JVC is the only company that has even attempted it - with the GC-PX10 - and they had to make pretty severe sacrifices to IQ in order to get there. A C300-type approach that skirts around the Bayer demosaicing might be more practical in the next few years, but that would be more optimal for a camera with 10-12mp and would require the addition of processing chips specifically meant for camcorders.
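The throughput gap is easy to see in raw pixel rates (simple arithmetic, nothing camera-specific):

```python
# Pixels per second a sensor must deliver for full-scan video.
def mp_per_second(megapixels, fps):
    return megapixels * fps

print(mp_per_second(2, 24))    # 48 MP/s: roughly a native 1080p readout
print(mp_per_second(8, 24))    # 192 MP/s: full scan of an 8 MP sensor
print(mp_per_second(24, 24))   # 576 MP/s: full scan of a 24 MP sensor
```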
  16. 4K for $1k

    It would be interesting to see if it can get to higher frame rates by cropping (ROI, or region of interest). Apparently the sensor is capable of 4K@60fps, so I'm assuming the 21fps limit is due to a bottleneck somewhere else in the camera. Whether cropping will speed it up is unclear; you would think they would advertise this on their site somewhere, but I can't find any mention of it. All they say is that it supports partial imaging modes, including ROI and the dreaded binning.
  17. It'd be interesting to know if you can crop the frame in the software to speed up the fps. Cutting it to QuadHD (3840x2160) isn't quite enough; it would only get you to about 22.4 fps. A setting about 3600 pixels wide should be enough for 24fps. What is it about Flycap that is so frustrating? I've never used it.
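The arithmetic behind those numbers, assuming a fixed pixel-throughput bottleneck and a 4096-pixel-wide full frame at 21 fps (the figures the 22.4 fps estimate implies):

```python
# If throughput is fixed, fps scales inversely with the cropped width
# (row count unchanged).
def roi_fps(full_width, full_fps, roi_width):
    return full_fps * full_width / roi_width

print(round(roi_fps(4096, 21, 3840), 1))   # 22.4 fps at QuadHD width
print(round(4096 * 21 / 24))               # 3584: width needed for 24 fps
```

The exact break-even width for 24 fps comes out to 3584 pixels, consistent with the "about 3600" estimate.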
  18. After seeing the comparison last week, I'd have to concur: the 1Dx [i]looked[/i] sharper than the 5D3, but I didn't see any extra real detail being resolved; it just looked like it had been sharpened more. Some things being attributed to extra detail (like the stairs on the escalator) were due to the 1Dx's apparently better dynamic range, not increased resolution.
  19. [quote author=Policar link=topic=884.msg6464#msg6464 date=1340574525] I don't think the 5D is intentionally crippled, at least in terms of IQ (its lack of focus peaking is infuriating, though).  The 5DIII seems to be binning before readout.  Either the pixels are binned at 4x4 to three channels of 1440X810 and added using a scheme similar to the C300 or they're binning to one 1920X1080 bayer grid that's then debayered.  My guess is the latter.  Either way you can expect about 75% of the linear resolution you want, while other cameras are oversampling quite a bit.... [/quote] I also think it's the latter; binning to a single 1920x1080 Bayer grid would take less time to read off the sensor and could make use of the camera's debayering and processing chip(s).
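The 1920x1080 Bayer-grid guess lines up neatly with the 5DIII's sensor dimensions: binning three like-color pixels per axis divides 5760x3840 exactly by three (simple arithmetic, not a confirmed readout mode):

```python
# Hypothetical 3x3 like-color binning on the 5DIII's 5760x3840 sensor.
sensor_w, sensor_h, bin_factor = 5760, 3840, 3
binned_w, binned_h = sensor_w // bin_factor, sensor_h // bin_factor
print(binned_w, binned_h)  # 1920 1280; the 1280 rows then crop/scale to 1080
```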
  20. It does seem to have better dynamic range, but for actual detail rendered I'm just not seeing that much difference; sharpen the 5DIII a bit and it looks close to the 1Dx. The GH2 outdoes them both and [i]still[/i] doesn't max out what you can get out of 1080p.
  21. I don't know if this has been asked before or not, but how is the camera getting its compressed 1920x1080 modes? Do we know if it's doing it by cropping off the sides, or by scanning the same area as the 2.4K raw output and then downscaling?
  22. I don't think they'll be able to increase resolution; it looks like a binning routine runs on the sensor before the processors can get hold of the data, so the bottleneck will probably occur before Magic Lantern can intervene. That could be wrong, though; after all, the AA filter should have had no impact on image sharpness in video mode, but apparently there is some. A clean HDMI out or crop modes may certainly be doable, though. Now the 1DX might be another story, but by then you're starting to talk about real money.
  23. Everything from the interviews Blackmagic has given to shots of the camera's display screen indicates that it does use a rolling shutter. But the specs of their sensor are indeed uncomfortably close to that super-duper sensor that's been bandied about on the internet. I'm guessing using a global shutter cuts too much into the image quality, or it's a cheaper sensor from the same manufacturer. It would be nice if Blackmagic would at least offer a global-shutter option in the future; in fact, there are lots of things that would be neat to offer in their firmware (I wonder how hackable the thing will be?). The camera is seriously intriguing. I need one about as much as I need a hole in my head, but there is just that something to it. I'm guessing it will be a bigger hit with the general public than many are thinking; it's within the budget of the so-called "soccer mom" crowd, and people often like to look for excuses to go out and buy a new state-of-the-art computer. How long do you think it will take for iMovie to support this camera? (Oh sorry, FCP X already does, never mind :-P)
  24. Falk is actually very brilliant; he managed to figure out how the old Pentax K-7 generated its video by measuring it in the studio. These results look like [s]Sony[/s] Nikon has used the same technique Canon used for the MkII and (presumably) every other Canon VDSLR up until the 1Dx and MkIII. One note, though: I think [s]Sony[/s] Nikon is binning three pixels of like color on the rows exactly as the MkII did, instead of discretely reading every pixel value on every third row; if it didn't bin, there would be a very noticeable increase in resolution on the x-axis rather than merely a decrease in moire. If the three colors are binned on-sensor, it would also cut readout times to more realistic levels. The resulting internal Bayer pattern would be 2240x1260, which is then downscaled to 1920x1080 (the original MkII had something like an 1872x1053 pattern upscaled to fit a 1080 frame). Note that since both would use new, smaller Bayer patterns, their true "resolution" would still not max out 1080p; I think the best they could theoretically achieve is about 70.7% of those numbers. The zone-plate test I ran with his chart on an old Canon gives results very similar to what the D800 shows. I believe I also ran his test plate once on a GH13; I don't know if I still have the results, but it did show the GH13 handling the moire better, with slightly increased resolution in the circles, particularly when the circles were at increments of 45 degrees (sort of like the D800 does when zoomed in, only without the ugly color moire). My guess is that Canon is now simply doing the same routine on the MkIII as it did on the MkII but applies it to both rows and columns (so now 9 pixels of the same color get binned instead of 3). The only problem I have with Falk's analysis is his conjecture that zooming in adjusts the line skipping to every 2nd row instead of every 3rd row.

I just don't see how that could work; you would end up with a new Bayer pattern that would be nothing but RGGR or GBBG, so either the reds or the blues would have to be tossed (there might be a workaround, such as occasionally sampling the odd R or B from one of the "dead" rows; I'm going to have to think about that, maybe I'll write to him). I did notice that resolution on the x-axis improves much more than resolution on the y-axis when zoomed in. That suggests the sensor quits binning the like colors on the x-axis and reads out each individual pixel (or it could just bin in pairs, I suppose). Falk had caught Samsung using a different sampling routine on the K-7 when you zoomed in to 8x and 10x, and I think in their description of the K-7's sensor Samsung had in fact described two sampling modes for live view, so he's very good at catching these things (it never would do a simple 1:1 readout even when zoomed in to the maximum level). He and DSPographer are probably the most competent sources on the internet on this subject, and if he states something in his blog it is highly likely to be either the real deal or at least very close to it.
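The 70.7% figure is 1/sqrt(2), a common rule of thumb for the effective luma resolution of a Bayer grid. Applied to the conjectured 2240x1260 pattern:

```python
import math

# Effective resolution of a Bayer grid at ~1/sqrt(2) of its pixel dimensions.
factor = 1 / math.sqrt(2)
bayer_w, bayer_h = 2240, 1260
print(round(bayer_w * factor), round(bayer_h * factor))  # ~1584 x ~891
```

So even the conjectured pattern would resolve roughly 1584x891, well short of a full 1920x1080.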
  25. [quote author=FilmMan link=topic=434.msg2753#msg2753 date=1332359420] Rich, Alex said the following too: Looks like we are making progress in understanding how to reconfigure sensor scanning modes, image processing pipeline, output device and so on. No promises for now, just some hope. [/quote] There are almost certainly different scanning modes for the Canon DSLRs that came after the 5DII, as the 720 modes evidently use a different routine than the 1080 modes. Maybe Canon does have a hidden secret mode in the 5DIII (though 4K is highly unlikely given the speed and number of channels on the sensor). Please correct me if I'm wrong, but I don't think any hack on a DSLR has ever pushed a sensor past its native scanning ability in video mode. The GH13/GH23 hacks improved bitrate and allowed people to output 1080 MPEG files, but true resolution never went up. I really think the resolution is hardwired into the camera's sensor - it has to perform the binning on the sensor before the lines get read, otherwise there would be no speed advantage. Upping the resolution would mean the camera would fundamentally have to go into another binning routine, which would mean the chip would have to be designed to execute different binning or subsampling patterns. Sure, it's possible there are finer ones hidden in there, especially for liveview, but the frame rates would probably drop to intolerably slow speeds and rolling shutter would be horrible. Now, there IS one camera out there with a chip fundamentally equipped to do what you're talking about, and it's the Nikon One, but that camera would probably stop recording as soon as its buffer filled up (which wouldn't take long).