Mokara

Banned · Posts: 744
Everything posted by Mokara

  1. The reason is that those specs are determined in hardware, specifically by the ISP in the Bionz X processor. They have not upgraded that since about 2014, which is why the specs have not improved. There is no mystery here. They can't "just do it", and they are not "holding stuff back"; magic does not exist in the real world, and wishful thinking will not make hardware that is not there magically appear. That will be the case until they introduce a new processor with an upgraded ISP, which means basically a redesign of the entire processor. That is not realistic in the camera industry on a regular basis, because they don't sell enough units to keep redesigning the processor the way Intel or AMD do, and that is the only way specs will go up. That is why Sony's specs have "stagnated" for so many years. It is the exact same reason Canon took so long to implement hardware 4K in consumer cameras: the processors they had eventually became capable of 4K, but not without active cooling, which was not practical on a stills camera. Hence no 4K for the longest while. They were not "holding tech back" or "being conservative"; they simply could not do it with the processors of the time. You are not going to see an improvement in video specs until they introduce a new processor. That may happen with the a9II and a7SIII, perhaps even the rumored a7000 (although I am dubious about that particular one's chances). In fact, this may well be the reason for the delay in introducing the a7SIII: the processor was not quite ready, and they did not want to release the camera with the lower specs that their current cameras have.
  2. The RX100 has an ARM core in it. That is not the problem; the ARM core does not do image processing, the image signal processor does. And the Bionz X ISP can't do those specs. Until they update the ISP, you are going to have the same recording limits. The Xperia does not use a Sony processor; it has a Qualcomm processor in it. IIRC, most cell phones record using variable frame rates, which you probably would not be happy with in an actual video camera.
  3. It will happen eventually to most fixed lens cameras with a zoom function. It is probably more prevalent with these small cameras because they are actually put in pockets, where picking up lint/dust is more likely.
  4. Your iPhone has a much better processor, that is why. The Sony processor is the same one they have been using for the last 5 years, and if it is the same processor, the video specs will be substantially the same. Not rocket science. Why do so many people have such a hard time understanding this basic concept? They can't add dual pixel AF because Canon would sue them and win.
  5. Why is updating the electronics a waste of time? Do you think people would prefer older, less effective components instead of the latest equivalents available to the company at the time? If you were in the market for an RX100 and were going to buy an RX100 VI, wouldn't you rather have the same camera with better focusing and a more capable sensor? I am not sure I understand this "I would rather have less for the same price" mentality that some people have. Why trouble engineers to constantly improve their products if all you want is the same thing you had 10 years ago, even though it is inferior?
  6. Because they sell a lot more RX100 than a7 cameras. That makes annual updates economically feasible. I don't have a problem with the RX100. I have a mark 3 and a mark 5. They are great travel cameras. Reason for not updating video specs has been explained a thousand times (metaphorically), so I am not going to do it again. There comes a point where you realize that no matter how hard you hammer a nail, it ain't going in.
  7. You buy these cameras because they can fit in your pocket, something that cameras like the XT3 and a6400 are completely incapable of doing. There is some irony in people complaining about the lack of 60p in 4K while also complaining when cameras don't include a 24p mode. Keep in mind that a lot of the older 4K panels out there can't decode 4K 60p anyway; they need an external device to decode it and input the HDMI signal from that.
  8. The RX100 in its current form is a great camera; the body does not need much in the way of change. Consumer products are supposed to be upgraded each year, as new electronic components become available. This sort of iterative cycle is both normal and desirable: it gives you the newest tech available when you buy your camera. High end models leave you with old tech for years because sales numbers don't allow constant updating, otherwise they would do the same with them. It is not a bad thing. The three basic internal components that get updated as newer tech comes out are the sensor, the front-end LSI and the processor. In this model it is the sensor that is upgraded.
  9. Until they update the processor, the video specs will remain the same. They have been using the same ISPs in the SoC since 2014 (Bionz X comes in a number of variants, depending on the type of camera it is used in), and the video specs will be determined by that. My guess is that the a9/a7SIII is when we will see a new processor. If not, then expect the video specs of those cameras to remain the same as well.
  10. If at first you don't succeed, blame someone else. Just like Trump, lol, even down to the "extraordinary sensor" gratuitous superlatives.
  11. If you have a large viewing panel you need higher resolution. It is not simply a case of resolving individual pixels, but of resolving the objects those pixels compose. If you have an object that is, say, 3x3 pixels in size (a leaf, for example), it will resolve visually as a blob. That creates a lot of jarring ugliness in any image where said object is not blurred out through depth of field or something like that. This is a particular problem in any video that has vegetation, with most current cameras/TVs. This is why stuff shot at FHD looks like complete crap from an IQ point of view on a large panel. If you are viewing on a large panel (and for immersion you want to be relatively close to it, so it covers a large part of your field of view) you are easily going to see those IQ artifacts; your leaves will look like blobs instead of leaves. Maybe that looks realistic to you if you need glasses and don't have them, but if your eyesight is good you will see them. All these "viewing distances" that people go on about when trying to rationalize shooting at low resolution ignore the fact that it is not pixel resolution that is the deciding factor, it is object resolution.

It is only difficult to do because the processors in cameras use decade-old tech. Big files are not an issue; storage is cheap. Many of the high end cameras are already capable of shooting 8K 10-bit. My little RX100V can shoot 20 megapixel stills at 25 fps, for example, and I believe that is 12-bit, so the sensor is capable of feeding 8K at 10-bit. The problem is the limitations imposed by the processors. But if you build a body with enough cooling for the processors, and use multiple processors if necessary to overcome the ancient technology used in them, you can shoot at those specs.
In short, there is no real technological reason not to have cameras with these capabilities, although they would likely need to be designed as dedicated video camera bodies rather than hybrids if there is not a jump in technology.
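The sensor-throughput argument above can be sanity-checked with back-of-envelope arithmetic. This is a rough sketch only: the RX100 V sensor dimensions (5472x3648) are my assumption, not stated in the post, and readout and encoding overheads are ignored.

```python
# Back-of-envelope check of the readout-rate argument above.
# Sensor dimensions are assumed (approximate RX100 V figures); compression
# and readout overheads are ignored, so these are ballpark numbers only.

def raw_rate_gbps(width, height, bit_depth, fps):
    """Uncompressed data rate in gigabits per second."""
    return width * height * bit_depth * fps / 1e9

# RX100 V stills burst: ~20 MP at 12-bit, 25 fps (figures from the post)
rx100v = raw_rate_gbps(5472, 3648, 12, 25)

# 8K UHD video: 7680x4320, 10-bit, 24 fps
uhd_8k = raw_rate_gbps(7680, 4320, 10, 24)

print(f"RX100 V burst: {rx100v:.1f} Gbit/s")  # ~6.0 Gbit/s
print(f"8K 10-bit 24p: {uhd_8k:.1f} Gbit/s")  # ~8.0 Gbit/s
```

The two rates land in the same ballpark, which is the post's point: the sensor can already move roughly 8K's worth of data, and the bottleneck is the processing pipeline rather than the sensor.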
  12. That is not what you said. You claimed that consumers are not asking for 8K, and that is not true. Saying that 8K is not important is like saying that 10 bit or 4:2:2 is not important. In all three cases more information available to the user makes for a better image after editing.
  13. Speak for yourself. I am asking for 8K, because I know that will reduce the artifacts that stem from shooting at too low a resolution. Oversampled sensor cameras are shooting at 6K already, but we need more than that, especially when we start having 80"+ panels becoming commonplace. Personally I would rather be able to do that downsampling myself in post than have the camera do it and just have to take whatever it produces. There are compromises made in that process, and the further off you can put those compromises, the better it will be for your final product's IQ. If you are interested in maintaining maximum IQ, you will want 8K.
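Doing that downsampling yourself in post can be as simple as block averaging; real resamplers use better kernels (Lanczos, bicubic), so this NumPy sketch is only a minimal illustration of the idea, not a production pipeline.

```python
import numpy as np

def downsample2x(frame):
    """Halve resolution by 2x2 block averaging (e.g. 8K -> 4K).
    Averaging neighbouring photosites also suppresses noise and aliasing,
    which is the IQ benefit of oversampling described above."""
    h, w, c = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# Toy 4x4 single-channel "frame"
frame = np.arange(16, dtype=float).reshape(4, 4, 1)
small = downsample2x(frame)
print(small.shape)  # (2, 2, 1)
```

Keeping this step in post, as the post argues, lets you choose the kernel and the target resolution yourself instead of taking whatever the camera's fixed pipeline produces.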
  14. I don't think you know what DMF is, or at the very least you are not using it correctly. It is one of the most powerful tools on your camera, if your camera has it implemented. If you are shooting close-ups of fast-moving insects in the wild, bees for example, you literally have 1 or 2 seconds to get your focus and take the shot. A manual lens is simply too clunky to do that consistently. DMF makes it very easy to do (obviously, if you have not taken the time to set up the related tools on your camera, you will get less benefit from it). AF gets the ballpark focus, you use DMF to get it where you want, then go. Anything that has to be done fast, or you lose your shot, is going to benefit from AF. DMF and focus lock allow you to get your camera exactly where it needs to be fast enough to get the shot.
  15. It doesn't have to be spot on. That is what DMF is for.
  16. Not in the camera market. Canon's revenues are down, as are most other manufacturers'. Canon is staying about even on unit sales, but most of those are low end, low margin, so it is likely that they are not making much money from cameras right now. Sony is the only manufacturer that has seen a significant increase in revenue, although the number of units they have shipped has fallen somewhat. That means they are primarily selling high value, high margin products in the current market, which would translate into increased profit in the camera segment. But they will be with the a9II.
  17. AF, then use DMF, then flick to manual. If you are shooting macro, that is the way to go. Very often you have very little time to get your shot, farting around with a straight manual lens will cost you shots.
  18. I don't really care about WB when shooting, since everything I shoot is saved as both RAW and jpeg. The jpeg images are used for preview purposes, so I can get a general idea of what I need to do with a particular image, but beyond that I adjust everything in raw for final images. Things like WB and hue are irrelevant in that workflow.
  19. The RED patents cover compression ratios above 6:1; if you are below that, you are OK.
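For a sense of what that threshold means, the ratio is just the uncompressed sensor data rate divided by the encoded bitrate. The frame size and bitrates below are made-up examples for illustration, not actual REDCODE settings:

```python
def compression_ratio(width, height, bit_depth, fps, encoded_mbps):
    """Uncompressed data rate divided by encoded bitrate, as N:1."""
    raw_mbps = width * height * bit_depth * fps / 1e6
    return raw_mbps / encoded_mbps

# Hypothetical 4K 12-bit 24p raw stream at two encoder bitrates
for mbps in (400, 500):
    ratio = compression_ratio(4096, 2160, 12, 24, mbps)
    print(f"{mbps} Mbit/s -> {ratio:.1f}:1")
```

With these made-up numbers, the 400 Mbit/s encode lands above 6:1 and the 500 Mbit/s encode below it, which is the kind of comparison the post's rule of thumb implies.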
  20. Not really. The changes to your regular workflow would be consistent, meaning they could be automated, just as with sets shot by any other camera. If someone normally shot on a Sony and then had to adjust a raw image from a Canon, it would entail the same amount of work. It would add "almost a week of extra work" by your logic, meaning that no one should be buying Canon cameras.
  21. Well, this is a camera for professionals or serious amateurs who, one would assume, know what they are doing. It is not intended for snapshots. If they are shooting in jpeg, then probably some other camera is better for them. Once you have the look you want figured out, you apply that to your images; it is not a problem with modern software. That is the whole idea behind image processing software being a digital darkroom. A real photographer does not (and never did, even in the age of analog media) go down to the local 7-Eleven to get their photographs developed. They do it themselves or they hire a specialist to do it for them.

The differences in the filters are really subtle. As I said, you would need spectroscopy to see the real difference (and I am not referring to actual wavelengths here). I guarantee there is absolutely no fucking way you are going to be able to eyeball it. The specific wavelengths chosen are irrelevant, as long as the ones chosen cover enough of the visible spectrum and the filters have enough dynamic range to reconstitute a true color. The wavelengths chosen by individual manufacturers typically differ slightly, but all have enough dynamic range to reconstitute the proper color. There will be slight inherent differences, but you are not going to be able to tell using your eyes.

The real problem comes in when people who are used to dealing with one camera's raw try to apply the same corrections to a different camera, and of course they end up with messed-up colors. But that is simply because they don't know what they are doing. It does NOT mean that one camera got it right and the other did not. It is a user problem. Adobe Camera Raw presets are generated by Adobe, not Sony. If you have an issue with those, take it up with Adobe.
  22. As long as the lens is resolving close enough to that level, higher pixel counts are useful. For a start, they allow you to crop if necessary and not worry about the image falling apart. If it was not useful, people would not be interested in pixel shift technology, right? But they are, which means there is demand for even higher pixel counts. Perhaps not for you, but then this would not be the camera for you. This is primarily a specialist stills camera for portraits and landscapes. It can shoot video, but it is not a specialist video camera. If that is what you want, there are other, more suitable options for that application.
  23. No, it does not, not in the sense that a person could tell the difference. As long as you are collecting data with enough dynamic range at a particular wavelength (which is what the slightly different colored filters do) any difference in the final image from a visual perspective is entirely due to the skill of the person working with the raw file.
  24. AF on the latest Sony models is as good as on Canons. The a9 will stand shoulder to shoulder with anything, and this camera is not far off the a9 in terms of capabilities. I don't have a problem with the menus. I shoot with the camera, not the menus. They are different from Canon's. So what? I happen to like them. I guess it all comes down to what you are used to; to me they are all inherently similar in ease of use. "Mojo" counts for nothing.

Personally I find Canons too big and bulky; I am always half afraid that most of them are going to slip out of my hands. Being built like a tank is great if it falls out of your hand, but I prefer cameras that are not always threatening to slip. That way I am more free to get the shot that I want, rather than constantly having to be aware of where my fingers are.

The only difference is the pigments in the filters, and those are very subtle. You would not be able to tell the difference without spectroscopy; no human can. And in any case, the whole point of raw is that you can interpret that data to suit yourself. If you can't get whatever color you want out of the file, that is your fault, not the camera's.

That depends. Most of the people who might switch are those who currently use DSLRs. Many of those will switch to MILCs when it comes time to replace them. The question then will be which company has the best systems on offer at the time and has managed to do that consistently. It will not happen all at once; it will be a gradual process as older DSLRs are junked.