
Mokara (Banned · 744 posts)

Everything posted by Mokara

  1. I did not say it was a hybrid, I said it was meant to fill the video camera mode of a hybrid.
  2. Ya, but the BM camera is supposed to fill the video camera mode of a hybrid, and is used in that role mostly. So his comments about the ergonomics and general usability are valid.
  3. Sigh... please do your research. I own the camera; I know what it does. Here it is, from CANON THEMSELVES: https://www.usa.canon.com/internet/portal/us/home/products/details/camcorders/support-flash-memory-camcorders/vixia-hf-s10 24p is recorded at 60i according to them, as is 30p. The only native recording mode on that camera is 60i; everything else is a fake mode. Later models did have native 24p and 30p (I think the ones from the following year introduced that feature, iirc), but the HF S10/100 did not.
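For anyone unfamiliar with how 24p content ends up inside a 60i stream, the standard trick is 2:3 pulldown: successive progressive frames are held for alternately two and three interlaced fields. A minimal sketch of the cadence (the function name and the simple field model are mine, not Canon's implementation):

```python
# Illustrative 2:3 pulldown: packing 24 progressive frames per second
# into 60 interlaced fields per second (one second of a 60i stream).

def pulldown_23(num_frames):
    """Map progressive frame indices to a 60i field sequence via a 2:3 cadence."""
    fields = []
    for i, cadence in enumerate([2, 3] * (num_frames // 2)):
        fields.extend([i] * cadence)  # frame i is held for 2 or 3 fields
    return fields

fields = pulldown_23(24)
print(len(fields))  # 24 frames -> 60 fields
```

Twelve (2 + 3) pairs cover 24 frames and yield exactly 60 fields, which is why 24p fits into 60i without changing the field rate.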
  4. Nope. 25p drops a frame and becomes 24p. The masses are happy and won't know the difference. Easy. Other than that, they need to add some menu items and pay a team to test that it works in the relevant cameras' hardware without introducing any additional bugs. It is a LOT more than just setting a flag. That is probably because of how the ISP was designed. The whole sensor will get read, but the data will not all be used in the same way. 50p is twice as much data as 25p, so they may not have had the bandwidth in the ISP to accommodate that.
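The frame-dropping approach described above can be sketched like this (purely illustrative; the function name is mine, and real firmware would do this during encoding rather than on frame lists):

```python
# "Fake" 24p from a 25p stream: drop one frame per second,
# i.e. every 25th frame, leaving 24 frames per second.

def drop_to_24p(frames_25p):
    """Drop every 25th frame from a 25p frame sequence."""
    return [f for i, f in enumerate(frames_25p) if (i + 1) % 25 != 0]

one_second = list(range(25))      # 25 frames of hypothetical footage
print(len(drop_to_24p(one_second)))  # 24 frames remain
```

Note this introduces a small once-per-second motion hiccup, which is part of why it counts as a fake mode rather than native 24p.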
  5. So they are spending money to implement it now instead of saving it? Nothing has changed. How sure are you that it is not just 25p with a dropped frame? The original Canon consumer HD cameras did stuff like that, they shot everything as 60i internally, then converted that to 30p or 24p through processing during encoding. Only 60i was actually native, the other modes were fake even though the cameras could record in them. It was only after the HF-S10 that native 30p/24p was implemented. I have a HF-S10, it pissed me off mightily when I found out about it at the time. If they are going to do it with firmware on these cameras and the hardware is not there, then it would have to be fake 24p.
  6. Most people who buy proper pro cameras such as the a9 or 1D series are not using them to shoot video. They are pros, so if they want to shoot video they will use a pro video camera. Right tool with no compromises for the right job. Hybrids are more important for advanced amateurs and semi-pros. But the a9 is not being made for those folk, it is being made for pro stills shooters. It is pretty safe to say that the vast majority of pro stills photographers don't give two hoots about video. And that the vast majority of pro videographers don't give two hoots about stills. That said, both cameras are still very capable of shooting video as well, if some video is required; they just don't need to be the best tools available for that. The primary purpose of the cameras was and will be stills.
  7. It was left out to cut costs. Every function on a camera has development cost associated with it, as well as increased manufacturing costs from the associated hardware. The people who buy consumer cameras don't use 24p to any significant extent, so leaving it out made sense. You would include stuff like that in products where those functions would be used (high end cameras, for example), but not where they would rarely be used. It is the same reason you don't see basic beginner modes in professional cameras: they cost money to implement and are highly unlikely to be used by the people who buy the product. 24p is not extra strain on the processor (Digic 8 is quite capable of handling 24p; however, the processor is not the only electronics needed), but those cameras will have basic stripped-down image signal processors (in other words, cheaper to make) that don't have a 24p mode in them, so it is irrelevant what the processor can or can't do. The function simply is not there to start with. I have never claimed that 24p was left out because of processing. Stop making stuff up. The cameras in question use Digic 8, which very clearly CAN do 24p. 24p was left out to reduce costs in consumer cameras. In fact, new models that use older hardware, such as the M200, still have 24p in them, so your argument is complete nonsense. There is no conspiracy.
  8. And you know this how? I am confused why you think that 4K can be oversampled but all 2K is pixel binned/line skipped.
  9. Not true. Consumer cameras were updated annually with incremental improvements as they became available, so the product stayed as up to date as possible. All Sony did was extend that approach to prosumer products as well, something Canon and Nikon were not (but should have been) doing.
  10. As I said before, it is the processor. They won't have improved video specs until they update the ISP in the processor. I would guess that is in the works but not ready yet, and they needed to update the a9 in time for the Olympics, which means they went with what was available now. Don't count on that. Canon should have much better video specs in 2020 models, extrapolating from what the Digic 9 should be capable of based on the DV7 specs. If the new 1D uses Digic 9 it may very well have significantly improved video capability. It is all about processor capability and how hot they get doing it. Crippling has nothing to do with it; rather, it is what the processor can do and what sort of compromises to hardware have to be made to fit into a manufacturing cost model. Manufacturers are not competing against their own products, they are competing against competitors' products, so crippling for no reason other than crippling would be the equivalent of shooting yourself in the foot. The processors used in these cameras are SoCs, so they contain a whole lot more besides the ARM processing cores themselves. It is usually those other bits that limit performance; getting the latest ARM core generally will not solve the problem. Because those other bits have to be developed internally for the most part, development cycles on them can be fairly long, and that is the main reason why specs don't increase at the rate we would like.
  11. The S1 is supposed to be oversampled. The problem at lower resolutions is the increased processing demand that places on the camera, so there are artifacts that result. Not necessarily pixel binning/line skipping as such, but rather crude approximations of the data, which have similar results.
  12. I would buy one in a heartbeat if I could afford it.
  13. Apparently they were asked about why the sensor was the same on the conference call, and their response was that the processing power to handle a new sensor was not ready quite yet, according to Jared Polin. The camera does have the latest variant of the Bionz X though. It suggests that there is a new sensor that they have not used yet because of processing requirements, and that the processor to do that is on the way but not quite ready. I would guess they needed to release the a9II in time for the Olympics, so they used the processors they had. But since it has the Bionz X the video specs would be the same. I suspect that the next processor will come with the a7SIII, since they will need improved video specs for that system.
  14. That is not really the point. There is some content where it does not make a difference, and other content where it does. Being dogmatic and saying it is always better or never better is just plain stupid. As a general rule, however, you want your equipment to be capable of handling both scenarios at a minimum. There are other things as well: higher resolution is good, and if you have a decent viewing panel you can tell the difference, BUT you need a camera that is actually resolving at that resolution (a lot of older lenses just don't, for example) without losing information through insufficient oversampling or insufficient bit rate. Not to mention when the shooter is deliberately killing resolution through artificial means. So you will usually have something that is nominally "4K" but in reality is closer to 2K. So of course it looks the same. Ideally you want your cameras and your TV panels to be able to match the resolution of eyes in the real world, and at the moment we are nowhere close to that. Certainly 4K is not, and although some 8K panels are starting to appear, there isn't really appropriate footage for those (which has to be shot at a minimum of 12K to approach true 8K). Improvements will only stop making a difference when we reach that point.
  15. It is the same as the M50. This camera is basically the M50 hardware repackaged into a different form factor. So of course it has the same specs. Why is anyone surprised by that? Really? Do people no longer pay attention before diving off the deep end of the conspiracy pool?
  16. 60p can't be worse than 16.7ms because that would require you to start the next exposure before reading the last one, so their measurement technique is likely flawed. If it was flawed for that measurement, chances are it was flawed for the other frame rate/resolution combos as well.
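The 16.7 ms figure above is just the 60p frame interval: 1000 ms divided by 60 frames per second. A quick check of the bound (the function name is mine, for illustration):

```python
# Upper bound on sensor readout time at a given frame rate: the readout
# cannot take longer than one frame interval, or the next exposure would
# have to start before the previous readout finished.

def max_readout_ms(fps):
    """Frame interval in milliseconds, the ceiling on readout time."""
    return 1000.0 / fps

print(round(max_readout_ms(60), 2))  # 16.67 ms ceiling at 60p
```

So any measured rolling-shutter figure above 16.67 ms at 60p would imply overlapping exposures, which is the contradiction the post points out.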
  17. Youtubers are not going to buy these lenses, they cost about $7-8k. They are obviously for Sony's new pro video cameras, which have AF as one of their selling points.
  18. Canon are not trying to protect anything. No one buys a consumer camera instead of a pro camera when they need a pro camera. Whether or not an M6 does or does not have 24p has no impact on EOS-C sales at all. Nothing. Zip. Zero. There is zero evidence that these features were left out to protect high end products. People speculating as such is not evidence. Features are left out to reduce costs. That is it. Think about it. If 24p was really in such high demand as most here think, do you think Canon would shoot themselves in the foot by leaving it out, when all that would accomplish is to send customers to competitors' consumer cameras, not the Cxxx cameras? They would gain absolutely nothing if that were true, and instead would lose market share to the competition. Remember, people have been going on for ages about how Canon are market savvy and look to what their customers really want while waiting for the technology to mature. People say that their superior sales speak for themselves. Now suddenly we are supposed to think that their engineers and marketing managers have no clue. Just like that, because they made a decision exactly on the basis that people have been saying they do for all these years. They left it out because it costs money to implement and few people who buy these cameras actually use it. They counted the beans and went with the largest pile. So they cut it out in order to improve their margin. It will save them a million or two and have negligible impact on sales. Why wouldn't they do it? It is the smart thing to do. The vast majority of people who buy the M6 are buying it as a stills camera, and the vast majority of the minority who do use it for video will shoot 30/60p anyway, not 24p. Canon know this, and they would rather have that extra million or two in margins in these times of shrinking markets. When sales are growing you can add fluff to improve your sales pitch, but when they are shrinking you are best served by cutting the fat.
  19. Those are royalties, not license fees.
  20. There will probably be a new processor coming soon. Canon processors come in pairs: Digic N for stills and Digic DV(N−2) for video. The latest video camera is the C500M2, and that introduced the Digic DV7 processor, which means that a Digic 9 is also out there and will appear soon. Since the Digic DV7 can handle 6K, it is likely that the Digic 9 will as well (although it may not be implemented in all products due to cost or practical reasons). Usually new Digic stills processors first appear in low end cameras, presumably because they have shorter development cycles, but that does not mean it has to be that way this time around. If Canon wanted to get "back in the game," as it were, they might have strong incentive to get it into one of their flagship products first this time.
  21. Lol....that is exactly what bracketing is. It can be used for all sorts of things, and works, provided there is not excessive motion in the picture. The point is, Apple did not invent it, it has been around for as long as digital cameras have been around. The same principle is used in focus stacking for example.
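As a toy illustration of the bracketing/stacking principle mentioned above (the function name, thresholds, and 1-D "images" are made up; real merging is far more sophisticated), bracketed exposures can be merged by averaging only the well-exposed samples at each pixel:

```python
# Naive merge of bracketed exposures: for each pixel, average the values
# from the frames in which that pixel is neither crushed nor clipped.

def merge_brackets(exposures, lo=0.05, hi=0.95):
    """Merge same-scene exposures (lists of 0..1 pixel values) per pixel."""
    merged = []
    for pixel_vals in zip(*exposures):
        usable = [v for v in pixel_vals if lo < v < hi] or list(pixel_vals)
        merged.append(sum(usable) / len(usable))
    return merged

dark   = [0.01, 0.20, 0.40]  # underexposed frame: shadows crushed
bright = [0.30, 0.80, 1.00]  # overexposed frame: highlights clipped
print(merge_brackets([dark, bright]))
```

The same select-the-usable-samples idea underlies focus stacking, with sharpness taking the place of exposure as the selection criterion.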
  22. I don't see why not, since what they are effectively doing is the same thing, just in the camera.
  23. (In: P6K accessories) Because there is no such thing as a free lunch, and such compromises lead to image degradation, which, if you don't care about FF, may be an issue for you.
  24. Is this where people report not finding bugs? That sounds like politics, where people believe that discussion of issues and problems can be counterbalanced by sunshine and flowers, even though sunshine and flowers don't need fixing. Issue discussion needs to be about what is wrong and how it can be fixed. What is right does not require fixing. There is not an "other side" to problems.