
Mokara

Banned
  • Posts

    744
  • Joined

  • Last visited

Everything posted by Mokara

  1. Even if you encounter diffraction limitations, higher pixel counts mean that you can actually reach that limit without bumping into the approximation errors you would hit if your pixel resolution sat right at it. As a general rule of thumb, in order to capture a true image you want your sensor resolution to be significantly higher than your lens resolution so you don't waste any of the lens's performance. Plus, Bayer filters mean that even when you are at the diffraction limit you will lose information, so you have to be significantly below it to capture reasonably accurate true color at the maximum resolution possible with the lens. High pixel count sensors are useful for this reason, even if the optics do not resolve down to that level. At a guess I would say that you need about 9-16 pixels (3x3 or 4x4 matrices) on the sensor to properly recover all of the information in 1 pixel of lens resolution. It is fairly simple: you use the 3x3 Bayer array to generate a single debayered pixel, which is what is output. If that is done on the sensor itself, your camera would not have to debayer the image separately, and you would essentially be getting a true color image from the camera at full resolution. The 4K output would have the image quality of a 12K camera, but without the computational overhead a conventional 12K camera would normally have. Since computing power is the bottleneck on all video cameras, a sensor that does this automatically would free up enormous resources in the rest of the camera that could then be used in other, more useful ways, such as monitoring more AF/exposure points, which in turn would allow for tighter, more accurate tracking and things of that sort.
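The 3x3-block idea can be sketched in code. This is a toy illustration only, assuming an RGGB mosaic and plain per-channel averaging; real on-sensor pipelines are proprietary, and the function name and layout here are my own:

```python
import numpy as np

def bayer_block_downsample(mosaic, block=3):
    """Collapse each block x block tile of an RGGB Bayer mosaic into one
    RGB pixel by averaging the R, G and B samples inside the tile.
    Toy illustration only, not any real sensor's on-chip pipeline."""
    h, w = mosaic.shape
    h -= h % block
    w -= w % block  # trim to a whole number of tiles
    # Per-pixel channel map for the 2x2-periodic RGGB pattern: 0=R, 1=G, 2=B.
    channel = np.zeros((h, w), dtype=int)
    channel[0::2, 1::2] = 1  # G on the red rows
    channel[1::2, 0::2] = 1  # G on the blue rows
    channel[1::2, 1::2] = 2  # B
    out = np.zeros((h // block, w // block, 3))
    for c in range(3):
        mask = (channel == c)
        for ty in range(out.shape[0]):
            for tx in range(out.shape[1]):
                tile = (slice(ty * block, (ty + 1) * block),
                        slice(tx * block, (tx + 1) * block))
                # Every 3x3 tile of a 2x2-periodic pattern contains all three colors.
                out[ty, tx, c] = mosaic[tile][mask[tile]].mean()
    return out
```

A uniform grey mosaic comes back as uniform grey RGB, which is a quick sanity check that the per-channel masks line up with the pattern.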
  2. Ya, like I said, I have to wait until I get home to check. I don't use the recorder all that often, it is more of a toy to experiment with for me, but I do seem to remember deleting individual files the other day.
  3. Select it and delete? I am at work atm, so I can't check the exact steps, but it is something like that.
  4. That will change. A lot of what you see now are systems people currently have. No reason for them to throw that stuff away just because. There is an inertia to change, especially in professional spheres where familiarity with current equipment and systems far outweighs other considerations. For example, old mainframes continued to be used in corporate environments long after superior alternatives were available. It was simply too disruptive to upgrade, so they carried on using the old, obsolete technology, often for very long periods of time. So what you see on the side of a sports field is irrelevant to the performance argument. There is also an element of Pavlovian response involved, where the people who use DSLRs for this stuff generally base their opinion of mirrorless performance on much older systems rather than the latest state-of-the-art gear. Their choices become a self-fulfilling prophecy based on subjective rather than objective considerations as a result. It is also a consumer camera, not a professional camera. It should be evaluated as such.
  5. Flexibility? So you can use external if you want to for a particular purpose, but use internal for other applications as needed? Those wobbles are a combination of lens distortion as the focal point moves and rolling shutter. Your camera can't correct for this. You will have them in most stabilized systems in one form or another, no matter what system is used, unless you have exceptional optics. That is not correct. DSLR tracking is inherently limited by hardware, while MILC tracking is limited only by computing power. It is inevitable that MILCs will overtake DSLRs in this respect. The latest iterations of Sony firmware are apparently already there, or pretty damned close. We are now at the tipping point where photography moves rapidly to MILC systems. My prediction a few years ago was that this would happen around 2019, based on my perception of how the technology was developing, and it is somewhat satisfying that it is actually playing out that way, especially going back to forums like Canon Rumors and the folk there who poo-pooed the idea. Turns out they were not as astute as they thought.
  6. Playback does not involve rendering. It is just decoding; the data is not being transformed. Usually when you encounter issues like this it is a case of the software doing the decoding rather than hardware. Since software decoders vary considerably in efficiency, some will give smooth output while others will stutter. In this case, since the issue seems 10-bit-centric, my guess is that various software is treating those files very differently, which is causing the performance issues.
  7. Mokara

    Framerate

    It is not an artistic brain, it is a Pavlovian response to the fact that movies have traditionally been shot at 24p, while ignoring the fact that nobody outside of a cinema (or a Blu-ray/DVD disc on a TV) views anything at 24p. You see stuff you admire and try to emulate it without understanding WHY things are shot like that. There is a difference between 24p and 25p. Because of the frame mismatch there will be a stutter roughly once a second (unless you are viewing in a cinema or from a Blu-ray/DVD disc).
  8. Mokara

    Framerate

    The people who tell you to shoot at 24p because that is what movies are shot at don't know what they are talking about. Shoot at the frame rate you expect it to be displayed at. 24p works for movies because movie projectors display 24 fps. But if you try to display that on a device operating at a different frame rate, then expect problems unless you do sophisticated interpolation (and if you are going to do that, you might as well shoot at the proper frame rate to begin with - far fewer problems later on). If you are not shooting footage for cinema distribution, shoot at 25/30 - 50/60 fps (depending on where you live) if it is for commercial TV distribution, or 30 - 60 fps for anything else. Flat panel TV sets (but not CRTs) automatically adjust their frame rate to match the input (cell phones probably do this as well), but computer monitors and online video hosts do not - they run at 60 fps (or whatever you have manually selected the refresh rate to be), so anything else will have frames duplicated or removed, which can result in serious visual artifacts. Broadcast TV sources use a fixed frame rate, and any content shot at a different frame rate has to be re-rendered using sophisticated interpolation to generate something that is watchable. As a general rule of thumb, shoot at 30/60 fps and you should be OK for most viewing devices if you are not shooting commercially. For broadcast TV, shoot at the PAL or NTSC standard, depending on where you live.
  9. Well, that is perfect subject matter for the camera test - back packing cats!
  10. Graphics cards on laptops are not the same as the ones in desktops in terms of capability. That is likely your problem. A laptop is NOT a replacement for a desktop when video editing is concerned. If a hardware decoder is being used it won't show up in tools since the work is not being done in the processor cores. That is why it doesn't look like they are being fully utilized (when they probably are, at least in so far as the hardware decoder is concerned).
  11. It is a Canon. The colors are lovely. Of course you will!! I think the intended customers are children.
  12. That is absolutely NOT how Mr. Jobs would have done it. He was all about control and squeezing competition out, and that means proprietary everything. That was his thing. Steve Jobs believed that he knew what the consumer wanted better than they knew themselves (he said so himself). You got what he gave you; he packaged it as "hip" and got opinion leaders to buy into an elitist image, knowing that the sheep would follow. That, together with absolute control over the ecosystem, was his business plan. It was NOT a consumer-friendly business plan; consumers were gullible chumps there to be exploited by sophisticated marketing. You mean like Russian Doll?
  13. 95 MB/s is the read rate. Burst write rates are usually a lot lower, and sustained writes lower still. Keep in mind that your effective write speed is going to be determined by the minimum sustained write speed, not the maximum; actual write rates fluctuate during a recording, sometimes wildly. What you see in a burst spike is not necessarily what you are going to get if you are recording for a period of time. Also, write speeds can vary considerably depending on the capacity of the card.
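As a rough sanity check, you can compare a codec's bitrate against a card's sustained write floor rather than its rated burst figure. A small sketch (the function, the numbers, and the safety factor are arbitrary assumptions for illustration, not any card's spec):

```python
def sustained_write_ok(bitrate_mbps, min_write_MBps, safety=0.8):
    """Check whether a card's minimum sustained write speed can keep up with
    a recording bitrate. Derates the card by a safety factor because real
    write rates fluctuate. Bitrate in megabits/s, card speed in megabytes/s."""
    required_MBps = bitrate_mbps / 8  # bits -> bytes
    return required_MBps <= min_write_MBps * safety

# A 150 Mb/s codec needs ~18.75 MB/s sustained, so a card whose sustained
# floor is 30 MB/s is fine even if its headline figure is a 95 MB/s read.
print(sustained_write_ok(150, 30))  # True
print(sustained_write_ok(400, 30))  # False: 400 Mb/s needs 50 MB/s sustained
```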
  14. Is that a possibility? What will happen to cat videos if the app wipes them out? How will new cameras be tested properly in the absence of cats?
  15. As a general rule of thumb, most stuff that costs under a few hundred dollars should be treated as disposable. It is too expensive to fix, and if the manufacturer does not provide you with a prepaid RMA, too expensive to exchange. A warranty is pretty much worthless if it costs almost as much as a new one to ship it back to them. Just go and buy a new one.
  16. CinemaDNG is an open format, but BM was using a modified extension of it. It was likely those modifications which ran foul of other people's IP, not CinemaDNG itself.
  17. Right. Your dynamic range could be a foot or it could be a mile. ADC bits are the "ticks" on those scales when those distances are converted into a digital code. A bit in itself has no inherent size. A 12 bit tick on your foot ruler does not represent the same quantity as a 12 bit tick on your mile ruler.
  18. An ADC converts an analog signal into a digital code. If your sensor can measure 1000 stops of dynamic range (as in, being able to accurately measure a low non-zero quantity and a high quantity, the difference between which is your dynamic range), your ADC will convert that analog response into bits covering that range. An individual bit does NOT correspond to a stop of DR. It could be the equivalent of 10 stops or it could be 0.1 stops. It is completely arbitrary. You could use a 12 bit ADC for your 11 stop DR sensor, or you could use a 256 bit ADC for it, or you could use a 4 bit ADC. The only thing that would change is the size of the steps between the highest and lowest analog value your sensor can detect. If your sensor had a 1000 stop DR, your 12 bit ADC would generate a digital version of that analog signal with 4096 possible values. The size of those 4096 steps will vary depending on what the dynamic range of your sensor actually is. If it has 4 stops, those 4096 steps will correspond to small increments of the response; if it has 1000 stops, they will correspond to large increments of the response.
  19. You clearly don't know what ADC is. Your 12 bit ADC can cover 1000 stops of DR if necessary. There would just be big steps instead of little steps.
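The point in the last two posts comes down to one line of arithmetic: a linear ADC slices whatever analog span the sensor delivers into 2^bits equal steps, so the step size depends on the span, not on the bit depth alone. A minimal sketch (arbitrary units, hypothetical function name):

```python
def adc_step(analog_max, analog_min, bits):
    """Analog change represented by one ADC code: a linear ADC divides the
    span [analog_min, analog_max] into 2**bits equal steps, so the step
    size scales with the sensor's range, not with the bit depth alone."""
    return (analog_max - analog_min) / (2 ** bits)

# The same 12-bit ADC over two different sensor ranges (arbitrary units):
print(adc_step(16.0, 1.0, 12))       # 4-stop range: little steps
print(adc_step(2.0 ** 10, 1.0, 12))  # 10-stop range: much bigger steps
```

Same 4096 codes in both cases; only the increment each code represents changes.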
  20. But maybe it guarantees invites from other companies! ;)
  21. The Sony is doing 5 axis stabilization in the body, it doesn't care much what lens you have attached. The stabilization on the Nikon however happens through interactions with the lens, so if your lens is not fully compatible with the body it is not surprising that it does not offer full levels of stabilization with older lenses.
  22. That would be because arts types are also equipment snobs in general. Most of them probably would not be caught dead with anything other than an Apple product (where applicable), for example. The type of person who shoots documentaries with awards in mind (in other words, as "art" rather than as content) is probably going to use the "right" sort of equipment. It is a self-fulfilling prophecy of sorts. For them, if they don't use a Canon they are not a real artist, so they use a Canon. If they used something else, their fellow artists would consider whatever they made as "not looking right" and hence not "art", so they would not win any awards (or at least, not stand much chance of winning). That is why all of those docs were shot on Canon equipment. On the other hand, people who shoot documentaries primarily as content are going to be more concerned with equipment specs, so their choices will be different.
  23. The wires are probably the same thickness irrespective of how thick the cable is. Thicker cables are less flexible however, so less local stress is placed on the wires inside. The important part is to stop the wires from flexing too much and failing from metal fatigue, so a thicker relatively inflexible cable is what you want. Bending is the enemy of wires, you want a cable that minimizes that.