Everything posted by tugela

  1. Hybrids like the GH5 and other prosumer cameras are far more likely to come at Photokina than at NAB. The best you could have hoped for this weekend was professional camcorders, since that is the market the conference was aimed at.
  2. Actually, something like an 80D is far too heavy and bulky to be a good vlog camera. If you are going to be walking around holding something at arm's length, and that something weighs 730g for the body alone, not counting the lens, it is going to wear on you pretty quickly, autofocus or no autofocus. The ideal vlogging camera is something small and light. No doubt some he-man types will use it, but for most vloggers (who seem to be typically young women in their late teens to early twenties) the 80D is simply too big for that purpose.
  3. Or it could be a cardboard box with a pin hole. Remember... developing a camera is not a simple thing, and developing one that pushes the envelope is even harder. What are the odds that some guy working out of his basement (or wherever) is going to be able to do that from scratch? If it were that easy we would be swamped by such cameras, and we are not. Even those with decades of experience making cameras cannot do it. So, what are the odds? I would suggest not holding your breath, and under no circumstances send any money until there is actually a real working product to be had.
  4. If I release an email, will you send me $500 as well? I "promise" you will get it back if the camera doesn't happen (unless my company runs out of money after spending all the development $$$ on wine, women and song first - then your "deposit" poofs).
  5. The NR is most likely a consequence of downscaling the oversampled data, which would explain why noise suddenly pops up when you hit a certain ISO value. Below that value it gets averaged out; above that value the averaging is no longer effective (see the first sketch after these posts).
  6. I don't see much. Perhaps you are confusing edges with edge enhancement? If you are debayering, it seems to me that at any edge there are going to be artifacts associated with the fact that there is insufficient information to accurately reproduce reality. If you prioritize luma information you will see incorrect color at the edge, and if you prioritize chroma you will see incorrect resolution at the edge. Either way you will see an artifact. It will be present in all debayered images no matter how you set up the debayering (unless you deal strictly in greyscale). The advantage of an oversampled sensor is that the radius of uncertainty is that much smaller and the artifacts are greatly reduced as a result, so the image you get will be more representative of reality than one from a sensor that is not oversampled (see the second sketch after these posts).
  7. Is this "craft camera" a cardboard box with a pinhole in it? What kind of fool is going to give them $500 for a "reservation" to get something that they know absolutely nothing about?? Good luck getting that money back.
  8. Well now... that just changes everything! Being able to shoot at 35 Mbps is an amazing leap forward; other manufacturers need to take note and follow Canon's lead. It is great that Canon continue to innovate like this! Bravo!
  9. Maybe the compressor engine has more leeway in 16-235 than in 0-255, so macroblocking is reduced?
  10. I don't believe that the NX1 has much sharpening applied (which IMO is just a debayering parameter when done in camera anyway - not sharpening in the post-processing sense). The footage most likely looks sharp as a result of downsampling of the native image.
  11. You could buy a T2i and it will pay for itself 5X in two weeks, if that is a camera that you use. That doesn't mean it is better than everything else, though; it just means that you opted for a lower-performance tool for your work.
  12. I have not had a chance to try the script out yet, but I am hoping it will solve one niggling issue I have observed. Often you will see the image immediately behind a moving object breaking up (particularly with DIS enabled), which I have attributed to there not being enough bandwidth to handle the extra bits needed. Hopefully these higher bit rates will be more accommodating in those situations.
  13. If you do conversions with Samsung's utility, the bit rate of the H.264 file can vary considerably depending on what is actually in the clip. The stated rate is just a specification. There is no reason why the software can't go higher; it's just that if it does, it will generate an out-of-spec file.
  14. The processors are from the same family, so they both run the same OS, but that does NOT mean that they are the same. For example, an Intel-based laptop i3 will run Windows in exactly the same way as a K-series desktop i7, but there is a huge difference in their relative performance. IIRC Samsung already said that the NX500 had a lower-performance processor than the NX1, and that was the reason for the reduced capabilities. In any case, the respective web pages for the two cameras clearly show them with different names, albeit from the same processor family.
  15. The methodology they used is flawed, however. They recorded the write light and made the assumption that recording was taking place continuously while it was flashing. If it was not continuous, then the actual write speed would have been a lot higher. If most of the "flashing light" time was actually processing, then later firmware updates may well have made that processing more efficient, with the net result being that the overall apparent write time was much shorter even though the actual write speed remained the same (see the third sketch after these posts).
  16. I'm guessing the 80D autofocus works less well with Sony lenses.....
  17. I believe the sensors are the same, but the NX500 has a more basic processor. The NX1 has a DRIMe V processor, whereas the NX500 has a DRIMe Vs processor, according to Samsung.
  18. I don't see how that is possible either, because you have to move the frame out of the light path for the next frame to be exposed. Unless you reduce the frame rate, you cannot decrease the shutter speed below that. It is not physically possible. When you are working with digital data you can average the results of several frames to produce a new composite frame. I would guess that a camera could do this internally with the raw data, but that would require a fundamental change in how the data is processed (and greatly increase demands on the processor itself). Because of that I am sceptical that many (or even any) cameras really do this, even if they report "slower" shutter speeds to the user.
  19. The 5DS (both versions) and the 7DII are current generation prosumer designs, and they all have dual processors. The 5D4 will as well, unless it uses a Digic 7, in which case it might have only one. All Digic processors up to version 6 can only do up to 1080p60 using hardware encoding. In order to do 4K software encoding like the 1DXII they will need dual processors; one Digic 6+ won't cut it. If they use a Digic 7, it likely has a 4K encoder in hardware, just like its sibling, the Digic DV5. The jury is still out regarding their ability to keep things within the thermal envelope without adding a fan, however. We have already been told that Canon went with dual Digic 6 and MJPEG in the 1DXII because that was the only way they could practically do 4K in the body. The 5D update will have even more constraints, so the bar for it is even higher. If the 5D4 shoots 4K, IMO it will most likely be MJPEG using dual processors. If it has a single processor then it will probably be limited to FHD using H.264 encoding.
  20. 5D cameras have dual processors as well. Unless the 1DXII has additional logic, the 5D4 should have similar capabilities.
  21. It is impossible to set the shutter speed to slower than the frame rate. Other cameras may give a "slower" number, but it is not real, since the sensor is always being exposed once the shutter speed hits the frame rate. It is physically impossible for it to be exposed for any longer. You need to think about what you are saying. Not really. Just offset duplicate copies of the clip by one frame and set the transparency to 50% (or whatever fraction you are using to simulate a longer exposure). Shoot with the shutter speed set at whatever the frame rate is and you are good to go. It's pretty simple (see the last sketch after these posts).
  22. It can be changed on my NX1. What are you talking about?
  23. No they won't. The only reason they can get away with it on the 1D is because that camera has two primary processors as well as extra hardware to help out, the body is huge and made of metal (which helps with cooling), and they use an undemanding low-compression codec. It is the sledgehammer approach, and you can only use sledgehammers effectively if you are burly enough to do so. Doing it on smaller systems that are more typical of consumer cameras, with a codec more palatable to consumers, would not be possible with their current technology. That is why they have not tried to compete with the other manufacturers - they just don't have processors up to the task. No doubt they can see their market share slowly dribbling away, but there is nothing they can do about it right now other than hint "Just wait! Soon!" without any specifics.
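Sketch for post 5: a rough illustration (my assumption about the mechanism, not a description of Samsung's actual pipeline) of how downscaling an oversampled frame averages noise away. Averaging 2x2 photosites roughly halves the random noise per output pixel, so noise only becomes visible once the per-photosite noise is high enough that the averaged result still shows it. The frame size and noise levels below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5                                  # flat grey patch, 0..1 scale
for sensor_noise in (0.02, 0.05, 0.10, 0.20): # stand-ins for rising ISO
    # Simulate the oversampled sensor as a noisy flat field...
    frame = signal + rng.normal(0.0, sensor_noise, size=(1000, 1500))
    # ...then downscale by averaging 2x2 blocks, as an oversampled resample would.
    down = frame.reshape(500, 2, 750, 2).mean(axis=(1, 3))
    print(f"per-photosite noise {sensor_noise:.2f} -> "
          f"noise after downscale {down.std():.3f}")
```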
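Sketch for post 6: a minimal, assumption-laden demonstration of the edge artifact described there. It samples a colour-neutral black-to-white edge through a simulated RGGB Bayer mosaic, reconstructs it with plain bilinear demosaicing (chosen purely for simplicity; real cameras use more sophisticated algorithms), and measures the false colour at the edge before and after a 2x downscale. Requires numpy and scipy.

```python
import numpy as np
from scipy.ndimage import convolve

H = W = 64
scene = np.zeros((H, W, 3))
scene[:, W // 2:] = 1.0  # hard, colour-neutral vertical edge

# Sample the scene through an RGGB Bayer mosaic (one colour per photosite).
r_mask = np.zeros((H, W), bool); r_mask[0::2, 0::2] = True
g_mask = np.zeros((H, W), bool); g_mask[0::2, 1::2] = True; g_mask[1::2, 0::2] = True
b_mask = np.zeros((H, W), bool); b_mask[1::2, 1::2] = True
bayer = scene[..., 0] * r_mask + scene[..., 1] * g_mask + scene[..., 2] * b_mask

# Bilinear interpolation kernels for the missing samples of each channel.
k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
demosaiced = np.stack([convolve(bayer * r_mask, k_rb),
                       convolve(bayer * g_mask, k_g),
                       convolve(bayer * b_mask, k_rb)], axis=-1)

# False colour = deviation from grey; on a neutral edge it is pure artifact.
false_colour = np.abs(demosaiced - demosaiced.mean(axis=-1, keepdims=True))
print("worst false colour at full size:   ", false_colour.max())

# The "oversampled" case: average 2x2 blocks (downscale) before measuring.
down = demosaiced.reshape(H // 2, 2, W // 2, 2, 3).mean(axis=(1, 3))
false_colour_down = np.abs(down - down.mean(axis=-1, keepdims=True))
print("worst false colour after downscale:", false_colour_down.max())
```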
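Sketch for post 15: the arithmetic behind the objection, with made-up numbers. If the card-access light is treated as "writing" for its full duration, the inferred speed is the burst size divided by the light-on time; if some of that time is really in-camera processing, the true write speed is proportionally higher.

```python
burst_mb = 400.0        # size of the buffered burst being flushed (assumed)
light_on_s = 10.0       # time the write light was observed flashing (assumed)
writing_fraction = 0.5  # share of that time spent actually writing (unknown)

apparent_speed = burst_mb / light_on_s
actual_speed = burst_mb / (light_on_s * writing_fraction)
print(f"apparent write speed: {apparent_speed:.0f} MB/s")
print(f"actual write speed if only half the time is real writes: {actual_speed:.0f} MB/s")
```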
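Sketch for post 21: the offset-and-blend trick described there, done in numpy rather than in an NLE (my own stand-in workflow, with clips assumed to be arrays of shape (frames, height, width, 3)). Averaging each frame with the previous one is equivalent to stacking a one-frame-offset duplicate at 50% opacity, and approximates a doubled effective exposure; a wider window approximates an even "slower" shutter.

```python
import numpy as np

def simulate_long_shutter(clip: np.ndarray, window: int = 2) -> np.ndarray:
    """Average each frame with the (window - 1) frames before it."""
    out = np.empty(clip.shape, dtype=np.float32)
    for i in range(clip.shape[0]):
        start = max(0, i - window + 1)
        out[i] = clip[start:i + 1].astype(np.float32).mean(axis=0)
    return out.astype(clip.dtype)

# Example: two-frame blending (the 50% transparency trick) on fake footage.
clip = np.random.randint(0, 256, size=(48, 270, 480, 3), dtype=np.uint8)
blurred = simulate_long_shutter(clip, window=2)
```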