Mokara

Members
  • Content Count

    625
  • Joined

  • Last visited

  • Days Won

    1

Mokara last won the day on March 23 2019

Mokara had the most liked content!

About Mokara

  • Rank
    Frequent member


  1. The 1DC used MJPEG because the processor of the time could not handle 4K hardware encoding without a cooling solution, which was not acceptable in the DSLR form factor. So they compromised and used a software solution instead, and the only way that would work was with an inefficient codec like MJPEG. The 1DC did use hardware encoding for 1080p footage, which the processor's encoder could do within the thermal envelope of the form factor. I know people at the time said that MJPEG was superior because it damaged the footage less (the only downside in their minds being the large file size), but that was not the reason it was implemented that way; it was implemented as MJPEG because the processor's encoder could not handle H.264 encoding of 4K footage. If the processor had been up to snuff they would have done it with H.264.
  2. I don't think there is any mystery here... if you have a high budget production and are professional, you are going to pick one of the best tools for the job. Why wouldn't you? If you have a multi-million dollar budget, spending an extra few thousand on the best gear is a no-brainer. Things like the EOS systems appeal more to the low budget crowd because that is all they can afford. There are usually hardware or IP reasons for those things; it is not "crippling". You can't implement something if the hardware can't do it, or if you don't have freedom to operate because of blocking IP that you don't have a license for.
  3. Because it is Blackmagic dammit! Did you not look at their superior marketing? Was it all wasted?
  4. It probably does not cost him that much since a lot of that stuff is likely sold after he reviews it. Ad revenue would make up the balance, provided he has enough viewers. Plus, if his channel is popular and he maintains contacts with marketing people from manufacturers, chances are that he gets a fair amount of gear free or for evaluation.
  5. As a software engineer you no doubt know that memory is arranged in 8 bit blocks, so a 14 bit data item is stored in a 16 bit space? Yes? Or do your computers work differently? The spec says 1000+ frames; that means it will be around 1000, but could be more, because actual RAW file sizes vary depending on what is in the image, so the point where you hit the wall varies too. It does NOT mean infinite. It means roughly 1000. If there were no limitation on the number of frames they would have said so, not "1000+". Clearly there IS a limitation. The write speed of the card itself is irrelevant if the camera can't deliver data at that rate. As a software engineer I would have expected you to know this. Magic does not exist in the real world; you can't write data that has not been created yet. Mirror speed is completely irrelevant. The mechanical limitation imposed by the mirror is 16 fps, and in any case it has no bearing on the computing overhead imposed by DPAF, since in DSLR mode the camera is using the viewfinder focusing elements (which have their own discrete processor in the 1D and 5D cameras), not DPAF (which is handled by the main processor/s in those cameras; the 1D has two primary processors, the 5D has one). In live view the frame rate is 20 fps, which is the upper limit of what the camera can handle, subject to the constraints of the buffer size. Note: 20 fps = live view, not SLR mode; the mirror is up and is not involved at all. 12 bit compressed RAW can be handled as 3 byte fragments instead of the 4 bytes that 14 bit would require, so it is inherently faster to deal with. Also, 5.4K works with ~15 mpixels, not the full 20 mpixels. So, significantly less data to deal with. The overall processing limitations can be roughly deduced from the absence of DPAF in RAW or 4K60p modes. This is not a problem with stills, since dropping a frame or two to accommodate the processing needs of DPAF is not an issue, but you can't do that with video. Compression itself is not a factor because that is handled in hardware by a different part of the processor.
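
A minimal Python sketch of the packing arithmetic above, assuming the conventional layouts the post describes (14 bit samples padded out to 16 bit words, pairs of 12 bit samples packed into 3 bytes); the 20 mpixel frame is just an illustrative figure:

```python
# Sketch: storage cost of 14 bit vs packed 12 bit sensor data.
# Assumes 14 bit samples are padded to 16 bit (2 byte) words, while
# two 12 bit samples (24 bits) fit exactly into 3 bytes.

def bytes_for_14bit(n_samples: int) -> int:
    # One 14 bit sample occupies a full 16 bit word.
    return n_samples * 2

def bytes_for_12bit_packed(n_samples: int) -> int:
    # 12 bits per sample, rounded up to whole bytes.
    return (n_samples * 12 + 7) // 8

def pack_two_12bit(a: int, b: int) -> bytes:
    # Pack two 12 bit values into 3 bytes: aaaaaaaa aaaabbbb bbbbbbbb
    assert 0 <= a < 4096 and 0 <= b < 4096
    return bytes([a >> 4, ((a & 0xF) << 4) | (b >> 8), b & 0xFF])

pixels = 20_000_000  # a 20 mpixel frame
print(bytes_for_14bit(pixels) / 1e6)         # 40.0 MB as 16 bit words
print(bytes_for_12bit_packed(pixels) / 1e6)  # 30.0 MB packed, 3/4 the data
```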
  6. That is compressed RAW (an actual file). A 16 bit, 20 mpixel data feed is 40 MB per frame; that is what is going into the buffer. It is then processed and packaged into a file, and that is where the bottleneck is. The buffer itself must be able to accept at least 800 MB/s to meet the specs. Again, you are missing the point: it is irrelevant what a card is capable of, it is what the camera is capable of that counts, and that is approximately 1000 frames at the spec frame rate. You don't need a buffer that holds the entire 1000 frames before writing; writing goes on while the buffer is filling, which means you can have a much smaller buffer and still get those 1000 frames at that frame rate. Let me provide a visual analogy since text is not doing it for you. Think of it as a bottle with a hole in it. Water is running in at the same time it is running out, but if it is running in faster than it is running out, eventually the bottle will fill and you can't get any more water in. So, if data is going into the buffer at 800 MB/s and leaving at 500 MB/s, then for 1000 frames at 20 fps the buffer would have to be 15 GB of RAM. If you write to the card at a higher speed, your buffer can be correspondingly smaller and still hit the same 1000 frame limit: a 600 MB/s write rate would mean 10 GB of RAM, while a 700 MB/s write rate would require a buffer of only 5 GB. Cellphones have 8 GB of RAM; it is not out of the question that a large flagship camera has more. This new camera probably has 16 GB. The fact that there is a maximum frame count cited in the specs means that the camera does eventually bottleneck.
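
The bottle analogy is easy to put into numbers. A minimal leaky-bucket sketch using the rates assumed in the post (none of these are published specs):

```python
# Leaky-bucket model of the stills buffer: data flows in from the sensor
# faster than it drains to the card, and the difference piles up in RAM.
# All figures are the post's assumptions, not published specs.

FRAME_MB = 40                # ~20 mpixel x 16 bits per frame
FPS = 20
IN_RATE = FRAME_MB * FPS     # 800 MB/s into the buffer

def buffer_needed_gb(write_rate_mb_s: float, frames: int = 1000) -> float:
    seconds = frames / FPS                           # 1000 frames -> 50 s burst
    backlog = (IN_RATE - write_rate_mb_s) * seconds  # MB that accumulate
    return backlog / 1000

for write_rate in (500, 600, 700):
    print(write_rate, "MB/s ->", buffer_needed_gb(write_rate), "GB of buffer")
# 500 MB/s -> 15.0 GB, 600 MB/s -> 10.0 GB, 700 MB/s -> 5.0 GB
```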
  7. How so? What I think will probably happen is that there will be a new generation of processor for the a7SIII, which will place it on the cutting edge as far as video is concerned. Then, some time later, the same processor will be introduced into the mainstream a7 series as the a7IV. They certainly will not want to introduce the next generation video capabilities on the a7IV before the a7SIII, since doing so would make the latter somewhat irrelevant. The a7SIII will come first, I think. Of course it is always possible that an a7IV might come first, but if it does it will be pretty similar to the a7III, and that seems somewhat pointless to me unless the a7SIII is delayed even further than we expect. They had better be using a more modern processor in the P950; the P1000's video is held back by the old processor in it (they appear to use older parts in their Coolpix line, unlike other manufacturers with comparable products). I wish they had used the Expeed 6 in the P1000, rather than the old Expeed 4A they actually used. Why would you use a 5 year old processor in your latest products? The mind boggles. Are things really that tight at Nikon?
  8. 20 mpixel RAW is more like 35-40 MB, assuming 16 bits of data per pixel. The camera will be writing to the card while filling the buffer, but at some point the buffer will fill and the frame rate will drop like a stone. That apparently happens at around 1000 frames. You don't need a full 1000 frames' worth of RAM; some lesser amount will be enough, since data is being sent off to the card while the camera is still collecting it. The cards may be able to sustain those write speeds, but that does NOT mean the camera can actually deliver data at that rate. The way it works is that raw data is written into the buffer as a data stream, it then undergoes some processing and rearrangement into a form that can be saved, after which it is written to the card. The bottleneck is usually that middle step, so even though you might have a superfast card, you are still limited by the processing power of the camera and the physical size of its buffer.
  9. The specs say the buffer is 1000 frames. That is a lot, but not unlimited. It corresponds to ~50 seconds at 20 fps, which is more than you will ever need shooting stills. After that you will be limited by whatever the file write rate is. To get unlimited raw stills at 20 fps you would need a card write speed of ~800 MB/s, and somehow I rather doubt that the camera is capable of that.
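
A quick back-of-envelope check of those figures, under the same assumptions as above (20 mpixel frames stored as 16 bit words, 20 fps, a 1000 frame buffer):

```python
# Verifying the ~50 s burst and ~800 MB/s sustained-write figures.
frame_mb = 20_000_000 * 2 / 1e6      # ~40 MB per frame at 16 bits/pixel
burst_s = 1000 / 20                  # ~50 s before hitting the buffer wall
sustained = frame_mb * 20            # ~800 MB/s needed for unlimited stills
print(frame_mb, burst_s, sustained)  # 40.0 MB, 50.0 s, 800.0 MB/s
```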
  10. That is probably because an a7IV is on the way, likely mid 2020. I expect it will show up a few months after the a7SIII.
  11. Probably because the amount of data the camera has to handle is too large. They don't do these things for no reason; it is usually because of limitations in the hardware. With stills, the raw data goes straight into the buffer, so it does not impose a resource drain alongside DPAF (when the buffer is full the fps drops by a lot). With video there is a constant requirement for compression (there is a 1:2 compression ratio for RAW, as per Androidlad's specs), so there will be a difference.
  12. Since they want it to be a flagship camera there is no point in releasing a half-assed upgrade just because. The S models do not sell in large quantities, and never have, so an upgrade has to be significant compared to the competition if it is to sell at all. The basic problem Sony have is that in order to get that big competitive upgrade they are going to need a new processor, or at least some way of wringing more performance out of their existing processors. Until they have that solution in place they are probably not going to upgrade the camera. In a way it is similar to the basic problem Canon had with their cameras for the longest while, where the video encoders in their Digic processors were not competitive, resulting in them not being able to provide video performance comparable to companies like Panasonic and Sony for many years. Which of course caused the mistaken belief in consumers' eyes that they were deliberately "crippling" their cameras. Nothing could be further from the truth - no company deliberately makes products inferior to their competitors' versions in a particular market space - there is always a reason, be it inadequate technology or cost savings.
  13. The 48 mpx sensors are older and consequently probably perform less well. Resolution is always important. You can make an image softer in post if that is what you want, but you can't recover resolution that has been dumped due to the sensor design. The "sharpness" setting in cameras is simply a debayering parameter. With 4 separate inputs per pixel you can potentially calculate the true color of a pixel more accurately after debayering than would otherwise be possible, hence less color error at edges (in other words, less halo effect), which in turn increases the effective color resolution of the sensor. You would not need to turn down sharpening in camera as much to avoid color artifacts at edges - basically a better raw image to work with than conventional sensors give you.
  14. 48 mpx will require a lot of processing, which will translate into heat, which will translate into no 4K120. The sensor he listed has a quad Bayer structure, which means that pixels are arranged in groups of 4 under same color filters, unlike the conventional Bayer filter. So, even though it is a 60 mpx sensor, it acts like a 15 mpx sensor because of how the filter structure is arranged. No, it has groups of 4 pixels that behave like 1 pixel when it comes to gathering light. Debayering would not work with that system unless it treated the sensor as a 15 mpx sensor, so your stills would be 15 mpx resolution. It is basically a modified a7R4 sensor with a different Bayer filter arrangement so that it acts like a 15 mpx sensor. The sensor readout would be pixel binned (which you can do without problems because the binned pixels are all the same color) and then processed as a 15 mpx image using normal debayering. Because it is based off the existing sensor (just with a different filter) it comes out as 15 mpx rather than something else like 12 mpx (which would require the design of a completely new sensor, increasing cost).
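
A small numpy sketch of the binning described above, assuming a quad Bayer mosaic where each 2x2 block sits under a single color filter (toy data, not an actual sensor readout):

```python
import numpy as np

# Quad Bayer binning: each 2x2 block of the mosaic shares one color filter,
# so its four values can be averaged into one pixel without mixing colors.
# A 60 mpx quad Bayer readout collapses to a 15 mpx ordinary Bayer mosaic,
# which is then debayered as usual.

def bin_quad_bayer(raw: np.ndarray) -> np.ndarray:
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    # Average each 2x2 same-color block into one pixel.
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 mosaic; each 2x2 block stands in for one color site.
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
binned = bin_quad_bayer(raw)  # 2x2 result, one value per former 2x2 block
print(binned)
```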
  15. Unless it is a connection that is nearing the end of its working life (which is probably what is happening in your case, hence why it works in some devices and not others), the problem is software related. And if it is software related, it is absolutely the fault of the OS.