Everything posted by Mokara

  1. Why? Cell phones have higher pixel densities than the sensors used in Micro 4/3 cameras. The only thing that affects practicality is the power of the processor being used. If the most advanced processors currently available have a hard time doing 4K60p, then they probably can't do 8K30p. That is most likely what the Panasonic manager was saying, but it got lost in translation and comprehension. Does Sharp have a processor capable of encoding an 8K30p stream in H.264 or H.265? Likely not, so if they do make such a camera it would probably use a relatively uncompressed encoding scheme. The picture of their camera shows a small body; it is doubtful that thing will be capable of dealing with the heat from a high-efficiency codec at 8K.
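The processing-load point can be made concrete with a quick back-of-the-envelope pixel-throughput calculation (frame sizes are the standard UHD figures; encoder work scales roughly with pixels per second, all codec settings being equal):

```python
# Rough pixel-throughput comparison: 8K30p vs 4K60p.
# Standard UHD frame dimensions; the 2x ratio is the point,
# not the exact megapixel figures.

def pixels_per_second(width, height, fps):
    return width * height * fps

uhd_4k_60 = pixels_per_second(3840, 2160, 60)
uhd_8k_30 = pixels_per_second(7680, 4320, 30)

print(f"4K60p: {uhd_4k_60 / 1e6:.0f} Mpx/s")   # ~498 Mpx/s
print(f"8K30p: {uhd_8k_30 / 1e6:.0f} Mpx/s")   # ~995 Mpx/s
print(f"ratio: {uhd_8k_30 / uhd_4k_60:.1f}x")  # 2.0x
```

So a processor that is already at its limit at 4K60p would need roughly double the throughput for 8K30p, before even counting the extra motion-search cost of an efficient codec at that resolution.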
  2. Well, if he tries to cross the border with them they will be confiscated and he will be deported.
  3. Card speeds are usually read speeds; write speeds vary considerably depending on brand, model and capacity. Cards with smaller capacities often have much lower write speeds than the higher-capacity versions, so you need to be aware of that. The Kelvin white balance thing is likely a computational glitch. I would guess that the white balance adjustment values used internally by the camera are not linear with respect to exposure, so a sudden change might throw it off briefly.
  4. Longer if you have the grip installed. If you are using the 16-50mm lens you need the grip, otherwise the lens hits the bridge of the tripod mount, lol. Unless you are using some sort of riser to get the necessary clearance.
  5. Shouldn't contrast be adjusted according to how flat the actual lighting is?
  6. Well, if you go north of the border to Canada, gun regulations are very strict and gun violence is a tiny fraction of what it is in the US. The same is true for pretty much every other country with strict gun control. So sorry, but your theory is flat out wrong. Gun violence stems directly from ready availability of guns, especially guns which are owned for frivolous reasons. This should be obvious to anyone with half a brain, so I am assuming it is obvious to you as well, since you have taken the time to attack the intelligence of "mouth breathers" on other forums who dared to disagree with you.
  7. Usually because it requires additional processing and the processors are already at their limits, so some compromise has to be made. The extent varies depending on how much oversampling is being done. 6K is the minimum needed to get an approximation of true 4K, and you will need more than that if you have EIS as well. Anything less than 6K RAW will give you image degradation after debayering. An 8K image would let you get a close-to-true 4K final image and still have enough headroom for EIS. But, as I said, you need the processor headroom to account for all of the extra processing that would be needed.
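The "6K minimum for true 4K" claim can be sketched by counting per-channel samples on a Bayer sensor, where half the photosites are green and a quarter each are red and blue (illustrative arithmetic only, not a claim about any specific camera's demosaicing pipeline):

```python
# Per-channel sample counts on a Bayer sensor: 1/2 of the sites
# are green, 1/4 each red and blue. Compare how many samples of
# each colour a 4K-Bayer vs a 6K-Bayer capture contributes per
# pixel of a 4K output frame.

def bayer_samples(width, height):
    total = width * height
    return {"G": total // 2, "R": total // 4, "B": total // 4}

out_pixels = 3840 * 2160  # 4K output frame

for name, (w, h) in {"4K Bayer": (3840, 2160),
                     "6K Bayer": (5760, 3240)}.items():
    s = bayer_samples(w, h)
    print(name, {c: round(n / out_pixels, 2) for c, n in s.items()})
# 4K Bayer: only 0.25 red/blue samples per output pixel
# 6K Bayer: ~0.56 red/blue samples per output pixel
```

At native 4K the red and blue channels are sampled at only a quarter of the output resolution and have to be interpolated; oversampling at 6K or more pushes the real per-channel sample count much closer to one per output pixel.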
  8. That is because most MILCs don't oversample enough or don't oversample at all, so some degradation happens. The NX1 did a pretty decent job at it though.
  9. As long as you have minimal rolling shutter and an oversampled sensor, using EIS should be superior to IBIS. The weird distortions you sometimes get with IBIS are likely due to the sensor rolling in the focal plane. You get similar effects with lens stabilization, which also results in the sensor rolling in the focal plane relative to the lens elements. How severe those artefacts are will depend on how aggressive the mechanical stabilization is.
  10. Log profiles involve collecting a larger data set than normal and then conforming it to fit a smaller data space using an algorithm. That requires more processing than non-log, so it would not be surprising to see things like AF affected if the processor is operating near its limits.
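That "conform a larger data set into a smaller space" step can be sketched as a toy log transfer function, e.g. squeezing 12-bit linear sensor values into 10-bit code values. This is a generic illustration; real camera log curves (V-Log, S-Log3, etc.) use published, more elaborate formulas:

```python
import math

# Toy log encoding: map 12-bit linear values (0..4095) into
# 10-bit code values (0..1023), allocating more output codes
# to the shadows than a straight linear rescale would.

def log_encode(linear, in_max=4095, out_max=1023):
    x = linear / in_max                        # normalise to 0..1
    y = math.log1p(255 * x) / math.log1p(255)  # log curve, 0..1
    return round(y * out_max)

for v in (0, 64, 256, 1024, 4095):
    print(v, "->", log_encode(v))
# shadows get far more output codes per input step than highlights
```

The point for the post above: every pixel goes through a transform like this (plus the larger capture-side data handling), which is extra work on top of the normal pipeline.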
  11. Because there is no science involved. It involves making a bunch of subjective modifications, usually by people who have zero understanding of what the underlying data actually is. If it is not measurable and definable, it is not science, it is guessing. You might be good at guessing, but it is still guessing. It is sort of like a group of sheepherders from 2000 years ago talking about DNA modification, lol. They know that by selective breeding, through trial and error, they can get different properties in their animals, but they know nothing about the DNA basis underlying that (although that lack of knowledge won't stop them from explaining how they are doing DNA "science"). As soon as you see people throwing terms like "magic", "special sauce" and "undefinable quality" about, you know they have no idea what they are doing. It is pure subjectivity; there is no science involved, at least in what they are doing. They are the modern-day versions of those sheepherders ;) Well, actually, if you know the chromatic profile of each dye element used in a particular camera, you should be able to produce a correction table that will largely reconcile the RAW output of different cameras. There will still be some differences, since computation might be required to figure out more or less where in that response curve a particular cell sits, but it is possible. Not by anyone reading this board though. It is the sort of thing the NLE producers would have to do (those that work with RAW footage, that is). If they don't (or do a crappy job of it), I suppose a user could do it themselves manually in post, but it would require exceptional skill and a lot of trial and error to get right, something most users are not prepared to do.
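The "correction table" idea is commonly implemented as a per-camera 3x3 matrix applied to the demosaiced RGB data. A minimal sketch of the mechanics; the matrix values here are made up for illustration and do not come from any real camera profile:

```python
# Apply a 3x3 colour correction matrix to an RGB triple.
# Such a matrix maps one camera's native RGB response toward a
# common reference space; the values below are illustrative only.

CCM = [
    [ 1.20, -0.15, -0.05],
    [-0.10,  1.25, -0.15],
    [ 0.00, -0.20,  1.20],
]

def apply_ccm(rgb, matrix=CCM):
    return tuple(
        sum(matrix[i][j] * rgb[j] for j in range(3))
        for i in range(3)
    )

# Each row sums to 1, so a neutral grey stays (near-)neutral.
print(apply_ccm((0.5, 0.5, 0.5)))
```

A single matrix can only reconcile the linear part of the response; as the post notes, nonlinearities in individual cells would still leave residual differences between cameras.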
  12. Ya, but Canon and Nikon people do the same thing. Roger is probably in one of those camps, so when fanboys from those camps do it, they are being "reasonable". Look at any review of Sony equipment and you will see those folk chirping in as well, claiming that whatever they happen to use is "better" even when objectively it is not. You see the same thing here too, when people start talking about things like "color science", "undefinable qualities", "video like" or "organic" when discussing whatever equipment they like to use. People who talk nonsense like that deserve to be made fun of!!
  13. It is the same folk who own Canon gear, they are by far the worst.
  14. Why not just shoot with a H.265 capable camera such as an NX1 and then record the same scene with a Ninja recorder? You would be able to compare the two directly then without any intermediate steps.
  15. There are minor differences due to the exact composition of the pigments used in the filters, but otherwise it is correct, assuming you are using real RAW and not preprocessed data. Those minor differences should be correctable if you know what you are doing. I have a suspicion that a lot of people are calling debayered but otherwise unprocessed data "RAW" when it is not. If it is undefinable then it is not a quality. It is like saying that people born into the aristocracy have an "undefinable quality" when in actual fact they are just like everyone else, but happen to have been born to parents of some historical significance. The snob effect. That is irrelevant; a hybrid is still a hybrid. The OP is talking about how those cameras can be used. The point was that hybrids can be used for feature production, as evidenced by examples.
  16. If you are shooting in RAW the camera itself is largely irrelevant as far as colors are concerned. The color you get is the color you adjust for. As another saying goes, poor workmen blame their tools.
  17. Overheating comes from the processor, not the sensor. The NX cameras did not overheat because they had a thermally efficient, state-of-the-art (at the time) processor that could handle the workloads imposed on it by the compression. Most other cameras, especially those from Canon, have more primitive processors which can't handle the workload without melting, which is why all sorts of compromises have to be made. When you buy a Canon you get crap, outdated tech. They try to deflect from the deficiency by encouraging nonsense like "color science" and other such unquantifiable things that people talk about but can't really define. Good old-fashioned marketing: convince the sheep that the slaughterhouse is cool and just lead them in.
  18. More specifically, it is an idea propagated by people who have no clue how business works. Heat management is not an issue for BM because they have a "feature" where they do little or no compression. No compression means no pressure on the processor, which is where the heat comes from. So, in actual fact, the BM "feature" just reflects the bargain-basement design of their cameras. If you are doing little or no compression, you can throw in a cheap, underpowered processor and call it a done deal, then put up some smoke to fool the noobs and sell your cost cutting as a "superior feature", pulling the wool over the eyes of people who don't know any better. Marketing to the ignorant at its finest. Convince them that less is more.
  19. Easier to build your own system; nowadays drivers are not an issue unless you get stuff from some obscure manufacturer. It will cost less and you will get exactly what you want. Lots of nice-looking cases are available now.
  20. You are going to get much better color accuracy from an 8K image downsampled to 2K than from straight 2K, simply because you are getting more color information per virtual pixel. A virtual pixel sampled from multiple red, green and blue subpixels is going to be more accurate than a pixel sampled in only one of the three colors. What more is there to explain, if that is not obvious? It is not going to be fully accurate, but it will be pretty close. If you are not getting the colors you want, it is more down to your post-processing adjustments than anything else. It has nothing to do with preventing "cannibalization". The differences in the product lineups are due to processor capabilities. If you have a large video-specific camera, it is much easier to keep the processor cool during shooting. Cooler processors mean higher clock speeds and more capability, and hence more functions/IQ. MILCs will always be second class to dedicated video equipment for this reason; it has nothing to do with market segmentation, outside of the fact that the equipment is physically different so that it can be used optimally in one role or the other. The whole cannibalization nonsense is a myth started by consumers who want what big pro cameras can do without understanding exactly why those cameras can do it while their little MILC can't. So they stamp their feet and pout, but really they just have a simplistic view of the technology involved.
  21. Having a sensor that can do these things is one thing; having a processor capable of handling the data flow is quite another. You will get more accurate color from downsampling because the composite pixel is based on more information than a single physical pixel. It will also increase the bit depth of the composite pixel (assuming the original data was 8 bit, it would convert it to 10 bit). That does not mean increased dynamic range, however, but it would result in more accurate color and luminosity. Shooting in 8K with a Bayer filter in place means that you should be able to resolve true color at 4K resolution (assuming you are using a RAW feed, of course), since each composite pixel would receive input from two green pixels and one red and one blue pixel.
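The bit-depth claim in the post above is simple arithmetic: summing a 2x2 block of 8-bit samples produces a value that needs 10 bits to represent. A one-liner makes the point:

```python
# Summing a 2x2 block of 8-bit samples (0..255 each) yields a
# value in 0..1020, which needs 10 bits: a 4:1 downsample can
# carry 2 extra bits of tonal precision into the smaller frame.

max_8bit = 255
block_sum = 4 * max_8bit            # 1020, the largest possible sum
bits_needed = block_sum.bit_length()
print(bits_needed)                  # 10
```

As the post says, this is extra precision in the existing range, not extra dynamic range: the brightest and darkest values the sensor can capture are unchanged; only the number of distinguishable steps between them grows.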
  22. All YouTube does in generating the lower-res versions is combine pixels, so 4 pixels become one going from 4K to FHD. They are not going to spend extra computational resources to do anything else. If you have a deliberately soft image, it will lose that when downscaled. More problematic is that it will be re-encoded after that, which will introduce a whole bunch of artifacts. For best results on a 1080p panel you should view the 4K version and let your computer do the down-conversion; that way you get the pixel remapping only, without the re-encoding artifacts, and a much better image than if you viewed the 1080p version.
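A minimal sketch of the 2x2 pixel-averaging downscale described above (a pure-Python box filter over a nested list, standing in for whatever scaler YouTube actually uses, which is not public):

```python
# 2x2 box downscale: each output pixel is the average of a 2x2
# block of input pixels (e.g. 4K -> FHD halves each dimension).
# Assumes even dimensions and a single-channel image for brevity.

def box_downscale(img):
    h, w = len(img), len(img[0])
    return [
        [(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

frame = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [10, 10, 20, 20],
         [10, 10, 20, 20]]
print(box_downscale(frame))  # [[0, 255], [10, 20]]
```

Note how any fine texture inside a 2x2 block is averaged away, which is why a deliberately soft or grain-heavy image loses its character when downscaled, before the re-encode even touches it.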
  23. Well, what you are describing is using other people's work to promote your business or public interests; the fact that there is an intermediate guy (the director) does not make it OK. He does not have the right to assign the content over to you; only the original music creator does, and he/she does not know you from a bar of soap. As far as the people who actually own the content are concerned, you are stealing their stuff, and they are 100% correct. Having a third party as an intermediary does not absolve you of responsibility. In the scenario you described, the director basically has a license to use the content to promote their work, but they do NOT have the right to sublicense that work to a third party without the express written permission of the original owner. It is the same thing as buying a song from a download service. What you buy is not the song but a license to store and play back that song for personal private use; you are not allowed to then turn around and use that bought song for commercial or public use. For that you will need to buy a different, much more expensive license. If you use the song for anything other than personal private use, then you are violating the terms of the license and are in essence stealing the work, even though you bought it for use in a limited way. That is exactly what stealing creative works is.