
Waynes

Members
  • Content Count

    10
  • Joined

  • Last visited

About Waynes

  • Rank
    Member


  1. What happened to the second camera pair, front and back, that they talked about? The website picture last year showed one pair of cameras front and back, like the demo. That's what is needed to reduce deficiencies in 3D filming.
  2. I'm trying to find a Sigma contact form to ask what fps and live modes the HDMI output has, and also to suggest a firmware update to output unaltered video over HDMI, downscaled to some standard mode, plus binned resolutions for higher frame rates. I imagine you should get at least 720p24 binned.
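The binning idea above can be sketched with some arithmetic. This is a minimal illustration of why binning raises frame rates, assuming a hypothetical full-resolution sensor and a fixed readout bandwidth; the numbers are made up, not Sigma specifications.

```python
# Illustrative only: assumed sensor size and readout budget, not Sigma specs.
FULL_W, FULL_H = 5424, 3616          # hypothetical full-resolution sensor
PIXEL_BUDGET = FULL_W * FULL_H * 4   # assume readout of ~4 full frames/s

def binned_mode(bin_factor):
    """Output size and frame rate with bin_factor x bin_factor binning.

    Halving width and height quarters the pixels per frame, so the same
    readout budget covers four times as many frames per second.
    """
    w, h = FULL_W // bin_factor, FULL_H // bin_factor
    fps = PIXEL_BUDGET / (w * h)
    return w, h, fps

for b in (1, 2, 4):
    w, h, fps = binned_mode(b)
    print(f"{b}x{b} binning -> {w}x{h} @ ~{fps:.0f} fps")
```

Under these assumptions, 2x2 binning quadruples the frame rate while keeping the full field of view, which is why a binned 720p24 mode seems plausible even when the full-resolution readout is slow.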
  3. I take it back. News of a Foveon video sensor patent: http://thenewcamera.com/sigma-patent-new-foveon-sensor/
  4. Is there a hack to overclock the sensor to output video rates over HDMI at 720p, 1080p, or UHD? It would be worth buying a field recorder to record that.
  5. Thanks. I have since read that they are preparing the next version of the camera, with good features.
  6. I have skimmed through the article. I was looking for a list of vertical color filter sensor makers. In the past I've found one Russian firm, Canon, and Siemens may have had a mobile sensor with licensed Foveon technology. If anybody knows of a list, please let me know.

Now, there have been some issues with the Foveon. Apart from the reds getting noisy and dim, some of the color meant for the deeper layers gets lost as the light passes through, resulting in that noise and imbalance. They compensate for this, but the layers themselves are artificial cutoffs, not how the eye's overlapping responses work. These are potential routes for inaccuracy. You really need at least 5 layers to get it more accurate. Now, the cutoff is likely soft enough to give you some overlap, but there are two issues: accurately shaping each layer's response to human vision is good, but recording values even more accurately across the color ranges is better, so software can get hold of things more accurately. Foveon has also gone to a combined pixel in one color layer, something I don't want; it is the 4:2:0 of layering, and I don't see how it could be as accurate.

I have waited decades for video performance from these chips, as have a number of people, but it didn't come through and the market turned down. Good video performance was going to be a seller of cameras; it was just so poor back then, which was even more reason to buy an X3 with good video performance back then. There actually was a Foveon chip years ago that seemed able to pull 720p24 off the sensor, which people could have jumped at. But the issue for Foveon is that Sony got and bought vastly superior sensor technology outside of X3 itself (and is supposed to be developing X3-like technology). Canon also has technology. Their best bet is to partner with a company like the one that bought Aptina, which has a lot of cross-licensed Sony technology, or with Samsung or Red.

Then they could do sensors with updated X3/X5+ technology, with very good low light and HDR, for Sigma cameras, high-end security cameras, phones, broadcast pro cameras, other video cameras, and some pocket cameras. Frankly I want an x7 (UV, 3 primary, two complementary, and IR); very useful for a range of things. But for cinema, an x5 to x7v to better match the human eye (x7v: blue, a spread across the overlaps with green, and red). Even a Full HD version would be useful, but realistically 4K. The data stream would be mixed down to 4:4:4 to keep the data rate down, or even 4:2:2 or 4:2:0 for consumer use.

We could basically do away with the debayering and color grading steps in ENG and broadcast, due to the increased accuracy over Bayer or film, with simple adjustments and automatic color-look settings. There was a recent article, maybe on No Film School, about robot colourists doing a good job. Outside of cinema, automatic adjustment to a desired look should be sufficient. The issue with Bayer is that I see it produce some weird results in broadcast, which means an undesirable grading step, or some automatic approximation. There was a time you would get shamed for even mentioning using a single chip in such environments (maybe because complementary-filter single-chip consumer cameras were so lousy). But Bayer is better. What is needed in broadcast is no-hassle, accurate live results. That is worth money.
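The 4:4:4 / 4:2:2 / 4:2:0 trade-off mentioned above is just arithmetic on sample counts. Here is a minimal sketch of the relative uncompressed data rates, assuming a hypothetical 4K DCI, 24 fps, 10-bit stream; the frame size, rate, and depth are illustrative choices, not figures from the post.

```python
# Assumed stream parameters (illustrative, not from any specific camera):
W, H, FPS, BITS = 4096, 2160, 24, 10

def mbits_per_sec(scheme):
    """Uncompressed data rate in Mb/s for a given chroma subsampling scheme.

    Samples per pixel: 4:4:4 stores full chroma (3 samples/pixel),
    4:2:2 halves horizontal chroma (2), 4:2:0 halves both axes (1.5).
    """
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[scheme]
    return W * H * FPS * BITS * samples_per_pixel / 1e6

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(f"{s}: ~{mbits_per_sec(s):.0f} Mb/s uncompressed")
```

So going from 4:4:4 to 4:2:0 halves the raw data rate, which is the saving the "consumer" option above is trading color resolution for.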
  7. I too wanted to buy this, but from what I read last week, it's not real. I imagine we will see something like this from Sony running at 4Kp60 or so. Now, the original JVC 4K camcorder was UHDp60. The old Nikon J1 used a similar sensor that could do the same. The BM Micro Studio uses a newer version of the sensor, and I imagine the Studio does the same and runs at that frame rate. Newer versions again were announced years ago with better and higher resolutions. Even Panasonic has smaller-than-GH5 cameras running that frame rate. These Aptina-based sensors, and Sony's, use a special low-heat sensor technology, though I don't know how far it goes; Aptina did use it in high-speed camera sensors.

Looking at the processing side: the Yi 4K+ uses a small, old 4Kp60 Sony sensor with around 11 stops native dynamic range, I think. There is a mod case for it that adds a C lens mount. It is extremely small, and they have had the Ambarella chipset in it running over 240 Mb/s h264. So that puts the GH5 in perspective. Even Ambarella's top chipset on their site is probably 1-2 watts max while working at the specified data rate doing 8K+ video recording. As someone said, we should not put faith in marketing decisions.

I'm going to point something out: on larger sensors, I noticed less pixel rate and less maximum bit depth at the higher resolutions. I imagine they have less noise and high dynamic range, but maybe they are more limited in moving data around.
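The "pixel rate" comparison in the last paragraph can be made concrete: pixel throughput (resolution times frame rate) is a rough proxy for the readout and processing load a sensor mode imposes. A small sketch, with illustrative modes of my choosing rather than measurements of the cameras named above:

```python
# Illustrative video modes (width, height, fps); not measured camera specs.
modes = {
    "UHD p60 (3840x2160)":    (3840, 2160, 60),
    "4K DCI p60 (4096x2160)": (4096, 2160, 60),
    "1080p240 (1920x1080)":   (1920, 1080, 240),
}

def mpix_per_sec(w, h, fps):
    """Pixel throughput in megapixels per second for a video mode."""
    return w * h * fps / 1e6

for name, (w, h, fps) in modes.items():
    print(f"{name}: ~{mpix_per_sec(w, h, fps):.0f} MP/s")
```

Comparing modes this way shows why a high-fps 1080p mode can stress a pipeline as much as UHD p60 does, and why larger-sensor cameras may cap fps or bit depth at their highest resolutions.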
  8. Hi guys. I've been reading a bit of the end part of this topic about stops. I was actually out looking for 16-stop camera heads and this popped up on a Google search. 16 stops, if only; but at Panasonic pricing I'd rather pay $500 for that than an extra $500 on top of the GH5.

From years ago, the Andromeda recorder project: Juan, the engineer, found that recording at a higher bit depth revealed extra dynamic range. The 8-bit recordings were simply clipped versions of the 10-bit ones, I think. But of course, if you are spreading the same dynamic range between your first and last values in one bit range and the next, then you are not going to get extra, as I think was indicated earlier. However, as so much is noise, how does that noise hold out in bright scenes? You can shift the placement of dynamic range to the highlights. As indicated, lower noise may have revealed more stops, but was the GH5's old figure usable stops, or did it include noise? If usable, then you would have more than 12 stops. But what are the stops, really?

New sensors: the Sony Handycams had a nice creamy smooth image late last year. Without looking into tests, I think it is likely from extra latitude, like the Red Helium technology has. Sony and Aptina did a cross-license deal years ago on all their sensor technologies, so Sony could get hold of their high-speed low-heat sensor technology. I have been watching Red's and Sony's performance advances; I think Red derives from the Aptina camp. The new Panasonic sensor, is it a Sony-derived technology? Does this camera have an HDR video mode? HDR video modes can give a few extra quality stops, according to Red. You categorically don't need an HDR screen to benefit from expanded latitude capture; HDR screens are just there to render the scene with more visual range. Without one, you can still see a more expanded, natural look on a normal screen. But as you go beyond 16.5 stops, the footage can look more murky without a screen with high enough dynamic range.

There are car cameras out there with 20 or more stops of HDR video capture. I wonder if there will be 16-stop 4K camera heads out there this year?
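On "what are the stops, really": one common engineering definition is the base-2 log of the ratio between full-well capacity and the read-noise floor, which is why lower noise directly buys more stops. A minimal sketch, with made-up electron counts that are not measurements of any camera named above:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops.

    DR = log2(full well capacity / read noise floor). Each halving of read
    noise adds one stop; "usable" stops are fewer once noise quality is
    judged subjectively.
    """
    return math.log2(full_well_e / read_noise_e)

# Illustrative values only: 40,000 e- full well, 2.5 e- read noise.
print(f"~{dynamic_range_stops(40000, 2.5):.1f} stops")
```

This also shows why an N-bit recording can cap stops: an ideal N-bit code can only distinguish about N stops of linear range, so a 10-bit capture clipped to 8 bits loses range exactly as described above.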
  9. Hi, I came across an NX500 body and am interested in getting better video out of it. I've spent some time reading about the hacks, and have some ideas. I realise the hack developers visit here, so maybe they might like to look at this?

It could be that the camera cannot process data from the sensor fast enough to do 4K/UHD p50/60. But sensors can often be windowed (the crop factor) to read an area out faster, among other techniques. So I wonder if up to a standard 2.39:1 widescreen cinema ratio can be read out faster, closer to p50 or p48? My thinking is that if p50/60 just pops over a threshold and gets rejected by the camera, a lower-resolution window may work. It would still look normal, with some black area above and beneath the picture. https://en.m.wikipedia.org/wiki/Aspect_ratio_(image)

Along those lines, some other things occur to me: the camera might be set up to process standard video modes better, and/or even divisions of its maximum resolution. So taking the maximum width and height of the sensor and halving them, for example, might be more palatable and could result in the binning hardware being used, giving higher frame rates. For standard resolutions, various resolutions are used in 4K and in UHD (3840x2160), and divisions of them may help, even of 8K/SHD. Multiples of 720p and SD modes are also worth trying. It's possible the internal firmware that has been mentioned only presents standard options, in order to protect the camera from being damaged by things such as overclocking the sensor.

Now, I'm also wondering if 10-bit or 4:2:2 video is possible, and whether the lovely Rec.2020 color space is supported in recordings, or DCI-P3? Thanks.
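The windowing idea above can be sketched numerically. Assuming a rolling-shutter sensor where frame time scales with the number of rows read (fixed line time), a 2.39:1 window of a 16:9 frame needs fewer rows and so could, in principle, be read out faster. The UHD readout and p30 baseline below are assumptions for illustration, not NX500 measurements.

```python
# Assumptions (illustrative, not NX500 measurements):
FULL_W, FULL_H = 3840, 2160   # assume a UHD-sized readout region
BASE_FPS = 30                 # assume the camera manages UHD p30

def windowed_fps(aspect):
    """Rows read and achievable fps for a full-width window at `aspect`.

    With a fixed per-row readout time, reading fewer rows shortens the
    frame time proportionally, raising the maximum frame rate.
    """
    rows = round(FULL_W / aspect)           # rows needed at full width
    fps = BASE_FPS * FULL_H / rows          # fewer rows -> higher fps
    return rows, fps

rows, fps = windowed_fps(2.39)
print(f"3840x{rows} (2.39:1) -> ~{fps:.1f} fps")
```

Under these assumptions a 2.39:1 window gets from p30 to roughly p40, not quite p48/p50, so binning or a smaller window would likely still be needed; but it shows the direction the row-count arithmetic pushes.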