
Syme

Members
  • Posts

    42
  • Joined

  • Last visited

Everything posted by Syme

  1. According to this Anandtech article, Samsung uses something they call the SRP (Samsung Re-configurable Processor) in the DRIMe-5 SoC. They call it a CGRA (Coarse-Grained Re-configurable Architecture), which probably means it is less flexible than an FPGA but more flexible than traditional IP blocks. I have no idea what their source is, but Anandtech is known for being very reliable. Details about the SRP online are very sparse, but there are a few research papers about it being used as an experimental GPU. Apparently it is used in the last generation of Exynos SoCs as an audio processor (not in their latest chips, though). The SRP is definitely not being used as the GPU in the DRIMe-5, since the kernel source shows it uses a conventional GPU just like in the Exynos SoC it is based on. It also isn't being used for HEVC encoding, since Samsung has special-purpose hardware blocks to do that. As to what they are using it for, I have no idea at this point.
  2. Out of curiosity, how did you find the clock speed?
  3. A few points: USB 3.0 tops out around 350MB/s in real applications, even with the best optimizations. RAW video just means unprocessed data from the sensor. Pixel binning or line skipping modes are still "true" RAW. Currently all evidence points to the sensor maxing out at 30fps for a full 16:9 crop. Unless someone has hard evidence to the contrary, we should really assume that they use the same column-parallel CMOS sensor architecture known to be used by Sony, Canon, Aptina, CMOSIS, ON-Semiconductor, Kodak/Truesense, etc. In that case the readout speed is inversely proportional to the number of lines, so the maximum 1080p framerate is ~120fps with only about 910 lines of resolution. Hacking will not change that. Overclocking might be possible, but not by 100%. And a few questions: Has anyone actually found the part of the firmware that controls the sensor and video processor during video recording? In particular, has anyone seen what the actual sensor modes are? Also, can anyone confirm if the raw data from the sensor ever appears in a buffer accessible to the CPU? What interface does the camera app use to record video? Is it libmmf* or something else?
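The "readout speed is inversely proportional to the number of lines" claim above can be sanity-checked in a couple of lines. The figures here (a 6480-wide sensor, so ~3645 lines in a full-width 16:9 crop, reading out at 30fps) are my assumptions from this thread, not official specs:

```python
# Back-of-envelope check: if readout time scales with the number of lines
# read, how fast can a ~910-line binned 1080p mode go?
# Assumed (from this thread): full-width 16:9 crop = ~3645 lines at 30 fps.

CROP_LINES_16_9 = 3645
CROP_FPS = 30.0

line_time = 1.0 / (CROP_FPS * CROP_LINES_16_9)   # seconds per line

def max_fps(lines):
    """Max frame rate if readout time is proportional to lines read."""
    return 1.0 / (line_time * lines)

print(round(max_fps(910)))   # -> 120, matching the ~120 fps 1080p ceiling
```

Same arithmetic as in the post: 30 × 3645 / 910 ≈ 120fps, with only ~910 real lines of vertical resolution.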
  4. Nice work Otto! Looks like I was missing some major components when I was investigating the camera app. I was focusing on libmmf-camcorder, since di-camera-app definitely passes parameters like bitrate, resolution, time limit, etc. to that library. However after reading your posts on the dfmstool, I took a look at the strings in di-camera-app again, and it definitely includes the dfmsd commands and arguments. I'm rather confused as to how the two interact. I'll take a look at where those strings are used in the binary when I have the time. Hopefully there will be some way to add the 1440p mode back into the GUI. The simplest way might be to replace one of the existing video modes in the menu with the 1440p mode. To do so someone will have to figure out exactly how the camera app calls dfms and mmf-camcorder in order to figure out the right constants and strings to change. That seems to be beyond me at least for the moment. Otto K: are you planning on trying to install a modified firmware?
  5. Don't expect anything from me in the near future. I'm pretty swamped with homework and tests to study for at the moment. As much as I enjoy messing around with the NX1 firmware, I've got to remember my priorities...
  6. tl;dr: 2.5k mode may be lurking just under the surface. Not sure, but it looks promising. Did some more digging today and I'm pretty sure the valid resolutions on the NX1 and NX500 are defined at least in part by the file /usr/etc/mmfw_camcorder_dev_video_pri.ini. It can be found in both the rootfs filesystem and the opt filesystem images (under etc in opt). I found that a while back, but I assumed it couldn't be the right file since the defined name and comments in the file say it is for a Fujitsu cellphone camera. Furthermore, the stills resolutions listed are completely wrong for the NX1/500. However, I spent a while reading the source code for libmm-camcorder.so, and it reads the config values from those files. I'm still not sure how it determines the correct sensor and image processor parameters for a given mode, since there are several combinations that would produce each resolution, and the details aren't in the configuration files I have seen. The interesting part is that the one difference between mmfw_camcorder_dev_video_pri.ini on the NX1 and that same file on the NX500 is that the NX500 includes an additional "2560,1440" mode. That's good news, since it seems to indicate that they didn't bother to remove the 2.5k mode from the underlying libraries and instead just got rid of it in the camera app. For what it's worth, the "QHD" icon is still present in the resources folder for the camera app, so it looks like they weren't very thorough in getting rid of it.
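A quick way to diff those two .ini files for extra modes is to just scan for "W,H" resolution pairs. I don't know the exact key layout of the file, so this is a plain pattern scan rather than a proper section parser, and the sample strings below are illustrative, not real file contents:

```python
# Scan config text for resolution pairs like "2560,1440" and diff two files.
# The real .ini structure is unknown to me; this only looks for W,H tokens.
import re

def video_modes(ini_text: str):
    """Extract (width, height) pairs that look like video modes."""
    return sorted({(int(w), int(h))
                   for w, h in re.findall(r"(\d{3,4}),(\d{3,4})", ini_text)})

# Illustrative stand-ins for the two files' contents:
nx1   = video_modes("1920,1080 | 3840,2160 | 4096,2160")
nx500 = video_modes("1920,1080 | 2560,1440 | 3840,2160 | 4096,2160")

print(set(nx500) - set(nx1))   # -> {(2560, 1440)}, the extra QHD mode
```

In practice you would call video_modes() on the text of each camera's mmfw_camcorder_dev_video_pri.ini pulled from the unpacked rootfs images.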
  7. That was supposed to be 9us, not 9ms. 9ms would mean 0.03fps... Also I'm not even going to bother arguing with people who still think the NX1 can do a 240fps readout of every individual pixel. That's just not what I'm here to do.
  8. I just realized the Tizen git repository (https://review.tizen.org/git/) includes open-source code for libmm-camcorder, which seems to have all the same functions as libmmf-camcorder.so on the NX1. Not sure what the difference is. Looks like this is a goldmine for understanding what the camera app is doing. I'm glad I found it before I started digging into the libmmf-camcorder.so binary. There is also a repository for a default camera app, which doesn't seem to be the same as on the NX1, but if we're lucky they might be closely related.
  9. http://www.samsung.com/semiconductor/products/cmos-image-sensor/camera-cis/S5KVB2?ia=218 Says 25fps for the full sensor. That means about 9µs per line, which results in about 30ms minimum frame time for a 16:9 crop. Hence the 30fps maximum for UHD and the 30ms rolling shutter that was measured by both Cinema5D and some guy on DVXuser. That also lines up with the 1080p rolling shutter and maximum frame rate measured in various places. The predicted speed in the closest binning level to 720p is also about 200fps based on that time per line, which is what Cinema5D saw on the pre-production NX500. Continuous shooting at 15fps is also consistent with that readout speed. Yes, I know a Samsung marketing guy said otherwise in an interview, but I'm not convinced he really knew what he was talking about. I think it is more likely that after a long corporate game of "telephone" the actual numbers got garbled. It's the word of one marketing manager in an interview vs the official product information published by Samsung Semiconductor and all the specifications and measurements taken of the camera that match that number. The Nikon J5 is a good example of what it actually looks like when a fast sensor is limited by a slow processor. The sensor does over 5k at 60+fps, but the processor will only handle UHD at 15fps. However the rolling shutter is still excellent, since the first step of the image processing pipeline is to just dump the raw readout to RAM, which is more than fast enough. It can do continuous shooting at the full speed of the sensor since it doesn't use a mechanical shutter in that mode, and they didn't decide to limit it. I see no reason for Samsung to go to all the effort to build a CMOS sensor capable of 10x the clock speed they run it at in their flagship product, then lie about the actual capabilities on the product page.
Furthermore there is no CMOS circuit I have ever heard of that can be overclocked by 1000%, even under laboratory conditions (liquid nitrogen, etc), so I am somewhat skeptical that they ran the thing at 240fps in testing. The live view is derived from the raw sensor data. The question is where the boundary between the ARM cores and the special purpose hardware is in the image processing pipeline. Magic Lantern RAW video works because they found that Canon's live view mode puts the raw sensor data somewhere the general-purpose processor can access it before de-bayering it and sending it to the screen or to HDMI as RGB video. Nikon does it the same way, as shown by the Nikonhacker developers. I don't know how Samsung does it, but it is quite likely to be like Canon and Nikon.
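The per-line arithmetic in the post above can be written out explicitly. The 25fps full-sensor figure is from the S5KVB2 product page linked there; the 4320-line full height and 3645-line 16:9 crop are my assumptions from the sensor geometry discussed in this thread:

```python
# Derive per-line readout time from the official 25 fps full-sensor spec,
# then check it against the measured rolling shutter and frame-rate caps.
# Assumed geometry: 6480x4320 full sensor, ~3645 lines in a 16:9 crop.

FULL_LINES = 4320
FULL_FPS = 25.0

line_time = 1.0 / (FULL_FPS * FULL_LINES)        # seconds per line

crop_lines = 3645
rolling_shutter_ms = crop_lines * line_time * 1e3
crop_fps = 1.0 / (crop_lines * line_time)

binned_720_lines = FULL_LINES // 8               # nearest binning level to 720p
binned_fps = 1.0 / (binned_720_lines * line_time)

print(round(line_time * 1e6, 2))    # -> 9.26 us per line (the "9us" figure)
print(round(rolling_shutter_ms))    # ~34 ms, in the ballpark of the measured ~30 ms
print(round(crop_fps, 1))           # -> 29.6, i.e. the 30 fps UHD ceiling
print(round(binned_fps))            # -> 200, matching the ~200 fps seen near 720p
```

Everything here follows from one number (the 25fps full-sensor readout) plus the assumption that readout time scales linearly with lines read.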
  10. I did read otto k's posts. Currently I'm going about things from another angle, but there is definitely some interesting stuff there. Running scripts on the card would be the first step on the way to getting root. Alright, now time for some unfounded wild speculation =) As far as USB 3.0 RAW goes, it really depends on a lot. Keep in mind I have no idea how to actually do any of this; I'm just considering the capability of the hardware. First of all, USB 3.0 cannot achieve 5Gb/s in real life. The real maximum data transfer is somewhere around 3.6Gb/s after the overhead is removed. Even then, most implementations of USB 3.0 are a lot slower. Point Grey, a machine-vision camera company, did some extensive testing in which they found the maximum transfer rate with various computers and USB 3.0 cards, so if you want to know more, google that. Fortunately, with custom drivers the integrated USB 3.0 ports on modern Intel processors appear to be just barely fast enough for 12 bit 4k RAW at 24fps. However, the implementation of USB 3.0 on the NX1 isn't guaranteed to be fast enough, since not all USB 3.0 devices are. Someone would have to test. The other high-bandwidth external interface on the NX1 is HDMI, which has a maximum pixel clock of 340MHz. That would be enough for 6.5k 12 bit 24fps RAW video if it were possible to pack two 12 bit pixels into each RGB444 pixel like Apertus is trying on their cameras. Of course it is possible that the NX1 doesn't support the full bandwidth of HDMI 1.4, in which case the maximum rate we know it can handle is 3.40Gb/s for DCI 4k 422. That is less than the theoretical bandwidth of USB 3.0, but it is completely stable and guaranteed, unlike USB, and it would be enough for 5k at least. It would require a custom FPGA-based recorder to capture it (or support from Convergent Design or Atomos), but that's a lot easier than making a fully custom camera. There are some big fundamental issues hanging over either strategy for RAW video.
The first is whether or not the general purpose processor can actually access the raw sensor readout at full resolution. That isn't necessary for the normal operation of the camera, so there is no guarantee it's possible. If the general purpose ARM cores in the DRIMEv5 can't even access the raw data coming off the sensor, neither strategy would work. The other big question I can think of is what layout the data would be in, in memory, if the ARM cores can in fact access it. If it is packed tightly, it might be possible to just pass that buffer to the GPU and tell it to treat it as 24bpp RGB, or just copy it out the USB port as fast as possible. The wimpy little ARM cores are probably fast enough for that. On the other hand it might be aligned to 16bit boundaries for fast memory access to each pixel, in which case it would have to be packed tightly before transmitting it. I'm not sure if the general-purpose cores can do unaligned memory accesses or bit-packing operations fast enough (but again I'm no expert). Magic Lantern found that the processor in a Canon camera isn't fast enough to convert 14 bit to 12 bit at just 720p24. Actually, on second thought, the GPU might require 24bpp RGB to be padded out to 32 bits, in which case some messy unaligned copying would be unavoidable. Back to more realistic considerations: that Tizen emulator might be good for testing what the menus do. It probably couldn't emulate the camera-specific low-level stuff, but that doesn't mean it's not useful. I might get there eventually. Currently I'm working on unpacking the NX500 firmware to compare it to the NX1, but I'm pretty busy with the work I'm actually supposed to be doing, so I'm not going to be making any rapid progress.
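The bandwidth claims in the post above are easy to check numerically. The 3.6Gb/s practical USB 3.0 figure and the 340MHz HDMI 1.4 pixel clock are the numbers quoted in the post; the 6.5k frame size (6480×3645) is my assumption for a full-width 16:9 readout:

```python
# Back-of-envelope bandwidth math for the USB 3.0 and HDMI RAW ideas.
# Overheads, blanking intervals, and container framing are all ignored.

def raw_rate_gbps(w, h, fps, bits):
    """Uncompressed RAW data rate in Gb/s."""
    return w * h * fps * bits / 1e9

USB3_REAL_GBPS = 3.6      # practical USB 3.0 payload ceiling (from the post)
HDMI14_PCLK_MHZ = 340.0   # HDMI 1.4 max pixel clock, 24 payload bits/pixel

# 12-bit DCI 4k RAW at 24 fps over USB 3.0:
print(round(raw_rate_gbps(4096, 2160, 24, 12), 2), "Gb/s vs", USB3_REAL_GBPS)

# Apertus-style trick: two 12-bit RAW samples per 24-bit RGB444 HDMI pixel.
# Required pixel clock for an assumed 6480x3645 "6.5k" frame at 24 fps:
needed_pclk_mhz = 6480 * 3645 * 24 / 2 / 1e6
print(round(needed_pclk_mhz), "MHz vs", HDMI14_PCLK_MHZ)
```

So 12-bit 4k RAW (~2.55Gb/s) fits under a good USB 3.0 implementation, and ~283MHz of effective pixel clock for packed 6.5k sits under the 340MHz HDMI 1.4 limit, which is why both ideas are at least arithmetically plausible.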
  11. Has anyone taken a look at what exactly is in that SDK? I was under the impression it was just for developing remote apps. I think the Nikonhacker folks made an emulator for some Nikon DSLRs to help understand the firmware. I'm not sure how much they have done with it. For the NX1/500 getting root on the physical camera and running a debugger there would probably be easier and more fruitful. Nikon, Canon, Sony, Panasonic, and most other camera manufacturers use custom (or semi-custom) operating systems, which make standard debuggers incompatible. Since Samsung uses a Linux kernel it should be possible to run a standard debugger if you have root, and use that to see the instructions executing on the real hardware.
  12. I don't think I currently have the experience or time to successfully modify the NX1/NX500 firmware. I don't even have one of those cameras. However it would be a shame to let the interest in this die, so I guess I'll post some of my notes/thoughts about the firmware in hopes that it might help keep the ball rolling. Note: if you aren't into technical minutiae, the only interesting part of this wall of text is the bulleted list at the end detailing what is and is not possible in my opinion. Here are the files listed in the header of nx1.bin (version 1.4.0):
      version.info: offset=0x0 size=0x3F (same as found in /etc/version.info)
      linux image: offset=0x0130 size=0x00624748
      idk: offset=0x624878 size=0xD8E9
      linux image: offset=0x632161 size=0x3192E8
      linux image: offset=0x94B449 size=0x638518
      idk: offset=0xF83961 size=0x01FF10
      idk: offset=0xFA3871 size=0xB35140
      rootfs: offset=0x1AD89B1 size=0x117A89FF (lzo-compressed ext4 filesystem image)
      opt: offset=0x132813B0 size=0x58E91C (lzo-compressed ext4 filesystem image)
      pcachelist: offset=0x1380FCCC size=0x7000 (PAGECACHELIST, preceded by a header, I think)
      idk: offset=0x13816CCC size=0x35BCC44 (lzo compressed; the header indicates a swap image?)
      Anyone with the skills to reverse engineer a camera could figure this out pretty easily, but it was fairly tedious, so maybe this will save someone 20 minutes of poking around in a hex editor. If anyone knows what's up with the files I've labeled "idk," I would love to hear about it. The checksum algorithm is fortunately unchanged from the NX300 as far as I can tell. As documented at sites.google.com/site/nxcryptophotography/diy-firmware:
      width=32 poly=0x04c11db7 init=0xffffffff refin=true refout=true xorout=0x00000000 check=0x340bc6d9 name="JAMCRC"
      jacksum -x -a crc:32,04c11db7,ffffffff,true,true,00000000 [file]
      The main camera app binary is (I'm pretty sure) located at /usr/apps/com.samsung.di-camera-app/bin/di-camera-app in the rootfs.
It seems to access the hardware through a relatively high-level API with the /usr/lib/libmm* libraries. libmmf-camcorder.so is particularly interesting. The function I've focused my attention on is mmf_camcorder_set_attributes(), which comes from libmmf-camcorder and is used repeatedly in di-camera-app. It conveniently (and strangely, IMO) takes strings as identifiers for the attributes being set (why not just an enum? I suppose I shouldn't look a gift horse in the mouth...). Some of those attributes include "target-time-limit," "audio-encoder-bitrate," and "video-encoder-bitrate." The guy who successfully removed the recording time limit on the NX300 did it by modifying the instructions that set the variable passed along with "target-time-limit." I found the control flow instructions he mentioned in that thread, so it shouldn't be hard to get rid of the time limit on the NX1, provided the camera accepts the modified firmware. The NX500 is probably similar. The "video-encoder-bitrate" attribute also looks promising, though it would take some more advanced reverse engineering to figure out where the values are being set. So from what I've seen and read, here is what I think is and is not possible to modify on the NX1 and NX500:
      Remove time limit: Highly likely. Seems to be the same as the NX300. Pretty easy too, if there aren't any new security measures in place.
      Increased bitrate: Possible. Needs some real reverse engineering to find where the rates are set for each resolution and quality.
      Noise reduction and sharpening: Possible. I haven't seen anything that looks like it's controlling these, but if setting the bitrate works, this should be possible too. FWIW I think that increasing the bitrate would help with the noise reduction issues; H.265 tends to smooth things out a lot to achieve low bitrates.
      Re-enable 2.5k on NX500: Plausible but difficult. It depends on whether they just removed it from di-camera-app, or if they removed it from the underlying libraries as well. Either way it would likely require actually adding control flow to the binary, which opens a whole new can of worms. Beyond my current ability, for sure.
      Focus peaking for NX500 4k: Maybe? I have no idea, really. There might be a good reason they didn't include it, there might not.
      4k crop on NX1: Plausible but even more difficult. We know the hardware can do it, but it was probably never implemented on the NX1, even in pre-production.
      Gamma profile on NX500: Plausible. Similar to porting the NX500's 4k crop to the NX1, I think.
      6k 24fps H.265: Highly unlikely. The H.265 encoder would have to support frame sizes larger than DCI 4k and handle twice the pixel rate (clock speed) of 4k. Furthermore it would require implementing a brand-new video mode at a very low level. I can't say for sure it's categorically impossible, but don't get your hopes up.
      10bit H.265: Nope. The H.265 standard does indeed allow for 10bit encoding, but I highly doubt Samsung would include the significantly larger (wider busses, more complex encoding) hardware necessary to do it. It would be a miracle if Samsung had really gone to all that effort and not used it.
      6k or full-sensor 4k at more than 30fps, or 1080p at (significantly) more than 120fps: Impossible. The image sensor simply isn't fast enough. If you hope for this you will just be disappointed!
      RAW video: Not really. It might be theoretically possible to dump the live-view feed as in Magic Lantern. Who knows how fast the SD card interface is, though. Certainly no more than 1080p. I can imagine tricking the GPU into packing 12bit 4k RAW into 1080p 444 HDMI like Apertus, but consider that a pipe dream. Don't get your hopes up.
      Anyway, that was longer than I expected, but I enjoyed poking around in the firmware, so I don't regret it even if it comes to nothing.
It's a shame Samsung appears to be dropping the NX line; they are cool little cameras. p.s. If you know anything I've said is wrong, please correct me; I'm learning this as I go along.
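The CRC parameters quoted in the post above (poly 0x04C11DB7, init 0xFFFFFFFF, refin/refout true, xorout 0, check 0x340BC6D9) describe JAMCRC, which is just standard CRC-32 without the final XOR. That means the nx1.bin checksum can be computed in Python with zlib instead of jacksum:

```python
import zlib

# JAMCRC = reflected CRC-32 with no final XOR. zlib.crc32 applies the final
# XOR with 0xFFFFFFFF, so undoing that XOR yields JAMCRC.
def jamcrc(data: bytes) -> int:
    return zlib.crc32(data) ^ 0xFFFFFFFF

# Standard CRC check string; 0x340BC6D9 matches the "check=" value quoted above.
print(hex(jamcrc(b"123456789")))   # -> 0x340bc6d9
```

To verify a firmware image you would run jamcrc() over the relevant byte range of nx1.bin; exactly which bytes are covered and where the stored checksum lives follows the NX300 scheme documented at the nxcryptophotography site linked in the post.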
  13. Anything that gets read directly off the image sensor is "RAW." Magic lantern RAW comes from the image that is read out for live view, which is the same sensor mode that is used for video. Any camera with live view (essentially all modern cameras, including the NX1) could be hacked to do RAW video in theory, since they have to read out the sensor to memory at some point to get that preview image. It's just a matter of what resolutions (and bit depths) are available and how fast the camera can write that data to a card.
  14. The 5D Mark III does 1920x1080 raw by reading out every third row and column. It has a native horizontal resolution of 5760, and 5760/3=1920. Most Canon DSLRs achieve that with line skipping, which tends to leave severe aliasing and moire. Judging by the lack of severe moire in the 5D Mark III, it probably uses some variation of pixel binning, which does an analog average of neighboring pixels before reading out. Look up pixel binning online to find out more. Even pixel binning loses a lot of resolution and still tends to produce some aliasing and moire, which is why a full-pixel readout is considered a desirable feature. The NX1 probably uses pixel binning for the FHD modes, which is why the rolling shutter is reduced. In theory (I think) the sensor could do RAW with horizontal resolutions of 6480, 3240, 2160, or 1620 pixels. Unfortunately, due to the nature of pixel binning, the 3240 mode would have less than half the resolution, the 2160 mode would have less than a third, and so on.
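The candidate RAW widths above are just the 6480-pixel native width divided by whole binning/skipping factors, which a one-liner makes obvious (whether the real hardware exposes all of these modes is unknown; Bayer sensors bin in same-color pairs, so typically only some factors are offered):

```python
# Candidate binned/skipped RAW widths: native width over integer factors.
NATIVE_WIDTH = 6480

widths = [NATIVE_WIDTH // n for n in (1, 2, 3, 4)]
print(widths)   # -> [6480, 3240, 2160, 1620]
```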
  15. Looking at the drime5 kernel source I was pleasantly surprised to see that there is actually quite a lot there. I expected that they would probably do like almost every Android phone does and include all the important drivers as binary blobs, but it looks like most of the HEVC encoder driver is actually right there in the open-source part. Of course the code that actually sets the bitrate is in the closed-source camera application, but knowing which functions are used to actually pass the parameters to the driver should make it easier to find where to look in the closed-source part. Another pleasant surprise was that the firmware update files look fairly straightforward to unpack. I'm no expert in reverse engineering, but judging by a cursory examination, the level of wacky encryption/obfuscation I've seen in the firmware updates for other devices doesn't seem to be present in the NX1 files. At least that's my impression from browsing the files for an hour or two. It's always possible I could be completely wrong.
  16. I know a bit about GNU licenses and the Linux kernel, and the short answer is that no, they don't need to include anything about HEVC encoding in the open-source kernel. They could if they wanted to, but they don't have to. All Android phones are like that. The Linux kernel is running on standard ARM cores inside the DRIME5, with binary-blob drivers for things like the GPU, HEVC encoder, etc.