About slonick81

  1. Color saturation is defined not by the range of values within any single channel (R, G or B) across the image, but by the difference between channels within a given pixel. Color fidelity therefore depends on how many steps that difference is digitised into. If an image has RGB values of (200, 195, 205) at one point, (25, 20, 30) at another and (75, 80, 70) at a third, that doesn't mean its color range is 25-200: the per-pixel channel difference is only 10, and the image is badly lacking in color.
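The point above can be sketched numerically: the per-channel value range across an image looks wide, while the per-pixel spread between channels (which is what carries saturation) stays tiny. A minimal Python sketch using the three sample pixels from the post:

```python
import numpy as np

# The three sample pixels from the post: near-neutral greys at different brightness.
pixels = np.array([
    [200, 195, 205],
    [25, 20, 30],
    [75, 80, 70],
])

# Per-channel value range across the whole image: looks wide...
channel_range = int(pixels.max() - pixels.min())  # 205 - 20 = 185

# ...but saturation lives in the per-pixel spread between channels.
pixel_spread = pixels.max(axis=1) - pixels.min(axis=1)  # [10, 10, 10]

print(channel_range)             # 185
print(int(pixel_spread.max()))   # 10
```

So despite values spanning 20-205, no pixel deviates from grey by more than 10 code values.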
  2. I tried it when it was in late beta. The core functionality worked well at the time: it played BMCC 2.5K raw on an average quad-core CPU and GTX 760/770 NVIDIA cards, debayering quality was good, and after some digging I was able to dial in the look I wanted. The interface was clumsy to a degree - panel management was painful at times, and some sliders were off scale, which made it hard to set precise values. But it wasn't unbearable or irritating. The main showstopper for me was its inability to efficiently save results to ProRes/DNxHD for proxy edits. It was technically possible to stream the Processors' image output to ffmpeg, but the encoding speed was slow. So in the end it wasn't faster than Resolve at this task, and Resolve had much better media management and overall functionality. Besides, Resolve is more universal as a transcoder: it can work with Canon, RED and ARRI raw files. I'll try to test the current version if I have time. But I think they targeted a narrow niche and missed their window: CUDA and Windows only, no fast proxies, limited input format support, Resolve is quite a beast now, and hardware is much more powerful than 3-4 years ago. But if you shoot mostly with BM cameras it may fit your needs really well, why not?
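For reference, the kind of ffmpeg proxy transcode mentioned above looks roughly like this (a sketch - the filenames are placeholders, and the exact profile is up to your pipeline):

```shell
# Transcode a clip to ProRes 422 Proxy (prores_ks profile 0) for offline editing.
# input.mov / proxy.mov are illustrative names.
ffmpeg -i input.mov \
       -c:v prores_ks -profile:v 0 -pix_fmt yuv422p10le \
       -c:a pcm_s16le \
       proxy.mov
```

Piping frames from another application into ffmpeg's stdin is possible too, but as noted above the encode itself was the bottleneck.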
  3. So, some nice films were shot with Canon's high-end cinema cameras, and therefore we should accept the limitations and marketing crippling of the EOS RP, and buy it out of a desire to be associated with talented winners?
  4. What do losers choose? Do winners win because of Canon? Will someone win for sure just by choosing any Canon product (specifically, the EOS RP)?
  5. Set "Mismatched resolution files" (under "Input scaling") to "Scale full frame with crop". Or you can set it individually for any clip on the timeline: Inspector - Retime and Scaling - Scaling - Fill.
  6. 1) After the Russian internet, it's like a Victorian gentlemen's club here. But the problem is real, so better to take some measures. 2) Total prohibition of any political topics is overkill, in my opinion. You won't choke it to death; it'll still surface in indirect and ugly forms. Besides, Andrew himself has started some discussions on political topics, so I guess it's important to him too. 3) So I voted for a subforum. All the hotheads can quarrel there, and should be banned from other threads for trying it elsewhere. It will take more moderation effort, but it's the lesser evil of the three.
  7. slonick81

    All things RAID

    I run into this idea from time to time, and it sounds reasonable, but somehow real life doesn't bear it out. I mean, the article's example is a RAID5 made of several 2TB drives: the controller needs to read 12TB of data to rebuild it, so there is a significant chance of hitting an unrecoverable read error during the rebuild and losing some data. But the same applies to ordinary data reads. Roughly every 20TB read you should hit such an error, whether it's RAID6, RAID1 or a single drive. Then (in the case of ZFS, for example) the system will detect it as a checksum error, mark the array as degraded and try to rebuild it if possible. But 20TB read (not stored) is a really small amount of data when we're talking about video editing. In my case it's like ingesting the footage for a film or a small series and transcoding it to a proxy format, without even starting to edit. Or there's the scrub twice a month, when the whole array is checked for consistency; the array should degrade just from that procedure. So I should be swapping drives and rebuilding the array all the time. But in reality, nothing even close. And it makes me paranoid. I even started keeping a test data set with externally calculated checksums and verifying it myself from time to time - it's been fine so far.
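The rebuild odds above can be worked out explicitly. A Python sketch, assuming the commonly quoted consumer-drive URE rate of 1 per 1e14 bits and independent errors (both assumptions - real drives don't fail independently, which is part of why the spec-sheet math overstates the risk):

```python
# Probability of at least one unrecoverable read error (URE) when reading
# N bytes, assuming independent bit errors at the consumer-drive spec rate.
URE_RATE_PER_BIT = 1e-14  # assumed spec-sheet figure, 1 error per 1e14 bits

def p_at_least_one_ure(bytes_read: float) -> float:
    bits = bytes_read * 8
    # P(no error in all bits) = (1 - rate)^bits; take the complement.
    return 1 - (1 - URE_RATE_PER_BIT) ** bits

rebuild_read = 12e12  # 12 TB read to rebuild the RAID5 from the article
print(f"{p_at_least_one_ure(rebuild_read):.2f}")  # ~0.62
```

By this naive math, well over half of such rebuilds should hit a URE - which is exactly the mismatch with everyday experience that the post describes.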
  8. slonick81

    All things RAID

    Random thoughts: 1) A RAID should have a consistency-check procedure that you can put on a schedule (so it won't interfere with your work), set the priority of (you may get urgent work right in the middle of a self-check), and get readable output from (so you at least don't have to guess which drive went bad). Most cheap disk enclosures, as well as many cheap quasi-RAID cards, don't have one, and you'll notice data rot only when you fail to read the data. 2) HW RAID consistency != FS consistency. I've seen systems showing healthy RAIDs with a badly corrupted NTFS/ext3 on top. If you have a classic RAID providing a block device with the OS managing your FS, better check both (RAID/block device and FS) from time to time. 3) FS consistency != data consistency. Say "hello!" to network errors, software crashes and those funny encrypting viruses. It's not RAID's direct responsibility, but better to keep it in mind. 4) Rebuild time. If we have a classic RAID for video editing (i.e. no database patterns with 4K random IO), plus a HW RAID SoC or CPU/RAM with decent performance for SW RAID, then rebuild speed is limited by the write speed of the new disk (the one replacing the broken one). 5) RAID doesn't replace backups. I'm somewhat of a ZFS fan. At work I have a 120TB NAS under FreeNAS and a Proxmox node for FTP and some random VMs, both running on ZFS - so far, so good. The NAS was originally built 3 years ago and grew from ~40TB to its current size, and some projects have lived there all this time - no lost or corrupted data. I get 700MB/s read/write over 10GbE according to the Blackmagic utility; performance is mostly limited by the HBA (an LSI 2008-based SAS card - quite an old design, for a tight budget).
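Point 1 maps directly onto ZFS's built-in scrub. A sketch of the relevant commands (the pool name `tank` and the schedule are illustrative):

```shell
# Kick off a consistency check of the whole pool in the background.
zpool scrub tank

# Readable output: progress, and per-device read/write/checksum error counts,
# so you see exactly which drive is going bad.
zpool status tank

# Example crontab entry to run it at 03:00 on the 1st and 15th of each month:
# 0 3 1,15 * * /sbin/zpool scrub tank
```

Scrubs run at low priority by design, so normal IO mostly proceeds; FreeNAS exposes the same scheduling through its web UI.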
  9. 1) Memory. Get a MB with 4 RAM slots. This CPU has 2 memory channels, so buy 2 sticks to reach the amount of memory you need. The general rule for the long run is to buy the largest-capacity sticks - 16GB in your case. You'll get 32GB of RAM, a sufficient amount to start editing in AP/Resolve/Avid or whatever, and you can always add another 2 sticks later for more (up to 64GB for your platform). 2) CPU. The 8700 is the top one for this Intel platform, so you can't get better either now or later (the next-gen Intel CPUs will most likely require a new chipset/MB). If you're thinking about a CPU upgrade path - look at AMD, they're much better at keeping CPU-MB compatibility over time. You'll get 8 cores but lose some single-thread performance. 3) PCIe slots. Get a full-sized ATX board with 3x PCIe x16, or 2x PCIe x16 + PCIe x4, with at least 4 real lanes in each of those slots (they usually phrase it like "PCI Express x16 slots, running at x4"). Check whether a 2-slot video card blocks two of these "many-lane" slots at once - it should not! Check whether these slots share lanes with M.2/NVMe SSDs - sometimes that will prevent you from using a high-speed card and those SSDs at the same time. Why is this important? PCIe x1 slots won't give enough throughput for many cards - GPUs, 4K video IO, 10/40GbE NICs, 10Gb USB3, RAID/HBAs. So for a classic NLE workstation setup - GPU for the OS monitors and editing acceleration, IO card for video preview, RAID for storage and data protection - those 3 slots are a necessity. 4) Don't overclock, especially with water cooling. It's tempting to get an extra 30% performance out of nothing, but it's a big gamble in the long run. It usually fails at the worst possible moment and isn't worth the nerves or the money. And I'm for classic air cooling in a workstation. If the water pump fails you'll get an instant system hang or extreme lag (to the point that you can't save your work) due to heavy throttling. With a big chunky radiator and decent case airflow you can work for a week before you notice the CPU cooler has failed. And there are no water leaks.
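The lane-count point can be quantified. A quick Python sketch using the commonly cited usable throughput of PCIe 3.0, roughly 985 MB/s per lane (an approximation after link encoding, before protocol overhead):

```python
# Approximate usable PCIe 3.0 throughput: 8 GT/s with 128b/130b encoding
# comes out to roughly 985 MB/s per lane.
PCIE3_MB_S_PER_LANE = 985

def slot_throughput_mb_s(lanes: int) -> int:
    return lanes * PCIE3_MB_S_PER_LANE

TEN_GBE_MB_S = 1250  # 10 Gbit/s line rate in MB/s

print(slot_throughput_mb_s(1))  # 985  - below 10GbE line rate
print(slot_throughput_mb_s(4))  # 3940 - plenty for 10GbE NICs, HBAs, 4K IO cards
```

So an x1 slot can't even feed a 10GbE NIC at line rate, while 4 real lanes cover the typical NIC/HBA/video-IO cards - hence the "at least 4 real lanes per slot" rule above.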
  10. I've found that AP handles HLG and V-Log (both 8- and 10-bit), transcoded or not, much worse than Resolve when it comes to manual grading or just applying a LUT. Dunno why. I blamed the h264 decoder at first, but converting to ProRes didn't make things better. So I guess it's the color engine itself.
  11. I guess there is some lossless compression in the "uncompressed" raw, because the 3:1 and 4:1 data rates more or less correspond to ~400MB/s of uncompressed 4K30p.
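The ~400MB/s figure checks out arithmetically. A sketch assuming 4096x2160 12-bit Bayer data at 30fps (the resolution and bit depth are assumptions for illustration):

```python
# Uncompressed Bayer raw data rate: one 12-bit sample per photosite.
width, height = 4096, 2160
bits_per_sample = 12
fps = 30

bytes_per_sec = width * height * bits_per_sample / 8 * fps
print(f"{bytes_per_sec / 1e6:.0f} MB/s")  # ~398 MB/s
```

Which lands right on the ~400MB/s mark, so 3:1 and 4:1 ratios quoted against that baseline make sense.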
  12. I'm really curious to read this thread in October '18...
  13. Is this about the raw part or the industry-standard part? Compressing raw is just common practice - RED, CineForm, CinemaDNG, Canon Raw Light - it's rational to compress before debayering and leave the rest to post. It would be strange if Apple ignored this opportunity. Standards... Apple introduced an intermediate codec not only with an obvious vendor lock-in but with a platform lock-in as well. And that's fine - Apple is within its rights to follow its business model. But somehow the industry abandoned the idea of an open-standard intermediate codec, ignored the other intermediate codecs, and here we are now: "- We need masters in ProRes LT! - What about DNxHR? - What's that?" OK, "ffmpeg.exe -c:v prores_ks -profile:v 1 ..." and here we go, but some frustration is still left. And I'm afraid the industry will switch to this raw flavour of ProRes too, and it won't have an effective implementation outside OS X.
  14. That was expected. For Bayer raw you store 1 value per pixel instead of 3 for RGB or 2 for a YUV 4:2:2 stream, and you postpone debayering, so you get less processing and a smaller stream at the same compression ratio. That aside, I still can't get how ProRes became an industry standard with its "fuck Win/Lin/*BSD platforms" attitude.
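Those per-pixel sample counts translate directly into relative uncompressed stream sizes. A quick sketch (equal bit depth and compression ratio assumed):

```python
# Samples stored per pixel for each representation.
samples_per_pixel = {
    "Bayer raw": 1.0,   # one sensor sample per photosite, debayer happens in post
    "YUV 4:2:2": 2.0,   # Y every pixel; Cb and Cr each on every other pixel
    "RGB 4:4:4": 3.0,
}

# Stream size relative to Bayer raw at the same bit depth and compression ratio.
for name, spp in samples_per_pixel.items():
    print(f"{name}: {spp / samples_per_pixel['Bayer raw']:.1f}x")
```

So at the same codec efficiency, raw recording is roughly half the size of a 4:2:2 stream and a third of full RGB - before even counting the processing saved by skipping debayer at record time.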
  15. Good morning! They've been doing it for a decade. Servers - ditched. Classic workstations - ditched? (Still holding some hope for this year.) The clumsy FCPX launch, when lots of pros just switched to something else and never looked back. ZFS - ditched; APFS doesn't look oriented toward RAIDing/clustering. HW upgrades for extended lifetime - ditching in progress.