Everything posted by slonick81

  1. That was expected. You need to store 1 value per pixel for Bayer raw instead of 3 for RGB (or 2 for a YUV 4:2:2 stream, 1.5 for 4:2:0, plus debayering), so you get less processing and a smaller stream at the same compression ratio - quick numbers in the sketch below. That aside, I still can't get how ProRes became an industry standard with its "fuck Win/Lin/*BSD platforms" attitude.
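     A minimal back-of-envelope sketch in Python (the 4K resolution and 12-bit depth are my own assumptions, just to show the ratios):

         # Uncompressed per-frame size for common sample layouts,
         # at an assumed 4096x2160 resolution and 12 bits per sample.
         W, H = 4096, 2160

         formats = {
             "Bayer raw (1 sample/px)": 1.0,
             "YUV 4:2:0 (1.5 samples/px)": 1.5,
             "YUV 4:2:2 (2 samples/px)": 2.0,
             "RGB (3 samples/px)": 3.0,
         }

         for name, spp in formats.items():
             mib = W * H * spp * 12 / 8 / 2**20  # bits -> MiB
             print(f"{name}: {mib:.1f} MiB/frame")

     Same encoder and same ratio on top of that, and the raw stream stays proportionally smaller.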
  2. Good morning! They have been doing it for a decade. Servers - ditched. Classic workstations - ditched? (Still have some hopes for this year.) The clumsy FCPX launch, when lots of pros just switched to something else and never looked back. ZFS - ditched, and APFS doesn't look oriented toward RAIDing/clustering. HW upgrades for extended lifetime - ditching in progress.
  3. Well, I read Alex Tutubalin's blog (RawDigger, FastRawViewer, LibRaw) from time to time, and all the cases he describes as a developer of raw file manipulation tools made me think this way. He has an idea of what raw metadata means and how to apply it, but he often gets strange results with new cameras. They usually don't affect general-style photography but can be found by raw analysis or in some extreme shooting conditions, like: https://www.lonelyspeck.com/why-i-no-longer-recommend-sony-cameras-for-astrophotography-an-open-letter-to-sony/ But I guess it's safe to assume there is quite some fine tweaking inside modern photo cameras. Is it applicable to cinema cameras? Hard to say, but why would all these companies avoid the benefits of modern processing power? Like, the Sony a7R3 chews through 420 Mpix/s for stills, so in theory it's capable of applying the same processing to a 4K 50fps stream (quick check below). I'm no expert in ML, but it looks like, in general, raw recording there is a side hack that grabs the image from a memory buffer at some stage of the processing pipeline and dumps it to the card. Who knows, maybe this issue is corrected later in the processing, or it wasn't designed to be corrected in this sensor mode or operation mode.
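     The throughput check, assuming readout cost simply scales with pixels per second (my simplification): 3840 * 2160 * 50 ≈ 415 Mpix/s and 4096 * 2160 * 50 ≈ 442 Mpix/s, so a 420 Mpix/s stills pipeline sits right at UHD 50p territory.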
  4. There is this sensor: http://www.onsemi.com/pub/Collateral/KAE-02152-D.PDF It has a kind of non-destructive amplifier that uses some side effects of transferring the charge during readout to probe the charge and decide whether to apply the charge multiplier for extra gain or not. This kind of amplification makes it possible to dig out much more DR, according to the specs. But it's CCD, and I have no idea if this tech is suitable for CMOS sensors. Wish I could understand this stuff better. Fixed-pattern noise reduction, black frame subtraction, microlens effect correction (like the "Italian flag"), regular spatial NR, dead pixel remapping - that's what I know of (a rough sketch of two of these steps below). That's why you can get nicely matching results from two different cameras of one model - all the fine, unit-specific tuning is already done. If something is omitted or done wrong, we get stories like the Ursa Mini 4.6K launch. So, better to say that a raw image isn't baked but normalised.
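     A minimal sketch of two of those normalisation steps in Python, assuming a 12-bit Bayer frame as a 2D uint16 array (real pipelines are far more involved, this just shows the idea):

         import numpy as np

         def normalise(raw, black_frame, dead_mask):
             # Black frame subtraction: remove the per-pixel
             # offset / fixed-pattern component.
             out = raw.astype(np.int32) - black_frame.astype(np.int32)
             out = np.clip(out, 0, 4095).astype(np.uint16)
             # Dead pixel remapping: replace flagged photosites with
             # the mean of same-colour neighbours (2 px away on Bayer).
             ys, xs = np.nonzero(dead_mask)
             for y, x in zip(ys, xs):
                 neigh = [out[yy, xx] for yy, xx in
                          ((y - 2, x), (y + 2, x), (y, x - 2), (y, x + 2))
                          if 0 <= yy < out.shape[0] and 0 <= xx < out.shape[1]]
                 out[y, x] = int(np.mean(neigh))
             return out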
  5. But you'll get two ranges instead of one - for the lighter and darker parts independently. These ranges may be narrower than the one from a double-sized photosite, but in combination they can cover more range. Let's say a 2x-sized photosite has 10 stops of DR and a 1x-sized photosite has 9 stops - proportional to area - but you can expose the two of them with a 4-stop bracket and get 13 stops with good range mapping (the midtones of one site cover the problematic zones of the other - shadows or highlights); numbers spelled out below. But you may want to amplify the low signal before the ADC to get less digitising error. And anyway, what you get in a raw file is not actual sensor data, there is still a lot of processing done. You can simply drain charge from the brighter site through a resistor so it stores proportionally fewer electrons with the same exposure time. Middle grey is different for the dark and bright photosites. Some kind of smart blending in the overlapping area of dynamic range that evaluates non-linearity in shadows and highlights and the noise amount in shadows. Seems so. I guess it's still possible to get good resolution in the middle of the DR, where both types of sites would be usable, but shadows and highlights should suffer.
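     The bracket arithmetic, spelled out (my notation, stops counted from the big site's noise floor): the 2x site covers stops 0..10; the 1x site, exposed 4 stops down, covers stops 4..13 (4 + 9). The union is 13 stops, with a 6-stop overlap (10 - 4) where both sites are usable and can be blended.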
  6. Yes, I was thinking about it, too, trying to estimate the rationality of an EVA1 with an m4/3 mount. Actually, while 5.7K for S35 looks like some random value in between 4K and 8K, it gives a 4K DCI crop that fits nicely on a 4/3 sensor (math below).
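     The crop check, assuming uniform pixel pitch across the 5700 px / 24.6 mm sensor width: a 4096 px crop is 4096 / 5700 * 24.6 ≈ 17.7 mm wide - almost exactly the 17.3 mm width of a 4/3 sensor.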
  7. I did some math, too. 30" ≈ 75 cm, a man's head ≈ 25 cm. Vertical angle of view (as it's not affected by anamorphics):
     2 * arctan((25/2) / 75) = 2 * arctan(0.1(6)) ≈ 2 * 9.5 = 19 degrees
     According to several online FOV calculators that's about the full height of a 4/3 sensor with a 35mm lens, a bit shorter maybe (quick check in code below). All APS-C/S35 flavours are 25+ degrees.
     Edit: ah, easier, there is a relation between the lengths:
     head / distance = sensor_height / focal_length -> sensor_height = head * focal_length / distance = 25 * 3.5 / 75 ≈ 1.2 cm = 12 mm
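     The same numbers in Python, with typical published sensor heights as my assumption:

         from math import atan, degrees

         head_cm, dist_cm, focal_mm = 25.0, 75.0, 35.0

         aov = 2 * degrees(atan((head_cm / 2) / dist_cm))
         print(f"head fills {aov:.1f} deg vertically")  # ~18.9 deg

         for name, h_mm in {"4/3": 13.0, "APS-C": 15.6}.items():
             sensor_aov = 2 * degrees(atan((h_mm / 2) / focal_mm))
             print(f"{name} @ 35mm: {sensor_aov:.1f} deg")  # ~21.0 / ~25.1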
  8. What ISO was it shot at? Was it pushed up in post? Looking at this screengrab, I get a feeling of some delicate noise reduction in post.
  9. Try remuxing the container first. You already have h264 inside an .mts container, and it fits .mp4 nicely. It's really fast - usually storage performance limits the process - and it doesn't recompress the video stream. I usually use ffmpeg and a small batch script for this task (a sketch below). It can scan recursively down the folder tree, remux any file with a given extension into a new container, then put the resulting file in the same directory or at another destination. And if it still stutters in AP even inside .mp4 - just change the script a little and it will convert your footage to ProRes/DNxHD/whatever... Well, (M|J)PEG artifacts are the thing that ruins the show. So the less, the better.
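     A minimal sketch of what I mean, in Python instead of a .bat file. It walks a folder tree and remuxes every .mts into an .mp4 next to the original, with no recompression (-c copy); paths and extensions are examples, adjust to taste:

         import subprocess
         import sys
         from pathlib import Path

         def remux_tree(root, src_ext=".mts", dst_ext=".mp4"):
             # AVCHD files are often uppercase .MTS - adjust src_ext
             # if your filesystem globbing is case-sensitive.
             for src in Path(root).rglob(f"*{src_ext}"):
                 dst = src.with_suffix(dst_ext)
                 if dst.exists():
                     continue  # don't clobber earlier runs
                 # -n: never overwrite; -c copy: remux streams as-is.
                 # AVCHD audio is usually AC-3; most players cope with
                 # AC-3 in .mp4, but swap in "-c:a", "aac" if yours doesn't.
                 subprocess.run(
                     ["ffmpeg", "-n", "-i", str(src), "-c", "copy", str(dst)],
                     check=True,
                 )

         if __name__ == "__main__":
             remux_tree(sys.argv[1] if len(sys.argv) > 1 else ".")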
  10. Well said, IronFilm! I was thinking about an mft variant of the EVA-1 as well. No doubt it's totally possible in the technical sense - the LS300 is proof - and it's unclear why the majority ignores or even denies this fact. Maybe because Panasonic hasn't had anything in the middle for ages. But is it reasonable? I checked several aspects.
      First - resolution. m4/3 lenses won't cover the whole sensor; maybe the covered area would have such an awkward crop factor and/or resolution that it's not worth trying? Math: (5700 / 24.6) * 17.3 ≈ 4000 pixels wide. So it totally covers UHD even in 4:3 aspect and can get 4096 on 17:9 (image-circle check below). More to say, it fits so well, without any noticeable crop, that it makes me think it was originally designed for mft in terms of pixel density.
      Second - lenses. Even native lenses would be usable with a crop on the EVA-1, and there are a lot of mft lenses that will cover an S35 sensor. The most affordable anamorphic 2x lenses are mft. And those new affordable Fuji cine lenses could be rehoused for mft, not just for EF.
      Third - usability. We all understand why so many non-Canon cameras with EF mount come to market today - tons of lenses and (as rumoured) no more patent protection. A nice choice for a company without its own lens ecosystem. But the EF specs are still proprietary and closed. I guess there will be some kind of HCL for lenses, it won't be long, and the EVA-1's AF on EF lenses seems rudimentary anyway. Nothing to lose in comparison with the "mft + adapter" combo. And on an mft mount it's totally possible to adapt a lot of short-flange-distance lenses. And use speedboosters. And have fewer problems adapting PL lenses. And have small and light native mft lenses with DFD autofocus for small setups.
      Fourth - ideology. The current mft leader - the GH5 - is heavily video-oriented. If someone wants to benefit from the mft system's advantages (compact size and light weight) on the photo side, the GH5 is a bad choice - it's bulky, expensive and has no crucial advantages in photo mode over many cheaper and smaller models in the mft family. The majority of GH5s are bought mostly for video shooting, and it's usually a noticeable investment. And people usually start to think about lens investment at this point. I see quite a lot of videographers who started with a G6/7 and a kit lens and are now trying to move further, or guys switching from Canon, or some productions using the GH5 as a B/C/crash cam. They are deciding whether it's worth investing in native lenses for video production or not. And they are the ones who could potentially invest in top lenses. But Panasonic kinda says "no". The AF is not that great, you still need to rig your setup to a degree and make it heavier, and all those mft lenses will be useless if you want to move to pro solutions. In fact there is no straight-line path from the GH5 to the EVA-1, because you have to change all your gear (except flash cards, hopefully), and there are no benefits from switching to the EVA-1 rather than a Canon Cx00, Sony FSx, BM Ursa or even some LS300 descendant. I do understand why Canon wants to tie us to their solutions, but I have no idea why Panasonic tries to drag us into Canon's ecosystem, especially facing Sony's MILC market domination. Sell more lenses today - sell more camera bodies tomorrow.
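      The image-circle check for that 4096-wide crop (pixel pitch from the 5700 px / 24.6 mm figures; the m4/3 image circle is the sensor diagonal, sqrt(17.3^2 + 13^2) ≈ 21.6 mm): the crop is 4096 / 5700 * 24.6 ≈ 17.7 mm wide and 17.7 * 9/17 ≈ 9.4 mm tall, so its diagonal is sqrt(17.7^2 + 9.4^2) ≈ 20.0 mm - comfortably inside the 21.6 mm circle mft lenses are designed for.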