Otago
Everything posted by Otago

  1. Absolutely, I think it was more in relation to the features on the latest cameras - here's amazing autofocus that you probably won't use because you have a great focus puller, and here's RAW just in case you forget 20 years of experience in how to expose or set white balance ( I think there might be better reasons to use RAW than that ? ) whereas on the lower end cameras you've got: set your white balance perfectly, nail your exposure and focus, and remember to make something interesting too! Are you seeing a trend towards smaller sets or budgets where these things might start to come in useful ? Or is it technology companies driving it because they can't think of, or are finding it difficult to engineer, anything else to add ? I've been looking at C300 footage recently, scoping out the next upgrade, and it made me realise I don't need anything better than that - perhaps just some more shooting time and a holiday.
  2. I agree, it's present in both of them when it's in focus. I think the solution is to shoot 4K, which is downsampled from the 6K and won't show any of those artefacts ( if it's been done well, and there seems to be no evidence it hasn't ), and then downsample again to 1080 when you are editing. You could also try a softening filter as a form of OLPF if 1080 in camera is necessary. Can you tell from it being inconsistent between the horizontal and vertical what they are doing with line skipping vs binning ? Do they have to use what the sensor provides, or can customers make their own version by changing the microcode on the chip ? Do you know if the sensor control unit is addressable in any way ?
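To make the line-skipping vs binning point concrete, here's a toy 1-D sketch ( purely illustrative - not what any actual sensor readout does ): a detail pattern above the subsampled Nyquist rate comes through at full strength as a false pattern when you skip samples, but is merely attenuated when you bin them.

```python
import math

# 16 photosites sampling fine detail at 0.375 cycles/sample -- above the
# Nyquist limit (0.25) of a half-rate readout.
signal = [math.sin(2 * math.pi * 0.375 * i) for i in range(16)]

# Line skipping: keep every second photosite. The detail aliases to a
# false low frequency at full amplitude.
skipped = signal[::2]

# 2x binning: average neighbouring pairs before decimating. The 2-tap
# average acts as a crude low-pass filter, so the aliased component is
# strongly attenuated instead of passing through untouched.
binned = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(8)]

print(max(abs(x) for x in skipped))  # aliased detail survives at full strength
print(max(abs(x) for x in binned))   # same detail knocked down well below half
```

A real OLPF or a proper downsample does this filtering far better, which is why the 6K-to-4K path should hide the artefact.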
  3. I'm still not convinced that these camera companies are crippling all their cameras. I think they do with regard to each other, i.e. the top of the line has more features than the bottom of the line and they trickle down, but I really think they are all coming up against real technical challenges in their top of the line cameras rather than drip feeding technology. I think heat and outdated processors/software are holding them back. I started to think this when I saw that the 5D iv can only have the tagging upgrade or C-Log installed, not both at the same time: either it is a massive piece of code ( which is pretty unlikely ), there is very little spare memory in that camera ( I think this is most likely ) or they really enjoy pissing off their customers ( which is what most people think! ). It's like they upgraded the imaging chip but didn't update the processors because they still worked for what their goals were. It's hard to know for sure because they give all the processors internal names - has anyone ever seen a true breakdown of where those processors sit in something like the ARM range ? The heat issue is pretty clear when you look at the die casting in something like a C300: it's really just a heat sink ( and alignment frame ) attached to a handle. Didn't the RED One and Frankie ( one of the first prototypes ) have an early autofocus system that worked by moving the lens 1 mm when the camera heated up ? I've never worked for a Japanese company, but the idea of Kaizen, continuous improvement, seems very embedded in their engineering philosophy, so I think it's likely the culprit - but it's also the reason we have shutters that do 500K MTBF, swings and roundabouts. It feels like the polar opposite of western, innovative engineering - move fast and break stuff. RED is probably a pretty good example of this.
I have never had a Japanese camera fail on me, except when I've been far too tough on it, but my M6 needs regular servicing ( well, it feels regular - once every 5 years is pretty good actually ). The next generation - A7V, A7RVI, A7SIII, Nikon D6, D900?, Canon 1DXIII etc. - will probably have the next generation of processors in them ( I mean the real next generation in terms of benchmarks against the rest of the world, rather than an increment in their own naming ) and if THEY don't do the stuff we are all wanting then I'll definitely be on the side of the crippling conspiracy theorists.
  4. For my personal photo stuff I am using 35 and 90 Summicrons, with the vast majority being on the 35 ( both on an M6, so full frame ); I find with any more choice than that I get a bit of decision paralysis. It also helps that the photos don't really matter - if I get the shot I get the shot, and if I don't, I don't have a client asking why. I used to use a 5D and zooms but found I wasn't really present at the event I happened to be photographing because I was thinking about it too much; it just wasn't fun anymore - I had become a bad event photographer. My "professional" work is training videos that I am in, and I use a GH3 and 12-35, but probably soon to be a C300 with something like the 24-105 and 16-35. I like having lots of options when I'm doing videos that are a pain to reshoot, and my performance is far more important than the cinematography in this case. The zooms really help because I shoot in a workshop and often can't get the camera where I would like it, so zooming gets me the shot. So I suppose my strategy is to limit my variables so I can't get too distracted: in the first case the lenses, and in the second limiting what is important to me. It seems to be quite a common theme in this thread to limit the variables, which is encouraging! Pretty sure I'd curl up in a little ball with a full set of cine primes with a new focal length every 2mm ! I think it's interesting that the professionals who have to get the shot and can't control all the variables need technology to help them with RAW, higher dynamic range and, for some, autofocus. I have read a few things recently that joke about how the people who can afford cameras with RAW and massive dynamic range and amazing autofocus are often exactly the people that don't need them, because they have a crew that can control all the variables and a focus puller.
  5. I agree, but it may just do that in the H.264/5 ? Has there been a video camera that does a downsampled RAW yet ? The latest Nikon small NEF seems pretty good at preserving all the editability of RAW but it's still not quite as good as the full size file.
  6. The first person to pull that focussing command out of the E mount and reliably map it to a focus motor will make a killing! It's strange that the clip on motor looks to run the iris and zoom but not the focus, is it an internal ultrasonic motor or a hidden gear ? The latest Canon RF lenses and Panasonic L mounts seem to have very little breathing, it will be interesting to see if Sony follow suit with their new cine style lenses.
  7. So the 4K RAW is full frame and downsampled from 6K ? I haven't seen anyone confirming that the RAW is full frame. Any word on the ISO 6, 25 etc ? Does that give you full dynamic range in video or does it just move middle grey for exposure ? If it really does ISO 6 with full dynamic range either side of middle grey then that would be huge for me - never need another ND filter ever again, and I can shoot video with M lenses and almost put it in my pocket.
  8. ( from the Lenses thread ) I don't understand how the photo site size can change the depth of field, unless it is massive and there are so few samples to judge sharpness from. I could sort of see how it could be true for film, a flat CCD or CMOS with micro lenses vs a multilayered sheet. I'm obviously out of my depth with the maths behind all of this, is there a book or paper that you know of to explain it all ?
  9. Yes, it's pretty obvious when you state it like that ?
  10. I was trying to figure out why we were wrong: the optical system hasn't changed, so the depth of field hasn't changed - this is pretty fundamental in optical systems I have worked on. The bit that we were missing, as Kye says with the circle of confusion, was that while the optical system hasn't changed, the viewing conditions have. The depth of field ( effective to the viewer of the output ) is the same for any size sensor as long as the output is scaled in proportion to the input, i.e. FF35 is viewed at twice the size of M43. The circle of confusion is related to both the input and output size, and scaling that changes the visible depth of field. I think of it like zooming in to 100%: I can see whether something is actually in focus, but I couldn't tell when it's scaled to fit the screen - the difference won't be as apparent as that example though. This is interesting to me because it explains why FF35 looks so good on a phone but can be too shallow on a big screen, and may explain some of the rush to bigger sensors - as the images we view get smaller, we need shallower depth of field to match the depth of field we see ourselves or are accustomed to in media. What I haven't quite wrapped my head around yet is whether it changes the absolute level of blurriness of the out of focus areas. My initial thought is that it doesn't - it just changes the rate of transition between acceptable sharpness and out of focus, but not the absolute blurriness ( I'm sure there are better terms for this ). Anyone got any pointers on that ?
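A thin-lens sketch of that last question ( the 50mm f/2 lens, distances and circle-of-confusion values below are illustrative assumptions, not measurements ): the blur circle a defocused point leaves on the sensor is fixed by the optics alone, so the absolute blurriness doesn't change - only the circle-of-confusion threshold you judge it against changes with enlargement and viewing conditions.

```python
def blur_diameter_mm(f_mm, N, focus_m, subject_m):
    """Thin-lens blur circle on the sensor for a point at subject_m when
    the lens is focused at focus_m. Depends only on the optics, not on
    sensor size or viewing conditions."""
    f = f_mm
    s = focus_m * 1000.0                 # focus distance in mm
    d = subject_m * 1000.0               # subject distance in mm
    v_s = s * f / (s - f)                # image distance for the focus plane
    v_d = d * f / (d - f)                # image distance for the subject
    aperture = f / N                     # entrance pupil diameter
    return aperture * abs(v_d - v_s) / v_d

# 50mm f/2 focused at 3m, point at 3.1m -> one fixed blur diameter (~0.014mm)
b = blur_diameter_mm(50, 2.0, 3.0, 3.1)

# "In focus" is then a judgement against the CoC your viewing allows:
coc_modest_view = 0.030   # mm, a common full-frame CoC for modest enlargement
coc_big_view = 0.010      # mm, hypothetical stricter CoC for a big screen
print(b, b < coc_modest_view, b < coc_big_view)
```

The same point reads as sharp on the phone-sized view and soft on the big screen, while the physical blur on the sensor never changed - which matches the "rate of transition, not absolute blurriness" intuition.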
  11. I think you are correct - it's the same lens, so how could the depth of field change ? They may be taking into account the circle of confusion and resolution of the cameras - do they say how they calculate the near and far limits ? It could also be that they are working back from the 2x crop, and that is a problem with their methodology or code. I could see this error creeping in if they calculate the depth of field based on it being a 140mm equivalent and work back from that, but that is wrong.
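For reference, here's a sketch of the classic near/far limit formulas that most calculators use ( the 70mm f/2.8 numbers are hypothetical, and real calculators may differ in detail ): the only knob that differs between formats is the circle of confusion, and halving it for a 2x crop narrows the quoted band even though the lens is unchanged.

```python
def dof_limits_m(f_mm, N, focus_m, coc_mm):
    """Classic near/far acceptable-sharpness limits, in metres."""
    f = f_mm
    s = focus_m * 1000.0
    H = f * f / (N * coc_mm) + f                    # hyperfocal distance, mm
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near / 1000.0, far / 1000.0

# Same 70mm f/2.8 lens focused at 3m. Full-frame CoC vs a 2x-crop CoC
# (half the size, because the crop is enlarged twice as much to view):
print(dof_limits_m(70, 2.8, 3.0, 0.030))  # wider near/far band
print(dof_limits_m(70, 2.8, 3.0, 0.015))  # narrower band, same optics
```

So a calculator giving different limits per sensor isn't necessarily wrong - it's baking the viewing-conditions assumption into the CoC. It only becomes an error if it also swaps in the "equivalent" focal length.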
  12. The problem with the DPAF in anything but the most recent EOS Cinema bodies is that half the features don't seem to work with lenses which aren't STM ( all my lenses are USM ). The C200 and C300mkii both use all the autofocus features with the USM lenses, but are still expensive in the case of the C300mkii, or produce large data rates ( or low 8-bit ones in the H.264 codec ) in the case of the C200. The C500mkii is rumoured to be coming out soon and I'm hoping it's going to be very successful, so there will be some more C300mkiis and C200s hitting the used market for people like me to hoover up. To get sharp 1080p it has to be from an oversample, like the EOS Cinema line; line skipping and pixel binning drop you below real 1080p resolution because the Bayer pattern doesn't deliver the true 540 line pairs required for HD - even 4K is only really 2.8K, and you need a 6K Bayer sensor for a true 4K resolution file. Only the Cinema EOS line does this oversampling in the Canon line up, but plenty of other cameras like the Panasonic GH5, S1 and S1H do it ( but don't have the DPAF ).
  13. A 4K Bayer sensor doesn't provide 1000 line pairs, so what we are seeing in the Pocket 6K ( downsampled from 6K ) is real 4K resolution ( i.e. 1000-ish line pairs ). A 4K Bayer sensor works out to about 2.8K of actual resolution - I think that's why lots of people weren't blown away moving to 4K from HD that came from an oversampled sensor. Sticking with the Canon C lineup, that means the real resolutions have been: 1080p from the C300 ( 4K sensor oversampled to 1080p ), 2.8K recorded in a 4K file from the C300 mk2, and the rumoured C500 mk2 is going to be 4K from an oversampled 6K sensor. I think Canon are struggling to keep up with Sony - the one constant among all the new cameras we are all raving about is the Sony sensor in them ( I think ? Is anyone else making their own sensors apart from Canon ? ). It could be that the sensors run very hot when sampling the full width, or that the processors and driving circuitry do, and that's why they are only reading the full sensor in cameras that have fans and a decent thermal mass.
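The line-pair arithmetic above follows from the common rule of thumb that a Bayer sensor resolves roughly 1/sqrt(2) of its photosite count as real luma detail ( the exact factor varies with the demosaic algorithm, so treat ~0.7x as an approximation, not a spec ):

```python
import math

# Rule-of-thumb effective luma resolution of a Bayer sensor after
# demosaicing: roughly photosites / sqrt(2), i.e. the ~0.7x figure.
def effective_resolution(horizontal_photosites):
    return horizontal_photosites / math.sqrt(2)

print(round(effective_resolution(4096)))  # a "4K" Bayer sensor: ~2.8K real
print(round(effective_resolution(6144)))  # a 6K Bayer sensor: comfortably above 4K
```

Which is why 4K-from-4K-Bayer delivers ~2.8K, while a 6K Bayer readout downsampled to a 4K file can genuinely fill it.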
  14. It also stops a future PL mount from rotating when using a follow focus - no point in having a locking pin in one mount if there isn't one in the other. Could be confirmation that a Varicam and EVA L mount is coming; it would make sense for them to standardise on that mount.
  15. Does the camera do the 3:2 pull down internally then, or is there something happening to the clock signal itself ? Do you know if it's just the sensor and its driving electronics that change frequency between 50Hz and 60Hz, or does the whole camera get a frequency change ? Anyway, that's that theory debunked ?
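For anyone unfamiliar with the cadence being asked about, a minimal sketch of the generic 3:2 pulldown scheme ( the standard way 24p is carried in a 60i signal - not a claim about what this particular camera does internally ):

```python
# Classic 3:2 pulldown: each 24p frame is held for alternately 2 and 3
# interlaced fields, so 4 film frames (A,B,C,D) become 10 fields and
# 24 frames/s becomes 60 fields/s.
def three_two_pulldown(frames):
    fields = []
    for i, frame in enumerate(frames):
        count = 2 if i % 2 == 0 else 3   # 2 fields, 3 fields, 2, 3, ...
        fields.extend([frame] * count)
    return fields

print(three_two_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

The point is that pulldown is pure frame repetition in the output stage; it doesn't require touching the sensor clock at all.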
  16. I think that some cameras use different clocks for 25/50p and 30p/60p, which is why you have to restart the camera when you change that setting. I don't know if these Canon cameras do though. It's possible that 24p would need another clock circuit, and leaving that out might save some space on the board and reduce the cost slightly. If that is the case, I could see an argument that there is a real ( but minor ) cost saving in leaving out 24p - but it also happens to be a "total coincidence" that it forces some people to buy the more expensive camera ? I also suspect that there is a real technical reason for no oversampled 4K from full frame: Canon do seem to be struggling to keep up with Sony on sensor technology, and perhaps their sensors get too hot or read out too slowly to do it, or their processors aren't fast enough. I think the CEO would rather people thought they were segmenting products than being really far behind on technology - that would look bad to their shareholders.
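A small illustration of the two frame-rate families behind that clock argument ( the 1000/1001 divisor is the standard NTSC relationship; whether any given camera really keeps separate clock circuits for each family is speculation ):

```python
# Broadcast frame rates fall into two families: "PAL" rates that are exact
# integers (25, 50) and "NTSC" rates that are integers divided by 1.001
# (23.976, 29.97, 59.94). A clock tree built for one family can't derive
# the other without an extra reference.
NTSC_DIVISOR = 1.001

def ntsc_rate(nominal):
    """Actual frame rate for a nominal 'NTSC-family' rate."""
    return nominal / NTSC_DIVISOR

for nominal in (24, 30, 60):
    print(f"{nominal}p is actually {ntsc_rate(nominal):.3f} fps")
```

So "24p" in the NTSC world is really 23.976, which sits in the /1.001 family - plausibly a different oscillator from the 25/50 one, which would make omitting it a genuine ( if tiny ) saving.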
  17. It is very dependent on your local tax laws - in the UK, for instance, you can get into trouble claiming a percentage of mortgage payments based on square footage when it comes time to sell. My chats with accountants have always saved me more than they have cost, but I file myself and only go to them for advice once everything is properly formatted.
  18. I think I am probably not using the right terminology, or haven't understood what noise or sharpening artefacts look like. Probably both ? The image looks smoother to me on the BMPCC 4K ( when not at 100% ) for some reason. I agree that the BMPCC 4K is definitely noisier - is it just that there is more detail coming from a 6K sensor downsampled to 4K in the S1 rather than a 4K Bayer sensor in the BMPCC 4K ? I have played around in Resolve with the files but I am still far from proficient. As I said, I think they both look great, it's very much nit picking for the uses I have for the camera - oh to have a few weeks and a far higher credit limit to try them all out.
  19. Sorry, should've been clearer, I meant the noise reduction and sharpening. I'm wondering if it can match the smooth look of the Pocket 4K; it looks amazing, but the Pocket 4K looks nicer to my eyes, even with the slightly lower dynamic range - perhaps it's RAW vs H.265 though.
  20. Does it go lower than that ? If you're bored it would be nice to see what a range of settings do to the image - but only if it would be useful to you, obviously!
  21. The logistics of selling digital products through distributors and then retailers is by no means a solved problem - everyone wants their cut and is afraid of setting a precedent. The distributors know their business is dying; there are so few camera shops now that the manufacturers could take on selling directly to retailers quite easily, but the distributors are fighting tooth and nail to keep their businesses alive. Like a fungal nail infection that just won't go away ? What were the settings on the S1 for this ? The S1 looks too sharp, to me, in comparison to the BMPCC 4K, but they both look really, really good! Dancing on the head of a pin.
  22. Because "raw footage" is ambiguous but "RAW footage" isn't.
  23. I do wonder why they are fighting this so hard, is the internal RAW really so important to the success of Red cinema cameras ? I can't see people shooting big budget stuff turning up with a Z6, they are already sniffy about Blackmagic stuff apparently! Is this why no digital cameras ( like the A9 ) shoot higher than 20fps ? Is there a legal difference between video and a sequence of stills, because I couldn't see one in the documents. (Andrew: Have you thought about putting this forum into a separate legal entity from your blog ? I would be sad to see the information contained in this forum lost to a legal battle.)
  24. So Red have sold $250M of cameras, with total revenue of over $500M, according to these documents. That doesn't seem like very much considering Arri did €415M last year alone ( across all their businesses, I know, but still! ) - no wonder they have to sell the cards for so much ?
  25. I don't think this is for ProRes RAW, it's so they can put RAW video in the iPhone and the licensing there would be crippling. I wonder if the Hydrogen is some sort of play to show that Red is in the phone market and the iPhone doing RAW video would be a loss to them directly. If you can wrap RAW in a ProRes stream then you can probably wrap it in a more consumer oriented format, 1/3 of the data rate has got to be compelling once phones get fast enough ( they may even be already )