Everything posted by Otago

  1. Otago

    Formula 1 cameras ?

    I'm envious of the concentration it takes for ENG and documentary shooters to think and edit in those conditions! These might be of interest to some people : https://www.live-production.tv/news/sports/gearhouse-broadcast-helps-take-sky-sports-f1-coverage-road.html https://www.svgeurope.org/blog/headlines/behind-the-scenes-sky-sports-coverage-of-the-british-grand-prix-at-silverstone/ https://www.wired.co.uk/article/formula-one-liberty-media-chase-carey-bernie-ecclestone The last one is mostly from the point of view of branding, but it was still quite interesting to someone who has never thought about how outside broadcast works.
  2. Perhaps it will become a huge success and a cult filmmaking lens, but I think it's more likely to be the next Leica 50/1 Noctilux, where a few people use it to its full extent and many lust after it, then sell it when the realities of using such a lens hit and it ends up stopped down to f/1.4 or f/2.
  3. Otago

    Formula 1 cameras ?

    I agree that it could be a 2/3" camera ( and according to android lad it is! ). I haven't watched live TV recently and I noticed that it looked very different to what I was used to - why the change now if they were always capable of it ?
  4. Otago

    Formula 1 cameras ?

    I was watching the interview highlights on YouTube and was surprised to see high dynamic range and shallow depth of field in the drivers' interviews after the race. Admittedly the last time I watched F1 was 10 years ago ( they moved from free-to-view to Sky Sports and I didn't want a satellite dish just for F1! ) but it was definitely all clippy with deep depth of field back then. I can't find anything about the cameras and other broadcast stuff they use, anyone know ? The irony is that there's a cameraman in the background of this video but I can't tell what he's using because of the shallow depth of field :
  5. I think that lens is designed for a very particular type of Japanese camera collector / user - the same ones who buy those mint cameras on eBay and the Leica collectors' editions, and that I follow on Instagram. If you don't have much space, but you have a high disposable income and the urge to collect, then £8k camera lenses might be just the thing for you! I wouldn't be surprised to see a beautiful leather bag for that lens and the Z7 that comes with its own cover bag to stop the bag getting damaged - they're not expecting professional users to buy it 😁
  6. Perhaps 4K 10-bit 4:2:2 or 4:2:0 30p, with a codec that falls short of broadcast specs, and remove the timecode ? I would hope it's not 8-bit in 4K, but 8-bit with great colours is better for me than 10-bit that I have to work on. Great autofocus, variable ND, XLR inputs and good out-of-the-box colours would still make it a great camera for all the web-delivered content being shot by single shooters ( me for instance! ). An upgraded FS5 II with the Venice colours available in more than one profile, full frame and great autofocus - I'm guessing about £6k ? What else could they cut down to make the A7S3 though ? Just the variable ND and XLR inputs ? Perhaps the A7S3 will be 8-bit 4K and the FX6 will be 10-bit ?
  7. There does seem to be a lower-resolution RAW for stills, so perhaps the 1080p RAW is based on that ? Nikon do it by downsampling, according to this https://www.rawdigger.com/howtouse/nikon-small-raw-internals but it could be pixel binning too ( there's a rough binning sketch after this list ). I think this makes sense for stills - JPEG is only 8-bit so anything better is useful - but for video 10-bit and 12-bit are available, so this style of partial RAW might only be useful for getting around patents and keeping the data rates lower than uncompressed video, if it is similar to the baked-in sNEF.
  8. This is an interesting series from Steve Yedlin that goes into this, using IMAX and 35mm film and all sorts of digital cameras: http://yedlin.net/ResDemo/index.html Is debayer processing getting better, or will there still need to be 12K of Bayer to get 8K RGB ? After watching the resolution demo, and understanding some of it, I don't think it matters though.
  9. I'm an engineer so this isn't directly relevant, but it might be, based on what BasilikFilm has said above. I was never asked what school qualifications I had after I got into University, I was never asked what degree I had after I had a few years of work under my belt, and I haven't been asked what my first few years of employment were like now that I have 15 years under my belt. Each time, those credentials or qualifications made the step to the next level easier; it would've been possible without them, but I have seen others coming from a less traditional background take longer to get their foot on the first rung. A traditional path with good grades gives everyone in the hiring process a bit of comfort and can be the differentiator between 2 candidates, but this is only true at the early stages of a career - after 10 years of experience I care far more about personality. I work at a University now and there are 2 things I would check if I were going to do a masters. 1. See who the tutors are: if they are academics who have worked their way up via bachelors, masters and then a PhD I would be a little cautious, because there are some great people who have done that but also some who have learnt and excelled at academia rather than at their specialist subject. 2. Who are the other students: if part of what you want is to build a network, then students who don't stay in your location ( either because they don't want to or aren't allowed to ) or who aren't fluent in English may make it less useful to you. In our institution ~70% of masters students are Chinese and American, and they are there for the credentials and don't stick around afterwards.
  10. That looks like a pretty great camera for the price. Still no confirmation that the UHD Clop ( nice typo in their specs list ) isn't what they are using for RAW, so it looks like an S35 UHD CinemaDNG crop. Then again the Pocket 6K is massive compared to this, so I'm not entirely surprised!
  11. They also seem to have wanted the exposure to be the same; perhaps they saw it as a B camera for Alexa shooters, since they were comparing the F65 to the Alexa as well.
  12. Ah, I have misunderstood the terms then! I assumed "linear" meant the total values had been remapped to make better use of the gradations rather than following the absolute light levels - but that is what log is. I assumed this was done in all sensor ADCs as a matter of course ( that the data coming off was already log ), and then a further log conversion was done to fit it into a 10-bit container while prioritising mid tones, but I suppose that's not necessary if the sensor bit depth is close to the final bit depth. I forgot this wasn't a 24-bit instrumentation ADC where you have loads of data to throw away. Thanks!
  13. Just realised some of this is incorrect. There are only 1024 values for the whole dynamic range, rather than for each stop, so the numbers above should be about 50 values representing each stop rather than 100 - the concept is the same but the values were wrong.
  14. I think this is true if it is a linear 14-bit file but not if it is log 10-bit ( assuming each bit corresponds to an extra stop of dynamic range ). If you ETTR and put lots of information in the curve of the log, then the brightest values will be sharing bits. Depending on what curve your camera uses you could end up with, say, your 2 brightest stops being compressed into the space of one in the codec, and so only having 512 values representing each stop rather than 1024 - whether that is noticeable is another question! If you expose "correctly" then most of the values will fall in the linear part of the log curve and you'll get all 1024 values for each stop ( a toy log-curve sketch after this list puts rough numbers on this ). I think log curves are used because it is assumed that the very brightest and darkest information will be lower in information and importance, and because they work similarly to film so it was easier to switch over.
  15. I thought this might interest a few people; I find these things really interesting because they show the thinking and not just the final output. I think it's probably from the Sony Pictures hack a few years ago. EDIT: This was actually the link I meant to post, but the other one is interesting too https://wikileaks.org/sony/docs/05/docs/Camera/FeedbackV1 1_Next Generation Camera v6.pdf https://wikileaks.org/sony/docs/05/docs/Atsugi/To_Nakayama_san.pdf Apologies if this has been posted before, I couldn't find anything.
  16. It may also be to appease the algorithm in another way: if they have different types of content then they can be penalised if everyone isn't interested in everything. If you never watch a channel's vlogging content but always watch their proper content, then the algorithm just sees that as you not being as interested as you were before, and in a puppy-like attempt to please you it shows you content that you always watch all the way through, and may not show you their content again for a while ( or ever again ). How it works is conjecture on my part, based on reports from a few people who have talked about it - Linus Sebastian is pretty open about how it all works on the WAN Show - but also on knowing how "algorithm" and "machine learning" are used as pixie dust in the tech world to make pretty old concepts seem magical and worthy of investment. It could be solved if there weren't so many people trying to game the system; it's probably hardest to game a system based on your watch metrics rather than on what the content is purporting to be. Until an AI is smart enough to understand and categorise the content itself, it'll just continue to be a cat and mouse game.
  17. I've been looking at C300 footage recently and there is a big difference between the best and the worst - some people can make it shine and others could make an Alexa look like an iPhone video. It might be wise to wait a few weeks until there's a bit more footage out there rather than just the promos and test shots - the S1 vlog stuff I have been seeing has definitely gone up in quality as the camera has been used more and there's a larger sample of users publishing stuff.
  18. Absolutely, I think it was more in relation to the features on the latest cameras - here's amazing autofocus that you probably won't use because you have a great focus puller, and here's RAW just in case you forget 20 years of experience in how to expose or set white balance ( I think there might be better reasons to use RAW than that 😀 ) - while on the lower-end cameras you've got: set your white balance perfectly, nail your exposure and focus, and remember to make something interesting too! Are you seeing a trend to smaller sets or budgets where these things might start to come in useful ? Or is it technology companies driving it because they can't think of / are finding it difficult to engineer anything else to add ? I've been looking at C300 footage recently, scoping out the next upgrade, and it made me realise I don't need anything better than that - perhaps just some more shooting time and a holiday.
  19. I agree, it's present in both of them when it's in focus. I think the solution is to shoot 4K, which is downsampled from the 6K and shouldn't show any of those artefacts ( if it's been done well, and there seems to be no evidence it hasn't been ), and then downsample again to 1080 when you are editing. You could also try a softening filter as a form of OLPF if 1080 in camera is necessary. Can you tell from it being inconsistent between the horizontal and vertical what they are doing with line skipping vs binning ( see the line-skipping sketch after this list ) ? Do they have to use what the sensor provides, or can customers make their own version by changing the microcode on the chip ? Do you know if the sensor control unit is addressable in any way ?
  20. I'm still not convinced that these camera companies are crippling all their cameras. I think they do it relative to each other - i.e. the top of the line has more features than the bottom of the line and they trickle down - but I really think they are all coming up against real technical challenges in their top-of-the-line cameras rather than drip-feeding technology. I think heat and outdated processors/software are holding them back. I started to think this when I saw that the 5D IV can only have one of the tagging upgrade or C-Log installed at a time: either it is a massive piece of code ( which is pretty unlikely ), there is very little spare memory in that camera ( I think this is most likely ) or they really enjoy pissing off their customers ( which is what most people think! ). It's like they upgraded the imaging chip but didn't update the processors because they still worked for what their goals were. It's hard to know for sure because they give all the processors internal names - has anyone ever seen a true breakdown of where those processors sit in something like the ARM range ? The heat problem is pretty clear when you look at the die casting in something like a C300; it's really just a heat sink ( and alignment frame ) attached to a handle. The RED One and Frankie ( one of the first prototypes ) had an early autofocus system that worked by moving the lens 1 mm when the camera heated up 😉 I've never worked for a Japanese company, but the idea of kaizen, continuous improvement, seems very embedded in their engineering philosophy, so I think it's likely the culprit - but it's also the reason we have shutters rated at 500K MTBF, swings and roundabouts. It feels like the polar opposite of western, "innovative" engineering - move fast and break stuff. RED is probably a pretty good example of this. I have never had a Japanese camera fail on me, except when I've been far too tough on it, but my M6 needs regular servicing ( well, it feels regular - once every 5 years is pretty good actually ). The next generation - A7V, A7RVI, A7SIII, Nikon D6, D900?, Canon 1DXIII etc... - will probably have the next generation of processors in them ( I mean the real next generation in terms of benchmarks against the rest of the world rather than an increment in their own naming ), and if THEY don't do the stuff we are all wanting then I'll definitely be on the side of the crippling conspiracy theorists.
  21. For my personal photo stuff I am using 35 and 90 Summicrons, with the vast majority of shots on the 35 ( both on an M6, so full frame ); I find any more choice than that and I get a bit of decision paralysis. It also helps that the photos don't really matter - if I get the shot I get the shot, and if I don't, I don't have a client asking why. I used to use a 5D and zooms but found I wasn't really present for the event that I happened to be photographing because I was thinking about it too much; it just wasn't fun anymore - I had become a bad event photographer. My "professional" work is training videos that I am in, and I use a GH3 and 12-35, but probably soon a C300 with something like the 24-105 and 16-35. I like having lots of options when I'm doing videos that are a pain to reshoot, and my performance is far more important than the cinematography in this case. The zooms really help because I shoot in a workshop and often can't get the camera where I would like it, so zooming gets me the framing anyway. So I suppose my strategy is to limit my variables so I can't get too distracted - in the first case the lenses, and in the second limiting what is important to me. It seems to be quite a common theme in this thread to limit the variables, which is encouraging! Pretty sure I'd curl up in a little ball with a full set of cine primes with a new focal length every 2mm! I think it's interesting that the professionals who have to get the shot and can't control all the variables need technology to help them - RAW, higher dynamic range and, for some, autofocus. I have read a few things recently that joke about how the people who can afford cameras with RAW and massive dynamic range and amazing autofocus are often exactly the people that don't need them, because they have a crew that can control all the variables and a focus puller.
  22. I agree, but it may just do that in the H.264/5 ? Has there been a video camera that does a downsampled RAW yet ? The latest Nikon small NEF seems pretty good at preserving all the editability of RAW, but it's still not quite as good as the full-size file.
  23. The first person to pull that focussing command out of the E-mount and reliably map it to a focus motor will make a killing! It's strange that the clip-on motor looks to run the iris and zoom but not the focus - is it an internal ultrasonic motor or a hidden gear ? The latest Canon RF lenses and Panasonic L-mount lenses seem to have very little breathing, so it will be interesting to see if Sony follow suit with their new cine-style lenses.
  24. So the 4K RAW is full frame, downsampled from 6K ? I haven't seen anyone confirming that the RAW is full frame. Any word on the ISO 6, 25 etc. ? Does that give you the full dynamic range in video or does it just move middle grey for exposure ? If it really does ISO 6 with full dynamic range either side of middle grey then that would be huge for me - never need another ND filter again ( the arithmetic is sketched after this list ), and I could shoot video with M lenses and almost put it in my pocket.
  25. Otago

    Lenses

    I don't understand how the photosite size can change the DoF, unless it is massive and there are so few samples to judge sharpness from. I could sort of see how it could be true for film - a flat CCD or CMOS with microlenses vs a multi-layered sheet. I'm obviously out of my depth with the maths behind all of this - is there a book or paper that you know of to explain it all ? ( There's a rough depth-of-field sketch after this list. )
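A few rough sketches follow for the more technical posts above. All of them are illustrative guesses under stated assumptions, not descriptions of what any manufacturer actually ships.

For post 7 ( downsampled / small RAW ): one way a half-resolution RAW could be produced by pixel binning while keeping the Bayer mosaic intact. The RGGB layout and the function name are my assumptions.

```python
import numpy as np

def bin_bayer_2x(raw: np.ndarray) -> np.ndarray:
    """Average same-colour photosites of an RGGB mosaic 2x in each axis,
    returning a half-resolution mosaic that is still RGGB."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0, "need whole 4x4 Bayer super-cells"
    out = np.empty((h // 2, w // 2), dtype=raw.dtype)
    for dy in (0, 1):              # row offset inside the 2x2 Bayer cell
        for dx in (0, 1):          # column offset inside the 2x2 Bayer cell
            plane = raw[dy::2, dx::2].astype(np.float64)   # one colour plane
            binned = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                      plane[1::2, 0::2] + plane[1::2, 1::2]) / 4.0
            out[dy::2, dx::2] = binned.astype(raw.dtype)
    return out
```

Averaging four photosites into every output sample reduces noise, but any detail that fell between the binned sites is gone before debayering ever happens.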
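For posts 12-14 ( 10-bit log and values per stop ): a toy curve - a pure log2 ramp, not any manufacturer's real S-Log or V-Log - that encodes an assumed 14 stops of linear scene values into 1024 codes and counts how many codes each stop actually receives.

```python
import numpy as np

STOPS = 14          # assumed sensor dynamic range, for illustration only
CODES = 1024        # 10-bit container

# linear scene values spread across the full range
linear = 2.0 ** np.random.uniform(0.0, STOPS, size=1_000_000)

# toy pure-log encode: map [1, 2^14) onto integer codes [0, 1023]
encoded = np.round(np.log2(linear) / STOPS * (CODES - 1)).astype(int)

# which stop each sample falls in (0 = darkest, 13 = brightest)
stop = np.floor(np.log2(linear)).astype(int)

for s in range(STOPS):
    n = np.unique(encoded[stop == s]).size
    print(f"stop {s:2d}: {n} distinct codes")
```

A pure log curve hands every stop roughly 1024 / 14 ≈ 73 codes, which is in the region of the "around 50 values per stop" correction in post 13; a linear 10-bit encode would instead give the brightest stop half of all the codes and the bottom few stops almost none, and real camera curves sit somewhere between the two.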
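For post 19 ( line skipping vs binning ): the skipped counterpart to the binning sketch above, again assuming an RGGB mosaic rather than any particular camera's actual readout.

```python
import numpy as np

def skip_bayer_2x(raw: np.ndarray) -> np.ndarray:
    """Keep one 2x2 Bayer cell out of every 2x2 group of cells and discard
    the rest - half resolution, still RGGB, but nothing is averaged."""
    h, w = raw.shape
    assert h % 4 == 0 and w % 4 == 0, "need whole 4x4 Bayer super-cells"
    rows = np.arange(h).reshape(-1, 4)[:, :2].ravel()   # rows 0,1, 4,5, 8,9 ...
    cols = np.arange(w).reshape(-1, 4)[:, :2].ravel()   # cols 0,1, 4,5, 8,9 ...
    return raw[np.ix_(rows, cols)]
```

Skipping throws samples away rather than averaging them, so high-frequency detail folds back as aliasing and moiré; if a readout skips on one axis and bins on the other, the artefacts will differ horizontally and vertically, which is one possible reading of the inconsistency mentioned in the post.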
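For post 24 ( ISO 6 and ND filters ): the exposure arithmetic, with an ISO 100 baseline chosen purely for illustration.

```python
import math

BASE_ISO = 100   # assumed comparison point, not the camera's real native ISO
LOW_ISO = 6

stops = math.log2(BASE_ISO / LOW_ISO)   # exposure reduction in stops
density = 0.3 * stops                   # ND filters are rated at 0.3 per stop

print(f"ISO {LOW_ISO} vs ISO {BASE_ISO}: {stops:.2f} stops, roughly an ND {density:.1f} (ND16)")
```

Whether a low ISO mode like that keeps the full dynamic range either side of middle grey, or simply moves middle grey, is exactly the open question in the post - the arithmetic only covers the amount of light.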
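For post 25 ( photosite size and depth of field ): in the textbook model the photosite size never appears in the optics; it only enters through the circle of confusion you choose, i.e. how large a blur you are still willing to call sharp. A sketch using the standard thin-lens approximations - the focal length, aperture, distance and the "CoC = 2 x pixel pitch" choice are just example numbers.

```python
import math

def depth_of_field(focal_mm: float, f_number: float, subject_m: float, coc_mm: float):
    """Near/far limits of acceptable sharpness using the usual thin-lens
    approximations; only the circle of confusion (coc_mm) depends on the
    sensor/photosite and viewing assumptions."""
    s = subject_m * 1000.0                      # work in millimetres
    H = focal_mm ** 2 / (f_number * coc_mm)     # hyperfocal distance (approx.)
    near = H * s / (H + (s - focal_mm))
    far = math.inf if s >= H else H * s / (H - (s - focal_mm))
    return near / 1000.0, far / 1000.0          # back to metres

# example: 50mm at f/2, subject at 3m, CoC = 2 x a 6-micron photosite
pitch_mm = 0.006
print(depth_of_field(50, 2.0, 3.0, coc_mm=2 * pitch_mm))
# versus the classic full-frame print-viewing criterion of ~0.03mm
print(depth_of_field(50, 2.0, 3.0, coc_mm=0.03))
```

Pixel pitch doesn't change what the lens projects onto the sensor; it changes how closely the result can be inspected. Tie the acceptable blur to the pixel and the depth of field gets thinner on paper; tie it to a print viewed at normal distance ( the classic ~0.03 mm full-frame criterion ) and the photosite size drops out entirely.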