
Otago


Posts posted by Otago

  1. Does anyone have some Canon C300 Mark I footage they would be willing to share as the original XF file ? I want to see how my computer handles editing it ( I'm assuming it won't be any more taxing than GH3 footage ? I just want to make sure in case I need to budget for a new computer too! )

    I have had a google, but all the files I have found are now long gone, I think. 

  2. 19 minutes ago, Django said:

    HEIC is also very interesting for stills giving basically 10-bit 4:2:2 vs 8-bit 4:2:0 for JPGs. 

    It's also quite like MXF in video in that it can hold metadata; if it is implemented well then all the edits can stay in the file as metadata. Unfortunately it is heavily patented, so it might never take off as a universal file format. 
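    As a rough illustration of the metadata point ( the package choice and filename are my assumptions, not anything from this thread ), here is a minimal sketch of opening a HEIC file and reading the EXIF that travels inside the container, using the third-party pillow-heif package:

```python
# Minimal sketch, assuming `pip install pillow pillow-heif` and a local
# file called "photo.heic" (hypothetical filename).
from PIL import Image
import pillow_heif

pillow_heif.register_heif_opener()   # let Pillow open HEIC/HEIF containers

img = Image.open("photo.heic")
print(img.mode, img.size)            # pixel format and dimensions

exif = img.getexif()                 # metadata stored inside the file itself
for tag, value in exif.items():
    print(tag, value)
```

    Whether an editor actually writes its edit history back into the file as metadata is down to how each application implements the format.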

  3. Can you use hardware HEVC decoding to decode the 10-bit HEVC ? If so, then doing that on the latest AMD or Intel hardware would probably be worth the price difference alone. Are these 1U/2U servers or workgroup servers in tower cases ? The 1U servers I have been around were not something I would want outside a data centre because they are LOUD - really, really loud, with a range of harmonics to set your teeth on edge. 

    The other things like M.2 NVMe drives and space for PCIe cards are probably doable with a few riser and adapter cards though. 

  4. 13 minutes ago, Andrew Reid said:

    What makes it better than the S1H?

    The Canon badge?

    Or is it the $6000 price?

    Internal RAW and really good autofocus, which is good because, historically, this will be the spec for Canon cameras for the next 10 years ?

    I'm more excited that HEIC images are being adopted outside the Apple ecosystem. It can be half the size of a JPEG for the same quality ( or lossless ), and it can hold alternate display versions, so an sRGB file for general viewing and then a 10-bit flat file for editing, and that edit data can be kept in the file rather than as a sidecar or in a database. 

  5. Interesting replies! I note that there are compromises for some people, while others are content with what they are currently shooting with. 

    If you are satisfied, then how did you come to that conclusion ? I am curious about how you know that you don't need any more, because I know what I am happy with but I am finding it difficult to express how I came to that conclusion. 

  6. I've been looking at footage from various cameras, straight out of the camera if I can find it, and was curious about how other people evaluate cameras. 

    I can see differences between them: resolution, dynamic range and colours. I can rank them in order of which image looks better to me ( in a subjective way ), and I can rank the ergonomic and technical features that I want or need. I can push the footage to see where it breaks and how it looks after compressing for delivery. 

    What I'm curious about is: where is the point at which it is good enough and any more isn't necessary for you ?

    I think I know where my point is: somewhere around a C100, C300 or F3 for features and image quality. I want the built-in NDs and XLRs, and I need their robustness. The image quality holds up to the little grading that I do, and once it's compressed down, higher resolutions or bitrates don't jump out at me as being better - but then I'm not pushing anything hard and my delivery is heavily compressed.

    Where is your point of adequacy and why is it there ? Is there ever a point where you will say that you don't need any more ? 

  7. I'm envious of the concentration it takes for ENG and documentary shooters to think and edit in those conditions! 

    Might be of interest to some people : 

    https://www.live-production.tv/news/sports/gearhouse-broadcast-helps-take-sky-sports-f1-coverage-road.html

    https://www.svgeurope.org/blog/headlines/behind-the-scenes-sky-sports-coverage-of-the-british-grand-prix-at-silverstone/

    https://www.wired.co.uk/article/formula-one-liberty-media-chase-carey-bernie-ecclestone

    It's mostly from the point of view of branding but it was still quite interesting to someone who has never thought about how outside broadcast works. 

  8. 14 hours ago, Danyyyel said:

    It's extraordinary when people who are in video comment on a lens that by any measure beats every cinema lens and costs about a third of an Arri or Cooke. Before laughing, go and look at the results from DPReview - the thing is out of this world, it beats the Leica Noctilux, which costs a third more. In fact I can see this lens having a place in many video/cinema uses, more than photo.

    But thinking about the last part, it is a Z lens so it won't fit on nearly anything else. Some will perhaps just buy the Z6 as an accessory for it.

    Perhaps it will become a huge success and a cult film-making lens, but I think it's more likely to be the next Leica 50/1 Noctilux, where a few people use it to its full extent and many lust over it, then sell it when the realities of using that lens hit and it ends up stopped down to 1.4 or 2. 

  9. I agree that it could be a 2/3-inch camera ( and according to androidlad it is! ). I hadn't watched live TV recently, and I noticed that it looked very different to what I was used to - why the change now, if they were always capable of it ? 

  10. I was watching the interview highlights on YouTube and was surprised to see high dynamic range and shallow depth of field in the drivers' interviews after the race. Admittedly, the last time I watched F1 was 10 years ago ( they moved from free-to-view to Sky Sports and I didn't want a satellite dish just for F1! ), but it was definitely all clippy with deep depth of field back then. I can't find anything about the cameras and other broadcast stuff they use - anyone know ? The irony is that there's a cameraman in the background of this video, but I can't see him clearly because of the shallow depth of field : 

     

  11. I think that lens is designed for a very particular type of Japanese camera collector / user - the same ones who buy those mint cameras on eBay and the Leica collectors' editions, and whom I follow on Instagram. If you don't have much space, but you have a high disposable income and the urge to collect, then £8k camera lenses might be just the thing for you! 

    I wouldn't be surprised to see a beautiful leather bag for that lens and the Z7, one that comes with its own cover bag to stop the bag getting damaged - they're not expecting professional users to buy it ?

  12. Perhaps 4K 10-bit 4:2:2 or 4:2:0 30p, with a codec bitrate too low to meet broadcast specs and with the timecode removed ?

    I would hope it's not 8-bit in 4K, but 8-bit with great colours is better for me than 10-bit that I have to work on. Great autofocus, a variable ND, XLR inputs and good out-of-the-box colours would still make it a great camera for all the web-delivered content being shot by single shooters ( me, for instance! ). An upgraded FS5 II with the Venice colours available in more than one profile, full frame and great autofocus - I'm guessing about £6k ?

    What else could they cut down to make the A7S3, though ? Just the variable ND and XLR inputs ? Perhaps the A7S3 will be 8-bit 4K and the FX6 will be 10-bit ? 

  13. 1 hour ago, paulinventome said:

    Certainly UHD RAW has to be a crop, otherwise it isn't RAW, and the impression I get from all the specs, for all the movie modes, is that they are all crops; the HD, I presume, would be a crop of the centre.

    You can't get RAW after the camera has processed it, so there's no way it's full frame RAW.

    There does seem to be a low-resolution RAW for stills, so perhaps the 1080p RAW is based on that? Nikon do it by downsampling, according to this https://www.rawdigger.com/howtouse/nikon-small-raw-internals , but it could be pixel binning too. I think this makes sense for stills - JPEG is only 8-bit, so anything better is useful - but for video 10-bit and 12-bit are available, so this style of partial RAW might only be useful for getting around patents and keeping the data rates lower versus uncompressed video, if it is similar to the baked-in sNEF. 
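    For anyone curious about the difference, here is a tiny sketch ( my own illustration, not Nikon's actual pipeline, and the 8x8 "sensor" data is made up ) of 2x2 pixel binning versus simple decimation on a single raw channel:

```python
import numpy as np

# Fake 14-bit raw samples for one channel of an 8x8 sensor crop
# (hypothetical data, purely for illustration).
rng = np.random.default_rng(0)
full = rng.integers(0, 2**14, size=(8, 8)).astype(np.float64)

# 2x2 binning: average each 2x2 block; halves resolution and reduces noise.
binned = full.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Decimation: keep every second sample; halves resolution, noise unchanged.
decimated = full[::2, ::2]

print(binned.shape, decimated.shape)  # both (4, 4), different noise behaviour
```

    Either way you end up with a quarter of the samples; the difference shows up in noise and aliasing, which is presumably why a downsampled small RAW looks cleaner than a straight decimation would.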

     

  14. I'm an engineer, so this is not directly relevant, but it might be, based on what BasilikFilm has said above. I was never asked what school qualifications I had after I got into university, I was never asked what degree I had after I had a few years of work under my belt, and I haven't been asked what my first few years of employment were like now that I have 15 years under my belt. Each time, those credentials or qualifications made the step to the next level easier; it would've been possible without them, but I have seen others coming from a less traditional background take longer to get their foot on the first rung. A traditional path with good grades gives everyone in the hiring process a bit of comfort and can be the differentiator between two candidates, but this is only true at the early stages of a career - after 10 years of experience I care far more about personality. 

    I work at a university now, and there are two things I would check if I were going to do a master's.

    1. See who the tutors are. If they are academics who have worked their way up via a bachelor's, a master's and then a PhD, I would be a little cautious, because there are some great people who have done that, and also some people who have learnt and excelled at academia itself rather than their specialist subject.

    2. See who the other students are. If part of what you want is to build a network, then students who don't stay in your location ( either because they don't want to or aren't allowed to ) or who aren't fluent in English may not be as useful to you. In our institution ~70% of master's students are Chinese and American, and they are there for the credentials and don't stick around afterwards. 

  15. 4 hours ago, IronFilm said:


    Interesting! Sony was clearly closely comparing their F3 against the Arri Alexa, and doing their best to make the F3 compete directly head to head against ALEXA

    They also seem to have wanted the exposure to be the same; perhaps they also saw it as a B camera for Alexa shooters, as they were also comparing the F65 to the Alexa. 

  16. 52 minutes ago, EthanAlexander said:

    I appreciate your desire to dig into this further. In any linear recording, each stop brighter is actually getting twice as many bit values as the previous darker stop, starting with 1 for the darkest and 512 for the brightest in 10 bit. That means the brightest stop is actually the "top half" of all the bit values (so in 10 bit, 513-1024 would actually be reserved for just one stop of light.) If you want each stop of light to be represented by an equal number of values (for instance, ~100 as you are suggesting), it requires a log curve to map the input values to that. (How many and which values get used for the different stops is what makes the differences between log curves like S-Log2 and 3, V-Log, N-Log, etc.)

     

    They won't be sharing bits before compression whether it's linear or log, but to your point, you're right and this is a big reason why shooting log on a high compression camera is troublesome - the codec has to throw away information and that means that values that are close together will likely be compressed into one. This is why I said several times that highly compressed vs raw recording is a big factor. But if we're talking raw recording with lossless or no compression, or even ProRes HQ frankly, then a 10 or 12 bit file mapped with a log curve will look practically the same as a linear 14 bit recording. Either way you still have to decide where you want middle grey to land, which means you're deciding how many stops above and below you're going to get.

    Ah, I have misunderstood the terms then! I assumed "linear" meant that the total values had been remapped to make better use of the gradations rather than tracking absolute light levels - but that is what log is. I assumed this was done on all sensor ADCs as a matter of course ( the data coming off was already log ) and that a further log conversion was then done to fit it into a 10-bit container while prioritising the mid-tones, but I suppose that's not necessary if the sensor bit depth is close to the final bit depth. I forgot this isn't a 24-bit instrumentation ADC where you have loads of data to throw away.

    Thanks! 
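    To put rough numbers on the linear-versus-log point above, here is a small sketch ( my own illustration, with an assumed 10 stops of scene range, not any particular camera's curve ) of how many 10-bit code values each stop gets:

```python
BITS = 10
CODES = 2 ** BITS   # 1024 code values in a 10-bit file
STOPS = 10          # assumption: the encoding covers 10 stops of scene range

# Linear encoding: the code value is proportional to light, so each brighter
# stop spans twice as many code values as the stop below it.
linear_per_stop = [CODES // 2 ** (STOPS - s + 1) for s in range(1, STOPS + 1)]
print("linear, darkest to brightest:", linear_per_stop)  # [1, 2, 4, ..., 512]

# Idealised log encoding: the code value tracks log2(light), so every stop
# gets roughly the same share of the code values.
print("log, per stop:", CODES // STOPS)                  # ~102 for 10 stops
```

    The top stop of a linear encoding takes half of all the code values on its own, while a log curve spreads them out, which is why a log curve can give the mid-tones and shadows a usable number of values.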

  17. 1 hour ago, Otago said:

    I think this is true if it is a linear 14-bit file but not if it is log 10-bit ( assuming each bit corresponds to an extra stop of dynamic range ). If you ETTR and put lots of information in the curve of the log, then in the brightest values they will be sharing bits. Depending on what curve your camera uses you could end up with, say, your 2 brightest stops being compressed into one bit in the codec, and so only have 512 values representing each stop rather than 1024 - whether that is noticeable is another question! If you expose "correctly" then most of the values will fall in the linear part of the log curve and you'll get all 1024 values for each stop. 

    I think log curves are used because it is assumed that the very brightest and darkest information will be lower in information and importance, and work similarly to film so it was easier to switch over. 

    Just realised some of this is incorrect. There are only 1024 values for the whole dynamic range, rather than for each stop, so the numbers above should be 50 values representing each stop rather than 100 - the concept is the same but the values are / were wrong. 

  18. 7 minutes ago, EthanAlexander said:

    If a scene only has, for instance, 7 stops of dynamic range, then you could easily argue that ETTR will offer a better image because of the high SNR, which will lead to low noise and a cleaner image. We're fitting 7 stops into a 14-stop container so it's easy to make sure everything is captured. You could probably argue that when shooting raw or super low compression, anything less than the full dynamic range should be ETTR'd by the number of stops in the scene fewer than the maximum allowed by the log curve.

    I think this is true if it is a linear 14-bit file but not if it is log 10-bit ( assuming each bit corresponds to an extra stop of dynamic range ). If you ETTR and put lots of information in the curve of the log, then in the brightest values they will be sharing bits. Depending on what curve your camera uses you could end up with, say, your 2 brightest stops being compressed into one bit in the codec, and so only have 512 values representing each stop rather than 1024 - whether that is noticeable is another question! If you expose "correctly" then most of the values will fall in the linear part of the log curve and you'll get all 1024 values for each stop. 

    I think log curves are used because it is assumed that the very brightest and darkest information will be lower in information and importance, and work similarly to film so it was easier to switch over. 

  19. I thought this might interest a few people - I find these things really interesting because they show the thinking and not just the final output. 

    I think it's probably from the Sony Pictures hack a few years ago.

    EDIT: This was actually the link I meant to post, but the other one is interesting too 

    https://wikileaks.org/sony/docs/05/docs/Camera/FeedbackV1 1_Next Generation Camera v6.pdf

    https://wikileaks.org/sony/docs/05/docs/Atsugi/To_Nakayama_san.pdf

    Apologies if this has been posted before, I couldn't find anything. 

  20. 1 hour ago, TurboRat said:

    So is that why some famous youtubers also create a 2nd & 3rd account? Because YT suddenly stops recommending the vids of the ones gaining traction?

    It may also be to appease the algorithm in another way: if they have different types of content then they can be penalised if everyone isn't interested in everything. If you never watch a channel's vlogging content but always watch their proper content, then the algorithm just sees that as you not being as interested as you were before, and in a puppy-like attempt to please you it shows you content that you always watch all the way through, and may not show you their content again for a while ( or ever again ). How it works is conjecture on my part, based on reports from a few people who have talked about it - Linus Sebastian is pretty open about how it all works on the WAN Show - but also on knowing how "algorithm" and "machine learning" are used as pixie dust in the tech world to make pretty old concepts seem magical and worthy of investment. 

    It could be solved if there weren't so many people trying to game the system; it's probably hardest to game a system based on your watch metrics rather than on what the content is purporting to be. Until an AI is smart enough to understand and categorise the content itself, it'll just continue to be a cat-and-mouse game. 

  21. 2 hours ago, wildrym said:

    Thanks for the feedback!

    I'm still hesitating between the S1H and the P6K. I think I've watched all the S1H and S1 footage available on YouTube and Vimeo!

    My feeling is that, despite all the ergonomic issues, the P6K is in another league in terms of image quality. Some P6K footage looks like really high-end cinema cameras. 

    On the other hand, while I've seen really nice footage from the S1H, overall I still feel that video-y/digital look (too sharp, noise reduction artifacts, harsh highlight rolloff, weird colors in mixed lighting situations, "thin" color depth, especially on skin tones that often look plastic), in addition to noticeable rolling shutter and wobbles.

    My hope is that the image quality is hindered by the highly compressed H.265 4:2:0 codecs, so that ProRes RAW would improve it substantially.

    However, RAW will likely be limited to 10-bit (EVA-1 RAW is 10-bit, and if it were 12-bit they would have been bragging about it to boost sales!)

    I've got a question for EVA-1 users: does 10-bit ProRes RAW significantly improve image quality in comparison with the internal codecs or with 10-bit 4:2:2 ProRes HQ from an external recorder ?

    I've been looking at C300 footage recently and there is a big difference between the best and the worst - some people can make it shine and others could make an Alexa look like an iPhone video. It might be wise to wait a few weeks until there's a bit more footage out there than just the promos and test shots - the S1 vlog stuff I have been seeing has definitely gone up in quality as the camera has been used more and there's a larger sample of users publishing material. 
