

Posts posted by ac6000cw

  1. The situation with review content being affected by commercial interests is nothing new in the slightest...

    Many years ago I was asked if I was interested in doing a product review for a now long-gone print magazine. I tried to be as fair as possible when I wrote it, but thought the product had some usability/compatibility issues that needed fixing. The version of the review printed in the magazine had some of my criticisms watered down, I assume partly because advertising revenue related to the product was important to the finances of the magazine.

    I only did a couple of reviews for the magazine in the end, mostly because the amount they paid for them wasn't much in relation to the work involved in testing a product properly (and I wasn't interested in doing quicker, more superficial reviews).

  2. 21 minutes ago, fuzzynormal said:

    4k acquisition, 1080 delivery.

    A very good strategy for getting the best out of older 8-bit cameras - I think Kye does the same (with a GX85).

    At the end of the day, unless you tell them, anyone who watches the final movie won't know (or care, for the most part) what was used to make it - they'll judge it primarily on the story it tells, be it fictional or documentary, and on whether it's enjoyable/interesting/absorbing. If an old camera does the job and is enjoyable to work with, why not use it?

    Inanimate tools like cameras, lights and filters can help or hinder the art, but they can't create it...that needs imagination.

  3. 1 hour ago, fuzzynormal said:

    The issue is that it's not technically inferior by much when you get right down to it.  For what we're doing, the divide between 8bit video and 10bit video isn't such a big deal.  We're on manual lenses, and, honestly, the image from the EM10iii+our lenses is dang good. 

    Capturing in 4k or 1080?

    1 hour ago, fuzzynormal said:

    Anyone else holding onto (and often using) old inferior gear because it's good enough

    I still have and occasionally use a GX80.

    I recently had a serious clear-out of most of my older cameras - G6, GX800, E-M1 ii, LX7, LX100 - but did hesitate over the LX7, to the point where I did some video test shots. In the end the amount of jaggies/aliasing and the lack of 4k weighed more heavily than the small size and reasonably 'premium' feel, so it went...

  4. 19 minutes ago, gt3rs said:

    I would prefer a stereo mic like the Sennheiser 440 or Rode Stereo Mic as they are not as tall as the H3 so I can use it on a gimbal

    Never done any VR stuff, but a few thoughts/ideas:

    You could try using a stereo mic with a 120 degree (instead of the more usual 90 degree) angle between the capsules. That will give you a wider, more diffuse soundfield. Some stereo mics that use mid plus side capsules internally have a switchable 90/120 option (and/or an option to output the raw mid and side signals instead of L and R, enabling you to choose the virtual capsule angle in post using a mid-side to stereo conversion plugin - a rough sketch of the decode is at the end of this post).

    Stereo field manipulation/enhancement plug-ins might also be useful to widen the soundfield and create a more spacious effect, e.g. I've used iZotope Ozone Imager (free), Nugen Stereoizer (in both the full and cut-down 'Elements' versions) and various MeldaProduction plugins to do this and/or to create pseudo-stereo from mono sources.
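    Purely as an illustration (not taken from any particular plugin), here is a minimal Python sketch of a mid-side decode with an adjustable side gain - effectively what choosing a wider virtual capsule angle in post does. The function name, the 'width' parameter and the dummy data are assumptions for the example:

    ```python
    import numpy as np

    def ms_to_lr(mid: np.ndarray, side: np.ndarray, width: float = 1.0):
        """Decode a mid-side recording to left/right.

        'width' scales the side signal: 1.0 is a plain M/S decode,
        larger values give a wider, more diffuse stereo image
        (roughly equivalent to choosing a wider virtual capsule angle).
        """
        left = mid + width * side
        right = mid - width * side
        # Normalise if widening has pushed the peaks above full scale
        peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1e-9)
        if peak > 1.0:
            left, right = left / peak, right / peak
        return left, right

    # Example with dummy data (substitute real M and S channel samples):
    mid = np.random.randn(48000) * 0.1
    side = np.random.randn(48000) * 0.05
    L, R = ms_to_lr(mid, side, width=1.4)  # wider than the default decode
    ```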

  5. 17 hours ago, gt3rs said:

    I see two options, simply using a Stereo Mic or using a Zoom H3 VR and use the "straight out of the device" binaural (not sure how it can be binaural if the mics are so close together).

    It's possible to synthesize all sorts of microphone types/polar patterns, including things like perfectly co-incident stereo pair mics (which are impossible to physically build), from a B-format Ambisonics stream/recording - https://en.wikipedia.org/wiki/Ambisonics#Virtual_microphones - and also produce a binaural stream - https://en.wikipedia.org/wiki/Ambisonics#Decoding (there's a small sketch of the virtual-microphone maths at the end of this post).

    I think that post-processing flexibility is the real strength of using a Soundfield microphone for ambient sound recording.
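    For anyone curious, here is a minimal Python sketch of the first-order virtual-microphone idea from the Wikipedia link above. It assumes FuMa-style W/X/Y/Z channels (channel order and scaling conventions differ between FuMa and AmbiX, so treat this as a sketch rather than production code):

    ```python
    import numpy as np

    def virtual_mic(w, x, y, z, azimuth_deg=0.0, elevation_deg=0.0, p=0.5):
        """Synthesize a first-order virtual microphone from B-format audio.

        Assumes FuMa-style channels (W carries the -3 dB scaling).
        'p' sets the polar pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8.
        """
        az = np.radians(azimuth_deg)
        el = np.radians(elevation_deg)
        return (p * np.sqrt(2.0) * w +
                (1.0 - p) * (x * np.cos(az) * np.cos(el) +
                             y * np.sin(az) * np.cos(el) +
                             z * np.sin(el)))

    # A perfectly coincident 90-degree virtual cardioid pair, where
    # w, x, y, z are the B-format channel sample arrays:
    # left  = virtual_mic(w, x, y, z, azimuth_deg=+45, p=0.5)
    # right = virtual_mic(w, x, y, z, azimuth_deg=-45, p=0.5)
    ```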

  6. 7 hours ago, kye said:

    It's worth pointing out that the thermals might be the dominant factor here, considering that laptops will throttle down on their performance in order to manage overheating, so a few extra fans in the laptop can make more difference than which model of CPU / GPU you buy!

    I agree (as someone who has done all my editing for years on either gaming or workstation-class laptops). Generally, decent laptops in both of those categories should have cooling good enough for long-term high CPU and GPU loads, but you do need to choose carefully - and expect them to be noisy when they are working hard!

    Also be careful when comparing desktop and laptop GPUs - they can have the same or very similar model numbers, but the laptop version might have different performance specs (and more aggressive thermal management), e.g.:

    (info from Wikipedia) The desktop version of the Nvidia Quadro RTX 4000 (100-125 watt Thermal Design Power):

    [Wikipedia specification table for the desktop Quadro RTX 4000]

    ...versus the mobile version (60-80 watts TDP):

    [Wikipedia specification tables for the mobile Quadro RTX 4000]

  7. 3 hours ago, kye said:

    I find that simple answers come when you understand a topic fully.  If your answers to simple questions aren't simple answers then you don't understand things well enough.

    I very much agree with that - the opposite of someone filling an answer with the latest buzzwords, fashion statements and acronyms to gloss over the fact that they don't really understand the subject.

    I've been interested in science and engineering from quite a young age (the first book I ever bought was about electricity and magnetism). My favourite subject at secondary school was physics, helped a lot by an enthusiastic teacher who really understood the subject and could explain the fundamentals behind it very well. When I went on to study physics and electronics at university, some of the lecturers were, in marked contrast, terrible at explaining things in a simple fashion.

    One lecturer in particular kept pushing his own textbook, which was just as impenetrable as his lectures, so some of us students gave up and found a book that explained the basics of the subject much better, simply to get through the exam at the end of the year... (It's a subject I've become much more familiar with in my subsequent electronic design engineering career, so I now know it's mostly much less complicated than it seemed at the time.)

    "Simplicity is the essence of good design" I've found to be very true. If things start getting too complicated and messy in a project, it's usually a sign that I didn't set off in the right direction at the 'blank sheet of paper' stage.

  8. 1 hour ago, kye said:

    It seems to have one of those boxes that controls the lens and provides a rocker switch for zooming etc, maybe that narrows it down?  Maybe it's an ENG lens rather than a cinema lens?

    If it was a real working 16mm film camera, I don't think it would be an ENG (Electronic News Gathering) lens, as they are designed for professional portable video cameras (which in the late 1970s would have been triple vacuum tube image sensor cameras using a dichroic colour splitting prism, thus having a long flange-to-sensor optical path).

    But of course in the movie it's basically a prop, so doesn't have to be a working camera.

  9. 3 hours ago, John Matthews said:

    You can tell a difference if you pixel-peep to 200-600% and A/B the images. I've also noticed a slight color shift too. There are also a few artefacts, but it's still 5.9k downsampled to 1080p 10 bit - not bad. If you slap on a touch of sharpening, no normal viewer could tell the difference and you have something that would be 5% the size of a 6k image.

    I did a similar 1080p to 4k comparison with 10-bit 50p HEVC files from my OM-1 very recently (as a check after I'd updated the FW to the latest 1.6 version). 1080p is nominally 40Mbps and 4k is 150Mbps.

    With the 1080p upscaled to 4k (using the FFmpeg zscale 'spline36' filter - an example command is sketched at the end of this post), at normal viewing distance on a 55" native 4k OLED TV I could tell them apart (as I know what to look for), but it's not easy. A normal viewer wouldn't notice. I've done the same comparisons in the past with files from my G9, with the same result.

    As a consequence of this, most often I record in 1080p 10-bit and save 75% of the storage space, unless there is a reason to want maximum resolution/quality, e.g. it's an 'unrepeatable' major trip or event, or to allow for re-framing or extraction of 4k stills. For the last use case (which is handy for wildlife), I often record at 4k 24/25/30p 10-bit as that is sharper on the OM-1 than 4k 50/60p, but use a 1/100 shutter speed to reduce motion blur while still being reasonably usable as video footage.
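    For reference, this is roughly the sort of FFmpeg command involved - a minimal sketch only, with illustrative filenames and output-encoder settings (the zscale filter needs an FFmpeg build that includes libzimg), wrapped in Python here just for convenience:

    ```python
    import subprocess

    # Upscale a 1080p clip to UHD with FFmpeg's zscale filter using the
    # spline36 resampler. Filenames and the x264 settings are illustrative;
    # the near-lossless output is just for A/B comparison on a 4k display.
    cmd = [
        "ffmpeg", "-i", "clip_1080p50.mov",
        "-vf", "zscale=w=3840:h=2160:filter=spline36",
        "-c:v", "libx264", "-crf", "12", "-preset", "slow",
        "-c:a", "copy",
        "clip_upscaled_2160p.mp4",
    ]
    subprocess.run(cmd, check=True)
    ```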

  10. 1 hour ago, John Matthews said:

    I think the Intel Air i3 must have some sort of hardware for that though.

    AFAIK, the GPU in the 'Ice Lake' CPUs has hardware decoding for up to 10-bit 4:2:0 HEVC i.e. 'Main 10' profile, assuming you have a Retina MacBook Air - https://en.wikipedia.org/wiki/MacBook_Air_(Intel-based)#Retina_(2018–2020)

    Any higher HEVC profile e.g. 10-bit 4:2:2 has to use software decoding on those machines.

    1 hour ago, John Matthews said:

    It should also be noted that the video compares the full resolution 4k/6k to the 12mbps HEVC files and I'll be damned if I can tell the difference on my M1 iMac in full screen Youtube mode (4.5K). I've even played around with the 6mbps files and they didn't seem that degraded.

    Not surprised.

    I often upload stuff to YouTube as 4k 50p HEVC at 15-30 Mbps (using 'constant quality factor' encoding). I used to use higher bitrates, but decided it wasn't worth the extra storage space/upload time. HEVC is generally a very efficient (quality versus bitrate) compression codec.
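    As a rough illustration of one way to do that kind of 'constant quality' HEVC encode (this sketch uses libx265's CRF mode - the CRF value, audio settings and filenames are illustrative, and the resulting bitrate varies a lot with the content):

    ```python
    import subprocess

    # Constant-quality HEVC encode of an edited 4k 50p master for upload.
    # '-tag:v hvc1' helps some players recognise HEVC inside an MP4.
    cmd = [
        "ffmpeg", "-i", "edited_master_2160p50.mov",
        "-c:v", "libx265", "-crf", "23", "-preset", "medium",
        "-tag:v", "hvc1",
        "-c:a", "aac", "-b:a", "256k",
        "youtube_upload_2160p50.mp4",
    ]
    subprocess.run(cmd, check=True)
    ```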

  11. 5 minutes ago, John Matthews said:

    I think there's a fine line when it comes to making stabilization appear natural.

    Definitely - it's why I like the way Olympus/OMDS IBIS operates on the OM-1 & E-M1 iii. When I use proDAD Mercalli for stabilisation in post I usually choose the 'Glide Cam' option, which gives a floatier feel with lower warping artefacts (and less cropping) than the default 'Universal Cam' setting.

  12. 15 hours ago, John Matthews said:

    My feeling is that H quality would be better than the lowest available bitrate in the camera (20mbps H.264 8bit) as it's 12mbps H.265 AND 10bit. I think the compression is about the same but with more color information. I haven't really tried the M quality- the compression seems massive in that though. It would probably depend on the scene. The quality of the H Proxy really looks quite acceptable. The L quality was a huge step down though. I'll need to try the M setting tomorrow.

    Thanks for the info.

    15 hours ago, John Matthews said:

    I have the Panasonic 70-300mm lens and it has great Dual IS 2. I haven't really noticed an improvement to already one of the most stable setups on the market. In fact, I'd say the EOIS interferes with the OIS of the lens. Again, I need more testing.

    Robert May also commented in his video that it didn't seem to improve long telephoto stabilisation. I'm not that surprised, as OIS in the lens can be as effective as (or more effective than) IBIS and/or EIS at long focal lengths, at least for pitch and yaw.

    Now that the enhanced EIS is out in the wild, doubtless Panasonic will be getting lots of feedback, so there may be performance improvements to it in the future. For the vlogging situation, it sounds like it needs to be made a bit more 'floaty'.

  13. 5 hours ago, kye said:

    Panasonic actually has really nice colour - it's just not cool to say it out loud on the forums but I hear it from people in private quite often.

    I agree (having owned 10 of their hybrid cameras and 2 camcorders in the last 15 years).

    Panasonic has a long heritage in professional video (going back over 60 years) and it shows. I think the GH5 became a very popular camera for video because it was a good all-round, reliable, video tool in most situations, rather than excelling in any particular area at the expense of others or having a specific SOOC 'look'. 

    For a bit of fun, this is 9-year-old, basically SOOC, FHD 50p video from a Panasonic LX7 'enthusiast compact' with a small 10MP 1/1.7" sensor. There's some obvious aliasing/jaggies and I think the reds/oranges in particular are exaggerated. But for a camera launched in 2012 that fitted in the palm of one hand and weighed 270g I think it is reasonably decent (and could be improved in post). SOOC video from a G6, GX85 or G80 would leave it in the dust though, having much less aliasing and better balanced colours.

    [embedded video]

  14. 18 hours ago, kye said:

    This is a fundamental split in the camera communities - those who like the look of cinema and those that like the look of video.

    For my own stuff, I prefer it to be as 'faithful' as possible to the original scene, within the limits of the tools I've got and the amount of time I'm prepared to spend fiddling with it. I don't care what someone else wants to categorise that as, but I suspect it would come under your 'video' category.

    Personally the parts of the production process I find most interesting are being out-and-about recording the content, the basic editing (the clip choice, 'flow' and the cutting) and getting the best out of the ambient sound. Adjusting the image doesn't usually get much more advanced for me than brightness, contrast, saturation and sharpness, unless there's a clip that's particularly 'off' what I think it should look like in the lighting conditions at the time.

    That said, I'm perfectly happy respecting and enjoying other people's artistic choices, including abstract art (which is inherently non-realistic) - though nobody likes every piece of art they view...

  15. 8 hours ago, John Matthews said:

    One "issue" I'm seeing with proxies is it carries the 4 channels of audio if you're shooting MOV and you choose the highest and medium quality proxy. That's probably the behavior you'd want, but I'd really like to have some sort of option. The only way to get 2 channels (L and R) in the proxy is to record in the MP4 option or you go to the 720p proxy (H.264).

    In terms of file size, proxies are often less than 10% of the "keeper" file - huge savings. Now, I need to figure out the best way to use them in Final Cut Pro. I imagine you could put the bigger files on a much cheaper storage option and just edit with the proxies.

    Another cool thing is to apply the Realtime LUT to only proxies and leave the original as vanilla V-LOG, much like a RAW + JPEG photo workflow.

    I don't own the camera, but looking at the updated manual the 'H' proxy file HEVC bitrate looks quite reasonable at 16Mbps for 60p/50p and 12Mbps for 30p and below - roughly equivalent to the 8-bit FHD AVC mp4 bitrates of 28 and 20Mbps. It would be interesting to know how the quality of the 'H' proxies - when recording 'main' 4k - compares to 'main' FHD recordings at equivalent bitrates e.g. is it doing high quality down-sampling from the 4k stream for the proxies?

    [Table of proxy recording bitrates from the updated manual]

    Another short YT review of the latest firmware (from someone who is primarily a wildlife photographer using a Z9, but who also uses an S5iix mainly for video):

    [embedded video]

  16. 6 hours ago, mercer said:

    No offense to the photographer of that shot, but that looks like it could have been shot with any standard profile from a camcorder in auto mode.

    According to the EXIF data on Flickr, the still was taken with a Canon EOS 1100D (Rebel T3 in the US). I think it is a little over-saturated (the reds in particular), but is otherwise a reasonably colour accurate photo of 'Mayflower' in that time period. Here is another photo of it taken with a Canon EOS 40D by a different photographer - https://www.flickr.com/photos/125085162@N06/21175458306/

    There are a few colour stills on Flickr taken by Paul Cook at the same event with the 5D Mark III, which I think look nicer than the video (generally more vibrant, and with nicer skin tones) - https://www.flickr.com/photos/paulwilliamcook/albums/72157648396375737/

    But other than me not liking the colour grade (which is an artistic choice anyway) I agree the video looks good.

     

     

  17. 7 hours ago, mercer said:

    And I'd be remiss if I didn't post this video from a British filmmaker named Paul Cook. It was the video that made me buy my 5D3 and install ML Raw on it and never look back. After 7 years, I'm still chasing what he was able to capture in an afternoon...

     

    I really dislike the colour grade in that video - low-contrast and de-saturated with (to me) a grey-green cast. It sucks all the life out of the event it's recording... How do you judge the colour capability of the camera itself from that?

    (I know the heritage railway and the location it was filmed at quite well).

    Not my photo and a different location, but this is what the 'Mayflower' locomotive looked like in reality:

    [Photo of 'Mayflower' by emdjt42 on Flickr]

  18. 47 minutes ago, kye said:

    One thing I do see, however, is that it's possible to have too much stabilisation if the camera is moving in 3D space, because if you stabilise too hard then you get that gimbal effect where the camera is locked onto a direction but is floating around in space like a drone trying to hover.  If the stabilisation isn't quite as good and leaves a little shake in the frame then the floating blends in with the shaking and it just looks like hand-holding and doesn't look so odd.

    For general video, I prefer a small amount of 'float' in the stabilisation - it looks more natural.

    For handheld or monopod long telephoto video, I need all the stability I can get, so it's Panasonic 'IS Boost' or Oly/OMDS '+1' level stabilisation in that situation for me.

  19. 2 minutes ago, eatstoomuchjam said:

    Heck, there are already dozens of videos just about the Pyxis talking about how it's the best camera released in years or the most ho-hum camera announcement so far this year.  Bonus: almost none of the people with those videos has even seen a Pyxis in person, much less actually shot with one.

    ...and if they had picked one up and shot with it, they might like or dislike it for reasons unrelated to the video it can produce (like where the buttons are, how the menus work, battery life, monitoring options etc.).

  20. 11 minutes ago, SRV1981 said:

    I just realized - I did come to the wrong forum. I just assumed this was a wider community than it actually is. Many view themselves as filmmakers. I do not. I’m sure there’s other forums/places to chat just about equipment etc. and not ruffle the feathers of those who see themselves as “filmmakers”.

    Don't take forum discussions and comments quite so seriously - Kye discusses equipment too, and I think this forum has always been biased towards that side of things. The equipment is used as a set of creative tools, though, so the gear and the creativity are very much linked together. If you're mainly interested in the equipment, that's fine as far as I'm concerned - probably most of the threads on the forum are related to equipment and tools.

  21. 1 hour ago, kye said:

    Once you have enough DR to shoot the scenes you need to shoot, having more is actually a liability rather than a feature.

    I agree.

    1 hour ago, kye said:

    I definitely agree that one of the main challenges is taking a clip that was shot in LOG and has 10-14 stops of DR in it, and somehow stuffing that into Rec709 which has just over 5 stops of DR.  This obviously manifests in having to crush or severely compress various areas of the luminance range, but it also means that the source material can have colours that are dramatically more saturated than Rec709 can contain and you'll need to work out how to contain those too.

    I often apply an S-shaped contrast curve, compressing the highs downwards and the lows upwards while expanding the mid-range to increase contrast. Balancing the compression and expansion (and the inflection points) to get it to look nice is the tricky part, of course... (a rough sketch of this kind of curve is at the end of this post).

    And then there's the accuracy or otherwise of the Rec. 2020 to Rec. 709 colour conversion - I think every HLG to Rec. 709 conversion LUT I've tried has a different take on this...
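    For illustration, here is a minimal Python sketch of that kind of S-shaped curve applied to normalised luminance - endpoints pinned, mid-range contrast expanded around a pivot, and the highs and lows progressively rolled off. It's only a stand-in for the curve tools in an NLE or grading app, not a colour-managed transform, and the pivot/strength values are arbitrary:

    ```python
    import numpy as np

    def s_curve(x, pivot=0.45, strength=3.0):
        """Simple S-shaped contrast curve for values in [0, 1].

        Mid-range contrast around 'pivot' increases with 'strength',
        while the highs and lows are progressively compressed towards
        the ends of the range.
        """
        x = np.clip(x, 0.0, 1.0)
        lo = np.tanh((0.0 - pivot) * strength)
        hi = np.tanh((1.0 - pivot) * strength)
        return (np.tanh((x - pivot) * strength) - lo) / (hi - lo)

    # Example: apply the curve to a ramp of normalised luminance values
    luma = np.linspace(0.0, 1.0, 11)
    print(np.round(s_curve(luma), 3))
    ```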
