Posts posted by kye

  1. 20 hours ago, SRV1981 said:

    So how do we go about answering this when comparing the A7IV sensor (A7C II) versus the FX3 sensor (A7S III/ZV-E1)?

    I'd suggest testing this for yourself.

    Find the situation / situations where your existing setup doesn't have enough DR, and use a camera in stills mode to measure the brightest part of the image and the darkest part.  I'd suggest using zebras.

    For example, if you open up your lens to f/2.8, set a 1/10 shutter and ISO 3200, and this sets off the zebras, then adjust those parameters down until the highlights don't set off the zebras.  Do the same for the darkest part of the scene, and the difference between the two exposures gives you your number of stops.

    This might sound a bit fiddly, and it is, but it will give you a definite answer.  The alternative is spending thousands of dollars based on a review / vlog from some camera bro with mediocre technical knowledge and unknown commercial interests on YT.
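
    To make the stop-counting arithmetic concrete, here's a minimal sketch using the standard exposure value formula (the metered settings below are hypothetical examples, not measurements):

    ```python
    import math

    def ev(aperture: float, shutter: float, iso: float) -> float:
        """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
        return math.log2(aperture**2 / shutter) - math.log2(iso / 100)

    # Hypothetical readings: the settings that just stopped the zebras firing
    # on the brightest highlight and on the darkest shadow detail.
    highlight = ev(aperture=2.8, shutter=1/2000, iso=100)
    shadow = ev(aperture=2.8, shutter=1/10, iso=3200)

    print(f"Scene dynamic range is roughly {highlight - shadow:.1f} stops")
    ```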

  2. 14 minutes ago, SRV1981 said:

    Thanks - it seems the A7IV has more viewable DR than the FX3, due to the A7IV being 7K while the FX3 is 4K but has internal noise reduction that can't be bypassed.

    I think comparisons like this are a journey into very tricky territory.

    I'm not saying that DR doesn't matter, or that the A74 doesn't have more than the FX3, but I wonder what situations this will really make a meaningful difference in.

    12.9 vs 12.4 isn't that much of a difference, especially once you've got this much DR already.  I like to think about this in terms of "what shots can you get with one and not the other?".  I have spoken a lot about DR in my work, but that's because I tend to regularly be shooting people in front of a sunset, and I want the person's face properly exposed while also not clipping the sunset behind them.  I think I could get that shot with either of these cameras.

    Another shot is of people sitting by a fire, wanting to show the surroundings without clipping the fire (fully white fire looks odd, but you can clip little bits of it as they end up looking like fire highlights).  This isn't a shot I do, so I'm not sure if it needs more DR than the A73 offers.

    You also need the skill to pull the shadows out in post.  I don't mean just raising the shadows slider either - I mean having the skill to match a shot with (let's say) 12.9 stops of DR to the previous and next shots, which are likely to have dramatically less DR.  Imagine that the surrounding shots are of people sitting around the fire - they might have half the DR of that shot.  When you have the contrast / curves / LGG wheels / etc cranked one way for one shot, then cranked the other way for the next shot, will all the colours remain similar?  Will the contrast seem natural?

    Colour grading software is getting better, but did you know that the normal colour grading controls were built on algorithms chosen because they were simple for computers to execute (to reduce processor requirements) rather than because they looked good?  This often doesn't matter when you're only adjusting things a little bit, but large changes will ruthlessly reveal the weaknesses.  Things like the Resolve HDR palette were designed to be perceptually uniform, but are you grading in Resolve using that panel?

    You also have to consider the other implications of a choice between the FX3 and A74, for example.  Comparing DR is great, but it's an "everything else being equal" comparison, and of course everything else isn't equal.  The extra DR might come at the cost of something else that is more meaningful to what you're doing.

    Film-making, when you're doing anything other than shooting still-life images on a tripod indoors (where everything is controlled and you have all the time in the world), is an exercise in having less time and attention than you'd like, so you're always trying to work out what is most important and paying attention to that.  I don't know about you, but I routinely look at the footage I have shot and see all kinds of things that I should have done differently - but then I have to remind myself that I was walking down a staircase holding the camera with one hand and the railing with the other, trying to stay steady, using the ninja walk, keeping the right focus distance to my subject, keeping the nice framing, and also not hitting my head, etc.  I didn't have the mental capacity to do anything else, and if I had paid attention to that other thing then maybe I would have screwed up the shot entirely.

    You have to look at the total situation, review the requirements for your situation, and think through the implications of swapping from one camera to another and the effect on your viewers as they watch the final edit.  Everything other than how your viewers feel is a means to an end.

  3. This reminds me of when I did a Cisco training course and the instructor was giving us the background on who Cisco are and what they do etc. (For those unaware, Cisco make networking equipment)

    He went through the number of models, then the number of different variations each one could have, and the end result was that they offered millions, if not billions, of different "products" - much the same way that Ford offers a staggering number of variations of the Transit van.

    Anyway, his point was that for these to all be completely stable and reliable, Cisco had to be flawlessly organised and have incredible systems for everything, and was really a software company that happened to also make its own hardware rather than a hardware company.

    6 hours ago, Al Dolega said:

    Yea a recurring subscription to keep something I already have is BS.

    I think you've hit the nail on the head here.  If you have to pay a monthly fee to use the camera then it's not something you "already have".  It's something you have licensed...  it's equipment that you rent.

    In digital, highlights don't roll off - they clip, faster than a Ferrari without brakes.  The roll-off is applied as part of the processing that occurs after the image is read from the sensor.
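
    To illustrate that second sentence: the "roll-off" you see is just a tone curve applied downstream of the sensor.  Here's a minimal sketch of the idea (the knee position and curve shape are my own arbitrary choices, not any manufacturer's actual processing):

    ```python
    import numpy as np

    def soft_rolloff(x: np.ndarray, knee: float = 0.8) -> np.ndarray:
        """Pass values below the knee through untouched; ease values above
        it asymptotically toward 1.0 instead of hard-clipping them."""
        eased = knee + (1 - knee) * (1 - np.exp(-(x - knee) / (1 - knee)))
        return np.where(x < knee, x, eased)

    linear = np.linspace(0.0, 2.0, 9)   # sensor values, 1.0 = clip point
    print(np.round(soft_rolloff(linear), 3))
    ```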

    In terms of DR, you have to find a source that has tested both with the same methodology.  Luckily, CineD is one such source:

    https://www.cined.com/sony-a7-iv-lab-test-rolling-shutter-dynamic-range-and-latitude/

    https://www.cined.com/sony-a7s-iii-lab-test-does-it-live-up-to-the-hype/

     

  5. 2 hours ago, ac6000cw said:

    Yes, their raw image sensor data compression system.

    Do you know what intoPIX actually use inside the TicoRAW implementation?

    That's what I was asking.

    2 hours ago, ac6000cw said:

    Discrete cosine transform (DCT) isn't a compression algorithm, it's just a mathematical transform (of a block of pixels into spatial frequency coefficients) that's particularly useful for 'natural image' compression systems. It doesn't compress the data (in fact it increases it, as the output coefficients are usually higher bit depth to maintain precision), just transforms it into a different representation. That makes it much easier to discard/downgrade the coefficient data afterwards while minimising the impact on image quality - how clever you are at doing that (and the subsequent lossless data compression) is basically what determines the compression efficiency (data reduction versus perceived quality) of the image compressor.

    DCT is far from being the only game in town though - there are other front-end transforms in use as part of image compression systems. But I agree it's very popular (for very good reasons) in natural image compressors.

    My point was that the math that sits underneath the many branded names is mostly the same.  TicoRAW will be just another branded version of something that someone else wrote.

    My original post was wondering what it was...
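
    For anyone following along, the transform-then-discard split described above is easy to demonstrate.  A minimal sketch (the 8x8 block and the keep-one-corner mask are arbitrary choices for illustration):

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

    # The DCT itself doesn't compress anything - same number of values out as in.
    coeffs = dctn(block, norm="ortho")

    # The "compression" is deciding which coefficients to throw away;
    # here we keep only the low-frequency corner.
    mask = np.zeros_like(coeffs)
    mask[:4, :4] = 1.0
    approx = idctn(coeffs * mask, norm="ortho")

    print("max pixel error after discarding 75% of coefficients:",
          round(float(np.abs(block - approx).max()), 1))
    ```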

  6. I recently bought a new treadmill and was amazed that the high-end ones required a subscription model.

    I wouldn't be averse to buying updated firmware for the GX85, but having to buy and re-buy the damned thing over and over again is ridiculous.

    Yet another reason to snap out of the specifications trance that everyone seems to be in and focus on making better videos, rather than making the same videos with better spec'd equipment.

  7. 4 hours ago, Gesmi said:

    Thank you very much for your time and advice, Kye. I am going to study all the material you have provided in your post, to learn what I need to know. 😉

    I've just started in the field of video (I've always been more into photography and playing with RAW). I have also recently purchased a course on how to use DaVinci Resolve.

    It actually blew my mind when I saw the video of the G7. Even at base ISO and with a LUT applied (which seems quite aggressive), the video looks very clean, full of detail, free of the typical artifacts of grading 8-bit video, and with a dynamic range that I didn't expect to be possible to get with a Panasonic G85/GX85/G7, at least in standard mode.

    That's the reason I started this post, because I knew there are people here who have experience with the GX85 (like you). I was curious to know your opinions. In fact, thanks to this forum, I have learned how to activate Cinelike D on the GX85 (and that many people here longed for it to get the most out of the camera's dynamic range). However, after watching the G7 video, I think I have more than enough with the standard/natural modes.

    In a few days, I'll buy an ND filter and start shooting video and editing it.

    Thank you so much 🙂

    Welcome to doing video!  Stills photography is so much easier that the two are almost impossible to compare.  I also came from taking RAW stills into doing video, so you're on a difficult but well-worn path.

    Of all of my advice, this is the most important...  Don't believe anything you read - test as many things yourself as you can.

    I have done this over the years and I have routinely found that a third of what "everyone knows" is completely false, and another third on top of that is completely misunderstood.  It shouldn't surprise anyone, but the misconceptions and outright lies will often push you towards buying things you don't actually need.  They come from manufacturers pushing very one-sided or suspect information, which is then "re-interpreted" by consumers who are too stupid or lazy (or both) to question it.

    Best of luck!

  8. 4 hours ago, ac6000cw said:

    No, TicoRAW is the brand name for an implementation of a particular compression algorithm.

    People don't really know this, and the manufacturers sure don't want to tell people, but there are only a few compression algorithms in use for video.  Some manufacturers will tweak the algorithm to get better performance in some metrics, but it's still the same algorithm and still subject to the same limitations etc.

    This is from the page on the Discrete cosine transform: https://en.wikipedia.org/wiki/Discrete_cosine_transform

    Quote

    The DCT is the most widely used transformation technique in signal processing,[29] and by far the most widely used linear transform in data compression.[30] Uncompressed digital media as well as lossless compression have high memory and bandwidth requirements, which is significantly reduced by the DCT lossy compression technique,[7][8] capable of achieving data compression ratios from 8:1 to 14:1 for near-studio-quality,[7] up to 100:1 for acceptable-quality content.[8] DCT compression standards are used in digital media technologies, such as digital images, digital photos,[31][32] digital video,[18][33] streaming media,[34] digital television, streaming television, video on demand (VOD),[8] digital cinema,[22] high-definition video (HD video), and high-definition television (HDTV).[7][35]

    Quote

    H.261 - 1988 - First of a family of video coding standards. Used primarily in older video conferencing and video telephone products.

    Motion JPEG - 1992 - QuickTime, video editing, non-linear editing, digital cameras

    MPEG-1 Video - 1993 - Digital video distribution on CD or Internet video

    MPEG-2 Video (H.262) - 1995 - Storage and handling of digital images in broadcast applications, digital television, HDTV, cable, satellite, high-speed Internet, DVD video distribution

    DV - 1995 - Camcorders, digital cassettes

    H.263 (MPEG-4 Part 2) - 1996 - Video telephony over public switched telephone network (PSTN), H.320, Integrated Services Digital Network (ISDN)[61][62]

    Advanced Video Coding (AVC, H.264, MPEG-4) - 2003 - Popular HD video recording, compression and distribution format, Internet video, YouTube, Blu-ray Discs, HDTV broadcasts, web browsers, streaming television, mobile devices, consumer devices, Netflix,[42] video telephony, FaceTime[41]

    Theora - 2004 - Internet video, web browsers

    VC-1 - 2006 - Windows media, Blu-ray Discs

    Apple ProRes - 2007 - Professional video production.[50]

    VP9 - 2010 - A video codec developed by Google used in the WebM container format with HTML5.

    High Efficiency Video Coding (HEVC, H.265) - 2013 - Successor to the H.264 standard, having substantially improved compression capability

    Daala - 2013 - Research video format by Xiph.org

    AV1 - 2018 - An open source format based on VP10 (VP9's internal successor), Daala and Thor; used by content providers such as YouTube[64][65] and Netflix.[66][67]

     

  9. On 3/16/2024 at 8:02 PM, Walter H said:

    Great. I was wondering if there might be some upstream adjustments that I should keep track of based upon V-Log L vs V-Log, and I will watch the highlights particularly.

    It's worth noting that this highlight clipping behaviour is a limitation with all LUTs, not just in this case.

    Increasingly, cameras are supporting "super-whites", which are values above 100%.  My GX85 is one of them, and in my standard node tree I just have a curve with the white point dropped down a bit to bring values back into range prior to all the subsequent adjustments, so I don't forget or accidentally clip anything that was available in the file SOOC.
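
    If it helps to picture what that node is doing, it's nothing more exotic than this (the 109% ceiling is my assumption based on typical video super-white headroom - check what your camera actually records):

    ```python
    import numpy as np

    def recover_superwhites(img: np.ndarray, ceiling: float = 1.09) -> np.ndarray:
        """Linearly remap [0, ceiling] down to [0, 1] so values above nominal
        white survive the later adjustments instead of being clipped."""
        return np.clip(img / ceiling, 0.0, 1.0)
    ```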

  10. 9 hours ago, Danyyyel said:

    I thought the same, until I tested the 4.1K (pixel skipping) mode of my Z9 a bit, as I calculated that it was about 350 mb/s bitrate (I don't know if it is capital M or small m), compared to about 200 mb/s for the h.265. Another surprise was that it played better in Resolve than H.265. For me this has become my de facto standard for higher-end work.

    Assuming you're talking about h264 vs h265: h265 delivers about the same quality as h264 while only requiring around half the bitrate to do it.  The price is that h265 requires MUCH more processing power to encode and decode.
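
    If you want to see that trade-off on your own footage, a quick test along these lines works (the file names are placeholders, and the CRF pairing follows the commonly cited rule of thumb that x265 at CRF 28 looks roughly like x264 at CRF 23):

    ```python
    import subprocess

    # Encode the same clip with both codecs at roughly matched visual quality,
    # then compare file sizes and how each plays back in your NLE.
    for codec, crf, out in [("libx264", "23", "test_h264.mp4"),
                            ("libx265", "28", "test_h265.mp4")]:
        subprocess.run(["ffmpeg", "-y", "-i", "input.mov",
                        "-c:v", codec, "-crf", crf, "-an", out], check=True)
    ```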

  11. I'll chime in with my usual advice about colour grading.

    The simple fact is that colour grading plays a much more significant role in getting great-looking images than the camera does.  I'll also reinforce the points above that what you point the camera at is more important than anything else.

    When we look at something shot on ARRI or RED or the high-end Sony cameras, the reason it looks great is 70% the scene, 25% the colour grading and 5% the camera.  I know this is a bold statement, but I stand by it.

    Colour grading is the elephant in the room of all online discussions about cameras.  Everyone is looking at sample videos and going "wow, this looks great - I want to get that look without doing any colour grading or work in post at all!" and it's just not true.

    If you need more convincing, here are a few things to look at:

    I could go on (many will wholeheartedly agree on this!) but long story short...  the camera is a minor part of the journey that the image takes from finding / creating something cool to point it at, to all the work done in post.

    Also, learn to edit.  Well-edited bad-quality clips beat boring high-quality images every time.

  12. Just looking through some old threads and found this.

    Did anyone read it?

    Does anyone actually want to learn something?

    I wouldn't be surprised if not...  feel free to go back to blind capitalism and discussing what camera to buy next 😂😂😂

    Gary W bought the GX850 when this thread was going strong, and he's just posted his 2-month review of it.  He mostly takes stills so that's the focus of his review.

    TL;DR: he really likes it and finds it has an X-factor he can't explain that makes him really enjoy using it.  It's not going to replace his GX85, but it makes a great complement to it.

     

  14. 2 hours ago, BTM_Pix said:

    If you do go down the ATEM path then I think you might be interested in my new product as it addresses exactly this type of operation, amongst many other things.

    You'll have to overcome your modesty and post about it when it's released 🙂 

  15. 4 hours ago, John Matthews said:

    I use the ATEM Mini in a very simple way. Again, it's to limit the things that could go wrong, but it has been very reliable. I only use it to show my iPad screen (this can be very customizable). I just push the picture-in-picture button to "on" and there's my screen. I'm on the side of the screen so my students can still see me. There are macro features and tons of other things, but you cannot do simple things like play a media file (that's why I have the iPad). Also, a lot of stuff is simply too hard to get ready. On the iPad, it's a cake-walk. You could also use it in combination with OBS or something. I have done this but find the iPad solution works better and is more intuitive - again, no preparation needed, provided I've prepared the lesson on the iPad in the first place. My goal was simply to go to the live setup area, turn on a power strip or two and connect. I should say that I only do one-to-one classes, not groups, which further simplifies everything. The number of variables is astounding and I'm surprised I've had few problems (knock on wood). In April, I'll be getting fiber put in (not that I really need it), but it could be the biggest challenge that my system has faced - wish me luck.

    Best of luck!

    The ATEM models do seem to be great, especially considering the historic price of such things (which every review painstakingly explains to you at the start), but they do seem to be very "industry".  i.e., the answer to every question is to build a TV station and hire an army of people to do everything manually the same way it was done in the 1980s.  It's similar to when I ask a question of any pro who works in the studio system and their answer is to shoot my home videos the way a studio shoots a feature film.

    When I look at the ATEM device, I see a few buttons that do a single thing, and a few buttons that run some hard-coded macros.  It's silly that there are no buttons to run user-defined macros until you buy the $1000 Extreme model - the Mini has buttons I won't need, let alone the 500 buttons on the Extreme.  What you really need are a few buttons where you can store and then recall configurations, so you could have Config 1 as HDMI 1 (showing yourself), and Config 2 with custom PiP settings (showing your slides with you in PiP at a custom location etc), then you could swap back and forth without having to punch a bunch of buttons each time to re-create the settings, all while looking professional in front of your audience.

    The saving grace is that in the default setup you can swap from HDMI 1 (you) to HDMI 2 (the presentation) without looking, and then when you have to look down at the ATEM to find the stupid PiP button, at least they can't see you fumbling around because you aren't currently visible.

    It's typical BM though.  Like with the Speed Editor - the middle and largest section of the controller is for multi-cam only and can't be configured to do anything else.

    Anyway, rant over.

    Using an iPad to run the slides seems like an elegant solution.

    I find that when you're presenting to a group, you need two main things:

    • To see what you're sharing to the group
    • To see the people in the group

    For my day job I do this using MS Teams (the platform chosen by corporate) and a triple-monitor setup where I have the meeting on one screen and share another screen, so I can just drag any window I want onto that monitor and it's shared.  I share a lot of different things like Word documents, Excel sheets, web content, as well as Powerpoint, etc.  Oddly, when you're in Teams and hit the Present button in Powerpoint, the two try to be clever and talk to each other, and just screw everything up.

    My wife is just getting set up and her business hasn't gone live yet, but she'll be using Powerpoint and Zoom with a USB webcam, and our early tests show a similar situation with them trying to be clever but screwing it up.  I suspect the eventual setup will be a real camera and a laptop running Powerpoint going into the ATEM, then that plugged via USB into a second laptop that is running Zoom and controlling the call.  The goal is to be able to fit the whole setup in a suitcase for travel, including camera / tripod / switcher / lights / stands / diffusers / etc, so we can operate from anywhere.  The laptops would fly carry-on, of course.

  16. 4 hours ago, Eric Calabros said:

    New but not unexpected rumors:

    - The Nikon Z6III camera announcement may have something to do with the RED acquisition.

    - We should expect some kind of a Nikon/RED announcement at the 2024 NAB show next month (maybe the Z6III?).

    -----

    I still think it's too early for any RED+Nikon development, but Expeed7 doing R3D is the lowest-hanging fruit.

    Or perhaps even just the inclusion of their own RAW flavour, as that would have been much easier to copy straight across from the Z9?

  17. 5 hours ago, John Matthews said:

    I've done hundreds of hours now in English training. It's not my main gig, but it's good to do something a little different.

    I use a GH2 with the Olympus 17mm f/1.8, an M2 Mac Mini, a Behringer mixer, an ATEM Mini, an Audio-Technica AT875R, an iPad Pro 12.9", an Apple Pencil, 3 cheap lights with softboxes, and some decent headphones. I've tried many other things too, but so far, this works best for me. My cardinal rule is to have as much as possible corded and battery-free, to reduce my single points of failure. My only exception is the Apple Pencil, but I plug it in whenever I'm done. The key for me is to have as little setup time as possible - just flip one or two switches and I'm up and ready to go.

    I watched a bunch of ATEM Mini reviews yesterday and it looks like that would be a great setup to upgrade to once we're up and running.

    A few minor questions, if you happen to know:

    • If you enable picture-in-picture and have it so that the setting persists between inputs, and then switch to input 1 (the source of the PiP) does it give you input 1 with a PiP of input 1?  or does it disable the PiP for that angle?  and if it does disable it, does it re-enable it when you swap to another angle?
    • Does it delay the audio inputs to match the audio delay on the HDMI inputs?
    • Did it get upgraded to have a multi-screen view on the HDMI out?  or is it still limited to either Program or Preview?
    • Is there any way to run macros from the hardware device?  Having a custom PiP would be great but the PiP buttons erase any custom settings apparently.

    Unfortunately the best reviews were the ones done when it came out, and they are obviously only of the initial firmware.

  18. On 3/28/2020 at 8:15 AM, kye said:

    I've seen lots of live streams pop up in my YT feed, which normally contains a much higher percentage of edited content.

    I must admit I'm not a fan.  Watching someone read the chat in real-time, mixed in with "is this on", and then deliver unrehearsed, unfocused content just makes me angry that the person chose to waste my time instead of spending theirs editing.

    Professional live streams are like TV talk shows or the news, and require many hours of prep and a crew of multiple people.  Most streaming online is like watching the dailies from a set that never hit stop between takes.

    The minimum number of people required for a professional live stream is three:

    1. someone to control the tech
    2. someone to present the content
    3. someone to manage the chat and feed good questions to the presenter so we don't spend minutes at a time watching them read the chat

    They also require a huge amount of preparation - to the extent that the presenter can deliver the content with crystal clarity and almost zero mistakes.

    I re-read this thread out of curiosity and I have to say that I think things have changed a little over the last 4 years - streamers seem to have gotten a bit more organised and are less likely to be fumbling around and having lots of dead time.

    I have also noticed that there are new approaches to streaming too.  One I particularly enjoy is where the female streamers do a live review of the people who have been banned and then appealed, which is normally absolutely hilarious and also has no dead time, despite not requiring any preparation.

    Have people been doing a lot of live-streaming for clients etc?  or themselves?

    My wife is just starting an online training business, so there will be lots of streaming in my future...

  19. +1 for having a consistent methodology.  

    It might not be exactly what each person will get in their own setup, but it allows direct comparison between brands.

    The parallel is DR, which has so many nuances in testing that you can't compare measurements that come from different sources, making the data almost completely useless unless it's part of a large database all from the same source and methodology.

  20. 8 hours ago, Clark Nikolai said:

    At some point the cost of SSDs will be so low and data transfer rates so high (Thunderbolt 4, etc.) that it won't matter much in practical terms if raw files are not compressed. I could see new cameras saving internally in uncompressed raw (ProRes RAW or uncompressed BRAW), which would not be subject to the patent.

    At one time shooting HD was super expensive; now it's super cheap, and the same with 4K and other things. If the cost of media is within the budget of a production, and the transfer times for copying the cards are short enough for the shooting schedule, then it doesn't matter.

    This is true, but it is mostly offset by the increase in resolution.

    1080p was ~2MP and I remember the data rates and processing requirements being huge at the time.  Now we have 8K at ~36MP and the data rates and processing requirements are huge for today's computers.

    It's tempting to say that we won't go past 8K and computers will catch up, but people have been saying that since 1080p and pixel counts have gone up 18X since then.  The next shift will be into VR, where you need to shoot at far more than your delivery resolution, so I see no end in sight to the increases.
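
    The pixel-count arithmetic behind that 18X, for anyone who wants to check it:

    ```python
    # Frame sizes: 1080p vs 8K (the ~36MP figure above matches 8K DCI).
    resolutions = {"1080p": (1920, 1080), "8K UHD": (7680, 4320), "8K DCI": (8192, 4320)}

    for name, (w, h) in resolutions.items():
        print(f"{name}: {w * h / 1e6:.1f} MP")

    # 1080p is ~2.1 MP and 8K DCI is ~35.4 MP -> roughly an 18x jump per frame,
    # before you even account for higher bit depths and frame rates.
    ```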

  21. 3 minutes ago, JulioD said:

    At least all the armchair camera engineers go quiet for a while

    Check the other threads...  they're still alive and well.

  22. 18 hours ago, Al Dolega said:

    The XF405/605/705 lens (which as far as I can tell is the same across all three) is faster and goes both wider and longer than the XC10 lens, which is 27-273mm f2.8-5.6; the XF lens is 25-380mm f2.8-4.5. Plus the XC10 lens doesn't have a servo zoom.

    That awkward hood/loupe thing on the XC10 is also awful, what a convoluted way to try to make the camera cheaper to produce. The placement of the EVF on a typical photo body is fine, just needs to tilt and extend a bit like the EVF on the XF's.

    Would love to have the rotating grip like the XC10 though.

    Yeah, the ergonomics (and the grip especially) on the XC10 were second to none.
