Everything posted by Michael S

  1. I was always a bit surprised that, while in the audio world everyone is familiar with the Nyquist theorem and knows you must sample at at least twice the highest frequency you want to reproduce, with a bit to spare so you can build a well-behaved low-pass filter, in the world of video this doesn't seem so obvious. I blame in part the manufacturers, who loved to put a 1920x1080 Bayer sensor in a camera and then advertise it as an HD camera; in the audio realm you wouldn't be able to cheat with sampling frequencies like that. Of course video needs much more bandwidth, so right from the start the engineers tried every trick they could think of to reduce bandwidth requirements. So yes, an 8K Bayer sensor with an OLPF to avoid aliasing would provide an excellent source for a 4K video stream. If the recording is 8K, you can also manipulate the image without immediately running into scaling artefacts when preparing a clean, detailed 4K delivery. 8K delivery? Useful for specific cases where people have their faces pressed up to the screen, or where the image is supposed to fill your entire field of view as with VR, but otherwise not very useful.
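     To make the aliasing concrete, here's a minimal Python/NumPy sketch (my own illustration, nothing camera-specific): a 7 kHz tone sampled at 48 kHz comes out at 7 kHz, but sampled at 10 kHz it folds back to 10 - 7 = 3 kHz.

        import numpy as np

        f_signal = 7000.0                                # tone frequency in Hz

        for f_s in (48000.0, 10000.0):                   # sample rates to try
            t = np.arange(0, 0.01, 1.0 / f_s)            # 10 ms of samples
            x = np.sin(2 * np.pi * f_signal * t)
            spectrum = np.abs(np.fft.rfft(x))            # magnitude spectrum
            freqs = np.fft.rfftfreq(len(x), 1.0 / f_s)
            peak = freqs[np.argmax(spectrum)]
            print(f"sampled at {f_s:6.0f} Hz -> strongest component ~{peak:.0f} Hz")

     Sampled at 48 kHz the peak lands at 7000 Hz; at 10 kHz it lands at 3000 Hz, which is the aliasing an OLPF is there to prevent.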
  2. Transcend V30 or similar class, 64 or 128 GB. Never failed me so far.
  3. Correct. I remember reading somewhere that BM uses FPGA technology while the big names use ASICs, which explains a lot about the size and performance differences between these products. https://www.geeksforgeeks.org/fpga-vs-asic/
  4. The power efficiency of one chip can be very different from that of another due to process node size and to what degree its processing is hardware accelerated. Maybe Sony uses a relatively large (= cheaper) node, or the chip has less optimized hardware for the processing it needs to do. There might also be more sample variation with Sony, as cooling efficiency can depend a lot on how well the various parts are thermally connected during assembly. There is always some margin of error in solder joints and in the application of thermal paste during assembly, and if this margin is larger than the designers assumed, you get clear sample variation.
  5. I've shot long enough with small-sensor, 8-bit rec709 cameras that I got used to "just try to get the recording as close as possible to what you want the final image to look like". So don't drop saturation and contrast to the bottom during recording only to pull them up in post; that would only make sense if there were a lot of reserve DR in the highlights, and on a camera where you have to resort to these tricks, there never is. Never use auto WB (or auto exposure), as this may make the colours and exposure shift during the shot, which is very hard to correct afterwards. A constant error is much easier to fix, as long as you are not too far off. With these limited-DR cameras I was quite fanatical about setting the white balance manually, as they tend to exaggerate differences in colour (and contrast): with mixed lighting, artificial light may look fiercely red while daylight looks fiercely blue at the same time.

     Now that I've got a camera with good DR (Lumix S5) I just set the WB to cloudy when outdoors and incandescent when indoors. Only when encountering weird artificial light (LED, fluorescent or sodium) might I set a custom white balance. Minor corrections might be required in post, but nine out of ten times it looks perfectly natural to me. In daylight, shadows are slightly blue and sides exposed to direct sunlight are slightly yellow, just as in real life. Only when all image content sits in either shadow or sunlight and the shots are of considerable length might I adjust the WB to that specific light, much like how my eyes (or brain, actually) would adjust to the colour of the ambient light.

     It is not so much direct advice that I got, but when doing a course on making videos, the most important thing I took from it is that people are more predictable than you might think, which is especially useful for event videography. People are creatures of habit and a lot of what we do is ritualized. This means you can always think ahead to what might happen in the coming 10 seconds/minutes/hours, ask yourself what the most interesting part of that is, and how best to visualize it. The result is that you will find yourself in the right spot at the right time more often, which is, I think, the most important quality of an event videographer.

     I remember once recording a wedding video for a friend (I'm not a professional videographer in any way) who also hired a photographer who was just starting out in weddings; actually a colleague from work for whom this was a nice opportunity to build a portfolio and get started in the business. Several times the photographer had to sprint to the spot where I was already waiting, having realized he was in the wrong place for what was about to happen next. Always keep anticipating what might happen, decide what the interesting aspect of that is, and how best to visualize it. You will not always get it right, but the number of times you get "lucky" will increase.
  6. My guess is that YouTube under some circumstances ignores metadata and makes assumptions based on resolution, like "if resolution = SD then input color matrix = rec601", and then either does a conversion to rec709 behind the scenes, or doesn't but the stream still gets interpreted as rec709 by the browser. My experience with video software is that you can't trust any of it to actually follow the standards and respect all metadata. If you create a video in a "modern" format (HD, rec709) it is most likely to show up correctly; old formats (SD, interlaced) are often poorly supported. Maybe this link will put you on the right path: https://forum.videohelp.com/threads/329866-incorrect-collor-display-in-video-playback#post2045830
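     The colour shift from a wrong matrix assumption is easy to reproduce. A minimal Python sketch using the standard BT.601 and BT.709 luma coefficients (the pixel values are just made-up examples):

        def ycbcr_to_rgb(y, cb, cr, matrix):
            # full-range conversion derived from Y = kr*R + kg*G + kb*B
            kr, kb = (0.299, 0.114) if matrix == "bt601" else (0.2126, 0.0722)
            r = y + 2 * (1 - kr) * cr
            b = y + 2 * (1 - kb) * cb
            g = (y - kr * r - kb * b) / (1 - kr - kb)
            return round(r, 3), round(g, 3), round(b, 3)

        pixel = (0.4, -0.2, 0.35)    # Y in 0..1, Cb/Cr in -0.5..0.5
        print("decoded as bt601:", ycbcr_to_rgb(*pixel, "bt601"))  # (0.891, 0.219, 0.046)
        print("decoded as bt709:", ycbcr_to_rgb(*pixel, "bt709"))  # (0.951, 0.274, 0.029)

     The same encoded pixel comes out as two visibly different reds, which is exactly the kind of shift you see when an encoder tags 601 and a player assumes 709 (or vice versa).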
  7. Which is the premise of the first episode of the latest season of "Black Mirror", with "Streamberry" as the streaming service experimenting with exactly this. This is of course also very attractive for advertisers creating virtual influencers, or advertisements specifically tailored to each individual so the marketing material pushes all the right buttons to get you to buy the thing they are trying to sell. The next step in targeted advertising.
  8. My 2 cents: having a Netflix subscription and seeing some of what is on offer, a lot of it already feels as if the script was generated by an algorithm. There is a market for unadventurous "killing some time" entertainment, and AI can probably help churn out that kind of stuff. In other aspects of filmmaking, studios will probably see it as a tool to reduce cost and risk, while artists will see it as a tool to help them generate new ideas, and I think AI can do both. The struggle between studios wanting to play it safe and run a financially stable business, and "auteur cinema" wanting to leave a very particular, individual mark and willing to take risks in the process, is not new, and it will remain. The studios will always need to offer these people some room, because studios that just keep churning out uninspired drivel eventually become irrelevant.

     I think there are some cultural differences across the world in how authenticity is valued. The example that comes to mind is how places like Japan or China have theme parks containing reconstructions of European or American monuments; I have yet to see a park in Europe containing life-size replicas of Inca or Buddhist monuments. Also typical for Japan are virtual artists and influencers like Hatsune Miku or Imma. As Japan is an outlier in many metrics, that doesn't say much about global trends, but I would not be surprised if future generations were quite used to virtual personas with complete virtual backstories and such. Of course, research is already being done on how to construct such a virtual influencer to sell more stuff to people. https://www.frontiersin.org/articles/10.3389/fcomm.2023.1205610/full Maybe there will be a time when all these "content creators" on YouTube peddling their wares are virtual. It might actually be an improvement.
  9. I'm sorry for the small (wo)man getting crushed by this, but when you work on a freelance basis, that's part of the deal and a foreseeable risk. You are essentially an entrepreneur, with the risks that come with it. If you find you can't strike and have effectively no leverage over your employer, you know you are in a vulnerable position. I'm not saying I'm fine with companies exploiting people by "outsourcing" all work from contracted employees to freelancers, but it is a trend that has been happening for years, and it is up to governments to re-balance risk and reward between employers and employees through legislation. Because if we let free market forces reign supreme, we either end up with an industry that exploits people, or an industry that can't get the people it needs because everyone becomes aware of its practices and decides to pursue their ambitions elsewhere, where less risk is involved. Sometimes something of a crisis is needed to make people aware and improve things.
  10. Congratulations, that must have been an awful lot of work. I'm just an amateur who likes to tinker with all of this. I experimented a bit with the VLOG to V709 LUT, briefly with the "nicest" LUT provided by Panasonic, and also with an ACES workflow with IDTs and ODTs (in Vegas Pro) on footage from my S5(i), and of course they all give different results.

     From my experiments I suspect that accuracy was never a design goal for the V709 LUT. I believe Panasonic calls it a monitoring LUT, and I think that is what it is good at. It shows more detail in the shadows and highlights than I can make out from the footage when e.g. using a rec709 view transform in an ACES workflow. It also strongly desaturates very saturated colours, letting me see detail in strongly saturated areas that gets completely saturation-clipped by the standard rec709 view transform. So what the V709 LUT is good for is giving a pretty good impression of all the detail and colour you're capturing, with a reasonably contrasty look and mid grey sitting in the right spot. The rec709 view transform (ODT) might be more accurate for colours captured within range, but it clips brightness and saturation hard for anything out of range, and the amount of information the camera can capture outside of rec709 is quite impressive (to me anyway, used as I was to consumer camcorders). For example, I've shot some footage with blue LED lights, and while through the V709 LUT I could make out all kinds of captured detail (with the blue looking very desaturated), the standard rec709 transform showed strongly saturated, even blue surfaces lacking any detail (a toy sketch of this difference follows below). But if you then start massaging the footage you'll find all the information is still there and you can bring it into range if you want to. Or you create an HDR export which simply contains all that detail (which then can't be shown properly on an OLED screen, as it can't show colours that are both bright and highly saturated).

     I also find it interesting that some of the colour errors you describe are ones I recognize from all my years using Panasonic cameras and camcorders. Especially the way it handles sky is something I've seen in all their cameras. It seems they have a kind of recipe they stick to religiously. So far I've settled on using the built-in V709 LUT as a monitoring LUT on camera and then an ACES workflow to grade. I haven't got the tools to verify accuracy, but it looks good enough to my eyes. Most of the time I don't even need to bother with corrections, but then again I don't have critical customers to please other than myself.
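     To illustrate that clipping-versus-desaturation difference, a toy NumPy sketch of my own construction (not Panasonic's actual V709 maths): hard-clipping out-of-range values flattens detail, while desaturating towards the pixel's own luminance keeps the detail visible at the cost of colour.

        import numpy as np

        # a gradient of "too blue" pixels; blue runs partly outside 0..1
        blue = np.linspace(0.8, 1.6, 5)
        rgb = np.stack([np.full_like(blue, 0.1),        # R
                        np.full_like(blue, 0.2),        # G
                        blue], axis=-1)                 # B

        clipped = np.clip(rgb, 0.0, 1.0)                # rec709-style hard clip

        def desaturate_into_range(px):
            # blend towards the pixel's luminance just enough to fit in range
            luma = px @ np.array([0.2126, 0.7152, 0.0722])
            over = px.max() - 1.0
            if over <= 0:
                return px
            t = over / (px.max() - luma)
            return px + t * (luma - px)

        rolled = np.array([desaturate_into_range(p) for p in rgb])
        print(clipped.round(3))   # the three brightest steps come out identical
        print(rolled.round(3))    # every step still distinct, just less saturated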
  11. For such occasions I actually just use the built-in microphone of my S5 with a self-made mini wind muff stuck on it with double-sided tape. For capturing ambient sound it is good enough, and not much worse than an external omnidirectional microphone. I'm always a bit surprised by the quality of the microphones Panasonic puts in their cameras; this has also been true for their camcorders. You can tweak the frequency response to taste in post. Using a fairly low recording level, so the auto-limiter doesn't need to kick in, also helps. In my experience, when you want to capture a specific sound like someone talking to camera, a directional microphone on top of the camera doesn't help much, simply because of the poor placement. But if someone else has positive experiences I'm curious to know as well.
  12. What a TV supports depends on the model (obviously) and how the file gets ingested, i.e. whether it is read from an inserted thumb drive, through some DLNA server, etc. All three things you mentioned can prevent a TV from playing back the footage. The more you stick to bog-standard formats, the more likely it is to play; something like 8-bit 4:2:0 at 6 Mb/s, in 1920x1080 or UHD, should work. I think following DVD standards for HD TVs and Blu-ray standards for UHD TVs should be a safe bet (DVD has a max bit rate of about 10 Mb/s if I remember correctly, and discs in practice average around 6 Mb/s).
  13. People share cards between cameras? I probably don't have as many cameras as Andrew, but given how primitive and limited camera software typically is, and how finicky data management can be, I don't swap cards between cameras unless I can reformat the card as the first operation in the new camera.
  14. All "social media" platforms are eventually always turned into marketing platforms by their owners, and as content-creators are a species which lives exclusively on such platforms I would like to call them outsourced-marketeers. Why employ your own staff to drive a taxi when you can run a platform like Uber and have all these individual drivers compete for rides? Why have your own staff to deliver packages when you can contract all these individuals to deliver packages and have them compete with each other? Why have your own marketing department when you can have all these content-creators compete with each other to peddle your message?
  15. Isn't this inherent to working with raw? When your blue channel clips but your green hasn't yet, then as the brightness of the sky increases further the colour will shift towards green, because green can still increase in value but blue can't. Some clever highlight colour recovery trickery is needed to restore the colour to its proper value, but that has to be done while debayering the footage. Lowering the exposure to avoid individual colour channels clipping would also work, but as no camera in this range has raw exposure tools, you can't accurately check for this. The best you can do is check for colour skew, but if this highlight restoration is applied in camera then you still can't check for clipped channels properly. Maybe some raw recorders/monitors have raw exposure tools?
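     A toy NumPy sketch of the mechanism (made-up linear sky values): once the blue channel hits the clip point, extra exposure only raises green, so the green/blue ratio, i.e. the hue, drifts.

        import numpy as np

        sky = np.array([0.20, 0.45, 0.80])               # linear R, G, B
        for stops in (0.0, 0.5, 1.0, 1.5):
            exposed = np.clip(sky * 2 ** stops, 0, 1.0)  # channels clip at 1.0
            r, g, b = exposed
            print(f"+{stops} stops -> RGB {exposed.round(2)}  G/B ratio {g / b:.2f}")

     The G/B ratio climbs from 0.56 towards 1.0 as blue saturates, and after clipping no grade can tell the original hue apart from a genuinely greener one; that information has to be guessed back by highlight recovery during debayering.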
  16. I would have sworn I'd seen the exposure meter on my S5 jump up and down when switching between vlog and the standard profile, but when I checked recently it stayed fixed at 0. I wonder if it is one of those things they silently fixed in a firmware update, or whether the fix got included accidentally because it's part of a common code base shared between all models. Anyway, it behaves properly now, at least on the S5.
  17. When people say you must overexpose vlog by 2 stops, they mean the exposure meter must read +2. The reason is that the exposure meter assumes a standard gamma curve, not a log curve. Since mid grey encoded with a log curve sits about two stops higher than it does with a normal gamma curve, the meter reads +2 for a correct exposure. In my opinion this is a user interface design error by Panasonic which only leads to confusion. So the proper answer is that you must expose correctly and therefore ignore the exposure meter when shooting in vlog. Use the spot meter (which switches to EI in vlog) or the waveform: the spot meter must read 0 EI, and mid grey on the waveform sits at 42% (a quick check of that figure follows below).
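     You can check that 42% figure against the published V-Log curve. A small Python sketch, with the constants as I remember them from Panasonic's V-Log/V-Gamut paper (verify against the white paper before relying on them):

        import math

        def vlog(x):
            """Panasonic V-Log: linear reflectance in, code value (0..1) out."""
            cut, b, c, d = 0.01, 0.00873, 0.241514, 0.598206
            if x < cut:
                return 5.6 * x + 0.125               # linear toe near black
            return c * math.log10(x + b) + d         # log segment

        print(f"18% grey -> {vlog(0.18):.1%}")       # ~42.3%, the waveform target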
  18. If I remember correctly, the S1H has a -1 setting for noise reduction in vlog which disables all noise reduction, so yes, zero should mean some noise reduction is still active. And all these codecs are optimized to discard visually imperceptible detail. I'm sure Panasonic tuned them to the best of their ability, but when you shoot a very flat profile, some detail might be classified as "visually imperceptible" and get discarded, even though after adding contrast it wouldn't be so imperceptible anymore, and now you've got these flat, featureless (chroma) surfaces in your video. I remember reading in some forum years ago that at some point a new Canon camera was giving such clean images that banding across an even gradient like a sky became very apparent in the 8-bit rec709 recording. The solution was to add a bit of gain while recording to hide it, and this became common practice with that camera.
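     That last trick is easy to demonstrate. A short NumPy sketch (toy numbers of my own): quantise a very gentle gradient to 8 bits, once clean and once with a little noise added first; the noise dithers the few hard steps into many in-between levels that average out smoothly.

        import numpy as np

        rng = np.random.default_rng(0)
        gradient = np.linspace(0.500, 0.515, 1920)   # a very gentle sky ramp

        clean = np.round(gradient * 255)             # straight 8-bit quantise
        noisy = np.round((gradient + rng.normal(0, 1 / 255, gradient.size)) * 255)

        print("distinct levels, clean :", np.unique(clean).size)  # ~4 visible bands
        print("distinct levels, noisy:", np.unique(noisy).size)   # many more levels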
  19. It looks to me as if a conversion to 8-bit is happening before the conversion to the new color space. I'm assuming you are recording in 10-bit? Recording vlog in 8-bit is no good. Maybe this is useful: https://business.panasonic.co.uk/professional-camera/varicam-eva1-color-grading-in-aces-davinci-resolve-tutorial
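     A quick NumPy sketch of why the order matters (toy numbers, not the actual maths of any grading tool): quantising to 8 bits before a contrast-expanding transform leaves far fewer distinct levels, which then get stretched into visible steps.

        import numpy as np

        ramp = np.linspace(0.10, 0.20, 1000)         # flat, log-like shadows

        def expand(x):                               # a simple contrast grade
            return np.clip((x - 0.10) * 6.0, 0.0, 1.0)

        def q8(x):                                   # quantise to 8 bits
            return np.round(x * 255) / 255

        print("8-bit after the grade :", np.unique(q8(expand(ramp))).size)  # ~154
        print("8-bit before the grade:", np.unique(expand(q8(ramp))).size)  # ~26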
  20. Just a guy reading the same rumour sites as everyone else, who then decides to make a video about it with a catchy thumbnail (expressive face, bold colours), suppressing a healthy dose of scepticism, hoping to gain some traffic. He's not the only one using that technique.
  21. Online stores have essentially become like auctions. A computer sees what all customers and competing online stores are doing in real time and adjusts its prices according to fancy algorithms that try to maximize profit. Most shops have the decency to only adjust prices during the night, but Amazon and decency... Physical stores are gradually switching over to digital price tags, and it is only a matter of time until these start adjusting automatically during the day as well, if local laws allow it. If only it were legal to set the price based on user profile data...
  22. 1) If the vibrations are high-frequency enough you will still get the jello effect, even with the tiny sensors. I would not expect any damage though, as there are no moving parts in the iPhone. 2) All IBIS units have hard limits on their range of movement, and I'm quite sure that if you yank them to their limits hard enough and often enough, something will break eventually. These systems were not designed for violent movements; there is no mechanical damping or absorption when they reach the limits of their range, just loud clicks. I would not risk my camera on it unless I were OK with breaking it anyway. Maybe you could get creative with something like this: https://www.proaim.be/collections/shock-absorbing-systems like this photographer: https://radpowerbikes.eu/blogs/the-scenic-route/a-rad-setup-for-ebiking-photographers If you happen to know any farmers, they are usually also quite good at creatively putting together mechanized contraptions.
  23. That, probably, and there are some strong cultural differences in how colours are appreciated. I sometimes get the impression that people from, say, north-west Europe are generally "afraid" of colour (in clothing or interior choices), while in other parts of the world people really appreciate bold colours. Just as there is no worldwide agreement on what "good" skin tones are. It must be pretty hard for a worldwide audiovisual equipment maker to please everyone.
  24. For our European visitors: in my country the Lumix S 24mm F1.8 is 900 euros in every shop (well, 899 actually), but on Amazon Italy it is 750 for reasons unknown to me, and has been for quite a few days now. I ordered one and it was an excellent copy. https://www.amazon.it/s?k=lumix+s+24mm&i=electronics
  25. Have you tried setting the sensitivity of focus peaking to +2 and turning on "black and white preview", where the EVF/LCD image is rendered in black and white and the peaking dots are shown in e.g. red? It may still not be enough, but that's about as good as it is going to get on the S5.