Michael S

Members • Posts: 36

  1. I was always a bit surprised that, while in the audio world everyone is familiar with the Nyquist theorem (you must sample at at least twice the highest frequency you want to reproduce, with a bit of headroom so you can build a well-behaved low-pass filter), in the world of video this doesn't seem so obvious. I blame in part the manufacturers, who loved to put a 1920x1080 Bayer sensor in a camera and then advertise it as an HD camera, while in the audio realm you wouldn't get away with cheating on sampling frequencies like that. Of course video needs much more bandwidth, so right from the start engineers tried every trick they could think of to reduce bandwidth requirements. So yes, an 8K Bayer sensor with an OLPF to avoid aliasing would provide an excellent source for a 4K video stream (a small sketch of this aliasing/folding effect follows the list below). If the recording is 8K, you can also manipulate the image without immediately running into scaling artefacts when preparing a clean, detailed 4K delivery. 8K delivery? Useful for specific cases where people have their faces pressed up against the screen, or where the image is supposed to fill your entire field of view as with VR, but otherwise not very useful.
  2. Transcend V30 or a similar class, 64 or 128 GB. Never failed me so far.
  3. Correct. I remember reading somewhere that BM uses FPGA technology while the big names use ASICs, which explains a lot about the differences in size and performance between these products. https://www.geeksforgeeks.org/fpga-vs-asic/
  4. The power efficiency of one chip can be very different from that of another due to node size and to the degree its processing is hardware accelerated. Maybe Sony uses a relatively large (= cheaper) node size, or the chip has less optimized hardware for the processing it needs to do. There might also be more sample variation with Sony, as cooling efficiency can depend a lot on how well the various parts are thermally connected during assembly. There is always some tolerance in solder joints and in the application of paste and lubricants during assembly, and if that tolerance turns out larger than the designers assumed, you get clear sample variation.
  5. I've shot long enough with small-sensor, 8-bit rec709 cameras that I got used to "just try to get the recording as close as possible to what you want the final image to look like". So don't drop saturation and contrast to the bottom during recording only to pull them up in post. That would only make sense if there were a lot of reserve DR in the highlights, and on a camera where you have to resort to such tricks, there never is. Never use auto WB (or auto exposure), as this can make the colours and exposure shift during the shot, which is very hard to correct afterwards. If they are off by a constant error instead, that's quite easy to fix as long as you are not too far off. When using these cameras with limited DR I was quite fanatical about setting the white balance manually, as these cameras tend to exaggerate differences in colour (and contrast). With mixed lighting, artificial light may look fiercely red while daylight looks fiercely blue at the same time.

     Now that I've got a camera with good DR (Lumix S5) I just set the WB to cloudy when outdoors and incandescent when indoors. Only when encountering weird artificial light (LED, fluorescent or sodium) might I set a custom white balance. Minor corrections might be required in post, but nine times out of ten it looks perfectly natural to me. In daylight, shadows are slightly blue and surfaces in direct sunlight are slightly yellow, just as in real life. Only when all image content sits in either shadow or sunlight and the shots are of considerable length might I adjust the WB to that specific light, much like how my eyes (or brain, actually) would adapt to the colour of the ambient light.

     It is not so much direct advice that I got, but the most important thing I took from a course on making videos is that people are more predictable than you might think, which is especially useful in event videography. People are creatures of habit and a lot of what we do is ritualized. This means you can always try to think ahead to what might happen in the coming 10 seconds/minutes/hours, ask yourself what the most interesting part of that is, and how to best visualize it. The result is that you will find yourself in the right spot at the right time more often, which is, I think, the most important quality of an event videographer. I remember once recording a wedding video for a friend (I'm not a professional videographer in any way) who also hired a photographer who was just starting out in wedding photography; the photographer was actually a colleague from work for whom this was a nice opportunity to build a portfolio and get started in the business. I noticed several times that the photographer had to sprint to the spot where I was already waiting, having realized he was in the wrong place for what was about to happen next. Always keep anticipating what might happen, decide what the interesting aspect of that is, and how best to visualize it. You will not always get it right, but the number of times you get "lucky" will increase.
  6. My guess is that YouTube under some circumstances ignores metadata and makes assumptions based on resolution, like "if resolution = SD then input colour matrix = rec601", and then either converts to rec709 behind the scenes, or doesn't, in which case the stream still gets interpreted as rec709 by the browser (a small sketch of how the two matrices disagree follows the list below). My experience with video software is that you can't trust any of it to actually follow the standards and respect all metadata. If you create a video in a "modern" format (HD, rec709) it is most likely to show up correctly; old formats (SD, interlaced) are often poorly supported. Maybe this link will put you on the right path: https://forum.videohelp.com/threads/329866-incorrect-collor-display-in-video-playback#post2045830
  7. Which is the premise of the first episode of the latest season of "Black Mirror", with "Streamberry" as the streaming service experimenting with exactly this. It is of course also very attractive for advertisers, who can create virtual influencers or advertisements tailored to each individual so that the marketing material pushes all the right buttons to get you to buy whatever they are selling. The next step in targeted advertising.
  8. My 2 cents: having a Netflix subscription and seeing some of what is on offer, a lot of it already feels as if the script was generated by an algorithm. There is a market for unadventurous "killing some time" entertainment, and AI can probably help churn out that kind of stuff. In other aspects of filmmaking, studios will probably see it as a tool to reduce cost and risk, while artists will see it as a tool to help them generate new ideas, and I think AI can do both. The struggle between studios wanting to play it safe and run a financially stable business, and "auteur cinema" wanting to leave a very particular, individual mark on its movies and willing to take risks in the process, is not new. That struggle will remain. The studios will always need to offer these people some room, because studios that just keep churning out uninspired drivel eventually become irrelevant.

     I think there are some cultural differences across the world in how authenticity is valued. The example that comes to mind is how in places like Japan or China there are theme parks containing reconstructions of European or American monuments; I have yet to see such a park in Europe, with life-size replicas of Inca or Buddhist monuments. Also typical for Japan are virtual artists and influencers like Hatsune Miku or the virtual influencer Imma. As Japan is an outlier in many metrics that doesn't say much about global trends, but I would not be surprised if future generations were quite used to virtual personas with complete virtual backstories and such. Of course, research is already being done on how to construct such a virtual influencer to sell more stuff to people: https://www.frontiersin.org/articles/10.3389/fcomm.2023.1205610/full Maybe there will come a time when all these "content creators" on YouTube peddling their wares are virtual. It might actually be an improvement.
  9. I'm sorry for the small (wo)man getting crushed by this, but when you work on a freelance basis, that's part of the deal and a foreseeable risk. You are essentially an entrepreneur, with the risks that come with it. If you find you can't strike and have effectively no leverage over your employer, you know you are in a vulnerable position. I'm not saying I'm fine with companies exploiting people by "outsourcing" all work from contracted employees to freelancers, but it is a trend that has been going on for years, and it is up to governments to re-balance risk and reward between employers and employees through legislation. Because if we let free-market forces reign supreme, we either end up with an industry that exploits people, or an industry that can't get the people it needs because everyone becomes aware of its practices and decides to pursue their ambitions elsewhere, where less risk is involved. Sometimes something of a crisis is needed to make people aware and improve things.
  10. Congratulations, that must have been an awful lot of work. I'm just an amateur who likes to tinker with all of this. I experimented a bit with the V-Log to V709 LUT, briefly with the "nicest" LUT provided by Panasonic, and also with an ACES workflow with IDTs and ODTs (in Vegas Pro) on footage from my S5(i), and of course they all give different results. From my experiments I suspect that accuracy was never a design goal for the V709 LUT. I believe Panasonic calls it a monitoring LUT, and I think that is what it is good at. It shows more detail in the shadows and highlights than I can make out from the footage when, e.g., using a rec709 view transform in an ACES workflow. It also strongly desaturates very saturated colours, allowing me to see detail in strongly saturated areas that gets completely saturation-clipped by the standard rec709 view transform. So what the V709 LUT is good for is giving a pretty good impression of all the detail and colour you're capturing, while providing a reasonably contrasty look with mid grey sitting in the right spot.

     The rec709 view transform (ODT) might be more accurate for colours captured within range, but it clips brightness and saturation hard for anything out of range, and the amount of information the camera can capture outside of rec709 is quite impressive (to me anyway, being used to consumer camcorders). For example, I shot some footage with blue LED lights, and while with the V709 LUT I could make out all kinds of captured detail (though the blue looked very desaturated), the standard rec709 transform showed strongly saturated, evenly blue surfaces lacking any detail (a toy sketch of this clipping-versus-roll-off behaviour follows the list below). But if you then start massaging the footage you'll find all the information is still there, and you can bring it into range if you want to. Or you create an HDR export which simply contains all that detail (which then can't be shown properly on an OLED screen, as it can't display colours that are both bright and highly saturated).

     I also find it interesting that some of the colour errors you describe are ones I recognize from all my years of using Panasonic cameras and camcorders. Especially the way they handle sky is something I've seen in all their cameras; it seems they have a kind of recipe they stick to religiously. So far I've settled on using the built-in V709 LUT as a monitoring LUT on camera and then an ACES workflow to grade colours. I haven't got the tools to verify accuracy, but it looks good enough to my eyes. Most of the time I don't even need to bother with corrections, but then again I don't have critical customers to please other than myself.
  11. For such occasions I actually just use the built-in microphone of my S5 with a self-made mini wind muff stuck on it with double-sided tape. For capturing ambient sound it is good enough and not much worse than an external omnidirectional microphone. I'm always a bit surprised by the quality of the microphones Panasonic puts in their cameras; this has also been true of their camcorders. You can tweak the frequency response to taste in post. Using a fairly low recording level, so the auto-limiter doesn't need to kick in, also helps. In my experience, when you want to capture a specific sound, like someone talking to the camera, a directional microphone on top of the camera doesn't help much, simply because it is poorly placed. But if someone else has positive experiences I'm curious to hear them as well.
  12. What a TV supports depends on the model (obviously) and on how the file gets ingested, i.e. whether it is read from an inserted thumb drive, served through some DLNA server, etc. All three things you mentioned can prevent a TV from playing back the footage. The more you stick to bog-standard formats, the more likely it is to play. So something like 8-bit 4:2:0 at 6 Mbit/s, in 1920x1080 or UHD, should work (a sample encode along these lines follows the list below). I think staying within DVD bit rates for HD TVs and Blu-ray bit rates for UHD TVs should be a safe bet (e.g. DVD has a maximum bit rate of 10 Mbit/s if I remember correctly, and discs in practice average about 6 Mbit/s).
  13. People share cards between cameras? I probably don't have as many cameras as Andrew, but given how primitive and limited camera software typically is, and how finicky data management can be, I don't swap cards between cameras unless I can reformat the card as the first operation in the new camera.
  14. All "social media" platforms are eventually always turned into marketing platforms by their owners, and as content-creators are a species which lives exclusively on such platforms I would like to call them outsourced-marketeers. Why employ your own staff to drive a taxi when you can run a platform like Uber and have all these individual drivers compete for rides? Why have your own staff to deliver packages when you can contract all these individuals to deliver packages and have them compete with each other? Why have your own marketing department when you can have all these content-creators compete with each other to peddle your message?
  15. Isn't this inherent to working with raw? When your blue channel clips but your green hasn't yet, then as the brightness of the sky increases further the colour will shift towards green, because green can still increase in value but blue can't (a toy illustration follows this list). Some clever highlight colour-recovery trickery is needed to restore the colour to its proper value, but that has to be done while debayering the footage. Lowering the exposure to avoid individual colour channels clipping would also work, but as no camera in this range has raw exposure tools, you can't accurately check for this. The best you can do is check for colour skew, but if this highlight restoration is applied in camera, then you still can't properly check for clipped channels. Maybe some raw recorders/monitors have raw exposure tools?
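
A few hedged sketches for the technical points above, all in Python. First, for post 1: a minimal illustration of Nyquist folding; the sample rate and tone frequencies are purely illustrative. Any frequency above half the sample rate reappears as a lower alias, which in the spatial domain is exactly the moiré/false detail an OLPF-less Bayer sensor produces.

    # Minimal sketch: a tone above fs/2 folds back to a lower alias frequency.
    def alias_frequency(f, fs):
        """Frequency a pure tone at f appears as after sampling at fs."""
        return abs(f - round(f / fs) * fs)

    fs = 48_000  # Hz, illustrative audio sample rate
    for f in (10_000, 20_000, 30_000, 40_000):
        # 30 kHz folds to 18 kHz, 40 kHz to 8 kHz; below fs/2 nothing changes.
        print(f"{f:>6} Hz sampled at {fs} Hz shows up at {alias_frequency(f, fs):>6.0f} Hz")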
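
For post 6: a small sketch of why a wrongly assumed colour matrix shifts colours. The same YCbCr triplet decodes to different RGB under the standard BT.601 and BT.709 luma coefficients (full-range, normalized values, gamma ignored; the pixel value is made up).

    # Decode one YCbCr pixel with both matrices; the RGB results differ,
    # which is the colour shift you see when metadata is ignored or guessed.
    def ycbcr_to_rgb(y, cb, cr, kr, kb):
        """Y in [0,1], Cb/Cr in [-0.5,0.5] -> RGB in [0,1], un-clipped."""
        kg = 1.0 - kr - kb
        r = y + 2.0 * (1.0 - kr) * cr
        b = y + 2.0 * (1.0 - kb) * cb
        g = (y - kr * r - kb * b) / kg
        return (r, g, b)

    pixel = (0.5, -0.10, 0.15)  # an arbitrary, slightly reddish mid-tone
    print("as BT.601:", ycbcr_to_rgb(*pixel, kr=0.299, kb=0.114))
    print("as BT.709:", ycbcr_to_rgb(*pixel, kr=0.2126, kb=0.0722))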
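
For post 10: this is not Panasonic's actual LUT math, only an assumed illustration of the behaviour described there. Hard clipping flattens an out-of-range saturated ramp, while a desaturating roll-off keeps a visible gradient.

    import numpy as np

    def hard_clip(rgb):
        # The hard-clipping behaviour of a plain rec709 view transform.
        return np.clip(rgb, 0.0, 1.0)

    def desaturating_rolloff(rgb, knee=0.8):
        # Above the knee, blend towards the pixel's own luma instead of clipping.
        luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # BT.709 weights
        over = np.maximum(rgb.max(axis=-1, keepdims=True) - knee, 0.0)
        t = over / (over + 1.0)  # 0 at the knee, approaching 1 far above it
        return np.clip(rgb * (1.0 - t) + luma[..., None] * t, 0.0, 1.0)

    # A ramp of ever-brighter saturated blue, like the LED footage above.
    ramp = np.stack([np.zeros(5), np.zeros(5), np.linspace(0.8, 2.0, 5)], axis=-1)
    print("clipped:   ", hard_clip(ramp)[:, 2])             # flat-lines at 1.0
    print("rolled off:", desaturating_rolloff(ramp)[:, 2])  # keeps a gradient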
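
For post 12: a sketch of a conservative "plays on almost anything" encode, driven from Python and assuming ffmpeg is on the PATH. The filenames and the 6 Mbit/s target are illustrative, not part of any standard.

    import subprocess

    # 8-bit 4:2:0 H.264 in an MP4 container at a modest bit rate.
    subprocess.run([
        "ffmpeg", "-i", "input.mov",
        "-c:v", "libx264",
        "-profile:v", "high", "-level:v", "4.1",  # widely supported H.264 tier
        "-pix_fmt", "yuv420p",                    # 8-bit 4:2:0
        "-b:v", "6M", "-maxrate", "8M", "-bufsize", "12M",
        "-c:a", "aac", "-b:a", "192k",
        "output.mp4",
    ], check=True)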
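
Finally, for post 15: a toy model of the channel-clipping colour shift, using made-up RGB values for a patch of blue sky. As exposure rises, the blue channel hits the clip point first while green keeps rising, so the green/blue ratio climbs and the sky drifts towards cyan/green; this is what highlight recovery at the debayer stage tries to undo.

    import numpy as np

    sky = np.array([0.20, 0.45, 0.70])  # illustrative linear R,G,B of blue sky

    for stops in (0, 1, 2):
        exposed = np.clip(sky * 2.0 ** stops, 0.0, 1.0)  # 1.0 = channel clip point
        _, g, b = exposed
        # G/B climbs from 0.64 to 1.00 as blue saturates but green keeps rising.
        print(f"+{stops} stop: RGB={exposed.round(2)}  G/B ratio = {g / b:.2f}")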