kye

Members · 7,504 posts
Everything posted by kye

  1. I'm not really familiar with the difference between motion tracking and capture (am now - I just looked it up!) but obviously I'm at the edge of my knowledge 🙂 One thing that stands out about the difference would be that the accuracy of motion tracking the camera/lens/focus ring would have to be spectacularly higher than for motion capture of an object in the frame. Unless the sensors for these things were placed very close to the camera, which would limit movement considerably. I guess we'll see though - @BTM_Pix usually delivers so I wouldn't be surprised to see a working prototype in a few days, and footage a few days after that!
  2. Happy for them to have all the opinions in the world about aesthetics. It's questionable if you'd want a YT channel to have opinions about the usability of something like a cinema camera, when all they understand is making YT. Opinions are absolutely NOT welcome when it comes to the engineering. Sadly this puts them in the land of "alternative FACTS", which I like to just call "lies". There's a lot of the middle category from camera YouTubers, and a smattering of the latter, mostly concentrated in a minority. Fair enough that you were talking about stills. Do you find that with the higher resolution cameras (BMPCC 6K, R5 8K, etc) that sensor size makes more of a difference than it did in 1080p? If so, it could be a resolution thing? I've subscribed to his channel for years, but don't watch many videos. IIRC it's the channel for a retail store and has a relatively similar style to CVP - ie, based on specs and practicalities and designed to assist people in understanding equipment prior to purchase or rental. Yeah, like I said - lots of variables change and so there's no comparison that's even remotely straightforward.
  3. I disagree about them being camera related. The whole idea of a vND is that they act as a colour-neutral filter to simply let through a proportion of light. Any NEUTRAL density filter will attenuate all frequencies of light in equal proportion. We know they're not perfect and this results in colour shifts; however, all cameras 'see' in the same basic RGB colours. There can be very slight differences between which frequencies of light different manufacturers' sensors are sensitive to, however these will be very, very small differences, and if a filter is so crazily built that it behaves very differently on one camera than on another then you'd want to avoid it at basically all costs, as its colour response would be spectacularly non-neutral. Here's a plot comparing a range of digital cameras - not much difference: Source: https://www.researchgate.net/publication/342113086_Introducing_the_Dark_Sky_Unit_for_multi-spectral_measurement_of_the_night_sky_quality_with_commercial_digital_cameras
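The "equal attenuation at all wavelengths" idea is easy to sketch numerically. This is a minimal, hypothetical Python sketch: the transmission curves and RGB sensitivity weights below are made-up illustrative numbers, not measured data for any real filter or sensor.

```python
# Sketch: how a non-flat ND transmission curve produces a colour cast.
# All numbers are illustrative assumptions, not measured filter data.

def channel_attenuation(transmission, sensitivity):
    """Average transmission weighted by a channel's spectral sensitivity."""
    total = sum(t * s for t, s in zip(transmission, sensitivity))
    return total / sum(sensitivity)

# Five coarse wavelength bands from blue to red.
ideal_nd  = [0.25, 0.25, 0.25, 0.25, 0.25]   # truly neutral 2-stop ND
cheap_vnd = [0.21, 0.24, 0.25, 0.26, 0.28]   # passes more red -> warm cast

# Crude RGB sensitivity weights per band (assumed, roughly Bayer-like).
red   = [0.0, 0.1, 0.2, 0.8, 1.0]
green = [0.1, 0.6, 1.0, 0.6, 0.1]
blue  = [1.0, 0.8, 0.2, 0.1, 0.0]

for name, nd in [("ideal ND", ideal_nd), ("cheap vND", cheap_vnd)]:
    r = channel_attenuation(nd, red)
    g = channel_attenuation(nd, green)
    b = channel_attenuation(nd, blue)
    print(f"{name}: R/G = {r/g:.3f}, B/G = {b/g:.3f}")
# The flat curve gives ratios of exactly 1.0 (no cast); the tilted curve
# pushes R/G above 1 and B/G below 1 - a warm colour cast.
```

Note the cast depends almost entirely on the filter's curve; because real camera sensitivity curves differ only slightly (as the plot above shows), the same vND produces a similar cast on any camera.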
  4. Any chance of sharing the test, and original files? I'd be VERY keen to play with them and see how they compare!
  5. [Canon EOS R5C]
    For those people there are only two kinds of delays: ones that take less than the 0.2s that their ADHD will tolerate, and larger ones that mandate Instagram usage. My experience of living with ADHD teenagers is that normal life provides 100+ moments when social media can be utilised, therefore the delay in mode switching is basically invisible! Thinking more about their choice to make it a Dual-Boot OS reveals something very interesting. By going dual-boot they're essentially showing that it's easier to take an existing Cinema BIOS and port it to the R5C hardware than to integrate new features into the stills BIOS. This implies two things: (1) the hardware in the R5C must be relatively similar to a cinema camera, otherwise it would have been too difficult to port and they would have gone the other way (like they normally do); and (2) the divide between the stills and cinema divisions must be far smaller now, as the development would have required someone from the cinema department to work with the stills department to develop the hardware and get the software to work. Obviously it's not clear what the politics are, but it means a cultural change within the company to either let it happen (if you're the cinema department) or to order it to happen (if you're in charge and can make people do things they don't want to do).
  6. The big drive for resolution in Hollywood comes from the VFX teams, who require the resolution for getting clean keys but also for tracking purposes. I've heard VFX guys talk about sub-pixel accuracy being required for good trackers as by the time you use that information to composite in 3D elements, which could be quite far into the background, the errors can add up. Obviously each technical discipline wants to do its job as well as it can, and people do over-engineer things to get margins of safety, but I got the impression that sub-pixel level accuracy was in the realm of what was required for things to look natural. The human visual system and spatial capability is highly refined and not to be underestimated, but of course this will be context-dependent. If you were doing a background replacement on a hand-held shot of a closeup that involved mountains that were relatively in-focus then a tiny amount of rotation will cause a large offset in the background and it would be quite visible. Altering the background of a shot that has moderately shallow DoF and only involves the camera moving on a slider would be a far less critical task.
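The "tiny rotation, large background offset" point can be put in rough numbers. A hypothetical sketch (the lens, sensor, and image width values below are assumptions chosen for illustration, not from any real tracking pipeline):

```python
import math

def background_shift_px(rotation_deg, focal_mm, sensor_width_mm, image_width_px):
    """Horizontal pixel shift of a distant background caused by a small
    camera rotation (pan). For far-away objects the image-plane shift is
    approximately focal_length * tan(angle), converted from mm to pixels."""
    shift_mm = focal_mm * math.tan(math.radians(rotation_deg))
    px_per_mm = image_width_px / sensor_width_mm
    return shift_mm * px_per_mm

# Assumed example: 50mm lens, 36mm-wide sensor, 4K-wide image.
shift = background_shift_px(0.01, 50, 36, 4096)
print(f"{shift:.2f} px")  # roughly 1 px of drift from 1/100th of a degree
```

A hundredth of a degree of unresolved rotation already moves a distant composited element by about a pixel here, which is why trackers need sub-pixel accuracy before the result looks locked.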
  7. I've watched a number of tests comparing vNDs over the years and agree - the quality is limited regardless of budget. Also, cost isn't a predictor of performance either, with some mid-priced options out-performing higher priced options, often quite considerably.
  8. I'm not questioning if there is a look, I'm trying to work out what technical aspect might be causing it. Anything you can understand you can work with, and potentially accentuate or minimise for creative effect. One of the main challenges with trying to compare sensor sizes is that you can't change single variables - every change affects so many variables simultaneously that it's almost impossible to do any kind of aesthetic test. What I mean is that you can get two cameras with different sensor sizes and two lenses and set them up to have an identical FOV and POV. If you organise your test well you can also perfectly match one or two other variables at the same time, but you'll have probably 5+ other variables all different. I think that's why there are a lot of very well done technical comparisons that only focus on one or two variables (for example Steve Yedlin's excellent comparison of lens blur on different formats) but only very subjective comparisons of the overall 'look' between formats. I've read a lot of these accounts of subjective comparisons and tried to discern what technical aspects might be behind them, and I'm yet to actually find anything in particular that is fundamentally different, but there are still many factors I haven't ruled out, and I've definitely learned a lot of stuff along the way. One thing that I thought was especially interesting was the effect that background defocus had on 3D 'pop'. In my lens tests I have consistently found that even a small difference in background defocus (ie, shallower DoF) had a large impact on perceived depth. One test I did involved comparing lenses all at the same aperture and looking with one eye through a roll of cardboard so that I couldn't see the edges of the image, and comparing how much depth I perceived from the image. The interesting thing was that there was a surprisingly strong perceived difference in my test between a 55mm lens and a 58mm lens at the same aperture.
Obviously the 58mm lens had slightly shallower DoF, but it was so slight that I had to actually measure the bokeh balls in the background to confirm it was different, yet subjectively it made a much bigger difference than you'd imagine. My current thoughts are that it's likely to be a combination of a range of factors that accumulate to form the aesthetic impression. Of course, I still have much more to learn about it so this isn't a conclusion but rather more of a working theory. I do find that it's actually been a very good question to ponder, as it has led me down quite a number of paths of enquiry that have taught me a lot about the technical aspects of a digital imaging system as well as the aesthetic implications of various technical aspects of such. Like all things, the value is in the questioning... Interesting about the Panasonic vs Sony EOSHD Pro Colour sales, but not entirely surprising. Sony used to have terrible colour! My impression of Panasonic colour is that it was ok with the GH5 but has gotten nicer with subsequent releases. If the GH6 has Panasonic S1H level colours then that would be a huge draw-card for me in upgrading, I think. There's a rumour that something will happen in their live-stream this week, so I guess we'll see about that 🙂 I quite like Kai but with a few caveats. Firstly, he doesn't make the mistake of stepping out of his expertise (or doesn't do it like others anyway). He doesn't pretend to know the tech, doesn't try and explain it, and doesn't pretend that his testing is anything other than waving a camera around in a relatively haphazard way. I've delved into the world of professional DPs and seen their camera tests (which are very difficult to find BTW as they're normally on Vimeo with cryptic titles) which typically only test one variable at a time and aren't meant in any way to be a review, just exploration of the tech.
There is also a world of semi-professional DPs who do commercial work but also do YT and non-DP revenue streams (like Tom Antos, Matteo Bertoli, Humcrush Productions, etc) but even these guys often have elements of their testing processes that aren't controlled for. Of course they're not normally claiming that a test is pristine and not claiming to know the tech or try and explain it, but sometimes I'll watch a test and think "I wish they'd manually WB the cameras beforehand rather than just set the colour temperature" or similar things like that. This leaves the poor YouTubers with basically no hope. They don't actually shoot things professionally like the "hybrid" DPs so they can't talk about the concerns or working methods of real sets, and they also don't have the discipline that sets often involve like a DP requesting a particular T-stop and lighting ratio and doing things by the numbers. They also don't have the technical discipline to review things because they are in the business of producing, hosting, filming, editing, and selling advertising on a show, rather than in the tech itself. Some of these people understand that and keep within the lines, and others just don't, and make fools out of themselves in the process. Of course, the sad thing is how many people don't know enough to know the difference, which is why these people can have lots of followers and yet fumble most of their content. What are your thoughts on the OG BMPCC and BMMCC in this regard? I thought they were well known for their magic / mojo. If so, they are an interesting example because they're doing it despite their sensor size rather than because of it. They do raise an interesting element though, which @Andrew Reid touched on earlier when talking about how much of the lens image circle falls onto the sensor. 
Despite the BMPCC / BMMCC having smaller sensors they are often used with c-mount lenses that were designed for this sensor size, or potentially even smaller, and thus they are looking at almost all of the image circle from many lenses they are used with. I must admit that I find them less magical when used with glass designed for larger sensors like MFT or FF.
  9. I'm guessing that if you have the studio version of Resolve you could do it? I'd imagine that the files from the R5/R3 will be readable in Resolve and it can definitely export CinemaDNG files. You'll need the paid (Studio) version though as the free version has a resolution cap on it (4K perhaps?) and I'd assume you're dealing with larger resolutions from those cameras.
  10. Why am I now having visions of a BMPCC6KPro with a 15mm F8.0 Body Cap Lens 🙂 🙂 The other combination, well, I'll leave some mystery to that one until I do it.
  11. I tried the 3x bitrate on my 700D and found it did very little to improve the standard compressed file, as both were low resolution (the 700D had ~1.7K) and were still over-sharpened. YMMV of course and the EOS-M is a different camera too. This is a great feature as 2.8K is a sweet spot in resolution, I think. It's hard to make a true 1920 sensor look sharp because it's not 1920 4:4:4, so the debayering adds a blur, but 2.5-2.8K adds just the right amount of oversampling without having 6K or 8K ridiculous-o-vision and the file sizes to match.
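The "not 1920 4:4:4" point can be roughed out with numbers. A commonly cited rule of thumb is that Bayer demosaicing yields an effective luma resolution of about 0.7x the photosite count; the exact factor varies by sensor and demosaic algorithm, so treat 0.7 as an assumption for illustration only:

```python
# Rough sketch of why ~2.7-2.8K capture suits a 1080p delivery.
# BAYER_EFFICIENCY = 0.7 is an assumed rule-of-thumb for demosaic losses,
# not a measured value for any specific camera.
BAYER_EFFICIENCY = 0.7

def effective_resolution(capture_width):
    """Approximate real (post-debayer) horizontal detail in pixels."""
    return capture_width * BAYER_EFFICIENCY

for width in (1920, 2704, 3840):
    print(f"{width} capture -> ~{effective_resolution(width):.0f} effective")
# 1920 -> ~1344  (softer than a true 1080p frame)
# 2704 -> ~1893  (close to a full 1920 of real detail)
# 3840 -> ~2688  (comfortably oversampled for 1080p)
```

Under that assumption, a ~2.7K Bayer capture lands right around true 1920 of resolved detail, which matches the "2.5-2.8K sweet spot" intuition without the file sizes of 6K or 8K.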
  12. OMFG! Famous for good reason!! I've been trying to work out what might cause different sensor sizes to have a different look and one aspect that I haven't ruled out is to do with the percentage of the sensor that is able to absorb light (ie, not the area between the pixels). This plays into how the threshold between in-focus and out-of-focus would be rendered - the "roll-off". Now that manufacturers have managed to make the gaps between the pixels smaller, have you noticed if this changes the look of the format? An upgrade at last! In the case of the GH5 (which I own and appreciate) it should be noted that it has quite average colour science. Compared to the superior colour science of the OG BMPCC or BMMCC the GH5 pales, and so do the images, to me at least, despite these having smaller sensors than the GH5. The problem with assessing the aesthetics of such things is that it's very difficult to create a direct comparison where everything else is equal. A slight difference in brightness or contrast or saturation or WB or DoF can overrule some of these more subtle aspects like what we're talking about here. I would suggest that almost all camera reviewers understand less than half of what is going on inside the camera. Most less than a third, and probably the majority approaching 10%. Chris and Jordan are particularly bad because it's obvious that along with the tech, they also don't understand how people significantly different to them use cameras, who are probably responsible for the majority of images created.
  13. Ah yes, I had forgotten you'd shared info early on the first page. I guess I got distracted by the subsequent posts where it seemed nothing had been established except an increasing urgency!
  14. @stefanocps I've never used a modern GoPro, or a modern point-and-shoot either. My experience with threads like this is that either someone who has personal experience sees the thread and joins and gives useful advice, or they don't and the thread goes nowhere. People like me and @webrunner5 keep the thread alive by replying, giving you general advice to google and checking your logic, thus making it more likely someone with specific experience will see it and reply. Lots of threads on forums get zero replies and sink like a stone, especially on larger forums where something scrolls off the front page in a matter of hours. Considering no-one else has replied, I'd say you're on your own!
  15. I went down the ML RAW path for a bit with my 700D. The files were fine but it was unreliable and a bit finicky in some ways. The 700D build was (at the time, not sure now) one of the most developed, but the RAW was still experimental at that time and I was using the 10-bit compressed variant, so multiple experimental features within an experimental build. @mercer says that the 5D build is rock solid and he never has any issues with his, which I think he shoots with on a weekly basis (or more). YouTuber Zeek has a channel which features a heap of EOS-M ML RAW material, so if you're looking for advice then his channel probably contains the stable builds and various tips for it.
  16. I already shoot on my lightest gear, and the person who normally uses a cinema camera would end up being the winner anyway! Now, if it was a thread dedicated to selling equipment we don't actually use, then that might be a different story 🙂
  17. The people in the comments section always know more than the people in the video, even if they don't lol. I was just going to post a reply saying "where has this been all my life??" but I realised I watched one recently that was hilarious.... they try sparkling water! In a rare moment where every star in existence aligned, I sent that to my daughter while we were out at breakfast with a few of her friends and she played it for the whole table on her phone and everyone watched, and for 5 whole minutes no-one's ADHD interrupted. I wouldn't have believed it was possible if I didn't actually witness it!
  18. [Canon EOS R5C]
    I've re-read the last few pages of this thread and I'm pretty sure there's almost no actual communication happening, just people replying to something that they think the other person said (but often wasn't said and probably wasn't intended), forgetting what was said in previous posts, taking individual statements out of context, etc. I've also realised that this is the case with every thread about a new Canon camera, although it doesn't seem to be the case with cameras from almost any other brand. Something to reflect on. @Django @Emanuel @Video Hummus While I'm in a reflective mood I'll also say that I think that film-making is too complex a topic to discuss effectively, online at least, due to the sheer amount of interrelationships that happen (almost everything in film-making relates to almost everything else in some non-trivial way), and also due to the depth of technical knowledge required to understand what is going on. I've read entire exchanges on things like "can I downres 8-bit to get a 10-bit image" where every post contained a factual error. Concision is a real factor here too. Anyway, I hope that everyone is putting more effort into what they do with their equipment than they spend talking about it.
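For what it's worth, the "can I downres 8-bit to get a 10-bit image" argument can at least be sketched correctly: averaging a 2x2 block of 8-bit code values yields quarter-step values that need a 10-bit scale to store without rounding. Whether those extra steps carry real image information depends on noise/dither in the source varying between neighbouring pixels, which is exactly the part those online exchanges tend to get wrong. A minimal sketch of just the arithmetic (my own illustration, not anyone's quoted claim):

```python
# Averaging four 8-bit values gives quarter-step precision (x.00, x.25,
# x.50, x.75), which maps exactly onto a 10-bit scale (value * 4).
# This only adds real precision when the four samples differ (noise or
# dither); averaging four identical values gains nothing.

def downscale_2x2_to_10bit(block):
    """block: four 8-bit values (one 2x2 patch). Returns a 10-bit value."""
    assert len(block) == 4 and all(0 <= v <= 255 for v in block)
    mean = sum(block) / 4        # quarter-step precision
    return round(mean * 4)       # exact on the 10-bit scale

print(downscale_2x2_to_10bit([128, 128, 128, 129]))  # 513: between 8-bit codes
print(downscale_2x2_to_10bit([128, 128, 128, 128]))  # 512: no new information
```

So the honest answer is "partially, and only where the source has per-pixel variation", which is more nuance than most forum exchanges on the topic manage.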
  19. It's definitely the YT compression, but you just need to prepare for it before uploading. I've done a few tests before with various combinations of NR and adding noise etc and just uploaded the video in Private mode so you can see what the compression does to it. Uploading in 8K doesn't really change the lower resolutions - watch a 4K video in 360p and you'll see what I mean 🙂
  20. [Canon EOS R5C]
    ML RAW from a 5D3 has magic that I haven't seen from a Canon compressed file yet, but it's great for you that you're not seeing it because you can be happy with what they give you. Some others can see it and those comments are in this thread if you want to go back and see, so I was talking with them about it when you jumped in and questioned the discussion. The purpose of these threads is to discuss these cameras' strengths and weaknesses, right? That's what this is. It's great that you can't see what we're talking about. My advice is keep it that way - life would definitely be easier if I didn't! There isn't any worshipping going on, simply that there is a camera with a high standard of image and it makes sense to compare using this as a benchmark. If someone was a sprinter and said that they're great but they're no Usain Bolt, then no-one would claim worshipping was going on. If a restaurant served a great meal and you said it was spectacular and certain elements reminded you of El Bulli, no-one would claim worshipping was going on. Having cameras that are getting more and more expensive, that have huge specifications and even greater expectations, and saying "they're great but over the last decade we haven't gotten closer to an Alexa" isn't Alexa worshipping either, it's just having a benchmark. It's not even like it's an irrelevant or inaccessible benchmark - most of the productions on Netflix and other streaming services are shot with ARRI gear, and the Alexa range is probably the camera I see the most footage from and enjoy seeing the most.
  21. Not bad but pretty hard to tell as not many shots with neutral lighting. It is, however, an excellent video to play a game of "spot the shots they did noise reduction on". YouTube LOVES footage without any noise, it has a party and you get fun things like this! or this: It makes sense that doing slow-motion in such an environment would give some noise, so no judgement there. The secret is to do NR to all the shots in post, then apply some grain over the top so all the shots are even. YT compression will clobber the texture of the grain, but it will prevent the banding. It would be interesting to see some RAW vs compressed shots of skintones in a controlled environment with pristine CRI and exposures.
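The NR-then-grain trick described above can be sketched with numpy. This is my own illustrative sketch, not anyone's actual pipeline; the grain strength is an arbitrary assumption, and real grading tools use fancier grain models than plain Gaussian noise:

```python
import numpy as np

def add_uniform_grain(frame, strength=2.0, seed=0):
    """Overlay mild Gaussian grain on a (denoised) float frame in 0-255
    range. Applying the same texture to every shot evens out shots that
    needed different amounts of NR, and the fine dither masks banding
    in smooth gradients after heavy compression."""
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, size=frame.shape)
    return np.clip(frame + grain, 0, 255)

# A flat, banding-prone gradient gains texture after graining:
flat = np.tile(np.linspace(100, 104, 8), (4, 1))
grained = add_uniform_grain(flat)
print(grained.std(), flat.std())  # expect more variation after graining
```

YouTube's encoder will still smear the grain's texture, as noted above, but the dithering effect on near-flat areas is what fights the banding.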
  22. [Canon EOS R5C]
    Unfortunately I don't share your thought that everyone knows this. I've seen many conversations in the past, both here and elsewhere, saying that digital stabilisation / EIS will make IBIS and OIS obsolete. Perhaps it will, in action camera products, but not if you want to maintain a 180 shutter, as I would imagine the majority of people who shoot 8K RAW would be interested in doing. An additional factor that comes into play is the general lack of understanding about how these things actually work. Of course, we can't expect everyone who uses a product to understand how it works - none of us could ever use a computer ever again as they're now hundreds of times more complex than any person could ever understand - but knowledge is power and there are consequences to people not understanding some of these things. A lack of understanding about ISO limitations might cause someone to suggest that they can darken an image with aperture to get a deeper DoF and simply compensate by raising ISO. This is true but if their understanding of ISO is that "higher = brighter" then they're going to risk ruining an entire shoot because they didn't understand the limitation of the technology. Same with digital stabilisation. It does have a stabilising effect, just like raising ISO has a brightening effect, but it's not the same as IBIS or OIS, in the same way that raising ISO isn't the same as turning your lights up. Any time a conversation begins with a faulty understanding of reality, the danger is that it goes in strange directions that are misleading and outright wrong. There were many references in this thread to digital stabilisation being a substitute (or even an upgrade) for IBIS. Anyone who knows that OIS and IBIS are similar (which they are) might conclude from these comments that digital stabilisation can be a good substitute for IBIS and OIS. The fact that the tests presented included OIS and this was basically ignored in pages of discussion is misleading at best. 
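The 180-shutter point is worth making concrete: shutter angle fixes the exposure time per frame, and any motion blur smeared during that exposure is baked into the frame before digital stabilisation ever runs, whereas IBIS/OIS steady the light path during the exposure. The conversion is simple:

```python
def shutter_seconds(fps, shutter_angle_deg=180.0):
    """Exposure time per frame for a given shutter angle:
    t = (angle / 360) / fps"""
    return (shutter_angle_deg / 360.0) / fps

print(shutter_seconds(24))   # 1/48 s at 24fps, 180 degrees
print(shutter_seconds(60))   # 1/120 s at 60fps, 180 degrees
# EIS crops and re-positions each frame *after* this exposure, so blur
# accumulated during the 1/48 s stays in the image; shortening the
# shutter to hide it means abandoning the 180-degree look.
```

This is why EIS works well on action cameras (which run fast shutters in bright light anyway) but is not a substitute for IBIS/OIS when a 180-degree shutter matters.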
There is something magical about ARRI colour. It is magical in both its RAW form and in the ProRes files from it too. It is expensive though. There is magic from the older BM cameras (OG BMPCC and BMMCC), but in many ways these cameras are a PITA to use, and also limited to 1080p, which for many isn't enough. There is magic in the ML RAW files too, even when graded with a simple LUT. However, there wasn't magic in the compressed files. A couple of people have expressed that the compressed files from the latest Canon cameras have "clay" like skin-tones. Now, we know from the 5D3 that the RAW was great (thanks to ML) but the compressed files weren't, so we can deduce that the compression that Canon employs removes some of the image quality. My impression of ML RAW through YouTube compression was that it has this magic, but that normal Canon footage from the Canon YouTubers does not. I then make the assumption that the lack of magic in the current Canon cameras is from the compression and image processing, rather than assuming their sensors have gotten worse since the 5D days. Maybe this is a faulty assumption, but I suspect not. In my experience heavy NR and compressed LOG profiles are generally what result in clay-like skin-tones. That leads to the idea that if you want to get great skintones from something like the R5C then you have to shoot RAW. No problems so far, good stuff, but it gives you a choice. You either shoot the full 8K RAW with the large file sizes, or you crop into the sensor, which you'd have to compensate for in lens choices and more noise etc, or you shoot compressed codecs and lose the magic that is very likely to be present in the RAW. One way around that is to use an external monitor to get RAW out of the camera but compress it to ProRes, which can (as ARRI and BM have shown) retain the magic within skin tones. That's it. That's the logic.
My thoughts on the wider topic are this: Cameras are giving us more and more pixels, but less and less magic. In 2012 Canon released the 5D3, which had a sensor that captured magic (when paired with a hack). Also in 2012 BM gave us a quirky sub-$1000 camera with a magical image, but the magic was also in the compressed files. Now, a decade later, we have cameras that cost 3x, 4x, 5x, or more, the price of the OG BMPCC, but there is no magic. We have 16x the number of pixels in the image, but no magic. We have 120p, 180p, 240p, but no magic. Worse still, manufacturers have managed to brainwash us to not even expect magic. The entire point of cameras is to make people feel something. Images should be emotive. Colour is a great way to do that because it's subject agnostic. A CEO giving a quarterly update will have a substantially different effect on people emotionally if their skin looks radiant rather than pallid. This is something that matters. Sure Canon RAW might be similar to ARRI RAW, but (as you say) ARRI RAW gets professionally graded and Canon RAW doesn't. This is all the more reason for the compressed files to look just as great as the RAW. BM gave us this in 2012, after all. In this sense, cameras have gotten harder to get good colour out of instead of easier. But this gets us into the crux of the matter, which is GAS. If you're happy with what you have you stop looking at what else you can buy. If Canon implemented Prores and a colour pipeline that could keep the magic in the files, perhaps requiring a LUT to be applied in post, then people would be happy and not looking to upgrade. Lots of people who own a 4K RAW camera that they're not completely satisfied with would probably be interested in an 8K RAW camera. Almost no-one who owned a 4K RAW camera that made wonderful images would be interested in that same 8K RAW camera. The manufacturers are in the business of keeping you happy enough to buy but unhappy enough to upgrade as soon as you can. 
What do I own? I own a GH5 with manual and vintage lenses. I shoot handheld in available light in a verite style with no directing and no retakes. The IBIS and internal 200Mbps 1080p 10-bit ALL-I codec create a nice image that's easy to work with. I tested the 5K, 4K and 1080p modes with my sharpest lenses stopped down and the difference in resolution and sharpness was small, so the benefits the 1080p ALL-I files have in the rest of my post-production workflow outweigh that slight IQ bump. I am not happy with the colour science / DR of the GH5, and this is one of the weaknesses I hope the GH6 fixes. I am also not happy with the low-light performance of the GH5, which I also hope the GH6 will rectify. I own a GX85 and GF3 for pocketable fun projects. These are paired with vintage or manual lenses, or native ones like the 12-35/2.8 and the 12-60/2.8-4 that I plan to get to replace the 12-35. I own the BMMCC, which I bought as a reference for its magical image and colour science. I have done dozens of comparisons with it and the GH5, trying to emulate the colour of the BMMCC with the GH5. I haven't gotten perfect results, but I have learned a lot and still have much to learn. Being able to do side-by-side tests is the only real way to compare two cameras if you plan on really learning about how each of them works and how to get the best of both. In theory the 10-bit from the GH5 should be bendable to the colour science (but not the DR) of the BMMCC, but I'm yet to really nail it. I own the OG BMPCC, which I bought as a small (pocketable!) cinema camera to use for fun projects. Turns out the screen isn't visible with my polarised sunglasses (a terrible design decision) and isn't really visible in bright conditions anyway, requiring an external monitor and negating the point of the camera over the BMMCC. The GX85 was the replacement for this and I'm just yet to sell it. What do I recommend? I recommend people point their cameras at interesting stuff.
Assuming they're already doing that, I recommend that they study and increase their skills in story-telling, directing, lighting and production design, composition, editing, colour grading, music and sound design, and all the creative aspects of film-making that are relevant to what they do and the roles they play. I recommend people practice and get as much experience with things as they can. So many people on social media ask questions that they could answer themselves. So many more people on social media endlessly parrot things that "everyone knows" but are actually outright bullshit. I've lost count of the number of times I've read something, questioned it, done a test myself in an hour or two, and realised that this "common wisdom" is actually just flat-out wrong. If a problem can't be solved by learning more or working around it (one of the reasons to practice and try things yourself), I recommend that people spend money on basically everything except their camera. Lighting, modifiers, grip, supports, audio, and lenses are far better upgrades than cameras, most of the time at least. In terms of cameras, which I know was what your question was actually about, I recommend that people truly understand their needs. Everyone wants the perfect camera and there isn't one and there never will be one. This means that in order to get the camera with the fewest compromises you have to work out what your priorities are, then, starting at the bottom, remove them until you're left with a list of just your top priorities that can be met by a camera. I would suggest that existing lenses and other ecosystem factors would probably be high on this list for most people. My philosophy is that you miss all the shots you don't take, so priority #1 is getting a setup that you can use. Fast primes might be great, but if you film in the remote wilderness and can't carry the gear there then it's game over, pack lighter.
If you need to shoot fast then primes won't work either - ENG cameras had long zooms for this reason. Would ENG footage have looked nicer if they could have gotten blurrier backgrounds or if there was better low-light performance? Sure. But primes were never going to work, and a lens that weighed 20kg/44lb was never going to work either. Priority #2 is having the equipment allow you to get the best content in your shots. For me size is a factor here. I regularly shoot in private places (like museums or art galleries etc) where professional cameras aren't allowed. I could afford a medium-sized cinema camera but I'd get kicked out of those places in a heartbeat. I also appreciate not hassling people around me with a large camera, and I don't appreciate the unwanted attention that it brings. For me, a GH5, lens, shotgun mic and wrist-strap is about as large as I'm willing to go. Priority #3 is having the nicest images come from the camera. This is why I use vintage and manual prime lenses. I want my images to look as cinematic as possible. Not because it's cool, but because it suits the subject matter and aesthetic of what I shoot and the effect I want them to have on my audience. It also makes me happy to shoot, and it's pretty obvious that I enjoy the technical aspects of it as well, so this is part of the experience for me. I prefer the look of a vintage lens with IBIS rather than a modern lens with OIS. I like the lack of clinical sharpness that vintage and manual lenses give me, because, once again, this suits my target aesthetic. This forum spends lots of time talking about this level - the image from the camera. It spends less time talking about the practicalities that are associated with those choices (priority #2) and even less time talking about Priority #1. Worse still, we spend basically zero time discussing aesthetic, which is what the whole imaging system is designed around. 
It's just assumed that more resolution is better and sharper lenses are better, etc. I have the distinct impression that the people here who could actually talk about their aesthetic, what the emotional experience of that aesthetic is designed to be, and how their equipment and process is designed to maximise this, are few and far between.
  23. It sounds like you're set on a point-and-shoot and seems like you have a budget range to work with. I'd suggest doing some reading and working out what the options are. Any good review of the ZV-1 will list a few competitors, and their reviews will list more. Build a list, read about each of them and eliminate any that don't meet your preferences, then search to find low-light tests and simply compare them. A few hours with google should give you at least a pretty good guess about what your best option might be.
  24. Do these people even still make videos? Here's a thought...
  25. Great stuff. I'll be buying something similar (although I'd be fine with a 1080p one) for use as an external display for Resolve using the UltraStudio Monitor 3G device that outputs HDMI and means that you can monitor your timeline from Resolve using the native resolution and frame rate from your timeline, plus bypassing the OS colour management malarkey so you can calibrate it properly. I'd be using that for video production while travelling. You know, once that actually starts being something a sensible person can integrate into their life once again.