Posts posted by kye
-
3 hours ago, Cinegain said:
Indeed I was referring to the use for 'vloggers' that walk around with one of these, stretching their arms out, pointing the camera at themselves. For this particular crowd I see no reason to have a cinema camera that requires extensive knowledge, thought process and skill, both in operation and in post-production. Now, if you're talking about certain YouTube 'shows', if you will, where there's an actual production environment... perhaps a set, but for sure lighting, audio, et cetera is taken heavily into consideration, as well as post-production, and the footage needs to be of a certain standard and look with a ton of gradability, then of course a cinema camera fits right in. I was strictly talking about the Casey Neistat wannabes out there, riding around on their Boosted boards, trying hard to be semi-interesting, all using the same music and effects they've picked up from Sam Kolder and co; they basically would've used Snapchat and their smartphone to get content online easy and quick, but are after bigger-sensor aesthetics and optical zoom. There, a point-and-shoot makes more sense than a cinema camera... Casey himself uses a smartphone on his runs, tries to have at least a G7XII with him on-the-go, will be out and about with a Gorillapod and 80D (6DmkII?) for regular vlogs and B-roll, and hates on Sony for not really having front-facing displays for vlogging, but doesn't mind using them for static tripod/rig shots at his studio. It's all about the application and what I keep on saying: the right tool for the job. Jon Olsson was obsessed with getting a RED. Thought it would make things EPIC. Found that it's a very impractical camera for vlogs and went with Sony/Panasonic/Canon interchangeable lens cameras instead.
I agree... YT 'shows' is a pretty good name, although I'd still stress that there are people doing both 'shows' and 'vlogs', and also people who use a mixture of techniques, or even a technique somewhere in the middle.
Only earlier today I watched a video by a lady who shows how she does construction projects (this particular video was about her making a large wood and metal gate), and as we'd been having this conversation I was paying attention to the shots in it. It seemed that she was sitting and talking to camera with the narration of the video and cutting b-roll of the build over the top. What was interesting was that her talking shot was a tripod / lav mic setup, but it was outside in uncontrolled conditions, so that's somewhere in between the two scenarios you described above. Also, included among the standard tripod b-roll shots was a fancy dolly shot where the camera follows her from one room of her shed to another (the front face of the shed was open, so it was one of those "looking into the building from outside" style shots). It was an interesting shot because I think it shows she's got an appetite for upping the production value, and the fact that she's already got a lav and the sound is nice means she's not new to the game. She also sells plans for projects online, so her channel is a business.
Would she benefit from having a cinema camera? Maybe. I didn't see any evidence of auto-focus requirements, or IBIS requirements, so a cinema camera wouldn't be ruled out. If she shot 1080 in ProRes then maybe her edits would go faster, so there's perhaps some benefit. I don't think she'd benefit from having more DR via a flatter profile, but others like her might. I think the lines between the (hand-held + selfie + outside) and (tripod + not selfie + studio) scenarios that the industry thinks in are so blurry that they ceased to exist quite some time ago.
And yes, I did notice Jon Olsson getting excited about the RED and then it not featuring much again. I heard a mention that their camera guy had a routine of taking it out to get b-roll early in the morning when they were in Monaco, before Jon had woken up, but in terms of lugging it around during the day or through airports etc., it's not the right camera for the job!!
-
LOL, crying face is right!! But actually, for a single operator it looks quite functional - it's quite elegant really.
Then again, I've used the HDMI input on my 32" UHD computer monitor as a monitor when doing some camera tests in my office!!
-
13 minutes ago, jonpais said:
I would use the term YouTubers instead, since to me, vlogging implies recording a sort of video diary of your life - though everyone's definition will of course be different!
And if I were making talking head videos of myself with the batcam for one of the video hosting websites, a fully functioning smartphone app would be more useful to me than an articulating screen.
I guess that's part of my point @jonpais - there are more and more people who don't fit the title 'vlogger' neatly. Any vlogger who has watched a "how to vlog" video will be trying b-roll as an alternative to jump-cuts, and any vlogger who has bought anything and watched an unboxing video will be doing product shots, so to say that a vlogger is someone who only ever shoots selfie shots would eliminate most vloggers.
Take Kelsie Humphreys for example (who I showed above interviewing Tony Robbins) - she makes vlogs as BTS from her interviews. Does that make her a vlogger, or is she the producer of a talk-show / interview channel? Same for Laura Kampf (also above), who shoots no-dialogue creation videos using manual-focus, tripod-mounted, shallow-DoF techniques, but also shoots vlogs - is she a vlogger or not?
We can use your definition of "recording a sort of video diary of your life", which I think is quite a nice definition, but it isn't very useful if we're talking about what equipment matches that style of creation, since video diaries can be shot in any style with any equipment: the term refers to a subject, not a style.
It might have been the case that certain types of content matched certain types of shots and equipment (game shows vs TV drama vs blockbuster movie) but those pesky Youtubers haven't been told the rules, so they're just using every colour in the whole paint-box with reckless abandon!
-
I wouldn't be surprised @jonpais, not even a little bit.
It's the same in the hifi arena with CD vs SACD - you'd go to a demo where they'd be comparing CD with SACD and the SACD would definitely sound better. You'd look around and people would all be nodding their heads about SACD, but I'd be standing there and thinking "both sound completely awful compared to my CD setup at home - how is this a valid comparison?".
-
9 hours ago, Cinegain said:
Well... you say that, but I doubt it would make sense, as I said once before...
... a cinema camera should be used as the name would suggest in a production-type environment. Where you'd otherwise have loved to use a RED/ARRI, but due to budget or space/weight constraints it isn't exactly an option. Rigs, matteboxes, follow focus, tripods, monopods, sliders, dollies - since the HDSLR revolution, production-style shooting has become accessible to anyone, and I loved shooting that way with all the GH-range models. IBIS, C-AF, et cetera sure make life a bit easier, but I don't actually mind doing it the old-school way, which has been proven to work for decades and is actually still very much the industry standard of high-end production. That's what cinema cameras are all about. To use a camera with such a purpose for vlogging... I do believe it's the wrong tool for the job, so a flippy screen and sweet C-AF... for an LX100 successor... yeahhhss! For a BMPCC4K... ehhh... not really a dealbreaker if left out. I think that they might even do that on purpose. There must've been a lot of consumers buying one of those $475 BMPCCs back when they were on sale who went complaining because they had no idea what a cinema camera can and cannot do. It requires an operator with know-how.
Vloggers tend to operate in three distinct environments: a fixed camera in a purpose-built studio, a hand-held camera while out and about (selfie and b-roll), and an action camera for harsh environments. Some have drones too.
These are often covered by a G7 or RX100 for the first two and a GoPro for the third, but there are vloggers who do use a dedicated camera for their studio setups, and I've seen REDs, C300s, FS5s and 1DXs as well as the usual suspects of the various Canon cameras. These 'premium' setups have boom mics, monitors, and large soft-boxes permanently set up. I think for those that have a fixed studio camera setup, the BMPCC4K would be an excellent choice because of the combination of low cost, the anticipated high quality, and the convenience of having ProRes to edit with SOOC.
They might also find use as a mobile vlogging camera when combined with a gimbal, MF (like Sony users have learned to do), and a fast wide lens.
I find that increasingly YT creators are blurring the lines between documentary film-making, studio film-making, studio talking-to-camera (vlogging), mobile vlogging, travel vlogging, travel film-making, etc etc.
Will these niches drive a lot of sales? No. But I do think they'll have a small footprint in the YT / vlogger ecosystem.
Here are a couple of examples of potential customers that blend "vlogging" with more traditional film-making elements, and who might be interested in upping production quality:
I'll stop now, but there are many, many examples of this blending of styles, and I think the idea that you can approach RED/ARRI quality for mid-range mirrorless prices will be a HUGELY attractive factor for these people.
As a special bonus, here's a couple who shoot mostly hand-held with an FS5 (IIRC) and a GoPro. Production quality isn't top notch (stabilisation is an issue), but I've seen previous videos where they had a smaller Sony video camera, so if they can afford an FS5 then a BMPCC4K wouldn't be out of the question.
-
9 hours ago, webrunner5 said:
These from Intel are even more interesting!
They are interesting. That the project is dead doesn't matter, because I think they will have learned a bunch from it and will keep that in their back pocket for future products. In the past, companies would have done things exactly like this but just never told anyone about it; because this is the cutting edge, they don't really lose much from announcing a product as 'coming soon' and then simply not launching it if people don't line up with their wallets open.
It's interesting that Intel went for the middle-class businessman - which is probably the right market to go for eventually, but the early adopters will be 'those pesky kids', and they will need something a little more fashionable...
Not a display, but an example of wearable tech that's actually fashionable.
-
9 hours ago, jonpais said:
4K is really just clean 2K.
If you're referring to the resolution after debayering, then you're absolutely right. People say that the 1080 from the C100 is so nice because it's downscaled 4K, and the A7III downscaling 6K to 4K seems to be a feature that isn't spoken about enough.
I find it strange that through this whole conversation about 4K vs 1080 on YT, no-one mentioned debayering. With film you shot at the same 'resolution' as you delivered in, but digital doesn't work that way. Anyone who wants to see what 4K YT can look like if done correctly should compare it to a video by MKBHD, who shoots 5, 6, or 8K RAW.
and if you think he can't possibly be shooting 8K RAW because it's ridiculous in terms of camera equipment and storage, check this out:
There's a pretty strong technical argument that the A7III should be the C100 of FF mirrorless, because after debayering it should have a bunch more real resolution in its 4K output than anything that captures at native 4K. I'm surprised that the pixel-peeping crowd aren't publishing test charts of this. If I end up with an A7III then I'll do a comparison just for my own curiosity.
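To make the oversampling argument concrete, here's a toy numpy sketch (purely illustrative - made-up noise figures, and it sidesteps real debayering entirely): averaging blocks of photosites during the downscale cuts the random noise, which is a big part of why downsampled footage looks so clean.

```python
# Toy sketch of why oversampled-then-downscaled footage looks cleaner:
# averaging blocks of samples during the downscale reduces the noise.
import numpy as np

rng = np.random.default_rng(0)
scene = 0.5                                            # a flat grey "scene"
oversampled = scene + rng.normal(0, 0.05, (540, 960))  # noisy high-res capture

# 2x2 block average, i.e. a simple 2x downscale:
h, w = oversampled.shape
down = oversampled.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging 4 samples cuts random noise by roughly half (sqrt(4)):
print(round(float(oversampled.std()), 4), round(float(down.std()), 4))
```

Same idea in the real world, just with the extra wrinkle that the Bayer pattern means native-resolution output never had a full colour sample per pixel to begin with.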
-
52 minutes ago, webrunner5 said:
Google Glass was obviously a failure because women complained that people were using them in restrooms. Not Google's fault. It was mostly over privacy issues here in the USA that it failed.
I thought it was that they made you look like a complete turnip!
-
6 hours ago, Shirozina said:
The use of proxies / cache renders / optimised media has little to do with image quality and is mainly about working around highly compressed camera codecs that consume a lot of processor cycles to decompress and reconstruct on the fly. It's not even a laptop vs desktop issue, as it takes a very high-end desktop to edit these files smoothly. For most users the proxies / cache / optimised media are, in codec terms, higher quality than the originals, but in practice image data is often lost in the transcoding process, which is why you never want to do anything other than render to your delivery format from the original camera files. Also, as far as grading accuracy is concerned, most NLEs give you the ability to switch the viewer between the proxy and the original file to check for things like NR, sharpening and any banding or macro-blocking artefacts, and obviously for compositing you don't want to do it at anything other than native resolution. Lastly, portability and mobility are not the same as battery power or independence from mains power. I never use my laptop on battery power alone for editing as it would drain the battery way too quickly. Desktop GPUs consume large amounts of power, so it's totally impractical to expect them to run on battery power.
I think this depends on what level of film-making you're doing. The professional colourists over at liftgammagain are likely working with RAW and high quality Prores files from the high end cinema cameras, and live in a different world to the one we talk about on here. There was a thread about buying "cheap" client monitors for their grading studio (these aren't monitors to grade with, they're only for the clients to view the grade) and I just about had a heart attack when someone said that their budget was $8000 per monitor, and they were looking to buy half-a-dozen of them!!
-
1 hour ago, jonpais said:
Learn something new every day. I had no idea you can’t grade proxies.
I think it's to do with the colour accuracy after a conversion, especially considering that most proxies are much lower quality than the original footage. If you were transcoding from RAW to something like ProRes 4444 then I don't imagine it would be a huge problem, but I could be wrong about that.
Another thing I didn't mention is that if you're doing VFX work or any precise tracking then you want to use the original files too, because they'll enable much better tracking accuracy. I've heard VFX people say that for realistic 3D compositing work you sometimes need to be able to track to within a single pixel of accuracy, or even to within half or a quarter of a pixel. I've never done it, but it makes sense: if you're moving the camera and compositing in a 3D object that's meant to move as if it's part of the environment, any wobble of the VFX object relative to the real environment you filmed is going to be quite noticeable if your tracking isn't great.
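For anyone wondering what "quarter-pixel accuracy" even means in practice, here's a little standalone sketch (hypothetical example, not any particular tracker's code) of the classic parabolic-fit refinement: fit a parabola through a correlation peak and its two neighbours to place the peak between integer sample positions.

```python
# Sketch of sub-pixel peak localisation: fit a parabola through the
# best-matching position and its two neighbours, and take its vertex.
import numpy as np

def subpixel_peak(scores: np.ndarray) -> float:
    """Return the peak position of a 1-D match-score array with sub-pixel precision."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)                       # can't interpolate at the edges
    l, c, r = scores[i - 1], scores[i], scores[i + 1]
    offset = 0.5 * (l - r) / (l - 2 * c + r)  # vertex of the fitted parabola
    return i + offset

# Match scores whose true peak sits between integer positions 2 and 3:
scores = np.array([0.1, 0.6, 1.0, 0.9, 0.2])
print(subpixel_peak(scores))  # a refined estimate a bit past position 2
```

Lower-quality proxies blur and quantise those match scores, which is exactly why this refinement works better on the original files.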
-
14 hours ago, Mark Romero 2 said:
If only we can get @mercer and his cat overlord to switch from Canon to Sony then balance in the universe will have been restored / utterly destroyed...
Cat overlord!!! Oh, that made me laugh!!
-
6 hours ago, Andrew Reid said:
Said Nokia
I remember back in the day seeing a little video about a guy who worked for Nokia whose job it was to travel the world and see what cool things people were doing with phones, and report those things back to Nokia as product and feature development ideas.
One of the most interesting ones he mentioned was that villagers in rural Africa were buying a mobile phone as a business for their village. Obviously the other villagers would pay them to be able to call people on the phone, and the phone owner would make a percentage, but the big thing was that the phone acted like a bank. In Africa there is a very rural-oriented culture where people go into the city to get a job for a few years but send the money back to their families in their village. So the city worker buys a pre-paid phone card and sends the card details to the phone owner in the village, who then pays the family most of the value of the card, taking a percentage on top as a 'transaction fee'.
I thought this was brilliant for Nokia - basically crowd-sourcing their R&D. I wonder if Apple or Samsung are doing something similar, or if perhaps the innovation is now in software rather than business models? The most interesting thing about the smartphone, to me at least, is that it's basically a generic device: it has no specific interface, no specific display limitations, no specific processing limitations, etc. It's a device that can be adapted to as many different styles of interface, display, or app design as app developers care to create. It's kind of the antithesis of how old phones used to work (and how cameras currently work), where you choose a product and you get their hardware, their OS, and their apps all in one.
-
2 hours ago, webrunner5 said:
I am sort of baffled why BM only made this one version of the eGPU, with a lackluster GPU in it. WTF! What, the architecture of the laptop can't handle the throughput??
And then it's AC powered, so kind of irrelevant for portable use. This is sort of a polished turd.
I agree that the chipset and the requirement for AC power are both severely limiting factors. I watched a Hackintosh video where the guy mentioned something about the compatibility of various chipset manufacturers that surprised me, but the fact they didn't put a higher-powered Radeon in there is a bit strange.
Unless it was designed to a spec, perhaps something like "FCPX must be able to play 4K 30 in real-time with 2 LUTs and some basic curves adjustments applied on the nicest Apple display" or similar?
The first battery-powered one will be interesting.
One thing I learned over at liftgammagain is that you can ingest footage with a slower machine, generate proxies with a slower machine, edit with proxies, and do sound with proxy video, but you can't grade with proxies - so if you are grading in front of a client, your machine needs to be able to play the original footage with all the grades applied in real-time. Grading, however, is done in a controlled lighting situation with calibrated monitors, so AC power is available, and in that sense an AC-powered solution still makes sense.
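For what it's worth, the proxy-generation step in that workflow is simple enough to script. Here's a sketch that builds an ffmpeg invocation for ProRes Proxy transcodes (filenames are hypothetical; it assumes ffmpeg with the prores_ks encoder is available):

```python
# Sketch: build an ffmpeg command that transcodes a camera original to a
# reduced-resolution ProRes Proxy file for editing.
import shlex

def proxy_command(src: str, dst: str, width: int = 1280) -> str:
    """Return an ffmpeg invocation creating a ProRes Proxy (profile 0),
    scaled down for editing, with audio passed through untouched."""
    args = [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
        "-vf", f"scale={width}:-2",              # -2 keeps the height even
        "-c:a", "copy",
        dst,
    ]
    return shlex.join(args)

print(proxy_command("A001_clip.mov", "A001_clip_proxy.mov"))
```

Most NLEs will do this for you automatically, of course - this is just the kind of thing the "slower machine" in that pipeline spends its time on.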
Personally, I don't need to be able to play graded footage in real-time, so it doesn't matter for me. I'm happy to scrub the playhead around and see how things look over the footage, then render it out, watch the render back, and make changes if required. For my home videos my 'clients' are my family, so I can play them the mostly finished project and get their input while also reviewing the output for any strange things that catch my eye.
-
You forgot to include "flange distance", which after all that discussion in the Nikon FF and Pocket 2 threads must surely be the leading search term in all of photography today!!
-
16 hours ago, Andrew Reid said:
The question is whether the novelty will wear off like it did with other 3D devices, because if there's not the long-term interest there, there won't be long-term 3D content.
It's a good way to differentiate it from other phones though... Certainly stands out as different, both in branding, design and the display. Not sure about the camera. Is it better than the P20 Pro?
Most smartphones are too alike these days so this is good from RED... But we'll see how it does on the market.
My view is that the novelty will wear off, and that's a good thing, but ultimately the future is 3D because AR (augmented reality) will dominate. The popularity of smartphones is undeniable, and while they are great for having constant access to apps and the internet, they absolutely suck as a user interface because the screens are so small. The future of the smartphone interface will be AR - essentially placing smartphone technology within your field of view, like the head-up display did for driving. A glimpse of it is the Apple Watch, which is essentially a second screen for your phone, but one that is much handier than having to retrieve your phone from your bag / pocket / nightstand, and AR would eliminate that separation between a person and their device entirely.
In terms of how well this particular product does in the market, who knows. There's a saying in startup culture - "being early is just like being wrong" - but even if they are early, establishing yourself as one of the early developers with the knowledge, patents, tech, and company culture means that when demand goes up they'll be ready.
Capturing 3D won't really take off until there is a decent appetite for consuming 3D, which didn't happen with TVs, but might with VR, and definitely will with AR. However, I think AR could be a long way off, at least in product lifecycle timescales. Google Glass was obviously a failure (for many reasons), but products like Snapchat Spectacles are bringing wearable tech into the market in ways that Google Glass couldn't, and if Snapchat Spectacles had the functionality of the Apple Watch to display instant messages and other basic info, then they would be very popular.
I think VR will get some traction before AR, most likely from gaming, but the fact that it blinds you to your surroundings limits how and where people will actually use it. We're not about to see the average commuter put on a pair of VR goggles on the train, for example!!
-
1 hour ago, webrunner5 said:
All the above is true. But none of it makes me think of using an Apple laptop to do it with power-hungry codecs. Apple just seems to me to be on some suicide mission with video lately. They are becoming irrelevant in it with their design-over-usefulness thing.
You may well be right. I have an Apple laptop and I use it for the above, but I chose Apple because of other factors. If someone didn't have a laptop already and was wanting to only use it for video I wouldn't recommend an Apple laptop unless they were a FCPX user.
When I was in the market for a new laptop I did a detailed comparison between the MBP and a couple of non-Apple laptops, and it was things like the integration with the Apple ecosystem that decided it for me - something that had nothing to do with video. Of course, being my only machine, it's what I use for video, and so being able to upgrade it with an eGPU if I want the extra performance provides some extra flexibility and options to extend my current setup, which is nice to have.
I think that the average person on this site is oriented around video more than the target users for eGPUs, or certainly eGPUs with this processor anyway. If you're looking for a video-first machine then this eGPU isn't the way to go, and I think people that are video-first find it hard to understand that video products should be made for anyone other than video-first people.
In the same way that it would be silly for me to hang around on mobile phone forums criticising every smartphone because it doesn't shoot 4K 60 in 16-bit RAW, have XLR inputs, or SDI connections, I find it strange that people who would never buy an Apple laptop are all-of-a-sudden the experts in what to buy for those people who do own an Apple laptop.
It's like the film-making industry hasn't worked out that convenience and decent video can now co-exist, that the vast majority of film-makers are amateurs, that the biggest networks aren't broadcasters, and that the majority of video content isn't consumed on projectors, and possibly even on TVs!
-
11 hours ago, IronFilm said:
Errr... still missing the point I think? As what I'm saying has nothing to do with how fast the flashes cycle (heck, they could take 10 minutes to recharge for the next flash for all I care, it doesn't impact my point. )
But in your case, you want opposite and them to fire potentially dozens of times a second, and perfectly (so a timecode box with genlock attached to the flashes?? ? ) in sync with the camera shutter??
Yeah, nah. We know how bad it is for the subject being blinded by flashes a few times in a row - could you imagine that many times a second for minutes at a time? Quick path to becoming the most hated photographer in town!
Ah! I thought you were saying it couldn't be done.
If you're saying it would be a bad idea, then that's a different conversation. I agree that triggering people with epilepsy everywhere you go would be a bad idea, and you can't overpower the sun with anything except very, very bright lights, so that's the end of the ballgame for overpowering ambient light for video. The alternative would be continuous lighting for a burst, but that would be pretty nasty in power requirements and pretty horrific for the poor people the light is aimed at too.
The answer is probably high ISO and digital relighting - Apple's Portrait mode, but with the tech advanced by a dozen or so generations.
12 hours ago, IronFilm said:
Exactly. I find it hilarious when people say still lenses are not good enough to resolve 4K detail.
People say that? Wow!
13 hours ago, nigelbb said:
You are stuck in today's video vs. photo paradigm. The technology already exists for dual ISO, so why not dual shutter speed? Or a still image taken at 1/250 or 1/500 or whatever, taken in between each video frame or every 10th video frame and stored separately? There are so many different ways this could be accomplished. The 8-megapixel stills that you can grab off a 4K timeline are sensational and can be used for any commercial purpose from wedding albums to magazine covers to billboards.
Now that sounds great!! Why didn't I think of that!
-
11 hours ago, Andrew Reid said:
The aspect ratio hasn't changed though.
Distortion has.
You're right that the size / resolution of the output files hasn't changed, but the aspect ratio of the things being filmed has (circles aren't round anymore).
-
17 hours ago, Shirozina said:
I have an eGPU that I use with my laptop. I use this at home when I don't want to edit in the studio. Without the eGPU the laptop overheats and slows down when I edit the GH5 UHD 400mbps codec. When I'm just editing UHD Pro Res I don't have a problem so I can get away without an eGPU and it even worked on my previous less powerful laptop.
BTW - I'm not on drugs
That makes sense..
It sounded like you were suggesting that, because some codecs require a lot of processing, everyone should switch from a laptop to a desktop and completely lose all the benefits of having a portable computer.
The way I see it is that there are many stages of film-making in which a computer is required:
- On set, ingesting footage
- Reviewing dailies after shooting
- Editing
- Colour correcting
- VFX / compositing
- Grading
- Titles and export
- Archiving
If you use a laptop you can do all of the above with the same computer. But then 4K H.265 codecs appear, and because you can't edit or grade that footage the advice is to switch to a desktop - which means maintaining two complete setups, or losing the ability to do many of the remaining steps. There's an assumption that because grading requires a calibrated monitor and environment, you'll be doing all the other things in that environment too, which is complete bollocks.
-
56 minutes ago, Shirozina said:
It makes sense when you try to edit 4K highly compressed codecs.
Changing your lifestyle makes sense?
So, when I bought a 4k camera I should have stopped editing video on the 2 hours per day commute that I had to my day job and instead edited video at home instead of spending time with my family?
Are you on drugs?
How about this - if you can edit on a desktop computer then why are you even in a thread about an eGPU which is clearly aimed at laptop users...?
-
Nice post @jonpais. There are so many different aspects to consider, and for many of us, a camera that really stuffs up a single feature is worse than one that is passable at everything but doesn't wow.
I think that's the difference between different types of filming - some situations call for a camera to be great at some things and don't need other things, whereas other styles need everything to operate above a certain minimum level of performance, even if that minimum level is quite modest.
A great example is the GH5 vs A7III - 10-bit or 4K60 doesn't matter to me if the AF has failed, yet there are many cinema cameras that don't even have AF. People have told me flat out that I'm expecting too much, but unlike perhaps the vast majority of people on here, I started making films with what I had (a Samsung Galaxy S2 and a ~$300 Panasonic GF3 m43) and only upgraded when I went on a trip, filmed real things, messed up shots all over the place, and then looked for ways to improve.
-
6 minutes ago, webrunner5 said:
I think you are sort of trying to talk me into believing that the Matrix movies were reality! I "see" where you are trying to go here.
Whenever I see things like "we can't see in 4K" or "no-one will ever need 8K", I just hear "640K should be enough for anybody".
-
31 minutes ago, webrunner5 said:
Way beyond my knowledge on this stuff. Makes more sense than the crazy stuff on here. I don't know about you, but 16:9 is all I shoot at, and sort of what I relate to, and I only have one eye! There isn't anyone that can see in 4K! The average person over 20 is lucky to see HD. A hell of a lot of men are colour blind.
"Red-green color blindness affects up to 8% of males and 0.5% of females of Northern European descent. The ability to see color also decreases in old age" - Wiki.
A really simple example might be the home videos from Minority Report:
Ignoring the 3D aspect of it, right now we have the ability to shoot really wide angle and then project really wide angle. All you need is a GoPro and one of those projectors designed to sit close to the screen - existing tech, right now.
If you shoot 4K but project it 8 feet tall and 14 feet wide, then most people sure as hell will be able to see it - especially if you've shot H.265 at 35Mbps!!
Projecting people life-sized is a pretty attractive viewing experience, so we're not talking some kind of abstract niche kind of thing - we're talking something that a percentage of the worlds population would see in the big-box store and say "I want that"
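To put some rough numbers on that (assuming a 3840x2160 image stretched across a 14 x 8 foot screen, which is my hypothetical, not a real product spec):

```python
# Back-of-the-envelope pixel density for 4K projected on a 14 ft x 8 ft screen.
def pixels_per_inch(px: int, size_ft: float) -> float:
    """Pixels per inch along one axis of the projected image."""
    return px / (size_ft * 12)

h_ppi = pixels_per_inch(3840, 14)  # horizontal density
v_ppi = pixels_per_inch(2160, 8)   # vertical density
print(round(h_ppi, 1), round(v_ppi, 1))  # roughly 23 ppi both ways
```

Around 23 ppi is a far cry from a phone screen's several hundred, so at life-size projection the pixels (and the compression artefacts) are very much on display.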
17 minutes ago, IronFilm said:
I think you missed his point completely.
For example:
Flash lighting (along with high speed shutter) can be used to massively change the ratio of ambient to artificial light, to get a look you could never achieve with constant lighting.
I understand that.
If you read my post carefully you will notice I mentioned that they might have a 24/25/30fps sync - this is different from continuous lighting. While this isn't currently available at full power, there are strobes that can recycle fast enough (e.g. the Profoto D2 can recycle in 0.03s and can already sync to 20fps bursts). All that is missing is a big enough buffer (capacitor bank) to do full power that fast.
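Quick back-of-the-envelope on that recycle figure (the 0.03 s number is from the spec I linked; real-world sync limits involve more than just recycle time, so treat this as a sketch):

```python
# Does a 0.03 s recycle time keep up with video frame rates, in principle?
recycle_s = 0.03
max_flashes_per_s = 1 / recycle_s          # ~33.3 flashes per second

for fps in (24, 25, 30):
    print(fps, max_flashes_per_s >= fps)   # the recycle rate clears all three
```

So purely on recycle time, 24/25/30fps sync is within reach; it's sustaining that at full power, flash after flash, that needs the bigger capacitor bank.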
-
5 hours ago, heart0less said:
a7III + Vivitar 19-35 @ 35/4.5, no filters used, 5000 ISO, handheld.
S-Log2, S-Gamut3.cine
Colored using Geoff's GFilm correction LUT + simple adjustments.
That looks really good.. thanks!
Nikon FF Mirrorless
In: Cameras
Posted
Cool - if you are designing a new product (and indeed a product in a new category) then it makes sense to start with a blank sheet of paper and question all the 'normal' decisions. Good stuff. I think this bodes well for it not being a 'same same' performer.