
Everything posted by kye

  1. kye

    Motion Cadence

    I've just watched the video and unfortunately I spotted a couple of potential issues with the test. The fact that the camera is panning is a problem, because an uneven pan can make the motion look uneven (as if it's speeding up and slowing down). Ideally the cameras would all be stationary. Also, I noticed repeated frames on both the S8+ and the 1DC, and there appeared to be a skipped frame before the first repeated frame on the S8+. Note that these weren't repeated frames in the video file itself (which I downloaded from YT) - in both cases the other cameras showed non-repeated frames at the same moments, so the YT file appeared to be fine. I wonder if the 1DC file was fine but the error was introduced in post by a slight time stretch? The S8+ file (which seemed to combine a skipped frame with a repeated one) seems more likely to be related to the camera itself - maybe something was happening in the background and the camera app didn't get CPU preference at that moment. It's a pity, as the test seems like a good idea and would have taken quite some time to set up, edit, and upload.
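Repeated frames like the ones described above can also be flagged programmatically rather than by eye. Here's a minimal sketch of the idea, assuming frames have already been decoded into flat lists of pixel values (in practice you'd extract them with something like ffmpeg first); the helper names and the threshold are my own illustration, not any particular tool's API:

```python
# Sketch: flag repeated (near-identical) consecutive frames in a clip.
# Assumes frames are already decoded into flat lists of pixel values.

def mean_abs_diff(a, b):
    """Average per-pixel absolute difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_repeated_frames(frames, threshold=1.0):
    """Return indices i where frame i is (nearly) identical to frame i-1."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) < threshold]

# Tiny synthetic example: frame 2 repeats frame 1.
clip = [
    [10, 20, 30],    # frame 0
    [50, 60, 70],    # frame 1
    [50, 60, 70],    # frame 2 (duplicate of frame 1)
    [90, 100, 110],  # frame 3
]
print(find_repeated_frames(clip))  # → [2]
```

A skipped frame would show up the opposite way - an unusually large difference between two consecutive frames during an otherwise smooth pan.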
  2. Have you used this camera personally Omar? If so, what type of filming do you do? With all things, the features of a camera can be great or terrible depending on what you are using it for
  3. It depends on how 'intelligent' it really is. I won't be holding my breath!
  4. Ouch!! Luckily they're getting more robust with each update. Waterproofing and drop-proofing are killer features. I took my iPhone 8 swimming with me at the beach when I first got it to test the waterproofing and see if it was an alternative to the GoPro. I even (accidentally) did the opposite of their suggestions when I plugged it in to the computer without giving it a few hours to dry out. Still works fine, and the video looks incredible. 1080p240 in full-sun at 400% video bitrate (via an app) is very impressive.
  5. Ah. Yep, almost zero chance then. No matter - we've almost got 4K RAW on the 5D and almost 1920 RAW on the crop-sensor models.
  6. kye

    NAB 2018

    Have Canon done all their NAB2018 announcements yet? I had in the back of my mind they might be announcing the XC20..
  7. Well, it's relevant to me today, but I'm a very long way away from the 'industry', so it's not really relevant to anyone who gets paid for audio in film work. This is where we run into the difficulties of people having vastly different expectations and involvement in film-making! A lav mic for each person is a logical choice, although I am reminded of the FB quote that you shared - tracking batteries and all that stuff is something I'm trying to avoid. Besides, getting two teenage kids to wear a lav mic all day would be almost as painful as having to replace them every 3 months because the kids don't take any care with technology whatsoever. My daughter is onto her third iPhone in two years just through being careless (she's a dreamer and just not aware that her phone will fall out of her pocket if she does a handstand or whatever) and my son thinks that dropping a pair of headphones onto the floor in a moment of exasperation when he dies on xbox and then picking them up by the cord when he starts the next game is still within our instructions to "be careful and treat them properly" - I'll leave you to imagine how long a pair of headphones lasts in our house. That's the challenge with 360 cameras: it's hard not to get them in shot. If the mics weren't built into the camera (like they are in others) then on top would be the logical choice. You'd have to then treat the camera like it has a blind spot, but considering that a standard wide-angle has a 'blind spot' of 270 degrees, it's still a huge step up in terms of coverage of a scene. In terms of delivery format, I'd film in 360, crop to 16:9 or 2.35 or whatever suited the vibe of the video (16:9 for more life-like and 2.35 for more cinematic), and then deliver it via YouTube as an unlisted video. Then the relatives can all view it from the link that I email them, and I avoid all technical compatibility playback issues.
To elaborate on how I'd use it, here are some thoughts:
- I'd film by holding it somewhere in front of us so it can see our faces as well as what we're looking at, or above and behind us for a follow-shot.
- If we're walking along and the kids see something funny and react to it I can have something like: shot 1 is wide of us walking, shot 2 is wide of what's in front of us, shot 3 is a medium of the thing they reacted to, shot 4 is a close-up of their reaction, shot 5 is wider to include us reacting to their reaction, and shot 6 might be other people reacting to us all laughing or whatever.
- If it's an activity of some kind I can cut between us and what we're seeing or doing.
- It allows either continuity editing of different angles of a scene shot once in real-time, or I can even do the classic suspense technique of having shots overlap in time, so that shot 2 starts at a moment in time slightly earlier than shot 1 ended on.
For me it's about always having the camera already pointed at whatever is happening before it happens, and the 360 camera does this by always being pointed at everything. With a traditional camera, if you're trying to film a situation you're often caught where something happens and by the time you point the camera at it the moment is over.
  8. Or when ML is ported to the camera... although that might be a tall order depending on what the insides look like from a technical perspective.
  9. I use my Hero3 in that mode and I still find it disappointing. Maybe it's just too old for what my expectations are - I know the sensors get a little bit better with each new model. I understand what you mean about the sensor, but given enough light it should be good quality. Maybe there's something else going on that I'm not aware of. In terms of the RX0 1" Sony, it's not as wide-angle as the GoPro (RX0 = 24mm, IIRC my GoPro Hero3 = 17mm). If I wanted 24mm I'd just use my iPhone. I either use the GoPro because of its wide angle or because it's waterproof (and even then, because it doesn't have a screen, I still use it in wide-angle mode), but it may work for your purposes though. A real fish-eye lens may also be a good equivalent depending on your needs. Absolutely agree. When they were cashed up they should have been pushing and innovating, but didn't. The video I posted above suggests that they are good at selling and understand the waterproof action-camera market, but that's the extent of their strengths.
  10. Here's the intro to Fusion video from the same youtuber as the above - most of the people were asking for a fusion video in the comments! He shows an example of doing a screen-replacement on a phone, which is cool because it involves compositing and tracking. This thing really is like learning to drive the space shuttle..
  11. I'm surprised no-one has posted this video yet... I think the video takes a well-balanced look at the company and the founder. I took my Hero3 to a concert on Sunday to record a short video for a friend's birthday and when I got the footage back I was really disappointed - it was hazy and soft. I thought something was going wrong with the camera (maybe some coatings going bad or something) so I tested the camera in good light and it was working, but the quality still wasn't great - I think it's the poor codec. I'm often half-tempted to buy a newer one to get real 4K but due to the limited video quality I just can't justify it. If they released something that had a high-bitrate codec then I would probably buy it. I'd be willing to pay a decent margin if it recorded RAW or something close to it.
  12. It might be a technical limitation? I'm just guessing here, but IIRC my 700D upscales 1920 from 1728 wide, which explains why the footage was so soft... upscale + low-bitrate codec = blurry mess. The reason I think that the limitation might be technical is that ML on the 700D is also limited to 1728 wide in RAW, so I'm guessing that they just take every third pixel - the 700D sensor is 5184 wide, and 5184/3 = 1728. The M50 sensor is 6000 wide, so if they took every second pixel it would be 3000 wide, so 3K video upscaled. This is my guess as to what is happening.
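The line-skipping guess above is simple to sanity-check. This sketch just encodes the arithmetic; the sensor widths are the figures quoted in the post, and the every-Nth-pixel readout is my assumption, not confirmed Canon behaviour:

```python
# Sketch of the line-skipping guess: reading every Nth photosite gives an
# effective recording width of sensor_width // N, which the camera then
# scales to the output resolution.

def skipped_width(sensor_width, skip):
    """Effective horizontal resolution when sampling every `skip`-th pixel."""
    return sensor_width // skip

# 700D: 5184-wide sensor, every 3rd pixel -> 1728, upscaled to 1920.
print(skipped_width(5184, 3))  # → 1728

# M50 (hypothesised): 6000-wide sensor, every 2nd pixel -> 3000,
# i.e. ~3K that would then be upscaled.
print(skipped_width(6000, 2))  # → 3000
```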
  13. Wow, ok... Let me try to clarify. @IronFilm - you are right about one mic placed in the scene being superior. I believe this will change, but not for a long time (as you say, maybe decades - when I said 'tomorrow' I didn't mean in 24 hours' time, obviously). What I am saying is that even though it's not what this mic is designed for, it is a feature that it has. This feature will be useful to some people, and there will be early adopters. One of the early-adopter markets will be 360 degree filming. In narrative work, where every shot is carefully set up and people can be positioned offscreen, it makes sense to have a boom operator and a shotgun microphone. One of the challenges of these forums is that people are coming at film-making with such different perspectives that people get confused about what others are saying. I speak from the perspective of an amateur guerrilla run-and-gun film-maker who passively captures what occurs, rather than shaping what happens. I realise that I am by far the minority on here, but to compensate, I do try to be really obvious about where I'm coming from. I apologise if that wasn't obvious enough. @Kisaha it's funny to me because from my perspective I'm not mixing too many things together at all. I see film-making as an incredibly complex system of business, art, technology, logistics, people-management, and all the other things that go into the creation of a two-dimensional pre-recorded piece of art that is designed to play linearly over a finite period of time. It's always funny to me when someone who points a camera at something and records it, then edits it, then colour corrects it, then exports and distributes it for other people to watch, turns around and says that someone else who follows exactly the same process is doing something completely different.
This happens between fictional film-makers and documentary film-makers, those who shoot for profit and those who shoot for fun, those who want their audience to feel vs those who want their audience to think, etc etc etc. In fact these are all completely imaginary distinctions that people make up. We are all just people trying to achieve our goals the best ways we know how. Actually, what I want is to go to the zoo and have the whole thing recorded with a minimum of equipment getting in the way of the experience. To me, a 360 degree camera and a 360 degree microphone sounds like the perfect setup. However, the tech isn't there yet, and won't be for some time. As we've discussed, it needs 8K 360 cameras at least. In terms of what happens to the jobs of the hundreds of thousands of audio professionals, I would like them to live happy and fulfilling lives doing what they love to do, in fair and supportive environments. Technology is changing all the time and I'm just excited about talking about where it's headed. Isn't that why we're all here on the forums instead of making films or doing something else? Sorry if I offended anyone.
  14. This is the point of having an array of microphones to complement a 360 degree camera... When you shoot with a 360 camera the beauty is that you choose what angle to look at in post, so to match this you also want to be able to choose the angle you listen to in post as well. We are talking about a fundamental change in film-making. Today = choose what to record. Tomorrow = record everything and choose in post. [Edit, to be clear: I'm not saying that this usage is what the microphone is for, I'm saying that it could be used this way for the benefit of directional audio that is steerable in post]
  15. Yes and no. The entire purpose of the mic is to capture sounds in all directions, but to have each direction isolated so that each of the different directions can be processed differently in post. The normal use for this is to take all the noise to the front-left and put that in the front-left surround channel (etc) to make a surround mix; however, the other way you can use that functionality is to isolate the sound you want and remove the sounds you don't want. The Rode guy explains this in the below video at 1:42. On a broader note, digital processing is really transforming the relationship between what we capture and what we output. In the past we tried our hardest to capture exactly what we wanted to output, were able to do basic adjustments in post, and then shipped the final product. Now we are in an intermediary stage where we still try to control what we capture, but we're able to change it quite significantly in post before we output. Examples of this are that we shoot in 4K and crop in later, we shoot with green screen and comp in 3D work later, we shoot talent and do their make-up in post, we change the colours of things on set in post, etc. The future will move to a point where we capture huge amounts of data about almost everything that is happening and then, through very powerful processing, we chop most of it out to form the output - like shooting in 360 and cropping in to have an infinite number of camera angles, or like the article I linked to where they recorded a basketball game with an array of 325 microphones and then, using the same technology that is used in radio telescope arrays, processed the signal to isolate a single conversation between two people sitting next to the court, taking the sound levels of the crowd, the PA, the players, etc. all down to levels where the single conversation was audible. Angelview - yes, you're right, apologies. I agree, it's the post-processing that really provides the functionality.
Like I said above, more and more the output is created in post by sophisticated processing of the raw inputs.
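The "record everything, steer in post" idea described above rests on beamforming. A minimal delay-and-sum sketch, on entirely synthetic data with integer-sample delays (real arrays like the one in that article use far more sophisticated, fractional-delay processing):

```python
# Sketch of delay-and-sum beamforming: advance each channel by the delay
# that aligns it to the target source, then average. Sound from the target
# direction adds coherently; sound from elsewhere smears and attenuates.

def delay_and_sum(signals, delays):
    """Advance each channel by its (integer-sample) delay and average."""
    length = len(signals[0])
    out = []
    for n in range(length):
        acc = 0.0
        for sig, d in zip(signals, delays):
            idx = n + d  # advance this channel by d samples
            acc += sig[idx] if 0 <= idx < length else 0.0
        out.append(acc / len(signals))
    return out

# Two mics hear the same pulse, but mic 1 hears it 2 samples later
# (it is further from the source).
mic0 = [0, 0, 1, 0, 0, 0, 0]
mic1 = [0, 0, 0, 0, 1, 0, 0]

# Steering with the matching delays re-aligns the pulse at full strength.
print(delay_and_sum([mic0, mic1], delays=[0, 2]))
# → [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

With the wrong delays (i.e. listening in a different direction), the same pulse averages out to half strength spread across two samples - which is exactly the isolation-by-direction effect being discussed.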
  16. You make an excellent point about the speed of integrating these different pieces of software. Software can be structured to be easy to do these things, but it can be a lot of work and a large investment. To me this shows that Blackmagic have obviously put a huge amount of effort into a long-term strategy and there is some excellent leadership behind these changes. It also shows in things like the BMPCC 4K which on first impression looks like it's a wishlist from every indy filmmaker.
  17. My question would be how much isolation it is able to provide. As I said a few posts ago, I want to hear what the camera is looking at, and not what can't be seen. This is a challenge in audio. On full-frame, 18mm is 90 degrees horizontally, 50mm is 40 degrees, and 100mm is 20 degrees. If we assume that the -10dB point defines the 'angle of view' of a microphone, then the Rode NT5 is 200 degrees, the VMP+ is 160 degrees and the VideoMicro is 220 degrees. In comparison to lenses they're all super wide, which is why boom mics are often pointed vertically, as most sound sources are in a horizontal plane (except aircraft). With the right software and the right number of microphones this would be killer. https://www.wired.com/2010/10/super-microphone-picks-out-single-voice-in-a-crowded-stadium/
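The lens figures above can be checked with the standard rectilinear angle-of-view formula, assuming a full-frame sensor 36mm wide (the mic figures come from polar plots, so there's nothing equivalent to compute for them):

```python
# Horizontal angle of view for a rectilinear lens on a 36mm-wide sensor:
# AOV = 2 * atan(sensor_width / (2 * focal_length))

import math

def horizontal_aov(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (18, 50, 100):
    print(f"{f}mm: {horizontal_aov(f):.0f} degrees")
# → 18mm: 90, 50mm: 40, 100mm: 20
```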
  18. Just found out about the Shared Nodes feature (13:25 in the below video). TL;DR: You can turn any node into a "shared" node and then paste it into other node graphs however you like, but any change you make to that node is applied to all the node graphs where that node appears. This is incredibly useful as it means that you can basically have an unlimited number of groups. The most common way that I would apply it is like this: imagine you shoot a video with two cameras. You can have one node (or multiple nodes) shared across camera 1, and a different set across camera 2, so that the cameras are now matched. Then imagine you have two scenes that you want to colour differently - in my case it might be a night scene vs a day scene - so you create a node/nodes shared across scene 1 and another set shared across scene 2. This is far more flexible than the Groups functionality that was in v14, as with groups you could either group shots by camera or by scene but not by both, so the above wasn't possible except by manually copying grades. Killer!
  19. An 8k high DR 360 cam plus something like this would be fantastic!
  20. Absolutely, something I've looked into before. The term you're looking for is "angelcam", which comes from the fact it looks like there is an invisible floating camera that follows you around. It's not the zoom that's the problem. It's that if you turn up with this then people don't think you're making home videos. That's what I thought, until I took my 700D to film the kids go-karting and then got home and realised I could see the kids in the go-karts but the only thing I could hear was the people next to me in the viewing area bitching about politics and the latest thing they saw on facebook. I ordered my first directional microphone within an hour of getting home. I am as well, and I agree 8K is mandatory. 4K RAW would probably also be sufficient, but that's not likely to happen! I would go so far as to say that when a high-bitrate 8K 360 comes out there's a chance it might be all I'd need. I'd likely still want one channel of directional audio facing away from me and another facing towards me, but that's easy enough to set up. Being able to capture everything and then choose framing in post would be spectacular.
  21. Mirrorless for sure, but what kind? I see there being a few different form factors of cameras. There's the smartphone cameras. They get used naked, are practically invisible to bystanders, will have good image quality but don't zoom, and have shit (omnidirectional) sound. There's the cameras for convenience - these are the RX100s of the world. They'll get used naked, will have good image quality but mediocre sound. There's the cameras for pure image quality - these are the BMPCCs of the world. They deliver excellent results in all areas, but to do so they basically can't get used naked, and when rigged up are too big, unwieldy, and professional-looking for guerrilla film-making, the category that home videos fit into. Then there are cameras in between - these are the mirrorless ILCs and high-end handycams. They are practical and well-rounded, and might be augmented with an on-camera mic or maybe a handle of some kind. Most people on here are interested in IQ enough to rule out the naked smartphone category, and generally also rule out the convenience category because of the lack of good sound. Which leaves the cameras that require huge rigs, or the mirrorless ILCs. However, very few people here seem to be shooting guerrilla style, so the concerns of the cashed-up video-loving parent aren't taken into consideration, because much of the filming that we do of our kids is on private property (the zoo, the fair, the museums, etc) or public property like the park or the beach, where a big camera attracts too much attention. I just think it's interesting that there's a whole segment of the market that is huge, has tonnes of money, is interested in stealth / guerrilla film-making, and we basically have no visibility of what they're filming. They will be influencing how the manufacturers design, market and sell products, yet most of us don't even know they exist.
  22. I am! I've said before that I try to capture in a relatively neutral style and then I put in the work in post, and Resolve is my all-in-one. I'm excited by Fusion, more specifically for the stabilisation and the 3D titles. I'm really hoping that I can get better stabilisation than the Classic Stabiliser in v14, as sometimes I film in difficult situations and I need more stabilisation than Resolve currently supplies. The 3D titles will be cool, but I'm more hoping that combined with the tracker I'll be able to add 3D items into the 3D space I've filmed. I watched the 20 minute video from BM about Fusion and got lost about a third of the way through. It's like Resolve is a house and each tab is a doorway through to an array of controls that each rival the space shuttle!
  23. I agree, but I was more talking about the 10% that decided that a cheap camera wasn't enough. i.e. heaps of parents bought 5DmkIIIs - now they're taking video, I wonder what it is they'll be buying.
  24. kye

    NAB 2018

    My impression is that once you get a basic level of understanding about colour grading (I think I am around this point somewhere) then it's all about learning two things: how to create a desired look in anticipation or interpretation of what the client wants, and how to deal with problem footage. I have experienced problems with the latter and really struggled to know what I was looking at, why it looked awful, and what to do about it. If you hang out on the colourist forums (e.g. http://liftgammagain.com/) then you'll eventually run into a conversation where the pros talk about the problems of old, when they had to match a bunch of out-of-date film stock, or low-budget productions that saved money by shooting on the small bits of film stock that larger films would discard because they didn't want to start a take with so little film left on the roll. My understanding of things now is that if you shoot your film on one or two half-decent cameras, shoot in log, and don't make large exposure or white-balance mistakes, then you probably won't run into the kinds of problems that they are best at dealing with. As to whether they liked it, the general consensus was something like "it was awful and I'd never wish it on anyone but it paid the bills..."
  25. Hang on, your maths isn't correct. When we talk about "8-bit colour" we mean that every pixel has 8 bits of Red, 8 bits of Green and 8 bits of Blue - 256 levels per channel, or about 16.7 million possible colours per pixel. If it truly were 8 bits per pixel then each pixel could only be one of 256 colours, and we'd be back to the early days of colour VGA.
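The arithmetic behind the distinction is worth spelling out:

```python
# "8-bit colour" means 8 bits PER CHANNEL: 2**8 levels each for R, G and B,
# so (2**8)**3 total colours. Only 2**8 colours would be available if a
# whole pixel had just 8 bits (VGA-era palette territory).

levels_per_channel = 2 ** 8           # 256
colours_rgb = levels_per_channel ** 3  # 8 bits per channel, 3 channels
colours_8bit_pixel = 2 ** 8            # 8 bits for the whole pixel

print(colours_rgb)         # → 16777216 (~16.7 million)
print(colours_8bit_pixel)  # → 256
```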