kye

Members · 7,924 posts

Everything posted by kye

  1. I remember GoPro saying that the 12/15p modes would be good as in-camera time-lapse modes, which didn't make that much sense to me to be honest. However, giving customers an option is often better than limiting what can be done just because many/most can't see a use for it, so there are two sides to the argument.
  2. Perhaps there's a way to rig up a gel that spaces it a bit from the light? It looked like it only burned in one place and if you can get a few inches between the gel and the light then you'll get a bit of cooling airflow over the gel. It might be a bit of a PITA but if the light has other advantages maybe it salvages it as an option for some situations?
  3. Could you mount the external monitor on the handle of the gimbal and run a (very flexible) cable to the camera? I've seen people running cables from fixed microphones or power banks to the camera on a gimbal before and they seem to work. 230g isn't much but every bit counts!
  4. Totally agree... the first time I saw the Fuji approach with separate dials I was amazed, and then baffled that other brands don't do the same. Can we start a petition of some kind?? I've also been totally pissed off in the past by cameras that don't allow Auto-ISO in manual mode, which means there's no way to control shutter speed and aperture yourself but still have the camera expose for you - like you would do in street photography!
  5. Werner Herzog talked about this as part of his masterclass too - that actors can't hit their marks properly and you have to work with them to help them understand that a few cm movement on a tight shot can ruin framing or focus. IIRC it was almost the only thing he used to judge actors - the class was full of blanket statements that I thought were quite strange. I also remember him saying that a DoP should know what framing a given lens would provide without needing to look at a monitor, and if they couldn't do that then they didn't know their stuff and you shouldn't work with them!
  6. Makes sense technically - the two uses are at odds in terms of the behaviour they want from the IS. For a photo mode you want the IS to zero out movement across the full extent of its range of travel. For video that results in a shot that is still, jerks suddenly in the direction of the pan, and then becomes still again. For video you instead want it to zero out movement over only a portion of its range of travel, and as the motion gets towards the edges of that range, start letting movement through to the frame. That would allow smooth pans (and transitions between stationary and moving shots) but would provide less stabilisation for stills shooters.
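A minimal sketch of the video-friendly behaviour described above. Everything here is illustrative (the function name, the 0.7 soft-zone threshold, the linear fade) - it's just one way such a response curve could work, not any manufacturer's actual algorithm:

```python
def video_is_correction(offset: float, soft_zone: float = 0.7) -> float:
    """Correction applied by a hypothetical video-mode IS, as a fraction
    of the stabiliser's mechanical range.

    `offset` is the correction needed to fully cancel motion, expressed
    as a fraction of the range (-1.0 .. 1.0). Inside `soft_zone` the IS
    cancels motion completely (frame stays locked). Beyond it, the
    correction fades out so the frame starts following the pan smoothly
    instead of hitting the end stop and jerking.
    """
    mag = abs(offset)
    if mag <= soft_zone:
        return offset  # full cancellation: frame stays locked
    # Fade the extra correction to zero as the mechanism nears its limit
    fade = max(0.0, (1.0 - mag) / (1.0 - soft_zone))
    extra = (mag - soft_zone) * fade
    sign = 1.0 if offset >= 0 else -1.0
    return sign * (soft_zone + extra)
```

A photo-mode IS would instead return `offset` clamped to the full range - maximum cancellation everywhere, which is exactly what produces the stop-jerk-stop look when panning in video.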
  7. Seems like a pretty fair overview of the camera, and interesting that it's more sized like the BMCC rather than original BMPCC.
  8. It's interesting to hear snippets like this as who knows where on the spectrum from is-that-all to I-didn't-even-think-that-was-possible-take-my-money it will end up. One of the natural things it could end up being is to essentially turn your smartphone and your monitor/recorder into the same device, which would be sweet - imagine having a monitor that supported apps from third-parties! Who knows what else it will do beyond that, but this board sure does like hype and not much information!!
  9. Everyone knows you need 8K RAW for flower videos.. I mean, they're flowers, right?
  10. I can't offer any answers, but in terms of reducing over-acting: I worked on a number of student films and getting actors from theatre was a common practice. I remember that everyone struggled with over-acting, but I can't remember anyone finding a solution. One thought though - I do remember that one problem was that these actors weren't aware of how much they were over-acting, and didn't understand when you tried to explain that you've got a tight shot and their face is filling a third of the frame. Perhaps it might work to have some practice sessions where you record a few lines, play them back to the actor (on a big TV!) and then talk about it, and repeat that process a number of times? At least that has a good chance of re-calibrating their sense of how what they are doing will appear in the final product. They will likely have been trained in theatre with "more, more, more" being the guidance offered - this is what you have to counteract. Or it may not work - just an idea. Good luck!
  11. Is it still a thing to try to buy your memory capacity as fewer larger sticks rather than more smaller ones? I haven't had a desktop PC in a long time, but I remember back in the day having to re-buy all my memory because I had 4 sticks of the same size (eg, 4 x 1GB) and no free slots, so to upgrade you had to go to bigger sticks (eg, 4 x 2GB) and then you've got all this old RAM you paid for and can't use..
  12. @OliKMIA You raise excellent points, however I still believe that "black box" testing as I've described above would still be useful. The same kind of testing would apply, but you'd have to re-test given firmware updates. It doesn't matter what the mechanisms are within the gimbal, it can be reduced to a "black box" and tested by providing a known input vibration and measuring the output vibration (which would ideally be zero above some cut-off frequency). In analog audio circuits there are two main parts of the circuit - the signal path and the power supply. The job of the signal path is to create an output signal as close as possible to the input signal but amplified (voltage and/or current amplification). The job of the power supply is to take the awful noisy mess the AC power from the power company normally is and make it a DC power source with zero AC on it, both at idle and during heavy amplifier loads. There are dozens / hundreds of designs for signal paths with varying architectures (global feedback / local feedback / zero feedback / Class-A / Class-AB / Class-D / MOSFETs / JFETS / pentodes / triodes / etc) and there are as many power supply designs (linear / regulated / passive filtering / active filtering / valve / solid-state / etc) but all of these can still be tested by looking at what they output with a given typical load. In fact, these don't even require the same testing signal to be applied for calibrated testing setups to create measurements that can be compared to each other. Everything I said above about audio applies to the analog components of video processing and broadcast as well, just at a higher bandwidth and with the video embedded on a carrier wave instead of 'raw' through the circuitry, but the principles remain. 
If an analog video signal path had a high-frequency rolloff, or the power supply was noisy or didn't have a low output impedance, it would result in visible degradation of the picture - something that the test pattern would ruthlessly reveal, which is why it was designed and is used.
  13. @tellure I agree that manufacturers probably have equipment that does this - it's a pity they can't (or won't) share their results with us! Of course, having all the setups calibrated is another whole thing; I'm just talking about a single test regime applied to all the gimbals. @capitanazo & @elgabogomez I agree that gimbals are more than just how well they stabilise - a lot matters in terms of ergonomics, features, how well the app works, etc. This is just one aspect, but a pretty important one! @elgabogomez I'd imagine that you'd need test setups for different weights, for example: smartphone, large smartphone, 500g, 1kg, 2kg, 3kg, 5kg, 8kg, etc. Of course you'd need a balanced version of each setup and an unbalanced one to test how well they do without a perfectly balanced load. You might also find that a gimbal performs worse with a load a lot lighter than its maximum, so you might want to test it near its maximum load and also at its minimum. I think there's a business opportunity here for a site that really reviews gimbals instead of the kind-of reviews being done now - perhaps the DxOMark of gimbals? I don't want to be that person, I just want that person to tell me the answers so I can buy the right device when I'm in the market for one! This thread is kind of an open letter to that person - please go ahead!!
  14. Are you saying that because it can't be perfect forever that it shouldn't be done at all? I guess we should stop testing cameras because no-one can test Mojo (TM) yet!
  15. Not at all. You simply need an arm that you can mount the handle on and that can output repeatable vibrations; then mount a camera (with a few different weight setups), record what comes out, and analyse the footage for how much motion came through. In a way it would be a device like an orbital sander, but where you can control the direction and amount of vibration. Think about music: it is infinitely complex, but that doesn't mean we don't have measurements for frequency response, distortion, etc. Light is hugely complex with infinite shades and colours, but we can measure devices in terms of DR, colour gamuts, etc. The testing method would be straightforward - set up and balance the gimbal, put it on the arm, hit record on the camera, and run the arm through several 'passes' where the vibration gets larger and larger; then you download the footage and analyse it for motion. You'd see that gimbal A eliminated all motion up until 7s in, but gimbal B made it to 11s in, or that gimbal C let through higher-frequency vibrations, etc. It's not simple, but it's not impossible. Edit: In order to test different camera setups, you might have a few weights and for each weight test a camera that's well balanced and then one that isn't (eg, it's front-heavy to simulate having a long lens). If gimbal A stabilised the off-balance setup better than gimbal B, you could assume that this difference in performance would translate to all off-balance setups, as this typically comes down to the strength of the motors. You could also test battery life under identical loads.
  16. We have standards for tonnes of things, why not gimbals? Specifically, for how well they stabilise? As far as I can tell, a gimbal is a physical device that receives vibrations from the handle and, through the three motors, forms a low-pass filter such that only large slow motions make it through to the camera. This should be easily testable via a test rig of some kind. I would expect a graph showing dB of attenuation across a range of frequencies on the three axes of motion. That way we'd be able to say things like: "gimbal X has better attenuation than gimbal Y up to vibrations of strength Z, but above that X runs out of steam and Y is better, therefore for fine work X > Y but for difficult environments Y > X" or "gimbal A has much better attenuation of higher frequencies than B or C or D, therefore if you plan on mounting it to a vehicle (which has the vibration frequency distribution shown in the graph below) you're better off with A". Instead, what we get is "I'm going to watch youtube videos where people compare two different gimbals by running with each in turn, thereby seeing how well each performs IN STABILISING A COMPLETELY DIFFERENT SET OF VIBRATIONS". Hardly the best way to compare devices costing hundreds or thousands of dollars.
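The black-box measurement being proposed could be sketched like this - compare the frequency content of the motion fed into the handle with the motion that reaches the camera, and report the difference in dB per frequency. This is a hypothetical sketch under my own assumptions (that you have equal-length motion traces for one axis, sampled at a known rate), not an existing test standard:

```python
import numpy as np

def attenuation_db(input_motion, output_motion, sample_rate):
    """Per-frequency attenuation of a 'black box' stabiliser on one axis.

    input_motion / output_motion: equal-length arrays of angular position
    (e.g. degrees) for the handle and the camera, sampled at sample_rate Hz.
    Returns (freqs, db): positive dB means the gimbal reduced motion at
    that frequency; ~0 dB means the motion passed straight through.
    """
    n = len(input_motion)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    in_mag = np.abs(np.fft.rfft(input_motion))
    out_mag = np.abs(np.fft.rfft(output_motion))
    eps = 1e-12  # avoid divide-by-zero on bins with no input energy
    db = 20.0 * np.log10((in_mag + eps) / (out_mag + eps))
    return freqs, db
```

Run the same input trace through gimbals A and B and you get exactly the comparable curves described above: attenuation vs frequency, per axis, per payload weight.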
  17. My experience has been discovering that I also can't grade, but I discovered that Resolve has built-in transforms for C-Log (and many others) and they look lovely to my tastes. I'd bet that the Pocket 4K will integrate beautifully with Resolve, and that simply using the recommended settings will produce nice images that you can then adjust to taste if you want to do something specific. If you're not a Resolve user then I'm not sure how easy it will be to get good results..?
  18. This is a topic I'll be visiting when the BM Pocket 4K comes out as if I buy the Pocket 4K I'll probably also be looking for a gimbal for it to make it my single camera setup. What kind of interest do we anticipate in the Ronin from DoPs / professional steadicam operators? I would imagine that Zhiyun wouldn't have a great name in industry, but my (outside) impression is that Ronin is a different story?
  19. In my head I think of the Pocket 4K as a kind of "Official ML". What I mean by this is that it will provide results in the same league as ML (eg, a 4K version of the 3.5K RAW in 5D3) but will be fully supported with documentation etc, will be fully functional (eg, having full realtime monitoring while recording) and will be running the hardware well within its spec (unlike ML, which is draining every last drop out of the hardware by pushing it to its limits, or past them in terms of overheating etc). I don't think it's better than ML, just different, because it's for a different purpose. I have full respect for the wizards behind ML.
  20. My favourite travel vlogger / film-maker just posted this gear video that might be useful for some people. Every time I go somewhere and shoot it I reflect on it afterwards and try to adapt my approach and equipment and I've found that over time my setup looks more and more like his setup. I wish I had been a bit more of a fanboi in this regard because I've bought a lot of gear that didn't end up working for me and eventually replaced it with things similar or identical to this setup and it's worked really well for me. I can second the Gorillapod, Rode VMP+, use of USB charging for as much stuff as possible, and use of clear bags for cables and whatnot. In watching the above he also mentions the 16-35 can turn into a 56mm with the A7SII crop mode switched on, which I'd forgotten. He's spoken in other videos about his lenses (he used to be a wedding film-maker so there's lots of gear videos on his other channel Wedding Film School) and IIRC he would shoot weddings with only the 16-35 and a 50mm prime because those lenses combined with the 1.6x crop mode gave him enough focal length options. He also shoots in 4K and delivers in 1080 so he can punch in digitally as well. It's interesting that he mentioned wanting to try his wireless lav mic for travel, which I think @IronFilm suggested at some point. It makes sense if you do lots of talking to camera stuff like Kraig does.
  21. Ok, figured it out. Here's how to key out the edges and apply any Resolve adjustments to just the non-edges. This will allow NR without blurring edges (or any other adjustments you want to make).
      1. In the Colour panel, create two nodes.
      2. Add the EdgeDetect OFX plugin to one of them to make it the "key" node (Node 1 below).
      3. Connect the video signal like this:
      4. Make the adjustments you want in Node 2 (eg, chroma NR).
      5. Adjust the settings in Node 1 to refine the mask that gets applied in Node 2 (I recommend adjusting Brightness to maximum to get a strong mask).
      The above setup excludes edges from being processed in the node, but it seems like you can also exclude other areas by adjusting the masks in Node 2 with qualifiers or power windows too, so you could (for instance) use the above to de-noise non-edge shadow areas by having the Qualifier in Node 2 exclude brighter areas of the frame. Enjoy! I wish I had worked this out before so I could do heavy NR processing without heavily blurring the video!!
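For anyone curious what that key-node setup is doing conceptually, here's a minimal numpy sketch of the same idea - denoise only where the edge mask is weak. The function name, the box-blur stand-in for NR, and the threshold are all illustrative assumptions, not Resolve's actual internals:

```python
import numpy as np

def denoise_except_edges(img, blur=2, edge_thresh=0.2):
    """Blur-based NR that spares edges, mimicking an edge-detect key node.

    img: 2-D float array in 0..1. Returns an array of the same shape
    where flat areas are blurred (denoised) and edges keep the
    original pixels.
    """
    # Simple box blur as a stand-in for a proper NR pass
    k = 2 * blur + 1
    padded = np.pad(img, blur, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    # Gradient-magnitude edge mask (the "EdgeDetect" key, roughly)
    gy, gx = np.gradient(img)
    edges = np.clip(np.hypot(gx, gy) / edge_thresh, 0.0, 1.0)
    # Edges keep the original; flat areas get the denoised version
    return edges * img + (1.0 - edges) * blurred
```

This is exactly the node graph in miniature: the mask decides, per pixel, how much of the "adjusted" signal gets through.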
  22. kye

    JVC LS300 in 2018

    Hang on a minute - I never agreed to that!! Show me evidence in writing!! **jumps up and down demanding evidence in true EOSHD style**
  23. I'd be interested to hear your impressions of the native noise that comes out of the camera vs the additional noise settings you've developed. To my eyes it looked nice (once you remove the chroma noise, of course). Considering your style it might even be something you could use for creative effect - shooting full manual at the ISO that gives you the right noise level and controlling exposure with a variable ND. @mercer I'm probably stating the obvious, but Resolve seems to have pretty good NR, which, if you put it in a separate node and use masks, you can make pretty targeted. On the video I shot on the GoPro in the club (with tonnes of ISO noise) I tried to run an NR node with a mask excluding the edges (as NR is basically blur) but I couldn't find a way to generate an edge mask. The Sharpen Edges OFX plugin detects them, but the mask channel output doesn't seem to work for some reason. I haven't looked into this fully but it would be a great way to do heavier NR without blurring the main edges of objects. Have you compared it to the "Force sizing to highest quality" option in the render settings? (It's under Video -> Advanced Settings in the Resolve Render tab.) I typically just turn this on when doing a final render but have never tested whether it makes any difference. I'm reminded of early Photoshop days when rescaling images often gave you options about which algorithm to use (nearest pixel, linear, bicubic, etc). My memory is that there were lots of different ways to scale images and they made a real difference in both quality and CPU time required.
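On the scaling-algorithm point, the difference between "nearest pixel" and "linear" is easy to see in a toy 1-D sketch (function names are my own; real scalers work in 2-D and bicubic adds a smoother kernel, but the principle is the same):

```python
import numpy as np

def upscale_nearest(row, factor):
    """Nearest-neighbour: each source sample is simply repeated,
    which is fast but produces blocky steps."""
    return np.repeat(row, factor)

def upscale_linear(row, factor):
    """Linear interpolation: new samples are blended between their
    neighbours, trading a little softness for smooth ramps."""
    n = len(row)
    x_new = np.linspace(0, n - 1, n * factor)
    return np.interp(x_new, np.arange(n), row)
```

Upscaling `[0, 1]` by 2x gives `[0, 0, 1, 1]` with nearest-neighbour but a smooth ramp with linear - which is the quality/CPU trade-off those old Photoshop dialogs were exposing.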
  24. kye

    JVC LS300 in 2018

    Your logic makes sense to me. If it's not the LS300 mk2 then perhaps another camera, but definitely something related.