Everything posted by kye

  1. Awesome! I can't watch it (title not available in my location) but great to see some distribution happening
  2. Not necessarily. I just tested my SanDisk Extreme Pro C10 V30 U3 95MB/s and it measured 85MB/s read and write, with a 5GB test file and the card almost full (which decreases performance - the BM app warned me the test would be affected). 400Mbps is only about 50MB/s, which the card can easily do, but the reason the GH5 won't do the 400Mbps All-I mode is that the card isn't UHS-II. I think that might be a limitation of the GH5 and how it writes to the card, but I'm not sure. It just happens that V60 ratings are only found on UHS-II cards, so it's the right advice, but I bought my card on the basis that it could handle the bitrate, and it turns out there's more to it than that: on bitrate alone my card would be V60 comfortably and almost V90.
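For anyone wanting to sanity-check that arithmetic, here's a minimal Python sketch; the 85MB/s measurement and the 400Mbps mode are from the post above, and the V-class figures are the minimum sustained write speeds (30/60/90 MB/s) that the V30/V60/V90 ratings guarantee.

```python
# A minimal sketch of the bitrate arithmetic above: convert a recording bitrate
# in Mbps to MB/s and compare it against the minimum sustained write speeds
# guaranteed by the SD Video Speed Classes.

V_CLASS_MIN_MBYTES = {"V30": 30, "V60": 60, "V90": 90}  # minimum sustained write, MB/s

def mbps_to_mbytes_per_s(bitrate_mbps: float) -> float:
    """Convert a video bitrate in megabits/s to megabytes/s (8 bits per byte)."""
    return bitrate_mbps / 8

gh5_all_i_bitrate = 400                              # Mbps, GH5 400Mbps All-I mode
required = mbps_to_mbytes_per_s(gh5_all_i_bitrate)   # = 50 MB/s
measured_card_speed = 85                             # MB/s, from the BM Disk Speed Test

print(f"400Mbps All-I needs ~{required:.0f} MB/s sustained")
for v_class, minimum in V_CLASS_MIN_MBYTES.items():
    verdict = "meets" if measured_card_speed >= minimum else "falls short of"
    print(f"Card at {measured_card_speed} MB/s {verdict} {v_class} ({minimum} MB/s minimum)")

# The card comfortably exceeds the ~50 MB/s the mode needs, yet the GH5 still
# refuses the mode because the card is UHS-I rather than UHS-II.
```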
  3. I can't speak for Zak, but wide angle lenses have significant stretching on the edges/corners and when combined with IBIS (and potentially other kinds of stabilisation) it gives the edges a strange kind of warping effect, kind of like the jelly effects of rolling-shutter but in a different way. Depending on the motion involved it can look very very strange, and if you're doing pro work then it can absolutely ruin shots.
  4. Great looking stills. Nice work! Awesome to hear that your brother was behind FEOTFW - that was one of the shows I binge-watched when I first saw it; it was just a bit different from the usual narrative tropes and cookie-cutter plot lines. There seems to be a groundswell of comic adaptations to TV/cinema, so your brother may have many shows in his future. While I'm not a huge fan of the medium, I think comic books have a huge untapped potential in terms of narrative innovation. Hollywood has only had the ability to create worlds that don't function like our own (e.g. aliens, outer space, different laws of physics) for a few decades, but comics have had full creative control since the first cave paintings were made thousands of years ago, so the plot lines are much more creative. They are also much more mature: instead of a world like ours but with one twist (like including the afterlife and ghosts, for example), you get a rich world where that twist has been fully integrated and all the ripples and implications are also present. Lots of potential there, I believe!
  5. Footage is a tricky thing. For many of us our best footage is unavailable for some reason. For the pros amongst us there are contractual limitations, and there are professional reputations at stake when quickly posting anything. For people like me who shoot personal work of friends and family, it's more personal, and sharing our family videos is a bit odd - most YT people don't share video of their families, especially the children. So what's left is our second-best work, which is pretty difficult to share on forums such as this, partly because it's a film-making forum so the level of scrutiny is very high, but also because it's the internet, and these forums aren't immune to savage criticism from people who are all talk and don't actually shoot anything themselves. It's a tough thing.
  6. Just make sure it's a UHS-II card. I've got the SanDisk Extreme 90MB/s, which can do the data rate easily but isn't UHS-II, so it won't do that mode.
  7. [Lenses] It's relatively easy to work out how much DoF your eyes have: bring up a test chart on a laptop or tablet, put your finger in the line of sight to the chart, focus on your finger, and see which detail in the chart is discernible and which isn't. It varies with focus distance, of course, and also in low light when your pupil opens up to a larger aperture. It's not the crazy DoF of a 50mm at f/1.2 or anything, but the background defocus is real, and f/4 or f/5.6 is easily within the range of our vision, so it should look natural. It's a good experiment to do so you're aware of it.
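If you want to put rough numbers on that experiment, here's a minimal sketch of the standard thin-lens DoF formulas with assumed eye-like values (roughly 17mm effective focal length, a 4mm pupil, and a guessed circle of confusion) compared against a 50mm at f/1.2 - purely illustrative, not part of the experiment above.

```python
# A rough sketch of the standard DoF arithmetic applied to assumed eye-like
# numbers (~17mm focal length, 4mm pupil => about f/4.3) versus a 50mm at f/1.2.
# The circle-of-confusion values are guesses for illustration only.

def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm):
    """Near/far limits of acceptable focus using the thin-lens DoF formulas."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = (hyperfocal * focus_dist_mm) / (hyperfocal + (focus_dist_mm - focal_mm))
    if focus_dist_mm >= hyperfocal:
        far = float("inf")
    else:
        far = (hyperfocal * focus_dist_mm) / (hyperfocal - (focus_dist_mm - focal_mm))
    return near, far

focus = 500.0  # focusing on a finger ~50cm away

# Eye-like system: 17mm focal length, 4mm pupil => f-number ~4.3 (assumed values)
print("eye-ish:", depth_of_field(17, 17 / 4, focus, coc_mm=0.01))

# 50mm stills lens at f/1.2 (CoC ~0.03mm for full frame)
print("50/1.2 :", depth_of_field(50, 1.2, focus, coc_mm=0.03))
```

With these assumed numbers the eye-like system holds several centimetres of acceptable focus at half a metre, while the 50mm at f/1.2 holds only a few millimetres - consistent with f/4-5.6 looking "natural" to our vision.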
  8. [Lenses] Soft in the corners isn't such a bad thing either. Lots of cine lenses have quite poor performance in the corners (even taking into account field curvature), and it only really matters when you put the subject in the corners of the frame. The eye is naturally drawn to things in focus (which is why DOPs use shallower DoF - people don't spend the entire movie looking at blurry backgrounds), so if your subject is nearer the centre, the softer edges will actually help draw attention to the subject instead of away from it.
  9. This is what I used to think, but if we think about film stocks and how they have these tints (IIRC there were other film stocks that had a magenta/green tint instead of yellow/cyan), then here's the issue: every film ever shot on those stocks would have this look built in. And the problem with that is that films shot on film didn't look like they had only two colours, or that they were boring, or that they hid a lot. So, if film stocks had these tints and didn't look like the POS that most orange/teal grades have, then I figure I must be missing something. Thus this thread. Maybe we're not missing anything and in the film days they just designed sets and lighting to compensate, but maybe not. I'm nowhere near done with this.
  10. Let's start with film emulation. Resolve comes with a bunch of Film Emulation LUTs, so let's play with the Kodak 2383 D60 LUT. Firstly, a greyscale ramp shows what it does to luminance: the unprocessed frame has nothing on the vectorscope, as there is no colour, while the processed frame clearly shows a push towards orange in the highlights and a push towards teal in the shadows. If we change to a smoother gradient and zoom into the vectorscope further (by saturating the image hugely), we can see that it saturates the upper and lower mids more than the highlights and shadows, and that it isn't a straight application of two hues - the hue varies across the tonal range. Cropping out the highlights and shadows to confirm which bits of the image are which bits of the vectorscope shows that the 'hook' parts of the trace are the highlights and shadows. So there are changes in both saturation and hue applied to greyscale tones, and this is the OT look in this film stock. I suggested to the guys on LiftGammaGain that the film emulations in Resolve must be pretty good and was met with skepticism; however, if we assume that any lack of rigour on the part of whoever created these LUTs would tend towards making them overly simplistic rather than overly complex (a relatively safe assumption, but an assumption all the same), that suggests this kind of non-linear application of tint is likely present in the film stocks themselves.

So, what does this LUT do to colour? Testing with a very handy LUT stress-test image courtesy of TrueColor.us: the before image has hues in line with the primary reference markers on the vectorscope, and all the lines are straight, indicating equal saturation across the image. The after image shows a number of interesting things:

  • The most saturated areas of the image have reduced saturation while the mid-levels of saturation are increased, giving a non-linear saturation response that tends to increase saturation without clipping anything - very nice, but not relevant to the OT look
  • In terms of relative saturation, Yellow is the most saturated, Cyan is next, Red and Magenta are in the middle of the range, and Blue and Green are the least saturated
  • In terms of hue, Cyan got pushed towards Blue a bit, Yellow got pushed a bit towards Orange, Magenta got pushed a little towards Red, and R, G and B seemed unaffected

@Juan Melara made an excellent video re-creating this LUT (he did the D65 version, which has a warmer white-point, but the overall look should be the same). The video is interesting as he uses a range of techniques that apply OT elements to the image:

  • A set of curves that apply a cooler tint to the shadows and a warmer tint to the highlights; by having different curves for the Red and Green channels he gets some hue variation across that range too
  • A Hue vs Sat curve that saturates Yellow and Cyan above the other hues
  • A Hue vs Hue curve that pushes Yellow slightly towards Orange, Green towards Cyan, and Blue towards Cyan (up in the graph pushes colours left on the rainbow in the background of the chart)
  • Two YUV nodes, each with a separate Key, which are too complicated to explain easily here but affect both the hue and saturation of the colours in the image

Juan also has the Resolve PowerGrade available for download for free on his website, so check it out: https://juanmelara.com.au - it's in his store, but it's free.

So film tends to have elements of OT. Here are some additional film emulation LUTs in Resolve for comparison - note that all of them have Yellow and Cyan as the most saturated primaries.
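To make the greyscale/vectorscope part of that analysis more concrete, here's a minimal Python sketch of the same idea - not the Kodak 2383 LUT itself, just a toy warm-highlights/cool-shadows split-tone applied to a ramp and plotted on U/V axes. The tint strengths are made-up illustrative values; the YUV weights are the standard BT.601 ones.

```python
# A minimal sketch (not the actual 2383 LUT): apply a simple warm-highlights /
# cool-shadows split-tone to a greyscale ramp and look at where the result lands
# on a vectorscope-style U/V plot.
import numpy as np
import matplotlib.pyplot as plt

# Greyscale ramp, 0..1
ramp = np.linspace(0, 1, 256)
grey = np.stack([ramp, ramp, ramp], axis=-1)          # shape (256, 3), R=G=B

# Hypothetical split-tone: push shadows towards teal, highlights towards orange.
# Weighting by (1 - ramp) and ramp makes the tint tone-dependent; clipping at the
# ends produces the 'hook' shapes seen on the real scope.
teal   = np.array([-0.04, 0.01, 0.05])                # -R, +G, +B in the shadows
orange = np.array([ 0.05, 0.01, -0.04])               # +R, +G, -B in the highlights
toned = np.clip(grey + (1 - ramp)[:, None] * teal + ramp[:, None] * orange, 0, 1)

# Convert to Y'UV (BT.601 weights) to emulate what a vectorscope displays.
r, g, b = toned[:, 0], toned[:, 1], toned[:, 2]
u = -0.147 * r - 0.289 * g + 0.436 * b                # blue-difference axis
v =  0.615 * r - 0.515 * g - 0.100 * b                # red-difference axis

plt.scatter(u, v, c=ramp, cmap="gray", s=4)
plt.axhline(0, lw=0.5)
plt.axvline(0, lw=0.5)
plt.xlabel("U (towards blue/teal)")
plt.ylabel("V (towards red/orange)")
plt.title("Greyscale ramp after a toy warm/cool split-tone")
plt.show()
# The trace runs from the teal side (shadows) through neutral (mids) to the
# orange side (highlights) - the same general shape the vectorscope showed.
```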
  11. The Orange and Teal look is famous, and I've seen it in many different contexts, so this thread is about trying to make sense of it - specifically, why we might do it, and then what the best ways to do it are. From what I can tell:

  • It simulates golden hour (where direct sunlight is warm and the shadows are cool because they're lit by the blue sky)
  • It creates more contrast between skin tones and darker background elements, making the people in the shot stand out more
  • It is also part of the look of (some) film stocks, so it's mixed up in the retro aesthetic, and of course the "cinematic footage" trope

There might be other reasons to do it too - if you can think of some, please let me know. I've played with the look in the past and I just find that when I apply it to my footage it looks awful. Film, on the other hand, often looks wonderful with bright saturated colours, which I like a great deal. This probably means I'm not doing it right, and that's probably because I don't understand it well enough, hence this thread. As usual, more to come...
  12. [Lenses] No experience with that lens, but I agree with @TheRenaissanceMan about it covering some very useful focal lengths. You should be able to search places like Flickr to see many example images - being an L lens, there might be some reviews bouncing around too. Even if it is “optically middling”, maybe that's a good thing? Lots of people are fans of vintage glass because it takes the digital edge off modern cameras, especially those with internal sharpening and h.264 compression. Being an L lens it should still be really nice though - my 70-210 f/4 FD lens (not an L) sat lower in the range, is quite old now, and is still lovely to use, so Canon has been putting out high quality glass for a very long time.
  13. I haven't seen that one before... How hilarious!!
  14. Yes, although it's a tech board more than an art board, I'm assuming that's how Andrew wants it. My attempts to get people to shoot more were only mildly successful, but from my own perspective I am seeing enough shortcomings of what I shoot that I don't need other people to point out where I need to improve lol. I am intending to come back to a lot of the threads I've started but for the moment I'm just chilling for the holiday and enjoying relaxing. I am about to start a new thread on the Orange/Teal look and how it relates to film-making, how it relates to film, and how to do it in post, as I've been thinking a lot about that recently and have fallen down the rabbit-hole pretty far (eg, YUV channel mixer vs Hue v Sat vs Hue v Hue vs RGB curves... that far!).
  15. Dude, that was hilarious!! Nice work! I think socks and sandals in the ocean was my favourite bit, but the pause for sub was also right up there. The effect of stabilising on your head worked really well actually - I'm sure it's 'been done' already, but it definitely looks cool.
  16. The way I see it, they haven't died out at all. They're either being used for overcapture - 360 cameras are almost good enough for that now, and I've started to see them on "real" productions; for example, Tiny House Nation on Netflix used a GoPro Fusion to get multiple camera angles and action/reaction shots inside the tiny houses they film in and around - or, in the case of 3D, it will "come back" because it's a technology looking for a use and we haven't worked out what it's really for yet. Even if 3D never becomes a publishing format and stays in the overcapture space, AI will really benefit from having multiple cameras with varying amounts of parallax. For example, if you have a single 'normal' camera plus a time-of-flight/normal camera pairing to the left and another pairing to the right, you get excellent depth information, you have a second (or third) perspective for when objects close to the camera block one view, and you can also see slightly behind the subject. That information lets you shift the perspective slightly - 3D camera adjustment for offset stabilisation, or 3D effects (I've seen Facebook images that use the portrait mode of the phone to create an image with multiple planes that animate when you scroll or tilt/rotate the phone) - or even just to see what's slightly behind the subject so you can blur it for better bokeh simulation; the possibilities are endless. If you're going to stay current then you're going to have to separate the capture format from the publishing format. Computational photography separates these by definition, by processing the input before it becomes the output.
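To illustrate the depth-from-parallax idea, here's a minimal sketch using OpenCV's stock block-matching stereo. The file names are placeholders and the parameters are illustrative; a real multi-camera or time-of-flight rig would also need calibration and rectification, which is skipped here.

```python
# A minimal sketch of depth-from-parallax from a rectified stereo pair, using
# OpenCV's classic block matcher.
import cv2

# Load a rectified left/right pair as greyscale (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)               # 16-bit fixed point (4 fractional bits)
disparity_px = disparity.astype("float32") / 16.0

# Larger disparity = closer object. With a calibrated rig,
# depth = focal_length_px * baseline_m / disparity_px.
vis = cv2.normalize(disparity_px, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```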
  17. I agree with much of what @Oliver Daniel suggested, including IBIS and EIS taking over from gimbals. I think the technical solution might look very different though, so let me set some context from the other thread about how far we've come. In the last ten years:

  • the 5Dii accidentally started the DSLR revolution
  • the BMCC gave RAW 2.5K to the humble masses
  • we got 3D consumer cameras
  • we got 360 consumer cameras
  • we got face-recognition AF, then specific-face recognition AF, then eye AF
  • we got computational photography that used multiple exposures from one camera
  • we got computational photography that used multiple dedicated cameras (and other devices like time-of-flight sensors), and did so completely invisibly
  • we got digital re-lighting
  • we got the above four points in the most popular camera on the planet
  • we got 6K50 RAW

So, from before the DSLR revolution to 6K50 RAW and AI in the most popular and accessible camera on the planet, all in one decade. I think we should take some cues from The Expanse and look at the floating camera that appears in Season 4. I think consumer cameras will move towards being smart and simple to use for the non-expert, as phones continue to eat the entire photography market from the bottom up - they've basically killed the point-and-shoot and will keep eating the low-end DSLR/MILC range and up. So I think we'll get more and more 360 3D setups by default, in an overcapture situation with cameras on the front/back/sides/top/bottom/corners, which will mean they can do EIS perfectly in post. That will eliminate all rotational instability, but not physical movement in any direction (the way gimbals bob up and down when people walk with them). This will be addressed via AI, as the device will be taking the image apart, processing it in 3D, then putting it back together again. It won't take much parallax adjustment in post to stabilise a few inches of travel when objects aren't that close to the device. We won't get floating cameras though - that would require some pretty significant advances in physics! This AI will enable computational DoF and other things, but I don't think these will matter much, as I think shallow DoF is a trend that will decline gradually. If that's surprising, I'd suggest watching the excellent Filmmaker IQ video about the history of DoF, and take note that there was a period in cinematic history when everyone wanted deep DoF: once it became available through the tech, the cinematic leaders of the time used it to tell stories where the plot advanced in the foreground, mid-ground and background simultaneously, and everyone wanted it. Normal folks today view shallow DoF (and nice colours, lighting design and composition, for that matter) as a sign that something is fake, and it becomes associated with TV, movies, and large companies trying to PR you into loving them while they deforest the Amazon and poison your drinking water. The look of authenticity is the look of a smartphone, because that's where unscripted and unedited stories come from. The last decade started with barely a smartphone; now the smartphone is ubiquitous, and so developed that they basically all look the same. Camera phones basically didn't exist a decade ago; now we've had the first feature films shot on them by famous directors (publicity stunt or not). People over-estimate what can happen in a year, but underestimate what can happen in a decade.
  18. [Sirui anamorphic] I'd suggest that it wouldn't work so well for making such a large change in hue. The problem is the same as green-screening, except that this example has extremely soft edges, so getting the key right would be tricky and the edges would probably have strange halos. Take a few screenshots of one of the videos above and try it. I'd suggest pulling a key, blurring it vertically in one node, blurring it horizontally very heavily in the next node, and then changing the colour based on that. Not sure if PP or FCPX will let you do that workflow, but it's pretty easy in Resolve Studio.
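For anyone wanting to prototype that outside an NLE, here's a minimal sketch of the same key-blur-recolour chain in Python/OpenCV - it isn't the Resolve node tree, just an illustration; the target hue, blur sizes and hue shift are all made-up values.

```python
# A minimal sketch of the chain described above: pull a soft key on a target hue,
# blur the key (lightly vertically, heavily horizontally, as with anamorphic
# flares), then shift the hue only where the key is strong.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                       # hypothetical screenshot
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.float32)

# 1. Pull a key: how close each pixel's hue is to the target (OpenCV hue is 0-179).
target_hue, tolerance = 100.0, 20.0                   # e.g. a blue-ish flare
hue_dist = np.abs(hsv[..., 0] - target_hue)
hue_dist = np.minimum(hue_dist, 180 - hue_dist)       # hue wraps around
key = np.clip(1.0 - hue_dist / tolerance, 0, 1)       # 1 = fully keyed

# 2. Blur the key - a light vertical pass, then a heavy horizontal pass.
key = cv2.GaussianBlur(key, (1, 31), 0)
key = cv2.GaussianBlur(key, (151, 1), 0)

# 3. Shift hue where the key is strong (e.g. push the flare towards purple).
hue_shift = 40.0
hsv[..., 0] = (hsv[..., 0] + hue_shift * key) % 180
out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("frame_recoloured.png", out)
```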
  19. Well, ok, just a few....

  • You can adapt almost anything: it's flexible
  • Portable - this is my dual camera setup for travel; pic taken on my towel, on my lap, in the tour bus doing 80km/h on the beach
  • The files are crazy gradable - for example, if we take this image here and try to break it, we basically can't do it

...which means you're free to make whatever images you want.
  20. For me it's the GH5. Not perfect, but a real workhorse, and really ahead of its time. IBIS means I can shoot hand-held but not be limited to a hand-held look, 10-bit internal gives hugely flexible footage without needing an external recorder, 4K60 and VFR up to 1080p180 are great, the MFT mount makes adapting almost anything super easy, the flippy screen and EVF are really useful... the list goes on. That's my personal choice, but if I had to pick the most significant cameras in general it would be the BMPCC and BMCC, as I think they moved the goal posts forward by an enormous margin and many of the features we all enjoy today were inspired by that shift. Camera manufacturers have historically been very traditional, reluctant to give anything except the smallest possible improvement in newer models, and this shook up the industry a bit and showed there is room for new players to be profitable and innovative.
  21. I was thinking that too. Even a stereo pair placed above the camera mixed in would give some sense of space. The best solution would be to decode a 3D signal to binaural output based on your viewing angle, but do we know if the platforms support this kind of encoded sound, or if they just pump out a stereo audio track? I figure that we're in the infancy of this stuff, and it will get there, but not really sure how far the tech is along right now w.r.t audio. I do know that in terms of adjustability it's normally pretty rubbish. I have MobileVRStation for iOS and the number of options in it is ridiculous, and the number of options in most online streaming platforms is zero.
  22. Great intro video from Seven Dovey about 180 VR film-making. He talks about the gear, framing and compositions, sound, and touches on editing too. He's shot a few films in this format and they're coming soon apparently so that will be interesting.
  23. [Extreme telephoto work] Finished processing the two timelapses I took. Resolve doesn't read RW2 files from the GH5, so I had a couple of goes at processing them, but I discovered that the Adobe DNG Converter creates 16-bit DNG files, which keeps the bit-depth and DR of the RAW files - useful when you're clipping things this hard. Processing involved stabilisation, setting the black and white points, and a key of just the clipped areas, which I pushed towards orange so it wasn't digital white. I'm thinking I'll do one where I don't clip the sun and see how it looks, although I suspect everything else will just be black - I'd be happy to be proven wrong. In both of them the edges of the sun appear blocky and horribly compressed, but that's actually due to atmospheric effects; the RAW files are, well, RAW, and the pixel size is much smaller than the rather square-looking edges. I suspect it would make more sense visually with a higher frame rate (the GH5 can't do a time-lapse interval smaller than 1s, which is what these are). I'd try video, but I've zoomed in a bit on these, so the extra resolution seems quite useful. Suggestions welcome.
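As a rough illustration of the clipped-area key (outside Resolve), here's a minimal Python sketch using rawpy (LibRaw) to read one of the converted 16-bit DNGs, build a soft matte of the clipped pixels and tint them towards orange. The file name, clip threshold, blur radius and tint values are all assumptions for illustration.

```python
# A minimal sketch of the highlight-key idea above: read a converted 16-bit DNG,
# find the clipped pixels, and build a soft matte that drives an orange tint
# instead of leaving them digital white.
import numpy as np
import rawpy
import cv2

with rawpy.imread("frame_0001.dng") as raw:
    # Demosaic at 16 bits per channel so the full DR of the DNG is preserved.
    rgb = raw.postprocess(output_bps=16, no_auto_bright=True)   # uint16, HxWx3

# Key of just the clipped areas: any channel at (or very near) full scale.
clip_threshold = 0.99 * 65535
key = (rgb.max(axis=-1) >= clip_threshold).astype(np.float32)
key = cv2.GaussianBlur(key, (31, 31), 0)              # soften the matte edges

# Push the keyed areas towards orange rather than pure white (toy values).
orange = np.array([1.0, 0.72, 0.35])                  # R, G, B multipliers
tinted = rgb.astype(np.float32) * (1 + key[..., None] * (orange - 1))
tinted = np.clip(tinted, 0, 65535).astype(np.uint16)
cv2.imwrite("frame_0001_tinted.png", cv2.cvtColor(tinted, cv2.COLOR_RGB2BGR))
```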
  24. Thanks all. Looks like it's either a "real" video capture solution from BM or equivalent, a "real" monitor that I could also use for shooting, or a mains-powered TV or monitor of some kind. I wouldn't use a monitor that much in my normal shooting, but I would use it for shooting my kid's sports games, although how many more seasons he sticks with that is uncertain.