Everything posted by kye

  1. If YT detects that you've used someone else's music then it applies the relevant rule from its database. Most songs get your video demonetised (which means the ad royalties go to the copyright holder of the music instead of you) but some songs are blacklisted, so your video is just blocked and no-one can view it. YT has a search engine here to inspect what the rules are: https://www.youtube.com/music_policies?ar=2&nv=1 TBH it's actually pretty draconian considering that you might plan, write, cast, shoot, edit, grade and publish a 90-minute video, but if you include something like 10s of someone else's music then they get the entire ad revenue and you get nothing. Then again, YT gives you about 0.3c per view, so you're basically not making much money anyway, which is why every video is now sponsored.
  2. Anyone know what the DR of the sensor is? The Canon film-making YouTubers have really struggled with its comparatively poor DR of late; one posted a series of videos from a trip that were clipped to hell, and evidently had a bunch of interactions with other manufacturers, because camera tests from those manufacturers followed quite closely.
  3. Thanks to a nice sunset, I took a few RAW photos that are interesting. Here's the first one - sunlight and shade on the fence (which is made of small sticks). The image looks fairly neutral, if perhaps a little warm. In reality the sticks are pretty neutral, showing the silvering that weathered wood tends towards. If we apply some OTT saturation for diagnostic purposes, we get this: That looks fine, not counting the over-saturation, and is well within the colour palette of cinema. If I shift the WB a little towards "neutral" and put some colour in the shadows then this is what we get. You'll note from the vectorscope that this has had a slightly non-linear effect (I did this adjustment with the Offset control in the first node, so I'm not sure why it would do that) but I don't think it takes us too far from reality: So, these are the natural colours created when the sun's light is scattered by the atmosphere, which is why sunsets are orange/red/purple and the sky is blue. The sun is a pretty good approximation of a full-spectrum black-body radiator: https://lco.global/spacebook/light/black-body-radiation/ https://en.wikipedia.org/wiki/Black-body_radiation There is a spectrum from Blue to Red that objects follow, depending on their colour temperature. Here's a chart showing the colour temperature of various stars, for example: As all of these are sources of broad-spectrum radiation they all act much the same in terms of rendering colour - this also includes incandescent light globes and fire. Here's the spectrum of an incandescent light globe: and I suspect that fire is broadly the same. We didn't really get differently coloured light sources until we started messing with chemistry and making things fluoresce, like this fluorescent light: So, the Orange / Teal look occurs in nature, and I presume we attach some kind of psychological sense of well-being to it from times we spent with other folks around a fire, being both warm and (relatively) safe. (The two standard formulas behind the black-body behaviour are sketched below.)
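For reference, the physics here is standard and is exactly what the links above describe: Planck's law gives the spectrum of a black body, and Wien's displacement law gives where that spectrum peaks. A minimal sketch in LaTeX:

```latex
% Planck's law: spectral radiance of a black body at temperature T
B_\lambda(T) = \frac{2hc^2}{\lambda^5} \cdot \frac{1}{e^{hc/(\lambda k_B T)} - 1}

% Wien's displacement law: the peak wavelength slides from red towards blue as T rises
\lambda_{\mathrm{max}} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K}
```

Plugging in T ≈ 5800K for the sun gives a peak around 500nm (blue-green), while a ~2700K incandescent globe peaks around 1070nm, deep in the infrared, which is why its visible output is dominated by the warm end of the spectrum.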
  4. Just did a couple of tests (8mm SLR Magic vs 7.5mm Laowa) and both wobbled a lot, although the 7.5mm wobbled a lot more. 7.5mm is only 6% wider than 8mm but it felt like there was more than 6% more wobble, although my test was very unscientific. Thinking about it more, Bill is right that it must be a comparison between a lens designed for a larger vs a smaller sensor; however, I think it might just be that wider lenses are more problematic due to the 3D projection onto a 2D plane. I don't have any APSC or FF lenses wide enough for a real comparison as all my wider lenses are MFT. It would be interesting if someone else could do the test though.
  5. Interesting question, and worthy of some testing. It will have to do with the level of distortion within the frame, but with all lenses there is a fundamental issue because the image projection is spherical and the sensor is flat, and it's worse on wider lenses - which I think is why wide lenses are either fisheyed or very stretched in the corners. Let me do some tests... (a back-of-the-envelope sketch of the geometry is below).
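Here's that back-of-the-envelope sketch in Python (standard library only). It assumes a rectilinear lens on an MFT sensor with a ~21.6mm diagonal: a rectilinear projection maps field angle theta to image radius r = f*tan(theta), so the radial stretching at the corner grows as sec^2(theta):

```python
import math

HALF_DIAGONAL_MM = 10.8  # half of the ~21.6mm MFT sensor diagonal (assumption)

def corner_stretch(focal_mm, half_diag_mm=HALF_DIAGONAL_MM):
    """Radial stretch factor at the corner, relative to the image centre."""
    theta = math.atan(half_diag_mm / focal_mm)  # field angle to the corner
    return 1.0 / math.cos(theta) ** 2           # d(f*tan(theta))/d(theta) = f*sec^2(theta)

for f in (8.0, 7.5):
    print(f"{f}mm: corner stretch ~{corner_stretch(f):.2f}x")
```

This gives roughly 2.8x for the 8mm and 3.1x for the 7.5mm - about 9% more corner stretching from a 6% wider lens - which would be consistent with the wobble growing faster than the focal length shrinks.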
  6. Great stuff! Well done. I also shoot family travel videos so it's always interesting to see the choices that others make - not too many people upload this type of content, so thanks for sharing!
  7. This is interesting to me, more from a nerd perspective than a desire to deliver in 8K, but interesting nonetheless. I think it would be really interesting to see real tests where an 8K sequence is downscaled to 4K or 2K, run through the upscaler, then compared mathematically to the original-resolution footage (a sketch of such a test is below). Obviously we don't perceive images exactly the way a straight mathematical comparison does, so subjective comparisons would also be useful, but comparisons can be made. I watched a comparison of some stills-image upscalers and the person reviewing them consistently chose the worst-quality ones because they smoothed the skin and did other things that no-skill retouchers love to do, so it would be nice to get more objective results.
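A minimal sketch of such a test in Python, assuming Pillow and NumPy are available; the file names and the 4x downscale factor are placeholders, and the upscaler under test slots in between the two steps. PSNR is the simplest mathematical comparison (SSIM would be a sensible second metric, as it's closer to perception):

```python
import numpy as np
from PIL import Image

def psnr(a, b):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

original = Image.open("frame_8k.png")  # hypothetical 8K source frame
small = original.resize((original.width // 4, original.height // 4),
                        Image.LANCZOS)  # downscale 8K -> 2K
small.save("frame_2k.png")

# ...run frame_2k.png through the upscaler under test, back up to 8K...
upscaled = Image.open("frame_8k_upscaled.png")  # hypothetical upscaler output

print(f"PSNR vs original: {psnr(np.array(original), np.array(upscaled)):.2f} dB")
```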
  8. Awesome! I can't watch it (title not available in my location) but great to see some distribution happening
  9. Not necessarily. I just tested my SanDisk Extreme Pro C10 V30 U3 95MB/s and it measured 85MB/s read and write, with a 5GB test file size and the card being almost full (which decreases performance - the BM app warned me that the test would be affected). 400Mbps is only about 50MB/s, which the card can easily do, but the reason the GH5 won't offer the 400Mbps All-I mode is that the card isn't UHS-II. I think that might be a limitation of how the GH5 writes to the card, but I'm not sure. It just happens that V60 is only found on UHS-II cards, so it's the right advice, but I bought my card on the basis that it could handle the bitrate, and it turns out there's more to it than that - on bitrate alone my card would be V60 comfortably and almost V90. (The arithmetic is below.)
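For reference, the arithmetic behind this - a trivial sketch; the V-class figures are the minimum sustained write speeds the Video Speed Class ratings guarantee:

```python
def mbps_to_MB_per_s(megabits_per_second):
    """Convert a video bitrate in Mbps to a write rate in MB/s."""
    return megabits_per_second / 8

V_CLASSES = {"V30": 30, "V60": 60, "V90": 90}  # minimum sustained write, MB/s

bitrate = 400  # GH5 All-I mode, Mbps
rate = mbps_to_MB_per_s(bitrate)
print(f"{bitrate}Mbps = {rate:.0f}MB/s")
for name, sustained in V_CLASSES.items():
    verdict = "guaranteed fast enough" if sustained >= rate else "not guaranteed"
    print(f"{name} ({sustained}MB/s sustained): {verdict}")
```

50MB/s sits above V30's guarantee but below V60's, which is why V60 is the safe recommendation even when a particular V30 card can burst faster.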
  10. I can't speak for Zak, but wide angle lenses have significant stretching on the edges/corners and when combined with IBIS (and potentially other kinds of stabilisation) it gives the edges a strange kind of warping effect, kind of like the jelly effects of rolling-shutter but in a different way. Depending on the motion involved it can look very very strange, and if you're doing pro work then it can absolutely ruin shots.
  11. Great looking stills. Nice work! Awesome to hear that your brother was behind FEOTFW - that was one of the shows I binge-watched when I first saw it; it was just a bit different from the normal narrative tropes and cookie-cutter plot lines. There seems to be a groundswell of comic adaptations to TV/cinema, so your brother may have many shows in his future. While I'm not a huge fan of the medium, I think that comic books have huge untapped potential in terms of narrative innovation. Hollywood has only had the ability to create worlds that don't function like our own (eg, aliens, outer space, different laws of physics, etc) for a few decades, but comics have had full creative control since the first cave paintings were made thousands of years ago, so the plot-lines are much more creative. They are also much more mature: instead of having a world like ours but with one twist (including the afterlife and ghosts, for example), you get a rich world where that twist has been fully integrated and all the ripples and implications are also present. Lots of potential there I believe!
  12. Footage is a tricky thing. For many of us our best footage is unavailable for some reason. For the pros amongst us there are contractual limitations, and there are professional reputations at stake when quickly posting anything. For people like me who shoot personal work of friends and family, it's more personal, and sharing our family videos is a bit odd - most YT people don't share video of their families, especially the children. So what's left is our second-best work, which is pretty difficult to share on forums such as this, partly because it's a film-making forum so the level of scrutiny is very high, but also because it's the internet, and these forums aren't immune to savage criticism from people who are all talk and don't actually shoot anything themselves. It's a tough thing.
  13. Just make sure it's a UHS-II card. I've got the SanDisk Extreme 90MB/s which can do the data rate easily but isn't UHS-II, so it won't do that mode.
  14. It's relatively easy to work out how much DoF your eyes have by bringing up a test chart on a laptop / tablet, putting your finger in the line of sight of the chart, focussing on your finger, and seeing which detail in the test chart is discernible and which isn't. It varies with focus distance of course, and also in low light when your pupil opens up, giving a larger aperture. It's not the crazy-shallow DoF of a 50mm at f/1.2 or anything, but the background defocus is real, and f/4 or f/5.6 is easily within the range of our vision, so should look natural. It's a good experiment to do so you're aware of it (a DoF calculator sketch is below if you want numbers).
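If you want to put numbers on the experiment, here's a sketch using the standard hyperfocal/DoF approximations. The circle-of-confusion value is an assumption (~0.015mm is a commonly quoted figure for MFT), and the 25mm f/4 example is illustrative rather than from any real test:

```python
def dof_limits(focal_mm, f_number, focus_mm, coc_mm=0.015):
    """Near/far limits of acceptable focus in mm, thin-lens approximation."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
    far = (focus_mm * (h - focal_mm) / (h - focus_mm)
           if focus_mm < h else float("inf"))
    return near, far

near, far = dof_limits(focal_mm=25, f_number=4.0, focus_mm=2000)
print(f"In focus from {near / 1000:.2f}m to {far / 1000:.2f}m")  # ~1.68m to ~2.47m
```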
  15. Soft in the corners isn't such a bad thing either. Lots of cine lenses have quite poor performance in the corners (even taking field curvature into account) and this only really matters when you put the subject in the corners of the frame. The eye is naturally drawn to things in focus (which is why DoPs use shallower DoF - people don't spend the entire movie looking at blurry backgrounds), so if your subject is nearer the centre then the softer edges will actually help draw attention towards the subject instead of away from it.
  16. This is what I used to think, but if we think about film stocks and how they have these tints (IIRC there were other film stocks that had a magenta/green tint instead of yellow/cyan) then here's the issue - every film ever shot on these stocks would have this look built in. And the problem with that is that films shot on film didn't look like they had only two colours, or that they were boring, or that they hid a lot. So, if film stocks had these things and didn't look like the POS that most orange/teal grades are, then I figure I must be missing something. Thus this thread. Maybe we're not missing anything and in the film days they just designed sets and lighting to compensate, but maybe not. I'm nowhere near done with this.
  17. Let's start with film emulation. Resolve comes with a bunch of Film Emulation LUTs; let's play with the Kodak 2383 D60 LUT. Firstly, here's a greyscale to see what it does to luminance: The first is unprocessed, and so has nothing on the vectorscope as there is no colour. The second clearly shows that there is a push towards orange in the highlights, and a push towards teal in the shadows. If we change to a smoother gradient and zoom into the vectorscope further (by saturating the image hugely), we get this: From this we can see that it saturates the upper and lower mids more than the highlights and shadows, and that it isn't a straight application of two hues - the hue varies. If we crop out the highlights and shadows in order to confirm which bits of the image are which bits in the vectorscope, then this is what we get: Which confirms that the 'hook' parts of the vectorscope were the highlights and shadows. So there are changes in saturation and also hue applied to greyscale tones - this is the OT look in this film stock.
      I suggested to the guys on LiftGammaGain that the film emulations in Resolve must be pretty good and was met with skepticism. However, if we assume that any lack of rigour on the part of the person creating these LUTs would tend towards making them overly simplistic rather than overly complex (a relatively safe assumption, but an assumption all the same), then that suggests this type of non-linear application of tint is likely present in real film stocks.
      So, what does this LUT do to colour? This is a very handy LUT stress-test image courtesy of TrueColor.us: The before image shows that the test image has hues in line with the primary reference markers on the vectorscope, and that all the lines are straight, indicating equal saturation across the image. The after image shows a number of interesting things:
      • The most saturated areas of the image have reduced saturation but the mid-levels of saturation are increased, giving a non-linear saturation response that tends to increase saturation in the image without clipping things - very nice, but not relevant to the OT look
      • In terms of relative saturation, Yellow is the most saturated, Cyan is next, Red and Magenta are in the middle of the range, and Blue and Green are the least saturated
      • In terms of Hue, Cyan got pushed towards blue a bit, Yellow got pushed a bit orange, Magenta got pushed a little red, and RGB seemed unaffected
      @Juan Melara made an excellent video re-creating this LUT (he did the D65 version, which has a warmer white-point, but the overall look should be the same). The video is interesting as he uses a range of techniques that apply OT elements to the image:
      • He uses a set of curves which apply a cooler tint to the shadows and a warmer tint to the highlights, and by having different curves for the Red and Green channels, gets some hue variation across that range too
      • Then there's a Hue vs Sat curve that saturates Yellow and Cyan above the other hues
      • Then a Hue vs Hue curve that pushes Yellow slightly towards Orange, Green towards Cyan, and Blue towards Cyan (up in the graph pushes colours left on the rainbow in the background of the chart)
      • Then he has two YUV nodes, each with a separate Key, that are too complicated to explain easily here, but which affect both the hue and saturation of the colours in the image
      Juan also has the Resolve PowerGrade available for download for free on his website, so check it out: https://juanmelara.com.au - it's in his store, but it's free.
      So film tends to have elements of OT. Here are some additional film emulation LUTs in Resolve for comparison - note that all of them have Yellow and Cyan as the most saturated primaries. (If you want to poke at a LUT yourself outside Resolve, there's a rough sketch below.)
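Here's that sketch: sample a grey ramp through a .cube 3D LUT in Python and print vectorscope-style Cb/Cr coordinates. The file name is hypothetical, the lookup is nearest-neighbour for brevity (Resolve interpolates properly between LUT nodes), and the Cb/Cr maths is plain BT.709:

```python
import numpy as np

def load_cube(path):
    """Parse a .cube 3D LUT into an (N, N, N, 3) array, indexed [b][g][r]."""
    size, rows = None, []
    for line in open(path):
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue
        if parts[0] == "LUT_3D_SIZE":
            size = int(parts[1])
        elif len(parts) == 3:
            try:
                rows.append([float(x) for x in parts])
            except ValueError:
                pass  # skip non-numeric metadata lines
    return np.array(rows).reshape(size, size, size, 3)  # red varies fastest

lut = load_cube("Kodak 2383 D60.cube")  # hypothetical path to the LUT file
n = lut.shape[0]
for grey in np.linspace(0.0, 1.0, 9):
    i = int(round(grey * (n - 1)))
    r, g, b = lut[i, i, i]  # grey input: all three indices are equal
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b     # BT.709 luma
    cb, cr = (b - y) / 1.8556, (r - y) / 1.5748  # BT.709 chroma
    print(f"in {grey:.2f} -> Cb {cb:+.3f}, Cr {cr:+.3f}")
```

Positive Cr with negative Cb means a push towards orange; the reverse means a push towards teal, so the highlight/shadow split described above should show up directly in the printout.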
  18. The Orange and Teal look is famous, and I've seen it in many different contexts; this thread is about trying to make sense of it - specifically, why we might do it, and then what the best ways to do it are. From what I can tell:
      • It simulates golden hour (where direct sunlight is warm and the shadows are cool, as they're lit by the blue sky)
      • It creates more contrast between skin-tones and darker background elements, making the people in the shot stand out more
      • It is also part of the look of (some) film stocks, so it's mixed up in the retro aesthetic, and of course the "cinematic footage" trope
      There might be other reasons to do it too. If you can think of some, please let me know. I've played with the look in the past and I just find that when I apply it to my footage it looks awful. Film, on the other hand, often looks wonderful with bright saturated colours, which I like a great deal. This probably means I'm not doing it right, and that's probably because I don't understand it sufficiently, thus this thread. As usual, more to come...
  19. No experience with that lens, but I agree with @TheRenaissanceMan about it covering some very useful focal lengths. You should be able to search places like Flickr to see many example images - being an L lens there might be some reviews bouncing around too. Even if it is "optically middling", maybe that's a good thing? Lots of people are fans of vintage glass because it takes the digital edge off modern cameras, especially those with internal sharpening and h.264 compression. Being an L lens it should still be really nice though - my 70-210 F4 FD lens (not L) sat lower down in the range, is quite old now, and is still lovely to use, so Canon has been putting out high-quality glass for a very long time.
  20. I haven't seen that one before... How hilarious!!
  21. Yes, although it's a tech board more than an art board, I'm assuming that's how Andrew wants it. My attempts to get people to shoot more were only mildly successful, but from my own perspective I am seeing enough shortcomings of what I shoot that I don't need other people to point out where I need to improve lol. I am intending to come back to a lot of the threads I've started but for the moment I'm just chilling for the holiday and enjoying relaxing. I am about to start a new thread on the Orange/Teal look and how it relates to film-making, how it relates to film, and how to do it in post, as I've been thinking a lot about that recently and have fallen down the rabbit-hole pretty far (eg, YUV channel mixer vs Hue v Sat vs Hue v Hue vs RGB curves... that far!).
  22. Dude, that was hilarious!! Nice work I think socks and sandals in the ocean was my favourite bit, but the pause for sub was also right up there. The effect of stabilising on your head worked really well actually, I'm sure it's 'been done' already but definitely looks cool.
  23. The way I see it is that they haven't died out at all: they're either being used for over capture (360 cameras are almost good enough for that now - I've started to see people using these in "real" productions; for example, check out Tiny House Nation on Netflix, which used a GoPro Fusion for getting multiple camera angles and action/reaction shots in the tiny houses they're filming in and around), or, in the case of 3D, they will "come back" because 3D is a technology looking for a use and we haven't worked out what it's really for yet. Even if 3D never becomes a publishing format and stays in the over capture space, AI will really benefit from having multiple cameras with varying amounts of parallax. For example, if you have a single 'normal' camera, plus a time-of-flight/normal camera pairing to the left and another pairing to the right, then you get excellent depth information, you have a second (or third) perspective for when objects are close to the camera and block one view, and you can also see slightly behind the subject. That information can be used to change perspectives slightly - 3D camera adjustment for offset stabilisation, 3D effects (I've seen Facebook images that use the portrait mode of the phone to create an image with multiple planes that animate when you scroll or tilt/rotate the phone), or even just seeing what's slightly behind the subject so you can blur it for better bokeh simulation. The possibilities are endless. If you're going to stay current then you're going to have to separate the capture format from the publishing format. Computational photography separates these by definition, by processing the input before it becomes the output.
  24. I agree with much of what @Oliver Daniel suggested, including IBIS and EIS taking over from gimbals. I think the technical solution for this might look very different though, but let me set some context from the other thread about how far we've come in the last decade. In the last ten years:
      • the 5Dii accidentally started the DSLR revolution
      • the BMCC gave RAW 2.5K to the humble masses
      • we got 3D consumer cameras
      • we got 360 consumer cameras
      • we got face-recognition AF, then specific-face-recognition AF, then eye AF
      • we got computational photography that used multiple exposures from one camera
      • we got computational photography that used multiple dedicated cameras (and other devices like time-of-flight sensors) and did so completely invisibly
      • we got digital re-lighting
      • we got the above four points in the most popular camera on the planet
      • we got 6K50 RAW
      So, from before the DSLR revolution to 6K50 RAW and AI in the most popular and accessible camera on the planet, all in one decade. I think we should take some cues from The Expanse and look at the floating camera that appears in Season 4. I think consumer cameras will move towards being smart and simple to use for the non-expert as phones continue to eat the entire photography market from the bottom up. They've basically killed the point-and-shoot and will continue to eat the low-end DSLR/MILC range and up. So I think we'll get more and more 360 3D setups by default, in an over capture situation where devices have cameras in the front/back/sides/top/bottom/corners, which will mean they can do EIS perfectly in post. That will eliminate all rotational instability, but not physical movement in any direction (the way gimbals bob up and down when people walk with them). This will be addressed via AI, as the device will be taking the image apart, processing it in 3D, then putting it back together again. It won't take much parallax adjustment in post to stabilise a few inches of travel when objects aren't that close to the device. We won't get floating cameras though - that would require some pretty significant advances in physics!
      This AI will enable computational DoF and other things, but I don't think these will matter much, as I think shallow DoF is a trend that will decline gradually. If this is surprising to you then I'd suggest you go watch the excellent Filmmaker IQ video about the history of DoF, and note that there was a period in cinematic history when everyone wanted deep DoF: once it became available through the tech, the cinematic leaders of the time used it to tell stories where the plot advanced in the foreground, mid-ground and background simultaneously, and everyone wanted it. The normal folks of today view shallow DoF (and nice colours and lighting design and composition, for that matter) as indicating something is fake, and it becomes associated with TV, movies, and large companies trying to PR you into loving them while they deforest the Amazon and poison your drinking water. The look of authenticity is the look of a smartphone, because that's where unscripted and unedited stories come from.
      The last decade started with the smartphone barely existing; now the smartphone is ubiquitous, and so developed that they basically all look the same. Camera phones basically didn't exist a decade ago; now we have had the first feature films shot on them by famous directors (publicity stunt or not). People over-estimate what can happen in a year, but under-estimate what can happen in a decade.