Everything posted by kye
-
I thought quite a few folks here liked the images from the Z6, but maybe the timing wasn't right for people to actually get one. Certainly, the colour from Fuji's latest cameras is very nice, and the Eterna profile especially so.
-
Fair enough. Unfortunately, your budget isn't sized appropriately for the resolutions you're talking about. I think you have three paths forward:

1. Give up on the laptop and add a zero to your budget, making it $20,000 instead of $2,000, then go find where people are talking about things like multi-GPU watercooling setups, where to put the server racks, and how to run the cables to the control room
2. Do nothing and wait for Apple to release the 16" MBP with their new chipset in it (this could be a few years' wait though, and no guarantees about 8K)
3. Work with proxies

Proxies are the free option, at least in dollar terms, and you probably don't need to spend any money to get decent enough performance. I'd suggest rendering 1080p proxies in either ProRes HQ or DNxHD HQ. This format should be low enough resolution for a modest machine to work with acceptable performance, but high enough resolution and colour depth that you can do almost all your editing and image processing on the proxy files, and they will be a decent enough approximation of how the footage looks. Things like NR and texture effects would need to be adjusted while looking at the source footage directly, but apart from that you should be able to work with the proxy files, then just swap to the source files and render the project in whatever resolution you want to deliver and master in.
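If it helps, here's a minimal batch-proxy sketch using ffmpeg from Python. The folder names are placeholders and it assumes ffmpeg is on your PATH; profile 3 in ffmpeg's prores_ks encoder corresponds to ProRes 422 HQ:

```python
# Minimal proxy-rendering sketch: 1080p ProRes HQ proxies via ffmpeg.
# Folder names are hypothetical -- adjust to your project layout.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("footage")
PROXY_DIR = Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mov"):
    out = PROXY_DIR / (clip.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=1920:-2",     # 1080p-wide, keep aspect, even height
        "-c:v", "prores_ks",        # ffmpeg's ProRes encoder
        "-profile:v", "3",          # profile 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",        # uncompressed audio for painless relinking
        str(out),
    ], check=True)
```

Point your NLE's proxy/offline media feature at the proxies folder and it should relink by name.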
-
There are two ways to buy a computer for video editing. The first is to look at what performance you need and buy something that can deliver that for you, regardless of price. The second is to set a budget and get the most you can for it, accepting whatever level of performance that gives you and working around the limitations. $2000 isn't even in the same universe as the first option, so your only hope is to buy the best performance you can and then work out the best proxy workflow for your NLE and situation.

To get good editing and colour grading performance, your system needs to be capable of maybe 2-4 times (or more) the performance required to simply play the media you're editing. Even a simple cut requires your computer to load the next clip and skip to its in point; if it's IPB footage, it needs to seek back in the file to the previous keyframe, then decode each frame from there forwards until it knows what the first frame on your timeline looks like, and it needs to do all of that while still playing the previous clip (see the sketch at the end of this post). This doesn't include putting a grade on the clips once they're decoded, or having to process multiple frames for things like temporal NR, etc. Playing a file is one thing; editing is something else entirely.

By the way, Hollywood films are regularly shot in 2.8K or 3.2K and processed and delivered in 2K, so trying to convince someone that you need an 8K workflow is basically saying you need 16 times the pixel count of a multi-million dollar Hollywood film, so good luck with that. Most systems work just fine with 2K, by the way...
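To make the IPB point concrete, here's a back-of-envelope sketch; the GOP length and in point are made-up numbers, just for illustration:

```python
# Rough illustration of why IPB footage is expensive to cut: to display an
# arbitrary frame, the decoder must rewind to the previous keyframe and decode
# every frame from there forward. All numbers here are made up.
GOP_LENGTH = 250     # one keyframe every 250 frames, e.g. ~10s at 25fps
in_point = 9437      # frame number of the cut's in point, arbitrary

frames_into_gop = in_point % GOP_LENGTH
frames_to_decode = frames_into_gop + 1  # keyframe + every frame up to the in point

print(f"To show frame {in_point}, decode {frames_to_decode} frames first")
# ...and the NLE has to do this while still playing the outgoing clip,
# which is where the 2-4x (or more) performance headroom gets used.
```

With all-intra codecs (or proxies in ProRes/DNxHD) that number is always 1, which is a big part of why they edit so much more smoothly.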
-
For some time I've been thinking about the texture of film. I've also been thinking about the texture of RAW images, both 4K and 1080p, and about the texture of low-bitrate cheap digital camera images, and how much I don't like it.

Last night I watched Knives Out, which was very entertaining, but of note was that it was shot by Steve Yedlin ASC, in 2.8K RAW, and mastered in 2K. For those that aren't aware, Steve Yedlin is basically a genius, and his website takes on all the good topics like sensor size, colour science, resolution, and others, and does so with A/B testing, logic, and actual math. If someone disagrees with Steve, they have their work cut out convincing me that they know something Steve doesn't!

This inspired me to do some tests on processing images, with the goal being to create a nice timeless texture. Film has a nice analog but very imperfect feel, with grain (both the random noise grain and the grain size of the film itself, which controls resolution). Highly-compressed images from cheap cameras have a cheap and nasty texture, often called digititis, which is to be avoided where possible. RAW images don't feel analog, but they don't feel digital in the digititis way either. They're somewhere in-between, but in a super-clean direction rather than having distortions: film has film grain, which isn't always viewed as a negative distortion, while highly-compressed digital has compression artefacts, which are always viewed as a negative distortion.

Here's the first test, which is based on taking a few random still images from the net and adding various amounts of blur and grain to see what we can do to change their texture. The images are 4-7K and offer varying levels of sharpness. The processing was a simple Gaussian Blur in Resolve, at 0.1 / 0.15 / 0.2 settings, plus film grain added to roughly match. On the exported file the 0.1 blur does basically nothing, the 0.15 blur is a little heavy-handed, and the 0.2 looks like 8mm film, so very stylised!

The video starts with each image zoomed in significantly, both so that you can see the original resolution in the file, and so that you can get a sense of how having extra resolution (by including more of the source file in the frame) changes the aesthetic. Interestingly, most of the images look quite analog when zoomed in a lot, which may be as much to do with the lens resolution and artefacts being exposed as with the resolution of the file itself. My impression of the zooming test is that the images start looking very retro (at 5X all their flaws are exposed) but transition to a very clean and digital aesthetic as they zoom out. The 0.15 blur seems to take that impression away, and with the film grain added it almost looks like an optical pull-out shot on film of a printed photograph. In a sense they start looking very analog, and at some point the blur I'm applying becomes the limiting factor, so the image doesn't progress beyond a certain level of 'digitalness'.

In the sections where I faded between the processed and unprocessed image, I found it interesting that the digitalness doesn't kick in until quite late in the fade, which shows the impact of blurring the image and putting it on top of the unprocessed image - an alternative approach to blurring the source image directly. I think both are interesting strategies that can be used.
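For anyone who wants to try something similar outside Resolve, here's a rough single-image version of the test in Python (Pillow + NumPy). The file name, blur radius, and grain strength are my own placeholder choices, and the radius doesn't map directly onto Resolve's 0.1 / 0.15 / 0.2 Gaussian Blur units:

```python
# Rough blur + grain texture test on a single still.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("test_still.jpg").convert("RGB")    # hypothetical input
blurred = img.filter(ImageFilter.GaussianBlur(radius=1.5))

arr = np.asarray(blurred).astype(np.float32)
grain = np.random.normal(0.0, 6.0, arr.shape[:2])    # monochrome grain, sigma in 8-bit units
arr += grain[..., None]                              # same grain on R, G and B
out = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
out.save("test_still_textured.jpg")
```

Real film grain is more complex than Gaussian noise (it varies with density), but even this crude version shifts the texture in the direction the test describes.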
Now obviously I still need to do tests on footage I have shot, considering that I have footage across a range of cameras, including XC10 4K, GH5 4K, GH5 1080p, GoPro 1080p, iPhone 4K, and others. That'll be a future test, but I've played in this space before, trying to blur away sharpening/compression artefacts. There are limits to what you can do to 'clean up' a compressed file, but depending on how much you are willing to degrade the IQ, much is possible. For example, here are the graded and ungraded versions of the film I shot for the EOSHD cheap camera challenge 18 months ago. These were shot on the mighty Fujifilm J20 in glorious 640x480, or as I prefer to call it, 0.6K.... IIRC someone even commented on the nice highlight rolloff that the video had. All credit goes to the Fuji colour science 😂😂😂 Obviously I pulled out all the stops on that one, but it shows what is possible, and adding blur and grain was a huge part of improving an image that is certainly several orders of magnitude worse than what anyone is working with these days, unless you're making a film using 90s security camera footage or something.
-
I also don't get it, although I suspect this is more because my projects are too disorganised at the start (multiple cameras without timecode) and my process is too disorganised during the edit.

One thing that might be useful for you, and which I use all the time, is the Source Tape viewer. It puts all the clips in the selected bin into the viewer in the order that they appear in the media pool (ie, you can sort however you like), and you can just scrub through the whole thing selecting in and out points and building a timeline. The alternative in the Edit page is having to select a clip in the media pool, choose in and out points, add it to the timeline, then manually select the next clip. Having to manually select the next clip is a PITA, and I don't think you can do it via key combinations, so it's a mouse - keyboard - mouse - keyboard situation, rather than just hammering away on the keyboard making selects.

The impression I got from the marketing materials and the discussion around launch was that the Cut page is for quick turnaround of simpler videos like 30s news spots, vlogs, or other videos that require a very fast turn-around. They even mentioned that the UI has been designed to work well on laptop screens, which further suggests that editing in the field for fast turn-around is a thing. Watching how YouTubers fall over themselves to 'report' from conferences like CES, Apple events, etc, trying to be first, comes to mind. If you were a YouTuber or 'reporter' who shoots in 709, does a single-camera interview / piece-to-camera, puts music and B-roll over the top, adds pre-rendered end cards, and then hits export, it would be a huge step up in workflow. I suspect that it's just not made for you. I don't use multicams, Fusion, or even Fairlight, but the Cut page is still too simple for me.
-
Welcome to the DR club. Come on in, the water's fine!

I think you have three main decisions in front of you: the first is whether you want to focus on the Cut or Edit page first, the second is whether you want to invest lots of time up front or learn as you go, and the third is whether you're going to use a hardware controller of some kind. All have pros and cons. I'm not sure how familiar you are with DR, so apologies if you already know this.

DR has two main edit pages, the Cut page and the Edit page. The Cut page is the all-in-one screen designed for fast turnaround, and you can do everything inside it, including import - edit - mix - colour - export, but it's got limited functionality. The Edit page is fully-featured but is a bit cumbersome in terms of needing lots of key presses to get to functions, etc. The Edit page also only focuses on editing, and is designed to be used in conjunction with the Fairlight, Colour, and Deliver pages. I think BM have decided to leave the Edit page mostly alone and are really focusing on the Cut page. For example, their new small editing keyboard is targeted at the Cut page but apparently doesn't work that well with the Edit page, with buttons not working there, at least at the current time. I started using DR long before the Cut page existed, so I haven't gotten used to it yet, but if your projects are simpler then it might be worthwhile to get good at the Cut page to rough out an edit and just jump into the Edit page for a few final things. If you're shooting larger, more complicated projects then it might be good to focus on the Edit page first. Eventually you'll need to learn both, as they each have advantages.

The other major decision is whether you take the time to watch decently long tutorials, taking notes along the way, or whether you want to jump in and just do a few quick tutorials to get up and running. The first approach is probably better in the long run but more painful at the start. I suspect you're going to have three kinds of issues learning it:

1. Finding the basic controls which every NLE has
2. Finding the more advanced things that DR will have but won't name the same, so they're hard to search for
3. Accomplishing things when DR goes about them a different way than what you're used to (eg, differences in workflow), which will be impossible to search for

Learning deeply / thoroughly at the start will cover all three, whereas learning as you go will leave the latter two subject to chance, and potentially leave you less productive for months or years. Plus it's pretty painful to go through the deep / thorough materials once you've already got the basics, as much will be repeated.

If you're getting a hardware controller, at least for editing, then that can steer your other choices. Like I said before, the new Speed Editor keyboard is designed to work with the Cut page, so that will steer you in that direction. The other reason I mention it is that tutorials on how to use it will give you clues about what things are called and the rationale behind wider workflow issues, as they cover the specifics as well as at least nod in the direction of the concepts. If you're going to get a hardware controller then now is probably the time; you can always sell it again if it doesn't fit your workflow or you change to a different device. The Speed Editor is pretty cheap (in the context of edit controllers), so that might be something worth considering.
Some general advice:

- Make good use of the Keyboard Customisation - it has presets for other NLEs but is highly customisable, so use that to your advantage.
- RTFM. Even just skim it. It is truly excellent, and is written by a professional writer who has won awards. I open it frequently, and keyword searching is very useful, assuming you know what stuff is called. Skim-reading it may answer a lot of more abstract / workflow-type questions too. It's long though - the current version is 2937 pages, and no, I'm not kidding!
- Google searches often work - I didn't learn deeply and thoroughly when I started (as I didn't really know how to edit, so much of it would have been out of context), so I do random searches from time to time, and I often find other people are asking the same questions as me. This can help find things, or at least help you with what stuff is called. Workflow searching unfortunately doesn't yield much, at least in my experience.
- Ask questions here. EOSHD has a growing number of DR users, and even if we don't know the answer, an experienced editor asking a question is useful to me as it gives away how pros do it, which helps me, so I often research questions for myself as much as for others.

It seems so. 12.5 used to crash about 2-4 times per hour for me, but 16 basically hasn't crashed in weeks/months. I think it's probably related to your software / hardware environment.
-
Shooting music videos in a warzone... 2020 was a full-on year!!
-
Merry Christmas @Andrew Reid and everyone! Just think how great it will be once we're all able to get out and actually start shooting again!
-
True. If they'd designed an interchangeable lens mount then it could still have been an M12-based system, but having the thread permanently mounted to the camera wouldn't have worked. Much better would have been to mount a few metal female mounting points around the sensor and have some kind of assembly containing an M12 mount and lens that could be fastened to the board with the sensor on it. M12 is a reasonable mount because it's widely used in security / CCTV cameras, but the mounts are not designed to be re-used beyond the initial assembly of the product.
-
The problem is that the flange distance of the GoPro lens setup, which IIRC is M12, is very small, and the camera has a very narrow opening. This is a GoPro replacement lens - note that the entire lens is very narrow, essentially fitting inside the M12 mount (which has an internal thread). Most interesting lenses are much thicker, like this one, so they can't be mounted without major surgery to the GoPro electronics. However, if GoPro had designed it a little differently then it would have been pretty easy to make it so that other lenses could be used - they could even have sold them at horrifically inflated prices like every other product they sell.
-
A couple of years ago my wife and I did a trip to India with a charity, and the trip consisted of a mixture of tourist stuff as well as visiting the beneficiaries of the charity in a bunch of rural villages. Due to a combination of factors I didn't film the people we visited in the villages, but I have a number of stills photos that were taken, and am now editing the project and looking for advice on how to integrate these images as seamlessly as possible. The rest of the project consists of footage of travel legs of cities and rural India, as well as the tourist locations like the Taj Mahal etc, so my concern is that the images will be a bit jarring, but visiting the villages was a personal highlight of the trip, so if anything I want to emphasise those parts rather than de-emphasise them.

The images I have are smartphone images and range from single images (like a group photo), to sequences of photos (how people will take 3 or 4 images of something happening), all the way to a few bursts of 20+ images of things like a guy operating a loom. I'm immediately reminded of the work of Ken Burns, and will definitely animate some movement in the images, but I don't typically narrate my videos and I have very limited audio from these locations, so the images may be required to stand on their own, which I'm concerned about. I can probably cheat a little and find some audio from elsewhere and use that as ambience, which I've done before on other projects to cover gaps in my filming (lol). I also took audio when the women sang, so I have that too.

I'm thinking that I should embrace it and deliberately give it a more mixed-media feel, considering that I can make stop-motion from the sequences / bursts, and I could even use stills instead of video in other moments where I did shoot video, or even go so far as pulling frames from video to use as stills, in order to kind of 'balance' the amount of stills vs video and make it a consistent style element. Has anyone edited a project where there were a lot of still images, stop-motion, or other non-video elements? I'm keen to hear ideas....
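On the stop-motion idea: turning a burst of stills into a clip is quick with ffmpeg. A rough sketch; the folder name, frame rate, and codec are my assumptions, and the glob pattern needs an ffmpeg build compiled with glob support (most macOS/Linux builds are):

```python
# Rough sketch: turn a burst of stills into a short stop-motion clip with ffmpeg.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "8",                      # 8 stills per second reads as stop-motion
    "-pattern_type", "glob",
    "-i", "loom_burst/*.jpg",               # hypothetical folder of burst images
    "-vf", "scale=1920:-2,format=yuv420p",  # 1080p-wide, even height, player-safe
    "-c:v", "libx264", "-crf", "16",
    "loom_stopmotion.mp4",
], check=True)
```

Varying the framerate per burst (say 6-12) gives each sequence its own rhythm, which might help sell it as a deliberate style element.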
-
So, to paraphrase:

- In March, GoPro stock was down to 3% of its 2014 peak
- It has since gone up from 3% to 10% of that peak
- The rise was unexpected

In a conversation about a company's business model, where that business model hasn't really changed in the entire lifetime of the company, I would suggest that "down by 90% since 2014" is a more relevant figure than what they did in the last quarter, month, quarter-of-an-hour, etc. It's relatively easy to select any time period you like to prove a point. For example, in the last few days they lost over 8% of their value. At that rate I guess they'll be bankrupt by the new year. Unless, of course, you think I'm taking the most current data out of context and I should zoom out a bit? 🙂
-
The question "what sensor most matches film?" is about as useful as "what film most matches digital?". If you're interested in using digital sensors to emulate film, then learn to colour grade, like I said. What is possible far exceeds what people think is possible.
-
Use of a single prime can have many advantages; this article from Noam Kroll is a good overview: https://noamkroll.com/many-iconic-directors-have-shot-their-feature-films-with-just-a-single-prime-lens-heres-why/

Of course, it has to fit with your situation, as @zerocool22 illustrates. I derived my choice of the 35mm FOV from my situation: I want environmental portraits while on holiday with people I know, so I'm typically filming 1-3m away from the person, and I'm often not allowed to move around, such as on tour buses and boats. The FOV also creates a neutral point of view that is close enough to 'normal' that it doesn't create any obvious effect. I found a 50mm FOV to be too tight as it tends to isolate the subject at that distance, and a 28mm FOV isn't tight enough because by the time you get someone large enough in the frame they become too distorted.

The idea of a 2x zoom is interesting. I've recently discovered that with the 5K GH5 sensor I can shoot 1080p and get a 5K-to-2K downscale, and if I use the 2x digital zoom I still get a 2.5K-to-2K downscale, so image quality is preserved and noise reduced, unlike the 2.5x 1:1 crop, which is also a bit too much of a jump to be so flexible. That turns my 35mm FOV lens into a 70mm FOV lens too (rough numbers in the sketch at the end of this post). It's a useful option, and for those that like the 24mm focal length a 24-70 range is a great choice.

Interesting to hear about your scheduling limitation @Anaconda_ - do you find that you can plan and operate on a weekly cycle with this schedule? What are the productivity implications you've found?
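As promised, the rough arithmetic behind those GH5 numbers. The sensor width is my approximation; the exact active width depends on the aspect ratio used:

```python
# Back-of-envelope for the GH5 1080p modes above (approximate numbers).
SENSOR_WIDTH = 5184    # ~5.2K photosites across the full sensor (approx.)
TARGET_WIDTH = 1920    # 1080p timeline

full = SENSOR_WIDTH / TARGET_WIDTH            # ~2.7x oversampled ("5K -> 2K")
zoom2x = (SENSOR_WIDTH / 2) / TARGET_WIDTH    # ~1.35x, still oversampled
crop25 = (SENSOR_WIDTH / 2.5) / TARGET_WIDTH  # ~1.08x, essentially 1:1

print(f"full: {full:.2f}x, 2x zoom: {zoom2x:.2f}x, 2.5x crop: {crop25:.2f}x")

# FOV: a 35mm-equivalent lens behaves like ~70mm-equivalent at 2x zoom,
# and ~87mm-equivalent at the 2.5x crop.
```

The point is that even at 2x zoom the readout is still wider than the 1080p target, so there's some downscaling (and noise averaging) left, whereas the 2.5x crop is basically pixel-for-pixel.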
-
One of the things that I think amateurs miss about professional film-making is that it's a production line. There are standards for how each operator does their bit so that when it's passed to the next operator they know what to do with it and don't have to re-invent the wheel each time. These standards have been developed over many decades in order to get the best results for an acceptable time commitment.

I think that's something that is really missing from people who think of film-making as a single-operator activity. Not only because you're not in the mindset of working with others and thinking of their needs, but also in the sense that we can do anything and 'get away with it', because we're passing the footage we shot to ourselves to edit, then to ourselves to sound mix, then to ourselves to colour grade, and then to ourselves to encode and deliver. As a single operator, if you do something a bit wrong and then get it into the edit, you're now in a situation where you're having to work with what you have, and maybe you get frustrated and maybe you learn. In a team, if you do something a bit wrong, you will get a pretty severe talking-to from the boss, and you will learn from that experience and probably never do it again.

Netflix sets dozens of rules for their cameras; it's not just TC. These rules exist so that the chain of how team-based film-making works isn't completely screwed up because of camera choice. Anyone who isn't familiar with the rules should go read them, and if there's a rule you don't understand the benefit of, then you should learn more about it, because these rules have been created by people who have done this successfully for a living for decades.
-
GoPros allow highly-skilled operators to make spectacular situations look great. Great cameras allow passable operators to make modest situations look very good in whatever way you want as an artist. I think GoPro missed an opportunity by not offering a model where the lenses could be interchanged, even if it was via a relatively delicate process that took some time and couldn't be done 'in the field'. I bought a Sony X3000 instead of a GoPro as it had OIS instead of EIS, which matters in low-light situations, and it will be replaced by the wide-angle camera in whatever smartphone I have at that point. GoPro filled a niche and didn't innovate. Not a recipe for the long term.
-
Thanks, that's interesting. I'm not relishing the idea of having to buy it separately, but I guess it might be worth me doing some reading about custom view LUTs. Thanks, that's also worth knowing. If you can't easily switch it on and off, then it would have to be suitable as the only way you're viewing footage while recording. Normally a DP would have a technical view like false colour to ensure proper exposure, the DP and director (and others) would have a 'normal' view (perhaps with a look LUT), and the focus puller would have a separate view with peaking. If I applied a LUT in-camera then I'd be designing a LUT for all three applications simultaneously, which I'm not sure is possible to design well.
-
I recently re-watched a great video on the visual style of Alfonso Cuarón and his collaborator Emmanuel Lubezki, and in delving into this more deeply, was reminded how the pair imposed their own limitations when shooting Y Tu Mamá También, and how limitations can help focus the creative process, keep costs down, and amplify creativity. Who else does this in their own work? We likely all have limitations imposed externally, considering we're not world-famous with infinite time and money, but even so, how many of you are either consciously shaping your process to fit within your limitations, or even imposing more when you don't need to, in order to simplify and increase creativity?

I have found that the limitations Cuarón and Lubezki impose fit well with my own. They shoot only wide-angle lenses, exclusively hand-hold the camera, use natural light, and feature the characters' relationships to each other and to their wider surroundings. Further, the camera movement is deliberate and has a 'character', and they use long takes at the climaxes to further the sense of reality of the situations. I shoot my family's travel and the occasional event, shooting hand-held with a 35mm prime (and only changing lenses when a specific shot calls for it), shoot only in available light, feature the moments of my family and friends interacting with each other and the environment we're in, and because I'm behind the camera and have a relationship with my family, my movement and framing take on that character.

It seems to me that there will be a range of famous directors, DoPs, and other visual artists whose styles align well enough with your own preferences that you can learn a lot from them. Who else is studying the visual styles and processes of others to learn?
-
Smuggle in your own popcorn, and put your mobile phone on silent, but don't accept the defeat of cinema! RISE UP!
-
Good question, and thanks for raising it. I've contemplated this option in the past and decided against it, because at the time I decided that being able to see colour was more important for framing and composition. I shoot personal videos of my friends and family in uncontrolled situations, so it's useful for me to see the colour so that I can include it in the shot or exclude it, depending on what I want. Having said that, with the freedom of colour grading I can 'fix' any undesirable colours in post, so even if I don't see them during filming it's not a problem for the final footage. In a sense I'd like a setting that's half-way, where saturation is reduced to perhaps a third and clipped highlights are shown in a fully-saturated colour. It would be great to be able to apply a display LUT in-camera (the GH5 doesn't have that feature), as I could design one that partly desaturates the image and also shows increased contrast, so that it exaggerates exposure and makes it easier to get the exposure of things like skintones correct. I'd also make it so that pure white was 100% red and pure black was 100% blue, so you could tell what you were clipping.
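That last idea is actually buildable today as a monitoring LUT for an external monitor, even if the GH5 itself can't load one. Here's a minimal sketch that writes such a LUT as a 3D .cube file: saturation cut to a third, pure white flagged red, pure black flagged blue. The thresholds and the Rec.709 luma weights are my own choices:

```python
# Minimal sketch: write a 3D monitoring LUT as a .cube file.
SIZE = 33  # common 33-point 3D LUT

with open("monitor_warn.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    for b in range(SIZE):          # .cube convention: red varies fastest
        for g in range(SIZE):
            for r in range(SIZE):
                R, G, B = (x / (SIZE - 1) for x in (r, g, b))
                if min(R, G, B) >= 0.99:       # pure white -> 100% red
                    R, G, B = 1.0, 0.0, 0.0
                elif max(R, G, B) <= 0.01:     # pure black -> 100% blue
                    R, G, B = 0.0, 0.0, 1.0
                else:
                    luma = 0.2126 * R + 0.7152 * G + 0.0722 * B
                    # keep a third of the saturation around the luma axis
                    R, G, B = (luma + (c - luma) / 3 for c in (R, G, B))
                f.write(f"{R:.6f} {G:.6f} {B:.6f}\n")
```

Most external monitors that accept user LUTs will load a file like this, so you could get the half-way desaturated view without touching the recorded image.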
-
Q: When will digital catch up to film?
A: When you learn to colour grade properly.

With a few notable exceptions (you know who you are), the colour grading skill level of the average film-maker talking about this topic online is terrible. Worse still, people don't even know enough to realise how little they actually know. I have been studying colour grading for years at this point, and I will be the first to admit that I have barely scratched the surface.

Here's another question: do you want your footage to look like a Super-8 home video from the 60s? I suspect not. That's not what people are actually looking for. Most people who want digital to look like film actually don't. Sure, there are a few people on a few projects who want to shoot digital and have the results look like film in order to emulate old footage, but mostly the question is a proxy for wanting nice images. Mostly they want to get results like Hollywood does.

Hollywood gets its high production value from spending money on production design. Production design is about location choice, set design, costume / hair / makeup, lighting design, blocking, haze, camera movement, and other things like that. If you point a film camera at a crappy-looking scene then you will get a crappy-looking scene. There's a reason that student films are mostly so cringe and so cheap-looking: they spent no money on production design because they had no money. Do you think that big-budget films would spend so much money if it didn't contribute to the final images?

I suggest this:

1. Think about how much money you'd be willing to spend on a camera that created gorgeous images for you, plus how much you'd spend on re-buying all your lenses, cages, monitors, and all the kit you would need
2. Think about how much time you'd be willing to invest in researching which camera that was, selling your existing equipment, working out what to buy for the new setup, learning how to use it, and learning how to process the footage
3. Take that money and spend half of it on training courses, and put the other half into shooting some test projects that you can learn from, so you can level up your abilities
4. Take the time you would have spent and do those courses and film those projects

People love camera tests, but they're mostly a waste of time. Stop thinking about camera tests and start thinking about production value tests. Take a room in your house, get one or two actors, hire them if you have to (you have a budget for this, remember) and get them to do a simple scene, perhaps only 3-6 lines of dialogue per actor. It should be super-short because you're going to dissect it dozens of times, maybe hundreds. Now experiment with lighting design and haze. Play with set design and set dressing. Do blocking and camera movement tests. Do focal length tests (not lens tests). Then do costume design, hair, and makeup tests. Take this progression into post, line the versions up, and compare. See which elements added the most production value.

But you're not done yet - you've created a great-looking scene but it is probably still dull. Now you have to play with the relationship between things like focal length / blocking / camera movement and the dramatic content of the scene.
Most people know that we go closer to show important details, and when the drama is highest, but what about in those moments between those peaks? Film the whole scene from every angle, every angle you can even think of, essentially getting 100% coverage.

Now your journey into editing begins. Start with continuity editing (if you don't know what that is, start by looking it up). You now have the ability to work with shot selection, and you should be using it to emphasise the dramatic content of the scene. Create at least a dozen edits, trying to make each one as different as possible. You can play with shot length, everything from the whole scene as one wide shot to a cut every second. You can cut between close-ups for the whole scene, or go between wides and close-ups. Go from wide to mid to close, then go straight from wide to close without the mid shots in between. What did you learn about the feel of these choices? What about choosing between the person talking and the person listening? What does an edit look like where you only see the person talking, or only the person listening? Which lines land better when you see the reaction shot? Play with L and J cuts.

Now we play with time. You have every angle, so you can add reverse angles to extend moments (like reality TV does), and you can use L and J cuts to play with cutting to the reaction shot from some other line. What about changing the sequence of the dialogue? Can you tell a different story with your existing footage? How many stories can you tell? Try to make a film with the least dialogue possible - how much of the dialogue can you remove? What about no dialogue at all - can you tell a story with just reaction shots? Can you make a silent film that still tells a story, showing people talking but without being able to hear them? Play with title cards like the old silent films - now you can have the actors "say" whatever you like - what stories can you tell with your footage?

Then sound design.... Then coaching of actors....

Now you've learned how to shoot a scene. What about combining two scenes? Think of how many combinations are now available - you can combine scenes with different locations, actors, times of day, seasons, scenarios, etc. Now three scenes. Now acts and story structure....

Great, now you're a good film-maker. You haven't gotten paid yet, so: career development, navigating the industry, business decisions, and commercial acumen. Do you know which films are saleable and which aren't? Have you worked out why Michael Bay is successful despite most film-makers being very critical of him and his film-making approach and style?

There's a saying about continuity: "people only notice continuity errors if your film is crap". Does it matter? Sure, but it's not the main critical success factor. Camera choice is the same.
-
Sony a7S III ... for a cinematic look/feel? Or look elsewhere?
kye replied to bonesandskin's topic in Cameras
I'm going to disagree with all the sentiments in this thread and recommend something different.

Go rent an Alexa. For practical purposes, maybe an Alexa Mini. Talk to your local rental houses and see if there's a timeframe where you can rent one at a big discount; rental houses are often happy to give you a discount if you're renting the camera when no-one else would be renting it, so have a chat with them. Shoot with it a lot. Shoot as much as you can and in as many situations as you can. Just get one lens with it, then take it out and shoot. Shoot in the various modes it has, shoot into the sun and away from it. Shoot indoors. Shoot high-key and shoot low-key. Then take the camera back and grade the footage.

I suspect you won't do this. It's expensive, and a cinema camera like an Alexa is a PITA unless you have used one before. So I'll skip to the end with what I think you'd find: the footage won't look great. The footage will remind you of footage from lesser cameras. You will wonder what happened and whether you're processing the footage correctly. I have never shot with an Alexa, but I am told by many pros that if you don't know what you're doing, Alexa footage will look just as much like a home video as footage from almost any other camera.

Cinematic is a word that doesn't really have any meaning in this context. It just means 'of the cinema', and there have probably been enough films shot on iPhones and shown in cinemas that an iPhone now technically qualifies as 'cinematic'. Yes, I'm being slightly tongue-in-cheek here, but the point remains that the word doesn't have any useful meaning. Yes, images that are shown in the cinema typically look spectacular. Most of this is location choice, set design, hair, costume, makeup, lighting, haze, blocking, and the many other things that go into creating the light that goes through the lens and into the camera. That doesn't mean the camera doesn't matter. We all have tastes, looks we like and looks we don't; it's just that the word 'cinematic' is about as useful as the word 'lovely' - we all know it when we see it, but we don't all agree on when that is.

Far more useful is to work out what aspects of image quality you are looking for:

- Do you like the look of film? If so, which film stocks?
- What resolution? Some people suggest that 1080p is the most cinematic, whereas others argue that film was much higher resolution than 4K or even 8K.
- What about colour? The Alexa has spectacular colour, and so does RED. But neither one will give you good colour easily, and neither will give you great colour - great colour requires great production design, great lighting, great camera colour science, and great colour grading. By the way, Canon also has great colour, so does Nikon, and other brands too. You don't hear photographers wishing their 5D or D800 had colour science like in the movies.
- What lenses do you like? Sharp? Softer? High-contrast? Low-contrast? What about chromatic aberration? And what about the corners - do you like a bit of vignetting or softness or field curvature? Bokeh shape? Dare I mention anamorphics?

But there is an alternative - it doesn't require learning what you like and how to get it, it doesn't require the careful weighing of priorities, and it's a safer option: buy an ARRI Alexa LF and a full set of Zeiss Master Primes. That way you will know that you have the most cinematic camera money can buy, and no-one could argue based on their preferences.
You still wouldn't get the images you're after, because the cinematic look requires an enormous team and hundreds of thousands of dollars (think about it - why would people pay for these things if they could get those images without all those people?), but there will be no doubt that you have the most cinematic camera that money can buy. I'd suggest Panavision, but they're the best cameras that money can't buy.
-
There was a hack to do it that I came across. From memory: save the project file (very important!), then select all clips on the timeline and cut them; now that the timeline is empty you can change the frame rate, then paste everything back again. I remember trying it and it working, so if the above doesn't work then let me know and I'll see if I wrote it down anywhere.
-
The Best Davinci Resolve Color Grading for Skintones Explanation Ever?
kye replied to herein2020's topic in Cameras
Great to hear you're upping your colour grading game and getting better results! I watched that video a long time ago and found it quite useful at the time.

The workflow you describe above is a pretty standard workflow in colour grading circles. In terms of how I believe it's normally discussed:

- Skintone exposure is typically set on location through a combination of camera settings and lighting / lighting modifiers
- Somehow the image gets converted to a 709 space (this can be done via a great many methods, depending on your preferred workflow, but a conversion or PFE LUT is fairly common - see the PS below for one way to automate this)
- The adjustments that get made to the whole image are referred to as primary adjustments, or "primaries"
- The adjustments that get made to parts of the image (for example via power windows or a key) are secondary adjustments, or "secondaries"

Colour grading can seem like a bit of a dark art, and in many ways it is, but it's definitely a case of the 80/20 rule, where you can put in a little effort and get a big reward in return. Here are a few videos that I've found useful that cover the basics but go a little further than Avery does in the above video... enjoy!

Great video from Wandering DP (excuse the clickbait title - it was done as a joke!). I can't speak highly enough of Wandering DP - his channel is full of cinematography breakdown videos where he talks about lighting and composition, and they are tremendously useful if you shoot your own content. Ironically, the above video of him talking about colour grading is better than most YT colour grading videos, despite the fact that he isn't a colourist, doesn't claim to be one, and this is the only video on his channel that talks about it! I have gone through a kind of mental 180-degree shift in how I think about shooting and colour grading over the last 6 months or so, and I think his videos have played a significant part in that transformation. My understanding of how to go about shooting and grading is now far simpler and clearer, I'm getting radically better results, and I'm not sure how I didn't understand this years ago, or how there is any other way to think about it at all!

And to take things up a notch, here's a video from Waqas, who is a professional colourist and obviously enormously talented. This video shows his approach, how he might make a commercial grade, and how he might make a cinematic grade. I can recommend his other videos too: although he has a standard approach that he likes to use (as all colourists tend to have), most of his videos have little details and tips that you can pick up, even if you've watched his other videos already. You'll also notice if you look at his channel that he recently did long interviews with the DP and the colourist of Joker.

The common theme between these two YT channels is that both of them are industry professionals, not internet professionals, so their frame of reference is how things are done on set, rather than the typical YT / vlog / buy-my-LUT / links-in-the-description folks who are all over YT pretending to know what they're doing. Good luck!
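PS: on the 709-conversion step - if you ever want to apply the same conversion LUT across a whole timeline, Resolve's built-in Python scripting can do it. A minimal sketch, assuming your build exposes TimelineItem.SetLUT as described in Blackmagic's scripting README; the LUT path is a placeholder:

```python
# Minimal sketch: apply a conversion LUT to node 1 of every clip on video track 1.
# Run from Resolve's Workspace > Console (Py3), or externally with the scripting
# module on your path.
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
timeline = project.GetCurrentTimeline()

LUT_PATH = "/path/to/conversion_to_709.cube"   # hypothetical conversion LUT

for item in timeline.GetItemListInTrack("video", 1):
    ok = item.SetLUT(1, LUT_PATH)              # node index 1 = first node
    print(item.GetName(), "->", "ok" if ok else "failed")
```

Handy when you've got hundreds of clips from the same camera and the conversion is identical on all of them; the primaries and secondaries still get done by hand afterwards.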