Everything posted by kye
-
I was advocating for 1080p ProRes / DNxHD proxies, so that only requires a computer capable of 1080p ALL-I playback, which these days is almost all of them. If he wants to shoot 8K and master in 8K then good luck to him; it's the editing experience that is the question. Plus, who knows what kind of hype around 8K is present in the marketplace these days. Until clients work out that 8K is pretty well useless and doesn't improve IQ even if it can be broadcast, there might still be money in offering 8K as a service, and considering the state of film-making in 2020 I understand doing anything to gain a competitive edge.
-
Soju! I had my first soju experience only the other day, and it was quite delicious 🙂 I also like that the grapefruit one was featured - it was one of the ones that everyone liked the most!
-
The other challenge that @herein2020 was avoiding was that of incompatibility. My dad used to work for a large educational institution and ordered a custom-built PC to replace their main file server, so naturally he ordered the latest motherboard, CPU, RAM, RAID controller and drives. Long story short, two months after getting it he still hadn't managed to get an OS to install correctly, and neither had the other people on the hundred-page thread discussing the incompatibility. Multiple people in that thread verified that the manufacturers of the various components were all blaming each other for the problem and no-one was working on a solution, so my dad did what everyone else in the thread did and gave up. He was lucky enough to be able to lean on their wholesaler to take the equipment back for a full refund, but others weren't so lucky, and the thread continued for another year as various people swapped out parts for other models/brands to work out the best fully-functional system. Or you just buy something that someone else has already checked for you. There's a reason that many serious software packages are only supported on certain OS versions and certain hardware configurations: their clients value reliability and full-featured service and support over a 12% improvement in performance.
-
XMAS DOGGIES!! My only question is - what alcohol did you put in those little barrels? Brandy perhaps, for the holiday season? Whisky for a more straight-up drink? ...Tequila perhaps? 🙂 🙂 🙂
-
While the C70 has many advantages and attractive features, I think the above is probably the most important factor of any camera, because it impacts the level of creativity and inspiration in the video, not just what colour or clarity the pixels have in the output files. I consistently find that creativity evaporates when I feel like I'm fighting the equipment rather than it having my back and supporting the work. In the creative pursuits this is a night-and-day difference, but of course it's also different for everyone, so it's about the synergy between the camera and the user rather than one camera suiting everyone. Great stuff!
-
I thought quite a few folks here liked the images from the Z6, but maybe the timing wasn't right for people to actually get one. Certainly the colour from Fuji on their latest cameras is very nice, and the Eterna colour profile is particularly good.
-
Fair enough. Unfortunately, your budget isn't sized appropriately for the resolutions you're talking about. I think you have three paths forward:
1. Give up on the laptop and add a zero to your budget, making it $20,000 instead of $2,000, then go find where people are talking about things like multi-GPU watercooling setups, where to put the server racks, and how to run the cables to the control room
2. Do nothing and wait for Apple to release the 16" MBP with their new chipset in it (this could be a few years' wait though, and no guarantees about 8K)
3. Work with proxies
Proxies are the free option, at least in dollar terms, and you probably don't need to spend any money to get decent enough performance. I'd suggest rendering 1080p proxies in either ProRes HQ or DNxHD HQ. That format should be low enough resolution for a modest machine to work with acceptable performance, but high enough resolution and colour depth that you can do almost all your editing and image processing on the proxy files, and they will be a decent enough approximation of how the footage looks. Things like NR and texture effects would need to be adjusted while looking at the source footage directly, but apart from that you should be able to work with the proxy files, then just swap to the source files and render the project in whatever resolution you want to deliver and master in.
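If you'd rather batch-render the proxies outside your NLE, here's a minimal sketch using ffmpeg from Python (it assumes ffmpeg is installed and on your PATH, and the folder names are hypothetical); prores_ks profile 3 is ProRes 422 HQ:

```python
# Batch-render 1080p ProRes HQ proxies with ffmpeg.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("source")    # hypothetical folder of camera originals
PROXY_DIR = Path("proxies")
PROXY_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mov"):
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=1920:-2",   # downscale to 1080p, preserve aspect ratio
        "-c:v", "prores_ks",
        "-profile:v", "3",        # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",      # uncompressed audio scrubs cleanly
        str(PROXY_DIR / clip.name),
    ], check=True)
```

Most NLEs (Resolve included) can then relink the proxies to the originals by filename when it's time to render the final master.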
-
There are two ways to buy a computer for video editing. The first is to look at what performance you need and buy something that can deliver it, regardless of price. The second is to set a budget and get the most you can for it, accepting whatever level of performance that gives you and working around the limitations. $2000 isn't even in the same universe as the first option, so your only hope is to buy the best performance you can, and then work out the best proxy workflow for your NLE and situation. To get good editing and colour grading performance, your system needs to be capable of maybe 2-4 times (or more) the performance required to simply play the media you're editing. Even a simple cut requires your computer to load the next clip and skip to its in point; if it's IPB then it needs to seek back in the file to the previous keyframe, then render each frame from there forwards until it knows what the first frame on your timeline looks like, and it needs to do all that while still playing the previous clip. This doesn't include putting a grade on the clips once they're decoded, or having to process multiple frames for things like temporal NR, etc. Playing a file is one thing; editing is something else entirely. By the way, Hollywood films are regularly shot in 2.8K or 3.2K and processed and delivered in 2K, so trying to convince someone that you need an 8K workflow is basically saying you need 16 times the pixels of a multi-million dollar Hollywood film, so good luck with that. Most systems work just fine with 2K by the way....
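To make the IPB point concrete, here's a toy sketch; the GOP length and frame numbers are made up purely for illustration, and real codecs complicate this further with open GOPs and B-frame reordering:

```python
# Toy model of the decode work behind a simple cut: with IPB media the
# decoder must step back to the previous keyframe and decode forward to
# reach the in point; with ALL-I every frame is its own keyframe.

GOP_LENGTH = 30   # hypothetical frames between keyframes in an IPB file
ALL_I = 1         # ALL-I: keyframe interval of 1

def frames_decoded_for_seek(target_frame: int, gop: int) -> int:
    """Frames that must be decoded to display target_frame."""
    keyframe = (target_frame // gop) * gop  # nearest keyframe at/before target
    return target_frame - keyframe + 1

# Jumping to frame 1019, e.g. the in point of the next clip on the timeline:
print(frames_decoded_for_seek(1019, GOP_LENGTH))  # 30 frames of work
print(frames_decoded_for_seek(1019, ALL_I))       # 1 frame of work
```

And the computer has to do that burst of work while still playing out the tail of the previous clip in real time, which is why editing needs so much headroom over simple playback.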
-
For some time I've been thinking about the texture of film. I've also been thinking about the texture of RAW images, both 4K and 1080p, and about the texture of low-bitrate cheap digital camera images, and how much I don't like it.

Last night I watched Knives Out, which was very entertaining, but of note was that it was shot by Steve Yedlin ASC, in 2.8K RAW, and mastered in 2K. For those that aren't aware, Steve Yedlin is basically a genius, and his website takes on all the good topics like sensor size, colour science, resolution and others, and does so with A/B testing, logic and actual math. If someone disagrees with Steve, their work is cut out in convincing me that they know something Steve doesn't!

This inspired me to do some tests on processing images, with the goal being to create a nice timeless texture. Film has a nice analog but very imperfect feel, with grain (both the random noise grain and the grain size of the film stock itself, which controls resolution). Highly-compressed images from cheap cameras have a cheap and nasty texture, often called digititis, which is to be avoided where possible. RAW images don't feel analog, but they don't feel digital in a digititis way either. They're somewhere in-between, but in a super clean direction rather than having distortions: film has film grain, which isn't always viewed as a negative distortion, while highly-compressed digital has compression artefacts, which are always viewed as a negative distortion.

Here's the first test, which is based on taking a few random still images from the net and adding various blur and grain to see what we can do to change their texture. The images are 4-7K and offer varying levels of sharpness. The processing was a simple Gaussian Blur in Resolve, at 0.1 / 0.15 / 0.2 settings, plus film grain added to roughly match. On the exported file the 0.1 blur does basically nothing, the 0.15 blur is a little heavy-handed, and the 0.2 looks like 8mm film, so very stylised!

The video starts with each image zoomed in significantly, both so that you can see the original resolution in the file, and so that you can get a sense of how having extra resolution (by including more of the source file in the frame) changes the aesthetic. Interestingly, most of the images look quite analog when zoomed in a lot, which may have as much to do with the lens resolution and artefacts being exposed as with the resolution of the file itself. My impression of the zooming test is that the images start looking very retro (at 5X all their flaws are exposed) but transition to a very clean and digital aesthetic. The 0.15 blur seems to take that impression away, and with the film grain added it almost looks like an optical pull-out shot on film of a printed photograph. In a sense they start looking very analog, and at some point the blur I'm applying becomes the limiting factor, so the image doesn't progress beyond a certain level of 'digitalness'.

In the sections where I faded between the processed and unprocessed image, I found it interesting that the digitalness doesn't kick in until quite late in the fade, which shows the impact of blurring the image and compositing it on top of the unprocessed image, an alternate approach to blurring the source image directly. I think both are interesting strategies that can be used.
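For anyone who wants to replicate the basic recipe outside Resolve, here's a minimal sketch using Pillow and NumPy. Note that Resolve's Gaussian Blur settings (0.1 / 0.15 / 0.2) don't map directly onto a Pillow blur radius, so the values below are guesses to experiment with, and the filenames are hypothetical:

```python
# Approximate the blur + grain texture test: soften digital sharpness with a
# gaussian blur, then add monochrome gaussian noise as a crude film grain.
import numpy as np
from PIL import Image, ImageFilter

def film_texture(path, blur_radius=1.5, grain_strength=8.0):
    img = Image.open(path).convert("RGB")
    img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    arr = np.asarray(img).astype(np.float32)
    grain = np.random.normal(0.0, grain_strength, arr.shape[:2])
    arr += grain[..., np.newaxis]   # same grain on all channels, like film
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

film_texture("test_still.jpg").save("test_still_textured.jpg")
```

Real film grain varies over time and has structure at a particular size, so treat this purely as a starting point for stills tests.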
Now obviously I still need to do tests on footage I have shot, considering that I have footage across a range of cameras, including XC10 4K, GH5 4K, GH5 1080p, GoPro 1080p, iPhone 4K, and others. That'll be a future test, but I've played in this space before, trying to blur away sharpening/compression artefacts. There are limits to what you can do to 'clean up' a compressed file, but depending on how much you are willing to degrade the IQ, much is possible. For example, here are the graded and ungraded versions of the film I shot for the EOSHD cheap camera challenge 18 months ago. These were shot on the mighty Fujifilm J20 in glorious 640x480, or as I prefer to call it, 0.6K.... IIRC someone even commented on the nice highlight rolloff that the video had. All credit goes to the Fuji colour science 😂😂😂 Obviously I pulled out all the stops on that one, but it shows what is possible, and adding blur and grain was a huge part of improving an image that is surely several orders of magnitude worse than anything anyone is working with these days, unless you're making a film from 90s security camera footage or something.
-
I also don't get it, although I suspect this is more because my projects are too disorganised at the start (multiple cameras without timecode) and my process is too disorganised during the edit. One thing that might be useful for you, and which I use all the time, is the Source Tape viewer. It puts all the clips in the selected bin into the viewer in the order that they appear in the Media viewer (ie, you can sort however you like) and you can just scrub through the whole thing selecting in and out points and building a timeline. The alternative in the Edit page is having to select a clip in the media viewer, choose in and out points, add it to the timeline, then manually select the next clip. Having to manually select the next clip is a PITA, and I don't think you can do it via key combinations, so it's a mouse - keyboard - mouse - keyboard situation, rather than just hammering away on the keyboard making selects. The impression I got from the marketing materials and the discussion around launch was that the Cut page is for quick turnaround of simpler videos like 30s news spots, vlogs, or other videos that require a very fast turn-around. They even mentioned that the UI has been designed to work well on laptop screens, which further suggests that editing in the field for fast turn-around is a thing. Watching how YouTubers fall over themselves to 'report' from conferences like CES, Apple events, etc, trying to be first, comes to mind. If you were a YouTuber or 'reporter' who shoots in 709, does a single-camera interview / piece-to-camera, puts music and B-roll over the top, adds end cards (pre-rendered) and then hits export, it would be a huge step up in workflow. I suspect that it's just not made for you. I don't use multicams, Fusion, or even Fairlight, but the Cut page is still too simple for me.
-
Welcome to the DR club. Come on in, the water's fine! I think you have three main decisions in front of you: the first is whether you want to focus on the Cut or Edit page first, the second is whether you want to invest lots of time up front or learn as you go, and the third is whether you're going to use a hardware controller of some kind. All have pros and cons. I'm not sure how familiar you are with DR, so apologies if you already know this.

DR has two main edit pages, the Cut page and the Edit page. The Cut page is the all-in-one screen designed for fast turnaround; you can do everything inside it, including import - edit - mix - colour - export, but it's got limited functionality. The Edit page is fully-featured but is a bit cumbersome in terms of needing lots of key presses to get to functions etc. The Edit page also only focuses on editing, and is designed to be used in conjunction with the Fairlight, Colour and Delivery pages. I think BM have decided to leave the Edit page mostly alone and are really focusing on the Cut page. For example, their new small editing keyboard is targeted at the Cut page but apparently doesn't work that well with the Edit page, with buttons not working there, at least at the current time. I started using DR long before the Cut page existed, so I haven't gotten used to it yet, but if your projects are simpler then it might be worthwhile to get good at the Cut page to rough-out an edit and just jump into the Edit page for a few final things. If you're shooting larger, more complicated projects then it might be good to focus on the Edit page first. Eventually you'll need to learn both, as they each have advantages.

The other major decision is whether you take the time to watch decently long tutorials, taking notes along the way, or jump in and just do a few quick tutorials to get up and running. The first approach is probably better in the long run but more painful at the start. I suspect you're going to have three kinds of issues learning it:
1. Finding the basic controls which every NLE has
2. Finding the more advanced things that DR will have but won't name the same way, so they're hard to search for
3. Accomplishing things when DR goes about them a different way than what you're used to (eg, differences in workflow), which will be impossible to search for
Learning deeply / thoroughly at the start will cover all three, whereas learning as you go will leave the latter two subject to chance, and potentially leave you less productive for months or years. Plus it's pretty painful to go through the deep / thorough materials once you've already got the basics, as much will be repeated.

If you're getting a hardware controller, at least for editing, then that can steer your other choices. Like I said before, the new Speed Editor keyboard is designed to work with the Cut page, so that will steer you in that direction. The other reason I mention it is that it will give you clues about what things are called and the rationale behind wider workflow issues, especially if you watch a few tutorials on how to use it, as they will cover the specifics as well as at least nod in the direction of the concepts. If you're going to get a hardware controller then now is probably the time; you can always sell it again if it doesn't fit your workflow or you change to a different device. The Speed Editor is pretty cheap (in the context of edit controllers) so that might be worth considering.
Some general advice:
1. Make good use of the Keyboard Customisation - it has presets for other NLEs but is highly customisable, so use that to your advantage.
2. RTFM. Even just skim it. It is truly excellent, and is written by a professional writer who has won awards. I open it frequently, and keyword searching is very useful, assuming you know what stuff is called. Skim-reading it may answer a lot of more abstract / workflow-type questions too. It's long though - the current version is 2937 pages, and no, I'm not kidding!
3. Google searches often work - I didn't learn deeply and thoroughly when I started (as I didn't really know how to edit, so much of it would have been out of context), so I do random searches from time to time, and I often find other people are asking the same questions as me. This can help find things, or at least help you work out what stuff is called. Workflow searching unfortunately doesn't yield much, at least in my experience.
4. Ask questions here - EOSHD has a growing number of DR users, and even if we don't know, an experienced editor asking a question is useful to me as it gives away how pros do it, so I often research questions for myself as much as for others.

It seems so. 12.5 used to crash about 2-4 times per hour for me, but 16 basically hasn't crashed in weeks/months. I think it's probably related to your software / hardware environment.
-
Shooting music videos in a warzone... 2020 was a full-on year!!
-
Merry Christmas @Andrew Reid and everyone! Just think how great it will be once we're all able to get out and actually start shooting again!
-
True. If they'd designed an interchangeable lens mount then it could still have been an M12-based system, but having the thread permanently mounted to the camera wouldn't have worked. Much better would have been to mount a few metal female mounting points around the sensor and have some kind of assembly containing an M12 mount and lens that could be fastened to the board with the sensor on it. M12 is a reasonable mount because it's widely used in security / CCTV cameras, but the mounts are not designed to be re-used beyond the initial assembly of the product.
-
The problem is that the flange distance of the GoPro lens setup, which IIRC is M12, is very small, and the camera has a very narrow opening. This is a GoPro replacement lens - note that the entire lens is very narrow, essentially fitting inside the M12 mount (which has an internal thread). Most interesting lenses are much thicker, like this one, so they can't be mounted without major surgery to the GoPro electronics. However, if GoPro had designed it a little differently then it would have been pretty easy to make it so that other lenses could be used - they could even have sold them at horrifically inflated prices like every other product they sell.
-
A couple of years ago my wife and I did a trip to India with a charity. The trip consisted of a mixture of tourist stuff as well as visiting the beneficiaries of the charity in a bunch of rural villages. Due to a combination of factors I didn't film the people we visited in the villages, but I have a number of still photos that were taken, and am now editing the project and looking for advice on how to integrate these images as seamlessly as possible.

The rest of the project consists of footage of travel legs through cities and rural India, as well as tourist locations like the Taj Mahal etc, so my concern is that the images will be a bit jarring. But visiting the villages was a personal highlight of the trip, so if anything I want to emphasise those parts rather than de-emphasise them. The images I have are smartphone images and range from single images (like a group photo), to sequences of photos (how people will take 3 or 4 images of something happening), all the way to a few bursts of 20+ images of things like a guy operating a loom.

I'm immediately reminded of the work of Ken Burns, and will definitely animate some movement in the images, but I don't typically narrate my videos and I have very limited audio from these locations, so the images may be required to stand on their own, which I'm concerned about. I can probably cheat a little and find some audio from elsewhere and use that as ambience, which I've done before on other projects to cover gaps in my filming (lol). I also recorded audio when the women sang, so I have that too.

I'm thinking that I should embrace it and deliberately give the piece a more mixed-media feel, considering that I can make stop-motion from the sequences / bursts. I could even use stills instead of video in moments where I did shoot video, or go so far as pulling frames from video to use as stills, in order to 'balance' the amount of stills vs video and make it a consistent style element. Has anyone edited a project with a lot of still images, stop-motion, or other non-video elements? I'm keen to hear ideas....
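For the Ken Burns moves, this is the kind of thing that can be sketched with ffmpeg's zoompan filter outside the NLE (it assumes ffmpeg is installed; the filenames and zoom speed are just placeholders):

```python
# Render a slow, centred push-in on a still photo (a "Ken Burns" move).
import subprocess

def ken_burns(photo, out, frames=125, fps=25):
    zoompan = (
        f"zoompan=z='min(zoom+0.0015,1.5)'"             # slow push-in, capped at 1.5x
        f":x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'"   # keep the zoom centred
        f":d={frames}:s=1920x1080:fps={fps}"
    )
    subprocess.run(
        ["ffmpeg", "-i", photo, "-vf", zoompan,
         "-c:v", "libx264", "-pix_fmt", "yuv420p", out],
        check=True,
    )

ken_burns("village_group_photo.jpg", "village_group_photo.mp4")
```

Doing the moves in the NLE gives more control over easing and framing; a script like this is more for quickly previewing whether a given photo holds up with movement.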
-
So, to paraphrase...
1. In March, GoPro stock was down to 3% of its 2014 peak
2. It has since gone up from 3% to 10% of that peak
3. The rise was unexpected
In a conversation about a company's business model, where that business model hasn't really changed in the entire lifetime of the company, I would suggest that "down by 90% since 2014" is a more relevant figure than what they did in the last quarter, month, quarter-of-an-hour, etc. It's relatively easy to select any time period you like to prove a point. For example, in the last few days they lost over 8% of their value. At that rate I guess they'll be bankrupt by the new year. Unless, of course, you think I'm taking the most current data out of context and I should zoom out a bit? 🙂
-
The question "what sensor most matches film?" is about as useful as "what film matches digital?". If you're interested in using digital sensors to emulate film, then learn to colour grade, like I said. What is possible far outreaches what people think is possible.
-
Use of a single prime can have many advantages; this article from Noam Kroll is a good overview: https://noamkroll.com/many-iconic-directors-have-shot-their-feature-films-with-just-a-single-prime-lens-heres-why/ Of course, it has to fit with your situation, as @zerocool22 illustrates. I derived my choice of the 35mm FOV from my situation: I wanted environmental portraits while on holiday with people I know, so I'm typically filming between 1-3m away from the person, and I'm often not allowed to move around, such as in tour buses and boats. The FOV also creates a neutral point of view that is close enough to 'normal' that it doesn't create any obvious effect. I found a 50mm FOV to be too tight, as it tends to isolate at that distance, and a 28mm FOV isn't tight enough, because when you get someone large enough in frame they become too distorted. The idea of a 2x zoom is interesting. I've recently discovered that with the 5K GH5 sensor I can shoot 1080p and get a 5K-to-2K downscale, and if I use the 2x digital zoom I still get a 2.5K-to-2K downscale, so image quality is preserved and noise reduced compared to the 2.5x 1:1 crop (which is also a bit too much of a change to be so flexible). That effectively turns my 35mm FOV lens into a 70mm FOV lens too. It's a useful option, and for those that like a 24mm focal length it makes for a great 24-70 equivalent. Interesting to hear about your scheduling limitation @Anaconda_ - do you find that you can plan and operate on a weekly cycle with this schedule? What are the productivity implications you've found?
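The arithmetic behind that, roughly (the GH5's exact readout width varies by mode, so these numbers are approximate and just for illustration):

```python
# Rough oversampling math for the GH5 2x digital zoom trick.
SENSOR_WIDTH_PX = 5184   # approximate full-width sensor readout
OUTPUT_WIDTH_PX = 1920   # 1080p delivery

full_read = SENSOR_WIDTH_PX / OUTPUT_WIDTH_PX        # ~2.7x oversampled
zoom_2x = (SENSOR_WIDTH_PX / 2) / OUTPUT_WIDTH_PX    # ~1.35x oversampled
crop_1to1 = OUTPUT_WIDTH_PX / OUTPUT_WIDTH_PX        # 1.0x, no oversampling

print(f"full sensor:     {full_read:.2f}x -> 35mm-equivalent FOV")
print(f"2x digital zoom: {zoom_2x:.2f}x -> frames like a 70mm-equivalent")
print(f"2.5x 1:1 crop:   {crop_1to1:.2f}x -> no downscale, most visible noise")
```

So even at 2x zoom the image is still downscaled from more pixels than the delivery needs, which is where the noise reduction comes from.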
-
One of the things that I think amateurs miss about professional film-making is that it's a production line. There are standards for how each operator does their bit so that when it's passed to the next operator they know what to do with it and don't have to re-invent the wheel each time. These standards have been developed over many decades in order to get the best results for an acceptable time commitment. I think that's something that is really missing from people who think of film-making as a single-operator pursuit. Not only because you're not in the mind-set of working with others and thinking of their needs, but also in the sense that we can do anything and 'get away with it', because we're passing the footage we shot to ourselves to edit, then to ourselves to sound mix, then to ourselves to colour grade, then to ourselves to encode and deliver. As a single operator, if you do something a bit wrong and then get it into the edit, you're in a situation where you're having to work with what you have, and maybe you get frustrated and maybe you learn. In a team, if you do something a bit wrong you will get a pretty severe talking-to from the boss, and you will learn from that experience and probably never do it again. Netflix sets dozens of rules for their cameras; it's not just TC. These rules are there so that the chain of how team-based film-making works isn't completely screwed up by camera choice. Anyone who isn't familiar with the rules should go read them, and if there's a rule whose benefit you don't understand, then you should learn more about it, because these rules have been created by people who have done this successfully for a living for decades.
-
GoPros allow highly-skilled operators to make spectacular situations look great in the same way. Great cameras allow passable operators to make modest situations look very good in whatever way you want as an artist. I think GoPro missed an opportunity by not offering a model where the lenses could be interchanged, even if it was via a relatively delicate process that took some time and couldn't be done 'in the field'. I bought a Sony X3000 instead of a GoPro as it had OIS instead of EIS, which matters in low-light situations, and that will be replaced by the wide-angle camera in whatever smartphone I have at that point. GoPro filled a niche and didn't innovate. Not a recipe for the long-term.
-
Thanks, that's interesting. I'm not relishing the idea of having to buy it separately, but I guess it might be worth me doing some reading about custom view LUTs. Thanks, that's also worth knowing. If you can't easily switch it on and off, then it would have to be suitable as the only way you're viewing footage while recording. Normally a DP would have a technical view like false colour to ensure proper exposure, the DP and director (and others) would have a 'normal' view (perhaps with a look LUT), and the focus puller would have a separate view with peaking. If I applied a LUT in-camera then I'd be designing a LUT for all three applications simultaneously, which I'm not sure is possible to design well.
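On the custom view LUT side, the file format itself is trivial. Here's a minimal sketch that writes an identity-based .cube LUT with a simple contrast tweak; the curve and numbers are placeholders, and in practice a real monitoring LUT would be exported from your grading software:

```python
# Write a 33-point 3D LUT in .cube format with a basic contrast curve.
SIZE = 33  # a common 3D LUT grid size

def contrast(v, pivot=0.435, slope=1.2):
    """Simple contrast around a pivot, clamped to [0, 1]."""
    return min(max(pivot + (v - pivot) * slope, 0.0), 1.0)

with open("view_lut.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    # .cube ordering: red varies fastest, then green, then blue
    for b in range(SIZE):
        for g in range(SIZE):
            for r in range(SIZE):
                rgb = (r / (SIZE - 1), g / (SIZE - 1), b / (SIZE - 1))
                f.write(" ".join(f"{contrast(c):.6f}" for c in rgb) + "\n")
```

Whether a given camera accepts a particular .cube size, and whether it applies it to the monitor output only or bakes it into the recording, is camera-specific, so check the manual first.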
-
I recently re-watched a great video on the visual style of Alfonso Cuarón and his collaborator Emmanuel Lubezki, and in delving into this more deeply, was reminded how the pair imposed their own limitations when shooting Y Tu Mamá También, and how limitations can help focus the creative process while also keeping costs down and amplifying creativity. Who else does this in their own work? We likely all have limitations imposed externally, considering we're not world-famous with infinite time and money, but even for those operating within limitations, how many of you are either consciously shaping your process to fit within your limitations, or even imposing more when you don't need to, in order to simplify and increase creativity?

I have found that the limitations Cuarón/Lubezki impose fit well with my own. They shoot only wide-angle lenses, exclusively hand-hold the camera, use natural light, and feature the characters' relationships to each other and to their wider surroundings. Further, the camera movement is deliberate and has a 'character', and they use long takes at the climaxes to heighten the sense of reality. I shoot my family's travel and the occasional event, shooting hand-held with a 35mm prime (and only changing lenses when a specific shot calls for it), shoot only in available light, feature the moments of my family and friends interacting with each other and the environment we're in, and because I'm behind the camera and have a relationship with my family, my movement and framing take on that character.

It seems to me that there will be a range of famous directors, DoPs, cinematographers and other visual artists whose styles align well enough with your own preferences that you can learn a lot from them. Who else is studying the visual styles and processes of others to learn?
-
Smuggle in your own popcorn, and put your mobile phone on silent, but don't accept the defeat of cinema! RISE UP!