
Everything posted by KnightsFan
-
Agreed. The F4 is the only piece of my kit that I use daily, so it was worth every penny. But I have seen ridiculously cheap used H6s, and if you only rarely use it, or really want something handheld, it might be a good option.
-
I often record dialog for video games and sometimes voiceovers for film projects. I have a corner of my room lined on a couple of sides with thick wool sleeping bags (really dense and heavy). I record with an AKG CK93 running into a Zoom F4 used as a USB audio interface, usually with Reaper as the software. I monitor with MDR-7506s. It sounds great, and it's extremely budget-efficient since all of the components are things I use on set. Well within three figures if you look for used equipment.
- You could switch out the mic for a cheaper cardioid (or omni, if you have to), but stay away from shotguns indoors.
- The Zoom F4 can be swapped for a cheaper H6.
- You can use Audacity to record for free, though I highly recommend Reaper if you do any audio post work at all. It's phenomenal!
- You may want a pop filter. I haven't gotten one yet.
I would definitely get ambient sound to fill the silence if there is no other audio playing. You can just record ambient sound from the room where you are doing the VO. It will probably just be faint hiss, but it will remove that jarring factor of silence. If you go with laboratory sounds, you could record them in stereo; that way the mono VO will stand out better against the ambient sound.
-
Ah yeah, makes sense.
-
Z Cam releasing S35 6k 60fps camera this year
KnightsFan replied to Oliver Daniel's topic in Cameras
Yeah, I've been watching those developments closely. Price is still a mystery, but hopefully we'll hear within the next few weeks. I'm very excited, because the two main problems I have with the E2 are the smaller sensor and the low megapixel count. I'd like to have decently high resolution still images, and close to the native FOV with my vintage FF lenses. An S35 version with an M43 mount seems to be something they are looking into, though they seem more interested in EF at the moment. So maybe we'll finally have that spiritual successor to the LS300 as far as lenses are concerned.
-
Why don't you just go from FD to MFT? In general those FD lens -> EF camera adapters with glass will reduce quality so they aren't recommended unless absolutely necessary.
-
I do this all the time. With Nikon F to Canon EF adapters, you can use a flathead screwdriver to tighten the leaf springs before mounting the lens, so that the adapter has zero play. It is just as physically solid as if the Nikon lens were a native Canon lens.
-
There is not much visual difference between 24 and 25 if you stick to one or the other, but you should not mix and match them. Use 25 for PAL television, or 24 if you want to follow the film convention. 30 looks slightly different on scenes with a lot of motion.

I am almost 100% certain YouTube keeps the original frame rate. I think it can even handle variable frame rate, though I could be mistaken on that one. I would be very surprised if Vimeo converted, but I don't use Vimeo so I am not positive.

Yes, exactly, you can speed it up. If you speed a 30 fps clip up 2x, it will look a little jerky simply because it is essentially a 15 fps file now. If you speed it up by some odd interval, like 1.26x, then there will be frame blending, and that will probably look pretty bad depending on the content of your shot (a static shot with nothing moving won't have any issues with frame blending, whereas a handheld shot walking down a road will look really bad).

Technically, yes, you can do that. If you want your final product to be a 48 fps file, and you are sure such a file is compatible with your release platform(s), then it should work. I think it's a phenomenal idea to try as an experiment, but definitely test it thoroughly before doing this on a serious/paid project; there is a very good chance it will not look the way you expect if you are still figuring stuff out. Also, for any project, only do this if you want the large majority of your 30 fps clips sped up. If you want MOST to be normal speed and a COUPLE to be sped up, then use a 30 fps timeline and speed up those couple of clips.

If I were you, I'd go out one weekend and shoot a bunch of stuff in different frame rates, then try out different ways of editing. Just have fun, and try all the things we tell you not to do and see what you think of our advice!
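A minimal sketch of that speed-change logic in Python (purely illustrative, not any NLE's actual API or behavior):

```python
# Minimal sketch: when does a speed-up on a same-frame-rate timeline
# need frame blending?

def needs_frame_blending(speed_factor, tolerance=1e-9):
    """Integer factors (2x, 3x, ...) step through the source in whole frames,
    so the editor can simply drop frames. Non-integer factors (1.26x, 1.5x, ...)
    ask for in-between frames, forcing blending or optical-flow interpolation."""
    return abs(speed_factor - round(speed_factor)) > tolerance

for factor in (2.0, 1.26, 1.5):
    print(f"{factor}x speed-up -> blending needed: {needs_frame_blending(factor)}")

# A 2x speed-up just drops every other frame: no blending, but each remaining
# frame covers twice as much real-world motion, which is the "essentially a
# 15 fps file" jerkiness described above.
```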
-
Totally agree, that's one reason it's easier to conform rather than manually slow down. You conform TO the desired frame rate, so you never have to wonder whether you need to slow by 50 or 40 percent.
-
Not a silly question at all. Basically it just means telling your editing software to play the frames in the file at a different rate. Example: if you shoot for 2 seconds at 60 fps, you have 120 frames total. If you conform to 30 fps, it is like telling the software that you actually shot at 30 fps for 4 seconds to get those 120 frames. Now, as far as the software is concerned, it's a 30 fps file (which happens to look slowed down). So for slow motion, I find it easiest to select all the high frame rate files in your bin before putting them on a timeline, and conform them to the timeline frame rate. Then when you drag them onto the timeline, they will be slowed down automatically.
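Here is a tiny Python sketch of that conform arithmetic (the function name and structure are just illustrative, not any NLE's API):

```python
# Conforming reinterprets the same frames at a new playback rate.

def conform(total_frames, shot_fps, timeline_fps):
    real_duration = total_frames / shot_fps          # seconds of action captured
    playback_duration = total_frames / timeline_fps  # seconds it occupies on the timeline
    slow_factor = shot_fps / timeline_fps            # how much slower motion appears
    return real_duration, playback_duration, slow_factor

# 120 frames shot at 60 fps, conformed to a 30 fps timeline:
print(conform(total_frames=120, shot_fps=60, timeline_fps=30))
# -> (2.0, 4.0, 2.0): 2 seconds of action plays back over 4 seconds, at half speed.
```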
-
YouTube will play anything. If you shot 23.976, then stick with that. I don't know for certain, but I bet Vimeo will also play anything. The only time you really have to be careful when deciding which format to use is with television broadcast or specific festivals, since modern computers can play anything. For slow motion, you can shoot anything higher than your timeline frame rate and conform it. If your NLE has the option, you should conform footage instead of manually slowing it down; that way you will avoid any frame artifacts in case your math wasn't correct. But to directly answer the question, slowing 59.94 to 23.976 is a great way to get slow motion.
-
You should shoot with your distribution frame rate in mind. If you are shooting for PAL televisions, then you should shoot in 25 fps for normal motion. If you want 2x slow motion, shoot in 50 and conform to 25. If you want 2.3976x slow motion, shoot in 59.94 and conform to 25, and so on. (I know you aren't talking about slow motion, I just mention it to be clearer.)

Essentially, at the beginning of an edit you pick a timeline frame rate to edit in, based on artistic choice or the distribution requirements. Any file that is NOT at the timeline frame rate will need to be interpolated to some extent to play back at normal speed. Mixing any two frame rates that are not exact multiples of each other will result in artifacts, though there are ways to mitigate those problems with advanced interpolation algorithms.

So you shouldn't mix 23.976 and 59.94. If you have a 23.976 timeline, the 59.94 footage will need to be modified to show 2.5 source frames per timeline frame. You can't show 0.5 frames, so you have to do some sort of frame blending or interpolation, which introduces artifacts. Depending on the image content, the artifacts might not be a problem at all. The same applies to putting 23.976 footage on a 29.97 timeline, or any other combination of formats. The only way to avoid artifacts completely is to shoot at exactly the frame rate you will use on the timeline and in the final deliverable, or to conform the footage for slow/fast motion.
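A rough Python sketch of that "source frames per timeline frame" check (illustrative only; real NLEs use more sophisticated interpolation than a simple ratio test):

```python
from fractions import Fraction

# If the number of source frames per timeline frame is not a whole number,
# the NLE has to blend or interpolate, which is where artifacts come from.

def source_frames_per_timeline_frame(source_fps, timeline_fps):
    return Fraction(source_fps).limit_denominator() / Fraction(timeline_fps).limit_denominator()

for src, tl in ((59.94, 23.976), (23.976, 29.97), (50, 25)):
    ratio = source_frames_per_timeline_frame(src, tl)
    clean = ratio.denominator == 1
    print(f"{src} fps on a {tl} fps timeline: {float(ratio):g} source frames per timeline frame "
          f"-> {'whole frames, no blending' if clean else 'blending/interpolation required'}")
```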
-
It is true that MFT specifies a thicker sensor stack than most other formats, which means that adapting, say, an EF lens straight to MFT will not result in optimal performance, especially with wide angle lenses wide open. It is similar to how the prism in the Bolex 16mm reflex cameras made non-RX lenses look soft. It's also the reason Metabones made a special Speed Booster for Blackmagic cameras, because BM used a stack that was not the standard MFT thickness. Not sure about microlenses though, or whether they are counted as part of that stack thickness.
-
That's a good question. I assume BRaw is always lossy because none of the official Blackmagic info I've seen says it's mathematically lossless, but at Q0 the data rate is as high as some lossless formats. Of course, higher ratios like 12:1 must be lossy. I think calling BRaw "RAW" is misleading, but I fully support the format. The image quality is great at 12:1, and the use of metadata sidecar files could really improve round tripping even outside of Blackmagic software. Back when I used a 5D3 and the choice was either 1080p 8 bit H264 or uncompressed 14 bit RAW, the latter really stood out. Nowadays, with 4K 10 bit HEVC or ProRes, I see no benefit to the extra data that lossless raw brings. BRaw looks like a really good compromise.
-
This is especially true if you want a definition of RAW that works for non-Bayer sensors. If your definition of RAW includes "pre-debayering," then you've got to find exceptions for three-chip designs, Foveon sensors, or greyscale sensors. Compression is a form of processing, so even by @Mattias Burling's words, "compressed raw" is an oxymoron. But in fairness, I often see people use RAW to describe lossless sensor data, whereas raw (non-capitalized) often gets a less strict definition meaning minimal compression applied to a Bayered image, thus including ProRes RAW, BRaw, Redcode, and RawLite. So as long as we remember the difference, I grudgingly accept that convention.
-
Considering both DNxHD and ProRes are lossy codecs, that last bit sounds like false advertising. But it will be very little loss, really not a big deal at all. It sounds like a really neat feature.
-
@mirekti HDR just means that your display is capable of showing brighter and darker parts of the same image at the same time. It doesn't mean every video made for an HDR screen needs to have 16 stops of DR; it just means that the display standard and technology are not the limiting factor for what the artist/creator wants to show.
-
So Is a7 III Still The Dynamic Range King? (Not trolling, just asking)
KnightsFan replied to Mark Romero 2's topic in Cameras
That doesn't actually explain why the factor is 1, though. That just explains why it's linear.
-
So Is a7 III Still The Dynamic Range King? (Not trolling, just asking)
KnightsFan replied to Mark Romero 2's topic in Cameras
I'm not 100% sure about this, but it's my current understanding. The reason you are incorrect is that doubling light intensity doesn't necessarily mean doubling the bit value. In other words, the linear factor between scene intensity and bit value does not have to be 1. For example, if each additional bit means a quadrupling of intensity instead of a doubling, it is still a linear relationship, and 12 bits can hold 24 stops.

As @tupp was saying, there is a difference between dynamic range as a measure of the signal, measured in dB, and dynamic range as a measure of the light captured from the scene, measured in stops. They are not the same measure. A 12 bit CGI image has signal DR just like a 12 bit camera file does, but the CGI image has no scene-referred, real-world dynamic range value.

It seems that all modern camera sensors respond linearly to light, roughly at a factor of 1 comparing real-world light to bits. I do not know exactly why this is the case, but it does not seem to be the only conceivable way to do it. Again, I am not 100% sure about this, so if this is incorrect, I'd love an explanation!
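A small worked example in Python of the bits-versus-stops arithmetic in that post (it just encodes the reasoning above, not a claim about how real sensors map light to code values):

```python
import math

# If one additional bit corresponds to multiplying scene intensity by
# factor_per_bit, then `bits` bits can span bits * log2(factor_per_bit) stops.

def stops_representable(bits, factor_per_bit):
    return bits * math.log2(factor_per_bit)

print(stops_representable(12, 2))  # 12.0 stops: the usual doubling-per-bit assumption
print(stops_representable(12, 4))  # 24.0 stops: the quadrupling example from the post
```
-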
@kye Unity is a lot more programmer-friendly than Unreal; it's certainly a lot easier to make content primarily from a text editor than it is in Unreal. Unless you need really low-level hardware control, Unity is the way to go for hacking around and making non-traditional apps.
-
ProRes has very specific data rates. If two programs make files of differing sizes, one of them is wrong: either it's using the wrong ProRes flavor (LT, HQ, etc.) or it's not really creating a proper ProRes file.
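As a rough illustration of those fixed data rates, here is a small Python sketch that estimates expected file sizes from approximate ProRes 1080p29.97 target bitrates (ballpark figures based on Apple's published specs, so treat them as assumptions):

```python
# Approximate ProRes target bitrates at 1920x1080, 29.97 fps, in Mbit/s
# (ballpark figures -- treat as assumptions, not exact values).
PRORES_1080P2997_MBPS = {
    "Proxy": 45,
    "LT": 102,
    "422": 147,
    "HQ": 220,
}

def expected_size_gb(flavor, duration_seconds):
    """Estimate file size in GB for a given flavor and clip length."""
    return PRORES_1080P2997_MBPS[flavor] * duration_seconds / 8 / 1000

# A 10-minute 1080p29.97 clip:
for flavor in PRORES_1080P2997_MBPS:
    print(f"ProRes {flavor}: ~{expected_size_gb(flavor, 600):.1f} GB")

# If a file's size is nowhere near the figure for the flavor it claims to be,
# something is off: wrong flavor, or not a spec-compliant ProRes encode.
```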
-
Yes. ProRes is very similar to DNxHR in terms of quality and file size. Both are significantly larger than H265 files of similar quality, but are easier on the CPU, for smoother playback on most systems.
-
I did some VR development a few years ago when I had access to a university Oculus. It was a ton of fun. Even simple things like mapping a 360 video so you can freely look around are amazing, let alone playing VR games with the handset and everything.

I guess what I love in games is when the game never forces you to use a certain item to defeat the monster, thereby encouraging creativity to overcome tasks. You could get that specific item, or you could find a way to bypass the monster altogether--but then that same monster may come up later in the game. That's where cinematic techniques like color come in. The developer may use color to psychologically influence a player to make a decision, which makes it much more rewarding to find a different way to accomplish the task.

For a great use of color in a game, think of Mirror's Edge, where objects you interact with are bright red and yellow against a mostly white world. It makes it much easier to identify things you can climb without stopping and breaking your momentum. Films can use color in a similar way, to draw attention to certain objects, but in a game the fact that attention is drawn to an object actually changes how the game is played and, in some cases, the actual plot, whereas a movie still exists on a linear timeline no matter where you look.
-
DNxHR is a much less efficient codec, so you will see a significant file size increase. HQX in 4K should be around 720 Mb/s, and 444 is around 1416 Mb/s. I am not very familiar with DNxHD, to be honest. I think you can include an alpha channel in any version, which would add another 33% on top of any of those rates. So you can easily get a 10x increase in file size over the XT3, depending on what data rate you shot in.
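For perspective, a quick back-of-the-envelope Python sketch using the rates quoted above; the camera-side H.265 bitrates are just assumed example values for comparison, since the X-T3 offers several recording bitrates:

```python
# Per-minute file sizes from the data rates discussed above (Mbit/s).
RATES_MBPS = {
    "DNxHR HQX (4K)": 720,
    "DNxHR 444 (4K)": 1416,
    "DNxHR 444 (4K) + alpha (~+33%)": 1416 * 1.33,
    "Camera H.265 (assumed 100 Mb/s)": 100,
    "Camera H.265 (assumed 200 Mb/s)": 200,
}

def gb_per_minute(mbps):
    return mbps * 60 / 8 / 1000  # Mbit/s * 60 s -> Mbit -> MB -> GB

for name, rate in RATES_MBPS.items():
    print(f"{name}: ~{gb_per_minute(rate):.1f} GB per minute")

# e.g. DNxHR 444 at ~10.6 GB/min versus ~0.8-1.5 GB/min for the camera files,
# which is in the ballpark of the roughly 10x increase mentioned above.
```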