Everything posted by KnightsFan

  1. There is not much visual difference between 24 and 25 if you stick to one or the other, but you should not mix and match them. Use 25 for PAL television, or 24 if you want to follow the film convention. 30 looks slightly different in scenes with a lot of motion.

    I am almost 100% certain YouTube keeps the original frame rate. I think it can even handle variable frame rate? I could be mistaken on that one, though. I would be very surprised if Vimeo converted, but I don't use Vimeo, so I am not positive.

    Yes, exactly. You can speed it up. If you speed it up 2x, it will look a little jerky simply because it is essentially a 15 fps file now. If you speed it up by some odd ratio, like 1.26x, then there will be frame blending, and that will probably look pretty bad depending on the content of your shot (a static shot with nothing moving won't have any issues with frame blending, whereas a handheld shot walking down a road will look really bad).

    Technically, yes, you can do that. If you want your final product to be a 48 fps file, and you are sure such a file is compatible with your release platform(s), then it should work. I think it is a phenomenal idea to try it out as an experiment--but definitely test it thoroughly before doing this on a serious/paid project--there is a very good chance it will not look the way you expect if you are still figuring stuff out. Also, for any project, only do this if you want the large majority of your 30 fps clips sped up. If you want MOST at normal speed and a COUPLE sped up, then use a 30 fps timeline and speed up those couple of clips.

    If I were you, I'd go out one weekend and shoot a bunch of stuff at different frame rates, then try out different ways of editing. Just have fun, and try all the things we tell you not to do and see what you think of our advice!
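
    As a rough sketch of the speed-change arithmetic above (the function names are just illustrative, not from any NLE):

    ```python
    # Rough arithmetic for speeding up a clip on a timeline (illustrative only).

    def effective_scene_rate(source_fps, speed):
        """Real-world sampling rate of the retained frames after a speed-up,
        e.g. 30 fps sped up 2x keeps frames captured 1/15 s apart."""
        return source_fps / speed

    def needs_frame_blending(source_fps, timeline_fps, speed):
        """True if the timeline has to land between source frames, which
        forces blending/interpolation instead of clean frame skipping."""
        step = source_fps * speed / timeline_fps  # source frames per timeline frame
        return abs(step - round(step)) > 1e-9

    print(effective_scene_rate(30, 2))          # 15.0 -> the "essentially 15 fps" case
    print(needs_frame_blending(30, 30, 2.0))    # False: every other frame, no blending
    print(needs_frame_blending(30, 30, 1.26))   # True: fractional steps -> blending
    ```
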
  2. Totally agree, that's one reason it's easier to conform rather than manually slow down. You conform TO the desired frame rate, so you never have to wonder whether you need to slow by 50 or 40 percent.
  3. Not a silly question at all. Basically it just means telling your editing software to play the frames in the file at a different rate. Example: if you shoot for 2 seconds at 60 fps, you have 120 frames total. If you conform to 30 fps, it is like telling the software that you actually shot at 30 fps for 4 seconds to get those 120 frames. As far as the software is concerned, it's now a 30 fps file (which happens to look slowed down). So for slow motion, I find it easiest to select all the high frame rate files in your bin before putting them on a timeline, and conform them to the timeline frame rate. Then when you drag them onto the timeline, they will be slowed down automatically.
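
    Here is the conform arithmetic as a tiny sketch (nothing NLE-specific, just the numbers from the example above):

    ```python
    # Conforming = reinterpreting the same frames at a new rate; no frames are
    # added or dropped, only the playback duration changes.

    def conformed_duration(shoot_fps, shoot_seconds, timeline_fps):
        frames = shoot_fps * shoot_seconds      # total frames captured
        return frames / timeline_fps            # how long they last at the new rate

    def slowdown_factor(shoot_fps, timeline_fps):
        return shoot_fps / timeline_fps         # 60 conformed to 30 gives 2x slow motion

    print(conformed_duration(60, 2, 30))    # 4.0 seconds, as in the example above
    print(slowdown_factor(59.94, 23.976))   # 2.5x slow motion
    ```
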
  4. YouTube will play anything. If you shot 23.976, then stick with that. I don't know for certain, but I bet Vimeo will also play anything. The only time you really have to be careful about which format to use is with television broadcast or specific festivals, since modern computers can play anything. For slow motion, you can shoot anything higher than your timeline frame rate and conform it. If your NLE has the option, conform footage instead of manually slowing it down; that way you avoid frame artifacts in case your math wasn't exact. But to directly answer the question: slowing 59.94 to 23.976 is a great way to get slow motion.
  5. You should shoot with your distribution frame rate in mind. If you are shooting for PAL televisions, shoot 25 fps for normal motion. If you want 2x slow motion, shoot 50 and conform to 25. If you want 2.3976x slow motion, shoot 59.94 and conform to 25, etc. (I know you aren't talking about slow motion, I just mention it to be clearer.)

    Essentially, at the beginning of an edit you pick a timeline frame rate, based on artistic choice or the distribution requirements. Any file that is NOT at the timeline frame rate will need to be interpolated to some extent to play back at normal speed. Mixing any two frame rates that are not exact multiples of each other will result in artifacts, though there are ways to mitigate those problems with advanced interpolation algorithms. So you shouldn't mix 23.976 and 59.94. On a 23.976 timeline, the 59.94 footage would need to show 2.5 video frames per timeline frame. You can't show 0.5 frames, so you have to do some sort of frame blending or interpolation, which introduces artifacts. Depending on the image content, the artifacts might not be a problem at all. The same applies to putting 23.976 footage on a 29.97 timeline, or any other combination of formats. The only way to avoid artifacts completely is to shoot at exactly the frame rate you will use on the timeline and in the final deliverable, or to conform the footage for slow/fast motion.
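
    A minimal illustration of the "frames per timeline frame" ratio, assuming no speed change is applied to the clip:

    ```python
    from fractions import Fraction

    def frames_per_timeline_frame(source_fps, timeline_fps):
        # Exact ratio of source frames needed per timeline frame at normal speed.
        return Fraction(source_fps) / Fraction(timeline_fps)

    for src, tl in [("59.94", "23.976"), ("23.976", "29.97"), ("50", "25")]:
        ratio = frames_per_timeline_frame(src, tl)
        clean = ratio.denominator == 1 or ratio.numerator == 1  # exact multiple either way
        print(f"{src} fps on a {tl} fps timeline: {float(ratio)} source frames per"
              f" timeline frame -> {'clean' if clean else 'needs blending/interpolation'}")
    ```
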
  6. It is true that MFT specifies a thicker sensor stack than most other formats, which means that adapting, say, an EF lens straight to MFT will not give optimal performance, especially with wide angle lenses wide open. It is similar to how the prism in the Bolex 16mm reflex cameras made non-RX lenses look soft. It's also the reason Metabones made a special Speed Booster for Blackmagic cameras: BM used a stack that was not the standard MFT thickness. Not sure about microlenses, though, or whether they count as part of that stack thickness.
  7. That's a good question. I assume BRAW is always lossy because none of the official Blackmagic info I've seen says it's mathematically lossless, but at Q0 the data rate is as high as some lossless formats. Of course, higher ratios like 12:1 must be lossy. I think calling BRAW "RAW" is misleading, but I fully support the format. The image quality is great at 12:1, and the use of metadata sidecar files could really improve round-tripping even outside of Blackmagic software. Back when I used a 5D3 and the choice was either 1080p in 8 bit H.264 or uncompressed 14 bit RAW, the latter really stood out. Nowadays, with 4K 10 bit in HEVC or ProRes, I see no benefit to the extra data that lossless raw brings. BRAW looks like a really good compromise.
  8. I agree, but hey, what can we do. As long as everyone knows what everyone else is talking about, that's really the best we can hope for at this point.
  9. This is especially true if you want a definition of RAW that works for non-Bayer sensors. If your definition of RAW includes "pre-debayering," then you've got to find exceptions for three-chip designs, Foveon sensors, or greyscale sensors. Compression is a form of processing, so even by @Mattias Burling's own wording, "compressed raw" is an oxymoron. But in fairness, I often see people use RAW to describe lossless sensor data, whereas raw (non-capitalized) is often a less strict definition meaning lightly compressed, pre-debayer image data, thus including ProRes RAW, BRAW, Redcode, and RawLite. So as long as we remember the difference, I grudgingly accept that convention.
  10. Considering both DNxHD and ProRes are lossy codecs, that last bit sounds like false advertising. But it will be very little loss, really not a big deal at all. It sounds like a really neat feature.
  11. (re: HLG explained) @mirekti HDR just means that your display is capable of showing brighter and darker parts of the same image at the same time. It doesn't mean every video made for an HDR screen needs to have 16 stops of DR; it just means that the display standard and technology are not the limiting factor for what the artist/creator wants to show.
  12. That doesn't actually explain why the factor is 1, though. It just explains why it's linear.
  13. I'm not 100% sure about this, but it's my current understanding. The reason you are incorrect is that doubling the light intensity doesn't necessarily mean doubling the stored value. In other words, the factor relating scene intensity to bit values does not have to be 1. For example, if each doubling of the stored value corresponded to a quadrupling of intensity instead of a doubling, 12 bits could span roughly 24 stops (that mapping would be a square root rather than a straight line, but the point is that the encoding and the scene range are separate things). As @tupp was saying, there is a difference between dynamic range as a property of the signal, measured in dB, and dynamic range as a measure of the light captured from the scene, measured in stops. They are not the same thing. A 12 bit CGI image has signal DR just like a 12 bit camera file does, but the CGI image has no scene-referred, real-world dynamic range value. It seems that all modern camera sensors respond linearly to light, roughly at a factor of 1 comparing real-world light to bit values. I do not know exactly why this is the case, but it does not seem to be the only conceivable way to do it. Again, I am not 100% sure about this, so if this is incorrect, I'd love an explanation!
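
    A toy calculation of that point, assuming the smallest usable code value is 1 (so a 12 bit linear file covers roughly 12 stops):

    ```python
    import math

    BITS = 12
    MAX_CODE = 2 ** BITS - 1   # 4095

    # Linear mapping: stored value proportional to scene intensity, so the usable
    # scene range is the ratio of the largest to the smallest non-zero code.
    linear_stops = math.log2(MAX_CODE)            # ~12 stops

    # If intensity quadrupled for every doubling of the stored value (a square-root
    # mapping), the same codes would span about twice as many stops of scene light.
    sqrt_mapping_stops = math.log2(MAX_CODE ** 2)  # ~24 stops

    print(round(linear_stops, 1), round(sqrt_mapping_stops, 1))
    ```
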
  14. @kye Unity is a lot more programmer friendly than Unreal, certainly a lot easier to make content primarily from a text editor than it is in Unreal. Unless you need really low level hardware control, Unity is the way to go for hacking around and making non-traditional apps.
  15. ProRes has very specific data rates. If two programs make files of differing sizes, one of them is wrong, either by using the wrong ProRes flavor (LT, HQ, etc) or it's not really creating a proper ProRes file.
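
    For reference, a back-of-the-envelope check using what I believe are Apple's approximate published target rates for 1080p at 29.97; the exact figures shift with frame size and frame rate, so treat them as rough:

    ```python
    # Approximate ProRes target data rates (Mb/s) for 1080p29.97 -- a sanity check
    # for "which ProRes flavor did this file actually get?"
    PRORES_1080P30_MBPS = {
        "Proxy": 45,
        "LT": 102,
        "422": 147,
        "HQ": 220,
        "4444": 330,
    }

    def expected_size_gb(flavor, minutes):
        mbps = PRORES_1080P30_MBPS[flavor]
        return mbps / 8 * 60 * minutes / 1000   # Mb/s -> MB/s -> MB -> GB

    print(round(expected_size_gb("HQ", 10), 1))  # ~16.5 GB for 10 minutes of 1080p30 HQ
    ```
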
  16. Yes. ProRes is very similar to DNxHR in terms of quality and file size. Both are significantly larger than H265 files of similar quality, but are easier on the CPU, for smoother playback on most systems.
  17. I did some VR development a few years ago when I had access to a university Oculus. It was a ton of fun. Even simple things like mapping a 360 video so you can freely look around are amazing, let alone playing VR games with the hand controllers and everything.

    I guess what I love in games is when the game never forces you to use a certain item to defeat the monster, thereby encouraging creativity in overcoming tasks. You could get that specific item, or you could find a way to bypass the monster altogether--but then that same monster may come up later in the game. That's where cinematic techniques like color come in. The developer may use color to psychologically influence a player toward a decision, which makes it much more rewarding to find a different way to accomplish the task. For a great use of color in a game, think of Mirror's Edge, where objects you interact with are bright red and yellow against a mostly white world. It makes it much easier to identify things you can climb without stopping and breaking your momentum. Films can use color in a similar way, to draw attention to certain objects, but in a game the fact that attention is drawn to an object actually changes how the game is played and, in some cases, the actual plot, whereas a movie still exists on a linear timeline no matter where you look.
  18. DNxHR is a much less efficient codec, so you will see a significant file size increase. HQX in 4K should be around 720 Mb/s, and 444 is around 1416 Mb/s. I am not very familiar with DNxHD, to be honest. I think you can include an alpha channel in any version, which would add another 33% on top of those rates. So you can easily get a 10x increase in file size over the XT3 files, depending on what data rate you shot at.
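
    The file size math, using the approximate rates above:

    ```python
    # Back-of-the-envelope file sizes from the data rates mentioned above
    # (treat the exact Mb/s figures as approximate).
    def gb_per_minute(mbps):
        return mbps / 8 * 60 / 1000        # Mb/s -> MB/s -> MB per minute -> GB

    hqx_4k = 720
    dnx_444 = 1416

    print(round(gb_per_minute(hqx_4k), 1))          # ~5.4 GB/min
    print(round(gb_per_minute(hqx_4k * 1.33), 1))   # ~7.2 GB/min with an alpha channel
    print(round(gb_per_minute(dnx_444), 1))         # ~10.6 GB/min
    ```
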
  19. It's truly a great time for camera tech. Every month we get something new and awesome! The RAW footage from a given camera will almost certainly be better for keying, compared to 8 bit 420 from the same camera.
  20. They are not tied to each other, exactly. I said "correlate" and "necessarily" because in the real world, manufacturers usually use more bits for higher-DR images to avoid artifacts. So the two usually do correlate in practice, but only because of convention, not because of some intrinsic property. True, I implicitly lumped both of those factors into "an encoding of the same image." I should have been more specific. Yes, the ADC bit depth does limit the DR, assuming it's linear, but the encoded image might not retain all of that range.
  21. The problem is that there are many ways to measure DR. If you read "the Sony a7III has 14 stops of DR" and "the Arri Alexa has 14 stops of DR," both may be correct, but they are utterly meaningless statements unless you also know how they were measured. Many years ago, Cinema5D pegged the a7sII at something like 14 stops. However, they later standardized their measurement on SNR = 2, which put the a7sII at 12. But whichever way you measure, it's ~2 stops less than the Alexa. Many members here will tell you that Cinema5D is untrustworthy, so take that as you will. I have yet to find another site that even pretends to do scientific, standardized tests of video DR. Cinema5D puts the XT2 and XT3 at just over 11, so that confirms your finding. And again, if you change the method, maybe it comes out at 13, or 8, or 17--but in every case it should be about a stop less than the a7sII when measured the same way.

    Bit depth doesn't necessarily correlate exactly with dynamic range. You could make a 2 bit camera that has 20 stops of DR: anything below a certain value is a 0, anything 20 stops brighter is a 3, and then stick values for 1 and 2 somewhere in between. It would look terrible, obviously, because higher bit depth reduces banding and other artifacts. There is pretty much no scenario in which an 8 bit encoding has an advantage over a 10 bit encoding of the same image.
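
    The 2 bit, 20 stop camera above as a toy quantizer, purely for illustration:

    ```python
    import math

    # Toy version of the 2 bit, 20 stop camera: the four code values span 20 stops
    # of scene light, but with so few levels the image would band horribly.
    def two_bit_code(intensity, black_point=1.0, stops=20):
        if intensity <= black_point:
            return 0                                   # everything at or below black
        stops_above_black = math.log2(intensity / black_point)
        if stops_above_black >= stops:
            return 3                                   # everything 20+ stops brighter
        return 1 + int(stops_above_black / stops * 2)  # codes 1 and 2 split the middle

    for s in [0, 1, 5, 15, 25]:   # scene values expressed in stops above the black point
        print(s, "stops ->", two_bit_code(2.0 ** s))
    ```
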
  22. You can use ffmpeg to convert to DNxHD or DNxHR. I have never done it myself, but the answer on this page seems very thorough: https://askubuntu.com/questions/907398/how-to-convert-a-video-with-ffmpeg-into-the-dnxhd-dnxhr-format. Many Windows applications can decode ProRes; they just can't encode it. What do you mean by DVR?
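
    For reference, a minimal sketch of the kind of ffmpeg call that answer describes; the dnxhd encoder profile and pixel format flags are as I recall them, so verify against your build with `ffmpeg -h encoder=dnxhd`:

    ```python
    import subprocess

    # Sketch of converting a clip to DNxHR HQ with ffmpeg, along the lines of the
    # linked answer. Assumes a reasonably recent ffmpeg on the PATH.
    def to_dnxhr_hq(src, dst):
        subprocess.run([
            "ffmpeg", "-i", src,
            "-c:v", "dnxhd", "-profile:v", "dnxhr_hq",   # DNxHR HQ flavor
            "-pix_fmt", "yuv422p",                       # 8-bit 4:2:2 for the HQ profile
            "-c:a", "pcm_s16le",                         # uncompressed audio in the MOV
            dst,
        ], check=True)

    # to_dnxhr_hq("input.mp4", "output.mov")
    ```
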
  23. That is true, but I would have to actually make a calibration profile for the TV. I started to do so last week, but after DisplayCAL took 5 hours to analyze the TV, I found I'd made a mistake, and I haven't had a need for accurate color that was worth 5 more hours. The real problem with color calibration on consumer devices is that for whatever reason, there is no consistent way to calibrate the entire computer display without an expensive LUT box. Graphics cards should just have a LUT box built into them, instead of the mish-mash of ICM profiles, VCGTs, and application-specific LUTs that we currently have. It's a ridiculous headache even with color managed software, let alone browsers.
  24. Guess I've got a one-of-a-kind XT3, then!
  25. It charges while it's on from a 5V source.