Posts posted by KnightsFan

  1. 1 hour ago, herein2020 said:

    The funny thing is, I see that literally all the time in online videos, not as much in Hollywood productions but even some of them have some really bad scenes where it is jittery. I think the biggest problem is that the typical camera doesn't have an option to record at a multiple of 24, they are mostly multiples of 30 (i.e. 30FPS, 60FPS, 120FPS) vs multiples of 24 (24FPS, 48FPS, 72FPS) so anytime you record at the higher frame rates using a multiple of 30 (just talking NTSC here), getting it to conform properly to a 24FPS timeline is going to require some additional configuration and thought for the process.

    If you see it in YouTubers' shots but not Hollywood shots with similar amounts of motion, then it's user error and not a problem of 24 fps per se. Certainly any time you put footage on a timeline with a mismatched frame rate you'll have problems unless you know what you're doing.

    I've shot a lot of projects at 24.00 fps, but they were shown in theaters on native projectors (nothing big, just screenings for friends, small festivals). On a proper projector it's pretty easy to see the difference between 24 and 30 fps, with the former feeling more like a classic movie to most people. So at this point, under proper conditions and with proper technique, there is certainly an argument to be made for 24 if you are trying to conjure a traditional movie impression.

     Once we move to 120 and 240 Hz monitors though, the technical limitation will be gone, and any problems you see will purely be user error with mismatched settings, or bad technique. Then it will just be a question of the impression you want to give: 24 vs 30 vs 60, or even 120. With 240 Hz you could even add 48 to that list of native formats.

    I do think we should move away from the fractional rates though... 29.97 instead of 30 is just annoying to think about.

     

    53 minutes ago, herein2020 said:

    I use. It's just incredible to me that so many people release YouTube videos with that kind of stutter and either don't notice it or don't care; yet the rest of the footage is great (color grade, camera movements, content, etc.) and they had to have seen the stuttering prior to uploading.

    Probably because it's a lot easier to stare at a still image and painstakingly grade it than to let something play and evaluate the overall effect.

  2. I'm curious @herein2020 if what you are seeing is judder (the uneven frame cadence you get when 24 fps is played on a 30 or 60 Hz screen) or something else. I've never been able to verify that I can see judder personally. Once 120 Hz monitors are the norm, judder won't be an issue, since 120 is an exact multiple of 24.

     I watched that drone video you posted, and at the 28s mark there is definitely something wrong. It's jumping all over the place without that much motion, so I would guess at operator error somewhere. Certainly dropping a 60 fps clip on a 24 fps timeline will cause noticeable problems without good frame blending.

  3. I did do a quick test. My process was to film a white wall as a 4K raw clip, which I processed into an uncompressed 4:2:2 10 bit file. I then used ffmpeg to encode the uncompressed video into two clips whose only difference is the bit depth, using 4:2:0 color and CRF 16 for both. The two files ended up roughly the same size (8 bit is 3,515 KB, 10 bit is 3,127 KB).
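
     Something along these lines would reproduce the two encodes; the file names here are placeholders rather than my actual ones, and it assumes an ffmpeg build with 10 bit libx265 support:

     # 8 bit: HEVC, 4:2:0 color, CRF 16, from the uncompressed master
     ffmpeg -i wall_master.mov -c:v libx265 -crf 16 -pix_fmt yuv420p 8bit.mp4
     # 10 bit: identical settings except the pixel format
     ffmpeg -i wall_master.mov -c:v libx265 -crf 16 -pix_fmt yuv420p10le 10bit.mp4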

     I applied a fairly extreme gain and white balance adjustment equally to all clips, and I've included a 100% crop of the uncompressed, 10 bit, and 8 bit files.

     As you can see, the 8 bit version has some nasty banding that is not present in the 10 bit version. This is of course an extreme example to show a relatively small difference, but the banding is also perceptually worse in motion than in still frames. Note that the PNG files themselves are 8 bit (which matches a typical delivery); the banding you see comes from the color grading, as all 3 versions were quantized down to 8 bit upon rendering.

     Moreover, the 10 bit file is actually about 10% smaller. I find 10 bit HEVC consistently gives a smaller file than 8 bit, at better quality. The real benefit of more accurate sampling is that it allows more accurate processing throughout, from compression to coloring.

     On a related note, both HEVC clips have lost all the grain and fine detail compared to the uncompressed file, which is very unfortunate. However, they are 1% of the file size, so I can't complain too much!

    10bit.png

    8bit.png

    Uncompressed.png

     

     

    Edit: just look at the file names to see which pic is which

  4. A decent test would be to shoot a scene in as high a quality as you can, like uncompressed raw, then export a 10 bit and an 8 bit version with roughly matching codec and size from that raw master, and compare those results. If you really want to isolate 10 vs 8 bit, export uncompressed videos at those bit depths. You will most likely see the biggest difference in scenes with smooth gradients in the shadows, particularly with big color grades such as corrections for incorrect white balance or underexposure.

    Maybe I'll do some tests later today.

  5. I'm glad you're bringing attention to this, @Andrew Reid, as everyone should at least be aware of who is tracking them and why. I have two Firefox addons, one is uBlock Origin for blocking ads, and the other is Blur for blocking trackers. I can confirm that the EOSHD main site has those two Google trackers (which Blur blocks), and the forum has 0. Blur blocks 9 trackers on SonyAlphaRumors even after opting out of cookies, and 67 ads are blocked.

     Even if you are okay with being tracked, it's worth pointing out that the richest companies and people in the world make their money off analysis of your data. Consider whether you want to freely donate that data (which YOU pay for with electricity and internet bills!) to the wealthiest people on earth.

  6. 1 hour ago, leslie said:

    its a burst mode right ? not sure which cameras could do that continuously ? 

    It's continuous. Hold the shutter and it goes until you run out of card space--according to specs. I only ever used it for short bursts.

  7. @kye isn't talking about undercranking though, right? He's talking about slowing down 24 fps. Speaking of Wong Kar Wai, didn't he use the effect kye is talking about in Chungking Express? It's a nice effect for some scenes. I feel like it's used more for "impact" in action scenes as opposed to the "cool factor" or "rhythm" of normal slow motion, if that makes any sense.

    2 hours ago, Mark Romero 2 said:

    Can we undercrank with digital cameras??? I think on my S1 I can shoot at 24fps but then have a shutter speed of 1/12th of a second. Will have to try it.

     Long ago I did that trick with the shutter speed on my GH3, but a lot of digital cameras don't let you lower the shutter below the frame rate. The Z Cam E2 can shoot any integer frame rate from 1 up to the max, and as a bonus the shutter angle setting always behaves correctly. I did some "retro" tests at 15 fps with the 16mm crop mode and a few C mount 16mm lenses.

  8. 39 minutes ago, EphraimP said:

    Do you have to have the dongle plugged in to access DaVinci if you go that route? I thought you'd load it onto you machine via the dongle and then unplug it.

    The dongle must remain plugged in whenever the software is in use.

  9. If you have the license key, you can have at most 2 activations at a time; if you activate a 3rd computer, it will automatically deactivate on the other two computers. An activated computer that goes too long without checking in with the server will not open Resolve, so with the key you need an internet connection at least periodically. On the other hand, as long as you have the key available you can activate any internet-connected computer at any time, with no need to remember to bring a dongle if you are editing remotely. The dongle is perhaps slightly easier if you always have it on you, and it's necessary if you go off grid for a project and don't have internet, but it's also easier to lose, so there are tradeoffs in the edge cases either way.

    It is true that once you buy a dongle or license, you get access to all future upgrades for free.

     As far as crashing, yeah, Resolve crashes from time to time. I've never had it crash every 5 minutes, more like once a day, and usually only when I'm working in the Fusion tab. Pretty much all complex software crashes sometimes, and in my experience Resolve is much more stable than Premiere. I do think that v16 was worse than v15 for crashing. I'm using the v17 beta now and it has been very stable so far, so I'm hopeful that they are improving stability.

  10. 5 hours ago, IronFilm said:

    You only need the one (per camera), because any professional (and even semi-professional these days) field recorder will hold timecode accurately by itself. 

    Right, jam syncing. $200 to "plug in and forget" would be sort of reasonable to me, but $200 to also need to jam sync and remember to do that every time you shut down, and/or make sure devices are in free run, etc. leaves a lot of room for human error. Especially since it's such a simple problem to solve. So yeah I consider them to be way overpriced for the function they perform.

  11. 4 hours ago, IronFilm said:

    Doing secondhand vs new isn't a fair comparison. 
    Secondhand Tentacle Sync could be found for as cheap as roughly US$100ish, perhaps $150 max. 

     Maybe I look at the wrong times, but I've checked every now and then and I have yet to see any on eBay below $200 in the US.

    4 hours ago, IronFilm said:

    Both mics and TC boxes are needed on a film shoot. 
    And you can get away with just one Tentacle. (the Zoom F Series and MixPre Gen2 hold TC by themselves)

     You can sync without TC, as the OP and many of us did for years; it just takes more effort, and the major NLEs can now sync from waveform anyway. It's a lot of money for something that makes a small improvement to smaller projects.

    You are talking about jam syncing, in that case, right? You can't use a single tentacle to send constant TC between two devices--can you?

  12. @josdr I had a DR-60D II before, so I believe that. I can live without 32 bit, but it's at the top of my "nice to haves," particularly when I'm recording effects and already holding a mic and performing sounds all at once.

    13 minutes ago, IronFilm said:

    I don't know of any used mic that is cheaper than a used Tentacle Sync which I'd use as a primary boom. They're not even close in price, because that Tentacle is so cheap.

     I found my AKG CK93 for less than $200, just shy of the price of a new Tentacle Sync E, and that's my primary mic. My first mic, the Rode NTG1, was about $130 if I recall. Both were, in my opinion, better investments than a timecode system, especially since you need a minimum of two Tentacles.

  13. 25 minutes ago, josdr said:

    I presume I would need a Y splitter of sorts for that.

    Yes, exactly.

    25 minutes ago, josdr said:

    I wish my camera could at least jam with teh timecode from the mixpre but Fuji has left this option out from both the X-T3/4.

     It's very annoying; it's such a small feature, yet it's almost always missing from photo cameras, even ones marketed as hybrids. The GH5S is the only affordable hybrid I can think of with any real TC support. And like you say, dropping a few hundred on Tentacles is such a money drain. A nice used mic or lens costs less than a timecode system, which still only gets you audio LTC that you have to wrestle with in post! I'll never again primarily use a camera without an actual timecode input; it's such a pain.

     Depending on how DIY you are, you could build something yourself. I ended up making my own system with Arduinos synced over Bluetooth to a cell phone app I made. It was cheap and works as well as a Tentacle or other gizmo, but I probably put a hundred hours into making it, so it was costly in that regard.

    I've been considering switching to a MixPre II myself. 32 bit would be useful when I'm recording by myself.

  14. You're welcome!

    14 minutes ago, josdr said:

    Any way if that is feasible I can send much cleaner audio from the mixpre to the camera to use as scratch  and also have proper timecode to use.

     You can send a mix from the MixPre to one channel on the XT3 and LTC to the other. Or potentially send a mix from the MixPre to the XT3 and HDMI TC in the other direction.

    15 minutes ago, josdr said:

    Populating the movie audio with a timecode signal seemed the wrong way of going about things from the beginning,even to an amateur like me

     I wouldn't say the wrong way. It's better than nothing, but it is a workaround compared to pro cameras that have proper timecode functionality. I would explore the HDMI side and see if that gets you where you need to be, and otherwise take some time to work out the workflow for sending TC via audio. Once you have it down, it's not difficult, but there's so little information about it that it is indeed hard to figure out how to make it work. Most people seem to be either pros with cameras that have proper TC, or people who don't use timecode at all.

  15. 4 hours ago, josdr said:

    My first question is :  Can I include an audio timecode channel in the mixpre output along with the recorded dialogue sound?  Is it advisable to do so? It only seems to record TC in the sound file header.

     Even if the MixPre can do this, it's not advisable. The purpose of timecode is getting that time stamp into metadata, and LTC on an audio track is just a workaround to do that. The only reasons you'd want LTC in an audio track are if you A) can't put it in metadata, like on DSLRs, or B) need to stretch and warp the audio to compensate for drift during takes, which I haven't ever personally seen anyone do, though it is technically possible.

    4 hours ago, josdr said:

    My second question is : Is this the best way to utilise timecode while using the X-T3 or is there a better way to go about it with what I have?

     Wireless LTC is the simplest way in my opinion, but make sure you have scratch audio. On my last big project with an XT3, I used a 3.5mm splitter and ran LTC into one audio channel and a $6 mic for scratch audio into the other. I used wireless lav transmitters to carry LTC from my Zoom F4 to the XT3.

    5 hours ago, josdr said:

    The x-t3 can output timecode using its mini hdmi port (which I presume the mixpre can accept through its cam tc in mini hdmi port ) but I need the mini hdmi port of the X-T3 for my atomos shinobi.

     I am not sure if they are compatible, but it seems worth a test to me. See the answer below; jamming the timecode via HDMI might be a good combination of cheap and easy if the drift is acceptable for your work.

    5 hours ago, josdr said:

    Does timecode output have to be constantly connected from camera to recorder? 

    If you are writing LTC into an audio track, then yes, it needs to be constantly connected.

    If you are jamming the internal clock to an external source, then typically, no, once you disconnect the timecode source then the internal clock takes over from that same place. That means it is subject to drift once they are disconnected, and you would have to be sure that both are in free run. Definitely do some tests with the devices you own to see how much drift you're dealing with, whether the clock stops when the device is turned off, or other model-specific "gotchas."

    5 hours ago, josdr said:

    Third question: Is there a way to update timecode from the audio track into a file track header in Premiere pro or am I stuck with Resolve for this.

     I'm not sure; I use Resolve exclusively. However, it is possible to use 3rd party software to update the file's timecode metadata, which is what I do. I don't know of any ready-made solutions, as I wrote the one that I use, but it's a simple ffmpeg command batched across all my files. In my opinion this is a better solution than doing it in your NLE, since the metadata will be ready for use in any software.
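
     As a rough sketch of that kind of batch (not my exact script; the timecode value here is a placeholder that would really come from decoding each clip's LTC audio first), ffmpeg's -timecode option can rewrite the container timecode without re-encoding:

     # stamp every MOV in the folder with a new starting timecode, copying the streams untouched
     for f in *.MOV; do
       ffmpeg -i "$f" -c copy -timecode 01:00:00:00 "stamped_$f"
     done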

  16. Netflix's requirements are not only about guaranteeing a minimum image fidelity, or even making post strictly easier, but about having a standard workflow. They need to be able to send any one of their productions to any one of their post houses and have it be immediately workable, and if a team gets shifted to a different project midway through, they can send that half-finished project to another post house and have them pick it up immediately, no questions. They've decided that TC is a mandatory element of that workflow, from capture through post. To be honest, if I were Netflix, I would specify a lot more of the technical and metadata requirements for my productions, just to ensure that every piece is compatible across the entire billion-dollar post workflow.

     On a much smaller scale we do the same thing at my work (outside the film industry). If the designer for one project is out sick and the client needs a change, other people on the team have to be able to open it up and immediately know how to make the change. We adopt standards, some of them because they're the easiest way, some picked arbitrarily just to make sure we're all on the same page.

  17. 51 minutes ago, Matins 2 said:

    I'm curious to know if you have any footage without rolling shutter to show to support this claim.

     The Alexa SXT Studio had a rotating mirror shutter, just like a film camera. Technologically, this could be a feature on any digital camera if there were demand for it. Rolling shutter isn't a sign that digital hasn't caught up, per se; it's a sign that in most cases the benefit of film-like shutter artifacts is not worth the added cost.

  18. 20 hours ago, seanzzxx said:

    Lol because the box office has been famously dominated by thoughtful, high quality movies.

     A few films that were the highest grossing of their year: Lawrence of Arabia, 2001: A Space Odyssey, The Godfather, Star Wars, Terminator 2, Lord of the Rings 2 and 3, The Dark Knight. Obviously some lousy films make a lot of money, but by and large, prior to ~2005, I'd say many of the highest grossing films were pretty good or better. I can't say the same for the movies, and particularly the shows, that rise to the top on Netflix. Every single time I have picked a movie from the top of the Netflix list, I've wished I had done something else instead.

  19. 8 hours ago, mkabi said:

    This is where our thought processes diverge... 

    Theatres don't support filmmakers... studios support filmmakers... I know you are going to say that theatres support studios and so from super structure point of view that if theatres support studios then theatres support the creatives/artists... right? Perhaps, but that only gives half the story

    I don't know if our thoughts diverge that much. I agree with what you said. I'm saying that streaming is supporting a cultural shift away from more thoughtful movies, towards lower quality (imo) content. Like I said, it's not that streaming is inherently making movies worse, but that the audience is less invested in their content since it's on in the background, or literally stays on after they fall asleep. Quantity over quality. Of course some people still critically watch their streamed content (myself included), but we're in the vast minority.

    (The other issue I brought up, of concentrating wealth in fewer people is a separate problem.)
