Everything posted by KnightsFan

  1. That's my point: I can't help at all if I don't know what you are seeing. It's not second-guessing at all; I'm trying to figure out exactly what the issue is.
  2. @no_connection Nice, that looks better than the sharpening I tried. What software? Also, what kind of screen do you have? I ask because the upscale looks pretty bad to me, but it may not be apparent at 100% on an HD monitor. The "sharpening in post" effect is fairly visible, especially on the large tree on the far left (circled blue), where in the upscale there's a white tinge to the detail in the bark which is completely missing from the UHD version. As I've said in a few other threads, higher resolution allows for smoother details. Just look at the points where light spills through the trees (circled red). It looks digital and sharp on the upscale, and smooth on the UHD. Then of course there's just detail itself. The pink circle is a barely intelligible mess on the left, but shows shapes of leaves nicely on the right. Additionally, the UHD overall just has crisper, richer colors. This might be due to compression, pixel binning, or line skipping rather than the resolution itself.
  3. @mkabi I just shot some test footage and uploaded the camera files here https://drive.google.com/open?id=1yZloslk3CYR_Z7JNUk8EooO6BnwaCwjL I shot at f5.6 on a Nikon 105mm f2.5, an incredibly sharp lens. I picked a scene with a lot of detail and shot at the highest bitrate an unhacked NX1 can do. I haven't rendered out any scalings; let me know if you want me to try any particular algorithms or scaling methods (a rough scaling round-trip sketch is included after this list). From what I see, the UHD has significantly more detail, immediately apparent without pixel peeping on a 32" UHD screen at normal viewing distance. But like I said, the NX1 doesn't have great HD. Downscaling the UHD results in a slightly more detailed HD image than the HD straight, but not by much. Upscaling in Resolve using any of its scaling options does not bring any of the HD clips up to the sharpness of UHD, nor does adding a sharpening filter help. Though, to be fair, I have not been happy with the Sharpen OFX plugin in Resolve in the past, so maybe a different one would work better. Unfortunately the light changed without me noticing. Hopefully that doesn't affect sharpness, but I can try again later today if you'd like. I used an outdoor scene to get more high-detail objects, but obviously there was a downside to that decision. ...actually I went out and did another version with more consistent light. They are in the same folder with a V2 in their names. I kept all picture settings the same between shots; the NX1's sharpness setting could still be a factor, as the UHD versions look slightly oversharpened to me, even though I left it at 0 for these tests.
  4. That is exactly the point of this thread. We aren't sure what the stuttering that you are seeing is, and therefore can't tell you which cameras (if any) don't have it. "The motion was weird / stuttering" doesn't explain to me what the problem is. If it's rolling shutter you are seeing, then the XT3 is the best APS-C option outside cine cams--or possibly a FF camera using the APS-C crop. If you are seeing stutter from bad playback, then the XT3 is likely the worst option. If you're just seeing added sharpness over the mushy Canon, then simply use any camera, apply a generous blur, transcode to a lower bitrate, and you're good to go lol.
  5. @mkabi Not yet, I will shoot some clips today though. Do you know if there is a way I can try any form of AI upscaling or something? All I have right now is whatever Resolve uses.
  6. This is a really impressive update. I see G9s for under $1k on eBay right now. With its 20MP native resolution, 80MP high-res mode, 4k60 and now 10 bit, it really hits a LOT of special use cases.
  7. Just realized that I can do this with my NX1 also (face palm moment lol), I'll try to do that test later this week. But I can be pretty sure of the results ahead of time, the NX1's 1080p isn't great tbh. @kye Yeah I think 2k is all that is necessary, especially for streaming. To me these are two very different questions: Can you see a difference between 1080p upscaled vs 4k native? Is 4k necessary to tell your story? And I guess since we're also talking about frame rates, I think that HFR is phenomenal for nature docs. 60 fps is still a little jarring for narrative cinema to me, but for Planet Earth type photography and content, 60fps is great. I'd rather see 1080p60 nature docs than 4k24. I think sports are probably similar, but I don't really watch them personally.
  8. I certainly would if I had an XT3 available; my experience with it was shooting and editing a number of projects on my friend's XT3 earlier this year. Adding a higher frame rate to the upscaling is an interesting dimension I haven't explored before.
  9. I've never used a 4k Blu-ray. I'm saying I can tell the difference between a high quality 4k image, and that same image downscaled to 1080p. My primary experience here is with uncompressed RGB 4:4:4 images--CG renders and video games (both as a player and developer)--so no compression/bit rate factors involved. I haven't done in-depth A/B testing, but my sense from looking at 4k footage, and then at 2k intermediates for VFX, is that there is a noticeable difference in normal viewing there as well. That's both with the NX1 (80 mbps 4k, ProRes 422 HQ 2k intermediates) and the XT3 (100 mbps 4k, ProRes 422 HQ 2k intermediates). Once the VFX shots are round tripped back they aren't as nice as the original 4k shots when viewed on the timeline--though naturally since it's delivered in 2k it's fine for the end product. That's actually one perk of having Fusion built into Resolve: it completely eliminates round tripping simple VFX shots, so I don't have to choose between massive 4k intermediate files and losing the 4k resolution earlier in the workflow.
  10. I just got a 32" 4k monitor, and the difference between 4k and 1080p (native or upscaled) is immediately apparent at normal viewing distance. For uncompressed RGB sources without camera/lens artifacts, such as video games or CG renders, there's absolutely no comparison; 4k is vastly smoother and more detailed. I haven't tried any AI-based upscaling, nor have I looked at 8k content. I get your point, but that's an exaggeration. Even my non-filmmaker friends can see the difference from 3' away on a 32" screen. Though that's only with quality sources; you are exactly right that bit rate is a huge factor. If the data rate is lower than the scene's complexity requires, the fine detail is estimated/smoothed--effectively lowering the resolution. I haven't done A/B tests, but watching 4k content on YouTube I don't get the same visceral sense of clarity that I do with untouched XT3 footage in Resolve, or uncompressed CG.
  11. Yes, it is, which is why we get so many color problems from bayer sensors! I suspect I have less development experience than you here, but I've done my share of fun little projects procedurally generating imagery, though usually in RGB. My experience with YUV is mostly just reading about it and using it as a video file format. I'll dig a little more into the images. From my initial look, I don't see any extra flexibility in the Raw vs. uncompressed YUV (assuming that the Raw will eventually be put into a YUV format, since pretty much all videos are). The benefit I see is just the lack of compression, not the Raw-ness. I'm not saying the 8 bit raw is bad, but uncompressed anything will have more flexibility than anyone needs.
  12. If we are talking fidelity, not subjective quality, then no, you can't get information about the original scene that was never recorded in the first place. I'm not sure what you mean by interpolation in this case. Simply going 8 bit to 12 bit would just remap values (see the remapping sketch after this list). I assume you mean using neighboring pixels to guess the degree to which the value was originally rounded/truncated to an 8 bit value? It is possible to make an image look better with content aware algorithms, AI reconstruction, CGI etc. But these are all essentially informed guesses; the information added is not directly based on photons in the real world. It would be like if you have a book with a few words missing. You can make a very educated guess about what those words are, and most likely improve the readability, but the information you added isn't strictly from the original author. Edit: And I guess just to bring it back to what my post was about, that wouldn't give an advantage to either format, Raw or YUV.
  13. I wasn't talking about 12 bit files. For each set of 4 pixels in 8 bit 4:2:0, you have 4x 8 bit Y, 1x 8 bit U and 1x 8 bit V. That averages out to 12 bits/pixel, a simplification to show the variation per pixel compared to RGGB raw (the arithmetic is worked through in a sketch after this list). It seems your argument is essentially that debayering removes information, therefore the Raw file has more tonality. That's true--in a sense. But in that sense, even a 1 bit Raw file has information that the 8 bit debayered version won't have, but I wouldn't say it has "more tonality." I don't believe this is true. You always want to operate in the highest precision possible, because it minimizes loss during the operation, but you never get out more information than you put in. It's also possible we're arguing different points. In essence, what I am saying is that lossless 8 bit 4:2:0 debayered from 12 bit Raw in camera has the potential* to be a superior format to 8 bit Raw from that same camera. *And the reason I say potential is that the processing has to be reasonably close to what we'd do in post, no stupidly strong sharpening etc. About this specific example from the fp... I didn't have that impression, to be honest. It seems close to any 8 bit uncompressed image.
  14. Previewing in Resolve should not take up more file space unless you generate optimized media or something. Running Resolve will use memory. Have you tried Media Player Classic? That's my main media player, since unlike VLC it is color managed. I don't know how performance compares. I am actually surprised that VLC lags for you. What version do you have? Have you checked to see if it runs your CPU to 100% when playing files in VLC?
  15. Kasson's tests were in photo mode, not video. It's not explicitly clear, but I think he used the full vertical resolution of the respective cameras. He states that he scaled the images to 2160 pixels high, despite the horizontal resolution not being 3840.
  16. I agree. Part of the problem with gimbals--especially low priced electronic ones--is that they are so easy to get started with, many people use them without enough practice. Experienced steadicam operators make shots look completely locked down.
  17. I don't know if I agree with that. The A7S and A7SII at 12MP had ~30ms rolling shutter, and the Fuji XT3 at 26MP has ~20ms. 5 years ago you'd be right, but now we really should expect lower RS and oversampling without heat issues. Maybe 26MP is more than necessary, but I think 8MP is much less than optimal. I don't know if it was strictly the lack of oversampling or what, but to be honest the false color on the A7S looked pretty ugly to me. And I'm talking about the real-world 4k downsampled ones. I think it's a question of how many photons hit the photosensitive part. Every sensor has some amount of electronics and stuff on the front, blocking light from hitting the photosensitive part (BSI sensors have less, but still some), and every sensor has color filters which absorb some light. So it depends on the sensor design. If you can increase the number of pixels without increasing the photosensitive area blocked, then noise performance should be very similar.
  18. It can be, yes. I was responding to the claim that the fp's 8 bit raw could somehow be debayered into something with more tonality than an "8 bit movie." I suspected that @paulinventome thought that RGGB meant that each pixel has 4x 8 bit samples, whereas of course raw has 1x 8 bit sample per pixel. My statement about 4:2:0 was to point out that even 4:2:0 uses 12 bits/pixel, thus having the potential for more tonality than an 8 bit/pixel raw file has. Of course, 8 bit 4:4:4 with 24 bits/pixel would have an even greater potential.
  19. Interesting! I think that the A7S is also being oversampled, as far as I can tell from Kasson's methodology. The sensor is 12MP with a height of 2832, and it is downscaled to 2160. That's just over 31% oversampling (worked out in a quick calculation after this list). The numbers I see thrown around for how much you have to oversample to overcome bayer patterns are usually around 40%, so the A7S is already in the oversampling ballpark of resolving "true" 2160. With that in mind, I was surprised at how much difference in color there is. Even on the real world images, there's a nice improvement in color issues on the A7R4 over the A7S. The difference in sharpness was not very pronounced at all. But bayer patterns retain nearly 100% luminance resolution, so maybe that makes sense. The color difference evens out a bit when the advanced algorithms are used, which really shows the huge potential of computational photography, all the way from better scaling algorithms up to AI image processing. I suspect that some of our notion that oversampling makes vastly better images comes from our experience moving from binned or line-skipped footage to oversampling, rather than directly from native to oversampling. And I also think that we all agree that after a point, oversampling doesn't help. 100MP vs 50MP won't make a difference if you scale down to 2MP.
  20. Haha thanks, I didn't even notice that! I have now passed full subsampling on EOSHD.
  21. Doesn't make sense to me. 8 bit bayer is significantly less information per pixel than 8 bit 4:2:0. RGGB refers to the pattern of pixels. It's only one 8 bit value per pixel, not 4.
  22. Can you share an example file with both the original H.265, and the transcoded ProRes?
  23. Aha, I think I understand now. So the audio is going from the band out to the speakers without time manipulation? In that case, you would just monitor it normally. As for syncing up, that requires you to know what the band is going to do, and choreograph beforehand. Example: Say you want to reframe when the key changes. Since your video is being slowed down, anything you film will be seen on screen at some point x seconds in the future. Therefore, you would have to reframe x seconds before the band shifts key. I'm not sure what the purpose of chopping up the audio and playing back 1/4 of it is anymore.
  24. If you are slowing audio and video together from the same rig, you can manipulate the camera in real time based on the music being played by the band (stream A from my example), and all of those camera changes will already be in sync with the audio when they are slowed down together. Am I getting this right? I'm probably still missing something about what you're trying to accomplish. Edit: Here's a more concrete example of what I'm saying. If you are recording the band, and you want to reframe when they change key, then you simply need to reframe when they change key. When the audio and video are slowed down, the slowed down video will still reframe exactly when the slowed down audio changes key. Hopefully that will clarify how I am misunderstanding you. Experimenting is the fun part! It doesn't necessarily have to produce results at all.
  25. What do you mean by "live?" Do you mean live as it's recorded, or live as it's played? I suggest for clarification we refer to monitoring as it's recorded or as it's played; it's less confusing that way (to me!). Ultimately, you will need to do what @BTM_Pix suggested originally, which is record to a buffer, and selectively play out of that buffer. Essentially, record 4 seconds of audio to a file, slow that file down, and then play it over the next 16 seconds (a rough sketch of this buffering follows after this list). You can continue to do this until you run out of file space. At that point, you have two streams of audio: A. the one being recorded, and B. the one being played (which is always 4x the length of stream A). I assume the question isn't how to monitor either of those. So I originally thought what you were trying to do is listen to the audio that you are recording, in a rough approximation of what it will sound like after it is slowed down. To do this, every 4 seconds, you listen to the 1st second slowed down to fill the whole 4 seconds. If this is the case, then all you need to do is introduce a third audio stream C, which is exactly the same as stream B, but only plays every 4th group of 4 seconds. This will monitor in real time as audio is recorded, with a slow-down effect so you know what it will sound like later when it plays (although you won't be able to listen to all of it). However, you drew a diagram which implies you are listening to normal speed audio, but only listening to 1 second every 4 seconds, the exact opposite of my original thought. That would also work, but obviously you are listening to audio significantly after it was recorded. So that brings me to this question: what does "meaningful" mean to you? What are you trying to hear and adjust for in the audio? And to be clear, this is music being played on site, not a pre-recording of music? I ask because if both the video and audio are being recorded at the same time, and slowed in the same way, they should stay in sync already. So if a band is playing and you are shooting it with an HFR camera and audio, and you project the video in another room with that same audio, both slowed down, then they will be in sync.
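
A few rough sketches follow for the more technical posts above. For post 3's scaling test: a minimal round-trip sketch, assuming Pillow (9.1+) and a hypothetical exported still named uhd_frame.png -- both are stand-ins, not the tools actually used in Resolve. It downscales a UHD frame to HD and upscales it back with several resampling filters so the results can be compared against the original.

```python
# Round-trip a UHD frame through HD with several resampling filters so the
# upscales can be compared side by side against the untouched original.
# "uhd_frame.png" is a hypothetical exported still; Pillow stands in for
# whatever scaler (Resolve, an AI upscaler, etc.) you actually want to test.
from PIL import Image

UHD = (3840, 2160)
HD = (1920, 1080)

original = Image.open("uhd_frame.png")

filters = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
    "lanczos": Image.Resampling.LANCZOS,
}

# Downscale once with a high-quality filter, then upscale with each filter.
hd = original.resize(HD, Image.Resampling.LANCZOS)
hd.save("hd_downscale.png")

for name, resample in filters.items():
    hd.resize(UHD, resample).save(f"uhd_upscaled_{name}.png")
```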
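For post 12: a tiny numpy sketch (with made-up values) of why simply remapping 8 bit into a 12 bit container adds no information -- the wider range still holds only the 256 levels that were actually recorded.

```python
# Remap 8-bit code values into a 12-bit range and count distinct levels.
# Illustrates that a wider container does not create new tonal information.
import numpy as np

# A made-up 8-bit frame (UHD-sized) standing in for decoded camera footage.
frame_8bit = np.random.randint(0, 256, size=(2160, 3840), dtype=np.uint8)

# "Convert" to 12 bit by shifting into the wider range (0..255 -> 0..4080).
frame_12bit = frame_8bit.astype(np.uint16) << 4

print(np.unique(frame_8bit).size)   # at most 256 distinct levels
print(np.unique(frame_12bit).size)  # still at most 256 distinct levels
```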
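For posts 13, 18 and 21: the bits-per-pixel arithmetic written out, assuming 8 bit samples throughout.

```python
# Average stored bits per pixel for a 2x2 pixel block, at 8 bits per sample.
def bits_per_pixel(samples_per_2x2_block, bits_per_sample=8):
    return samples_per_2x2_block * bits_per_sample / 4

print(bits_per_pixel(4))   # RGGB bayer raw: 1 sample/pixel      ->  8 bits/pixel
print(bits_per_pixel(6))   # 4:2:0: 4x Y + 1x U + 1x V per block -> 12 bits/pixel
print(bits_per_pixel(12))  # 4:4:4: 3 samples per pixel          -> 24 bits/pixel
```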
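For post 19: the oversampling figure as a quick calculation, using the sensor height quoted in the post.

```python
# Oversampling factor from sensor height to the 2160-line output, per post 19.
sensor_height = 2832   # A7S vertical resolution
output_height = 2160   # UHD vertical resolution

oversampling = sensor_height / output_height - 1
print(f"{oversampling:.1%}")   # ~31.1% oversampling, vs the ~40% rule of thumb
```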
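For posts 23-25: a rough sketch of the buffered slow-down being discussed, using numpy and a naive 4x stretch (sample repetition, which also drops pitch; real-time audio I/O and any pitch-preserving stretch are assumptions left out here). Stream A is the audio as recorded, stream B the 4x-slowed playback, and stream C the every-fourth-chunk monitoring feed.

```python
# Buffered 4x slow-down, per the streams described in post 25:
#   A: audio as recorded, in 4-second chunks
#   B: each chunk stretched to 16 seconds (so B is always 4x the length of A)
#   C: monitoring feed -- only the first 1 second of each chunk, stretched to
#      4 seconds, so it can be heard in real time while recording continues
import numpy as np

SAMPLE_RATE = 48000
CHUNK_SECONDS = 4
SLOWDOWN = 4

def stretch(audio, factor=SLOWDOWN):
    # Naive stretch: repeat every sample `factor` times (also lowers pitch).
    return np.repeat(audio, factor)

def process_chunk(chunk):
    stream_b = stretch(chunk)  # 4 s of recording -> 16 s of playback
    monitor_len = SAMPLE_RATE * CHUNK_SECONDS // SLOWDOWN  # first 1 second
    stream_c = stretch(chunk[:monitor_len])  # 1 s -> 4 s monitoring feed
    return stream_b, stream_c

# Fake 4-second chunk of recorded audio standing in for the live band feed.
chunk_a = np.random.uniform(-1, 1, SAMPLE_RATE * CHUNK_SECONDS)
b, c = process_chunk(chunk_a)
print(len(chunk_a) / SAMPLE_RATE, len(b) / SAMPLE_RATE, len(c) / SAMPLE_RATE)
# -> 4.0 16.0 4.0 seconds
```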