
Everything posted by KnightsFan
-
Which Will Be Adopted Sooner: 4K or Rec. 2020 / HDR Broadcasting???
KnightsFan replied to Mark Romero 2's topic in Cameras
I think the difference may be that for 4K, commercials can zoom in and say "if you had a 4k screen, you would see THIS much detail!" and that demonstration works pretty well, even on an HD screen. HDR is literally something you can't display on your current screen, so marketing is like "well, we can't show you what it is unless you buy the hardware." It's way too abstract unless you either see it yourself, or have some prior knowledge about how displays work. The hurdle that I see with HDR is that Rec.709 and sRGB are so entrenched, not just for pro/semi-pro web and TV broadcast, but for desktops, documents, video games, and everything else we see on screens. Scaling an HD image (whether it's a film or Windows Explorer) to a 4k screen is simple. I'm not sure how easy it is to coordinate all the moving parts for the switch to HDR. For example, I've got some old photos from ten years ago. If I get an HDR/WCG monitor, will those photos display properly? I don't know if they even have the necessary metadata to identify them as sRGB. Will my video games from the early 2000's look correct? How about my DVD collection? It seems like a bigger mess for backwards and/or forwards compatibility to go to HDR, compared to 4k. -
Which Will Be Adopted Sooner: 4K or Rec. 2020 / HDR Broadcasting???
KnightsFan replied to Mark Romero 2's topic in Cameras
Please correct me if I'm wrong, but I thought Rec.2020 only specifies 4k and 8k resolutions, using standard dynamic range, not HDR. Perusing Wikipedia, I'm finding:
- Rec.709: standard dynamic range, standard color gamut, 1080p, 8 or 10 bit
- Rec.2020: standard dynamic range, wide color gamut, 2160p/4320p, 10 or 12 bit
- Rec.2100: high dynamic range, wide color gamut, 1080p/2160p/4320p, 10 or 12 bit
So perhaps Alister Chapman was referring to Rec.2100? (Not trying to be pedantic, just making sure I understand the alphabet soup here!) Back on topic, I think 4k is easier to market for whatever reason, so we will see mass adoption of 4k before HDR. The public seems to "understand" 4k better than they do HDR. Moreover, we're all agreed on what 4K is, whereas HDR is still in a kind of format war from what I can see, between HLG and PQ. -
If it were me, I'd probably use a variety of programs to use the strongest tools of each. I'd animate the text in Blender (or the 3D package of your choice), as well as some of the other solid, static "hero" elements the camera mainly just circles around--for example, the room and chain at 0:16 in the embedded YouTube video. I'd prefer to do objects like this in a legit 3D package, because they can be modelled easily, have few moving parts, and I don't want to fight a software layout designed primarily for compositing. After rendering out the 3D parts--possibly in a few layers, or maybe with a separate Z-depth render--I'd bring those into After Effects or another compositor. There are some 2D elements which I'd do in the compositor, on top of the 3D renders. Elements that either don't require much perspective change, or are particle based (fireworks, smoke), are usually easier to fake in 2D than to simulate fully. The asteroid field from the Dailymotion link, and the planet in the background, would also be in AE. Some of the foreground elements, such as the trees at 0:30 in the YouTube video, can be sourced from real photos of trees and then composited in. If I were feeling adventurous, I'd use Fusion instead of After Effects. I've never used Fusion on anything complex, but AE is an unintuitive mess, so I'd love to give Fusion a spin. Remember that little things, such as proper motion blur, will help sell it. Depending on how well you want it to match camera footage, you could compress it in H.264, add noise, or something like that before use.
-
When I said I wanted better rolling shutter than the NX1, I meant by more than 0.45ms...
-
Enough drama, lets play the camera and lens game again!
KnightsFan replied to jagnje's topic in Cameras
Well, the first two people who did guess both got the brand correct, so there's that. If you want to prove that people can't tell the difference, next time share some high quality files, not an 8 Mbps YouTube video. One of my pet peeves is people pretending to see a camera's banding, compression, dynamic range, macro blocking, motion cadence (whatever that is), etc. from YouTube videos. Additionally, the vast majority of discussion I see about cameras is:
- ergonomics--shape of the camera (for ease of use; as you said, "more convenient")
- NDs, XLRs, battery, HDMI size (again, ease of use, not much impact on final output)
- crop factor (for lens compatibility)
- bitrate and codec (moot by YouTube, the destroyer of all images)
- stills capability (not applicable)
- color science (2/3 of us recognized Sony)
- low light capability (we don't have $20k in lights available, due to budget or type of shooting)
- rolling shutter (didn't have a strong feeling either way on this video)
Anyway, great job on the music video! It's very nicely done. -
Cropping for 120 doesn't mean it has less processing power than the NX1. The NX1's 120fps is not a full sensor readout either; line skipping vs 1:1 cropping is the same processing power. (If the XT3 is oversampling at all in 120fps, then it is using more processing power. The NX1's 120fps looks like a 1920x1080 readout at BEST.) Edit: also, adjusted for inflation, the XT3 is cheaper than the NX1 was.
-
Enough drama, lets play the camera and lens game again!
KnightsFan replied to jagnje's topic in Cameras
It looked Sony to me too; my guess would be A6500. Pretty impossible to tell after grade and compression, though. -
I knew this would eventually come up. Perspective distortion is only due to distance between camera and subject. Focal length has nothing to do with it.
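A quick sketch of why that's true, under the usual thin-lens approximation (the numbers and function are mine, just for illustration): the size of an object on the sensor is proportional to focal length divided by distance, so the *ratio* of a near object's size to a far object's size -- which is what we perceive as perspective -- cancels focal length out entirely.

```python
def apparent_size(real_size_m, distance_m, focal_mm):
    """Approximate on-sensor image size, thin-lens approximation:
    proportional to focal length / subject distance."""
    return real_size_m * focal_mm / distance_m

# Same 1 m object at 2 m and at 4 m, shot at wide, normal, and telephoto:
for focal in (24, 50, 200):
    near = apparent_size(1.0, 2.0, focal)
    far = apparent_size(1.0, 4.0, focal)
    # The near/far size ratio (perspective) is 2.0 at every focal length;
    # only the overall magnification changes.
    print(focal, near / far)
```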
-
Do ALL LOG Profiles Require / Benefit From Significant Overexposure?
KnightsFan replied to Mark Romero 2's topic in Cameras
Of course! I suppose I was getting a bit off topic as I wasn't directing that part of my post towards the OP. -
Do ALL LOG Profiles Require / Benefit From Significant Overexposure?
KnightsFan replied to Mark Romero 2's topic in Cameras
I had a chance to play with an FS7 recently, and I felt something was off with exposure. So I did a comparison with my NX1. I set my NX1 to ISO 800 and the FS7 at its "base ISO" of 2000. In Resolve, I applied the built-in S-Log3 to Rec.709 conversion to the FS7 footage, and it was darker than the NX1. ISO 2000 vs ISO 800. So I checked with my light meter, which unsurprisingly agreed with the NX1. So in what world was it ISO 2000? One where you make middle grey from the washed out S-Log3 file remain middle grey after grading? In other words, one where you either keep the washed-out look, or clip the top four stops of highlights. Whether you want to say "overexpose" on set or "underexpose" referring to post doesn't matter. The issue is that it's universally accepted (except by the manufacturer) that you should pretend your camera's ISO reading is actually a stop lower than it says. Hence: Exactly... That was basically my thought. Changing to log doesn't change the analog gain that Sony (or Panasonic, etc.) is using; it just lets them put up some far-fetched number for "low light" performance. The truth is, everyone should test their camera extensively and find out what exposure works best, regardless of what the numbers say. -
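For reference, the gap between two ISO ratings expressed in stops is just the base-2 log of their ratio -- a quick worked version of the arithmetic behind the ISO 800 vs ISO 2000 comparison above (illustrative only, not measured data):

```python
import math

def stops_between(iso_a, iso_b):
    """Exposure difference in stops between two ISO ratings.
    One stop = a doubling of sensitivity."""
    return math.log2(iso_b / iso_a)

# ISO 800 -> ISO 2000 is about 1.3 stops, so if the log footage meters
# like ISO 800, the "base ISO 2000" rating is off by over a stop.
print(round(stops_between(800, 2000), 2))
```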
There's a pinned topic in the Shooting subforum here on EOSHD for sharing videos straight out of camera. You can often find files with some digging. Sometimes you can download the original file from Vimeo, sometimes people have links in the YouTube description. There are a lot of files for popular cameras (A7s2, GH5, etc) out there on forums. It would be nice if there was a more centralized place for such files.
-
Do ALL LOG Profiles Require / Benefit From Significant Overexposure?
KnightsFan replied to Mark Romero 2's topic in Cameras
This is something I've always wondered about, but to me it seems like manufacturers pretty much lie with ISO ratings. If you have to "overexpose" by two stops, doesn't that mean the ISO rating is off by two stops? -
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
Not surprised! It would be nice if they took a note out of Canon's playbook, and included a rear filter slot and/or electronic ring on the adapter. -
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
https://en.m.wikipedia.org/wiki/Flange_focal_distance -- look at this list. Any lens with a flange distance LONGER than the camera's can be adapted with a simple mechanical adapter, but not vice versa. So you can put an EF lens on an L mount, but you can't put an L lens on an EF mount. That means you can adapt F, EF, M42, PL, and many others to L. Whether or not you can electronically control the adapted lens depends on whether the lens protocol is open, or if someone has reverse engineered it. -
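The rule boils down to a single comparison. Here's a minimal sketch, using flange distances as they appear in the Wikipedia list linked above (values in mm; double-check against the article before relying on them):

```python
# Flange focal distances in mm, per the Wikipedia list linked above.
FLANGE_MM = {
    "L": 20.0,
    "Sony E": 18.0,
    "Canon EF": 44.0,
    "Nikon F": 46.5,
    "M42": 45.46,
    "PL": 52.0,
}

def mechanically_adaptable(lens_mount, body_mount):
    """A simple mechanical adapter only works if the lens's flange distance
    is longer than the body's, so the adapter's thickness makes up the gap."""
    return FLANGE_MM[lens_mount] > FLANGE_MM[body_mount]

print(mechanically_adaptable("Canon EF", "L"))  # EF lens on an L body: works
print(mechanically_adaptable("L", "Canon EF"))  # L lens on an EF body: no room
```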
There is no rule mandating a 1:1 correlation between dynamic range captured and dynamic range displayed. Measuring the "stops of DR" of Rec.709 is based on an 8 bit image with a standard gamma curve. This means that on a Rec709 screen, the light emitted by a white pixel is 5 stops brighter than the light emitted by a black pixel. It doesn't matter what the pixels refer to--an "untouched" video from your camera, CGI, some text, a webpage, etc. The amount of compression (or expansion) of dynamic range that you apply to an image is just an artistic choice. It's just like a CGI render: your image doesn't HAVE to look like the real world, but it will end up on a screen that emits 5 stops of DR. So, to make a "natural" looking image, you should display about 5 stops of real world DR on that screen. However, we've universally agreed that we like a little bit of compression at least, so we put maybe 8 stops of the real world onto our 5 stop display. But if you show straight up log footage with 12 or 14 stops, it looks really unnatural.
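A toy illustration of that compression (my numbers and function, just to make the arithmetic concrete): if we linearly rescale a scene's range of stops onto the ~5 stops the post says a Rec.709 display emits, an 8-stop scene loses a little contrast per stop, and a 12- or 14-stop log image gets squashed much harder, which is why ungraded log looks so flat.

```python
DISPLAY_STOPS = 5.0  # the display DR figure used in the post

def to_display_stops(scene_stop, scene_range_stops):
    """Linearly map a scene value (in stops above black) onto a
    DISPLAY_STOPS-stop display. Purely illustrative tone mapping."""
    return scene_stop * (DISPLAY_STOPS / scene_range_stops)

# One scene stop becomes 5/8 of a display stop for an 8-stop scene,
# but only 5/14 of a display stop for a 14-stop log capture.
print(to_display_stops(1, 8))   # mild compression
print(to_display_stops(1, 14))  # heavy compression -> flat, "unnatural" look
```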
-
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
I wonder if Panasonic will use a <1mm sensor stack like Leica does? If so, some lenses designed for film might have slightly better performance here than on the new Sony/Canon/Nikon. -
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
Pixel shift would be amazing! I occasionally need 5000x5000 square photos, which I use to make seamlessly tiling 4k textures. If the lower resolution model can get that 5000 line height via pixel shift, it would be the only system so far that would solve my occasional need for high resolution stills, while providing good video. -
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
Glad to see that there is, in fact, a low resolution version. Hopefully it's priced competitively with the Z6/A73. -
Stabilizing in-camera allows GoPro to use hardware sensors (accelerometer, gyroscope) in combination with image analysis to determine which movements and rotations to compensate for. Doing it in post, you can only analyze the images, which, as @Yurolov pointed out, are already compressed.
-
Panasonic announcing a full frame camera on Sept. 25???
KnightsFan replied to Trek of Joy's topic in Cameras
That was my first thought, except I don't even need FF. I just want to be able to adapt my existing lenses to it. -
Atomos Ninja V modules - Now All Your Stills Cameras Have Timecode!
KnightsFan replied to IronFilm's topic in Cameras
Because I usually clean tracks in Audition or Sound Forge, and render those out to new files. For example, I might take a boom file, and then render out one version with noise reduction for dialog, and then another that gets some of the effects sounding right, etc. I can't do that cleaning stage before picture lock, and I don't want to mess around with replacing audio files in Resolve just to send updated XMLs over to Reaper every time I need to add a new file in. It would be way easier to just drag that new file into Reaper and have it auto-align. -
Atomos Ninja V modules - Now All Your Stills Cameras Have Timecode!
KnightsFan replied to IronFilm's topic in Cameras
@buggz I tried Fairlight when it was first included, and found it very unstable. I didn't have any actual crashes, but audio would regularly cut off, or get something like a 24 dB attenuation in one or more channels, with absolutely no way to get it back to normal other than moving the contents of that track to a new track and then deleting the old one. This would happen in projects where I didn't touch Fairlight or any audio controls at all. I've been using Reaper for a while and really like it as well, so I have no plans to switch--yet! I love the level of customization you can do to everything, from layout to hotkeys, etc. I also find it to have a very intuitive layout and menu structure. It's usually pretty easy for me to do things I've never done before, because it just works the way I expect. However, I haven't been able to find much information about timecode in Reaper, other than syncing for live performances. -
Atomos Ninja V modules - Now All Your Stills Cameras Have Timecode!
KnightsFan replied to IronFilm's topic in Cameras
Thanks for the explanation! I had a suspicion syncing was usually done before editing. However, on my projects, I AM sound post! Which gives me a lot of flexibility in making my own workflow. My ideal workflow would be to get picture lock without ever importing audio files into Resolve, and then send my project to Reaper. It would be nice to simply be able to import an audio file into Reaper, right click on an audio clip, and have some sort of "align to timecode" button. AFAIK this feature doesn't exist, BUT Reaper has some extensive scripting capabilities, so I will look into creating this feature myself. I imagine I'd have to loop through all media items from my picture edit, read the timecode and duration, and then use that info to tell whether or not the audio in question should be aligned to that media item. The benefit of this would be: less synchronization required between Resolve and Reaper, no importing audio to Resolve at all, and audio sync issues would be solved in Reaper instead of Resolve. -
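The loop described above is mostly just placement arithmetic. Here's a hedged sketch of it -- everything here is hypothetical (the data model and function are mine, not Reaper's ReaScript API); it only shows the overlap test and offset calculation such a script would need:

```python
def place_audio(clips, audio_start_tc, audio_len):
    """Given picture clips as (timeline_pos, source_tc, duration) tuples,
    all in seconds, return the timeline positions where an audio file
    with start timecode audio_start_tc should be placed."""
    placements = []
    for timeline_pos, source_tc, duration in clips:
        # Does the audio's timecode range overlap this clip's source range?
        if (audio_start_tc < source_tc + duration
                and audio_start_tc + audio_len > source_tc):
            # Offset the audio so matching timecodes line up on the timeline.
            placements.append(timeline_pos + (audio_start_tc - source_tc))
    return placements

# One clip at timeline position 10.0 s, shot at source TC 3600.0 s, 20 s long.
# Audio starting at TC 3605.0 s lands 5 s into the clip, i.e. at 15.0 s.
clips = [(10.0, 3600.0, 20.0)]
print(place_audio(clips, 3605.0, 30.0))  # [15.0]
```

In a real ReaScript you'd pull the clip positions and source timecodes from the project instead of a hard-coded list, but the matching logic would be the same.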
Atomos Ninja V modules - Now All Your Stills Cameras Have Timecode!
KnightsFan replied to IronFilm's topic in Cameras
So do you have to sync by appending audio to your video clips before editing? Is there a way to edit first, and then sync afterwards? -
Atomos Ninja V modules - Now All Your Stills Cameras Have Timecode!
KnightsFan replied to IronFilm's topic in Cameras
I've never had the chance to use timecode on any of my projects. Does anyone have a good workflow to sync audio and video with timecode, either in Resolve or Reaper?