
KnightsFan


Posts posted by KnightsFan

  1. 41 minutes ago, androidlad said:

    BT.2020 for now is only used as a container colour space, only a handful of RGB laser projectors can fully cover the gamut. Current UHD Bluray discs are encoded in P3 D65 colour space using BT.2020 primaries.

    To be clear, that's just because the content itself was mastered in P3. The P3-based image data needs a transformation to look correct if it is displayed in the BT.2020 space. So it's not "encoded" in P3, it's encoded in BT.2020, but doesn't use the parts of the BT.2020 gamut that are outside the P3 gamut. Right?
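
    As a rough numpy sketch of that mastering step (nothing from the actual disc spec; the primaries and white point below are just the published P3 D65 and BT.2020 chromaticities), the container conversion is a 3x3 matrix on linear RGB:

    ```python
    import numpy as np

    def rgb_to_xyz_matrix(primaries, white):
        """primaries: [(xR, yR), (xG, yG), (xB, yB)]; white: (xW, yW)."""
        cols = np.array([[x, y, 1 - x - y] for x, y in primaries]).T  # columns = R, G, B
        cols = cols / cols[1]                  # scale each column so its y component is 1
        w = np.array([white[0], white[1], 1 - white[0] - white[1]]) / white[1]
        scale = np.linalg.solve(cols, w)       # make R=G=B=1 land on the white point
        return cols * scale

    D65 = (0.3127, 0.3290)
    P3_D65 = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
    BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

    # Linear P3 RGB -> XYZ -> linear BT.2020 RGB. Since P3 D65 sits entirely inside
    # BT.2020, in-gamut P3 values stay within [0, 1] after the transform.
    p3_to_2020 = np.linalg.inv(rgb_to_xyz_matrix(BT2020, D65)) @ rgb_to_xyz_matrix(P3_D65, D65)
    print(np.round(p3_to_2020, 4))
    ```

    Applying that matrix to linear P3 pixel values gives the BT.2020-referred values that actually get encoded, which is exactly why the disc never touches the parts of BT.2020 outside P3.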

  2. I think the difference may be that for 4K, commercials can zoom in and say "if you had a 4k screen, you would see THIS much detail!" and that demonstration works pretty well, even on an HD screen. HDR is literally something you can't display on your current screen, so marketing is like "well, we can't show you what it is unless you buy the hardware." It's way too abstract unless you either see it yourself, or have some prior knowledge about how displays work.

    The hurdle that I see with HDR is that Rec 709 and sRGB are so entrenched, not just for pro/semi-pro web and TV broadcast, but for desktops, documents, video games, and everything else we see on screens. Scaling an HD image (whether it's a film or Windows Explorer) to a 4k screen is simple. I'm not sure how easy it is to coordinate all the moving parts for the switch to HDR. For example, I've got some old photos from ten years ago. If I get an HDR/WCG monitor, will those photos display properly? I don't know if they even have the necessary metadata to identify them as sRGB. Will my video games from the early 2000s look correct? How about my DVD collection?

    It seems like a bigger mess for backwards and/or forwards compatibility to go to HDR, compared to 4k.

  3. Please correct me if I'm wrong, but I thought Rec.2020 only specifies 4k and 8k resolutions, using standard dynamic range, not HDR. Perusing Wikipedia, I'm finding:

    Rec.709: standard dynamic range, standard color gamut, 1080p, 8 or 10 bit

    Rec.2020: standard dynamic range, wide color gamut, 2160p/4320p, 10 or 12 bit

    Rec.2100: high dynamic range, wide color gamut, 1080p/2160p/4320p, 10 or 12 bit

    So perhaps Alistair Chapman was referring to Rec.2100? (Not trying to be pedantic, just making sure I understand the alphabet soup here!)

     

    Back on topic, I think 4k is easier to market for whatever reason, so we will see mass adoption of 4k before HDR. The public seems to "understand" 4k better than they do HDR. Moreover, we're all agreed on what 4K is, whereas HDR, from what I can see, is still in a kind of format war between HLG and PQ.

  4. If it were me, I'd probably use a variety of programs to use the strongest tools of each. I'd animate the text in Blender (or the 3D package of your choice), as well as some of the other solid, static "hero" elements the camera mainly just circles around. For example, the room and chain at 0:16 in the embedded youtube video. I'd prefer to do objects like this in a legit 3D package, because they can be modelled easily, have few moving parts, and I don't want to fight a software layout designed primarily for compositing.

    After rendering out the 3D parts (possibly in a few layers, or maybe with a separate Z-depth render), I'd bring those into After Effects or another compositor.
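
    For what it's worth, a minimal sketch of that render setup, assuming Blender 2.8+'s Python API and a placeholder output path:

    ```python
    # Render the 3D "hero" elements to multilayer EXR with a Z/depth pass,
    # so the compositor can stack them over the 2D elements later.
    import bpy

    scene = bpy.context.scene
    bpy.context.view_layer.use_pass_z = True          # include a depth pass
    scene.render.film_transparent = True              # keep the background as alpha
    scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
    scene.render.filepath = "//renders/title_"        # placeholder, relative to the .blend
    bpy.ops.render.render(animation=True)             # render the whole animation
    ```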

    There are some 2D elements which I'd do in the compositor, on top of the 3D renders. Elements that either don't require much perspective change, or are particle based (fireworks, smoke), are usually easier to fake in 2D than to simulate fully. The asteroid field from the Dailymotion link and the planet in the background would also be done in AE.

    Some of the foreground elements, such as the trees at 0:30 in the YouTube video, can be sourced from real photos of trees and then composited in.

    If I were feeling adventurous, I'd use Fusion instead of After Effects. I've never used Fusion on anything complex, but AE is an unintuitive mess, so I'd love to give Fusion a spin.

    Remember that little things, such as proper motion blur, will help sell it. Depending on how well you want it to match camera footage, you could compress it in H.264, add noise, or something like that before use.

  5. 2 hours ago, jagnje said:

    people here demand top quality cameras and lenses but most can't tell one from the other

    Well, the first two people who did guess both got the brand correct, so there's that. If you want to prove that people can't tell the difference, next time share some high quality files, not an 8 Mbps YouTube video.

    One of my pet peeves is people pretending to see a camera's banding, compression, dynamic range, macro blocking, motion cadence (whatever that is), etc. from YouTube videos.

    Additionally, the vast majority of the discussion I see about cameras is about:

    - ergonomics/shape of the camera (for ease of use; as you said, "more convenient")

    - NDs, XLRs, battery, HDMI size (again, ease of use, not much impact on final output)

    - crop factor (for lens compatibility)

    - bitrate and codec (moot by YouTube, the destroyer of all images)

    - stills capability (not applicable)

    - color science (2/3 of us recognized Sony)

    - low light capability (we don't have $20k in lights available, due to budget or type of shooting)

    - rolling shutter (didn't have a strong feeling either way on this video)

    Anyway, great job on the music video! It's very nicely done.

  6. 1 hour ago, sandro said:

    Funny fact: no one (for the price) still can't match the NX1 processing power. 

    IBIS not there is bummer but God still cropping the sensor even for 1080p 120fps??? 

    Did you see for pics the ISO performance is much worse than the x-t2?

    Cropping for 120 doesn't mean it has less processing power than the NX1. The NX1's 120fps is not a full sensor readout either; line skipping vs. a 1:1 crop takes about the same processing power (and if the XT3 is oversampling at all at 120fps, then it is using more. The NX1's 120fps looks like a 1920x1080 readout at BEST).

    Edit: also, adjusted for inflation, the XT3 is cheaper than the NX1 was.

  7. 9 hours ago, Eric Calabros said:

    I think the overexpose is not the right term here. Exposure is light, filtered by time, filtered by aperture. Period. ISO has nothing to do with that.   But, basically you use higher ISO to "underexpose" and get equal brightness of normally exposed image (which is important for action photographers, they can use higher shutter speed without the image "look" darker). So in case of log, they don't tell you to overexpose, they just ask you to Not Underexpose! 

    I had a chance to play with an FS7 recently, and I felt something was off with exposure. So I did a comparison with my NX1. I set my NX1 to ISO 800 and the FS7 at its "base ISO" of 2000. In Resolve, I added the built-in SLOG to Rec709 conversion to the FS7 footage, and it was darker than the NX1. ISO 2000 vs ISO 800. So I checked with my light meter, which unsurprisingly agreed with the NX1.

    So in what world was it ISO 2000? One where you make middle grey from the washed-out SLOG3 file remain middle grey after grading? In other words, one where you either keep the washed-out look, or clip the top four stops of highlights.

    Whether you want to say "overexpose" on set or "underexpose" in post doesn't matter. The issue is that it's universally accepted (except by the manufacturer) that you should pretend that your camera's ISO reading is actually a stop lower than it says.

    Hence:

    4 hours ago, Robert Collins said:

    If you think about it - a camera (as in say the A7iii) which has a base iso of 100, why would it really choose to shoot SLOG at an iso of 800 (lets forget about the dual iso bit for the moment)? If it did so, it would effectively reduce dynamic range by close to 3 stops and increase noise by close to 3 stops. The answer is it doesnt record slog at iso 800 but at iso 100 and therefore you have to 'overexpose' to 'expose' correctly by +2EV+.

    Exactly... that was basically my thought. Changing to log doesn't change the analog gain that Sony (or Panasonic, etc.) is using; it just lets them put up some far-fetched number for "low light" performance.

    The truth is, everyone should test their camera extensively and find out what exposure works best, regardless of what the numbers say.
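
    For a quick sanity check on the numbers in this thread, the stop difference between two ISO ratings is just log2 of their ratio:

    ```python
    from math import log2

    # Stop difference between two ISO ratings = log2(high / low)
    print(f"FS7 base 2000 vs NX1 at 800: {log2(2000 / 800):.2f} stops")  # ~1.32 stops
    print(f"SLOG rated 800 vs base 100:  {log2(800 / 100):.1f} stops")   # 3.0 stops
    ```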

     

  8. 13 hours ago, AaronChicago said:

    Where do you guys watch uncompressed videos online?

    There's a pinned topic in the Shooting subforum here on EOSHD for sharing videos straight out of camera.

    You can often find files with some digging. Sometimes you can download the original file from Vimeo, sometimes people have links in the YouTube description. There are a lot of files for popular cameras (A7s2, GH5, etc) out there on forums. It would be nice if there was a more centralized place for such files.

  9.  

    Which mounts can be adapted to the L mount?

    https://en.m.wikipedia.org/wiki/Flange_focal_distance

    Look at this list. Any lens whose mount has a LONGER flange distance than the camera's mount can be adapted with a simple mechanical adapter, but not vice versa. So you can put an EF lens on an L mount, but you can't put an L lens on an EF mount.

    So you can adapt F, EF, M42, PL, and many others to L.

    Whether or not you can electronically control the adapted lens depends on whether the lens protocol is open, or whether someone has reverse engineered it.
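
    The rule is simple enough to sketch in a few lines of Python (the flange distances below are the approximate millimetre figures from that Wikipedia list):

    ```python
    # Approximate flange focal distances in millimetres (from the Wikipedia list above)
    FLANGE_MM = {
        "L": 20.0,
        "EF": 44.0,
        "F": 46.5,
        "M42": 45.46,
        "PL": 52.0,
    }

    def can_adapt(lens_mount: str, camera_mount: str) -> bool:
        """A simple mechanical adapter only works if the lens's native flange distance
        is longer than the camera's, leaving physical room for the adapter itself."""
        return FLANGE_MM[lens_mount] > FLANGE_MM[camera_mount]

    print(can_adapt("EF", "L"))  # True:  EF lens on an L-mount body
    print(can_adapt("L", "EF"))  # False: L lens on an EF body
    ```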

  10. 8 hours ago, mercer said:

    I’m confused. I understand that Rec709 (for instance) only has so many visible stops but if you display one of those dynamic range charts in a Rec 709 space, you can clearly see more than 5 stops of DR.

    And there’s also the effect of DR on color tonality, highlight rolloff, etc. So it seems it is a little more complex than just to say you can only display so many stops in any specific space?

    There is no rule mandating a 1:1 correlation between dynamic range captured and dynamic range displayed.

    Measuring the "stops of DR" of Rec.709 is based on an 8-bit image with a standard gamma curve. This means that on a Rec.709 screen, the light emitted by a white pixel is 5 stops brighter than the light emitted by a black pixel. It doesn't matter what the pixels refer to--an "untouched" video from your camera, CGI, some text, a webpage, etc.

    The amount of compression (or expansion) of dynamic range that you apply to an image is just an artistic choice. It's just like a CGI render: your image doesn't HAVE to look like the real world, but it will end up on a screen that emits 5 stops of DR. So, to make a "natural" looking image, you should display about 5 stops of real world DR on that screen. However, we've universally agreed that we like a little bit of compression at least, so we put maybe 8 stops of the real world onto our 5 stop display. But if you show straight up log footage with 12 or 14 stops, it looks really unnatural.
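
    A toy illustration of that trade-off (taking the ~5 display stops above as a given): the more scene stops you squeeze into the same display range, the less on-screen contrast each scene stop gets, which is why straight log looks flat.

    ```python
    # Toy arithmetic: squeezing more scene stops into a fixed ~5-stop display range
    # leaves less on-screen contrast per scene stop.
    display_stops = 5  # display contrast range assumed in the post above
    for scene_stops in (5, 8, 12, 14):
        per_stop = display_stops / scene_stops
        print(f"{scene_stops:>2} scene stops -> {per_stop:.2f} display stops each")
    ```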

  11. 1 hour ago, hoodlum said:

    Looks like it will have IBIS.

     

    https://www.l-rumors.com/l5-on-sept-25-panasonic-will-annonce-the-development-of-two-ff-cameras-and-three-ff-lenses/

    1) Entry level Full Frame L-mount camera with IBIS, 4k60p and Low Resolution Sensor
    2) Pro level Full Frame L-mount camera with IBIS, 4k60p and High Resolution Sensor (close to 50MP)
    3) New Full Frame 24-70mm zoom lens
    4) New Full Frame 50mm fast prime
    5) New Full Frame 70-200mm zoom prime

    Glad to see that there is, in fact, a low resolution version. Hopefully it's priced competitively with the Z6/A73.

  12. 7 minutes ago, Robert Collins said:

    But isnt 'electronic stabilisation' just something that you can do in post (with a much more powerful processor)? Mechanical stabilisation aka ibis or say DJI Mavic gimbal is stabilisation that works before PP.

    Stabilizing in-camera allows GoPro to use hardware sensors (accelerometer, gyroscope) in combination with image analysis to determine which movements and rotations to compensate for. Doing it in post can only analyze the images, which, as @Yurolov pointed out, are already compressed.
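
    A toy sketch of the gyro half of that (the sample rate, frame rate, and readings are all made up for illustration): integrate angular velocity over each frame interval to get a counter-rotation that doesn't depend on the compressed pixels at all.

    ```python
    import numpy as np

    frame_rate = 30                 # hypothetical frame rate
    gyro_rate_hz = 400              # hypothetical gyro sample rate
    samples_per_frame = gyro_rate_hz // frame_rate

    # Hypothetical gyro readings for one frame: angular velocity (rad/s) around the roll axis
    gyro_roll = np.random.normal(0.0, 0.05, samples_per_frame)

    # Integrate angular velocity over the frame; counter-rotating by this angle
    # stabilizes roll without ever analyzing the (already compressed) image.
    roll_correction = -np.sum(gyro_roll) / gyro_rate_hz
    print(f"Counter-rotate this frame by {np.degrees(roll_correction):.3f} degrees")
    ```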

  13. 13 minutes ago, IronFilm said:

    Then why not sync up files from day one? Mute all but the boom track, and visual edit to that. Then dig into the rest of the tracks when you're focusing on the sound phase

    Because I usually clean tracks in Audition or Sound Forge, and render those out to new files. For example, I might take a boom file, and then render out one version with noise reduction for dialog, and then another that gets some of the effects sounding right, etc. I can't do that cleaning stage before picture lock, and I don't want to mess around with replacing audio files in Resolve just to send updated XMLs over to Reaper every time I need to add a new file in. It would be way easier to just drag that new file into Reaper and have it auto align.

14. @buggz I tried Fairlight when it was first included, and found it very unstable. I didn't have any actual crashes, but audio would regularly cut off or have something like a 24 dB attenuation in one or more channels, with absolutely no way to get it back to normal other than moving the contents of that track to a new track and then deleting the old one. This would happen in projects where I didn't touch Fairlight or any audio controls at all.

    I've been using Reaper for a while and really like it as well, so I have no plans to switch--yet! I love the level of customization you can do to everything, from layout to hotkeys, etc. I also find it to have a very intuitive layout and menu structure. It's usually pretty easy for me to do things I've never done before because it just works the way I expect.

    However, I haven't been able to find much information about timecode in Reaper, other than syncing for live performances.

  15. 9 hours ago, IronFilm said:

    I'd recommend syncing at the start, then handing over OMF/AAF to sound post after there is a picture lock. 

    Thanks for the explanation! I had a suspicion syncing was usually done before editing. However, on my projects, I AM sound post, which gives me a lot of flexibility in making my own workflow.

    My ideal workflow would be to get picture lock without ever importing audio files into Resolve, and then send my project to Reaper. It would be nice to simply be able to import an audio file to Reaper, right click on an audio clip and have some sort of "align to timecode" button. Afaik this feature doesn't exist, BUT Reaper has some extensive scripting capabilities, so I will look into creating this feature myself. I imagine I'd have to loop through all media items from my picture edit, read the timecode and duration, and then use that info to tell whether or not the audio in question should be aligned to that media item.

    The benefit of this would be: less synchronization required between Resolve and Reaper, no importing audio to Resolve at all, and audio sync issues would be solved in Reaper instead of Resolve.
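
    As a starting point for that script, here's a rough sketch in plain Python (the file name is a placeholder) that reads the BWF "bext" TimeReference from a WAV file; inside Reaper, the resulting start time could then be written to an item's position via ReaScript (e.g. RPR_SetMediaItemInfo_Value with "D_POSITION").

    ```python
    import struct

    def bwf_time_reference(path):
        """Return (time_reference_samples, sample_rate) from a Broadcast Wave file,
        or (None, None) if the file has no bext chunk."""
        with open(path, "rb") as f:
            riff, _, wave = struct.unpack("<4sI4s", f.read(12))
            if riff != b"RIFF" or wave != b"WAVE":
                raise ValueError("not a RIFF/WAVE file")
            time_ref, sample_rate = None, None
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                chunk_id, chunk_size = struct.unpack("<4sI", header)
                data = f.read(chunk_size + (chunk_size & 1))   # chunks are word-aligned
                if chunk_id == b"fmt ":
                    sample_rate = struct.unpack_from("<I", data, 4)[0]
                elif chunk_id == b"bext":
                    # bext layout: Description(256) Originator(32) OriginatorReference(32)
                    # OriginationDate(10) OriginationTime(8) TimeReferenceLow/High (2 x uint32)
                    low, high = struct.unpack_from("<II", data, 338)
                    time_ref = (high << 32) | low
            return time_ref, sample_rate

    samples, rate = bwf_time_reference("boom_take_01.wav")   # placeholder file name
    if samples is not None and rate:
        start_seconds = samples / rate   # time-of-day timecode as seconds since midnight
        print(f"Clip should start at {start_seconds:.3f} s of timecode")
    ```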
