KnightsFan

Reputation Activity

  1. Like
    KnightsFan got a reaction from aaa123jc in The Panasonic DC-BGH1 camera soon to be announced   
    I don't know what all the negativity is about; this looks pretty good to me. Worse specs than a Z Cam E2, but you gain the Panasonic brand (brands aren't my thing, but they're worth real money), SDI, timecode without an annoying adapter, and you can use that XLR module if you want. Plus it takes SD cards instead of CFast. If Z Cam didn't exist I'd get this for sure.
  2. Like
    KnightsFan reacted to tupp in Image thickness / density - help me figure out what it is   
    Of course, a lot of home movies weren't properly exposed and showed scenes with a huge contrast range that the emulsion couldn't handle.  However, I did find some examples that have decent exposure and aren't too faded.
     
    Here's one from the 1940s showing a fairly deep blue, red, and yellow, and then a rich color on a car.
     
    Thick greens here, and later a brief moment showing solid reds, and some rich cyan and indigo.  Unfortunately, someone added a fake gate with a big hair.
     
    A lot of contrast in these shots, but the substantial warm greens and the warm skin and wood tones shine, and one of the later shots with a better "white balance" shows a nice, complex blue on the eldest child's clothes.
     
    Here is a musical gallery of Kodachrome stills.  Much less fading here.  I'd like to see these colors duplicated in digital imaging.  Please note that Paul Simon's "Kodachrome" lyrics don't exactly refer to the emulsion!
     
    OP's original question concerns getting a certain color richness that is inherent in most film stocks but absent from most digital systems.  It doesn't involve lighting, per se, although there has to be enough light to get a good exposure and there can't be too much contrast in the scene.
     
     
    We have no idea if OP's simulated images are close to how they should actually appear, because 80%-90% of the pixels in those images fall outside of the values dictated by the simulated bit depth.  No conclusions can be drawn from those images.
     
    By the way, I agree that banding is not the most crucial consideration here -- banding is just a posterization artifact to which lower bit depths are more susceptible.  I maintain that color depth is the primary element of the film "thickness" in question.
  3. Like
    KnightsFan got a reaction from tupp in Image thickness / density - help me figure out what it is   
    Got some examples? Because I generally don't see those typical home videos as having thick images.
    They're pretty close; I don't really care if there's dithering or compression adding in-between values. You can clearly see the banding, and my point is that while banding is ugly, it isn't the primary factor in thickness.
  4. Like
    KnightsFan got a reaction from tupp in Image thickness / density - help me figure out what it is   
    I've certainly been enjoying this discussion. I think that image "thickness" is 90% what is in frame and how it's lit. I think @hyalinejim is right to talk about shadow saturation, because "thick" images are usually ones that have deep, rich shadows with only a few bright spots that serve to accentuate how deep the shadows are, rather than show highlight detail. Images like the ones above of the gas station and the faces don't feel thick to me, since they have huge swathes of bright areas, whereas the pictures that @mat33 posted on page 2 have that richness. It's not a matter of reducing exposure; it's that the scene has those beautiful dark tonalities and gradations, along with some nice saturation.
    Some other notes:
    - My Raw photos tend to end up being processed with a more linear curve than Rec709/sRGB, which gives them deeper shadows and thus more thickness.
    - Hosing down a scene with water will increase contrast and vividness for a thicker look. Might be worth doing some tests on a hot sunny day, before and after hosing it down.
    - Bit depth comes into play, if only slightly. The images @kye posted basically had no difference in the highlights, but in the dark areas banding is very apparent. Lower bit depth hurts shadows because so few bits are allocated to those bottom stops (see the rough numbers sketched after this list). To be clear, I don't think bit depth is the defining feature, nor is compression for that matter.
    - I don't believe there is any scene where a typical mirrorless camera with a documented color profile will look significantly less thick than an Alexa given a decent colorist--I think it's 90% the scene, and then mostly color grading.
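    To put rough numbers on the bit-allocation point above (my own back-of-envelope sketch, assuming a purely linear encoding -- log/gamma curves redistribute the values, but the tendency holds):
        # number of code values in each of the top 5 stops of a linear encoding
        for bits in 8 10; do
            levels=$((1 << bits)); row=""
            for s in 1 2 3 4 5; do row="$row $((levels >> s))"; done
            echo "${bits}-bit:$row"
        done
        # 8-bit:  128 64 32 16 8
        # 10-bit: 512 256 128 64 32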
  5. Like
    KnightsFan got a reaction from aaa123jc in Need some help with wireless microphone system   
    I don't have super extensive experience with wireless systems. I've used Sennheiser G3's and Sony UWP's on student films, and they've always been perfectly reliable. The most annoying part is that half of film students don't know that you have to gain stage the transmitter...
    Earlier this year I bought a Deity Connect system; it couldn't transmit a signal two feet, and I RMA'd it as defective. A real shame, as they get great reviews and are at a great price. What I can say is their build quality, design, and included accessories are phenomenal. I very nearly just bought another set, but my project needs changed and I got a pair of Rode Wireless Go's instead.
    I've been quite happy with the Go's for my specific use case, which is a single studio room. For this project, having zero wires is very beneficial--we're clipping the transmitters to people and using the built-in mics--so they are great in that sense. If your use case is short range and you aren't worried about missing the locking connector, you can save a lot of money with them. I will say I wouldn't trust them for "normal" films, as the non-locking connector is a non-starter no matter the battery life and range. Though I think they will still be useful as plant mics; they are absolutely tiny!
  6. Like
    KnightsFan got a reaction from Emanuel in Need some help with wireless microphone system   
  7. Like
    KnightsFan reacted to BrunoCH in X-T3 Question: 4:2:2 10-bit versus 4:2:0 10-bit gradeability   
    There is no difference in "gradeability" between 4:2:0 and 4:2:2.  The big difference in "gradeability" is 10-bit vs 8-bit. F-Log helps too. ProRes is easier on hardware than H.265, but that's another question.
  8. Like
    KnightsFan got a reaction from techie in Z-CAM E2 Sticky Thread   
    More amazing news from Z Cam: the E2-M4 is on the way, which has the same internals as the Z Cam E2 but with the body and interchangeable mount of the S6/F6. It does not have the multicam sync feature of the E2, however. It's $1500 in the US. If you bolt Z Cam's first-party focal reducer on the front, you basically have an S35 cinema camera with an active, locking EF mount in a tiny package for a low price.
    https://cvp.com/product/z-cam-e2-m4-cinematic-camera
    Also, the S6 and F6 will have price drops of $500 and $1k respectively.
  9. Like
    KnightsFan got a reaction from techie in Z-CAM E2 Sticky Thread   
    The new firmware (v0.95) enables ProRes RAW over HDMI to Atomos recorders on all E2 models. This includes the $800 E2c. This means that the E2 series now has not one, but two different Raw formats.
    I'm sort of surprised the Z Cam is still barely talked about here. It checks a lot of boxes that people complain about with the P4K. The entire series now has H.264, H.265, ProRes, ZRAW, and ProRes RAW recording capabilities, plus incredible frame rate options at full sensor width on the E2, multiple crop options and aspect ratios, low rolling shutter, long battery life, and incredible build quality. I recently got one, though probably won't have any projects to use it on for a few months.
  10. Thanks
    KnightsFan got a reaction from Ty Harper in Z97X-UD5H | 4790K | GTX 1080TI/MINI: add another 1080TI?   
    If there's no bottleneck, I personally would save my money haha. I think that for normal editing, your 1080TI probably already outperforms your CPU, so adding another would likely give no benefit.
  11. Thanks
    KnightsFan got a reaction from Ty Harper in Z97X-UD5H | 4790K | GTX 1080TI/MINI: add another 1080TI?   
    First, if you're editing high-bitrate footage from an external HDD, make sure that's not the bottleneck. A 7200 RPM drive reads at around 120 MB/s, and even two uncompressed 14-bit HD raw streams would go over that.
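    Rough math behind that claim (a sketch assuming 1080p, 14 bits per photosite, 24 fps):
        # one uncompressed 14-bit 1080p24 raw stream, in MB/s
        echo $(( 1920 * 1080 * 14 / 8 * 24 / 1000000 ))      # ~87 MB/s
        # two such streams -- already past a ~120 MB/s 7200 RPM drive
        echo $(( 2 * 1920 * 1080 * 14 / 8 * 24 / 1000000 ))  # ~174 MB/s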
    I have a GTX 1080 and use Resolve Studio. Earlier this year I upgraded from an i7 4770 to a Ryzen 3600, and got an enormous performance boost when editing HEVC. So while the decoding is done on the GPU, it's clear that the CPU can bottleneck as well. When I edit 4K H.264 or Raw my 1080 rarely maxes out.
    Overall I'd be pretty surprised if you need another/a new GPU for basic editing and color grading. Resolve Studio is a better investment imo than a second 1080.
  12. Like
    KnightsFan reacted to Andrew Reid in The EOSHD Interview - Kazuto Yamaki, CEO of Sigma and Takuma Wakamatsu, Sigma Fp Product Manager   
    I'd like to welcome Kazuto Yamaki, CEO of Sigma and the product manager of the Sigma Fp and Cinema lenses, Takuma Wakamatsu to the pages of EOSHD for this long interview!
    I have recently been shooting anamorphic with my Sigma Fp - the video you can see above was shot with the Rapido Technologies FVD-16A focus module housing a tiny Bolex Moller 8/19 anamorphic. I have more on this soon, as well as how ProRes RAW performs on the Sigma Fp attached to the Atomos Ninja V.
    Ever since Sigma embarked on the high-quality Art lenses, the company's factory has output higher and higher quality products, even industry-leading lenses in fact. Now the Sigma Fp marks their entry into filmmaking cameras too, with internal 4K RAW recording, external ProRes RAW and even BRAW - as well as being the smallest interchangeable lens full frame camera available on the market.
    Read the full interview:
    https://www.eoshd.com/news/the-eoshd-interview-kazuto-yamaki-ceo-of-sigma-and-takuma-wakamatsu-sigma-fp-product-manager/
  13. Like
    KnightsFan got a reaction from gethin in Complete list of 4K cameras for filmmakers   
    I think @IronFilm is right for narrative indie filmmakers. I can't speak for wedding/corporate/doc people, who may need more mobility, more AF, and less rigging. I think even music videos tend to require more mobility than narrative.
    In my experience, even the worst Blackmagic camera, the 2.5k EF mount, was preferable to the Panasonic and Sony mirrorless cameras that I also used back in ~2015. The only thing comparable was 5d3 raw, which was unreliable and an even bigger PITA to use (and also more expensive, incidentally).
    That said, photo/video hybrids are obviously a different class, which does exclude things like the Pocket or GoPros, so it's a useful categorization.
  14. Like
    KnightsFan reacted to kye in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    Prores from ffmpeg is significantly different - different bitrates and performance.

    Obviously this is a rabbit hole I'm going to be falling down for a while!
  15. Like
    KnightsFan got a reaction from kye in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    I found that you can set the encoding profile to Main10 or Main10 444, which encode 10-bit in 4:2:0 or 4:4:4 respectively. No 4:2:2 afaik. I only tried H.265, and this was on Windows using Resolve Studio.
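    For reference, roughly what those two profiles mean in pixel-format terms, expressed as ffmpeg/libx265 commands (a sketch only, not what Resolve does internally; assumes a 10-bit-capable libx265 build, and the file names are placeholders):
        # 10-bit 4:2:0 ("Main10"):
        ffmpeg -i reference.mov -c:v libx265 -pix_fmt yuv420p10le -crf 18 out_420_10bit.mp4
        # 10-bit 4:4:4 ("Main 4:4:4 10"):
        ffmpeg -i reference.mov -c:v libx265 -pix_fmt yuv444p10le -crf 18 out_444_10bit.mp4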
  16. Like
    KnightsFan got a reaction from User in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    @kye Yes, your H.264 files are 8 bit. So in your initial run, were all your tests generated with either Resolve or ffmpeg from your reference file, or were any made directly from the source footage?
    1% of the file size is not remotely true, unless you start with a tiny H.265 file and then render it into ProRes, which will increase the size without adding quality. I wouldn't trust much that comes from CineMartin.
    Of course, the content matters a lot for IPB efficiency. So I guess you might be able to engineer a 1% scenario if your video isn't moving, and you use the worst ProRes encoder that you can find. (Stuff like trees moving in the wind is actually pretty far on the "difficult for IPB" spectrum, though it looks like only about half your frame is that tree).
    Just btw, in my experience Resolve does a lot better with encoding when you use a preset rather than a defined bitrate. Emphasis, a lot better.
  17. Like
    KnightsFan got a reaction from User in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    Great tests! One thing I will say is that when I did my ProRes vs H265 tests, I tested on a >HD Raw file in order to maximize the quality of the reference file, to avoid softness and artifacts from debayering. Additionally, my reference file was 4:4:4 rather than 4:2:2... fwiw.
    The other thing that would be nice is some files to look at, since while SSIM is great to have, it's not the only way to evaluate compression.
    Also quick question: Are your H.264/H.265 files 4:2:0 or 4:2:2? 10 bit or 8 bit?
  18. Like
    KnightsFan reacted to kye in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    In my initial run the Resolve ones were from the original timeline, but all the ffmpeg ones were from my 422 reference file, which I have now replaced with a real reference file.  I'll start re-running the conversions again.
    I'm thinking I'll do h264 IPB, h264 ALL-I, h265 IPB, h265 ALL-I, all in 10-bit to begin with.
    The 1% of file size seemed fishy, but there isn't much out there, and people do a lot of comparing h264 and h265 but not against Prores.
    Considering how good ffmpeg is compared to Resolve I'm now wondering if I should export a high quality file from Resolve and then use ffmpeg to make the smaller one.  Of course, I publish to YT so probably not, though.
  19. Like
    KnightsFan reacted to SoFloCineFile in Testing Danne's new EOS-M ML Build (7/29/2020)   
    Also, just want to give a disclaimer that I made a mistake earlier when I said that the 33 min video test recording mentioned above was 16 by 9 format. I opened the recording up in MLV and it's actually 21 by 10...however, that's still a fuller look than 2.35 or 2.39, so it's awesome imo to be able to shoot continuously in this mode, as well as 2.7k and 2.8k raw continuous in the cropped aspect ratios.
    Before this latest build, I was only able to get continuous recording in the 1080 rewire and anamorphic pixel-binned modes, where the image needed to be stretched in post and didn't look like true 2.5k.
    My next goal is to do a short film and try to film it entirely in 2.5k or above resolution.
  20. Like
    KnightsFan reacted to Andrew Reid in EOSHD testing finds Canon EOS R5 overheating to be fake   
    EOSHD testing finds Canon EOS R5 overheating to be fake, with artificial timers deployed to lock out video mode. In this test, we will probe my Canon EOS R5’s actual internal temperature in Celsius, as reported by the firmware.
    This week CDA-TEK and I are developing an Android app for the Canon EOS R5, which connects to the camera via the Canon API...
    Please read the rest of the article on the blog carefully before commenting below
  21. Like
    KnightsFan got a reaction from kye in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    I used 4K raw files, and exported them as uncompressed 10 bit 444 HD files and did all my tests in HD, so yes I downscaled. And that's pretty similar to my workflow, where I always work on an HD timeline using the original files with no transcoding.
     
    You can run "ffprobe -i videofile.mp4" and it'll tell you the stream format. There will be a line that goes something like:
    "Duration: 00:04:56.28, start: 0.000000, bitrate: 20264 kb/s
        Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 4096x2160 [SAR 1:1 DAR 256:135], 20006 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)"
    So the example above is 8-bit 4:2:0. A 10-bit 4:2:2 file would show yuv422p10le.
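    A more targeted variant of the same check (a sketch using standard ffprobe options) asks for just the pixel format, so you get a one-line answer:
        ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt \
            -of default=noprint_wrappers=1:nokey=1 videofile.mp4
        # prints e.g. "yuv420p" (8-bit 4:2:0) or "yuv422p10le" (10-bit 4:2:2)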
  22. Like
    KnightsFan reacted to kye in Prores vs h264 vs h265 and IPB vs ALL-I... How good are they actually?   
    Some cameras shoot RAW and Prores, some shoot h264, and a few shoot h265.  There are lots of bitrates on offer too: 50Mbps, 100Mbps, etc.  Some are ALL-I and some are IPB.
    But how good are they?
    I couldn't find any comparisons, so I did some myself.
    What I did was take a few shots of a tree moving in the wind, shot in uncompressed RAW on the BM Micro Cinema Camera, and make a single UHD frame by putting one in each corner, like this:

    Also, they were of different lengths, so I just repeated each one, like this:

    So we have a test clip that was shot RAW (maybe compressing already compressed footage is easier?  I don't know, anyway..), that includes decent movement but isn't some stupid test case that means nothing in real life, that doesn't repeat (because the clips are different lengths), and has some deliberately almost crushed blacks to test the pixelation that h264 and h265 sometimes get in the shadows.
    Then I exported an uncompressed 10-bit 422 YUV file to use as a reference.  After some tests and seeing the file sizes and processing times, I decided to only use the first 12s of the timeline.
    Then I rendered a bunch of clips, either h264 from Resolve, or h264 and h265 from ffmpeg.  I tried rendering h265 from Resolve but had issues - in this test all the maximum bitrates I tried created the same size file, so I abandoned that.  Common wisdom online is that Resolve's h265 export mechanism isn't the best and you should use ffmpeg anyway.
    Then I compared the compressed clips with the uncompressed reference file, which gives a score called SSIM, which goes from 1 (a perfect match) downwards.
    Here are the results so far:

    Here are some observations / thoughts, and some answers to some questions I'd had:
    - In Resolve, H264 seems to top out, as I couldn't get it to export at more than about 400Mbps IPB, but ffmpeg went higher than that quite happily
    - ALL-I h264 doesn't seem to be that different than IPB, at higher bitrates anyway - slightly lower quality and slightly higher file size, but not the 3x I've read around the place
    - Prores isn't that much worse in terms of quality vs compression than h264 or h265, despite being an older codec (although maybe there are versions?  I have no idea how prores works.. maybe that's important for this topic?)
    - Different encoders have different levels of quality, so what's in a given camera is likely to differ from these results
    - I guess the real question is, how much h264 do you have to have to equate to Prores?  The answer seems to be "about the same bitrate, but probably a little less for an ALL-I codec, and a little less bitrate again if it's an IPB".
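    For anyone wanting to repeat this, the ffmpeg calls would look roughly like the following (a sketch with placeholder file names and bitrate; the exact flags behind the results above aren't listed in the post):
        # h264 IPB at a target bitrate (add "-g 1" to force ALL-I, i.e. every frame an I-frame)
        ffmpeg -i reference_uncompressed.mov -c:v libx264 -b:v 100M out_h264_ipb.mp4
        # score the compressed clip against the reference; the ssim filter logs
        # the SSIM value (1.0 = identical) when the run finishes
        ffmpeg -i out_h264_ipb.mp4 -i reference_uncompressed.mov -lavfi ssim -f null -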
  23. Like
    KnightsFan got a reaction from SoFloCineFile in Testing Danne's new EOS-M ML Build (7/29/2020)   
    It looks so much better now, so it was indeed the workflow that was causing issues. There's still some noise in low light, but the mushiness is gone and the color has life in it. Also, I'm not sure if you slowed it down before or had a frame rate mismatch, but that opening shot of the front gate used to have jerky and unnatural movement but looks normal now.
    One thing though is that you've got some overexposure that wasn't present before. You might want to adjust the curves in Resolve, or experiment with the raw settings. The shot of the bust at 0:44 in the original is properly exposed, but the shot at 0:48 of the bust in the new one is blown out. But yeah, it definitely looks like it was captured in Raw now, which the first video did not.
  24. Like
    KnightsFan reacted to SoFloCineFile in Testing Danne's new EOS-M ML Build (7/29/2020)   
    Hello everyone, 
    I just recently picked up an EOS-M after being inspired by Zeke and wanted to share a short film I shot using Danne's new ML build (7/29/2020): https://bitbucket.org/Dannephoto/magic-lantern_jip_hop_git/downloads/crop_rec_4k_mlv_snd_raw_only_2020Jul29.EOSM202.zip
    YouTube link to my short film, "A Venetian Splendor on the Gulf:" 
     
     
    Shooting Conditions:
    Shot hand-held using neck-strap to stabilize
    15-45mm EF-M lens
    23.972 fps
    12-bit, raw 4k anamorphic and 10-bit 2.5k raw clips
    Exported from MLV as h264 12-bit mov, up-scaled to 2160p and graded in LumaFusion
  25. Like
    KnightsFan got a reaction from IronFilm in Audio Recorders?   
    Those new capsules look fairly interesting. It would be neat if Zoom transitioned to a modular recorder system, without compromising quality, durability, and form factor. Being able to just add 4 more XLRs with phantom power, or an ambisonic capsule, is actually pretty cool. They're in a position to make a Zoom F1 sequel that is small enough to be a beltpack recorder, has remote triggering over Bluetooth, but can also turn into a recorder with 4-5 XLR inputs when needed. That would be great for people at my budget/skill level who need a swiss army knife for lots of different uses.
    Since their new capsule can do 4 tracks apparently, it would be nice to see some capsules with, say, two XLRs plus stereo mics, though I don't know how many people would ever use those. I wonder if it's bi-directional? Could they make an output module for live mixing?
    It seems like they've put some effort into improving their OS as well. Touchscreen control is... not ideal, but moving towards a modern UI with more functionality is a good direction. When they say it's app based, I assume there are no 3rd party apps--but it would be REALLY interesting if they could get a Tentacle Sync app.