Everything posted by jcs

  1. The Audix is a good deal. Some say 80% of the Schoeps CMC641; I use both and they're pretty similar. See reviews here: https://www.bhphotovideo.com/c/product/242661-REG/Audix_SCX1_HC_SCX1_HC_Microphone.html?sts=pi It might still be worth it even if it's more expensive in your country (or purchase overseas and have it shipped). Why get the MKH50 over the CMC641 if it costs more?
  2. Inspiration on so many levels!
  3. If you set up dynamic DNS you can host the files from your own local server and fully utilize your fast network connection, with no upload time for your content, since the client(s) will pull it directly from your server. Also no monthly cloud fees (a sketch of the update step is below). https://www.howtogeek.com/66438/how-to-easily-access-your-home-network-from-anywhere-with-ddns/
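     The DDNS update itself is just a periodic HTTP call; a minimal Python sketch, assuming a hypothetical provider endpoint (UPDATE_URL and the token are placeholders; DuckDNS, No-IP, etc. each document their own real URL format):

         import urllib.request

         # Hypothetical DDNS provider endpoint; substitute your provider's
         # real update URL and auth token.
         UPDATE_URL = "https://ddns.example.com/update?host=myserver&token=SECRET"

         def current_public_ip() -> str:
             # api.ipify.org returns the caller's public IP as plain text.
             with urllib.request.urlopen("https://api.ipify.org") as resp:
                 return resp.read().decode().strip()

         def update_ddns(ip: str) -> None:
             # Most DDNS providers accept a simple GET carrying the new address.
             with urllib.request.urlopen(UPDATE_URL + "&ip=" + ip) as resp:
                 print("DDNS update response:", resp.read().decode())

         if __name__ == "__main__":
             update_ddns(current_public_ip())

     Run it from cron every few minutes; then any HTTP server (even python -m http.server behind a port forward) serves the files straight off your connection.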
  4. I got around 980Mbps after hours at WeWork in Hollywood to Vimeo. If you want max speed and cost is no object, perhaps consider dedicated hardware near your physical location (or even at your location; you'll need a decent firewall for security). Then you'll be able to get real SpeedTest.net speeds.
  5. @Andrew Reid @jonpais: if possible, the posts about guessing the camera can be moved to the top of the thread here. This post can be deleted.
  6. @IronFilm (new thread for the new topic). Canon example I recently shot: 1DX II and custom picture style, SOOC, no post work (1080p): Mystery camera: Quick CC to set pure white to white, and a quick try at making skin look less video-y (also note the green/yellow cast in skin now that white is closer to white): Reminds me of issues I had with the Sony A7S I and FS700 (not so much with the A7S II, which I found to work pretty well with WB set correctly and decent lights). Low CRI/TLCI lights can also hurt overall color and especially skintones. (A rough sketch of the white-point step is below.)
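     The 'pure white to white' step is just per-channel gain computed from a sampled white patch; a rough numpy sketch (the patch coordinates are whatever region should be pure white in your frame):

         import numpy as np

         def white_patch_balance(img: np.ndarray, patch: tuple) -> np.ndarray:
             """Scale R/G/B so a known-white patch becomes neutral.

             img: float32 HxWx3 in [0, 1]; patch: (y0, y1, x0, x1) bounds of
             a region that should be pure white in the scene.
             """
             y0, y1, x0, x1 = patch
             means = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # patch mean per channel
             gains = means.max() / means    # lift the weaker channels to the strongest
             return np.clip(img * gains, 0.0, 1.0)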
  7. Can't really estimate camera size, as one could mount a small camera on a heavier rig. Thought the GH5 had decent IBIS, though I suppose with a long lens it could start to look shakier. Guess is Sony, though I've seen Panasonic look like that too, especially with challenging light sources (thin skintones).
  8. Yeah, they seem to be having trouble getting whites white and skintones looking natural at the same time. It's the green/magenta knife-edge challenge, which requires effectively different WB for different exposure levels in the frame (a lot more work in post; a sketch of the idea is below). That's what challenged me with the FS700. I didn't pixel peep, but I think one reason Red can appear filmic is inherent noise; this footage appeared perhaps too clean. The latest Red color science can produce skin tones I think are pretty good. It didn't really look like Canon: the skintones, again perhaps too clean, and Canon lenses typically don't look like that. While I've used higher-end lenses on Red (a friend's gear), I only use Canon on Canon cameras, mostly Sony on Sony (to get some form of AF), and only Panasonic on Panasonic (sold the Voigtlander 25mm f/0.95, which is pretty awesome; don't use the GH4 much anymore). The lens(es) used in your example didn't look like Canon or Cooke etc. (more like Sony or Panasonic). I've gotten 'thin skintones' out of the GH4 with lower-quality lights, however the footage seemed too clean for GH4, and more Sony-like.
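     What 'different WB for different exposure levels' could look like in practice: blend two sets of RGB gains by luminance, so shadows and highlights each get their own correction. A rough numpy sketch; the gain values are per-shot eyeball numbers, not anything standard:

         import numpy as np

         def luma_weighted_wb(img, shadow_gains, highlight_gains):
             """Exposure-dependent white balance.

             img: float32 HxWx3 in [0, 1]. shadow_gains / highlight_gains are
             length-3 RGB multipliers, e.g. one killing a green cast in the
             shadows and one killing a magenta cast in the highlights.
             """
             luma = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance
             w = luma[..., None]                              # 0 = shadows, 1 = highlights
             gains = (1 - w) * np.asarray(shadow_gains) + w * np.asarray(highlight_gains)
             return np.clip(img * gains, 0.0, 1.0)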
  9. Computational cameras come up every now and then: we can compute depth data from multiple cameras, from depth sensors (iPhone X), or both. Computing depth from multiple cameras is computationally expensive (though probably not a big deal for modern phone GPUs and just 2 cameras), and the iPhone X (IR hardware depth sensor, from the same company that built the Kinect for Xbox) doing real-time background subtraction / 'segmentation' / background replacement without a green screen is pretty cool. IR depth sensors have had trouble in sunlight in the past; curious to see how much they have improved with the iPhone X IR depth sensor. Once you have clean, high-quality depth data, a small-sensor camera can be used to simulate pretty much whatever you want in software, and with modern GPUs many effects will be possible in real time, including with video! When the depth data is made available to NLEs (someday in the future for sure), we'll be able to set focus and lens simulations in post. (A toy version of the depth key is sketched below.)
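     A toy version of the depth-keyed background replacement, assuming you already have a depth map aligned to the image (the hard part the iPhone X hardware solves):

         import numpy as np

         def replace_background(img, depth, background, cutoff_m=1.5):
             """Composite the subject over a new background via a depth cutoff.

             img, background: float32 HxWx3; depth: float32 HxW in meters,
             aligned to img. Everything nearer than cutoff_m is kept as subject.
             """
             mask = (depth < cutoff_m).astype(np.float32)[..., None]  # 1 = subject
             # Blurring the mask would soften the matte edge; hard key here.
             return mask * img + (1.0 - mask) * background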
  10. Doesn't look like Canon or Red. Low saturation hides skintones a bit, so perhaps a Sony FS5 or similar (stopped down, not shallow DOF). I can get nice skintones and decent DR from the A7S II using Slog2 + SGamut3.cine (the A7R II might be even better re: color). Haven't used or studied the GH5 much, though the handheld shots seemed a bit shakier than the new GH5 should produce? Skin highlights clipped kinda hard, so it appears like Panasonic or a non-Slog2 Sony gamma. Given the video-y look, the magenta-bias WB with sometimes-green skin (color is kinda all over the place), and the sometimes decent DR, final guess is Sony with a 'clippy' gamma.
  11. You could start the story with the black-eyed children first being noticed on the tram, then the horror happens on top of the mountain. You could even shoot the tram part with a phone, etc., if there are tram people on the tram (haven't ridden in a while; don't recall if it's just riders). You could use this music, perhaps for a horror music video: Using 'opposite' music can be pretty powerful. Also the band name, of course (you'd want to test the song upload to YT first, just to make sure they'll only stick an ad on it vs. block the video).
  12. Are you going to include the tram in your film? Trapped in the tram with a monster could be a scary short.
  13. Where people prefer the motion cadence of one camera to another, the difference would be the sampling interval. Is it precise, right on every frame, or does it jitter? If it jitters, is it constant or variable, is the magnitude constant or variable, is it uniform/Gaussian random, etc.? It's interesting that with all the tests folks have posted over the years, I don't recall seeing one on motion cadence / temporal and/or spatial jitter. Motion picture film has spatial jitter during playback in the theater, so even digitally acquired material transferred to film would have playback jitter. Via testing, one could discover how much jitter, if any, helps make the viewing experience more pleasing. I personally am not a fan of large amounts of jitter, however perhaps small amounts might help create the illusion of being more organic. Or maybe the opposite is happening: cameras with little or no jitter are what people prefer when they say they like the motion cadence of a camera. A scientific test with a very accurate strobe could be used to measure cameras. Another way would be to introduce jitter in a completely synthetic test with computer graphics/animation (a sketch of that is below). I do know from video games running at constant, vblank-synced 60fps that a single dropped frame is massively noticeable, and it used to be a criterion to fail a game in QA back when there were no online updates available. Similarly, if a game runs with consistently variable jitter, a single dropped frame or slowdown can be barely noticeable. Found research papers on temporal jitter for video: https://www.google.com/search?rlz=1C5CHFA_enUS508US508&q=temporal+jitter+video+quality&oq=temporal+jitter+video+quality. Quickly skimming the conclusions, I didn't see a clear idea presented beyond what I mentioned from video game experience: constant temporal jitter is better than perfect timing with occasional jitter/dropped frames. Temporal jitter could also be used to remove rendering artifacts: 24Hz judder can be an issue with panning, and perhaps temporal jitter can provide a kind of temporal anti-aliasing, resulting in less apparent judder. That would make sense based on the 'perfect 60fps, highly noticeable frame drop' video game effect, and could be tested by adding jitter to juddering video to see if it helps hide the judder. Another factor could be effective motion blur with a digital sensor (which may not be precisely related to the camera's shutter setting). This too could be scientifically tested.
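     The synthetic test is easy to prototype. A sketch that generates 24p frame timestamps under three clock models (perfect, Gaussian jitter, drift-and-snap); render a moving object sampled at these times, play all three back on a fixed grid, and cadence is the only variable:

         import numpy as np

         FPS = 23.976
         DT = 1.0 / FPS

         def perfect(n):
             # Ideal clock: every frame exactly on the 23.976 grid.
             return np.arange(n) * DT

         def gaussian_jitter(n, sigma=0.1 * DT, seed=0):
             # Each frame lands near its ideal time, +/- Gaussian noise.
             rng = np.random.default_rng(seed)
             return perfect(n) + rng.normal(0.0, sigma, n)

         def drift_and_snap(n, drift=0.02 * DT, period=48):
             # Clock runs slightly slow, then resyncs every `period` frames.
             t = perfect(n) + np.arange(n) * drift
             t -= (np.arange(n) // period) * period * drift
             return t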
  14. We get more views/interaction on IG than YouTube. Perhaps start with <= 1 minute to build a following on IG, then do longer work for YouTube and bring your fans over? You could also do 1 min versions for IG and longer edits for YT (we do that too).
  15. Maybe those cameras have less temporal jitter? Or the variance is Gaussian random around the average/center, vs. non-random drift-and-snap. It's something that could be measured (a sketch below)... 23.976 for everything except theater in 60Hz countries. Notice how motion looks much better on embedded devices (e.g. Apple TV / Fire TV / Roku / HDTV) vs. browsers on desktops. So-called hard real-time systems guarantee consistent frame rates; so far I haven't seen desktop browsers come close. A dedicated desktop app using the GPU synced to the hardware refresh, as with a video game, could come close to a hard real-time system. Thus it's helpful when studying motion to understand where the temporal jitter is coming from.
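     Measuring it is the inverse problem: given per-frame capture timestamps (from a strobe test or embedded metadata), a few simple statistics separate Gaussian jitter from drift-and-snap:

         import numpy as np

         def jitter_stats(timestamps):
             # Summarize frame-interval jitter from capture timestamps (seconds).
             t = np.asarray(timestamps, dtype=np.float64)
             dt = np.diff(t)
             err = dt - dt.mean()
             return {
                 "mean_interval": dt.mean(),
                 "std_interval": dt.std(),             # Gaussian jitter shows up here
                 "max_abs_error": np.abs(err).max(),
                 # Drift-and-snap leaves strong correlation between consecutive
                 # interval errors; pure Gaussian timestamp jitter gives about -0.5.
                 "lag1_autocorr": float(np.corrcoef(err[:-1], err[1:])[0, 1]),
             }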
  16. Jitter also applies to positional variance... Temporal jitter makes it clear, and is something we can measure or create in post with respect to the idea of motion cadence.
  17. It would appear the primary factor in motion cadence would be the clock / sampling interval. If the sampling interval is perfect, e.g. each frame sampled at very close to 1s/23.976, that will have a different perception vs. a system with temporal jitter. It would seem film would have some temporal jitter, whereas a digital system could be a lot more precise. The question is what 'looks better', and if temporal jitter is helpful, how would a digital camera implement it? (Statistical sampling: average interval, variance, magnitude, etc.) Likewise, if temporal jitter is pleasing, why aren't there post tools which specifically address this? I've seen film effects which 'damage' footage, however nothing so far that subtly creates temporal jitter based on specific camera systems, in the same way e.g. FilmConvert works for color/grain (a sketch of how such a tool might resample is below). With a precise motion pattern, cameras could be measured for temporal jitter / motion cadence (and it would appear that cameras with genlock could be jittered live).
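     Such a post tool could work by resampling: draw jittered sample times, then pull the nearest source frame for each output frame. A sketch; sigma_frac is the kind of per-camera parameter a profile would supply, and the values here are made up:

         import numpy as np

         def jitter_remap(num_frames, fps=23.976, sigma_frac=0.15, seed=0):
             # For each output frame, choose which source frame to show,
             # as if the capture clock had Gaussian timing noise.
             dt = 1.0 / fps
             rng = np.random.default_rng(seed)
             ideal = np.arange(num_frames) * dt
             jittered = ideal + rng.normal(0.0, sigma_frac * dt, num_frames)
             idx = np.rint(jittered / dt).astype(int)   # nearest source frame
             return np.clip(idx, 0, num_frames - 1)

     Blending the two nearest frames by the fractional offset, instead of rounding, would also approximate the motion-blur side of it.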
  18. In the same way cryptocurrency (Bitcoin et al.) provides a new means for energy exchange/currency, we need a non-government global identity/trust system. After the 143-million-person Equifax breach, I received a phone call from my own phone number, stating it was from the phone company and there was a security issue: enter the last 4 of your social, etc. I did nothing and the call hung up after 30s. Called the phone company and they acknowledged that the phone network can be hacked and is not secure. So if the phone system can be hacked, calling 611 might not route your call to the actual phone company, and thus any information you provide may not be going to the phone company. HTTPS (TLS, typically with AES encryption) is designed to provide a secure connection over the internet, however it's not 100% secure either (it's mostly secure; the likelihood of exploits for most things is low at this point). There's currently nothing like HTTPS for phone calls, nor for email, that is standardized and widely used. It's interesting that most people in this thread so far are using their real name (I'm using my initials, with links in my signature that can be followed to see my identity). I think there is value in real identities (Facebook tried to make this case, though it's understood that it's not a 'kind' system, nor is Google ('Don't be evil' lol)). So it's understood that some people may wish to remain anonymous (as much as possible) when dealing with 'unkind' systems. However for a forum such as EOSHD and other friendly communities, perhaps it would be cool if there were game-theory-based reasons / rewards for people using their real identities, as well as positive reinforcement for desired community behavior. Maybe a "Verified ID" badge or similar. For email between trusted friends/colleagues, folks used to use PGP, however it doesn't appear to be widely used anymore. Seems like a great business opportunity for folks to tackle: a universally trusted crypto ID system that is pervasive and not owned or controlled by any government or corporation, similar to Bitcoin for currency (the signing primitive already exists; sketch below). Rawlite OLPF: was curious what camera this was for... http://rawlite.com/ EDIT: 'EDGAR' was hacked at the SEC: https://www.bloomberg.com/news/articles/2017-09-21/all-about-edgar-and-why-its-hack-shakes-the-sec-quicktake-q-a . This is a very big deal. The world could really use a new trust system...
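     The cryptographic primitive such a trust system would sit on already exists: public-key signatures, with no government or corporation in the loop. A minimal sketch using Ed25519 from the Python cryptography package; key distribution and attestation (the actual hard part) is not shown:

         from cryptography.exceptions import InvalidSignature
         from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

         # The identity is the keypair; the public key is what others trust.
         private_key = Ed25519PrivateKey.generate()
         public_key = private_key.public_key()

         message = b"I am jcs, posting on EOSHD"
         signature = private_key.sign(message)

         try:
             public_key.verify(signature, message)  # raises if forged or tampered
             print("valid: message really came from the holder of this key")
         except InvalidSignature:
             print("forged!")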
  19. jcs

    Game of Egos

    Everyone in this discussion appears to be alive. How do we know if our reality is direct or an abstraction, an illusory layer? One or more of us could be an artificial intelligence; the more anonymous the user, the easier it would be to fool us. My background is computer simulations, artificial intelligence, and cognitive science, which filters my view of reality and projects this background onto what I perceive, and which will likely be a different view of reality than anyone else's here. This thread was started as a result of Jim Carrey's comments on reality, which led to a discussion of the ego, how it can create illusions, and how, if we don't acknowledge that our own point of view may be an illusion, as opposed to knowing anything for certain, including whether we are alive, we might be deceiving ourselves. That's all. Observe the other thread where people are arguing about sensor size again, and folks unwilling to use the principle of equivalence to see for themselves another view of reality. What blocks them from even looking?
  20. jcs

    Game of Egos

    How can we determine if reality is real, a dream, a simulation, or something else? 'I think, therefore I am' (René Descartes).
  21. jcs

    Game of Egos

    Why are the stakes too high? How do you know this is a game of ignorance?
  22. jcs

    Game of Egos

    How do you know you're alive? How will you know when you are dead?