
jcs


Posts posted by jcs

  1. 1 hour ago, Kisaha said:

    @IronFilm what lab and int mic?

    I am still trying to decide what is going to be my next int mic after my Oktava. Audix/Neumann MT 185, or sell a kidney and buy the MKH50 (it is 1600€ here. Do the conversion and you will be surprised! More expensive than Schoeps)

    The Audix is a good deal. Some say it's 80% of the Schoeps CMC641. I use both; they are pretty similar. See reviews here https://www.bhphotovideo.com/c/product/242661-REG/Audix_SCX1_HC_SCX1_HC_Microphone.html?sts=pi

    Might still be worth it if more expensive in your country (or purchase overseas and have shipped).

    Why get the MKH50 over the CMC641 if it costs more?

  2. 4 hours ago, Thpriest said:

    I'm increasingly needing large cloud file storage (1TB or more) and fast upload/download/transfer speeds. I have a Mega account as a regular client has it and uses it to share large amounts of files with me but I find it too slow (I have 200mb fibre optic cable which I have tested and is perfect).

    Wetransfer and Myairbridge used to be ok for free sites but I am now sharing 100gb or more every week and often I need speed!

    I don't mind having a monthly subscription or whatever but I want to invest in the best.

    What do you use? Any recommendations?

    I got around 980Mbps after hours at WeWork in Hollywood, uploading to Vimeo. If you want max speed and cost is no object, perhaps consider dedicated hardware near your physical location (or even at your location; you'll need a decent firewall for security). Then you'll be able to get real SpeedTest.net speeds.

  3.  

    @IronFilm (new thread for new topic)

    Canon example I recently shot: 1DX II and custom picture style, SOOC, no post work (1080p):

    [screenshot attachment]

    Mystery camera:

    [screenshot attachment]

    Quick CC to set pure white to white, and a quick try at making the skin look less video-y (also note the green/yellow cast in the skin now that white is closer to white):

    [screenshot attachment]

    Reminds me of issues I had with the Sony A7S I and FS700 (not so much with the A7S II, which I found to work pretty well if WB is set correctly and the lights are decent).

    Low CRI/TLCI lights can also hurt overall color, and especially skintones.


  4. 1 hour ago, IronFilm said:

    So is a smaller camera without IBIS?

     

    Thus in conclusion: a Sony (but not in s-log), or maybe a Panasonic?

    Can't really estimate camera size, since a small camera could be mounted on a heavier rig. I thought the GH5 had decent IBIS, however I suppose with a long lens it could start to look shakier.

    Guess is Sony, though I've seen Panasonic look like that too, especially with challenging light sources (thin skintones).

  5. 18 minutes ago, IronFilm said:

    Why not RED? Or Canon.

     

    Because the skintones & DR are not good?

     

    Yeah, they seem to be having trouble getting whites white and skintones looking natural at the same time. It's the green/magenta knife-edge challenge (it effectively requires different WB for different exposure levels in the frame; a lot more work in post). That's what challenged me with the FS700. I didn't pixel peep, but I think one reason Red can appear filmic is inherent noise; this footage appeared perhaps too clean. The latest Red color science can produce skin tones I think are pretty good. It didn't really look like Canon either: the skintones, and again it was perhaps too clean, and Canon lenses typically don't look like that. While I've used higher-end lenses on Red (a friend's gear), I only use Canon glass on Canon cameras, mostly Sony on Sony (to get some form of AF), and only Panasonic on Panasonic (I sold the Voigtlander 25mm F0.95, which is pretty awesome; I don't use the GH4 much anymore). The lens(es) used in your example didn't look like Canon or Cooke etc. (more like Sony or Panasonic). I've gotten 'thin skintones' out of the GH4 with lower-quality lights, however the footage seemed too clean for the GH4, and more Sony-like.

  6. Computational cameras come up every now and then: 

    We can compute depth data from multiple cameras, from depth sensors (iPhone X), or both. Computing depth from multiple cameras is computationally expensive (though probably not a big deal for modern phone GPUs and just two cameras), and the iPhone X (IR hardware depth sensor, from the same company that built the Kinect for Xbox) doing real-time background subtraction / 'segmentation' / background replacement without a green screen is pretty cool. IR depth sensors have had trouble in sunlight in the past; curious to see how much they have improved with the iPhone X sensor.

    Once you have clean, high-quality depth data, a small sensor camera can then be used to simulate pretty much whatever you want in software, and with modern GPUs, many effects will be possible in real-time, including with video! When the depth data is made available for NLEs (someday in the future for sure), we'll be able to set focus and lens simulations in post.
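    To make the depth idea concrete, here's a minimal toy sketch of depth-thresholded background replacement (my own illustration, not Apple's actual pipeline; the array shapes and the 1.5 m threshold are assumptions):

```python
# Toy sketch: background replacement from an aligned per-pixel depth map.
# Keep pixels closer than a threshold; fill the rest from a replacement plate.
import numpy as np

def replace_background(rgb, depth, background, threshold_m=1.5):
    """rgb, background: HxWx3 uint8; depth: HxW float32 metres."""
    mask = depth < threshold_m          # foreground = near pixels
    out = background.copy()
    out[mask] = rgb[mask]               # composite subject over new plate
    return out

# Toy frame: subject at 1 m in the centre, wall at 3 m everywhere else.
h, w = 4, 4
depth = np.full((h, w), 3.0, dtype=np.float32)
depth[1:3, 1:3] = 1.0
rgb = np.full((h, w, 3), 200, dtype=np.uint8)        # subject colour
background = np.zeros((h, w, 3), dtype=np.uint8)     # replacement plate
result = replace_background(rgb, depth, background)
```

    A real pipeline would feather the mask edge and filter depth noise, but the core compositing step is just this boolean mask.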


  7. 1 hour ago, IronFilm said:

     

    Ha! Those guesses are all over the place.

    But you are right that it wasn't shot on an F3, or any other camera I own. As I had zero influence over camera choice.

    Well, as people talk so much about the inherent color science of one brand or another, I'll give a hint by narrowing it down to just a few brands it might be and see what people might rule out:

    RED

    Sony

    Canon

    Panasonic

    The camera was from one of those four.

     

    Random observation:

    We shot this over one single day. You can very obviously see how the ratio of internal to external light changed over the day by observing the kitchen window on the left.

    Doesn't look like Canon or Red. Low saturation hides skintones a bit, so perhaps a Sony FS5 or similar (stopped down, not shallow DOF). I can get nice skintones and decent DR from the A7S II using Slog2 + SGamut3.cine (the A7R II might be even better re: color). Haven't used or studied the GH5 much, though the handheld shots seemed a bit shakier than the new GH5 should produce? Skin highlights clipped kinda hard, so it appears like Panasonic or a non-Slog2 Sony gamma. Given the video-y look, the magenta-bias WB with sometimes-green skin (color is kinda all over the place), and the sometimes decent DR, my final guess is Sony with a 'clippy' gamma.

    You could start the story with the black-eyed children first being noticed on the tram. Then the horror happens on top of the mountain. You could even shoot the tram part with a phone, etc., if there are tram staff aboard (haven't ridden in a while; don't recall if it's just riders).

    You could use this music, perhaps for a horror music video:

    Using 'opposite' music can be pretty powerful. Also the band name of course ;)  (you'd want to test the song upload to YT first, just to make sure they'll just stick an ad on it vs. block the video).

  9. 8 minutes ago, kaylee said:

    HAAAAA thats too funny i worked at UTC ???

    its called Idyllwild~! prolly just gonna make that the name of the series, it looks pretty in a serifed font and it sounds cool. and theres a Y in it

    its awesome up there!

    360º views

    Are you going to include the tram in your film? Trapped in the tram with a monster could be a scary short.

  10. 1 hour ago, Jonesy Jones said:

     My understanding of what JCS is saying is that all film, even modern, is not as precise as digital. In other words, film is mechanical, not electronic, and therefore there are some extremely subtle variances to the way it records.  These variances could be both temporal and positional. Unlike digital which will be perfect.  I don't think we're talking about massive skipping and stuttering, I think we're talking about nearly imperceptible flaws that make a film feel alive the way digital, video, often does not. 

    When people prefer the motion cadence of one camera to another, the difference would be the sampling interval. Is it precise, right on every frame, or does it jitter? And if it jitters, is it constant or variable, is the magnitude constant or variable, is it uniform or Gaussian random, etc.? It's interesting that with all the tests folks have posted over the years, I don't recall seeing one on motion cadence / temporal and/or spatial jitter. Motion picture film has spatial jitter during playback in the theater, so even digitally acquired material transferred to film would have playback jitter. Via testing one could discover how much jitter, if any, helps make the viewing experience more pleasing.

    I personally am not a fan of large amounts of jitter, however perhaps small amounts might help create the illusion of being more organic. Or maybe the opposite is happening: cameras with little or no jitter are what people prefer when they say they like the motion cadence of a camera. A scientific test with a very accurate strobe could be used to measure cameras. Another way would be to introduce jitter in a completely synthetic test with computer graphics/animation. I do know from video games running at a constant, vblank-synced 60fps that a single frame drop is massively noticeable; it used to be a criterion to fail a game in QA back when there were no online updates available. Similarly, if a game is running with consistently variable frame timing, a single frame drop or slowdown can be barely noticeable.

    Found research papers on temporal jitter for video: https://www.google.com/search?rlz=1C5CHFA_enUS508US508&q=temporal+jitter+video+quality&oq=temporal+jitter+video+quality. Quickly skimming the conclusions, I didn't see a clear idea presented beyond what I mentioned from video game experience: constant temporal jitter is better than perfect timing with occasional jitter/dropped frames.

    Temporal jitter used to remove rendering artifacts:

    24Hz judder can be an issue with panning, perhaps temporal jitter can provide a kind of temporal anti-aliasing, resulting in less apparent judder. This would make sense based on the 'perfect 60fps highly noticeable frame-drop video game' effect. This could be tested by adding jitter to juddering video to see if it helps hide the judder.

    Another factor could be effective motion blur with a digital sensor (may not be precisely related to the camera's shutter setting). This too could be scientifically tested.
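    The video-game observation above can be sketched synthetically. This is my own toy illustration (not from any camera SDK; frame count, jitter amount, and drop rate are assumptions): generate frame timestamps under three cadence models and compare the worst frame-to-frame deviation from the ideal interval.

```python
# Sketch: three cadence models — perfect clock, small constant Gaussian
# jitter, and a perfect clock that occasionally drops a frame.
import random

def timestamps(n, fps=23.976, jitter_ms=0.0, drop_every=0):
    dt = 1000.0 / fps                   # ideal interval in ms
    t, out = 0.0, []
    for i in range(n):
        out.append(t + random.gauss(0.0, jitter_ms))
        # a dropped frame doubles the gap to the next presented frame
        t += dt * 2 if (drop_every and i % drop_every == drop_every - 1) else dt
    return out

def worst_interval_error(ts, fps=23.976):
    dt = 1000.0 / fps
    return max(abs((b - a) - dt) for a, b in zip(ts, ts[1:]))

random.seed(1)
perfect = timestamps(240)                       # ideal cadence
steady_jitter = timestamps(240, jitter_ms=1.0)  # ~1 ms Gaussian scatter
one_drop = timestamps(240, drop_every=120)      # perfect, but drops frames
```

    The dropped frame produces a single ~41.7 ms spike (a whole missing interval), while the steady jitter never deviates more than a few ms — consistent with the observation that constant small jitter reads as smoother than perfect timing with occasional drops.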

  11. 12 minutes ago, Jonesy Jones said:

     Good idea. That's one more thing for me to experiment with. 

     But also begs the question as to why some cameras, take black magic for instance, have such a good feel of motion, when it's  very doubtful that they are playing with temporal values in camera. 

    One more question, should we all be shooting at 24P and distributing at 23.98?

    Maybe those cameras have less temporal jitter? Or the variance is Gaussian random around the average/center, vs. non-random drift-and-snap. It's something that could be measured...

    23.976 for everything except theater in 60Hz countries. Notice how motion looks much better on embedded devices vs. browsers on desktops (e.g. Apple / Fire / Roku / HDTV). So-called hard real-time systems guarantee consistent frame rates. So far I haven't seen desktop browsers come close. A dedicated desktop app using the GPU, synced to the hardware refresh as with a video game, could come close to a hard real-time system.

    Thus it's helpful when studying motion to understand where the temporal jitter is coming from.
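    The two jitter models mentioned above can be sketched like this (my own illustration; the 41.7 ms interval, 0.5 ms sigma, 0.2 ms drift, and snap-every-24-frames figures are all assumed for demonstration):

```python
# Sketch: zero-mean Gaussian scatter around each ideal frame time, vs. a
# clock that drifts steadily late and periodically snaps back on schedule.
import random

def gaussian_jitter(n, dt=41.7, sigma=0.5):
    # each frame lands near its ideal time, scattered symmetrically
    return [i * dt + random.gauss(0.0, sigma) for i in range(n)]

def drift_and_snap(n, dt=41.7, drift=0.2, snap_every=24):
    ts, err = [], 0.0
    for i in range(n):
        ts.append(i * dt + err)
        # accumulate lateness, then snap back to the ideal timeline
        err = 0.0 if (i + 1) % snap_every == 0 else err + drift
    return ts

random.seed(0)
g = gaussian_jitter(48)
d = drift_and_snap(48)
```

    With these numbers the drift-and-snap clock runs up to 4.6 ms late before each snap — a sawtooth error pattern, which would presumably read very differently from symmetric random scatter of similar magnitude.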

    It would appear the primary factor in motion cadence is the clock / sampling interval. If the sampling interval is perfect, e.g. each frame sampled at very close to 1s/23.976, that will have a different perception vs. a system with temporal jitter. It would seem film would have some temporal jitter, whereas a digital system could be a lot more precise. The question is what 'looks better', and if temporal jitter is helpful, how would a digital camera implement it? (Statistical sampling: average interval, variance, magnitude, etc.) Likewise, if temporal jitter is pleasing, why aren't there post tools which specifically address this? (I've seen film effects which 'damage' footage, however nothing so far that subtly creates temporal jitter based on specific camera systems, in the way that e.g. FilmConvert works for color/grain.)

    With a precise motion pattern, cameras could be measured for temporal jitter / motion cadence (it would appear that cameras with genlocks could be jittered live).
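    The measurement side could be sketched as follows (a hypothetical summary, assuming you've already extracted per-frame capture timestamps, e.g. by filming a precise strobe or timecode display):

```python
# Sketch: summarise a camera's cadence from per-frame timestamps (ms) as
# mean interval, jitter standard deviation, and peak deviation.
import statistics

def cadence_stats(timestamps_ms):
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    mean = statistics.fmean(intervals)
    std = statistics.pstdev(intervals)               # jitter magnitude
    peak = max(abs(i - mean) for i in intervals)     # worst single frame
    return mean, std, peak

# Perfectly clocked 23.976 fps material measures as zero jitter:
ideal = [i * (1000.0 / 23.976) for i in range(100)]
mean, std, peak = cadence_stats(ideal)
```

    Comparing std and peak across cameras (or across a genlocked camera with injected jitter) would give the scientific test described above.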

    In the same way cryptocurrency (bitcoin et al.) provides a new means for energy exchange/currency, we need a non-government global identity/trust system. After the 143-million-person Equifax breach, I received a phone call from my own phone number claiming to be from the phone company about a security issue: enter the last 4 of your social, etc. I did nothing and the caller hung up after 30s. I called the phone company and they acknowledged that the phone network can be hacked and is not secure. So if the phone system can be hacked, calling 611 might not route your call to the actual phone company, and thus any information you provide may not be going to the phone company. HTTPS (TLS, typically with AES encryption) is designed to provide a secure connection over the internet, however it's not 100% secure either (it's mostly secure; the likelihood of exploits for most things is low at this point). There's currently nothing like HTTPS for phone calls (or for email that is standardized/widely used).

    It's interesting that most people in this thread so far are using their real names (I'm using my initials, with links in my signature that can be followed to see my identity). I think there is value in real identities (Facebook tried to make this case, though it's understood that it's not a 'kind' system, nor is Google ('Don't be evil' lol)). So it's understood that some people may wish to remain as anonymous as possible when dealing with 'unkind' systems. However, for a forum such as EOSHD and other friendly communities, perhaps it would be cool if there were game-theory-based reasons/rewards for people using their real identities, as well as positive reinforcement for desired community behavior. Maybe a "Verified ID" badge or similar.

    For email between trusted friends/colleagues, folks used to use PGP, however it doesn't appear to be widely used anymore. Seems like a great business opportunity for folks to tackle: a universally trusted crypto ID system that is pervasive and not owned or controlled by any government or corporation, similar to bitcoin for currency.

    Rawlite OLPF: was curious what camera this was for... http://rawlite.com/

    EDIT: 'Edgar' was hacked at the SEC: https://www.bloomberg.com/news/articles/2017-09-21/all-about-edgar-and-why-its-hack-shakes-the-sec-quicktake-q-a . This is a very big deal. The world could really use a new trust system...

    Everyone in this discussion appears to be alive. How do we know if our reality is direct or an abstraction, an illusory layer? One or more of us could be an artificial intelligence. The more anonymous the user, the easier it would be to fool us. My background is computer simulations, artificial intelligence, and cognitive science, which filters my view of reality and projects this background onto what I perceive. That will likely be a different view of reality than anyone else's here.

    This thread was started as a result of Jim Carrey's comments on reality, which led to a discussion of the ego, how it can create illusions, and the idea that if we don't acknowledge that our own point of view may be an illusion (as opposed to knowing anything for certain, including whether we are alive), we might be deceiving ourselves. That's all.

    Observe the other thread where people are arguing about sensor size again. And folks unwilling to use the principle of equivalence to see for themselves another view of reality. What blocks them from even looking?

  15. 1 hour ago, Jonesy Jones said:

    This is exactly what I mean jcs. The stakes are too high to be playing this game of ignorance. 

    Why are the stakes too high?

    How do you know this a game of ignorance?

  16. 1 hour ago, Jonesy Jones said:

    You're missing the point. 'When' you and I die is uncertain. 'That' we will die is certain. Like I said before, the stakes are very high.

    How do you know you're alive? 

    How will you know when you are dead?
