
jcs

Members
  • Posts: 1,839
  • Joined
  • Last visited

Reputation Activity

  1. Like
    jcs got a reaction from EthanAlexander in Best 50mm-ish lens to pair with Sigma 18-35mm   
    $23,100,000, on sale, limited stock: film your own moon landing movie, just add sand and front projection.
    https://www.premiumbeat.com/blog/10-incredible-camera-lenses/
  2. Like
    jcs got a reaction from Emanuel in Best 50mm-ish lens to pair with Sigma 18-35mm   
    https://en.m.wikipedia.org/wiki/Carl_Zeiss_Planar_50mm_f/0.7
  3. Like
    jcs got a reaction from EthanAlexander in Best 50mm-ish lens to pair with Sigma 18-35mm   
    https://en.m.wikipedia.org/wiki/Carl_Zeiss_Planar_50mm_f/0.7
  4. Like
    jcs got a reaction from EthanAlexander in Big Fancy Cameras, Professional Work, and "Industry Standard"   
    Rent or borrow an Alexa, shoot something great, problem solved? You'll include camera and gear rental costs in higher end gigs.
  5. Like
    jcs reacted to HockeyFan12 in Big Fancy Cameras, Professional Work, and "Industry Standard"   
    I know of one or two DPs who own expensive kits like that. Reds or Alexas. It worked for them, they got work from bundling the camera in with the deal, and they gradually paid the camera off. But the majority of people I know who are working consistently and supporting themselves well don't own any camera except something like a t2i for personal use. Partially because different cameras are better for different shoots, mostly because they're getting hired for their ability and not their gear. Yes, if you're being hired mostly for bundling a cheap rental then that cheap rental will open doors to you... but only with bad shitty clients. So yeah, it will open doors for sure... imo, the wrong ones. Most commercial sets cost $250k/day. Lower end shoots still cost five figures a day. Is saving a few hundred dollars on a camera rental really that important to anyone but the most miserly client? Is the most miserly client the one you want?
    There is a middle ground of C300 and FS7 ops who work as wet hires for lower rates ($600-$800/day wet hire, maybe a lot more but that seems to be the agreed upon low end) and seem to do REALLY well because they get tons of work for mostly documentary style stuff, tv and web. Usually they can pay off their small camera ($20k investment rather than $200k investment) in the first six months while still making money and after that it's just gravy. Talent helps there but all you need to be able to do is operate competently and reliably. But when it comes to Alexas and Epics... I rarely see owner/ops unless they own their own production company or are independently wealthy or just crazy ambitious. The cost of the crew to support those cameras is thousands of dollars a day, anyway, so most cheap professional clients don't want the hassle. A lot of student films do, though, and if you're in a city with a lot of film schools you can do okay just with that since you can recruit a free crew of film students and still ask a decent rate for yourself.
    In my experience the most important thing is who you know. You want to know people who are looking for DPs. You also have to be able to do the job reliably. That's about it.
    I've witnessed a number of DP hiring decisions and it's usually just who's easiest to work with. What camera someone owns almost never matters at all. Having a good reel of course is very helpful.
    Edit: for narrative specifically I can see having a higher end camera being a strong selling point. For breaking into indie films (where rates are low but passion is high) having a good camera could be a significant factor.
  6. Like
    jcs got a reaction from ntblowz in Big Fancy Cameras, Professional Work, and "Industry Standard"   
    Rent or borrow an Alexa, shoot something great, problem solved? You'll include camera and gear rental costs in higher end gigs.
  7. Like
    jcs got a reaction from jonpais in Big Fancy Cameras, Professional Work, and "Industry Standard"   
    Rent or borrow an Alexa, shoot something great, problem solved? You'll include camera and gear rental costs in higher end gigs.
  8. Like
    jcs got a reaction from Cinegain in Big Fancy Cameras, Professional Work, and "Industry Standard"   
    Rent or borrow an Alexa, shoot something great, problem solved? You'll include camera and gear rental costs in higher end gigs.
  9. Like
    jcs reacted to freeman in GH4/GH5 users: i'm going crazy here   
    Thanks for the digging, JCS. Yes, Direct Focus Area is what is being turned on automatically when I mount any adapted lens with manual focus. I guess Panasonic are right in assuming that when mounting manual focus glass you will want the Direct Focus Area tool working.
     
    My desire was to eliminate everything from my composition screen. Unfortunately, it looks like I just can't get rid of this manual focus box (at least when using manual-focus-only lenses). When I mount my Olympus 12mm, which can utilize AF, some options change. I can now select which kind of AF mode I want, and I found that selecting the 225-point AF area does indeed remove the box (the center cross still stays, but... OK). However, the second I unmount the lens, the little Direct Focus Area box comes back. At this point I have literally tried everything. I guess I could still be missing something, but I'm fairly certain that when using an MF lens, this box pops up automatically with no option to hide it. 
    Silver lining: in my intense digging I discovered, as Jonpais said, that the four directions on the rear wheel can be configured as 4 additional custom buttons! (For anyone wondering, pressing DISP in the "Fn Button Set" menu brings up options to configure the wheel as a 4-way D-pad-like button, and on the GH5 the new thumb nub can itself be configured as a 4-way.) I find that pretty damn cool even though I'm running out of needs to configure haha.
  10. Like
    jcs got a reaction from freeman in GH4/GH5 users: i'm going crazy here   
    The box stays up in MF mode (using native lens).
    Searching for Direct Focus Area found this: https://***URL removed***/forums/thread/3677576
    See last post here; might help: http://www.dvxuser.com/V6/showthread.php?325388-Turning-off-focus-pinpoint
  11. Like
    jcs got a reaction from meanwhile in Which Sound Recorder to buy? A guide to various indie priced sound recorders in 2017   
    I rank the Sound Devices MixPre 3/6 above the Zoom F4/F8 for pure sound quality: smoother, fuller, more natural sounding, more analog-like, and of course the amazing analog limiters. I base this on owning a Zoom F4 and a Sound Devices USBPre 2 (which has the same audio topology as the 744T, meaning it sounds as good as the higher-end SD recorders) as well as the YouTube/SoundCloud comparison videos:
    Couldn't find any comparisons of Sound Devices to Zaxcom, Nagra, AETA or other high-end recorders. It seems sound quality doesn't improve after Sound Devices, only features (channels etc.), power system, and size? (that's what I got from a quick peek at Gearslutz.com).
  12. Like
    jcs got a reaction from kaylee in NETFLIX: Which 4K Cameras Can You Use to Shoot Original Content? (missing F5! WTH?!?)   
    Does that remind you of something after taking something? 
    It's called a zone plate test chart, available here: http://www.bealecorner.org/red/test-patterns/, ZoneHardHigh.png. Resized to 12% with Nearest Neighbor resampling.
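    For anyone who wants to reproduce the effect, here is a rough sketch in Python/NumPy: generate a standard sin(r^2) zone plate (the chart size and frequency constant are arbitrary choices, and the pattern is not necessarily identical to ZoneHardHigh.png), then downscale it with nearest-neighbor resampling so the unresolvable rings fold back as moiré:
    import numpy as np
    from PIL import Image

    # Standard zone plate: intensity = 0.5 + 0.5*sin(k * r^2). Spatial frequency
    # rises with radius, so every frequency up to (and past) Nyquist appears somewhere.
    N = 2048                                   # chart size in pixels (arbitrary)
    y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
    k = 0.4 * np.pi * N                        # max frequency constant (arbitrary)
    chart = (127.5 + 127.5 * np.sin(k * (x**2 + y**2))).astype(np.uint8)

    # Nearest-neighbor resize to 12%, as described above: no low-pass filtering,
    # so detail above the new Nyquist limit folds back as visible moiré rings.
    small = Image.fromarray(chart).resize((int(N * 0.12),) * 2, Image.NEAREST)
    small.save("zone_plate_12pct_nearest.png")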
  13. Like
    jcs got a reaction from ssrdd in Light L16 - A Camera Breakthrough!   
    It's similar to VR: lots of hype, but until certain elements are massively improved, these technologies won't be desired by the mainstream. There will be casualties along the way, though neither will completely disappear. At some point in the future, when VR gets 3D without glasses and computational cameras produce quality HDR 3D video (from high-quality 3D depth data), they'll get married and everyone will be happy.
  14. Like
    jcs reacted to Emanuel in Camera resolution myths debunked   
    Sample fixed now -- original is 9388 x 7019:

  15. Like
    jcs got a reaction from EthanAlexander in Camera resolution myths debunked   
  16. Like
    jcs got a reaction from EthanAlexander in Camera resolution myths debunked   
  17. Like
    jcs got a reaction from EthanAlexander in Camera resolution myths debunked   
  18. Like
    jcs got a reaction from sam in Camera resolution myths debunked   
    Haha thanks for the laugh! You should try balloon juice, it has electrolytes!
    I think I know how Luke Wilson's character felt  
  19. Like
    jcs got a reaction from jonpais in "Leica’s new TL2 is a much improved mirrorless camera"   
    What? That's the ND for a nuke photo.
  20. Like
    jcs reacted to DBounce in My Thoughts Canon 1DXMK2 vs Panasonic Lumix GH5   
    Well... The speed booster is yet another piece of gear I have... And I have concluded that if I'm gonna use it on the GH5, any size/weight advantage is pretty much null and void.
    Nice to know I'm not alone... But not really. I'm sort of looking for the silver bullet here. 
    Yeah, I'm starting to think I just need to duplicate my Canon collection, with a m43 spin. So the above mentioned, plus the 35-100mm f2.8... keep the Nocticron... It's a special lens. Get the SLR Magic compact... Add some diopters. Maybe add the 100-400mm F4, and call it a day.
    That's it then... GH5 done... I think.
  21. Like
    jcs got a reaction from EthanAlexander in Camera resolution myths debunked   
    Have you noticed how @HockeyFan12 has disagreed with me politely in this thread, and we've gone back and forth in a friendly manner as we work through differences in ideas and perceptions, for the benefit of the community?
    Is the reason you resorted to ad hominem that you don't have a background in mathematics, computer graphics, simulations, artificial intelligence, biology, genetics, and machine learning? That's where I'm coming from with these predictions: https://www.linkedin.com/in/jcschultz/. What's your LinkedIn, or do you have a bio page somewhere, so I can better understand your point of view? I'm not sure how these concepts can be described concisely from a chemistry point of view alone, which appears to be where you are coming from. Do you have a link to the results of the research you mentioned? It's OK if you don't have a background in these fields; I'll do my best to explain these concepts in a general way.
    I used the simplest equation I am familiar with, Z^2 + C, to illustrate how generative mathematics can create incredibly complex, organic-looking structures. That's because nature is based on similar principles. There's an even simpler principle, based on recursive ratios: the Golden Ratio: https://en.wikipedia.org/wiki/Golden_ratio. Using this concept we can create beautiful and elegant shapes and patterns, and these patterns show up all over the place, from architecture and aesthetic design to every living system in nature:


     
    I did leave the door open for a valid counterargument, which you didn't use, so I'll play that counterargument myself; it might ultimately help bridge the skepticism that generative systems will someday (soon) provide massive gains in information compression, including the ability to capture images and video in a way that is essentially resolution-less, where the output can be rendered at any desired resolution (this already exists in various forms, which I'll show below) and even any desired frame rate.
    Years ago there was massive interest in fractal compression. The challenge was, and still is, how to efficiently find features and structure which can be coded into the generative system such that the original can be accurately reconstructed. RAW images are a top-down capture of an image: brute-force uncompressed pixel values in a Bayer array format. It's massively inefficient, and was originally used to offload de-Bayering and other processing from the camera to desktop computers for stills. That was and still is a good strategy for stills; for video, however, it's wasteful and expensive because of storage requirements. That's why most ARRI footage is captured in ProRes vs. ARRIRAW. 10- or 12-bit log-encoded, DCT-compressed footage is visually lossless for almost all uses (the exception being VFX/green-/blue-screen work, where every little bit helps with compositing). Still photography could use a major boost in efficiency by using H.265 algorithms along with log encoding (10 or more bits); there is a proposed JPEG replacement based on H.265 I-frames.
    DCT compression is also a top-down method which more efficiently captures the original information. An image is broken into macroblocks in the spatial domain, which are further broken down into constituent spectral elements in the frequency domain. The DCT process computes the contribution coefficients for all available frequencies (see the link; how this works is pretty cool). Then the coefficients are quantized (bits are thrown away), the remaining integer coefficients below a threshold are zeroed and discarded, and the rest are further compressed with arithmetic coding. DCT compression is the foundation of almost all modern, commercially used still and video formats. Red uses wavelet compression for RAW, which is suitable for light levels of compression; at higher compression levels DCT becomes the better choice, and it is massively more efficient for interframe motion compression (IPB vs. ALL-I).
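    To make the quantization step concrete, here's a toy sketch of a single 8x8 block going through DCT, quantization, and reconstruction (the flat quantization step of 16 is a placeholder; real codecs use perceptually tuned tables and then entropy-code the surviving coefficients):
    import numpy as np
    from scipy.fftpack import dct, idct

    # One 8x8 block of luma samples (a smooth gradient standing in for image data)
    block = np.outer(np.arange(8), np.arange(8)).astype(float) * 4.0

    # Forward 2D DCT-II: spatial pixels -> frequency-domain coefficients
    coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')

    # Quantize: divide by a step size and round; this is where bits are thrown away.
    Q = 16.0
    quantized = np.round(coeffs / Q)
    print("nonzero coefficients:", np.count_nonzero(quantized), "of 64")

    # Decoder side: dequantize and inverse-DCT to get the (lossy) block back
    recon = idct(idct((quantized * Q).T, norm='ortho').T, norm='ortho')
    print("max reconstruction error:", np.abs(recon - block).max())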
    Which leads us to motion compression. All modern interframe motion compression used commercially is based on the DCT and macroblock transforms with motion vectors and various forms of predictions and transforms between keyframes (I-frames), which are compressed in the same way as a JPEG still. See h.264 and h.265. This is where things start to get interesting. We're basically taking textured rectangles and moving them around to massively reduce the data rate.
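    The "textured rectangles with motion vectors" idea, reduced to a sketch: exhaustive block matching for one macroblock between two frames (real encoders use much smarter search strategies and then code only the residual):
    import numpy as np

    def find_motion_vector(prev, curr, bx, by, size=16, search=8):
        # Exhaustive block match: find where the macroblock at (bx, by) in `curr`
        # came from in `prev`, within +/- `search` pixels, using the SAD metric.
        block = curr[by:by+size, bx:bx+size]
        best, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + size > prev.shape[0] or x + size > prev.shape[1]:
                    continue
                sad = np.abs(prev[y:y+size, x:x+size] - block).sum()
                if sad < best:
                    best, best_mv = sad, (dx, dy)
        return best_mv, best          # motion vector plus residual energy

    # Synthetic test: frame 2 is frame 1 shifted right by 3 pixels,
    # so the block's best match sits 3 pixels to the left: expect mv (-3, 0).
    frame1 = np.random.rand(64, 64)
    frame2 = np.roll(frame1, 3, axis=1)
    print(find_motion_vector(frame1, frame2, 16, 16))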
    Which leads us to 3D computer graphics. 3D computer graphics is a generative system. We've modeled the scene with geometry: points, triangles, quads, quadratic & cubic curved surfaces, and texture maps, bump maps, specular and light and shadow maps (there are many more). Once accurately modeled, we can generate an image from any point of view, with no additional data requirements, 100% computation alone. Now we can make the system interactive, in real-time with sufficient hardware, e.g. video games.
    Which leads us to simulations. In 2017 we are very close to photo-realistic rendering of human beings, including skin and hair: http://www.screenage.com.au/real-or-fake/

    Given the rapid advances in GPU computing, it won't be long before this quality is possible in real-time. This includes computing the physics for hair, muscle, skin, fluids, air, and all motion and collisions. This is where virtual reality is heading. This is also why physicists and philosophers are now pondering whether our reality is actually a simulation! Quoting Elon Musk: https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix
    A reality simulator is the ultimate generative system. Whatever our reality is, it is a generative, emergent system. And again, when you study how DNA and DNA replication works to create living beings, you'll see what is possible with highly efficient compression by nature itself.
    How does all this translate into video compression progress in 2017? Now that we understand what is possible, we need to find ways to convert pixel sequences (video) into features, via feature extraction. Using artificial intelligence, including machine learning, is a valid way to help humans figure out these systems. Current machine learning systems work by searching an N-dimensional state space and finding local minima (solutions). In 3D this would look like a bumpy surface where the answer(s) are deep indentations (like poking a rubber sheet). Systems are 'solved' when the input-output mapping generalizes, meaning good answers are produced for new input the system has never seen before. This is really very basic artificial intelligence; there's much more to be discovered. The general idea, looking back at 3D simulations, is to extract features (resolution-less vectors and curves) and generalized multi-spectral textures (which can be recreated using generative algorithms), so that video can be massively compressed, then played back by rendering the sequence at any desired resolution and any desired frame rate!
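    The "bumpy rubber sheet" picture, in code: a minimal gradient-descent sketch on a made-up 2D loss surface with several local minima (the surface, starting point, and learning rate are arbitrary, purely for illustration):
    import numpy as np

    # A bumpy "rubber sheet": a shallow paraboloid with sinusoidal dents,
    # so there are several local minima besides the deepest one.
    def loss(w):
        return 0.1 * np.dot(w, w) + np.sin(3 * w[0]) * np.cos(3 * w[1])

    def grad(w, eps=1e-5):
        # Numerical gradient; real ML frameworks use backpropagation instead.
        g = np.zeros_like(w)
        for i in range(len(w)):
            d = np.zeros_like(w)
            d[i] = eps
            g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
        return g

    w = np.array([2.5, -1.5])             # starting point in the state space
    for step in range(200):
        w -= 0.05 * grad(w)               # roll downhill with a fixed learning rate
    print("settled at", w, "loss", loss(w))   # a local minimum, not necessarily the global one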
    I can even tell you how this can be implemented using the concepts from this discussion. Once we have photo-realistic virtual reality and more advanced artificial intelligence, a camera of the future can analyze a real world scene, then reconstruct said scene in virtual reality using the VR database. For playback, the scene will look just like the original, can be viewed at any desired resolution, and even cooler, can be viewed in stereoscopic 3D, and since it's a simulation, can be viewed from any angle, and even physically interacted with!
    It's already possible to generate realistic synthetic images from machine learning using just a text description! https://github.com/paarthneekhara/text-to-image.
    https://github.com/phillipi/pix2pix
    http://fastml.com/deep-nets-generating-stuff/ (DMT simulations )
    http://nightmare.mit.edu/
    AI creating motion from static images, early results:
    https://www.theverge.com/2016/9/12/12886698/machine-learning-video-image-prediction-mit
    https://motherboard.vice.com/en_us/article/d7ykzy/researchers-taught-a-machine-how-to-generate-the-next-frames-in-a-video
  22. Like
    jcs got a reaction from EthanAlexander in Camera resolution myths debunked   
    Balloon juice? Do you mean debunking folks hoaxing UFOs with balloons, or something political? If the latter, do your own research and be prepared to accept what may at first appear to be unacceptable; ask yourself why you are rejecting it when all the facts show otherwise. You will be truly free when you accept the truth, more so when you start thinking about how to help repair the damage that has been done and help heal the world.
    Regarding generative compression and what will someday be possible: have you ever studied DNA? Would you agree that it's the most efficient mechanism of information storage ever discovered in the history of man? Human DNA can be completely stored in around 1.5 gigabytes, small enough to fit on a thumb drive (6×10^9 base pairs/diploid genome x 1 byte/4 base pairs = 1.5×10^9 bytes, or 1.5 GB). Through generative decompression, that 1.5 GB of information accurately reconstructs roughly 150 zettabytes (1.5 GB x 100 trillion cells = 150 trillion GB, or 150×10^12 x 10^9 bytes = 150×10^21 bytes). These are ballpark estimates, however the compression ratio is mind-boggling. DNA isn't just encoding an image, or a movie; it encodes a living, organic being. More info here.
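    The back-of-the-envelope arithmetic above, spelled out (same rough assumptions as in the post: 2 bits per base pair, ~100 trillion cells):
    # Ballpark DNA storage arithmetic from the post (all figures are rough estimates)
    base_pairs = 6e9                            # base pairs per diploid human genome
    bytes_per_genome = base_pairs / 4           # 2 bits per base pair -> 4 base pairs per byte
    print(bytes_per_genome / 1e9, "GB")         # ~1.5 GB

    cells = 100e12                              # ~100 trillion cells per body (rough)
    total_bytes = bytes_per_genome * cells
    print(total_bytes / 1e21, "ZB")             # ~150 zettabytes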
    Using machine learning, which is based on the neural networks of our brains (functioning similarly to N-dimensional gradient-descent optimization methods), it will someday be possible to get far greater compression ratios than the state of the art today. Sounds unbelievable? Have you studied fractals? What do you think we could generate from this simple equation:
    Z(n+1) = Z(n)^2 + C, where Z and C are complex numbers; or, written another way, Znext = Znow*Znow + C. How about this:

    From a simple multiply and add, with one variable and one constant, we can generate the Mandelbrot set. If your mind is not blown by this single image from that simple equation, it gets better: it can be iterated and animated to create video:
    And of course, 3D (Mandelbulb 3D is free):
    Today we are just learning to create more useful generative systems using machine learning, for example efficient compression of stills and, in the future, video. We can see how much information is encoded in Z^2 + C, and in nature with 1.5 GB of DNA data encoding a complete human being (150 zettabytes), so somewhere in between we'll have very efficient still image and video compression. Progress will be made as our understanding evolves, likely through advanced artificial intelligence, allowing us to apply these forms of compression and reconstruction to specific patterns (stills and moving images) and, at the limit, to a complete understanding of DNA encoding for known lifeforms, and beyond!
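    For completeness, the Z^2 + C iteration above really is the entire "decompressor". A minimal sketch that renders the Mandelbrot set at any requested resolution (the escape radius, iteration count, and plot window are the usual arbitrary choices):
    import numpy as np
    from PIL import Image

    def mandelbrot(width=800, height=600, max_iter=100):
        # Iterate Z = Z^2 + C for a grid of C values and record the escape time.
        re = np.linspace(-2.5, 1.0, width)
        im = np.linspace(-1.2, 1.2, height)
        C = re[np.newaxis, :] + 1j * im[:, np.newaxis]
        Z = np.zeros_like(C)
        escape = np.zeros(C.shape, dtype=np.uint16)
        for n in range(max_iter):
            active = np.abs(Z) <= 2.0                 # points that haven't escaped yet
            Z[active] = Z[active] ** 2 + C[active]    # the whole "compressed" image: Z^2 + C
            escape[active] = n
        return (255.0 * escape / max_iter).astype(np.uint8)

    # The same few lines regenerate the image at any resolution you ask for.
    Image.fromarray(mandelbrot(1920, 1080)).save("mandelbrot.png")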
  23. Like
    jcs got a reaction from TheRenaissanceMan in NETFLIX: Which 4K Cameras Can You Use to Shoot Original Content? (missing F5! WTH?!?)   
    I remember the Kuro from 2008; it was very nice (I ended up getting one of these, and am still using it as a live monitor for the C300 II in the studio): https://www.amazon.com/Sony-Bravia-KDL-52XBR5-52-Inch-1080p/dp/B000WDW6G6. I think the OLEDs have finally caught up with (and passed) the top plasmas, but yeah, it did take a while!
  24. Like
    jcs got a reaction from mat33 in NETFLIX: Which 4K Cameras Can You Use to Shoot Original Content? (missing F5! WTH?!?)   
    Imagine a meeting with Netflix executives, marketing, and lawyers, along with reps from ARRI, Red, Sony, and Panasonic, regarding the new 4K subscriptions and 4K content. Red, Sony, and Panny say "we have cameras that actually shoot 4K" and ARRI says "but but...". Netflix exec replies, we're selling 4K, not the obviously superior image quality that ARRI offers, sorry ARRI, when you produce an actual 4K camera like Red, Sony, and Panasonic, let's talk. Netflix marketing and lawyers in the background nod to exec. A while later ARRI releases the Alexa 65 with a 6.6K sensor and it's accepted (actually 3 Alev III sensors rotated 90 degrees and placed together, AKA the A3X sensor).
    Nyquist requires > 2x sampling to capture without aliasing, e.g. sample >4K to get 2K and >8K to get 4K (along with the appropriate OLPF). 4K of pixels can hold at most 2K line pairs: black pixel, white pixel, and so on. ARRI doesn't oversample anywhere near 2x and they alias because of it. That's the math & science. From Geoff Boyle's recent chart tests, the only cameras that showed little or no aliasing for 4K were the "8K" Sony F65 and the 7K Red (Vimeo only showed 1080p, however, so small amounts of aliasing might be hidden when rendering 4K to 1080p). We can see from inspection of current cameras shooting those lovely test charts that as sampling resolution approaches Nyquist, aliasing goes down, and as we go the other way, aliasing goes up. As predicted by the math & science, right?
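    A 1D illustration of that Nyquist point: sample a pattern finer than the output Nyquist limit directly, versus capturing it at 2x with a crude low-pass step standing in for the OLPF and then downsampling. A rough sketch, not a camera model; the 0.6 cycles/pixel frequency and the box-filter "OLPF" are arbitrary choices:
    import numpy as np

    n_out = 1000
    freq = 0.6              # cycles per output pixel: finer than the 0.5 Nyquist limit

    # Case 1: sample the fine pattern directly at the output resolution.
    # It folds back to |1 - 0.6| = 0.4 cycles/pixel: a coarser, false pattern (moiré).
    direct = np.sin(2 * np.pi * freq * np.arange(n_out))

    # Case 2: capture at 2x, low-pass with a crude box filter (standing in for
    # the OLPF), then decimate. The unresolvable detail is attenuated instead
    # of turning into a strong false pattern.
    over = np.sin(2 * np.pi * freq * np.arange(2 * n_out) / 2.0)
    lowpassed = np.convolve(over, np.ones(4) / 4, mode='same')
    downsampled = lowpassed[::2]

    # Peak spectral magnitude of the false detail that ends up in each output
    print("false-detail peak, direct sampling:      ", np.abs(np.fft.rfft(direct)).max())
    print("false-detail peak, 2x + OLPF + downsample:", np.abs(np.fft.rfft(downsampled)).max())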
    Since the 5D3 had eliminated aliasing, I was surprised to see aliasing from Canon when I purchased the C300 II (especially given the massive relative price difference)! The C300 II has a 4206x2340 sensor, which is barely oversampled, and uses a fairly strong OLPF, producing somewhat soft 4K. The 1DX II's 1080p produces fatter aliasing due to its lower resolution, but even the C300 II's 4K is still challenged by fine fabrics. Shooting slightly out of focus could work in a pinch (and bumping up sharpening in post); however, it will be cool when cameras have sufficient sensor resolution to eliminate aliasing.
    Given how powerful and low-energy mobile GPUs are today, there's no reason in 2017 for cameras to have aliasing, other than a planned, slow upgrade path as cameras gradually approach Nyquist. Once there, what's the resolution upgrade path? Can't have that before there are 8K displays  
  25. Like
    jcs reacted to mat33 in NETFLIX: Which 4K Cameras Can You Use to Shoot Original Content? (missing F5! WTH?!?)   
    No disrespect to the GH5 wonder cam, but the fact that the GH5 will likely be a Netflix-approved camera and the Alexa isn't makes a bit of a mockery of the whole thing. Maybe the Netflix execs had a good sales pitch from Sony/Red that they thought was a proper comparison, which Steve Yedlin alludes to in his ASC interview. I'm not sure I buy the whole future-proofing argument either; going by current trends, any good film will have had at least one remake by the time an Alexa-shot film looks bad. 