Everything posted by Jay Turberville

  1. Something worth thinking about with raw from a CFA sensor is that while the demosaiced files may well be 4:4:4, the data contained within is not that good. Every four pixels gets only one red, one blue and two green samples, so for green screen work the real data density is pretty similar to 4:2:2. (I say "similar" because the green in your screen isn't an exact match for the green filter on the sensor pixels - also, I think the green filters on CFA sensors tend to be fairly broad spectrum and more green/yellow, though that may vary from camera to camera.) If you use a blue screen, your mattes might suffer, since you only get one blue pixel for every four image pixels.

     So anyway, I'm not surprised at all that the GH4 matte stands up well against the BMCC - given that the results are viewed at a resolution much lower than the native GH4 resolution. The GH4 records 2048 green, 1024 red and 1024 blue samples per line at C4K. By comparison, the BMCC records only 1216 green, 608 red and 608 blue samples per line in 2.5K raw (quick arithmetic sketch below). The GH4 has a clear advantage in total color samples - almost twice as many - and at 1080p it records at least one sample of each of the three primary colors per output pixel, better even than 4:4:4. Where the GH4 suffers is in bit depth and compression. Given a locked-down camera with a green screen, the h.264 compression is probably really efficient, so it mostly comes down to the GH4's bit depth inferiority. So while the GH4 doing better isn't a numeric slam dunk, it shouldn't be a surprise either.
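
For what it's worth, the per-line counts above are just Bayer bookkeeping. A trivial Python sketch - nothing camera-specific beyond the mosaic widths, which come straight from the post:

```python
# Per line of a Bayer (RGGB) mosaic, averaged over a row pair, half of the
# photosites are green and red and blue each get a quarter.
def bayer_samples_per_line(width):
    return {"G": width // 2, "R": width // 4, "B": width // 4}

print("GH4 C4K (4096 px wide): ", bayer_samples_per_line(4096))
# -> {'G': 2048, 'R': 1024, 'B': 1024}
print("BMCC 2.5K (2432 px wide):", bayer_samples_per_line(2432))
# -> {'G': 1216, 'R': 608, 'B': 608}
```
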
  2. Yes, that's exactly the way I was using the term "sensor." My point is/was that 4/3s and m4/3s cameras are different. The SSWF on a 4/3s or m4/3s camera is a very thin membrane or piece of glass that sits above the sensor. There's an air gap between the sensor/filter sandwich (aka "sensor") and the SSWF, with no physical support between them. This is what allows that surface to vibrate effectively at the high frequencies that will pop dust off of the surface. Furthermore, I think the actual mounting of the SSWF isn't particularly robust either. So I suggest looking into the manufacturer-recommended methods of cleaning (or not) before proceeding as you would with a typical DSLR sensor. It isn't the same.
  3. Keep in mind that you can't clean the sensor on a GH4. You can only clean the SSWF that sits in front of the sensor. I believe that this is a very thin item and I recommend being very careful with it. You might want to investigate further and get more detailed info. I've had 4/3s and m4/3s cameras for quite a few years now and have never tried cleaning the SSWF. As a practical matter, debris that the SSWF can't dislodge has always been insignificant at practical apertures (f/11 and wider, IMO).
  4. Part of the reason is that you have to establish some standard threshold for noise in the darker tones, and this threshold is somewhat subjective. I did a lot of DR testing using Imatest, and the software gave multiple results for different noise thresholds; it is up to the user to decide which threshold is appropriate (toy illustration below). This is one reason why you should be VERY careful about drawing conclusions about DR when the results come from different sources. One of the big values of DxO testing is that the testing schemes are very consistent. But as mentioned previously, those results can only be considered strong hints about what to expect when shooting video from the same sensor. Not only are there variations in the processing of the raw data to consider, but the fact that video uses an electronic shutter may have an influence (more noise) as well.
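
To make the threshold point concrete, here's a toy model (my own sketch, not Imatest's method): a simulated 20-stop wedge over a fixed noise floor, where the measured DR depends entirely on the SNR cutoff you choose. The noise figure and cutoffs are made-up illustration values:

```python
import numpy as np

# Toy model: 20 patches, each one stop darker, over a constant noise floor.
signal = 2.0 ** -np.arange(20)     # patch means, full scale = 1.0
noise_floor = 0.001                # assumed constant read noise

for snr_cutoff in (10, 4, 1):      # "usable shadow" thresholds differ by taste
    usable = (signal / noise_floor) >= snr_cutoff
    print(f"SNR >= {snr_cutoff:2d}: {usable.sum()} stops of DR")
# Same data, three different DR numbers - which is the point.
```
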
  5. Right - unless the previous firmware was significantly faulty. S-Log shouldn't affect DR results either. What can affect results is noise reduction, because it lowers the apparent noise floor in the darker tones. But that is misleading, since it reduces resolution/detail as well.
  6. The lugs for the strap rattle far more noticeably than the lens. As for the original usability question, I just mounted it on the GH4 and: 1) No AFC, AFS only. 2) AFS is not very fast; the lens hunts back and forth and finally settles in. For the most part I'd consider this to be a manual focus lens on the GH4.
  7. OK, I'm confused. First the camera is too crisp, then the camera forces you to shoot wide open, resulting in images that show all the flaws of the fast lenses that you are using. In other words, the image is too flawed/fuzzy. So which is it? Too crisp, or too fuzzy? From where I sit it seems like you have more of an issue with lens selection than anything else.

     People in the stills community have been complaining for years about 4/3s sensors offering less shallow DoF than full frame sensors. Bottom line: 4/3s simply has a smaller range of achievable DoF (less true now with affordable focal reducers). And if you want to maximize that range and still have sharp images, then you need to spend big money on excellent fast glass. I assure you that my 50mm f/2.0 Zuiko Digital Macro is plenty sharp wide open at f/2.0 (decent bokeh quality as well).

     What I find surprising is the number of posts I see in the film community complaining about the DoF of the GHx series cameras. Why is this? I ask because here's another "bottom line": the 35mm still camera "full frame" film format happens to be pretty near optimal for achieving the maximum overall range of DoF compared to any other imaging format. It's right around the sweet spot of compromises. And that's compared to the smaller Academy Standard frame as well - which has been the mainstay of filmmaking for many decades. That standard is closer in size to 4/3 than to 35mm "full frame." So if anything, the extra-large "full frame" sensors should be less "cinematic", not more.

     When I look at specs for cinema primes, the standard aperture seems to cluster around f/1.8 or so (a T-stop, actually). Zooms seem to range between f/2.0 and f/2.8. Now I'm no expert, but I'm betting the vast majority of footage was probably not shot wide open. Given the availability of Speed Boosters, the fact that m4/3s sensors use the best (center) section of 35mm "full frame" lenses, the fact that lenses designed for m4/3s (good ones cluster around f/2.8, with a few here and there in the f/1.8 - f/2.0 ballpark) generally perform very well wide open, and the fact that the format sizes are fairly similar - I don't see what the fuss is all about (rough equivalence arithmetic below).

     When I step back and try to take in the landscape, it seems to me that what we have today is some kind of new cinematic sensibility taking hold, where shallow DoF has somehow come to reign supreme. And this has caused the real significance of the differences in DoF between a GHx camera and a traditional 35mm cinema camera to become exaggerated. It is absolutely true that GHx cameras don't provide as much flexibility with DoF as a full frame DSLR. There is no debating that. And if getting the shallowest DoF is a priority for you, then that's a big factor. But goodness, folks, let's not get carried away with the degree to which that matters in general cinematic shooting. The formats used by the GHx cameras are actually a very nice compromise and fairly similar to traditional cinema formats. If that weren't the case, then the GHx series probably would not have been so successful that Panasonic has continued to evolve it more and more toward professional videography and filmmaking.
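
The format comparison reduces to the standard crop-factor approximation for DoF-equivalent apertures. A minimal sketch, assuming the usual ~2.0 crop for m4/3s and ~1.5 for Super 35 (both approximate):

```python
# Equivalent-aperture arithmetic: multiply the f-number by the crop factor
# to get the full-frame aperture with roughly the same DoF.
def ff_equivalent(f_number, crop):
    return f_number * crop

for fmt, crop in (("m4/3s", 2.0), ("Super 35", 1.5)):
    eq = ff_equivalent(1.8, crop)
    print(f"f/1.8 on {fmt:8s} ~ f/{eq:.1f} on full frame, DoF-wise")
# The gap between m4/3s and traditional Super 35 cinema frames is well
# under a stop smaller than the gap between m4/3s and full frame.
```
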
  8. Yes, the rattling is from the IS element - though on my lens it is a very dull and hard to notice rattle, and when the camera is on and IS is enabled there is no rattling. For a GH4, the lens is rather large. This is partly because it is moderately fast for this range, but I'd bet mostly because it was designed for the back focus of a 4/3s camera. I suspect that a made-for-m4/3s lens of similar focal and aperture range would be significantly smaller.

     Optically the lens has an excellent reputation. I like mine a lot, and I've read other videographers who hold this lens in moderately high regard. I bought the DMC-L1 with the lens just for the lens, back when you couldn't buy the lens by itself - that's how strong the lens's reputation was back then. Anybody want a DMC-L1 DSLR camera body? :^)

     Oh, and a front lens scratch probably won't have any real-world effect on image quality unless it is a very large and deep gouge; it is mostly a cosmetic issue. I might have tested the lens at multiple apertures and focal lengths (wide and stopped down especially) to assure myself of that - and then I might have complained and argued for a reduced price to keep the lens. Maybe I'll play around with the AF on the GH4 and report back later.
  9. No, you can trade resolution for levels of tonal gradation. The simple proof of this is in every B&W halftone image you've ever seen printed. And actually, that's true for every B&W photo as well. A B&W negative is fundamentally not continuous tone; it is a fine distribution of opaque silver clumps and clear emulsion. The appearance of tonal gradation comes from the size and distribution density of the film grain. The same is true for prints, except that there is an opaque base underneath the emulsion.

     Saying that "all that's going to happen is a low pass filter and any additional luma variance will come from noise ..." is, I think, missing the point. The low pass filter is a key factor in converting an image with fewer tonal levels into one with more tonal levels. That's part of the process by which a truly two-level (black ink & white paper) printed image comes to be perceived as a "continuous" tone image. As you reduce the image magnification, the detail in the halftoning dots becomes too small for the eye to distinguish separately - your eye's acuity acts as a low pass filter - and the result is that you see a smooth blending of tones rather than just black and white specks. (There's a small sketch of this below.)

     Now, whether a downsampled GH4 4K image delivers a 10 bit luminance channel or not is a tougher question to answer, and frankly, I don't have the math and image sampling theory skills to really tackle it well. But my bet is that if you low pass filter the image correctly you will see a real improvement in the luminance bit depth. I'd also bet that it doesn't make it all the way to 10 bit. I think it will fall short because the 4K image doesn't really have 4K of luminance information to begin with. It really only has about 3K (3072x1728) of luminance detail available for downsampling. If you downsampled to 720p, then I think you'd probably get your 10 bit luminance.

     Oh, and for those who might try to test this, please be sure to apply some kind of additional low pass filtering (Gaussian blur or something similar) prior to downsampling. Most downsampling algorithms are geared toward producing (artificially) sharp-looking images and don't use enough low pass filtering to produce an image with per-pixel detail similar to the original; they tend to let a pretty significant amount of aliasing (low frequency artifacts) through. I'd experiment with pre-blurring the image by somewhere between 0.5 and 1.0 pixels. (We went through this with the staff at DPReview.com years ago in regard to downsampling and dynamic range. They kept incorrectly claiming that downsampling didn't improve dynamic range when in fact it does - if you downsample correctly. They didn't low pass appropriately, and image noise that should have been reduced with a low pass filter was passed through as aliased noise.)
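
A minimal sketch of the halftone argument, assuming NumPy/SciPy. It builds a strictly two-level dithered ramp, low passes it, and decimates; the sizes and sigma are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# A strictly two-level "halftone": a smooth ramp thresholded against dither.
ramp = np.tile(np.linspace(0.0, 1.0, 2048), (2048, 1))
halftone = (ramp > rng.random(ramp.shape)).astype(np.float64)  # 0.0 or 1.0 only

# Low pass first, then decimate 16x in each direction.
small = gaussian_filter(halftone, sigma=8)[::16, ::16]

print("tonal levels before:", np.unique(halftone).size)  # 2
print("tonal levels after: ", np.unique(small).size)     # many gray levels
```
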
  10. That might be logical if you are going to display the image at the same final per-pixel magnification and view it from the same distance - in other words, if you were going to display it at twice the width and height of the 1080p image. But we know that's not how videos/movies are typically viewed. So no, it really isn't logical. The practical reality is that an image with a higher pixel count can tolerate more compression, because each pixel and the resulting artifacts from noise and compression are visually smaller.

      If we want to get into pixel peeping, we need to consider the question of the information density coming from the sensor. When shooting 4K on the GH4, we are getting one image pixel for each sensor pixel. But the GH4, of course, uses a CFA sensor, and in the real world of CFA image processing, such images are not 1:1 pixel sharp. This is due to the nature of the color filter array as well as the low pass filter in front of the sensor that reduces color moire. A good rule of thumb is that the luminance resolution is generally about 80% of what the pixel count implies; the color resolution is even less. Years ago I demonstrated online via a blind test that an 8MP image from a 4/3s camera (E-500 I think) only has about 5MP of luminance info. DPReview forum participants could not reliably distinguish between the out-of-camera 8MP image and the image that I downsampled to 5MP and then upsampled back to 8MP - they got it right about 50% of the time. (A sketch of that round trip is below.)

      So what's the point? The point is that before you can even (mis)apply your logic about scaling bandwidth with image resolution, you must first be sure that the two images are similarly information dense. What is the real information density of the A7S image before it is encoded to 1080p video? I think of CFA images as being like cotton candy - not information dense at all. About 35% of the storage space used is not necessary from an information storage standpoint (though it may be useful from a simplicity of engineering and marketing standpoint). And even with this, there's surely plenty I'm not considering. For instance, if the A7S uses line skipping (I have no idea what it really uses), that will introduce aliasing artifacts. How do the aliasing artifacts affect the codec's ability to compress efficiently?

      The bottom line is that all too often camera users apply simplistic math to photographic questions that are actually more complex. There are often subtle issues that they fail to consider (like viewing distance, viewing angles, how a moving image is perceived differently than a still image, and more). Personally, as a guy who sometimes leans toward the school of thought that "if a little bit of something is good, then too much is probably just right," I kinda wish that the GH4 did offer 4K recording at 200Mbps. Why not, given that the camera can record at 200Mbps? But the reality is that I can't really see much in the way of codec artifacts in the 4K footage I've shot with the GH4 so far. So 100Mbps is probably just fine. 50Mbps on the A7S is probably pretty good as well.
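
The round-trip blind test described above is easy to reproduce. Here's a sketch using Pillow, with placeholder filenames:

```python
from PIL import Image

orig = Image.open("test_8mp.jpg")            # placeholder input image
w, h = orig.size

scale = (5 / 8) ** 0.5                       # 8 MP -> ~5 MP: area ratio -> linear ratio
down = orig.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
roundtrip = down.resize((w, h), Image.LANCZOS)
roundtrip.save("test_roundtrip.jpg", quality=95)
# If viewers can't reliably tell orig from roundtrip, the extra pixels
# carried little additional luminance information.
```
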
  11. You must set the camera frequency to 24Hz and then restart the camera. It will then output a 24Hz HDMI signal. I just did this yesterday. My Vizio TV would not play the signal; it simply reported an unsupported signal. Our larger Samsung plasma screen played the video back fine and also showed "1080p 24Hz" on the display for a few moments once communications were established.
  12. My second edit system is a six-core AMD with a fairly low-end GPU - I think it has something like 130 CUDA cores. That machine plays the clip fine in Adobe Premiere 5.0. But I've got Premiere "hacked" (not much of a hack really) to use the CUDA cores on the non-approved card, and this seems to be enough to let the clip be played back OK.

      Keep in mind that when playing 4K 24p files you are dealing with a number of possibilities. 1) If your display is less than 4K, then a very large image must be re-scaled 24 times a second. CUDA cores are great for that, but if your player doesn't use them, they may as well not be there. A multi-threaded player might be fast enough to do the real-time scaling - but maybe not. Also, some kind of negotiation needs to happen so that the 24fps frames get sent properly to a screen that may be running at 60Hz, 72Hz or greater. 2) If your display can show full 4K, then it needs a LOT of bandwidth in order to push all that image data. Remember that this data has to be pushed around uncompressed (back-of-envelope numbers below).

      All that said, the OP did say that he re-scaled to 1080p, so it's anybody's guess what he is really seeing. But given what I see in the clip he posted, it must be some kind of fault/difference in his playback system. One good thing about threads like this is that they get me to test things I might not otherwise test. The original file only plays from the camera when the camera is set to 24Hz output. Our smaller HD TV won't deal with 24Hz output from the GH4, but our larger one will. The file looks perfectly fine when viewed up close via 24Hz HDMI on our 58" Samsung plasma TV. Interestingly, I was just watching a documentary, "Searching for Sugar Man," and was (once again) finding the judder in slow pans annoying.
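
For a sense of scale, my own back-of-envelope arithmetic, assuming 8-bit RGB (24 bits per pixel) to the display:

```python
# Uncompressed bandwidth needed to push 4K frames to a display (illustrative).
w, h, bits_per_px = 3840, 2160, 24        # 8-bit RGB assumed
for hz in (24, 60):
    gbps = w * h * bits_per_px * hz / 1e9
    print(f"{w}x{h} @ {hz} Hz: {gbps:.1f} Gbit/s uncompressed")
# -> ~4.8 Gbit/s at 24 Hz, ~11.9 Gbit/s at 60 Hz
```
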
  13. No need to justify your preference for 24 fps. Many agree with you. I only bring it up because I'm particularly bothered by judder, and it isn't bothering me in your clip. I just don't see it. I am bothered by the "shaky cam" a bit, but I'm not seeing anything like the "judder" we often see in 24 fps pans. I'll play your clip on a larger monitor this weekend and see if that makes a difference in what I perceive. What I do notice with the GH4 at 30 or 24 fps is that the rolling shutter effect is more pronounced - which I also don't like.
  14. There isn't much panning in your shot and hence not much there to cause juddering. When I play that clip on a Premiere timeline, it seems just fine to me - and I happen to be one of those guys who dislikes judder and has been critical of the 24fps look that so many filmmakers seem so happy with. I find it distracting. My guess is that your display was dropping frames. What did you use to play back the video?
  15. In fact, it is more than just an 8MP moving image. In 4K mode, the effective sensor size is reduced (the "crop factor" changes). That changes the FoV, and perhaps for him the usability/desirability of a 14mm lens. BTW, some wide-angle adapters that were designed for use on higher-end compact digitals work quite well on some micro 4/3s lenses. My Nikon WC-E68 adapter works very nicely on my Olympus 14-42mm MSC II lens, which I modified to accept 52mm filters. With the adapter it becomes a 9.5mm - 28.5mm lens, and it works vignette-free through the entire zoom range. Image quality seems to remain high - though I've not done any formal testing. This is a welcome change when shooting 4K.
  16. I've got to say that letting the product out with this kind of flaw is unimpressive to me. Users shouldn't be fiddling with bending pins or forcing cables to be inserted to begin with. Mind you, I've been a big fan of micro 4/3s and the Panasonic line from the beginning. I like Panasonic. But this is kind of a bad goof IMO.
  17. And they stayed down? Mine are spring-loaded and pop right back up. Or are you saying that you bent them backwards a bit so that when the micro-HDMI cable is inserted they get pushed down?
  18. Yes, I'm having this problem and it seems to be exactly as described. There are two spring-loaded pins on the flat side of the camera's micro-HDMI socket. Their faces are flat, so they block the cable as it is inserted. And yes, I'm very concerned about pressing any harder than I already have. If the faces of these pins were ramped, the cable would probably push them down; if the lower lip of the cable were ramped, it might push the pins down. But neither is. I did try to modify a cable by filing a ramp, but the metal is too thin - it is only thick enough to provide some support for the plastic that surrounds the cable contacts. Filing makes it so thin that the HDMI port pins deform the metal and plastic of the cable. Still jammed.

      The temporary solution I came up with was to cut a strip of thin plastic out of some packing material I have. The strip is wide enough to cover both pins. I insert it into the socket on the pin side of the connector. Then, when I try to insert the HDMI cable, the pins get pushed down by the strip of plastic. Once I'm pretty sure that the cable is just slightly past the pins, I pull out the plastic. If I don't do this before the cable is fully inserted, the whole combo will be jammed. It's necessary to find the sweet spot where you can continue to push the cable in while pulling the piece of plastic out.

      Frankly, this whole business is BS. I love the GH4, but there was NO NEED to use such a rinky-dink connector for HDMI output. I suspect that whoever designed this HDMI port got a bit overzealous with the pin design in an attempt to ensure that the micro-HDMI cable wouldn't come loose too easily. I think they should have just stayed with mini-HDMI instead. I'm simply not going to force the cable in as someone else described. I'll try to look at the pins with a loupe and see if I can bend them, or, if they seem too robust, check whether I can maybe file their fronts to a ramped profile (not likely given their small size).
  19. BTW, you can trade resolution for bit depth. I did this experiment some time ago, converting a 2-bit black and white image of very high spatial resolution (10,000 pixels wide, I think) to a much lower resolution 8-bit full-tone image. You can get a true continuous, full-tone 8-bit image that way. Remember, folks, that black and white film has no gray tones. It simply has different densities of opaque silver embedded in a largely transparent film base. If you've ever used a grain focuser in a darkroom, you are well aware of the true "black" and "white" nature of film.

      How this applies to the GH4 and 4K to 2K conversions remains to be seen. After all, the GH4 4K image is created from a CFA (Bayer mask) array that only has one red, one blue and two green pixels for every quad of pixels on the sensor. So each 4K image pixel is already borrowing/interpolating from neighboring pixels. Plus, there's an AA filter that softens the image before it hits the sensor, and the luminance needs to be interpolated from the four colored pixels. A native, full-resolution image from such a sensor isn't exactly information dense like you might get from a 3CCD video camera. But when converting from a 4K to a 2K image, you have two green, one red and one blue color sample for each pixel (see the sketch below). So if the encoding of the 4K image doesn't do too much damage, the resulting 2K image should be of fairly high quality. Of course, information is lost, and we shouldn't expect a converted 4K from an AVC/h.264 codec to equal what could be captured directly via either HDMI or SDI without such a compression stage.

      I'd strongly suggest that anybody shooting 4K with the intent of downrezzing to 2K seriously consider using as little sharpening as possible, since sharpening actually removes information, introduces artifacts, accentuates noise, and generally makes it more difficult for the compression codec to do a good job. It will be interesting to see which workflows turn out to be the best. For me, my main interest is in getting a good quality 1080p image for the green screen work that we sometimes do. I'm pretty hopeful that the GH4 will do well with that. I'll be doing some testing soon.

      The image on the right is the same exact 2-bit image you see in the middle. It is just reduced for display by a factor of 25.
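
Here's a sketch of that quad accounting, under the idealized assumption that you can work directly from the RGGB mosaic (the GH4's recorded files don't actually expose the mosaic - this just shows where the samples come from):

```python
import numpy as np

def quads_to_halfres(mosaic):
    """mosaic: (2H, 2W) Bayer RGGB array -> (H, W, 3) RGB, one quad per pixel."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    # Every output pixel gets one real R, one real B, and two G samples.
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

fake_c4k = np.random.rand(2160, 4096).astype(np.float32)  # stand-in for raw data
print(quads_to_halfres(fake_c4k).shape)                    # (1080, 2048, 3)
```
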
  20. If you run the 4K image through a suitable low pass filter (blur it a bit - something like a 0.6-pixel Gaussian blur should be about right) before downsampling to 2K, you will reduce the noise in the image and enhance the dynamic range (which is limited by the noise you can tolerate in the shadows). I can't say whether there will be a benefit in color bit depth and color grading, but I'm pretty sure about the noise issue; I've verified this in the past with still images from digital cameras, and the principles are the same. When you downsample an image without first using a low pass filter, a lot of the image noise gets passed through as aliasing rather than getting reduced by pixel averaging as it should be. Most downsizing algorithms place an emphasis on preserving perceived sharpness and will introduce aliasing in images where it wasn't there before downsizing. (Quick demonstration below.)
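
A quick way to see the effect, as a sketch: synthesize a dark, noisy frame, then compare a plain half-size resize against one with the mild pre-blur suggested above. The sigma and noise figures are arbitrary illustration values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(2)
frame = 0.1 + 0.02 * rng.standard_normal((2160, 3840))  # dark "4K" frame + noise

naive = zoom(frame, 0.5, order=1)                        # straight downsample
prefiltered = zoom(gaussian_filter(frame, sigma=0.6), 0.5, order=1)

print("shadow noise, naive:      ", round(float(naive.std()), 5))
print("shadow noise, pre-blurred:", round(float(prefiltered.std()), 5))
# The pre-blurred version shows less noise; without the pre-blur, some noise
# passes through the resize as aliasing instead of being averaged away.
```
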