Everything posted by tupp

  1. I believe that the Novo 4K is a GoPro 4K modified for C-mount lenses. Radiant Images has also modified earlier GoPros to take interchangeable lenses.
  2. I imagine that if you dropped it on the pavement, it would be groundbreaking. Seriously, I think it is a great product, but it seems more like a natural phase of the evolution and miniaturization of balancing/gimbal/camera technology than a major breakthrough. Gimbals certainly are helpful tools, but they can never replace the precision and artistry of a good Steadicam operator. Furthermore, both a gimbal and a Steadicam give a dramatically different look/movement than a camera on, say, a Fisher 10 dolly. Dollies give a more solid and controlled feel.
  3. +1 Nikon F mount can be physically adapted to almost any camera, but keep in mind that the electronics (AF, image stabilization) may not work.
  4. This argument assumes that one is using a full-frame lens along with a typically sized focal reduction element. As David Bowgett mentioned above, one could start with a medium format lens and also use a larger focal reduction element (perhaps something like this). Not sure if such a large focal reduction element would have to be positioned inside the camera, but both pros and "amateurs" have modified Canon HDSLRs to internally accommodate larger optical systems.
  5. You can use a wide angle adapter that fits on the front of the lens, but you won't get a gain in brightness. The cheaper wide angle adapters sometimes exhibit a slight loss in sharpness, but a few brands such as Century Precision Optics (now Century Schneider) have very sharp adapters.
  6. Evidently, this guy made it work. Of course, you could also dual boot.
  7. Not so sure about that. There have been many tests between the two modes, and the results are inconsistent. Some tests show bigger differences than others. Here's one in which there seems to be very little difference. I agree that, generally, the raw looks a little sharper than h264 when both are shot at the same resolution. However, a lot depends on how the footage is handled. Good results can be had with h264 by setting picture style sharpness to "1" while giving a slight sharpness boost in post. Of course, doubling the h264 bit rate and shooting all I-frames with TL/ML additionally cleans up the look.
  8. I think the shutter bug problem only occurs with EF-M lenses, but they are working on it. Ahem... The C-mount adapter and an extension tube were included with my Fujian 35 -- all for $28. Might be possible, as the flange focal distance of the EOSM vs. M4/3 is 18mm vs. 19.25mm, respectively, while the throat diameters are 47mm vs. 38mm, respectively. So, a slightly recessed adapter is possible, especially if the M4/3 lens release lever is on the back/inside face of the adapter.
  9. I just remembered a couple of caveats. To boost the bit rate and/or use all I-frames, the audio has to be disabled -- best to do so in the Canon menu. So, you have to sync the sound the old-fashioned way -- no PluralEyes or other such software. With a fast card and with sound disabled, you should have no trouble getting a stable 2x bit rate with all I-frames. Also, use a flat picture style, such as "Cinestyle" or "Flaat 11," but don't push things too far or you could tangle with FPN or banding in parts of the frame. Again, try setting the sharpness to "1" instead of "0," which requires less of a sharpness boost in post, hence avoiding noise.
  10. International Space Station... with the way the fan is mounted it looks like you're "floating" upside-down.
  11. Never tried the 3x crop on the EOSM (nor on the T3i), but I can't imagine that a pixel peeper would be happy using the RJ focal reducer in crop mode. Lenses only have a finite number of resolving lines within their image circles. The more one crops into the image circle, the more one reduces the number of resolving lines in the frame. Focal reducers squeeze more of a lens' resolving lines into the frame, but at the same time the focal reducer causes a slight loss of sharpness by introducing another piece of glass into the optical chain. If the focal reducer is high quality, this tradeoff is optically "equitable" and no loss of sharpness is noticed between the images with and without the focal reducer. I don't know if the RJ focal reducer will hold up to such a crop, but if you like the look of the Cosmicar 12.5mm-75mm in 3x crop (I love it), it might be okay. Also, the RJ wouldn't bring the 3x crop close to APS-C size. The RJ focal reducer's crop factor is 0.72x, and the crop factor of the EOSM's crop mode is 3x: 0.72 x 3 = 2.16. So, the effective size of the frame in 3x crop mode with the RJ focal reducer would be 2.16 times smaller than (or slightly less than 1/2 the size of) an APS-C sensor (a quick sketch of this arithmetic appears after this list).
  12. Your videos are inspiring and informative. The "Angry Toddlers" video is what induced me to get the EOSM along with the Fujian 35mm -- that's a magical combination! For those unfamiliar with the Fujian 35, it is an inexpensive C-mount lens that has a wonderful "wonkiness" in its plane of focus, and its image circle covers the EOSM's entire APS-C sensor. Using the Fujian with such a large sensor maximizes its focus wonkiness so that it "pops" across the frame. I avoided raw with ML and TL, as the workflow early on seemed to be a little tedious. It was okay to sacrifice a little dynamic range and sharpness for ready-to-use files that are full HD with all I-frame, high bit rate h264. Setting the picture style sharpness to "1" (not 0) and then boosting the sharpness slightly in post gives clean/flexible camera files and plenty of sharpness in the final image.
  13. Regular Magic Lantern has for a while allowed one to boost the h264 bit rate. The main advantage of Tragic Lantern's h264 is the ability to set all I-frames, so that there are no inter-frame artifacts. The all I-frame capability combined with a boosted bit rate eliminates almost all perceptible artifacts in h264 at the EOSM's full HD resolution. By the way, TL also provides this same all I-frame capability on the 600D/T3i (not sure if it does so on the 7D). On the other hand, I believe that regular ML has all I-frame capability in the source code, but it is not "switched on" in the provided builds. I seem to recall reading in the EOSM thread that someone had enabled all I-frames in the ML source, compiled it and used it without any serious problems.
  14. I've got the EOSM and this RJ focal reducer. Essentially, it makes the EOSM a full-frame camera and gives an extra stop of brightness. It goes a little soft (with a slight chromatic aberration) on the edges, but most don't notice it unless they are looking for it. When I received the RJ adaptor, the back mount (EF-M) was loose/wobbly and could not be tightened. Basically, the mount screws were too long and/or the threads were not tapped deep enough into the body of the adaptor. RJ was diligent in corresponding on the problem and sent shorter screws, which eliminated the looseness/wobble. The RJ adaptor mostly stays mounted to the EOSM, as all my lenses except one are old Nikkors and as it keeps dust from getting to the sensor. Sometimes the RJ gets replaced with a tilt-swing adaptor (EF-M-to-Nikkor-F), which is a lot of fun. Almost never used are my EF-M-to-Nikkor-F dummy adaptor and my EF-M-to-EF adaptor. On the RJ adapter, I am considering using tape to fix the "G" aperture adjustment ring to the smallest position, as I don't have any "G" lenses and as you can inadvertently bump it so that it keeps the aperture on "F" lenses from closing without your realizing it. By the way, the EOS M2 can currently be had new for almost the same low price to which the original EOSM sank, but ML is just now starting to explore the M2.
  15. I never tested it. I merely have a little knowledge of how high-end down-conversions (and up-conversions) have worked since the early days of DV. Plus, the math and theory are straightforward and "dumb-simple." I think we basically agree here. As I have maintained throughout this thread, summing in a down-conversion is merely swapping resolution for bit depth -- not increasing color depth. One can sacrifice resolution for greater bit-depth, but one can never increase the color depth of a digital image (not without introducing something artificial). However, I am not sure whether the "color accuracy" can be increased during a down-conversion. A lot depends on what is depicted by those four pixels. One of those four summed pixels might be 199, with another being 202 and the other two pixels being 200 -- hence, 801. Obviously, smoother surfaces/areas (such as the example you gave) can cause minute "color accuracy" discrepancies. The main instance in which such minute discrepancies become apparent is banding effects on smooth areas. Banding has been discussed in this thread, and, nevertheless, the color depth of the original image is maintained in a down-conversion when the pixels are summed -- even with banding. No, there is definitely a benefit in increasing the bit-depth during a down-conversion. It is important to understand that there is a world of difference between color depth and what you call "color accuracy." If you do not both sum and increase the bit depth in a down-conversion, you throw away valuable color depth -- even if the "color accuracy" remains "8-bit" on some smooth sections of the image. Such a sacrifice in color depth will be apparent in the more complex, "cluttered" sections of the image that have zillions of complex transitions between color tones. If you don't sum the pixels and don't increase the bit-depth, you may or may not have occasional banding (8-bit accuracy), but you will certainly have reduced color depth (apparent in the more complex areas of the image). If you do sum and do increase the bit-depth, you likewise may or may not have occasional banding, but you will have nonetheless maintained the color depth of the original image (no reduced color depth). The increased bit-depth is not artificial -- it is merely sacrificing resolution for bit-depth to maintain color depth (a sketch of this summing appears after this list). Most people don't realize that resolution is a major factor in color depth, and that fact is usually the misunderstood point in the aforementioned down-conversions. In fact, you could have a system with a bit depth of "1," and, with enough resolution, have the same degree of color depth as a 12-bit, 444 system. Actually, there exist countless images that have absolutely no bit-depth, yet every one of those images has color depth equal to or greater than that of 12-bit, 444 images. By the way, the mathematical relationship between color depth, resolution and bit depth is very simple in digital RGB imaging systems: COLOR DEPTH = (RESOLUTION X BIT DEPTH)^3
  16. I'll have to take your word that those programs average (without rounding) when they downscale. However, summing is more accurate, and increased bit-depth is implicit with summing. That's fine, but more importantly, does Resolve yield greater bit-depth in the final down-scaled file, without rounding the average?
  17. A down-scale from UHD to full HD is a 4:1 ratio. So, regardless of the resolution per color channel, about 3/4 of the information per color channel would be thrown away when down-scaling from UHD to full HD without summing or averaging the adjacent pixels in the original digital image (and without increasing the bit-depth of the final image -- assuming the final chroma sub-sample is identical to the original).
  18. I did not make any claims regarding the scaling method used by any programs. By "simple down-scaling," I mean reducing the resolution of the image without any summing or averaging of the adjacent pixels in the original image and without increasing the bit-depth in the final image. Such a simple conversion throws away the information of the unused pixels from the original image. It is irrelevant that Resolve generally operates at 32-bit depth with a YRGB color space, if it doesn't increase the bit-depth when down-scaling. Again, to downscale a digital image and retain all of the color depth of the original image, the adjacent pixels in the original image must somehow be summed or averaged and the bit-depth must be increased in the final image. If your program/transcoder is not doing both of those things, then you are losing color depth information.
  19. Works on most any digital imaging file. All it does is retain the color depth of the original, higher-res file by swapping resolution for bit-depth. By the way, contrary to some of the more recent comments, the method discussed in this thread is very different from simply scaling down an image. A simple down-scaling throws away information and does not retain the original color depth of the image.
  20. Then perhaps it would be best for you to stop being dramatic. All I have done is link references, state facts, ask questions and suggest folks be careful when dealing with Lenovo machines.
  21. You are incorrect on both "stands." No exaggeration on my part -- I simply linked articles that report facts, and I also linked a press release directly from Lenovo and a warning from the US government. Furthermore, I have never been a Lenovo customer nor a product owner. I have no clue what you are trying to say. Please just say what you mean. What? What does my age have to do with the fact that Lenovo has repeatedly snuck persistent malware into the BIOSes of its machines, even after lying about it multiple times? I am no expert on laptops that are good for editing, and the power of such machines is constantly progressing. However, with a brief web search it shouldn't be too difficult to find a comparison article on current units that fit the bill. No doubt, most of the non-Lenovos won't have insidious malware in the BIOS. Again, no clue as to what you mean here. Please say what you mean. Good for you! You've been warned about Lenovo machines with links citing facts about their practices. You're on your own now. This Lenovo fiasco is rather recent. Unless you are less than eight years old, it wasn't "pre-existent" for you. Same to you, bud. Perhaps you might eventually experience the real-life consequences of not heeding multiple security warnings.
  22. In light of Lenovo's practices, avoiding its products is motivated more by wisdom than fear. Perhaps a more accurate and comprehensible analogy would be that of the choice between going down a dark alley or a well lit street. Would you choose to traverse a dark alley in which you know creeps lurk at night, or would you choose the bright street in which you can see everything? Likewise, do you choose a laptop manufacturer who keeps sneaking creepy, tenacious malware/crapware deep into the BIOS and who repeatedly lies about it, or do you choose an honest manufacturer who is just trying to create a good product?
  23. In addition to the items you already mentioned (sound, power distro, etc.), take wide angle photos/videos from each corner of any shooting room. Scout/plan staging areas for equipment, hair/makeup, craft services, wardrobe, actor privacy/dressing rooms, etc. Take light meter readings, and bring a compass to determine which direction windows/doors/openings face. Gauge the cooperativeness of the property owner/caretaker. If you plan on sending light through a window, note the height of the window from ground level outside.
  24. Looks like Lenovo was messing with the Thinkpads after all. What was that again regarding "bitches" and "misinformation?" People, please be careful when choosing Lenovo machines.
  25. Well, that NHK rack was one of the first HEVC encoders in the world, and it was encoding 8K as well, without FPGAs. So, of course, it's not going to be efficient and miniaturized. I have no idea if NHK has continued developing their system to make it smaller, but the size of a first attempt is not the point. I don't know... someone who really wanted to shoot 8K in 2013 (who had some funding) would probably not be hindered by the size of that first NHK encoder. Keep in mind, when the 4K Dalsa Origin first appeared in 2003, it had a huge body that was tethered to a desktop computer and a RAID array. Likewise, the Quadruplex SD video recorders that were used widely in the late 1950s and early 1960s were humongous, yet TV productions still managed to shoot. Me, too, but currently I am not very keen on dealing with anything past 4K.
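
Referenced from post 11 above -- a minimal sketch of the crop-factor arithmetic in Python. The variable names are illustrative assumptions (not from any camera software); the numbers are the ones given in the post.

```python
# Stacking the EOSM's 3x crop mode with the RJ 0.72x focal reducer
# (illustrative only; values taken from post 11).
focal_reducer = 0.72   # RJ focal reducer crop factor
crop_mode = 3.0        # EOSM 3x crop mode, relative to its APS-C sensor

effective_crop = focal_reducer * crop_mode   # 2.16x relative to APS-C
relative_size = 1 / effective_crop           # ~0.46 -- slightly less than half, linearly

print(f"effective crop vs. APS-C: {effective_crop:.2f}x")
print(f"effective frame size vs. APS-C (linear): {relative_size:.2f}")
```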
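
Referenced from posts 15-19 above -- a minimal sketch, in Python with numpy, of what summing four adjacent pixels during a 4:1 down-conversion (e.g. UHD to full HD, per color channel) does, assuming the 199/202/200/200 sample values from post 15. The array shape and names are my own illustration, not the behavior of Resolve or any particular transcoder.

```python
import numpy as np

# Four adjacent 8-bit samples from one color channel of the higher-res image
# (values taken from post 15).
block = np.array([[199, 202],
                  [200, 200]], dtype=np.uint16)

summed = int(block.sum())              # 801 -- needs 10 bits; nothing is discarded
decimated = int(block[0, 0])           # 199 -- "simple" down-scale keeps 1 of 4 samples
avg_rounded = int(round(block.mean())) # 200 -- averaging back to 8 bits drops the fraction

print(summed, decimated, avg_rounded)  # 801 199 200
```

For a full frame, each non-overlapping 2x2 block of the original channel would be reduced the same way, which is why the down-scaled file needs a higher bit depth (10-bit in this sketch) to hold the summed values without rounding.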