
tupp

Members
  • Content Count

    954
  • Joined

  • Last visited

Posts posted by tupp

  1. wouldn't like speedboosting an nex camera probably be the cheapest option and have decent video? maybe not much cheaper if you have a good speedbooster on it, plus no match for 5D raw or any sort of autofocus or ergonomics, but just a thought

    I think if you just want the fullframe look, get a speedbooster on some mirrorless aps-c camera.

    This would be the cheapest option, no AF.

    http://www.eoshd.com/comments/topic/18504-canon-eos-m-focal-reducer-fullframe-raw-for-300/

    I was about to make these very same suggestions.

     

    The only drawback to this path for the OP is that he is evidently already significantly invested in Canon glass (no aperture control on lens).  So, unless there is an option for a powered EF speed booster that enables aperture control, OP would probably have to get/rent new lenses (and sell his Canon lenses).

  2. That was long ago and never moved farther. Perhaps no interest from the author or not enough knowledge of the hardware of these cameras.

    Don't belittle Lukas' crucial foundation work.  He did most of the heavy lifting for those who followed, and he did it only in May of last year.  Obviously, the NX hacks moved farther because of his work.

    The Lukas rooting of NX cameras led to this work (and this work), which led to the simple work of your boy on DP Review disabling the video time/file-size limit.  Your boy acknowledges the immediate upstream source, and that source acknowledges Lukas.  From the comments on Lukas' page, there are evidently others who are also hacking the NX cameras, thanks to him.

  3. DVDs probably won't have a problem, but magnetic or electronic devices have a problem with strong magnetic fields, such as in the case of an über-sunstorm.

    Okay.  I think we agree that archival optical disks, such as the 1000-year Millenniata disk, will outlast film by centuries and will also survive an "über-sunstorm."  Thus, film is already soundly trumped by digital in archival scenarios.

     

    However, I would still be interested in hearing about any incident in which a magnetic field from an über-sunstorm has ruined a disconnected hard drive.

  4. It is true that digital files can be copied/stored and archived with greater speed and lower cost than analogue tape or camera negative, but ultimately any hard drive is prone to either mechanical failure or corruption of data, solid state or otherwise.

    Not exactly.  If you keep a hard drive disconnected and in cold storage, the data on the discs should last a very long time, as should the mechanical components.  No one knows for sure how long a hard drive will last in this scenario, as we have not yet reached the point of failure in such a case.  With a stored, disconnected drive, I would imagine that the capacitors in the hard drive's circuitry will go bad before the mechanics break down or the info on the discs degrades.

     

    Analogue storage via LTO tape is still a preferred method among banks and film companies for archival digital data; being analogue tape-based, it is virtually immune to the volatile nature of any digital storage.

    Those tapes are digital, not analog.  And that tape suffers the same deterioration problems as regular audio tape -- the magnetic layer separates/flakes-off the base layer as the tape ages.

     

    By the way, there used to be countless "digital/analog" computer tape systems.  I still have one.  The computer encodes digital files into analog audio beeps/tones (similar to modems), which can be recorded on almost any audio tape recorder.  That system is different from the digital tape system that you mention.

     

    Camera negative or film print (properly stored) can be preserved for 100+ years; I don't know of any digital drive that can promise that.

    With film, the image/dyes still progressively fade during that 100+ years (and the base becomes more and more brittle).  And, again, film cannot be copied without generational loss.

     

    Actually, there exists digital media that have already lasted for centuries (and that still work!).  Music boxes using pins and spaces on cylinders (as their digital "ones" and "zeros") first appeared in the 1200s.  By 1800, music boxes using metal disks (with holes and lack of holes) started to appear and became popular in that century.  Metal discs from the 1800s are still being played by enthusiasts today, and they sound exactly as they did in the 1800s.  So, digital mediums can last for a very long time and not suffer any degradation of quality.  (It is also kind of cool that digital audio recording existed centuries prior to the arrival of analog audio recording.)

     

    Certainly, it would be cumbersome and inefficient to try to encode video files to music box cylinders and disks.  On the other hand, there exist long-lasting digital media that can do so compactly and efficiently.  The Millenniata disk is expected to keep digital data up to 1000 years, as it uses microscopic engraved pits to record data, instead of dyes that can fade.

     

    Likewise, "pressed" CDs/DVDs use physical pits to store data, in contrast to common "burned" CDs/DVDs which use dyes.  Pressed disks are projected to last up to 300 years.  Of course, the average eoshd.com poster won't have a disk press connected to their laptop, but some "burnt" optical disks have stable dyes that are estimated to last 100-250 years.

     

    And again, it is difficult to know how long a hard drive will last stored and disconnected.

     

    However, whether any of these digital mediums has a lifespan superior to film is almost immaterial, due to two facts:

    1. digital files can be repeatedly copied with absolutely no generational loss;

    2. there is no automatic, progressive fading/degradation of the information as a digital file sits in storage.

     

    These two abilities allow digital files to last forever exactly as they were originally.  If the same could be done with film, then it could last forever, too.


  5. The reality of course is that film is still the most reliable form of archival format.

    I think I understand what you are saying, but I am not sure that film is actually more archival than digital.

     

    Film ages, and its colors fade.  Furthermore, every time an analog image is copied, generational loss occurs, so there is a practical limit to how long film can be maintained.

     

    In contrast, one can keep making copies of a digital file on fresh media, and the copies will be exactly the same as the original -- no generational loss and no aging or fading.

     

    The thing about film is that, when properly shot and handled, a film image can capture an incredibly vast, "fluid" color depth, unencumbered by the incremental bit depth limitations of digital imaging.  Having all of that color depth in the original image makes film a little more "future-proof" than digital.

     

    So, film has a limited shelf life compared to digital, but a film image usually starts with more color information, which makes it more future-proof.

  6. The DJI OSMO - is it groundbreaking?

    I imagine that if you dropped it on the pavement that it would be groundbreaking.

     

    Seriously, I think it is a great product, but it seems more of a natural phase of the evolution and miniaturization of balancing/gimbal/camera technology, rather than a major breakthrough.

     

    Gimbals certainly are helpful tools, but they can never replace the precision and artistry of a good Steadicam operator.  Furthermore, both a gimbal and a Steadicam give a dramatically different look/movement than a camera on, say, a Fisher 10 dolly.  Dollies give a more solid and controlled feel.

  7. The only possible benefit from permanently removing mirror and box from Full frame canon dslr to accommodate speedbooster optics would be when shooting crop mode with Magic Lantern Raw. Then the heavy crop mode would be optically reduced to around s35 (or slightly under) but will allow higher resolutions to be recorded (for short periods). The optics would have to be professionally installed and would effectively restrict the camera to ML crop mode only.

    This argument assumes that one is using a full-frame lens along with a typically sized focal reduction element.  As David Bowgett mentioned above, one could start with a medium format lens and also use a larger focal reduction element (perhaps something like this).

    Not sure if such a large focal reduction element would have to be positioned inside the camera, but both pros and "amateurs" have modified Canon HDSLRs to internally accommodate larger optical systems.

  8. My question is, is there anything similar to a speedbooster for full frame cameras? Something to enhance the field of view of a lens and its aperture?

    You can use a wide angle adapter that fits on the front of the lens, but you won't get a gain in brightness.  The cheaper wide angle adapters sometimes exhibit a slight loss in sharpness, but a few brands such as Century Precision Optics (now Century Schneider) have very sharp adapters.

     

  9. 720p raw is much more detailed than canon 1080p

    Not so sure about that.  There have been many tests between the two modes, and the results are inconsistent.  Some tests show bigger differences than others.  Here's one in which there seems to be very little difference.

     

    I agree that, generally, the raw looks a little sharper than h264 when both are shot at the same resolution.  However, a lot depends on how the footage is handled.  Good results can be had with h264 by setting picture style sharpness to "1" while giving a slight sharpness boost in post.  Of course, doubling the h264 bit rate and shooting all I-frames with TL/ML additionally cleans up the look.

  10. I'm not sure I'd use it in any situation where you really needed to depend on it, even as a crash-cam, because of the shutter bug.  The camera just freezes up sometimes on ML (unless they fixed it; don't think they ever did).

    I think the shutter bug problem only occurs with EF-M lenses, but they are working on it.

     

     

    As Mercer says, the camera with the Fujian CCTV lens ($30) is magical.

    Ahem...

     

     

     

    The c-mount adapter is $10.  

    The C-mount adapter and an extension tube were included with my Fujian 35 -- all for $28.

    is there an mft to eos-m adapter? could be possible based on flange distance. so a different speedbooster could maybe be adapted to it and more suited for it? less the point of the topic maybe, with regards to price, but just curious

    Might be possible, as the flange focal distance of the EOSM vs. M4/3 is 18mm vs. 19.25mm, respectively, while the throat diameters are 47mm vs. 38mm, respectively.  So, a slightly recessed adapter is possible, especially if the M4/3 lens release lever is on the back/inside face of the adapter.
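    A quick arithmetic check of those adapter-geometry numbers (an illustration of mine, not from the thread):

```python
# Flange focal distances and throat diameters quoted above, in mm.
eosm_ffd, m43_ffd = 18.0, 19.25
eosm_throat, m43_throat = 47.0, 38.0

# How far the M4/3 mount would need to sit recessed inside the EF-M throat:
recess = m43_ffd - eosm_ffd
print(recess)                    # 1.25 mm

# The narrower M4/3 mount can physically fit inside the EF-M throat:
print(m43_throat < eosm_throat)  # True
```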

  11. I'll look into the all i-frame mode and test them with that. 

    I just remembered a couple of caveats.

     

    To boost the bit rate and/or use all I-frames, the audio has to be disabled -- best to do so in the Canon menu.  So, you have to sync the sound the old-fashioned way -- no PluralEyes or other such software.  With a fast card and with sound disabled, you should have no trouble getting a stable 2x bit rate with all I-frames.

     

    Also, use a flat picture style, such as "Cinestyle" or "Flaat 11,"  but don't push things too far or you could tangle with FPN or banding in parts of the frame.  Again, try setting the sharpness to "1" instead of "0," which requires a lower sharpness boost in post, hence, avoiding noise.

  12. With just the 3x crop on, the camera is insanely better than the original canon specs, so I was wondering what the image would look like with the Nikon focal reducer shot in 3x crop... Would it bring it back to apsc sized without the aliasing?

    Never tried the 3x crop on the EOSM (nor on the T3i), but I can't imagine that a pixel peeper would be happy using the RJ focal reducer in crop mode.  Lenses only have a finite number of resolving lines within their image circles.  The more one crops into the image circle, the more one reduces the number of resolving lines in the frame.

     

    Focal reducers squeeze more of a lens' resolving lines into the frame, but at the same time the focal reducer causes a slight loss of sharpness by introducing another piece of glass into the optical chain.  If the focal reducer is high quality, this tradeoff is optically "equitable" and no loss of sharpness is noticed between the images with and without the focal reducer.  I don't know if the RJ focal reducer will hold up to such a crop, but if you like the look of the Cosmicar 12.5mm-75mm in 3x crop (I love it), it might be okay.

     

    Also, the RJ wouldn't bring the 3x crop close to APS-C size.  The RJ focal reducer crop factor is 0.72x, and the crop factor of the EOSM's crop mode is 3x:

    0.72 x 3 = 2.16

     

    So, the effective size of the frame in 3x crop mode with the RJ focal reducer would be 2.16 times smaller than (or slightly less than 1/2 the size of) an APS-C sensor.
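    The multiplication can be sanity-checked in a couple of lines (illustration only; the variable names are mine):

```python
# Crop factors multiply: the 3x crop mode combined with the 0.72x
# focal reducer gives the effective crop relative to the APS-C frame.
reducer = 0.72    # RJ focal reducer magnification
crop_mode = 3.0   # ML 3x crop mode

effective = reducer * crop_mode
print(effective)      # 2.16
print(1 / effective)  # ~0.463 -> slightly less than 1/2 the linear size
```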

  13. Yes, like TheRenaissanceMan says, you can do 1280 native focal length, but get moire.  However, if you have shallow depth of field and are not shooting anything with hard edges you'll get that rich color look.  The biggest problem is the focus pixels.  You need to remove them or they create "pink dots".  I wrote software for Windows that does it well (but only in crop mode)  https://bitbucket.org/maxotics/focuspixelfixer/downloads  For other resolutions, you want the PinkDotRemover.  The best combo, IMHO, is a 10-20mm type lens, then shoot crop mode, which will make it 40 to 80.  Also, you'll need a 95MB/s read/write card (even though it will max out at 40MB/s).  Here's a cool video I did using the EOS-M to show you what your sensor really sees in Bayer mode: https://vimeo.com/79857693  And here's one that shows the basic quality; it's VERY filmlike to me.  Here's one with the kit lens https://vimeo.com/76181035  And this one with a Sigma 10-20.  Finally, here's a good one in H.264 with the EOS-M.  I LOVED that camera.  https://vimeo.com/75122636  I now have a BMPCC.  But I LEARNED so much using the 50D (another great ML camera) and the EOS-M.

    Your videos are inspiring and informative.  The "Angry Toddlers" video is what induced me to get the EOSM along with the Fujian 35mm -- that's a magical combination!

     

    For those unfamiliar with the Fujian 35, it is an inexpensive C-mount lens that has a wonderful "wonkiness" in its plane of focus, and its image circle covers the EOSM's entire APS-C sensor.  Using the  Fujian with such a large sensor maximizes its focus wonkiness so that it "pops" across the frame.

     

    I avoided raw with ML and TL, as the "work flow" early on seemed to be a little tedious.  It was okay to sacrifice a little dynamic range and sharpness, for ready-to-use files that are full HD with all I-frame, high bit rate h264.  Setting the picture style sharpness to "1" (not 0) and then boosting the sharpness slightly in post gives clean/flexible camera files and plenty of sharpness in the final image.

  14. You could also go the Tragic Lantern route and just get a jacked up bit rate for h.264.

    Regular Magic Lantern for a while has allowed one to boost the h264 bit rate.  The main advantage of Tragic Lantern's h264 is the ability to set all I-frames, so that there are no inter-frame artifacts.  The all I-frame capability combined with a boosted bit rate eliminates almost all perceptible artifacts in h264 at the EOSM's full HD resolution.  By the way, TL also provides this same all I-frame capability on the 600d/T3i (not sure if it does so on the 7D).

     

    On the other hand, I believe that regular ML has all I-frame capability in the source code, but it is not "switched on" in the provided builds.  I seem to recall reading in the EOSM thread that someone had enabled all I-frames in the ML source, compiled it and used it without any serious problems.

  15. So yesterday I stumbled upon this product on ebay ( http://www.ebay.com/itm/new-Nikon-F-G-focal-reducer-speed-booster-adapter-to-Canon-EOS-M-EOSM-M2-M3-/361376870358?hash=item5423bd6fd6:g:u84AAOSwHnFV4muu ); it's an EOS M to Nikon G focal reducer.

    [snip]

    If anyone has tried a common combo, I would be interested in knowing more.

    I've got the EOSM and this RJ focal reducer.  Essentially, it makes the EOSM a full-frame camera and gives an extra stop of brightness.  It goes a little soft (with a slight chromatic aberration) on the edges, but most don't notice it unless they are looking for it.

     

    When I received the RJ adaptor, the back mount (EF-M) was loose/wobbly and could not be tightened.  Basically, the mount screws were too long and/or the threads were not tapped deep enough into the body of the adaptor.  RJ was diligent in corresponding on the problem, and RJ sent shorter screws which eliminated looseness/wobble.

     

    The RJ adaptor mostly stays mounted to the EOSM, as all my lenses except one are old Nikkors and as it keeps dust from getting to the sensor.  Sometimes the RJ gets replaced with a tilt-swing adaptor (EF-M-to-Nikkor-F), which is a lot of fun.  Almost never used are my EF-M-to-Nikkor-F dummy adaptor and my EF-M-to-EF adaptor.

     

     

    On the RJ adapter, I am considering using tape to fix the "G" aperture adjustment ring at the smallest position, as I don't have any "G" lenses and as the ring can be inadvertently bumped, keeping the aperture on "F" lenses from closing without your realizing it.

     

    By the way, the EOS M2 can currently be had new for almost the same low price to which the original EOSM sank, but ML is just now starting to explore the M2.

  16. I know what you are thinking with the summing of the 4 pixels increasing bit depth; it is logical, and I tested this theory just like you did, by writing a program to do it.

    I never tested it.  I merely have a little knowledge of how high-end down-conversions (and up-conversions) have worked since the early days of DV.  Plus, the math and theory are straightforward and "dumb-simple."

     

     

    Yes, if you downscale 4K to HD you can get 10-bit 4:4:4; however, all you have done is increase the numeric precision from 8-bit to 10-bit. You have not increased the color accuracy.

    I think we basically agree here.  As I have maintained throughout this thread, summing in a down-conversion is merely swapping resolution for bit depth -- not increasing color depth.  One can sacrifice resolution for greater bit-depth, but one can never increase the color depth of a digital image (not without introducing something artificial).

     

    However, I am not sure whether the "color accuracy" can be increased during a down-conversion.

     

     

    For example, let's say you have 4 pixels whose luminance, if recorded at 10-bit native, would have a value of 801 (10-bit is 0-1023). When recorded at 8-bit, these pixels would have a value of 200 (8-bit is 0-255). Take those 4 pixels and sum them up, and you have a 10-bit value of 800, not 801.

    A lot depends on what is depicted by those four pixels.  One of those four summed pixels might be 199 with another being 202 and the other two pixels being 200, hence, 801.
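    The four-pixel case can be sketched directly (the code and names are mine, not from the thread):

```python
# Summing four 8-bit neighbors lands the result on a 0-1020 (~10-bit)
# scale, preserving sub-LSB detail that averaging-and-rounding back to
# the 8-bit scale discards.
pixels = [199, 202, 200, 200]      # four 8-bit neighbors (0-255)

summed = sum(pixels)               # value on the ~10-bit scale
averaged = round(sum(pixels) / 4)  # 200.25 rounded back to 8-bit

print(summed)    # 801: the fractional detail survives
print(averaged)  # 200: rounding discards it
```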

     

    Obviously, smoother surfaces/areas (such as the example you gave) can cause minute "color accuracy" discrepancies.  The main instance in which such minute discrepancies become apparent is banding on smooth areas.  Banding has been discussed in this thread, and, nevertheless, the color depth of the original image is maintained in a down-conversion when the pixels are summed -- even with banding.

     

     

    So you have increased the bit depth, but the accuracy of the data is still the same as it was when it was 8-bit. So there is no benefit to the increase in bit depth, as the final pixel will have the same potential error as before, as far as grading is concerned.

    No, there is definitely a benefit in increasing the bit-depth during a down-conversion.

     

    It is important to understand that there is a world of difference between color depth and what you call "color accuracy."   If you do not sum and also increase the bit depth in a down-conversion, you throw away valuable color depth -- even if the "color accuracy" remains "8-bit" on some smooth sections of the image.  Such a sacrifice in color depth will be apparent in the more complex, "cluttered" sections of the image that have zillions of complex transitions between color tones.

     

    If you don't sum the pixels and don't increase the bit-depth, you may or may not have occasional banding (8-bit accuracy) but you will certainly have reduced color depth (apparent in the more complex areas of the image).  If you do sum and do increase the bit-depth, you likewise may or may not have occasional banding, but you will have nonetheless maintained the color depth of the original image (no reduced color depth).

     

     

    In fact, if I am not mistaken, there is the same amount of potential error as if you just converted the 4K to 10-bit without downscaling. There is an increase in chroma, though, as you now have chroma data for every pixel, hence no subsampling. Basically, the increased bit depth is artificial and not the same as 10-bit capture.

    The increased bit-depth is not artificial -- it is merely sacrificing resolution for bit-depth to maintain color depth.

     

    Most people don't realize that resolution is a major factor in color depth, and that fact is usually the misunderstood point in the aforementioned down-conversions.  In fact, you could have a system with a bit depth of "1," and, with enough resolution, have the same degree of color depth as a 12-bit, 444 system.
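    The claim that enough resolution can stand in for bit depth is essentially how halftoning and dithered printing work. As a hedged illustration (mine, not from the thread), here is a minimal 1-D error-diffusion sketch: a constant tone is rendered with only 0/1 "pixels", and averaging the result recovers the tone.

```python
# A 1-bit system with enough resolution can represent intermediate
# tones: error diffusion spreads the quantization error over
# neighboring pixels, and a local average recovers the original tone.

def dither_1bit(samples):
    """Quantize each sample in [0, 1] to 0 or 1, carrying the
    quantization error forward to the next sample."""
    out, err = [], 0.0
    for s in samples:
        target = s + err
        bit = 1 if target >= 0.5 else 0
        out.append(bit)
        err = target - bit
    return out

tone = 0.3                         # a mid-gray tone on a 0-1 scale
bits = dither_1bit([tone] * 1000)  # 1000 one-bit "pixels"
recovered = sum(bits) / len(bits)

print(recovered)                   # ~0.3: the tone survives in 1-bit form
```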

     

    Actually, there exist countless images that have absolutely no bit-depth, yet every one of those images has a color depth equal to or greater than that of 12-bit, 444 images.

     

    By the way, the mathematical relationship between color depth, resolution and bit depth is very simple in digital RGB imaging systems:

    COLOR DEPTH = (RESOLUTION × BIT DEPTH)³
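    As a hedged sketch (the reading of "BIT DEPTH" as the number of code values per channel, 2**bits, is my assumption, not something stated in the thread): under that reading, the 4K/8-bit to HD/10-bit trade discussed earlier leaves the product unchanged.

```python
def color_depth(pixels, bits):
    """The thread's relation, with BIT DEPTH read as code values
    per channel (2**bits) -- an assumed interpretation."""
    levels = 2 ** bits
    return (pixels * levels) ** 3  # cubed for the three RGB channels

uhd_8bit = color_depth(3840 * 2160, 8)   # 4K at 8 bits per channel
hd_10bit = color_depth(1920 * 1080, 10)  # HD at 10 bits per channel

print(uhd_8bit == hd_10bit)  # True: resolution traded for bit depth
```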

  17. Yes, and every single NLE, color grading, or encoding application uses some kind of averaging when downscaling. It is practically impossible to find any that will even give the option not to. Premiere, Photoshop, After Effects, Nuke, Resolve, Media Encoder and Final Cut all do.

    I'll have to take your word that those programs average (without rounding) when they downscale.

     

    However, summing is more accurate, and increased bit-depth is implicit with summing.

     

     

    Resolve converts everything to 32-bit float and processes it in that bit depth, which is the highest precision in computing, unless you count doubles, which no software I am aware of uses.

    That's fine, but more importantly, does Resolve yield greater bit-depth in the final down-scaled file, without rounding the average?
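    The point of that question can be sketched with the four-pixel example from earlier in the thread (names and code are mine): an average kept in floating point carries the same information as the sum; it is rounding back to an 8-bit output that discards it.

```python
block = [199, 202, 200, 200]     # four 8-bit neighbors

summed = sum(block)              # 801 on a ~10-bit scale
avg_float = sum(block) / 4       # 200.25, kept in floating point
avg_8bit_out = round(avg_float)  # 200: rounded back for an 8-bit file

# The sum and the float average carry the same information:
print(summed == avg_float * 4)   # True
# Rounding to an 8-bit output is where the precision is lost:
print(avg_8bit_out * 4)          # 800, not 801
```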
