
richg101


Posts posted by richg101

  1. Andrew, I was going to ask this in your discussion related to your write-up [Prototype Metabones Speed Booster equipped NEX 7 *VS* full frame (5D Mark III)].

     

    This adds 1 stop of light to every adaptable lens, right? To EVERY possible lens it is made for? So, let's say, an SLR Magic T0.95 lens would also gain another stop of light (assuming there is an adapter made for it)? So it would turn into a T0.67 lens?

     

    If that's true, then I don't wanna know how Stanley Kubrick's soul would feel about this. Especially after his long work on the f/0.7 lens for Barry Lyndon...  :P

     

    The SLR Magic lenses for M4/3 actually have this type of speed booster within them. They're effectively the same as an f1.4 full frame lens with a reducer (like the Speed Booster) built in. They don't cover full frame and have too short a flange focal distance, so they can't be used with this type of adaptor anyway.
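    To put rough numbers on the exchange above (my own arithmetic, nothing from Metabones; the lens/reducer pairing is hypothetical, since no such adapter exists):

```python
import math

def reduced(focal_mm, t_number, factor=0.71):
    """Effective focal length and T-number behind a focal reducer."""
    return focal_mm * factor, t_number * factor

def stops_gained(factor=0.71):
    """Light gain in stops: gathered light scales with 1/factor^2."""
    return math.log2(1.0 / factor ** 2)

# A 50mm T0.95 through a 0.71x reducer (a hypothetical pairing):
fl, t = reduced(50, 0.95)
print(round(fl, 1), round(t, 2))   # 35.5 0.67
print(round(stops_gained(), 2))    # 0.99 -- i.e. "about one stop"
```

    So the questioner's T0.95-to-T0.67 figure checks out for a 0.71x reducer.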

  2. I'd like to see this paired with that excellent-looking NEX-5N cinema cage. I can't wait to acquire a Speed Booster for my NEX-7. Double my focal length collection, gain a stop across the board and attain FF DOF and FOV? $600 is a steal.

     

     

    It's funny. Enquiries I've been getting about the Cine Housing have become more regular since the Speed Booster was announced. I'm glad I designed the housing to be compatible with the diameter of the original Metabones EF-E-mount adaptor. Since the M4/3 version is likely to take a while to come to market, I think a lot of people might move to E-mount because of this. It might make the Cine Housing more cost effective if I get a load of orders off the back of it. Even the bigger NEX-7 is a little small to accommodate a larger piece of Canon L glass, whereas the Cine Housing gives a nice bit of ballast to counteract the weight of the lens.

  3. I've played with these too on the RX100.

     

    I found Portrait at minimum contrast has the greatest dynamic range, then Sunset. I have DRO on all the time at about 3.

     

    Put together, these two things get you pretty close to a Cinestyle type profile.

     

    It seems to me that DRO is just pushing up the blacks, but doesn't pull back highlights.

     

    I think you're right. Highlights don't seem to be affected; it just pulls up the blacks. The main issue is that DRO seems to be dynamic, changing depending on exposure levels. If you have a dark subject moving around in front of a white background, it might take time to adjust, meaning parts of the white background around the dark subject might get boosted slightly until it auto-adjusts.
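    As a toy illustration of the "pulls up the blacks, leaves highlights alone" behaviour (just a sketch of the idea with made-up constants; Sony's actual DRO processing is not public):

```python
def shadow_lift(v, knee=0.5, lift=0.35):
    """v in [0,1]; raise values below the knee, leave the rest alone."""
    if v >= knee:
        return v                      # highlights unaffected
    w = 1.0 - v / knee                # weight: strongest near black
    return v + lift * w * (knee - v)  # smooth lift toward the knee

print(shadow_lift(0.05))  # deep shadow: noticeably lifted
print(shadow_lift(0.9))   # highlight: unchanged -> 0.9
```

    The dynamic behaviour described above would correspond to the knee and lift amounts being re-estimated per frame, which is where the lag around a moving dark subject would come from.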

  4. Without applying some kind of chroma filtering (some codecs do this, or have options to) you will retain banding in a 32bit workflow if it exists in the original footage. 32bit isn't going to create transitional data where there was none (not without intentionally applying another process expressly to do so, or by some byproduct of that process). If the 8bit source bands, a first-step transcode to 32bit will band too unless it's filtered or dithered. Make sure you're also not introducing it with some discrepancy in your color management, just as a "crazy check".

    Indeed. I meant that since changing to using purely 32bit colour correction effects in Premiere I have not been introducing any hugely noticeable banding when applying quite prominent grading. I can certainly obtain cleaner gradients and smoother skin tones/skies when I use just 32bit effects. Re. capturing in 8bit colour, I am quite impressed with the Sunset profile on my NEX-5N. I hadn't used it before, but then I read some articles on how it tries, quite successfully, to replicate 10bit colour in the way gradations are captured and processed. Andrew of EOSHD said something along the lines that the profile is almost too good to have been included in the NEX-5N. It would be interesting to work out exactly how Sony process the image in the Sunset profile to do what it does. It must be some type of smoothing in the same way Neat Video does it.
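    A quick sketch of why dithering/smoothing hides banding, which seems to be the trick both here and in the Sunset profile (illustrative only; the 16-level grid is deliberately coarse so the effect is obvious):

```python
import random

random.seed(1)                         # repeatable demo
ramp = [i / 999 for i in range(1000)]  # smooth 0..1 gradient
levels = 16                            # deliberately coarse quantiser

def quantise(v):
    return round(v * (levels - 1))

quantised = [quantise(v) for v in ramp]
# add up to half a quantiser step of noise before quantising:
dithered = [quantise(min(max(v + random.uniform(-0.5, 0.5) / (levels - 1), 0.0), 1.0))
            for v in ramp]

def transitions(xs):
    """Count value changes: few = hard band edges, many = grain."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a != b)

print(transitions(quantised))   # 15 -- one hard edge per band
print(transitions(dithered))    # hundreds -- edges dissolved into grain
```

    Both versions use the same 16 levels; the dither just trades the visible steps for noise, which is far less objectionable to the eye and to the codec.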

  5. Took a couple of shots of my son. Used Neutral, then Canon Log. Neutral was fine. With Canon Log, when adjusting the shadows and highlights within legal limits, I get some banding in the image! Pisses me off. I'm going to do some more testing this afternoon with the neighbor.

     

     

    Are you grading with a 32bit workflow, so all the effects applied to the image are 32bit? Your banding should disappear altogether with a 32bit workflow, even with 8bit original capture.

  6. 4K downscaled to 1080p for the effect of 4:4:4 is indeed the key, but the S35 crop mode is true S35, as good as any 1080p-sensor camera out there, from my research. I imagine the S35 crop mode is about as sharp as the C300 and C100, probably beats the FS100, and has a higher in-camera, straight-to-card bitrate. If I recall, S35 crop mode is nearly the same as C300 straight to card.

  7. I just tried a very basic test and the 2x adaptor spun around does indeed work. At least 1 stop brighter, and what looks like wider too, but it is hard to tell because focus is limited to macro distances with the 2x teleconverter's bayonet positioned against the lens bayonet. In effect I have increased the flange/plane distance by about 10mm, which has limited maximum focus distance to about 25cm.

     

    I wonder if I should remove the optical module and rehouse it in a dumb adaptor using a turned aluminium centering shim? The optical module could be adjusted to change the distance between sensor and lens. I suppose if I do the maths I can turn an adaptor which goes straight from a Hasselblad Planar 80mm f2.8, through the inverted 2x element and onto the sensor. There are SL35 to SLX / Rollei 6000 series adaptors available too. In my head the image circle from a medium format lens would be scaled down about enough to be slightly bigger than APS-C? I'd be looking at roughly a 0.5x focal reduction instead of the Metabones' 0.71x. I bet the Rollei optics are on a par with or better than those of the Metabones as well?
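    A back-of-envelope check of the idea (assuming the reversed 2x element behaves like a ~0.5x reducer, which is a guess, and using the usual published sensor figures):

```python
import math

def reduce_lens(focal_mm, f_number, circle_mm, factor=0.5):
    """Focal length, speed and image circle behind a focal reducer."""
    return focal_mm * factor, f_number * factor, circle_mm * factor

# Hasselblad Planar 80mm f2.8 covers roughly an 80mm circle (6x6 frame):
fl, fn, circle = reduce_lens(80, 2.8, 80)
aps_c_diag = math.hypot(23.5, 15.6)   # APS-C diagonal, ~28.2mm

print(fl, fn, circle)       # 40.0 1.4 40.0
print(circle > aps_c_diag)  # True -- the reduced circle still covers APS-C
```

    So on paper a 0.5x reduction turns the Planar into a 40mm f1.4 whose scaled-down image circle comfortably exceeds the APS-C diagonal, in line with the hunch above.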

  8. Whilst I am not a patent lawyer, I think Metabones may be in the clear on the SB. Looking at the Kodak patent US5499069.pdf, all the claims involve SLRs, either in the wording or by implication. Of course mirrorless cameras are not SLRs, so they may have a clear path on this technicality.

    From what I see it is a 1.4x teleconverter mounted backwards, and the beauty of the design is the excellent correction of spherical aberration. Nevertheless there will be copies (using different lens prescriptions) from third parties, which may include Canon themselves for future expansion of the EOS M system.



    I like it! I have a 2x Rollei teleconverter, very good quality. I will try it backwards... :)

    I wonder how it would work with a lens designed for a bigger image area, such as a Hasselblad Zeiss lens.
  9. Well done man! I have come close to signing away my life for a 2yr interest-free deal on the 1DC. I'm yet to let my mad brain allow it, but I have come almost too close! Hope you enjoy it. I think you just bought the best value camera at the moment (and I'm being serious).


  10.  

     

    Could you recommend a workflow for this? Where do you get your 4K grain from?

     

    I've read the GH2 holds up well to 2.5K. Has anyone DCP-projected, or seen DCP-projected, GH2 material?

     

    Thanks richg101

     

    If the GH2 holds up to 2.5K, I reckon it'll be fine if you work in a 4K timeline with the intention of exporting out at a smaller frame (1080 tall).

     

    Simply open up your editor and create a working project with a timeline that is DCP 4K anamorphic 2.39:1 (4096x1716 frame size). Now import your footage and scale vertically and horizontally (with proportions unconstrained). You'll need to scale height by about 150% (which isn't very much really) and then stretch horizontally to get the desired amount of de-squeeze. If you shoot with very sharp lenses you can then apply a tiny bit of Gaussian blur (maybe 3 pixels wide) to soften up the harsher pixel steps from the upscale, then apply a slight sharpen to take it back. Then drop a true 4K film grain scan over the top (each pixel of the 4K scan will be a fraction of the size of the upscaled GH2 pixels underneath). Carefully adjusting the contrast and sharpness of the 4K grain can make it add a perceived sharpness to the whole image.

     

    Now when you export/downsize to 2K anamorphic you will be downscaling the image back to smaller than it was originally, but with hyper-small grain pixels over the top. This process seems to give a nice impression that the film was shot on something higher end than the projector it is being played back on. The 4K grain also adds some detail for the codec to bite into, meaning it sees detail instead of just dark or light areas of block colour. Since in some situations you might need to mask noise by crushing blacks, it's nice to have the 4K grain there on top to disguise this and give the codec a bit of detail it thinks it needs to keep intact.

     

    Just my opinions based on a bit of experimenting. Give it a go: export a small portion of the up-rezzed footage with the 4K grain overlay and look at it at 100% pixel magnification. It certainly looks slightly more detailed and crisp, while not being too obvious if done right.
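    For anyone who wants the numbers behind the scaling step, worked from the frame sizes mentioned above (my arithmetic):

```python
SRC_W, SRC_H = 1920, 1080   # GH2 source frame
TL_W, TL_H = 4096, 1716     # DCP 4K "Scope" 2.39:1 timeline

v_scale = TL_H / SRC_H      # vertical scale needed to fill the height
print(f"vertical scale: {v_scale:.0%}")   # 159% -- in the ballpark of "about 150%"

# the 4K grain sits at native resolution, so each grain pixel is
# smaller than an upscaled GH2 pixel by the same factor:
print(f"grain pixel is 1/{v_scale:.2f} the size of an upscaled pixel")
```

    The horizontal stretch on top of this depends on how much de-squeeze you want, which is why it's done unconstrained.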
     

  11. With the limitations of 2K projection I'd change my suggestion to not include the nesting and squeezing to 4:3, but I still stand by the actual de-squeeze and framing of the image I mention before that. Work in a timeline with your full height of 1080 (maybe even work in a 4K timeline and scale up; the GH2 is pretty detailed for 1080p, and with a bit of 4K grain overlay and delicate sharpening it might pass for 4K capture downscaled to 2K) and then export out at the desired frame height of around 800px. What a shame! 140px off the top and bottom from your original 1080p footage. That's a lot of waste, especially since you'll have lost a fair bit of horizontal resolution during your de-squeeze and cropping from 3.55:1 to 2.39:1.
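    The waste in numbers (straightforward arithmetic on the heights mentioned above):

```python
SRC_H, TARGET_H = 1080, 800   # full-height master vs letterboxed delivery

lost = SRC_H - TARGET_H
print(lost, lost // 2)        # 280 140 -- i.e. 140px off top and bottom
print(f"{lost / SRC_H:.0%} of the frame height discarded")   # 26%
```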

  12. Great, thanks! The only thing is, from what I've read and been told there are no more anamorphic projections left, so squeezing everything to a final 4:3 won't work :S Check these links:

    http://www.antoniourquijo.com/en/blog/dcp-for-you-anamorphic-projection

    http://www.arridigital.com/forum/index.php?topic=7495.0

     

     

     

    The question is, what would be the balance to get the best out of the GH2 + the Kowa 2x anamorphic, taking into consideration that the final file should fit the 2K DCP (2048x1080)? And taking into consideration that we prefer and want an anamorphic look.

     

    Ignore my last comment :) I guess they must scale the projected image to fill the full height of the screen. This method removes all the reasons for anamorphic in the first place, losing 1/3 of the projector area and brightness too. I guess when I have seen the 2x projection lenses in action it has been during film projection?

  13. I'd try de-squeezing a little more if I were you; the samples you show are still a little too squeezed. Make your frame 1080 tall x 2580 wide (2.39:1), import your footage and stretch to fill the frame, stretching out as much as you can, and adjust the L/R position to best make use of your frame (you have a few hundred pixels which will be out of frame on each side, so you can play with horizontal framing to get the most out of each shot). Certain shots will allow you to de-squeeze more and others less: a landscape or woodland setting which requires more detail can be less stretched out, while a close-up of a face or a side-on view of a car will need to be more accurately de-squeezed (closer to the 2x) so it looks right.

    Based on your DCP suggestion, I would assume it would be best to export the entire project out as a 4:3 squeezed export, ready to be de-squeezed by the picture house (who will be using a 2x anamorphic projector lens for anamorphic material).

    Get everything arranged and nest it all together in your current 2.39:1 frame, but now apply a 2x squeeze to the nested project. This will result in a roughly 4:3 image area of 1080x1440. Once put through the picture house's projector your film will take up the full height of the screen and will stretch out to the correct 2.39:1 and fill the width too. If you just make your frame 800 or so pixels tall it will letterbox, and you will lose 1/3 of your screen area and projector brightness when projected.

    Hope this makes sense. I have never done this, but it makes sense from what I have seen in smaller picture houses. I think the loss of pixels on the width is worth it when you consider your picture will properly fill the entire 2.39:1 screen in the cinema, instead of just filling a letterboxed portion of the 1.85:1 (or 16:9) part of their screen.
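    A quick sanity check of the squeeze arithmetic (my own numbers, assuming a 2x projector lens; note that a 2x squeeze of a 2580-wide frame actually gives 1290 wide, and a full 1440-wide 4:3 frame would de-squeeze to wider than Scope):

```python
def projected_aspect(width, height, lens_stretch=2.0):
    """Aspect ratio on screen after a 2x anamorphic projector lens."""
    return (width * lens_stretch) / height

print(round(projected_aspect(1440, 1080), 2))  # 2.67 -- full 4:3 over-stretches
print(round(projected_aspect(1290, 1080), 2))  # 2.39 -- 2580/2 wide lands on Scope
```

    So if the picture house really does use a 2x lens, the squeezed nest should be delivered at 1290x1080 (with pillarboxing inside a 1440-wide 4:3 container if one is required) to come out at exactly 2.39:1.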

  14. @richg101   Did he send you a fake eBay wire transfer invoice?

     

    Nope, but he asked for my full address, username and telephone number in order to set it up. I questioned him, then 2 mins later the item was removed and relisted under a different eBay name. Now it's listed under another eBay seller with the exact same listing.

  15. terrykim

     

    Here is a link to calibrating for infinity

     

    http://www.metabones.com/smart-adapter-operation-manual/155-infinity-adjustment-speed-booster-only

     

    Any chance you could try adapters on the front of the Speed Booster, like a Nikon/Pentax adapter etc.? Anyone know if using adapters on the front of the Speed Booster causes any problems? Could you describe how well the autofocus works? Cheers.

     

     

     

    Since many people use flat 'disk' type adaptors with Canon cameras for OM, C/Y, M42, etc. lenses, there won't be any problem using them with this adaptor too. Just think of a NEX-7 + Speed Booster as a 5D Mk II: if it works with the 5D Mk II it'll work with the NEX + Speed Booster.

  16. I've been playing with a neat little feature called 'DRO'. What it seems to be doing is adjusting the sensor ISO depending on the brightness of each element within the frame. As far as I can understand, parts of the sensor run at the base ISO which you set, and other parts then get raised or lowered depending on how exposed each element of the subject is.

    So far I have tested by exposing for the sky (with the settings I used, ISO 100 took the image to just below clipping), and naturally the ground level will be underexposed. But it is clear that with DRO turned on the ground-level exposure is better (maybe by more than 1 stop), while the sky is still exposed the same as I originally set. If you go the other way and expose for the ground level (let's say ISO 400), the sky is clipped when DRO is turned off, but when you enable DRO the sky is brought back down to the exposure I saw when the ISO was originally set to 100.

    I have run a few tests, both straight out of the camera and post-processed, to see how the footage holds up. From my initial findings it seems DRO is adjusting sensor ISO for separate areas of the sensor depending on the brightness of each element in frame.

    Have a look and download the footage to get a better idea. I'd love to hear your opinions. Is this adjustment being done at ISO level, or as an in-camera post-processing technique in the same way picture profiles are achieved? The effect is clearly visible in real time during recording with no lag, but when using DRO in stills mode it seems the effect is applied after the image is taken.


    Straight out of camera:- http://www.vimeo.com/57541462

    Colour corrected:- http://www.vimeo.com/57558654
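    One way to picture the "different gain for different areas" hypothesis (a toy model of the behaviour described above, with made-up constants; not Sony's documented algorithm):

```python
def dro_zone_gains(zone_means, target=0.4, max_gain=2.0):
    """One multiplicative gain per zone: dark zones get boosted toward
    a target brightness, bright zones get pulled down, both capped."""
    gains = []
    for m in zone_means:
        g = target / m if m > 0 else max_gain
        gains.append(min(max(g, 1.0 / max_gain), max_gain))
    return gains

# bright sky zone, correctly exposed ground, deep shadow:
print(dro_zone_gains([0.8, 0.4, 0.1]))  # [0.5, 1.0, 2.0]
```

    A model like this reproduces both observations: the shadow zone comes up, and a near-clipped sky zone comes down, without touching a correctly exposed mid zone.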
  17. It's a scam. I just tried to buy the pair from him. He ain't interested in talking about legit Buy It Now options on eBay. He asked for my full address, eBay username and contact number, and kept talking about needing the details so he could set up the sale with eBay...

  18. Micro Four Thirds becoming Super 35mm is nothing to be sniffed at!

     

    How many of us were calling for a Blackmagic Cinema Camera 2 with Super 35mm sized chip?

    If anything I think a hacked GH2 or BMCC with the same field of view as S35 and 1 stop of extra ISO headroom before noise becomes a problem is more exciting than the benefits of an S35 chip becoming full frame.

     

    What with the EF version of the BMCC, I think it would be quite easy to remove the optical element of a dumb speed booster (at the point where infinity focus tweaks are done) and fit it into a small cone of machined aluminium which jams into the EF recess on the EF BMCC :)
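    The crop-factor arithmetic behind "Micro Four Thirds becoming Super 35mm" (standard published figures; 0.71x is the Speed Booster's stated reduction):

```python
M43_CROP = 2.0      # Micro Four Thirds crop factor vs full frame
S35_CROP = 1.5      # roughly, for Super 35 motion picture gates
REDUCTION = 0.71    # Speed Booster's stated focal reduction

effective = M43_CROP * REDUCTION
print(effective)                         # 1.42
print(abs(effective - S35_CROP) < 0.1)   # True -- close enough to call it S35
```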

  19. Of course. But basically everybody on EOSHD just comes here to rave or bitch about the newest hardware. Check the amount of actual posts in the Screening Room... ;-) I like technology. It won't make me a better film maker, but still I like it and it makes me enthusiastic.

     

    Anyway, I'll be posting something in the Screening Room soon, made without a Speed Booster...

     

     

    Not possible. Without being recorded through a Speed Booster, all editing software will crash during export from here on in.
