see ya

Members
  • Posts: 215
  • Joined

  • Last visited

About see ya

Profile Information

  • Gender: Not Telling

see ya's Achievements

Active member (3/5)

Reputation: 28

  1. Yep, but they also have the aperture ring right at the back against the camera body, making access difficult compared to numerous other sharp old manual primes like Oly Zuikos, the Helios 44-2, or Meyer Optik and Pentacon lenses, where the aperture ring is at the front of the lens. But you know that already. And as you know, the latter three have 'preset' versions with smooth aperture ring adjustment and none of the cost of declicking, making it easy to adjust the aperture steplessly, with nothing more than a fingertip, while focusing manually. Just saying, redressing the balance; not interested in any "my lens is sharper than yours".
  2. Yes, very helpful, friendly members there; I've frequented the site for many years. Here's an SMC Tak breakdown, scroll down below the Canon FD bit: http://k10dpentax.blogspot.co.uk/search/label/Repair And a short list ;-) of links to repair breakdowns and such: http://www.4photos.de/camera-diy/index-en.html Have you also tried pentaxforums? http://www.pentaxforums.com/forums/54-pentax-lens-articles/179912-pentax-k-28mm-f3-5-disassembly-cleaning.html The biggest contention I've found over the years has been what sort of lubrication to use for smooth focus, and I settled on this, which has worked well: http://www.ebay.com/itm/HELIMAX-XP-Camera-Telescope-Optical-Instrument-Focusing-Helicoid-Grease-w-PTFE-/271052175856?clk_rvr_id=631740458361&afsrc=1
  3. This site is excellent for info on manual lenses, with good material on care and repair, getting a basic toolkit together and all that. http://forum.mflenses.com/equipment-care-and-repairs-f6.html http://forum.mflenses.com/basic-techniques-to-repair-lenses-and-cameras-t32862.html
  4. You're right, and it wasn't necessary for me to suggest DVI was crap; I was thinking more of the consumer end, but even then dual-link is an option, I suppose. And to correct myself again: it's actually HDMI from Nvidia video cards that outputs reduced-range RGB, not DVI. DVI to HDMI and vice versa can raise problems though.
  5. Nope, that's nothing to do with the underlying problem of HDMI & DVI output ranges from the GPU depending on the attached display. That BS in the Nvidia panel just scales video output levels, and does it badly, to compensate for an incorrectly set up signal chain. It's not the solution to the problem.
  6. For the GH4 the range is 16 - 255 luma. ffmpeg gives me steady luma up to 255 and I believe it; QT Player doesn't, it clips to 235 when working at 8bit. Resolve, using the QT SDK I presume, previews clipped as it should, but the data above can be pulled down; same with Premiere CS6 (a quick way to probe a file's real range is sketched below). For format and codec it's not so much what can be encoded into it, it's how it's interpreted by whatever 'make' of decompressing codec is used to output it. Is that what you mean?
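
A minimal sketch of that kind of probe, assuming ffmpeg is on the path and the source is 8-bit; the frame size and filename are placeholders for whatever clip you're checking:

```python
# Probe the actual luma range of a clip rather than trusting the flags.
# extractplanes=y hands us the raw Y plane untouched, so the min/max
# code values show whether the camera really records above 235.
import subprocess
import numpy as np

W, H = 3840, 2160          # frame size of the clip being probed
FRAME = W * H

proc = subprocess.Popen(
    ["ffmpeg", "-v", "error", "-i", "native_gh4.mov",
     "-vf", "extractplanes=y", "-f", "rawvideo", "-"],
    stdout=subprocess.PIPE)

lo, hi = 255, 0
while True:
    raw = proc.stdout.read(FRAME)
    if len(raw) < FRAME:
        break
    y = np.frombuffer(raw, dtype=np.uint8)
    lo, hi = min(lo, int(y.min())), max(hi, int(y.max()))
proc.wait()

print(f"luma code values seen: {lo} - {hi}")  # 16 - 255 means superwhites present
```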
  7. Depends on how your G6 native files were encoded in camera; once you know that, you'll know whether 709 & full range is the correct interpretation or not.
  8. Ok, I found some native GH4 MOVs and they're not flagged full range (i.e. JPEG levels); they are, like Sony NEX etc, 16 - 255 YCbCr. The 20 levels above 235 are superwhites, so it's up to the individual whether to expose for 235 and let QT or whatever clip above that, or expose to the right, capture into the 255 and pull down in post (a quick sketch of the pull-down below).

     The black thing on Vimeo is not to do with Vimeo but with the limited range RGB signal chain from Nvidia video cards (as mentioned), plus effects of HW acceleration, but both only when DVI is used. No problem over HDMI or SDI. DVI is crap anyway, 8bit RGB. And maybe a wrong black level setting on the monitor if it's not auto sensed. All of which scale and rescale grey levels through the signal chain, inducing increased banding. Those using a decent or even reference monitor, or who simply want a better representation of the signal for grading and preview, will be going 8/10bit HDMI YCbCr, or better still SDI, both from something like a Blackmagic Mini Monitor, using a 3D monitor LUT from a calibration package, and avoiding the issue every which way. DVI's fine for the NLE / grading app interface, where it doesn't matter, but not for playback / review / grading of the image, no good.

     As sunyata explains, 'legal' levels is an 8bit thing, related more to analog delivery & viewing, which is why you can pull the levels down in a 32bit workspace if you wish, or let the decompressing codec / player clip for you at the end of the signal chain. What's important is knowing whether the clip is full range JPEG levels or normal range with supers, and whether it was graded using a correct signal chain. QT clips to 235 when using 8bit and so do some transcoders, which is why native files get asked for, not transcodes. Otherwise it's not unknown for people to download transcoded, clipped files and talk BS about no highlight roll. :-)
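
A minimal sketch of that pull-down in float, assuming an 8-bit luma plane already loaded as a numpy array (the array here is a random stand-in); the idea is to remap 16-255 linearly into 16-235 before anything downstream clips at 235:

```python
# Pull superwhites (236-255) down into legal range, rather than letting
# the player or codec clip them. Maps Y 16-255 onto 16-235 linearly;
# 'y8' is a placeholder for an 8-bit luma plane decoded elsewhere.
import numpy as np

y8 = np.random.randint(16, 256, (2160, 3840), dtype=np.uint8)  # stand-in frame

y = y8.astype(np.float32)
y_legal = 16.0 + (y - 16.0) * (235.0 - 16.0) / (255.0 - 16.0)

y_out = np.clip(np.rint(y_legal), 16, 235).astype(np.uint8)
print(y_out.max())  # 235: the highlight headroom is compressed, not clipped
```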
  9. Where? The Driftwood stuff has been transcoded and isn't native: titles added, fades at the end? Does the GH4 use a MOV container or MP4? Would you mind providing a link to a native file straight off the camera, something that is clearly exposed to the right and clipping?
  10. Geez, how long have we been discussing pulling down superwhites in a 32bit float workspace? It really isn't anything new, sorry. Any Sony cam like a NEX etc is 16 - 255, long discussed. But the question is whether the GH4 shoots 16 - 255 in a MOV, or whether it's flagged full range and so should be interpreted as 16 - 235. If someone would actually post a link to a freakin' native file rather than some transcode, it could be established (a quick way to check the flag below). The GH3 h264 in a MOV was full range and flagged so; the GH3 AVCHD was 16 - 235 and wasn't.
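
One quick way to check how a file is flagged, as a sketch assuming ffprobe is installed (the filename is a placeholder): 'pc' means flagged full range (0-255), 'tv' means limited (16-235), and blank or 'unknown' means the camera wrote no flag.

```python
# Ask ffprobe how the first video stream is flagged: 'pc' = full range,
# 'tv' = limited range, blank/'unknown' = no flag written by the camera.
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=color_range",
     "-of", "default=noprint_wrappers=1:nokey=1", "native_gh4.mov"],
    capture_output=True, text=True)

print(out.stdout.strip() or "unflagged")
```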
  11. Not that registration distance has been a problem for Canon in choice of lens, even with the mirror: perhaps Canon had the forethought to keep the distance short enough to accommodate pretty much any lens mount of greater registration via a lensless adaptor, where Nikon failed to; consequently, choice of lens for Nikon without more expensive mount modifications is limited. Adapting a lens mount for Canon DSLRs is just a cheap bit of alu to make the distance up. As an aside, and reading the talk of off colors with Nikons: QT doesn't interpret Nikon h264 correctly, same with the Canon Rebel line, as neither Canon Rebels nor Nikon DSLRs appear to use Rec.709 luma coefficients, so reds go to orange and blues go to green in any preview via QT, including Resolve's (a sketch of the coefficient mismatch below).
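
A rough illustration of that coefficient mismatch, assuming the camera encoded with Rec.601 luma coefficients and the player decodes as Rec.709; a pure red is used as the test value:

```python
# Show how decoding Rec.601-encoded YCbCr with Rec.709 coefficients
# shifts hues: encode a pure red with 601, decode it with 709.
import numpy as np

def rgb_to_ycbcr(rgb, kr, kb):
    kg = 1.0 - kr - kb
    y = kr * rgb[0] + kg * rgb[1] + kb * rgb[2]
    cb = (rgb[2] - y) / (2.0 * (1.0 - kb))
    cr = (rgb[0] - y) / (2.0 * (1.0 - kr))
    return np.array([y, cb, cr])

def ycbcr_to_rgb(ycc, kr, kb):
    kg = 1.0 - kr - kb
    y, cb, cr = ycc
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return np.array([r, g, b])

red = np.array([1.0, 0.0, 0.0])
ycc_601 = rgb_to_ycbcr(red, kr=0.299, kb=0.114)      # what the camera wrote
wrong = ycbcr_to_rgb(ycc_601, kr=0.2126, kb=0.0722)  # decoded as Rec.709
print(np.round(wrong, 3))  # green channel rises, so pure red drifts toward orange
```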
  12. My comments have nothing to do with dissing 4k to 2k, cheaper 4k cameras, compelling reasons to acquire 4k, or anything else you care to mention. My comments relate to the dubious suggestion that "true 4:4:4 RGB" is created from a 4:2:0 h264 source, however easy you think the maths. If you don't mind me asking, what's your understanding of what "true 4:4:4 RGB" is, compared to other methods of constructing RGB frames from 4:2:0 h264?
  13. I did read the previous articles, my comments are all there to see. This is not meant as heavy criticism, just redressing the balance; of course this is your site, you tell it how you want, and "read it here first, exclusive" is an old adage that sells newspapers, but isn't the forum here to provoke discussion rather than merely ass kiss? The articles center on the GH4, fine; talking it up may help prevent it becoming marginalized in the slew of other first-round rec709-shooting 4K cameras from other manufacturers. But the questionable process of 4:2:0 to 4:4:4, 8 to 10bit, can be done on any camera source at any reasonable resolution (although there's little point in downsampling 1080p). The Canon 1DC 4k, the Nikon V1 4k??

      What I find misleading: the articles suggest 4:4:4, and it wasn't until David Newman clarified that he meant RGB 4:4:4, not YCC 4:4:4. Big difference; to many it read as YCC, and big news, 4:4:4 YCC? Well no, not really. And 4:4:4 RGB is a particular description suggesting natively captured full-sample RGB, not RGB built from compressed 4:2:0 via some interpolation and scaling-down scheme, even with the theoretical maths. Then 10bit output: well, actually no, 10bit in the luma only, and again dubious as to the extent of the benefit. As for comments that it grades better, it's no surprise that scaling down and interpolating values makes footage 'better' for grading; that's been common knowledge for a long time. In fact, taking an 8bit 4:2:0 frame and converting to RGB using bicubic or Lanczos interpolation rather than a point resize gives cleaner edges and interpolated values that mush the image up a bit, so it appears to grade better than a 4:2:0 YCC frame. Simply applying a tiny blur to a 4:2:0 frame interpolates the pixels and it stands a bit more grading; denoising a 4:2:0 frame at 32bit, say, will give interpolated values and make it appear to grade better.

      BUT, bottom line, all of that can be done in the NLE or grading app, not in some preprocess transcode eating mass storage. And 'better' for grading what, compared to what? Does the image actually look any better compared to, say, a 32bit workflow? How much of the benefit is from the 10bit aspect, or even the pseudo 4:4:4, and how much is from the scaling down and interpolation of values in the conversion to RGB? Everyone's own tests will decide that for themselves. What's getting shouted about is the 10bit and the 4:4:4, like it's gold-standard output from the GH4, albeit at lower resolution, but as emphasized, now 2k, that's still serious; talking the camera up in the process like it's something special other 4k cameras can't do.

      On the new discovery, I feel it's misleading to suggest that this new-found process of alchemy happened here via this site. Yes, Thomas has provided the app as a first, but it's not too far stretched to consider that the process has been done before; anyone using Matlab, Avisynth, or even Nuke or Shake has probably done the math (a sketch of the basic idea below). For 4K to 2K it's common to start from 4:4:4 RGB: a 4K film scan native from the scanner sensor, or a 4:4:4 RGB camera like the Alexa, rather than YCbCr 4:2:0 compressed using h264, which not only throws data away by subsampling but also throws it away by compression; it's not 4:2:0 uncompressed, nor 4:2:0 with gentle compression. So this process works at any resolution, on any 4:2:0 or 4:2:2 camera source. Not a new process: RGB 4:4:4 interpolated, not native. Mileage will vary.
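
For what it's worth, here's a minimal sketch of the basic math being discussed, assuming an 8-bit 4:2:0 frame already split into planes (shapes and names are random stand-ins): averaging each 2x2 block of luma yields one 2K luma sample with two extra bits of precision, and the 4:2:0 chroma planes are already at 2K resolution, so they line up 1:1.

```python
# 4K 8-bit 4:2:0 -> 2K "4:4:4" with 10-bit luma, the basic idea only.
# Summing each 2x2 luma block gives one 2K sample spanning 10 bits
# (4 x 255 = 1020), which is where the extra luma precision comes from.
# Chroma in 4:2:0 is already half-res, i.e. exactly 2K, so it pairs
# with the downscaled luma sample-for-sample.
import numpy as np

y4k = np.random.randint(16, 256, (2160, 3840), dtype=np.uint8)  # stand-in planes
cb = np.random.randint(16, 241, (1080, 1920), dtype=np.uint8)
cr = np.random.randint(16, 241, (1080, 1920), dtype=np.uint8)

# Sum each 2x2 block: four 8-bit samples -> one 10-bit value (0-1020).
y2k_10 = (y4k.astype(np.uint16)
              .reshape(1080, 2, 1920, 2)
              .sum(axis=(1, 3)))

print(y2k_10.shape, cb.shape)  # (1080, 1920) (1080, 1920): 1:1, hence "4:4:4"
```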
  14. Thanks for the clarification; many equate 4:4:4 with YCbCr, not RGB, and the title of this thread and the previous one mislead. Just to query: you say 8bit chroma planes, but they don't carry a full 8 bits' worth of levels (16 - 240), does this matter (range expansion sketched below)? And chroma is the difference in terms of red & blue once the luma is extracted from the original RGB value at any given point, then mixed back in, with interpolation, against averaged luma values in the scaling-down and conversion to RGB, so is it even true full-sample 4:4:4 RGB? Isn't the whole point of using the bastardized term 4:4:4 RGB to differentiate RGB captured natively, 'full sample', via a camera or film scanner, say from a 3CCD sensor with one sensor per color channel, from interpolated RGB built out of 4:2:0 or 4:2:2 as described in this thread?

      And to say green is mostly from luma: that depends on white balance, and even on the temperature of the light captured, as to which channel derives most luma. How does low light stand up in conversion, particularly with a codec like h264, which generally throws away data from the low end of the luma range as part of the compression? It would be interesting to see whether there's any real benefit in doing this over just upsampling to 4:4:4 in 32bit linear in the NLE or grading app at the point it's actually needed, i.e. grading, rather than filling hard drives with 10bit dpx's in advance, assuming that's how it'll be used; or cut only, export 4:2:0, and do the upsample before going to the grading process. Had you seen the 8 to 10/16bit process using a modified denoiser to extract 8 bits' worth of LSB, keep the 8bit MSB and create 16bit per channel, then range-convert to 10bit?
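
On that 16 - 240 point, a minimal sketch of the usual range expansion done in float before the RGB matrix, with stand-in plane values: luma nominally spans 16-235 (219 steps) and chroma 16-240 (224 steps, centered at 128), so neither uses all 256 codes of an 8-bit plane.

```python
# Expand video-range planes to normalized float before converting to RGB:
# Y' maps 16-235 onto 0..1, Cb/Cr map 16-240 onto -0.5..0.5 around 128.
import numpy as np

def normalize_video_range(y8, cb8, cr8):
    """Return float Y' in 0..1 and Cb/Cr in -0.5..0.5 from 8-bit planes."""
    y = (y8.astype(np.float32) - 16.0) / 219.0
    cb = (cb8.astype(np.float32) - 128.0) / 224.0
    cr = (cr8.astype(np.float32) - 128.0) / 224.0
    return y, cb, cr

y, cb, cr = normalize_video_range(
    np.full((4, 4), 235, np.uint8),   # nominal white
    np.full((4, 4), 128, np.uint8),   # neutral chroma
    np.full((4, 4), 128, np.uint8))
print(y[0, 0], cb[0, 0])  # 1.0 0.0
```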
  15. Just not 10bit 4:4:4 YCbCr as suggested now and previously. It is a bit misleading, and it's also misleading to suggest no one has ever done it before: anyone using Matlab, Avisynth, or even, I'd guess, a nodal compositor like Nuke may well have done the necessary math on 1080p to 480p 4:4:4 or RGB in the past; who knows, who really cares. And it works on any camera, it doesn't have to be the GH4.