Everything posted by see ya

  1. Yep, but they also have the aperture ring right at the back against the camera body, making access difficult compared to numerous other sharp old manual primes like Oly Zuikos, Helios 44-2, Meyer Optik or Pentacon, where the aperture ring is at the front of the lens. But you know that already. And as you know, the latter three have 'Preset' versions with smooth aperture ring adjustment and no added cost of declicking, making it easy to adjust the aperture continuously rather than in steps, with nothing more than a fingertip, at the same time as focusing manually. Just saying, addressing the balance, not interested in any "my lens is sharper than yours".
  2. Yes, very helpful friendly members there, I've frequented the site for many years. Here's an SMC Tak breakdown, scroll down below the Canon FD bit: http://k10dpentax.blogspot.co.uk/search/label/Repair And a short list ;-) of links to repair breakdowns and such: http://www.4photos.de/camera-diy/index-en.html Have you also tried pentaxforums? http://www.pentaxforums.com/forums/54-pentax-lens-articles/179912-pentax-k-28mm-f3-5-disassembly-cleaning.html The biggest contention I've found over the years has been what sort of lubrication to use for smooth focus, and I settled on this, which has worked well: http://www.ebay.com/itm/HELIMAX-XP-Camera-Telescope-Optical-Instrument-Focusing-Helicoid-Grease-w-PTFE-/271052175856?clk_rvr_id=631740458361&afsrc=1
  3. This site is excellent for info on manual lenses and has good material on care and repair, getting a basic toolkit together and all that. http://forum.mflenses.com/equipment-care-and-repairs-f6.html http://forum.mflenses.com/basic-techniques-to-repair-lenses-and-cameras-t32862.html
  4. You're right, and it wasn't necessary for me to suggest DVI was crap; I was thinking more consumer end, but even then dual-link I suppose is an option. And to correct myself again, it's actually HDMI from Nvidia video cards that outputs reduced range RGB, not DVI. DVI to HDMI and vice versa can raise problems though.
  5. Nope, that's nothing to do with the underlying problem of HDMI & DVI output ranges from the GPU being dependent on the attached display. That BS in the Nvidia panel just scales video output levels, and does it badly, to compensate for an incorrectly set up signal chain. It's not the solution to the problem.
  6. For the GH4 the range is 16 - 255 luma. ffmpeg gives me steady luma up to 255 and I believe it; QT Player doesn't, it clips to 235 when working at 8bit. Resolve, using the QT SDK I presume, previews it clipped as it should, but the data above can be pulled down; same with Premiere CS6. For format and codec it's not so much what can be encoded into it, it's how it's interpreted by whatever 'make' of decompressing codec is used to output it. Is that what you mean?
  7. Depends on how your G6 native files were encoded in camera; once you know that, you'll know whether 709 & full range is correct or not.
  8. Ok, I found some native GH4 MOVs and they're not flagged full range, ie: JPEG levels; they are, like Sony NEX etc, 16 - 255 YCbCr. The 20 levels above 235 are superwhites, so it's up to the individual whether to expose for 235 and let QT or whatever clip above that, or expose to the right, capture into the 255 and pull down in post (a small sketch of that pull-down is at the end of this list). The black thing on Vimeo is not to do with Vimeo but with the limited range RGB signal chain from nvidia vid cards (as mentioned) plus the effects of HW acceleration, both only when DVI is used; no problem over hdmi or SDI. DVI is crap anyway, 8bit RGB. And maybe a wrong black level setting on the monitor if it's not auto sensed. All of which scale and rescale grey levels through the signal chain, inducing increased banding. Those using a decent or even reference monitor, or who simply want a better representation of the signal for grading and preview, will be going 8/10bit hdmi YCbCr or better still SDI, both from something like a Blackmagic Mini Monitor, using a 3D monitor LUT from a calibration package, and avoiding the issue every which way. DVI's fine for the NLE / grading app interface where it doesn't matter, but not for playback / review / grading of the image, no good. As sunyata explains, 'legal' levels is an 8bit thing, related more to analog delivery & viewing, which is why you can pull the levels down in a 32bit workspace if you wish, or let the decompressing codec / player clip for you at the end of the signal chain. What's important is knowing whether the clip is fullrange JPEG or normal range with supers, and whether it was graded using a correct signal chain. QT clips to 235 when using 8bit and so do some transcoders, which is why native files get asked for, not transcodes. Otherwise it's not unknown for people to download transcoded, clipped files and talk BS about no highlight roll-off. :-)
  9. Where? The Driftwood stuff has been transcoded and is not native, titles added, fades at the end? Does the GH4 use a MOV container or mp4? Can you provide a link to a native file straight off the camera, would you mind? Something that is clearly exposed to the right and clipping.
  10. Geez, how long have we been discussing pulling down superwhites in a 32bit float workspace? It really isn't anything new, sorry but.... Any Sony cam like a NEX etc is 16 - 255, long discussed. But the question is whether the GH4 shoots 16 - 255 in a MOV or is flagged fullrange and so should be interpreted as 16 - 235; if someone would actually post a link to a freakin native file rather than some transcode it could be established (a quick way to check the flag is sketched at the end of this list). The GH3 h264 in a MOV was fullrange and flagged so; the GH3 AVCHD was 16 - 235 and wasn't.
  11. Not that registration distance has been a problem for Canon in choice of lens, even with the mirror and its greater registration distance; perhaps Canon had the forethought to provide a decent distance to accommodate pretty much any lens mount via a lensless adaptor where Nikon failed to, and consequently choice of lens for Nikon without more expensive mount modifications is limited. Adapting a lens mount for Canon DSLRs is just a cheap bit of alu to make the distance up. As an aside, and reading the talk of off colors with Nikons, QT doesn't interpret Nikon h264 correctly, same with the Canon Rebel line, as neither Canon Rebels nor Nikon DSLRs appear to use rec709 luma coefficients, so reds go to orange and blues go to green in any preview via QT, including Resolve's (a small sketch of what a coefficient mismatch does is at the end of this list).
  12. My comments have nothing to do with dissing 4k to 2k, cheaper 4k cameras, compelling reasons to acquire 4k or anything else you care to mention. My comments relate to the dubious suggestion that "true 4:4:4 RGB" is created from a 4:2:0 h264 source, however easy you think the maths. If you don't mind me asking, what's your understanding of what "true 4:4:4 RGB" is compared to other methods of constructing RGB frames from 4:2:0 h264?
  13. I did read the previous articles, my comments are all there to see. What I find misleading, and this is not meant as heavy criticism, just addressing the balance, and of course this is your site, you tell it how you want, is the "read it here first, exclusive" angle; it's an old adage and it sells newspapers, but the forum is also here to provoke discussion rather than merely ass kiss. The articles center on the GH4, fine; talking it up may help prevent it becoming marginalized in the slew of other first-round rec709-shooting 4K cameras from other manufacturers. But the questionable process of 4:2:0 to 4:4:4, 8 to 10bit, can be done on any camera source at any reasonable resolution, although there's little point in 1080p downsampling. The Canon 1DC 4k? The Nikon V1 4k?
      The articles suggest 4:4:4; it wasn't until David Newman clarified that he considered it RGB 4:4:4, not YCC 4:4:4, big difference, and to many it read as YCC. Big news, 4:4:4 YCC? Well no, not really. And 4:4:4 RGB is a particular description that suggests natively captured full-sample RGB, not RGB built from a compressed 4:2:0 source via some interpolation and downscaling scheme, even with the theoretical maths. Then 10bit output: well actually no, 10bit in the luma only, and again it's dubious how far that benefit extends. As for comments that it grades better, it's no surprise that scaling down and interpolating values makes it 'better' for grading, that's been common knowledge for a long time. In fact taking an 8bit 4:2:0 frame and converting to RGB using bicubic or Lanczos interpolation rather than a point resize provides cleaner edges and interpolated values that mush the image up a bit and therefore appear to grade better than a 4:2:0 YCC frame; simply applying a tiny blur to a 4:2:0 frame interpolates the pixels and it stands a bit more grading; denoising a 4:2:0 frame at 32bit, say, will give interpolated values and make it appear to grade better. BUT the bottom line is that all of that can be done in the NLE or grading app, not in some preprocess transcode eating mass storage, and 'better' for grading what compared to what? Does the image actually look any better compared to, say, a 32bit workflow? Just how much of the benefit here is from the 10bit aspect, or even the pseudo 4:4:4, and how much is from the scaling down and interpolation of values in the conversion to RGB? Everyone's own tests will decide that for themselves.
      What's getting shouted about is the 10bit and the 4:4:4, like it's gold-standard output from the GH4, albeit at lower resolution, but as emphasized, now 2k, that's still serious, talking the camera up in the process like it's something special other 4k cameras can't do. On the new discovery, I feel it's misleading to suggest that this new-found process of alchemy happened here via this site; yes, Thomas has provided the app as a first, but it's not too far stretched to consider that the process has been done before, anyone using Matlab, Avisynth or even Nuke or Shake has probably done the math (a rough sketch of the downscale itself is at the end of this list). For 4K to 2K it's common to go 4:4:4 RGB from a 4K film scan, native from the scanner sensor, or from a 4:4:4 RGB camera like the Alexa, rather than YCbCr 4:2:0 compressed using h264, which not only throws away data by subsampling but also throws away data by compression; it's not 4:2:0 uncompressed, nor 4:2:0 with gentle compression. So this process works at any resolution, on any 4:2:0 or 4:2:2 camera source; not a new process, and RGB 4:4:4 interpolated, not native. Mileage will vary.
  14. Thanks for the clarification, many equate 4:4:4 to YCbCr not RGB, and the title of this thread and the previous ones mislead. Just to query: you say 8bit chroma planes, but they don't carry a full 8bits worth of levels (16 - 240), does this matter? And chroma is the red & blue difference once the luma is extracted from the original RGB value at any given point, then mixed back in, with interpolation, against averaged luma values in the scaling-down process and conversion to RGB, so is it even true full-sample 4:4:4 RGB? Isn't the whole point of using the bastardized term 4:4:4 RGB to differentiate RGB captured natively 'full sample' via a camera or film scanner, from something like a 3CCD sensor with one sensor per color channel, from RGB interpolated from say 4:2:0 or 4:2:2 such as described in this thread? And saying green is mostly from luma depends on white balance and even the temperature of the light captured as to which channel derives most luma; how does low light stand up in the conversion, particularly with a codec like h264, which generally throws away data from the low end of the luma range as part of the compression? It will be interesting to see if there's any real benefit in doing this over just 4:4:4 upsampling at 32bit linear in the NLE or grading app at the point it's actually needed, ie: grading, rather than filling hard drives with 10bit dpx's in advance, assuming that's how it'll be used; or cuts only, export 4:2:0 and then do the upsample before going to the grading process. Had you seen the 8 to 10/16bit process using a modified denoiser to extract 8bits worth of LSB, keep the 8bit MSB and create 16bit per channel, then range convert to 10bit?
  15. Just not 10bit 4:4:4 YCbCr as suggested now and previously. It is a bit misleading; it's also misleading to suggest no one has ever done it before, anyone using Matlab, Avisynth or even, I'd guess, a nodal compositor like Nuke may well have done the necessary math on 1080P to 480P 4:4:4 or RGB in the past, who knows, who really cares. And it works on any camera, it doesn't have to be the GH4.
  16. Yeah, I understand the difference in handling between QT & FFmpeg/libav, it's why I asked which you were using. When I decompress a native GH4 h264 file I see full range levels, well above 235; as it's a CineD lifted-blacks type of profile used, shadows didn't go below 16. So then it's a case of whether the mov container is flagged 'fullrange' in the VUI options, as QT decompressing the same native file shows no full range levels. In fact I've never been able to get 4:2:0 planar out of QT, it's always been 4:2:2 interleaved, but I'm sure you could clarify that; anyway, I'm no programmer. Taking the fullrange flag off the source appears to show full levels, therefore if it's flagged like the GH3 movs then rather than a typical broadcast file we have a JPEG-levels encoded file, luma and chroma normalised over the full range like a jpg image. But anything out of QT will have had those JPEG levels scaled into broadcast range 709, it would appear. I was just wondering whether, with all the talk of 8bit 4:2:0, the fact that it doesn't actually use a full 8bits worth of levels range has any impact on calculations to 10bit that assume 8bit? QT only clips full range levels if they are not flagged fullrange; otherwise it scales levels into broadcast range, and at the moment it would appear to screw up chroma in the highs, introducing a lot of noise in a very small range of higher levels. Can't remember it ever doing that before. Had you noticed anything? Best to check with a full and a limited range encoding of the same file, adjusting the luma coeffs calculation to suit and using QT as the decompressor.
  17. I mentioned this in another thread; perhaps someone could clarify, given that 8bit 4:2:0 doesn't use the full 8bit levels range unless it's JFIF standard, whether these calculations should take that into account? Also the GH4 keeps getting talked up regarding the conversion to 10bit, but to clarify, surely any 4k cam output could do the same in theory. Not sure whether Thomas is using QT or FFmpeg for decompression, but these GH4 movs do appear to be JFIF and flagged full range, so QT appears to immediately rescale levels to 16-235 at decompression and clip any full range sources that aren't flagged? Certainly looks that way from Driftwood's GH4 native source and the bugger's muddle of a full range encode on Vimeo.
  18. Amen to original native camera files. Transcoding from a new camera I don't understand personally, unless it's for playback on lower powered hardware, nor why 10bit when editing & grading apps will do the 4:4:4 BS when necessary; when transcoding, important metadata can be lost, levels scaled and artifacts introduced through cheap chroma interpolation, which is what 5DToRGB was all about I think, mitigating crap interpolation at the time, but is it still necessary now? Only if on old outdated apps. Bad playback, such as Prores on Windows, is shit whether via 32bit QT or ffmpeg using the wrong color matrix on occasions; I think DNxHD would have been a better choice for Windows users. But gift horse and mouth..... Maybe a modified denoiser to give a 'cleaner' 8bit per channel in the MSB and plough the noise, or a certain amount of it, back in as the 8bit LSB, giving 16bit per channel, then range convert with 'organic' dither to 10bit etc. There are open source GPU assisted denoisers available. http://forum.doom9.org/showthread.php?p=1386559 Probably better than 4:2:0 to Cineform 10bit RGB.
  19. That should be useful info to the OP. Yes, my mention of compression on CPU was specific to BM's RED raw decoding methods in Resolve 10.1.3, released a couple of days ago. Yes, there are some good performance stats for upgrading a previous Mac Pro versus the Trashcan. But for heavy Resolve work it would be in conjunction with a Cubix or similar, maybe a Titan or two internally otherwise, with the internal 770 / 780 GPU for Resolve probably set as GUI only, maybe 'Compute', and an UltraStudio for SDI / 10bit hdmi out. Checking out the GPU comparison site you link to, it's interesting to see the GTX770 outperform the GTX780. :-)
  20. sigh, the MK III SD controller throttles to 20MB/s doesn't it; it's the CF that'll do 100MB/s? And the controller is the limiting factor on all other Canons, not the memory card. sigh.
  21. When I mentioned low end, these days that really means low end GPUs rather than the processor, so floating point ops via GLSL shaders I can see as being achievable on lower spec GPUs, but for non-NVidia cards or low core count NVidia cards I can see that the processor would outperform the GPU, and then it's debatable whether 32bit float would be comparable to 8bit processing. For non-NVidia you're relying on the extent of OpenCL support in an app. For CUDA related processing a GTX770 is entry level, and at 4K resolution a 4GB VRAM version, which I think is as high end as any Mac Pro can take? Not sure about iMacs. Then it's all OpenCL for Mac anyway.
  22. 32bit float is preferred regardless, but it can slow down playback and render / encode times on lower powered systems; test it and see is probably the best approach. Certain codecs that store 'full range', or at least luma in the 16 - 255 range, such as Sony cameras, would probably benefit from 32bit operations as regards clipping in 8bit, because values above 235 luma won't fit using the typical YCbCr to RGB conversion, ie: they get clipped, unless the NLE specifically handles those files in a different way.
  23. 32bit floating point is higher precision color processing than 8bit, float versus integer precision, and 32bit processing is also usually done in the linear domain rather than on gamma-applied image data. At 32bit float there should be no loss of data from clipping: image data values can be negative or greater than 1.0, although you won't see that on your monitor and it will look like clipping is happening on your scopes, but as you grade you'll see the data come into and out of scope range, whereas 8bit processing will clip below 0 and above 1, ie: 0 to 255 in 8bit terms.
      Full versus video levels. Whether the image is encoded in camera with an RGB to YCbCr conversion that derives the YCbCr values (luma & chroma difference) over limited range or full range, your aim is to do the correct reverse for RGB preview on your monitor (a small sketch of the two interpretations is at the end of this list). You can monitor / preview & work with either limited or full as long as you are aware of what your monitor expects, it's calibrated accordingly and you feed it the right range. If you're unsure, then video levels. Video export 'should' be limited range, certainly for final delivery; full range only if you're sure of correct handling further along the chain, for example to grade in BM Resolve you can set a 'video' or 'data' interpretation of the source.
      Your 1DC motion jpegs are full range YCbCr, but as the chroma is normalized over the full 8bit range along with the luma (JPEG/JFIF) it's kind of equivalent to limited range YCbCr, and the MOV container is flagged full range anyway, so as soon as you import it into an NLE it will be scaled into limited range video levels YCbCr. Canon DSLR, Nikon DSLR and GH3 MOVs are all h264 JPEG/JFIF, flagged 'full range' in the container and interpreted as limited range in the NLE etc. What you want to avoid is scaling levels back and forth through the chain from graphics card to monitor, including ICC profiles and OS related color management screwing with it on the way as well. You may also have to contend with limited versus full range RGB levels depending on the interface you're using from your graphics card, DVI versus hdmi for example, NVidia feeding limited range RGB over DVI and full over hdmi.
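
A rough illustration of the superwhites point from post 8: a minimal numpy sketch, not anything posted in the thread, assuming plain BT.709 limited-range maths and made-up luma values. It just shows why an 8bit pipeline clips everything above 235 while a 32bit float workspace keeps it and lets you pull it back down.

    import numpy as np

    # Made-up 8bit luma samples: legal black, mid grey, legal white, two superwhites
    luma_8bit = np.array([16, 128, 235, 250, 255], dtype=np.float32)

    # Limited-range BT.709 decode to a normalised grey: R'G'B' = (Y' - 16) / 219
    rgb_float = (luma_8bit - 16.0) / 219.0       # 235 -> 1.0, 255 -> approx 1.09 (kept in float)

    # An 8bit integer pipeline clips anything above 1.0 on the spot
    rgb_uint8 = np.clip(np.round(rgb_float * 255.0), 0, 255).astype(np.uint8)

    # In a 32bit float workspace a simple gain pulls the superwhites back into range
    rgb_pulled = rgb_float * (219.0 / 239.0)     # rescales so 255 luma lands on 1.0

    print(rgb_float)    # approx [0.     0.511  1.     1.068  1.091]
    print(rgb_uint8)    # [  0 130 255 255 255]  -> 250 and 255 luma become indistinguishable
    print(rgb_pulled)   # approx [0.     0.469  0.916  0.979  1.   ]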
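
On establishing whether a native file is flagged full range (posts 10 and 17), here is one possible check using ffprobe from a small Python script. The filename is a placeholder and the exact fields reported will depend on the file and the ffprobe build; this is an illustration of the check, not a tool anyone in the thread posted.

    import json
    import subprocess

    def probe_levels(path):
        """Report pixel format and color range tags of the first video stream via ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=pix_fmt,color_range,color_space,color_transfer",
             "-of", "json", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)["streams"][0]

    # Placeholder filename: a clip flagged full range ("JPEG levels") would typically
    # report something like pix_fmt 'yuvj420p' and color_range 'pc'; a limited range
    # clip 'yuv420p' and 'tv' (or no range tag at all).
    print(probe_levels("P1000123.MOV"))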
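
The reds-to-orange point in post 11 comes from a decoder applying the wrong luma coefficients. A hedged sketch of that mismatch, using hand-rolled conversion functions on normalised full-range values rather than any real decoder path: encode a pure red with BT.601 coefficients, decode it assuming BT.709, and the color drifts.

    import numpy as np

    def rgb_to_ycbcr(rgb, kr, kb):
        # Forward conversion from normalised R'G'B' using the given luma coefficients
        r, g, b = rgb
        y = kr * r + (1.0 - kr - kb) * g + kb * b
        cb = (b - y) / (2.0 * (1.0 - kb))
        cr = (r - y) / (2.0 * (1.0 - kr))
        return np.array([y, cb, cr])

    def ycbcr_to_rgb(ycbcr, kr, kb):
        # Inverse conversion with (possibly different) luma coefficients
        y, cb, cr = ycbcr
        r = y + 2.0 * (1.0 - kr) * cr
        b = y + 2.0 * (1.0 - kb) * cb
        g = (y - kr * r - kb * b) / (1.0 - kr - kb)
        return np.array([r, g, b])

    BT601 = (0.299, 0.114)      # (Kr, Kb) pairs
    BT709 = (0.2126, 0.0722)

    red = np.array([1.0, 0.0, 0.0])
    encoded_601 = rgb_to_ycbcr(red, *BT601)           # what the camera wrote
    decoded_709 = ycbcr_to_rgb(encoded_601, *BT709)   # what a 709-assuming decoder shows

    print(decoded_709)   # approx [1.09  0.10  -0.01] -> pure red comes back with a green tint, ie: towards orange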
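
For the 4K 4:2:0 to 2K "RGB 4:4:4" process argued over in posts 12-15, a minimal numpy sketch of the idea as described in the thread: 2x2 average the luma, reuse the already half-resolution chroma one sample per output pixel, convert to RGB. The frame data here is synthetic, and the exact filtering any real tool uses is an assumption; this only shows the structure of the downscale.

    import numpy as np

    H, W = 2160, 3840    # UHD frame size as a stand-in; planes here are synthetic random data
    Y  = np.random.randint(16, 256, (H, W)).astype(np.float32)            # full resolution luma
    Cb = np.random.randint(16, 241, (H // 2, W // 2)).astype(np.float32)  # 4:2:0 chroma, half res
    Cr = np.random.randint(16, 241, (H // 2, W // 2)).astype(np.float32)

    # 2x2 box-average the luma down to 2K. The mean of four 8bit samples takes more distinct
    # values than a single 8bit sample, which is where the claimed extra luma precision comes from.
    Y2k = Y.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

    # At 2K the 4:2:0 chroma planes already line up one sample per pixel, so convert to RGB
    # with limited-range BT.709 maths, no chroma interpolation needed at this size.
    y  = (Y2k - 16.0) / 219.0
    cb = (Cb - 128.0) / 224.0
    cr = (Cr - 128.0) / 224.0
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152

    rgb_2k = np.stack([r, g, b], axis=-1)
    print(rgb_2k.shape)   # (1080, 1920, 3): every output pixel carries its own chroma sample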
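
And on the full versus limited range interpretation in post 23, a tiny sketch of the two decodes applied to the same 8bit luma values; the numbers are illustrative only.

    import numpy as np

    luma = np.array([16.0, 128.0, 235.0])    # nominal black, mid grey, nominal white

    limited = (luma - 16.0) / 219.0          # correct inverse for a 16-235 encode
    full    = luma / 255.0                   # correct inverse for a 0-255 (JPEG levels) encode

    print(limited)   # approx [0.     0.511  1.   ]
    print(full)      # approx [0.063  0.502  0.922] -> decode a limited encode as full range
                     #        and black is lifted while white never reaches 1.0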