Posts posted by tupp

  1. 12 hours ago, UHDjohn said:

    The reason why MF digital is 'better' has nothing to do with DOF or out of focus rendering and everything to do with CMOS vs CCD and the strength of the CFA's over the sensors.

    I disagree for two reasons:

    1.  The difference was apparent in the analog days on both 35mm and large format cameras when using film stocks that gave comparable resolution (color depth) from each format;

    2.  Gonzalo Ezcurra generally uses CMOS/Bayer cameras with his eCyclops and MiniCyclops ultra-large format DOF adapters.  As is apparent from this demo shot with the MiniCyclops and a 5D mkII, such a drastically large format (combined with an appropriate large format lens) yields a uniquely rich  DOF roll-off and an extraordinarily clean, flat focal plane, regardless of the type of camera sensor.

     

    So, a larger format with the appropriate optics does seem to make a difference that has nothing to do with the sensor type.

     

    If you can find a Super 16 lens and camera that gives the same performance as shown with the MiniCyclops, I would love to see it.  You can even use a Digital Bolex CCD camera and shoot in Super 16 mode.

     

  2. More details would be helpful, such as what type of interference you are experiencing (static, noise, a distorted signal?) and what bands/frequencies you have manually set.

     

    However, it sounds like your camera/lens combination might be generating weak, spurious EMI (electromagnetic interference) that your receiver is picking up due to proximity.  If so, changing frequencies/bands might help.  If the housing of your receiver is plastic, you might try wrapping the housing with metal foil as a test to see if it reduces the interference (leave a small space between the foil and base of the antenna).  By the way, RF means "radio frequency" -- not necessarily "interference."

     

    It appears that others have had problems with the EW100 G3.  If nothing else works, you might try using a spectrum analyzer to see if the camera rig (or some other source) is generating EMI.

  3. Stacking filters is no problem.

     

     

    I generally put my polarizers up front for convenience (as I change out a polarizer most often) and to avoid interaction with the other filters.  A variable ND essentially is two polarizers.

     

    Can't recommend any adapters or matte boxes, but it sounds like you are using screw-in filters, at least for the variable ND (77mm).  There is nothing optically disadvantageous about screw-in filters, except that filter changes take longer, and you usually have to unscrew and reattach the filters with each lens change.  Also, if you stack too many screw-in filters, vignetting can start to appear.
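    As a rough illustration of the two-polarizer point above, a variable ND's attenuation follows Malus's law.  Here is a minimal sketch (the function name is mine; it ignores the fixed base loss of the polarizing material itself, typically a stop or more):

```python
import math

# Sketch: a variable ND is essentially two stacked linear polarizers, so its
# attenuation follows Malus's law: T = cos^2(theta), where theta is the
# relative rotation between the two polarizers.  The fixed base loss of the
# polarizing material itself is ignored here.
def nd_stops(theta_deg):
    transmission = math.cos(math.radians(theta_deg)) ** 2
    return -math.log2(transmission)  # attenuation in stops

for theta in (0, 30, 45, 60, 75):
    print(f"{theta:2d} deg -> {nd_stops(theta):.2f} stops of ND")
```

    At a 60-degree relative rotation, transmission is 0.25, i.e. two stops; the usable range ends well before 90 degrees, where transmission falls toward zero (and real variable NDs start showing cross-polarization artifacts).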

     

     

  4. 4 hours ago, Greg Block said:

    Long time lurker here. Joined up to show you the file I get when I hook up the gx85 to my Blackmagic Video Assist.

    Thanks for posting the specs.  Those specs look like the properties of the video file generated by the Blackmagic Video Assist (ProRes, 1920p, 10 bit, 4:2:2) -- not the properties of the signal coming out of the camera's HDMI port.

  5. On 6/5/2016 at 2:49 AM, jax_rox said:

    Hence why I said 'if there's demand'. There are even some PL mount Panny cams floating around because there was enough demand. Panavision is a great company (though, apparently, not overly profitable ;)) and it is certainly to their benefit that they can manufacture whatever they want/need. I'm just saying, the major of cameras Panavision rent out are PV mount. As I say, if there's demand for it, they'll make it (which is exactly what the rep told you according to your post), but the push and idea will be to use PV lenses. And why wouldn't you?

    No.  The Panavision rep didn't say anything about "demand."   Panavision does "one-offs."   They are known for doing so.  They are proud of doing so.

     

    There doesn't have to be any huge "demand."  An ASC member or some DP with a big project might ask them for something special, and there is a good chance that they will provide it (probably based on whether or not they can make it without too much expense).  After the customer is finished, they put the part on the shelf, and sometimes it gets rented again and sometimes it doesn't.

     

    Panavision can operate in this fashion because they don't have to sell anything.  The old Clairmont Camera outfit worked similarly, making specialty items that individuals would request, but Clairmont was a little more active in trying to rent out the special items.

  6. 9 hours ago, tomastancredi said:

    so no news about gx85/80 hdmi output? 10 bit 422?
    has anyone tried?

    At Cinegear, I asked a Panasonic salesman about the HDMI output of both the GX85 and the G7, and he insisted that both gave 8 bit, 4:2:0 video out of HDMI.

     

     

    Later, I went to the Atomos booth, and asked a rep if any of their recorders could sense the bit depth and chroma sub-sample of a camera signal (to test the outputs of the G7 and GX85).  He said no, but after taking a swig from his beer, he boasted that he knew the output specs on every camera and that the G7 was definitely 8 bit, 4:2:2.  After I offered a US$5 wager that the G7 wasn't 8 bit, 4:2:2, we went to the Panasonic booth to find out the answer.

     

     

    With the tipsy Atomos rep at my side, suddenly the Panasonic salesman was not as sure as before.  He admitted that the outputs of some cameras haven't been tested, but that the corporate office instructed the sales reps to say 8 bit, 4:2:0 for most cameras.  However, he was definitely sure that the G7 output 8 bit, 4:2:0.

     

     

    So, I don't know what to tell you.  Hopefully, someone will connect the GX85 to a capable monitor/recorder/analyzer and post the answer here.

  7. 1 hour ago, jax_rox said:

    Panavision don't sell cameras

    That certainly is earth-shattering news...

     

    1 hour ago, jax_rox said:

    so there's no real need or point to adapt to different lens mounts

    Actually, the fact that Panavision only rents is the very reason why they have more flexibility to make specialty items and one-offs, compared to the other manufacturers who only make money off of sales/service.  In fact, Panavision has bragged about that advantage over the years, and, indeed, when I asked the Panavision rep about a special front plate, he smiled and said, "we only rent, and we make what we rent, so we can fabricate whatever we want to fit onto our cameras."

     

    Here are just a few of the specialty items that Panavision has made/adapted over the years, which they still rent.

     

    If some ASC member wants to adapt his or her old Hasselblad glass to the DXL for a feature, you can bet that Panavision will jump at the request.

     

    1 hour ago, jax_rox said:

    And really, despite the rhetoric about the colour science and panavising, it's essentially a Weapon in a Panavision body

    I wouldn't be so sure.  Light Iron is a high-end DIT/finishing outfit, and they have just a little bit of experience over the years in dealing with the shortcomings of Red color.  If they initiated the collaboration between Panavision and Red and if they were involved in the design/engineering of the A/D converters and/or the color algorithms of the DXL, I would guess that the image from the DXL is a step or two above that of a stock RED unit.

     

    Furthermore, even without their Light Iron division, Panavision has been no slouch in regards to their past digital cameras.  In addition, I am fairly sure that they still own the Dynamax sensor foundry.

     

     

    1 hour ago, jax_rox said:

    and it'll probably be cheaper to rent as well!

    I also wouldn't be so sure of that, as Panavision is known for making deals.

  8. 23 hours ago, duffman said:

    Why the fuck does this use R3D? Guessing this is just a rebranded red cam right?

    I saw the camera at Cinegear yesterday.  I asked one of the reps why they collaborated with Red on this camera, and he said that Panavision's newly acquired Light Iron division pushed the partnership (and helped design the camera's color science), because Light Iron was strongly Red oriented when it was an independent DIT/finishing house.

     

    It's not just a "rebranded" Red camera -- they evidently put some effort into "Panavising" it and changing not just the ergonomics, but also the image.

     

    By the way, the Panavision rep also said that the camera probably won't be available for the plebeians until 2017.  Also, when asked about special lens mounts for alternative medium format lenses, the rep said that they might consider making special mounts on a piecemeal basis, but that they would probably prefer to adapt a lens to the Panavision mount and keep it in their rental stock.

  9. Thank you for the first-hand insight into the distinctiveness of the Cyclops' look.  Perhaps the "buffer" in the highlights is similar to the "glow" that many enjoyed with 35mm DOF adapters.

     

    It certainly would be interesting to see a side-by-side DOF/look comparison test between a Cyclops and a much smaller format camera (such as a BMPCC, a BMMCC or a Digital Bolex).  A test of such dramatically different formats might finally settle the DOF/look argument.

  10. Gonzalo,

     

    Thank you for the reply!  I guess that quite a few of the extra-large format photography lenses were not designed for extreme swing/tilts.  On the other hand, it might be possible to find a cheap lens from one of the old, huge, graphics stat/process cameras that would allow much more play, and also have a larger maximum aperture.

     

    By the way, there has been a long-running debate here on whether or not there is a difference in "look" between large formats and smaller formats.  Have you found any differences in look between your Cyclops rigs and smaller cameras?  If so, please explain the differences that you see.

     

    Thanks!

  11. There are a lot of different lighting scenarios shown in that video, and many of them are available light.

     

    In regards to the screen grab that you posted, the solitary light source seems to be coming from a moderately steep angle, and ever so slightly to camera right.  Judging from the falloff and cast shadow edges, it appears to be fairly close to the subject, perhaps two to three feet outside of the frame.  The source might be 10-24 inches in diameter/width, and it probably is a flat, smooth-faced (diffused) source.

     

    To get a similar effect, you could use an LED panel with some diffusion, or get a small portable softbox (like a Rifa), or just clip a large piece of diffusion to the barndoors of most any fixture.  Position the light closer than usual, and have stuff in the background that barely reads in the darker area of the light's falloff.

  12. 5 minutes ago, John Matthews said:

    A quick note about RawTherapee, it lets me "see" the raw file, but I don't think it's optimised- "supported" would be too strong of a word.

    I think I'm just going to have to be patient until software catches up with cameras. I did have a look at Iridient Developer 3 (officially supporting the GX80, along with Silkypix) and it's been amazing. Too bad it's OSX only, costs $99, and workflow would need to be changed.

    Perhaps RawTherapee's default settings/algorithms are not optimized for the GX80, or perhaps its defaults are just a little laid-back in regards to sharpening, saturation and contrast, so some tweaking might help.  As I mentioned, I am happy with Darktable's presets as a starting point.

     

    I would guess that Iridient Developer uses open source DCRaw, if it already has GX80 raw capability.  Many of the open source raw developers/processors are multi-platform, so you can use them on OSX, Windows or Linux.  As open source software is usually free, it can't hurt to try other raw processors to see if you like their defaults.

  13. 13 hours ago, John Matthews said:

    I should have been clearer though. I'm not a newbee to Linux, but I've never used it as a day-to-day machine workhorse. I've installed countless distros into Virtualbox in the hopes to finding:

    1) a stable/secure desktop environment

    2) pro and free solutions for both video and photo editing without too much fuss

    Not sure what you are after in regards to a "stable/secure desktop," but running an OS in a VM might not be the best way to test such "stability," because of the resource drain and potential glitches.  I have used a lot of different desktops and window managers in the past 14 years, and I never had any problems that I can recall.  I tend to use lightweight window managers instead of full desktops.  By the way, those who use tiling window managers usually run circles around their "point-&-click" counterparts.

     

    In regards to "pro and free" video and photo editors, the two are most definitely not mutually exclusive.  A significant number of pros use open source (free) software -- even photo and video editors.

     

     

    13 hours ago, John Matthews said:

    To restate my question in clearer terms: If I'm a "pro" running Linux in the film capital of the world (maybe that's your case), what would I most likely be using?

    For an NLE/compositor, you would probably be using Blender, Cinelerra, Lightworks or Piranha (proprietary, with the high-end version at US$250,000).  Kdenlive looks like a good NLE, and it has become more robust and a lot more popular since I played around with it many years ago.  The studio version of the Lightworks NLE is probably pretty good, but I have never tried it.

     

    There are numerous image editors/processors that run on Linux.  My favorites are GIMP and Darktable, but there is also Krita, CinePaint, RawTherapee, Raw Studio, Delaboratory, UFRaw, GTKRawGallery, LightZone, Pixeluvo (looks like an interesting processor/editor combo), Photivo, AfterShot Pro, Fotoxx, etc.  These are mostly raster image editors, and, of course, there are also a few open source vector image creators/editors.

     

     

    13 hours ago, John Matthews said:

    My technical abilities are probably going to be sufficient or I can put in some effort and time. I just don't want to waste time on software that lacks community and developers. I would like to learn something that I know will be supported in the the future (5-10 years) and that has a following.

    Both proprietary and open source projects come and go, and no one can guarantee the future.  I am guessing that you don't want to stick with FCP.  For open source NLEs, Blender has a strong community with a lot of folks crazy about its editing capabilities.  The community version of Cinelerra is updated fairly regularly, and it has some unique capabilities (but its default theme is rather garish).  I don't know much about the proprietary NLEs, but I think Lightworks has a following.  I am keeping my eye on Kdenlive.

     

     

    13 hours ago, John Matthews said:

    Concerning the GX80, only Raw Therapee will allow me to edit the raw files.

    I wouldn't be so sure of that.  I would guess that a few others in the list of the open source image processors above can already read raw files from the GX80.  Open source projects can move fast.

     

     

    13 hours ago, John Matthews said:

    I just can't seem to get better results than the in-camera jpegs (which look great). All my previous cameras have had inferior results when compared to software. Not the GX80. I spend most of my time trying to get a result that the camera would have given me out of the box in the first place. The major problem is color.

    Most of the raw image processing apps have fine color control.

     

    I don't know much about RawTherapee, but Darktable has preset camera color profiles for certain camera models/brands/film stocks.  Darktable usually defaults to the profile for the brand/model that it reads from the EXIF info, but I sometimes use an Agfa profile on my Canon raw images.  Of course, Darktable also allows one to create and save custom profiles.  I would imagine that RawTherapee and a lot of the other open source raw image processors offer similar preset/custom profile capability.

     

     

    13 hours ago, John Matthews said:

    Do you know if RawTherapee is still being actively developed? Or, have most people moved on to Darktable?

    Judging from the fact that RawTherapee already has the capability to import the GX80/85 raw files, I would guess that there is some current activity in that project.

     

    I don't know if people have moved from RawTherapee to Darktable -- there are so many options in the open source world, as evident from the above list of photo editors and raw image processors.  I use Darktable because that's what I started with years ago.

  14. 1 hour ago, John Matthews said:

    I have a question about workflow. I've resorted to shooting just jpegs because they look great from this camera and also I refuse to pay Adobe for the newest Lightroom when it comes out for GX80 raw support. I also use OSX and fcpx, but would love to move to Linux. Any ideas?

    If you want to try Linux but have no experience, start with one of the newbie distros:  Mint, Ubuntu, PCLinuxOS, Mageia, openSUSE, etc.

     

    Also, you can try most of these distros without installing them by booting "live" versions (liveCD, liveDVD, liveUSB, liveSD, etc.).  The live versions of these big newbie distros will usually run more slowly than installed versions, but a live OS running off of a USB 3.0 flash drive might be fairly snappy.

     

    By the way, there are multimedia distros designed for video/audio production and photography, such as Ubuntu Studio, AVLinux and Apodio.  These multimedia distros will often come with a lot of codecs already installed, but it is fairly easy to install codecs on the non-multimedia distros.

     


    In regards to GX80/GX85 raw support, you could just install open source RawTherapee on OSX, as it reportedly already works with GX80/GX85 raw files.  RawTherapee uses the open source DCRaw library, upon which the Adobe Camera Raw converter (and pretty much every other raw file converter) is based.  Consequently, a lot of the other open source raw "darkrooms" might also already have the ability to read GX80/GX85 raw files, as many open source projects tend to move faster than their proprietary counterparts.  I use open source Darktable, which can also be installed on OSX.

     

     

    If you start moving to Linux, there are other things of which it might be good to be aware, such as which NLEs (both open source and proprietary) are the most actively developed and robust, and which audio editor is ideal for your situation.

  15. 2 hours ago, TheRenaissanceMan said:

    I think the Personal-View links let you buy cameras that specifically have their record limits removed.

    14 minutes ago, August McCue said:

    I'v never heard of Personal View. This is Interesting! Do they use a hack like magic lantern?

    It's a good thing that @TheRenaissanceMan mentioned Personal-View, otherwise you would have never known about it!   :glasses:

    http://www.personal-view.com is a camera web site and forum run by Vitaliy Kiselev, who happens to be the founding developer of the GH1 and GH2 hacks.  He also uses the site to sell gear.

  16. 5 hours ago, jcs said:

     after summing we multiply by .25: 1.0 is max value, the fraction contains the extra info. I'm a developer- this is how we write the code.

    No.  Don't do that.

     

    By summing four values and then multiplying by 0.25, you are actually averaging -- you are not summing.  The best way to convert from 4k, 8-bit to HD, 10-bit is to simply sum the four values, which retains the full color depth/accuracy of the original image and which is a perfect mathematical conversion between 8-bit and 10-bit.  No multiplier is necessary after summing.
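    The difference between the two conversions can be sketched in a few lines (function names and sample values are mine, purely for illustration):

```python
# Sketch: summing a 2x2 block of 8-bit luma samples into one 10-bit sample
# when downscaling 4K to HD.  Four 8-bit values (each 0..255) sum to at most
# 1020, which fits within the 10-bit range (0..1023) -- no multiplier or
# rounding is needed, so no precision is thrown away.
def sum_block(a, b, c, d):
    assert all(0 <= v <= 255 for v in (a, b, c, d))
    return a + b + c + d  # 10-bit result, 0..1020

# Averaging (summing, then multiplying by 0.25 and storing an integer)
# collapses the result back into 8 bits and discards the fractional precision.
def average_block(a, b, c, d):
    return (a + b + c + d) // 4  # back to 0..255 -- 8-bit precision

print(sum_block(200, 201, 202, 203))      # 806 (distinct 10-bit value)
print(average_block(200, 201, 202, 203))  # 201 (the .5 fraction is lost)
```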

     

    5 hours ago, jcs said:

    You examples for color depth equivalence is likely true in practice- folks would not be able to tell the difference (I think you meant 4K 420 8-bit, which converts to 1080p as 444 ~9-bit).

    Your original hypothetical scenario has equivalent color depth to 1080p, 444, "~9-bit."  If you start with 420, the color depth would be less than that original scenario.

  17. 19 hours ago, jcs said:

    Hey tupp, 420 is full Y (Luma) and 1/2 resolution (both vertical and horizontal (could say 1/4 for both)

    Where did 420 come from?  We are talking about 422.    You started with:

    On 5/12/2016 at 7:21 PM, jcs said:

    4K 8-bit 422 becomes pseudo 10-bit Luma 8-bit Chroma 444 1080p. 'pseudo' as the 4 8-bit sample's variation summed to 10-bit is helped by noise/dither.

     

     

     

    19 hours ago, jcs said:

    If we downsample 4K 420, we'll average together 4 Y's to get one new, low-pass-filtered (noise and alias reduced) pixel.

    Don't average! -- NEVER AVERAGE!!    Always SUM!

     

    Don't sacrifice overall accuracy/depth for a few wayward pixels.  Reduce noise some other, more direct way.


    If Dugdale started out with 420 instead of 422, of course, that affects the end result.  However, my point in response to your hypothetical color depth equivalence example is that the color depth is essentially identical in these three scenarios:
    4k-UHD, 422, 8-bit;
    full-HD, 444, 10-bit-luma/8-bit-chromas (your example);
    full-HD, 422, 10-bit.

  18. 5 hours ago, jcs said:

    4K 8-bit 422 becomes pseudo 10-bit Luma 8-bit Chroma 444 1080p. 'pseudo' as the 4 8-bit sample's variation summed to 10-bit is helped by noise/dither.

    I think that you mean to say that a 4k (UHD), 422, 8-bit image has equivalent color depth to a hypothetical HD, 444, 10-bit-luma/8-bit-chromas image, which is correct.  However, I don't think that hypothetical end result is an accurate description of Dugdale's conversion.  As I recall, he actually converted a 4k, 422, 8-bit image to HD, 10-bit, 422, which, if properly executed, also retains the full color depth of the original 4k, 422, 8-bit image.  Of course, HD, 10-bit, 422 has an equivalent color depth to your hypothetical HD, 444, 10-bit/8-bit image.

     

    By the way, if the pixels are properly summed in the down-conversion, there is no "pseudo" necessary.  All three scenarios have equivalent color depth, with or without dithering.  The dithering primarily helps eliminate banding artifacts.
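    The anti-banding effect of dither can be sketched by quantizing a smooth ramp harshly, with and without dither (the 4-level quantizer here is a deliberate exaggeration for illustration):

```python
import random

random.seed(0)  # deterministic dither for the demonstration

# A smooth 0..1 ramp, 256 samples.
ramp = [i / 255 for i in range(256)]

# Hard quantization to only 4 levels: long runs of identical values,
# i.e. visible banding.
hard = [round(v * 3) for v in ramp]

# Dithered quantization: random noise added before truncation breaks the
# runs up, trading banding for fine noise while preserving the average level.
dithered = [min(3, int(v * 3 + random.random())) for v in ramp]

def transitions(seq):
    # Count how many times adjacent samples differ.
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

print(transitions(hard))      # 3 -- only three band edges in the whole ramp
print(transitions(dithered))  # many more -- the bands are broken up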

  19. 3 hours ago, User said:

    Thanks for the input Tupp. My only concern is that you using words like "I think' and "Fairly sure' and this does not exactly inspire great confidence.
    It would be beneficial to have an authority weigh in with fact. I have a lot of material in front of me and it would be good to know for sure and get this right.

    Just try it -- ffmpeg is free and open source.

     

    Here is a tutorial on an easy way to install ffmpeg (with extra codecs) on a Mac, using Homebrew.  Here is another tutorial on how to manually install ffmpeg (with extra codecs).

     

    The ffmpeg commands to split a file without re-encoding are fairly simple, because one is merely copying the video and audio streams.  It is probably easiest to use a separate command for each part of the file desired.  So, the ffmpeg commands to split a five-minute file directly in half will be something close to these:

    ffmpeg -ss 00:00:00 -i input_video.mov -t 00:02:30 -c copy first_half.mov

    ffmpeg -ss 00:02:30 -i input_video.mov -t 00:02:30 -c copy second_half.mov

    Keep in mind that, with "-c copy", the cut points generally land on the nearest keyframes, so the splits might not be frame-accurate.

     

     

     

     

     

     
