
tupp


Posts posted by tupp

  1. tested the sony a7s together with 4k recording in shogun today in direct comparison to the canon 1DC (4k internal recording).

    ​Thank you for posting your test.

     

    It appears that the A7S image was exposed (or displayed) at about 2-3 stops brighter than that of the 1DC, which could explain much of the noise difference.  Do you have any shots in which the exposure is identical?

  2. Dan Lavry wrote a pretty deep paper on it. If I'm honest, the DSP maths in that paper is lost on me. but a 96khz environment is beneficial even with 48khz sources, audibly so with good monitoring.

    Bitdepth wise, 16 bit dynamic range is fine for delivery. For recording, do 24bit. Your software will probably mix in 32 or 64 float by default. If not, set it to 32bit yourself.

    ​If Dan Lavry's paper implies that bit depth and dynamic range are the same thing, then his paper is flawed.

     

    Sorry to harp, but there is no such thing as "16 bit dynamic range" -- not in video and not in audio.  Bit depth and dynamic range are two independent properties.

  3. Cocker was an accomplished singer and songwriter whose presence will be missed.

     

    Not sure, but I think that the only song performed by two different acts at Woodstock was the Cocker-penned "Something Coming On." Cocker did it, and so did Blood, Sweat & Tears.

  4. I just tested it on the NX1 and I can get a very nice 4K feed working. Details coming on the blog.

    ​​Thank you for all the great work and info.

     

    I am interested in knowing whether the NX1's best feed is cinema 4K at 10-bit, 4:2:2, etc.

     

    Also, it would be very helpful to be able to download a 5-second, unaltered clip of a person "tits up," in both the NX1's native h265 and in the Shogun's best format.  :)

     

    Thanks!

  5. Thanks for the info and the video, and especially for the whip pans of the books.  Definite rolling shutter.

    By the way, I think ffmpeg decodes h265, which could mean that Handbrake will also convert the codec.  Not sure how well the ffmpeg decoder performs.
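
    If anyone wants to experiment, something along these lines might work with a recent ffmpeg build that includes the hevc decoder. The file names and the ProRes target are just placeholders -- this is a sketch, not a tested recipe:

        # hedged sketch: transcode an h265 clip to ProRes 422 HQ via ffmpeg
        import subprocess

        subprocess.run([
            "ffmpeg",
            "-i", "nx1_clip.mp4",       # hypothetical h265 source file
            "-c:v", "prores_ks",        # ffmpeg's ProRes encoder
            "-profile:v", "3",          # 3 = ProRes 422 HQ
            "-c:a", "pcm_s16le",        # uncompressed audio
            "nx1_clip_prores.mov",
        ], check=True)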

     

  6. As mentioned in the linked ML thread, the increase in dynamic range is achieved with ML's dual ISO feature, in which each of the two "interlaced fields" is assigned a different ISO.

     

    The only drawbacks of this technique are that it reduces the vertical resolution and increases the tendency for moire/aliasing.

     

    The concept behind the Panavision Dynamax sensor was similar, in that it was supposed to utilize neighboring RGB pixel groups of differing sensitivities (through the use of pixel apertures, as I recall).
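
    Here is a minimal sketch of the general idea (not ML's actual algorithm -- the gains, the line layout and the simple merge are all assumptions for illustration): alternate lines are captured at two gains, normalized back to a common exposure, and merged, so the usable range grows while the vertical resolution is roughly halved.

        # illustrative dual-ISO style merge (not Magic Lantern's code)
        import numpy as np

        def merge_dual_iso(raw, low_gain=1.0, high_gain=16.0, clip=4095):
            # even rows captured at low gain (keeps highlights),
            # odd rows at high gain (cleaner shadows) -- a hypothetical layout
            low = raw[0::2, :].astype(np.float64) / low_gain
            high = raw[1::2, :].astype(np.float64) / high_gain
            rows = min(low.shape[0], high.shape[0])
            low, high = low[:rows], high[:rows]
            # where the high-gain line clipped, fall back to the low-gain line;
            # otherwise average the two normalized lines
            high_clipped = raw[1::2, :][:rows] >= clip
            merged = np.where(high_clipped, low, (low + high) / 2.0)
            return merged  # roughly half the vertical resolution, wider usable range

        frame = np.random.randint(0, 4096, size=(8, 12))
        print(merge_dual_iso(frame).shape)   # (4, 12)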

     

  7. Wow, there's a fork of Magic Lantern? Interesting.


    Sort of... the Magic Lantern developers/forum-moderators effectively killed it.

     

    Personally at 1k I'm not sure. I'm not sure investment into Canon is any decent at this rate, but 1k doesn't seem enough for a system switch either.


    Part of the beauty of Tragic Lantern's h264 control is that it allows capture at full HD (or the Canon HDSLR-h264 variant of full HD) in files that have quality similar to that of higher bit rate mjpeg. TL can yield fairly rich 1920x1080 frames from the T3i, 7D and EOSM.
  8. I would spend my money on used lenses. Lenses with Nikon F mounts are ideal because they can be adapted to many cameras (and actual Nikkor glass is usually a good value, too).

    One little known advantage of the T3i -- it is one of the few cameras that has a Tragic Lantern build.

    Tragic Lantern allows control of the h264 GOP (group of pictures) setting, so one can shoot h264 with all I-frames.  In this scenario, each frame is essentially its own jpeg picture, with none of the blocky interframe artifacts that often plague h264 encodings.
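
    To make the GOP idea concrete, here is a rough sketch using ffmpeg's x264 encoder as a stand-in -- this is not Tragic Lantern's in-camera encoder, and the file names and bit rate are placeholders:

        # hedged sketch: all-I-frame H.264 ("GOP 1") encode with ffmpeg/x264
        import subprocess

        subprocess.run([
            "ffmpeg",
            "-i", "t3i_source.mov",   # hypothetical source clip
            "-c:v", "libx264",
            "-g", "1",                # GOP size of 1 = every frame is an I-frame
            "-bf", "0",               # no B-frames
            "-b:v", "90M",            # high bit rate placeholder
            "all_iframes.mp4",
        ], check=True)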

    Here's an article that further explains things.  As mentioned in this article, the early "TL 1" build for the T3i/600D allows more control of the h264 settings than the later builds.

    With both Tragic Lantern and Magic Lantern, one can also boost the h264 bit rate to one's liking.

    I think that the 5D mkIII is the only Canon HDSLR that offers "out-of-the-box" h264 with all I-frames.  Don't know whether or not Canon offers any control of h264 bit rate on the 5D mkIII.

    I only know of two other cameras with TL builds: the 7D and the EOSM.

    I have the EOSM, and

  9. The other thing to take into consideration is the quality of the 1080p. I'm sure the detail they can pull from their 1080p is a little bit more than the detail I can pull from my bmpcc...


    If you are suggesting that converting 4K to 1080 gives more detail than just capturing at 1080, I think the jury is still out on that issue. There are other variables involved, such as how precisely one can focus in 4K compared to 1080, and how sharpening algorithms in 4K affect the results after conversion to 1080.

    On the other hand, one can certainly retain more color depth when converting from 4k to 1080, all other variables remaining the same.
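
    A quick way to see the color-depth point (a toy sketch that ignores chroma subsampling and real scaler filters): average each 2x2 block of 8-bit 4K samples, and the 1080 result lands on quarter-steps between the original codes -- roughly 10 bits' worth of distinct levels per channel.

        # illustrative only: 2x2 averaging of 8-bit samples yields finer levels
        import numpy as np

        uhd = np.random.randint(0, 256, size=(2160, 3840))   # fake 8-bit 4K plane
        blocks = uhd.reshape(1080, 2, 1920, 2)
        hd = blocks.mean(axis=(1, 3))          # each output sample is the mean of 4 inputs
        print(len(np.unique(hd * 4)))          # up to 1021 distinct levels vs 256 in the source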

     

    And obviously my T3i is a joke for anything other than closups of people's faces.


    Not at all.

    Aside from this thread's point that lighting and content are probably more important than resolution, the T3i/600D is one of the few cameras that can take advantage of Tragic Lantern's GOP 1 capability, which eliminates the blockiness from interframe H264 compression.

    To maximize TL's special H264 controls:
    1. disable the camera audio;
    2. get a fast SDHC/XC card;
    3. install TL;
    4. boost your bit rate to at least 2x;
    5. use a flat picture style with the sharpness set to 1 (sharpen later in post);
    6. and set your GOP to 1.

    You might be pleasantly surprised by the results.

     

    If I could shoot 1080p on an Arri Alexa I wouldn't trade it for 4K from a GH4... and for that matter neither would anyone else on this forum.


    For 1080, I wouldn't necessarily choose an Alexa over a Sony F35.

     

    The problem is we are not getting 1080p worth of information out of a bmpcc.


    Not sure what is meant by this statement.

    The BMPCC gives an enormous amount of information in raw. I just shot a feature using two BMPCCs with speedboosters, and we captured about 8 terabytes of raw, 12-bit footage.

     

    And the bmpcc is arguably the most detailed image till you get to... 4K in the GH4!


    Generally, I would rather shoot 1080 with a Sony F35 than a BMPCC.

    Also, there are other cameras that shoot resolutions between 1080 and 4K.

    Furthermore, other than the GH4, there are cameras that shoot raw cinema 4K (and greater).

     

    Those DPs were talking amoungst themselves and to the high end $30+ million budget movie people.


    They spoke of the prevalence of down-scaling -- how gaffers complained about the trend toward less powerful lighting packages, due to greater camera sensitivity. The gaffers can no longer skim as many of the big 10-ton-truck and 1500-amp genny kit rentals.

    Now, it is more common to use 5-ton LED/fluo-heavy packages with 500-amp (or less) gennies.

  10. Well this should be an interesting topic, but kind of got killed off by all the geeky stuff some of us don't bother to know


    Sorry. Didn't mean to hijack the thread. Just trying to give a friendly reminder that resolution is integral to color depth.

    On the other hand, there certainly is nothing "geeky" about a thread titled, "Is 4K Not as Important as We Think It Is?" :)

     

    Those DP's are spot on in my opinion. I personally like 4k but only as a post production tool to crop, reframe, stabilise and move images.


    I don't think that there is anything inherently wrong with 4K (or with any other given resolution).

    Of course, the higher the resolution, the greater the bandwidth/resource requirements (all other variables remaining the same).

    In addition, those DPs stressed lighting over technical specs (including dynamic range and "color science").

     

    Most of the 4k on Youtube looks awful on my monitor (very brittle, moire galore).


    YouTube looks awful on a lot of monitors.

    However, the moire problem could be peculiar to your set-up. What is the resolution of your monitor?

     

    On a TV it reminds me of The Hobbit in 48fps - hyper-real. Hyper-real is on the majority not the best way to do things.


    That's an interesting topic.

    Just the other day, I was talking to my Indian filmmaker friend, and he explained that the reason why a lot of Indian films were completely looped (other than practical shooting advantages) was that Indian audiences like the "other-worldly" feel of overdubbed dialog.

     

    4k is an exciting format - but dynamic range and colour beat it to shreds.


    I don't think that 4K is particularly "exciting." 4K cinema cameras have been around since the first Dalsa Origin (2002?). 4K is yet another step in the cinematography megapixel race -- a conspicuous technical spec for producers (and others) to demand.

    In regards to the notion that dynamic range and color are more important than resolution, again, resolution is integral to color depth.

    However, a system with 10-bit depth and at least HD resolution would certainly suffice for a lot of the work done today. As I said earlier in the thread, I would rather shoot HD with a Sony F35 than 5K with a RED Epic.

    As those DPs suggested, lighting usually trumps all of the technical specs.
  11. but i noticed in one of your other posts you asked why the banding in a blue sky was so bad with the A7s and wanted to know if there was a solution.


    No. I started a thread for members to post on how they dealt with the banding in the A7s. I already knew why banding is prevalent in the A7s -- it's 8-bit.

     

    an 8bit file is considered low dynamic range for this (and other) reasons.


    Not by anyone who knows the difference between bit depth and dynamic range.

    Bit depth and dynamic range are two completely independent properties.

     

    you are trying to encode too much contrast variation into too few code values,


    Not necessarily. Banding is the phenomenon of a value-threshold transition that falls within a broad, smooth tonal gradation. It is more a result of bit depth and of the smoothness of the tonal transitions in the subject.

    Other 8-bit cameras exhibit the same (or more) banding as the A7s, and most of those other cameras have less dynamic range.

    For instance, the A7s has a large capture dynamic range with value thresholds that are spaced far apart in amplitude. So, the A7s might have one value threshold that falls within a sky that ranges, say, 1/16th of a stop, while an 8-bit camera with less dynamic range might have 3 value thresholds that fall within the same sky. In such a scenario, the A7s has two bands in the sky, while the other 8-bit camera has four bands in the sky (and probably more contrast!).
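
    Here is that counting argument as a toy calculation -- all of the numbers are made up for illustration, and the even spacing of codes across the range is a deliberate simplification:

        # made-up numbers, purely to illustrate how code spacing relates to banding
        def thresholds_in_gradient(total_stops, gradient_stops=1.0 / 16, bit_depth=8):
            # assume the camera spreads its codes evenly (in stops) across its
            # whole dynamic range -- a deliberate simplification
            codes_per_stop = (2 ** bit_depth - 1) / total_stops
            return codes_per_stop * gradient_stops

        print(thresholds_in_gradient(total_stops=14))  # ~1.1 transitions -> about 2 bands
        print(thresholds_in_gradient(total_stops=8))   # ~2.0 transitions -> about 3 bands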

     

    if you only had a narrow range of light intensity or gamma, particularly in the shadows, no compression etc, it would be less noticeable, or higher bit depth encoding.


    The smoother the gradation of intensity, the greater the chance of banding.

    Yes, higher bit depth can make banding less noticeable.

     

    image formats below 16bit are often referred to as low dynamic range in a render pipeline..


    Not by those who know the difference between bit depth and dynamic range.

    Bit depth and dynamic range are two completely independent properties.

     

    even 10bit technically is considered LDR. for this, the bit depth and format are both important.. i.e. integer, float or 1/2 float (exr).


    Bit depth and dynamic range are two completely independent properties.

     

    from Wikipedia on OpenEXR:
    It is notable for supporting 16-bit-per-channel floating point values (half precision), with a sign bit, five bits of exponent, and a ten-bit significand. This allows a dynamic range of over thirty stops of exposure.


    What can one say, except don't believe everything that you read on Wikipedia.

    The fact is that a 4-bit digital system can have over thirty stops of dynamic range, while a 16-bit system can have only one "stop" of dynamic range.

    Furthermore, analog systems (which have zero bit depth) can have a dynamic range of over thirty "stops."

    Put simply, bit depth is the potential number of value increments within a digital system. Dynamic range is the ratio between the largest and smallest values of usable amplitude, in both analog and digital systems.
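
    For example (with numbers assumed purely for illustration), here is a 4-bit encoding whose steps are spread logarithmically across 30 stops next to a 16-bit encoding whose codes all sit inside a single stop; the dynamic range comes from where the codes are mapped, not from how many there are:

        # illustrative only: bit depth does not determine dynamic range
        import math

        def dynamic_range_stops(levels):
            # dynamic range = ratio of the largest to the smallest usable (non-zero)
            # amplitude, expressed in stops
            usable = [v for v in levels if v > 0]
            return math.log2(max(usable) / min(usable))

        four_bit = [2.0 ** (2 * n) for n in range(16)]         # 16 codes spaced 2 stops apart
        sixteen_bit = [1.0 + n / 65535 for n in range(65536)]  # 65536 codes inside one stop

        print(dynamic_range_stops(four_bit))     # 30.0 stops from a 4-bit encoding
        print(dynamic_range_stops(sixteen_bit))  # 1.0 stop from a 16-bit encoding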
  12. so the 1bit camera example was theoretical, not in fact how you would implement a 1bit camera in the real world for best results etc..


    Okay.

     

    in which case you would be leaving out too much info really to cover.


    What is the meaning of this statement? Given enough resolution and barring noise, a 1-bit camera could certainly provide the same amount of color info as an Alexa 65.

     

    but even summing the results of each pixel group with infinite resolution would not account for the loss of information.


    Forget the summing -- summing implies that we are using more than 1 bit in our system (after the summing), and summing is completely unnecessary. I edited out the summing step just before your response.

    In regards to the "loss of info" -- to what are you referring? Given infinite resolution and barring any other practical problems (such as noise), our 1-bit camera could certainly have more color info than an Alexa 65.

     

    in order to get that (you called it 1bit) black and white image you see in the newspaper, you often need to do several things. first, start with a much higher resolution


    The subject/scene that you are shooting with the 1-bit camera has resolution down to the atomic level.

     

    and generally 8bit image..


    One could start with an analog image that has 0-bit depth.

    Regardless of the original bit depth and resolution (or color depth), the image is rendered within the color depth limits of the printing system.

     

    if it's a newspaper like the LA Times for example, you need to convert to grayscale, increase contrast, possibly add unsharp mask and print to a 100 line per inch screen velox (that's probably really dated info).


    Thanks for reminding me of Velox.

     

    anyway, what we see is the direct result of a series of changes to a much more detailed image, which we can't then recreate from the printed image, no matter what the resolution.


    Not necessarily -- not if the printing system is higher quality than that of the original image.

    If the original image was shot on Royal-X and the printing screen is 10x finer than the Royal-X grains, there would be no essential loss in quality -- even though the final image is not exactly the same as the original.

    The same principle applies to transcoding video.

     

    this is often misunderstood because of the "resolution influences color depth as much as bit depth..." type of statement.


    I fail to see how the fundamental relationship between resolution, bit depth and color depth causes misunderstanding, since most people are completely unaware of that relationship in the first place.

     

    "usable dynamic range and bit depth"

    you have argued that bit depth and dynamic range are not related,


    Dynamic range and bit depth are two completely different and independent properties.

    By the way, dynamic range also exists in analog systems, but bit depth exists ONLY in digital systems.
  13. there are a lot of misconceptions is probably an understatement, especially with what we're referring to when we say "dynamic range".. because that can describe the scene, the sensor, the codec, and the bit depth of the end file format (which determines usable dynamic range).. but with respect to your 1bit example:


    Bit depth and dynamic range are two completely independent and unrelated properties.

    One can map various bit depth ranges to any section of the amplitude range. Dynamic range refers to the "usable" section of the amplitude range.

     

    if you had a 1bit camera (or maybe better to say a sensor with a dynamic range of 1) that could shoot infinite resolution, it would still need to look at the overall relative scene luminance to determine what it would encode as white or black. in a perfect world scenario, and using 18% gray as middle, anything above an 18% gray reflector would be white.. anything below would be black, there would be no additional tones above or below that, regardless of resolution, so you would never have the details that you see in the newspaper


    That's not how a 1-bit camera would be configured. With such a low bit-depth camera, one would have to create pixel groups with a range of sensitivities among the pixels in each group.

    There are four basic ways to vary the sensitivities of adjacent pixels: 1. electronically; 2. with optical filtration (similar to that of RGB filters on pixel groups, but with ND instead); 3. with different sized apertures (or obstructions) in front of each pixel in the group; 4. with various sized pixels.

    I think that some early digital imaging experimenters tried an electronic cascading technique with CCDs to get a greater range of tones, and Magic Lantern's "Dual ISO" feature can change pixel sensitivity on a per-line basis. Also, I am fairly sure that Panavision was using either varied pixel NDs or varied pixel apertures with their original Dynamax HDR sensor -- a true example of getting greater tonal range by increasing resolution with a fixed bit depth. In addition, Fuji is currently using various pixel sizes on their X-Trans sensor (for moire/aliasing elimination -- not for increased tonal range).
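
    Here is a very rough sketch of options 2 and 3 above (every number in it is an assumption for illustration): give each pixel in a 2x2 group a different effective threshold, read 1 bit from each, and the count of "on" pixels becomes a 5-level tone -- resolution traded for tonal range.

        # illustrative only: a 1-bit sensor with per-pixel "ND" thresholds
        import numpy as np

        thresholds = np.array([[0.125, 0.25],
                               [0.5,   1.0]])        # hypothetical per-pixel sensitivities

        def one_bit_group_readout(scene):
            # scene: 2-D array of relative luminances, dimensions divisible by 2
            h, w = scene.shape
            tiled = np.tile(thresholds, (h // 2, w // 2))
            bits = (scene >= tiled).astype(np.uint8)          # each photosite reports 1 bit
            groups = bits.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
            return groups                                      # 0..4: five tones per 2x2 group

        ramp = np.linspace(0.0, 1.2, 16).reshape(1, 16).repeat(2, axis=0)
        print(one_bit_group_readout(ramp))   # counts climb from 0 to 4 across the ramp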

     

    example.. which has images that started out as photographs and then went through a line screen, essentially a dithering process, or as an artists engraving. in other words, they started with the additional dynamic range data in order to know what pixels to throw away, or in the mind's eye of an artist that was imagining that information.


    I am not sure if screen capture/printing is considered to be an actual dithering process.

     

    i'm also sorry to harp on this point..


    Huh? How are you "harping"?
  14. This dream team of cinematographers is far more interested in dynamic range and color science than they are in resolution,...


    Sorry to harp on this point, but resolution is fundamental to color science. Resolution influences color depth just as much as bit depth influences color depth.

    Yes, bit depth is not the same thing as color depth -- they are decidedly different properties. Essentially, bit depth is the potential of color information in one digital pixel, whereas color depth (in digital imaging) is the total potential of color information from a given group of pixels. That "given group of pixels" involves the resolution side of the equation.

    By the way, color depth also applies to analog imaging -- bit depth only applies to digital imaging.

    Resolution is so crucial to color depth that a 1-bit image can exhibit an amazing range of photographic tones and colors, given enough resolution. In fact, every day we see what are essentially 1-bit images, when we look at printed magazines, newspapers and many posters and billboards.
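
    For instance, here is a minimal dither sketch (not any particular press's halftone process): threshold a smooth 8-bit ramp against a repeating pattern so that every output sample is pure black or pure white, and the local density of the black dots carries the tones that the eye (or a downscale) averages back into grays.

        # illustrative only: 1-bit "newsprint" rendering of an 8-bit ramp
        import numpy as np

        bayer4 = (np.array([[ 0,  8,  2, 10],
                            [12,  4, 14,  6],
                            [ 3, 11,  1,  9],
                            [15,  7, 13,  5]]) + 0.5) / 16.0   # classic ordered-dither matrix

        gray = np.linspace(0.0, 1.0, 256).reshape(1, 256).repeat(64, axis=0)  # smooth ramp
        pattern = np.tile(bayer4, (gray.shape[0] // 4, gray.shape[1] // 4))
        one_bit = (gray > pattern).astype(np.uint8)    # every sample is 0 or 1

        # averaging 4x4 blocks recovers roughly 17 tonal levels from the 1-bit image
        recovered = one_bit.reshape(16, 4, 64, 4).mean(axis=(1, 3))
        print(np.unique(recovered).size)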

    Misconceptions abound regarding the basics of color depth and how it relates to resolution, so much so that it is doubtful that any one of these "cinematography dream team" experts is aware of these color fundamentals.
     
     

    ... especially when it comes to shooting dramatic narrative content. 4K and higher resolutions don't necessarily help audiences suspend their disbelief, which is (or should be) one of the primary goals for narrative filmmakers.


    Agreed, and, by the same token, "Gilligan's Island" would be just as funny on 4k as it is on SD.

    In addition, I certainly would rather shoot HD with a Sony F35 (or with a BMPCC and a speedbooster) than 5K with a Red Epic. 10-bit HD is enough color depth for most of my work, and the F35 (and the BMPCC) footage looks better to me than that from any of the Red cameras.
  15. Good work on both videos!

     

    However, I don't see anything super special with the skin tones in the color video.  It just looks like it was well exposed and nicely graded, with some interesting lighting in a couple of shots and with excellent moves/expressions from the models (which is 90% of the battle).  High speed certainly helps with the "sexiness" of the feel.

     

    In regards to the brief black & white video, it looks like there could be reduced red channel and/or boosted green (and possibly blue) channel in the B&W conversion.  Gotta be careful with such a technique, because, although it can darken skin in a nice way and make blue forearm veins disappear, subtracting red and/or boosting green gives more contrast to red blemishes.  Boosting blue can also increase noise.

     

  16. Great interview! Lots of good information. Thanks!


    The only other camera to shoot 4:3 for anamorphic is the Arri Alexa Studio (which Roger Deakins used on the latest Bond film Skyfall)

     
    As I understand, all of the Alexa XT models (XT, XT Plus, XT M and XT Studio) have 4:3 sensors.
     
    Furthermore, I think that there are a few 4k machine vision cameras that have 4:3 sensors.
  17. While this may be a useful exercise for you on learning how to run commands through the terminal on a mac, highly recommended by the way, however the whole down scale to 10bit theory is interesting but fundamentally flawed. Simply put it doesn't work...


    Actually, it does work and people have been doing it for years.

    You CAN sacrifice resolution for increased bit depth, with certain caveats:
    - banding in the original image won't automatically disappear with the increased bit depth;
    - the color depth can never be increased (bit depth is not color depth).

    This process was discussed extensively in this thread.
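
    For anyone who wants to try it, something along these lines should do the 4K-to-1080 downscale into a 10-bit intermediate with ffmpeg (the file names, the scaler choice and the ProRes flavor are placeholders -- a sketch, not a verified recipe):

        # hedged sketch: downscale 8-bit 4K to a 10-bit 1080p intermediate
        import subprocess

        subprocess.run([
            "ffmpeg",
            "-i", "gh4_4k_clip.mov",                  # hypothetical 8-bit 4K source
            "-vf", "scale=1920:1080:flags=lanczos",   # the averaging/downscaling step
            "-c:v", "prores_ks",
            "-profile:v", "4444",                     # ProRes 4444 holds 10+ bits per channel
            "-pix_fmt", "yuv444p10le",
            "gh4_1080p_10bit.mov",
        ], check=True)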

  18. Change the name of folder "/Users/TaJnB/Movies/untitled folder" to a name without any spaces (or special characters), such as: /Users/TaJnB/Movies/gh4files

     

    Make sure that the gh444 app and your gh4 mov files are in the directory /Users/TaJnB/Movies/gh4files

     

    Type at the terminal prompt:

    cd /Users/TaJnB/Movies/gh4files

    Press "enter" (or "return).

     

    The working directory of your terminal should now be "/Users/TaJnB/Movies/gh4files"

     

    Now, type at the terminal prompt:

    ./gh444 your_gh4_mov_file

    Replace "your_gh4_mov_file" with the actual name of your first gh4 mov file (it seems that you have to do each file one at a time).

     

    Press "enter"  (or "return").

     

    The gh444 app should now be converting your first gh4 mov file.
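
    Since it seems you have to do the files one at a time, something like this little Python loop could feed gh444 every .mov in that folder. This is just a sketch, and it assumes gh444 keeps accepting a single file name as its only argument:

        # hedged sketch: run gh444 over every GH4 .mov file in the folder
        import glob
        import subprocess

        folder = "/Users/TaJnB/Movies/gh4files"
        for clip in sorted(glob.glob(folder + "/*.mov")):
            subprocess.run(["./gh444", clip], check=True, cwd=folder)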

  19. not a chance! focal reducers will not work on APSC cameras , flange / mount distance is the problem

     

    The problem is not the flange-focal distance, as the Sony E-mount and Canon EOS M (EF-M) mounts have very short flange-focal distances (18mm), yet APS-C sensors are offered with the Canon EF-M mount and full-frame/APS-C sensors are offered in E-mount.

     

    The flange-focal distance for the micro 4/3 mount is longer than that of the EF-M mount and the E-mount -- micro 4/3 is 19.25mm.

     

    Indeed, there are plenty of E-mount focal reducers.  Don't know how they work with APS-C sensors.  My guess is that there would be some vignetting/softness in the corners of the frame.  I seem to recall that someone posted some test shots in this forum a while back.

     

    Alas, there seem to be no focal reducers for the EF-M mount.

     

    So, I would guess that the problem is incorporating an optical element in the focal reducer that is large enough to fully cover APS-C/Super-35 sensors.
