GMaximus

Posts posted by GMaximus

  1. Which one has the best dynamic range? The comparison seems to assume the GH4 and the A7S have the same dynamic range.

    DXO Labs test result:
    [chart: k5_gh4_a7s.jpg]

    We have to keep in mind that if we shoot S-Log2, the base ISO is 3200, so in bright conditions the A7S loses a lot of DR compared to the GH4, or to itself at lower ISOs.
  2. None of those gamma curves are providing a real increase in dynamic range.

    They do
     

    While the A7S isn't quite a one-trick-pony (low light), the GH4 is a much better all around camera.


    "Who said it to be so? Let the sword resolve it! Contest! And God be the judge!" (which means we want side-by-side footage :) )

  3. I'll go first. 

     

    I bought and then sold my FS100 because I thought that the cadence looked all wrong to me. Here's an example (not shot by me), which in all other respects looks good to me: 

     

     

    To me it looks like a combination of terrible image stabilisation (especially when she walks at 0:24) and a slow shutter speed. So when the camera is handheld, every second you get smooth motion followed by a hard drop, which is not very pleasant.

  4. It's mostly codec implementations that do this, I reckon.

     

    Since non-I-frame codecs don't store every frame as a complete picture, motion is usually more "smudged". All Sony cams look like this to me. Codecs like AVCHD take one complete image, then make, let's say, the next 12 or so frames by altering that full, or "I", frame. Oddly, the C100 is AVCHD but has great cadence... there's always an exception ;)

     

    Raw streams and ProRes streams tend to look great motion-wise, but they're more like individual film frames, so they would.

     

    It's often down to taste. I wasn't a fan of the FS100 or GH2, but liked even the 550D; somehow I think Canon nailed it. Raw and I-frame codecs get rid of most of this issue, though.

     

    Then it comes down to other factors, like colour and just the overall "look".

     

    Those are All-I codecs, to be precise, not just codecs that contain I-frames :)

    Every Blu-ray movie we watch has lots of P- and B-frames between the I-frames, yet you usually don't experience those cadence perception problems. That's just how MPEG has worked since the DVD days, or even the older Video CD era.

     

    If the motion cadence problem is actually frame jitter, then you can test it experimentally.
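    A minimal sketch of such a test, assuming you have already extracted per-frame presentation timestamps (e.g. with a tool like ffprobe); the function name and the jitter definition (standard deviation of inter-frame intervals) are my own choices for illustration:

```python
from statistics import mean, pstdev

def frame_jitter(timestamps):
    """Given frame presentation times in seconds, return (mean interval, jitter).

    Jitter here is the standard deviation of the inter-frame intervals;
    a capture at a perfectly constant frame rate would show ~0 jitter.
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(intervals), pstdev(intervals)

# Perfectly regular 25 fps timestamps -> interval 0.04 s, near-zero jitter
ts = [i / 25 for i in range(100)]
avg, jitter = frame_jitter(ts)
print(round(avg, 4), round(jitter, 6))  # 0.04 0.0
```

    A camera with irregular frame timing would show a clearly non-zero jitter figure, which you could then compare across bodies.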

  5. I don't think Cinelike D is full log; it is slightly flat, though.
     
    You don't need a "colourist" to shoot Log Gamma, I'm not sure where that comes from. You just need to learn a few things a colourist would have to know.

    I'm sorry, I've never done it myself. That's just what I've read about it:

    I know directors want total control over their image, and I respect the colourists' suggestions and their hard work. But still, on low-budget shoots, where you don't create a LUT for every scene beforehand and can't afford a monitor with custom LUTs, the DPs lose control when shooting Log.

    Advice to producers - the 30 second version
    I don't have time for a grade; I want a finished picture straight out of the camera: Use Standard Gamma 5 (STD5)
    I / my editor will be grading it on our in-house edit suite: Use Hypergamma 7 or 4 (HG7 / HG4)
    It's being finished in a dedicated grading suite, by a full-time colourist: Use S-log


    S-Log:
    Exploits the full range of the F55 chip, but must be graded by an experienced colourist. Easy to over-expose; read up before shooting. Sony says S-Log has 1300% dynamic range, capturing 14 stops of latitude.


    Nate Weaver: It's pretty simple. Hypergammas make an image that is kind of "ready to go" from an editorial standpoint. It's intended for people who like to paint their cameras and create in-camera what gets seen in the final product. Intended for minimal color correction.

    S-Log, on the other hand, is intended for folks who want the cleanest, unadulterated image going into post. Full sensor dynamic range, no matrix "looks" in the camera, no sharpening, nothing. The philosophy of "raw", but still recorded into a video signal. It also assumes there is somebody in post who has the tools and skills to make a pretty image, most likely a colorist.


    In post-production, as the curve is so close to Cineon, you will be able to use almost any Cineon-compatible LUTs or Looks. It's also very, very close to Arri's Log-C curve, so LUTs designed for Log-C will work very well with S-Log3 and S-Gamut3.Cine, making it much easier for many colourists to transition from cameras like the Alexa, or from a film-based workflow, to material from the F5/F55.

    ---
     

    Log gamma is a bumped logarithmic gamma curve that effectively squeezes in more dynamic range. If you start in post with a purpose-made LUT you'll be back in normal gamma and can grade from there, or you can grade from log if you like. Using it doesn't require that you make your living exclusively colouring commercial footage!
     
    It's worth noting that you get more tolerant highlight protection in log, but unfortunately it becomes even harder to nail exposure correctly. 
     
    So if you're going to shoot log, spend some days in post correcting different shots so you get a feel for it.
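    The "de-log" step is easier to see in code. This is a toy log transfer curve, not Sony's actual S-Log2 math; the constant and function names are illustrative only. The point is that a matching inverse curve (which is all a technical LUT really is) takes you straight back to linear/normal gamma:

```python
import math

A = 0.05  # toe offset for the toy curve, chosen arbitrarily for illustration

def log_encode(lin, max_lin=16.0):
    """Map linear scene light in [0, max_lin] into [0, 1] code values,
    spending more code values on shadows and compressing highlights."""
    return math.log(1 + lin / A) / math.log(1 + max_lin / A)

def log_decode(code, max_lin=16.0):
    """The exact inverse: what a 'de-log' LUT applies in post."""
    return A * ((1 + max_lin / A) ** code - 1)

# Round-tripping through encode + decode recovers the linear value,
# after which you grade as usual.
x = 2.0
roundtrip = log_decode(log_encode(x))
print(round(roundtrip, 6))  # 2.0
```

    The curve itself doesn't lose anything that the inverse can't undo; what you actually trade away is code-value precision per stop, which is why exposure discipline matters more in log.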

    So what's the workflow?
    Do you just nail the exposure and then use some standard LUT (or de-log) in post - and voila - the colors are right?
    Or do you need to create a LUT for every scene before you shoot?
    Do you have to spend days of work on the colors only if you create a unique palette?
    For example, if someone shoots with F55, slog3 - and then simply applies Cineon compatible LUTs - will there be a good film-looking result, without extensive color corrections?
  6. Yeah, this confused me a little too. Does sound redundant. I think it's just a matter of the writer not understanding the lingo... Although admittedly I've never fully understood why one log curve is different from another. Like, why is log in the Blackmagic cameras so much flatter than Canon Log in Canon cinema cameras? Obviously for practical reasons it's harder to deal with such a flat curve when shooting 8-bit, but I've never really gotten it from a mathematical standpoint.

    Those gammas are intended to be used in different shooting and production situations:
    http://www.johnhoare.tv/f55gammas.htm
    http://blog.abelcine.com/2013/01/18/sonys-s-log2-and-dynamic-range-percentages/
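    On the mathematical side of the question above, a toy calculation (assuming an idealised pure log curve and an 8-bit code range; the function name is mine) shows why a curve covering more stops necessarily looks flatter:

```python
def codes_per_stop(stops, code_values=256):
    """An idealised log curve spreads the code range evenly across the
    captured stops, so more stops of DR means fewer codes per stop,
    i.e. a lower-contrast, 'flatter' image."""
    return code_values / stops

print(round(codes_per_stop(12), 1))  # 21.3 codes per stop for a 12-stop curve
print(round(codes_per_stop(14), 1))  # 18.3 codes per stop for a 14-stop curve
```

    So a camera whose log curve claims more stops has to squeeze each stop into fewer code values, which is exactly the flatness you see, and also why flat curves hurt more in 8-bit than in 10-bit.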
  7. Really excited about this and I feel like it will come out right when the Shogun launches.

    Aren't those Cinelike profiles "loggy" already? This rumour sounds to me like they're going to introduce another log profile.

    Log is amazing but most people who dont know how to color have no need for it

    If you shoot log, you NEED a colourist ) There are plenty of GH4 vids on Vimeo shot in Cinelike whose makers are shouting: I don't know what to do with the colours! )
  8. Ah yes... They prove in the video that the base ISO in S-Log2 is 3200 minimum. That sucks.

    That's so strange. The only purpose of S-Log2's existence is to capture up to 14 stops of DR. Why would they limit S-Log2 usage by sensor sensitivity? What's the point of S-Log2 if the sensor then captures 10.5 stops of DR at best and its colour performance drops so much?

  9. I'm as excited as anybody over this camera, but one reality check: if this sensor is really as good as everybody's hoping, why is it showing up for the first time ever in a $2500 NEX body?  It just doesn't make sense from Sony's perspective. Having Den Lennie shoot the promos and all the non-line skipping, low megapixel (or rather, APPROPRIATE megapixel) etc talk gives it the whiff of being aimed at video people strongly.

     

    Far more people are going to buy it for stills. Almost any enthusiast photographer is keen to shoot in low light.

    And yet Sony can put this sensor in an FS-something, add a 10-bit colour output and sell it for 10 grand.

  10.  

     

    Nobody seems to be talking about "compression efficiency" in regards to the A7S. But most bash the 50mbit/s, furthermore without having used it. How is that for conceptual pixel peeping?

     

     

    I don't see people here bashing the A7S for the bitrate... let's be solid in our opinions and wait for some A7S footage to discuss )

  11. The GH4 is also a bit thin in the skintone domain. The reason 5D3 RAW is called the baby Alexa is that skintones are handled extremely well, without having to go to extreme lengths to make skin look good.

     

    That's the reason we'd like Canon to make something to compete with the GH4 and A7S. OK, even if it's a crop, and maybe not even 4K, but with all that jazzy colour stuff.

  12. There seems to be a lot of criticism about this 50Mbit/s of A7s but no one complains about the 4K from GH4 being "crippled". The equivalence if I understand correct would be to have 4K at 200Mbit/s.

     

    You were given an answer in a nearby thread, which you've probably missed. Once again, that "transfer rate" is just how much data you get for every pixel when you record internally. Compressor efficiency and workflow are not taken into account, so it's just useless conceptual pixel peeping with theoretical pixels.

     

    There is no equivalence, unless you're trying to say "if I took two wheels off my car, its equivalent speed would double".

    Or, as they say, if grandma had balls, she would be grandpa.

  13. Argh. Just saw a fantastic aquarium clip from the gh4. Wow! What clarity.

    But regarding the compression, isn't it true that the A7S sends more data? Not many people talk about that. Since 4K is presumably the main use of the GH4, and this is done at 100 Mbps, that is equivalent to 24 Mbps in HD, whereas, as we know, the A7S does HD at 50 Mbps. No?

    Those Mbps are not about sending data somewhere in the middle of the pipeline; it's just what you get after compression, at the end.

    The GH4 compresses HD at 100 and even 200 Mbps, so no. It looks like the A7S doesn't have enough processing power to compress 4K internally with adequate quality.
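    For what the raw per-pixel comparison is worth (and, as argued above, it ignores codec efficiency entirely), here is the arithmetic behind those "equivalence" claims, using the advertised bitrates at an assumed 25 fps:

```python
def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average coded bits available per pixel per frame."""
    return bitrate_mbps * 1e6 / (width * height * fps)

# Advertised bitrates, not measured streams; 25 fps assumed throughout.
a7s_hd = bits_per_pixel(50, 1920, 1080, 25)   # A7S internal HD
gh4_4k = bits_per_pixel(100, 3840, 2160, 25)  # GH4 UHD
gh4_hd = bits_per_pixel(200, 1920, 1080, 25)  # GH4 All-I HD

print(round(a7s_hd, 2), round(gh4_4k, 2), round(gh4_hd, 2))  # 0.96 0.48 3.86
```

    So per pixel, the GH4's 100 Mbps UHD does carry about half the data of the A7S's 50 Mbps HD, but that number says nothing about how well each encoder spends its bits.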

  14.  

    ...it is likely H.265 will win the battle in the long run even if VP9 offers better quality for the same bitrate. On the other hand Google’s role in developing VP9 and the lack of licensing fees could really push it in front of H.265.

    It happened once, when VP6 was better than H.263. Now you find your old VP6 video and... ooh, how do I play this back?

    Hardware support is a big deal.

  15. The smaller sensor needs the lens to be opened wider in order to get as shallow a DOF, and thus is closer to its theoretical limit.

     

    Nah, to keep that DOF with a smaller sensor you can use a longer lens and take a few steps further back )
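    For the more common case where you keep the same framing and camera position, the usual back-of-envelope equivalence is to scale both focal length and f-number by the crop factor; this is a rough depth-of-field rule of thumb, not an exact optical model, and the function name is mine:

```python
def full_frame_equivalent(focal_mm, f_number, crop):
    """Same framing, roughly similar DOF on full frame:
    scale focal length and f-number by the crop factor."""
    return focal_mm * crop, f_number * crop

# A 25mm f/1.4 on Micro Four Thirds (crop ~2.0) frames and blurs
# roughly like a 50mm f/2.8 on full frame.
eq = full_frame_equivalent(25, 1.4, 2.0)
print(eq)  # (50.0, 2.8)
```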

     

    To me, a larger sensor system just gives you more options in shooting: you have more control in capturing light, angles, geometry and DOF, in exchange for mobility and money.

     

    So you think through your shoot and check whether you can pull it off with this or that gear. That's one approach.

     


    For example, can you shoot like this with GH4?

  16. If the coming camera can record the full sensor in a 4:3 aspect ratio at 10bit 4:2:2 I'd be very, very happy. It's the way anamorphic shooting was intended many years ago, and for good reasons.

     

    Fingers crossed.

     

    As it says in the spec sheet, the full-sensor readout speed is only 22.5 fps.
    If we crop the central, 4K-sized part of the sensor: at 1/81300 s per line and 2160 lines needed, that's 37.6 fps.

    If we crop the readout to 16:9 (at full sensor width), it would read out at 31.1 fps.

  17. The 12bit raw Cinema DNG for some reason did not utilise the full 1080p frame in Resolve – as you can see in the sample shots it has a black border. I really have no idea what is going on there!

    I've talked to a guy working with Ikonoskop; he told me that its resolution is somewhat bigger than Full HD, so you can crop. That could be the reason for the black borders.

    Right, 1966 x 1092 as stated here.

  18. Hi,
    > So the key to home cinema is probably more in the seating position and screen size than whether you choose 4K or 2K

    It's also the choice of how much of your field of view is filled by the screen. If we match the optical resolution of two screens, 2K and 4K, the 4K one extends much farther into your peripheral vision. That has an interesting effect involving the vestibular apparatus.
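    A quick way to see this: the angle a screen subtends at the eye follows directly from its width and the viewing distance. A sketch (the function name and the example sizes are mine; "matching optical resolution" here means sitting at the same pixels-per-degree):

```python
import math

def horizontal_fov_deg(screen_width, distance):
    """Horizontal angle the screen subtends at the viewer's eye (same length units)."""
    return math.degrees(2 * math.atan(screen_width / (2 * distance)))

# At the same viewing distance, a screen with twice the pixel width can be
# twice as wide at the same pixels-per-degree, pushing far into peripheral vision.
print(round(horizontal_fov_deg(1.0, 2.0), 1))  # 28.1 deg: 1 m wide "2K" screen at 2 m
print(round(horizontal_fov_deg(2.0, 2.0), 1))  # 53.1 deg: 2 m wide "4K" screen at 2 m
```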