Posts posted by cpc

  1. 14 minutes ago, hyalinejim said:

    As a starting point, middle grey as understood by the 5D3's internal meter clocks in at around 128 in RGB values, or 50% if you use the "percent" readout. This remains roughly true across all stock picture styles and contrast levels. But values above and below this middle point would change when you change the picture style.

    If you set the spotmeter to show RAW EV (number of stops below clipping) then you're getting a readout of the actual RAW data, which is independent of picture style. Here, middle grey as understood by the metering system is approx -4 EV.

    @kidzrevil You could also try setting your histogram to show RAW values - this helps an awful lot with exposure.

I recall something like 115 for mid grey, but it's been 4 years since I last shot the 5D3, so I may be wrong. Having raw white-clip referred values is pretty cool; we didn't have these back then. IMO, the problem with using a histogram for exposure is that it promotes post-unfriendly habits like ETTR. The spotmeter, on the other hand, is all about consistency.

  2. 14 hours ago, seku said:

    (as much as i love the RAW of the 5D ... i want a waveform for monitoring. When i got exposure right, i love the 5D. The histogram helps me with ETTRing, and making sure i don't overexpose. But i have no clue where my skintones fall).

     

You can use the spotmeter for this. This simple tool is faster and better than a waveform for judging skin exposure, and not nearly as obtrusive as false color (you can have it on ALL the time). All you need to make good use of it is the mapping between the numbers you see in the profile you are monitoring with (say, you have the camera set to Standard while shooting raw) and the numbers you'll get in post after doing your raw import routine. Shoot a grey chip chart, record what goes where in live view (or just record a clip in Standard), import the raw footage and make a table with two columns. Voila, you now know that +1 is ~175 in "spotmeter values" and falls wherever in your imported footage. You don't need to memorize the mapping with great precision. All you need is to know where the -3 to +3 range falls, as this is where the important stuff is in an image.

    Knowing your tonal curves is useful in most situations anyway. But it happens to be priceless when shooting raw and monitoring an image which you know is different from what you'll be seeing in post.
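    The grey-chart routine above boils down to a small lookup table. All values below are hypothetical placeholders; measure your own chart to fill in real numbers for your monitoring profile and raw import routine.

```python
# Sketch of the two-column mapping described above. The spotmeter and post
# values are hypothetical placeholders, not measurements from any camera.
STOPS = [-3, -2, -1, 0, 1, 2, 3]               # stops relative to mid grey
SPOTMETER = [18, 38, 76, 128, 175, 218, 245]   # live-view readout (Standard)
POST = [10, 24, 55, 105, 160, 205, 240]        # same chips after raw import

def spotmeter_to_post(reading):
    """Piecewise-linear lookup from a live-view spotmeter value to the
    expected post-import value, using the measured chart chips."""
    if reading <= SPOTMETER[0]:
        return POST[0]
    if reading >= SPOTMETER[-1]:
        return POST[-1]
    for i in range(1, len(SPOTMETER)):
        if reading <= SPOTMETER[i]:
            t = (reading - SPOTMETER[i - 1]) / (SPOTMETER[i] - SPOTMETER[i - 1])
            return POST[i - 1] + t * (POST[i] - POST[i - 1])
```

    With the table in hand, a live-view spotmeter reading translates directly into where that tone will land in your graded footage.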

  3. 1 hour ago, jax_rox said:

    As you've sort-of covered... it isn't overexposure necessarily. Knowing the curve and where it places its values is extremely important when shooting log. Moreso when shooting 8-bit log. Blanket over-exposure won't necessarily give you better results. As for Arri, I regularly rate them at 400. 800 may give you the greatest spread of dynamic range, and results are generally acceptable. I find rating at 400 better. Knowing the curve and placing your exposure properly will give you even better results.

    One discriminating characteristic of log curves compared to negative is that (on most of them) there is no shoulder. (Well, the shoulder on Vision3 series films is very high, so not much of a practical consideration unless you overexpose severely.) An effect of this lack of shoulder is that you can generally re-rate slower without messing up color relations through the range, as long as clipping is accounted for. Arri's Log-C has so much latitude over nominal mid gray that rating it at 400 still leaves tons for highlights. I don't think any other camera has similar (or more) latitude above the nominal mid point? Pretty much all the other camera manufacturers inflate reported numbers by counting and reporting the noise fest at the bottom in the overall DR. No wonder that a camera with "more" DR than an Alexa looks like trash in a side-by-side latitude comparison.
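    To put a number on that headroom, here is a small sketch using Arri's published Log-C encoding (V3 parameters at EI 800; x is a linear scene value with 0.18 as mid grey). Treat it as an illustration of where mid grey sits in the signal range, not as Arri's exact latitude figures.

```python
import math

# ALEXA Log-C (V3, EI 800) encoding parameters, as published by Arri.
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def logc_encode(x):
    """Map linear scene reflectance to the 0-1 Log-C signal."""
    if x > CUT:
        return C * math.log10(A * x + B) + D
    return E * x + F

mid = logc_encode(0.18)   # ~0.39: mid grey sits well below half the range
```

    Mid grey encodes to roughly 0.39, and each further stop consumes only about 0.07 of the 0-1 signal range, so the curve alone leaves coding room for roughly 8 stops above mid grey (the sensor clips somewhat before the curve runs out).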

Banding is a combination of bit-depth precision, chroma subsampling, compression, and stretching of the image in post. S-Log3 is so flat (and doesn't even use the full 8-bit precision) that pretty much any grade qualifies as an aggressive tonal change. S-Log2 is a bit better, but still needs more exposure than nominal in most cases.

Actually, I can't think of any non-Arri cameras that don't need some amount of overexposure in log, even at higher bit depths. These curves are ISO rated to maximize a technical notion of SNR, which doesn't always (if ever) coincide with what we consider a clean image after tone mapping for display. That said, ETTR isn't usually the best way to expose log (or any curve): it means too much normalizing work in post on a shot-by-shot basis. Better to re-rate the camera slower and expose consistently.

In the case of the Sony A series it is probably best to just shoot one of the Cine curves. They have decent latitude without entirely butchering mids density. Perhaps the only practical exception is shooting 4K and delivering 1080p, which restores a bit of density after the downscale.
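    The banding mechanism above can be illustrated with a toy example: a smooth shadow gradient is encoded with a flat curve (a simple 0.25 power stands in for a log profile here; it is not any vendor's actual curve) and quantized, and the distinct code values are counted. A grade in post can only merge codes, never create new ones, so fewer codes in the capture means coarser bands after an aggressive stretch.

```python
def count_levels(bits, lo=0.01, hi=0.04, samples=2000):
    """Count the distinct code values a flat (log-like) curve spends on a
    shadow gradient spanning linear values lo..hi, at a given bit depth."""
    n = 2 ** bits - 1
    return len({round(((lo + (hi - lo) * i / samples) ** 0.25) * n)
                for i in range(samples + 1)})

# 8-bit capture leaves only a few dozen codes for this two-stop shadow range;
# 10-bit leaves roughly four times as many, hence much smoother gradients.
levels8, levels10 = count_levels(8), count_levels(10)
```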

  5. You will still get the best HD/2K delivery quality from a 4K camera even if you never deliver in 4K. A Sony FS700 + Odyssey 7Q goes for $3-4k nowadays and can shoot great 4K 12-bit raw. And it can also do high fps for slow-mo, which may be appealing for your music video/slow movement scenes.

  6. 15 hours ago, Don Kotlos said:

    He said the following at the Toronto Film Festival: 

    "But with 16 you have an immediate patina, an immediate abstraction, an immediate thing that makes it a piece of work"

    And I wonder, could it be that this "immediate patina & abstraction" almost dictates a frame of mind and actually helps in the creative process? 

    At least thats how I felt back in the day when shooting with T-MAX 400, and I guess a similar feeling could exist when shooting jpegs with the various film styles of Fuji or Olympus. Shooting RAW has its advantages but I see myself enjoying the process less and less.  

    Every movie from Aronofsky (other than Noah), left a lasting impression so I can't wait to get my eyes on mother. 

Well, shooting negative film is certainly much closer to shooting raw than to shooting baked (as in JPEG). The negative is unusable until printed, and there are a great many choices that need to be made during exposure, development and printing, choices which command the tonal and color characteristics of the image. There may be "immediate patina and abstraction" in the result, but getting the result isn't immediate by any means.

  7. In the ASC mag interview Yedlin readily acknowledges there is nothing new in the video (for technically minded people). It is deliberately narrated using language targeting non-technical people. His agenda is that decision makers lack technical skills but impose technical decisions, easily buying into marketing talk. Also, in the video he explicitly acknowledges the benefits of high res capture for VFX and cropping.

  8. Testing camera profiles comes with the job characteristic of being a videographer/cinematographer. The easiest (cheapest) way to get an idea of DR distribution is to shoot a chip chart at a few exposures with a constant increment and use these to make a DR curve. Here is an example I did years ago using a Canon DSLR:

    chartDR.png

     

Also, it has been mentioned already, but log profiles do not introduce noise. They raise the blacks, making the noise visible. If you see more noise in the graded image, you are not exposing the profile properly. Most log profiles need to be exposed slower than nominal, because the nominal ISO of a profile is chosen to maximize some notion of SNR, which is not necessarily the rating you want for a clean image (after grading). In fact, cameras do this all the time. Take a Sony A7 series camera, for example, and look through the minimum ISOs of its video profiles. See how the minimum ISO moves around between 100 and 3200 depending on the profile? That's because the camera is rated differently depending on how it is supposed to redistribute tones according to the profile.
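    The chart procedure above can be sketched in a few lines. All the numbers here are hypothetical placeholders; the point is the method: photograph the same patch at fixed exposure increments, then count the stops that land between the noise floor and clipping.

```python
# Hypothetical chip-chart measurements: exposure offset in stops -> mean
# 8-bit code value of the same patch in each shot (made-up numbers).
readings = {
    -6: 6, -5: 9, -4: 16, -3: 30, -2: 55, -1: 90, 0: 128,
     1: 170, 2: 208, 3: 235, 4: 250, 5: 254, 6: 255,
}

def usable_stops(readings, floor, clip):
    """Count stops whose patch value sits above the noise floor and below
    clipping -- a crude DR estimate from the chart series."""
    return sum(1 for v in readings.values() if floor < v < clip)

# With an assumed noise floor of 8 and clip at 254, this chart shows 10
# usable stops. Plotting code value against stops gives the DR curve.
dr = usable_stops(readings, 8, 254)
```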

  9. 4 minutes ago, Jimmy said:

    Just ran a full test.

    Filled the card with 3072x1308 24fps... ISO100… not much movement.

    Data rate is 82MB/s... Stayed in the green throughout.

    Will test some more difficult shots later. See if it crashes.

High detail, deep focus, and bright scenes will produce the highest rate. Movement is irrelevant because the compression is intra-frame.

  10. 36 minutes ago, mercer said:

    I just read that the human eye cannot perceive anything over 8bit... is that true? If so, I still see the benefit of it for professional colorists, but for the novice, a light grade in 8bit should be more than enough. I also understand why Panasonic gave 10bit because they needed that internal Log to compete with Sony. Since Panny's implementation of VLog with the GH4 was disastrous for anybody who only used it internally, 10bit was necessary. And cLog was designed with 8bit color in mind. I would think the 4:22 aspect is more important than the 10bit?

The human eye does not perceive bits, so such a claim makes no sense by itself. The necessary precision depends on many factors. There are multiple experiments on the tonal resolution of the eye. Perhaps the most applicable are those done during the research and setup of the digital cinema specifications. These determined that for movie theater viewing conditions, for the typical tonal curves dialed in by film colorists during the DI process, and for the encoding gamma of the digital cinema spec (a 2.6 power gamma), you'd need around 11 bits to avoid noticeable posterization/banding artifacts after discretization (seen in gradients in the darks, by the way). This was rounded up to 12 bits in the actual specification. In an office viewing environment (a brighter environment) you can do fine with less precision.

This is about delivery though -- meant for images for display. You'd usually need more tonal precision when you capture an image, because it needs to withstand the abuse -- pushing and stretching -- of post production. The precision will mostly depend on the transfer curves: you need relatively more for linear images, and relatively less for logarithmic images. With today's DRs, 8 bits is absolutely not enough anymore for log curves (and not even remotely close for linear images). It usually does fine for delivery to consumer devices (some of these displays are 8-bit, some are 6-bit; likely moving to 10 bits in the future).
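    A back-of-the-envelope version of the argument above: for a 2.6 power encoding gamma, check the relative luminance jump between adjacent code values near the bottom of a cinema-like contrast range at different bit depths. The ~1% visibility threshold and the 2000:1 contrast are simplifying assumptions, not the actual DCI derivation.

```python
def worst_step(bits, gamma=2.6, contrast=2000):
    """Relative luminance jump between adjacent codes at the darkest
    visible level of a display with the given contrast ratio."""
    n = 2 ** bits - 1
    v_black = (1 / contrast) ** (1 / gamma)   # code fraction at black level
    k = max(1, int(v_black * n))              # first code above black
    l0, l1 = (k / n) ** gamma, ((k + 1) / n) ** gamma
    return l1 / l0 - 1

# 12 bits keeps the worst step near the ~1% visibility ballpark; 8 bits is
# an order of magnitude away, which is why dark gradients band so easily.
steps = {b: worst_step(b) for b in (8, 10, 12)}
```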

  11. 35 minutes ago, mercer said:

    @fuzzynormal you can process the color from the outset, but you don't have to. For CDNGs, once you bring the Raw files into Resolve, the only real necessity is to turn them into ProRes, unless you only use Resolve from ingest to export. 

    It's probably smart to also fix your WB, if needed, before rendering as ProRes.

     Then open the ProRes files in your NLE, edit and color as you like. If you want more latitude you can send the project back to Resolve for color and then send it back to your NLE for delivery.

    I usually just use the ProRes files in my NLE for edit, color and delivery and never go back to Resolve.

Some people use different Log profiles. For the 5D3 I use an app called MLRawViewer, or a newer one called Footage, with which I can turn the MLV files directly into ProRes files as C-Log, S-Log, Log C, etc. I highly doubt they are the exact curves, but Log to Rec709 LUTs work well with them. It's a fairly simple process but it is time consuming. There are other processes that make ML even faster (MLVFS). With Blackmagic you can just ingest, edit, color, title, and deliver using the native CDNGs.

    With the C200, it seems like it will be possible to edit natively but either way it will be a lot simpler with the proxy workflow. 

The true power of raw shooting is that you don't need to fix anything in advance: you can raw develop, edit and color at the same time in a fluid, flexible and creative post workflow. At least in Resolve you can. Proxies shouldn't be necessary. I don't see how a proxy workflow can be simpler; you add a step (possibly two, if you round trip). :)

  12. 1 hour ago, Ehetyz said:

    Just went and tested this out, 4,6k60p is available in lossless RAW as well as 3:1 and 4:1. The RAW/Prores compression ratios are completely separate from the frame rate/resolution.

    Well, this kind of makes my argument irrelevant then. :)

  13. 1 hour ago, andrgl said:

    You have a source for your blackmagic claims? None of the information you've posted is available from official channels.

    It's funny because your post omits the other half of the usual internet claim of 2 x 11-bit values conformed to 12-bit log. (15 stops has to come from somewhere.) 60fps on the 4.6k can be captured lossless btw.

    Well, I am the source. I don't need official channels for this, I trust math better than any channel. :) 

The ~11 bits used by the BMDFilm curve are orthogonal to how their pipeline constructs a 12-bit digital image. They are the result of the lack of a linear toe (which creates sparsities in the darks, i.e. values which never occur), as well as of unused values at the top end of the range.

    I haven't looked at the specs recently, but 60 fps 4.6K lossless raw would be in excess of 700 MB/s, which I am not sure is even supported by CFast cards currently.

    edit: FWIW, I've written this, I guess it should be ok as far as credentials go: http://www.slimraw.com/
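    For reference, the data rate guess above comes from simple arithmetic. The sensor crop and lossless compression ratio are assumptions, not Blackmagic specs.

```python
# Uncompressed 4.6K 12-bit raw at 60 fps, in MB/s (assumed full-sensor crop).
width, height = 4608, 2592
bits_per_px, fps = 12, 60
uncompressed = width * height * bits_per_px * fps / 8 / 1e6   # ~1075 MB/s

# Lossless raw typically lands somewhere around 1.5-2:1 depending on content;
# 1.6:1 is an assumed figure, giving roughly 670 MB/s.
lossless_ratio = 1.6
recorded = uncompressed / lossless_ratio
```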

  14. 1 hour ago, andrgl said:

    How is that possible? Any 12-bit capture will have 4 times the amount of bits than a 10-bit file.

    Tried to find a white sheet on Canon RAW but was unable to. Care to share one?

12 bits is 2 bits more than 10 bits; that is 1.2 times the number of bits (and 4 times the number of values that can be encoded). These are nominal bits though. You can think of this as the available coding space. What you put in there is another story. First, the tone curve might not use all the available coding space (for example, the BM 4.6K log curve is closer to 11 bits). Then compression comes in. A BM camera may reduce actual precision to as low as 7-8 bits in HFR (4:1 compression), depending on image complexity.

You can read the paper on Canon Log (C-Log); Canon raw uses the same curve sans debayer.
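    The bits-vs-values distinction above in two lines: bit count scales linearly, while coding space (the number of distinct encodable values) scales exponentially.

```python
# 12-bit vs 10-bit: 1.2x the bits, but 4x the encodable values.
bits_ratio = 12 / 10              # 1.2
values_ratio = 2 ** 12 / 2 ** 10  # 4096 / 1024 = 4
```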

  15. 7 hours ago, andrgl said:

    12bit RAW is only 30p. :< Sooooooo weak! Damn, basically a perfect camera expect for the gimped output.

    RED and Blackmagic still own the throne for true internal 60+fps RAW.

    Hopefully the C200 lights a fire under the asses of Sony and Panasonic, would be amazing to get 12/14 bit RAW internal recording on their consumer grade shit.

    Don't get fooled by nominal bitdepth. A 10-bit raw frame from the Varicam LT or the Canon C500 contains much more color info than a 12-bit 4:1 Blackmagic raw frame.

  16. 2 hours ago, tugela said:

    No, I don't think it has anything to do with withholding stuff. I have just noticed from their press release that the camera has a DV6 processor - that is a new one. It probably has a different encoder than the earlier DV5 processors, so hardware encoding for 4K will have different bit rates (as well as higher frame rates). There may not be any higher bit rates as a result. 

    The high bit rates with DV5 were likely due to computational restraints to keep the processor in it's thermal envelope. A lower bit rate means more computation and implies that the DV6 is more thermally efficient, which is very good news for Canonites in the greater scheme of things. The implication of the new processor that people are perhaps not picking up on is that there will be a corresponding stills processor, the Digic 8, which will be the sibling of the DV6. It means that hardware encoding of 4K video might finally arrive in Canon consumer cameras (as well as the prosumer ones which currently are forced to use mjpeg as a codec).

    It is actually the other way around. For the same compression algorithm, higher bit rates need more processing power than lower bit rates in both encoding and decoding. :)

  17. 5 minutes ago, Jimmy said:

    And the c300 mk ii

    For now, it is safe due to the codec... But once the c200 gets the full range of codecs... What next for the c300 mk ii... Gotta be worth $7k tops?

    24-bit audio, time code, genlock. Possibly other niceties.

Also, the C300 II is priced at $12k, so it is $3k more, not $7k.

    6 minutes ago, tugela said:

Apparently you get RAW or 8 bit 100/150 mbps H.264 (30/60p), those being the only two options. It makes some sense since RAW has no hardware encoding while H.264 is limited by the thermal envelope of the Digic processor. But, IIRC the other Cx00 cameras can shoot H.264 at higher bit rates, suggesting that the camera the pre-reviewers have is not finished.

    At the price they are charging unless you really need RAW footage, you would be better off with an alternative camera unless something changes before release.

At 1 Gbps their raw is less data than ProRes 4444. It is around 3:1 compressed. Withholding the XF-AVC codec until 2018 is actually kind of smart, because they buy some time to see what comes next from Panasonic/Sony, and at the same time don't kill the C300 II immediately.
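    A rough sanity check of the ~3:1 figure above, assuming a 4096x2160 12-bit bayer frame at 24 fps (assumed figures, not Canon's published specs):

```python
# Uncompressed 4K 12-bit bayer data rate at 24 fps, in Gbps (one raw value
# per photosite, assumed frame geometry).
uncompressed_gbps = 4096 * 2160 * 12 * 24 / 1e9   # ~2.55 Gbps
ratio = uncompressed_gbps / 1.0                   # vs a ~1 Gbps recorded rate
```

    That lands at roughly 2.5:1 at 24p with these assumptions, in the same ballpark as the ~3:1 quoted.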

  18. 5 hours ago, mercer said:

    Yeah that's basically how I monitor, unless it's a really cloudy day, I'll bump it a little. What monitoring Profile do you recommend. I'm so used to the Prolost Flat look when shooting, so I have it set to that, but I am open to other ideas if it will yield better results in the long run. For instance, I am working on testing some B&W Raw, do you recommend setting the Picture Profile to Monochrome?

No profile, flat or otherwise, will be able to properly capture what you will be able to recover in post, so you might as well use it to help with other things. A punchy profile like Standard will help with manual focusing. Couple this with DIGIC peaking + Magic Zoom, and you'll be surprised at what focusing precision is possible with just the camera display. Also, learning how the monitoring profile relates to your specific post workflow pays good dividends, especially if you are in the habit of exposing consistently. And if you know which profile value ends up where in post, then all you need for exposure is the digital spotmeter (which is really the ultimate digital exposure tool anyway).

For B&W, Monochrome might be useful in that you get to view the tones only, without color distractions, and this might help with lighting choices and judging separation. But it can also be a detriment, since color cues can help with spatial orientation and framing if there is movement in the shot. And the profile's mapping to grey values will not necessarily match the channel mix you will do in post, so it might be misleading.

  19. It shouldn't really matter even if you are on auto WB. You can always adjust one shot in Resolve and replicate the raw settings to all the others. And not bothering with WB while shooting is one of the advantages of shooting raw. In most cases leaving it at 5600K or 3200K should be fine for monitoring.

  20. 58 minutes ago, tupp said:

    ...

    I am sure you can link to a hundred examples like these. Also sure that you don't need me to produce links to 900 focus tracking shots for you. :)

    These are cool and all, but they are certainly not the norm. It is likely that you remember them because they stand out. Do we really need to establish that a huge part of focusing is entirely technical, with no creative intent whatsoever? You wouldn't use AF so that it gets creative on you; software has no intent of its own, it can't conceive anything (well, unless we dive into some metaphysical depths, which we shouldn't). You'd use it to do the chores, not the thinking.

     

If you think you can track focus in a moving shot better than decent AF, you are overestimating your focus pulling skills. In a few years even pro pullers won't come close to AF success rates. Focus is simply one of those things an algorithm can do vastly better than humans. Yes, there are some specific shots where doing it through AF would require more work, but how often do you actually rack or do fancy creative focus work? 90% of pulling is tracking objects.

  22. On 5/2/2017 at 4:09 PM, Arikhan said:

    @cpc

    Just beeing curious: How does the 1080p-ML-footage looks, when upscaled to 4K? Is the look better when

    A. You upscale to 4K in post or
    B. When you display the 1080p-ML-footage on a 4K TV (as most 4K TVs do a very good upscale of FullHD footage) ?

    No idea. I haven't attempted upscaling to 4K. I don't think it makes much sense. People used to upscale 2.8K Alexa to 4K before Arri enabled open gate, and it looks fine, but 1080p might be a stretch.

  23. 8 hours ago, mercer said:

    Has anybody tried a modest bump to 2.5k or 2.7k. I would love to be able to get 2.5k 2:39 working one day. Hell, even cinema 2K at 48-60p would get me off.

At 2K you'll be giving up the full frame benefits for a minuscule increase in pixel count (I deliberately don't say "resolution"; it is likely you won't get truly higher resolution at 1:1 2K, as 1:1 puts much higher demands on lens sharpness). It is almost certainly better to just upscale 1080p to 2K.
