Shirozina

Posts posted by Shirozina

  1. 1 hour ago, webrunner5 said:

    Bullshit. I have never seen any footage out of the P4K that looks like a BMPCC or a BMCC. And apparently everyone who has bought one on here must have a broken camera, because I haven't seen really any footage from them on here either. Talk is cheap on both sides.

    But sure, as goofy as the P4K is, it is better in a few respects than the old ones. But that was not a hard task to accomplish. I am not buying one, that's a fact. I'll buy a GH5S. Same look, better camera.

    I'm glad the P4K doesn't look like the BMPCC, as I didn't want a 'pseudo film look' that I'd have to wrangle back into something that mixes with other cameras. I didn't buy the GH5S because it wasn't as good as the P4K for what I wanted as an addition to my GH5, i.e. RAW, internal 4K 60p and ProRes, and it's also a hell of a lot more expensive, especially once you fit it out with a set of V90 cards.

  2. 4 hours ago, dia3olik said:

    hey guys! any news about this? maybe someone is still in contact with Atomos about a new firmware with this fix?

    Thanks!

    A few months ago, after I'd raised this twice with Atomos, they said they would refer it to their development team...

    I also think it's the responsibility of the camera manufacturers to let users select data or video range for the HDMI output. Having said that, I don't believe it's simply a data vs video range issue, because I observed that the scopes were accurate but the zebras were not. Someone from BM in an old thread said the zebras in the P4K are driven from the luma (Y) channel, which is fair enough in a Y'CbCr codec like ProRes, but the scopes are in RGB and NLE grade controls work in RGB, so there is obviously some problem in the YCbCr to RGB translation. Other external device manufacturers seem to get this right, so I advise people to keep telling Atomos they need to fix this.
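    To make the range point concrete, here's a minimal Python sketch (not anything from BM or Atomos, just the standard BT.709 maths) showing how the same encoded Y'CbCr white reads differently depending on whether the receiver assumes video range or full/data range:

    ```python
    import numpy as np

    KR, KB = 0.2126, 0.0722        # BT.709 luma coefficients
    KG = 1.0 - KR - KB

    def ycbcr_to_rgb(y, cb, cr, video_range=True):
        """Convert 8-bit Y'CbCr to normalised R'G'B' in 0..1."""
        if video_range:
            # Video ("legal") range: Y' 16..235, Cb/Cr 16..240 centred on 128
            y = (y - 16.0) / 219.0
            cb = (cb - 128.0) / 224.0
            cr = (cr - 128.0) / 224.0
        else:
            # Full ("data") range: Y' 0..255, Cb/Cr centred on 128
            y = y / 255.0
            cb = (cb - 128.0) / 255.0
            cr = (cr - 128.0) / 255.0
        r = y + 2.0 * (1.0 - KR) * cr
        b = y + 2.0 * (1.0 - KB) * cb
        g = (y - KR * r - KB * b) / KG
        return np.clip([r, g, b], 0.0, 1.0)

    # The same encoded value interpreted two ways: Y'=235 is 100% white when
    # treated as video range, but only ~92% when treated as full range.
    print(ycbcr_to_rgb(235, 128, 128, video_range=True))    # -> [1. 1. 1.]
    print(ycbcr_to_rgb(235, 128, 128, video_range=False))   # -> ~[0.92 0.92 0.92]
    ```

    A mismatch at this stage shifts every level, which is one way a Y-driven zebra and an RGB scope can end up telling different stories.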

  3. 4 minutes ago, newfoundmass said:

    You had a bum camera if your 1080p looked like upscaled 720p. The GH3 has lovely 1080p. Moving from the 6D to the GH3 would be like night and day; the GH3 is miles ahead of most Canon cameras, even some of the newer ones (minus the obvious exceptions) in terms of image quality. 

    I was still using my GH3 as recently as last year as a b-cam before retiring it and giving it to my nephew, who has shown interest in shooting video. 

    With that said I wouldn't recommend the GH3 in 2019 unless you absolutely can't afford something a bit newer. It's a great camera, and it is a work horse for sure, but it's hard to recommend a camera that is over 7 years old. 

    A G7 is a very good option on a budget. I don't recommend shooting in 4K and downscaling to 1080p though unless your system can handle 4K editing or you don't mind very long render times. In most instances it just makes sense to shoot in the resolution you'll be delivering in. 

    I agree; looking at some of the samples here and in my own archive, I must have been doing something wrong.

  4. The colour and the grade on that video look terrible, but maybe that's the 'organic/filmic' look that people want?

    I've got the original BMPCC and I liked it a lot, but the P4K is better in every way apart from size. The codec (especially RAW) is so malleable that you can get any look out of it you want, as long as you have the grading skills. If those are limited to slapping on a 'film LUT', then I can see why it may be a disappointment to some people.

  5. I had a GH3 and got rid of it pretty quickly. It was nice to use, but the HD quality was mediocre; resolution-wise it wasn't true HD and looked more like 720p upscaled. Shooting 4K and delivering HD is the way to go, and most good NLEs will let you work in an HD timeline with 4K material, so there's no need to do any manual downscaling.

  6. If you need to transcode for the reasons you explained, I can understand it as a reasonable solution. I tried it a while back and found an obvious increase in things like banding in smooth tones such as skies unless I used one of the highest quality settings with 4:4:4 chroma subsampling, but then the file sizes got incredibly large. ProRes is, after all, a lossy compressed codec. Depending on your subject matter and how much you need to grade, this may or may not matter.
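    For anyone who wants to repeat the comparison, this is a rough sketch of how the transcode could be scripted from Python; it assumes ffmpeg with the prores_ks encoder is on the PATH, and the file names are made up. The 'hq' profile keeps 4:2:2 chroma, while '4444' keeps 4:4:4 at the cost of much larger files:

    ```python
    import subprocess

    def transcode_to_prores(src, dst, profile="hq"):
        """Transcode a clip to ProRes via ffmpeg.

        Profiles: proxy, lt, standard, hq, 4444, 4444xq.
        """
        subprocess.run([
            "ffmpeg", "-i", src,
            "-c:v", "prores_ks",        # ffmpeg's ProRes encoder
            "-profile:v", profile,
            "-c:a", "copy",             # leave the audio stream untouched
            dst,
        ], check=True)

    # transcode_to_prores("clip.mov", "clip_prores.mov", profile="4444")
    ```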

  7. 5 hours ago, webrunner5 said:

    I don't think the Dell XPS 9570 is a great idea. It is a turd according to Amazon. I guess it's fine if you get a good one, but... not many other people's reviews around?

    https://www.amazon.com/product-reviews/B07D3M3377/ref=cm_cr_arp_d_hist_1?ie=UTF8&filterByStar=one_star&showViewpoints=0&pageNumber=1

    On the basis of reading Amazon reviews you wouldn't buy anything...

    No thin laptop is a good idea for tasks that require prolonged, high-intensity CPU and GPU use.

     

  8. On 12/21/2018 at 4:56 AM, TurboRat said:

     even though the highlights are not clipped, they can't be recovered even by highlight recovery in Resolve. 

    If the highlights are not recoverable then they 'are' clipped. The zebras are inaccurate IME and need to be set at 95% or even 90% when exposing using ETTR.

  9. 1 hour ago, Danyyyel said:

    I don't give a damn about the Nyquist theorem, because I, and 99.99% of people, can barely see above 2K on a cinema screen. So what difference will it make, even if I do know that Bayer sensors, since they contain only one color per pixel, need about 1/3 more pixels to reach that resolution? All this math means nothing if you can't see it.

    I can tell the difference. I'm not saying 2K isn't enough, but 4K does have more detail. Detail/resolution is, however, the least important ingredient in 'cinema'.

  10. 1 hour ago, webrunner5 said:

    I really don't think any of the variable ones seem to work when you get up to 6 stops or more.

    That's why they have stop limits. If you're brave you can split the filter, remove the stops and get a bit more density range, as long as you watch out for the X pattern from cross-polarisation.

  11. IME you have to spend big to get a VND without most of the common issues: softness with long lenses, flare and colour shifting.

    https://www.schneideroptics.com/Ecommerce/CatalogSubCategoryDisplay.aspx?CID=1882

    If you spend money on good lenses, cameras etc., why risk it by sticking two layers of cheap uncoated glass in front of them?

    If these prices are out of your budget then I'd stick with a set of single-strength ND filters (with multi-coating).

  12. 11 hours ago, thebrothersthre3 said:

    You are going to be shooting at 400 iso? I wouldn't think you'd need any noise reduction unless you were shooting higher.

    Even at ISO 400, NR is needed if you want to push the shadows or have clean, deep shadows, which the OP may well be dealing with when shooting in a cave...

  13. 1 hour ago, canonlyme said:

    thanks guys,
    I actually had the same issue with sharpening and it takes me so long to sharpen every clip with unsharp mask in post, next time I will film with more in camera sharpening! 

    Even with sharpening set to minimum it's still too sharp for some; each to their own, I guess. I nearly ruined a couple of shots a few months ago when I accidentally left the sharpening at 0 instead of -5 and had to do some softening in Resolve to get them looking natural and not over-sharpened.
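    Since the OP mentioned unsharp masking every clip in post, here's a rough sketch of the technique on a single frame, assuming OpenCV is available; the parameter values are only examples:

    ```python
    import cv2

    def unsharp_mask(frame, sigma=1.5, amount=0.5):
        """Classic unsharp mask: original + amount * (original - blurred)."""
        blurred = cv2.GaussianBlur(frame, (0, 0), sigma)   # ksize (0, 0): derived from sigma
        # addWeighted computes frame*(1+amount) - blurred*amount, i.e. the original
        # plus a scaled copy of its high-frequency detail.
        return cv2.addWeighted(frame, 1.0 + amount, blurred, -amount, 0)

    # frame = cv2.imread("frame.png")                 # hypothetical still from a clip
    # sharper = unsharp_mask(frame, sigma=1.5, amount=0.5)
    ```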

  14. 2 hours ago, kaylee said:

    since when is "good enough" good enough?

    there isnt a camera on the market that has skin tones as good as im getting with 5d3 raw. its the most lifelike image that money can buy!

    alexa skin tones are the most overrated thing on the planet earth. they make people look

    a) dead

    and

    b) yellow

    and thats the gold standard right now

    SAD!

    Canon makes everyone's skin look like they've just been on a sunbed; flattering is not the same as accurate.

  15. I guess readout speeds will just keep getting shorter so it's not as obvious, but I'm not sure a true global shutter is around the corner any time soon, as manufacturers concentrate on more resolution and DR. If you compare the original BMPCC with the new 4K model, rolling shutter was an obvious problem on the first and did restrict its suitability in certain situations, whereas the new camera still has rolling shutter but it's much reduced and isn't an obvious artefact that most users need to worry about.

  16. The GH5 is getting on a bit now but still delivers top-notch image quality thanks to its internal (and external at 60p) 4K 4:2:2 10-bit codec. If you can't get excellent footage with this, the problem is not the camera. Add to this the superb IBIS and general ergonomic usability, and other manufacturers are still playing catch-up. If anyone is worried about getting locked into the M43 lens system, just get a Speed Booster and use APS-C/S35 or full-frame glass. A BMPCC 4K is a cheap addition to any GH5-based system as it adds RAW, internal 60p and low-light capability, but it's not, IMO, a replacement.

  17. 11 hours ago, Mokara said:

    Having a sensor that can do these things is one thing, having a processor capable of handling the data flow is quite another.

    You will get more accurate color from downsampling because the composite pixel is based on more information than a single physical pixel. It will also increase the bit depth of the composite pixel (assuming the original data was 8-bit, it would convert it to 10-bit). That does not mean increased dynamic range, however, but it would result in more accurate color and luminosity. Shooting in 8K with a Bayer filter in place means you should be able to resolve true color at 4K resolution (assuming you are using a RAW feed, of course), since each composite pixel would receive input from two green pixels and a single red and a single blue pixel.

    The CFA density dictates the colour fidelity and accuracy. Increased subsampling will help, but I repeat: it can't invent colours that were not captured. By this I mean the very slightest differences in hue needed to get critical colours like skin tones right. I shoot 8K stills in 14-bit RAW, but I don't get a miraculous improvement in colour by subsampling to 2K, and certainly nothing approaching the colour quality of a good MF digital back, which isn't trading sensitivity for colour fidelity with a weaker CFA density.
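    For what it's worth, the bit-depth half of the quoted claim is easy to see numerically; this tiny numpy sketch (invented values) just shows that summing 2x2 blocks of 8-bit samples gives code values spanning 10 bits, which is a gain in precision, not in the range of colours that were actually captured:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img8 = rng.integers(0, 256, size=(8, 8), dtype=np.uint16)   # fake 8-bit samples

    # Sum each 2x2 block: four 8-bit values give results in 0..1020,
    # i.e. 10 bits of code values, though no single sample exceeded 8 bits.
    binned = (img8[0::2, 0::2] + img8[0::2, 1::2] +
              img8[1::2, 0::2] + img8[1::2, 1::2])

    print(img8.max() <= 255, binned.max() <= 1020)   # True True
    ```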

  18. 38 minutes ago, KnightsFan said:

    Actually you can.

    For DR, downscaling reduces noise because for each pixel in the downscaled image you combine four signal values that are (almost always) highly correlated and four noise values that are not correlated. Thus your noise will be lower, and with a lower noise floor you have more dynamic range in the shadows. You can verify this with dynamic range testing software, or by a simple calculation: imagine 16 adjacent pixels that each have an ideal value of 127 out of 255 (e.g. you are filming a grey card). Add a random number between -10 and 10 to each one. Calculate your standard deviation. Now combine the pixels into groups of four, each of which has an ideal value of 508 out of 1020. Calculate the standard deviation again.

    The standard deviation of the downscaled image, relative to its new full-scale value of 1020, will be lower if the random number generator is, in fact, random and evenly distributed.

    (This works because, in the real world, the signal of each pixel is almost always correlated with its neighbors. If you are filming a random noise pattern where adjacent pixels are not correlated, you could expect to see no gain in SNR.)

     

    As for color fidelity, a 4:2:0 color sampled image contains the same color information as a 4:4:4 image that is 25% of the size. Each group of 4 pixels, which had one chroma sample in the original image, is now 1 pixel with 1 chroma sample.

    In the context of the original post I was responding to, you can't make up the difference between an Alexa's DR and colour capture by subsampling an 8K 'consumer grade' sensor, even though the noise-averaging point itself checks out numerically (see the sketch below).
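    A quick, purely illustrative numpy check of the quoted standard-deviation example (uniform noise, many more than 16 pixels so the statistics settle down; the groups are averaged rather than summed so the before/after numbers stay on the same 0..255 scale):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # "Pixels" with an ideal value of 127 plus uniform noise in [-10, 10].
    pixels = 127.0 + rng.uniform(-10, 10, size=10_000)

    # Bin in groups of four and average each group.
    binned = pixels.reshape(-1, 4).mean(axis=1)

    print(round(pixels.std(), 2))   # ~5.77  (uniform noise: 20 / sqrt(12))
    print(round(binned.std(), 2))   # ~2.89  (reduced by roughly sqrt(4) = 2)
    ```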
