Everything posted by pietz

  1. I have the Transcend card myself, also 32GB. I'm going to try Sanity 5 first, since its bitrate only goes up to 38Mbit or so. If I have trouble with it, I'll look into Flowmotion. Thanks
  2. I did a few tests of my own. Smooth did much better than Nostalgic. I also tested Vibrant, but that was even worse than Nostalgic (at least at ISO 1250). It seems to me that whatever extra dynamic range Nostalgic offers over Smooth is so full of noise that I'd rather not see it. I like the colors in Smooth much better anyway :)   I can also confirm that the top-row ISOs are better than the middle-row ones, BUT ONLY if you preselect the middle ones first. If you switch directly from 320 to 640, the noise is much worse than when coming from 800 to 640. So no surprises there.   I can also confirm that ISO 640 is better than 320, but 640 also seems a lot worse than 160, which is no surprise to me but differs from QuickHitRecord's test results.   Plus, a lot of the noise seems to come from the low bitrate, since it doesn't look grainy so much as full of artifacts.   I now want to get into hacking the GH2. I found this website, which seems to explain it pretty well:  [url="http://www.sam-mallery.com/2011/11/an-ez-guide-to-hacking-the-panasonic-gh2/"]http://www.sam-mallery.com/2011/11/an-ez-guide-to-hacking-the-panasonic-gh2/[/url]   However, I just don't know which hack to use. I want it as stable as possible, and I don't want anything much higher than 50Mbit. Can somebody recommend a good hack and provide a link?   Thank you
  3. May I dig up this thread again? I bought a GH2 a couple of months ago and just shot my first project with it. It was just some video coverage, which is why I shot with the 14-140mm most of the time. At ISO 1250 almost the entire image had terrible noise. I was a little underexposed, but the noise level was just ridiculous! It can't just be a problem with the 14-140mm, can it? I mean, it's not very fast, but if I expose correctly and use ISO 640 I should be fine, right? Because in reality I'm really not. I'm using Nostalgic most of the time. Could that be the problem?   I'm going to do some tests in the next few days. If you guys have any ideas or thoughts to share, I would highly appreciate it. :)
  4. Please, QuickHitRecord, check that myth of having to step down to get lower noise.
  5. Summarizing what you're saying: if you only switch horizontally, the middle row is best. But the top row is even better if you first switch to the next higher setting in the row underneath and then to the top setting. Wow, that's weird as fuck. I think I was shooting at 1250 and only switched horizontally. Is the noise really that bad then? I remember a noise chart for Canon cameras: multiples of 160 are best, then 200, then 250. The difference is so big that ISO 1250 has less noise than 250. For that reason I love the ISO layout of the GH2, but what you guys just came up with seems really weird. Thank you everybody for your help.
  6. Everything I learned about video DSLRs happened on my company's 5D2, and now that I have my own GH2 there are still a few things I find confusing. Maybe it's just different. 1. The LCD and EVF keep adjusting the brightness of the live view to what the image should look like, not what it actually will look like, so the recorded picture might be much darker or brighter. I find this confusing and can't really see the point. I noticed this doesn't apply to the manual video mode. I'm not sure if I changed a setting by accident, but I definitely want that behavior for the photo modes as well. How do I do it? 2. Do I understand correctly that when I'm in regular manual mode (M) and hit the video record button, the video bitrate will be lower than in the high-bitrate manual video mode? And why? 3. Is it by any chance possible to have one mode where I can record high-bitrate video through the video record button and still take pictures? Not at the same time, but in the same mode. 4. Some of the videos I recorded show a very high level of noise. Can I somehow check what the ISO was when I recorded a clip? Thank you for your time!
  7. Thanks so much, Axel, I didn't know about the "Interpret Footage" option. I always used Speed/Duration, which worked fine for 50 to 25fps since it's just 50%, but for 24 to 25fps it's a hassle to calculate, and Premiere probably wouldn't do it exactly this way.
  8. I'm still deciding whether I want to shoot in 24 or 25 frames per second. I often see Hollywood movies that have been converted from 24 to 25fps so that they play on European DVD players. This is done by speeding them up, which makes them a couple of minutes shorter in total. It's a difference I don't notice and just cannot see. I find it hard to imagine people looking at it and saying: "That's not a Hollywood look anymore." Now I'm not quite sure whether this also holds for the GH2. Does the 24fps mode somehow look "better" than 25fps? Are there other factors at play besides the physical change of the frame rate? I know the reason why 24fps looks more cinematic, but 25fps is so much closer to it than 30fps, for example, that I'm just not sure you can really see the difference. The reason I'm trying to defend 25fps is that I'm from Europe, and DVD compatibility, for example, only comes with 25fps. Any thoughts on this? Oh, and does somebody have a good program on Windows for frame-rate modification? I used Cinema Tools on the Mac, which simply let me change the fps without re-encoding. I haven't been able to find something similar on Windows. I don't like doing this through Premiere because converting 24 to 25fps is a very odd percentage, and I don't trust Premiere to do it exactly enough. Edit: uhm... can the GH2 actually record 25fps? :D
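    To put numbers on the 24-to-25fps speed-up mentioned above, here is a quick Python sketch of the arithmetic (the 100-minute runtime is just an assumed example):

[code]
# Arithmetic for a 24 -> 25 fps conform (no frames added or dropped,
# every frame simply plays back slightly faster).
old_fps, new_fps = 24.0, 25.0

speed_factor = new_fps / old_fps            # ~1.0417, i.e. about +4.17% speed
runtime_min = 100.0                         # assumed example: a 100-minute film
new_runtime_min = runtime_min / speed_factor

print(f"speed-up: {(speed_factor - 1) * 100:.2f}%")                    # 4.17%
print(f"new runtime: {new_runtime_min:.1f} minutes")                   # 96.0 minutes
print(f"Speed/Duration value in Premiere: {speed_factor * 100:.4f}%")  # 104.1667%
[/code]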
  9. Thanks, galenb. I agree: go with the GH2 for superior video quality and sharpness, or a full-frame camera of your choice for the look, accepting slight moiré and aliasing problems. With the 5D3 probably being the best full-frame camera for video right now, I keep thinking about the 6D, because for video purposes it's essentially just as good as the 5D3 except for the missing headphone jack. Would you agree? Only 97% viewfinder coverage, a slightly worse AF system and slightly lower resolution don't hurt me as a videographer. And for that it's 40% cheaper! Can anybody say something about a comparison between the D600 and the 6D? They are both about the same price. An advantage of the 6D would be that a lot of my friends use Canon cameras, and borrowing their lenses is a big plus. Is the D600 that much better to compensate for that? Is the A99 worth the extra $800?
  10. Many people are scared of the word "radioactive" for obvious reasons. However, the radioactivity of these lens coatings is almost impossible to measure and can be neglected as a health concern. They do, however, change the color look of the image. That's something some people like and others don't :) You have to see for yourself. It's not something a great lens needs, though, if that was your question. No, that Nikon lens doesn't have it. To give you a little more input, I would recommend the Super Takumar 50mm f/1.4. Look at a few examples and take your pick! May I join this thread with my own questions? I don't want to open up a new one :) I can't decide between the following cameras: GH2, GH3, 6D, D600 and A99, although the last one is actually over my budget; $2000 should be the max. The first question I asked myself is: do I need full frame? And right now I actually believe that I do. I can achieve shallow depth of field with faster lenses, but you'll always get a more cinematic look with full frame. Can you convince me of the opposite :) ? Then it would come down to the GH2 and the GH3. The GH2 is much cheaper, and with the hack I don't see big disadvantages compared to the GH3. I mean, the sensor is even a little larger, and we don't have to worry about moiré. What do you think?
  11. So except for the headphone jack there are absolutely NO disadvantages compared to the 5D3? And it costs 40% less? That's going to be interesting. A few rumors said the GH3 would cost just under $2000, which is about the same as the 6D. So now consumers can decide whether they want a brilliantly equipped camera with a 4/3 sensor (GH3) or a full-frame camera with a few known flaws (6D) for about the same price.
  12. All of the changes really sound fantastic. However, The Verge names a price point of just under $2000, and for that price I would like to bring back the crop discussion, especially if the new sensor isn't multi-aspect. That means a 2x crop for video. At that price the question comes up whether I shouldn't spend an extra $800 and get the A99, which is full frame. There isn't any footage yet, but I imagine Sony worked their magic on video quality, and that camera really looks like a dream. Sony has really impressed me in the past weeks, showing off technology that other companies can only dream of. In other words: with all the amazing new stuff they put into the GH3, the M43 sensor now seems not enough. Everything about it looks like a professional powerhouse of features, but the sensor is even smaller than before, and many people see the sensor as the most important hardware component in a camera. I would sacrifice a little sharpness for the more cinematic look of full frame any day.
  13. [quote name='EOSHD' timestamp='1346500644' post='17089'] What a condescending tone. ProRes on the Alexa is not the same as ProRes on the Blackmagic. I don't yet know how ProRes performs on this camera, I very much doubt it will give you 13 stops of usable dynamic range or as much as raw. It certainly doesn't give you as clean resolution or as much or if or a way to reduce aliasing by downsampling in post to 1080p and equally it doesn't up-res as well to 4K. So let me get this straight, with your $2.5k budget you spend a boat load of cash on a monitor, 20 people to construct a tent so you can see it, a truck with a generator so you can power it and then two more trucks so you can move it around. Takes you an hour to move 100m with that crap. And you have this Alexa beast that shoots ArriRaw... AND... You choose NOT to shoot raw to gain a little hard drive space. Insane! I'm not anti-ProRes. I'm just in love with the look of CinemaDNG on the Blackmagic and that extra resolution provided by 2.5K and the way the raw material can be pulled around so much in post. Image quality all the way for me. I feel that if the film and TV industry really wanted convenience and to save money, ProRes is the last thing I'd look at frankly. If only you guys listened to all that new blood with the better ideas THEN you would save time and effort, instead of dismissing them as not knowing what they're talking about. [/quote] The point was not that we shoot ProRes because it's cheaper. That's actually why I mentioned the budget, to make clear it's not about the money. Shooting raw is just not necessary on the Alexa; it just isn't, in real-life use. Shooting in ProRes, however, makes the workflow faster and easier, and the "faster" part is priceless in the advertising industry. Yes, my tone is condescending, because there are a few people here who have worked with raw stills instead of JPEGs and translate that experience to motion pictures with raw and ProRes, and that's just not the same! That's why I told you to try it yourself. Andrew, I know there are many professionals out there doing things the way they have always been done, but that doesn't apply in the digital world. Everything is still so new to professionals that they run every possible test to determine whether you get any real-life benefit from shooting raw. You, thinking that a couple of guys with new ideas would do it better than professionals, without actually knowing how and why they do it that way, makes you ignorant.
  14. I work for a commercial production studio and we just shot a job with a budget of $2.5 million. With that kind of budget you would expect the freedom to shoot on every possible camera out there, and you're right. We shot on the Arri Alexa in 1080p ProRes, for many reasons. I find people here saying "people who don't see the benefits of raw don't know what they are talking about," and the funny thing is that I get the feeling many of YOU don't know what you're talking about. For example, mattbatt: if you honestly believe that in a few years ProRes will be seen on the web, you have NO idea what you're talking about and clearly don't know a thing about ProRes. Honestly. ProRes was created to be graded for professional purposes. You absolutely cannot compare it to high-bitrate H.264; it will never be the same! And you cannot put the argument out there that raw gives you 12-bit and ProRes only 10-bit if you don't know what that means in real life. There is absolutely no way in hell or physics that you can tell the difference between 10-bit and 12-bit material; it's so far from being possible. 10-bit means more than a billion different colors can be represented. Even if you decide you want 75% grey to be white, that's more than enough. Also, Andrew, 13 stops is not a plus for raw; it doesn't have anything to do with that. The Alexa has 14 stops and records in ProRes, so what's the point you're trying to make? As one of the people in the 2012 camera shootout said, "It's much more about workflow these days." You can import the Alexa files directly into Avid and also use them for grading. That's about as easy as it gets. It's not only the space on hard drives: the transfer speeds and computing power you need to seamlessly edit uncompressed footage are extremely pricey. If you haven't compared uncompressed footage to ProRes, you should not be talking here. Do your homework and come back with evidence, because it blows my mind every time I realize that ProRes seems to have no limits and is about a tenth the size.
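    A quick back-of-the-envelope check in Python of the bit-depth figures above (ignoring chroma subsampling and log curves):

[code]
# Distinct code values per channel and total representable RGB colors
# for 10-bit vs 12-bit material.
for bits in (10, 12):
    per_channel = 2 ** bits
    total_colors = per_channel ** 3
    print(f"{bits}-bit: {per_channel} levels per channel, {total_colors:,} colors")

# 10-bit: 1024 levels per channel, ~1.07 billion colors
# 12-bit: 4096 levels per channel, ~68.7 billion colors
[/code]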
  15. First of all, don't just take my word for it :) [url="http://en.wikipedia.org/wiki/YUV"]http://en.wikipedia.org/wiki/YUV[/url] On the wiki page you'll find a section "Conversion from RGB". It describes how a YUV signal is created from the three RGB channels. Y is the luminance channel, the one that holds the brightness information of the image. If you look at the calculation you'll find the exact percentages I stated above for creating the luma channel. That's where the numbers come from. 1. Well, human vision adapted to what it needed to see. That's why green sits right in the middle of the visible spectrum while red and blue are closer to the lower and upper limits. So yes, we perceive different wavelengths of light (= different colors) with different sensitivity. 2. I know this guy talks really badly about the Bayer pattern, but you have to understand what we achieved through it: we don't need three separate sensors anymore, just one. Obviously this reduction causes some quality loss, BUT because of the Bayer pattern this quality loss is really small. [url="http://scien.stanford.edu/pages/labsite/2007/psych221/projects/07/demosaicing/bayer_cfa.JPG"]http://scien.stanfor...g/bayer_cfa.JPG[/url] Take the first 2x2 pixels of the linked picture. The color of each pixel corresponds to the color information that pixel can store. Out of these 2x2 (B-G-G-R) single-color pixels we can create one fully colored pixel, the result we want for our picture. We create one finished pixel out of four R, G and B pixels. That means we need four times as many R, G and B pixels as the resolution we want to achieve, right? WRONG! See, to get the next fully colored pixel we don't have to move two pixels to the right, but only one! We can re-use the two pixels in the second column and combine them with those in the third to get the next fully colored pixel. Every intersection of the black lines in the picture stands for one finished pixel that can be created from the pixels surrounding it. That means that if we want a picture with a resolution of 800x600px from a Bayer pattern, we only need a sensor with a resolution of 801x601px. That's pretty amazing! I hope you were able to follow me on this.
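    Here is a toy Python/NumPy sketch of the sliding 2x2 idea described above. It assumes a simple RGGB layout and is not how real demosaicing algorithms work; it only demonstrates the resolution argument (an 801x601 sensor yields an 800x600 full-color image).

[code]
import numpy as np

# Every 2x2 Bayer neighborhood (RGGB assumed) holds exactly one red, one blue
# and two green samples, and the window slides by one pixel, so an H x W
# mosaic yields an (H-1) x (W-1) full-color image. Deliberately naive loops.
def naive_demosaic(raw):
    H, W = raw.shape
    out = np.zeros((H - 1, W - 1, 3))
    for y in range(H - 1):
        for x in range(W - 1):
            win = raw[y:y + 2, x:x + 2].astype(float)
            ys, xs = np.mgrid[y:y + 2, x:x + 2]
            is_r = (ys % 2 == 0) & (xs % 2 == 0)
            is_b = (ys % 2 == 1) & (xs % 2 == 1)
            is_g = ~is_r & ~is_b
            out[y, x] = [win[is_r].mean(), win[is_g].mean(), win[is_b].mean()]
    return out

sensor = np.random.randint(0, 4096, size=(601, 801))  # 801x601 "sensor"
print(naive_demosaic(sensor).shape)                   # (600, 800, 3)
[/code]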
  16. OK, let me try to help you as well as I can, or as far as my knowledge goes. Your answer in one word would be: [b]Evolution[/b]. If you have a 3-sensor camera (one sensor each for red, green and blue) you essentially get three separate color streams that have to be put back together. If you mixed these colors at 33.3% each, you would notice the final image looking very odd. To get a correct image you need to mix the colors like this: [b]29.9% red, 58.7% green and 11.4% blue[/b] (a small sketch of the math follows below). Here we find the reason why there are twice as many green pixels on a Bayer sensor as red or blue ones: green has to be weighted almost twice as heavily to get a natural-looking picture for the human eye. Now imagine a picture mixed at 3x33.3%. The reds would be much too strong, and the blues even more so! That wasn't very practical for the first human beings 100,000 years ago; the bright blue sky would have blinded the shit out of them! The color that mattered most back then was green, because they went hunting in the woods. They didn't care how bright the sky was; they wanted to get the most detail out of nature. Obviously this isn't something that changed from one day to the next; it took thousands and thousands of years for any creature to adapt to its surroundings. For humans that meant improving what their lives depended on: hunting and not getting killed. I know this sounds a lot like what your parents tell you when they don't know the answer either, but it's the reason I learned in film school, and the one that makes the most sense to me as well. The same thing goes for audio: the human ear is more sensitive to certain frequencies, especially around 12kHz and everything below 100Hz. 12kHz corresponds to rustling leaves, and the lows to the footsteps of large animals on the ground. I hope this helps.
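    A minimal Python sketch of the mix described above, using the Rec. 601 luma weights:

[code]
# Rec. 601 luma: Y' = 0.299 R' + 0.587 G' + 0.114 B'
# (Rec. 709 uses different weights: 0.2126 / 0.7152 / 0.0722.)
def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_601(1.0, 1.0, 1.0))  # 1.0   -- pure white
print(luma_601(0.0, 1.0, 0.0))  # 0.587 -- green alone carries most of the brightness
print(luma_601(0.0, 0.0, 1.0))  # 0.114 -- blue contributes the least
[/code]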
  17. I'm looking for a good but not too expensive video tripod. I know there is a lot of crappy equipment out there, so I wanted to ask whether you guys can recommend anything you've found to be a good deal. The new Manfrotto MVT502AM legs with the MVH502A head (http://www.youtube.com/watch?v=xD1M-iu0mzU) make a good impression and can be had below $350, but is it possible to pay even less for a good one? Thanks!
  18. pietz

    New H.265

    I believe I already said this somewhere on this forum, but 24Mbit H.264 isn't all the same. It depends on the encoder (x264 currently being the best) and obviously on its internal settings. That's why x264 gives you different presets: the slower you go, the smaller your video file will be at comparable quality. Cameras have to encode in real time, which is why they need high bitrates; they don't have the option to encode slower. However, it's easily possible to re-encode 44Mbit H.264 video at about 10Mbit (depending on the footage) without any visible loss of quality. That means that if H.265 can deliver videos at 50% of the size but needs more processing power, it isn't really twice as efficient; you can do the same thing with H.264 by choosing slower presets and throwing more processing power at it to get smaller files.
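    A minimal sketch of that re-encoding idea, assuming ffmpeg with libx264 is installed; the file names and CRF value are just placeholders:

[code]
import subprocess

# Re-encode a high-bitrate camera file with a slow x264 preset and a
# quality-based (CRF) target instead of a fixed bitrate.
subprocess.run([
    "ffmpeg", "-i", "camera_44mbit.mts",
    "-c:v", "libx264",
    "-preset", "slower",   # more analysis per frame -> smaller file at the same quality
    "-crf", "20",          # constant quality; lower = better quality, bigger file
    "-c:a", "copy",        # leave the audio stream untouched
    "reencoded.mp4",
], check=True)
[/code]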
  19. What do you think of the variety of Super Takumar lenses? Many of them can be bought on eBay for a very low price, like the 50mm f/1.4 for under $100. I've already read a few great reviews of them from photographers, so I wanted to know whether any cinematographers have experience with them. Thanks
  20. Before people start to think that bitrate is ALL that matters, I just want to point out the following: H.264 can be seen as something like a programming language, a defined set of elements and rules. Companies then build their own encoders within that set of elements and rules, so that what comes out of them can be called H.264 video, but these implementations can differ quite a bit. For example, if there's a faster processor inside the camera, more complex calculations can be done in real time, which can result in a lower bitrate at the same level of quality. The point I'm trying to make is that Panasonic's 40Mbit video stream is not necessarily better than Sony's 24Mbit stream. And don't get into thinking that you need 24Mbit to have good-looking full HD video; depending on the source, you can easily get this down to 8Mbit without seeing a difference.
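    As a rough way to test this yourself (assuming ffmpeg with libx264; the file names and the 8Mbit target are placeholders): encode the same source twice at an identical bitrate with different presets, then compare each encode against the source with ffmpeg's SSIM filter.

[code]
import subprocess

SRC = "source.mov"
for preset in ("ultrafast", "slower"):
    out = f"encode_{preset}.mp4"
    # same average bitrate, different amount of encoder work per frame
    subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264",
                    "-preset", preset, "-b:v", "8M", "-an", out], check=True)
    # SSIM score of the encode against the source (printed to stderr)
    subprocess.run(["ffmpeg", "-i", out, "-i", SRC,
                    "-lavfi", "ssim", "-f", "null", "-"], check=True)
[/code]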