Everything posted by pietz

  1. Sony a6300 4k

The last time I was this impressed was when the A7R II came out. The autofocus of the a6000 was the first and only one I would actually consider using as a videographer, and the a6300 seems to be even better. Just wow.
2. @Emanuel Sorry mate, I can't answer your two questions as I'm not an engineer. I just know that things move forward, and overheating might soon be a problem of the past as sensors get more efficient. Your entire post sounds like you're arguing that 10bit will not be available in future semi-professional cameras. Well, you might be right, and I never said anything against that. My post covered two things: A) it was wrong of you to blame 4:2:0 for the banding in the mentioned example, and B) in my opinion 10bit is much more important than 4:2:2, for the reasons already stated. I never argued about the availability of 10bit in the future, and I do hope you weren't referring to me when talking about "whining".

     Now comes a part that's not obvious at all, and I don't blame you for being wrong again: 10bit h264 footage will be smaller and less CPU-heavy than 8bit footage, thus also helping with overheating. "WHAAAAAT, you're completely insane, pietz." That's a fair reaction. See, because most of the cameras we talk about can output uncompressed 10bit footage over HDMI, it's easy to assume that it's actually recorded that way. But to save it inside an 8bit h264 file, it first needs to be downsampled to 8bit before it's encoded. That's CPU-heavy, and skipping this step might actually result in less heat. It also results in smaller files, because a higher bit depth, and therefore higher color accuracy, causes fewer truncation errors in the motion compensation stage of the encoding; this increases efficiency because there's less need to quantize. I apologize, as you walked right into my trap by making this point. It sounds like magic, but it's science, bitch! And don't take my word for it: http://x264.nl/x264/10bit_02-ateme-why_does_10bit_save_bandwidth.pdf
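     (A toy illustration of the quantization point, not x264's actual motion-compensation code; a quick Python sketch assuming numpy is available. The rounding error the encoder has to spend bits on is roughly 4x smaller when the signal stays at 10bit.)

     ```python
     import numpy as np

     signal = np.linspace(0.0, 1.0, 4096)        # an idealized smooth ramp, 0..1

     q8  = np.round(signal * 255) / 255          # quantize to 8bit levels
     q10 = np.round(signal * 1023) / 1023        # quantize to 10bit levels

     print("mean truncation error at 8bit :", np.abs(signal - q8).mean())
     print("mean truncation error at 10bit:", np.abs(signal - q10).mean())  # ~4x smaller
     print("distinct 8bit levels :", np.unique(q8).size)    # 256
     print("distinct 10bit levels:", np.unique(q10).size)   # 1024
     ```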
3. Hey man, I'm no photographer, but I think this is quite a simple question to answer. Let's start at the beginning:

     1) New camera: Is 12MP enough for you? If yes, then the A7s and A7sII are excellent choices. The best, really, for what you want to do with them. You will be surprised how good they are in low light, probably even better than your high expectations. If no, think about going broke for the A7rII, as it's also brilliant in low light and comes with IBIS, which you seem to dig. It's super expensive but completely worth it, and a good investment for the next decade if you ask me.

     2) A7s vs A7sII: The major differences for a photographer between the A7s and A7sII are IBIS and more focus points. I would argue that the upgrade is not worth it to you at all; I'm actually quite certain. The A7s is so good in low light already that you probably don't need IBIS to capture more light to begin with. And since your aim is astrophotography (where objects usually don't move quickly), you don't need those extra focus points anyway.

     Based on your question, I would easily recommend the A7s; the only thing holding you back should be its resolution of 12MP. Save the extra cost of the A7sII for other equipment. I don't know much about astrophotography, but I thought a high-res sensor was especially important there; if the A7s made it to your shortlist, though, that's probably not the case. To my understanding, pictures taken with the A7s are tack sharp at native resolution. Also, wait another 2-4 weeks with your purchase; the price of the old A7s will probably go down even further. And don't forget to send me some sweet, sweet wallpapers, ok?
4. I believe the jump from 8bit to 10bit is far more important than 4:2:2 instead of 4:2:0. The bump to 4:2:2 increases the theoretical amount of picture information by about 1.33x, whereas 10bit gives you 64x more color information. The A-B-C example that Emanuel posted doesn't prove the opposite, for quite a few reasons:

     1. When testing the impact of a certain change or setting, you can only vary one factor to draw conclusions about that setting. But as has been pointed out, the test not only changed from 8bit 4:2:0 to 8bit 4:2:2; the entire system that recorded the scene changed, including the codec being used. That's a massive flaw in the test, so it doesn't prove anything regarding 4:2:0 vs 4:2:2.

     2. Better chroma subsampling only increases the accuracy of chroma information, meaning a black-and-white picture looks identical in 4:2:0 and 4:2:2. The ABC example still shows banding even when converted to black and white, so the banding isn't caused by the chroma subsampling.

     3. If 4:2:0 were to blame for the banding in the picture, then the 4:2:2 example would show the same kind of banding along the horizontal line of the vignette circle. Why? Because 4:2:2 and 4:2:0 have the exact same amount of information on any horizontal line of pixels; it's only the vertical chroma resolution that's improved.

     Quite opposite to Emanuel's opinion, I think people are way too focused on 4:2:2. It's really just a leftover from the days of interlaced footage that adds only a tiny bit of new picture information. If something bothers you about 4:2:0, it should also bother you when looking at horizontal lines in 4:2:2 footage, as they have the same chroma resolution. 10bit, on the other hand, is HUGE. Companies went from 10bit raw to 12bit raw to 14bit raw, and it's still going. Now, in my personal opinion we have reached what makes sense in that matter and everything above is just marketing, but since 8bit is the lowest bit depth at which we don't see banding, it makes A LOT of sense to go at least one step higher to have a little room for grading. 8bit only accounts for about 0.02% of all the colors that 12bit includes, and if people don't even think that's enough, we have plenty of reason to upgrade to 10bit.
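     (The ratios above are easy to sanity-check; a back-of-envelope calculation in Python:)

     ```python
     # Samples per pixel: 4:2:0 stores full luma plus quarter-res Cb and Cr,
     # 4:2:2 stores full luma plus half-res Cb and Cr.
     samples_420 = 1 + 0.25 + 0.25
     samples_422 = 1 + 0.5 + 0.5
     print(samples_422 / samples_420)              # 1.333... -> the "~1.33x" figure

     colors_8bit  = (2 ** 8)  ** 3                 # ~16.7 million RGB triples
     colors_10bit = (2 ** 10) ** 3                 # ~1.07 billion
     print(colors_10bit / colors_8bit)             # 64.0 -> the "64x" figure

     print(100 * (2 ** 8) ** 3 / (2 ** 12) ** 3)   # ~0.024 -> "8bit is ~0.02% of 12bit"
     ```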
5. Andrew builds some amazing stuff and I totally think he should make some money from the time he invests into EOSHD, but isn't the GH4 LOG Converter total bullshit in theory? Log gives benefits in terms of higher dynamic range and smooth highlight roll-off, but those two factors only come into play if the log curve is applied between capturing the light and saving it in a video file. After that, the dynamic range is set, and so is the definition in the highlight roll-off. Taking regular footage and making it flat doesn't give you any more information than you had in the first place. Quite the opposite: flattening footage and making it pop again means losing information in the process. Yes, I agree the results look good in the examples, but one should be able to achieve the same thing by grading the original footage directly. What am I missing?
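     (A minimal sketch of the information-loss argument, using made-up placeholder curves rather than the converter's actual math, with numpy assumed: once baked 8bit levels are squeezed flat and stretched back, the missing levels never come back.)

     ```python
     import numpy as np

     x = np.arange(256)                                  # every 8bit input level

     # "Flatten": squeeze the full range into 64..192 (an arbitrary flat curve).
     flat = np.round(64 + x * (128 / 255))
     # "Make it pop again": stretch 64..192 back out to 0..255.
     pop = np.clip(np.round((flat - 64) * (255 / 128)), 0, 255)

     print("distinct levels going in:        ", np.unique(x).size)     # 256
     print("distinct levels after flatten:   ", np.unique(flat).size)  # ~129
     print("distinct levels after round trip:", np.unique(pop).size)   # ~129, not 256
     ```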
6. Wow, this makes complete sense. Thank you for your time! So why is everybody so psyched about 4:2:2 these days? Sure, it's twice as much color information compared to 4:2:0, but I've never looked at footage (even graded material) and thought "4:2:2 or 4:4:4 would have helped make this look better." If the lack of color information bothers you in 4:2:0, it should also bother you in the horizontal direction of 4:2:2. Why favor the quality of vertical color information over horizontal? And yet all editing codecs come with at least 4:2:2. I don't get it. I think it makes a lot of sense to start delivering 10bit internally in semi-professional cameras, but 4:2:2 I can easily live without. From what I've experienced, there's just no good reason to use it...
7. Knowing how chroma subsampling works, I often ask myself why 4:2:2 is the (semi-)professional standard when 4:4:4 is not an option. It bothers me to imagine that the color information is full in the vertical direction but only half in the horizontal. And why does everybody use 4:2:2 while nobody ever talks about 4:4:0? Why is the vertical color information more important than the horizontal? To my understanding it should be the other way around: since most of the motion in a film is horizontal, shouldn't the horizontal color information be full instead of the vertical? Anybody here who can elaborate? It would be highly appreciated. -Pietz
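     (To make the question concrete, here's what the three sampling grids look like as plain numpy slicing on one full-resolution chroma plane; the resulting shapes are the whole point:)

     ```python
     import numpy as np

     cb = np.random.rand(1080, 1920)    # one full-resolution chroma plane (Cb)

     cb_422 = cb[:, ::2]     # 4:2:2 -> half the columns: full vertical, half horizontal
     cb_440 = cb[::2, :]     # 4:4:0 -> half the rows:    half vertical, full horizontal
     cb_420 = cb[::2, ::2]   # 4:2:0 -> half of both

     for name, plane in [("4:2:2", cb_422), ("4:4:0", cb_440), ("4:2:0", cb_420)]:
         print(name, plane.shape)       # (1080, 960), (540, 1920), (540, 960)
     ```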
8. And I'm 70% sure that neither of you understands how percentages work.
9. I've been saying this for years and always get weird looks all over the place, but the relationship of Sony and Panasonic is strikingly similar to RED and ARRI. One tries to innovate wherever it can, which always looks great on paper; the other focuses on complete reliability and only uses what works 100%. If moiré isn't a huge deal on the A7r II, there is another benefit compared to the A7s: native ISO at 3200 is annoying as f*ck, and while it's awesome to have night vision in our camera, we won't need it more than 2% of the time. Shooting in bright daylight and having to use ISO 3200 happens a whole lot more often. Using variable NDs with these settings is almost impossible because all of them suck at high ND values, and fixed NDs are just as annoying. A native ISO of probably 800 is a lot easier to deal with if you want to use S-Log.
10. I disagree with those saying that "especially now Panasonic needs to deliver V-Log for the GH4". We're lucky that the update is rumored to drop very soon, because if anything, Panasonic needs us to buy their next camera. And that's not going to happen if the GH4 takes away another aspect that the GH5 could show off. They will only make us buy their next product if they amaze us.

     V-Log is definitely the first thing on my list, closely followed by 10bit; the latter got even more important now that the A7r II doesn't have it. Personally I never cared for 4:2:2 and don't see why it's such a big deal to some, but 10bit ProRes or Cineform would be a great start. Since the announcement of the A7r II, IBIS is also a must. This will mess up Panasonic's entire lens lineup with OIS, but that's a shot they have to take. Lastly, a 4k multi-aspect sensor to come as close as possible to a larger sensor format; since this results in a 12MP sensor, it will also help in low light. The problem with sensors is that there are not that many great ones out there and you have to take what you get.

     But if we're honest: Panasonic is done. They won't have a market after this camera drops, and after the A7s II even less. Those Samyang lenses are still a lot cheaper than MFT glass, surprisingly enough, so for filmmakers the cost of lenses isn't even a downside with the A7 series. Oh man, Pana, you gotta strike hard and start delivering RELEVANT firmware updates frequently. I'd love to work for them in hard times like these.
11. Thanks for your time, John. And no, the crop seems to be native pixels, just like the regular 4k mode. Even for anamorphic shooters, this just doesn't sound like big news at all then. Sorry, I don't want to spread a bad attitude, but seriously, what's all the fuss about? We were able to shoot 4:3 video in the same resolution before. I challenge everybody to see a difference between 25p and 24p; it's not as if one of us is shooting a Hollywood picture with the GH4 that needs to be 24p... And even if you needed 24p, it was all there; you just needed to crop the 4k width from 3840 to 2880. And since you need to adjust the footage in post either way, it's not a big deal at all. Panasonic could have at least given you anamorphic shooters a destretched live preview and destretched footage...
12. I read the changelog and I understand what's in it, but what exactly is possible with the 2.2 firmware that wasn't possible before? I'm asking because, to my understanding, the new anamorphic recording is not destretched in live view, nor is the actual recorded footage. So what's better now than shooting in the 4:3 photo mode or just using 4k and cropping the sides? I feel extremely stupid. I must be overlooking something, because everybody is so psyched, but what?
13. The human eye can see between 2 and 10 million different colors, depending on who you ask and which study you believe in. That's between 7 and 7.75 bits of color per channel. If you think you can distinguish 14bit original footage in an 8bit video uploaded to YouTube, you must have superpowers, because it's just not possible. It's impossible because the video already contains more colors than you can see, and on top of that it's been brought down to the 8bit color space anyway. That video you're referring to looks so absolutely disgustingly oversaturated that it hurts my eyes. In the very first shot, when her head covers the sun, she has the same color as a pig that has been eating too many pumpkins in bright sunlight. There isn't even any color separation in her face; it's just different shades of pumpkin-pig-skin color. I find it hard to believe that anyone finds this look attractive, but that's personal opinion, I guess. Try it yourself: download the clip, bring it into an NLE, pull the saturation all the way down and then bring it up until it looks "right" without looking at the numbers. I got to -24%. Taking it back to the original afterwards shows you how oversaturated it is, and you'll also see that there is no color separation in the skin tones.
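     (The bits-per-channel figure follows directly from the color count; a quick check in Python:)

     ```python
     from math import log2

     # If the eye distinguishes N colors in total, an RGB encoding needs
     # log2(N) bits overall, i.e. log2(N)/3 bits per channel.
     for n_colors in (2_000_000, 10_000_000):
         print(f"{n_colors:,} colors -> {log2(n_colors) / 3:.2f} bits per channel")
     # 2,000,000 -> 6.98, 10,000,000 -> 7.75
     ```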
14. This doesn't sound like something Panasonic would do. I hope. However, the wording sounds very solid and not like the usual "the gh5 will have ibis!!1 yolo" trash rumors. I would most definitely switch systems if it's a paid update, even if it's just a few bucks. The competition is extremely strong these days, with lots of other companies to switch to. Panasonic has never been "great" about firmware updates (not terrible either), and going this route would make me sell my GH4 immediately and get an NX1 (I'm on the edge anyway). This kind of marketing and selling scheme disgusts me. Panasonic said themselves that the feedback and sales on the GH4 are way better than expected. Why not give a little back to the users as an act of good faith? That way they'd be building trust for future cameras, which is something they badly need. Let's face it, we all know that the NX2 will be better than the GH5, so trust, belief and good faith are things they should be aiming for. I have trouble understanding why so many companies don't get that. But until it's official, I'll be leaning back and sipping iced tea.
15. Every update bringing new features is a good update, but I clearly don't agree with Panasonic's approach here. Anamorphic recording on a device like the GH4 is still such a niche feature; I don't think more than 2% of all GH4 owners will actually use it. Plus, the implementation is rather poor without a destretched live preview. And let's not forget, anamorphic recording was possible before and just meant some adjustments on the computer. So why not work on something that the majority of people actually need, like putting more pressure and time into V-Log? With the history and knowledge of Panasonic, they could have delivered this months ago. What about punching in while recording? Or a 21:9 4.5k recording? That contains less data than the 4:3 4k photo mode recording and would make so many people happy. Why not also deliver a regular 3k recording that covers the entire sensor and doesn't push the crop further like the 4k mode? The GH4 is a beast of a camera, but there are still so many things they could work on...

     On a different topic: does anybody have a good article about the physics of anamorphic recording? Everybody always talks about the "awesome look" that's not so generic, but I would like an actual technical explanation of the matter: why it looks so different and where the horizontal flaring comes from. Thanks!
16. I find it funny to hear you call Eizo monitors a rip-off and yet spend $620 on an EF-to-E-mount adapter.
17. Wow, truly impressed by this. If somebody had told me that it's possible to take pictures handheld with a half-second exposure, I would've laughed in their face. I've long decided that my next camera will have in-body stabilization. I'm thinking EM1 II or A7S II; let's see.
18. @lin2log Thanks for putting me into your standard "it's always been that way, why change it" box, because that's really not who I am at all. I tried FCPX shortly after it came out, and even though I gave it a chance, there were many things I didn't like, so I didn't continue. Congratulations that you know more about the application's possibilities today, as you clearly have a longer history with the software. I'm very open to new things, but they have to make sense to me as a whole, and that's not the case with FCPX. And obviously I haven't tried FCPX every time a new update came out; ain't nobody got time for that.

     I work with Photoshop, Illustrator, After Effects and Premiere, which are all layer-based. Now, it might be possible that there is a better way than "layers" for video editing (even though I haven't experienced it yet), but having the same structure across all the creative software I use helps me be a lot faster. Using Photoshop and AE along with FCPX would make me less productive. Rendering and exporting stuff from AE? No way in hell. I have animations open in AE and graphics in Photoshop, and when I change anything, they update LIVE in Premiere. That's an amount of time saving that FCPX can never make up for.

     That being said, I tried X again over the past days, and there are certain things I like. Oh, I hope it's ok for you, lin2log, that I now talk about FCPX again without knowing as much as you do... Anyway, I've been experiencing weird lags in FCPX after installing LUT Utility. It's always somewhere between a little laggy and completely unusable. I'm using a Hackintosh, so maybe that's my problem, but I just can't work like this. And since FCPX doesn't support LUTs by default (right?), I need LUT Utility. So for now there is no love between FCPX and me. Plus, I can't customize the UI, which really drives me mad!
19. "Many other transcoding apps on the market use a reverse engineered implementation of Apple ProRes. By leveraging the official Apple version, EditReady avoids compatibility issues." I would assume that the results look identical; EditReady doesn't do any other magic in adjusting the picture. Are you sure it looks better?
20. Hey Axel, thanks for the feedback. I appreciate people thinking like you do, but I really did give FCPX a shot. To answer your question, I can think of two things off the top of my head:

     1) Editing to music. I sometimes do little documentary pieces where I pick a song I like and fill the visual part with footage. At the very beginning I throw in the song, drop a clip here, drop a clip there, and kind of puzzle the entire thing together until there are no more blanks. Using this workflow, FCPX feels horrible! Sure, it's possible, I didn't say otherwise, but everything feels like a workaround. You delete a clip here, and oh shit, something was magnetic and filled the gap; everything is off now, and you don't notice it until five minutes later. I can't simply drag a clip to a new position because it's fixed to some other clip. Why do I have to attach something to something else? I don't want that. I want clips to be completely separate objects that I can move freely. It doesn't make sense to me to attach a clip to another and let it sit there while also being visible over other clips that follow. In a scenario like this, having multiple layers is the only way that makes sense to me.

     2) The a-roll and b-roll switch. What's my a-roll? The interview? Or the footage of the person at work? Right now it might be the interview, but two hours later I notice "nah, that doesn't make sense" and I want to change it. Damn, now I have to switch my main story with the little thingies I attached to it. Workaround. With layers I don't attach anything to anything; I can move everything freely. Attaching a clip to another feels like binding it there. Hell, I don't want to bind it there. Right now I want to see how it looks right there, and next I want a completely new structure. And that takes such a long time in FCPX. I see a use case for people who start their edit at the very beginning and keep going in that direction until the credits roll, but I don't edit that way. I switch things around; I put something here, I put it there. I want to move clips freely and not stick anything to anything. It seems unnatural to me. When using FCPX, it happens so often that I want to place something somewhere and it magically flies to the left or sticks to something.

     Now I would like to see the other side: where do you see the revolutionary opportunities that this new timeline offers? I like FCPX for two things: only one video player window, and hovering over effects to see them applied immediately. Something else I hate is how little you can change the UI elements.

     EDIT: I compared EditReady to the FCPX transcoding. A 32-second clip took 47 seconds in EditReady and 38 seconds in FCPX.
21. Yeah, FCPX's performance is amazing. I've tried so often, but I simply cannot get used to this style of editing. A handful of people fell in love with it, but I can't get it to work for me. Everything I do in the timeline feels like a workaround when using FCPX. I don't understand why they don't add a regular timeline as an option. Can't be that hard...
22. I don't think H265 is such a big deal for filmmakers at all, at least when it comes to editing footage. Some people out there seriously believe that by switching to H265 they get the same image 50% smaller, which just isn't the case. As a consumer codec, H265 will compress the shit out of everything in the original that isn't visible to the human eye. That's why blacks seem to look even worse in H265; the human eye isn't focused on the dark parts of an image. But if you apply even the slightest color correction, the image will tend to fall apart more quickly. The smaller file size must come from somewhere, and people seem to forget this. I'm really impressed with the NX1, though. It's Samsung's first real try at getting into the filmmaking business, and they nailed it. Impressive.
23. 21 stops, although absolutely impressive, seems like major overkill. When will there ever be a realistic situation where 21 stops of DR come in handy? The 14 stops of the Alexa seem more than enough. With all this DR you probably lose 1-2 bits of color depth to all the unused color space, no?
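     (A rough sanity check on that guess, assuming an idealized log curve that spreads code values evenly across stops, which real log encodings only approximate: going from 14 to 21 stops at the same bit depth costs about 0.6 bits of precision per stop.)

     ```python
     from math import log2

     codes = 2 ** 10                        # e.g. a 10bit log-encoded signal
     for stops in (14, 21):
         print(f"{stops} stops -> {codes / stops:.1f} code values per stop")

     print(f"precision lost per stop: {log2(21 / 14):.2f} bits")  # ~0.58
     ```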
24. I have a Hackintosh running Yosemite from a Samsung 830 SSD without any issues. As a matter of fact, I know four other people also running Hackintoshes with SSDs on Yosemite. The chance that all five of us run an SSD model that Apple also uses is pretty low. I suspect this is purely a problem for people running original Apple hardware with a third-party drive. Hackintosh users seem unaffected.
25. I'm so picky about what many people call the "video look", and many seem to complain about it on the GH4. And yet, I just don't get it. This camera captures such beautiful images, so far away from anything like the broadcast look, that it blows my mind. +1000 to what drokeby said: being able to record 4512x1920 would blow my mind, not only for the higher resolution but also for the full sensor readout and less crop!