maxotics

Everything posted by maxotics

  1. It is very weird, because I believe many studies have shown the exponential effect of mavens. I have a Nikon D600, but I only bought it because there was a good deal on Craigslist. Lots of nice stuff about it, but not enough to stop me from recommending Canons to all my friends and family. Why? Because I bought a Canon PowerShot a LONG time ago and fiddled with CHDK. Then I bought your 50D RAW guide, and though Canon didn't get any money from the 50D I bought used, they did from the two EOS-Ms! What makes this all the weirder is that Canon did NOT make their DSLRs compatible with their old lenses; Nikon did (another reason I bought the D600, since I had a sweet old Nikkor). Canon made the decision to wow people with the future, so why become slow-footed now? I've been spending gobs of time creating software that will convert EOS-M RAW (or any hybrid Canon ML RAW) to good 4:2:2 or 4:4:4 color-depth video (ProRes, Cineform, DNxHD, etc.). IMHO, one of the reasons ML isn't taking off is that too many people are aiming at the Alexa rather than at the higher-end H.264 cameras. At least that's what I want: to take my EOS-M out of my bag, shoot some ML RAW video, convert it to Cineform, say, plop it into my Vegas Studio, and post it on Vimeo for friends and family. It will look a world better than H.264. Not enough people see that, because too much time is spent, again, chasing the Alexa. (A sketch of the conversion pipeline is below.)
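Since people ask what that EOS-M RAW conversion pipeline actually looks like, here is a minimal sketch of one way to do it, assuming a folder of DNG frames already extracted from the ML RAW file, the rawpy (LibRaw) Python bindings for debayering, imageio for writing TIFFs, and an ffmpeg build with the prores_ks encoder. All paths and file names are hypothetical.

```python
# Sketch: debayer a folder of ML-extracted DNG frames, then encode to ProRes 4:2:2.
# Assumes rawpy (LibRaw bindings), imageio, and an ffmpeg with prores_ks on PATH.
# All paths and file names are hypothetical.
import glob
import os
import subprocess

import imageio
import rawpy

dngs = sorted(glob.glob("eosm_clip/frame_*.dng"))  # hypothetical DNG sequence
os.makedirs("tiff", exist_ok=True)

# 1) Debayer each DNG to a 16-bit TIFF so no precision is lost before encoding.
for i, path in enumerate(dngs):
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(
            demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,  # AMaZE if your build has it
            output_bps=16,       # keep 16 bits per channel
            no_auto_bright=True,
            gamma=(1, 1),        # stay linear; grade later in the NLE
        )
    imageio.imwrite(f"tiff/frame_{i:06d}.tiff", rgb)

# 2) Encode the TIFF sequence to ProRes 422 HQ (10-bit 4:2:2).
subprocess.run([
    "ffmpeg", "-framerate", "24", "-i", "tiff/frame_%06d.tiff",
    "-c:v", "prores_ks", "-profile:v", "3", "-pix_fmt", "yuv422p10le",
    "eosm_clip.mov",
], check=True)
```

Cineform would work the same way: swap the last ffmpeg step for whatever intermediate codec your NLE prefers.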
  2. What Damphousse is saying is that the CODEC (compressor/decompressor) is only an agreed-upon protocol for how image information is saved. It's what happens BEFORE compression that makes the difference; that is, what information are you going to save? Do you want that rose to come out with a lot of nuance in its shading, or do you want each petal to be as sharp as possible? The CODEC requires that you keep within a certain data "bandwidth," so it's critical that the camera's internal software (firmware) makes good decisions. It's a tremendously complex subject, and the camera manufacturers closely guard their secrets. That's part of what this site does: Andrew figures out what they're doing well, or not so well. All that said, it's difficult for me to see the difference in most situations. The irony is that both 60fps and in-camera stabilization help with motion smoothness, and neither camera has both. In the end, for video, my personal preference is Panasonic. Someone on this forum may also suggest the G6. I don't want to steal his thunder ;)
  3. Yes, I wake up cynical every day ;) The reality is, in the thousands and thousands of small TV stations, college A/V departments, churches, local-access cable stations, etc., they get a catalog from B&H Photo, or a similar A/V vendor, and pick out a camera that most closely fits their BUDGET and is either in the brand they already use or is the darling of an intern who won't shut up about it. For every person in that process who says RAW is the future, there is another guy who says it's over-hyped and stupid. The number one question of 99% of the people who buy even this very technical equipment is, "Can I watch the footage on my MacBook Air?" If someone says, "I'm not sure," they won't research it. They'll just buy the camera on the facing page!
  4. Olly, some PhD student could certainly go to town on the second video! There's an interesting detachment. You didn't linger on any shots that one might linger on ;) He's living the dream, yet it goes by so quickly you can't see him actually enjoying it. Is life lived as a dream enjoyable? As for my narrative videos: as much as I would love to shoot some of my scripts, I don't have the time, money, or appetite for the sacrifice involved. When one says, "Just go out and do it," that's easy to say if you're single or don't know just how much you need from others to get it done. If you have a family, the risks are very high. Once we had our first child, I recognized that people don't watch the films I would have loved making. I don't know if I could make a good film; I do know my natural audience would be very small. In any case, masterpieces exist, and few people watch them as it is. So I spend my time trying to turn people on to them. I've been working on the technical issues involved in making the EOS-M a good RAW shooter. If I succeed, some filmmaker might use the camera to shoot a reel that gets them the opportunity to make a film I would call a classic. Certainly, I would love to shoot something worthwhile with it. We are all part of each other's narratives. Or as one of my friends so wittily put it, "we all play bit-parts in each other's screenplays."
  5. Here's the Windows version   http://www.adobe.com/support/downloads/product.jsp?product=106&platform=Windows
  6. I started watching La Jetée last night, but my daughter said the soundtrack was going to give her nightmares :) But I could already see that it deserves to be on the "minimalist" list. I went to see Rush last night with my brother-in-law. It looks like it was shot on high-ISO film stock. Afterwards, I looked it up and discovered they shot on a multitude of cameras, from Arris down to 5Ds. Lots of FX too, from backgrounds to crash sequences. I enjoyed the film, and what I thought impressive is how the technology never interfered; that is, I never felt something was done for technology's sake. They could have used a 5D3 with the ML hack, but they used an Arri, because that is what was available to them, and they used the best tool in their budget. That didn't stop them from using 5Ds for other stuff, or other cameras. Most important, they didn't use film, though they lovingly recreated that look. JG, my oldest daughter has probably watched Kill Bill a gazillion times. She's in art school now. I play that old fogey: "that stuff is stupid" :) But I agree, Pulp Fiction is straight-up great film-making.
  7. LOL. @jsc Thanks! I used ACR once, have been looking for the best de-bayering solution ever since, and had forgotten about it. Will go back to it!
  8. I have some links on my EOS-M shooter's guide that you might find worth reading: http://www.magiclantern.fm/forum/index.php?topic=8825.msg82944#msg82944 The short answer is that ALL video you see from cameras under $5,000 has smoothed individual pixels into blocks of sampled chroma (there's a toy sketch of this below). ISO doesn't really mean much with RAW, except for where the sensor will place its 14-bit (0 to 16,383) range of values. What you're looking at is what the camera sees BEFORE it is encoded into H.264, MJPEG, or whatever. It's up to you how hard you want to smooth out your pixel values. Neat works well, but is a subject unto itself. (I still haven't been able to spend the time to learn all its settings.) What you're seeing isn't a defect, it's a feature :) Again, raw2gpcf is limited because it makes all the RAW-to-image decisions for you. I agree, it's easy, and it's good for me. But if you really want to finesse your image you need to go RAW to DNG to (debayering algo) to TIFF (or another intermediary) to NLE. Hope this helps.
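To make the "blocks of sampled chroma" point concrete, here is a toy numpy sketch (my own illustration, not any camera's actual firmware) of what 4:2:0 subsampling does: luma is kept per pixel, but each 2x2 block of pixels ends up sharing a single chroma sample.

```python
# Toy illustration of 4:2:0 chroma subsampling: full-resolution luma,
# one shared chroma sample per 2x2 pixel block. Not any camera's real code.
import numpy as np

h, w = 4, 4
rng = np.random.default_rng(0)
y  = rng.integers(0, 256, (h, w))                  # luma: one value per pixel
cb = rng.integers(0, 256, (h, w)).astype(float)    # blue-difference chroma
cr = rng.integers(0, 256, (h, w)).astype(float)    # red-difference chroma

def subsample_420(c):
    # Average each 2x2 block, then spread that average back over the block.
    block = c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(block, 2, axis=0), 2, axis=1)

cb420, cr420 = subsample_420(cb), subsample_420(cr)

print(y[:2, :2])                          # luma survives at full resolution
print(cb[:2, :2], "->", cb420[:2, :2])    # chroma is now one value per 2x2 block
```

Four neighboring pixels carry identical color information after encoding, which is exactly why pixel-level chroma detail is gone in consumer footage.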
  9. I can't watch Tarkovsky's films either. To me, he's the antithesis of the tech guy who talks nothing but tech. With Tarkovsky, it's nothing but personal expression. "American Beauty" came out just before I read that book. That movie left me with mixed feelings, which I couldn't articulate. Tarkovsky articulated the problem I had with that film perfectly, which is that the more clichés and tropes you use, the further you get from the core thing you want to express. I looked up "Stories We Tell". It's on Amazon, so I will watch it soon. Thanks for the suggestion! Agreed, that bio of Cardiff should be on any film-maker's list of docs to watch. The Archers made so many interesting films. The Coen brothers of their day.
  10. Let me recommend a book I think every film-maker should read: http://www.amazon.com/Sculpting-Time-Tarkovsky-Filmaker-Discusses/dp/0292776241 In my 20s my girlfriend took me to her favorite film, The Sacrifice. I have never sat through a longer, more boring two hours in my life! When I later read that they forgot to load film into the camera before shooting the burning house, and had to shoot another burning house, I was not in the least surprised. I made a mental note that Tarkovsky was the WORST filmmaker ever. Then I read the book and watched some of his other films. What I love about this site is that it talks about film-making technology in a serious way. People who think it's about one camera vs. another are COMPLETELY missing the point. The question being asked is, "What is the best technological approach, and trade-off, I should take for what I want to shoot?" Let me finish my rant with this. At the Dartmouth Film Society they showed "Black Narcissus". There are so few real film theaters around that I was very excited. I wished my oldest kids were around so I could take them. Anyway, I went by myself and watched it. It didn't look good at all. I thought, "I guess digital technology has advanced so far that Technicolor now looks like crap." I remembered seeing the film in NYC and being blown away by its cinematography. Afterwards, I asked what they showed it on. The young woman (a student) said, "DVD. We wanted to show it on film, but the projector was broken. And then our Blu-ray wasn't working. Fortunately, we had a DVD too." I was floored. It wasn't that a film society showed one of the greatest Technicolor masterpieces on DVD; it was that they did not tell the audience. People went away not having experienced the technology/story as it was meant to be shown. That's the way I feel reading many of those off-topic posts. Many readers don't know the beauty of real film because it has been swamped by digital content. Eventually, everyone learns from their mistakes, just like I did about Tarkovsky and the real goal of filmmaking. Many posters here who go off-topic will one day get it. Now, to get back on topic: what are the best films that use the most minimal technology? I'd love to see a best-10 list. I'll start with The Fast Runner.
  11. Hi richg, having built a crude version of a DOF adapter myself, I can tell you that the reason the glass is vibrated is to stop the same grain from showing in the frame. The ground glass has very fine grain, to diffract the light enough for it to be focused on. If you don't vibrate it, you see the grain in the same place. Vibrating eliminates seeing the ground-glass graininess, though you still get "graininess," because the ground glass/vibration is adding softness to the image. I have studied the moiré problem in some depth. I have tried many kinds of software filters, bent plastic, and other goofy ideas. I have also thought of the vibration idea, but feel it is no different than not pulling perfect focus or using a heavy soft filter. I've created many raw (DNG) images of moiré-inducing charts. My sense is that many of the aliasing effects have a similar interference pattern to them. I believe software can eliminate much of it; indeed, I believe the camera manufacturers have it as part of their firmware when converting to H.264. Wish I could find some open-source versions. I want to point out, to anyone starting with this, that the de-bayering algo greatly influences the severity of the chroma problems here. AMaZE and LMMSE work well for me. There are tons of chroma-smoothing, etc., algorithms, but they all work on a center pixel or block of pixels. An algo to deal with moiré needs to work on multi-pixel lines of these interference patterns. It needs to scan a line and say, "these 8 pixels of RGRGRGRG are too bright," and interpolate them to match the lines above and below (see the toy sketch below). The logic needs to think about the problem in lines. All that said, this must be a VERY difficult problem. The Sony VG cameras, which aren't cheap, suffer from moiré, and you would think that if Sony could fix it, they would. The filters from Mosaic may be the only solution. However, they need to be taken out before photos are taken, or one needs to live with the fact that the image must be degraded enough to prevent a lot of visible moiré in video, which can be seen in high-resolution photo mode.
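Here is a toy numpy sketch of that line-based idea; the brightness threshold and the minimum run length are made-up numbers, and real aliasing detection would need to be far more robust than this.

```python
# Toy sketch of line-based moire repair: find runs of pixels in a row that are
# much brighter than the rows above and below, and interpolate them away.
# Threshold and run length are arbitrary; real footage needs smarter detection.
import numpy as np

def repair_rows(img, thresh=1.5, min_run=8):
    out = img.astype(float).copy()
    for r in range(1, img.shape[0] - 1):
        neighbors = (out[r - 1] + out[r + 1]) / 2.0
        hot = out[r] > thresh * (neighbors + 1e-6)   # suspiciously bright pixels
        # Walk the row looking for runs of "hot" pixels long enough to be an
        # interference line rather than real detail.
        run_start = None
        for c in range(img.shape[1] + 1):
            if c < img.shape[1] and hot[c]:
                run_start = c if run_start is None else run_start
            elif run_start is not None:
                if c - run_start >= min_run:
                    # Replace the run with the average of the lines above/below.
                    out[r, run_start:c] = neighbors[run_start:c]
                run_start = None
    return out
```

The point is only that the detection and the fix both operate on horizontal runs, not on isolated pixels or square blocks.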
  12. The main reason they used DV cameras was that they had only a few minutes to shoot once they closed down the streets. The cameras were cheap, so they could put them in many places. I doubt they would have been able to afford 10 Alexas every day :) What people new to this discussion should know is that when someone says "RAW can deliver X quality I couldn't get before," someone will say, "Nonsense, I can get that same quality with my hacked GH2." It's tantamount to calling Andrew a liar. So I understand why he hits them back with "puke" and "trash". It doesn't matter that he covers all cameras and has NEVER said the opposite, that RAW shooting is as easy as a GH3 :)
  13. Hi QuickHitRecord, I made good progress on my effort to create a single EOS-M-to-ProRes (equivalent) workflow last night. I'm able to read RAW frames into a format in which I can fix the focus-pixel issue (a sketch of the basic repair is below). This should also work with the 650D/T4i and 700D/T5i RAW output.
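For anyone curious, the basic repair looks something like this sketch, which works on the raw Bayer data before debayering. The focus-pixel coordinate list is a placeholder, since the real map depends on the sensor and crop mode.

```python
# Sketch of focus-pixel repair on raw Bayer data: replace each known focus
# pixel with the average of its same-color neighbors (2 pixels away on the
# Bayer grid). The coordinate list is a placeholder, not a real pixel map.
import numpy as np

FOCUS_PIXELS = [(120, 344), (120, 352), (184, 344)]  # hypothetical (row, col)

def fix_focus_pixels(bayer):
    fixed = bayer.copy()
    h, w = bayer.shape
    for r, c in FOCUS_PIXELS:
        # Same-color neighbors sit two pixels away in each direction.
        samples = [bayer[r + dr, c + dc]
                   for dr, dc in ((-2, 0), (2, 0), (0, -2), (0, 2))
                   if 0 <= r + dr < h and 0 <= c + dc < w]
        fixed[r, c] = np.mean(samples)
    return fixed
```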
  14. You can't! In the way I mean it, 14 bits of each primary color. Looks like I have to go into those "complications". Camera sensors are monochrome. They read light by placing little filters over each pixel: either red, green, or blue. Each pixel then "borrows" the 2 other colors it doesn't have. So if it is a red pixel, it takes the green and blue color information from neighboring pixels to create a full 24-bit color. (BTW, they don't work with RGB but YUV, oh this stuff is so f'ing torturous!) But, for explanation's sake... Let's say we're in a perfect world. You have 3 color values, each from 0 to roughly 16,000 (red, green, or blue). That means, from those, you can create full color at 16k x 16k x 16k depth, or about 4 trillion combinations! You can't discern 4 trillion colors. So now you have more color information than you can physically see. In the end, we always need to reduce to 16 million. Here's the rub. You can't see 4 trillion colors. The camera can record the 16 million you can see in 8-bit video. So what's the problem? The camera may not choose the 16 million color values you would choose from a palette of 4 trillion. As the article shows, it is never smart enough to do that. RAW allows you to SELECT which colors to scale down to for your 16-million-color painting. As Andrew said, do you want to start with 4 shades of pink, or 255? It's all about CHOICE in what you want your final 8-bit-per-channel image to be. Are we getting there?
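For anyone who wants to check the arithmetic, here it is spelled out, using the exact power-of-two values rather than my rounded 16k.

```python
# The color-depth arithmetic from the post, spelled out.
per_channel_14bit = 2 ** 14           # 16,384 values per channel (0 to 16,383)
total_14bit = per_channel_14bit ** 3  # ~4.4 trillion possible colors
per_channel_8bit = 2 ** 8             # 256 values per channel
total_8bit = per_channel_8bit ** 3    # ~16.7 million colors
print(total_14bit, total_8bit)        # 4398046511104  16777216
```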
  15. That tripped me up too. When we speak of 8 bits for monitors (integers 0-255), we mean per color channel. As you'll see when you set your display settings, you want 24-bit (3 x 8 bits). 8-bit video delivers about 16 million color values--roughly the range of human vision. When your camera takes a photo/video frame, each sensor pixel is taking a 14-bit reading, that is, 0 to 16,383 (or something like that). Each pixel actually reads only a red, green, or blue value. Another complication. Anyway, that number is ultimately converted to 0 to 255, so you're giving up a lot of accuracy about just how much color there was. In RAW video, you get those 14-bit values BEFORE the camera converts them into 8-bit equivalents (see the sketch below for what that conversion costs). Complex subject; hope this gets you on the right track.
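Here is a tiny sketch of what that 14-bit-to-8-bit conversion costs, assuming a plain linear scale; real cameras apply tone curves, which is exactly where the "choice" comes in.

```python
# What 14-bit -> 8-bit costs under a plain linear scale: roughly 64 distinct
# sensor readings collapse into each 8-bit code (16384 / 256 = 64). A tone
# curve spends the 256 codes differently; deciding WHERE to spend them is
# the choice that RAW keeps open.
raw_a, raw_b = 8000, 8063             # two different 14-bit sensor readings

def linear(v):
    return round(v / 16383 * 255)

def gamma(v):
    return round((v / 16383) ** (1 / 2.2) * 255)

print(linear(raw_a), linear(raw_b))   # 125 125 -- identical after linear scaling
print(gamma(raw_a), gamma(raw_b))     # 184 185 -- a gamma curve keeps these midtones apart
```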
  16. Finally! :) If Andrew can convince you, he can convince anyone! (Of course, if I was shooting my niece or nephew's concert I'd rather have a G6.) Another awesome article! A comment on this: "Both JPEGs and raw video start off as raw image data but because the 5D Mark III does not shoot JPEGs at 24fps, it has time to think. Therefore the debayering quality is far better on a JPEG than it is on a frame of video." In this context, there is probably little difference in debayering quality between photo and video (though, of course, debayering in post is always preferable). A more important difference is the size of each photo/frame saved as JPG. I'm going to have to work with numbers off the top of my head, but they should hold up. Anyway, a 1920x1080 photo or video image would probably take up at least 1 megabyte of JPG space. That means 24 megabytes per second if you're using those images for video. Even if motion compression can reduce that 50%, you're still dealing with 12MB x 60, or 720 megabytes per minute. Let's just say a gigabyte (the arithmetic is spelled out below). Supposedly, All-I should be able to do a version of this, but the footage I see never looks very good. I believe the reason is that All-I video is like compressing frames at the highest-compression, lowest-quality settings. Even today, most consumer cameras seem to keep to that 28Mbit/sec video rate. There's just no way to fit photo-quality video in at that rate. Most people, myself included, wonder why they can't come up with, say, 6MB/s, or something like that. I think the reason is the time it takes to compress. Too much CPU power is needed. Anyone who has ever compressed anything knows it takes no time to decompress, but a lot of time TO compress. The only thing the camera manufacturers can do (to keep the lowest common denominator of shooters happy) is, as Andrew elegantly puts it, "trash" the image (throw out data before compression).
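Here are those back-of-the-envelope numbers written out; the 1MB-per-frame JPEG size is my rough assumption from above, not a measured figure.

```python
# Back-of-the-envelope bitrate math from the post. The 1 MB/frame JPEG size
# is a rough assumption, not a measured figure.
mb_per_frame = 1.0                   # ~1 MB for a 1920x1080 JPEG
fps = 24

mb_per_sec = mb_per_frame * fps      # 24 MB/s as a Motion-JPEG-style stream
with_motion_comp = mb_per_sec * 0.5  # 12 MB/s if inter-frame compression halves it
mb_per_min = with_motion_comp * 60   # 720 MB/min -- call it a gigabyte

typical_h264 = 28 / 8                # 28 Mbit/s consumer H.264 = only 3.5 MB/s
print(mb_per_sec, mb_per_min, typical_h264)
```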
  17. I don't believe anyone chooses to be a fan-boy; I think it's born out of frustration. You want to feel you have the right machine for your vision. Even if everyone on this thread owned all three cameras, they would be tortured if a friend said, "Let's go for a hike, bring A camera" (not 3 of them). For me, I would think, hm, I want to take some really great videos of the mountain ranges (5D3), and some photos! No, I want to also take some nice high-dynamic-range shots of us walking through the woods (BMPCC). No, I want to document our trip with interviews and silly banter (GH3). There is NO PERFECT camera for every situation. There are only painful decisions (made worse if you don't OWN the camera you would pick for THAT particular event!) Seriously, who among us wouldn't find the whole decision-making process completely torturous? Who hasn't brought their RAW camera to an event that ran too long, when the H.264 camera would have been better? Or taken their H.264 camera to an event where they ended up wanting a perfect 30-second clip? (Finally, Matt, I didn't mean to suggest you didn't know your stuff!)
  18. Another reason I believe this rumor to be either true or on the right track is that, face it, no one buys Panasonic cameras primarily to take photos with. The Sony camera is going to take some wind out of Panasonic's sails. There are focus-pixel issues that make RAW post-processing difficult on Canon's hybrid-focus cameras, but Canon could fix that in a heartbeat if they wanted. They could put out a firmware upgrade tomorrow, or a camera, that would do moiré-less 14-bit 720p in crop mode. Obviously, this has limitations. A big question, which I'd love to hear your thoughts on, is whether it's possible to get a good undistorted (minimal-aliasing) image from an APS-C sensor through line-skipping (in Canon's current line-up/manufacturing). If it is, Panasonic has to be worried. If not, they know that Canon and Sony must beef up their chips and processing power to sample all pixels on their APS-C platform. Normally, the industrial division of a company would be loath to see their features in consumer cameras. But for Panasonic, a loss of leadership in consumer video, where they're strongest, would put questions in professionals' minds about their high-end cameras. If there is a market for the BMPCC, and it drops to $800 in a year, say, the whole Panasonic G line would be crushed (for video), IMHO. Many would just move their lenses over.
  19. Matt, you, like every person new to photography/video, think always in finished images, usually JPEGs. Consumer cameras (photo or video) take an image with a sensor and compress it down to a manageable file size, both for photo and video. Welcome to a world of confusion and pain :) The larger the sensor, the better the low-light performance, because the sensor pixels are larger and gather more light. The better the processing/compression of an image (in camera software), the better the low-light appearance. So it's possible that a badly processed image from a full-frame camera might not be as good as a perfectly processed image from a small-sensor camera (though usually that isn't the case). If you don't compress the original image, or you save more information when you do, it's possible to get better low-light results later on. In fact, it's possible to do a lot of things--but at the expense of faster cards, cameras, PCs, etc. In short, I wouldn't be surprised if the BMPCC gives better low-light performance than a 5D3 in H.264 mode, because the (small-sensor) BMPCC is saving all the sensor information (RAW), and the (full-frame) 5D3 is throwing a lot of it out to stay within certain file-size/bandwidth requirements. However, the 5D3 in RAW mode would probably do better than the BMPCC. Hope that helps.
  20. What I loved about this review is that Andrew assumed his readers would understand the trade-offs between the GH3, BMPCC, and 5D3. I don't look at the GH3 footage to answer the question of whether it is as "good" as 5D3 RAW, but whether the speed, ease of use, etc., benefits of the camera would be worth it for what I'm thinking about shooting. D.L. has shot both 50D RAW and GH3. He has chosen the GH3. That carries a lot of weight with me because 1) his talent is obvious to anyone who watches his stuff, and 2) he spent a lot of time with the 50D RAW, and even shot a whole piece with it. Does that mean the GH3 is for me? No, but who knows what I'll want tomorrow? As for other cameras, these are the 3 most representative DSLR-type consumer cameras.
  21. If you want portability, the EOS-M will do BMPCC-like dynamic range at 720p, and you can use all your lenses. If you can live with 720p and some extra post-processing work, bodies are about $250. I maintain a thread on ML here: http://www.magiclantern.fm/forum/index.php?topic=8825.msg82944#msg82944 (hope you don't mind the link, Andrew)
  22. A lot of credit goes to Andy, who really drives the conversation deeper. This forum would be a pretty dull place without him. I mean that sincerely.
  23. Cool, I get to post the first comment, which is: SIMPLY F'ING AWESOME! I don't know how you find the time to do all this, but this is what I've wanted to know for months! Finally I can get on with my life. THANK YOU!
  24. I think what Axel is saying, citizenkaden, is that if you want a nice 3D-like chroma key, with the background wrapping around the actors, you would build the set like the one pictured. The problem with such a set is that the green side walls will reflect green light onto the actors, and that will confuse the chroma key. So you want the software to differentiate between green on the screen and green that has been "spilled" onto the actors. When I first read about this I thought it was stupid--until I tried to shoot green screen in a small room and it kept spilling onto arms and hair (that is, a green reflection). If the OP can't place his actors far enough from the green screen, he/she will run into this problem. dishe's link clearly explains why 4:2:0 does not give the software accurate pixel-level chroma to work with. Even 4:2:2 is not perfect. In practical terms I would say this. SCENE A: Actors are in a big sword fight on some crazy tropical island. The green screen is in a large room with plenty of room for lights, actors, etc. A G6 would be fine. SCENE B: A man and woman are having an intimate talk aboard a "starship" with a window overlooking space. Plenty of closeups. The woman has fine, flowing hair. Then the BMCC xenogears mentioned would be, if you ask me, the only chance you have of making it look real. (There's a toy despill sketch below.)
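And here is the toy despill sketch I mentioned: the classic trick of clamping green to the larger of red and blue on foreground pixels. Real keyers are far more sophisticated; this is just the idea, not any product's algorithm.

```python
# Toy despill: green that exceeds max(red, blue) on a foreground pixel is
# assumed to be screen spill and clamped back down. Real keyers are subtler.
import numpy as np

def despill_green(rgb):
    out = rgb.astype(float).copy()
    r, g, b = out[..., 0], out[..., 1], out[..., 2]
    limit = np.maximum(r, b)
    out[..., 1] = np.minimum(g, limit)   # clamp spilled green
    return out.astype(rgb.dtype)
```

On a skin or hair pixel, red and blue stay untouched while the green reflection gets pulled back, which is why the key stops "seeing" green on the actors.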