
maxotics

Members · 957 posts

Reputation Activity

  1. Like
    maxotics reacted to Michael1 in Best DSLR For Property Video?   
    You won't get full resolution without good lighting, as maxotics pointed out.  In low light, the camera goes into noise reduction mode, and there goes your resolution.
     
    For stills, a full frame would work best, both from a low-light performance perspective and from the standpoint of getting wide shots with less degradation in the image, including less of a fisheye look.  There are some good-quality, modestly priced, fast full-frame lenses, but you would have to watch your depth of field.  In movie mode, the full frames still do fairly well in low light, although the difference isn't as great compared to crop-sensor cameras, since full frames throw away some of the light with sensor line skipping.
     
    If you want the ultimate in sharpness, I would be looking at the new Panasonic GH4 when it comes out next month, with its 4K resolution.  You would get full 1080 if downres'd in post, too, something I have yet to see with any 1080 camera output (yes, the file is 1080, but the image is not).
     
    Michael
  2. Like
    maxotics got a reaction from Jolley in Best DSLR For Property Video?   
    I've been working on house photography lately.  You might find some of this interesting: http://maxotics.com/?p=199  Here are my suggestions.
     
    1. Almost no house looks good in direct sunlight.  Shoot either early morning, golden hour or when cloudy. If you must shoot in direct sunlight, as Pascal says,  don't let all your shadows go to black.
     
    2. I used to shoot some house videos with an EOS-M or NEX-7 (both worked great).  I bought some used Smith-Victor quartz lights for about $100.  I mostly shone them at the ceiling and let bounce light fill the room.  The windows will usually all blow out, but at least the interior will have less noise.  
     
    3. Mind your lines! Picasso said if you want someone to look at your painting, hang it crooked.  However, I think he meant just a pinch ;)  In any case, make sure everything you shoot, photo or video, has good lines.  (I've been working on this and feel it is something I'll be working on for the rest of my life.)  Above all, you want the viewer to feel the house is "straight". 
     
    4. If you have the money, get a Blackmagic Pocket Cinema Camera and an 8mm lens.  That will give you about 24mm.  With lights and a fluid-head tripod you will get amazing quality.  Just shoot in ProRes and do the auto thing in Resolve.  Extra work, but NO ONE will touch your videos in quality.  Will the difference show on YouTube?  Yes (because H.264 cameras are shadow/detail killers--unless you're lit perfectly within a few stops).  
     
    JCS is my God lately, so I don't want to disagree, but I think any sort of speed-booster will have limited use because, except for those cool shallow depth of field shots of a flower pot on the window-sill, you want depth.  For that you need LIGHTS.  Maybe I didn't say that enough.  If you had to get anything I'd get a bunch of small and large flat-panel lights.  Batteries wouldn't be bad.  My biggest problem was running wire from outlets.  
     
    I guess that's enough of my silly advice for now.
     
    My favorite quote, "Amateurs talk bodies, professionals talk glass and photographers talk light."  LIGHT, LIGHT, LIGHT!  
  3. Like
    maxotics reacted to jcs in What does Pixel Format (8bit-32bit Floating point Video levels-Full Range) mean?   
    There was a time when integer/fixed-point math was faster than floating point. Today, floating point is much faster. GPU-accelerated apps such as Premiere, Resolve, and (FCPX?) always operate in floating point. The only area where 8-bit can be faster is on slower systems where memory bandwidth is the bottleneck (or really old/legacy systems, perhaps After Effects).
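     
    As a rough illustration of the speed/precision trade-off jcs describes (my own sketch, not from the post; the frame size and the operation are purely illustrative):

```python
import time
import numpy as np

frame8  = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)  # 8-bit frame
frame32 = frame8.astype(np.float32) / 255.0   # 4x the memory, but full precision

def avg_ms(fn, n=10):
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) * 1000 / n

# Integer path: every step must widen, clip and round back into 0..255.
t_int = avg_ms(lambda: np.clip(frame8.astype(np.int16) + 20, 0, 255).astype(np.uint8))
# Float path: just add; values outside 0..1 survive for later grading steps.
t_flt = avg_ms(lambda: frame32 + 20 / 255.0)

print(f"8-bit integer pass: {t_int:.1f} ms, 32-bit float pass: {t_flt:.1f} ms")
```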
  4. Like
    maxotics reacted to jcs in getting the best footage on Vimeo   
    Last I checked both YouTube and Vimeo use customized ffmpeg (with x264) to transcode. x264 has been the best H.264 encoder for a while now. Thus if you want the most efficient upload you could use any tool which uses a recent version of ffmpeg (rendering out ProRes/DNxHD then using Handbrake is a decent way to go).

    The challenge with your footage is high detail and fast motion. Adding grain or more detail (by itself) can make it worse. In order to help H.264 compress more efficiently in this case you need less detail in the high motion areas. You can achieve this by first shooting with a slower shutter speed (1/48 or even slower if possible). Next, use a tool in post which allows you to add motion blur. In this example you could cheat and use tools to mask off the skateboarder and car and Gaussian blur everything else in motion (mostly the sides but not so much the center/background). You could also apply Neat Video to remove noise and high-frequency detail (in the moving regions only) and not use any additional motion blur, as this will affect the energy/tension of the shot (though adding more blur to motion will help the most).

    Once you have effectively lowered detail in the high motion areas (however achieved), H.264 will be able to better preserve detail for the lower motion areas- the skateboard, car, and distant background.
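     
    If you go the ffmpeg route mentioned above, here's a minimal sketch (my own, not from the post) of driving an x264 encode from Python for the upload file; the filename and settings are illustrative and assume ffmpeg with libx264 is installed:

```python
import subprocess

cmd = [
    "ffmpeg", "-i", "master_prores.mov",    # hypothetical ProRes/DNxHD master
    "-c:v", "libx264", "-preset", "slow",   # slower preset = more efficient compression
    "-crf", "18",                           # quality-targeted rate control
    "-pix_fmt", "yuv420p",                  # 4:2:0, what YouTube/Vimeo expect
    "-c:a", "aac", "-b:a", "256k",
    "upload_h264.mp4",
]
subprocess.run(cmd, check=True)
```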
  5. Like
    maxotics reacted to see ya in What does Pixel Format (8bit-32bit Floating point Video levels-Full Range) mean?   
    32-bit floating point is higher-precision color processing than 8-bit (float versus integer precision), and 32-bit processing is also usually done in the linear domain rather than on gamma-encoded image data. In 32-bit float there should be no loss of data from clipping: image values can be negative or greater than 1.0. You won't see that on your monitor, and it will look like clipping on your scopes, but as you grade you'll see the data move into and out of scope range. 8-bit processing, by contrast, clips below 0 and above 1, i.e. 0 to 255 in 8-bit terms.
     
    Full versus video levels: whether the image is encoded in camera with an RGB-to-YCbCr conversion that puts the derived luma and chroma-difference values over limited range or full range, your aim is to do the correct reverse conversion for RGB preview on your monitor.
     
    You can monitor/preview and work with either limited or full range as long as you are aware of what your monitor expects, it is calibrated accordingly, and you feed it the right range. If you're unsure, use video levels. Video export 'should' be limited range, certainly for final delivery; use full range only if you're sure of correct handling further along the chain. For example, to grade in BM Resolve you can set 'video' or 'data' interpretation of the source.
     
    Your 1DC motion JPEGs are full range YCbCr, but as the chroma is normalized over the full 8-bit range along with luma (JPEG/JFIF), it's kind of equivalent to limited range YCbCr, and the MOV container is flagged full range anyway, so as soon as you import it into an NLE it will be scaled into limited range (video levels) YCbCr. Canon DSLR, Nikon DSLR and GH3 MOVs are all H.264 JPEG/JFIF, flagged 'full range' in the container, interpreted as limited range in the NLE, etc.
     
    What you want to avoid is scaling levels back and forth through the chain from graphics card to monitor, including ICC profiles and OS related color management screwing with it on the way as well.
     
    You may also have to contend with limited versus full range RGB levels as well, depending on the interface you're using from your graphics card (DVI versus HDMI, for example); NVIDIA cards can feed limited range RGB over DVI and full range over HDMI.
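     
    For what it's worth, a small sketch of the limited/full range luma scaling described above (my own illustration, 8-bit luma only; real conversions also handle chroma and rounding more carefully):

```python
import numpy as np

def full_to_limited(y_full):
    """Map full-range 0..255 luma codes into limited ('video') range 16..235."""
    y = y_full.astype(np.float32)
    return np.clip(np.round(16 + y * (235 - 16) / 255.0), 0, 255).astype(np.uint8)

def limited_to_full(y_limited):
    """Reverse mapping back to full range for display on a full-range monitor."""
    y = y_limited.astype(np.float32)
    return np.clip(np.round((y - 16) * 255.0 / (235 - 16)), 0, 255).astype(np.uint8)

y = np.array([0, 128, 255], dtype=np.uint8)
print(full_to_limited(y))                   # black/mid/white land on 16/126/235
print(limited_to_full(full_to_limited(y)))  # round trips in 8-bit collapse some codes (256 values into 220)
```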
  6. Like
    maxotics reacted to Shirozina in Move out of full frame system?   
    You want the shallow DOF look, so stick with full frame, get a 5D3 and experiment with MLRAW. Getting shallow DOF from M4/3 requires expensive and exotic fast glass, and even then it's debatable if it actually convincingly achieves the 'full frame look'. I doubt the GH4 will have significantly better DR than the GH3 - Panasonic claim 1/3 stop better DR for RAW and this may translate into nothing for video. Stills quality from a 5D2 is better than from a GH3 (speaking from owning both), so that's another reason for you not to jump. I've moved my video capture like this: 5D2 - GH3 - BMPCC. I actually hate the shallow DOF cinema style and so the BMPCC suits me perfectly. Once you have used 10-bit 4:2:2 in post you will never want to deal with 8-bit 4:2:0 again if you can help it. For stills I still use the Canon system, although I may get a Sony A7R and EOS adapter as Canon don't seem in any hurry to update the 5D3 to competitive levels of DR or MP...
  7. Like
    maxotics got a reaction from yiomo in Move out of full frame system?   
    No camera will give you the best of both worlds, whether video/photo or portability/professional features.
     
    I have gone through a lot of cameras.  For background: I currently use a Sigma DP1M for medium-format still quality in a small/inexpensive package.  I use a BMPCC with a 14-45 for video.  I use a Nikon D600 with a 24-85 or 85mm 1.4 for portraits.  (I also have 24mm and 50mm primes which I don't use that much anymore.)  I have an EOS-M, which I use as a Magic Lantern camera, etc.  I also have a GF3 with a 14mm pancake which my daughter mostly uses now.
     
    What you should keep in mind about this forum is that it is video focused.  So most people, like my hero Andy :), will be more biased (though also more insightful) about video.  
     
    I have ABSOLUTELY no desire to carry, or pay for, full-frame cameras.  My experience is that the dynamic range, low-noise and color saturation benefits are real.  If you are serious about still photography, can afford it, and weight is not a factor, then full-frame is what you would use.  You can get shallower DOF in full-frame, as you point out, but that is NOT why I use it.  
     
    Panasonic makes superb interchangeable lens video cameras that shoot decent stills.  As a stills camera, in less than perfect light, I believe the quality from your 5D will be much better, especially if you print large.
     
    My biggest problem in suggesting you go with MFT is that I, too, wanted to go backward from my Sigmas and an original 5D I had.  Once you've worked with full-frame images you see that 3-D look, and you (or I) get very fussy when it is harder to get back.
     
    As a video camera, I would get the GH3 over the 5D (native video).  But I would not get the GH3 over a 5D hacked with Magic Lantern to shoot RAW.  So you should consider getting a 1000x CF card and learning ML and DaVinci Resolve (or an ACR workflow).
     
    That is another thing to consider on this forum.  There are people who shoot mostly Panasonic H.264 video (which is best in class) and those who shoot RAW.  Panasonic lets you focus on shooting, composition, editing, etc.  HOWEVER, you'd have to be blind to not see the difference between H.264 shot footage and RAW based footage.  There's a lot to read on this forum about all that.  
     
    If you want to focus on video, your plan has merit.  If photography is important, prepare for potential regret.   I love my BMPCC.  You can use Canon lenses on it with an adapter.  
  8. Like
    maxotics reacted to Andrew Reid in Video quality charts - February 2014   
    Pretty much a NEX 5N with a worse form factor for a lot more money, so I've never used one, therefore I wouldn't know where to place it on the chart. Pretty close to the bottom I'd say.
  9. Like
    maxotics reacted to fuzzynormal in Ready to Invest in Some Primes   
    That's probably a better phrase than "pixel peeking" to be honest.
  10. Like
    maxotics reacted to QuickHitRecord in Ready to Invest in Some Primes   
    No one has mentioned Nikkor AI and AI-S primes yet. I have a set of five, cine-modded by Duclos, and I have been pleased with them; first on my GH2, then on my FS100, and finally on my 5D3. I've also had them on a RED Scarlet, and they looked great at 4K. They are compact but well-built, and the lens characteristics match closely across the set. The only downside that I can think of is that the focus ring goes the other way, making a reversible follow focus a necessity.  
     
    Here's a nice rundown on them by Caleb Pike of DSLRVideoShooter:
     

  11. Like
    maxotics reacted to kadajawi in A very short Nikon D4S review - poor video quality yet again!   
    According to interviews etc. they do understand. They also say that it is hard to get full sensor readout for such a big sensor, so aliasing is to be expected. Basically, since this is just a mild facelift/update they can only do so much to the video functionality, and pray that people will still buy it while they work on the next-generation camera, which will hopefully be much more competitive. Meanwhile enjoy the D5300 and soon (?) the D7200.
  12. Like
    maxotics got a reaction from skiphunt in New Sony $2k 4k vs GH4k   
    I couldn't agree more, Skip.  I was trying to point out that Panasonic, or MFT cameras, are fighting an uphill marketing battle.  When they came out there was hope they would reach full-frame quality in low-light, etc.  Right or wrong, I get the sense more photographers want full-frame, whether they can shoot or not is another question.  When the GH2 came out, it appealed to both photographers and videographers because it did each one reasonably well.  Since then, high-end photographic camera users are moving away from MFT, which makes the GH4 less desirable for photos (again, not saying you can't take great photos with it).  So that leaves the GH4 having to fight better in the video space.  
     
    I have no idea how it will play out.  As Andrew pointed out, the stills from the 4K cameras are as good as most cameras' (though not as good as full-frames' in special circumstances). I do think it is possible 4K camcorders could make a resurgence and be preferable to a GH4.  
     
    If the Sony 4K downsampled gave me more of a Blackmagic look, I'd want that camcorder convenience.  Hand-held, nothing beats a camcorder like nothing beats a minivan if you have kids ;)  
     
    Or let me say, I know people who shoot video on DV tapes that run circles around those professionals shooting 4K ;)
  13. Like
    maxotics reacted to jcs in Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4   
    Julian's images: saving the 4K example at quality 6 creates DCT macroblock artifacts that don't show up in the 444 example at quality 10. All the images posted are 420: that's JPG. To compare the 1080p 444 example to the 4K 420 example: bicubic scale up the 1080p image to match exactly the same image region as the 4K image (the examples posted are different regions and scale). The 1080p image will be slightly softer but should have less noise and artifacts. Combining both images as layers in an image editor, then computing the difference (and scaling the brightness/gamma up) so the changes are clearly visible, will help show exactly what has happened numerically; helpful if the differences aren't very obvious on visual inspection.
     
    We agree that 420 4K scaled to 1080p 444 will look better than 1080p captured at 420 (you need to shoot a scene with the camera on a tripod and compare A/B to really see the benefits clearly). 444 has full color sampling per pixel vs 420 having 1/4 the color sampling (1/2 vertical and 1/2 horizontal). My point is that we're not really getting any significant per-channel bit-depth improvement of the kind that allows significant post-grading latitude as provided by a native 10-bit capture (at best there's ~8.5-9 bits of information encoded after this process: it will be hard to see much difference when viewed normally vs. via analysis). Another thing to keep in mind is that images with more than 8 bits per channel (24-bit total), e.g. 10-bit (30-bit), need a 10-bit graphics card and monitor to view. Very few folks have 10-bit systems (I have a 10-bit graphics card in one of my machines, but am using 8-bit displays). On 8-bit systems, >8-bit images need to be dithered and/or tone mapped to 8-bit to take advantage of the extra information. Everything currently viewable on the internet is 8-bit (24-bit) and almost all 420 (JPG and H.264).
     
    re: H.264 being less than 8 bits: it's effectively a lot less than 8 bits, not only from the initial DCT quantization and compression (for the macroblocks), but also from the motion vector estimation, motion compensation, and macroblock reconstruction (which includes fixing the macroblock edges on higher quality decoders).
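     
    A rough numerical sketch of the averaging step being discussed (mine, not from jcs's post; the array sizes and random data are illustrative): averaging 2x2 blocks of 8-bit 4K samples down to 1080p produces quarter-step values that need roughly two extra bits to store, but it cannot add information beyond what the 8-bit source captured.

```python
import numpy as np

# One 8-bit plane of a "4K" (UHD) frame, filled with random values for illustration.
uhd = np.random.randint(0, 256, (2160, 3840), dtype=np.uint8)

# Average each 2x2 block down to a 1080p plane, keeping the fractional precision.
hd = uhd.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

# The quarter steps fit in 10-bit codes (0..1020), i.e. about 2 extra bits of precision.
hd10 = np.round(hd * 4).astype(np.uint16)
print(uhd.shape, "->", hd10.shape, "max 10-bit code:", hd10.max())
```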
  14. Like
    maxotics reacted to AaronChicago in Panasonic VariCam 35 - 4K and 14+ stops dynamic range   
    No global shutter?  Yawn.
     
     
     
    Just kidding of course. This thing looks amazing.
  15. Like
    maxotics got a reaction from skiphunt in Nikon D5300 Review and why DSLRs are dead for video   
    "When you're a hammer you treat everything as a nail."  That's the way I look at every new H.264 camera.  The only difference I've seen in Vasiley's GH hacks and this HDMI capture stuff is the reduction of motion artifacts generated by the video compression.  I've done tests myself, could never see a difference.  So I step back and ask myself, 'what artifacts ever bothered me in my videos?' 
     
    The only time macro-blocking annoys me is when I'm watching a video on Amazon, which I paid for, and it drops down to a lower-bitrate stream and I see it :)
     
    Everyone has their aesthetic.  I have not noticed any H.264 that I thought much better than another camera's H.264 in a similar price range.  Of course, there are DOF differences.  The Panny cameras are a pleasure to use (but that is ergonomics, not what I see on the screen).  I could see the differences in Matt's video above, but I think it's more a DOF difference between cameras.  I do see better color saturation in APS-C cameras vs MFT, but the larger sensors also create worse moire (because of the line skipping).  That's mostly a physics problem.  Anyway, the difference isn't big enough that it would stop me from filming my Citizen Kane :)
     
    Even H.264 with all I-frames and 4:2:2 color sampling is not that much different from the stock video I get out of my used GF3 body.
     
    Many people, like Skip, get fantastic results from H.264.  Getting that quality has to do with photographic skill to me, however.  
     
    Just my conclusion so far.  There are two major video technologies.
     
    1) Compressed video in an 8-bit channel color space
    2) RAW sensor-data sourced video
     
    There is no in-between.  If there were, Panasonic would be offering RAW.  Why aren't they?  The cameras create RAW images in photo mode, so why not just string 24 of them together every second?  Why do 4K instead?  If you find the honest answer to that question you will see, again, there is no in-between.
     
    I strongly urge everyone to get their hands on a RAW based camera, Blackmagic, or Magic Lantern.  Shoot a couple of clips.  Ezra Pound said music criticism was pointless because 10,000 pages couldn't describe the first 4 notes of Beethoven's 5th.  
     
    Shoot some RAW.  Shoot some H.264.  Pick which works best for your aesthetic.
  16. Like
    maxotics got a reaction from skiphunt in Nikon D5300 Review and why DSLRs are dead for video   
    I liked the story too, by the way.  If it wasn't a good story it wouldn't have been distracting to me ;)
     
    This whole thing reminds me to spend more time reading people's posts and considering more carefully what I have to say.  We're all good now I think.  I looked at Skip's photo sites.  Very impressive stuff!  
  17. Like
    maxotics got a reaction from Marino215 in Mysterious Bolex-style camera appears from Japanese camera manufacturer - Bellami HD-1   
    Looks like a re-incarnation of the Sanyo Xacti.  Except they added a trigger for those filmmakers not in Philly ;)
     
    I had one of the first ones.  Neat little cameras and very cheap.
     
    http://us.sanyo.com/Cameras-Camcorders-Previous-Models
  18. Like
    maxotics reacted to Guest in Nikon D5300 Review and why DSLRs are dead for video   
    https://vimeo.com/87551084
     
    Nikkor 85mm f2 AI-s on both cameras (G6 with Speed Booster). All shots graded. This is a personal test, it is not intended to be an objective comparison. Please Download the original .MOV file on Vimeo.
  19. Like
    maxotics reacted in Nikon D5300 Review and why DSLRs are dead for video   
    Despite all the silly bickering that goes on on this forum, the fact is that we're all completely spoiled for choice and all of these cameras, even the lowly Rebels, have put a cinematic image into the hands of anybody who wants it. 
     
    For the scientists amongst you (and yes, I should be working):
     

  20. Like
    maxotics reacted to Danyyyel in Nikon D5300 Review and why DSLRs are dead for video   
    I am not the biggest fan of full frame video. Perhaps because it has been overused since the Canon 5D2, when we went from one extreme (small-sensor cameras) to another extreme with full frame. The habit of completely blurring the background until the subject seemed to be floating in a mist, and more often than not being out of focus if he moved 5 cm forward or back, was really annoying. I think that the APS-C / cine 35mm look is a good balance between subject isolation and focus. If an actor is in an environment he should at least be part of it.
  21. Like
    maxotics got a reaction from Axel in What is the film look? Define it   
    A lot of the film look has to do with 3 trucks worth of lighting, modifiers, gels, cranes and dollies.  It has to do with set designers who carefully pick colors and furniture.  It has to do with story-board artists who pre-visualize what will best convey the intent of the scene.  It has to do with colorists, or film graders.  It has to do with actors hitting their marks.  It has to do with wardrobe. 
     
    I would make an argument that any camera only contributes 5% to the "film" look.  
  22. Like
    maxotics got a reaction from andy lee in Panasonic G6 for corporate video: first impressions   
    I bought a 14mm with a GF3 for a little over $200 on  CL (the GF3 is a fun/useful cam by the way).  So you may be able to pick up a b-cam panny body for almost nothing with that lens, or the 14-42 as Matt James Smith mentions.  
     
    These lenses will also work on Blackmagic Cinema cameras.  However, the detail from the BMPCC is unreal, so the image picks up the smallest camera shake.  Therefore, you want lenses with OIS (and that have an external button for it).  That makes the Nikon lenses less attractive from that perspective.  However, stills are better from an APS-C or full-frame sensor, so the Nikon lenses are a huge plus there.  What's great about Nikon is all their lenses work on their digital cameras (unlike Canon).  So if you find a manual Nikkor at a garage sale you could use it on any Nikon or on the Panny with an adapter.
     
    I agree with Andrew's frustration about the DSLRs.  They aren't nearly as easy to use as Panny (as you wrote) and their quality doesn't touch the BMPCC. 
  23. Like
    maxotics got a reaction from gepinniw in Olympus E-M1 firmware update to address video mode - but will we get 24p and higher bitrates?   
    It may not be as easy as one would think.  When ML sets the EOS-M to 24fps to shoot RAW video, the display goes crazy in photo mode because the camera wants to show that at 30fps.  In other words, Olympus may run all the video out through a 30fps timer.  It may be built into read-only chips.  Firmware can only do so much.  It may be able to shoot in 24fps, but not display it, for example.  A lot of what these cameras do is "hard-wired" into CODEC and sensor I/O chips.  Olympus also has corporate problems that probably don't help it focus on its mirrorless cameras.
  24. Like
    maxotics got a reaction from Loma Graphics Oy in Video Data Fundamentals   
    fuzzynormal, think about what you're saying.  Why do you set the aperture to f2.8 vs f8?  Why might you not use f22 if you want everything in focus?  Why would you use 25fps in Europe, but not in the U.S.?  Why would you turn sharpness down in the camera?  Why wouldn't you expect to use your pancake lens with an adapter on a camera not designed for it?  Why might you use a Blackmagic camera for stuff you plan on showing at your local theater, but use a GH3, say, for an online video series?  You think more like an engineer than you realize ;)
     
    Andrew has argued this before.  You can't separate the technical from the artistic.  Yes, you DO NOT have to be a technical expert to create great art.  That's why movie-making is the most collaborative of efforts.  No one can know/do it all.  You have to have multiple experts.  However, if you are doing this yourself, YOU want to know as much as possible.  For the guy-and-a-dog filmmaker, this site is an oasis.
     
    Most people here are not learning the tech to be "curious and cool".  They're learning it to be better artists.  
     
    This is how I got here.  I've never liked skin tones in compressed video.  I come from a film background.  I tried all kinds of things, but nothing worked.  Then I read a blog post here by Andrew on the 50D and how it was shooting RAW video (which I had no idea about).  So I bought his 50D guide, a camera, and tried it.  I just followed Andrew's step-by-step instructions.  The first clip changed my life.  And I've been here ever since, learning and sharing my knowledge and clips with others.  
     
    I admit that I get lost in the weeds in the technology, which becomes counter-productive artistically.  We all do.  I think that's why Andrew shot his latest video in the dark with a non-RAW camera using internal stabilization.   That's a real video, a real work of art, for a real client.  Andrew has amazing cameras.  He could have shot with RED.  But he shot with the Olympus because his technical knowledge told him what would be THE BEST EQUIPMENT TO REALIZE HIS ARTISTIC VISION.  
     
    You can get into the technology and still use a super-8 film camera!  One does not preclude the other.
  25. Like
    maxotics got a reaction from nazdar in Discovery: 4K 8bit 4:2:0 on the Panasonic GH4 converts to 1080p 10bit 4:4:4   
    If a pixel in the camera reads 14 bits of data you CANNOT get all of it back once you truncate to 8 bits of data.  
     
    Certainly, there will probably be modest color/luma improvements from downsampling from 4K, but only within its 8-bit dynamic range.  
     
    That is to say, IN PRACTICE, if you shoot a scene that falls within the CODEC's dynamic range output you may get better color nuance through averaging of neighboring pixels.  But if the neighboring pixels are choppy then you're just going to create artifacts.
     
    However, you cannot recover values from those RAW pixels that fell outside the 8-bit range taken from the 14 bits.
     
    I don't mean to be rude, but you're confusing color bits with compression bits.  People who read this thread who think the GH4 is going to do what the Blackmagic cameras, ML RAW, or high end RAW based cameras do should understand this.
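     
    A tiny sketch of the truncation point above (my own numbers, purely illustrative): quantizing 14-bit sensor codes to 8 bits collapses 64 distinct sensor values into each 8-bit code, and no downstream resampling can tell them apart again.

```python
import numpy as np

sensor = np.arange(8192, 8192 + 64, dtype=np.uint16)  # 64 distinct 14-bit sensor codes
eight_bit = (sensor >> 6).astype(np.uint8)             # drop the 6 least significant bits

print(np.unique(sensor).size, "sensor codes ->", np.unique(eight_bit).size, "8-bit code")
# prints: 64 sensor codes -> 1 8-bit code
```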