maxotics

Posts posted by maxotics

  1. 5 hours ago, Drew Allegre said:

    Can you explain how raw data would be encoded into a log space?  I always thought raw was just raw (to an extent) with very little manipulation.  Wouldn't all raw be linear, and any "log" space could be created from the linear raw?  I'm completely ignorant on this stuff.

    Almost everything we experience or measure is either a count (linear) or some form of exponential/log relationship.  We measure the sensor by counting (linear), that is, how many photons have hit the sensel, but we generally express and measure the amount of light exponentially, like f-stops, where each full stop halves the light: 1/1, 1/2, 1/4, 1/8, 1/16, 1/32, etc. (we put fractional stops in between).  That is, if we count 100 photons at one pixel and 200 photons at another pixel, we generally don't conclude that the light, measured through the inverse square law https://petapixel.com/2016/06/02/primer-inverse-square-law-light/, has doubled at the second pixel. Double, in the way we experience light, would be more like 400 photons!  In the real world, light behaves exponentially.  In our data world, we measure and store numbers in a linear space.  Put another way, humans think in numbers for counting.  God thinks, or nature behaves, exponentially.
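
    To make the linear-versus-log point concrete, here's a toy Python sketch (purely my own illustration, no real sensor data) that converts linear photon counts into stops with log2:

    ```python
    import math

    # Toy example: a sensor counts photons linearly, but photographers
    # describe light in stops, i.e. on a log2 scale.
    photon_counts = [100, 200, 400, 800, 1600]
    base = photon_counts[0]

    for count in photon_counts:
        stops = math.log2(count / base)
        print(f"{count:5d} photons = {stops:+.1f} stops relative to {base} photons")
    ```

    Equal steps in stops correspond to multiplying the count, not adding to it -- that's the linear-versus-exponential mismatch in a nutshell.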

    @cantsin can explain better than I why measuring light linearly, or I should say, using the sensor's linear readings at the low and high end, generally means you end up with a lot of useless data.

    Many of us have hashed this to death.  The core problem is that you can't save 12 stops of full-color dynamic range in an 8-bit space.  It doesn't matter how those numbers are represented, LOG or otherwise.  The 8-bit space was designed for our viewing, not for our capturing.  LOG can't magically "compress" the sensor data into an 8-bit space.  It can help in telling you where you can trade off some data in one area for another, as Don mentions above, but the rub is that the area where you're getting a lot of noise is also the area you can least afford to lose if you're extending beyond the 6-8 stops you can capture in 8-bit!

    LOG is also used in video processing to distort data into an 8-bit space.  Just to 'f' with us humans, LOG also does a better job of describing light as we really experience it :)  Unfortunately, our computers, which are binary, don't work with data (like RAW sensor data) in LOG, let alone the decimal system ;)
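
    As a rough illustration of what a log curve does when you push wide-range linear readings into 8 bits, here's a toy sketch (a made-up log2 curve, not S-Log, C-Log, or anyone's actual transfer function):

    ```python
    import math

    MAX_LINEAR = 16383          # e.g. a 14-bit sensor value
    MAX_CODE = 255              # 8-bit output

    def encode_linear(x):
        return round(x / MAX_LINEAR * MAX_CODE)

    def encode_log(x):
        # log2(1 + x) spreads the 256 codes across stops instead of raw counts
        return round(math.log2(1 + x) / math.log2(1 + MAX_LINEAR) * MAX_CODE)

    for x in (10, 100, 1000, 8000, 16383):
        print(f"linear {x:5d} -> linear-coded {encode_linear(x):3d}, log-coded {encode_log(x):3d}")
    # The log curve hands far more of the 256 codes to the shadows, but it
    # can't create room the 8-bit container never had.
    ```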

    As a side note, Canon RAW being saved in 14 bits created huge problems for me (and others) in software development.  Everything in programming is geared toward working in bytes; all electronics are built around that.  It's easy to say: get me this byte, or these two bytes.  To say/program "get me these 14 bits"... no fun.  Take my word for it!!!!  My guess is most cameras are built around 8-bit (1-byte chunk) storage.  The RAW that ML works with is just a stream of bits, which, again, you have to parse out into 14-bit chunks.  To make the data easy to work with in camera, or on a PC, they'd really need to make them 2-byte (16-bit) chunks.  Well, most small electronics do not use 16-bit processors.  You'll read many conspiracy theories about this stuff, but the mundane truth is our electronics aren't up to working with 12-stop (12 to 14-bit) dynamic range data.
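
    For anyone curious what "parsing out 14-bit chunks" looks like, here's a minimal sketch (my own toy unpacker, assuming big-endian bit order; Magic Lantern's real packing differs in the details):

    ```python
    def unpack_14bit(raw_bytes, count):
        """Pull `count` 14-bit values out of a packed byte stream (toy example)."""
        values = []
        bit_pos = 0
        for _ in range(count):
            value = 0
            for _ in range(14):                      # gather 14 bits, one at a time
                byte = raw_bytes[bit_pos // 8]
                bit = (byte >> (7 - bit_pos % 8)) & 1
                value = (value << 1) | bit
                bit_pos += 1
            values.append(value)
        return values

    # Two 14-bit values (0x3FFF and 0x0001) packed into 28 bits straddle byte
    # boundaries -- exactly the nuisance described above.
    packed = bytes([0xFF, 0xFC, 0x00, 0x10])
    print(unpack_14bit(packed, 2))   # [16383, 1]
    ```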

     

     

  2. 19 minutes ago, Don Kotlos said:

    What rule, what exceptions? I am trying to explain how it works. What you perceive as dynamic range in typical daylight conditions is much higher than what your eyes can detect in an instantaneous moment. I can go into more details if you want, I am a neuroscientist specializing in vision :)  

    I'm 56 and have been doing photography since my early teens--you'd think I'd be good at it by now ;) I appreciate your knowledge, I do, but it's confusing the issue here, which is what the filmmaker should focus on.  The stuff you're pointing out, true as it is, is in the weeds.  Like many others here I used to focus on how much DR I could get in my image.  Then I had the epiphany that I was thinking outside-in (physical world to viewing world) when improvements can come from looking at it inside-out (viewing world to physical world).  That is, by thinking about what the viewer ultimately sees, and how that image can be the best, I can understand what's important and what isn't.  I wasn't trying to start a debate ;)

    It's like Plato's cave.  Everyone wants to talk about the metaphysical forces that create the image on the wall.  I enjoy doing that too.  But much can be learned by talking about ONLY what we see and what makes for good quality there.

  3. 4 minutes ago, cantsin said:

    Ansel Adams' zone system has 10 stops and is meant for mere prints, not the extended dynamic range of film projection.

    Oh brother :)  The zone system is ALL about fitting 10 stops of physical-world detail into the 2 stops of print-world DR!  It is NOT about displaying 10 stops!!!!!  Indeed, think about it: how much DR does film capture in relative brightness?  Almost none.

  4. 4 minutes ago, Mattias Burling said:

    Good luck with the debate

    I was presenting my findings, which I've spent hours and hours on.  I don't look at your work as a "debate".

     

    2 minutes ago, Don Kotlos said:

    So in a few words, while the dynamic range for a moment is small, in reality (over longer times) it is much higher.

    Are you helping?  Do exceptions prove the rule?

      

  5. You guys!  Please follow me.  Put your brain in neutral.  When we go about our day, we move from dim areas, say our bedroom at 2 EV, to the outdoors, which might be 12 EV, and our eyes adjust.  Our brain composes images that make sense to us, say our dimly lit hallway to the outside when we open the door.  We experience 20 stops of dynamic range, or more, whatever.  I have NEVER said otherwise.  However, our eye is capturing these images in 6-stop exposures, so to speak.  That's what the scientists have determined and what Wikipedia has published.

    When we view film/video we do not need high dynamic range to make the image appear real!  That's why we don't complain about photographs that have a 2-stop DR spread.  Our brain can figure it out within 6 stops.  If our TV could display the 20 stops that are in the physical world, it would certainly look more real, but would it be practical?  Would we be able to tolerate quick shifts in dynamic range from 2 EV to 12 EV?  You tell me.  I suggest you think about this and don't just assume it would work.  I believe you will come to the same conclusion: the visual artist works within 6 stops, 2 stops for print, 6 for projection.

    And by the way, most displays don't even do 6 stops well; you need to spend real money for that.

     

  6. 1 minute ago, cantsin said:

    Yes, read the full paragraph - perceived dynamic range also includes the adaptation of the human eye and composite image created by the brain

    "The human eye can detect a luminance range of 10^14, or one hundred trillion (100,000,000,000,000) (about 46.5 f-stops)".

    Not sure if this is directed to me or Shirozina.  So to be clear, I've never said we don't experience 20 stops of dynamic range.  Never!   

  7. 3 minutes ago, Shirozina said:

    How about doing some proper research and getting your data from several sources -  just like a scientist?

    I presented a fact.  You said I was wrong with NO SUPPORTING evidence on your side.  I gave you evidence.  Now you want more?  Why do I have to do all the work here, Shirozina?  If you want to believe the eye can see a static 20 stops of DR, fine; you've made your point here, and people can either believe your evidence (which is none) or mine, which IS based on scientific research--go read the footnotes of the Wiki article.  I don't understand why you're upset.  You called me on something and I agree, I should give evidence, which I did.  We're all good!

  8. 3 minutes ago, Shirozina said:

    Wikipedia is clearly wrong then as anybody with a pair of working eyes can easily prove.

    I know, those scientists, right!!!!  Global warming isn't real either.  You're confusing apparent DR AFTER the brain's composition of an image with what the eye can resolve at any given moment.  Mattias is making the argument "who gives a fudge, as long as we end up with 20 stops".  And I'm answering: you'd give a fudge if you ever tried to look at dramatic changes in DR.

  9. 1 minute ago, Shirozina said:

    Where and who has measured this because I'm telling you it's just factually wrong. You make some valid points in other areas but your whole argument falls down if you keep on insisting that our eyes are limited to a very narrow DR and therefore any more than this in a capture or display device is not needed.

    Scroll down to "Dynamic Range"  https://en.wikipedia.org/wiki/Human_eye  

  10. 1 hour ago, Mattias Burling said:

    What the eye can do doesn't really matter. Because the brain can modify it any way it pleases.

    Our eye has variable dynamic range and HDR. We can expose for highlights, mids, and shadows all at once and the brain shows us a perfect 20 stop image.
    The brain is super fast and adaptable. If I remove one of my contacts I never have the chance to see the decrease in resolution and sharpness. My brain is way too fast in applying the proper corrections.

    So trying to mix DR from a camera with that of screens is already a bit weird. I know some say that 13 stops isn't needed if a screen is only 10.. just wrong. With the 13 in the camera I can adjust more info into the 10 for the screen. Simple as that.

    To add the eye into the mix is just silly imo.

    Why is "adding the eye" into the mix silly?  It's the only way I can see photographs.  How do you see them?

    Our eye, like our camera, does not have variable dynamic range.  It is around 6 stops; that's what they've measured.  However, in low light the eye can do amazing things within those 6 stops, and the same with bright light; HOWEVER, the pupil must resize for it to do so.  The brain doesn't so much modify what the eye sees as create a real-time composite of all the information the eye has looked at.

    Sure, in real life we can take wide DR for granted.  How often do we change brightness levels, say go from inside to out, or outside to in?  In VIDEO, however, there are quick cuts, and changing brightness to the point where the eye would need to keep adjusting its pupil every few seconds would create a very disorienting experience.  Stand in your doorway and look inside, then outside on a bright day, back and forth, and you tell me how good your brain really is at switching.  It's not that good because, believe it or not, the pupil doesn't change size instantaneously.

    I'm not trying to mix DR from a camera with that of a screen.  Sorry, but I feel you're really not paying attention to what I'm explaining and why I'm doing it.  You're looking to disagree with what I'm saying.  Fine.  Assume you can always mix your 13 stops in the camera down to the 10 for the screen (I don't know where you get that EV spread).  Most of your photographs I see are processed to very high contrast.  Such post-processing hides many defects of the subtle DR you have to work with.

    And if you believe the quality of your videos has remained the same since you stopped shooting with a RAW-based camera, well, you should explain why Andrew just wasted his time with that guide ;)

    (I LOVE your stuff Mattias, just having a little fun here!  Get a 5D3, Andrew's guide, and come back to real digital film (RAW) :)  Which, by the way, is not about DR; it's about the fact that RAW doesn't chroma-subsample and degrade the image.  Noise is organic.  It doesn't contain the various artifacts of a digital processor.)

  11. People talk about only "8 stops of dynamic range" as if it's unusable.  I love following Andrew's and everyone else's analysis of how to maximize various cameras' potential, but we sometimes let things get out of perspective.  The order of importance in my experience is 1. Lighting 2. Focus/stabilization 3. Lens 4. Sensor 5. Ergonomics 6. CODEC.  Yes, we can't control the lighting all the time, but we also have to compare cameras by how they do in properly lit scenes; that is, scenes at/under 8 stops of DR.  Here's my take on why we must never forget that shooting a scene with more than 8 stops of dynamic range is something we really want to avoid.  No camera can truly save it.  It becomes a calculus of atrocities.

     

  12. On 8/28/2017 at 0:05 PM, Axel said:

    Didn't read every posting in this thread, so maybe this was already covered. I mean the part about EditReady. There is a much more powerful tool that does a lot more. It's called Kyno. A beta version for Windows (free) is announced. Didn't look too spectacular to me initially, but now I've bought it, and I think it's fantastic. Plays all* media (you can filter by video, audio and stills), dives through folder structures (called drilldown-mode), lets you import, trim (> subclip), batch-rename, tag (Premiere: mark), transcode, wrap and copy directly from card. Lets you send databases to FCP (shift cmd f) and Premiere (shift cmd p). 

    *well, no raw video for now, they're working on it.

    GREAT SUGGESTION!  I just installed Kyno.  I know I've become a little slow, but I always struggle with creating clips from my footage in Premiere, or anywhere else, that I can use later.  I always want to do a pre-trim right after I shoot so I remember what I want.  With Kyno, I figured it out in 5 minutes and it has all the options I can think of.  I look at the video, mark my in and out (it scrubs through quickly), then cut the clip (no transcoding) to the same file name + "something that describes what this freakin' clip is" text.  Very exciting piece of software!  For what it's worth, it's as if they read my mind about all the scut-work that keeps me from making as many YouTube videos as I'd like.  Again, thanks Axel!!!

  13. 9 minutes ago, Shirozina said:

    It's physically impossible to capture more than 8 stops of DR if you only have an 8 bit signal from the A/D - if you are 'hard data guy' then this should be easy to understand ;)

    HA HA.  Couldn't agree more!  But I'd still rather have 8 bits of RAW than 8 bits from a 4:2:2 data block ;)  ... well, maybe; I've never tested that, so I'm not sure you could expose correctly, or whether you'd need more than 8 bits of RAW data to calculate down to a decent image.  Another thing for Andrew to do ;)

  14. 21 minutes ago, Shirozina said:

    Sony A7xx cameras can get way more DR in video  than it's theoretically possible to do if the data is being truncated to 8bit after the A/D stage and before it gets compressed. 

    That's not how I understand it.  Again, I'm a hard-data guy.  8-bit is 8-bit is 8-bit ;)  Larger-sensor cameras can capture more theoretical DR, which is borne out by DxO and Bill Claff tests.  There is more DR from an A7S, especially in low light, because its per-pixel area is much larger than that of MFT sensors, which have more pixels than are necessary for video (but important for photography).

    One of the traps is assuming an 8-bit value from compressed video matches an 8-bit value from RAW.  There are NO real 8-bit color values in compressed video.  They are pseudo-generated from a 24-bit color value which is itself saved as a handful of samples (e.g. six per 2x2 block in 4:2:0) in some 4:X:X form.  In RAW, the 14-bit value is a REAL reading from a red, green or blue filtered pixel/sensel.  There are no real colors in RAW, or dynamic range; they are just physical readings of light hitting silicon.  In a sense, you HAVE to pick a black level in all 8-bit compressed video.  When shooting 10-bit RAW you don't need to do that, so you can be off a stop in your exposure and still set your black level to make a nice image, whereas in 8-bit compressed video you're more locked in.
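
    To put numbers on the 4:X:X point, here's a quick back-of-the-envelope sketch (just sample counting, not a codec spec) of how many samples actually get stored per 2x2 block of pixels:

    ```python
    # Sample bookkeeping per 2x2 block of pixels for common subsampling schemes.
    def samples_per_2x2_block(scheme):
        luma = 4                              # Y is stored for every pixel
        chroma = {"4:4:4": 4 + 4,             # Cb and Cr for every pixel
                  "4:2:2": 2 + 2,             # Cb and Cr shared by each horizontal pair
                  "4:2:0": 1 + 1}             # Cb and Cr shared by the whole 2x2 block
        return luma + chroma[scheme]

    for scheme in ("4:4:4", "4:2:2", "4:2:0"):
        total = samples_per_2x2_block(scheme)
        print(f"{scheme}: {total} samples per 4 pixels ({total / 4:.1f} per pixel)")
    # 4:2:0 keeps only 1.5 samples per pixel, so the per-pixel RGB values you see
    # later are interpolated back from shared chroma, not measured per pixel.
    ```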

    So even though you're truncated from 14 to 10 bits, at 10 you have still recorded/saved more DR than you can in 8-bit (RAW or compressed).

    Although his focus is photography, Bill Claff has done amazing stuff: http://www.photonstophotos.net/Charts/PDR.htm

  15. 16 minutes ago, Andrew Reid said:

    Ok I take on the challenge.

    I'll shoot 10bit GH5 and compare it to 8bit GH4

    And I'll shoot some more 10bit RAW on the 5D Mark III and compare it to 8bit 4K on the Sony cameras.

    I thought Blackmagic already showed that 10bit was superior... Problem was maybe their sensors were a bit behind, so shadows were a bit noisier and so on, compared to an A7S or the like.

    AWESOME!  I tried to do comparisons using the Sony X70 but arrived at the conclusions above.  I also want to clarify that I'm not saying 10-bit doesn't have an advantage.  Obviously, any improvement of the image is welcome.  It was just disappointing to me because I wanted to capture more DR.  It took me a while to understand why there wasn't much of an improvement in 10-bit.  It didn't make any sense to me.

    BTW, I'd buy your guide if you had one for my camera ;)  Why can't you just sell a master Canon RAW guide and include most cameras?  When friends ask me how to do it and I'm not in that mode I have to send them to the ML forum (good luck to them ;) ).  As I've mentioned elsewhere, the 7D is a monster setup.  I got one 6 months ago for $400 and a 17-50 f/2.8 for $250.  But I don't know what the best/stable ML build is.  I may be behind the times.  If you had a guide for all the cameras I'm pretty sure I would buy it every year.  Brush up the 50D guide ;)

    It's a sh_t load of stuff to keep up with, but only you have enough contacts/expertise to make it possible.

     

  16. 14 minutes ago, Andrew Reid said:

    10bit has more advantages than just less banding my friend

    Then please prove it to me, or show someone who has.  I tried and couldn't do it (but recognize I may have screwed up!).   Show me a frame that was shot in 8-bit video and one in 10-bit video, and then a best effort to recover detail at the high or low end of the image.  

    16 minutes ago, Andrew Reid said:

    In rec.709 your dynamic range is limited in 8bit because you simply run out of room

    It isn't limited in the sense that in Rec.709 you have enough color space to show every color a human eye is able to discern without changing pupil size.  It is limited in that it can't capture the full DR of a sensor.

     

    18 minutes ago, Andrew Reid said:

    That's because with 0-255 you have regions of large variation in brightness crammed into the last 5 or 10 shades, so things like a sky looks like crap, no accurate colours in it

    Only if you're trying to "re-expose"/grade an image that isn't already perfect within that 8-bit space.  The root problem is that most cameras record high-dynamic-range images into display-ready data spaces.  I mean, I agree with you.  Which is why you really have to nail exposure and lighting long before you aim your camera at anything ;)

    21 minutes ago, Andrew Reid said:

    LOG is a way around this to a certain extent but is also a form of compression

    Sorry, "compression" always bothers me because it's a data/computer term which implies that if done correctly you don't lose information.  A compressed Word document is as good as an uncompressed one.  In visual data, LOG is more of a data distortion, to me at least.

    23 minutes ago, Andrew Reid said:

    So in short, 10bit = more dynamic range.

    In RAW, yes; in compressed video, no.  Again, I would LOVE to see some real-world proofs.  Not pixel peeps of banding on some wall.  Two videos played, one in 8-bit and one in 10-bit, where you can see a worthwhile improvement in DR.

    25 minutes ago, Andrew Reid said:

    Also 10bit raw = more dynamic range, plus more control over colour and white balance as it isn't baked in

    Yes, because you're reading data from a sensor that isn't trying to fit into an 8-bit space for recording.  

    28 minutes ago, Andrew Reid said:

    I'd say the image quality at 10bit is very close to 14bit... 12bit even closer as to makes barely any difference. 14bit has a bit more accuracy in the highlights. Shadows still look AMAZING in 10bit, like the Blackmagic raw cinema cameras but less noisy.

    Yes, less noisy because you have a larger sensor; anyway, with shallower DOF it looks pretty freakin' nice.  This is the point I don't understand why you don't hammer on.  10-bit RAW IS amazing!!!!

  17. 3 hours ago, Shirozina said:

    Banding is easily seen in 8 bit moving images but not necessarily due to them being 8 bit but because of the high levels of compression on internal codecs. 8bit files recorded via HDMI to an external recorder in a good codec like ProRes can be very smooth and banding free even when pushed around in grading. 10 bit allows even more manipulation and opens up the use of LOG without fear of banding. Some cameras on the other hand can't record a blue sky without showing banding even straight out of the camera in a non log profile. 10 Bit RAW is probably just about OK for a Canon 5D3 which tops out at a theoretical 11ish stops of DR but in practice due to it not being able to ETTR perfectly on every shot means 10 stops or less as a practical limit. From this you can generate good 10 or 8 bit files with the benefit of a RAW conversion done outside the camera where quality rather than speed can be prioritised. We know for instance that the RAW converters in NLE's like Resolve are not as good as those for stills in say Adobe Camera RAW or Capture One due to the CPU overheads required, so imagine what a tiny camera CPU does with RAW sensor data in real time in order to spit out an HD 8bit 4.2.0 file at 25mbs...

    I didn't mean to suggest 8-bit is what causes banding, only that the benefit of 10-bit in an internal codec is that it reduces it.  I take issue with "10 bit allows even more manipulation" -- more manipulation how?  I feel there's a lot of misinformation out there, with filmmakers believing that "manipulation" is more than it is.  Whether there is banding or not, a filmmaker will grade to their taste and will NOT see a real-world difference between 8 and 10-bit compressed video.  That's my experience.  10-bit doesn't improve grading.  It only makes for small improvements in some smooth color patches.  If you have some sample where you can see a real difference in "manipulation" I'd love to see it.  Otherwise, my fun-sucker opinion is that 10-bit compressed video is more marketing pitch than real-world benefit.

    Of course, you say as much with "so imagine what a tiny camera CPU does with RAW sensor data in real time in order to spit out an HD 8bit 4.2.0 file at 25mbs..."   I agree that 10-bit RAW is probably "just about OK", but compared to internal compressed video, it is MORE than OK.  It's giving you 10 stops from the PHYSICAL IMAGE, which is different from 10 stops of gray bars from a severely color-compromised 10-bit compressed data stream.
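
    If anyone wants to see the one place the extra bits do measurably show up -- smooth gradients pushed hard in a grade -- here's a synthetic toy test (no real footage, just a generated ramp, so take it as a sketch):

    ```python
    import numpy as np

    # Quantize a smooth ramp at 8 and 10 bits, apply the same contrast push to a
    # narrow tonal band, and count the distinct shades left on an 8-bit display.
    ramp = np.linspace(0.0, 1.0, 100000)

    def push_and_count(bits, lo=0.40, hi=0.45, gain=8.0):
        levels = 2 ** bits
        coded = np.round(ramp * (levels - 1)) / (levels - 1)   # quantize to N bits
        band = coded[(ramp >= lo) & (ramp < hi)]               # a narrow tonal band
        pushed = np.clip((band - lo) * gain, 0.0, 1.0)         # aggressive grade
        return len(np.unique(np.round(pushed * 255)))          # surviving shades

    print("8-bit source :", push_and_count(8), "distinct shades after the push")
    print("10-bit source:", push_and_count(10), "distinct shades after the push")
    ```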

    Anyway, we're saying the same thing, I think.

  18. 56 minutes ago, noone said:

    I am a stills shooter who just dabbles in video and this really needs someone better at this than me to properly demonstrate but it is simple to do with a TS lens.

    I think the 17 might best show for exteriors of buildings from fairly close in.     I don't have any other 17mm lens but it is the local show here this weekend and I am thinking (if I get there) that it might be ideal to shoot a Ferris wheel from close in at night both shifted and not (and if I am allowed a tripod in a busy sideshow alley).

    You just need to shoot the first clip with no shift, then the second clip with it shifted to straighten the verticals, say, to give people an idea of what TS lenses can do.   The main problem is that to keep doorways vertical, the angle to the farthest point of the frame at the top must equal the angle at the bottom.  In other words, the camera must be facing the center of your verticals.  The beauty of TS lenses is that you can aim towards the lower third of the doorway, say, and shift the image to square it up, so to speak.   That is, you can "bend" the top third of your vertical to appear straight.

  19. @noone it may be difficult for people to appreciate what you've done without having that next to footage shot with a normal lens.  

    I agree, doing TS architectural stuff on a 5D3 (especially since it now has 3K and 10bit RAW) would definitely make the OP stand out.

  20. 9 hours ago, Andrew Reid said:

    I will be uploading some of the original ProRes clips and Cinema DNGs. The amount you can push them in post without them falling apart is incredible.

    If I may embellish.  I suspect many people are confused about 10-bit video.  When we look at colors on our 8-bit screen we're looking at 256 shades of each primary color, R, G, B.  The dynamic range is generally around 6 EV between darkest and brightest.  In 10-bit compressed video, like that of the GH5, we have 1,024 shades of each color, but STILL IN THAT 6 EV dynamic range.  What this means is that if you shoot a wide-DR shot of the sky and some people under an umbrella, you can't bring back more detail from the clouds in 10-bit than you could in 8-bit.  The only real benefit of 10-bit compressed video, that I could see, is less banding where there are fine gradations of color.  And that benefit is almost impossible to see in a moving image.

    In the 10-bit DNGs that Andrew is mentioning above, you're getting 1,024 shades that extend OUTSIDE that 6 EV range, so you can recover highlights or shadows.  If all ML did was get 10-bit RAW to work on the 5DIII, that alone is worth the guide right there!!! :)
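
    Here's the same point as back-of-the-envelope numbers (my own framing; the sensor code where "white" lands below is a made-up example):

    ```python
    import math

    # 10-bit *compressed* video spends its extra codes on finer steps inside the
    # same display range, while 10-bit *raw* keeps sensor codes above whatever
    # you later call "white" -- which is where highlight recovery comes from.
    display_stops = 6
    for bits in (8, 10):
        codes = 2 ** bits
        print(f"{bits}-bit display-referred: {codes} codes across the same ~{display_stops}-stop range")

    # Hypothetical raw exposure: "display white" lands at code 256 of a 0..1023 range.
    white_point = 256
    raw_max = 1023
    print(f"raw headroom above white: {math.log2(raw_max / white_point):.1f} stops")
    ```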

    As soon as they port this stuff to the 7D I'll buy that guide immediately!

  21. 6 hours ago, Marc T said:

    Nikkor, Hyalinejim, HockeyFan12, Maxotics and IronFilm... your recommendation of ML is something to consider, but I am scared about developing times. My clients are architects, builders and business owners whom I am trying to persuade to spend some money on video marketing about their works or spaces. It is going to be really low cost, but I hope to compensate for it with my architect viewpoint and photography background. I can't make my current workflow any longer if I want it to stay profitable. Nowadays I feel comfortable with the footage the XC10 gives me and the time it takes from there to the finished video. Maybe (or most likely) I am missing something to properly consider the ML workflow equivalent to the workflow I am doing now... ?

    On the other hand, ML is only 2K in 5D3 in FF, and 4K in 2.25x crop. So here it behaves like any other cropped sensor. I expect to use 4k to move and zoom around the footage because 2K is still my delivery format. Limiting factor there...

    I still don't understand what you're trying to achieve.  I agree, the XC10 is a great camera for most video projects.  

    The problem with Canon interchangeable-lens cameras, which Andrew has sort of touched on ;) is that the Cinema cameras (C100/200/300 etc.) produce a fantastic image, sharp and with nice colors, while the consumer cameras, like the 80D and 5D3/4, are very soft in H.264 (my opinion).  I have an old C100 and the 80D doesn't hold a candle to it.  So if you put those lenses on a Cinema camera I think you'll either want to rob a bank to get one, or will get depressed that you don't have one.  Anyway, they are the final goal for you.

    My guess is the image with one of your lenses on the 5D2 looks soft, which is partially why you started this post.  But yes, in order to get a sharper image you need to use ML RAW, and you're right, it is a LONG/HARD workflow.  If you're doing short architectural shots, however, the setup time may be a lot more than the time you'll spend developing the ML footage.  Post-processing RAW video, like everything else, is something you CAN do quickly once you figure out a workflow.  That is, it isn't as bad as you might think, after the initial tears ;)

    I discovered ML from this site when Andrew did the 50D RAW handbook.  I can definitely say that working with Magic Lantern has grown my photographic knowledge a thousand percent.  Even if you never use it, I bet you'll find any time you put into it vastly rewarding.  If you're on a PC, ML RAW is pretty easy to work with in MLVProducer.  Indeed, I feel there is no excuse for someone not to get ML RAW into their tool chest.  The tools these days are light years ahead of what was available when Andrew wrote that first guide (when you had to use command-line programs to create DNGs, etc.).  Now you can do a basic grade and go straight to ProRes!

    Also, I don't know how to say this without annoying some people, but even 720p RAW video provides a look you simply can't get with any compressed CODEC.  That look may turn out to be crucial for what you want to achieve.  So you need to see for yourself. 

    Here's a video where I touch on it.  Sorry if most of it is boring.  I also have a video after it with 7D footage.

     

     

     

  22. My goal was "Hollywood" in my 20s (I did work there a bit) but I ended up in financial software/data for the past 30+ years. My guess is DP work is similar. My experience:

    1. Film/Photography/Video has always been a near-impossible field to make a living in.  That has been my observation over 30+ years.  Unless you specialize in a very complex area where a shortage of talent develops.  In photography and film, bad news: those areas DO NOT EXIST ;)  There is always an excess supply of talent.

    2. One's availability today is worth more to a prospective client than someone else's genius available tomorrow.  Don't kid yourself.  Whatever the client says, you're replaceable and a minor part of their world.  You can be a raving egomaniac in your domain, but get in the way of someone signing your check and no amount of genius will save you.  Make yourself available.  

    3. A client only needs 10% of your skill.  When you try to give them more, it confuses them and can work against you when they want to hire again.  Understanding and matching a client's priorities, which will ALWAYS be slightly different from your expertise, is paramount.  Anticipating the client's needs, which may be some form of "cleaning the windows", is 90% of completing a successful project.  Keep your head out of your ass.

    Does all this mean you shouldn't become the most skillful DP possible?  No, but you learn for YOU, for your pride in your work.  Do not connect skill with the ability to get work.  It will have very little to do with what work you get.  I know that sounds unbelievable.  I don't quite believe it myself.  Yet if I objectively look at all the work done out there, the scale of it, from bad to great, seems random.  In other words, the quality should be better IF THERE was a meritocracy.  There is simply too much poorly done stuff, in my eyes, in all areas of tech, to indicate that quality is the prime factor in employment.  Good quality stuff is there by luck.

    Human endeavors are complex, emotionally laden efforts to give meaning to our lives.  What gives you meaning, say great lighting, doesn't give the actor meaning, or the producer, etc.  Be compassionate to others.

    Bottom line, if you're thrilled to have the opportunity to even get coffee on the set you'll find a place.  If you're thinking about "saturated markets" and "making money", clients will pick up on that and get someone they think will work for free, because yeah, we all just suck!  It's just a job.  Money is always an issue.  9 out of 10 pats on the back you must give yourself.

     

     

  23. I've used the Canon 24 TS on a Sony A7 but ran into glare issues because, as you know, the image circle is shifted off the sensor and hits the surrounding area (which would be an adapter on a Sony camera) and reflects back.  For various reasons, I'd stick with Canon if I were you.

    As others have pointed out, the video RAW you can get from the 5DIII is seriously nice.  That's probably what I'd do, upgrade to that camera (not the IV).  Then learn Magic Lantern.  You could also get a used 7D I and run ML on it too.  ML will also let you do higher-res time-lapse or low-FPS video shooting.  The ML forum is quite the adventure, but I feel you'll find some interesting stuff that will apply to what you're doing.  So worth the effort!

    You could do some really cool timelapse videos with those TS lenses!  As you know, it's easy enough to straighten in Photoshop when you have pixels to burn.  But you don't have those pixels to burn in video.

     
