
Ty Harper

Members
  • Posts

    396
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Ty Harper got a reaction from IronFilm in OPEN AI VIDEO TECH ONE YEAR LATER...   
    Agreed. As you say, if people want to use cams and other traditional forms of real-life capture for home/family use, no one will stop them. But it's unlikely that media/film production companies in the future will be hiring/paying people who offer camera capture, set design, lighting, etc., as a sole and primary service - which is really what we're talking about. Also, the AI approach won't be seen as a 'forgery' by mass consumers in most circumstances. The ones intended for insidious deep-fake purposes? Yes, of course. But most AI-based video will be seen/consumed as a valid representation of real life, à la a painting. It will also be impossible to tell the difference in the future - that's just based on how far a company like OpenAI has come in a year. Also, these distinctions we're making around real vs. fake will be irrelevant to the vast majority of humans born into it from here on out. All realms of commerce have experienced crushing human-labor disruptions in past and present times (car manufacturing being the most obvious example). What makes this stunning and unique is that it is happening to the realm of commerce (i.e. art-based commerce) that we instinctively know humans will continue to pursue whether they are paid for it or not. You can't say the same for a lot of other realms of the human labor economy. So it will be, imo, one of the most poignant blows in the history of human labor to date.
  2. Like
    Ty Harper got a reaction from IronFilm in OPEN AI VIDEO TECH ONE YEAR LATER...   
    Some will - most won't. And my kid might wax nostalgic about the days when people took pictures and shot video, but will be just fine with AI, the way we were just fine with mp3s.
  3. Like
    Ty Harper got a reaction from IronFilm in OPEN AI VIDEO TECH ONE YEAR LATER...   
    Yeah he covers this exact topic toward the end, and agrees with you. The entire film and media production industry will be vastly different in the coming years - and that is obviously an understatement. But the pandemic did not help either. I've watched my audio engineer and video peers fight for the tech/quality standards that have governed terrestrial TV and radio since their creation - and the pandemic, which forced all productions to use Zoom video/audio conferencing from hosts' living rooms with crappy audio setups, actually proved to the higher-ups that audiences don't notice the loss in audio/video production quality. Which is also due to the reality that social media had already crept up and normalized subpar audio/video quality in the minds of the audience. Now YouTube/TikTok/IG audio is a 'style' or a 'sound' - not an example of inferior quality.
  4. Like
    Ty Harper got a reaction from KnightsFan in OPEN AI VIDEO TECH ONE YEAR LATER...   
    This is what I was getting at whenever I've gone on rants about the changes coming to the way we think about storytelling via 'video' and why our convos around cams are discussions about tech that already has an expiry date: 
     
  5. Like
    Ty Harper got a reaction from Tim Sewell in Getting things right in camera   
    I think the idea of nailing it in-camera is in and of itself not that big of a deal nowadays - if you have a basic color workflow and most importantly, know how to really nail the story you're trying to tell.
    That statement isn't meant to dismiss the importance of nailing it in-camera - but to say that imo nailing it in-camera has always been less about hitting a fixed target dead on - and more about a ballpark. And that ballpark is so much easier to hit thanks to the aggregated knowledge base of spaces like this, and of course YouTube. To the point where asking someone with experience for assistance is usually not as prudent as simply typing the question into YouTube's search engine! But that ballpark around the target of perfection is also much wider than ever before, thanks to the corrective tools we now have within Resolve, Premiere Pro, Final Cut, etc. Not to mention basic in-the-field tools like a color chart and/or a Sekonic C-700 or C-800.
    I was a deejay for most of my life, and I still pride myself on knowing the basic theories of bar-counting and melody-matching, and on taking the time to learn the unique bar construction of every single song I was gonna play, so that when I hit the party and got on the 1200s, my 'in-camera' settings were pretty good. But then Serato came along with other recent mind-blowing advances - and now in 2023, a deejay simply does not need to have/understand those 'in-camera' settings.
    Is it something deejays like myself lament? 100%!!! But the REAL deejays of those past generations also understand that the actual mission, first and foremost, was always: to tell a great story.
    So while there are deejays I know and admire who still love to do 'all-vinyl' parties as a way to exercise/show off the skills they HAD TO develop in the past as a means to the end of telling great stories, again, the REAL ones within that cohort also understand they are doing those parties for the sake of nostalgia - which is also very important and life-affirming.
    All that to say, hopefully when we are in passionate pursuit of getting it right in-camera (as I am myself, to a degree) we aren't losing sight of the more important plot of getting the story right in our minds - which is a whole other skill that still demands that we strive to master it at the highest of levels.
     
  6. Like
    Ty Harper reacted to kye in Getting things right in camera   
    The issue at hand is "getting things right in camera".
    Should I expose for the highlights, the shadows, or for the skin tones?  The answer depends on what "getting things right" actually means.  The only "correct" way to use a camera is to understand the objectives of the project first, then use the camera to "get things right" in that context.
    Here's a recent interview with ARRI, taken in their technical testing facility in Canada.  The place is literally a location for testing equipment.  How do ARRI think about the tech?  "It starts with the story".
     
  7. Like
    Ty Harper got a reaction from FHDcrew in Getting things right in camera   
  8. Like
    Ty Harper got a reaction from ghostwind in Getting things right in camera   
  9. Thanks
    Ty Harper got a reaction from kye in Getting things right in camera   
  10. Like
    Ty Harper reacted to Jedi Master in Getting things right in camera   
    I use a Sekonic C-500 color meter. It reads directly in degrees Kelvin.
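    For context on how a meter can read "directly in degrees Kelvin": a common approximation (McCamy's formula - my addition here, not necessarily what the C-500 actually uses internally) converts a measured CIE 1931 (x, y) chromaticity into a correlated color temperature:

```python
# Sketch: estimating correlated color temperature (CCT) in Kelvin from a
# measured CIE 1931 (x, y) chromaticity, using McCamy's approximation.
# This is an illustration of the general principle, not the C-500's firmware.

def mccamy_cct(x: float, y: float) -> float:
    """Approximate CCT in Kelvin from CIE 1931 xy chromaticity."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 daylight (x=0.3127, y=0.3290) should come out near 6500 K.
print(round(mccamy_cct(0.3127, 0.3290)))
```

    The approximation is good to within a few dozen Kelvin for typical daylight/tungsten sources, which is plenty for choosing a white balance preset.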
  11. Like
    Ty Harper reacted to kye in Getting things right in camera   
    Sure.
    Perhaps the important part that I didn't mention is that I often miss moments because I was a second or three late to get the camera pointed in the right direction and rolling, so to say I don't have time to do a manual WB before hitting record would be an understatement because often I have a negative amount of time I can devote to such things!
    Doing what you can to get it right in-camera is definitely preferable, but in how I shoot, getting it into the camera at all is something that is by no means guaranteed.  Of course, I am in a very tiny minority of all the people who shoot and know how to change the WB in post, but while my process doesn't apply to the majority of shooters, I still think there can be things learned by sharing 🙂 
  12. Like
    Ty Harper reacted to BenEricson in Getting things right in camera   
    Years ago, I used to work on these Walmart commercials. They were all shot on the Canon 5DMk3 set to Auto WB. Every market all over the country shot on the same camera, lens, and settings so the commercial spots would always look the same. It also sped up post production. EOS Standard, I think? Or whatever was available on that camera. It's been a while.
    You will get the most accurate skin tones if you do a proper manual WB off of white. I sometimes use auto WB to read the scene, then lock my WB to whatever it tells me is correct. 
    For studio stuff, I really like to shoot with baked in color, typically EOS Standard. I will often tweak the color matrix based on the setting but that profile on the Canon Cinema cameras is really really nice in my opinion. It gives you a very honest reading of the scene. Really nice for lighting fill ratios on set etc. 
    Shooting with baked-in color doesn't mean you can't grade or tweak the image. You can still shoot plates to control a blown-out window away from the subject, etc. You can also add vignettes or tweak colors. I just find this to be incredibly accurate, and better than most people can do without a proper color workflow.
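    A sketch of what "a proper manual WB off of white" computes under the hood (my simplified model, assuming linear RGB samples - not Canon's actual implementation): per-channel gains that make a white reference patch read neutral.

```python
import numpy as np

# Hypothetical sketch of manual white balance: sample a white card,
# derive per-channel gains (normalised to green), apply them to the image.

def white_balance_gains(white_patch: np.ndarray) -> np.ndarray:
    """white_patch: (H, W, 3) linear RGB samples of a white card."""
    means = white_patch.reshape(-1, 3).mean(axis=0)
    return means[1] / means          # green channel gain stays at 1.0

def apply_gains(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    return np.clip(image * gains, 0.0, 1.0)

# A warm-tinted white card: red reads high, blue reads low.
patch = np.full((4, 4, 3), [0.9, 0.8, 0.6])
gains = white_balance_gains(patch)
balanced = apply_gains(patch, gains)
# After correction, all three channels of the card match.
```

    This is also why a locked manual WB matches shots better than Auto WB: the gains are computed once from a known-neutral reference instead of re-estimated per scene.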
  13. Like
    Ty Harper reacted to kye in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
    I suspect that another variable in your observations will be the particular implementation of the system.
    One thing that frustrates me no end is how these stabilisation systems are specified.  The "5 stops" rating is a measure of how much vibration is left in the image compared to how much the camera was subjected to.  That's fine, but in my experience this isn't the factor that is dominant at all.  My experience is that if you are able to hold a camera steady and only have a small amount of shake then they all do a good job - where the residual is practically invisible.
    The situation where the system isn't able to do such a great job is where the amount of camera shake is more than the mechanism is able to eliminate - i.e. the mechanism reaches the mechanical limits of its movement.  These are the situations that show the limits to the mechanism, and (I suspect) one of the reasons that MFT is often reported as being better than larger sensors.  This would make sense as to get the same range of stabilisation an MFT sensor would only have to move half as far as a FF sensor, and do so with a quarter the weight.
    So, if the systems reveal themselves at their limits, another aspect is how they behave at their limits.  No manufacturer implements this in a way that absorbs all movement and then instantly transitions to having zero impact when the mechanism's boundaries are met - that would be a terrible design choice - so there needs to be some sort of transition zone between full stabilisation and none, like a 'knee' or 'rolloff', and the characteristic of this function would likely impart some sort of 'feel' to the user, I would expect.
    Also, each of the mechanisms is likely to be quite benign when they are operating well within their range of stabilisation, but the further towards their limits you push, they will reveal various artefacts.  The combination of IBIS and ultra-wide-angle lenses is well known with the corners wobbling about like jelly on a rickshaw, but I would imagine that there are other combinations that all have their own issues with differing aesthetics too.
    The sad thing is that with a very rudimentary testing setup, someone that had access to all the cameras (like DXO or some other tester) would be easily able to measure these things, and give some data around them.  I think it would be exceedingly useful to know that one camera was able to stabilise twice as much movement as another camera, despite both of them having the same ratio of reduction ("stops") when both are well within their functional range.
    There are also other aspects that bear mentioning that just aren't well known, for example that IBIS and DIS stabilises camera roll motion, but OIS does not.  With all the reading I do, and with the huge emphasis that my own shooting places on stabilisation, I never knew that and learned it the hard way on a shoot of my own.
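    The halved-travel / quarter-mass scaling claimed a few paragraphs up is simple arithmetic and can be sketched as a back-of-envelope calculation (my own assumptions, not measured figures: a 2x crop factor for MFT, image shift scaling with focal length for the same field of view, and sensor mass taken as proportional to sensor area):

```python
# Back-of-envelope sketch of the MFT-vs-FF IBIS scaling argument.
# Assumptions: same framing means the MFT focal length is ff / crop,
# so the sensor shift needed to cancel a given angular shake is halved;
# sensor mass is modelled as proportional to sensor area (1 / crop^2).

CROP_MFT = 2.0

def required_travel_mm(ff_travel_mm: float, crop: float) -> float:
    # Image shift scales with focal length, hence inversely with crop.
    return ff_travel_mm / crop

def relative_mass(crop: float) -> float:
    # Area (and, by assumption, mass) scales with 1 / crop^2.
    return 1.0 / crop**2

print(required_travel_mm(1.0, CROP_MFT))  # 0.5  -> half the travel
print(relative_mass(CROP_MFT))            # 0.25 -> a quarter the mass
```

    Half the travel at a quarter the moving mass is why, under this model, an MFT IBIS mechanism can absorb a larger shake before hitting its mechanical limits.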
  14. Thanks
    Ty Harper got a reaction from Emanuel in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
    Random test between R5C and FX3 that just popped up in my feed: 
     
  15. Like
    Ty Harper got a reaction from kye in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
    Thanks @kye and I should clarify that it was my bad for seeing all the hype around the R5's IBIS vs the R5C's DIS and not taking the time to fully understand exactly what each system did/didn't do.
    The thing is, when lens IS first came on the scene it was very clear why it was better than no IS at all.
    And sure IS, IBIS and DIS at their core are all able to make a handheld static video shot feel like it's on a tripod - but where lens IS shines for my use cases is in the way it translates movement above a certain threshold to look natural/organic vs IBIS and DIS.
    Had I fully grasped that IBIS was never going to give me anything better than DIS when it came to that particular aspect, I would've just bought another R5C and called it a day! But ah well, again totally my bad, tho it's not like the R5 isn't a great hybrid camera in its own right.
  16. Like
    Ty Harper got a reaction from gt3rs in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
  17. Like
    Ty Harper got a reaction from PannySVHS in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
  18. Like
    Ty Harper reacted to eatstoomuchjam in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
    The extra DR from the R5C vs the R5 would have to be substantial for it to match with the C70.  As I said, I shot the R5 and C70 on set over the weekend and the difference seems substantial to me.  I have no reason to doubt that the R5C (and R5) offer benefits at ISO 3200, but thankfully I'm not lighting for that on set.  In my case, the R5C also gets a 1-stop boost because I've screwed on the EF speed booster.  That'll cover me most of the time.

    As far as matching color between them, I have a color chart taped to the back of my slate and ask the PA to flip it around after calling the shot.  I just use the color chart tool in Resolve on the first or second node and other than DR and noise, I'd be hard-pressed to say which camera is which after that.  They both look great.  Unless the R5C changes the colors a lot, I can't imagine that it would be too hard to match to the C70 or C300II.
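    Resolve's color chart tool isn't publicly documented, but the general technique it resembles - fitting a 3x3 matrix that maps one camera's chart patches onto another's in the least-squares sense - can be sketched like this (the matrix values and the 24-patch chart data here are made up for illustration):

```python
import numpy as np

# Sketch of chart-based camera matching (not Resolve's actual algorithm):
# solve, in the least-squares sense, for a 3x3 matrix M such that
# source_patches @ M.T ~= reference_patches.

rng = np.random.default_rng(0)
reference = rng.uniform(0.05, 0.95, size=(24, 3))   # 24-patch chart, camera A
true_m = np.array([[1.10, -0.05, 0.00],             # made-up camera difference
                   [0.02,  0.95, 0.03],
                   [0.00,  0.04, 1.05]])
source = reference @ np.linalg.inv(true_m).T        # same chart seen by camera B

# Least squares over all 24 patches recovers the mapping matrix.
m_fit, *_ = np.linalg.lstsq(source, reference, rcond=None)
m_fit = m_fit.T

matched = source @ m_fit.T                          # camera B, matched to A
```

    With noise-free patches the fit recovers the mapping exactly; in practice each patch is an average over many pixels, so noise and DR differences remain - which lines up with the observation that only DR and noise give the cameras away after matching.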
  19. Like
    Ty Harper reacted to ade towell in Thoughts on Nikon Z9/Z8 vs. Canon R3/R5(c)?   
    Hey, I'm only reporting my friend's frustrations - but I do think it is a bit ridiculous that they don't have Clog2 on the R5C. It is supposed to be part of the Cinema line, and Clog2 would make matching to their other cinema cameras a lot easier.
    I have no scientific proof, but I imagine the reason they don't have Clog2 in the R5C is that it doesn't have the DR to utilise it fully. Maybe they could create a Clog2 lite, like Panasonic did with the GH5 and V-Log.
  20. Like
    Ty Harper reacted to kye in 24p is outdated   
    I absolutely agree with @Ty Harper that with enough data it will be able to differentiate the movies that got nominated for an academy award from those that didn't, those that did well in the box office from those that didn't, etc.
    What it won't be able to do, or at least not by analysing only the finished film, is know that the difference between one movie's success and the next is that the director of one was connected in the industry and the second movie lacked that level of influence.  But, if we give it access to enough data, it will know that too, and will tell a very uncomfortable story about how nepotism ranks highly in predicting individual successes...
    I also agree with @JulioD that the wisdom will be backwards-looking, but let's face it, how many of the Hollywood blockbusters are innovative?  Sure, there is the odd tweak here or there that is enabled by modern production techniques, and the technology of the day changes the environment that stories are set in, but a good boy-meets-girl rom-com won't have changed much in its fundamentals because humans haven't changed in our fundamentals.
    Perhaps the only thing not mentioned is that while AI will be backwards looking, and only able to imitate / remix past creativity, humans inevitably use all the tools at their disposal, and like other tools before it, I think that AI will be used by a minority of people to provide inspiration for the creation of new things and new ideas, and also, it will give the creative amongst us the increased ability to realise our dreams.
    Take feature films for example.  Lots of people set out to make their first feature film but the success rate is stunningly low for which ones get finished.  Making a feature is incredibly difficult.  Then how many that do get made are ever seen by anyone consequential?  Likely only a small fraction too.
    Potentially these ideas might have been great, but those involved just couldn't get them finished, or get them seen.  AI could give everyone access to this.  It will give everyone else the ability to spew out mediocre dross, but that's the current state of the industry anyway isn't it?  YT is full of absolute rubbish, so it's not like this will be a new challenge...
  21. Like
    Ty Harper reacted to mercer in 24p is outdated   
    What it will be able to do is combine the visions of auteurs... what if Kubrick made John Wick, or what if Spielberg made ...
    What would naturally take a person with his/her own vision and craft will soon be realized by a soulless machine.
    Sad.
  22. Like
    Ty Harper got a reaction from Jedi Master in 24p is outdated   
    Well the most obvious reasons it can are: (i) because AI has almost 100 years of data (i.e. film history) to draw from, and (ii) because it likely has (or eventually will have) access to data mined from all these programs we're using right now to make our art. I mean why else do you think some of these apps with these fantastic tools are being offered for us to use for free? And remember AI doesn't need the entirety of that data - it just needs a large enough sample size to crack the code.
    The mistake we continue to make as humans is thinking that the things that make us complex cannot be reduced to 1s and 0s. But they totally can, if given enough data.
    And again, none of this will ever end our human need to create or be creative. It will however make it harder for us to monetize our creativity in economically profitable and sustainable ways.
  23. Haha
    Ty Harper got a reaction from ghostwind in 24p is outdated   
  24. Like
    Ty Harper got a reaction from kye in 24p is outdated   
  25. Like
    Ty Harper reacted to zlfan in 24p is outdated   
    i have a youtube channel, kind of news oriented. i use all kinds of cameras i have, at 24p, 30p and 60p, and aspect ratios of 16:9 and 2.4. no viewer complains. with a new platform like youtube, the old hierarchy of distribution - theater, network, etc. - does not rule. and youtube etc. is very tolerant.