
kye

Members
  • Posts

    7,489
  • Joined

  • Last visited

Reputation Activity

  1. Like
kye got a reaction from Juank, SRV1981 and hyalinejim in Sensor vs. Processor
To expand on the above, here is a list of all the "layers" that I believe are in effect when creating an image - you are, in effect, "looking through" each of these:
• Atmosphere between the camera and subject
• Filters on the end of the lens
• The lens itself, with each element and coating, as well as the reflective properties of the internal surfaces
• Anything between the lens and camera (eg, speed booster / TC, filters, etc)
• Filters on the sensor and their accompanying coatings (polarisers, IR/UV cut filters, anti-aliasing filter, bayer filter, etc)
• The sensor itself (the geometry and electrical properties of the photosites)
• The mode that the sensor is in (frame-rate, shutter-speed, pixel binning, line skipping, bit-depth, resolution, etc)
• Gain (there are often multiple stages of gain, one of which is ISO, occurring both digitally and in the analog domain - I'm not very clear on how these operate)
• Image de-bayering (or equivalent for non-bayer sensors)
• Image scaling (resolution)
• Image colour space adjustments (Linear to Log or 709)
• Image NR, sharpening, and other processing
• Image bit-depth conversions
• Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc)
• Image container formats
This is what gets you the file on the media out of the camera.  Then, in post, after decompressing each frame, you get:
• Image scaling and pre-processing (resolution, sharpening, etc)
• Image colour space adjustments (from file to timeline colour space)
• All image manipulation done in post by the user, including such things as: stabilisation, NR, colour and gamma manipulation (whole or selective), sharpening, overlays, etc
• Image NR, sharpening, and other processing (as part of export processing)
• Image bit-depth conversions (as part of export processing)
• Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of export processing)
• Image container formats (as part of export processing)
This gets you the final deliverable.  Then, if your content is to be viewed through some sort of streaming service, you get:
• Image scaling and pre-processing (resolution, sharpening, etc)
• Image colour space adjustments (from file to streaming colour space)
• All image manipulation done by the streaming service, including such things as: stabilisation, NR, colour and gamma manipulation (whole or selective), sharpening, overlays, etc
• Image NR, sharpening, and other processing (as part of preparing the stream)
• Image bit-depth conversions (as part of preparing the stream)
• Image compression (codec, bitrate, ALL-I vs IPB and keyframe density, etc) (as part of preparing the stream)
• Image container formats (as part of preparing the stream)
This list is non-exhaustive and is likely missing a number of things.  It's worth noting a few things:
• The elements listed above may be done in different sequences depending on the manufacturer / provider
• The processing done by the streaming provider may differ per resolution (eg, more sharpening for lower resolutions)
• I have heard anecdotal but credible evidence to suggest that there is digital NR within most cameras, and that this might be a significant factor in what separates consumer RAW cameras like the P2K/P4K/P6K from cameras like the Digital Bolex or high-end cinema cameras
..and to re-iterate a point I made above, you must take the whole image pipeline into consideration when making decisions.  Failing to do so is likely to lead you to waste money on upgrades that don't get the results you want.  For example, if you want sharper images you could spend literally thousands of dollars on new lenses, but this might be fruitless if the sharpness/resolution limitation is the in-camera NR.  Or you might spend thousands of dollars on a camera that is better in low light when there is no perceptible difference after the streaming service has compressed the image - you'd have to be filming at ISO 10-bajillion before any grain was visible (seriously - test this for yourself!).
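To make the layering concrete, here is a minimal sketch (in Python, purely illustrative - every stage body is a hypothetical placeholder, not any real camera's processing) of the core idea: the delivered image is the composition of every stage in order, so a defect introduced at any layer is carried through all the layers after it.

```python
import numpy as np

# Illustrative only: each "layer" is a transform, and the delivered image is
# the composition of every stage in order.
def debayer(img):        return img                        # demosaic (placeholder)
def scale(img):          return img                        # resize (placeholder)
def linear_to_log(img):  return np.log2(img + 1.0)         # toy transfer curve
def nr_and_sharpen(img): return img                        # in-camera NR (placeholder)
def compress(img):       return np.round(img * 255) / 255  # crude quantisation loss

PIPELINE = [debayer, scale, linear_to_log, nr_and_sharpen, compress]

def render(sensor_readout, stages=PIPELINE):
    """'Look through' every layer by applying each stage to the previous result."""
    img = sensor_readout.astype(np.float64)
    for stage in stages:
        img = stage(img)
    return img

delivered = render(np.random.rand(4, 4))  # toy 4x4 "sensor readout"
```

The same `render` call with a longer stage list models the post and streaming passes: quality lost at any earlier stage cannot be recovered by a later one.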
  2. Like
kye got a reaction from Juank, Michael S, Rinad Amir and SRV1981 in Sensor vs. Processor
    1) colour
The sensor in a camera is a linear device, but it does have a small influence on colour: each photosite has a filter on it (which is how red, green, and blue are detected separately), and the wavelengths of light each filter passes are tuned by its manufacturer to give optimal characteristics.  So the filter array is a small part of the colour science of the camera.
Beyond that, the sensor just measures the light that hits each photosite, and that measurement is completely Linear.  Therefore, all the colour science (except the filters on the sensor) is in the processor that turns the Linear output into whatever 709 or Log profile is written to the card.
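As a minimal sketch of that split (the log curve here is made up for illustration - deliberately not any manufacturer's actual formula), the sensor hands over linear values and the processor's transfer curve does the rest:

```python
import numpy as np

def linear_to_log(x, mid_grey=0.18, stops_above=6.0):
    """Map linear scene light to a 0-1 log signal (toy curve, not a real profile).

    Mid grey (0.18 linear) lands at 0.5, and each stop of exposure moves the
    signal by an equal step - which is the whole point of a log profile.
    """
    x = np.maximum(np.asarray(x, dtype=np.float64), 1e-6)  # avoid log(0)
    stops = np.log2(x / mid_grey)                          # exposure vs mid grey
    return np.clip(0.5 + stops / (2 * stops_above), 0.0, 1.0)

print(linear_to_log(0.18))         # 0.5  - mid grey
print(linear_to_log(0.18 * 2**3))  # 0.75 - three stops over, evenly spaced
```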
    2) DR
DR is limited by the dynamic range of the sensor, and by its noise level, at the given ISO setting.  If a sensor has more DR or less noise then the overall image has more DR.
The processor can do noise reduction (spatial or temporal), and this can increase the DR of the resultant image.  The processor can also compress the DR of the image through the application of uneven contrast (eg crushing the highlights) or by clipping the image (eg when saving JPG images rather than RAW stills), and this would decrease the DR.
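A tiny worked example of those two paragraphs, using made-up sensor numbers: engineering DR in stops is the log2 ratio of the clipping level to the noise floor, and temporal averaging (a crude stand-in for NR) lowers that floor:

```python
import math

full_well = 60000.0   # hypothetical clipping level, in electrons
read_noise = 15.0     # hypothetical noise floor, in electrons

print(f"sensor DR: {math.log2(full_well / read_noise):.1f} stops")        # ~12.0

# Averaging 4 frames (temporal NR) cuts random noise by sqrt(4) = 2x,
# lowering the floor and gaining about one stop of usable DR:
print(f"after NR:  {math.log2(full_well / (read_noise / 2)):.1f} stops")  # ~13.0
```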
    3) Highlight rolloff
    Sensors have nothing to do with highlight rolloff - when they reach their maximum levels they clip harder than an iPhone 4 on the surface of the sun.
    All highlight rolloff is created by the processor when it takes the Linear readout from the sensor and applies the colour science to the image.
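A minimal sketch of that distinction, with an invented knee value: the sensor clips hard at its maximum, and any "rolloff" is a shoulder the processor shapes into the below-clip values afterwards:

```python
import numpy as np

def sensor(x):
    """The sensor itself: linear up to full well, then a hard clip. No rolloff."""
    return np.minimum(x, 1.0)

def shoulder(y, knee=0.8):
    """Processor-side rolloff (invented knee): values above the knee are
    compressed smoothly toward white instead of slamming into it."""
    soft = knee + (1.0 - knee) * (1.0 - np.exp(-(y - knee) / (1.0 - knee)))
    return np.where(y <= knee, y, soft)

scene = np.linspace(0.0, 2.0, 9)  # scene light up to 2x the clip level
print(sensor(scene))              # everything past 1.0 is simply gone
print(shoulder(sensor(scene)))    # the rolloff exists only in processing
```

Detail above the sensor's clip point never reaches the processor, which is why the shape of the shoulder (not the sensor) is what people are actually describing when they praise a camera's rolloff.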
There is general confusion around these aspects, and there is frequently talk of how one sensor or another has great highlight rolloff, which is factually incorrect.  I'm happy to discuss this further if you're curious.
  3. Like
    kye reacted to Emanuel in Sony New Camera Launch - 29th March 2023   
Very true @kye, priceless post, mate : ) I think it's more about learning to work with the limitations, but you're pretty much spot on about everything you said :- )
  4. Like
    kye reacted to kaylee in Artificial voices generated from text. The future of video narration?   
so many ppl are going to lose their jobs lol. this is gen 1 beta
     
  5. Like
    kye reacted to Django in Sony New Camera Launch - 29th March 2023   
Academy Award-winning cinematographer Greig Fraser (Dune, Rogue One, Lion) apparently shot his upcoming sci-fi thriller entirely on an FX3. The ZV-E1 has the same sensor, image processor, 10-bit codec, S-Log3 and even LUT support. I'm sure we'll end up seeing incredible cinematic footage produced with it; it's certainly capable of it.
That being said, not to sound like a broken record, but the fact that the camera overheats does limit its use. So it's maybe not the most reliable option for pro work vs the A7S3/FX3 with their passive/active cooling systems. 
I've done work myself with overheating cameras in the past (R5, R6), so it's doable, but it's stressful and better suited to B or C cams to be on the safe side. The ZV-E1 seems even more unreliable than those because of its ultra-compact size. We know, for instance, that indoors it is very sensitive to ambient temperature. What the latest findings show is that airflow is key. Hence it is better suited as a vlog cam, where you are outdoors and in motion, or as a travel cam, where you are generally shooting short clips outdoors. And in both cases, even if the camera overheats, no biggie - take a coffee break and let it cool off.
But where there's a will there's a way: rig a mini fan on it and you may indeed have yourself a budget mini FX3! 
Not that bizarre. No cooling = overheating. The FX3/FX30/R5C/S1H have active cooling, so they don't overheat.
Sony has not played into the resolution wars with their A7S3/FX3/FX6/ZV-E1 cameras. All are 12MP, 1:1 4K cameras. 
Overheating isn't resolution-related; it's related to high frame rates and high-bitrate compressed 10-bit codecs. It's when these appeared that overheating started plaguing mirrorless bodies. It's also why Sigma chose not to implement internal compressed 10-bit on their ultra-compact FP: it's either RAW or 8-bit.
     
     
  6. Like
kye got a reaction from ntblowz, FHDcrew and mercer in Sony New Camera Launch - 29th March 2023
    Absolutely!
If we all only did what the manufacturers suggested, you might as well erase half of the footage online, across all streaming services and content hosting sites.
Some notable mentions (that make using a ZV-E1 for film-making look completely normal) include:
• Using GoPros in major feature films, like the $66M Need For Speed - https://www.filmmakersacademy.com/gopro-hero3/
• Filming feature films entirely on a smartphone, like Tangerine - https://en.wikipedia.org/wiki/Tangerine_(film)
• Using the Magic Lantern firmware on Canon cameras
• etc etc
In fact, using DSLRs to record professional video at all was not intended by the manufacturers.  Had we followed their guidance we wouldn't have had the entire DSLR revolution, this blog, or an entirely new chapter of indie film-making, which includes indie features but also all the forms of video social media around.  Ironically, had we only followed the manufacturers' guidance, the ZV-E1 probably wouldn't exist.  
So when @markr041 talks about how the ZV-E1 should only be used for travel and vlogging, it goes against the entire idea that created the camera, against both the travel film-making and vlogging genres, and against the existence and purpose of this whole site.
7. Like
    kye reacted to Django in Sony New Camera Launch - 29th March 2023   
Well, let's face it, everybody likes to save money no matter your income. So when the Sony shills on YouTube are out there spinning the ZV-E1 as an A7S3/FX3 for almost half the price, eyebrows get raised. What's the catch? Well, it's quite simple: Sony stripped the camera to its bare bones and it overheats during long takes as a result. And the shit show commences.
Overheating is a controversial subject. Most people, I think, feel a camera past a certain price point shouldn't have such limitations. Sony did put a lot of effort into making sure this camera is identified as a vlog camera. It's not in the A7 series; it's ZV. And I guess that for its intended use, the camera performs to satisfaction. So why the fuss?
Because, price aside, there are a few cool things about the ZV-E1. It has features neither the A7S3 nor the FX3 have, thanks to the AI processor. The AF is really impressive, it has breathing compensation, the touchscreen supports gestures, etc. The camera just feels snappy and the AF doesn't skip a beat no matter what you throw at it.
So in a few ways, the ZV-E1 kinda feels like an updated A7S3/FX3. And that's rather alluring, especially in the US at its price point. But that's when reality hits and you must realise it's a vlog/travel C-cam intended for short takes. If that's your method of shooting, well, you're in luck. If not, it's probably going to be a hard pass. It's as simple as that. 
8. Like
    kye got a reaction from PannySVHS in Sony New Camera Launch - 29th March 2023   
    One thing that is essentially invisible on these forums is relative cost of these devices.  When I joined a bunch of Facebook groups related to MFT and GH5 etc a few years ago I realised a few very interesting things:
• There were pros shooting music videos on MFT cameras like the GX85 and G9 - fully booked working professionals
• There were people who didn't know anything technical at all doing real paid work...  posts like "I've just bought a GH4 and a vintage 50mm lens, my first real camera setup.  I've got 8 paid gigs scheduled starting in 5 days time - what does the mode knob do? and is 50mm a good lens to use?"
• There were people who were incredibly excited to get (what we would dismiss as) old has-been cameras.  I saw many posts from people saying that they'd saved up enough money to buy a GH4 or GH5 as it was "their dream camera".  These were often people in poorer countries / areas.
Adding to this other factors such as:
• There are countries that still broadcast in SD or 720p
• People do work for community media channels (which have no money)
• People do work for not-for-profits (which have less than no money)
What this means is that working with cameras that are sub-optimal or lower budget is very much a consideration and reality for many, or even most, people out there.  When you add the fact that shooting events with a multi-cam setup means spreading your budget across multiple cameras, all of this is amplified - there are people literally sleeping on the floors of friends and family to be able to afford equipment.
So while to you and me this $2K camera might seem like a "low-end" option, where people doing more "serious" work should spend double or more on a better model, to many this is a completely ridiculous price for a camera (maybe more than a year's salary) - so if this is a way to get into FF Sony, then why not add fans and all manner of jerry-rigging to it?
All of the above applies even more so to people trying to film their first (or tenth) feature film, or filming documentaries (where long takes are required for interviews etc).
    The more I get exposed to the wider world, the more I realise that my home videos are often shot, edited, graded and delivered better than a lot of stuff that appears on commercial TV.  
    So, is this camera the right choice for such works?  Probably not.  Will some people try to use it for these things by adding fans and all manner of other things? Absolutely.  
    But let's ignore that and just assume that no-one would ever try to do anything that this camera isn't capable of doing.  Should we just discuss cameras in a non-critical way?  Should we just thank the manufacturers for giving us whatever half-crippled products they decide will keep their bottom line as rosy as possible?
No.  We should push the manufacturers at all times to do better.  We should explore the options they provide from whatever angles we can think of, so that not only do the less wealthy lurkers who read the forums but don't post get ideas about what is and isn't possible with each camera, but the manufacturers can also see what improvements make sense in the context of each model.
Panasonic was greatly admired across the GH1-GH5 era because....  *drum roll please*  .... they improved each model over the last by basically doing what people requested.  The reviews had consistent themes of "in our review of the last model we made a wish list and this new model ticks all those boxes".
9. Like
    kye got a reaction from Juank in Sensor vs. Processor   
To return to the original question: perhaps the most important element in all this is the ability of the operator to understand the variables and aesthetic implications of all of the above layers, to understand their budget and the options available on the market, and to apply that budget so the products they use are optimal for achieving the intellectual and emotional response they wish to induce in the viewer.
10. Like
    kye reacted to Matt Kieley in Lenses   
    I have a Fujinon 18-108mm 1.8 c-mount zoom that covers s16 that's very light and compact. Here it is next to my Canon 17-102 and Panasonic 12-35, and mounted on my GH5:



    I don't know if it's compact enough for you, but it's the smallest c-mount zoom that covers s16 that I've owned (and I've owned quite a few).
     
11. Thanks
    kye reacted to PannySVHS in Blackmagic Micro Cinema Super Guide and Why It Still Matters   
Though I got to use my BMMCC to film two buddies of mine last year, which turned out to be an awesome display of colour prowess, and though I got to shoot some test footage for an upcoming project and some cool BTS from a friend's project, I still have nothing from this beauty to post here due to the nature of the material. But I got to test my OG Pocket, which I couldn't resist buying about three months or so ago when the price was temptingly low. :)
So I went with the tiny Panny 14mm F2.5 pancake, 800 ISO of course, ProRes 422 (not HQ), 360 shutter. F-stop up to 22. Lol. This camera is a dream in daylight. I had a test under tungsten and found it to be just okay. But natural light or 5600K light fixtures are this camera's beautiful friends. So finally some rough n ready footage, collected over the course of a late afternoon to evening - shaky, clumsy, but not without charm imho. Display and batteries were a challenge but workable for me. Sometimes silhouettes on the screen are enough when your eyes can see the real thing beyond the screen. Focussing was mostly prefocus and via the AF button, as was the aperture adjustment, with a bit of sneaking the histogram.
Anyway, here it goes, my almost-BMMCC footage. 🙂
     
     
12. Like
    kye got a reaction from PannySVHS in Blackmagic Micro Cinema Super Guide and Why It Still Matters   
    From the April edition of Film and Digital Times page 42:
    https://www.fdtimes.com/pdfs/free/120FDTimes-Apr2023-150.pdf
    and later on...
    and
It also showed that the BMMCCs were put in fireproof boxes.
BMMCC and P4K cameras on a €30,000,000 film shot in 2021 that screened in IMAX.  Just goes to show that these cameras are still relevant and in active use on productions larger than anything being discussed on here.
13. Like
kye got a reaction from ntblowz and PannySVHS in Sony New Camera Launch - 29th March 2023
    Quite a few people on here are filming things like weddings or concerts where you need to run multiple cameras for a long time and have them run unsupervised because they're operating a roaming camera while the others are rolling.  In these situations it's not uncommon for each camera to need to record for an hour or more, sometimes in full sun, without encountering any issues.  If you are shooting a wedding with three cameras and one of them shuts down after 25 minutes then you're potentially screwed in the edit.  
I've watched wedding videographers edit a 3-camera multi-cam from a wedding ceremony, and even though they didn't have a camera overheat and stop working, there were points in the edit when two of their three camera angles were unusable and they were down to one usable angle.  Had that camera overheated, they'd have been down to zero (or forced to use the angle that was set up for when everyone was sitting, but which, the moment everyone stood up, just showed the backs of the people sitting in the back row - not exactly a professional moment).
Add to this the fact that in these situations it's useful to have identical cameras, so that all the lenses, cards and accessories are interchangeable.  So if one camera is at risk of overheating then it isn't impossible to have multiple cameras overheat, which would well and truly screw you.
Besides, 87°F is pathetic in terms of overheating tests....  I have overheated an iPhone before because it was 107°F, I was recording clips that were several minutes long, it was in full sun, and the screen brightness was at full so I could see what I was pointing the camera at.  I literally submerged half of it in a river to cool it down because I was missing moments (the people swimming in the river).  You can't do that with most cameras, and if they overheated while unattended, you'd never know until it was too late.
One thing that causes almost all the head-scratching (and starts almost all the arguments) is when one person cannot understand that someone else uses their equipment differently, to achieve a different result, for a different audience.  I suggest paying closer attention to how people talk about their camera choices - video is one of those fields where there are lots of ways of doing things, and where techniques from one approach can be really handy to understand in your own work, even if it is very different.
14. Like
    kye got a reaction from SRV1981 in Sensor vs. Processor   
    I don't think there is a "most important".  
    I tend to think of photos/video like you're looking through a window at the world, where each element of technology is a pane of glass - so to see the outside world you look through every layer.  If one layer is covered in mud, or is blurry, or is tinted, or defective in some way, then the whole image pipeline will suffer from that.
    In this analogy, you should be trying to work out which panes of glass the most offensive aspects of the image are on, and trying to improve or replace that layer.  
    Thinking about it like this, there is no "most important".  Every layer is important, but some are worth paying more or less attention to, depending on what their current performance is and what you are trying to achieve.
    Of course, the only sensible test should be the final edit.  Concentrating on anything other than the final edit is just optimising for the wrong outcome.
15. Like
    kye got a reaction from inspiredtimothy in The dilema of being a Nikon Z6 shooter in 2023   
    I think the best way to do things is this:
1. Test your equipment to understand its limitations
2. Use your equipment to shoot real footage
3. Edit and deliver a real project
4. Look at what issues / limitations there were in the final edit, then..
5. If you can fix them by using your equipment differently, do that (and do lots of tests)
6. If you can fix them by learning to plan/capture/light/edit/colour/master better, then go learn (understand the theory, then do lots of tests and practice)
7. If you can't fix them any other way AND they're worth spending the money to solve AND it won't hurt you in some other way, THEN spend money to fix them
8. Go to step 2
That's it.  Only ever fix problems that you encounter in post on real projects - the rest is just BS.
Step 7 is particularly brutal as well, because often we're faced with choices where we can improve one thing but it kills other aspects of the process.  This is where the "60 seconds can seem so long" comes in.  I routinely find that I have clips containing 1s of good video, but it's hard to use in an edit because I'd actually want to include the 1s prior to it, which I wasn't fast enough in setting up the shot to get.  I developed the technique of hitting record on the camera before I frame and focus, because it means I can use the clip from the very first frame that is in focus.  In that sense even 1s is the difference between a usable shot and one that's very difficult to use.
16. Like
    kye got a reaction from inspiredtimothy in The dilema of being a Nikon Z6 shooter in 2023   
I totally understand why people try to buy their way to better videos - it's a much easier experience to research cameras and discuss (dream about) what cool new things you could buy than it is to admit you don't know much about a subject and start studying it (forcing your brain to work hard), and to keep doing that for months and months.
Unfortunately, that's what it takes to actually become a better film-maker.  I posted over in the "Once in a lifetime shoot" thread about the Tokyo episode of Parts Unknown that won a bunch of awards, but long story short: the cinematography didn't include any shots that were amazing in a grandiose kind of way, but it had a huge variety of solid shots from creative angles, and the editing and sound design were absolutely spectacular.  End result..  awards and nominations, and a great viewing experience that is far from the pedestrian nature of most professional content, let alone us mere mortals.
The innovative nature of that episode alone is enough to make you crawl into the foetal position under the blankets, but the news is actually tremendous...  most of the content in the world is so bland by comparison that to create solid professional-level edits you don't have to know 80% of what the greats know - a solid 20% will do just fine.
17. Haha
    kye reacted to IronFilm in Artificial voices generated from text. The future of video narration?   
    A friend is currently writing an AI trading bot. 
    (one of zillions of such people trying to do this I am sure) 