iaremrsir

Reputation Activity

  1. Like
    iaremrsir got a reaction from tomastancredi in Is a sub $900 or 1.200 laptop enough for raw and/or 4k in Davinci?   
    My Dell 5000 series gaming laptop can play back a UHD timeline with GH5 footage and 4K XAVC from an F5. It also handled 4K Sony F5 raw in a 2K timeline with no hiccups. In all honesty though, I think if you went with an Alienware 13 or 15 you'd be better set up for more complex projects. With a Dell 5000/7000 series gaming laptop you'd be good for editing, grading, and low-intensity work at high resolution. This is all in Resolve 14, by the way. Premiere has been too sketchy with raw performance for me.
    But I do agree that a small desktop setup for around the same price will get you much more bang for your buck.
  2. Like
    iaremrsir got a reaction from austinchimp in Are S-LOGS More Destructive Than They're Worth?   
    Hi, I'm Eddie. I designed the Bolex Log color specification and the image processing pipeline for Digital Bolex towards the end of its production run. I also wrote the plugin that lets people process CineForm Raw color in DaVinci Resolve as if it were DNG. I'm not saying this to get into a pissing contest; I'm saying it as someone who is on the manufacturer side of things and has to know the ins and outs of the product and how it's used with other tools.
    Shoot 8-bit H.264 from a Canon C100, or any of their DSLRs for that matter, then compare it to ProRes recorded over HDMI to an external device. You'll often see that the ProRes has more noise and fine detail/texture. This is because the in-camera H.264 encoder smooths out the noise, while HDMI outputs uncompressed data. You're speaking as if the main reason detail is lost in 8-bit is the log curve, when in reality the main loss of detail comes from heavy compression. I already agreed that using a log curve in 8-bit will redistribute code values so that more space is given to the mids and shadows, meaning fewer codes per stop when storing HDR data. We're not debating sensor data, otherwise we'd be talking analog stages, 16-bit, 14-bit, uncompressed. So, in this case, compression has everything to do with the image data.
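    To put rough numbers on that redistribution, here's a small sketch. The Rec.709 OETF is the standard piecewise formula; toy_log is a made-up 10-stop log curve used purely for illustration, not any camera's actual S-Log.

        import math

        # Rec.709 OETF (scene-linear -> [0,1] signal), standard piecewise definition.
        def rec709_oetf(x):
            return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

        # Illustrative log curve: maps ~10 stops above a toy black level into [0,1].
        def toy_log(x, stops=10.0, black=2 ** -10):
            return min(math.log2(max(x, black) / black) / stops, 1.0)

        def codes_in_stop(curve, top, bits=8):
            """How many code values the one-stop interval [top/2, top] spans."""
            levels = 2 ** bits - 1
            return round(curve(top) * levels) - round(curve(top / 2) * levels)

        for name, curve in [("Rec.709 gamma", rec709_oetf), ("toy log", toy_log)]:
            spans = [codes_in_stop(curve, 0.18 * 2 ** s) for s in range(2, -5, -1)]
            print(f"{name:13s} 8-bit codes per stop, +2 to -4 around mid gray: {spans}")

    The gamma curve piles codes onto the brighter stops and leaves only a handful per stop in the shadows; the log curve spreads them roughly evenly, which is exactly the trade being described.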
    If you applied a logarithmic gain at the analog stage (ignoring the electron-multiplying case), noise would be much higher than if you applied the log curve after digitization. It's not pretty, and definitely not philosophical in any measure. And these log curves can't pull data that isn't there. It's well known that using a log curve will boost the appearance of recorded noise at any bit depth, in any color space, etc. I'm not arguing that log fixes noisy data. I'm arguing that recording log lets you keep more detail across the range and expose in a way that better minimizes the appearance of noise in post (ETTR). Trust me, I let out a heavy sigh any time I see someone compensating for a low-light scene or poor scene lighting by recording log.
    I didn't say you said they don't. But you said professionals are lighting within 6-stop ranges. They have millions to pour into set design, lighting, and wardrobe. I was just pointing out that, in spite of lighting like that, they still shoot log or raw (which is later interpreted as log in the grading tool). Also, they're going to be shooting 10-bit, 12-bit, and 16-bit more often than not, so they aren't worried about losing code words per stop, which takes away the main argument for using a standard profile.
    I didn't say you did. But it isn't nonsensical. Canon raw has C-Log applied to it before being sent over SDI to external recorders to be saved as RMF. ARRI, BMD, and CineForm all write their raw formats with log as part of the specification. While the log they use isn't the Cineon type, it's log nonetheless.
    But you can grade in a log color space. Hence the DaVinci Resolve Color Managed timeline and ACEScc/ACEScct. Whenever you grade on top of log image data, your working/"timeline" color space is log.
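    For a concrete sense of what a log working space looks like, here is the ACEScc encoding as I recall it from the ACES documentation; treat the constants as an assumption and verify them against the official S-2014-003 spec. Grades are then applied on top of these log-encoded values.

        import math

        def lin_to_acescc(lin):
            """AP1 scene-linear -> ACEScc log, per my recollection of the published formula.
            Check the constants against the official ACES spec before relying on them."""
            if lin <= 0.0:
                return (math.log2(2.0 ** -16) + 9.72) / 17.52
            if lin < 2.0 ** -15:
                return (math.log2(2.0 ** -16 + lin * 0.5) + 9.72) / 17.52
            return (math.log2(lin) + 9.72) / 17.52

        # 18% gray and a highlight 4 stops above it, as they sit in the log working space:
        for lin in (0.18, 0.18 * 2 ** 4):
            print(f"linear {lin:6.3f} -> ACEScc {lin_to_acescc(lin):.4f}")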
    This is where we get out of the realm of objectivity. Because there are technical trade-offs on both sides, it's up to whoever is shooting as to which is preferable. There is no clearly defined technical or mathematical winner here (which is one of the reasons I'm happy I don't have to deal with 8-bit, H.264-compressed data anymore), and I'm not saying there is one option that checks all the boxes.
    When I say "retain detail through compression," I'm talking about codec compression. As I stated earlier, a standard profile combined with heavy compression will reduce texture in the low mids and lows. It reduces flexibility and the overall naturalness of the image.
    That is an issue of grading, not the source material. When I grade log material, I have no issue getting thick colors from it.
    Again, log doesn't add noise; it just increases its apparent visibility. Once the image is graded, the noise will look similar to that of the standard profile, but the texture of the shadows will look more natural, especially in motion.
    But you didn't grade to match, which means the higher contrast and extra sharpness of the standard profile will have increased the perceived detail of the aliasing and text. And in spite of that, we can see that the aliasing you point out in the standard profile is also present in the log image! When graded down to match, you'll get similar, if not identical, sharpness from the log image; the difference between the two is flexibility. I took the log frame you posted and graded it closer to what your standard profile looked like, without the boost in saturation, and you can see the apparent sharpness is similar and the noise isn't as heavy as you were making it out to be.
     

  3. Like
    iaremrsir got a reaction from EthanAlexander in Are S-LOGS More Destructive Than They're Worth?   
    I directly addressed the 8-bit, H.264-encoded log footage. I already said detail in the lower end is more likely to be destroyed by heavy compression, and that includes noise. They are getting a wider range of color intensities, but less color saturation, when shooting a wide-gamut color space.
    No, the effect of compression is not the same in both cases because that's not how compression works. If the image isn't identical, it isn't going to compress the same.
    I said that compression smooths noise and detail in the shadows. A log-encoded image is less likely to show compression artifacts and strange blockiness, and will look more natural and retain the structure of the noise. The noise is going to be the same before compression, regardless of color space; only after a destructive process like compression do things like noise change.
    There's no good or bad guy in this. It's just debate. The nature of the beast so to speak. I enjoy the mental exercise.
    Here are some color-managed samples from Dave Newman of CineForm from a decade ago. They show the effects of compression on data in the shadows. And before you say standard profiles aren't linear, I know they aren't. But they behave fairly similarly once you get 3 to 4 stops below middle gray, where you're down to single-digit code values per stop and below 16/255. This is an extreme example, roughly equivalent to 10-bit 4:2:2 50 Mbps H.264, except these samples are wavelet compressed, which generally handles noise a bit better than DCT-based codecs.
    Uncompressed Linear:

    Compressed Linear CF Low and Linear J2K:


    Cineon CF Low and Cineon J2K:

     
    And some reading on the benefits of log curves compared to power curves.
    https://cineform.blogspot.com/2012/10/protune.html
    https://cineform.blogspot.com/2007/09/10-bit-log-vs-12-bit-linear.html
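    To put numbers on the "single digits per stop, below 16/255" point: codec behavior is hard to show in a few lines, but plain 8-bit quantization already shows how little shadow information a standard-style curve leaves for an encoder to preserve. Both curves below are purely illustrative (a textbook Rec.709 OETF and a made-up 10-stop log), not any camera's real profile.

        import math

        def rec709_oetf(x):
            return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

        def toy_log(x, stops=10.0, black=2 ** -10):
            return min(math.log2(max(x, black) / black) / stops, 1.0)

        # A dark gradient spanning -6 to -4 stops relative to 18% gray.
        shadow = [0.18 * 2 ** (-6 + 2 * i / 999) for i in range(1000)]

        for name, curve in [("Rec.709 gamma", rec709_oetf), ("toy log", toy_log)]:
            codes = {round(curve(v) * 255) for v in shadow}
            print(f"{name:13s}: {len(codes)} distinct 8-bit codes over the -6..-4 stop range")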
  4. Like
    iaremrsir reacted to EthanAlexander in Are S-LOGS More Destructive Than They're Worth?   
    The misunderstanding is from a misalignment of ultimate goals: I think you're just going about this as "how do I get the most saturated, punchy look?" in which case you can't really beat shooting a standard profile. But to my eyes, the last one of your examples looks like shit, and my goal in filming is rarely to make people's eyes bleed with color like that.
    I'm being hyperbolic here, but really it's a matter of goals, and that's why people are disagreeing with you, @maxotics. Feature films don't look like Rec.709. They just don't. A lot of this has to do with the fact that movies are NOT supposed to look like real life. There's an art to how the colors of the set, wardrobe, lighting, and color grade create feelings, and S-Log and wider color gamuts allow for this creativity, whereas shooting a standard profile will make it look like a live TV production.
  5. Like
    iaremrsir got a reaction from hyalinejim in Are S-LOGS More Destructive Than They're Worth?   
    Okay, I've skimmed through some of the comments here and hopefully I can add something helpful and clear a few things up for everyone.
    S-Log, Alexa Log C, Canon C-Log, REDlogFilm, Bolex Log, BMD Film, V-Log: all the Cineon-type log specifications on modern cinema cameras were designed for exactly that purpose. The main goals of these color spaces (consisting mainly of a color gamut and an optoelectronic transfer function, and sometimes a little more) were the following:
    • Retain the most detail possible through digitization, bit reduction, and/or compression.
    • Integrate/mix footage with motion picture film (Cineon) and other cameras.
    • Consistency and mathematical accuracy, which translate to less guesswork.
    I think there's some misconception that these log specifications were designed to be looks. Yes, they have a look designed into them, but that is not their main function. By retaining the most detail possible you're able to reproduce scene-referred data more accurately, which means you can color with everything the camera was capable of seeing/capturing. So, while there is a bit of 'lookery' happening with these color specifications, their main purpose is to retain data.
    No. All else being equal, any noise in an image recorded in a log color space will be present in that image when viewed in another color space. These color transforms can't add or subtract noise from an image; they only change its appearance, just like every other part of the signal.
    Not sure what you mean by dated, but as long as you're viewing wide-gamut or HDR data on a display with a smaller gamut or a more restrictive transfer function, it will look desaturated and low-contrast.
    Log specifications are not inherently destructive. Bit reduction and compression, however, are destructive (in most cases).
    The data is already present in the image. A LUT is literally just a quantized representation of a color transform (a continuous function, like our log specifications). So all it does is change the appearance of the data that's already there.
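    As a rough sketch of that point (using a made-up continuous curve, not any real camera LUT): sample the continuous transform at a handful of points to build a 1D LUT, and applying the LUT is just interpolation between those samples.

        import numpy as np

        # Hypothetical continuous transform standing in for a log-to-display curve.
        def continuous_transform(x):
            return np.clip(x, 0.0, 1.0) ** 2.2

        # A 1D LUT is just that function sampled at N points...
        N = 33
        grid = np.linspace(0.0, 1.0, N)
        lut = continuous_transform(grid)

        # ...and "applying the LUT" is interpolating between the samples.
        def apply_lut(x):
            return np.interp(x, grid, lut)

        x = np.random.default_rng(0).random(100_000)
        max_err = np.abs(apply_lut(x) - continuous_transform(x)).max()
        print(f"33-point LUT vs. continuous transform, max error: {max_err:.6f}")

    The LUT never invents data; it only remaps what's already in the image, with a small sampling/interpolation error that shrinks as the point count grows.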
    If further explanation is needed, I'd be happy to oblige.
  6. Like
    iaremrsir got a reaction from EthanAlexander in Are S-LOGS More Destructive Than They're Worth?   
    No. Small gamut does not equal more data.
    Okay, this isn't exactly the best representation, but it's the line of thought that made it click for me. Disclaimer: it is literally a childish example, so I apologize in advance, haha.
    Color space is a sandbox and data is your sand.
    Say we have a sandbox of arbitrary size, which is our current point of reference. We fill it with sand until just one grain spills over the boundary of the sandbox. This represents an image with colorfulness at its highest level and a linear lightness that maps to 1.0, or 255, or 1023, etc.
    Now let's pour in more sand so that it spills overwhelmingly over the sides. We can't play with that sand, because it's outside our sandbox.
    Next we build a new sandbox encompassing both the spill and the old sandbox, far enough out that there is a good amount of space between the boundary and our sand. Our point of reference is still the smaller sandbox, so even though it's the same amount of sand in and around the small sandbox, occupying the same space, it looks like a lot less sand because of the size of the new sandbox (sort of the same idea as using smaller plates when eating: the same amount of food will look like 'more' on the small plate). This is the same as having a wide-gamut or HDR color space. Since most of us have a small point of reference, a relatively small amount of data will look normal, or well saturated and contrasty. And what we would consider a relatively or exceedingly large amount of data in our small color space will look desaturated and low-contrast in a wide-gamut or HDR color space, because the data is 'small' relative to the size of the larger space.
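    Putting toy numbers on the sandbox idea (nothing here is a real gamut or HDR conversion, just the same values measured against two container sizes):

        # The same "sand" (scene values), measured against two "sandbox" sizes.
        scene_values = [0.05, 0.18, 0.45, 0.90]   # some scene-linear levels

        small_box = 1.0   # SDR-ish container: 1.0 is the most it can hold
        big_box = 8.0     # wide/HDR-ish container that can hold up to 8.0

        for v in scene_values:
            print(f"value {v:4.2f} fills {v / small_box:5.2f} of the small box, "
                  f"but only {v / big_box:5.2f} of the big one")

        # Shown on a display that assumes the small box, the big-box numbers all sit
        # near the bottom of its range: same data, but it reads as flat and desaturated.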
    Hope this makes sense.
    Cheers
  7. Like
    iaremrsir got a reaction from Kurtisso in NO!!! Digital Bolex has stopped making cameras!   
    That was actually one of the things Olan mentioned in his post on the user group. He decided to go for ISO 400 instead of 200 because he didn't want the clean look for this one. While it is possible to get the clean look, I think a lot of people like the mojo that ISO 400-800 brings.
    Now, on a D16 mk II using the KAE-02150 sensor, a clean image would be extremely easy to get, and you wouldn't be sacrificing much high-end range at all.
  8. Like
    iaremrsir got a reaction from Kurtisso in NO!!! Digital Bolex has stopped making cameras!   
    Thought I would show some new footage that was recently posted in our user group by Olan Collardy. Processed in Resolve with Bolex Log.
    User demand (lots) and time. And of course, no guarantees of anything until it comes from the head honchos.
  9. Like
    iaremrsir reacted to kaylee in NO!!! Digital Bolex has stopped making cameras!   
    I love this footage... there's another thread right now about what "filmic" means... hard to find a digital camera that beats the Bolex on filmic, IMO.
  10. Like
    iaremrsir got a reaction from Andrew Reid in NO!!! Digital Bolex has stopped making cameras!   
    I developed the color science and did all my tests with the camera on Windows. There's the free HFSExplorer tool, or you can buy HFS drivers for Windows for 20 USD.
    If we were to make other cameras, I think a D35 and a D16 mk II based on EMCCDs would be my preference.
    The D16 mk II would hopefully be based on the KAE-02150: 30 fps and 14+ EV of DR without any highlight reconstruction. That's without any noise reduction in the sensor circuitry or in software, so it'd keep the same mojo as the D16.
    The D35 is more of a dream camera for me; nothing like it currently exists. I actually made a list a while ago and posted it on our user group, and Joe was on board with it had the funding been there to pursue it. It'd basically be an EMCCD version of our sensor, but with double the dimensions and resolution horizontally, and the vertical dimension increased enough to reach 16:9.