slonick81

Members
  • Posts: 40
  • Joined
  • Last visited

Reputation Activity

  1. Like
    slonick81 reacted to newfoundmass in CG/VFX for free   
    In the new year I might be able to send some work your way, @slonick81. I'll reach out soon! 
  2. Like
    slonick81 got a reaction from newfoundmass in CG/VFX for free   
    Hmmm... Maybe it's an option, but I feel uneasy about it in many ways. Anyway, thank you for the kind words - it really helps and matters these days. But let's not test Andrew's banhammer itch in political threads - I'd be happy to chat on the topic anywhere else and keep this thread about graphics and effects.
  3. Like
    slonick81 got a reaction from SMGJohn in CG/VFX for free   
    Yes, this is true, and it's obvious from my profile on Upwork. I'm taking the current situation into account, but I'm trying to find someone who doesn't care.
  4. Like
    slonick81 got a reaction from newfoundmass in CG/VFX for free   
  5. Like
    slonick81 got a reaction from Davide DB in CG/VFX for free   
  6. Like
    slonick81 got a reaction from SMGJohn in CG/VFX for free   
    Can't get enough work these days, so my skills may be useful here. I can do green screen keying, cleanups/object replacement, tracking, basic 3D modelling and related tasks. Nothing fancy - I'm not a world-renowned CG artist, but I have some skills and experience. Here is my eclectic reel - https://disk.yandex.ru/i/V9USqVPwl0DDOg (it's accessible in Georgia, so I assume it's accessible in other countries, too).
    Rules are:
    1. You give me one shot/clip at a time, and I do it for free.
    2. I get the right to publish it here and on my YouTube channel, and to use the resulting clip and the creation process as a demonstration of skills and self-promotion to other people and organizations.
    3. You can use the result of my work as you wish, as long as it doesn't contradict term no. 2. Or at least inform me if the situation changes.
    Of course, there may be clips I won't be able to do, and I'll do my best to point that out right at the start. Also, since I'm still trying to earn a living, there may be a delay before I can do your clip (and I'll tell you about it). So if you have a complicated case in a burning ASAP situation, it may be better to contact an established CG facility on a commercial basis.
    In general, I'm trying to build up a portfolio and maybe get in touch with new customers, but at the moment I just don't want to get rusty. Ask questions if you have any. Chat on the topic (CG/VFX) is also welcome in this thread.
  7. Thanks
    slonick81 got a reaction from Davide DB in CG/VFX for free   
  8. Thanks
    slonick81 got a reaction from Juank in RAW Video on a Smartphone   
    Some DR tests on rare sunny winter days - raw images vs. native camera app video recording. No detail enhancements for raw (no NR, sharpening, dehazing and such), just shadows/highlights and gamma curve adjustments in ACR, to get a better idea of what's going on. All shots are at native resolution, 1:1, no scaling.
    I'm still not confident about the exposure settings because of the high contrast of the live feed. I tend to check the overall image with AE on and then dial in manual settings for a nearly identical-looking picture. The native video is shot in auto mode. In terms of contrast, it represents well the image you see in live view when shooting raw.
    So, as you can see, there is no big difference in DR (of the kind you might expect when comparing raw and "baked-in Rec.709" footage from a cinema camera). The "bridge" scene got a bit overexposed in video compared to raw; I have a feeling it would have been possible to bring back the overexposed building in the back without killing the shadows with a minor exposure shift. What makes the difference for me is the highlight rolloff - it is natural and color-consistent; just look at the snow in front in the "hotel" shot and the sun spots at the top of the "bridge" shot. Shadows are more manageable for me, too. Yes, there is a lot of chroma noise, but it's easily controlled with native ACR/Resolve tools; luma noise is very fine and even, so you can balance between shadow detail and noise suppression to your taste.
    But the most prominent difference is due to zero sharpening. Just check small details like branches, especially against a contrasting background. It's not DR-related, but I can't avoid mentioning it.
    Is raw a necessity for such a result? Dunno. Maybe no post-processing, a log profile and 10-bit 400 Mbps h26x instead of 8-bit 40 Mbps would give comparable results. If you have a phone capable of such recording modes, I would love to see the comparison. 
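    The grade described above - a shadow lift plus a gamma curve on the raw data - can be sketched in a few lines of Python. This is a toy tonal function on normalized [0, 1] values, purely illustrative and not ACR's actual processing math:

```python
def tone_adjust(v, gamma=2.2, shadow_lift=0.05):
    """Toy tonal adjustment for a normalized [0, 1] pixel value:
    a gentle shadow lift followed by a brightening gamma curve.
    Illustrative only - not Adobe Camera Raw's actual processing."""
    v = shadow_lift + (1.0 - shadow_lift) * v   # raise the black point slightly
    return min(1.0, v ** (1.0 / gamma))         # gamma curve opens up mid-tones

# Mid-grey (18%) is pushed up noticeably, while near-white barely moves -
# a miniature version of the gentle highlight rolloff discussed above.
mid, high = tone_adjust(0.18), tone_adjust(0.95)
```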


  9. Thanks
    slonick81 got a reaction from PannySVHS in RAW Video on a Smartphone   


  10. Like
    slonick81 got a reaction from billdoubleu in RAW Video on a Smartphone   
    Wow
    Tested on my Poco X3. TL;DR: barely functional but promising.
    Settings: anything aside from RAW10<->RAW16 shows little effect on performance. "Raw video memory usage" seems to benefit from being maxed out, but that's more a feeling than a fact. You'll have to select "raw video" and the exposure settings every time you switch away from the app's main screen (like switching between apps or entering settings). OIS can be activated but doesn't work in my case, so the image is shaky. Focus control is distributed strangely: 90% of the slider is mapped to the nearest 1 m, and the rest of the focusing range is crammed into the tiny space at the right end. Nice for macro, I guess, not for anything else. AWB should be locked, otherwise it will affect the image.
    Raw: I was able to get mostly reliable recording in all conditions at max crop (40% H/V, 2772x2082) at 24/25 fps for durations of up to about 1-1.5 min. I wasn't testing longer takes because of the huge file sizes and some quirks. I was able to record 4K-ish resolutions (4112x2082) outdoors (it's about 0°C now), but it drops frames starting from 15-20 s indoors. Overheating? There are no framing guides for the crop area - use your imagination while looking at the full-sensor image feed. Raw images are initially written in chunks of 900 frames as zip archives somewhere in the system folders (I haven't found the location yet). You have to manually unzip and transfer them as DNGs in "manage videos" by tapping the "queue" button. You can set the destination folder once, at the first conversion; I was not able to find this setting anywhere else, and the only way to change the directory was to reinstall the app. And yeah, no sound at all yet.
    Processing: transfer times are huge on my phone, mostly because of USB2 transfer speed. Files are bulky - I filled 60 GB of free space with just a dozen clips. I'm considering buying a TF card to unzip the DNGs onto and swapping it out to a card reader (if the project goes well in this direction). AE 2021 and Resolve 17 work well with the DNG sequences; Premiere 2021 refused to import them. I had to manually rename each DNG sequence according to its unique folder name, because they all share the same base name - frame-#####.dng. There is a glitchy "transitional zone" between two 900-frame chunks where frames are randomly tossed into either chunk. At first I thought these were just frame dropouts, but later I was able to reconstruct the frame order manually in post - see image. Image quality is much better, especially at moderately high ISOs (800-1600). The compressed image degrades in this range - details get mushy, colors muddy. Raw, on the contrary, has a very manageable grain structure and highlight rolloff, and lacks any sharpening. DR is better in the sense that you get more usable DR, but it's far from cinema-camera wide - you should be careful with highlights, and the absence of any exposure assist tools doesn't help at all.
    Is it worth trying? Yes, I think so. It's not the way you'd shoot something for fast turnaround or long duration; it's more an artistic experiment. The project is at a very early stage, but it's all about polishing the interface and performance, adding useful features and improving general stability - the core idea is functional. I'm really happy to have stumbled upon this project - I haven't felt such joy and excitement since shooting photos with a Nokia 808. Basically, it's an 8mm raw camera you have no excuses not to carry with you all the time.
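    The manual renaming of the colliding frame-#####.dng chunks can be scripted. A minimal sketch, assuming one folder per 900-frame chunk; the clip-##### base name and folder layout are my illustration, not the app's behavior (and the glitchy boundary frames would still need manual reordering afterwards):

```python
import os
import shutil

def rename_chunks(chunk_dirs, out_dir):
    """Merge per-chunk DNG sequences (each starting over at frame-00000.dng)
    into one continuously numbered sequence under a new base name,
    so the file names no longer collide when imported as one clip."""
    os.makedirs(out_dir, exist_ok=True)
    n = 0
    for d in chunk_dirs:                      # chunks in shooting order
        for name in sorted(os.listdir(d)):
            if not name.lower().endswith(".dng"):
                continue
            shutil.copy(os.path.join(d, name),
                        os.path.join(out_dir, f"clip-{n:05d}.dng"))
            n += 1
    return n                                  # total frames written
```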

  11. Thanks
    slonick81 got a reaction from sanveer in RAW Video on a Smartphone   

  12. Thanks
    slonick81 got a reaction from Juank in RAW Video on a Smartphone   

  13. Like
    slonick81 got a reaction from MaverickTRD in Camera resolutions by cinematographer Steve Yeldin   
    So Natron we go!
    As you can see, the latest Windows version with default settings shows no subsampling in the enlarged view. The only thing we need to care about is the filter type in the Reformat node. It pads the original small 558x301 image out to FHD with borders around it, but centering introduces a 0.5-pixel vertical shift due to the uneven Y dimension of the original image (301 px), so the "Impulse" filter type is set for nearest-neighbour interpolation. If you uncheck "Center", it will place our chart in the bottom-left corner and remove any influence of the "Filter" setting.
    The funniest thing is that even a non-round resize in the viewer won't introduce any soft subsampling with these settings. You can notice some pixel-line doubling, but no soft transitions.
    And yes, I converted the chart to .bmp because Natron couldn't read .gif.
    It's the only thing you're perceiving. Unless you're a Neuralink test volunteer, maybe.
    Well, that's how any kind of compositing is done. The CG artist switches back and forth between "fit in view" and whatever magnification the job needs, using 1:1 scale to judge real detail. The working screen resolution can be anything - the more the better, of course, but for the sake of working space for tools, not resolution itself. And this is exactly what Yedlin is doing: sitting in his compositing suite of choice (Nuke), showing his nodes and settings, zooming in and out but mostly staying at 1:1, grabbing at a resolution he is comfortable with (hence the 1920x1280 file - it's a window screen recording from a larger display).
    In general: I mostly posted to show one simple thing - you can evaluate footage in 1:1 mode in a compositor's viewer, and round-multiple scaling doesn't introduce any false detail. I took that as a given truth, but you questioned it, and I decided to check. So for AE, PS, ffmpeg, Natron and most likely Nuke it's true (with some attention to settings). Concerning Yedlin's research - it was made in a very natural way for me, as if I were evaluating the footage myself, and it sums up well the general impression I've got working on video/movie productions: resolution is not a decisive factor nowadays. Like, for the last 5 years I need one hand's fingers to count the projects where the director/DoP/producer was intentionally seeking more resolution. You see it as wrong or flawed - fine, I don't feel any necessity to change your mind; it looks like it's more kye's battle to fight.
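    The half-pixel shift from centering is simple arithmetic: centering a 301 px tall image on a 1080 px canvas leaves an odd number of border pixels to split between top and bottom. A one-liner makes it concrete:

```python
def centering_offset(canvas, image):
    """Offset that centers `image` pixels on a `canvas` dimension.
    A fractional result means every pixel lands between sample
    positions, forcing the reformat filter to interpolate."""
    return (canvas - image) / 2

# 558x301 chart centered on a 1920x1080 canvas:
x_off = centering_offset(1920, 558)   # 681.0 - integer, no resampling
y_off = centering_offset(1080, 301)   # 389.5 - half-pixel vertical shift
```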


  14. Like
    slonick81 got a reaction from tupp in Camera resolutions by cinematographer Steve Yeldin   


  15. Like
    slonick81 got a reaction from John Matthews in Camera resolutions by cinematographer Steve Yeldin   


  16. Like
    slonick81 got a reaction from kye in Camera resolutions by cinematographer Steve Yeldin   


  17. Like
    slonick81 got a reaction from tupp in Camera resolutions by cinematographer Steve Yeldin   
    Sure. This exact image has heavy compression artifacts, and I was unable to find the original chart, but I got the idea, recreated those pixel-wide colored "E"s and did the same upscale-view-grab routine. And, well, it does preserve sharp pixel edges - no subsampling.
    I don't have access to Nuke right now, I'm not going to mess with warez in the middle of a work week for the sake of an internet dispute, and I'm not 100% sure about the details, but the last time I was compositing in Nuke it had no problems with 1:1 view, especially considering I was making titles and credits as well. And what Yedlin is doing - comparing at 100%, 1:1 - looks right.
    Yedlin is not questioning the capability of a given codec/format to store a given number of resolution lines. He is discussing _perceived_ resolution. That means the image should be a) projected and b) well, perceived. So he chooses common ground - 4K projection - crops out a 1:1 portion of it and cycles through different cameras. And his idea sounds valid: past a certain point, digital resolution is less important than the other factors that exist before the resolution is created (optical resolving power, DoF/motion blur, AA filter) and after it (rolling shutter, processing, sharpening/NR, compression). He doesn't claim there is zero difference, and he doesn't touch special technical cases like VFX or intentional heavy reframing in post, where additional resolution may be beneficial.
    The whole idea of his work: past a certain point of technical resolution, the perceived resolution of real-life images does not suffer from upsampling and does not benefit from downscaling that much. For example, in the second image I added a numerically subtle transform to the chart in AE before grabbing the screen: +5% scale, 1° rotation, slight skew - essentially what you'd get from nearly any stabilization plugin, and it's a mess in terms of technical resolution. But we do this all the time without any dramatic degradation of real footage.
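    To see why such a transform is a mess in terms of technical resolution, it helps to compute how far a corner pixel of an FHD frame actually travels under just the scale and rotation parts (skew omitted here for simplicity):

```python
import math

def corner_displacement(w, h, scale=1.05, angle_deg=1.0):
    """Distance a frame-corner pixel moves under a 'numerically subtle'
    transform: +5% scale and a 1-degree rotation about the frame center.
    The skew from the forum example is left out for simplicity."""
    cx, cy = w / 2, h / 2                # corner vector from frame center
    a = math.radians(angle_deg)
    nx = scale * (cx * math.cos(a) - cy * math.sin(a))   # rotate, then scale
    ny = scale * (cx * math.sin(a) + cy * math.cos(a))
    return math.hypot(nx - cx, ny - cy)

d = corner_displacement(1920, 1080)
```

    The corner of a 1920x1080 frame ends up roughly 58 px from where it started, so virtually every output pixel is sampled at a fractional source position and must be interpolated - yet, as the post argues, perceived sharpness survives.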


  18. Like
    slonick81 got a reaction from John Matthews in Camera resolutions by cinematographer Steve Yeldin   


  19. Like
    slonick81 got a reaction from John Matthews in Camera resolutions by cinematographer Steve Yeldin   
    The attached image shows a 1px b/w grid, generated in AE in an FHD sequence, exported to ProRes, upscaled to UHD with ffmpeg (the "-vf scale=3840:2160:flags=neighbor" option), imported back into AE, overlaid over the original in the same composition, magnified to 200% in the viewer, screengrabbed, and enlarged another 2x in PS with proper scaling settings. No subsampling present, as you can see. So it's entirely possible to upscale an image, or show it in 1:1 view, without modifying the original pixels - just don't use fractional enlargement ratios or complex scaling filters. Not sure about Natron though - never used it. Just don't forget to "open image in new tab" and view at original scale.
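    The no-subsampling claim for integer nearest-neighbor scaling is easy to verify outside any NLE. Here is a toy Python sketch of the same 2x nearest-neighbor upscale (my own illustration, not ffmpeg's implementation): every output pixel is an exact copy of a source pixel, so no in-between values can appear.

```python
def nearest_neighbor_upscale(img, factor):
    """Integer-factor nearest-neighbor upscale: each source pixel is
    replicated into a factor x factor block, values untouched."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

# 1px black/white grid, like the AE test chart
grid = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
up = nearest_neighbor_upscale(grid, 2)  # FHD -> UHD is this same 2x case

# every output pixel is an exact source value - no intermediate grays
assert all(v in (0, 255) for row in up for v in row)
print(len(up), len(up[0]))  # 16 16
```

    By contrast, any fractional ratio (1.5x, say) forces the scaler either to drop/duplicate rows unevenly or to interpolate - and interpolation is where new in-between pixel values come from.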
    But that's real life - most productions have mixed-resolution footage on input (A/B/drone/action cams) and multiple resolutions on output - UHD/FHD for streaming/TV and DCI-something for DCP, at least. So it's all about scaling and matching the look, and that's the subject of Yedlin's research.
    What's more, even in the rare "resolution preserving" cases where the filming resolution perfectly matches the projection resolution, there are such things as lens aberration/distortion correction, image stabilization, rolling shutter jello removal, and reframing in post. And it usually works well for the reasons covered by Yedlin.
    And sometimes resolution, processing, and scaling play funny tricks out of nothing. On my last project I was doing some simple clean-ups: Red Helium 8K shots, exported to me as DPX sequences. 80% of the processed shots were rejected by the colourist and DoP as "blurry, not fitting the rest of the footage". Long story short, the DPX files had been rendered by a technician at full-res/premium-quality debayer, while the colourist and DoP were grading the 8K at half-res debayer scaled down to a 2K big-screen projection - and that was giving more punch and microcontrast on the large screen than the higher-quality, higher-resolution DPXes with the same grading and projection scaling.

  20. Like
    slonick81 got a reaction from kye in Camera resolutions by cinematographer Steve Yeldin   
    Sure. This exact image has heavy compression artifacts, and I was unable to find the original chart, but I got the idea, recreated those pixel-wide colored "E" glyphs, and ran the same upscale-view-grab pattern. And, well, it does preserve sharp pixel edges - no subsampling.
    I don't have access to Nuke right now - I'm not going to mess with warez in the middle of a work week for the sake of an internet dispute - and I'm not 100% sure about the details, but the last time I was compositing something in Nuke it had no problems with 1:1 view, especially considering I was making titles and credits as well. And what Yedlin is doing - comparing at 100%, 1:1 - looks right.
    Yedlin is not questioning the capability of a given codec/format to store a given number of resolution lines. He is discussing _perceived_ resolution, which means the image should be a) projected and b) well, perceived. So he chooses common ground - a 4K projection - crops out a 1:1 portion of it, and cycles through different cameras. And his idea sounds valid: past a certain point, digital resolution is less important than the other factors that exist before (optical system resolving power, DoF/motion blur, AA filter) and after (rolling shutter, processing, sharpening/NR, compression) the resolution is created. He doesn't claim there is zero difference, and he doesn't touch special technical cases like VFX or intentional heavy reframing in post, where additional resolution may be beneficial.
    The whole idea of his work: past a certain point of technical resolution, the perceived resolution of real-life images does not suffer from upsampling and does not benefit that much from downscaling. For example, on the second image I added a numerically subtle transform to the chart in AE before grabbing the screen: +5% scale, 1° rotation, slight skew - essentially what you will get from nearly any stabilization plugin - and it's a mess in terms of technical resolution. But we do this all the time without any dramatic degradation to real footage.


  21. Like
    slonick81 got a reaction from tupp in Camera resolutions by cinematographer Steve Yeldin   
    The attached image shows a 1px b/w grid, generated in AE in an FHD sequence, exported to ProRes, upscaled to UHD with ffmpeg (the "-vf scale=3840:2160:flags=neighbor" option), imported back into AE, overlaid over the original in the same composition, magnified to 200% in the viewer, screengrabbed, and enlarged another 2x in PS with proper scaling settings. No subsampling present, as you can see. So it's entirely possible to upscale an image, or show it in 1:1 view, without modifying the original pixels - just don't use fractional enlargement ratios or complex scaling filters. Not sure about Natron though - never used it. Just don't forget to "open image in new tab" and view at original scale.
    But that's real life - most productions have mixed-resolution footage on input (A/B/drone/action cams) and multiple resolutions on output - UHD/FHD for streaming/TV and DCI-something for DCP, at least. So it's all about scaling and matching the look, and that's the subject of Yedlin's research.
    What's more, even in the rare "resolution preserving" cases where the filming resolution perfectly matches the projection resolution, there are such things as lens aberration/distortion correction, image stabilization, rolling shutter jello removal, and reframing in post. And it usually works well for the reasons covered by Yedlin.
    And sometimes resolution, processing, and scaling play funny tricks out of nothing. On my last project I was doing some simple clean-ups: Red Helium 8K shots, exported to me as DPX sequences. 80% of the processed shots were rejected by the colourist and DoP as "blurry, not fitting the rest of the footage". Long story short, the DPX files had been rendered by a technician at full-res/premium-quality debayer, while the colourist and DoP were grading the 8K at half-res debayer scaled down to a 2K big-screen projection - and that was giving more punch and microcontrast on the large screen than the higher-quality, higher-resolution DPXes with the same grading and projection scaling.

  22. Like
    slonick81 got a reaction from mirekti in Keep struggling with Davinci scaling   
    Set "Mismatched resolution files" (in "Input scaling") to "Scale full frame with crop". Or you can set it individually for any clip on the timeline: Inspector - Retime and Scaling - Scaling - Fill.
  23. Like
    slonick81 got a reaction from hansel in Should all political discussion be banned on EOSHD?   
    1) After the Russian internet, it's like a Victorian gentlemen's club here. But the problem is present, so better to take some measures.
    2) A total prohibition of any political topics is overkill, if you ask me. You won't choke it to death; it'll still surface in indirect and ugly forms. Besides, Andrew himself has started some discussions on political topics, so I guess it's important to him, too.
    3) So I voted for a subforum. All the hotheads can quarrel there and should be banned from any other threads for trying. It will take more moderation effort, but it's the lesser evil of the three options.
  24. Like
    slonick81 got a reaction from Kubrickian in Blackmagic Pocket Cinema Camera 4K   
    I'm really curious to read this thread in October '18...
  25. Thanks
    slonick81 got a reaction from IronFilm in Apple leaving professional market?   
    Is it about the raw part or the industry standard part?
    Compressing raw is just common practice - Red, Cineform, CinemaDNG, Canon Raw Light - it's rational to compress before debayer and leave debayering to post. It would be strange if Apple ignored this opportunity.
    Standards... Apple introduced an intermediate codec with not only obvious vendor lock-in but platform lock-in as well. And that's fine - it's Apple, well within their rights to follow their business model. But somehow the industry abandoned the idea of an open-standard intermediate codec, ignored the other intermediate codecs, and here we are now: "- We need masters in ProRes LT! - What about DNxHR? - What's that?" OK, "ffmpeg.exe -c:v prores_ks -profile:v 1 ..." and off we go, but some frustration remains.
    And I'm afraid the industry will switch to this raw flavour of ProRes, and it won't have an efficient implementation outside OS X.