Everything posted by dhessel

  1. Thank you for all of that information. I believe it was some of your posts on the Foundry forums that were a starting point for all of this. I understand that S-Gamut offers a wider range of colors, but what is somewhat unclear is what benefit it would have for grading compared to using a different color mode. I have not worked much with files outside the usual sRGB colorspace. Thank you for confirming what I suspected about a 3D LUT not being as accurate as using a matrix; I didn't want to make that claim until I looked into it more. I am not really familiar with all the options for 3D LUTs, but I was pretty sure I had not heard of any that have more than 64 samples per channel. That would result in quite a bit of interpolating compared to a matrix, which has no interpolation and is lossless. The 1D LUT I generated has 1024 samples, so it has plenty of data for an 8-bit source.
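     The interpolation point can be illustrated with a small sketch (assuming Python with numpy). The log-shaped curve below is a generic stand-in, not Sony's published S-Log2 formula; the loss only appears because the transform baked into the coarse LUT is non-linear, which is exactly the case for a combined gamma-plus-gamut 3D LUT, whereas a plain 3x3 matrix applied directly involves no interpolation at all.

     import numpy as np

     def curve(x):
         # generic log-shaped tone curve, used only for illustration
         return np.log1p(64.0 * x) / np.log1p(64.0)

     grid = np.linspace(0.0, 1.0, 64)      # 64 samples, like one axis of a 64-point 3D LUT
     lut = curve(grid)                     # curve baked into the LUT

     x = np.linspace(0.0, 1.0, 4096)       # dense test input
     direct = curve(x)                     # evaluating the formula directly: exact
     via_lut = np.interp(x, grid, lut)     # going through the LUT: piecewise-linear interpolation

     print("max interpolation error:", float(np.abs(direct - via_lut).max()))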
  2. I have considered combining them but haven't yet, since I am new to LUTs and wanted to keep the gamma correction separate from the color at this stage. If this looks worthwhile then that would be the next logical step. I am not sure what benefit S-Gamut may have, but I do feel the other modes like Pro and Cinema are somewhat 'looks', and I would just like to have the option of using S-Gamut. Sony put that combo on their high-end cameras for a reason, I am sure.
  3. I have been frustrated by the fact that there is not really any good way of working with S-Log2/S-Gamut in Adobe software. So far I have not been able to find any way of working with the default PP7 other than look LUTs, like Impulz. The problem is that I don't always want to use a look LUT, and I have found that the LUTs from Impulz don't seem to work that well with footage from the A7s anyway. The colors always seemed off no matter what I tried, and grading without LUTs was even more problematic. So I have worked out a way to convert S-Log2/S-Gamut to Rec709 without any look LUTs. So far the results have been good, but I have not had any time to shoot with our A7s myself, so I have been using stills generously uploaded by Roman Legion on Vimeo.
     So here it goes; it is two effects that must be applied in this order.
     First, I have attached a LUT to correct the S-Log2 gamma curve to Rec709. This is based on a formula I found from Sony for its S-Log2 curve. It took some tweaking, but mainly I just had to deal with using video levels instead of full range. It is a 1D LUT so it only affects luma. Apply this LUT using a Lumetri / Apply Color LUT effect.
     Second is a matrix transformation from S-Gamut to sRGB; we will use it to fix the colors.
     S-Gamut to sRGB:
      1.87785101  -0.79411894  -0.08373153
     -0.1768095    1.35097992  -0.17417008
     -0.02620544  -0.14844233   1.17464781
     Now apply a Channel Mixer effect and enter the following values. The Channel Mixer effect appears to only allow integer values from -200 to 200, but it seems to still work fine. Watch out for the Const fields and leave them at 0.
     Red-Red, Red-Green, Red-Blue:        188  -79   -8
     Green-Red, Green-Green, Green-Blue:  -18  135  -17
     Blue-Red, Blue-Green, Blue-Blue:      -3  -15  117
     That is it. So far this looks really promising, but I would like to hear how it works out for the rest of you. I am adding some corrected stills from Roman Legion as well. These images are ungraded; I only converted to Rec709 and adjusted exposure, as they were underexposed a little. slog2-to-Rec709-dhessel.zip
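     For anyone who wants to sanity-check the numbers, here is a minimal sketch (assuming Python with numpy; the example pixel value is made up) of what the second step does numerically. The Channel Mixer entries are just the matrix coefficients expressed as percentages and rounded to the integers the effect accepts.

     import numpy as np

     # S-Gamut to sRGB matrix from the post
     M = np.array([
         [ 1.87785101, -0.79411894, -0.08373153],
         [-0.1768095,   1.35097992, -0.17417008],
         [-0.02620544, -0.14844233,  1.17464781],
     ])

     rgb = np.array([0.40, 0.35, 0.30])   # example normalized RGB pixel (hypothetical)
     print("matrixed:", M @ rgb)          # exact gamut conversion

     # Channel Mixer values: coefficients as percentages, rounded to integers
     print(np.round(M * 100).astype(int))
     # -> [[188 -79  -8]
     #     [-18 135 -17]
     #     [ -3 -15 117]]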
  4. Maybe I should clarify. Yes, it is possible to downscale to 10-bit 4:4:4. This app and that thread are all about the theory that you could use this downscale as an alternative to recording 10-bit natively and then use that extra depth during color grading. That theory is flawed and doesn't work, since the downscaled 10-bit is no more accurate in terms of the recorded color information than its 8-bit source, and it is not the same as recording 10-bit during capture. Sure, there are benefits in supersampling the image during the downscale, but you don't need a special app for that.
  5. While this may be a useful exercise in learning how to run commands through the Terminal on a Mac (highly recommended, by the way), the whole downscale-to-10-bit theory is interesting but fundamentally flawed. Simply put, it doesn't work, and there is really nothing to gain by using this app over simply bringing your 4K content into a 1080p timeline and downscaling to fit. I believe it will still be 4:4:4 even when done in an NLE, though I have made no attempt to verify that. Years of research have gone into scaling algorithms to get the best possible results, and I suspect this app would, if anything, result in a lower quality downscale. Has anyone actually compared the output of this app to downscaling in an NLE to see if there is any improvement at all?
  6. dhessel

    NEX lens on A7s

    Not really for video. For video in APS-C mode the camera downsamples 2.8K to HD, instead of the whole 4K sensor to HD like it does in FF mode. The results are still very good, and there is not much of a visual loss in quality, if any at all. The video coming out of the A7s is very good. There are some complaints about the colors, especially if using S-Log2 and S-Gamut. There have also been reports of highlight aliasing. I feel the color issues are mainly due to working with S-Log2 and that they will get addressed in time.
  7. There is a follow focus gear made specifically for the 54 on eBay. No personal experience with it, though. http://www.ebay.com/itm/Seamless-Follow-Focus-Gear-for-Iscorama-54-Lens-/251486599934?pt=LH_DefaultDomain_0&hash=item3a8dc4fafe
  8. No, only the 50mm and 75mm can be swapped; the anamorphic front is different on the 35mm compared to the others.
  9. Yes he did. I don't know the specifics, but I have been in contact with him for months about this. It is old anamorphic lenses with new custom-made optics that allow for single focus. I believe it is somewhat similar to an Isco, where you have the anamorphic block fixed at infinity focus and then a variable power diopter in front; you set your taking lens to infinity and focus with the adapter. The optics are different, but the concept is probably similar. This one works the same way, except I believe this adapter is not fixed at infinity focus, so you can get closer focus. Set the focus on the lens and anamorphic block to infinity and focus with the new focusing system, and you can focus from infinity to, say, 2 meters. Set the anamorphic block and taking lens to 2 meters and now you can focus from 2 meters to 1 meter. This is somewhat speculation on my part from my conversations about this lens in the past; the website so far has not been super clear to me on exactly how it works, but I am sure John will pop in here soon and clear things up. Overall it looks like a very high quality and very flexible option.
  10. Yep, I have used this feature with my Lomo square front to give me a 50mm to 125mm zoom. That is a 2.5x zoom and still gives a 1:1 pixel density with the 2x stretch and downscale of 75% to a 2.4 ratio on an HD timeline. Also I can shoot FF with it; sure it will vignette, but all vignetting is removed once cropped to 2.4. The A7s is the perfect camera for a Lomo anamorphic prime.
  11. Not necessarily true; the camera has a digital zoom feature that lets you go to 4x zoom in FF mode, which is a crop factor of 4, I believe. There will be a loss of quality for sure, though.
  12. Wow, this has to be one of the most divisive test video threads I have come across in a long time. It seems to me that a lot of people are missing the point of this test and the A7s in general. This test was to push the camera to its limit to see what it can do. I don't think the point was to have an artifact-free, flawless image, but to show how well this camera handles this very extreme situation. There is no other camera out there that could produce these results, even if you consider them to still be unusable. The strength of the A7s is not doing shoots like this, or doing night-for-day, or any other wildly extreme tests. Its strength is that it will allow me to get very clean footage using available light indoors without having to be wide open all the time. It will offer more flexibility and options in situations where controlling the light is not practical or possible. Also, the low light ability of this camera is just one of the features that make it appealing. It is a very practical, portable, high speed, log shooting, feature rich camera that is capable of producing a very good image with a lot of dynamic range.
  13. I generally change pixel aspect ratio when working with footage and have my final output be square pixels. Not everything handles pixel aspect ratios properly, so I find it is best to get rid of them for anything that I consider a final product. I do find that using pixel aspect ratios during color correction allows me to keep my full resolution and minimize memory, since a 2.0 pixel aspect changes the size of the image and unsqueezes it but doesn't change the number of pixels, unlike scaling the footage by 2. For images in Photoshop I would scale the image rather than use pixel aspect; I imagine that image viewers would be much more likely not to support pixel aspect ratios.
     For AE or Premiere I always have the sequence/comp the exact same height as my raw footage and use pixel aspect ratios to preview the unsqueezed footage. There are PARs for 1.33, 1.5, and 2.0 in both apps. I then set the width of the comp/sequence to have a final aspect of 2.39 or 2.67 (usually 1440 x 1080) and re-frame my footage during edit. A 2x anamorphic shot 16:9 is way too wide, and I like to get it back to a standard aspect in the beginning and frame my footage horizontally as I go, rather than keeping it ultra wide and re-framing at the end as a whole, since I like to have the option to individually frame each shot.
     Once all editing and CC is done I export a master in a high quality codec, 10-bit Blackmagic at this point, since I am on Windows and DNxHD doesn't allow resolutions like 1440 x 1080. From there I use my master to make my final square pixel version. As a workaround for a 2x squeeze with a 1440 x 1080 comp and DNxHD, I have found I can set my footage to a 1.33 aspect ratio and render out 1920 x 1080, as 1440 * 1.333 = 1920. Then, when working with the DNxHD files, I assign a 1.5 pixel aspect ratio to them. That gives a 2x final stretch, since 1.5 * 1.333 = 2, and still keeps some of the benefits of using pixel aspect instead of scaling.
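     As a quick arithmetic check of the workaround above (a plain Python sketch, using only the numbers already in the post):

     stored_w, stored_h = 1440, 1080            # comp/sequence size, same height as the source
     ae_par, dnxhd_par = 4/3, 1.5               # PAR previewed in AE, PAR assigned to the DNxHD render

     rendered_w = round(stored_w * ae_par)      # 1440 * 1.333 -> 1920, the DNxHD frame width
     display_w = round(rendered_w * dnxhd_par)  # 1920 * 1.5   -> 2880 displayed width

     print(rendered_w, display_w)               # 1920 2880
     print(display_w / stored_w)                # 2.0 -> the full 2x anamorphic desqueeze
     print(display_w / stored_h)                # ~2.67 final display aspect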
  14. dhessel

    Grading

    Very interesting, thanks for posting.
  15. As these usernames seemed familiar to me, and I felt I had seen useful and helpful posts from these guys in the past, I decided to see what kind of trouble they were causing. Of Baltic's 8 posts I would consider 7 of them to be helpful, normal discussions, this being the only exception. And of Nog's 13 posts I didn't find any that were looking to cause trouble; they were all offering opinions, help, or information. This was by far his most troublesome post. Baltic's fanboy comment was out of line, and I don't agree with telling someone what kind of videos they should make, but is that single post ban worthy? As for Nog, unless you removed all of his troublesome posts so I was not able to find them, he absolutely didn't deserve getting banned for that statement. Do you really feel that is trouble making? I am a huge fan of your site and the work that you do; I check in here almost daily since it is one of the best sources for information on anamorphics, which is a large interest of mine. But I have to say that banning these individuals, especially Nog, in this way is more damaging for the community than anything they did. Am I missing something here?
  16. Yes, that could work; I try to avoid proxies if I can. I am not sure about the latest version, but as far as I know Premiere doesn't have a good way of switching between online and offline editing. In the past I have rendered out high quality 'fat' versions to a Blackmagic codec and then edit-friendly proxies to another format, each to their own master folder. I made sure to keep the file names and file types, as well as the folder structure inside each master folder, identical for both versions. Premiere would read files from the proxy folder for editing; when I wanted to render the final, I would rename the folders so the 'fat' version root folder had the same name as the old proxy folder. Then when you open Premiere it will automatically read the 'fat' versions. A total hack, but it worked well enough for small projects.
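     The folder-swap hack can be scripted so it is harder to get wrong. This is just a sketch in plain Python with hypothetical folder names; it assumes the proxy and 'fat' folders contain identically named files in an identical structure, as described above.

     import os

     PROXY_DIR = "footage"        # the folder Premiere currently links to (proxies)
     FAT_DIR = "footage_fat"      # full-quality renders, identical names/structure inside
     PARKED = "footage_proxy"     # where the proxies get parked during the swap

     def go_online():
         """Swap the proxy folder out and the 'fat' folder in under the same name."""
         os.rename(PROXY_DIR, PARKED)   # park the proxies
         os.rename(FAT_DIR, PROXY_DIR)  # fat files now live at the path Premiere expects

     def go_offline():
         """Reverse the swap to get back to the lightweight edit files."""
         os.rename(PROXY_DIR, FAT_DIR)
         os.rename(PARKED, PROXY_DIR)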
  17. You can go out to TIFF, but I would personally do 10-bit DPX or 16-bit EXRs. You will, however, suffer a larger playback performance hit if you do. I have personally found DPX to be the best for playback and editing, but the files are large and I have experienced color/gamma shifts going back and forth between different/non-Adobe software packages. I will check out Grass Valley, thanks.
  18. You could maybe try Cineform, but that is not free; other than that I do not know. Worst case you can do DNxHD using the following workflow and keep 90% of the resolution.
     1.) In AE set your comp size to 1440 x 1080 with a pixel aspect of HDV 1080/DVCPRO HD 720 (1.33).
     2.) Set the pixel aspect of your footage to 1.33, then scale your footage to 90% :(.
     3.) Export to DNxHD at 1920 x 1080 using Media Encoder.
     4.) Import into Premiere, set the pixel aspect ratio to DVCPRO HD 1080 (1.5), and set the sequence settings to 1920 x 1080 with a 1.5 pixel aspect ratio.
     This will give you a final resolution of 2880 x 1080. It's not 3200 x 1200, but it is better than 1920 x 720 and should be very efficient to work with and edit, since it is just HD footage with a non-square pixel aspect. For steps 1-2 you could instead set the comp size to 1920 x 1080 and scale x by 120% and y by 90%; I prefer to use pixel aspect ratios when working with anamorphics, since that is essentially what an anamorphic lens is changing, and it keeps the number of real pixels lower, which in turn requires less memory. The 1.33 aspect in AE combined with the 1.5 in Premiere will give you a 2x stretch in the end. Just make sure that AE and Premiere are set up to display pixel aspect ratio correction and everything will look as expected.
     If you go the DNxHD route you can use the extra resolution to slightly reframe your shots before exporting to DNxHD and not scale as much (or at all), or you can shoot at a lower resolution in ML, like 1472, and save some memory. On somewhat of a side note, I always try to keep my footage the same resolution until the final output and use pixel aspect ratios to deal with the anamorphic stretch. So if you can find a codec that will export 1600 x 1200, keep the footage that size and just assign a 2.0 pixel aspect to it when working with it rather than scaling it to 3200 x 1200; it will reduce the number of pixels you are working with by half compared to scaling. I always keep my masters this way and then export square pixel / black bar versions as needed when I go to the final format like HD. If you find a good alternative codec please post; I am sure many would like to know as well. Good luck. Maybe keep an eye on this for the future, it appears it will be open source based on Cineform. https://kws.smpte.org/kws/public/projects/project/details?project_id=15
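     A quick arithmetic check of those numbers (plain Python, assuming a 1600 x 1200 ML raw source with a 2x anamorphic squeeze):

     src_w, src_h = 1600, 1200

     # Steps 1-2: 1440 x 1080 comp, footage scaled to 90%
     scaled_w, scaled_h = src_w * 0.90, src_h * 0.90   # 1440 x 1080

     # Step 3: DNxHD render at the 1.33-PAR-corrected width
     dnxhd_w = scaled_w * (4/3)                        # 1920

     # Step 4: Premiere displays the DNxHD with a 1.5 PAR
     final_w = dnxhd_w * 1.5                           # 2880
     print(final_w, scaled_h)                          # 2880.0 1080.0
     print(final_w / (src_w * 2))                      # 0.9 -> 90% of the full 3200px desqueeze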
  19. Just to avoid confusion: while it appears that it is possible to export this uncompressed to Cinema DNG, that doesn't make it raw and doesn't make it the same as a DNG from Magic Lantern or a Blackmagic camera. It is still 8-bit 4:2:2, just in a DNG container, which won't be very well supported since technically DNG should only hold raw Bayer data, even though Adobe seems to support this. I see a lot of posts around mentioning exporting to DNG, and unless you are converting a different raw format to DNG there really is no point; otherwise there are far more efficient codecs/containers to go with.
  20. Sounds like something is misaligned; posting an image of the issue would be helpful in determining what.
  21. For your future reference, the reason it can is that the flange distance is larger on a Nikon F mount than an EF mount. If the lens is designed for a format equal to or larger than the format/sensor size of your camera, and its flange distance is larger, then it can be mounted easily. Both hold true for a Nikon F lens on EF.
  22. He is using the 5D MKII, which isn't continuous at 1600 x 1200; it looks to me like you are using a 5D MKIII.
  23. If your lens is really a 1.6x stretch then you will want to shoot with a ratio of 3:2 or wider. 3:2 with a 1.6x squeeze will give you a 2.4 aspect; 4:3 will give you a 2.13 aspect and would be non-standard unless you crop. With ML you have so many choices for resolutions and aspect ratios that I would personally choose the settings that require the least amount of cropping. Cropping can be useful, but you will be maxing out the potential of the camera, so every little bit of savings will help; ML raw is very data intensive. If you shoot 3:2 at 1600 resolution that will be about 68 MB/s at 24p. This will be continuous with MLV and sound, even with Global Draw on, so you can use focus peaking, Magic Zoom, etc. I haven't used ML raw lately, but it was a little unstable with Global Draw on - dropped random frames. I am not sure if that has been fixed in the last few months or not; worth checking into. After that is 1728 resolution, which will pull about 79.7 MB/s and will not be continuous, allowing maybe 10 seconds before dropping frames. Not worth it IMO. So, I would personally shoot 3:2 at 1600px, 24p, with MLV and sound, using Global Draw if you have no external monitoring and if it is now stable enough (may require some testing on your part). Even if you are recording externally, I would record audio in camera as well; it helps tremendously for audio syncing and can be done automatically using PluralEyes. For now I would test and experiment as much as you can. ML raw is awesome, but it is data and time intensive and will really slow down post.
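     For reference, those data rates fall out of a simple calculation. A small sketch (plain Python), assuming 14-bit ML raw at 24 fps, MiB-based rates, and approximate 3:2 frame heights:

     def ml_raw_rate(width, height, fps=24, bit_depth=14):
         """Approximate ML raw data rate in MB/s (MiB/s), before container overhead."""
         bytes_per_frame = width * height * bit_depth / 8
         return bytes_per_frame * fps / (1024 * 1024)

     print(round(ml_raw_rate(1600, 1066), 1))  # ~68 MB/s
     print(round(ml_raw_rate(1728, 1152), 1))  # ~79.7 MB/s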
  24. The 54 is not like the 36. The 36 has a thin plastic lip you can sand down to allow for a little closer focus. The 54 has large, thick tabs that contact each other and would have to be cut off with a saw, since they are so big you cannot sand them down. The 54 is a very solid, sturdy lens; this is not a modification I would even consider for it. The only way would be to disassemble the lens to the point where you can keep focusing until the lens screws apart and just leave it dismantled - again, not something I would want to do.