Posts posted by thephoenix

  1. 1 hour ago, keessie65 said:

    screenshot from a film I am now editing. Halfway the hike in Austria the SSD from my Ninja V was full, so I stored new footage on SD card. In Premiere CC I edit both and Ninja files are faster to render than H265 from card....

    NINJAV_S001_S001_T282.00_04_26_11.Still001.jpg

sure they are, h265 is a nightmare for resources. transcode the files or make some proxy files. but the ninja v files are supposed to be better anyway, 4:2:2 vs 4:2:0

    nice image. anamorphic i guess :)

    frames from my next one too, these were my first images with the xt3.

    californight palm_1.47.1_1.47.1.png

    californight vegas_1.57.1.png

    californight_1.113.1.png
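for reference, the transcode/proxy route is just an ffmpeg one-liner underneath. a minimal sketch that assembles the command; the DNxHR LB profile and filenames are my example choices, not anyone's confirmed settings:

```python
# builds an ffmpeg command for an edit-friendly DNxHR proxy from an
# h265 clip. profile and filenames are illustrative examples only.
import shlex

def proxy_cmd(src, dst):
    args = [
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd", "-profile:v", "dnxhr_lb",  # DNxHR low-bandwidth proxy
        "-pix_fmt", "yuv422p",                      # DNxHR LB is 8-bit 4:2:2
        "-c:a", "pcm_s16le",                        # uncompressed audio for editing
        dst,
    ]
    return " ".join(shlex.quote(a) for a in args)

print(proxy_cmd("clip.mov", "clip_proxy.mov"))
```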

  2. 2 hours ago, Attila Bakos said:

    Yeah 16 doesn't have it, 16.1 b2 (the latest beta) added it.

    got it thanks

now i gotta redo all i've done in the past 3 days on my last film.

    btw, do you need any conversion lut to rec709 when selecting flog as input ?

  3. Just now, heart0less said:

    Why do you transcode it? 

    Can't you generate Optimized Media inside Resolve? I think it's much easier this way. 

i am not sure the free resolve does it. i think you need the paid version to handle h265.

last time i checked it didn't, but it was a few months ago


  4. 1 minute ago, Attila Bakos said:

    I did my tests using Fujifilm F-Log as input color space, then Rec.709 Gamma 2.4 as timeline & output color space. I don't use HLG but for that I'd use Rec.2100 HLG as input, in fact that's automatically chosen by RCM if you import a Fuji HLG file. (in Resolve 16.1 beta 2)

    About interpretation of F-Log files (didn't test HLG yet):
    If you use the Ninja V then you're fine.
    If you use internal footage then you're fine in Davinci YRGB and ACES, but not (yet) in RCM.
    If you transcode the footage by doing a matrix conversion from BT.601 to BT.709 then you're fine.
    If you transcode the footage without a matrix conversion and you preserve the original matrix coefficients flag, then you're fine in Davinci YRGB and ACES, but not (yet) in RCM.
    If you transcode the footage without a matrix conversion and you omit (or simply rewrite) the matrix coefficients flag, then the footage will be interpreted incorrectly everywhere.

    It all comes down to how shutter encoder works. I can only help with FFMPEG.

shutter encoder is just a gui on top of ffmpeg. i have to check if it changes the matrix, but i know i can change the colorspace to rec 709

    must be my eyes because i don't see fujifilm flog as input in resolve rcm ?

i think i will install 16.1 b2, i am on final 16 at the moment
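for what it's worth, the matrix conversion described above maps to a single ffmpeg filter. a sketch of the command a gui like shutter encoder would drive underneath; the 625-line bt.601 variant and the filenames are my assumptions:

```python
# builds an ffmpeg command that converts the YCbCr matrix from bt.601
# to bt.709 and tags the output accordingly. the 625-line 601 variant
# is an assumption; filenames are placeholders.
import shlex

def matrix_convert_cmd(src, dst):
    args = [
        "ffmpeg", "-i", src,
        # the colorspace filter performs the actual matrix conversion
        "-vf", "colorspace=all=bt709:iall=bt601-6-625",
        # tag the output stream as bt.709 so NLEs interpret it correctly
        "-colorspace", "bt709", "-color_primaries", "bt709", "-color_trc", "bt709",
        "-c:v", "dnxhd", "-profile:v", "dnxhr_hq", "-pix_fmt", "yuv422p",
        "-c:a", "copy",
        dst,
    ]
    return " ".join(shlex.quote(a) for a in args)

print(matrix_convert_cmd("flog_internal.mov", "flog_709matrix.mov"))
```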

  5. 1 minute ago, Attila Bakos said:

    That's kinda the point here, that once the interpretation issue is fixed in RCM there will be almost no difference. Right now RCM is different from the others but yeah, you have to open the files in separate tabs and click back and forth to see the difference. A real world scenario might be more telling.

    what are your rcm settings as input ?

    i am wondering if i do it right.

my workflow is that i shoot h265 internally, then transcode to dnxhr using shutter encoder.

then i import the transcoded files into resolve. so i wonder if it still has to be interpreted as a fuji file ? doing this for both flog and hlg.

same with the ninja v, using dnxhr.


  6. 10 hours ago, Andrew Reid said:

    Sure, so the first thing I need at 180fps is to select 1/50 shutter manually ?

    Think.

    Why do you need manual controls in HFR mode? The ISO and shutter speed are going to be the same whether the camera sets it or you do.

    It is going to use the shutter speed most suitable for the frame rate, and the lowest ISO it can get away with in order to expose brightly enough for the available light.

    And you have an override with the exposure compensation dial to fine tune exposure anyway.

    So yeah, it's not perfect and could be better but do you want perfectly exposed 180fps full frame images or not?

i disagree andrew, sometimes in hfr you want a bit of motion blur, so you set the shutter to a slower speed than the 180 degree rule gives, or a higher one if you need to.

so yes i do think about the result i want ;)

like for 60fps you go 1/100 or 1/96 instead of 1/120

if i follow your thinking, all cameras should be automatic. are reds, arris and the others automatic ?
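the arithmetic behind those numbers, as a quick sketch (shutter angle to exposure time):

```python
# exposure time = (shutter angle / 360) * (1 / fps)
def shutter_speed(fps, angle):
    """exposure time in seconds for a given frame rate and shutter angle."""
    return (angle / 360.0) / fps

print(round(1 / shutter_speed(60, 180)))   # 120 -> the "180 degree rule" at 60fps
print(round(1 / shutter_speed(60, 216)))   # 100 -> wider angle, more motion blur
```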


  7. 10 hours ago, Andrew Reid said:

    19.8ms rolling shutter is ok. Not quite as good as a GH5 but given it is 6K and a larger sensor, it's on par with Fuji and much better than a Sony A6500.

    IMATest says dynamic range is 0.2 stops better vs Pocket 4K.

    And the number is completely meaningless because the Pocket 4K is pink.

    A reminder always to trust your eyes and not only the numbers.

    4K-vs-6K-5-stops-under_2.6.1.jpg

well, the image you show is underexposed 5 stops and brought back to 0, so that is only partially about dr. i wish they did the same test at +5 stops


  8. 2 hours ago, thebrothersthre3 said:

    People generally prefer Log for delivery

no, they prefer log for grading, and hlg is kinda the same from that perspective

  9. 28 minutes ago, Otago said:

    I don't understand how the photo site size can change the dof, unless it is massive and there are so few samples to judge sharpness from. I could sort of see how it could be true for film, a flat CCD or CMOS with micro lenses vs a multilayered sheet. I'm obviously out of my depth with the maths behind all of this, is there a book or paper that you know of to explain it all ? 

i have documentation but it's in french.

not sure you read french ;) but maybe google translate can help (i doubt it on technical stuff)

    https://artfx.school/profondeur-de-champ-capteurs-numeriques/


  10. 1 hour ago, Attila Bakos said:

    A slight downside: it seems that the fix they implemented for the bad interpretation for Fujifilm files in v16 (bad YCbCr->RGB conversion) only works in Davinci YRGB and not in RCM.

pffffff, they'll never sort it out...

    so disappointing

  11. 2 hours ago, Otago said:

    I was trying to figure out why we were wrong, the optical system hasn’t changed so the depth of field hasn’t changed - this is pretty fundamental in optical systems I have worked on.

     

    The bit that we were missing, as Kye says with the circle of confusion, was that while the optical system hasn’t changed, the viewing conditions have. The depth of field ( effective to the viewer of the output ) is the same for any size sensor as long as the output is scaled similarly to the input I.e. FF35 is viewed twice the size of M43. The circle of confusion is related to both the input and output size and scaling that changes the visible depth of field. I think of it like zooming into 100% , I can see whether something is actually in focus but I couldn’t tell when scaled to fit the screen, the difference won’t be as apparent as that example though.

     

    This is interesting to me because it explains why FF35 looks so good on a phone but can be too shallow on a big screen, and may explain some of the rush to the bigger sensors - as the images we view get smaller we need shallower depth of field to match the depth of field we see ourselves or are accustomed to in media.

     

    What I haven’t quite wrapped my head around yet is whether it changes the absolute level of blurriness of the out of focus areas. My initial thought is that it doesn’t, it just changes the rate of transition between acceptable sharpness and out of focus but not the absolute blurriness ( I’m sure there’s better terms for this :) ) Anyone got any pointers on that ?

circle of confusion, yes, but the photosite size also has a real impact on dof.

dof on digital is not the same as on film

  12. 6 hours ago, Stab said:

    Yes, that makes sense. With a crop factor of 2x you multiply the focal length and the f-stop by 2x to get a similar FOV and DOF.

    Got it.

    But, on the DOF calculator that I used, I didn't want to match any framings. I just put the same lens on 2 different bodies. And the MFT sensor had a shallower DOF than the Full Frame sensor. If that is correct, then why is that?

    Same distance to subject, same lens, same everything.

you just compared 2 things that are not the same. sorry, it's just the way it is. crop factor does exist. so in your test you're not comparing two 70mm lenses, but a 70mm at 2.8 and the equivalent of a 140mm at 5.6
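a rough sketch of the numbers, using the thin-lens dof formulas and the usual circle-of-confusion conventions (0.030 mm full frame, 0.015 mm mft; all values here are illustrative assumptions, not a definitive calculator):

```python
# total depth of field via the hyperfocal-distance formulas.
# f: focal length (mm), N: f-number, s: subject distance (mm), coc: CoC (mm)
def dof_mm(f, N, s, coc):
    H = f * f / (N * coc) + f                 # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)      # near limit of acceptable sharpness
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return far - near

s = 3000.0  # subject at 3 m
ff = dof_mm(70, 2.8, s, 0.030)    # 70mm f/2.8 on full frame
mft = dof_mm(70, 2.8, s, 0.015)   # same lens, same distance, on mft: ~half the dof
eq = dof_mm(140, 5.6, s, 0.030)   # 140mm f/5.6 on full frame ~ matches the mft shot
print(ff, mft, eq)
```

the same lens gives a smaller dof number on the smaller sensor only because the CoC convention changes with sensor (and viewing) size, which is exactly the calculator result being discussed.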

    2 hours ago, thebrothersthre3 said:

    I've seen androidlads comparisons and thats good enough for me 

    he prefers hlg so you should try it
