Posts posted by kye

  1. 36 minutes ago, Amazeballs said:

    What is the best profile for 4k60p excluding Vlog? 

    CineD, CineV or Natural and with what settings would you use? 

    Everyone seemed to be in love with Cinelike-D before HLG was released, so I'm guessing that one.  I haven't shot much with it myself though.

    Settings depend on what look you're going for, I'd imagine.

  2. Yeah, sounds like it was rendering proxies in the background.

    Your big load is the deflicker plugin.  Most plugins work by doing something to one frame at a time, but this is one that has to analyse a bunch of them and I've noticed that it kills my machine too.  

    A quick way to troubleshoot performance is to just play the timeline and watch the little FPS number above the viewer, and try turning on and off nodes and seeing where the load is.  Often there are nodes that you can get dialled in, and then disable while you edit, then re-enable before you export.  

    Resolve has a pretty complicated suite of performance optimising and caching functions, it's worth getting to know them so you can easily adjust things as you work to get the speed and the accuracy / quality you require at different stages of the editing process.

    For example:

    • for editing you need speed, so if you have compressed source footage then generating proxies will help; if you're doing dissolves or other effects then lowering the timeline resolution can help
    • for image processing (not colour work) the load is in processing the footage, so lowering the timeline resolution will help (as there are fewer pixels to compute)
    • for colour grading you need high quality (and maybe not speed), so turn up the timeline resolution and just skip through the timeline; if you need realtime playback then rendering cache on the timeline is the answer; and if you need high quality and realtime playback and can't wait for proxies to render, then just hand your wallet over and someone will take care of it!
  3. 20 minutes ago, thephoenix said:

    and there's nothing exceptional in recovering 2 stops of highlights, but hey, this is YouTube, everything has to be insane?

    Actually, I think that it's a deeper issue.  

    Just look at the terminology - "recovering highlights" is something that you do when things have gone brighter than white.  This only makes sense if white has a shared definition, which it does if everyone is publishing in rec709.  

    HLG is actually a delivery standard, so when someone shoots in HLG and is going to deliver in HLG then there is no recovery - if the camera clips the HLG file then that data is clipped.  Same as if you shoot rec709 for delivery in rec709.  If this guy was talking about filming in LOG then no-one would assume that he's delivering in LOG, so the conversation would be in the context of the default and standard delivery colour space / gamma.

    The point of HLG is that it's an alternative to rec709, and so now there is no default / standard / goes-without-saying reference point.

    Edit: coming from a camera, clipped HLG is the same as clipped 709... that is, some cameras may not actually be clipped because they might include super-whites in the output file, as I know the Canon cinema cameras do (as well as others, I'm sure).  Ah, I love the smell of confusion in the morning.

  4. 2 hours ago, mercer said:

    Here’s a couple frame grabs from a recent 50mm comparison test I did. 

    Definitely not a fair comparison, but they certainly both look really nice!  The bokeh in the close-up shot is lovely... and it seems to have the anamorphic shape?

    I think you're slowly convincing yourself to go B&W!

  5. 14 hours ago, Grimor said:

    Interesting talk about recovering 2 stops of burning highlights...

    Well, he makes no sense through most of that video but was absolutely right about one thing - when he said "I don't know the technicalities of what's actually going on".

    My understanding would be this:

    • HLG includes the whole DR of the camera and the other profiles he tested don't
    • FCPX is confusing him... here's what I think is happening:
      • FCPX takes the HLG file (which isn't clipping anything) and then converts it automatically to rec709, "clipping" it heavily but retaining that data in super-whites
      • When he makes adjustments to lower the exposure, those values above 100 get brought back into range
      • He thinks that FCPX pushing the exposure beyond 100 somehow means the GH5 "clipped" (it didn't)
      • He thinks that lowering the exposure in FCPX and getting the highlights back means you can somehow recover clipped highlights (you can't)
    • If something is clipped then the values are lost (digital clipping is data loss)
    • FCPX is "helping" by automatically doing things without asking you
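
    To make the super-whites point concrete, here's a toy Python sketch.  The 1.4 scale factor and the conversion function are completely made up - the point is just the difference between values pushed above 1.0 in a float pipeline (recoverable) and values clipped at the sensor (gone):

```python
# Toy model of the float-pipeline idea (the numbers are invented, not FCPX's
# actual maths). After an HLG->709 conversion, some pixel values can land
# above 1.0 ("super-whites"); a float pipeline keeps them, so lowering the
# exposure brings them back. Sensor clipping is different: that data was
# never recorded, so no adjustment can restore it.

def to_709_float(hlg_values):
    """Stand-in for an HLG->709 conversion; values may exceed 1.0 but are kept."""
    return [v * 1.4 for v in hlg_values]  # illustrative scale factor only

def lower_exposure(values, gain):
    return [v * gain for v in values]

def sensor_clip(values, ceiling=1.0):
    """Hard clip at capture time: information above the ceiling is destroyed."""
    return [min(v, ceiling) for v in values]

hlg = [0.2, 0.5, 0.9]                 # in-range and near-white pixels
converted = to_709_float(hlg)         # 0.9 becomes 1.26 -> "over 100" on the scopes
recovered = lower_exposure(converted, 1 / 1.4)
print(recovered)                      # the original values come back intact

clipped = sensor_clip([1.3])          # the camera itself clipped this pixel
print(lower_exposure(clipped, 0.5))   # just a dimmer flat white: [0.5], not [0.65]
```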

    TL;DR - HLG has greater DR; exposing for HLG is different than rec709; FCPX is "helping" and confusing people; and this guy isn't the person to be listening to for this stuff.

  6. 3 hours ago, PannySVHS said:

    old c-mounts and ibis are indeed a thing of beauty!  I shot a video today with a friend, all GX85, awesome Fujinon 12.5mm with a bit of fungus. That lens is the only reason i still have that camera. So this is from today, with grade and edit right after it. I should do that every week. 1hour shootin, no planning ahead. fun thing: shot in vivid, full contrast and sharpness +2, color at +4, though at the lit parts at -5.  Finally my first "real" video with the GX85, after almost two years owning it! :)

     

    Nice!  I don't get it, but it's a cool aesthetic :)

    Made me think about that idea of having a virtual film club with some loose rules to encourage working fast to get something out and not getting too attached to making the end result perfect.  I like the idea of having time limits on the stages of production and doing it all in one go.  Something that could be done in an afternoon :)

  7. 1 hour ago, @yan_berthemy_photography said:

    Hello there,

    I am having a question about color profile with the GH5, for the low light situation.

    Imagine you are in a lowlight situation (in the street), shooting skin tones, would you use CINE-D or Hybrid Log Gamma?

    Thank you.

    I find that if you can run a short test to answer a question, that's normally the best way to do it.

    I have a GH5 and I'd run the test for you, but you should just do the test yourself and that way you get to choose the right scenario and look at the footage without internet compression :)

    Edit: quoted the wrong message first time around. fixed now :)

  8. 6 hours ago, buggz said:

    I'm no expert, so, if anyone can think of anything better, I'm all for it.

    I got this info from another forum, it seems to work good for me.

    - In Project Settings, under Color Management, Change to Davinci Color Managed
    - Under Color Management select:
    -- Rec 2020 HLG ARIB STD-B67 for Input Color Space
    -- Rec 2020 HLG ARIB STD-B67 for Timeline Color Space
    -- Rec 709 HLG ARIB STD-B67 for Output Color Space

     

    It depends on what you're trying to accomplish.  When people ask about HLG they're often asking about delivering to HLG, so unfortunately it's ambiguous.

  9. 1 hour ago, Sage said:

    Not a bad way to go! The most appealing quality of Unreal was the photorealism; granted I was one to get a Gtx1080 at release to do a 4k downsample to CV1. Have you tried Dreadhalls? I remember that as being an algorithmically rendered dungeon (unique every time), and it was pretty scary.

    Cool.  No, I haven't tried it.  TBH I'm not really a gamer.  This might seem strange for me to be looking at programming, but my ideas come from a place a bit different to how things are traditionally done.

    I've previously written complete 3D engines from scratch, so when I first contemplated getting into this I was essentially looking for a 3D VR polygon engine that I could just program.  These tools are more like using the Doom level editor than actually coding something.  That's kind of promising, I think, because it's not at all what I have in mind, so in a sense I'm likely to get different results and stand out from the crowd.

    1 hour ago, KnightsFan said:

    @kye Unity is a lot more programmer friendly than Unreal, certainly a lot easier to make content primarily from a text editor than it is in Unreal. Unless you need really low level hardware control, Unity is the way to go for hacking around and making non-traditional apps.

    Awesome, that's cool to hear.

    My first idea was more as a VR experience to generate the entire 3D environment using fractal mathematics, recursive algorithms and other techniques to generate alternative spaces that aren't designed to look like the real world.  It seems silly to me to take computing power that can basically render whatever you want and then use it to just try and copy the real world.  
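
    For what it's worth, the kind of thing I mean is along these lines - a classic recursive midpoint-displacement algorithm that grows a fractal height profile.  This is just a generic textbook sketch in Python, nothing Unity-specific:

```python
import random

random.seed(42)  # repeatable "terrain"

def midpoint_displace(left, right, depth, roughness=0.5):
    """Recursively build a 1D fractal height profile between two endpoints.

    Each level inserts a randomly displaced midpoint and halves the
    roughness as it recurses - the same idea scales up to 2D terrain
    or to stranger, less Earth-like spaces.
    """
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-1, 1) * roughness
    return (midpoint_displace(left, mid, depth - 1, roughness / 2)[:-1]
            + midpoint_displace(mid, right, depth - 1, roughness / 2))

profile = midpoint_displace(0.0, 0.0, depth=5)
print(len(profile))  # 2**5 + 1 = 33 height samples
```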

    Anyway, once I've worked out how it works I'll probably just be creating and moving objects around with code and not really using the editor at all.

    I just have to work through their examples... first step is to work out why their 3D game kit gets compiler errors.  And when I say first step, I have to work out where I see those errors, then what they mean, then how to fix them :)

  10. 4 minutes ago, Sage said:

    That's the way it's done. I dabbled in that arena with the DK2 a few years back; my feeling was that the new Unreal Engine 4 at the time was the way to go. It's a free platform, with uncompromising graphic fidelity, integrated VR handling, and exposed C++ source if you want. I will be getting back in with CV2

    I've just started with Unity.  

    I read a bunch of comparisons and they seemed to think that Unity was better for smaller studios and for doing mobile, so I'll see how I go.  In a sense I'm not the typical user, because I know how to code and I don't want to use it for what it's designed for.  I've learned so many programming languages over the years that it's not that hard to pick things up, although these tools are very far removed from straight programming.  I suspect I'll abandon the game interface and end up coding most things, as my goal is more around algorithmically rendered environments than typical 3D apps or games.

  11. 1 hour ago, leslie said:

    i'll bite, how do you do that ? oscilloscope or something else ? i guess its expensive or technical or both

    Interesting question. 

    You get a reference clip, encode it to whatever codec you want to measure, compare the compressed clip to the original, then do some maths and get the result.

    The tricky thing is that to compare BRAW with uncompressed RAW from the P4K, you would have to point the camera at something recording uncompressed RAW, then set it to compressed BRAW and point the camera at the exact same thing, which is practically impossible.  In reality you'd have to be able to take uncompressed RAW and convert it into BRAW so you had the same footage in the two codecs, so it's something that only BM would be able to do, I think.
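
    For anyone curious what "do some maths" looks like, the usual metric is PSNR (peak signal-to-noise ratio).  A minimal Python sketch, with made-up pixel values standing in for real decoded frames:

```python
import math

def psnr(reference, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    if len(reference) != len(compressed):
        raise ValueError("frames must be the same size")
    mse = sum((r - c) ** 2 for r, c in zip(reference, compressed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(peak ** 2 / mse)

ref = [10, 50, 200, 255, 30, 90]   # reference "frame" (hypothetical values)
enc = [12, 49, 198, 255, 28, 91]   # the same pixels after a lossy round-trip
print(round(psnr(ref, enc), 1))    # higher dB = closer to the original
```

    You'd run that per-frame over matched clips and average, which is exactly why you need the same source material in both codecs to start with.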

    I'd bet that looking at signal-to-noise ratios would have been done probably thousands of times during the development of BRAW, and they'd have been comparing it to all the other codecs they could analyse too, but I doubt we'd ever see any of their numbers.

    The only way we'd get a glimpse of it would be for someone to buy a P4K, crack it open and then tap into the RAW feed coming off the sensor before it gets to the video processing circuitry and save that stream as uncompressed RAW and let the camera record to BRAW, then to compare those two files.  Of course, they would have to do a similar test for every BRAW setting, and as the codec quality is signal-dependent they'd have to do it many times for each set of settings in order to be able to reliably compare two different compression levels.

  12. 1 hour ago, Anaconda_ said:

    So basically, editing on a 4K timeline is a waste of my computer's resources? I'd get the same end results (4k master) if I edit on a 1080/720 timeline?

    Not a waste, but it's not required.

    Just did a few tests.  I got a project, changed the timeline to 720p, added a 4K still image, then exported at 4K, changed the timeline to be 4K, then exported at 4K again. This is what you get:

    • if you export at 4K from a 720p timeline you get a 4K file but with 720p quality
    • if you export at 4K from a 4K timeline you get a 4K file and 4K quality

    An easy way to judge what quality you're getting in the output is to add a Text generator and make the font size really big and look at how sharp the edges are in the output file, especially on curves or edges close to horizontal or vertical.   
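
    You can simulate why this happens with a toy sketch.  The crude box downscale and nearest-neighbour upscale here stand in for the timeline - not what Resolve actually uses (its filters are much better), but the principle holds: a hard edge survives a native 4K path untouched, while going via a lower-res timeline smears it.

```python
def downscale(line, factor):
    """Average each block of `factor` samples (crude box filter)."""
    return [sum(line[i:i + factor]) / factor for i in range(0, len(line), factor)]

def upscale(line, factor):
    """Repeat each sample `factor` times (crude nearest-neighbour)."""
    return [v for v in line for _ in range(factor)]

def transition_width(line):
    """How many samples sit strictly between black (0.0) and white (1.0)."""
    return sum(1 for v in line if 0.0 < v < 1.0)

# One 3840-sample "4K" scanline with a hard black-to-white edge.
uhd = [0.0] * 1921 + [1.0] * 1919

# Simulate a lower-res timeline: scale down 3x, then back up for a 4K export.
via_low_res = upscale(downscale(uhd, 3), 3)

print(transition_width(uhd))          # 0 - the edge is perfectly sharp
print(transition_width(via_low_res))  # 3 - the edge is now a blur ramp
```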

  13. It would be interesting to see a signal-to-noise analysis of BRAW at various bitrates vs competing formats.  I doubt we'll see one, but after seeing how h265 is roughly the same as h264 with double the bitrate, it might be a similar situation between BRAW and CDNG formats where BRAW gives similar image quality to a much higher bitrate CDNG stream.

  14. Edit: to expand on the order of operations comment with an example..

    If we take a gradient, apply a colour treatment, apply an opposite colour treatment, then add some contrast with some curves, this is what we get:

    [screenshot: gradient with a colour treatment, the opposite colour treatment, then a contrast curve]

    However, if we apply the same adjustments but change the order so that we apply the curve in-between the colour adjustments we get this:

    [screenshot: the same adjustments with the curve applied in-between the two colour treatments]

    This is relevant because it simulates having an incorrect WB in camera, then trying to balance it in post but doing it in the wrong order.
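
    The same demonstration works on a single pixel in code.  The per-channel gains and the power-curve "contrast" below are simplified stand-ins for real grading tools, but they show why the order matters:

```python
def gain(rgb, gains):
    """Per-channel multiply - a simple white-balance-style colour shift."""
    return [c * g for c, g in zip(rgb, gains)]

def contrast(rgb, power=2.0):
    """A crude contrast-like curve: per-channel power function on 0..1 values."""
    return [c ** power for c in rgb]

warm = [1.2, 1.0, 0.8]          # push toward orange
cool = [1 / 1.2, 1.0, 1 / 0.8]  # the exact inverse shift

pixel = [0.5, 0.5, 0.5]         # a neutral grey from the gradient

# Shift, inverse shift, THEN curve: the shifts cancel and grey stays neutral.
a = contrast(gain(gain(pixel, warm), cool))

# Shift, curve, THEN inverse shift: the curve bakes the shift in before it
# can be undone, so the "neutral" grey comes out with a colour cast.
b = gain(contrast(gain(pixel, warm)), cool)

print(a)  # all three channels (near) equal
print(b)  # red > green > blue: a warm cast that "cancelling" didn't cancel
```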

    Inside the camera the sensor sees in Linear, then transforms to LOG (or whatever the codec is set to), so the camera has already made the first two adjustments: a WB adjustment and then a set of curves.  This is further complicated by the fact that different tools work in different colour spaces.  Some are designed to work in LOG (IIRC specifically the Cineon LOG curve - Juan Melara recommends ARRI LogC as being the closest match to this) - I believe the Offset, Contrast/Pivot, and other tools are for LOG.  Others are designed to work in Rec709, such as the Lift Gamma Gain wheels.

    In a sense this isn't a problem, as there is no such thing as 'correct' - very few of us are trying to capture a scene accurately.  We deliberately adjust the WB of shadows and highlights separately (orange/teal, film emulation LUTs, etc), we deliberately abandon a Linear gamma curve (almost any adjustment does), and we apply separate treatments to different items within the scene (keys, power windows, gradients), so none of the above really matters in those cases.  However, it's good to be aware of, because if you're going for a natural look you might be applying a teal/orange style adjustment like I showed above, but not in a good way.

    This is why there is such an emphasis on getting WB correct in-camera if you're using a colour profile - anyone who has gotten it wrong and tried to recover the shot in post faces the near-impossible task of working out how to reverse the adjustments that the colour profile applied.  That magical Canon colour science isn't so magical if you're trying to work out how to un-do it, make an adjustment, and then re-apply it!

  15. 6 hours ago, Anaconda_ said:

    Including me, as a RAW noob... Can help me with project settings for working with Braw? Does this look right?

    While editing, there's no problem in adjusting the 'Decode Quality' to half or quarter for smoother playback on a laptop right? as long as I change it back to full for the export?

    Should I have Highlight Recovery on by default? Or adjust that on a clip by clip basis in the Color tab?

    All help is greatly appreciated.

    EDIT: also, I've generated a sidecar file, but the Blackmagic RAW Player still shows me the ungraded clip, even when the video and sidecar are the only files in the folder it's playing from. - anyone else having a similar experience?

    I don't have much experience grading RAW so not really my area unfortunately.

    What I can tell you is that Resolve seems to do everything in a non-destructive way.  What I mean is that you can have a node that pushes the exposure to clip the whole frame, but if you then pull it down with the next node it's all still there.  Also, you can have a 4K clip on a 720p timeline and if you export it as a 4K clip it will pull all the resolution from the source clip regardless of what the timeline resolution was.  

    So, in a sense it kind of doesn't matter what you do as long as the end result is where you want it.  One exception is that you have to be careful with the order of operations, as you can get funny results - i.e. applying an adjustment before or after various conversions or other adjustments can give different results, so that's the thing to watch out for.
