Posts posted by kye

  1. 6 hours ago, IronFilm said:

    Just I'm highlighting the fact when he says "MFT IS GOING TO DIE!!!!" you should take it with a big grain of salt, as very likely a big part of that reason is for the views.....

     

    6 hours ago, BTM_Pix said:

    Just playing devil's advocate here though but a major part of the standard YouTube schtick in general is the follow up "Why I was wrong about xyz" videos.

    Its like getting two pieces out of one so there is arguably even more of a vested interest in going big one way or the other with the original piece.

    "The Pocket 4K Is Quite Decent Really" is never going to invoke much of a need for a retrospective video the following month.

     

    5 hours ago, IronFilm said:

    YouTube rewards people for going big on predictions/claims, rather than more nuanced mild views.

     

    They're sometimes even daily. Or at least multiple times a week.

     

    All in all the Northups play "The YouTube Game" really really well (& I do admire them for that, they've got 100,000x more success than my YT channel will ever have), but part of that is pushing hard to where the edges are (like predicting MFT will die! ha), however sometimes that means going too far by accident....  which is what they did with that video.

It sounds to me like you guys are reacting to the title a lot more than the content?  But even if you aren't, I actually happen to think he presented a reasonable argument that MFT will die.  I think the sticking point might actually be timescales.  We'd probably all agree that MFT will die in the next 100 years, and not in the next 6 months, so it's a question of degrees.  The Northrups talk about long-term trends in a world that has the attention span of a squirrel, and I think that mismatch of contexts is often a factor in people disagreeing about this stuff.

In terms of people "making mistakes", or in this example I'd say it's closer to "missing the mark", everyone does that.  But then again, if you gauged things by internet comments then basically everyone should give up immediately and companies should abandon everything and hand out fantasies for free.  Plus everyone seems to fall in love with what they own, which is a strange phenomenon.

    And if you want to get a sense of how level-headed the Northrups can be, check out their reply to Jared Polin around RAW vs JPG.  Tony was so level-headed the video was almost boring to watch - not only did he not capitalise on the drama but his reply was designed to put the whole thing to bed instead of create more.

    I think most of their videos are akin to "The Pocket4K is quite decent really" and that's why they have such a following.  They gave the GoPro 360 camera a pretty bad review (for many good reasons) but still presented its good points, and even more recently they still listed its pros and cons when comparing it to the more recent insta360 camera.

    They're definitely not perfect, but I think on average they are ahead of 99.99% and criticising them based on the title of a video rather than the myriad of points within the video isn't that helpful.

  2. 1 hour ago, IronFilm said:

    They can duplicate a lot of tech across the camera ranges. 

    Nearly all the main players are supporting at least two sensor sizes with their ILCs: Nikon/Canon/Pentax/Sony/FujiFilm/Leica/etc....  Sigma/Olympus/Panasonic were the odd ones out here. But now Sigma(probably?) and Panasonic will be joining the rest and having two different sensor sizes at once. (will be curious if Olympus stays focused purely on only the one sensor size: MFT. Or if they'll join L mount, or something else? I suspect they'll stay with MFT, but L mount odds are not too far behind, with anything else being unlikely)

That may be true now, but the way I understand it is that it's a shrinking market, and that kind of split may not be a recipe for sustainability.  Who knows, of course - those with crystal balls (or steel balls) will be putting all their savings on the winning horse.

    1 hour ago, IronFilm said:

    You mean Tony "The Clickbait King" Northup?

    Every successful YouTuber is clickbait royalty - that's just the reality of getting new subs and keeping ahead of the others - look at the UK tabloids (and sensationalist media the world over) and tell me that it isn't a successful strategy.

    What I like about Tony is that he isn't afraid to say things, isn't afraid to predict things, but in addition to those traits (that are shared with nearly all YouTubers) he's also level-headed, admits when he's wrong, and explains his thinking so that you can understand why he came to the conclusions he did.  Even if he gets things wrong he definitely adds to the conversation rather than just taking up your time and contributing nothing.

I recall some issues Dave Dugdale had with the a6xxx cameras that looked like IBIS and lens IS fighting each other.  I don't recall if it ever got resolved, but I think the only way to work out what's going on is a logical evaluation through testing.  Shoot a test shot with everything at plain default settings and verify that you can get a smooth pan that way.  Once you've established a baseline, add in the settings you normally use one by one and see which one 'creates' the problem.  Then leave that setting on and start reverting the previously adjusted settings back to your baseline to see if it comes good again - if so, it's a combination of multiple settings causing it.
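That isolation procedure can be sketched as code. Everything here is hypothetical - the setting names and the `shoot_is_smooth` check are made up purely to make the one-at-a-time logic concrete:

```python
# Hypothetical sketch of the one-setting-at-a-time isolation process.
# The setting names used below are illustrative, not real camera menus.

def isolate_culprit(settings, shoot_is_smooth):
    """Return the set of settings implicated in the problem.

    `settings` is an ordered dict of {name: value} to layer onto the
    baseline; `shoot_is_smooth` "shoots a test pan" with the given
    settings applied and reports whether the footage looked OK.
    """
    active = {}
    for name, value in settings.items():
        active[name] = value
        if not shoot_is_smooth(active):
            # This setting triggered the problem.  Now revert the
            # earlier ones to see if it acts alone or in combination.
            for earlier in list(active):
                if earlier == name:
                    continue
                trial = {k: v for k, v in active.items() if k != earlier}
                if shoot_is_smooth(trial):
                    return {earlier, name}  # needs both to misbehave
            return {name}                   # acts alone
    return set()                            # the baseline itself is fine
```

For example, if IBIS and lens IS only misbehave together, the function returns both names; if a single setting is at fault, it returns just that one.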

  4. I'm yet to make the move, but I think I'll have to eventually.

Slightly OT, but don't they offer new OS versions free for a certain period, after which you have to pay for the upgrade?  I think I had to pay for one when the version I was running didn't support the software I wanted to run, and the only free version was the one with buggy wifi, so I paid to get the second-newest version.

  5. 8 hours ago, IronFilm said:

    A little secret: this happens a lot more often than you think.......  just this one time Fstoppers let the curtain slip and you got a peek inside.

    Absolutely!

Think of all the awful products out there, and take a minute to wonder why no reviewer has ever looked at any of them.  It's simply incredible that the people who make awful products never contact reviewers, or if they do then the reviewers aren't interested, or if they are then the product gets lost in the post.  That postal system must be crazy good at assessing how good products are - it's the final gatekeeper in the entire system!!

  6. I've used it with a key to beautify skin a few times, but I'm not sure that it's the best way to go as I haven't compared it to other methods.

Also worth checking out is the Sharpen and Soften OFX plugin, which gives you independent sliders for small, medium and large textures so you have more control, and it lets you sharpen some and soften others, so it's powerful for skin tones.

    If you want to use a dedicated sharpen or blur tool you can use the Edge Detection OFX plugin (I think that's what it's called) in the mode to display the edges and then connect the main output from that to the key input of the node with your effect in it. This gives you the ability to apply any filter to edges (and presumably also the inverse) in a very controllable way. 

    There are so many ways to get the job done in Resolve!

  7. @Sage  I have shot a bunch of footage at night with HLG at various exposure levels (ETTR as well as ETTL to keep highlights) and I'm wondering how to pre-process the footage in post to get the right levels for the LUTs.

    At what step in the Resolve node graph would you suggest adjusting the exposure back so skin tones are in the recommended 40-60 IRE, and which controls would you suggest (perhaps either the Gamma wheel or the Offset wheel?).

    Thanks!

  8. 17 hours ago, Deadcode said:

Don't get too excited, reddercity has been brute-forcing the registers for the past couple of years and has technically never achieved anything. The other forum members developed the new features from reddercity's findings. And unfortunately fewer developers are using the 5D2 every year.

    But let's hope for the best

What is your impression of how transferable findings like this are?

    I read the ML forums for a while and saw some instances of where figuring something out on one camera helped to solve it on other models, but I'm not sure if these were isolated instances or if this was more the norm.

    If so, perhaps these findings might apply to the 5D3 and maybe 5D4?

  9. 10 hours ago, webrunner5 said:

    I would imagine when you add the grip and 2 batteries to the X-T3 it is not going to be too far off either weight wise. I can't find the grip exact weight. Just shipping weight.

The Fuji X-T2 weighs 29.54 oz (837 g) with battery and a loaded battery pack. The 1DX Mk II weighs 53.97 oz (1530 g). So it's about 24.43 oz (693 g), or roughly 1.5 pounds, heavier. Not that much really, if the X-T3 weighs what the X-T2 does?

     


    Yes, I suppose that the 1DX isn't so bad if you think of it as having a battery grip included. Personally though I don't use a battery grip and I would expect to be able to get away with not having one, so it depends on your perspective.

  10. On 11/7/2018 at 8:26 AM, DBounce said:

So it's been a little over a month since I purchased my first FujiFilm X-T3 body. About two weeks after purchasing the first body I became convinced that this camera was very special. It was fun to shoot with. It reignited my desire to shoot stills. When it came to video the X-T3 seemed more than capable. I decided to refine my lens collection, adding the 23mm F2, 35mm F2, 16mm F1.4 and 56mm F1.2. Later I added the magnificent MKX18-55mm T2.9. All of these lenses performed to my highest expectations.

    I can tell you after this short period of owning these cameras that they can capture some truly lovely images. The in camera profiles are great and very inspiring to work with. The whole experience is really quite wonderful... well, that is it was. I am getting rid of the whole system. 

Why? Well, the second body decided to quit. The screen froze up and would just blink. A quick call to Fuji tech support revealed that the hardware had failed. I called to get a return authorization, which was granted. I decided not to go for an exchange, instead opting to wait and see how the first body held up. Today, I went out to shoot some pictures for a project we are doing. Upon arriving at the location I grabbed the X-T3 out of my bag and flipped the switch to power it on... NOTHING! Absolutely nothing. No lights, no screen lighting up, no power up at all. I tried different batteries. Still nothing. I tried pulling the battery and holding down power, then re-inserting the battery and attempting to power up again. Still nothing. Finally I tried the one thing I knew would work. This technique has never failed me.... I went home and grabbed my Canon 1DX Mk2. Needless to say the shoot was completed. But the Fujis are paperweights.

    I'm done with FujiFilm. It's all going back. It's packed up... I will not buy another one. In my experience they are not fit for serious use. Heck, they are not fit for even casual use. I had spoken highly of them in the past, but this experience with not one... but two bodies failing is enough for me. Lesson learned.

Sorry to hear you got two duds...  I'd suggest buying a lottery ticket - luck that bad can only mean it's been redirected from somewhere else!

I was particularly interested in the dial system that Fuji had: being able to control every parameter in either auto or a given manual setting is great, and gives more flexibility than some PASM mode implementations do, with a much better interface.

    Are you back to using the 1DXii for video as well?  It looks great but I hear it weighs as much as a bus!

  11. 43 minutes ago, homestar_kevin said:

    I've been using the SLR Magic 8mm on my gx85/g7 and BMPCC and agree completely.

    Great image, great small little lens. Not the best to work with, but the for the price and image, it's definitely awesome

    I haven't seen it discussed much for video (maybe it was and I missed it) but lots of people have looked at it for photos.  It was the widest non-fisheye lens that I could find for a reasonable price.   It looks hilarious on the GH5 because it's so small and thin!

    My strategy has been to get a 17.5mm lens for generic people / environment shots, and the 8mm for those wider "wow" scenery shots where you want the grand look of a wide angle, and it's been great for that.  Today was my third day in India and it hasn't come off the camera yet as everything here seems to lend itself to that "wow" aesthetic - look at that building - look at how many people there are - look at this amazing chaos...  :)

    Basic grade and crop....

[two images attached]

    I'm still making friends with the GH5 so these might not be sharp, but they certainly suit the geometric nature of the architecture :)

  12. 9 hours ago, webrunner5 said:

    I'd be sceptical, the bridge in the sample shot goes exactly through the middle of the frame, so it won't show many forms of distortion.  7artisans does have a range of lenses and a track record though, so who knows.

I've been using the SLR Magic 8mm F4 on my GH5 and I'm really enjoying the image, but it's also designed as a drone lens, so the ergonomics aren't that great!

  13. 22 hours ago, Shirozina said:

    That's not the bit depth but the Chroma subsampling in Y'CbCr codecs. Even in 10bit  the colour information is very compromised compared to the Luma information. 

So, if I understand correctly, the colour information is worse than it would be in a theoretical 8-bit RAW codec then?
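Roughly speaking, the subsampling point can be shown with a quick count of samples per frame. This is just a sketch - the resolution numbers assume a UHD frame, but the subsampling ratios are the standard ones:

```python
# In 4:2:0 Y'CbCr, each chroma plane is stored at half resolution in
# both dimensions, so most of the data describes brightness, not colour,
# regardless of whether the samples are 8-bit or 10-bit.

def sample_counts(width, height, subsampling):
    """Return (luma_samples, chroma_samples) for one frame."""
    luma = width * height
    if subsampling == "4:4:4":
        chroma = 2 * luma                          # full-res Cb + Cr
    elif subsampling == "4:2:2":
        chroma = 2 * (width // 2) * height         # half horizontal res
    elif subsampling == "4:2:0":
        chroma = 2 * (width // 2) * (height // 2)  # half in both axes
    else:
        raise ValueError(subsampling)
    return luma, chroma

# For a UHD frame in 4:2:0, each chroma channel carries only a quarter
# as many samples as the luma channel.
luma, chroma = sample_counts(3840, 2160, "4:2:0")
```

So bit depth and chroma subsampling are separate axes: 10-bit 4:2:0 still has quarter-resolution colour channels.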

  14. 10 hours ago, Kisaha said:

    Great! I am going your way too!

I am staying with NX at the moment (I have 4 cameras and 8 lenses plus unlimited accessories), using those as my hybrid cameras, and I will eventually buy a P4K as a dedicated video camera, until I change hybrid systems altogether sometime in the next couple of years. Photos are just a small portion of my pro life anyway, so it doesn't make a lot of sense to invest a lot of coin that way.

    I am waiting for your real world experiences and comments eagerly.

    It will be difficult to match the cameras, I have filmconvert and maybe that helps a little.

Does Resolve have support for the NX1 colour and gamma profiles?  If so, you could use the Colour Space Transform OFX plugin to convert both types of files to a common colour space.

Although it's not perfect, I've had good results and it does most of the work.  I just wish it had support for Protune so I could also use it to match GoPro footage.

  15. On 11/9/2018 at 12:16 PM, stephen said:

OK, maybe this statement (RAW has no color) is not that accurate. The point is that RAW has to be developed before you have the color of every pixel. You have a value for each pixel from the Bayer sensor, but it is one of the 3 basic colors only - green, red or blue - with different intensity. Before the debayering/development you don't have "real" colors - RGB values for each pixel - only one of those values: R or G or B. The other two are interpolated, "made" by the software. So before developing the image you can't measure anything related to color.

This process has 3 variables (actually more): 1 - the sensor and other electronics around it. Let's call it hardware. 2 - the software that does the debayering/interpolation. 3 - the human deciding which parameters to use for the development. For color there are many parameters that can be changed in the software. So how are you going to measure the developed image coming from RAW for color accuracy (because RAW itself can't be measured), when it depends on so many parameters and most of them are not related to the camera?

Yes, I watched the video and totally agree with Tony that for RAW there is no point measuring the color accuracy of the camera or the camera's color science, as color depends on too many variables and parameters outside of the camera. You can literally get any color you want in the program.

Now Mattias and many other people argue that every camera (sensor and electronics in the camera) has a specific signature that affects the RAW image and, as a result, the final/developed image. This is true. In that sense not all RAW is equal. Yes, indeed, it's one of the variables in the process and for sure has an impact on the final image. Dynamic range of the sensor, for example, definitely affects the final image. But for colors specifically, my argument is that all those differences in the sensor are easily obliterated by the software. Remember, 2/3 of the color information is made by the software. It is the software (algorithm), and the human behind it, who has the final say on what color a pixel and the whole picture will have. So for me, when people say different sensors/hardware give them differences in colors, they mostly mean: different sensors/cameras give me different colors in MY workflow. :) You can perfectly color match photos from different cameras/sensors. Same for video.

    So we agree to disagree here :)

    Good post.

    I think we're mostly agreeing, but there are aspects of what you said that I think are technically correct but maybe not in practice.

    @TheRenaissanceMan touched on two of the biggest issues - the limitations of 8-bit video and the ease of use.

It is technically true that you can use software (like Resolve or Photoshop) to convert an image from basically any set of colours into any other set of colours, but with 8-bit files you may well find that the information needed to do a seamless job of it is missing.  Worse still, the closer a match you want, the more manipulations you must do, and the more complicated the processing becomes.

In teaching myself to colour correct and grade, I downloaded well-shot high-quality footage and found that good results were very easy and simple to achieve.  But try inter-cutting and colour matching multiple low-quality consumer cameras and you'll realise that in practice it's either incredibly difficult or just not possible.
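A minimal numerical sketch of that 8-bit limitation: push the values down two stops and back up while staying in 8-bit integers, and most of the distinct levels never come back - that's the banding you see when grading thin files. (The two-stop figure is just an example.)

```python
# Simulate "underexposed footage brightened back up" entirely in 8-bit.
# Each quantisation step throws information away permanently.

def darken_then_restore(value, stops=2):
    """Darken an 8-bit value by `stops` stops, quantise, brighten back."""
    factor = 2 ** stops
    darkened = min(255, round(value / factor))  # quantised to an integer
    return min(255, darkened * factor)          # restored, gaps and all

# Of the 256 distinct input levels, far fewer distinct output levels
# survive the round trip.  In floating point (or 10-bit and up) the
# same round trip would be close to lossless.
surviving = {darken_then_restore(v) for v in range(256)}
```

Those missing levels are exactly what's unavailable when you try to match a heavily pushed 8-bit consumer file to better-captured footage.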

    21 hours ago, Andrew Reid said:

    Funny thing is, colour isn't even just a science

    Absolutely!

  16. 21 hours ago, stephen said:

    There are no colors yet in RAW.

Manufacturers design the Bayer filter for their cameras, adjusting the mix and strength of various tints in order to choose which parts of the visible and invisible spectrum hit the photosites on the sensor.

    This is part of colour science too.  Did you even watch the video?

You should all hear how the folks over at liftgammagain.com turn themselves inside out over browser colour renderings...  almost as much as the client watching the latest render on grandma's 15-year-old TV and then breathing fire down the phone about how the colours have all gone to hell...

    I'm sure many people on here are familiar with the issues of client management, but imagine if colour was your only job!!

  18. 4 hours ago, capitanazo said:

Hi, I want to know which codecs are played back by the GPU in Premiere or Resolve.

I have to work with a lot of tracks and clips, and I get lag with my CPU at 100% while my GPUs do practically nothing lol.

I have an RX 570 8GB and a Ryzen 5 1600, 16GB DDR4 RAM, and SSD storage for the cache and the files.

Any codec that will increase performance and use the GPU would be appreciated.

     

     

    Good question.

    Here's some info:

    Quote

However, for editing, VFX and grading, the compressed data needs to be decompressed to the full RGB per-pixel bit depth, which can use four times or more the processing power of an HD image for the same real-time grading performance. The decompression process, like compression, uses the CPU, so the heavily compressed codecs need more powerful CPUs with a greater number of cores. H.264 and H.265 are heavily compressed formats and, while not ideal for editing, are often used by lower cost cameras. If you use these types of compressed codecs you will need a more powerful CPU, or be prepared to use proxies or Resolve's optimised media feature.

    Once the files are decompressed DaVinci Resolve uses the GPU for image processing so the number of GPU cores and the size of GPU RAM becomes a very important factor when dealing with UHD and 4K-DCI sources and timelines. For VFX each layer of images uses GPU memory and so any GPU with a small amount of memory will have a performance hit as you add layers up to the level where the GPU just has insufficient memory and performance.

For uncompressed images in industry standards like DPX or EXR, particularly with 16-bit files, the CPU has an easier time as these are efficient codecs. However, they place greater demands on the disk array, RAID controller, storage connection and even the file system itself.

Audio facilities who plan to use flattened HD video files won't need as powerful a GPU or disk I/O, as audio files by their nature are smaller than video. But remember, if you are importing a DaVinci Resolve project file or sharing projects on a central database, you may be opening projects with complex video timelines, a variety of image formats and codecs, and potentially demanding VFX elements, so even the most basic audio system should be prepared for these demands.

    (Source)

    However, it seems some codecs are supported.  V14 release notes:

    Quote

    • Added support for hardware accelerated HEVC decode on macOS High Sierra
    • Added support for hardware accelerated HEVC encode on macOS High Sierra on supported hardware
    • Added support for hardware accelerated HEVC decode on supported NVIDIA GPUs on DaVinci Resolve Studio on Windows and Linux

Unfortunately there doesn't seem to be a complete list anywhere; the "what's new" feature lists are only incremental, and BM don't use consistent language in those posts, so searching for phrases doesn't work either (sometimes they say "hardware accelerated", sometimes "GPU accelerated", etc).

My guess is that you should just set up a dummy project with a high-resolution output and export it in every codec you can think of (just queue them up, hit go and walk away), then pull all of them into a timeline and compare the FPS and CPU/GPU load for each type.  It's a bit of admin you shouldn't have to do, but it will definitely answer your question.
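If you want to rough out the same comparison outside Resolve, something like this builds the transcode and decode commands you would then time. It assumes ffmpeg is on your PATH, the codec names and flags are ffmpeg's (not Resolve's), and a timed ffmpeg decode is only a proxy for Resolve's actual playback performance:

```python
# Sketch: transcode one test clip into several codecs, then decode each
# as fast as possible ("-f null -" discards the output), so wall-clock
# decode time gives a rough per-codec comparison.  The source filename
# "source.mov" is a placeholder for your own test clip.

CODECS = {
    "h264":   ["-c:v", "libx264"],
    "hevc":   ["-c:v", "libx265"],
    "prores": ["-c:v", "prores_ks"],
    "dnxhd":  ["-c:v", "dnxhd", "-b:v", "185M"],
}

def encode_cmd(src, name, args):
    """Build the ffmpeg command that creates one test file."""
    return ["ffmpeg", "-y", "-i", src] + args + [f"test_{name}.mov"]

def decode_cmd(name):
    """Build the command that decodes a test file, discarding output."""
    return ["ffmpeg", "-i", f"test_{name}.mov", "-f", "null", "-"]

commands = [encode_cmd("source.mov", n, a) for n, a in CODECS.items()]
# Run each command with subprocess.run(), wrapping the decode step in
# time.perf_counter() calls to compare per-codec decode cost.
```

It won't tell you which codecs Resolve hardware-accelerates, but it will quickly show which ones are cheap or expensive to decompress on your CPU.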

  19. 5 hours ago, BTM_Pix said:

I think it's a bit of a double whammy, in that gimbal options for the Pocket4K (as you can see from this thread and the separate one) are a bit sketchy at the moment, as is the lack of IS on a lot of the lenses people are using.

The Sigma 18-35mm has become pretty much the de facto "standard" lens over the past few years, and whilst its lack of IS didn't really matter much when the camera it was mounted to had IBIS and/or was on a gimbal, it really does matter when neither of those avenues are available.

    That's why I think people might want to consider options like that cheap Tamron and the like if they are going to be shooting handheld as, pragmatically speaking, the IS arguably matters more in that context than the pure optical performance advantage of something like the 18-35mm that doesn't have it.

    Or they could treat it like a cinema camera ???

    Besides, we all know how that song about camera stabilisation went - "if you liked it then you should have put a rig on it".

     
