Leaderboard

Popular Content

Showing content with the highest reputation since 02/14/2026 in Posts

  1. Panasonic just released a new on-camera mic. Looks like an excellent option for events etc. where you want something small or really flexible. I've watched a few YT showcases and, for me, the best features are:
- It gives you 32-bit without having to have the external mic preamp box (and then adding microphones to that, making it larger again)
- It's small, much smaller than an on-camera shotgun mic
- You can quickly swap between modes
- It's powered by the camera (I assume?)
- It unlocks the ability to record >2 channels of audio into the files (one person said you can record left/right/mono/mono-20dB as one combo, and left/right/left-20dB/right-20dB as another)
It's definitely not magic and the laws of physics still apply. There don't seem to be any really good on-location stress tests posted yet, but there are a few examples. Media Division did an in-kitchen test comparing it to in-camera mics, a lav, and a DJI clip-on, and also applied a bit of AI voice isolation to see how far you can push it. Dustin did some good tests, including walking a 360 around the camera in each mode, which showed how directional it is; that seems pretty impressive. He also compared it to the Sennheiser MKE440. Another video shows the different modes out in nature. This is probably a complete revolution for a number of niche uses. Content creators would be one, where they're recording in noisy environments but still staying relatively close to the camera, where physics will be helping them. Another is where the flexibility really helps, like shooting events where getting pristine audio isn't an absolute must but working super-quickly is more important, and perhaps the 32-bit would really come into its own. This reminds me of how people talked about Panasonic back when the GH4 and GH5 were around: that Panasonic just listened to people and implemented the features people would actually use, rather than trying to be flashy and grab headlines.
This will be an invisible workhorse for lots and lots of people.
    4 points
  2. Getting prepped for my next trip and have further refined my setup. This trip is a quick one to China, but it's also a test case for a trip I'm taking later in the year to Europe, where the packing approach will be minimalism. Unlike the way I like to travel in Asia, the Europe trip will involve changing accommodation every few days, so packing, unpacking, and hauling bags around will be much more of a pain, so I'll try to travel really minimally. As such, my approach for this trip is "when in doubt, don't take it" and see what I actually use. So the setup for this trip is:
- GH7
- 14-140mm F3.5-5.6 zoom, which I use during the day at F5.6, which means my 1-5 stop vND is enough
- 12-35mm F2.8 zoom, which is a great walk-around lens after dark
- Takumar 50mm F1.4 with M42-MFT Speed Booster (with bokeh insert) for "night cinema"
- iPhone 17 Pro setup (Neewer phone filter mount, K&F 1-9 stop vND, MagSafe Popsocket)
The GH7 and zooms are self-explanatory, so here's the 50mm F1.4 setup. I have played around with "inserts" and ended up with a pretty extreme design, so this is a test to see if the vertical edges are too strong a look for me. It's made from the sticky part of a post-it note, with a layer of sticky tape over the top to keep it a bit more together. It sits between the Speed Booster and the lens, and I won't use the Speed Booster for any other lenses while travelling, so it will stay in there and protected and doesn't need to be that robust. It's a strong look in some situations and quite "painterly" in others, so I'll be curious how it goes. For my iPhone 17 Pro, it's a phone most of the time and a camera only as a backup, so I searched for a setup that would:
- Protect my phone from drops (I dropped it on the last trip and the screen shattered, despite it being in an Apple case, the only one available at the time... sigh)
- Still be right-sized for getting in and out of pockets etc.
- Have a vND solution for when I want to shoot with a 180-degree shutter
I'll spare everyone the rant about the options out there (everyone wants you to buy into their "ecosystem" now). I ended up with the Otterbox Defender Series Pro case, which makes the iPhone feel even larger than it did in the Apple case (which doesn't seem possible but is true), but seems very robust. The vND is the Neewer phone filter mount, which sort-of clips onto the phone (it's designed to screw onto and clamp the phone, but you're clamping against the screen, so I wouldn't tighten it that much). It's designed for a naked iPhone, so I had to modify it (and the Otterbox case) slightly where the two interfered, to get it to sit a bit flatter. It still doesn't sit flush, but it goes on and seems to be fine. I haven't got around to actually taking it out to shoot with, so that remains to be seen. I paired it with the K&F 1-9 stop vND, which boasts 18 layers etc., but doesn't claim to be a "True Colour" one like the 1-5 stop ones do. It doesn't have hard stops and I think it still gives the X at the maximum amount, but I'll see how I go. Not having an aperture sure sucks, although you're not really losing shallow DOF since the phone never had it. That is all combined with the MagSafe Popsocket as a safeguard. I've used the adhesive Popsockets before and they're great for giving a much better grip on the phone, but I wasn't sure how strong the MagSafe connection would be. The Otterbox claims to have magnets in it that strengthen the MagSafe connection, and this might be true. It feels quite sturdy actually: I tested it to require 1.75kg of force to pull off, compared to the 1.45kg it took to pull it off my naked iPhone 12 mini. No idea what strength a naked iPhone 17 Pro MagSafe connection would have, but it's not terrible.
Lots of compromises involved, but it's really my backup camera, and the Otterbox case is very grippy, so I'll see how I go.
    3 points
  3. Regarding voice AI. Hoo boy. As a documentarian, this one can affect me a lot. A lot of ills can be smoothed over with AI audio. But... at the end of the day, it's an ethical choice how it's employed. I've decided to ONLY use it to salvage VERBATIM lines from interviews and field audio that is distorted beyond comfort; like wind noise or clothes rustling. And even then it's a last-ditch option after audio EQ/RX tweaking. The best thing to do is just not 'f up the field production to begin with. Beyond that, if AI is used as a production shortcut to solve a storytelling/crafting failure as a filmmaker, I now consider AI use untenable for me. It's simply on the wrong side of things morally when it comes to making honest doc films. Sadly, I fear that's now a contrarian opinion; an "old-fart" opinion. No one probably really gives a shit anymore about these sorts of "cheats" 'cept me.
    3 points
  4. I'll have a dig and host them here if I find the mother lode
    2 points
  5. Well, there's one hell of a metaphor in the "context" of this.
    2 points
  6. "Tsumura-san confirmed that photographers who compose through viewfinders “strongly request the inclusion of an EVF,” and that Panasonic is considering the balance between compact size and EVF inclusion as they work to “meet the expectations of as many customers as possible.” LOL. GM5 anyone?
    2 points
  7. Yes, I don’t get it either. You are in the full-frame LUMIX ecosystem and are considering a new camera and on your interest list is the latest options from Nikon and from Sony. You are increasingly tempted to jump ship even though it’s a big move. Which one of the below options do you wish to hear? A: We are working on a new flagship camera that will be launched in the Summer. B: Expanded horizons, creative direction, global market strategies, Operation Epic Bullshit, waffle waffle waffle, blah di blah di blah. I didn’t watch it and haven’t read a single word of it but 100% sure it wasn’t A: A perfect example of how not to retain customers. There is a saying which applies to all business and that is no matter the size, small, medium or large/international, just because you are in business, doesn’t mean you are any good at business. Some companies are more clueless than others…
    2 points
  8. This reminds me of using my old Sony camcorder with the 5.1 surround sound microphone. I would shoot with it, then in post be able to isolate each channel and choose which one to use and ignore the ones that were just location noise. Pretty handy without much effort when shooting. This might be similar in that sense.
    2 points
  9. Fair enough. I just wish they'd spent the time and resources elsewhere. I want a small, up-to-date Panasonic M43 camera, not an overly complex version of an on-camera mic. This seems like a great product for 2010. But what do I know; maybe this is THE MIC, the one everyone was waiting for. What I can tell you with 100% certainty is that people are ready and willing to pay vast sums of money for old gear, only because it's small. What happened, Panasonic? The whole miniaturization-of-components thing has apparently been disregarded.
    2 points
  10. Come on John... everyone knows that anyone who wants better footage than a smartphone can provide is 100% totally fine with a camera the size of a microwave oven that looks like a Borg prototype! Being slightly serious though, it's easy to criticise, but as someone who wants flexibility and better sound options, this is FAR better than the previous options, so it's a welcome addition in my eyes. The worst enemy of progress is criticising everything that isn't perfect in every conceivable way.
    2 points
  11. It's great they came out with something new, but I wish they'd spent their time elsewhere. This product just doesn't seem like a priority. If audio is the priority, I'd rather a set of 2+ wireless lav microphones that connect and record to the camera via bluetooth or wifi. Why hasn't anyone done that?
    2 points
  12. While AI can be employed for positive or negative things, there's a bigger outlook at play for me. Robert Pirsig's famous musings are where I want to stand philosophically. His theories, and my limited understanding of them, are pretty much the reason why I ultimately view AI unfavorably.
    2 points
  13. Emanuel

    A/The Legend. RIP

    https://www.indiewire.com/news/obituary/robert-duvall-dead-1235141818/ The world will be much emptier without him. He was one of my favourites—and, for sure, one of many others here too. RIP, you’ll be truly missed, not just on screen.
    2 points
  14. Agreed. I felt the same about the G9II. Lowlight is good. Full frame cameras seem to just be INSANELY good, and the crop of full frame cameras has mostly been this way for the last few years. I owned the Nikon Z6 OG from 2020-2025. It has the same IMX410 sensor found in the Sony a7III and Panasonic S5/S1/S5II/S5IIX. That sensor, despite being used in 7-year-old bodies like the Z6 or a7III, is great in lowlight; 12,800 ISO and 25,600 ISO never looked bad to me. I used to push the Z6 so hard with wedding films, even dipping into 51,200 ISO, and the noise was always usable. I dabbled a bit with the G9II in January, and lowlight seemed noticeably worse, but at the same time it wasn't BAD per se and cleaned up well in post. Again, I think it's just that full frame cameras are insanely good. But then again, so are crop sensors lol... I was just running some lowlight tests with my friend's $649 Canon R50V. With some denoise in DaVinci Resolve, 12,800 ISO looked great to my eye. Nuts! 12,800! $649 used to get me a Panasonic G7 and a decent lens... how far all these cameras have come. I couldn't dream of getting that type of result on the G7, but this $649 R50V was extremely impressive lol. We are so dang spoiled. I ended up getting a used Canon R6 OG for a very good price ($929); overheating aside, it's a wonderful cam for around $1k, and looks great at ISO 25,600. I think my biggest issue with the G9II was that PDAF seemed to shut off, or be used a lot less, above ISO 2500 or 3200 in a lot of cases; meaning if you want to rely on autofocus, it's hard to really push things. Because I did find that with some denoise, ISO 6400 and 12,800 were honestly not bad. Maybe I also didn't have the most optimal lens choices... but when I was recently filming at a summer camp where they had a Canon R5 (so I could use the R5 when I wanted and my G9II when I wanted), the R5 seemed to wipe the floor with the G9II at ISO 3200 and above, especially when pushing things.
And unfortunately I just seem to have times where I need to shoot in very very lowlight settings. So full frame is a big help.
    2 points
  15. Yep that's the kind of intermediary codec that the ZR needs. But only if Nikon doesn't cook it with that aggressive noise reduction. You know the drill, Fuji had similar issues I seem to remember you pointing out. Thing is, I don't buy cameras based on promised or wishful features anymore. Been burned too many times waiting for "coming soon" updates that arrive late, incomplete, or not at all. So as of right now, the ZR is off the table for me. It's not just the codec situation though tbh, the unreliable view assist/exposure tools and first gen quirks also give me cold feet. Good to know LT is officially on the roadmap, though.. great for early adopters but I think I'm done gambling on "maybe later".
    2 points
  16. Hello, I hope everyone is well! Even though I'm not really active on camera forums anymore, I frequently read the EOSHD blog and every now and then the forum, so I saw the thread and thought I would respond. Because it wasn't "poof gone"; it was announced on the channel over a year ago and mentioned in the last three videos. Before going into why: I'm super flattered that this thread exists. I mean that. So here are some thoughts on the matter and why I took it down.

Hobby vs Work
YouTube was never my job, just a hobby. So were video making and photography, in the beginning. When starting the channel I was working as a producer, after a couple of years as a radio/TV reporter. So I started the channel to keep my practical skills fresh, and to keep up with the development, which was huge at the time: the DSLR revolution, Blackmagic, cheaper editors etc. Fast forward a couple of years and I started making more videos at work again. At the same time I pretty much lost all interest in doing it as a hobby, and actually canceled the channel. Winston Churchill was definitely right in saying that work and hobbies should not be too similar. But what I had discovered was a passion for still photography, which I had pretty much no experience with. So I started making videos again. That's why my videos became very repetitive and short. I didn't care about that part; I just wanted to display my stills work, get feedback, talk to the community, experiment with cameras, and develop. After a few years I became a good enough photographer that my new employer noticed, and just like that I was shooting stills professionally all the time. And I still do (I work in marketing and PR). It's a huge bonus in my field, and if you are good at it you will never be out of work. So photography also became less and less of a hobby. Instead I found other hobbies. They were things that, for example, got me out into nature, so photography tagged along for a while as a secondary activity.
But eventually it faded. It was also nice to do things and not share them with people. I know I could probably have a very successful channel by making videos about my current hobbies, and even make some money. But I never really wanted a channel for the sake of a channel, and I always had a full-time job. The fact is that at no point would I have been able to live off my channel, not even at the peak. Even with sponsors it was never more than a regular salary (in my field and country). But as long as it was a hobby and I was glad to do it, it was a welcome addition to finance camera gear.

Time
At the same time as my channel started to feel less fun and other hobbies started taking my time, I started a family. So... you get the idea: full-time job + family + 2-3 hobbies = no YouTube.

Upkeep
So why take it down? Why not leave it for the community? I did... at first. Like some of you pointed out, the YouTube crowd in the photography/video space is generally nice and positive. That is my experience as well. Early on I learned that a good way of keeping the trolls away was to be present: respond and engage. Trolls are usually idiots or cowards, so they don't like getting pushback. But once I stopped making videos, views and comments obviously went down, and the trolls started coming back. Not so much after me, and I don't care about that, but against the community. The people commenting started being nasty towards each other. I felt a responsibility to moderate, which was annoying. That's when the thought of simply removing it started to grow. It wasn't an impulse; it was an internal debate that went on for months. And the issue grew much, much larger than a couple of trolls. I started thinking about five years ahead, 10 years, 30 years... This post is already way too long so I won't go into all of it. But I think you get the idea when I say: privacy, or when the content no longer reflects the creator.
Digital minimalism, control over one's narrative, inactive or outdated content. Risk of misuse of content due to me not checking the terms updates. Closure. So there is a looong ramble :) To keep in the spirit of the forum, I can share my current gear for pro work :) For the longest time I used the EOS R for 75% of all my work and the R5 (rental) for the rest. It wasn't mine, but my employer told me to buy whatever I wanted. Paired it with a 28, 35 and 70-200. 70/30 stills/video. The R5 is peak camera imo. Today is a little different. I started working for a new company about a year ago and again was told to buy what I needed. I would have bought the R5 without hesitation if it wasn't for the Sigma 35-150/2-2.8... I just had to have it. So I ordered the Nikon Z6III. It's not as good overall as the R5 for me and what I like in a tool camera, but it's 90% there, and coupled with that lens it becomes on par. //MB
    2 points
  17. «As a Fuji user, that smooth autofocus on the BM6k makes me cry.» «I sold my X-H2s and all the lenses a few months ago to get BMD Cinema 6K, and I never looked back since.» source «The most significant thing about all this, as you pointed out in the video, is that Blackmagic is essentially giving autofocus to all of us who already bought their camera, instead of releasing a Blackmagic 6K Full Frame Pro with autofocus just to make us open our wallets again. They may be losing money in the short term, but in my view they are gaining in the long term, because the trust the brand inspires is truly remarkable.» source Disclaimer: Happy camper as Blackmagic shooter over here! Looks like I am not alone... - EAG :- )
    1 point
  18. I have the Sanity hacks (not 100% sure), Apocalypse Now (Drewnet), Cluster X (Drewnet), and Cluster X (Moon). I wish I took better notes. Why are personal view forums down? That's a good site.
    1 point
  19. I'll genuinely appreciate that a lot. Please preserve the gh2 history, don't let it vanish
    1 point
  20. I don't think this makes it a cinema camera...
    1 point
  21. No manufacturer is going to reveal a future product before it is ready to be sold unless they are in dire straits and their current products have zero chance of selling. To me it seems that manufacturers consider small cameras more entry-level and make a progression so that with each step up, most aspects of the next higher-level camera are better than the level below, except for size and weight, and the cost increases along with weight, features, performance, and quality. Since Panasonic have (more expensive) 35mm full-frame cameras, they have an incentive to make the Micro Four Thirds products lesser in most ways, to motivate people who can afford the FF to go with it instead of the MFT. Sony does emphasize small size and low weight throughout their stills/hybrid camera lineup. A small camera is more difficult to make more powerful (in terms of performance, image quality, high-end video codecs etc.), and people will invariably complain about whatever its flaws may be, be it lack of efficient codecs, overheating, operation etc. IBIS makes the camera significantly more expensive. In the small-sensor class, IBIS would be useful (just as it is with larger cameras), but it would noticeably increase the camera's size, weight, and cost, hence reducing the advantages of small size, light weight, and moderate to low cost. And this class of cameras is competing with smartphones as well, due to their pocketability and communications abilities. It's just a tight place to be in. Probably this is why Nikon discontinued the 1 series and Canon their M system. Full-frame telephoto lenses have also gotten much smaller and lighter in recent years.
    1 point
  22. Why would anyone use MFT today? Either they want a small, capable camera setup (under 500g with lens), or they want high-end telephoto quality without the extreme weight. Almost everything else can be done with a larger sensor without too much weight penalty, which is the point. You'd think in this interview he'd at the very least mention that there's a small MFT camera in the pipeline. Instead, they say they're committed to MFT (while simultaneously discontinuing lenses like the 20mm f/1.7). They've had the G100/D since 2021, using the same tech as from 2015, only worse (no real IBIS). Meanwhile, the now old and abused GM5 is poised to overtake cameras like the GH6 in price on MPB. Does anyone need more proof than that of what consumers seriously want today? Come on Panasonic! Get your $hit together!
    1 point
  23. I really appreciate you taking the time to come post about this on here - I was wondering what happened and I'm glad I found this thread when I checked again recently. I found some of your old videos + reviews really entertaining and useful when I was considering trying blackmagic cameras, so thanks very much for that. I have a lot of fondness for some of those videos, and if I'd realised the channel would completely disappear I probably would've saved them. 10 years from now it would be nice to be able to look back on them.
    1 point
  24. This. Every review for the past decade has talked about firmware updates: "hopefully the brand will do this or fix that". They never do. I'm in the Nikon ecosystem, so I'm tempted to sell my X-H2S for the ZR, but it's more of a sideways move. I'd lose open gate, and Fuji has a great image. But the Fuji isn't reliable, especially with AF.
    1 point
  25. Agree this is a waste of time. They should say "a cinema camera is coming" or "the S1ii is our cinema camera". Either would help a lot of consumers decide what their next move is.
    1 point
  26. On the CineD website there is a text version summarizing the interview; much less time-consuming to digest.
    1 point
  27. Doh - forgot to list the 9mm F1.7 lens. That's the ultra-wide I'll be taking too. So the total count is one body, 5 lenses, and my phone with vND. I was slightly conflicted about the "wide-angle night cinema" slot. The SB + 50/1.4 is equivalent to a 71mm F2.0 on FF, so having something wider seems an obvious thing, but I'm just not sure if I would use it. I've mentioned the 12-35mm F2.8 as my night walk-around lens, and combined with the GH7's low-light capability it's a fine combination, but it's not crazy fast/bright and isn't the best "cinema" option around. The things I considered were:
- my TTArtisan 17mm F1.4, which is small and light and, despite being soft wide-open, is probably quite cinematic
- my 14mm F2.5, which is small and light but is bettered by the 12-35mm on flexibility grounds, being a zoom
- my Voigtlander 17.5mm F0.95, which is a great performer but is quite heavy
- my c-mount 12.5mm F1.9, which is a similar FOV when you crop in to its S16 image circle
- my 9mm F1.7 combined with the GH7 cropping, which is fast but sacrifices resolution and doesn't have the DOF advantages of the other options (although I am already taking it)
- SB + 28mm F2.8 combos, but it's hard to get a reasonable-quality 28mm F2.8 in M42 mount and it's not that fast anyway
I opted to take the 12-35mm (which I sort-of take as a backup to the 14-140mm zoom), but if I do end up wanting a wider fast lens for night cinema, I think I might just bite the bullet and get the PanaLeica 15mm F1.7, as it'll be light, have AF, and be sharper than I could ever want. I looked at reviews of a bunch of budget F1.4-or-faster lenses around the 14-20mm mark, but I'd never be sure they were as sharp as I'd like, and spending money on something that isn't much faster than my 17/1.4 or much lighter than my 17.5/0.95 seems silly.
MFT is the wrong format for ultra-fast wide lenses, and I already have lots of options for something I might not use, so the whole thing might end up being academic anyway.
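For anyone wanting to reproduce the equivalence numbers above (SB + 50/1.4 coming out as roughly 71mm F2.0 on FF), here's a quick Python sketch of the arithmetic. It assumes a 0.71x focal reducer and the 2.0x MFT crop factor; the function name is just for illustration:

```python
# Full-frame equivalence for an adapted lens on MFT: both the field of view
# and the depth-of-field-equivalent aperture scale by (reducer * crop).
SPEEDBOOSTER = 0.71  # assumed reduction factor of the M42-MFT Speed Booster
MFT_CROP = 2.0       # MFT crop factor relative to full frame

def ff_equivalent(focal_mm, f_stop, reducer=SPEEDBOOSTER, crop=MFT_CROP):
    """Return (equivalent focal length, equivalent f-stop) on full frame."""
    factor = reducer * crop  # net 1.42x
    return focal_mm * factor, f_stop * factor

focal, stop = ff_equivalent(50, 1.4)
print(f"{focal:.0f}mm F{stop:.1f}")  # 71mm F2.0, matching the post
```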
    1 point
  28. Yes. This to me is just a somewhat expensive, limited use, vloggers device that has zero serious use case for my needs. I already have 2x Sennheisers that fill this role at €150 for the pair of them.
    1 point
  29. Looked at and decided very quickly it isn’t for me, but good to see LUMIX making stuff they at least think folks want. What they really want however is an S1H mk II in an FX3 style body with the screen mech from the S1II and the screen from the ZR. Then they will truly win the crowd and strut like gods of low-mid film-making.
    1 point
  30. Hi! I just stumbled upon this thread and thought I'd share an OKLab conversion DCTL I wrote about a year ago. It's written as a header file to be included in any other DCTL you may need it for. It supports conversion from ACES, DaVinci Wide and Rec.709/sRGB. OKLab_Transform.h:

#line 2

#ifndef ENCODING_ENUMS_DEFINED_IN_UI
enum Encoding { gAcc, gAcct, gDWI, gLIN, g709, gSRGB };
#endif

#ifndef COLORSPACE_ENUMS_DEFINED_IN_UI
enum ColorSpace { cACES0, cACES1, cDWG, c709 };
#endif

// =============================================================
// Util
// =============================================================

__DEVICE__ float powCf(float base, float exp)
{
    return _copysignf(_powf(_fabs(base), exp), base);
}

__DEVICE__ float3 VecMatMul3x3(const float3 m[3], float3 v)
{
    float3 r;
    r.x = m[0].x * v.x + m[0].y * v.y + m[0].z * v.z;
    r.y = m[1].x * v.x + m[1].y * v.y + m[1].z * v.z;
    r.z = m[2].x * v.x + m[2].y * v.y + m[2].z * v.z;
    return r;
}

// =============================================================
// Matrices
// =============================================================
// These matrices are the concatenated forms of (colorspace -> XYZ -> OKlms)
// or, XYZToLMS @ ColorspaceToXYZ and, XYZToColorspace @ LMSToXYZ
// original matrices are included as comments at the bottom of this script

// ACES (AP0)
// ==============================
__CONSTANT__ float3 mat_ACES0_LMS[3] = {
    {  0.90454662f,  0.26349909f, -0.15602258f },
    {  0.35107161f,  0.6766934f,  -0.03056591f },
    {  0.13684644f,  0.19250255f,  0.62038067f }
};
__CONSTANT__ float3 mat_LMS_ACES0[3] = {
    {  1.2881401f,  -0.58554348f,  0.29511118f },
    { -0.67171287f,  1.76268516f, -0.08208556f },
    { -0.0757131f,  -0.41779486f,  1.57228749f }
};

// ACES (AP1)
// ==============================
__CONSTANT__ float3 mat_ACES1_LMS[3] = {
    {  0.64173446f,  0.35314498f,  0.0171437f  },
    {  0.27463463f,  0.63099904f,  0.09156544f },
    {  0.10036508f,  0.18723743f,  0.66212716f }
};
__CONSTANT__ float3 mat_LMS_ACES1[3] = {
    {  2.04479741f, -1.17697875f,  0.10982058f },
    { -0.88115384f,  2.15979229f, -0.27586256f },
    { -0.06077576f, -0.43234352f,  1.57164623f }
};

// ITU BT.709
// ==============================
__CONSTANT__ float3 mat_709_LMS[3] = {
    { 0.4122214708f, 0.5363325363f, 0.0514459929f },
    { 0.2119034982f, 0.6806995451f, 0.1073969566f },
    { 0.0883024619f, 0.2817188376f, 0.6299787005f }
};
__CONSTANT__ float3 mat_LMS_709[3] = {
    {  4.0767416621f, -3.3077115913f,  0.2309699292f },
    { -1.2684380046f,  2.6097574011f, -0.3413193965f },
    { -0.0041960863f, -0.7034186147f,  1.7076147010f }
};

// Davinci Wide
// ==============================
__CONSTANT__ float3 mat_DWG_LMS[3] = {
    { 0.68570951f, 0.45574409f, -0.14156279f },
    { 0.27427422f, 0.81179945f, -0.08604675f },
    { 0.04351009f, 0.15072461f,  0.80624495f }
};
__CONSTANT__ float3 mat_LMS_DWG[3] = {
    {  1.8836253f,  -1.09713301f, 0.21364045f },
    { -0.63460063f,  1.57752473f, 0.05693684f },
    {  0.01698397f, -0.23570436f, 1.21814431f }
};

// OKLab <-> Cone Response
// ==============================
__CONSTANT__ float3 mat_LMS_LAB[3] = {
    { 0.2104542553f,  0.7936177850f, -0.0040720468f },
    { 1.9779984951f, -2.4285922050f,  0.4505937099f },
    { 0.0259040371f,  0.7827717662f, -0.8086757660f }
};
__CONSTANT__ float3 mat_LAB_LMS[3] = {
    { 1.0f,  0.3963377774f,  0.2158037573f },
    { 1.0f, -0.1055613458f, -0.0638541728f },
    { 1.0f, -0.0894841775f, -1.2914855480f }
};

// =============================================================
// Transfer Functions
// =============================================================

// ACEScc
// ==============================
__DEVICE__ float ACEScc_DecodeBase(float v, float a, float b, float upperClampThreshold, float lowerDecodeThreshold, float two_m16)
{
    float out = v;
    if (v >= upperClampThreshold) out = 65504.0f;
    else if (v < lowerDecodeThreshold) out = (_exp2f(v * b - a) - two_m16) * 2.0f;
    else out = _exp2f(v * b - a);
    return out;
}

__DEVICE__ float3 ACEScc_Decode(float3 in)
{
    const float two_m16 = _exp2f(-16.0f);
    const float a = 9.72f;
    const float b = 17.52f;
    const float lowerDecodeThreshold = (a - 15.0f) / b;
    const float upperClampThreshold = (_log2f(65504.0f) + a) / b;
    float3 out = in;
    out.x = ACEScc_DecodeBase(out.x, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
    out.y = ACEScc_DecodeBase(out.y, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
    out.z = ACEScc_DecodeBase(out.z, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
    return out;
}

__DEVICE__ float ACEScc_EncodeBase(float v, float a, float b, float negConstant, float two_m15, float two_m16)
{
    float out;
    if (v < 0.0f) out = negConstant;
    else if (v < two_m15) out = (_log2f(two_m16 + v * 0.5f) + a) / b;
    else out = (_log2f(v) + a) / b;
    return out;
}

__DEVICE__ float3 ACEScc_Encode(float3 in)
{
    const float two_m16 = _exp2f(-16.0f);
    const float two_m15 = _exp2f(-15.0f);
    const float a = 9.72f;
    const float b = 17.52f;
    const float negConstant = (_log2f(two_m16) + a) / b;
    float3 out = in;
    out.x = ACEScc_EncodeBase(out.x, a, b, negConstant, two_m15, two_m16);
    out.y = ACEScc_EncodeBase(out.y, a, b, negConstant, two_m15, two_m16);
    out.z = ACEScc_EncodeBase(out.z, a, b, negConstant, two_m15, two_m16);
    return out;
}

// ACEScct
// ==============================
__DEVICE__ float3 ACEScct_Encode(float3 in)
{
    const float a = 9.72f;
    const float b = 17.52f;
    const float X_BRK = 0.0078125f;
    const float A = 10.5402377416545f;
    const float B = 0.0729055341958355f;
    float3 out;
    out.x = (in.x <= X_BRK) ? (A * in.x + B) : ((_log2f(in.x) + a) / b);
    out.y = (in.y <= X_BRK) ? (A * in.y + B) : ((_log2f(in.y) + a) / b);
    out.z = (in.z <= X_BRK) ? (A * in.z + B) : ((_log2f(in.z) + a) / b);
    return out;
}

__DEVICE__ float3 ACEScct_Decode(float3 in)
{
    const float a = 9.72f;
    const float b = 17.52f;
    const float Y_BRK = 0.155251141552511f;
    const float A = 10.5402377416545f;
    const float B = 0.0729055341958355f;
    float3 out = in;
    out.x = (in.x > Y_BRK) ? _exp2f(in.x * b - a) : ((in.x - B) / A);
    out.y = (in.y > Y_BRK) ? _exp2f(in.y * b - a) : ((in.y - B) / A);
    out.z = (in.z > Y_BRK) ? _exp2f(in.z * b - a) : ((in.z - B) / A);
    return out;
}

// Davinci Intermediate
// ==============================
__DEVICE__ float3 DWI_Decode(float3 in)
{
    float3 out = in;
    float a = 0.0075;
    float b = 7.0;
    float c = 0.07329248;
    float m = 10.44426855;
    float log_cut = 0.02740668;
    out.x = in.x > log_cut ? powCf(2.0f, (in.x / c) - b) - a : in.x / m;
    out.y = in.y > log_cut ? powCf(2.0f, (in.y / c) - b) - a : in.y / m;
    out.z = in.z > log_cut ? powCf(2.0f, (in.z / c) - b) - a : in.z / m;
    return out;
}

__DEVICE__ float3 DWI_Encode(float3 in)
{
    float3 out = in;
    float a = 0.0075;
    float b = 7.0;
    float c = 0.07329248;
    float m = 10.44426855;
    float lin_cut = 0.00262409;
    out.x = in.x > lin_cut ? (_log2f(in.x + a) + b) * c : in.x * m;
    out.y = in.y > lin_cut ? (_log2f(in.y + a) + b) * c : in.y * m;
    out.z = in.z > lin_cut ? (_log2f(in.z + a) + b) * c : in.z * m;
    return out;
}

// ITU BT.709
// ==============================
__DEVICE__ float3 BT709_Decode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.081f ? out.x / 4.5f : powCf((out.x + 0.099f) / 1.099f, 1.0f / 0.45f);
    out.y = out.y < 0.081f ? out.y / 4.5f : powCf((out.y + 0.099f) / 1.099f, 1.0f / 0.45f);
    out.z = out.z < 0.081f ? out.z / 4.5f : powCf((out.z + 0.099f) / 1.099f, 1.0f / 0.45f);
    return out;
}

__DEVICE__ float3 BT709_Encode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.018 ? out.x * 4.5f : 1.099f * powCf(out.x, 0.45f) - 0.099f;
    out.y = out.y < 0.018 ? out.y * 4.5f : 1.099f * powCf(out.y, 0.45f) - 0.099f;
    out.z = out.z < 0.018 ? out.z * 4.5f : 1.099f * powCf(out.z, 0.45f) - 0.099f;
    return out;
}

// sRGB
// ==============================
__DEVICE__ float3 sRGB_Decode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.04045 ? out.x / 12.92f : powCf((out.x + 0.055f) / 1.055f, 2.4f);
    out.y = out.y < 0.04045 ? out.y / 12.92f : powCf((out.y + 0.055f) / 1.055f, 2.4f);
    out.z = out.z < 0.04045 ? out.z / 12.92f : powCf((out.z + 0.055f) / 1.055f, 2.4f);
    return out;
}

__DEVICE__ float3 sRGB_Encode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.0031308 ? out.x * 12.92f : 1.055f * powCf(out.x, 1.0f / 2.4f) - 0.055f;
    out.y = out.y < 0.0031308 ? out.y * 12.92f : 1.055f * powCf(out.y, 1.0f / 2.4f) - 0.055f;
    out.z = out.z < 0.0031308 ? out.z * 12.92f : 1.055f * powCf(out.z, 1.0f / 2.4f) - 0.055f;
    return out;
}

// =============================================================
// Convert
// =============================================================

__DEVICE__ float3 Decode(float3 in, int tFunction)
{
    float3 out = in;
    switch (tFunction) {
        case gAcc:  out = ACEScc_Decode(in);  break;
        case gAcct: out = ACEScct_Decode(in); break;
        case gDWI:  out = DWI_Decode(in);     break;
        case g709:  out = BT709_Decode(in);   break;
        case gSRGB: out = sRGB_Decode(in);    break;
    }
    return out;
}

__DEVICE__ float3 Encode(float3 in, int tFunction)
{
    float3 out = in;
    switch (tFunction) {
        case gAcc:  out = ACEScc_Encode(in);  break;
        case gAcct: out = ACEScct_Encode(in); break;
        case gDWI:  out = DWI_Encode(in);     break;
        case g709:  out = BT709_Encode(in);   break;
        case gSRGB: out = sRGB_Encode(in);    break;
    }
    return out;
}

__DEVICE__ float3 OKLab_OKLCh(float3 lab)
{
    float C = _hypotf(lab.y, lab.z);
    float h = _atan2f(lab.z, lab.y);
    return make_float3(lab.x, C, h);
}

__DEVICE__ float3 OKLCh_OKLab(float3 lch)
{
    float a = lch.y * cosf(lch.z);
    float b = lch.y * sinf(lch.z);
    return make_float3(lch.x, a, b);
}

__DEVICE__ float3 RGB_OKLab(float3 rgb, int colorspace)
{
    const float3* mat;
    switch (colorspace) {
        case cACES0: mat = mat_ACES0_LMS; break;
        case cACES1: mat = mat_ACES1_LMS; break;
        case cDWG:   mat = mat_DWG_LMS;   break;
        case c709:   mat = mat_709_LMS;   break;
    }
    float3 lms = VecMatMul3x3(mat, rgb);
    float3 lms_;
    lms_.x = cbrt(lms.x);
    lms_.y = cbrt(lms.y);
    lms_.z = cbrt(lms.z);
    return VecMatMul3x3(mat_LMS_LAB, lms_);
}

__DEVICE__ float3 OKLab_RGB(float3 lab, int colorspace)
{
    const float3* mat;
    switch (colorspace) {
        case cACES0: mat = mat_LMS_ACES0; break;
        case cACES1: mat = mat_LMS_ACES1; break;
        case cDWG:   mat = mat_LMS_DWG;   break;
        case c709:   mat = mat_LMS_709;   break;
    }
    float3 lms = VecMatMul3x3(mat_LAB_LMS, lab);
    lms.x = powCf(lms.x, 3.0f);
    lms.y = powCf(lms.y, 3.0f);
    lms.z = powCf(lms.z, 3.0f);
    return VecMatMul3x3(mat, lms);
}

__DEVICE__ float3 RGB_OkLCh(float3 rgb, int colorspace)
{
    float3 lab = RGB_OKLab(rgb, colorspace);
    return OKLab_OKLCh(lab);
}

__DEVICE__ float3 OKLCh_RGB(float3 lch, int colorspace)
{
    float3 lab = OKLCh_OKLab(lch);
    return OKLab_RGB(lab, colorspace);
}

// =============================================================
// Ref Matrices
// =============================================================

// DWG -> XYZ
//  0.70062239,  0.14877482,  0.10105872
//  0.27411851,  0.87363190, -0.14775041
// -0.09896291, -0.13789533,  1.32591599

// XYZ -> DWG
//  1.51667204, -0.28147805, -0.14696363
// -0.46491710,  1.25142378,  0.17488461
//  0.06484905,  0.10913934,  0.76141462

// 709 -> XYZ
// 0.4123908, 0.35758434, 0.18048079
// 0.21263901, 0.71516868, 0.07219232
// 0.01933082, 0.11919478, 0.95053215

// XYZ -> 709
//  3.24096994, -1.53738318, -0.49861076
// -0.96924364,  1.8759675,   0.04155506
//  0.05563008, -0.20397696,  1.05697151

// XYZ -> LMS
// 0.8189330101, 0.3618667424, -0.1288597137
// 0.0329845436, 0.9293118715,  0.0361456387
// 0.0482003018, 0.2643662691,  0.6338517070

// LMS -> XYZ
//  1.22701385, -0.55779998,  0.28125615
// -0.04058018,  1.11225687, -0.07167668
// -0.07638128,
-0.42148198, 1.58616322 Using it another DCTL looks something like this: #line 2 #define ENCODING_ENUMS_DEFINED_IN_UI #define COLORSPACE_ENUMS_DEFINED_IN_UI #include "OKLab_Transforms.h" DEFINE_UI_PARAMS(p_InCSpace, Input Color Space, DCTLUI_COMBO_BOX, 2, {cACES0, cACES1, cDWG, c709}, {ACES (AP0), ACES (AP1), Davinci Wide Gamut, Rec.709 / sRGB / BT.1886}); DEFINE_UI_PARAMS(p_InGamma, Input Gamma, DCTLUI_COMBO_BOX, 2, {gAcc, gAcct, gDWI, gLIN, g709, gSRGB}, {ACEScc, ACEScct, Davinci Intermediate, Linear, Rec.709, sRGB}); __DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B) { float3 in = make_float3(p_R, p_G, p_B); float3 out = in; // Convert to OKLCh: float3 linear = Decode(in, p_InGamma); float3 oklch = RGB_OkLCh(linear, p_InCSpace); // Convert back: linear = OKLCh_RGB(oklch, p_InCSpace); out = Encode(linear, p_InGamma); return out; }
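As a quick sanity check outside Resolve, the ACEScct pair above can be mirrored in plain Python. This is just an illustrative sketch using the same constants as the DCTL; the function names are mine, not part of the header:

```python
import math

# Constants as defined in ACEScct_Encode/ACEScct_Decode above
A_LOG = 9.72
B_LOG = 17.52
X_BRK = 0.0078125            # linear-side break point (encode)
Y_BRK = 0.155251141552511    # log-side break point (decode)
SLOPE = 10.5402377416545     # linear-segment slope ("A" in the DCTL)
OFFSET = 0.0729055341958355  # linear-segment offset ("B" in the DCTL)

def acescct_encode(x):
    # Linear segment below the break, log segment above
    if x <= X_BRK:
        return SLOPE * x + OFFSET
    return (math.log2(x) + A_LOG) / B_LOG

def acescct_decode(y):
    # Exact inverse of acescct_encode
    if y > Y_BRK:
        return 2.0 ** (y * B_LOG - A_LOG)
    return (y - OFFSET) / SLOPE

# Mid grey should survive an encode -> decode round trip,
# and the two segments should meet exactly at the break point
mid_grey = 0.18
assert abs(acescct_decode(acescct_encode(mid_grey)) - mid_grey) < 1e-9
assert abs(acescct_encode(X_BRK) - Y_BRK) < 1e-9
```

The break-point check is the useful one: `SLOPE * X_BRK + OFFSET` lands exactly on `Y_BRK`, which is why the encode tests against the linear-domain threshold while the decode tests against the log-domain one.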
    1 point
  31. Another gem by @Henryo. Really makes me want to pick up my BMPCC again soon. Anyway, such a treat to watch:
    1 point
  32. We should hold theft in disdain. Not doing the stealing thing, after all, is one of the commandments in the Bible. I have a friend/colleague who has gone down the AI rabbit hole. He wants to only deliver videos with 100% generative AI. His argument is the hackneyed "It's just a tool". Well, a tool delivering mimicry from unauthorized sources is theft. "But humans copy each other all the time," he's said. Sorry, bud, you're just rationalizing stealing. Putting aside that human plagiarism is also theft, the process of being creatively influenced as a human is not the same thing. Humans filter all creative context through their own impressions, wisdom, experiences, empathy, and feelings. That particular matrix is infinite, random, and organic. The talented know how to tap into this mystic calculus, to develop their expertise, bend their skill set as a means to an end, and to use all of it to create something profound. Hacks (of which I am one, mind. Maybe a self-aware one, but still one nevertheless) can only regurgitate superficially. This lazy superficiality has now been globally scaled and monetized for the 1%. It sucks. Specifically, it sucks for me because those mediocre jobs of regurgitation used to be $$ in my pocket, not theirs. I had a skill of the craft that was worth a certain value. That value is diminished significantly. Yes, I'm bitter about it. Should I be? I may lack art, but at least I had craft. Be that as it may, my colleague's use of AI is especially galling as he's eager to brag about how hard it is to get the various AI systems he uses to comply with his prompts. Here's the thing: he's putting out animation-style videos. Do you know how difficult it is to be a craftsperson creating animation? Good god. And he says he's "working hard" doing prompts? The "it's just a tool" argument, to me, is like going into a museum to admire and marvel at the paintings and sculptures ... 
but then standing in front of a 10th grader's paint-by-numbers knock-off of "The Harvest" and insisting it also deserves as much admiration as the original Van Gogh -- or looking at some technical feat, like a 3D print of Michelangelo's David, and being, like, "Wow, the person that ran the 3D printer equipment to make a copy of that sculpture is so great!" Bull. Shit. Admiring the craft needs to also be part of admiring the art. If my colleague is so addled that he doesn't even see the repercussions of that craft-art divorce, he's probably hopeless. Worse, he keeps trotting out his latest video examples in a gee-whiz-isn't-this-great way to everyone around him -- as if we're supposed to be impressed? He's literally said, "I can finally make everything that's been in my head exactly how I see it!" "Make?" No, that ain't what's happening, not really. And the fact that he can't even recognize that he's not a "maker" is the real problem. People that are too shallow to cop to any of that, to appreciate what's being lost ... again, it's the deeper major problem with [waves arms around] all of this. I'm tired, hoss. Tired of shaking my fist at the clouds.
    1 point
  33. I am planning to use it on my custom phone with a Linos Mevis C-mount 16mm lens. But it will also work with a bigger sensor. In the past I have used Super 8 babies on full frame with great results.
    1 point
  34. Do you agree with my choices... And did I miss anything? https://www.eoshd.com/news/the-2025-hybrid-mirrorless-camera-rankings/ The 2025 camera rankings for video quality and value for money are in!
    1 point
  35. My take on the situation is that I'm super-happy with the GH7. It basically does everything I want, and apart from having ultra-sharp ultra-shallow DOF, pretty much does most things that FF does. It does low-light very well, and is only behind the low-light from FF cameras because they have gotten crazy good.
    1 point
  36. R3D NE's data rate is about 2x that of N-RAW Normal, which in turn is roughly 2x that expected of ProRes 422 LT 4K, which is coming to the ZR in a firmware update according to Nikon. So in the future you can get roughly a 75% reduction in data rate compared to R3D NE by using a 4K 422 format. Would this be enough to make the camera practical for you? Another possible help would be if video editors become able to make shortened R3D NE files (after cutting) in the future, to save storage space. I imagine this is just a matter of time; if the camera is popular, it will probably be implemented.
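The 75% figure is just the two rough ratios chained together; a throwaway sketch (the 2x values are the post's estimates, not measured bitrates):

```python
# Rough ratios quoted above -- approximations, not measured bitrates
r3d_ne_over_nraw = 2.0      # R3D NE ~ 2x N-RAW Normal
nraw_over_prores_lt = 2.0   # N-RAW Normal ~ 2x ProRes 422 LT 4K

# Chained: R3D NE ~ 4x ProRes 422 LT, so switching saves ~75%
r3d_ne_over_prores_lt = r3d_ne_over_nraw * nraw_over_prores_lt
saving = 1.0 - 1.0 / r3d_ne_over_prores_lt

print(f"R3D NE ~ {r3d_ne_over_prores_lt:.0f}x ProRes 422 LT; saving ~ {saving:.0%}")
```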
    1 point
  37. I'm glad Mr Burling replied. I was always happy to watch his stuff; it always felt handmade with a touch of love. A bit like Nonna's homemade tomato sauce... you always knew you were in for a treat. The man had a talent for finding interesting lenses and using them in ways I wouldn't consider. I own a couple of lenses thanks to him, and I do try to emulate him as I can. So thank you for the artistry and the knowledge you shared, and good luck for the future.
    1 point
  38. Django

    If not ZR, then Panasonic?

    I am currently doing my own ZR image testing with footage I shot in every codec, and the R3D RAW is beautiful, super rich, super detailed. Probably the best image you can get from a mid-priced mirrorless. But then there's the H.265 log conundrum. I just messed around with some log files and they're terrible. Reminds me of 8-bit FHD 5D3 footage. Actually the whole camera reminds me of my C200, which had either Canon RAW or 8-bit. I hated not having an intermediary codec, and it's the same scenario with the ZR. Quite a shame as I love the design, form factor and huge display. But it's basically a massive-RAW-file camera, which isn't going to be practical for a lot of people. I'm headed back to Canon as Sony is in limbo and I don't do L-mount.
    1 point
  39. Great to hear from you! It makes much more sense now we know why you brought the channel to a close and also chose to delete it eventually. All I can say is that I'll miss it, and I hope you find the passion to return one day to the tube. Having read that, I understand completely and found the same challenges myself too. Everyone can see that I struggle to get excited about the gear sometimes, whereas in the earlier "DSLR revolution" era, I'd be updating EOSHD 5 times a day, sometimes more. It is difficult when a passion becomes work, when enthusiasm becomes repetition, when an audience goes toxic, or when a big US tech company enshittifies the platform you're posting such valuable creativity on. I mean, look what they did to Vimeo. It's a difficult pill to swallow, and I've always struggled with my enthusiasm for YouTube as a platform as well, and there's very little in the way of alternatives. I'm just glad you're well and enjoying your R5... You're right in that it still holds up as near the peak even 5 years later, and the overheating drama is far behind it after Canon decided to undo the damage caused by their fake timers and cripple-hammer decisions.
    1 point
  40. MrSMW

    If not ZR, then Panasonic?

    I’m not ‘waiting’ for it as such, but based on my S9 experience, it could be something pretty special. If they get it right. If they even make one… I have gone backwards and forwards over what to do with my S9, which has at times been both A cam and B cam, but due to me picking up a pair of S1RIIs mid last year, plus certain limitations of the S9 (mainly build), I was going to let it go… But too many times I have let stuff go and regretted it, so I have repurposed it, giving it an XLCS cage, a super-lightweight tripod and 2 dedicated lenses, the 18mm and 85mm f1.8. I am hoping they do an S9II with a bigger screen à la ZR but with the S1I/R/E tilt option. And make it a bit more robust, but spec-wise, it’s already peak camera for my needs. So @gethin maybe look at an S9, because straight out of the box it’s very high spec and really it’s only the body that is a bit weak; beef it up with a cage and it’s 💪 And used, pretty cheap!
    1 point
  41. That's really unfortunate. His Vimeo is still up, and his Instagram too, though they haven't been updated recently. His content output decreased a lot once Gunpowder passed, but he had already been less active as I think he became more and more disillusioned with the entire YouTube/Filmmaking/Photography scene. I hope he is well and creating the art that he loves.
    1 point
  42. This is something I don't see people mentioning enough. That era, right before the gen2 S series cameras, they were nailing color science in Lumix bodies. I'm keeping my eye on used prices, sensor streaking issues 'n' all.
    1 point
  43. Henryo

    The D-Mount project

    Thanks for chipping in. Yes Pocket og II would do really well. I actually have had so much fun shooting with it since the video and will have more stuff to put out soon. Watch this space. When I say it is so much fun, it is actually very inspiring just taking it with me wherever I go with a few batteries and capturing life happening around me. It is a great storytelling tool. Take care and have a great holiday everyone.
    1 point
  44. Vimeo is in the process of deleting ALL my videos, due to their new policy of deleting their entire library of content for ALL non-current users without an active Pro paid subscription. Also, about 90% of my Vimeo was nerfed by the copyright music shambles, where Vimeo did the 3-strikes thing and they delete your entire account. So to avoid that, back in the day, I just decided to make these videos private and unlisted. I have not got round to putting it all on YouTube yet, but perhaps I should?
    1 point
  45. It once was that the pros had 16mm and 35mm film and the "amateurs" had Super 8 and videotape. Now that's all changed. Yesterday I was capturing some old videotapes from a friend's project that we did in 2011 on a Canon HV20. It looked amazing. I was expecting it to look worse than cameras of today, but it doesn't. It just shows that even a camera from then, with a CMOS chip from that era, MPEG-2 encoding, 4:2:0 chroma subsampling, a 1440 x 1080 frame size recorded for the widescreen image, and 8-bit colour, can still look amazing. Cameras have been very good for a long time now. The differences are mostly ergonomics and physical size. When deciding on a camera, you have to consider what you want to spend months living with.
    1 point
  46. I think people are mistaking pretty for good cinematography. There’s good cinematography and there’s bad cinematography, and then there’s cinematography that’s right for the movie. In this case it looks right for the movie.
    1 point
  47. I'd argue it is the MOST important, because without the camera you don't have a picture. It is the small differences between the latest sensors and codecs that are the unimportant thing. In cinematography, our job isn't to worry about the costumes or set pieces; that's the job of someone else. So lighting and camera are the most important for a DP. What has happened is that the gap between the top end, i.e. ARRI, and the cheap stuff has closed up. This has been going on ever since the start of the DSLR revolution, so it's not a new thing, but the gap has never been smaller than it is now, for example between something like the Alexa 35 and a $1000 used Panasonic S1H. By the way, although Magellan has beautiful content and really nice camera-work, the sharpness of it and the deep DOF isn't everybody's cup of tea. It does look a bit too soap opera in parts of that trailer, I think. It looks very different to an IMAX-shot film. So there are still big differences between formats and lenses... The same cinema focal length on 16mm, for example, has always looked vastly different to the same on IMAX or large format. Also there are big differences in grading style, camera movement style, and so on. I think most relevant for us is that you don't need to build a massive rig any more to get good results. It's horrible having the weight as a one-man DP. Probably why they used such a small camera on this.
    1 point