
Leaderboard

Popular Content

Showing content with the highest reputation since 02/22/2026 in Posts

  1. Panasonic just released a new on-camera mic. Looks like an excellent option for events etc. where you want something small or something really flexible. I've watched a few YT showcases and for me, the best features are:
     - It gives you 32-bit without having to have the external mic preamp box (and then adding microphones to that, making it larger again)
     - It's small, much smaller than an on-camera shotgun mic
     - You can quickly swap between modes (I assume?)
     - It's powered by the camera
     - It unlocks the ability to record >2 channels of audio into the files (one person said you can record left/right/mono/mono-20dB as a combo, and left/right/left-20dB/right-20dB as a different combo)
     It's definitely not magic and the laws of physics still apply. There don't seem to be any really good on-location stress tests posted yet, but there are a few examples. Media Division did an in-kitchen test comparing it to in-camera mics, a lav, and a DJI clip-on, and also applied a bit of AI voice isolation to see how far you can push it. Dustin did some good tests, including walking a 360 around the camera in each mode, which showed how directional it is - pretty impressive. He also compared it to the Sennheiser MKE440. There's also a video showing the different modes out in nature.
     This is probably a complete revolution for a number of niche uses. Content creators would be one, where they're recording in noisy environments but still staying relatively close to the camera, where physics will be helping them. Another is where the flexibility really helps, like shooting events where getting pristine audio isn't an absolute must but working super-quickly is more important, and perhaps the 32-bit would really come into its own. This reminds me of how people used to talk about Panasonic when the GH4 and GH5 were around: that Panasonic just listened to people and then implemented the features people would actually use, rather than trying to be flashy and grab headlines.
     This will be an invisible workhorse for lots and lots of people.
    4 points
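The 32-bit point in the post above is the key technical claim: float capture has no fixed clipping ceiling at the recorder, so a take that went in too hot can simply be pulled down in post. A toy sketch of the difference versus fixed-point capture (plain NumPy, nothing Panasonic-specific; the ±1.0 hard clip stands in for an integer recorder's full scale):

```python
import numpy as np

# A 440 Hz sine "recorded" 12 dB too hot (peaks at 4x full scale).
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
signal = 4.0 * np.sin(2 * np.pi * 440 * t)

# Fixed-point capture: anything over full scale is hard-clipped and lost.
clipped_capture = np.clip(signal, -1.0, 1.0)

# 32-bit float capture: the overs are preserved, just stored as values > 1.0.
float_capture = signal.astype(np.float32)

# In post, pull both down by 12 dB (divide by 4) and compare to the intended take.
original = signal / 4.0
float_err = np.max(np.abs(float_capture / 4.0 - original))    # ~float32 rounding noise
clipped_err = np.max(np.abs(clipped_capture / 4.0 - original))  # large: clipping is permanent
print(float_err, clipped_err)
```

The float path recovers the waveform to within float32 rounding error, while the clipped path is permanently flattened at what was full scale - which is why gain-staging mistakes are so much more forgivable with float recording.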
  2. kye

    The Aesthetic (part 2)

    I'm back from Guangzhou, China, and starting to evaluate the footage, especially from my modified Takumar 50mm F1.4 with the custom "insert" made from post-it notes and sticky tape. I managed to get out and shoot with it on a couple of nights, one in Beijing Road and the other in Yong Qing Fang.
    Some images from Beijing Road... these are all wide open, and lightly graded with Resolve and Film Look Creator. Overall, I'm really liking the aesthetic, which reminds me of mid-budget Hong Kong cinema, which I have a soft spot for. I mostly exposed to protect the highlights and then adjusted exposure in post under the FLC, and the GH7 has just enough DR for this, despite the scenes being quite challenging. The lens has a shallow enough DOF to direct the viewer's attention by choosing what is in focus, and the FOV (equivalent to a 71mm F2.0 on FF) is great for these types of scenes, which would mostly overwhelm a wide lens with pure chaos.
    Some images from Yong Qing Fang... same as above but with a touch of sharpening. This was a lot darker and I needed to push the ISO to get more levels in some scenes. It was also a lot higher DR, so some shots will be limited in how I can grade them in post, and I'll probably reach for NR in places. The lens is actually quite sharp in the middle, but the sides have more distortion than I'd like, with quite a bit of bokeh distortion and coma from bright sources.
    The experiment with this "insert" was to see how strong a look it would be, and I think it's probably too strong, because the bokeh shapes are too distracting due to the sharp corners. It's distracting on frames with a clear subject (where you want the background to get out of the way), and on other shots it's pure chaos and completely negates the idea of directing where the viewer will look. Getting DOF this shallow on MFT isn't easy, so I'll have to think about it more for future trips.
    3 points
  3. pat

    Looking for Gh2 patches

    My archive of GH2 patches:
    1 GOP
    - Intra 'moon' T7 - Top Grading - Best Motion - Best Setting Ever
    - 1-SpanMyBitchUp patch is good quality for spanning with long record times
    - 2-AQuamotion v2 is medium-high quality with decent spanning recording times + 80% slowdown / EX TELE
    3 GOP
    - 'Spizz' - Hi-Quality - Pro Motion
    - 3-TerrAQuake is seAQuake but less quality frame sizes for poorer type 10 cards
    - 4-SeAQuake is Very High Quality for hi-end SD cards
    6 GOP
    - Middle Earth 'Nebula'
    - 7 GH2 Flow Motion v2 - 100Mbps Fast Action Performance & Reliability
    - 8 T9 - gh4 like
    12/15 GOP
    - 9 'DREWnet' T9 - Traditional Long GOP 12
    https://www.dropbox.com/scl/fo/blop84zqvmgob2qiab07v/ACl0mBKWCsMcidxL5qUcqgA?rlkey=3gb5igu910uyw5sipalzlovp2&st=1vsslgjs&dl=0
    3 points
  4. I'll have a dig and host them here if I find the mother lode
    3 points
  5. Getting prepped for my next trip and have further refined my setup. This trip is a quick trip to China, but it's also a test case for a trip I'm taking later in the year to Europe, where the packing approach will be minimalism. Unlike the way I like to travel in Asia, the Europe trip will involve changing accommodation every few days, so packing and unpacking and hauling bags around will be much more of a pain, so I'll try to travel really minimally. As such, my approach for this trip is "when in doubt, don't take it" and see what I actually use. So the setup for this trip is:
     - GH7
     - 14-140mm F3.5-5.6 zoom, which I use during the day at F5.6, which means my 1-5 stop vND is enough
     - 12-35mm F2.8 zoom, which is a great walk-around lens after dark
     - Takumar 50mm F1.4 with M42-MFT Speedbooster (with bokeh insert) for "night cinema"
     - iPhone 17 Pro setup (Neewer phone filter mount, K&F 1-9 stop vND, MagSafe Popsocket)
     The GH7 and zooms are self-explanatory, so here's the 50mm F1.4 setup. I have played around with "inserts" and ended up with a pretty extreme design, so this is a test to see if the vertical edges are too strong a look for me. It's made from the sticky part of a post-it note, with a layer of sticky tape over the top to keep it a bit more together. It sits between the speed booster and the lens, and I won't use the speed booster for any other lenses while travelling, so it will stay in there and protected and doesn't need to be that robust. It's a strong look in some situations and quite "painterly" in others, so I'll be curious how it goes.
     For my iPhone 17 Pro, it's a phone most of the time and a camera only as a backup, so I searched for a setup that would:
     - Protect my phone from drops (I dropped it on the last trip and the screen shattered, despite it being in an Apple case - the only one available at the time... sigh)
     - Still be right-sized for getting in and out of pockets etc
     - Have a vND solution for when I want to shoot and use 180 shutter
     I'll spare everyone the rant about the options out there (everyone wants you to buy into their "ecosystem" now), so I ended up with the Otterbox Defender Series Pro case, which makes the iPhone feel even larger than it did in the Apple case (which doesn't seem possible but is true), but seems very robust. The vND is the Neewer phone filter mount, which sort-of clips onto the phone (it's designed to screw onto and clamp the phone, but you're clamping against the screen, so I wouldn't tighten it that much). It's designed for a naked iPhone, so I had to modify it (and the Otterbox case) slightly where the two interfered, to get it to sit a bit flatter. It still doesn't sit flush, but it goes on and seems to be fine. I haven't got around to actually taking it out to shoot with, so that remains to be seen. I paired it with the K&F 1-9 stop vND, which boasts 18 layers etc., but doesn't claim to be a "True Colour" one like the 1-5 stop ones do. It doesn't have hard stops and I think it still gives the X at the max amount, but I'll see how I go. Not having an aperture control sure sucks, even if you're not really losing much shallow DOF on a phone anyway.
     That is all combined with the MagSafe Popsocket as a safeguard. I've used the adhesive Popsockets before and they're great for giving a much better grip on the phone, but I wasn't sure how strong the MagSafe connection would be. The Otterbox claims to have magnets in it that strengthen the MagSafe connection, and this might be true. It feels quite sturdy actually, and I tested it to require 1.75kg of force to pull off, compared to the 1.45kg of force it took to pull it off my naked iPhone 12 mini. No idea what strength a naked iPhone 17 Pro MagSafe connection would have, but it's not terrible.
     Lots of compromises involved, but it's really my backup camera, and the Otterbox case is very grippy, so I'll see how I go.
    3 points
  6. For sure, and I'd be excited to give it a try! I was a Kickstarter backer of the Z Cam E1 and I've bought a few Ribcage kits/cameras over the years. I was disappointed by all of them, but I'm still hoping for that magical/usable tiny sensor camera! I have a little bag full of D-mount and C-mount lenses just waiting to go on something! (I still wish there were a way to get a decent/non-laggy video feed from the Insta360 One R/RS series - I have a Ribcage-modded 1" module and the quality is really decent - but focus is hard, given the only options for monitoring are the camera's tiny screen or laggy wifi)
    2 points
  7. kye

    New cinema camera...?

    I found an interview with the person who shot the under-water sections of a GoPro promo video (IIRC it was for the Hero 3 or 3+), and the level of effort they put into it was simply incredible. He had a team of about 5 - three crew and two cast - and they had a week for production. He was an independent DOP and had done some pre-production as part of his 'pitch' to GoPro to get the gig, but I think they did detailed pre during the week as well as camera tests and lots and lots of shooting. This was only for the underwater shots (the bikini girl diving beneath the waves). If we assume that each of the (maybe half a dozen?) locations got 5 people for a week, then that's thousands of hours just to film the 1-2 minute promo video. The level of cherry-picking is extreme - professional DOPs pitching projects, travel to the most exotic locations, testing of all modes with all manner of equipment, everyone in the cast and crew a professional, long shooting days at the best times (golden hour, etc.), dozens of hours of footage just to make a short promo. Then people set it to auto, hold the tiny camera in their hand and film their family at the beach with whatever lighting and weather happens to be there at the time, and then we wonder why it doesn't look like the promo videos... Having said all that, if GoPro make an interchangeable lens camera with a half-decent bitrate and a colour-managed LOG profile then it might be the tiny camera we've been wanting!
    2 points
  8. Well, there's one hell of a metaphor in the "context" of this.
    2 points
  9. "Tsumura-san confirmed that photographers who compose through viewfinders “strongly request the inclusion of an EVF,” and that Panasonic is considering the balance between compact size and EVF inclusion as they work to “meet the expectations of as many customers as possible.” LOL. GM5 anyone?
    2 points
  10. Yes, I don't get it either. You are in the full-frame LUMIX ecosystem, you are considering a new camera, and on your interest list are the latest options from Nikon and from Sony. You are increasingly tempted to jump ship even though it's a big move. Which of the below options do you wish to hear?
      A: We are working on a new flagship camera that will be launched in the summer.
      B: Expanded horizons, creative direction, global market strategies, Operation Epic Bullshit, waffle waffle waffle, blah di blah di blah.
      I didn't watch it and haven't read a single word of it, but I'm 100% sure it wasn't A. A perfect example of how not to retain customers. There is a saying which applies to all businesses, no matter the size - small, medium or large/international: just because you are in business doesn't mean you are any good at business. Some companies are more clueless than others…
    2 points
  11. This reminds me of using my old Sony camcorder with the 5.1 surround sound microphone. I would shoot with it, then in post be able to isolate each channel and choose which one to use and ignore the ones that were just location noise. Pretty handy without much effort when shooting. This might be similar in that sense.
    2 points
  12. Fair enough. I just wish they'd spent the time and resources elsewhere. I want a small, up-to-date Panasonic M43 camera, not an overly complex version of an on-camera mic. This seems like a great product for 2010. But what do I know - maybe this is THE MIC, the one everyone was waiting for. What I can tell you with 100% certainty is that people are ready and willing to pay vast sums of money for old gear, purely because it's small. What happened, Panasonic? The whole miniaturization-of-components thing has apparently been disregarded.
    2 points
  13. Come on John... everyone knows that anyone who wants better footage than a smartphone can provide is 100% totally fine with a camera the size of a microwave oven that looks like a Borg prototype! Being slightly serious though, it's easy to criticise, but as someone who wants flexibility and better sound options, this is FAR better than the previous options, so it's a welcome addition in my eyes. The worst enemy of progress is criticising everything that isn't perfect in every conceivable way.
    2 points
  14. It's great they came out with something new, but I wish they'd spent their time elsewhere. This product just doesn't seem like a priority. If audio is the priority, I'd rather have a set of 2+ wireless lav microphones that connect and record to the camera via bluetooth or wifi. Why hasn't anyone done that?
    2 points
  15. Hello, I hope everyone is well! Even though I'm not really active on camera forums anymore, I frequently read the EOSHD blog and every now and then the forum, so I saw the thread and thought I would respond. Because it wasn't "poof gone" - it was announced on the channel over a year ago and mentioned in the last three videos. Before going into why: super flattered that this thread exists. I mean that. So here are some thoughts on the matter and why I took it down.
      Hobby vs Work
      YouTube was never my job, just a hobby. So was video making and photography, in the beginning. When starting the channel I was working as a producer after a couple of years as a radio/TV reporter. So I started the channel to keep my practical skills fresh. And to keep up with the development, which was huge at the time. The DSLR revolution, Blackmagic, cheaper editors etc. Fast forward a couple of years and I started making more videos at work again. At the same time I pretty much lost all interest in doing it as a hobby. And actually cancelled the channel. Winston Churchill was definitely right in saying that work and hobbies should not be too similar. But what I had discovered was a passion for still photography, which I had pretty much no experience with. So I started making videos again. That's why my videos became very repetitive and short. I didn't care about that part; I just wanted to display my stills work and get feedback, talk to the community, experiment with cameras and develop. After a few years I became a good enough photographer that my new employer noticed, and just like that I was shooting stills professionally all the time. And I still do (I work in marketing and PR). It's a huge bonus in my field, and if you are good at it you will never be out of work. So photography also became less and less of a hobby. Instead I found other hobbies. They were things that, for example, got me out into nature, so photography tagged along for a while as a secondary activity. But eventually it faded. It was also nice to do things and not share them with people. I know I probably could have a very successful channel by making videos about my current hobbies, and even make some money. But I never really wanted a channel for the sake of a channel. And I always had a full-time job. The fact is that at no point would I have been able to live off my channel, not even at the peak. Even with sponsors it was never more than a regular salary (in my field and country). But as long as it was a hobby and I was glad to do it, it was a welcome addition to finance camera gear.
      Time
      At the same time as my channel started to feel less fun and other hobbies started taking my time, I started a family. So... you get the idea: full-time job + family + 2-3 hobbies = no YouTube.
      Upkeep
      So why take it down, why not leave it for the community? I did... at first. Like some of you pointed out, the YouTube crowd in the photography/video space is generally nice and positive. That is my experience as well. Early on I learned that a good way of keeping the trolls away was to be present. Respond and engage. Trolls are usually idiots or cowards, so they don't like getting pushback. But once I stopped making videos, views and comments obviously went down. And the trolls started coming back. Not so much after me - I don't care about that - but against the community. The people commenting started being nasty towards each other. I felt a responsibility to moderate, which was annoying. That's when the thought of simply removing it started to grow. It wasn't an impulse. It was an internal debate that went on for months. And the issue grew much, much larger than a couple of trolls. I started thinking about five years ahead, 10 years, 30 years... This post is already way too long so I won't go into all of it. But I think you get the idea when I say: privacy, or when the content no longer reflects the creator. Digital minimalism, control over one's narrative, inactive or outdated content. Risk of misuse of content due to me not checking the terms updates. Closure.
      So there is a looong ramble :) To keep in the spirit of the forum, I can share my current gear for pro work :) For the longest time I used the EOS-R for 75% of all my work and the R5 (rental) for the rest. It wasn't mine, but my employer told me to buy whatever I wanted. Paired it with a 28, 35 and 70-200. 70/30 stills/video. The R5 is peak camera imo. Today is a little different. I started working for a new company about a year ago and again was told to buy what I needed. I would have bought the R5 without hesitation if it wasn't for the Sigma 35-150/2-2.8... I just had to have it. So I ordered the Nikon Z6iii. It's not as good overall as the R5 for me and what I like in a tool camera. But it's 90% there. And coupled with that lens it becomes on par. //MB
    2 points
  16. MrSMW

    New cinema camera...?

    It’s only a matter of time before some big movie is shot exclusively on a Go Pro or DJI…
    1 point
  17. Like with anything, it's best not to get too revved up over the hype and to wait until there are actual cameras around to check out. GoPro are, of course, hiring professionals and cherry-picking the very best footage/images that any of them captured. Otherwise, the concept of using an action camera ASIC in a cinema camera isn't new - it was exactly the strategy with the Z Cam E1. Since then, action cameras have advanced a lot, thankfully. Similarly, the Z Cam E2 series used/uses an off-the-shelf HiSilicon board, though that one was intended for higher-end security cams/consumer cameras rather than action cameras.
    1 point
  18. I just commented on the Lumix Live weekly upload asking for a modern, small MFT camera - it's the most-liked comment. IMO there are so many people who, for whatever reason, want this. It could be practical or nostalgic for the Lumix fans. Just do it, Lumix.
    1 point
  19. That’s about the sum of it John. Except I’d swap out the word ‘reassure’ for ‘excite’. To quote a famous movie line, “build it and they will come”. In the case of Panasonic Lumix, build it and they will come, but of as much importance, build it and they will stay. But if they already buggered off elsewhere because instead of telling them something they wanted to hear, you said nothing and 🤷‍♂️
    1 point
  20. «As a Fuji user, that smooth autofocus on the BM6k makes me cry.» «I sold my X-H2s and all the lenses a few months ago to get BMD Cinema 6K, and I never looked back since.» source «The most significant thing about all this, as you pointed out in the video, is that Blackmagic is essentially giving autofocus to all of us who already bought their camera, instead of releasing a Blackmagic 6K Full Frame Pro with autofocus just to make us open our wallets again. They may be losing money in the short term, but in my view they are gaining in the long term, because the trust the brand inspires is truly remarkable.» source Disclaimer: Happy camper as Blackmagic shooter over here! Looks like I am not alone... - EAG :- )
    1 point
  21. I've been hearing a lot of positive things about the AF on the BMCC with the latest betas. Given that BMD have said before that they plan to roll it out to all of their newer cameras, my assumption is that Petty will be announcing it for some more cameras at NAB in a month or so. My selfish hope is that the UC 12K LF is included. If they roll it out to the Pyxis 12K too, it'd also remove one of the major considerations for people deciding between it and the C80. My other big hope/assumption is that they'll also finally announce a USB reader for a single UC media module priced under $400. The MM is fantastic, but it's a pain in the ass to connect the camera to a 10GbE network to download it at any reasonable speed (USB and wifi transfers are both slow as hell - not sure why, but the virtual network they create on the USB interface is only 100Mb/s) - and the only reader they sell now takes 3 modules, which is 2 more than I have, costs something like $1,000, and from what I recall also needs to be connected to 10GbE. And getting into wild speculation, it would be interesting to see them release a smaller camera using a lower-resolution RGBW sensor. A $3k or so 6K (or 8K) camera with similar dynamic range to the 12K sensor in the bigger cameras and Canon/Sony-level AF would be really compelling.
    1 point
  22. I have the Sanity hacks (not 100% sure), Apocalypse Now (Drewnet), Cluster X (Drewnet), and Cluster X (Moon). I wish I'd taken better notes. Why are the Personal View forums down? That's a good site.
    1 point
  23. I'll genuinely appreciate that a lot. Please preserve the gh2 history, don't let it vanish
    1 point
  24. I don't think this makes it a cinema camera...
    1 point
  25. No manufacturer is going to reveal a future product before it is ready to be sold unless they are in dire straits and their current products have zero chance of selling. To me it seems that manufacturers consider small cameras more entry-level and build a progression so that at each level up, most aspects of the next higher-level camera are better than the level below, except for size and weight, and the cost increases along with weight, features, performance, and quality. Since Panasonic have (more expensive) 35mm full-frame cameras, they have an incentive to make the micro four thirds products lesser in most ways, to motivate people who can afford the FF to go with it instead of the MFT. Sony does emphasize small size and low weight throughout their stills/hybrid camera lineup. A small camera is more difficult to make more powerful (in terms of performance, image quality, high-end video codecs etc.), and people will invariably complain about whatever its flaws may be, be it lack of efficient codecs, overheating, operation etc. IBIS makes the camera significantly more expensive. In the small sensor class, IBIS would be useful (just as it is with larger cameras), but it would noticeably increase the camera's size, weight, and cost, all three reducing the advantages of small size, light weight, and moderate to low cost. And this class of cameras is competing with smartphones as well, due to their pocketability and communications abilities. It's just a tight place to be in. Probably this is why Nikon discontinued the 1 series and Canon their M system. Full-frame telephoto lenses have also gotten much smaller and lighter in recent years.
    1 point
  26. Agree this is a waste of time. They should say "a cinema camera is coming" or "the S1ii is our cinema camera". Either would help a lot of consumers decide what their next move is.
    1 point
  27. On the cined web site, there is a text version summarizing the interview - much less time-consuming to digest.
    1 point
  28. Doh - forgot to list the 9mm F1.7 lens. That's the ultra-wide I'll be taking too. So the total count is one body, 5 lenses, and my phone with vND. I was slightly conflicted about the "wide-angle night cinema" slot. The SB+50/1.4 is equivalent to a 71mm F2.0 on FF, so having something wider seems an obvious thing, but I'm just not sure if I would use it. I've mentioned the 12-35mm F2.8 as my night walk-around lens, and when combined with the GH7's low-light capability it's a fine combination, but it's not crazy fast/bright and isn't the best "cinema" option around. The things I considered were:
      - my TTArtisan 17mm F1.4, which is small and light and, despite being soft wide-open, is probably quite cinematic
      - my 14mm F2.5, which is small and light but is bettered by the 12-35mm on flexibility grounds, being a zoom
      - my Voigtlander 17.5mm F0.95, which is a great performer but is quite heavy
      - my c-mount 12.5mm F1.9, which is a similar FOV when you crop in to its S16 image circle
      - my 9mm F1.7 combined with the GH7 cropping, which is fast but sacrifices resolution and doesn't have the DOF advantages of other options (although I am already taking it)
      - SB + 28mm F2.8 combos, but it's hard to get a reasonable quality 28mm F2.8 in M42 mount and it's not that fast anyway
      I opted to take the 12-35mm (which I sort-of take as a backup lens to the 14-140mm zoom), but if I do end up wanting a wider fast lens for night cinema, I think I might just bite the bullet and get the PanaLeica 15mm F1.7, as it'll be light, have AF, and be sharper than I could ever want. I looked at the reviews of a bunch of budget F1.4-or-faster lenses around the 14-20mm mark, but I'd never be sure they were as sharp as I'd like, and spending money on something that isn't much faster than my 17/1.4 or much lighter than my 17.5/0.95 seems silly.
      MFT is the wrong format for ultra-fast wide lenses, and I already have lots of options for something I might not use, so the whole thing might end up being academic anyway.
    1 point
  29. Yes. This to me is just a somewhat expensive, limited use, vloggers device that has zero serious use case for my needs. I already have 2x Sennheisers that fill this role at €150 for the pair of them.
    1 point
  30. If it were half the price it'd be interesting, but there is a lot of cheaper good gear out there. Still, it's good to see them innovating. I'm enjoying the S1mk2 - I feel it's a big step up from the S5mk2. I hope it survives the summer! Panasonic now has quite a comprehensive line-up for all budgets.
    1 point
  31. Looked at and decided very quickly it isn’t for me, but good to see LUMIX making stuff they at least think folks want. What they really want however is an S1H mk II in an FX3 style body with the screen mech from the S1II and the screen from the ZR. Then they will truly win the crowd and strut like gods of low-mid film-making.
    1 point
  32. Hey all, quick follow-up after having purchased the C50 and used it for a couple of days.
      The customization is seriously one of the best parts. I have EIS toggle, S&F, teleconverter, display brightness boost, view assist, WFM and false color all on physical buttons, so they're instant. Then the touch quick cine menu overlay lets me flip through frame rates, codec, resolution and recording settings super fast without leaving the shooting view. It just feels like one of the quickest cameras I've used in real life.
      Open gate 3:2 is still my absolute favorite thing. The aspect ratio looks fresh, and having that extra vertical headroom for reframing or pulling stills is addictive. Being able to shoot 7K open gate in 10-bit H.265 at only 486Mbps at the lowest bitrate is a great data-rate-to-resolution ratio.
      The digital zoom via the rocker switch is the other standout. Light press for a slow creep, hard press for a fast punch, with separate speed curves for each. It's so tactile and controllable, and it makes punch-ins on primes feel intentional instead of a crop hack.
      The top handle is really cool too. It gives better balance and a two-handed grip, so handheld shake is noticeably reduced, especially on low angles or longer takes. But what's even cooler is how modular it is. Snap it off and the camera becomes super compact for travel, storage or quick discreet shots. Having the choice is great.
      Still working on stabilization. EIS helps when it's on, but you get that slight crop and occasional motion blur artifacts unless I crank the shutter angle to 90° or 45° (which I do now). EIS is disabled in open gate 3:2, so those shots are raw shaky until post. Gyroflow should handle it, but I'm still having trouble getting it to recognise the camera or lens. Does anyone know how to set it up manually? Any tricks for getting the gyro data to load properly?
      Still in the early testing phase, but overall the camera feels fast, intentional and pro in a way that keeps me shooting. Cheers!
    1 point
  33. Hi! I just stumbled upon this thread and thought I'd share an OKLab conversion DCTL I wrote about a year ago. It's written as a header file to be included in any other DCTL you may need it for. It supports conversion from ACES, Davinci Wide and Rec.709/sRGB.

      OKLab_Transform.h:

      #line 2
      #ifndef ENCODING_ENUMS_DEFINED_IN_UI
      enum Encoding { gAcc, gAcct, gDWI, gLIN, g709, gSRGB };
      #endif
      #ifndef COLORSPACE_ENUMS_DEFINED_IN_UI
      enum ColorSpace { cACES0, cACES1, cDWG, c709 };
      #endif

      // =============================================================
      // Util
      // =============================================================
      __DEVICE__ float powCf(float base, float exp) {
          return _copysignf(_powf(_fabs(base), exp), base);
      }

      __DEVICE__ float3 VecMatMul3x3(const float3 m[3], float3 v) {
          float3 r;
          r.x = m[0].x * v.x + m[0].y * v.y + m[0].z * v.z;
          r.y = m[1].x * v.x + m[1].y * v.y + m[1].z * v.z;
          r.z = m[2].x * v.x + m[2].y * v.y + m[2].z * v.z;
          return r;
      }

      // =============================================================
      // Matrices
      // =============================================================
      // These matrices are the concatenated forms of (colorspace -> XYZ -> OKlms)
      // or, XYZToLMS @ ColorspaceToXYZ and, XYZToColorspace @ LMSToXYZ
      // original matrices are included as comments at the bottom of this script

      // ACES (AP0)
      // ==============================
      __CONSTANT__ float3 mat_ACES0_LMS[3] = {
          {  0.90454662f,  0.26349909f, -0.15602258f },
          {  0.35107161f,  0.6766934f,  -0.03056591f },
          {  0.13684644f,  0.19250255f,  0.62038067f }
      };
      __CONSTANT__ float3 mat_LMS_ACES0[3] = {
          {  1.2881401f,  -0.58554348f,  0.29511118f },
          { -0.67171287f,  1.76268516f, -0.08208556f },
          { -0.0757131f,  -0.41779486f,  1.57228749f }
      };

      // ACES (AP1)
      // ==============================
      __CONSTANT__ float3 mat_ACES1_LMS[3] = {
          {  0.64173446f,  0.35314498f,  0.0171437f  },
          {  0.27463463f,  0.63099904f,  0.09156544f },
          {  0.10036508f,  0.18723743f,  0.66212716f }
      };
      __CONSTANT__ float3 mat_LMS_ACES1[3] = {
          {  2.04479741f, -1.17697875f,  0.10982058f },
          { -0.88115384f,  2.15979229f, -0.27586256f },
          { -0.06077576f, -0.43234352f,  1.57164623f }
      };

      // ITU BT.709
      // ==============================
      __CONSTANT__ float3 mat_709_LMS[3] = {
          {  0.4122214708f,  0.5363325363f,  0.0514459929f },
          {  0.2119034982f,  0.6806995451f,  0.1073969566f },
          {  0.0883024619f,  0.2817188376f,  0.6299787005f }
      };
      __CONSTANT__ float3 mat_LMS_709[3] = {
          {  4.0767416621f, -3.3077115913f,  0.2309699292f },
          { -1.2684380046f,  2.6097574011f, -0.3413193965f },
          { -0.0041960863f, -0.7034186147f,  1.7076147010f }
      };

      // Davinci Wide
      // ==============================
      __CONSTANT__ float3 mat_DWG_LMS[3] = {
          {  0.68570951f,  0.45574409f, -0.14156279f },
          {  0.27427422f,  0.81179945f, -0.08604675f },
          {  0.04351009f,  0.15072461f,  0.80624495f }
      };
      __CONSTANT__ float3 mat_LMS_DWG[3] = {
          {  1.8836253f,  -1.09713301f,  0.21364045f },
          { -0.63460063f,  1.57752473f,  0.05693684f },
          {  0.01698397f, -0.23570436f,  1.21814431f }
      };

      // OKLab <-> Cone Response
      // ==============================
      __CONSTANT__ float3 mat_LMS_LAB[3] = {
          {  0.2104542553f,  0.7936177850f, -0.0040720468f },
          {  1.9779984951f, -2.4285922050f,  0.4505937099f },
          {  0.0259040371f,  0.7827717662f, -0.8086757660f }
      };
      __CONSTANT__ float3 mat_LAB_LMS[3] = {
          { 1.0f,  0.3963377774f,  0.2158037573f },
          { 1.0f, -0.1055613458f, -0.0638541728f },
          { 1.0f, -0.0894841775f, -1.2914855480f }
      };

      // =============================================================
      // Transfer Functions
      // =============================================================

      // ACEScc
      // ==============================
      __DEVICE__ float ACEScc_DecodeBase(float v, float a, float b, float upperClampThreshold, float lowerDecodeThreshold, float two_m16) {
          float out = v;
          if (v >= upperClampThreshold) out = 65504.0f;
          else if (v < lowerDecodeThreshold) out = (_exp2f(v * b - a) - two_m16) * 2.0f;
          else out = _exp2f(v * b - a);
          return out;
      }

      __DEVICE__ float3 ACEScc_Decode(float3 in) {
          const float two_m16 = _exp2f(-16.0f);
          const float a = 9.72f;
          const float b = 17.52f;
          const float lowerDecodeThreshold = (a - 15.0f) / b;
          const float upperClampThreshold = (_log2f(65504.0f) + a) / b;
          float3 out = in;
          out.x = ACEScc_DecodeBase(out.x, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
          out.y = ACEScc_DecodeBase(out.y, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
          out.z = ACEScc_DecodeBase(out.z, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
          return out;
      }

      __DEVICE__ float ACEScc_EncodeBase(float v, float a, float b, float negConstant, float two_m15, float two_m16) {
          float out;
          if (v < 0.0f) out = negConstant;
          else if (v < two_m15) out = (_log2f(two_m16 + v * 0.5f) + a) / b;
          else out = (_log2f(v) + a) / b;
          return out;
      }

      __DEVICE__ float3 ACEScc_Encode(float3 in) {
          const float two_m16 = _exp2f(-16.0f);
          const float two_m15 = _exp2f(-15.0f);
          const float a = 9.72f;
          const float b = 17.52f;
          const float negConstant = (_log2f(two_m16) + a) / b;
          float3 out = in;
          out.x = ACEScc_EncodeBase(out.x, a, b, negConstant, two_m15, two_m16);
          out.y = ACEScc_EncodeBase(out.y, a, b, negConstant, two_m15, two_m16);
          out.z = ACEScc_EncodeBase(out.z, a, b, negConstant, two_m15, two_m16);
          return out;
      }

      // ACEScct
      // ==============================
      __DEVICE__ float3 ACEScct_Encode(float3 in) {
          const float a = 9.72f;
          const float b = 17.52f;
          const float X_BRK = 0.0078125f;
          const float A = 10.5402377416545f;
          const float B = 0.0729055341958355f;
          float3 out;
          out.x = (in.x <= X_BRK) ? (A * in.x + B) : ((_log2f(in.x) + a) / b);
          out.y = (in.y <= X_BRK) ? (A * in.y + B) : ((_log2f(in.y) + a) / b);
          out.z = (in.z <= X_BRK) ? (A * in.z + B) : ((_log2f(in.z) + a) / b);
          return out;
      }

      __DEVICE__ float3 ACEScct_Decode(float3 in) {
          const float a = 9.72f;
          const float b = 17.52f;
          const float Y_BRK = 0.155251141552511f;
          const float A = 10.5402377416545f;
          const float B = 0.0729055341958355f;
          float3 out = in;
          out.x = (in.x > Y_BRK) ? _exp2f(in.x * b - a) : ((in.x - B) / A);
          out.y = (in.y > Y_BRK) ? _exp2f(in.y * b - a) : ((in.y - B) / A);
          out.z = (in.z > Y_BRK) ? _exp2f(in.z * b - a) : ((in.z - B) / A);
          return out;
      }

      // Davinci Intermediate
      // ==============================
      __DEVICE__ float3 DWI_Decode(float3 in) {
          float3 out = in;
          float a = 0.0075;
          float b = 7.0;
          float c = 0.07329248;
          float m = 10.44426855;
          float log_cut = 0.02740668;
          out.x = in.x > log_cut ? powCf(2.0f, (in.x / c) - b) - a : in.x / m;
          out.y = in.y > log_cut ? powCf(2.0f, (in.y / c) - b) - a : in.y / m;
          out.z = in.z > log_cut ? powCf(2.0f, (in.z / c) - b) - a : in.z / m;
          return out;
      }

      __DEVICE__ float3 DWI_Encode(float3 in) {
          float3 out = in;
          float a = 0.0075;
          float b = 7.0;
          float c = 0.07329248;
          float m = 10.44426855;
          float lin_cut = 0.00262409;
          out.x = in.x > lin_cut ? (_log2f(in.x + a) + b) * c : in.x * m;
          out.y = in.y > lin_cut ? (_log2f(in.y + a) + b) * c : in.y * m;
          out.z = in.z > lin_cut ? (_log2f(in.z + a) + b) * c : in.z * m;
          return out;
      }

      // ITU BT.709
      // ==============================
      __DEVICE__ float3 BT709_Decode(float3 in) {
          float3 out = in;
          out.x = out.x < 0.081f ? out.x / 4.5f : powCf((out.x + 0.099f) / 1.099f, 1.0f / 0.45f);
          out.y = out.y < 0.081f ? out.y / 4.5f : powCf((out.y + 0.099f) / 1.099f, 1.0f / 0.45f);
          out.z = out.z < 0.081f ? out.z / 4.5f : powCf((out.z + 0.099f) / 1.099f, 1.0f / 0.45f);
          return out;
      }

      __DEVICE__ float3 BT709_Encode(float3 in) {
          float3 out = in;
          out.x = out.x < 0.018 ? out.x * 4.5f : 1.099f * powCf(out.x, 0.45f) - 0.099f;
          out.y = out.y < 0.018 ? out.y * 4.5f : 1.099f * powCf(out.y, 0.45f) - 0.099f;
          out.z = out.z < 0.018 ?
out.z * 4.5f : 1.099f * powCf(out.z, 0.45f) - 0.099f; return out; } // sRGB // ============================== __DEVICE__ float3 sRGB_Decode(float3 in) { float3 out = in; out.x = out.x < 0.04045 ? out.x / 12.92f : powCf((out.x + 0.055f) / 1.055f, 2.4f); out.y = out.y < 0.04045 ? out.y / 12.92f : powCf((out.y + 0.055f) / 1.055f, 2.4f); out.z = out.z < 0.04045 ? out.z / 12.92f : powCf((out.z + 0.055f) / 1.055f, 2.4f); return out; } __DEVICE__ float3 sRGB_Encode(float3 in) { float3 out = in; out.x = out.x < 0.0031308 ? out.x * 12.92f : 1.055f * powCf(out.x, 1.0f / 2.4f) - 0.055f; out.y = out.y < 0.0031308 ? out.y * 12.92f : 1.055f * powCf(out.y, 1.0f / 2.4f) - 0.055f; out.z = out.z < 0.0031308 ? out.z * 12.92f : 1.055f * powCf(out.z, 1.0f / 2.4f) - 0.055f; return out; } // ============================================================= // Convert // ============================================================= // ============================================================= __DEVICE__ float3 Decode(float3 in, int tFunction) { float3 out = in; switch (tFunction) { case gAcc: out = ACEScc_Decode(in); break; case gAcct: out = ACEScct_Decode(in); break; case gDWI: out = DWI_Decode(in); break; case g709: out = BT709_Decode(in); break; case gSRGB: out = sRGB_Decode(in); break; } return out; } __DEVICE__ float3 Encode(float3 in, int tFunction) { float3 out = in; switch (tFunction) { case gAcc: out = ACEScc_Encode(in); break; case gAcct: out = ACEScct_Encode(in); break; case gDWI: out = DWI_Encode(in); break; case g709: out = BT709_Encode(in); break; case gSRGB: out = sRGB_Encode(in); break; } return out; } __DEVICE__ float3 OKLab_OKLCh(float3 lab) { float C = _hypotf(lab.y, lab.z); float h = _atan2f(lab.z, lab.y); return make_float3(lab.x, C, h); } __DEVICE__ float3 OKLCh_OKLab(float3 lch) { float a = lch.y * cosf(lch.z); float b = lch.y * sinf(lch.z); return make_float3(lch.x, a, b); } __DEVICE__ float3 RGB_OKLab(float3 rgb, int colorspace) { const float3* mat; switch 
(colorspace) { case cACES0: mat = mat_ACES0_LMS; break; case cACES1: mat = mat_ACES1_LMS; break; case cDWG: mat = mat_DWG_LMS; break; case c709: mat = mat_709_LMS; break; } float3 lms = VecMatMul3x3(mat, rgb); float3 lms_; lms_.x = cbrt(lms.x); lms_.y = cbrt(lms.y); lms_.z = cbrt(lms.z); return VecMatMul3x3(mat_LMS_LAB, lms_); } __DEVICE__ float3 OKLab_RGB(float3 lab, int colorspace) { const float3* mat; switch (colorspace) { case cACES0: mat = mat_LMS_ACES0; break; case cACES1: mat = mat_LMS_ACES1; break; case cDWG: mat = mat_LMS_DWG; break; case c709: mat = mat_LMS_709; break; } float3 lms = VecMatMul3x3(mat_LAB_LMS, lab); lms.x = powCf(lms.x, 3.0f); lms.y = powCf(lms.y, 3.0f); lms.z = powCf(lms.z, 3.0f); return VecMatMul3x3(mat, lms); } __DEVICE__ float3 RGB_OkLCh(float3 rgb, int colorspace) { float3 lab = RGB_OKLab(rgb, colorspace); return OKLab_OKLCh(lab); } __DEVICE__ float3 OKLCh_RGB(float3 lch, int colorspace) { float3 lab = OKLCh_OKLab(lch); return OKLab_RGB(lab, colorspace); } // ============================================================= // Ref Matrices // ============================================================= // ============================================================= // DWG -> XYZ // 0.70062239, 0.14877482, 0.10105872 // 0.27411851, 0.87363190, -0.14775041 // -0.09896291, -0.13789533, 1.32591599 // XYZ -> DWG // 1.51667204, -0.28147805, -0.14696363 // -0.46491710, 1.25142378, 0.17488461 // 0.06484905, 0.10913934, 0.76141462 // 709 -> XYZ // 0.4123908, 0.35758434, 0.18048079 // 0.21263901, 0.71516868, 0.07219232 // 0.01933082, 0.11919478, 0.95053215 // XYZ -> 709 // 3.24096994, -1.53738318, -0.49861076 // -0.96924364, 1.8759675, 0.04155506 // 0.05563008, -0.20397696 , 1.05697151 // XYZ -> LMS // 0.8189330101, 0.3618667424, -0.1288597137 // 0.0329845436, 0.9293118715, 0.0361456387 // 0.0482003018, 0.2643662691, 0.6338517070 // LMS -> XYZ // 1.22701385, -0.55779998, 0.28125615 // -0.04058018, 1.11225687, -0.07167668 // -0.07638128, 
-0.42148198, 1.58616322 Using it another DCTL looks something like this: #line 2 #define ENCODING_ENUMS_DEFINED_IN_UI #define COLORSPACE_ENUMS_DEFINED_IN_UI #include "OKLab_Transforms.h" DEFINE_UI_PARAMS(p_InCSpace, Input Color Space, DCTLUI_COMBO_BOX, 2, {cACES0, cACES1, cDWG, c709}, {ACES (AP0), ACES (AP1), Davinci Wide Gamut, Rec.709 / sRGB / BT.1886}); DEFINE_UI_PARAMS(p_InGamma, Input Gamma, DCTLUI_COMBO_BOX, 2, {gAcc, gAcct, gDWI, gLIN, g709, gSRGB}, {ACEScc, ACEScct, Davinci Intermediate, Linear, Rec.709, sRGB}); __DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B) { float3 in = make_float3(p_R, p_G, p_B); float3 out = in; // Convert to OKLCh: float3 linear = Decode(in, p_InGamma); float3 oklch = RGB_OkLCh(linear, p_InCSpace); // Convert back: linear = OKLCh_RGB(oklch, p_InCSpace); out = Encode(linear, p_InGamma); return out; }
    1 point
  34. Another gem by @Henryo. Really makes me want to pick up my BMPCC again soon. Anyway, such a treat to watch:
    1 point
  35. While AI can be employed for positive or negative things, there's a bigger outlook at play for me. Robert Pirsig's famous musings are where I want to stand philosophically. His theories, and my limited understanding of them, are pretty much the reason why I ultimately view AI unfavorably.
    1 point
  36. Regarding voice AI. Hoo boy. As a documentarian, this one can affect me a lot. A lot of ills can be smoothed over with AI audio. But ... at the end of the day it's an ethical choice how it's employed. I've decided to ONLY use it to salvage VERBATIM lines from interviews and field audio that is distorted beyond comfort. Like wind noise or clothes rustling. And then it's a last-ditch option after audio EQ/RX tweaking. The best thing to do is just not 'f up the field production to begin with. Beyond that, if AI is used as a production shortcut to solve a storytelling/crafting failure as a filmmaker -- I now consider AI use untenable for me. It's simply on the wrong side of things morally when it comes to making honest doc films. Sadly, I fear that's now a contrarian opinion; an "old-fart" opinion. No one probably really gives a shit anymore about these sorts of "cheats" 'cept me.
    1 point
  37. Do you agree with my choices... And did I miss anything? https://www.eoshd.com/news/the-2025-hybrid-mirrorless-camera-rankings/ The 2025 camera rankings for video quality and value for money are in!
    1 point
  38. Django

    If not ZR, then Panasonic?

    Yep that's the kind of intermediary codec that the ZR needs. But only if Nikon doesn't cook it with that aggressive noise reduction. You know the drill, Fuji had similar issues I seem to remember you pointing out. Thing is, I don't buy cameras based on promised or wishful features anymore. Been burned too many times waiting for "coming soon" updates that arrive late, incomplete, or not at all. So as of right now, the ZR is off the table for me. It's not just the codec situation though tbh, the unreliable view assist/exposure tools and first gen quirks also give me cold feet. Good to know LT is officially on the roadmap, though.. great for early adopters but I think I'm done gambling on "maybe later".
    1 point
  39. Great to hear from you! It makes much more sense now we know why you brought the channel to a close and also chose to delete it eventually. All I can say is that I'll miss it. And I hope you find the passion to return one day to the tube. Having read that, I understand completely and have found the same challenges myself; everyone can see that I struggle to get excited about the gear sometimes, whereas in the earlier "DSLR revolution" era, I'd be updating EOSHD 5 times a day, sometimes more. It is difficult when a passion becomes work, when enthusiasm becomes repetition, when an audience goes toxic, or when a big US tech company enshittifies the platform you're posting such valuable creativity on. I mean, look what they did to Vimeo. It's a difficult pill to swallow, and I've always struggled with my enthusiasm for YouTube as a platform as well, and there's very little in the way of alternatives. I'm just glad you're well and enjoying your R5... You're right in that it still holds up as near the peak even 5 years later, and the overheating drama is far behind it after Canon decided to undo the damage caused by their fake timers and cripple hammer decisions.
    1 point
  40. MrSMW

    If not ZR, then Panasonic?

    I’m not ‘waiting’ for it as such, but based on my S9 experience, it could be something pretty special. If they get it right. If they even make one… I have gone backwards and forwards over what to do with my S9, which has at times been both A cam and B cam, but due to me picking up a pair of S1RII’s mid last year, plus certain limitations of the S9 (mainly build), I was going to let it go… But too many times I have let stuff go and regretted it, so I have repurposed it, giving it an XLCS cage, a super-lightweight tripod and 2 dedicated lenses, the 18mm and 85mm f1.8. I am hoping they do an S9II with a bigger screen à la ZR but with the S1I/R/E tilt option. And make it a bit more robust, but spec-wise, it’s already peak camera for my needs. So @gethin maybe look at an S9, because straight out of the box it’s very high spec, and really it’s only the body that is a bit weak, but beef it up with a cage and it’s 💪 And used, pretty cheap!
    1 point
  41. That's really unfortunate. His Vimeo is still up, and his Instagram too, though they haven't been updated recently. His content output decreased a lot once Gunpowder passed, but he had already been less active as I think he became more and more disillusioned with the entire YouTube/Filmmaking/Photography scene. I hope he is well and creating the art that he loves.
    1 point
  42. Henryo

    The D-Mount project

    Thanks for chipping in. Yes Pocket og II would do really well. I actually have had so much fun shooting with it since the video and will have more stuff to put out soon. Watch this space. When I say it is so much fun, it is actually very inspiring just taking it with me wherever I go with a few batteries and capturing life happening around me. It is a great storytelling tool. Take care and have a great holiday everyone.
    1 point
  43. Train Dreams on Netflix. I don't know how many of you have already seen it, but I still wanted to recommend a film that I watched a few nights ago on Netflix that hit me right in the heart. It was presented at the last Sundance Film Festival but then went directly to Netflix without going through movie theaters. And it's a real shame, because the cinematography is stunning. Shot in 3:2 in the Idaho forests with an almost documentary feel. The story is infinitely sad and very slow. The reviews are almost all enthusiastic, and it has become one of the most viewed films on the platform in recent weeks. The negative reviews accuse it of being truly slow, but in my opinion and that of others, the beauty lies precisely in the film's slowness. If you manage to get into the mood, it hits you right in the heart. I don't want to spoil too much about the story, but the ending truly moved me. The DoP says that the film was shot almost entirely using natural light (à la Lubezki), and many scenes are set at dawn, sunset, and nighttime using real candlelight. The chiaroscuro is a delight for the eyes. https://filmmakermagazine.com/129137-interview-cinematographer-adolpho-veloso-train-dreams-sundance-2025/
    1 point
  44. I think people are mistaking pretty with good cinematography. There’s good cinematography and there’s bad cinematography, and then there’s cinematography that’s right for the movie. In this case it looks right for the movie.
    1 point
  45. I'd argue it is the MOST important, because without the camera, you don't have a picture. It is the small differences between the latest sensors and codecs that are the unimportant thing. In cinematography, our job isn't to worry about the costumes or set pieces; that's the job of someone else. So lighting and camera are the most important for a DP. What has happened is that the gap between the top end, i.e. ARRI, and the cheap stuff has closed up. This has been going on ever since the start of the DSLR revolution, so it's not a new thing, but the gap has never been smaller than it is now, for example between something like the Alexa 35 and a $1000 used Panasonic S1H. By the way, although Magellan has beautiful content and really nice camera-work, the sharpness of it and the deep DOF isn't everybody's cup of tea. It does look a bit too soap opera in parts of that trailer, I think. It looks very different to an IMAX-shot film. So there are still big differences between formats and lenses... The same cinema focal length on 16mm, for example, has always looked vastly different to the same on IMAX or large format. Also there are big differences in grading style, camera movement style, and so on. I think most relevant for us is that you don't need to build a massive rig any more to get good results. It's horrible having the weight as a one-man DP. Probably why they used such a small camera on this.
    1 point
  46. My favourite AI outcome would be for it to fuck right off.
    1 point
  47. There are moments here I enjoy, but this forum feels more like a place to vent about gear than to actually discuss the craft of filmmaking. Where are the conversations on creative problem solving? How are people pulling off run-and-gun shooting in restricted areas without permits? What are some cost-effective practical effects techniques for horror films? Are certain shot compositions or camera movements more effective at evoking specific emotions in an audience? I know I can find some of this on YouTube, but part of the appeal of a forum like this is the ability to connect directly with professionals, exchange real-world experiences, and even spark collaborations. Is anyone still having these kinds of discussions here?
    1 point