Leaderboard
Popular Content
Showing content with the highest reputation since 02/25/2026 in Posts
-
Panasonic just released a new on-camera mic. It looks like an excellent option for events etc. where you want something small or really flexible. I've watched a few YT showcases and for me, the best features are:
- It gives you 32-bit without having to have the external mic preamp box (and then adding microphones to that, making it larger again)
- It's small, much smaller than an on-camera shotgun mic
- You can quickly swap between modes
- (I assume?) it's powered by the camera
- It unlocks the ability to record >2 channels of audio into the files (one person said you can record left/right/mono/mono-20dB as one combo, and left/right/left-20dB/right-20dB as a different combo)
It's definitely not magic and the laws of physics still apply. There don't seem to be any really good on-location stress tests posted yet, but there are a few examples. Media Division did an in-kitchen test comparing it to in-camera mics, a lav, and a DJI clip-on, and also applied a bit of AI voice isolation to see how far you can push it. Dustin did some good tests, including walking a 360 around the camera in each mode, which showed how directional it is - pretty impressive. He also compared it to the Sennheiser MKE440. Another video shows the different modes out in nature.
This is probably a complete revolution for a number of niche uses. Content creators would be one, recording in noisy environments while staying relatively close to the camera, where physics will be helping them. Another is where the flexibility really helps, like shooting events where pristine audio isn't an absolute must but working super-quickly is more important, and where the 32-bit would really come into its own. This reminds me of how people used to talk about Panasonic when the GH4 and GH5 were around: that Panasonic just listened to people and implemented the features people would actually use, rather than trying to be flashy and grab headlines.
This will be an invisible workhorse for lots and lots of people. 4 points
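The -20dB "safety" combos and the 32-bit point are easier to see with numbers. A minimal Python sketch (illustrative values only, not measurements from this mic): float recording keeps levels above full scale recoverable, and a safety channel is just a second copy at one-tenth the linear level.

```python
def db_to_gain(db):
    # Convert decibels to a linear amplitude multiplier.
    return 10 ** (db / 20)

# A loud transient that hits +12 dB over full scale.
peak = db_to_gain(12)                # ~3.98 on a 0..1 linear scale

# Fixed-point path: anything above 1.0 is clamped at capture - info lost.
clipped = min(peak, 1.0)

# 32-bit float path: the overs survive; pull down -12 dB in post to restore.
restored = peak * db_to_gain(-12)    # back to ~1.0, waveform intact

# The "-20dB" channels mentioned above are a duplicate recorded this much
# quieter, as insurance against clipping on fixed-point delivery chains.
safety_gain = db_to_gain(-20)        # 0.1, i.e. ten times quieter
```

The design choice this illustrates: with float capture the safety track is a convenience rather than a necessity, which is why 32-bit plus multi-channel combos is an appealing pairing.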
-
The Aesthetic (part 2)
TrueIndigo and 2 others reacted to kye for a topic
I'm back from Guangzhou, China, and starting to evaluate the footage, especially from my modified Takumar 50mm F1.4 with the custom "insert" made from post-it notes and sticky tape. I managed to get out and shoot with it on a couple of nights: one in Beijing Road and the other in Yong Qing Fang.
Some images from Beijing Road... these are all wide open, and lightly graded with Resolve and Film Look Creator. Overall, I'm really liking the aesthetic, which reminds me of mid-budget Hong Kong cinema, which I have a soft spot for. I mostly exposed to protect the highlights and then adjusted exposure in post under the FLC, and the GH7 has just enough DR for this, despite the scenes being quite challenging. The lens has a shallow enough DOF to direct the viewer's attention by choosing what is in focus, and the FOV (equivalent to a 71mm F2.0 on FF) is great for this type of scene, where a wide lens would mostly be overwhelmed by pure chaos.
Some images from Yong Qing Fang... same as above but with a touch of sharpening. This was a lot darker and I needed to push the ISO to get more levels in some scenes. It was also a lot higher DR, so some shots will be limited in how I can grade them in post, and I'll probably reach for NR in places. The lens is actually quite sharp in the middle, but the sides have more distortion than I'd like, with quite a bit of bokeh distortion and coma from bright sources.
The experiment with this "insert" was to see how strong a look it would be, and I think it's probably too strong, because the bokeh shapes are too distracting due to the sharp corners. It's distracting on frames with a clear subject (where you want the background to get out of the way), and on other shots it's pure chaos and completely negates the idea of directing where the viewer will look. Getting DOF this shallow on MFT isn't easy, so I'll have to think about it more for future trips. 3 points
-
My archive of GH2 patches:
1 GOP Intra 'moon' T7 - Top Grading - Best Motion - Best Setting Ever
1 - SpanMyBitchUp: good quality for spanning with long record times
2 - AQuamotion v2: medium-high quality with decent spanning record times + 80% slowdown / EX TELE
3 GOP 'Spizz' - Hi-Quality - Pro Motion
3 - TerrAQuake: SeAQuake but lower-quality frame sizes for poorer Class 10 cards
4 - SeAQuake: very high quality for high-end SD cards
6 GOP - Middle Earth 'Nebula'
7 - GH2 Flow Motion v2 - 100Mbps Fast Action Performance & Reliability
8 - T9 - GH4-like
9 - 12/15 GOP 'DREWnet' T9 - Traditional Long GOP 12
https://www.dropbox.com/scl/fo/blop84zqvmgob2qiab07v/ACl0mBKWCsMcidxL5qUcqgA?rlkey=3gb5igu910uyw5sipalzlovp2&st=1vsslgjs&dl=0
3 points
-
Looking for Gh2 patches
John Matthews and 2 others reacted to Andrew - EOSHD for a topic
I'll have a dig and host them here if I find the mother lode. 3 points
-
Getting prepped for my next trip and have further refined my setup. This trip is a quick one to China, but it's also a test case for a trip I'm taking later in the year to Europe, where the packing approach will be minimalism. Unlike the way I like to travel in Asia, the Europe trip will involve changing accommodation every few days, so packing and unpacking and hauling bags around will be much more of a pain, so I'll try to travel really minimally. As such, my approach for this trip is "when in doubt, don't take it" and see what I actually use. So the setup for this trip is:
- GH7
- 14-140mm F3.5-5.6 zoom, which I use during the day at F5.6, which means my 1-5 stop vND is enough
- 12-35mm F2.8 zoom, which is a great walk-around lens after dark
- Takumar 50mm F1.4 with M42-MFT Speedbooster (with bokeh insert) for "night cinema"
- iPhone 17 Pro setup (Neewer phone filter mount, K&F 1-9 stop vND, MagSafe Popsocket)
The GH7 and zooms are self-explanatory, so here's the 50mm F1.4 setup. I have played around with "inserts" and ended up with a pretty extreme design, so this is a test to see if the vertical edges are too strong a look for me. It's made from the sticky part of a post-it note, with a layer of sticky tape over the top to keep it a bit more together. It sits between the speed booster and the lens, and I won't use the speed booster for any other lenses while travelling, so it will stay in there and protected, and doesn't need to be that robust. It's a strong look in some situations and quite "painterly" in others, so I'll be curious how it goes.
For my iPhone 17 Pro, it's a phone most of the time and a camera only as a backup, so I searched for a setup that would:
- Protect my phone from drops (I dropped it on the last trip and the screen shattered, despite it being in an Apple case - the only one available at the time... sigh)
- Still be right-sized for getting in and out of pockets etc
- Have a vND solution for when I want to shoot and use 180 shutter
I'll spare everyone the rant about the options out there (everyone wants you to buy into their "ecosystem" now). I ended up with the Otterbox Defender Series Pro case, which makes the iPhone feel even larger than it did in the Apple case (which doesn't seem possible but is true), but seems very robust. The vND is the Neewer phone filter mount, which sort-of clips onto the phone (it's designed to screw onto and clamp the phone, but you're clamping against the screen, so I wouldn't tighten it that much). It's designed for a naked iPhone, so I had to modify it (and the Otterbox case) slightly where the two interfered, to get it to sit a bit flatter. It still doesn't sit flush, but it goes on and seems to be fine. I haven't got around to actually taking it out to shoot with, so that remains to be seen. I paired it with the K&F 1-9 stop vND, which boasts 18 layers etc, but doesn't claim to be a "True Colour" one like the 1-5 stop ones do. It doesn't have hard stops and I think it still gives the X at the max amount, but I'll see how I go. Not having an aperture control sure sucks, even if you're not really giving up shallow DOF on a phone anyway.
That is all combined with the MagSafe Popsocket as a safeguard. I've used the adhesive popsockets before and they're great for giving a much better grip on the phone, but I wasn't sure how strong the MagSafe connection would be. The Otterbox claims to have magnets in it that strengthen the MagSafe connection, and this might be true. It feels quite sturdy actually, and I measured it at 1.75kg of force to pull off, compared to the 1.45kg it took to pull it off my naked iPhone 12 mini. No idea what strength a naked iPhone 17 Pro MagSafe connection would have, but it's not terrible.
Lots of compromises involved, but it's really my backup camera, and the Otterbox case is very grippy, so I'll see how I go. 3 points
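As a back-of-envelope check on the daytime claim, a sunny-16 sketch in Python (assumed baseline and illustrative numbers, not metered values from the trip) suggests roughly 4 stops of ND are needed to hold a 180-degree shutter at f/5.6 in full sun, which sits comfortably inside a 1-5 stop vND's range:

```python
import math

def ev(aperture, shutter_s, iso=100):
    # Exposure value, normalised to ISO 100: EV = log2(N^2 / t) - log2(ISO/100)
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# Sunny 16: full sun is roughly 1/100s at f/16, ISO 100.
sunny_16 = ev(16, 1 / 100)

# Target: 180-degree shutter at 25fps (1/50s), f/5.6, ISO 100.
target = ev(5.6, 1 / 50)

# ND stops needed to bring the scene down to the target exposure (~4 stops).
nd_stops = sunny_16 - target
```

Real scenes obviously vary either side of sunny-16, which is exactly what the variable part of a vND absorbs.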
-
New cinema camera...?
Emanuel and one other reacted to eatstoomuchjam for a topic
For sure, and I'd be excited to give it a try! I was a Kickstarter backer of the Z Cam E1 and I've bought a few Ribcage kits/cameras over the years. I was disappointed by all of them, but I'm still hoping for that magical/usable tiny sensor camera! I have a little bag full of D-mount and C-mount lenses just waiting to go on something! (I still wish there were a way to get a decent/non-laggy video feed from the Insta360 One R/RS series - I have a Ribcage-modded 1" module and the quality is really decent - but focus is hard, given the only options for monitoring are the camera's tiny screen or laggy wifi.) 2 points
-
I found an interview with the person who shot the underwater sections of a GoPro promo video (IIRC it was for the Hero 3 or 3+), and the level of effort they put into it was simply incredible. He had a team of about 5 - three crew and two cast - and they had a week for production. He was an independent DOP and had done some pre-production as part of his 'pitch' to GoPro to get the gig, but I think they did detailed pre-production during the week as well, plus camera tests and lots and lots of shooting. This was only for the underwater shots (the bikini girl diving beneath the waves). If we assume that each of the (maybe half a dozen?) locations got 5 people for a week, then that's ~7500 hours just to film the 1-2 minute promo video. The level of cherry-picking is extreme - professional DOPs pitching projects, travel to the most exotic locations, testing of all modes with all manner of equipment, everyone in cast and crew being professionals, long shooting days at the best times (golden hour, etc), dozens of hours of footage just to make a short promo. Then people set it to auto, hold the tiny camera in their hand, film their family at the beach with whatever lighting and weather happens to be there at the time, and then we wonder why it doesn't look like the promo videos... Having said all that, if GoPro make an interchangeable lens camera with a half-decent bitrate and a colour-managed LOG profile then it might be the tiny camera we've been wanting! 2 points
-
Any takers here? 2 points
-
Fav AI outcome out there...
John Matthews and one other reacted to fuzzynormal for a topic
Well, there's one hell of a metaphor in the "context" of this. 2 points
-
Panasonic says nothing for 30 minutes at CP+ 2026
eatstoomuchjam and one other reacted to Andrew - EOSHD for a topic
"Tsumura-san confirmed that photographers who compose through viewfinders 'strongly request the inclusion of an EVF', and that Panasonic is considering the balance between compact size and EVF inclusion as they work to 'meet the expectations of as many customers as possible'." LOL. GM5, anyone? 2 points
-
Panasonic says nothing for 30 minutes at CP+ 2026
eatstoomuchjam and one other reacted to MrSMW for a topic
Yes, I don’t get it either. You are in the full-frame LUMIX ecosystem and are considering a new camera, and on your interest list are the latest options from Nikon and from Sony. You are increasingly tempted to jump ship even though it’s a big move. Which of the two options below do you wish to hear?
A: We are working on a new flagship camera that will be launched in the Summer.
B: Expanded horizons, creative direction, global market strategies, Operation Epic Bullshit, waffle waffle waffle, blah di blah di blah.
I didn’t watch it and haven’t read a single word of it, but I'm 100% sure it wasn’t A. A perfect example of how not to retain customers. There is a saying which applies to all businesses, no matter the size - small, medium or large/international: just because you are in business doesn’t mean you are any good at business. Some companies are more clueless than others… 2 points
-
Panasonic on-camera mic released
John Matthews and one other reacted to Clark Nikolai for a topic
This reminds me of using my old Sony camcorder with the 5.1 surround sound microphone. I would shoot with it, then in post be able to isolate each channel and choose which one to use, ignoring the ones that were just location noise. Pretty handy, without much effort when shooting. This might be similar in that sense. 2 points
-
Panasonic on-camera mic released
kye and one other reacted to John Matthews for a topic
Fair enough. I just wish they'd spent the time and resources elsewhere. I want a small, up-to-date Panasonic M43 camera, not an overly complex version of an on-camera mic. This seems like a great product for 2010. But what do I know - maybe this is THE MIC, the one that everyone was waiting for. What I can tell you with 100% certainty is that people are ready and willing to pay vast sums of money for old gear, only because it's small. What happened, Panasonic? The whole miniaturization-of-components thing has apparently been disregarded. 2 points
-
Come on John... everyone knows that anyone who wants better footage than a smartphone can provide is 100% totally fine with a camera the size of a microwave oven that looks like a Borg prototype! Being slightly serious though, it's easy to criticise, but as someone who wants flexibility and better sound options, this is FAR better than the previous options, so it's a welcome addition in my eyes. The worst enemy of progress is criticising everything that isn't perfect in every conceivable way. 2 points
-
Panasonic on-camera mic released
Alt Shoo and one other reacted to John Matthews for a topic
It's great they came out with something new, but I wish they'd spent their time elsewhere. This product just doesn't seem like a priority. If audio is the priority, I'd rather have a set of 2+ wireless lav microphones that connect and record to the camera via bluetooth or wifi. Why hasn't anyone done that? 2 points
-
Where did Mattias Burling go? Youtube channel is gone.
MurtlandPhoto and one other reacted to Mattias Burling for a topic
Hello, I hope everyone is well! Even though I’m not really active on camera forums anymore, I frequently read the EOSHD blog and every now and then the forum, so I saw the thread and thought I would respond. Because it wasn’t ”poof gone” - it was announced on the channel over a year ago and mentioned in the last three videos. Before going into why: I'm super flattered that this thread exists. I mean that. So here are some thoughts on the matter and why I took it down.
Hobby vs Work
YouTube was never my job, just a hobby. So were video making and photography, in the beginning. When I started the channel I was working as a producer after a couple of years as a radio/TV reporter, so I started the channel to keep my practical skills fresh, and to keep up with the development, which was huge at the time - the DSLR revolution, Blackmagic, cheaper editors etc. Fast forward a couple of years and I started making more videos at work again. At the same time I pretty much lost all interest in doing it as a hobby, and actually cancelled the channel. Winston Churchill was definitely right in saying that work and hobbies should not be too similar. But what I had discovered was a passion for still photography, which I had pretty much no experience with. So I started making videos again. That’s why my videos became very repetitive and short. I didn’t care about that part; I just wanted to display my stills work and get feedback, talk to the community, experiment with cameras and develop. After a few years I became a good enough photographer that my new employer noticed, and just like that I was shooting stills professionally all the time. And I still do (I work in marketing and PR). It’s a huge bonus in my field, and if you are good at it you will never be out of work. So photography also became less and less of a hobby. Instead I found other hobbies - things that, for example, got me out into nature - so photography tagged along for a while as a secondary activity. But eventually it faded. It was also nice to do things and not share them with people. I know I could probably have a very successful channel making videos about my current hobbies, and even make some money. But I never really wanted a channel for the sake of a channel, and I always had a full-time job. The fact is that at no point would I have been able to live off my channel, not even at the peak. Even with sponsors it was never more than a regular salary (in my field and country). But as long as it was a hobby and I was glad to do it, it was a welcome addition to finance camera gear.
Time
At the same time as my channel started to feel less fun and other hobbies started taking my time, I started a family. So... you get the idea: full-time job + family + 2-3 hobbies = no YouTube.
Upkeep
So why take it down, why not leave it for the community? I did... at first. Like some of you pointed out, the YouTube crowd in the photography/video space is generally nice and positive. That is my experience as well. Early on I learned that a good way of keeping the trolls away was to be present - respond and engage. Trolls are usually idiots or cowards, so they don’t like getting pushback. But once I stopped making videos, views and comments obviously went down, and the trolls started coming back. Not so much after me, and I don’t care about that, but against the community. The people commenting started being nasty towards each other. I felt a responsibility to moderate, which was annoying. That’s when the thought of simply removing it started to grow. It wasn’t an impulse - it was an internal debate that went on for months. And the issue grew much, much larger than a couple of trolls. I started thinking about five years ahead, 10 years, 30 years... This post is already way too long so I won’t go into all of it, but I think you get the idea when I say: privacy, or when the content no longer reflects the creator. Digital minimalism, control over one’s narrative, inactive or outdated content. Risk of misuse of content due to me not checking the terms updates. Closure.
So there is a looong ramble :) To keep in the spirit of the forum I can share my current gear for pro work :) For the longest time I used the EOS-R for 75% of all my work and the R5 (rental) for the rest. It wasn’t mine, but my employer told me to buy whatever I wanted. Paired it with a 28, 35 and 70-200. 70/30 stills/video. The R5 is peak camera imo. Today is a little different. I started working for a new company about a year ago and again was told to buy what I needed. I would have bought the R5 without hesitation if it wasn’t for the Sigma 35-150/2-2.8... I just had to have it. So I ordered the Nikon Z6iii. It’s not as good overall as the R5 for me and what I like in a tool camera, but it’s 90% there, and coupled with that lens it becomes on par. //MB 2 points
-
The Aesthetic
Aussie Ash reacted to kye for a topic
We're all talking about aesthetics. We're talking about aesthetics when we talk about the "look", but we're talking about it when we talk about specifications too. A debate rages about what is "enough" resolution, "enough" sharpness, "nicer" bokeh... what is "cinematic"... what is "visible"... what is "practical". This thread is a reality check against the warped concepts that stills photographers and their camera-club specification obsessions have given us. Because, for the most part, better objective measurements are mostly worse subjectively. It's our imperfections that create our humanity, and it's analog imperfection that creates emotional images.
Baseline
First, let's establish a baseline. Here are some test images from ARRI that are designed to showcase the technology, not a creative aesthetic. Note the super-clean image and the lack of almost all lens distortions (except wide-angle distortion on the wide lens, which is actually super-wide at 12mm). If you were there, this might be what it actually looked like. Those were grabs from a 4K YouTube upload, but lots of trailers aren't uploaded in 4K, so here is a still from a 1080p YouTube video that ARRI uploaded in 2010. Now, without further ado...
The Aesthetic - The Chilling Adventures of Sabrina
These are obviously very distorted, and I chose frames that were especially so. This should instantly disabuse you of the idea that somehow Netflix demands "pristine" images - these are filthy as hell, but this is appropriate to the subject matter, which is about witchcraft, the occult, demons, and literally, hell. Whenever I hear someone say "oh I can't believe how terrible that lens is - look at the edge softness" I just laugh. The person may as well be saying "not only do I not have a clue about film-making, but my eyes don't work either... please ignore everything else I say from now on".
The Aesthetic - Sex Education
A show with a deliberately vintage vibe; the look is suitably vintage, with some pretty wicked CA. One thing that's interesting is the last shot, which was either shot on a drone with a vintage lens or doctored in post, because it has pretty severe CA - look at the bottom right of the frame above the Netflix logo. Also note how nothing looks sharp - the first image should have had something in focus, but softness of this level is deliberate, because once again the last shot is a deep-focus shot with a stopped-down aperture and should be super-sharp but isn't.
The Aesthetic - No Time To Die
Some shots are softer than others, but note the amazing barrel distortion and edge softness on the middle two shots. In case you missed it, here's the star of this $400M movie in a pivotal scene from the movie. Are there lenses that could have made this shot more "accurate"? Sure - just scroll up to the ARRI shots, which look pristine (and they're ZOOMS!). This was deliberate and is consistent with the emotion and narrative. The Aesthetic.
The Aesthetic - The Witcher
Sharp when it wants to be - oversharp even, see the second image - but with anamorphic bokeh for the look. Ironically, a fantasy story of witchcraft and monsters, using cleaner, more modern-looking glass. The complete opposite approach to Sabrina. Note on the second-last image the vertical anamorphic bokeh, and then look at the last image and note the "swirl" in the bokeh. I doubt this was accidental.
The Aesthetic - You
Sharp and clean when it wants to be, and other times, really not. Appropriate for the subject matter.
The Aesthetic - Squid Game
Clean, sharp but not too sharp, neutral colour palette - but note the vertical lines on the edges of the frame aren't straight. Subtle, and perhaps not deliberate, but picture it in your mind if they didn't flare out... it makes a difference, deliberate or not.
The Aesthetic - Bridgerton
Clean, spherical, basically distortion-free, but sharp? No. Go look at that Witcher closeup again for some contrast.
The Aesthetic - The Crown
Clean and relatively distortion-free, but lots of diffusion, haze, and low contrast when required. The Crown is a masterpiece of the visuals matching the emotional narrative of the story, which is made extra difficult because the story is set in reality, and the emotional tone is so muted that had lesser people been involved in making it, any subtleties may well have simply been bland rather than subtle but deep.
The Aesthetic - Mindhunter
Perhaps the most interesting example here. After looking at the previous images, the above might seem completely unremarkable - except that this look was created in post. And I mean completely in post. More here: https://filmmakermagazine.com/103768-dp-erik-messerschmidt-on-shooting-netflixs-mindhunter-with-a-custom-red-xenograph/#.YftoaC8RrOQ and here: https://thefincheranalyst.com/tag/red-xenomorph/ (there's a great video outlining the lens emulations in post in this one).
That's enough for now. Hopefully now you can appreciate that "perfectly clean" optics are actually only perfect for "perfectly clean" moments in your videos. Sure, if you're out there doing corporate work day in and day out then it might seem like "clean" is the right way, or if you're in advertising or travel where neutral reigns, but when it comes to emotion, it's about choosing the best imperfections to suit the desired aesthetic. 1 point
-
The Aesthetic
Aussie Ash reacted to kye for a topic
Like almost everything of value! Seriously though, one of the best reality checks you can do is to find the all-time best examples of whatever you're doing and study them. When I did this it took almost every one of my previous references and relegated them to below 5/10, and made the 'most recent' on YT and streaming platforms look like toddlers playing with crayons. 1 point
-
I am working on my own NLE for the Mac
eatstoomuchjam reacted to Andrew - EOSHD for a topic
Hiding all the clutter would go a long way to making stuff like Resolve more usable. It's fine for professionals who actually need and use 1000 features, but when you just want a quick turnaround on a piece of video news journalism or a YouTube edit, it's total overkill central, and for newcomers it's totally baffling and creates a sense of dread. FCP's magnetic timeline is the sort of thing you need to learn and read the manual for; it never felt intuitive compared to Premiere. It doesn't work well for soundtracks and sends stuff out of sync - maybe I was using it wrong, but I never figured it out myself and gave up on it (like a good proportion of the pro market did). The situation today is that we have a few iPad apps that are vaguely decent and a few Mac NLEs that look like Windows XP apps with too much clutter. But if people have constructive design ideas for an alternative solution I'm all ears 🙂 The EOSHD NLE is already under way and a basic prototype exists. 1 point
-
I am working on my own NLE for the Mac
Andrew - EOSHD reacted to Clark Nikolai for a topic
Sounds interesting. I would say that FCP's magnetic timeline is what makes it so fast (once you learn it), and that you can hide all the clutter and make it look simple. But I don't want to discourage you on this project - give it a try. For suggestions, I don't have many, but I suppose being able to run on old hardware and old OS versions. There are plenty of old Mac Pro towers out there from 2010 still working away, mostly at places that do videotape digitizing in standard definition and people running old telecines where the software won't run on new computers. Good luck. 1 point
-
I have one. I don't mind it for when I do road trips - the 15mm Olympus lens, I mean... You can use it like a lens cap: push the lever and away you go. Although I do worry that ultimately dust or dirt is going to find its way into the lens... it takes up no room whatsoever. You do get a slightly different look; it's a simple lens, and I guess you either like it or you don't. If your camera has peaking, that helps with depth of field, I find. I would have liked the 9mm as well but that hasn't eventuated yet.
Not so sure about the moon shot. The moon is pretty bright - if you're using a tele lens of some kind you don't need to venture too far from a normal ISO and shutter speed. I can string a lens combo together of about 950mm on an MFT mount, and I can tell you that the image of the moon that's been supplied is huge. It would be interesting to see how they did it; my money's on this new GoPro gaffer-taped to a telescope of some kind, at least 1200mm maybe, as my images of the moon aren't that big. I'm also willing to bet GoPro gave their camera to someone who's heavily invested in astrophotography with all the gear and said, here, have a play with this. It's actually a very nice image of the moon all things considered. I am confident that optimum conditions and a fair amount of skill were involved in that photo of the moon. Same with all the other images supplied - they all look shot under optimum "conditions". GoPro might sell a bunch of these if all you have to do is gaffer-tape one to the back of a telescope and get similar results. There are plenty of enthusiasts out there who would buy one; however, if you have the gear already, you probably have a decent camera already as well...
I like the little GoPros; I think they're pretty cool considering what you can do with them. They are a great little action camera, maybe not a great cinema camera, but that comes back to the owner and the time, effort and money they want to put into it. My "gripe" with GoPro is that it's all digital: digital stab, digital zoom. I don't like the fisheye too much, so I shoot linear, which is a digital zoom, I believe. Now I'm not knocking the digital stab or the zoom per se, but there's no optical with GoPro - if you want optical, it's up to you to supply your own diopters or other type of "kit". I would be interested if GoPro did some kind of two-lens system, like a wide angle and a normal lens that you could twist on and off like the front element does, but I fear with the new GP3 there will just be more digital and an AI moniker 😉. 1 point
-
I am working on my own NLE for the Mac
eatstoomuchjam reacted to Andrew - EOSHD for a topic
Is the '3D LUT editor' this one? https://3dlutcreator.com - it costs 99 quid? Seems very pricey when Resolve can be used as a LUT creator for free. Still, that's a sledgehammer to crack a nut. And then you have iMovie, which is a toy hammer to crack a nut! I'm thinking of something else... maybe an NLE and LUT creator all in one for $39, which does away with all the bloatware and speeds up your workflow for 99% of edits. I have used Luma Fusion, a touch-screen tablet/phone app - I don't know about you, but I hate NLEs on a phone, you just don't have enough screen real estate. 1 point
-
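For anyone curious what these LUT tools are actually editing: a 3D LUT is just a text grid of RGB triples. A minimal Python sketch that writes a loadable .cube file (the smoothstep "contrast" curve here is a made-up toy look, not any product's grade; filename and title are arbitrary):

```python
def write_cube(path, size=17):
    """Write a 3D LUT in the common .cube text format.

    Data rows run red-fastest, then green, then blue, each channel
    sampled at `size` points from 0.0 to 1.0.
    """
    with open(path, "w") as f:
        f.write('TITLE "toy_lut"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    rgb = [c / (size - 1) for c in (r, g, b)]
                    # Toy look: a gentle s-curve (smoothstep) per channel.
                    out = [x * x * (3 - 2 * x) for x in rgb]
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))
```

A 17-point LUT like this is only 4,913 rows; grading tools interpolate between the grid points, which is why even small LUTs look smooth.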
HBO have their first production underway using the Blackmagic URSA 17K
sanveer reacted to Aussie Ash for a topic
Not a very good interview, but the short and curly of it is: the URSA Cine 17K 65 retails for approximately US$29,995. In contrast, the Arri Alexa 65 is unavailable for purchase and must be rented, with high-end productions often spending hundreds of thousands on camera packages. 1 point
-
The Aesthetic
kye reacted to Aussie Ash for a topic
Handy material here that has become buried in the archive. 1 point
-
HBO have their first production underway using the Blackmagic URSA 17K
Aussie Ash reacted to eatstoomuchjam for a topic
Indeed, though "Ursa Cine LF" would be a totally fine name. There's just no need to say "12K" on there. For the other? "Ursa Cine 65", since the 65mm sensor is going to be a whole lot more exciting to most people than telling them it has a 17K sensor that they will mostly use in 8K or 4K mode. Just about the only competition on the market, the rental-only Alexa 65, is a 6.5K camera. That is, of course, unless you're selling something to be used for projecting on The Sphere in Vegas. Then I guess there's a single camera that's competing - like literally just one body, as far as I know - that 18K thing with the 75x75mm or so sensor. I've been a bit tempted a few times now to try to rig up something with a medium format ground glass or a 4x5 GG, similar to an old DOF adapter for camcorders. I remember Gale had the Forbes70 which worked that way, and there have been some 8x10 projects doing the same, like the FZero from Salazar and the one that Media Division put together. I should probably just go through my pile of project cameras to see if I have a pretty clean GG around and just do it... 1 point
-
HBO have their first production underway using the Blackmagic URSA 17K
eatstoomuchjam reacted to Aussie Ash for a topic
At least RED chose some catchy names: V-Raptor, Red Raven, Dragon, Helium, Gemini, Monstro, Komodo, Red Epic. 1 point
HBO have their first production underway using the Blackmagic URSA 17K
MurtlandPhoto reacted to eatstoomuchjam for a topic
I really wish BMD would stop putting the maximum resolution in the product name! I make these jokes too about my own UC 12K ("finally, something for you to play on your 12K TV!"), but in reality, it's just a really nice 8K/4K camera which has a bonus 12K mode for... mostly VFX shots. 😅 1 point
HBO have their first production underway using the Blackmagic URSA 17K
eatstoomuchjam reacted to Andrew - EOSHD for a topic
At this point Blackmagic should just buy ARRI and take over 🙂 1 point
That’s a lot of the (my) reason for sure, and because you often need to be right up in people’s grills, and then there is the distortion and massive hands… 18mm on full-frame is the widest I have, and the widest I’ll go, for all of these reasons, plus I just don’t like the look. Never have. But when used for ‘artistic purposes’, i.e. with ‘intent’, that is a whole other thing. 1 point
-
New cinema camera...?
kye reacted to eatstoomuchjam for a topic
Sure, slow zone focusing lenses are definitely a possibility. I used that exact lens for part of my YouTube review of the Z Cam E2c if I remember right, pushing the compactness of the body/setup. It's a potato, but it's alright in the context of "it's a body cap you can use to make photos." You could also go the route of MS Optics-style designs. I have their 21/4.5 triplet and there are most definitely compromises to get it as small as it is, but it's also a really fun lens with better quality than the Olympus cap (and full frame-ish coverage). A design like that one or their 24/4, but stopped down to f/8, could be interesting. Indeed - and I don't think they've said that the images are SOOC. It's hard to know how much editing was done on the moon shot. It's also not out of the question that an auxiliary lens was used to make it more telephoto. This is the importance of waiting until devices are in the hands of real consumers before getting too hyped. There aren't many things that I'll preorder, and the number gets smaller every year, getting replaced by the number of things that I'll wait until are available used for at least a 20-30% discount. X-M5 for $900? Shrug. X-M5 for $824-874 on MPB? Get bent, MPB. X-M5 for $781 on Adorama? Starting to move in the right direction... Yes. And the close-up stuff could, theoretically, be done with some of the nicer existing action cameras with a diopter. 1 point
New cinema camera...?
eatstoomuchjam reacted to kye for a topic
Could do, I guess there are options. One thing that comes to mind for the vlogger crowd is having a small manual focus that goes between two useful focal distances, like vlogging distance and normal infinity focus. This is how the Olympus 15mm F8 MFT pancake lens works, and it's surprisingly functional. It sort of sits in that middle-ground where you need to adjust focus because you can't get 30cm to infinity in focus at the same time (like a normal GoPro can), but the DOF is still deep enough that you don't really need much control over it. In practice it's sort of like a switch where you're either at one end or the other. Looking at those GoPro sample shots, both the shallow-DOF shots are relatively macro, so that doesn't need a large sensor or a super-fast lens, but the moon shot might actually be the more difficult one, requiring both a long focal length and also a larger aperture to get enough light. I don't really do astrophotography, but the moon means approaching higher ISOs, I would imagine. Seriously though, there are probably 5-year-old Android phones that could replicate both those images, so I'd suggest that most of what we're seeing is hype and that GoPro shares the same definition of cinema that most YouTubers do. 1 point
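The two-position focus on that Olympus cap is really just hyperfocal arithmetic. A minimal sketch (Python rather than anything camera-specific; the ~0.015 mm circle of confusion for MFT is an assumption, and this uses the usual thin-lens approximation):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.015):
    # Hyperfocal distance H = f^2 / (N * c) + f.
    # Focused at H, everything from roughly H/2 to infinity is acceptably sharp.
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

h = hyperfocal_mm(15, 8)  # the Olympus 15mm F8 body cap on MFT
print(f"hyperfocal ~{h / 1000:.2f} m, near limit ~{h / 2000:.2f} m")
```

That puts the near limit at roughly a metre, which matches the post: 30cm vlogging distance falls inside it, so the lens genuinely needs its two focus positions.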
New cinema camera...?
kye reacted to eatstoomuchjam for a topic
Oh yeah, they absolutely do. And as the article you linked said, they look like optical shallow DOF instead of simulated! Another possibility is that they release two versions, one with a small sensor for the traditional action sports use case - and one with a bigger sensor for the vlogger crowd. If they DID release something like the Z Cam E1, but with a modern SOC supporting 10-bit, a flat profile, and a decent H.265 implementation, I'd be excited for a GoPro for the first time in years.1 point -
New cinema camera...?
eatstoomuchjam reacted to kye for a topic
All true, but the sample images from the promo video all have shallow DOF, so that means another kettle of fish entirely with AF and/or focus guides (peaking etc). I'd question if it might have lidar rather than PDAF etc, but it's a GoPro, so let's just assume it's 95% marketing and only 5% actual specs, like almost everything else about their cameras (no proper log profile, barely-passable bitrates, etc).1 point -
New cinema camera...?
eatstoomuchjam reacted to kye for a topic
1 point -
Ultra wide lens used to shoot "Poor Things" 4mm , 8mm Nikkors & 10mm Arri/Zeiss
Aussie Ash reacted to kye for a topic
Wides are a completely different thing depending on the circumstances. If you're hand-holding and moving around for video it's a completely different beast than doing stills or doing video but on a tripod with very careful camera placement and subject movement etc. I also think it's pretty difficult to make wide angle lenses look professional - that demo from ARRI showcasing their ultra-wide zoom had more "amateur with an action camera" vibes than a shallow-DOF 85mm portrait shot from the standard video mode on a 5DII. This is the elephant in the room for amateurs - the pros choose equipment in support of the vision of the project whereas amateurs choose an aesthetic and then use it for completely inappropriate projects.1 point -
New cinema camera...?
eatstoomuchjam reacted to BTM_Pix for a topic
It sounds like an off-road vacuum cleaner. 1 point
New cinema camera...?
eatstoomuchjam reacted to MrSMW for a topic
Do I need to use a VPN and a private tab to Google ‘Hardcore Henry’? 1 point
I'm making films with it. I was recently offered enough for an upcoming feature that it would more than pay for the camera, but it would have also been like half their budget. I suggested a percentage instead - which will almost certainly yield less money, but allow that money to be spent to improve the parts of the film which are more important than my camera, which is most of them. 😅 Just footage in general? The most recent couple of shorts on my YouTube channel (https://youtube.com/eatstoomuchjam) were with the UC12K. Otherwise, most of what I've shot is in short films that are still in festival rotation. Between the UC12K and the GFX 100 II, I'm happy enough with both. If I'm on set, I'm most likely to bring the Ursa, Ronin 4D, R5, and Pocket 3. If I'm taking photos or traveling, the GFX or R5, maybe with the Pocket 3. If I need to travel lighter or be more inconspicuous, the Komodo-X and/or Komodo, probably with the Pocket 3. I'm thinking about taking the train to Chicago to do a 48 in a couple of weeks. If I do, I'll most likely bring the Komodo-X and Pocket 3 with my DJI Mini 3 Pro - that's an entire shooting setup that easily can fit in a backpack, more or less - and with EF lenses, I can use the Canon VND adapter when outside and the Canon focal reducer when inside.1 point
-
It’s only a matter of time before some big movie is shot exclusively on a GoPro or DJI… 1 point
-
New cinema camera...?
eatstoomuchjam reacted to newfoundmass for a topic
I don't think this makes it a cinema camera... 1 point
Where did Mattias Burling go? Youtube channel is gone.
John Matthews reacted to Emanuel for a topic
That tracks. 1 point
Where did Mattias Burling go? Youtube channel is gone.
John Matthews reacted to Hbomberguy for a topic
I really appreciate you taking the time to come post about this on here - I was wondering what happened and I'm glad I found this thread when I checked again recently. I found some of your old videos + reviews really entertaining and useful when I was considering trying blackmagic cameras, so thanks very much for that. I have a lot of fondness for some of those videos, and if I'd realised the channel would completely disappear I probably would've saved them. 10 years from now it would be nice to be able to look back on them.1 point -
This. Every review for the past decade has talked about firmware updates: "hopefully the brand will do this or fix that". They never do. I'm in the Nikon ecosystem, so I'm tempted to sell my X-H2S for the ZR, but it's more of a sideways move. I'd lose open gate, and Fuji has a great image. But the Fuji isn't reliable, especially with AF. 1 point
-
Panasonic says nothing for 30 minutes at CP+ 2026
John Matthews reacted to mtol for a topic
Agree, this is a waste of time. They should say "a cinema camera is coming" or "the S1II is our cinema camera". Either would help a lot of consumers decide what their next move is. 1 point
Panasonic says nothing for 30 minutes at CP+ 2026
John Matthews reacted to Ilkka Nissila for a topic
On the CineD website, there is a text version summarizing the interview - much less time-consuming to digest. 1 point
Doh - forgot to list the 9mm F1.7 lens. That's the ultra-wide I'll be taking too. So the total count is one body, 5 lenses, and my phone with vND.

I was slightly conflicted about the "wide-angle night cinema" slot. The SB+50/1.4 is equivalent to a 71mm F2.0 on FF, so having something wider seems an obvious thing, but I'm just not sure if I would use it. I've mentioned the 12-35mm F2.8 as my night walk-around lens, and when combined with the GH7's low-light capability it's a fine combination, but it's not crazy fast/bright and isn't the best "cinema" option around. The things I considered were:
my TTArtisan 17mm F1.4, which is small and light and, despite being soft wide-open, is probably quite cinematic
my 14mm F2.5, which is small and light but is bettered by the 12-35mm on flexibility grounds, being a zoom
my Voigtlander 17.5mm F0.95, which is a great performer but is quite heavy
my c-mount 12.5mm F1.9, which is a similar FOV when you crop in to its S16 image circle
my 9mm F1.7 combined with the GH7 cropping, which is fast but sacrifices resolution and doesn't have the DOF advantages of the other options (although I am already taking it)
SB + 28mm F2.8 combos, but it's hard to get a reasonable-quality 28mm F2.8 in M42 mount and it's not that fast anyway
I opted to take the 12-35mm (which I sort-of take as a backup lens to the 14-140mm zoom), but if I do end up wanting a wider fast lens for night cinema, I think I might just bite the bullet and get the PanaLeica 15mm F1.7, as it'll be light, have AF, and be sharper than I could ever want. I looked at the reviews of a bunch of budget F1.4-or-faster lenses around the 14-20mm mark, but I'd never be sure if one was as sharp as I'd like, and spending money on something that isn't that much faster than my 17/1.4 or that much lighter than my 17.5/0.95 seems silly.
MFT is the wrong format for ultra-fast wide lenses, and I already have lots of options for something I might not use, so the whole thing might end up being academic anyway. 1 point
-
Panasonic on-camera mic released
John Matthews reacted to MrSMW for a topic
Yes. This to me is just a somewhat expensive, limited-use vlogger's device that has zero serious use case for my needs. I already have 2x Sennheisers that fill this role, at €150 for the pair of them. 1 point
Panasonic on-camera mic released
John Matthews reacted to Thpriest for a topic
If it were half the price it'd be interesting, but there is a lot of cheaper good gear out there. Still, it's good to see them innovating. I'm enjoying the S1mk2; I feel it's a big step up from the S5mk2. I hope it survives the summer! Panasonic now has quite a comprehensive line-up for all budgets. 1 point
Panasonic on-camera mic released
John Matthews reacted to MrSMW for a topic
Looked at and decided very quickly it isn't for me, but good to see LUMIX making stuff they at least think folks want. What they really want, however, is an S1H mk II in an FX3-style body with the screen mech from the S1II and the screen from the ZR. Then they will truly win the crowd and strut like gods of low-mid film-making. 1 point
Hi! I just stumbled upon this thread and thought I'd share an OKLab conversion DCTL I wrote about a year ago. It's written as a header file to be included in any other DCTL you may need it for. It supports conversion from ACES, DaVinci Wide and Rec.709/sRGB.

OKLab_Transform.h:

#line 2

#ifndef ENCODING_ENUMS_DEFINED_IN_UI
enum Encoding { gAcc, gAcct, gDWI, gLIN, g709, gSRGB };
#endif

#ifndef COLORSPACE_ENUMS_DEFINED_IN_UI
enum ColorSpace { cACES0, cACES1, cDWG, c709 };
#endif

// =============================================================
// Util
// =============================================================

__DEVICE__ float powCf(float base, float exp)
{
    return _copysignf(_powf(_fabs(base), exp), base);
}

__DEVICE__ float3 VecMatMul3x3(const float3 m[3], float3 v)
{
    float3 r;
    r.x = m[0].x * v.x + m[0].y * v.y + m[0].z * v.z;
    r.y = m[1].x * v.x + m[1].y * v.y + m[1].z * v.z;
    r.z = m[2].x * v.x + m[2].y * v.y + m[2].z * v.z;
    return r;
}

// =============================================================
// Matrices
// =============================================================
// These matrices are the concatenated forms of (colorspace -> XYZ -> OKlms),
// i.e. XYZToLMS @ ColorspaceToXYZ and XYZToColorspace @ LMSToXYZ.
// The original matrices are included as comments at the bottom of this file.

// ACES (AP0)
__CONSTANT__ float3 mat_ACES0_LMS[3] = {
    {  0.90454662f,  0.26349909f, -0.15602258f },
    {  0.35107161f,  0.6766934f,  -0.03056591f },
    {  0.13684644f,  0.19250255f,  0.62038067f } };
__CONSTANT__ float3 mat_LMS_ACES0[3] = {
    {  1.2881401f,  -0.58554348f,  0.29511118f },
    { -0.67171287f,  1.76268516f, -0.08208556f },
    { -0.0757131f,  -0.41779486f,  1.57228749f } };

// ACES (AP1)
__CONSTANT__ float3 mat_ACES1_LMS[3] = {
    {  0.64173446f,  0.35314498f,  0.0171437f  },
    {  0.27463463f,  0.63099904f,  0.09156544f },
    {  0.10036508f,  0.18723743f,  0.66212716f } };
__CONSTANT__ float3 mat_LMS_ACES1[3] = {
    {  2.04479741f, -1.17697875f,  0.10982058f },
    { -0.88115384f,  2.15979229f, -0.27586256f },
    { -0.06077576f, -0.43234352f,  1.57164623f } };

// ITU BT.709
__CONSTANT__ float3 mat_709_LMS[3] = {
    {  0.4122214708f,  0.5363325363f,  0.0514459929f },
    {  0.2119034982f,  0.6806995451f,  0.1073969566f },
    {  0.0883024619f,  0.2817188376f,  0.6299787005f } };
__CONSTANT__ float3 mat_LMS_709[3] = {
    {  4.0767416621f, -3.3077115913f,  0.2309699292f },
    { -1.2684380046f,  2.6097574011f, -0.3413193965f },
    { -0.0041960863f, -0.7034186147f,  1.7076147010f } };

// Davinci Wide
__CONSTANT__ float3 mat_DWG_LMS[3] = {
    {  0.68570951f,  0.45574409f, -0.14156279f },
    {  0.27427422f,  0.81179945f, -0.08604675f },
    {  0.04351009f,  0.15072461f,  0.80624495f } };
__CONSTANT__ float3 mat_LMS_DWG[3] = {
    {  1.8836253f,  -1.09713301f,  0.21364045f },
    { -0.63460063f,  1.57752473f,  0.05693684f },
    {  0.01698397f, -0.23570436f,  1.21814431f } };

// OKLab <-> Cone Response
__CONSTANT__ float3 mat_LMS_LAB[3] = {
    {  0.2104542553f,  0.7936177850f, -0.0040720468f },
    {  1.9779984951f, -2.4285922050f,  0.4505937099f },
    {  0.0259040371f,  0.7827717662f, -0.8086757660f } };
__CONSTANT__ float3 mat_LAB_LMS[3] = {
    {  1.0f,  0.3963377774f,  0.2158037573f },
    {  1.0f, -0.1055613458f, -0.0638541728f },
    {  1.0f, -0.0894841775f, -1.2914855480f } };

// =============================================================
// Transfer Functions
// =============================================================

// ACEScc
__DEVICE__ float ACEScc_DecodeBase(float v, float a, float b, float upperClampThreshold, float lowerDecodeThreshold, float two_m16)
{
    float out = v;
    if (v >= upperClampThreshold) out = 65504.0f;
    else if (v < lowerDecodeThreshold) out = (_exp2f(v * b - a) - two_m16) * 2.0f;
    else out = _exp2f(v * b - a);
    return out;
}

__DEVICE__ float3 ACEScc_Decode(float3 in)
{
    const float two_m16 = _exp2f(-16.0f);
    const float a = 9.72f;
    const float b = 17.52f;
    const float lowerDecodeThreshold = (a - 15.0f) / b;
    const float upperClampThreshold = (_log2f(65504.0f) + a) / b;
    float3 out = in;
    out.x = ACEScc_DecodeBase(out.x, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
    out.y = ACEScc_DecodeBase(out.y, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
    out.z = ACEScc_DecodeBase(out.z, a, b, upperClampThreshold, lowerDecodeThreshold, two_m16);
    return out;
}

__DEVICE__ float ACEScc_EncodeBase(float v, float a, float b, float negConstant, float two_m15, float two_m16)
{
    float out;
    if (v < 0.0f) out = negConstant;
    else if (v < two_m15) out = (_log2f(two_m16 + v * 0.5f) + a) / b;
    else out = (_log2f(v) + a) / b;
    return out;
}

__DEVICE__ float3 ACEScc_Encode(float3 in)
{
    const float two_m16 = _exp2f(-16.0f);
    const float two_m15 = _exp2f(-15.0f);
    const float a = 9.72f;
    const float b = 17.52f;
    const float negConstant = (_log2f(two_m16) + a) / b;
    float3 out = in;
    out.x = ACEScc_EncodeBase(out.x, a, b, negConstant, two_m15, two_m16);
    out.y = ACEScc_EncodeBase(out.y, a, b, negConstant, two_m15, two_m16);
    out.z = ACEScc_EncodeBase(out.z, a, b, negConstant, two_m15, two_m16);
    return out;
}

// ACEScct
__DEVICE__ float3 ACEScct_Encode(float3 in)
{
    const float a = 9.72f;
    const float b = 17.52f;
    const float X_BRK = 0.0078125f;
    const float A = 10.5402377416545f;
    const float B = 0.0729055341958355f;
    float3 out;
    out.x = (in.x <= X_BRK) ? (A * in.x + B) : ((_log2f(in.x) + a) / b);
    out.y = (in.y <= X_BRK) ? (A * in.y + B) : ((_log2f(in.y) + a) / b);
    out.z = (in.z <= X_BRK) ? (A * in.z + B) : ((_log2f(in.z) + a) / b);
    return out;
}

__DEVICE__ float3 ACEScct_Decode(float3 in)
{
    const float a = 9.72f;
    const float b = 17.52f;
    const float Y_BRK = 0.155251141552511f;
    const float A = 10.5402377416545f;
    const float B = 0.0729055341958355f;
    float3 out = in;
    out.x = (in.x > Y_BRK) ? _exp2f(in.x * b - a) : ((in.x - B) / A);
    out.y = (in.y > Y_BRK) ? _exp2f(in.y * b - a) : ((in.y - B) / A);
    out.z = (in.z > Y_BRK) ? _exp2f(in.z * b - a) : ((in.z - B) / A);
    return out;
}

// Davinci Intermediate
__DEVICE__ float3 DWI_Decode(float3 in)
{
    float3 out = in;
    float a = 0.0075f;
    float b = 7.0f;
    float c = 0.07329248f;
    float m = 10.44426855f;
    float log_cut = 0.02740668f;
    out.x = in.x > log_cut ? powCf(2.0f, (in.x / c) - b) - a : in.x / m;
    out.y = in.y > log_cut ? powCf(2.0f, (in.y / c) - b) - a : in.y / m;
    out.z = in.z > log_cut ? powCf(2.0f, (in.z / c) - b) - a : in.z / m;
    return out;
}

__DEVICE__ float3 DWI_Encode(float3 in)
{
    float3 out = in;
    float a = 0.0075f;
    float b = 7.0f;
    float c = 0.07329248f;
    float m = 10.44426855f;
    float lin_cut = 0.00262409f;
    out.x = in.x > lin_cut ? (_log2f(in.x + a) + b) * c : in.x * m;
    out.y = in.y > lin_cut ? (_log2f(in.y + a) + b) * c : in.y * m;
    out.z = in.z > lin_cut ? (_log2f(in.z + a) + b) * c : in.z * m;
    return out;
}

// ITU BT.709
__DEVICE__ float3 BT709_Decode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.081f ? out.x / 4.5f : powCf((out.x + 0.099f) / 1.099f, 1.0f / 0.45f);
    out.y = out.y < 0.081f ? out.y / 4.5f : powCf((out.y + 0.099f) / 1.099f, 1.0f / 0.45f);
    out.z = out.z < 0.081f ? out.z / 4.5f : powCf((out.z + 0.099f) / 1.099f, 1.0f / 0.45f);
    return out;
}

__DEVICE__ float3 BT709_Encode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.018f ? out.x * 4.5f : 1.099f * powCf(out.x, 0.45f) - 0.099f;
    out.y = out.y < 0.018f ? out.y * 4.5f : 1.099f * powCf(out.y, 0.45f) - 0.099f;
    out.z = out.z < 0.018f ? out.z * 4.5f : 1.099f * powCf(out.z, 0.45f) - 0.099f;
    return out;
}

// sRGB
__DEVICE__ float3 sRGB_Decode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.04045f ? out.x / 12.92f : powCf((out.x + 0.055f) / 1.055f, 2.4f);
    out.y = out.y < 0.04045f ? out.y / 12.92f : powCf((out.y + 0.055f) / 1.055f, 2.4f);
    out.z = out.z < 0.04045f ? out.z / 12.92f : powCf((out.z + 0.055f) / 1.055f, 2.4f);
    return out;
}

__DEVICE__ float3 sRGB_Encode(float3 in)
{
    float3 out = in;
    out.x = out.x < 0.0031308f ? out.x * 12.92f : 1.055f * powCf(out.x, 1.0f / 2.4f) - 0.055f;
    out.y = out.y < 0.0031308f ? out.y * 12.92f : 1.055f * powCf(out.y, 1.0f / 2.4f) - 0.055f;
    out.z = out.z < 0.0031308f ? out.z * 12.92f : 1.055f * powCf(out.z, 1.0f / 2.4f) - 0.055f;
    return out;
}

// =============================================================
// Convert
// =============================================================

__DEVICE__ float3 Decode(float3 in, int tFunction)
{
    float3 out = in;
    switch (tFunction) {
        case gAcc:  out = ACEScc_Decode(in);  break;
        case gAcct: out = ACEScct_Decode(in); break;
        case gDWI:  out = DWI_Decode(in);     break;
        case g709:  out = BT709_Decode(in);   break;
        case gSRGB: out = sRGB_Decode(in);    break;
    }
    return out;
}

__DEVICE__ float3 Encode(float3 in, int tFunction)
{
    float3 out = in;
    switch (tFunction) {
        case gAcc:  out = ACEScc_Encode(in);  break;
        case gAcct: out = ACEScct_Encode(in); break;
        case gDWI:  out = DWI_Encode(in);     break;
        case g709:  out = BT709_Encode(in);   break;
        case gSRGB: out = sRGB_Encode(in);    break;
    }
    return out;
}

__DEVICE__ float3 OKLab_OKLCh(float3 lab)
{
    float C = _hypotf(lab.y, lab.z);
    float h = _atan2f(lab.z, lab.y);
    return make_float3(lab.x, C, h);
}

__DEVICE__ float3 OKLCh_OKLab(float3 lch)
{
    float a = lch.y * cosf(lch.z);
    float b = lch.y * sinf(lch.z);
    return make_float3(lch.x, a, b);
}

__DEVICE__ float3 RGB_OKLab(float3 rgb, int colorspace)
{
    const float3* mat;
    switch (colorspace) {
        case cACES0: mat = mat_ACES0_LMS; break;
        case cACES1: mat = mat_ACES1_LMS; break;
        case cDWG:   mat = mat_DWG_LMS;   break;
        case c709:   mat = mat_709_LMS;   break;
    }
    float3 lms = VecMatMul3x3(mat, rgb);
    float3 lms_;
    lms_.x = cbrt(lms.x);
    lms_.y = cbrt(lms.y);
    lms_.z = cbrt(lms.z);
    return VecMatMul3x3(mat_LMS_LAB, lms_);
}

__DEVICE__ float3 OKLab_RGB(float3 lab, int colorspace)
{
    const float3* mat;
    switch (colorspace) {
        case cACES0: mat = mat_LMS_ACES0; break;
        case cACES1: mat = mat_LMS_ACES1; break;
        case cDWG:   mat = mat_LMS_DWG;   break;
        case c709:   mat = mat_LMS_709;   break;
    }
    float3 lms = VecMatMul3x3(mat_LAB_LMS, lab);
    lms.x = powCf(lms.x, 3.0f);
    lms.y = powCf(lms.y, 3.0f);
    lms.z = powCf(lms.z, 3.0f);
    return VecMatMul3x3(mat, lms);
}

__DEVICE__ float3 RGB_OkLCh(float3 rgb, int colorspace)
{
    float3 lab = RGB_OKLab(rgb, colorspace);
    return OKLab_OKLCh(lab);
}

__DEVICE__ float3 OKLCh_RGB(float3 lch, int colorspace)
{
    float3 lab = OKLCh_OKLab(lch);
    return OKLab_RGB(lab, colorspace);
}

// =============================================================
// Ref Matrices
// =============================================================
// DWG -> XYZ
//  0.70062239,  0.14877482,  0.10105872
//  0.27411851,  0.87363190, -0.14775041
// -0.09896291, -0.13789533,  1.32591599
// XYZ -> DWG
//  1.51667204, -0.28147805, -0.14696363
// -0.46491710,  1.25142378,  0.17488461
//  0.06484905,  0.10913934,  0.76141462
// 709 -> XYZ
//  0.4123908,   0.35758434,  0.18048079
//  0.21263901,  0.71516868,  0.07219232
//  0.01933082,  0.11919478,  0.95053215
// XYZ -> 709
//  3.24096994, -1.53738318, -0.49861076
// -0.96924364,  1.8759675,   0.04155506
//  0.05563008, -0.20397696,  1.05697151
// XYZ -> LMS
//  0.8189330101, 0.3618667424, -0.1288597137
//  0.0329845436, 0.9293118715,  0.0361456387
//  0.0482003018, 0.2643662691,  0.6338517070
// LMS -> XYZ
//  1.22701385, -0.55779998,  0.28125615
// -0.04058018,  1.11225687, -0.07167668
// -0.07638128, -0.42148198,  1.58616322

Using it in another DCTL looks something like this:

#line 2
#define ENCODING_ENUMS_DEFINED_IN_UI
#define COLORSPACE_ENUMS_DEFINED_IN_UI
#include "OKLab_Transform.h"

DEFINE_UI_PARAMS(p_InCSpace, Input Color Space, DCTLUI_COMBO_BOX, 2, {cACES0, cACES1, cDWG, c709}, {ACES (AP0), ACES (AP1), Davinci Wide Gamut, Rec.709 / sRGB / BT.1886});
DEFINE_UI_PARAMS(p_InGamma, Input Gamma, DCTLUI_COMBO_BOX, 2, {gAcc, gAcct, gDWI, gLIN, g709, gSRGB}, {ACEScc, ACEScct, Davinci Intermediate, Linear, Rec.709, sRGB});

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    float3 in = make_float3(p_R, p_G, p_B);
    float3 out = in;

    // Convert to OKLCh:
    float3 linear = Decode(in, p_InGamma);
    float3 oklch = RGB_OkLCh(linear, p_InCSpace);

    // Convert back:
    linear = OKLCh_RGB(oklch, p_InCSpace);
    out = Encode(linear, p_InGamma);
    return out;
}
1 point
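Not DCTL, but a quick way to sanity-check the Rec.709 matrices in a header like that outside Resolve is a tiny Python port of the same two matrix stages (these are Ottosson's published OKLab coefficients; reference white should land at L ≈ 1, a ≈ b ≈ 0):

```python
# Linear Rec.709 -> cone response and cone response -> OKLab,
# the same matrices as mat_709_LMS and mat_LMS_LAB in the header above.
M_709_LMS = [[0.4122214708, 0.5363325363, 0.0514459929],
             [0.2119034982, 0.6806995451, 0.1073969566],
             [0.0883024619, 0.2817188376, 0.6299787005]]
M_LMS_LAB = [[0.2104542553,  0.7936177850, -0.0040720468],
             [1.9779984951, -2.4285922050,  0.4505937099],
             [0.0259040371,  0.7827717662, -0.8086757660]]

def mul(m, v):
    # 3x3 matrix times 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rgb_to_oklab(rgb):
    # Positive linear Rec.709 input -> OKLab (x ** (1/3) is the cube-root nonlinearity)
    lms = [x ** (1.0 / 3.0) for x in mul(M_709_LMS, rgb)]
    return mul(M_LMS_LAB, lms)

L, a, b = rgb_to_oklab([1.0, 1.0, 1.0])  # linear white
print(round(L, 4), round(a, 4), round(b, 4))
```

This only covers positive inputs; the DCTL's powCf/cbrt handle the signed case for out-of-gamut values.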
