Leaderboard
Popular Content
Showing content with the highest reputation since 01/05/2026 in all areas
-
As many of you will have guessed, I’m not a rich teenage kawaii girl, so take my opinion on this camera with a grain of matcha. What I am, though, is a member of the unspoken demographic for it: the jaded old photographer with a bad back. So I’m not her, I’m not you, and you aren’t either of us, so I’m predicting your mileage will vary wildly. Which is a good thing. I enjoy it for what it is: a genuinely pocketable holiday camera that makes me take silly snaps far more frequently than I would with a “real” camera, because I’ve long since understood that holiday doesn’t mean assignment. Anyway, some silly soft, noisy snaps I took with it. Could I have taken these with my phone, and at likely better quality? Of course, but the question is would I? No, because getting my phone out of my pocket and wrestling with it like a wet fish when I see something interesting is not my idea of using a camera.6 points
-
Incidentally, FujiFilm have announced a new Instax camera today that is already boiling the piss of many people in the same way as the X-Half does. It’s basically a video-focused version of the Instax Evo, but it retains the printer and is based on their Fujica cine cameras of the past. The dial on the side lets you choose the era of look that you want to emulate. So it shoots video which transfers to the app, and then it prints a key frame still from it, complete with a QR code that people can then scan to download the video from the cloud. It looks beautiful and, based on my experience with the X-Half, if they made this with that larger format sensor (sans printer, obviously) then I would be all over it for all the same reasons I love the X-Half.4 points
-
Ok, first 2-week trip behind with the ZR, 35mm F1.4, 50mm F1.2 and 135mm F1.8 Plena, and here are some thoughts about the ZR. First the good sides. The ZR screen worked well enough for nailing focus and exposure, even when shooting into shadows in bright daylight, but you may want to max the screen brightness. Zooming into the image with the zoom lever was handier than with the Z6iii plus and minus buttons. Even with screen brightness maxed, the battery occasionally lasted about as well as the Z6iii with its EVF on normal brightness. I had to use a 2nd battery only a few times during 4-5 hour shooting days in cold, 0 to 10C conditions. I also brought the SmallRig L cage with me but did not use it, as it makes the ZR body taller than the Z6iii and about the same weight. Even with 1kg lenses the ZR felt quite comfortable to use and hold, but as a climber my fingers are not the weakest. I missed the Z6iii EVF a bit, but I also used different shooting angles and heights more, as the bigger screen is handier than the EVF for that. 32-bit float saved the few clipped audio takes I had pretty well, even though I don’t know if it is a true 32-bit pipeline from the Rode Wireless GO mic to the ZR. Still, the audio sounded a bit better than what I have gotten with the Z6iii and Rode. Exposing clips with R3D NE took a bit more time at first than with NRaw, but by using high zebras set to 245, the waveform, and Cinematools false color and Rec.709 clipping LUTs, it was quite easy to avoid crushed blacks and clipped highlights. R3D NE has manual WB, so I always took a picture first and set the WB using the picture as a preset. It worked pretty well, but not perfectly every time. I also shot NRaw in between to compare, but used auto A1 WB for it. It seems the auto WB did not always work perfectly either, but it was relatively easy to match R3D NE and NRaw WB-wise in post. In the highlights R3D NE clips earlier than NRaw, and this was clearly visible in the zebras and waveform.
Still, with R3D NE there was not much need to overexpose, and even when underexposing I needed to use NR in only a couple of clips where I underexposed too much. On last year’s trip with the Z6iii, before it had the 1.10 FW that improved the shadow noise pattern, I needed to use NR in many clips, until I realised I could raise high zebras from 245 to 255 without clipping. With R3D NE and NRaw, 4 camera buttons and one lens button were enough. I had 3D LUT and WB added to My Menu and that mapped to a button, so it was quite fast to change display LUTs or WB. WB mapped directly to a button or added to the i menu won’t let you set the WB from a taken picture as a preset; WB set in the i menu does let you measure the white point and set that, though. In post I preferred the R3D NE colors over NRaw in almost all of the clips I took, except in a few clips where NRaw had more information in the highlights. Changing NRaw to R3D with the NEV-to-R3D hack brought NRaw grading closer to R3D NE, but they were still not exactly the same. NRaw as NRaw seemed to have a more blueish image in some of the clips due to the blue oversaturation issue it has, but the NEV-to-R3D hack fixes that. Then the bad sides. After coming home I picked up the Z6iii, looked through its EVF, felt all of its buttons and thought: this is still the better camera, a proper one. The Z6iii also has a focus limiter and mech shutter, both of which I missed during the trip. The worst part became pretty clear after every shooting day. Not the R3D NE file sizes themselves, but the lack of software support for saving only the trimmed parts of R3D NE clips. Currently DaVinci Resolve saves the whole clips without trims, even though NRaw works just fine, and REDCINE-X PRO gives an error during R3D trim export. If you happen to fill a 2TB card a day with R3D NE, you currently need to save everything. I saved something like 6TB of footage from this trip when it could have been only 600GB.
If this does not get fixed I might as well shoot NRaw with the Z6iii and get rid of the damn ZR. Changing trimmed NEV files to R3D does not work either, as Resolve does not import the files. The ZR is fun to shoot, no doubt about it, but its R3D NE workflow is almost unusable at the moment, at least for my use.4 points
-
Panasonic G9 Mark II. I was wrong
FHDcrew and one other reacted to newfoundmass for a topic
I don't regret jumping to full frame. The S5 and S5IIX have treated me well and both are really good values. It was the right choice at the time, for a multitude of reasons. BUT if I'd known that the G9 II and GH7 were in the pipeline I probably would've stayed with M43. The main benefit for me has been the better low light, but these newer M43 cameras are pretty darn good at that. FF still has an edge, but it's not a huge one. I also don't typically do a lot of work where I need really shallow depth of field. Oftentimes I'm closing the lens down to get similar results to what I got when filming on M43, except these lenses are much heavier and more expensive than the ones I used on my GH5, G85, and GX85 bodies. I could fit all my lenses in a bag and it didn't weigh much at all. The same definitely cannot be said for my FF lenses! The stabilization, to my eye, also looks a lot better on the G9II and GH7 than on my S5 and S5IIX. I hope Lumix keeps M43 alive and even gets back to innovating with the system. A return to smaller bodies, and possibly even smaller lenses, would definitely pique my interest. I don't know that I'd ever jump back into the system completely, but I could see myself buying a couple of lenses and a body if it was compelling enough.2 points -
Panasonic G9 Mark II. I was wrong
newfoundmass and one other reacted to MrSMW for a topic
Ditto, and it has been an absolute game changer for me and my workflow, especially as 100% of my clients want a result in ‘my style’, so effectively carte blanche. As long as I don’t do anything radically different…2 points -
Excellent insight, Thank you for the write up!2 points
-
My advice is to forget about "accuracy". I've been down the rabbit-hole of calibration and discovered it's actually a minefield, not a rabbit hole, and there's a reason that there are professionals who do this full-time: the tools are structured in a way that deliberately prevents people from being able to do it themselves. But, even more importantly, it doesn't matter. You might get a perfect calibration, but as soon as your image is on any other display in the entire world it will be wrong, and wrong by far more than you'd think was acceptable. Colourists typically make their clients view the image in the colour studio and refuse to accept colour notes when viewed on any other device, and the ones that do remote work will set up and courier an iPad Pro to the client and then only accept notes from the client when viewed on the device the colourist shipped them. It's not even that the devices out there aren't calibrated, or even that manufacturers now ship things with motion smoothing and other hijinx on by default; it's that even the streaming architecture doesn't all have proper colour management built in, so the images transmitted through the wires aren't even tagged and interpreted correctly. Here's an experiment for you. Take your LOG camera and shoot a low-DR scene and a high-DR scene in both LOG and a 709 profile. Use the default 709 colour profile without any modifications. Then in post take the LOG shots and try to match both to their respective 709 images manually, using only normal grading tools (not plugins or LUTs). Then try to grade each of the LOG shots to just look nice, using only normal tools. If your high-DR scene involves actually having the sun in frame, try a bunch of different methods to convert to 709: the manufacturer's LUT, film emulation plugins, LUTs in Resolve, CST into other camera spaces using their manufacturers' LUTs, etc. Gotcha.
I guess the only improvement is to go with more light sources but have them dimmer, or to turn up the light sources and have them further away. The inverse-square law is what is giving you the DR issues. That's like comparing two cars, but one is stuck in first gear. Compare N-RAW with ProRes RAW (or at least ProRes HQ) on the GH7. I'm not saying it'll be as good, but at least it'll be a logical comparison, and your pipeline will be similar, so your grading techniques will be applicable to both and be less of a variable in the equation. People interested in technology are not interested in human perception. Almost everyone interested in "accuracy" will either avoid such a book on principle, or will die of shock while reading it. The impression that I was left with after I read it was that it's amazing that we can see at all, and that the way we think about the technology (megapixels, sharpness, brightness, saturation, etc.) is so far away from how we see that asking "how many megapixels is the human eye?" is sort of like asking "what does loud purple smell like?". Did you get to the chapter about HDR? I thought it was more towards the end, but could be wrong. Yes, the HDR videos on social media look like rubbish and feel like you're staring into the headlights of a car. This is all for completely predictable and explainable reasons... which are all in the colour book. I mentioned before that the colour pipelines are all broken and don't preserve and interpret the colour space tags on videos properly, but if you think that's bad (which it is) then you'd have a heart attack if you knew how dodgy/patchy/broken it is for HDR colour spaces.
I don't know how much you know about the Apple Gamma Shift issue (you spoke about it before but I don't know if you actually understand it deeply enough) but I watched a great ~1hr walk-through of the issue and in the end the conclusion is that because the device doesn't know enough about the viewing conditions under which the video is being watched, the idea of displaying an image with any degree of fidelity is impossible, and the gamma shift issue is a product of that problem. Happy to dig up that video if you're curious. Every other video I've seen on the subject covered less than half of the information involved.2 points
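The inverse-square advice above is easy to put numbers on. Here is a minimal Python sketch (mine, not from the post) of why pulling a light further back flattens the exposure falloff across a scene:

```python
import math

def stops_between(near_m, far_m):
    """Exposure difference in stops between two subject distances from
    the same point source. Illuminance follows the inverse-square law
    (E ~ 1/d^2), so the difference is log2((far/near)^2)."""
    return math.log2((far_m / near_m) ** 2)

# A subject 1 m from the lamp vs. another 4 m away: huge falloff.
print(stops_between(1, 4))   # 4.0 stops

# Pull the lamp back so the same subjects sit at 5 m and 8 m:
print(stops_between(5, 8))   # ~1.36 stops; far less DR needed
```

Only the ratio of the distances matters, which is exactly why dimmer sources placed further away tame the DR problem.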
-
What does 16 stop dynamic range ACTUALLY look like on a mirrorless camera RAW file or LOG?
Jahleh and one other reacted to Video Hummus for a topic
I agree with this 100%. More DR is great but it will be at the extremes of the image but all the magic happens in the middle.2 points -
DJI banned in US
eatstoomuchjam reacted to Emanuel for a topic
2021? More than a decade later. Well, too late for me then... :- )1 point -
Panasonic G9 Mark II. I was wrong
Alt Shoo reacted to Andrew - EOSHD for a topic
It turns out I am a bit wrong. ... That Micro Four Thirds was dead. Well, near me the G9 II came down to a much more sensible 1299, so I thought I'd give it a try. This thing... oh my gawd. I feel like putting the rest of my gear in the bin! This little box of joy is pure art in the handheld 4K/120p mode (and also in 5K open gate). The colour science, slow-mo and IBIS are so, so good. The new GH7 sensor is quite something. Beautiful filmic quality to it. And I thought IBIS was good on the full frame Panasonic cameras or Olympus OM-1, but this is taking the biscuit now. You can just stand there and get a completely static frame, especially in 120fps. I keep putting the shutter at 1 second for long exposure stills, pin sharp... The first camera that can really lay claim to being a tripod killer, in my view. Then there's the image processing... It totally defies the price. The new sensor just looks so clean in low light and the dynamic range is fantastic. The real-time LUTs look stunning here. No other Micro Four Thirds camera has nearly as good colour processing (except the more expensive GH7), so in this sense I prefer it even to the Olympus OM-1 with the lovely Olympus skin tones. In some ways it is better than a flagship $4k full frame cam... I am not joking. Not missing a full frame sensor that much, to be honest. It has the dynamic range, the low light, the resolution, and with a fast enough lens... the full frame look as well. The Metabones Speed Booster 0.64x fits without scraping the sensor-box. Also, the EVF is enormous and totally defies the price. Criticisms? Autofocus is very lens dependent; it's still a bit rubbish with the older stuff and adapters. Also no ProRes LT like the X-H2... With two SD card slots, it limits you to only 1080p in ProRes mode, which is a bit silly... but the high-res stuff is available if you plug in an SSD via USB. The GH7 has an advantage there for sure. But in plain old 10-bit H.265 the image is superb.
I think this body design suits the smaller lenses too... You know I'm not the greatest fan of the S5 II body design; well, it is growing on me here... Micro Four Thirds and small stuff seem to go well with the G9 II / S5 II body design. It starts to make more sense. The sharp angles cut in less, the camera as a whole is lighter, the grip is sufficient for everything, and it's got that "GH2 feel" when you put the tiny 20mm F1.7 pancake on there, whereas the S5 II with the larger lenses doesn't have that same charm to it. I am inclined to say the Micro Four Thirds LOOK is back too... It's an antidote to the predominance of super shallow depth of field in commercial work and Netflix. It really makes me want to fully commit again to the system as it just does SO MUCH, far more than any remotely affordable full frame camera. It does more than a Sony a1 II FFS!1 point -
That new Fooj does look interesting… Utterly gimmicky for sure, but a great way to get folks at an event actually going to your website after the fact. It’s like a SteamPunk TikTok gen business card machine!1 point
-
@MrSMW yea, once clients trust your taste and the look becomes part of what they’re hiring you for, baking it in just makes sense. It shifts the work from “fixing” to actually shooting with intent. I’ve found it especially useful on doc and news style projects where speed matters and consistency is more important than endless options later. As long as the look is designed thoughtfully up front, it’s hard to want to go back.1 point
-
Chris and Jordan throwing Sigma Bf and Fuji X-Half under the bus
sanveer reacted to Andrew - EOSHD for a topic
I really like the idea of the time travel dial. A really elegant way of switching the look. Good to see the Super 8 / Bolex form factor make a comeback as well. Fuji, of course, should now do a high-end version of this with CinemaDNG.1 point -
So true, I find myself shooting way more random snaps with this camera than with any other camera I own, and it's fun! Ditto, couldn't agree more! I only use my phone camera for visually documenting something!1 point
-
I think a lot of people wrote Micro Four Thirds off before really paying attention to what changed. Once you start working with the newer Panasonic bodies as a system, not just a sensor, the color, IBIS, and real-time LUT workflow start to make a lot of sense, especially for documentary and run-and-gun work. I've been baking looks in camera more and more instead of relying on heavy grading later, and it's honestly sped everything up. I just put out a short video showing how I'm using that approach on the GH7 if anyone's interested: My YouTube is more of a testing and experimenting space than where I post my "serious" work, but a lot of these ideas end up feeding directly into professional gigs.1 point
-
How about using Dolby Vision? On supported devices, streaming services, and suitably prepared videos it adjusts the image based on the device's capabilities automatically, and can do this even on a scene-by-scene basis. I have not tried to export my own videos for Dolby Vision yet, but it seems to work very nicely on my Sony XR48A90K TV. The TV adjusts itself based on ambient light, and Dolby Vision adjusts the video content to the capabilities of the device. It seems to be supported also on my Lenovo X1 Carbon G13 laptop. High dynamic range scenes are quite common, if, for example, one has the sun in the frame, or at night after the sky has gone completely dark, and if one does not want blown lamps or very noisy shadows in dark places. In landscape photography, people can sometimes bracket up to 11 stops to avoid blowing out the sun, and this requires quite a bit of artistry to map it in a beautiful way onto SDR displays or paper. This kind of bracketing is unrealistic for video, so the native dynamic range of the camera becomes important. For me it is usually more important to have reasonably good SNR in the main subject in low-light conditions than dynamic range, as in video it's not possible to use very slow shutter speeds or flash. From this point of view I can understand why Canon went for three native ISOs in their latest C80/C400 instead of the dynamic-range-optimized DGO technology in the C70/C300III. For documentary videos with limited lighting options (one-person shoots), high ISO image quality is probably a higher priority than dynamic range at the lowest base ISO, given how good it already is on many cameras. However, I'd take more dynamic range any day if offered without making the camera larger or much more expensive. Not because I want to produce HDR content but because the scenes are what they are, and usually for what I do the use of lighting is not possible.1 point
-
I think the one major use case for the high DR of the Alexa 35 is the ability to record fire with no clipping. It's a party trick really, but a cool one. It's kind of fun to be able to see a clip from a show and pick out the Alexa 35 shots simply from the fire luminance. That being said, it has no real benefit to the story. I did notice a huge improvement in the quality of my doc shoots when moving from 9-10 stop cameras to 11-12 stop cameras though. But around 12-12.5 stops, I feel like anything beyond has a very diminishing rate of return. 12 stops of DR, in my opinion, can record most of the real world in a meaningful way, and anything that clips outside of those 12 stops is normally fine being clipped. This means most modern cameras can record the real world in a beautiful, meaningful way if used correctly.1 point
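As a back-of-the-envelope illustration of what a stop count means (my sketch, not the poster's): each stop doubles the captured contrast ratio, so the jump from 12 stops to the roughly 17 stops claimed for the Alexa 35 is a 32x wider scene contrast.

```python
def contrast_ratio(stops):
    """Scene contrast ratio spanned by a given number of stops of DR.
    One stop is a doubling of light, so the ratio is 2**stops : 1."""
    return 2 ** stops

print(contrast_ratio(12))  # 4096 -> a 4096:1 scene contrast
print(contrast_ratio(17))  # 131072 -> why unclipped fire becomes feasible
```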
-
I'm seeing a lot of connected things here. To put it bluntly, if your HDR grades are better than your SDR grades, that's just a limitation in your skill level of grading. I say this as someone who took an embarrassing amount of time to learn to colour grade myself, and even now I still feel like I'm not getting the results I'd like. But this just goes to reinforce my original point: that one of the hardest challenges of colour grading is squeezing the camera's DR into the display space's DR. The less squeezing required, the less flexibility you have in grading, but the easier it is to get something that looks good. The average quality of colour grading dropped significantly when people went from shooting 709 and publishing 709 to shooting LOG and publishing 709. Shooting with headlamps in situations where there is essentially no ambient light is definitely tough though; you're pushing the limits of what current cameras can do, and it's more than they were designed for! Perhaps a practical step might be to mount a small light on the hot-shoe of the camera, just to fill in the shadows a bit. Obviously it wouldn't be perfect, and would have the same proximity issues where things that are too close to the light are too bright and things too far away are too dark, but as the light is aligned with the direction the camera is pointing it will probably be a net benefit (and also not disturb whatever you're doing too much). In terms of noticing the difference between SDR and HDR, sure, it'll definitely be noticeable; I'd just question if it's desirable. I've heard a number of professionals speak about it and it's a surprisingly complicated topic. Like a lot of things, the depth of knowledge and discussion online is embarrassingly shallow, and more reminiscent of toddlers eating crayons than educated people discussing the pros and cons of the subject. If you're curious, the best free resource I'd recommend is "The Colour Book" from FilmLight.
It's a free PDF download (no registration required) from here: https://www.filmlight.ltd.uk/support/documents/colourbook/colourbook.php. In case you're unaware, FilmLight are the makers of Baselight, which is the alternative to Resolve except it costs as much as a house. The problem with the book is that when you download it, the first thing you'll notice is that it's 12 chapters and 300 pages. Here's the uncomfortable truth though: to actually understand what is going on, you need a solid understanding of the human visual system (our eyes, our brains, what we can see, what we can't see, how our vision responds to the various situations we encounter, etc.). This explanation legitimately requires hundreds of pages because it's an enormously complex system, much more so than any reasonable person would ever guess. This is the reason that most discussions of HDR vs SDR are so comically rudimentary in comparison. If camera forums had the same level of knowledge about cameras that they have about the human visual system, half the forum would be discussing how to navigate a menu, and the most fervent arguments would be about topics like whether cameras need lenses or not.1 point
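The "squeezing" being described can be made concrete with a toy tone-mapping curve. This is purely illustrative (an extended Reinhard curve, nothing from the FilmLight book): it compresses scene-linear values spanning several stops into the 0-1 display range, which is exactly the trade-off a grade has to manage.

```python
def reinhard(x, white=64.0):
    """Extended Reinhard tone map: compresses scene-linear luminance x
    into [0, 1], with `white` (here 6 stops above 1.0) mapping to 1.0.
    A toy stand-in for the DR 'squeeze' a colourist performs."""
    return (x * (1 + x / white ** 2)) / (1 + x)

# Values 0, 2, 4 and 6 stops above a nominal mid level:
for stops in [0, 2, 4, 6]:
    print(f"{stops} stops over -> {reinhard(2 ** stops):.3f}")
# -> 0.500, 0.801, 0.945, 1.000: six stops of highlight
#    squeezed into the top half of the display range.
```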
-
I also shoot mostly with available light, and after the sun has set, in the light of dim headlamps. So being able to push and pull shadows and highlights is extremely important. In that regard the GH7 is no slouch, but it is not quite the same as the Z6iii, the ZR, or even the S5ii. If you have a good HDR-capable display (and I don’t mean your tiny phone, laptop or medium-sized displays, but a 65” or bigger OLED with infinite contrast, or a JVC projector with good contrast and inky blacks), one must have a wooden eye not to notice the difference between SDR and HDR masters. At least with my grading skills, the 6 stops of DR in SDR always look worse than what I can get from HDR.1 point
-
The battleships of The Golden Fleet will take down the evil DJI regime from wherever they come from. Greenland or somewhere.1 point
-
"Left-handed Girl" anyone?
kye reacted to eatstoomuchjam for a topic
Probably. I just found it really overbearing. I personally don't bother with diffusion filters at all. The short, detail-free reason is that I'll just use a vintage lens if I want a vintage look. And yes, your observations align with mine about using diffusion filters. On low-budget sets, they also add headaches on controlled shots, as the DP is now complaining that the lights are interacting with their diffusion filter in a bad way, causing time loss due to coddling the darn thing.1 point -
DJI banned in US
Video Hummus reacted to BTM_Pix for a topic
It would take more than a Ronin to stabilise that government at the moment.1 point -
Definitely; that's where the 12-35 2.8 and DJI 15mm 1.7 will show their strengths for me. The lenses haven't arrived yet, but the camera came with the 12-60mm 3.5-5.6 kit lens, so it's definitely in the ballpark with size and weight. The thing is so darn light; it actually feels extremely comfortable and balanced attached to the larger body style. I could be fine with a Sigma 18-35; funny thing is even that feels somewhat light to me, as I am used to having to carry around my Atomos Ninja V for anything 10-bit. Yes, yes, yes. The only reason I have a great idea now of what matters to me is because at this point I've shot a lot of projects, and have done so on multiple different camera systems. So along the way, I've learned what matters to me. I'm not going to notice a 1/2 stop DR difference; some grainy footage (as long as the grain isn't ugly) doesn't bother me terribly. Sometimes I like it. But I do value great stabilization, as the way I shoot I end up spending a good chunk of my time finessing post-stabilization to achieve the type of camera movement I want while keeping things completely handheld. I have a solid handheld technique down, but on most cameras it is still not perfect. I always hold the camera loosely, usually shoot wide, and do a great heel-toe walk or body sway. But I always need to post-stabilize. I end up trying the stabilization options in DaVinci Resolve. If that doesn't work, I render that clip out to ProRes and import it to After Effects just to apply Adobe Warp Stabilizer, as it is a bit better in my experience. Once I get the result I like, I export. The beauty of the G9II is that when you combine the fantastic IBIS with e-stabilization high, I get quite close to the result I get after all of that post-stabilization process... but this is just the footage out of the camera. It saves me a lot of time. And I can even add a drop of Warp Stabilizer on top to make it perfectly smooth.
Another big advantage for me: it's effectively doing what Gyroflow does, but all internally and paired with the best IBIS ever. I've tried Gyroflow. I've used it on some FX3 footage I shot for my buddy's wedding film company. I've also rigged an iPhone to my Nikon Z6 as well as used the Senseflow A1. It's a nice solution. I figured I'd love it; it's the same concept as what normal post-stabilization does (which I use ALL THE TIME). Big difference is it uses true camera motion data, so the results should be perfect, right? Well yes, but you need to shoot at a high shutter speed. And I found that even on the Sony FX3, where Gyroflow can work with IBIS on, the crop was often still fairly large. And the workflow is lengthy. With the G9II, I have a minimal 1.255x crop with e-stab on high, and because it's working fully in tandem with the phenomenal physical IBIS system, it's very stable AND I can zoom the lens mid-shot and it works fine (can't do this with Gyroflow on most setups). AND I can keep my shutter at 1/50 because the physical IBIS system is doing 80% of the work here. But yeah. Moral of the story is shooting lots and lots of stuff has made me realize what matters most to me. The G9II seems to really hit that. Again, I used the Nikon Z6 for 4 years. I've also filmed weddings on a Sony FX3 with nice Sony G Master glass. I filmed very extensively for one organization with a Canon R5 and EF glass. This past summer I bought a Canon R7, then a Panasonic S9, then sold both. So I've tried enough cameras and shot enough to know what works well for me. I'd fully agree that a lot of what camera YouTubers claim are the big differences are not always as important as they seem; for me, the wonderful IBIS of the G9II and the minor crop in e-stabilization are way more useful than a full stop of DR improvement when you already had great DR in the first place. Etc etc. This is a concert I filmed and edited this summer on the Panasonic S9.
I haven't had a chance to film anything substantial on my G9II... but this is close. It's a super weird setup I sort of wound up with over the summer... the Lumix S9 with the Sigma 18-35... in the Super 35 crop mode WITH e-stab on high. So basically a 2x crop, MFT level at that point. But I still found the image to be very nice. More importantly, with some careful walking, I got the images to be this stable, and a lot of these shots have NO post-stab applied. Colors were very rich. The G9II is even better because, again, the crop is lessened in e-stab high and the physical IBIS is better. And the build quality smokes the S9; that was something I did not appreciate about that camera. A short clip from a concert I filmed, with the aforementioned Lumix S9 setup. Again, no post-stabilization. It is just so smooth. Makes all the difference with how I like to film. More handheld with the Lumix S9 setup. This has a bit fewer "gimbal-push-in" shots and a bit more regular handheld shots. With e-stabilization high, there's a perfect balance struck, where you can walk and move the camera such that it looks like a Steadicam, or you can just handhold it for regular stuff and it looks as stable as a cine cam weighted down. This wedding trailer was with my old Nikon Z6 setup. Combo of DaVinci post-stab and Warp Stabilizer. Outside, I cranked my shutter speed very high to help. I used RSMB to add motion blur in post. While this worked, I had to spend extra time stabilizing in post and tweaking things if it was not perfect. This is all but eliminated now with the G9II. Also, half of this video was shot on a Nikon F-mount 24-85mm 3.5-4.5, entirely at f/4.5. I reckon that looks pretty close to what the Panasonic 12-35 2.8 will look like; @kye let me know if I am wrong since you've used that very lens I think? But anyways, it's enough DOF for me. That being said, if you like more, totally get it. Nothing wrong with that. End ramble haha.1 point
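For anyone following the crop arithmetic in this thread: stacked crops simply multiply. A hypothetical sketch using the figures mentioned above (MFT's 2x factor and the ~1.255x e-stab crop; treat both as approximate):

```python
def full_frame_equiv(focal_mm, sensor_crop=2.0, estab_crop=1.255):
    """Full-frame-equivalent focal length after stacking the sensor
    crop (2x for Micro Four Thirds) with an electronic-stabilisation
    crop (~1.255x per the discussion above)."""
    return focal_mm * sensor_crop * estab_crop

# A 12mm lens frames like a ~30mm full-frame lens with e-stab high:
print(full_frame_equiv(12))
# Without the e-stab crop it is the familiar 2x conversion:
print(full_frame_equiv(12, estab_crop=1.0))  # 24.0
```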
-
Panasonic G9 Mark II. I was wrong
Video Hummus reacted to kye for a topic
1 point
