Eugenia

Members
  • Content Count

    23
  • Joined

  • Last visited

About Eugenia

  • Rank
    Member

Profile Information

  • Gender
    Female
  • Location
    Spokane, WA, USA
  • Interests
    Filmmaking, illustration, collage, cooking, tech, sci-fi.
  • My cameras and kit
    BMPCC HD & 4k, GX85, Sony A6400, Canon M/M50/5DMkII

Contact Methods

  • Website URL
    https://cargocollective.com/eugenialoli
  • Twitter
    https://twitter.com/EugeniaLoli
  • Instagram
    http://instagram.com/eugenia_loli


  1. Does anyone know what to set the zebras to for HLG3?
  2. Does anyone have a link to a straight-out-of-camera slog2 or slog3 video file that contains skin tones? I would like to test some Arri-emulation LUTs on it that used to work with the older slog versions, but I need to make sure they still work well before deciding whether or not to purchase this camera.
  3. I received the modded Helios 44-2 today, from a Ukrainian seller who mods them with anamorphic bokeh, a declicked aperture, and cine gears. It looks less swirly than my other Helios, but I think it also looks more convincingly anamorphic. There's massive flare at f/2.0 (wide open), but thankfully much of it goes away if you use a filter (any filter) in front of the lens. I loved the results.
  4. ML RAW is still not stable for "continuous" raw. I don't know how you get it to be stable, but here it's not. Hence the "cool toy". We can go back and forth on this, but you won't convince me, because I've been trying ML on and off for years now. And yes, its UI is absolutely dreadful. As I already mentioned, the EOS M is a great starter camera, which is why I suggest newbies get one, and also why I own three of them. But using ML RAW for serious work, with its aliasing problems and having to overclock the SD controller to write fast enough (which eventually leads to overheating and dropped frames, and often to crashes), is not acceptable in a professional setup. It's great for small videos like the one I posted above, where only a few seconds are shot at a time, but not much more than that. Even Zeek on Youtube (whom I follow) hasn't posted anything more than video vignettes with his EOS M. Show me ONE *serious* narrative short film that was shot with ML RAW simply because it's a cheap way to get RAW. I wouldn't shoot a short film with ML RAW myself. I would, though, shoot a short film with the built-in h.264 encoder, simply because I know I can trust it to be stable. Plus, I like the look I get from it anyway. I don't always need raw; I just make sure my scenes are not so contrasty that I have to change ISOs or recover highlights in post. In such a non-random setup, the bare Canon cameras manage fine. I shot a number of official music videos with the 5D MkII in 8 bit, for example, and it worked fine. These days, I could get an even better image and bitrate with an M6 MkII at 4k with 120 mbps. If I can also get 10bit in the future, even better (for more advanced color grading purposes). Which is why I'm eyeing the upcoming Canon R6. But raw, I don't necessarily need. If anything, I *degrade* the image I get out of most of my cameras to make it softer, in order to get the vintage film look.
I own a BMPCC 4k that shoots RAW, and I haven't used it for more than a few hours since last year. I don't like the Sony sensor look (which all non-Canon cameras share), and I quickly found out that I don't need its RAW capabilities. Which is why I mostly use my Canon M50 these days. On paper, the M50 is a toy compared to the BMPCC 4k, but it produces the image I like and it's ultra stable (more so than the BMPCC 4k). To give you an idea of where I'm coming from: I used to be the editor in chief of a big tech news web site in the early 2000s. During my time there, I took it upon myself to try to configure Linux in new ways for multimedia purposes (it was weak back then in video, wifi, and bluetooth). So I was hacking code and config files, finding ways to make things work, and then writing tutorial articles. Well, come 2005, I was already sick and tired of trying to make other people's messes work. These days, if something is not perfectly polished (from my OS, computer, and camera to my refrigerator and washing machine), I simply don't bother with it. My tweaking days are long gone, and I don't want them back.
  5. I've shot with ML RAW for years on and off, and I find its aliasing a big issue. It's also very visible in youtube footage by others. Overall, ML has massive issues on various levels (including a terrible UI). It's a cool toy, but nothing more than that. Still, the EOS M is the camera I always suggest people start their filmmaking journey with, exactly because they get the standard easy operation, but can also delve into ML and learn how a camera works more in depth -- something that will help them down the line. But after that initial phase is over, it's time to upgrade.
  6. Anything is doable, just more labor-intensive. The biggest problem I have with ML RAW is not even the instability (since I usually only shoot a few seconds of footage at a time for my videos), but the aliasing artifacts that are very prevalent on the EOS M (I own three of these cameras!). There's no way around them, and people who shoot with ML RAW often have to throw away footage for that reason. Not a great experience IMHO. There is not enough processing power in that old ARM CPU in the EOS M to apply any filters. Personally, as I wrote above, I find the normal 8bit h.264 files at 120 mbps (in 4k) to be good enough, even for a feature film, if the right picture style was used when recording. I'm able to get the look I want without destroying the 8bit footage when using the VisionColor CineTech picture style. For me, that has been a lifesaver.
  7. Thank you, appreciated! Regarding your question: ML footage comes out a little contrasty compared to the VisionColor CineTech picture style I'm using (which is my favorite of all I've ever tried, and it has more dynamic range than even Technicolor Cinestyle or Miller's C-Log). Being contrasty means ML RAW footage is more difficult to manipulate. For that reason, while I own three EOS M cameras that are capable of ML RAW, I still use the M50, with just the 32 mbps h.264 bitrate at 1080p. It works well enough. I'm planning on buying the new M5 MkII in October, which reportedly has IBIS. The 4k on the new M cameras runs at 120 mbps, and while 4k is 4x the 1080p resolution, that 4x bitrate does the perceptual work of something closer to 8x (it's how perceived quality scales). So h.264 at 4k at 120 mbps is more than enough to do a good job, without needing the unstable ML RAW. Even at 8 bit, the kind of grading I do does not destroy the footage. If someone doesn't need IBIS, I'd highly suggest the M6 MkII, which already has proper 4k with AF, without a crop and without the massive rolling shutter of the M50 in 4k mode.
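To put rough numbers on that 1080p-vs-4k bitrate comparison, here's a back-of-the-envelope sketch in Python. The resolutions and bitrates are the ones mentioned in the post; codec efficiency, chroma subsampling, and GOP structure are all ignored, so treat this as arithmetic, not a quality metric:

```python
# Back-of-the-envelope: average encoded bits available per pixel per frame
# at 24 fps, for the two recording modes discussed above.

def bits_per_pixel(width, height, bitrate_mbps, fps=24):
    """Average encoded bits per pixel per frame (ignores codec details)."""
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

m50_1080p = bits_per_pixel(1920, 1080, 32)   # M50 at 1080p, 32 Mbps
m6ii_4k   = bits_per_pixel(3840, 2160, 120)  # M6 MkII at 4k, 120 Mbps

print(f"1080p @  32 Mbps: {m50_1080p:.3f} bits/pixel/frame")
print(f"4k    @ 120 Mbps: {m6ii_4k:.3f} bits/pixel/frame")
```

Interestingly, the per-pixel budgets come out nearly identical (~0.64 vs ~0.60 bits per pixel per frame), so the perceptual advantage of the 4k mode comes from the extra resolution itself, and from downscaling 4k onto a 1080p timeline, rather than from a richer per-pixel allocation.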
  8. This is the closest we will ever get with spherical lenses. There are a few more tricks to play, though, like adding anamorphic lens distortion, as I've done above. Overall, it's close. If you look at some modern anamorphic lenses, they lack the character of the old anamorphic lenses too.
  9. Hello there, compatriot! Nice footage! You're not supposed to desqueeze anything on a fake anamorphic recording, though. The pixels are still square. So when you shoot with Magic Lantern, you should not tell it that it's an anamorphic recording, but rather a normal recording. Then it will appear correctly when editing. Here is mine from today (Canon M50, Helios 44-2, Viltrox Speedbooster, Vid-Atlantic anamorphic aperture filter). I also have a modded Helios 44-2 (one that doesn't require the Vid-Atlantic filter to get anamorphic bokeh) in the mail, coming soon:
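To make the square-pixel point concrete: a true anamorphic capture has to be stretched horizontally by the lens's squeeze factor in post, while a fake anamorphic setup (spherical lens plus an oval aperture filter, as above) records square pixels and needs no stretch at all. A minimal sketch, with hypothetical frame sizes:

```python
def desqueezed_size(width, height, squeeze=1.0):
    """Horizontal desqueeze applied in post; squeeze=1.0 means square pixels."""
    return int(round(width * squeeze)), height

# True 2x anamorphic on a 4:3 capture area (hypothetical numbers):
print(desqueezed_size(1600, 1200, squeeze=2.0))  # (3200, 1200) -> a 2.67:1 frame

# Fake anamorphic (Helios + oval aperture filter): pixels are square,
# so the clip is edited at its recorded size with no stretch.
print(desqueezed_size(1920, 1080, squeeze=1.0))  # (1920, 1080), unchanged
```

This is why telling Magic Lantern (or the NLE) that a spherical recording is anamorphic gives a wrongly stretched image: a squeeze factor greater than 1.0 is applied to footage whose pixels were never squeezed.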
  10. Can someone confirm that after installing the latest firmware on any of these Canon cameras (M6 MkII, RP, 90D), the versions that enable 24p recording, the HDMI out STILL defaults to 30p? Peter Berg over at Youtube says that even if you set the camera to 24p, when you connect an HDMI cable to a Ninja V the camera reverts to recording 30p without telling you. Can someone confirm?
  11. It is also my opinion that VR is the future of movies, and that 8k will help there. VR movies will also bring some interaction, which is the No. 1 reason youngsters give when asked why they prefer video games over movies.
  12. For that sample I used a manual 7Artisans lens, which has vintage characteristics. Except for the Sigma 18-35 and some Tokina zooms, which I keep for their versatility, I avoid modern lens designs (like the ultra-sharp Panasonic/Leicas, for example). Regarding the GFX 100's sharpness, btw, here's a test I did yesterday comparing it to the BMPCC4k (more information on its youtube page, not in the video itself): https://www.youtube.com/watch?v=Efx6i5LMPK4&t=2s
  13. On Davinci Resolve Studio (not the free version) there are various blurring algorithms/plugins. I use the "Sharpen and Soften" plugin on an adjustment node, then Filmconvert itself blurs, and if it's still too sharp, I use any of these plugins or the default blurring found in one of the tabs on the bottom half of the color page. At the end, I add one more adjustment node with Contrast Pop (another Studio plugin). Modern digital cameras are very good at sharp detail but have an anemic look, while film was traditionally softer but with much more acutance and midtone detail (particularly older film; modern film stock is closer to digital). Basically, the details are still there, but "thicker". There's "texture" rather than outright "sharpness" (difficult to explain). These adjustments make the image look more like a moving painting than like video (at least to my eyes; I'm after a vintage film look). Example below, using Panasonic's CineLike D and a rather soft, low-contrast lens: https://imgur.com/a/FZoTc8j (open the image in its own tab to see the full 4k size).
  14. My husband's GFX100 arrived yesterday for his studio. In a very short test I did, I found it to be sharper than my BMPCC 4k (its native Fuji lenses are very sharp; they resolve a lot of detail). I honestly don't think I need even sharper, or whatever "better" Fuji thinks it can give us (better color/midtone detail, sure, but even sharper detail, heck no). I had to soften the image in post by a lot to make it look anywhere near cinematic. There was a reason the Alexa sat at 2.8k resolution for the longest time before Netflix forced them to make a 4k camera. For truly traditional cine work, 4k is enough. I personally have no plans to upgrade to 6k/8k any time soon, not even when 8k TVs might be commonplace. I end up softening my 4k clips in post from both the Panasonic and the BMD, but the GFX100 (in F-Log) required a lot more. If you're after a super-crisp, commercial-style look, this is your camera.