Everything posted by maxotics

  1. The Sigma will do very well in that test, because they're both APS-C sized sensors and the D5500, in the Foveon world, is considered a 6-megapixel sensor. However, against a Nikon D810/D850 (which has more pixels to de-mosaic and wider DR) things would get interesting. Very nice shot, btw. That is the kind of shot that I believe will look better from a Sigma. You have specular lighting and a lot of neutral colors (gray), the kind of stuff that creates color artifacts galore in Bayer cameras. Really looking forward to your photo comparisons!!!!!
  2. First, very nice looking! A gazillion times better than anything I could do! So please take my thoughts in that context. Without the right juxtaposition, all art, from music to film, breaks the spell, which can only be a single one. What is the spell you're trying to put the viewer under? As it starts, I'm following a guy walking down an alley. It seems very confined. There's a title; am I in a James Bond movie? Who is this guy? Then a swoosh sound, and I'm now with him in a building, a large stage, smoky light, a piano, another young guy. Is the film really about the piano guy, or both of them? The other guy starts singing. I see him from one angle, then another, then another. Then an orchestra. But no one is in the audience. Why? Why a big orchestra and no people? Then he's singing close to the mic; then he's singing far away from the mic. Lips don't seem perfectly in sync with the music. Okay, it's a music video. A music video. So my question is, were you directing to create a story, or just filming? Something to think about next time, maybe. I thought this a very interesting analysis of music videos done by David Fincher, if you haven't seen it.
  3. Isn't that a USB mic? That would mean the analogue audio gain is set electronically within the mic. I have a Shure XLR-to-USB interface where I can change the mic's gain (Shure SM7B). There are no markers/gauge, but if I set it to 10 it gets distorted, yet it's fine from 9 down! Weird. Anyway, it tells me that the USB interface can create havoc, so I (you) want some control. Is that right, others who have more experience? Sounds like he should invest in a dedicated mic-to-USB interface. Can one really trust an all-in-one USB mic? My experience is no. Also, neither one of us is young, so don't be offended: are you sure you were at -10dB and not +10dB? Yep, I've done that too!
  4. I'm glad my posts came across as passionate, because I certainly mean them that way! I try to say it as much as possible: there is a place for LOG, and I would, and do, use it myself. It has a look you simply can't get any other way. But it has a trade-off, as you point out. My feeling is you can't really understand LOG until you understand what a standard profile is meant to do, which is match all our perceptible colors to the display gamma. That is, I DID NOT understand LOGs until I understood those fundamentals. I was working on measuring the noise ratios in various LOG profiles last year, but reporting my first findings here was very aggravating and a lot of people pushed back. So though I have no doubt that @Don Kotlos is giving good advice about which LOG profiles to use, it would be nice if we had more objective data. Or better tests. When I see a test of LOG done on YouTube in a low-DR scene I nearly burst a blood vessel. Right now I'm working on materials to show how RAW works, so I'm basically continuing what I did last year, but starting from the beginning! Again, thanks for the encouragement!
  5. In a studio environment, where size/weight isn't a real concern, a Nikon D8XX will be faster and more flexible than the Sigma. Because the Sigma camera doesn't have a Bayer CFA, but samples colors vertically, it can deliver as clean an image, color-wise, as the best Bayer cameras IMHO. However, the color RED is sampled last in the sensor (bottom layer), so it loses some nuance. That shouldn't be a problem in a studio environment, but you have to light correctly. Also, these sensors have only 6 stops of DR. That's MY opinion; many will say that's incorrect. But I throw it out there for you. It does NOT detract from the camera for me--that's all I need--but it's something you should know if you're not exact with your exposure. To use a baseball analogy, these cameras either strike out or hit home runs. Base hits aren't their thing. These cameras excel in any image where a Bayer sensor can be confused by thin strips of light or dark, like a person's hair in the sun. The Bayer sensor will miss color information and create color artifacts. For example, a strand may only hit the red and green pixels, creating a reddish little blotch after de-bayering (but you'd need to pixel peep). The Sigmas don't succumb to this. So outside, with a strong backlight, the Dp3 can take a portrait that exceeds what any Bayer-sensor camera can do--in a very small package. In short, if you need to travel light, can work slowly, want to do some portraits outside, and have good light, the Dp3 can deliver shots that will bring tears of joy to your eye. Also, the images do have a different look. This is a fact: if Foveon sensors were as fast and sensitive as Bayer sensors, no one would use anything else. I recommend that every serious photographer try these cameras out. So I recommend doing what Mattias did. I'd certainly LOVE to see some portraits done with the camera. It's on my list of things to do one day too. (I had a dp2q.)
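The strand-of-hair example can be sketched in a few lines. This is a toy model, not any real camera's pipeline: an RGGB Bayer mosaic and a hypothetical one-pixel-wide bright vertical line. It only shows which color channels ever get to sample such a line; everything else the demosaicer must invent.

```python
# Toy model (RGGB pattern, made-up line position), NOT a real pipeline:
# the mosaic samples a thin vertical line only through the filters that
# happen to sit in that column, so the demosaicer must guess the rest.

def bayer_filter_at(row, col):
    """Color filter at a sensor site in an RGGB pattern."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def channels_sampling_line(line_col, rows=4):
    """Set of color channels that actually 'see' a vertical line at line_col."""
    return {bayer_filter_at(r, line_col) for r in range(rows)}

print(channels_sampling_line(0))  # even column: only R and G see the line
print(channels_sampling_line(1))  # odd column: only G and B see the line
```

A backlit strand landing on an even column gets reconstructed with guessed blue, on an odd column with guessed red, hence the little colored blotches; a Foveon sensor samples all three channels at every site, so the problem never arises.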
  6. Why sell your external recorder? Was that what I suggested? I only said that banding can be a result of an aggressive LOG profile, as Don said, or from too little color information due to a high-compression CODEC that uses 4:2:0. We both have different roles, right? You shoot and grade real stuff, I assume. I'd hire you over me to go shoot something. I test stuff, a more scientific approach. We're here to educate each other, I hope. Just so the OP can make sense of this, I will explain some of this stuff in more detail, as I understand it! Many filmmakers believe their cameras record color just like our eyes/brains do. But they don't. Cameras can only see in gray-scale, actually. The manufacturers put color filters over alternating pixels and then composite a "color". But it's not really a color. It's just a brightness value of light through a filter, with the noise floor of the silicon as a reference. Too much brightness, and the value doesn't change. The #1 confusion I see is the belief that these values have an implied relative brightness; that is, that the difference between 100 and 200 would mean double the brightness as we see it. That SHOULD be the case, but isn't. The camera records linearly; we see exponentially. But even that is IN THE WEEDS. The fact is, as YOU WOULD BE THE FIRST TO POINT OUT, the real world of grading is getting an image that looks right to the viewer, or conveys the feel the filmmaker wants. That will mean adjusting the relationship of brightness across the image... But but but... you must have enough color data to fit your display gamma, or you will notice where you don't have enough color, which is banding in smooth color gradients. You can't widen your gamma (LOG) in a fixed-data color space (8-bit) and assume that you will always have enough color information in your image to fully saturate a gradient. Don was basically saying: the less aggressive the LOG you use, the less chance you have of running into banding. AGREE!!!!
And he'd know more than I do about which LOG profiles do what in the real world. Let me say this again in other words. Extending a gamma beyond the amount of color data you have can increase banding. Conversely, forcing more color into a narrow gamma results in an image with no discernible visual difference, which is why LOGs can have benefits in certain circumstances (they exploit what may be unnecessary color data). But you have to really know your shit to know when that is! What I learned in my experiments is that I did NOT know my sh_t. As for the external recorder, I was just pointing out that its benefit can affect banding, but it is a separate issue from the banding that happens when using a LOG gamma in an 8-bit space. The OP should really be clear about the differences, right, before assuming one will fix the other? If he was, I don't believe he would have asked the question, because it's very subjective, or too variable to give an objective answer, right? Are we friends again?
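The banding argument above can be made concrete with a toy experiment. Both curves below are invented stand-ins (a simplified 1/2.2 display gamma and a made-up log curve spanning 8 "stops"), not any manufacturer's profiles. The point is only that a flatter log encode spends fewer of the 256 available 8-bit codes on a given region, so stretching that region back out in the grade leaves coarser steps, i.e. banding.

```python
import math

# Toy curves, NOT real camera profiles: count how many of the 256
# 8-bit codes each transfer curve spends on the brightest stop of the
# scene (linear 0.5..1.0). Fewer codes = coarser steps after grading.

def codes_in_range(transfer, lo, hi, steps=100_000):
    """Count distinct 8-bit codes produced for linear values in [lo, hi]."""
    codes = set()
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        codes.add(round(transfer(x) * 255))
    return len(codes)

gamma_709ish = lambda x: x ** (1 / 2.2)          # simplified display gamma
flat_log = lambda x: math.log2(1 + 255 * x) / 8  # invented log, 8 "stops"

top_stop_gamma = codes_in_range(gamma_709ish, 0.5, 1.0)
top_stop_log = codes_in_range(flat_log, 0.5, 1.0)
print(top_stop_gamma, top_stop_log)  # the log curve spends far fewer codes here
```

The log curve redistributes codes toward the shadows; that is exactly the trade Don describes, and the more aggressive the curve, the fewer codes any one region keeps.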
  7. I'm glad you asked. Very little experience. What I did do is create 32 images representing 16 million colors, create video using various standard gamma profiles and LOG profiles, and compare the evidence. I also have a Ninja Blade where I could look at the difference between 4:2:2 HDMI out and internal 4:2:0. Unlike some people here, I try to test my knowledge and not claim to have turned lead into gold through some special camera settings for knees, black level, saturation and other, sorry, horseshit which most people don't fully understand. If you can grade a beautiful image, bully for you! But if you're going to challenge me on the technical differences between, say, standard profiles and LOG, then I'm all ears! That you believe I need to be good at subjective grading doesn't do it for me. If there is something wrong in what I said, say what it is, and why.
  8. Self-appointed LOG police here. A camera's normal gamma profile is designed to maximize color saturation across an 8-bit-per-channel (24-bit) gamma curve. As soon as you shoot LOG, you accept a trade-off between gray-scale DR and chroma. The improvement you'll get shooting external 4:2:2 (if the camera shoots internal 4:2:0) has nothing to do with the banding from chroma spread too thinly across a gamma curve; it is about compression. To reduce data, 4:2:0 discards more chroma samples than 4:2:2. Two different types of banding (or color loss). I'm just extending what @Don Kotlos said above. My 2 cents: don't shoot a LOG profile that will give you noticeable banding (which depends on what you shoot) and expect a less lossy CODEC to save you.
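For the OP, the 4:2:0 vs 4:2:2 difference is just sample counting. Here is the back-of-envelope arithmetic (this is bookkeeping, not a codec implementation):

```python
# Back-of-envelope sample counting, not a codec implementation: samples
# stored per 2x2 block of pixels under each chroma subsampling scheme.
# Y is luma; Cb/Cr are the two chroma (color-difference) channels.

def samples_per_2x2(scheme):
    luma = 4  # every pixel keeps its own luma sample in all three schemes
    chroma_sites = {'4:4:4': 4, '4:2:2': 2, '4:2:0': 1}[scheme]
    return luma + 2 * chroma_sites  # one Cb and one Cr per chroma site

for s in ('4:4:4', '4:2:2', '4:2:0'):
    print(s, samples_per_2x2(s))  # 12, 8, 6
```

So going from internal 4:2:0 to external 4:2:2 doubles the chroma samples per 2x2 block; it does nothing about how a LOG curve spread that chroma across the gamma in the first place.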
  9. Yes, the Internet is certainly the ultimate free market of ideas, but not of bandwidth. I can host a website at a company that is connected through Tier 3 providers, or one that has a Tier 1 provider, or better, multiple Tier 1 providers. The idea of "free/neutral" pricing is a fantasy of ignorant consumers. That is, am I to believe I'd get the same speed/reliability to consumers from a Tier 3 hosting place as from a Tier 1? Are you telling me that the hosting cost of EOSHD has nothing to do with the cost of bandwidth? That a camera blog would have as good access running from a home PC as you have at your hosting provider? Net neutrality is mostly a fiction. Eliminating the last vestiges of net neutrality removes a fair bit of hypocrisy from the government, which pretends to give everyone the same access to information. It doesn't matter how neutral the Internet is if you can't get a connection where you live, or can only choose one provider. Don't get me wrong, Andrew, I can easily get majorly pissy about all this. It's NOT the world I want. However, it is the world we live in, which everyone else seems to want--like lower-priced airline seats by playing supply/demand chicken with the airline, unfair seating orders, etc., etc. If we're going to have net neutrality then we need to have equal access for all. Otherwise, as someone said above, I'd rather know who has got me bent over.
  10. There are two issues which confuse the matter. The first is the franchise you need from the government to install cable, set up microwave towers, etc., on public property. The reason most ISPs can charge what they want is that they got in first (with telephone or cable, which they can reuse for Internet) and, like you say, matching what they do is very expensive. It's probably not as hard as you think to start a high-speed ISP where you live. What makes it practically unfeasible is the expense in legal fees, engineering reports, environmental assessments, etc., of getting the government to give you the right to string cable. However, the rules are public and there's no real reason you can't do it. The phone companies and cable companies did it. We could elect a government tomorrow that says anyone can string up cable anywhere. If they did that, you'd have a lot of choices in ISPs. You'd also have wires all over the place, and with each provider a bit cash-strapped, who's to say the service would be good? There is something to be said for economies of scale in large companies. As I said elsewhere, states should build "neutral" networks through higher taxes or bond issuance. But NOOOO. No one wants to pay for it. The second issue is one of pricing regulation. I'm against fixed pricing because economic studies have shown, over the centuries, that any kind of price control eventually backfires. People are just too good at getting around them, and the energy they spend doing that ends up in the price of the things you're buying. That's why ending net neutrality isn't some crazy idea. If the companies really do want to raise rates, what's to stop them now? They do want to raise rates, yes, but on services they can't offer now because of price controls. They will also LOWER prices on things where there is too much competition or supply. It's the fear of change that really drives the politics of net neutrality.
If there are problems, at least THEN people can say: it's a completely free market, so why don't we have two ISPs in Southern California? Then maybe we can have tax incentives to change that.
  11. @slonick81 It surprises me that no one involved in the design and manufacture of camera sensors visits this forum and gives a hint why the GH5 can't output RAW video. If the lowly first-gen Canon EOS-M can do 720p RAW at 37MB/s and the 5D3 1080p at 80MB/s, I don't know why the Panny cameras can't. What is it about RAW capture that makes the sensor so hot it must be cooled, like the Peltier cooling in the BMPCC? So I'm with you, so many questions! Yes, I think you're right, the RAW from Magic Lantern isn't like photographic RAW; still, it looks fine!
  12. My understanding is that the sensor data isn't changed on the way to RAW, but that all those issues you mentioned are taken care of by various coefficients and instructions saved into the RAW file's header. Most people don't think to second-guess the manufacturer's instructions on how to pre-process RAW data, so I believe that leads to some confusion. I wish I knew this stuff better too; that's why I want to make sure what you say is correct. My understanding remains that RAW data is the truly original measurements of photon absorption. When I worked with Magic Lantern RAW, on the "pink pixel" problem, it seemed that if what you said were correct, the focus-pixel data would never make it into the RAW stream. I am currently working on a "workbench" to better analyze RAW data, which hopefully you and others might find interesting when it's ready to show. Last year I studied LOG gammas, my conclusions about which attracted some hate mail. I'm hoping that if I start from the ground up with RAW, some of my findings about LOG gammas will make more sense. And if I'm wrong about some of my conclusions, I can better see those errors.
  13. If you're amplifying before the ADC, wouldn't that circuitry distort the signal before digitizing, just trading one source of error (analog) for another (digital)? And how would any specific pixel know whether to amplify or not? I mean, that's the rub, isn't it: no pixel can know beforehand how many photons will be sent its way. Anyway, can you give more specifics about how the sensor works in that regard? When you say a "RAW file is not actual sensor data, there is still a lot of processing done"--sure, the camera manufacturers differ in how they program their ADC (where black levels might be set, say, which is non-destructive to the data assuming you won't need the full 14 bits), but I don't know of any other "lot of processing" done. Can you elaborate? My experience is that RAW seems pretty close to actual sensor data. Indeed, if you just output a non-debayered grayscale TIFF with dcraw (the least amount of processing), it's unusable without a lot of further complex processing--which suggests the camera hasn't already done that work for you.
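To illustrate the kind of header-driven pre-processing being discussed (black level, saturation point), here is a minimal sketch of what a converter does before demosaicing. The levels below are invented numbers, not from any real camera's header:

```python
# Minimal sketch of metadata-driven RAW normalization; the black and
# white levels are HYPOTHETICAL values, not any real camera's header.

BLACK_LEVEL = 2048   # hypothetical ADC offset recorded in the RAW header
WHITE_LEVEL = 15000  # hypothetical sensor saturation point

def normalize(raw_value):
    """Map a raw ADC count to linear 0..1 using header metadata."""
    v = (raw_value - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    return min(max(v, 0.0), 1.0)  # clamp below black / above clip

print(normalize(2048))   # 0.0 -> black
print(normalize(15000))  # 1.0 -> clipped highlight
```

Note the raw counts themselves are untouched in the file; the converter applies the header's coefficients on the way out, which is why one can choose to second-guess them.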
  14. I don't know if this applies, but something to think about. A camera sensor is not equally sensitive to each color (filter). It's more sensitive to green, so green will clip before red and blue, meaning that if the camera can't adjust for it you can end up with a magenta cast in, say, the clouds (because the red and blue values end up higher relative to the clipped/maxed green values). Even in photography, the problem can be somewhat incurable in the wrong lighting. My guess is that VND filters also transmit each wavelength differently and so create havoc with the camera's ability to white balance. So another test you might do is trying different strengths of ND to see which creates the least/most cast. Also, my guess is that my favorite, LOG gammas, will further exacerbate this problem.
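The clipped-green/magenta mechanism is easy to sketch with invented numbers. The sensor ceiling and white-balance gains below are hypothetical, not from any real camera:

```python
# Toy illustration of the magenta-cast mechanism (all numbers invented):
# green is the most sensitive channel, so it clips first; white balance
# then scales R and B up past the stuck green value.

SENSOR_MAX = 4095  # hypothetical 12-bit ADC ceiling
WB_GAIN = {'R': 2.0, 'G': 1.0, 'B': 1.6}  # hypothetical daylight gains

def develop(scene):
    """Clip each channel at the sensor, then apply white-balance gains."""
    captured = {ch: min(v, SENSOR_MAX) for ch, v in scene.items()}
    return {ch: captured[ch] * WB_GAIN[ch] for ch in captured}

# A bright cloud: the true scene is neutral under these gains, but the
# green channel exceeds the ADC ceiling while red and blue do not.
cloud = develop({'R': 3000, 'G': 6000, 'B': 3750})
print(cloud)  # R and B reach 6000, G is stuck at 4095 -> magenta cast
```

Without clipping, all three channels would land at 6000 and the cloud would stay white; the stuck green is the entire effect.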
  15. Agree. The million-dollar question is whether removing net neutrality will make for more competition or less. I don't know. I will argue the other case: that if it weren't for net neutrality, we wouldn't be where we are now. It allowed everyone to develop their services without spending a lot of time negotiating every connection. Anyway, it's like all American politics these days. There are no real issues discussed in the mainstream media. Just good vs. evil.
  16. I don't see the connection between companies "blocking" access under current net neutrality and increased prices and worse services in a world where everyone charges what they want for their bandwidth. This is the kind of faulty logic--connecting shady business practices (which go on everywhere) to non-neutrality--that makes me doubt those for net neutrality. Again, I'm not saying I'm definitely for net neutrality or against it, only that, from 40,000 feet, I'm for every business/consumer having the freedom to produce/consume based on their economic realities. Companies are NOT ethical; their goal is to succeed in their business. Net neutrality doesn't make companies ethical or unethical. Again, a faulty argument to me. IF people really want net neutrality, then the government should set up enough network so that every citizen is guaranteed a certain amount of bandwidth and any content creator can use that same bandwidth to provide content. The current system sort of works, but you point out that much of the population doesn't have much choice in bandwidth. My guess is they have what the cable companies and telecoms wired decades ago. The whole issue is like so much that fuels Trump supporters. One side makes nice-sounding arguments while, in the real world, the people outside the big cities end up with crappy Internet, health care, public transportation, etc. Net neutrality is NOT really neutral. It favors those close to good access, like in cities. Or let me put it this way: having neutral access to anyone's content is not the same as everyone having neutral access to the same bandwidth. In order for everyone to have exactly the same access, both in content and in bandwidth, the economics have to change for private industry, or the government has to step in and build it.
  17. I haven't read the details; however, in general I see "net neutrality" as another name for artificial pricing. There are many problems with free markets; delivering better prices and services is generally not one of them (according to the economics I've read). It seems most people want to keep net neutrality more from fear of the unknown and hatred of big cable companies. I haven't seen any studies, even on Wikipedia, that show clearly, with evidence, how ending it will hurt their service or my access to it. Certainly, things will change, but it can be for the good. For example, if providers can charge by bandwidth, then it will be easier for competitors to build networks out into rural areas without worrying that Netflix is going to swamp their lines. They can charge one amount for those wanting to occasionally browse FB and another for those who want to watch Netflix. Right now, it's very easy for Netflix to raise prices, which it is doing, but very difficult for the cable companies, devils that they may be. Why should Netflix get a free ride on someone's difficult and expensive-to-build-out network? I have a hotspot which I use for business. It's part of my cell plan and I pay $20 for the device. It gives me unlimited data, but 15 gigs fast, then it slows down to 600K. Why? Because most who use it will end up watching Netflix, and there is no way to separate users. There is no 500-gig plan or 600K plan, because they need to match their pricing to net neutrality. It's quite possible that once they do away with net neutrality, pricing will adjust to usage, and for some it will go up, but for others go down. More importantly, there will be more levels of service based on bandwidth, not on the expectation that you can use as much as you want. With net neutrality gone, Verizon would be free to make plans based on how much network they have and can sell in different areas.
So I guess I'm one person against net neutrality, because I'm against anything where there is artificial/forced pricing on services that have wide differences in cost to produce and end-users looking for different value propositions. One last thing. I live in Cambridge, home to Harvard and MIT. We have Internet speed issues from phone lines, telephone poles, etc. The highly educated citizens could vote tomorrow for more taxes and for the city to build its own network. Do they? No. No one really wants to pay the real cost FOR THEIR NEIGHBOR. They'd rather have some private company do it so they can BLAME THEM and bitch about it ad nauseam. This is nationwide, is my guess. Net neutrality exposes this expectation of getting something for nothing.
  18. Take out your spot meter and put all that rubbish in perspective
  19. A lot of Sony "zombie skin" is from filmmakers using LOG and de-saturated profiles where they shouldn't. I've tried to be the "LOG police" on the Internet but will give up. The desire to believe you can get pro-equipment/lighting quality in mirrorless is stronger than the simple fact of the matter--"film quality" (what should we call it?) is now moving well past consumer equipment. The trend will only continue into 2023. In other words, many consumer-equipment filmmakers will always believe in snake oil (LOG, 10-bit, Rec. 2020 as a shooting color space, etc.). Of course, all those shooting wild-color music videos and zombie movies will be as proud as peacocks with their mirrorless kits. The epiphany came for me while watching Netflix's "Abstract" series--on a 720p TV. I thought it was done on something like the Canon C300, or even C500. Nope: RAW-shootin' Red Epic Dragons with $10,000+ glass. When this site was in its formative years, an episode of ER (I believe) was shot entirely on a 5DII. It was good, but not as good as the professional equipment of the time (too much aliasing from the spread-out pixels on that FF sensor). I believe a 5D4 is no closer to today's professional video cameras--farther, even. Unless consumer cameras shoot essentially RAW quality (and it seems physical power limitations are preventing this, since SD cards are now fast enough), the cameras of 2023 will be as you describe: better than today's cameras, but not as superior to the cameras of 2017 as today's are to those of 2012. All that said, one can get in the game even with a G6 (what was the name of the guy on this forum who championed that camera?), which was IMPOSSIBLE when I was young, when even a few minutes of 16mm film cost $100 (never mind the camera!). Some more proof that video has diverged is @Mattias Burling. He used to shoot RAW; now it's 8-bit. So I believe his YouTube quality has gone down, but it seems I'm the only one (though it doesn't stop me from watching his videos)!
If mirrorless today is good enough, then it should be good enough in 2023. Other questions are: will consumer-available video editing/processing software and lighting get powerful enough to do David Fincher-type work? I'm certainly interested in your thoughts there, Oliver! I know this seems left field, but I've heard the iPhone X is selling mostly because of the real-time video emojis it can create using its face-recognition post-processing software/chips, etc. That's what I mean about post technology. Maybe computational visual processing will be more important than whether the source is 8-bit or RAW.
  20. Been talking to Salim a bit on PM about the Atomos I have. The more I think about it, the more I believe he has adequate equipment for his shoot. An external monitor may help with getting shallow focus, but it will create a host of other problems. First, shallow focus: I'm sure others have more experience than me here. I find it difficult enough in photography. In video? Unless the person is very still, shallow focus is almost impossible to track. When I see it attempted on TV, there's always a second or two when the subject gets out of focus (and that's after they've picked the best clip from a bunch of footage). What I find difficult about shallow focus on a moving subject is determining the direction of movement from a 2-dimensional screen. Having stereoscopic vision allows us to make that determination, somewhat. A monitor is like looking at the scene with one eye. So, though I can see when the focus pixels fire on the screen, getting them back once the subject moves is a challenge unless I know the marks the subject is going for. Generally I don't, so I can't quickly figure out which direction the subject is moving. Camera autofocus doesn't have some magic for this, AFAIK. That's why Canon came out with the new STM lenses for Dual Pixel AF (they have special, quiet focusing micro-motors). I believe the camera essentially focus-hunts back and forth, but so quickly that you don't notice it. This is the same reason Nikon struggles with video focus: the autofocus system is designed to find focus as quickly as possible, for the photo, not to keep focus once it's there. So the question, often posed here on EOSHD, is: can you manually focus as well with, say, a Nikon as Canon's computer can with its DPAF, or even the new Sony cameras? My 2 cents is: not any more. I either use DPAF or I zone focus. All of which means an external monitor, to me, has marginal benefits over either the focus peaking in the camera or a magnifier on the back LCD.
Salim might be better off putting in time practicing with what he has, with talent that is given marks to hit. The A6000 has very good autofocus with the 35/1.8. The reason I stopped using the Atomos is that mounting it puts you between a rock and a hard place. You either use a small swivel, which, due to the leverage of the screen, always threatens to come loose and send the monitor crashing down onto the camera, or you use a cage, where now you have a big rig that will CERTAINLY draw attention in the tube, or any place where cameras are not welcome. The cable to the camera adds huge mental stress, to me at any rate. Then there are the batteries. Every additional wire or battery you add to your rig raises the risk of some failure, if not your anxiety. That's why such setups need a crew: you need to delegate those worries. Some proof of what I'm saying is that I never see YouTube videos with a single shooter using an external monitor. It's always a crew of some sort. Once you get into that, then you really have to balance your investment in the monitor against everything else. And if you have a crew, my guess is the talent will want to put time into learning their marks so as not to get cold looks at the evening post party.
  21. Boy, kick a guy when he's dumb...ur...down OH WAIT, a pun! I'm not shure actually. But what do you expect. I Rhode into town on the wrong horse!
  22. HAHA @IronFilm You believe in me .... as if I'll remember Anyway, I couldn't get any of it right. It's an NTG-1! Check back next week when I call it a NT9G I bought a Sure SM7B recently...I think. In my head it's an SMB7. Crickey!
  23. In the metro you can get away with a grainy look from any camera. Even professional TV/film seems to let its hair down in the subway. I think you're set with those kinds of shots. I have something similar, a Nebula 3000 (?). The thing about the C100 is its sensor is MADE for video. It was an A7S long before Sony made one. If you're doing the story, at some point you need to focus on directing and get a cinematographer. They'll have the "good stuff". Ironically, a better camera may hurt you because you'll get lost in the camera settings. That's certainly how I screw myself over and over and over again! The D810 and A6000 are great video cameras in 8-bit. IMHO, you need to spend $5,000 to really make that next leap. All the stuff in between is a compromise. Even if you had a RED and every piece of equipment you want, working alone, a crew of 4 with a Canon T3i--shooter, audio person, lighting person and assistant--will end up with better footage. So put your money where the difference will make it to the end! Sounds like you're already doing it the right way. So rock on!
  24. Had to check hours. It has 191 hours. Would include 2 Belkin 64GB SD cards, 2 batteries, box, charger, etc. Thinking $1650 including postage. Optional add Atomos Ninja Blade with 128GB SSD for $250, 18-135mm STM for $150, Rhode NT2 (I think, not sure $). @salim I have both the A6000 and A6300. Where do you want to go? Do you want to be a cinematographer, director, producer? That will help me answer your question.
  25. Hi Jonesy, I was just taking a shower thinking I should do a video on this subject. I'm still not clear on it all, but this is my current understanding from my experiments with the Sony X70, which has both 8-bit and 10-bit. Theoretically, what you say is correct IF the 10-bit value ends up in a 10-bit space. So when I shot 10-bit on the Sony X70 I expected to see a big increase in DR. I detected none. Why? I haven't seen any big real DR improvement from 10-bit anywhere on the Internet. Just a lot of marketing "you'll see" type of stuff. Excuse me if this is confusing. I'm still trying to figure it out in my head. When you use RAW values you'd have 1024 per channel, as you say, so there's a big difference between 256 at the top of 8-bit and 1024, right? That's the way I see it too. So let's say we have 3 RGB channels with 100, 256 and 1024 levels; multiplied, that's 26,214,400 combinations. Can we fit this in a 10-bit space? Absolutely, IF we're saving CHANNEL data as 10 bits. And when I've shot some limited Magic Lantern at 12-bit I DO SEE a marked improvement over 8-bit H.264. So when I looked at the 8-bit vs 10-bit files from the X70, they were roughly the same file size, but I did see a difference in banding (though I had to super pixel peep). So how could it really be 10-bit? And if it isn't really 10-bit, what is it? My guess is that it is really 8-bit with an extra couple of bits added to the full color data. So it really isn't 10-bit; it's more like 26-bit full color (24 bits plus 2 bits). That would explain why the colors would band less on close inspection: there is more precision between shades. Time will tell if I'm wrong, but so far I believe everyone has fallen for a lot of marketing hype, nonsense. Not surprising to me; if you read my other posts, few understand DR properly. Panasonic loves to muddy the water with all manner of CODECs and bit-rate mumbo-jumbo. So that's my take so far. It's not 10-bit acquisition. It's more like 8.1-bit display.
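The levels arithmetic above is easy to check, along with a toy banding comparison: quantize a smooth gradient at 8 and 10 bits and count how many distinct steps survive in a narrow slice of it (the slice and sample counts below are arbitrary choices, nothing camera-specific).

```python
# Toy banding comparison (arbitrary gradient slice, nothing camera-
# specific): quantize a smooth 0..1 gradient per bit depth and count
# the distinct levels that land inside a narrow slice of it.

def distinct_steps(bits, lo=0.40, hi=0.45, samples=10_000):
    """Distinct quantized levels across a narrow slice of a gradient."""
    levels = (1 << bits) - 1  # 255 for 8-bit, 1023 for 10-bit
    return len({round((lo + (hi - lo) * i / (samples - 1)) * levels)
                for i in range(samples)})

print(2 ** 8, 2 ** 10)                         # 256 vs 1024 levels per channel
print(distinct_steps(8), distinct_steps(10))   # ~4x the steps at 10-bit
```

Roughly 4x the steps per channel in the same slice is exactly why a true 10-bit gradient bands less, and why a file that bands only marginally less is suspicious.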