Posts posted by maxotics

  1. A lot of the film look has to do with three trucks' worth of lighting, modifiers, gels, cranes and dollies.  It has to do with set designers who carefully pick colors and furniture.  It has to do with storyboard artists who pre-visualize what will best convey the intent of the scene.  It has to do with colorists, or film graders.  It has to do with actors hitting their marks.  It has to do with wardrobe.

     

    I would argue that the camera itself contributes only about 5% of the "film" look.

  2. @tupp, this is all very complicated stuff.  It sounds like you know what you're talking about.  A lot of people, however, make the leap from 4K to better dynamic range (from end-to-end) and I just wanted to clarify that a bit.  

     

    It's true that Bayer sensors borrow colors and don't have the true color depth many people assume, but again, that's one of those things that trips everyone up (including me at one point).

     

    Hope you enjoy the Fujian.  Best value in show business :)

  3.  It has a lot to do with the low noise, good DR and Nikon colours probably. I have a Speed Booster on my G6 but I think the 5300 has more of a S35mm look to it for some reason. 

     

    The camera manufacturers have tried to move Heaven and Earth to get full-frame image quality out of APS-C and MFT sensors.  They haven't been able to do it.  The difference isn't just in people's "heads".  That's why Sony came out with full-frame mirrorless.  Of course, full-frame has its own drawbacks (large mirrors and lenses, and the difficulty of creating a video frame from spread-out pixels).  Anyway, when I compared my Nikon D600 against a Panny G5, like you, I found the colors more real from the full-frame sensor, though the G5 looked "cleaner", probably because of how closely packed its pixels are.  Of course, shallow DOF is better with larger sensors too.

     

    My sense is that the more you shoot video the more you like the Panny cameras.  Taken on their own, what's not to like! :)  The more you shoot photography (stills) the more you gravitate to larger-sensor cameras (like the Nikon) and RAW video, because you're trying to get video images that equal what you can get in stills.  I'm in the latter camp.  Of course, when you try to get that color fidelity you lose on other things.

     

    As you said, in really good light, the smaller sensor cameras catch up.  My guess is that in the summer you'll switch to the Panny camp.

  4. I'm sorry tupp, but I think what you wrote may be misleading.  I believe you're looking at a lot of video compression math, and not camera sensor (RAW) math, which is the prime driver of image quality.

     

    First, I've tried creating non-bayered images where the eye will blend in the red, green and blue pixels.  I could not get it to work.  You must de-bayer pixel values to create full-color pixels.  Here is a video I created where you can see that the eye cannot do this on its own.

     

     

    In the other article you link to, you mention that 10-bit color is 2^10, or 1,024.  So if you have a resolution of 2 megapixels (around 1080p), that gives you about 2 billion of "color depth" by your equation above.

     

    HOWEVER, most camera sensors do not sample at 10-bits, but more like 14-bits.  So that gives you about a 16,383 x 2 million or 33 billion color depth.

     

    By your calculation, you could increase the resolution 4 times (like 4K), so 8 million times 1,024 is around 8 billion "color depth" by your equation.  

     

    If you multiplied 1080p's 2 million pixels against 11 bits (2,048) you'd get about 4 billion.  Against 12 bits (4,096) you get about 8 billion, similar to 4K in your equation.

     

    Beyond 12 bits, the 1080p camera's sensor data is already past what 4K gives you by that measure.
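    Just to make the arithmetic explicit, here is a quick sketch in Python of the numbers being compared above (nothing camera-specific, just the multiplications):

    ```python
    # "Resolution x tonal levels" products from the discussion above.
    pixels_1080p = 1920 * 1080          # ~2 million
    pixels_4k    = 3840 * 2160          # ~8 million

    for bits in (10, 11, 12, 14):
        levels = 2 ** bits
        print(f"1080p x {bits}-bit: {pixels_1080p * levels:,}")

    print(f"4K x 10-bit: {pixels_4k * 2 ** 10:,}")
    # 1080p x 10-bit is ~2 billion, x 12-bit ~8 billion (similar to 4K x 10-bit
    # at ~8.5 billion), and x 14-bit ~34 billion.
    ```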

     

    How do I explain this?  The number of bits that represent a color has two aspects:

     

    1. The larger the bit value, the GREATER the accuracy with which you can represent the color.

    2. The larger the bit value, the greater the RANGE you can have between the same color in two neighboring pixels, say.

     

    The "dynamic range" aspect of color depth is what is missing from your thinking.  

     

    Higher resolution does not create higher dynamic range.  Dynamic range is a function of bit depth at the pixel level.  This has been pointed out by many on this forum, though it's difficult to explain to people who have only worked with compressed video (which almost always assumes an 8-bit-per-channel color space that covers everything we can see).  Once you work with RAW video you get it, or at least I did.

  5. I've always been a prime guy, but I have to admit I'm finding primes to be optical overkill (and I shoot mostly stills).

     

    1. Like you say, shooting anything under 2.8 (even 5.6 for me) requires very careful focus, and the subject must stay still.  So the lenses are only good for "set pieces".  It's no fun walking backwards into traffic or people while trying to fit things into your frame :)

    2. Bokeh differs with each lens, and primes can have unique qualities there (doesn't float my boat, but it might yours).

    3. Contrast and color also differentiate lenses, and some are very pleasing with low DXO scores.  This is a point Andrew made in his GH2 book and it opened my eyes and turned me onto one of my favorite lenses, the Fujian 35mm 1.7 c-mount CCTV lens ($30!)

    4. I do so much more in post now that EVEN if the prime was better to start with, I can't see the difference once I have the image the way I want it.  So what I suggest is START WITH ZOOMS that give you the most flexibility, and once you have done everything you can in post to get your image the way you want it, THEN try a prime and see how much better it is.

     

    • Also, look at Flickr sets to get an idea of how each lens performs.  There's a group for almost every lens.

     

    • Although I like to disagree with Andy ;) he seems to have the most lenses of anyone and shoots a lot with MFT, so find and read his posts on lens selection.  Also, check whether Andrew has updated his lens recommendations for the GH3 guide; if so, get that, otherwise get the GH2 guide.

     

    Lastly, most lenses look good in video.  Primes are overkill for what are much smaller images than stills.


  6.  The Lumix 14mm f2.8 is your cheapest, smallest decent option. It's sharp, fast and I gives a pretty good image.

    Don't write off the 14-42 kit lens either - it's pretty decent, though not very fast.

     

    I bought a 14mm with a GF3 for a little over $200 on  CL (the GF3 is a fun/useful cam by the way).  So you may be able to pick up a b-cam panny body for almost nothing with that lens, or the 14-42 as Matt James Smith mentions.  

     

    These lenses will also work on Blackmagic Cinema cameras.  However, the detail from the BMPCC is unreal, so the image picks up the smallest camera shake.  Therefore, you want lenses with OIS (and with an external button for it).  That makes the Nikon lenses less attractive from that perspective.  However, stills are better from an APS-C or full-frame sensor, so the Nikon lenses are a huge plus there.  What's great about Nikon is that all their lenses work on their digital cameras (unlike Canon).  So if you find a manual Nikkor at a garage sale you could use it on any Nikon, or on the Panny with an adapter.

     

    I agree with Andrew's frustration about the DSLRs.  They aren't nearly as easy to use as Panny (as you wrote) and their quality doesn't touch the BMPCC. 

  7. Having been here a while now, I can certainly agree that Andrew is a bitter pill sometimes.  He sometimes seems to get equipment from the smaller manufacturers, but then he seems to be more balanced in his reviews.  He seems to go a bit crazy when it's equipment from the large corporations that, I can assure you, do not kowtow to anyone.  Indeed, I think it is their aloofness that irritates him into saying things like "baby mode."  Having been up against big corporations in other matters, I can sympathize with him there.

     

    Again, just want to say, having been here awhile, that the last thing Andrew is is anyone's shill. 

  8. 100% sure they can do it technically speaking. Something is holding them back from actually doing it. 

     

    It may not be as easy as one would think.  When ML sets the EOS-M to 24fps to shoot RAW video, the display goes crazy in photo mode because the camera wants to show it at 30fps.  In other words, Olympus may run all the video out through a 30fps timer.  It may be built into read-only chips.  Firmware can only do so much.  The camera may be able to shoot at 24fps but not display it, for example.  A lot of what these cameras do is "hard-wired" into CODEC and sensor I/O chips.  Olympus also has corporate troubles that probably don't help it focus on its mirrorless cameras.

  9. No doubt that these technical charts are essential to top quality performance. Many find it very interesting, and worth a "hobby." When I see this stuff though it's a BUZZKILL.

     

    Hi Aaron, it's not about "top quality performance."  It's about getting the quality YOU want for your artistic expression.  So again, Andrew shot his latest video with the Olympus (which he and others slam for its video quality) because it has superior in-camera stabilization.  If he really bought into what you're saying he would have shot with a RAW-based camera.  If you look at the video, you will see that he needed to get close up to his subject, and needed to focus more on composition than on his camera settings.  EVERY shoot is a trade-off!

     

    What is a buzzkill is listening to people argue about seemingly arcane stuff.  I wouldn't argue with you there :)  Keep in mind that the manufacturers want to sell cameras.  They do not want you to know the truth about their weaknesses.  So everyone here tries to figure out what is what.  

  10. That's why I don't really care all that much.  I'm the type that would just rather use the dang things to make something interesting and call it good.

     

    I mean, it's curious and cool to know the tech, but hardly a priority for making something artistic.

     

    Good for you if you want to delve in though.  Lord knows I'm not inclined to be an engineer.

     

    fuzzynormal, think about what you're saying.  Why do you set the aperture to f2.8 vs f8?  Why might you not use f22 if you want everything in focus?  Why would you use 25fps in Europe, but not in the U.S.?  Why would you turn sharpness down in the camera?  Why wouldn't you expect to use your pancake lens with an adapter on a camera not designed for it?  Why might you use a Blackmagic camera for stuff you plan on showing at your local theater, but a GH3, say, for an online video series?  You think more like an engineer than you realize ;)

     

    Andrew has argued this before.  You can't separate the technical from the artistic.  Yes, you DO NOT have to be a technical expert to create great art.  That's why movie-making is the most collaborative of efforts.  No one can know/do it all.  You have to have multiple experts.  However, if you are doing this yourself, YOU want to know as much as possible.  For the guy-and-a-dog filmmaker, this site is an oasis.

     

    Most people here are not learning the tech to be "curious and cool".  They're learning it to be better artists.  

     

    This is how I got here.  I've never liked skin tones in compressed video.  I come from a film background.  I tried all kinds of things, but nothing worked.  Then I read a blog post here by Andrew on the 50D and how it could shoot RAW video (which I had no idea about).  So I bought his 50D guide and a camera, and tried it.  I just followed Andrew's step-by-step instructions.  The first clip changed my life.  And I've been here ever since, learning and sharing my knowledge and clips with others.

     

    I admit that I get lost in the weeds in the technology, which becomes counter-productive artistically.  We all do.  I think that's why Andrew shot his latest video in the dark with a non-RAW camera using internal stabilization.   That's a real video, a real work of art, for a real client.  Andrew has amazing cameras.  He could have shot with RED.  But he shot with the Olympus because his technical knowledge told him what would be THE BEST EQUIPMENT TO REALIZE HIS ARTISTIC VISION.  

     

    You can get into the technology and still use a super-8 film camera!  One does not preclude the other.

  11. The BIG plus of this lens is the OpticalImageStabilization of the 3rd "power" generation.

    I was able to use it handheld without any shoulder support and minimal deshaking in post. No motion blur due to slight camera shaking. Smooth, slow pans handheld.

     

    I want to second this.  The awesome pixel-level detail of the BMPCC comes with a caveat: the slightest shake is noticeable.  Even with my 14mm prime I notice it.  Unless you have an external OIS switch, or plan to use the camera only on a tripod, this problem will distract you.  I suppose you can fix some of it in post.

     

    I have the 14-45.  It's light and works fine, but the zoom is not smooth.  Good value for the used price, though.

  12. "Very nicely, overkill even. 4:2:0 maps to an effective 4:4:4 with root-2 the image size. So 2.7k 4:2:0, is a nice 1920x1080 4:4:4 -- notice 2.7K is one of GoPro video modes partly for this reason."

     

    For those who don't know, David Newman is not just "some software guy from GoPro" - he invented the CineForm codec and is clearly technically/mathematically gifted.

     

    Video compression has some unfortunate terms, like "chroma sub-sampling".  More accurately, it is "chroma removal".  The whole idea behind video compression is that we are more sensitive to luma and sharpness and less so to color.  That's why 4:2:0 works as well as it does.  Is 4:4:4 really necessary for broadcast video?  Is 4:2:2 even?

     

    The problem with 4:2:2 vs 4:4:4 is that you are doubling up on color values; that is, using one value for two pixels.  Sounds bad, but again, in practice it works very well.

     

    My guess, because I am not an expert in this stuff like David, is that with 4K you can shift pixels over when downscaling, so each output pixel can use a neighboring chroma sample, taking it from 4:2:0 to 4:2:2 and then 4:4:4.  Will the quality be visibly better?  We shall see!
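    Here is a minimal sketch of why a 2x downscale leaves every output pixel with its own chroma sample (Python/NumPy; the planar-4:2:0 array shapes are just an illustration, not any particular codec's layout):

    ```python
    import numpy as np

    # Hypothetical 4K frame in planar 4:2:0: full-resolution luma,
    # one chroma sample shared by each 2x2 block of pixels.
    Y  = np.random.rand(2160, 3840)
    Cb = np.random.rand(1080, 1920)
    Cr = np.random.rand(1080, 1920)

    # Downscale luma 2x by averaging each 2x2 block.
    Y_1080 = Y.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

    # The chroma planes are already 1920x1080, so every pixel of the 1080p
    # result has its own Cb/Cr sample -- effectively 4:4:4 at the smaller size.
    print(Y_1080.shape, Cb.shape, Cr.shape)   # all (1080, 1920)
    ```

    Whether those chroma samples match what a true 4:4:4 capture of the scene would have recorded is the separate question below.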

     

    What I worry about is people interpreting that 4:4:4 as equal to a 4:4:4 created from the ORIGINAL data.  I don't see how it can be, though I grant it will be close if the dynamic range is fairly flat.

     

    Again, I think that is all most knowledgeable people are talking about.

  13. A WORK IN PROGRESS.  Everyone, feel free to correct, add, subtract...

     

    Storage, power and bandwidth constraints necessitate video compression.  It's easier to understand the trade-offs, and issues, once you understand the ideal world.

     

    In the ideal world, you would work with all the data recorded by the camera:

     

    • The total pixels in a frame of 1,920 pixels wide, and 1,080 pixels high is 2,073,600, or about 2 million pixels.
    • In one second, we watch 30 of those frames, so that's 2 million times 30, or roughly 60 million pixels per second.
    • For a minute we’d need 60 million times 60 seconds, or 3,600,000,000 pixels per minute, or 3.6 billion.
    • When you’re watching your HD-TV your eye is viewing 3.6 billion pixels every minute.
    • What makes up a pixel? A color. Colors are often described in their red, green and blue components. That is, every color can be separated into a red, green and blue value, often abbreviated RGB.  
    • Most cameras record each color as a brightness value from 0 to 16,383 (14 bits).
    • You need three sets of numbers, red (0 to 16,383), green (0 to 16,383) and blue (0 to 16,383) to numerically describe ANY color that the camera has recorded.
    • Some simple math tells us that we will get a range of values between zero and roughly 4.4 trillion (16,384 × 16,384 × 16,384).  A quick check of these numbers in code follows this list.
    • To make matters REALLY confusing, cameras only capture one color at each pixel location (red, green or blue, or yellow, magenta or cyan) in a "Bayer" pattern.  So each pixel only records about 25% of the color information at that location, and it relies on nearby pixels to supply the rest, creating a full color through "de-bayering".  This trick of borrowing color information from nearby pixels is ALSO used in video compression, in a completely different way.  Too complicated to get into here.
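    The check of those numbers (Python, plain arithmetic only):

    ```python
    pixels_per_frame  = 1920 * 1080             # 2,073,600 -- about 2 million
    pixels_per_second = pixels_per_frame * 30   # ~62 million at 30 fps ("60 million" above)
    pixels_per_minute = pixels_per_second * 60  # ~3.7 billion ("3.6 billion" above)

    levels_per_channel = 2 ** 14                # 16,384 values, i.e. 0 to 16,383
    colors_14bit = levels_per_channel ** 3      # ~4.4 trillion possible RGB combinations
    colors_8bit  = (2 ** 8) ** 3                # ~16.8 million -- roughly what displays show

    print(pixels_per_minute, colors_14bit, colors_8bit)
    ```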

     

    We can only "see" about 12 million colors.  We don't need 4.3 trillion.  

    That is, we don't need 14bit * 14bit * 14 bit, we need 8bit * 8bit * 8bit (which actually gives us about 16 million)

     

    Therefore, for viewing purposes, we can throw out most of the recorded data

     

    Let's go back to the optimum image we'd like to see: 3.6 billion pixels per minute times 24 bits (3 bytes).  That would be 10.8 gigabytes per minute.  As you know, you're not streaming 10 gigabytes of video to your TV every minute.  Video compression does a marvelous job of cutting that down to a manageable size:

     

    HD 720p @ H.264 high profile 2500 kbps (20 MB/minute)
    HD 1080p @ H.264 high profile 5000 kbps (35 MB/minute)
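    The same data-rate comparison in code (Python; the 2,500 and 5,000 kbps figures are just the example bitrates above):

    ```python
    pixels_per_minute  = 1920 * 1080 * 30 * 60       # ~3.7 billion
    uncompressed_bytes = pixels_per_minute * 3        # 24-bit RGB = 3 bytes per pixel
    print(uncompressed_bytes / 1e9, "GB/minute")      # ~11 GB/minute uncompressed

    for label, kbps in (("720p H.264", 2500), ("1080p H.264", 5000)):
        mb_per_minute = kbps * 1000 / 8 * 60 / 1e6    # kilobits/s -> megabytes/minute
        print(label, round(mb_per_minute, 1), "MB/minute")   # ~19 and ~38 MB/minute
    ```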

     

    If the compressed image "overexposed" what the sensor originally captured, you cannot get the correctly exposed data back from the compressed video.  You would need the original data.

     

    Put another way, in compressed video you are starting out with 24-bit pixels (8/8/8).  In the original data, you have 42-bit pixels (14/14/14).  Those 42 bits aren't all equal (the sensors aren't as accurate at the extreme ends of their readings), but this should give you an idea of why RAW sensor data is the ideal.

     

    REFERENCE

     

    BAYER SENSORS

    http://www.siliconimaging.com/RGB%20Bayer.htm

     

    DEBAYERING

    http://pixinsight.com/doc/tools/Debayer/Debayer.html

     

    PATENT BELIEVED TO BE BEHIND CANON'S CURRENT VIDEO FOCUS-PIXEL TECHNOLOGY

    http://www.google.com/patents/US20100165176?printsec=abstract#v=onepage&q&f=false

     

    SIGMA/FOVEON NON-BAYER SENSORS (not currently used in video due to technical problems)

    http://en.wikipedia.org/wiki/Foveon_X3_sensor

     

    CAMERAS MUST DUMP IMAGE DATA IN REAL TIME.  Or: not all SD cards are created equal.

    http://en.wikipedia.org/wiki/Secure_Digital

     

    VIDEO COMPRESSION

     

    http://tech.yanatm.com/?p=485

     

    Oh this stuff makes my head swim ;)

  14. Think about how bracketing, or HDR, works in photography.  You take three images at different exposures.  Software then replaces any large patches of detail-less pixels with pixels that do have detail.  It essentially USES the 8-bit truncated data to tell the computer that it has to look for data that was truncated higher or lower, which it finds in the under- or over-exposed images.
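    As a toy illustration (Python/NumPy; random data and hard thresholds instead of a real HDR merge, which would align the frames and blend with smooth weights), the idea is to take each detail-less region from whichever bracket still holds detail:

    ```python
    import numpy as np

    # Three hypothetical 8-bit exposures of the same scene: under, normal, over.
    h, w = 1080, 1920
    under  = np.random.randint(0, 256, (h, w), dtype=np.uint8)
    normal = np.random.randint(0, 256, (h, w), dtype=np.uint8)
    over   = np.random.randint(0, 256, (h, w), dtype=np.uint8)

    merged  = normal.astype(np.float32)
    blown   = normal >= 250    # clipped highlights: detail lives in the underexposed frame
    crushed = normal <= 5      # crushed shadows: detail lives in the overexposed frame
    merged[blown]   = under[blown]
    merged[crushed] = over[crushed]
    ```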

     

    With a RAW image, you can do close to the same thing in Photoshop by selecting detail-less pixels and then using the RAW data (the 14-bit-per-channel data) to bring out detail, essentially moving the center exposure point of those pixels only.

     

    A way the GH4 could potentially use its resolution to approach RAW is to take multiple frames and then HDR them.  That is, if the GH4 exposed 1920x1080 pixels in one exposure a bit above, and another a bit below, THEN the downsampling could use the extra range to build a more detailed image.

     

    People at Magic Lantern have discussed such ideas and do it in a limited way, though I don't follow it.  Perhaps someone reading this can explain more about that technique.

  15. OK let's put this in context HurtinMinorKey please. What affect does your theory have on the end result, are we arguing here over a tiny technicality / mathematical proof, or is it a serious issue which will mean we get nowhere near a higher bit depth as David and others are suggesting?

     

    If a pixel in the camera reads 14 bits of data you CANNOT get all of it back once you truncate to 8 bits of data.  

     

    Certainly, there will probably be modest color/luma improvements from downsampling 4K, but only within its 8-bit dynamic range.

     

    That is to say, IN PRACTICE, if you shoot a scene that falls within the CODEC's dynamic range output you may get better color nuance through averaging of neighboring pixels.  But if the neighboring pixels are choppy then you're just going to create artifacts.

     

    However, you cannot get values from those RAW pixels that were out of the 8bit range they took from the 14bits.
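    A minimal sketch of that loss (Python/NumPy; a simple linear 14-to-8-bit truncation is assumed here, real cameras apply curves, but the principle is the same):

    ```python
    import numpy as np

    raw = np.arange(0, 2 ** 14, dtype=np.uint16)    # every possible 14-bit sensel value

    # Truncate to 8 bits: 64 neighbouring 14-bit values collapse onto one 8-bit code.
    video = (raw >> 6).astype(np.uint8)

    # The best "recovery" possible is to guess the middle of each 64-value bucket.
    recovered = (video.astype(np.int32) << 6) + 32

    print(np.unique(video).size)                            # 256 distinct values survive
    print(np.abs(recovered - raw.astype(np.int32)).max())   # up to 32 counts of error
    ```

    Whatever nuance lived inside each bucket, and anything outside the chosen 8-bit window entirely, cannot be reconstructed afterwards.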

     

    I don't mean to be rude, but you're confusing color bits with compression bits.  People who read this thread who think the GH4 is going to do what the Blackmagic cameras, ML RAW, or high end RAW based cameras do should understand this.

  16. I'm a case in point.  I have a Sigma DP1 Merrill for portable medium-format-quality stills.  A D600 for low-light event photography and portraits (with an 85mm).  A Canon EOS-M with the CCTV lens you recommended in your GH2 guide, for fun, super-shallow-DOF photography and video.  A GF3 which I share with my daughter for quick and easy stills and video with almost any lens and adapters.

     

    For video I have a Blackmagic pocket cinema camera.  If I CARE about the image-quality of the video, I wouldn't even think about shooting it with any of the above.

  17. I sold my Nex 7, odd as it may seem, because of their proprietary hot-shoe.  I had to use an adapter for my radio trigger to my Canon flashes, which was a pain.  Other than that, and slow focus, I loved that camera.  Now that Sony has gotten smart about the hot shoe I'm intrigued again.

     

    Sony uses APS-C sized sensors, and for photography, they deliver much better IQ compared to MFT, at least for me.  Also, these are small cameras with EVF which are important for older people who need reading glasses (like me).  

     

    Sony has also created an API that lets you control these, and some earlier cameras, from any computing device with wireless networking.  Like Canon's Magic Lantern, only started by the manufacturer!

     

    Go Sony! :)

  18. The source signal is 10bit 4:2:2 from the sensor and the encoder does a good job of compressing it into 8bit 4:2:0. 

     

    I, too, was confused before working with the actual bits of RAW video data.  When the sensor is exposed to light, a chip reads the values from each sensel (which are eventually combined into pixel values).  The sensels basically just produce voltage readings.  The camera can read those values to as much precision as the electronics allow.  You can liken this to measuring the voltage of your household mains: a simple meter would show 230 volts in the U.K., but a fancy one might show 229.556345 volts.

     

    The very first sensel, top left, is usually a red-filtered sensel (keep in mind the sensor is, at heart, monochromatic).  A Canon camera, for example, reads that value and stores it as a 14-bit number, 0 to 16,383.  That means it records 16,384 shades of red.  It does the same for the next sensel, a green one.  Below that green one, on the second row, will be a blue sensel; again the same for that.  Both Magic Lantern RAW and Blackmagic save these values (Blackmagic with some lossless data compression, not to be confused with visual compression).
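    A toy picture of that layout (Python/NumPy; a standard RGGB pattern is assumed here, the exact arrangement varies by camera):

    ```python
    import numpy as np

    # Hypothetical 4x4 corner of a 14-bit Bayer mosaic (values 0-16,383), RGGB:
    #   R G R G
    #   G B G B
    mosaic = np.random.randint(0, 2 ** 14, (4, 4), dtype=np.uint16)

    red   = mosaic[0::2, 0::2]      # red-filtered sensels
    green = np.concatenate([mosaic[0::2, 1::2].ravel(), mosaic[1::2, 0::2].ravel()])
    blue  = mosaic[1::2, 1::2]      # blue-filtered sensels

    # De-bayering interpolates the two missing channels at every location,
    # turning these single-channel 14-bit readings into full-colour pixels.
    ```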

     

    Each of these values is a number between 0 and 16,383.  If you construct a full-color pixel from the three, you'd end up with one of roughly 4.4 trillion possible values; say it was white.  And another pixel, black, was 0.  Unfortunately, our eyes can only see about 16 million variations in color, so that white, picked from trillions of possible values, would effectively look like one of 16 million.  For the most part, we can only see, and our displays and printers can only produce, color values within about 16 million shades.  So why do we want trillions of colors when we can only see 16 million?

     

    EXPOSURE!

     

    First, let's turn to the GH4 (or any H.264 tech).  In those cameras each sensel's value is saved as one of 256 (8-bit) values.  Those three values are then combined to create a 16 million (24-bit) full-color value.  Assuming you've exposed the scene perfectly, and the colors are what you want, you will be as happy as Andy with a gold-plated G6 :)  Also, 4:2:2 and all similar nomenclature are about chroma compression; they degrade each still image.

     

    But what if you didn't expose correctly?  What if you made a mistake and exposed +2?  Now your scene is washed out.  When you try to pull the values back down, the detail (shading) just isn't there.

     

    But much of it WAS there in the RAW data from the sensor!  Let's say with the GH4 you exposed perfectly, and it happened to be the 16 million color values between 2 trillion and 2.16 trillion.  Then, you expose +2 and it saves the values from 2.74 to 2.9 trillion.  With the RAW data you can pick up those values and CONSTRUCT your 16 million color H.264 video data.  Naturally, the sensor works better when you expose in the middle of its range, so you can't fix the exposure perfectly in real life.
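    A rough sketch of what "picking a different window" means (Python/NumPy; a simple linear mapping is assumed, real raw processing uses proper tone curves and white balance):

    ```python
    import numpy as np

    raw = np.random.randint(0, 2 ** 14, (1080, 1920), dtype=np.uint16)  # 14-bit sensel data

    def to_8bit(raw14, black, white):
        """Map one slice of the 14-bit range onto 0-255."""
        window = np.clip(raw14, black, white).astype(np.float32)
        return ((window - black) / (white - black) * 255).astype(np.uint8)

    shadows_view    = to_8bit(raw, black=0,    white=8191)   # renders the lower half of the range
    highlights_view = to_8bit(raw, black=8192, white=16383)  # a different window onto the same data

    # With the RAW file you keep all 16,384 levels and can re-run to_8bit with any
    # window in post.  An 8-bit recording keeps only the rendering that was baked in.
    ```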

     

    The only benefit of the I-frame data is that it doesn't try to compress the 24-bit values of a pixel from one frame to the next, which can create artifacts.  It does not, in any way, solve the problem of available color depth, if you're comparing 8-bit to 14-bit.

     

    It sounds like the GH4 will provide fantastic resolution in low-dynamic range shooting conditions.  It will not do what the Blackmagic cameras do in high-dynamic range shooting conditions (save enough data for you to fix exposures in post, or recover details from shadows, etc)

     

    Filmmakers who don't understand the difference between the cameras may end up having the wrong camera and that would be a shame.  They are both fantastic technologies.  You can no more have both than you can have a Range Rover and Jaguar in the same car ;)

  19. I was about to buy the 2K BMCC, I'm going to wait and see what happen in the near future.

     

    Why EOSHD always glosses over the difference between 8-bit per channel and 12-14-bit per channel sensor data is beyond me.  Fantastic review, as always!  However, the Blackmagic and Magic Lantern RAW technologies are completely different.  The GH4 compresses sensor data into an 8-bit/channel video stream--more resolution DOES NOT equal color depth.  

     

    For example, he points out you can get a 4096 x 2160 still.  He's absolutely right that will come in very handy.  However, that still will be a JPEG, with 8 bits per channel.  If you use a 1920x1080 DNG still from a Blackmagic you are getting 12 bits per channel.  The latitude of what you can do in post is leaps and bounds beyond 8-bit.

     

    Both cameras have strengths for different types of film-makers.  In no way does the GH4 compete with Blackmagic RAW video cameras, and vice versa.

     

    They are both great cameras and EOSHD has written great reviews on both.

  20. The larger the sensor the better the low light performance due to less lens-diffraction.  Noise reduction is an art unto itself.  I think the cameras are all a toss-up.  Some will do some things better, some worse, in low light.  You may want to budget $50 for NEAT.  You may want to put your money into a good quality LED camera light, and/or panel on a stand.  The speed-booster will impress you, but may also make you lazy about getting more light onto your subjects.

     

    RAW would blow any H.264 out of the water, IMHO.  Here's an EOS-M with Magic Lantern, with a Sigma 10-20mm in crop mode.  You could get this camera/lens setup used for around $600, maybe less.  However, ML is a pain, so you'd need time beforehand to learn how to use it so it doesn't crap out on you at a shoot.  Or you could get/borrow/rent a BMPCC, which is my final recommendation.

     

  21. If you want to influence the manufacturers, this is the way to do it.  That may be good or bad ;)  I'd create a "Pocket list".  I'd like to see a place for the Canon EOS-M, which has mic input, shoots RAW, etc.

     

    Also, my small beef with the Alexis article is that it didn't include a comparison to DSLRs.  I don't think professional cameras should be excluded, but they should be used to illustrate what features are lacking in consumer cameras and how well certain consumer cameras can overcome the limitations, etc.

  22. With the right light and limited motion, any of the above cameras with a sharp lens could shoot it?  I agree with HurtinMinorKey: limited DR and images flattened into midtones, which is my tell-tale sign of "video" cameras that can't show true blacks without high contrast (though I agree, it's probably a high-end camera).  The images are a mix of flat DR and sharpness, something I don't think you'd get with film.  I liked the close-up of the girl's crotch on the staircase.  Why you'd shoot and grade in such a dreamy way and then put that jarring clip in, well, it no longer made a difference to me what camera they were using.
