
Light L16 - A Camera Breakthrough!



3 hours ago, JurijTurnsek said:

If anyone has not seen the officially released 20mp images (noisy stuff):

 

Aren't these much older pics? I think a few more generations of photos have appeared since these.

I do see an improvement in photo quality. Unfortunately, the photos uploaded lately mostly seem to be low-res ones. I hope they change that.

I think these are the pics from the latest hardware and software version:

SuperBloom_D-0790_L16_00070-Edit-2.jpg

 

L16_00186-Edit-Edit.jpg

 

Juan_Cruz_7_11_2017-Edit-2.jpg

They look pretty promising.

TBH, though, I wonder how many bits of RAW are even possible with a tiny smartphone-sized camera sensor (I am guessing most are limited to 10-bit RAW). Also, it seems unlikely that the photo stitching is done with RAW photos; it may require insane levels of processing to stitch together anything more than 8-bit JPEGs. I wonder whether a few (3-5?) Snapdragon 835-type processors would be able to handle stitching up to 10 RAW photos from 10 cameras into a single one, and do a good job of creating a single image that exceeds anything else coming out of a smartphone?
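For what it's worth, here is a rough back-of-envelope calculation of what a RAW-domain merge would have to push around compared with 8-bit JPEGs. All the figures are my own assumptions for illustration (ten modules firing per shot, roughly 13 MP sensors, 10-bit readout), not anything confirmed by Light:

    # Back-of-envelope estimate of the data volume a RAW-domain merge would
    # involve. All figures are assumptions for illustration, not Light's specs.
    modules_fired = 10            # assumed number of camera modules used per shot
    pixels_per_module = 13e6      # assumed ~13 MP per module
    raw_bits = 10                 # assumed RAW bit depth

    raw_mb_per_frame = pixels_per_module * raw_bits / 8 / 1e6
    total_raw_mb = modules_fired * raw_mb_per_frame

    print(f"RAW per module frame: {raw_mb_per_frame:.1f} MB")         # ~16 MB
    print(f"RAW to align and merge per shot: {total_raw_mb:.1f} MB")  # ~163 MB
    # A finished 8-bit JPEG of the same scene compresses to a few MB, which is
    # why merging in JPEG space is far cheaper for a phone-class SoC.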


22 hours ago, cantsin said:

They are quite underwhelming. Shows that even the best algorithms can't get rid of the noise produced by small sensors and the lack of true detail resolution produced by tiny cheap lenses.

That's not the best example though, mate; that kind of detail requires much better ingredients to bake the perfect cheesecake ;) And actually, my attack :D on your thread out there also shows these computational cameras can rock in good hands. That is, if we know what we're doing and pick the proper basket to put the eggs in.

high resolution sample.png


10 minutes ago, Emanuel said:

That's not the best example though, mate; that kind of detail requires much better ingredients to bake the perfect cheesecake ;) And actually, my attack :D on your thread out there also shows these computational cameras can rock in good hands. That is, if we know what we're doing and pick the proper basket to put the eggs in.

high resolution sample.png

Actually that example shows what is going wrong and why the camera is bad. The overall tonality of the larger images is poor but yes, small high contrast details are present in crops. It's like the camera is scanning the scene for high contrast detail and junking everything else. I.e. it's an extreme version of the classic disease of bad camera design - trading resolution of high contrast detail for tonality. Aka Megapixel War Syndrome, Cheapus Compactusitus or Walmart Camera Canker.


11 minutes ago, meanwhile said:

Actually that example shows what is going wrong and why the camera is bad. The overall tonality of the larger images is poor but yes, small high contrast details are present in crops. It's like the camera is scanning the scene for high contrast detail and junking everything else. I.e. it's an extreme version of the classic disease of bad camera design - trading resolution of high contrast detail for tonality. Aka Megapixel War Syndrome, Cheapus Compactusitus or Walmart Camera Canker.

So, are we here to review a camera you can't even get your hands on yet, is that it?

Or the technology behind and beyond it?

Moreover, that consumer concept in the hands of many seems to annoy a few PROs who are much more focused on pixel-peeping abilities than on the potential of the gear in their hands and what their skills, if any, can do with it.

C'mon...


13 minutes ago, Emanuel said:

So, are we here to review a camera you can't even get your hands on yet, is that it?

 

How does it matter how a camera feels in your hands if the images - the images cherry picked to show it in the best (haha) light - suck???

More than that, Light are asking people to give them money based on these images. So saying that the images shouldn't be discussed is bizarre.


35 minutes ago, meanwhile said:

How does it matter how a camera feels in your hands if the images - the images cherry picked to show it in the best (haha) light - suck???

More than that, Light are asking people to give them money based on these images. So saying that the images shouldn't be discussed is bizarre.

It is not exactly about the way the device feels in your hands, but about the 'proof is in the pudding' concept behind it. What do you want? To go deep or to go flat? Any profitable discussion implies that.

From that post about tonality, this example comes to mind (source unknown; it was shot on 35mm film, but that is irrelevant for this matter):

 


2 minutes ago, Emanuel said:

It is not exactly about the way the device feels in your hands, but about the 'proof is in the pudding' concept behind it. What do you want? To go deep or to go flat? Any profitable discussion implies that.

 

If you are trying to say that image quality can't be judged separately from dof, this is both wrong and irrelevant. If a sensor has poor resolution of lower contrast detail, that will be the case at all dofs.


22 minutes ago, meanwhile said:

If you are trying to say that image quality can't be judged separately from dof, this is both wrong and irrelevant. If a sensor has poor resolution of lower contrast detail, that will be the case at all dofs.

Not at all. Deep or flat, shallow, whatever word you use, that terminology doesn't apply only in DOF terms. I mean discussing the images themselves instead.

Take a look at this second sample from the same film (on the subject of tonality):

Any worthy discussion about pictures has to deal with the subject matter itself, not strictly with a technology or geek POV.

So, is it of any value to our discussion that we don't have a flawless device here?

Surely not. What matters is the potential of the technology instead. The doors it opens. The way you can use the tool. Something you can have that you couldn't have before. Hence the breakthrough; I am all in. You should be too. Sorry pal, but I'm afraid you've lost my point there :-)


A bad image stays a bad image. Maybe the camera performs better in some cases - and mind you, those images are demo images cherry-picked by the company, intended to make the camera look good. Your example with the rock image is very likely the result of computing/compositing the high-resolution image from a combination of wide-angle and tele micro lenses (where the people on the beach were captured by the tele micro lens). The sample image with the grass shows that this approach doesn't work if you have high detail (such as the grass leaves) across the whole frame.
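If that is indeed how the merge works, a toy sketch of the idea (entirely my own illustration with made-up frame sizes, not Light's actual pipeline) makes the failure mode obvious: the composite only gains real detail inside the window the tele module covered, and the rest of the frame is just the upscaled wide-angle image:

    import numpy as np

    # Toy illustration, not Light's actual pipeline: paste a high-detail tele
    # crop into an upscaled wide-angle frame. Real detail exists only where the
    # tele module covered the scene; everywhere else you get upscaled
    # wide-angle pixels, which is exactly what the grass sample exposes.

    def upscale_nearest(img, factor):
        # Naive nearest-neighbour upscale standing in for the real resampling.
        return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

    def composite(wide, tele, top, left, factor=2):
        canvas = upscale_nearest(wide, factor)
        h, w = tele.shape
        canvas[top:top + h, left:left + w] = tele  # true detail only in this window
        return canvas

    # Made-up example: 100x150 wide frame, 80x80 tele crop near the centre.
    wide = np.random.randint(0, 256, (100, 150), dtype=np.uint8)
    tele = np.random.randint(0, 256, (80, 80), dtype=np.uint8)
    result = composite(wide, tele, top=60, left=110)
    print(result.shape)  # (200, 300): nominal resolution doubled, real detail only in the pasted window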

Here's another example of the limitations of the camera, again picked from Light's sample images. The subject was shot in extremely good light, yet the shadows on the model's cheek are as noisy as an ISO 6400 image. Given that all the sample images were shot in optimal light conditions, this is a strong indicator of extremely weak low-light capabilities.

Btw., I'm not dismissing light field and computational photography as such. I just think that the technology available today is as immature as, for example, the Apple QuickTake digital camera from the late 1990s (which shot 640x480 JPEGs). Back then, digital cameras were toys and "eBay cameras" (for cheaply shooting low-res pics of products), and nobody expected them to ever compete with analog photography.

I actually do expect light field and computational photography (along with commodified deep learning AI in consumer hardware) to obsolete most conventional camera designs - eventually, but not today. A lot of hardware and software development still needs to be done.

CubaCowboy-crop.jpg


23 minutes ago, cantsin said:

Your example with the rock image is very likely the result of computing/compositing the high-resolution image from a combination of wide-angle and tele micro lenses (where the people on the beach were captured by the tele micro lens). The sample image with the grass shows that this approach doesn't work if you have high detail (such as the grass leaves) across the whole frame.

Indeed. Horses for courses ;-)

23 minutes ago, cantsin said:

A bad image stays a bad image. Maybe the camera performs better in some cases - and mind you, those images are demo images cherry-picked by the company, intended to make the camera look good. (...)

Here's another example of the limitations of the camera, again picked from Light's sample images. The subject was shot in extremely good light, yet the shadows on the model's cheek are as noisy as an ISO 6400 image. Given that all the sample images were shot in optimal light conditions, this is a strong indicator of extremely weak low-light capabilities.

I think it is too early to shoot the messenger, don't you agree?


It's actually a good example of current technological limitations that could be solved in the future through AI and deep learning-based computer vision (which currently isn't feasible with the computational capabilities of a handheld device, i.e. without tensor co-processors). An AI computer vision algorithm could have recognized the grass-leaf detail captured by a tele microlens as grass-leaf detail and applied a grass-leaf reconstruction model to the rest of the image, where the grass had been captured by wide-angle microlenses at lower resolution.
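A crude sketch of that idea (my own illustration, with a trivial stand-in where a learned model would go): find which low-resolution patches look like the texture the tele module captured, then run a reconstruction model over just those patches. The "model" below only matches local contrast, so it invents no real detail; the point is the structure, not the result:

    import numpy as np

    PATCH = 16  # assumed patch size, purely illustrative

    def patches(img, size=PATCH):
        # Yield non-overlapping patches with their top-left coordinates.
        h, w = img.shape
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                yield (y, x), img[y:y + size, x:x + size]

    def looks_like(patch, reference):
        # Crude texture match on local contrast; a real system would use a
        # learned classifier or embedding here.
        return abs(float(patch.std()) - float(reference.std())) < 10

    def enhance(patch, reference):
        # Placeholder for a learned detail-reconstruction model: just match the
        # reference's contrast, which obviously adds no true detail.
        if patch.std() == 0:
            return patch
        out = (patch - patch.mean()) * (reference.std() / patch.std()) + patch.mean()
        return np.clip(out, 0, 255).astype(patch.dtype)

    def reconstruct(wide_region, tele_reference):
        result = wide_region.copy()
        for (y, x), p in patches(wide_region):
            if looks_like(p, tele_reference):
                result[y:y + PATCH, x:x + PATCH] = enhance(p, tele_reference)
        return result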

Therefore, it would be really interesting to see whether the Light L16 will have a way of storing "raw", i.e. unprocessed microlens image fragments. Theoretically, one might be able to compute better images out of that raw data in a few years with more powerful software. 
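As a sketch of what such a "raw" container might look like (purely hypothetical; this has nothing to do with Light's actual file format): keep every module's untouched sensor frame plus the shot metadata a future, smarter pipeline would need, for instance in a simple archive:

    import io, json, zipfile
    import numpy as np

    def save_capture(path, frames, metadata):
        # Write each module's unprocessed frame plus shot metadata to one archive.
        with zipfile.ZipFile(path, "w") as zf:
            zf.writestr("metadata.json", json.dumps(metadata, indent=2))
            for name, frame in frames.items():
                buf = io.BytesIO()
                np.save(buf, frame)  # preserves dtype and shape
                zf.writestr(f"frames/{name}.npy", buf.getvalue())

    # Hypothetical example: two modules, 10-bit data stored in uint16.
    frames = {
        "wide_28mm_a": np.zeros((3000, 4000), dtype=np.uint16),
        "tele_70mm_b": np.zeros((3000, 4000), dtype=np.uint16),
    }
    metadata = {"exposure_us": 2000, "analog_gain": 1.0, "modules": list(frames)}
    save_capture("capture_0001.zip", frames, metadata)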


18 minutes ago, cantsin said:

Btw., I'm not dismissing light field and computational photography as such. I just think that the technology available today is as immature as, for example, the Apple QuickTake digital camera from the late 1990s (which shot 640x480 JPEGs). Back then, digital cameras were toys and "eBay cameras" (for cheaply shooting low-res pics of products), and nobody expected them to ever compete with analog photography.

I actually do expect light field and computational photography (along with commodified deep learning AI in consumer hardware) to obsolete most conventional camera designs - eventually, but not today. A lot of hardware and software development still needs to be done.

This is a consumer device, I guess. In any case, feel free to use it as an artistic tool, no?


9 minutes ago, Emanuel said:

Indeed. Horses for courses ;-)

I think it is too early to shoot the messenger, don't you agree?

No, images are evidence. :-) Just like sample images from a conventional camera with high noise, lacking detail or blown-out highlights are not fully, but let's say 75-90%, reliable indicators of a camera's actual issues with sensor noise, resolution or dynamic range. (All the more so when those images were shot by someone who is familiar with the camera.)

3 minutes ago, Emanuel said:

This is a consumer device, I guess. In any case, feel free to use it as an artistic tool, no?

Yes, but the company's PR claim that the camera delivers DSLR quality needs to be taken with a huge grain of salt. And putting those claims into perspective is what this thread is about.


8 minutes ago, cantsin said:

No, images are evidence. :-) Just like sample images from a conventional camera with high noise, lacking detail or blown-out highlights are not fully, but let's say 75-90%, reliable indicators of a camera's actual issues with sensor noise, resolution or dynamic range. (All the more so when those images were shot by someone who is familiar with the camera.)

Form and content go way beyond the technology they are based on. Who guarantees that the shooter is even able to extract the best from it?

Not to mention that certain limitations are the best tool: something the user must overcome or live with.

Another fallacy often repeated over these pages is the misconception that content must be a perfect showcase of state-of-the-art technology.


6 minutes ago, Emanuel said:

Form and content go way beyond the technology they are based on. Who guarantees that the shooter is even able to extract the best from it?

This is getting silly, sorry. We have sample images, taken in perfect light, probably from a tripod and at high shutter speeds (since there's not the slightest trace of camera shake or motion blur), properly focused (since the detail resolution issues aren't the kind of issues caused by a defocused lens, but the same kind of issues that we know from frame grabs of highly compressed video) and properly exposed. Now you're saying that we can't make ANY statement about the camera's image quality based on these images. Then we can just as well close this whole forum.

