PICList Thread
'[EE]:improving camera resolution AGSC'
2006\01\12@143144 by M. Adam Davis

On 1/11/06, Stephen R Phillips <cyberman_phillips@yahoo.com> wrote:
{Quote hidden}

How the camera captures the image is relevant, but your explanation
does not prove that one could not obtain sub-pixel resolution from
multiple shots of the same subject.

I don't see a reason why it's not possible.  In fact, some of the
algorithms used to convert the Bayer-pattern image to a "regular"
image are applicable to the problem.  NASA uses these techniques to
produce high-resolution images of Mars.

Imagine a one-pixel, monochromatic camera.  You take four images of
one subject, offsetting each image from the others by half the pixel
size, horizontally and vertically.  If you overlay the four images
relative to the area of the picture, you end up with a 3x3 table of
half-pixel cells - each image covers four cells of the table.  The
central cell has been imaged four times, but it contributes only 1/4
of each image's value.  The four corners were each imaged only once.
The remaining four cells were imaged twice.

Using linear algebra it's possible to obtain a 9-pixel image from
those four one-pixel images.  With only four measurements for nine
unknowns the system is underdetermined, so it won't be as good as a
real 9-pixel image, but it'll be much better than the one-pixel image.
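
Here's a rough numpy sketch of that reconstruction.  The scene values
are made up for illustration, and the 1/4 weights just say that each
half-pixel cell contributes a quarter of a pixel's reading:

import numpy as np

# Build the 4x9 system matrix: each one-pixel exposure, shifted by
# half a pixel, averages a 2x2 block of the nine half-pixel cells
A = np.zeros((4, 9))
for i, (r, c) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    for dr in range(2):
        for dc in range(2):
            A[i, (r + dr) * 3 + (c + dc)] = 0.25

# A made-up "true" 3x3 scene, flattened row-major
scene = np.array([0.0, 1.0, 0.0,
                  1.0, 1.0, 1.0,
                  0.0, 1.0, 0.0])
measurements = A @ scene    # the four one-pixel readings

# Minimum-norm least-squares estimate: four equations, nine unknowns,
# so it's underdetermined and the corners come out smeared
estimate, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print(estimate.reshape(3, 3))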

If you want to do the same for a more complex situation (more pixels,
a Bayer pattern, etc.) then there are a number of ways to extend the
solution.  In any case, yes, it's possible to increase the resolution
of the image by taking multiple pictures of something with the camera
slightly offset.

Also, with a telecine camera it won't matter as much, but in most
cases you want to step the image sensor by one sub-pixel rather than
stepping the lens and image sensor together.

As an aside, Foveon is the company with the neat stacked sensor.  I
haven't heard much about them recently, but they would be ideal for
increasing the resolution by stepping the image sensor.  They did
release their first sensor, which is available in a Sigma camera.  You
can get a sensor evaluation kit as well: http://www.foveon.com/ .

-Adam

2006\01\12@162658 by Daniel Serpell

Hi!

On 1/12/06, M. Adam Davis <stienman@gmail.com> wrote:
>
> How the camera captures the image is relevant, but your explanation
> does not prove that one could not obtain sub-pixel resolution from
> multiple shots of the same subject.
>
> I don't see a reason why it's not possible.

Well, the problem really is much more complex, and related to filtering.

First, an example.  You take a photo of a pattern of white vertical
lines on a black background, where each pixel covers exactly one line
plus the background around it.  Every pixel then reads the same value
(half the brightness of a line).  If you shift the camera to the side,
you still get one line per pixel (a line enters from the left as
another leaves at the right), so you always obtain the same image.
How can you reconstruct the original pattern?
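
Here's a small numpy sketch of that effect, with made-up sizes: each
pixel integrates (box-filters) the scene over its full width, and
every sub-pixel shift produces the identical samples.

import numpy as np

fine = 100                        # sub-samples per pixel (assumed)
pixels = 8
period = fine                     # line period = one pixel pitch

x = np.arange(pixels * fine)
for shift in (0, fine // 3, fine // 2):
    scene = ((x + shift) % period) < (period // 2)      # 50% duty-cycle lines
    samples = scene.reshape(pixels, fine).mean(axis=1)  # integrate each pixel
    print(shift, samples)         # prints 0.5 everywhere, for every shift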

In detail, you can describe the process like this (a numeric sketch
follows the list):

* The original image ("infinite resolution") goes through the lens system.
 The lens actually low-pass filters the image, convolving it with the
 diffraction spot of the lens.
* Then the image is sampled by the sensor, using rectangular pixels.
 This applies another filter to the image (convolution with a box filter),
 and then aliases the remaining high frequencies into the lower bands.
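
A quick 1-D numpy sketch of those two stages.  The Gaussian here is
only a stand-in assumption for the lens's diffraction spot (a real one
is an Airy pattern), and all the sizes are made up:

import numpy as np

fine = 50                                  # sub-samples per pixel
rng = np.random.default_rng(0)
scene = rng.random(64 * fine)              # stand-in "infinite resolution" scene

# Stage 1: lens low-pass -- convolve with the (assumed) diffraction spot
t = np.arange(-3 * fine, 3 * fine + 1)
psf = np.exp(-0.5 * (t / (0.5 * fine)) ** 2)
blurred = np.convolve(scene, psf / psf.sum(), mode="same")

# Stage 2: box-filter over each pixel width and sample once per pixel
# (the reshape + mean does both at once); anything left above 0.5
# cycles/pixel aliases down into the lower bands
samples = blurred.reshape(64, fine).mean(axis=1)
print(samples[:8])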

Now, the aliasing effect can be reduced by taking new images (of the
*same* scene) with the sensor at another (fractional-pixel) position.

And then you can equalize the new image using an optimal inverse
filter.

The problem is, there are *some* frequencies that are highly
attenuated by the filters (some are even zeroed), so you cannot
restore the original information.
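
To make that concrete: the box filter applied by a pixel of width w
has frequency response |sinc(f*w)|, which is exactly zero at whole
multiples of 1/w.  Those frequencies are destroyed outright, not just
attenuated, so no inverse filter can bring them back.  A small numpy
check (pixel width normalized to 1):

import numpy as np

# numpy's sinc(x) is sin(pi*x)/(pi*x), so np.sinc(f*w) is the pixel's
# frequency response; it nulls exactly at f = 1/w, 2/w, ...
w = 1.0                                     # pixel width, normalized
for f in (0.25, 0.5, 0.75, 1.0, 1.5, 2.0):  # spatial frequency, cycles/pixel
    print(f"f = {f:4.2f} cycles/pixel -> |H(f)| = {abs(np.sinc(f * w)):.3f}")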

To get around this, you can move the camera along its axis, toward or
away from the subject, so that the sampling frequency changes.  But
the reconstruction process can be *very* difficult, and this only
works if the subject is very far away, so that moving the camera
doesn't change the scene being imaged.

   Daniel.

2006\01\12@165426 by M. Adam Davis

Ok, so what you're essentially saying is that the image sensor is
acting as a low-pass filter in this case - there is a limit to its
resolving power, especially in the particular case you've shown.

However, the practical aspects of the image sensor actually make this
an easier job - since there's unsensed space between adjacent pixels,
even in your vertical-line example one could obtain some information
about the lines.  They would likely show up as high-frequency moire
patterns, I'm guessing.
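
Here's the earlier line-pattern sketch again, now assuming a pixel
that senses only the central 50% of its pitch (a made-up fill factor).
The effective box filter is narrower than the line period, so shifted
exposures no longer all read the same value and the pattern becomes
recoverable:

import numpy as np

fine = 100
pixels = 8
period = fine                                 # lines at one pixel pitch
active = slice(fine // 4, 3 * fine // 4)      # sensed half of each pixel

x = np.arange(pixels * fine)
for shift in (0, fine // 4, fine // 2):
    scene = ((x + shift) % period) < (period // 2)
    samples = scene.reshape(pixels, fine)[:, active].mean(axis=1)
    print(shift, samples)                     # values now differ with shift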

Further, which pixel size are you talking about?  In the case of
white/black it doesn't make too much of a difference, but in the case
of red/green, whether you pick the size of a sub-pixel (one
color-filtered pixel) or a "whole" pixel (two green, one red, one
blue), you'll certainly be able to resolve much more information than
a simple solid color.

-Adam



On 1/12/06, Daniel Serpell <daniel.serpell@gmail.com> wrote:
{Quote hidden}


2006\01\13@162732 by Gus Salavatore Calabrese

So given the comments about filtering issues, would it perhaps be
better to use a monochrome sensor and pass three filters (RGB) in
front of it?

I will not be capturing any objects in motion.

Thanks
