[EE]: improving camera resolution AGSC
M. Adam Davis
On 1/11/06, Stephen R Phillips <cyberman_phillips@yahoo.com> wrote:
How the camera captures the image is relevant, but your explanation
does not prove that one could not obtain sub-pixel resolution from
multiple shots of the same subject.
I don't see a reason why it's not possible. In fact, some of the
algorithms used to convert a Bayer-pattern image to a "regular" image are
applicable to the problem. NASA uses these techniques to produce
high-resolution images of Mars.
Imagine a one-pixel, monochromatic camera. You've taken four images
of one subject, offsetting each image from the others by 1/2 of the
pixel size (horizontally, vertically, or both). If you overlay the four
images relative to the area of the picture, you end up with a 3x3
table, and each image covers four cells in the table. The central cell
has been imaged 4 times, but each image contains only 1/4 of the
information required for the center cell. The four corners were each
imaged only once, and the remaining four cells were imaged twice.
Using linear algebra, it's possible to obtain a 9-pixel image from
those four single-pixel images. It won't be as good as a true 9-pixel
image, but it will be much better than the one-pixel image.
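The four-shot toy problem can be sketched numerically. This is only an
illustrative sketch of the idea (the NumPy approach and all names here are
mine, not from the thread): model each shot as the average of the four 3x3
cells it covers, which gives 4 equations in 9 unknowns, then take the
minimum-norm least-squares solution via the pseudoinverse.

```python
import numpy as np

# Hypothetical "true" 3x3 scene (the 9 unknown cell values).
scene = np.arange(9, dtype=float).reshape(3, 3)

# Each of the 4 shots is one pixel averaging a 2x2 block of cells;
# (r, c) is the top-left cell of the block covered by that shot.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]

A = np.zeros((4, 9))   # measurement matrix: 4 readings, 9 unknowns
b = np.zeros(4)        # simulated single-pixel readings
for k, (r, c) in enumerate(offsets):
    block = np.zeros((3, 3))
    block[r:r + 2, c:c + 2] = 0.25      # pixel averages its 4 cells
    A[k] = block.ravel()
    b[k] = (scene * block).sum()        # simulated sensor reading

# Underdetermined system (4 equations, 9 unknowns): the pseudoinverse
# gives the minimum-norm estimate consistent with all four shots.
estimate = np.linalg.pinv(A) @ b
print(estimate.reshape(3, 3))
```

As expected from the post, the estimate reproduces all four pixel readings
exactly but is not identical to the true scene, since four measurements
cannot pin down nine unknowns.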
If you want to do the same in a more complex situation (more pixels, a
Bayer pattern, etc.), there are a number of ways to extend the
solution. In any case, yes, it's possible to increase the resolution
of the image by taking multiple pictures of something with the camera.
Also, with a telecine camera it won't matter as much, but in most
cases you want to step the image sensor by one sub-pixel rather than
stepping the lens and image sensor together.
As an aside, Foveon is the company with the neat stacked sensor. I
haven't heard much about them recently, but they would be ideal for
increasing the resolution by stepping the image sensor. They did
release their first sensor, which is available in a Sigma camera. You
can get a sensor evaluation kit as well: http://www.foveon.com/ .
On 1/12/06, M. Adam Davis <stienman@gmail.com> wrote:
> How the camera captures the image is relevant, but your explanation
> does not prove that one could not obtain sub-pixel resolution from
> multiple shots of the same subject.
> I don't see a reason why it's not possible.
Well, the problem is really much more complex, and related to filtering.
First, an example. You take a photo of a pattern of white vertical lines
on a black background, where each pixel covers exactly 1 line plus the
background surrounding it. Every pixel then reads the same value (half
the brightness of a line). If you move the camera to the side, you
still get 1 line per pixel (a line enters from the left as another
leaves at the right), so you always obtain the same image.
How can you reconstruct the original pattern?
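The vertical-line thought experiment is easy to check numerically. A 1-D
sketch (the scene, widths, and sampling function are invented for
illustration): white lines repeat with exactly one line per pixel width,
and each pixel integrates over its full width, so every sub-pixel shift
produces the identical flat image.

```python
import numpy as np

# Scene: white lines of width 0.25 repeating with pitch 1.0
# (exactly one line per pixel width).
def scene(x):
    return (x % 1.0) < 0.25   # True = white line, False = black

def sample(shift, n_pix=8, oversample=1000):
    # Approximate each pixel's reading by averaging the scene over
    # the pixel's full width (a box filter of width 1.0).
    xs = np.linspace(0, n_pix, n_pix * oversample, endpoint=False) + shift
    vals = scene(xs).reshape(n_pix, oversample)
    return vals.mean(axis=1)

for s in (0.0, 0.1, 0.37):
    print(sample(s))   # the same flat image for every shift
```

Because the line pitch equals the pixel pitch, every pixel at every shift
averages exactly one full period of the pattern, so the camera cannot
distinguish the shifted exposures at all.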
In detail, you can describe the process like this:
* The original image ("infinite resolution") goes through the lens system.
The lens low-pass filters the image, convolving it with the diffraction
spot of the lens.
* Then the image is sampled by the sensor, using rectangular pixels.
This applies another filter to the image (convolution with a box filter),
and then aliases the remaining high frequencies into the lower bands.
Now, the aliasing effect can be reduced by taking new images (of the
*same* scene) with the sensor at another (fractional-pixel) position.
Then you can equalize the combined image using an optimal inverse
filter. The problem is that there are *some* frequencies that are highly
attenuated by the filters (some are even zeroed), so you cannot restore
them.
To get around this, you can move the camera along the axis perpendicular
to the image plane, so the sampling frequency changes. But the
reconstruction process can be *very* difficult, and this only works if
the scene is very far away, so that moving the camera doesn't change
the scene imaged.
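The "zeroed frequencies" point can be checked against the box filter's
well-known frequency response: a pixel aperture of width 1 has response
sinc(f), which is exactly zero at one cycle per pixel pitch, so that
spatial frequency (the line pattern above) survives no amount of
sub-pixel shifting. A quick check, assuming NumPy's normalized sinc:

```python
import numpy as np

# |H(f)| for a width-1 box filter: H(f) = sin(pi f) / (pi f),
# which is exactly what np.sinc computes.
freqs = np.array([0.25, 0.5, 1.0, 2.0])   # cycles per pixel width
response = np.abs(np.sinc(freqs))
for f, h in zip(freqs, response):
    print(f"{f:4.2f} cycles/pixel -> |H| = {h:.4f}")
```

Low frequencies pass nearly untouched, but integer cycles-per-pixel land
exactly on the sinc zeros, which is why the line pattern in the example is
unrecoverable by sub-pixel shifting alone.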
M. Adam Davis
OK, so what you're essentially saying is that the image sensor is
acting as a low-pass filter in this case: there is a limit to the
resolving power it has, especially in the particular case you've
described.
However, the practical aspects of the image sensor actually make this
an easier job: since there's a space between adjacent pixels that is not
sensed, even in your vertical-line example one could obtain some
information about the lines. They would likely show up as
high-frequency moire patterns, I'm guessing.
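The fill-factor point can also be demonstrated with a sketch (again, the
scene and sampling functions are invented for illustration, not from the
thread): keep the same line pattern, but let each pixel sense only half of
its pitch. Now sub-pixel shifts do change the readings, so the shifted
exposures carry real extra information.

```python
import numpy as np

# Same scene as before: white lines of width 0.25, pitch 1.0.
def scene(x):
    return (x % 1.0) < 0.25

def sample(shift, aperture=0.5, n_pix=8, oversample=1000):
    # Each pixel senses only `aperture` of its 1.0 pitch
    # (50% fill factor), leaving insensitive gaps between pixels.
    out = np.empty(n_pix)
    for i in range(n_pix):
        x0 = i * 1.0 + shift
        xs = x0 + np.linspace(0, aperture, oversample, endpoint=False)
        out[i] = scene(xs).mean()
    return out

print(sample(0.0))   # differs from...
print(sample(0.1))   # ...a sub-pixel-shifted exposure
```

With a 100% fill factor the two exposures were identical; with gaps
between pixels the shifted readings differ, which is exactly the extra
information the post argues could be exploited.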
Further, which pixel are you talking about? In the white/black case it
doesn't make too much of a difference, but in the case of red/green
detail, whether you count a single color-filtered sub-pixel or a
"whole" pixel (two green, one red, one blue) as the pixel, you'll
certainly be able to resolve much more information than a simple solid
color.
On 1/12/06, Daniel Serpell <daniel.serpell@gmail.com> wrote:
Gus Salavatore Calabrese
So given the comments about filtering issues,
would it perhaps be better to use a monochromatic
sensor and pass three filters (RGB) in front of it?
I will not be capturing any objects in motion.