PICList Thread
'[EE]:improving camera resolution AGSC'
2006\01\11@125349 by Gus Salavatore Calabrese

Having begun the prototyping of a SAN (Swiss Army Knife)
optical workstation, some questions have come up.

My intention is to have a portable platform which has programmable
lighting (IR through UV) and a camera that will take wide-angle,
telecentric and macro shots (under computer control).

I am guessing that some telecentric characteristics can be faked by
moving the camera (or object) around and stitching pixels together so
that vertical surfaces do not block the observation of details next to
them.  Or maybe a telecentric lens system is a better approach?  Cost
and weight are issues here.

http://www.lhup.edu/~dsimanek/3d/telecent.htm

Another issue I was pondering was whether the resolution of the camera
could be improved by taking a shot, moving it a fraction of a pixel,
taking a shot, moving it a fraction of a pixel, and so on.  Would it be
possible to compute sub-pixels by doing this?  Or would it be better to
zoom in, take a shot of a small portion of the object, move to the next
section, etc., and then stitch the sectors together?  A telecentric
lens might help with this.  Can a telecentric lens be zoomed?

Thanks

AGSC
Augustus Gustavius Salvatore Calabrese 720.222.1309    AGSC
http://www.omegadogs.com   Denver, CO

2006\01\11@133945 by William Chops Westfield

On Jan 11, 2006, at 9:55 AM, Gus Salavatore Calabrese wrote:

> Another issue I was pondering was whether the resolution of the
> camera could be improved by taking a shot, moving it a fraction
> of a pixel, taking a shot, moving it a fraction of a pixel,
> Would it be possible to compute sub-pixels by doing this ?

Yes.  NASA used some technology like this to get 'super resolution'
pictures from the Mars rovers.  I gather that it's pretty complex
stuff to do; not just inserting pixels from one image to another...
I don't know much about the actual process or its status in "the
real world."

> Or would it be better to zoom in, take a shot of a small portion
> of the object, move to the next section, etc.   and then stitch
> the sectors together ?

This is much easier; anyone can do it.
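
If the stage reports where each shot was taken, the stitch really is
just pasting tiles into a big canvas.  A minimal Python/NumPy sketch,
assuming whole-pixel offsets are known and ignoring overlap blending:

import numpy as np

def stitch(tiles, offsets, canvas_shape):
    # tiles: list of 2-D arrays; offsets: (row, col) of each tile's
    # top-left corner; canvas_shape must be big enough to hold them all.
    canvas = np.zeros(canvas_shape)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] = tile   # a real stitcher would blend the overlaps
    return canvas

The hard part in practice is finding the offsets when the stage isn't
accurate enough, which turns it back into an image-registration problem.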

BillW

2006\01\11@145527 by Stephen R Phillips



--- Gus Salavatore Calabrese <gsc@omegadogs.com> wrote:

> Having begun the prototyping of a SAN (Swiss Army Knife)
> optical workstation, some questions have come up.
>
> My intention is to have a portable platform which has programmable
> lighting (IR through UV) and a camera that will take wide-angle,
> telecentric and macro shots (under computer control).
>
> I am guessing that some telecentric characteristics can be faked by
> moving the camera (or object) around and stitching pixels together so
> that vertical surfaces do not block the observation of details next to
> them.  Or maybe a telecentric lens system is a better approach?  Cost
> and weight are issues here.
>
Why not use a digital camera pointing at a conical mirror, then unwrap
the information focused on the image plane into a 360-degree view?
This requires 1) no movement of the camera, save perhaps focus, and
2) few optical elements, with no need for achromatic adjustment since
there are no prisms involved (and thus no chromatic separation issues).
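
The "unwrapping" is just a polar-to-rectangular remap.  A rough
Python/NumPy sketch, assuming the mirror shows up as a ring centred at
(cx, cy) spanning radii r_in to r_out (numbers you'd get from a quick
calibration shot):

import numpy as np

def unwrap_ring(img, cx, cy, r_in, r_out, out_w=1024):
    # Map each (angle, radius) sample in the ring to a (column, row) of
    # the panorama, using nearest-neighbour sampling for simplicity.
    out_h = int(r_out - r_in)
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)   # panorama column -> angle
    radius = np.linspace(r_in, r_out, out_h)                   # panorama row -> radius
    xs = cx + radius[:, None] * np.cos(theta)[None, :]
    ys = cy + radius[:, None] * np.sin(theta)[None, :]
    xi = np.clip(np.round(xs).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.round(ys).astype(int), 0, img.shape[0] - 1)
    return img[yi, xi]                                         # out_h x out_w panorama strip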

> http://www.lhup.edu/~dsimanek/3d/telecent.htm
>
> Another issue I was pondering was whether the resolution of the  
> camera could be improved by taking a shot, moving it a fraction of a
> pixel, taking a shot, moving it a fraction of a pixel, .......  
> Would it be possible to compute sub-pixels by doing this ?

No.  OK, a little bit of information about sensors here.  Your typical
camera image element is a grid of monochromatic sensors with pass
filters above them.  They are arranged in what is termed a Bayer
pattern.  Something like

R G
G B

The reality is your typical digital camera's resolution SAYS 3
megapixels, for example, but it most certainly is NOT 3 megapixels.
It's a bit of deception.  They estimate the color at the other pixel
locations by converting the pattern through a filter into RGB pixels.
However, to be blunt and to the point, the RGB values are a guess at
best.  A more expensive but accurate system involves precise lenses,
dichroic mirrors and 3 image sensors.  A company was developing a
sensor that was true RGB; however, I've not seen it hit the market.
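
You can see how much gets interpolated with a naive bilinear demosaic of
an RGGB mosaic.  A Python/NumPy/SciPy sketch; real cameras use smarter
edge-aware filters, these are just the textbook bilinear kernels:

import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    # raw: 2-D mosaic with R at (0,0), G at (0,1) and (1,0), B at (1,1).
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # bilinear kernel for R and B
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # bilinear kernel for G
    def interp(mask, kernel):
        # normalised convolution: average the true samples under the kernel
        return convolve(raw * mask, kernel) / np.maximum(convolve(mask, kernel), 1e-6)
    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])

Two thirds of the numbers in the output never came off the sensor; they
are neighbourhood averages.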

> Or would it be better to zoom in, take a shot of a small portion of
> the object, move to the next section, etc., and then stitch the
> sectors together?  A telecentric lens might help with this.  Can a
> telecentric lens be zoomed?

AHEM, did you read the page you gave as a reference?  It answers the
latter question for you. :D

So here is a good question: what are you trying to do? :)

Stephen R. Phillips was here
Please be advised what was said may be absolutely wrong, and hereby this disclaimer follows.  I reserve the right to be wrong and admit it in front of the entire world.


2006\01\12@195644 by James Newtons Massmind

> Another issue I was pondering was whether the resolution of
> the camera could be improved by taking a shot, moving it a
> fraction of a pixel, taking a shot, moving it a
> fraction of a pixel, .......   Would it be possible to compute sub-
> pixels by doing this ?

A guy named Steve Mann at the University of Toronto did that for an
eyeglass-mounted low-res camera.  He was able not only to vastly increase
the resolution of the photo but also to compute the amount of movement
from the overall change in pixel levels, so that no mechanical position
feedback was necessary.  As he swept his head from side to side, the
system would provide head tracking from the overall change in the pixels
and then go back later and compute sub-pixels from the frames to build a
high-resolution image.
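
The registration step can be done from the images alone, e.g. with phase
correlation; this is the general technique, not necessarily Mann's exact
method.  A whole-pixel Python/NumPy sketch (sub-pixel accuracy would come
from interpolating around the correlation peak):

import numpy as np

def estimate_shift(a, b):
    # Returns (dy, dx) such that b is (approximately) a shifted down by
    # dy rows and right by dx columns.  a, b: same-shape 2-D float arrays.
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real   # phase-correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:     # unwrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx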

Googling:
http://en.wikipedia.org/wiki/Steve_Mann
http://eyetap.org/publications/index.html (appears to be down)
http://wearcam.org
http://wearcam.org/tip.ps.gz I think this is the one you want.
http://wearcam.org/wyckoff/index.html or this one??
http://hi.eecg.toronto.edu/orbits/orbits.html Ahhh, no, here it is!
This is the one: Orbits.

P.S. Steve Mann is a true hero of mine. Brilliant, unconstrained by
convention, one step ahead on the evolutionary ladder.

---
James Newton: PICList webmaster/Admin
jamesnewton@piclist.com  1-619-652-0593 phone
http://www.piclist.com/member/JMN-EFP-786
PIC/PICList FAQ: http://www.piclist.com

