PICList Thread
'[OT] CCD Cameras'
2000\02\03@181108 by Sean Breheny

Hi all,

As a part of the Autonomous Helicopter project that I am working on here at
Cornell, we need to select a vision system consisting of three synchronized
cameras. We are having difficulty selecting cameras, however, because we
don't fully understand how CCD cameras work, and we aren't getting
knowledgeable answers from the distributors (to whom the companies refer
you when you try to ask technical questions!)

So, if anyone on the list here could answer any of the following questions,
I would be very grateful:

#1) How does the electronic shutter on a CCD camera work?
#2) Are CCD cameras integrating? So, in other words, is the exposure truly
the amount of light received integrated over the shutter open time?
#3) When CCD cameras send out their data, is the data read out WHILE the
shutter is open, or is the picture "snapped" and then the data read out?
#4) A corollary to #3 - When a CCD camera gives interlaced output, is the
shutter only open once per frame, or once per field? In other words, can
the interlacing cause motion blur problems?

Thanks very much,

Sean

|
| Sean Breheny
| Amateur Radio Callsign: KA3YXM
| Electrical Engineering Student
\--------------=----------------
Save lives, please look at http://www.all.org
Personal page: http://www.people.cornell.edu/pages/shb7
shb7@cornell.edu ICQ #: 3329174

2000\02\03@194018 by Thomas McGahee

Sean,
It has been a while since I worked hands-on with a CCD camera chip,
and some of this may have changed since the early days of CCD, but
I will put it forward for what it is worth.

>#1) How does the electronic shutter on a CCD camera work?

In the ones I worked with we emulated shutter speed by changing how
OFTEN we read the beast. When you read it you "reset" the individual
cell levels. There are two modes for doing this. One is a burst mode
where you read out one frame at normal speed and then twiddle your
thumbs for a while before initiating the next frame scan. This has
the advantage of NOT requiring a frame buffer, but you can't read
faster than the regular speed. The second mode is where you
change the reading speed by changing the clock rate and then
read the ongoing CONTINUOUS data stream into a frame buffer
where the OUTPUT speed can be independently synchronized to the
world of TVs. That removes some of the upper limit on the "shutter
speed", because you can read FASTER than the TV could take the data.
When reading FASTER than normal some of the data is thrown away.
The system might keep frame #1 and throw away frames #2 and #3.
That gives you frame coherence. A stranger way to do things is
to build up a frame buffer with pieces of many different frames.
While this sounds somewhat bizarre, it does in fact have some
applications, such as detecting moving objects.
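The two "emulated shutter" readout modes described above can be sketched in a few lines of Python. This is a toy model for illustration only: FRAME_PERIOD, the frame list, and both function names are made-up stand-ins, not any real camera API.

```python
# Toy model of the two "emulated shutter" modes described above.
# FRAME_PERIOD and the frame list are made-up stand-ins, not a real camera API.

FRAME_PERIOD = 1 / 30.0  # normal TV-rate frame time, in seconds

def burst_mode_idle(shutter_time):
    """Burst mode: read one frame at normal speed, then twiddle thumbs.
    Effective exposure is the time since the previous read, so it can
    only be longer than FRAME_PERIOD, never shorter."""
    return max(shutter_time - FRAME_PERIOD, 0.0)

def fast_clock_mode(frames, keep_every):
    """Fast-clock mode: read continuously at a higher clock rate into a
    frame buffer and keep only every Nth frame (keep frame #1, throw
    away frames #2 and #3, and so on) for frame coherence."""
    return frames[::keep_every]

raw = list(range(9))                 # stand-in for nine raw CCD reads
print(fast_clock_mode(raw, 3))       # -> [0, 3, 6]
print(burst_mode_idle(0.1))          # idle time before the next scan
```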

>#2) Are CCD cameras integrating? So, in other words, is the exposure truly
>the amount of light received integrated over the shutter open time?

Yep, and that's why that emulated shutter speed method worked in the
first place. Just for jollies I would sometimes integrate the light
exposure over a LONG time (seconds to minutes). You got a LOT of
extraneous noise in the picture doing this, but you could see in
very low light levels this way. Some astronomers cooled their sensors
to reduce electrical noise and were able to enhance their ability to
locate dim stars and the like. Good for getting the telescope pointed
just right before taking a time exposure photo.

>#3) When CCD cameras send out their data, is the data read out WHILE the
>shutter is open, or is the picture "snapped" and then the data read out?

There is no physical shutter. The shutter speed is really the time
between successive readings. You can read real fast and just throw away
every other frame you read. I don't know for absolute certainty if
the shutter speed of the newer CCDs is implemented the same way.

Some units have lenses with irises to adjust the amount of light
coming in and adjust depth of field, but I have never seen
a physical shutter used except in certain time exposure experiments
where the external shutter was used to block out light before
and after a timed exposure. But the CCD is normally run in a continuous
mode where the data from the cells is constantly being read out.

>#4) A corollary to #3 - When a CCD camera gives interlaced output, is the
>shutter only open once per frame, or once per field? In other words, can
>the interlacing cause motion blur problems?

No real shutter. You eliminate blur by reading the data out faster. This
then requires a brighter light source.

One method used is frame buffering. You have the data from the CCD being
read out at any rate you want. Every so often you capture a whole frame
in your RAM buffer and THIS is read out at a constant controlled rate
so that the output image is synchronized to the TV electronics.
At high scan rates the "extra" frames are just dumped, ignored, passed
on to the old infinite bit bucket known as null.

At very LONG shutter times the data is SLOWLY read into the RAM buffer
which is almost always of a dual-port design. The stuff is being put in
at *one* rate and scanned *out* at another, constant rate that is
synchronized to the TV monitor.
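The dual-rate buffering described above (write at the capture rate, read out at a constant display rate) can be sketched as a simple simulation. The rates and frame labels here are made up for illustration; a real dual-port RAM does this in hardware, not in a Python loop.

```python
# Toy simulation of a dual-ported frame buffer: the camera side writes
# whole frames at one rate; the TV side reads at a fixed rate and simply
# takes the most recently completed frame. "Extra" frames are overwritten
# (dumped), as described above.

class FrameBuffer:
    def __init__(self):
        self.latest = None
    def write(self, frame):        # camera side, any rate
        self.latest = frame        # a newer frame replaces the old one
    def read(self):                # TV side, constant rate
        return self.latest

buf = FrameBuffer()
capture_rate, display_rate = 120, 30       # frames per second (made up)
for t in range(capture_rate):              # one second of capture
    buf.write(f"frame-{t}")
    if t % (capture_rate // display_rate) == 0:
        shown = buf.read()                 # only every 4th frame is shown
print(shown)  # -> frame-116
```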

I hope this info is of some use and not too outdated.

One technique that I developed for detecting movement in a room where
there was not supposed to be anyone was to read a scan into a RAM chip
and then read a scan about a second later into a second RAM chip.
XOR the contents bit by bit and use the output as the data stream
for the TV monitor. Any part of the image that was identical in both
RAMs showed up on the monitor as black. Anything that was different
showed up white. An object at rest was invisible, except for some
random "snow". Anything moving stood out like a sore thumb. The
white shimmering image looked cool. By counting the number of white
pixels in an output data frame you could electronically detect
moving objects extremely well. The method relied on the fact that
room lighting was constant. Great for monitoring rooms in a museum
and the like. A further enhancement was to use the XOR output
data stream to gate the original data streams now coming in. The result
was that the white XOR mask now revealed the underlying image as
a standard TV image so you could more easily identify what or who was
moving. (I have left out some details such as how I converted the
continuous tone data to 1 bit digital form using a high speed
comparator. That set the threshold for what was considered black
or white in the two RAM banks.)
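A minimal sketch of the XOR motion-detection trick, assuming the comparator has already thresholded each pixel to 1 bit. The frame data and the tiny 8-pixel "frames" are made up for illustration; the hardware version did this bitwise in logic, of course, not in Python.

```python
# Sketch of the XOR motion detector described above: two 1-bit frames
# captured about a second apart; identical pixels come out black (0),
# changed pixels come out white (1).

def xor_frames(frame_a, frame_b):
    """Bitwise XOR of two 1-bit frames, pixel by pixel."""
    return [a ^ b for a, b in zip(frame_a, frame_b)]

def motion_score(mask):
    """Count white pixels; a high count means something moved."""
    return sum(mask)

frame1 = [0, 0, 1, 1, 0, 1, 0, 0]   # scene at time t (made-up data)
frame2 = [0, 0, 1, 0, 1, 1, 0, 0]   # scene at t + 1 s: one object shifted
mask = xor_frames(frame1, frame2)
print(mask)                # -> [0, 0, 0, 1, 1, 0, 0, 0]
print(motion_score(mask))  # -> 2

# The enhancement mentioned above: use the XOR mask to gate the live
# stream, revealing the underlying image only where motion occurred.
revealed = [pix if m else 0 for pix, m in zip(frame2, mask)]
```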

I really enjoyed hacking around in the '70s.

Fr. Tom McGahee

{Original Message removed}

2000\02\03@202624 by Stuart

I can send you a PDF file on CCD's if you like?
Regards
Stuart
-----Original Message-----
From: Sean Breheny <shb7@CORNELL.EDU>
To: PICLIST@MITVMA.MIT.EDU
Date: Friday, February 04, 2000 10:07 AM
Subject: [OT] CCD Cameras


{Quote hidden}

2000\02\03@214421 by Sean Breheny

Hi Stuart,

I would appreciate that, thanks. Please send it to

shb7@cornell.edu

Sean

At 12:30 PM 2/4/00 +1100, you wrote:
>I can send you a PDF file on CCD's if you like?
>Regards
>Stuart

|
| Sean Breheny
| Amateur Radio Callsign: KA3YXM
| Electrical Engineering Student
\--------------=----------------
Save lives, please look at http://www.all.org
Personal page: http://www.people.cornell.edu/pages/shb7
shb7@cornell.edu ICQ #: 3329174

2000\02\03@214429 by Sean Breheny

Hi Fr. Tom,

Wow! What a great response. I do have something additional to ask, though:

At 07:46 PM 2/3/00 -0500, you wrote:
{Quote hidden}

Well, we are looking at high-end CCD cameras with a frame grabber board.
They (the datasheets for the frame grabber boards) talk about two types of
camera: progressive scan and interlaced. Both will do very high shutter
speeds. From what I have been told, it sounds like progressive scan means
that the entire data for one frame is read out at once. Interlaced is just
what you would expect: you get the data in two fields per frame, field
rate = 60 Hz and frame rate = 30 Hz, just like regular NTSC video. However,
I was wondering if the two fields are REALLY read from the CCD at different
times, or both read at once and then sent from a frame buffer? There seems
to be a real price difference in cameras, so it almost seems as though they
are using half-resolution CCDs and doing real interlacing to achieve higher
resolution. What do you think is the case?

This matters for us because we need to determine the position of our
helicopter to within a few mm, and it could be moving at several meters per
second. A few milliseconds (16.6 ms for 60 Hz) between fields, even if each
field only takes 1/5000 of a second to capture, could cause significant
blur when the two fields are merged into one frame. It would seem as though
increasing the shutter speed wouldn't help if the two fields are really
always taken 16.6 ms apart.
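To put rough numbers on that worry: the 16.6 ms field gap comes from NTSC timing, and the helicopter speeds below are assumed values for illustration.

```python
# Worst-case smear between interlaced fields: even with a very fast
# per-field shutter, a 1/60 s gap between field captures moves a fast
# target a long way.

field_gap = 1 / 60.0            # seconds between NTSC fields (~16.7 ms)
for speed in (1.0, 3.0, 5.0):   # helicopter speed in m/s (assumed)
    displacement_mm = speed * field_gap * 1000
    print(f"{speed} m/s -> {displacement_mm:.1f} mm between fields")
# At 5 m/s the target moves ~83 mm between fields, far beyond a
# few-mm position-accuracy requirement.
```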

One last question: A CCD array is similar to a DRAM array in certain
respects, correct? If I understand correctly, a CCD array is an array of
MOS capacitors which are set up in rows which can be read out like a
bucket-brigade device. So, when you read out data from a CCD, is it read
out semi-parallel, like a row at a time but each row comes out in parallel,
or a column at a time, but each column comes out in parallel? This isn't
really important for our application, but I would like to know it to have a
general overall view of what is going on!

Yes, your XORing video scheme sounds like it worked very well.
Unfortunately for us, we need to determine not only what moved, but by how
much, and which color LED is where. It is all being done in software on a
PC, and the code to detect LED motion is already written, and works pretty
well. It's also not my part of the project ;-)


Thanks,

Sean


|
| Sean Breheny
| Amateur Radio Callsign: KA3YXM
| Electrical Engineering Student
\--------------=----------------
Save lives, please look at http://www.all.org
Personal page: http://www.people.cornell.edu/pages/shb7
KILLspamshb7KILLspamspamcornell.edu ICQ #: 3329174

2000\02\04@021857 by Roland Andrag

Sean,

> #1) How does the electronic shutter on a CCD camera work?

I have worked with two types: Either you set the exposure using a
trimpot/similar device on the camera, and trigger the exposure using a
rising/falling edge, or there is a shutter input, where the CCD is being
'exposed' as long as the input is high (or low, depending on the camera);
i.e. you end up giving a 10 ms pulse to take a picture with a 10 ms exposure.

Of course the shutter is just that - electronic, not physical.  It just
inhibits or enables charge building up on the array.

> #2) Are CCD cameras integrating? So, in other words, is the exposure truly
> the amount of light received integrated over the shutter open time?
AFAIK, they are.  So if you get an average intensity of 30 with a 10 ms
exposure, you will get an average intensity of 60 with a 20 ms exposure.
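That linearity rule of thumb can be written down directly. This is an idealized model that ignores noise and clips at an assumed full-well value; the intensity units are arbitrary.

```python
# Idealized integrating sensor: pixel value is proportional to
# (incident intensity) x (exposure time), up to saturation.

def pixel_value(intensity, exposure_ms, full_well=255):
    """Integrated pixel value, clipped at the (assumed) full-well level."""
    return min(intensity * exposure_ms, full_well)

# The example above: an average value of 30 at 10 ms doubles to 60 at 20 ms.
print(pixel_value(3, 10))   # -> 30
print(pixel_value(3, 20))   # -> 60
print(pixel_value(100, 10)) # -> 255 (saturated: linearity breaks down)
```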

> #3) When CCD cameras send out their data, is the data read out WHILE the
> shutter is open, or is the picture "snapped" and then the data read out?
The picture is snapped and then read out in all the cameras I have worked
with.

> #4) A corollary to #3 - When a CCD camera gives interlaced output, is the
> shutter only open once per frame, or once per field? In other words, can
> the interlacing cause motion blur problems?
I don't know. In my application (taking pictures of shockwaves travelling at
~1000 km/h) I snapped a frame, and then read it out only minutes (or several
seconds) later. I was thus capturing lots of frames successively.

Hope that helps
Roland

2000\02\04@104845 by wagner

... let me add my 2 cents...

As far as I understand "interlace", it is not an advantage for
image capture, but for display.
There are two main reasons to use interlace:

First:
------
When the TV CRT produces a complete frame, the idea is that the
whole image should have uniform brightness. A TV CRT has a
(focus-adjustable) dot size, and because of the way the phosphor
glows, the center of the dot is brighter than its surroundings, like a
flashlight spot. For this reason, the center of a raster line is
brighter than its edges.

To keep this darkness from being visible, the image should be
assembled with the raster lines as close together as possible, so the
center of raster line 56 overlaps the darker edge of line 55. The
problem is that line 56 also has a darker edge; even though it is not
as bright as its center, it overlaps the bright part of line 55.
Because this happens so fast (to your eye), line 56's edge actually
replaces part of line 55's image, so the image loses quality.

To eliminate this problem, interlace counts on your eyes. If the
"odd" and "even" lines are overlaid in one single frame scan, your
eyes see the quality loss.  If you keep the same line positions, but
first draw only the "odd" lines completely and then the "even" lines,
the phosphor does not suffer the strong overlap effect, and neither do
your eyes, so your brain "sees" a better-quality image.

Interlace is only useful when the "phosphor image decay" has a certain
relationship in time with the scan time itself... Suppose you
increase your TV scan frequency from 30 fps to 120 fps (repeating the
same whole interlaced frame 4 times): the interlace loses its effect,
since the raster is much faster than the phosphor decay, so the
overlap happens even with interlace.

Second:
-------
Phosphor decay and scan time.

The phosphor decay is one of the major elements of any CRT's quality. A
long decay creates a vivid, bright image, but with blur and a loss of
definition, since overlaps happen all the time. A short decay creates a
sharp, well-focused image, but loses uniform brightness over the entire
image: while the raster is being formed at the bottom of the image, the
top is already decaying and going dark, so flicker (scintillation) is
the result.

Today's TVs and image monitors do not work only in dark family rooms
with all the lights off and all the family members quiet. Today they
are installed outside, with ambient light all around, so they need as
bright an image as possible.  The most economical way to produce a
bright image is to increase the phosphor decay time, but remember,
that costs quality and sharpness.

One way to increase phosphor decay and still eliminate the blur effect
is interlacing. It instantly gains 50% in image quality, since the
vertical raster is split in time, so there is no more overlapping. The
horizontal raster is still a mess, but your eyes notice an image
improvement.

So, I guess, the CCD chip captures all the image elements at once,
closes the electronic shutter, and delivers the image to you in
interlaced mode just because *you want it that way*. There is no
advantage to the CCD chip in doing this; the advantage is to the CRT
and your eyes.

Wagner.

> #4) A corollary to #3 - When a CCD camera gives interlaced output, is the
> shutter only open once per frame, or once per field? In other words, can
> the interlacing cause motion blur problems?

2000\02\06@064806 by aipi Wijnbergen

Hi Sean,

Nice thread about CCD camera theory, but to address the initial
questions you really need to check the particular CCD camera that you
are going to use, or buy a camera that meets your specifications.
Cameras vary from one another in their exposure-time specifications;
some cameras will not allow you to change their exposure settings at all.

I am using cameras from TELI whose exposure can be configured so that:
each field is exposed for 20 ms (CCIR), or each field is exposed for
40 ms with the two fields overlapping by 20 ms. That is, one field is
exposed for 20 ms on its own, then the second field starts its
exposure; after 40 ms the next first field starts its exposure.
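The overlapped-exposure mode described above can be laid out as start/end times. This is a sketch of the timing as I read it, not the TELI camera's actual spec sheet; check the manufacturer's timing diagram for the real numbers.

```python
# Sketch of the overlapping 40 ms field exposures described above:
# a new field starts every 20 ms (CCIR field rate) and each field
# integrates for 40 ms, so adjacent fields overlap by 20 ms.

FIELD_PERIOD_MS = 20   # a new field starts every 20 ms (assumed)
EXPOSURE_MS = 40       # each field integrates for 40 ms (assumed)

def field_windows(n_fields):
    """Return (start, end) exposure windows, in ms, for the first n fields."""
    return [(i * FIELD_PERIOD_MS, i * FIELD_PERIOD_MS + EXPOSURE_MS)
            for i in range(n_fields)]

windows = field_windows(3)
print(windows)  # -> [(0, 40), (20, 60), (40, 80)]
overlap = windows[0][1] - windows[1][0]
print(overlap)  # -> 20  (ms of overlap between adjacent fields)
```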

You can ask the camera manufacturer for a complete timing diagram that
shows the relationship between

an external trigger for exposure
     =>  Exposure time of the fields
                 => Transfer time of each of the fields

Also, you might want to check the type of CCD chip that is installed:
Interline or Frame Transfer.

Hope this helps.

Chaipi

