'Réf. : Re: OverClocking a PIC'
Date: Tue, 6 Jul 1999 22:33:57 +1000
From: "Paul B. Webster VK2BZC" <paulb@MIDCOAST.COM.AU>
Subject: Re: OverClocking a PIC
Content-Type: text/plain; charset=us-ascii
Paul B. Webster wrote:
On a die the main chip (say a PIC) is not alone: there are extra test and
spare components to allow control of the process, and extra test points to
make measurements at the wafer level. OTPs are certainly tested using spare
memory cells, accessible either at wafer level or using special (hidden)
[OPINIONS ARE MY OWN & DO NOT NECESSARILY REFLECT BULL POLICIES]
|At 07:47 7/07/99 +0200, you wrote:
Why would you do this, as it only tests that cell and not all the others?
Unless the chip has spare cells on each row and column which cannot be
accessed unless the chip enters a test mode. Noted anything on test modes
in the PIC??? But the main question is: how do you test the cells for
access speed? One approach is to let the processor cycle through all
addresses and monitor the PC for rollover; this performs several tests in
one go, including speed and contents of the OTP. (This can be done by
setting the condition codes and then checking after completion, etc., but
then the question is how to read them! Note also that die testing is only
to port or pin access points and not to within the substrate as the above
indicates, unless such a point is a non-bonded pin.)
There is no reason why the cells could not be programmed and then
reprogrammed; however, the number of test vectors required would be quite
large, as you have to test for stuck-at-1 and stuck-at-0 (two sets of test
vectors for each cell), and then you may also wish to test for
stuck-together (coupling) faults. But often in fault modeling this is not a
valid method, so only the stuck-at-0 and stuck-at-1 tests would be used,
with the added function to perform a bulk erase. This form of functionality
test does take time. Paul thinks that this is not so! Unless the fault
model can have the fault list reduced by removing equivalent and dominant
faults for the EEPROM, Microchip are using a form of risk management in not
testing the EEPROM. I would find this situation highly unlikely, as one
chip failure in 100,000 would severely disrupt a six-sigma process (six
sigma corresponds to roughly 3.4 defects per million, while 1 in 100,000
is 10 per million).
Design for testability is a subject that most of us have little knowledge
about, especially at the IC level, and your FAE will most probably know
just as much as you. However, if you need more information (as a real need
rather than curiosity), then mail me off-list and I will attempt to put you
in contact with people who can tell you more in the States.
I am sure that Microchip's engineers have performed full fault modeling and
coverage analysis of the PIC, including fault grading. If they had not,
many of us would be complaining about chips that don't work.