'[OT long] overclocking PICS, now becomes testing'
I have been exchanging several e-mails on this topic. The contents have been
interesting from my standpoint, and I think this information should be
shared with all, as it has become obvious (looking at all the e-mails on
this subject) that many people don't have much of an idea of the difference
between fault-identification tests (design for test, DFT) and operational
verification tests (or functionality tests).
I have included some of the discussion that has passed so far.
Testing for speed DP
>> As for the 10MHz bit, hmm, this indicates delay-to-on and delay-to-off
>> tests, normally encapsulated within the stuck-at / functionality tests.
>> As all devices may be checked for functionality at full speed, the
>> ones outside of the normal distribution for 10MHz are labelled as 4MHz.
>I see your point, but sometimes speed has to be considered and the time
>taken to test must be enough to cover all possible fault conditions. Just
>as a side point before you take to creating a response, about 40% of
>manufacturing cost is TEST.
>Also, how do you think that EPROM and OTPROM is tested?
>> Certainly the EEPROM on those devices which use it, is perfectly
>> amenable to full testing.
>> What do you mean by "Full testing?"
>> Just as a side point before you take to creating a response, about 40%
>> of manufacturing cost is TEST
>> Also, how do you think that EPROM and OTPROM is tested?
> Well, there really aren't that many choices. Either you exhaustively
>(well, do a full program anyway) test samples, you program "test"
>locations, or you program the lot and erase the lot.
One other poster indicated that some extra circuitry may be used to provide
testability. This is often BIST control; however, on the PICs the basic
architecture is simple, and the extra cost that such circuitry would impose
may be too great. "Scan" principles may be used, though.
My comments on all of this are:
It *seems* obvious that there is no understanding of the difference between
functionality tests and fault tests (for want of better words). In the IC
world the standard and accepted method to detect faults is the *stuck-at
model*, where you test for faults such as stuck-at-1 and stuck-at-0, not
stuck-together (stuck-together faults may be detected with a
verification/functionality test, but these are not the objective of
design-for-test testing). These test patterns are generated using
algorithms such as PODEM etc. and injected with the use of test pattern
generators and signature testers (comprising linear feedback shift
registers, CRC-like).
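To make the pattern-generator/signature-tester pairing concrete, here is a small sketch: one Galois LFSR generates pseudo-random stimulus, a second CRC-like LFSR compresses the circuit's responses into a signature. The tap mask and the toy "circuit" are my own illustrative choices, not anything Microchip uses.

```python
# Sketch: LFSR test-pattern generator feeding a circuit under test,
# with a second LFSR compressing the responses into a signature.
# Taps 0xB8 (x^8 + x^6 + x^5 + x^4 + 1) and the toy circuit are
# illustrative assumptions only.

def lfsr_patterns(seed=0x01, taps=0xB8, n=255):
    """8-bit Galois LFSR: yields n pseudo-random test patterns."""
    state = seed
    for _ in range(n):
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps

def signature(responses, taps=0xB8):
    """Compress a response stream into one byte, CRC-style."""
    sig = 0
    for r in responses:
        sig ^= r
        lsb = sig & 1
        sig >>= 1
        if lsb:
            sig ^= taps
    return sig

circuit_ok = lambda x: x ^ 0x3C              # toy combinational block
circuit_sa0 = lambda x: (x ^ 0x3C) & ~0x04   # same block, bit 2 stuck at 0

good = signature(circuit_ok(p) for p in lfsr_patterns())
bad = signature(circuit_sa0(p) for p in lfsr_patterns())
print(hex(good), hex(bad))
```

A tester only has to compare the final byte against a known-good signature, instead of checking every response cycle by cycle (the trade-off being a small aliasing probability, roughly 2^-8 for an 8-bit register).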
Simulated stuck-together faults can be injected where the opposite node is
forced to a level. However, this is not very good, in that the designer is
assuming how the die is laid out. Also, the industry standard is that each
fault is modeled as a single stuck-at fault, and with CMOS the high and low
drive characteristics are similar, so the result of a stuck-together fault
is unknown.
Yes, dies are tested: each die on the wafer is tested with a bed of nails
before separation (the dud ones are marked with a small black or red dot).
One thing is correct, in that erasure can be done. Considering that a wafer
is 6" in diameter and can contain several hundred dies, it is possible to
use high-intensity UV to erase (having no window also makes it faster, as
an OTP part is normally the windowed part without the window). Thus it is
possible to erase several hundred dies in very little time. Also, I doubt
that a conveyor belt is used, as this would risk contamination of the dies.
In this case the entire wafer is picked up and placed (maybe by hand or by
machine).
I think that you will find that the OTP section is tested, as Microchip in
the QA section indicate what is not tested but guaranteed by design (this
includes things such as input/output levels, standard temperature range,
etc.). All the clues are there for those who have been involved in IC
design.
> Speaking of patterns and shorted bus-lines, on a die basis, I suspect
>that such a fault not showing up on complementary patterns (55 AA or
>even just 00 FF) is *very* unlikely.
I don't agree.
This assumes that the bus always runs in parallel throughout the die; while
this may be so on a PCB, it certainly is not always the case on a die. It
also assumes that the IC designer knows how the die is going to be laid
out.
Many people use this sort of test thinking that it will detect faults.
Well, maybe it does, but what sort of coverage does it provide? To gain
insight into this, one has to generate a full fault list for the target and
then test to see if the patterns injected will make each fault observable.
In most cases the 0xAA, 0x55, 0x00 and 0xFF patterns work, but they are not
fully portable to all targets (PCB or die).
To prove this for yourself, get a PIC and use an 8-bit port to simulate a
bus. Run the 0x55, 0xAA, 0x00, 0xFF test as output functions and monitor
the pins. Do you get the correct result, i.e. do you see what you expect?
Of course you do. Now short bit 0 to ground (simulating stuck-at-0); do the
test patterns find this? Yes.
Now short any other port pin to VCC; do the test patterns find this also?
Yes.
Now short bit 0 to bit 2; do the test patterns find this? NO NO NO NO NO NO.
i.e. the fault is testable, but the test patterns don't make the fault
observable.
Consider all odd bits running in parallel and all even bits running in
parallel on the same bus. It is possible for a short to exist between two
bits on the odd or even side, and the 0x55, 0xAA, 0xFF, 0x00 test WILL NOT
FIND IT, i.e. it is *very* likely that it will not be found!
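The port experiment above can be simulated in a few lines. This is a sketch under an assumed fault model: the short between two pins is treated as a wired-AND of the two drivers (as noted earlier, a real CMOS short is less predictable than this).

```python
# Simulate the 8-bit port experiment: inject stuck-at and bridge faults
# and see which pattern sets expose them. The wired-AND bridge model is
# an assumption for illustration.

def read_port(written, fault=None):
    """Value seen on the pins after writing `written` with a fault present.
    fault: None, ('sa0', bit), ('sa1', bit) or ('bridge', bit_a, bit_b)."""
    v = written
    if fault is None:
        return v
    if fault[0] == 'sa0':                       # pin shorted to ground
        return v & ~(1 << fault[1]) & 0xFF
    if fault[0] == 'sa1':                       # pin shorted to VCC
        return v | (1 << fault[1])
    a, b = fault[1], fault[2]                   # two pins shorted together
    wired = ((v >> a) & 1) & ((v >> b) & 1)     # wired-AND of both drivers
    v &= 0xFF & ~((1 << a) | (1 << b))
    return v | (wired << a) | (wired << b)

def detects(patterns, fault):
    """True if any pattern reads back wrong, making the fault observable."""
    return any(read_port(p, fault) != p for p in patterns)

comp = [0x55, 0xAA, 0x00, 0xFF]                 # the complementary set
walk = [1 << i for i in range(8)]               # walking-one patterns

print(detects(comp, ('sa0', 0)))                # True
print(detects(comp, ('sa1', 3)))                # True
print(detects(comp, ('bridge', 0, 2)))          # False: bits 0 and 2 are
                                                # never driven to opposite
                                                # levels by these patterns
print(detects(walk, ('bridge', 0, 2)))          # True: 0x01 reads as 0x00
```

The complementary patterns drive bits 0 and 2 to the same level every time, so the bridge never produces a wrong reading; a walking-one set drives them apart and exposes it immediately.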
Consider an 8-to-1 multiplexer, and generate the minimum test patterns to
check it for fault conditions.
8 inputs, 1 output and 3 select lines. Hint: 0x55, 0xAA, 0x00 and 0xFF will
take too long to verify fault conditions, and the result depends on the
structure of the MUX. The 0x55/0xAA patterns may be used for functionality
tests (checking for correct operation as intended), but again these take
too long to perform.
If I recall, the number of tests required for functionality is 16, and only
11 for faults (the fault list depends on the construction of the gates
used, applying fault equivalence and fault dominance reduction methods;
again, this is up to the people at Microchip).
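As a rough way to explore the multiplexer exercise, here is a sketch that enumerates single stuck-at faults on a *behavioural* 8:1 mux (pin faults only, no gate-level fault list and no equivalence/dominance collapsing, so the counts will differ from the gate-level figures quoted above) and greedily picks a covering test set.

```python
# Greedy test-set generation for single stuck-at pin faults on an
# 8:1 mux. Behavioural model only - an assumption, since the real
# fault list depends on the gate structure.
from itertools import product

def mux8(d, s, fault=None):
    """8:1 mux. fault = (site, value) forces one pin:
    'd0'..'d7' are data inputs, 's0'..'s2' are select lines."""
    d, s = list(d), list(s)
    if fault:
        site, val = fault
        (d if site[0] == 'd' else s)[int(site[1])] = val
    return d[s[0] | (s[1] << 1) | (s[2] << 2)]

faults = [(f"{k}{i}", v) for k, n in (('d', 8), ('s', 3))
          for i in range(n) for v in (0, 1)]        # 22 single pin faults

vectors = list(product(product((0, 1), repeat=8),   # all data patterns
                       product((0, 1), repeat=3)))  # all select patterns

def detected_by(f):
    """Indices of vectors whose output differs when fault f is present."""
    return {i for i, (d, s) in enumerate(vectors)
            if mux8(d, s, f) != mux8(d, s)}

# Greedy set cover: keep choosing the vector that exposes the most
# still-undetected faults.
remaining = {f: detected_by(f) for f in faults}
chosen = []
while remaining:
    best = max(range(len(vectors)),
               key=lambda i: sum(i in det for det in remaining.values()))
    chosen.append(best)
    remaining = {f: det for f, det in remaining.items() if best not in det}

print(len(chosen))  # 16 under this pin-fault model: each data pin's two
                    # stuck faults each need a vector that selects that pin
```

Under this model the floor is 16 vectors, because a vector can only observe the one data input it selects; a gate-level fault list with equivalence and dominance reduction gives different (and typically smaller) numbers, which is exactly why the answer "is up to the people at Microchip".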
The basis of all this has come from the question: how is the OTP section
tested? Without knowing the internal structure I cannot answer this
directly, only offer suggestions as to how it may be done. But I don't
agree with the opinion that it is not tested, as one cell in error on
100,000 chips will blow away any six-sigma process. I have suggested that
the NOP instruction can be used for this, plus verification of the CCR
bits. This requires either that software is loaded and run, and/or that
access to the internal data bus is used. This may be done with a simple
"scan" process, and then Microchip only require 3 non-bonded test points
(see also IEEE 1149.2 for possible hints!)
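As a sketch of what such a minimal scan path could look like (the 4-bit chain, the toy core logic and the three implied pads - clock, scan-in, scan-out - are all invented for illustration; this is not Microchip's structure):

```python
# Minimal serial scan chain model: shift a stimulus in, capture the
# core logic's response in one clock, then shift the response out.
# Chain length and core_logic are illustrative assumptions.

def core_logic(bits):
    """Toy combinational block: rotate left, then invert bit 0."""
    rotated = bits[1:] + bits[:1]
    return [rotated[0] ^ 1] + rotated[1:]

def scan_cycle(chain, stimulus):
    """Shift `stimulus` in bit by bit (old contents exit at scan-out),
    then capture the combinational outputs into the flops."""
    out = []
    for bit in stimulus:
        out.append(chain[-1])           # bit leaving on the scan-out pad
        chain = [bit] + chain[:-1]      # bit entering on the scan-in pad
    chain = core_logic(chain)           # capture phase: one system clock
    return chain, out

chain = [0, 0, 0, 0]
chain, _ = scan_cycle(chain, [1, 0, 1, 1])      # load a vector, capture
_, response = scan_cycle(chain, [0, 0, 0, 0])   # shift the response out
print(response)                                 # -> [1, 1, 0, 0]
```

The point is that two serial pads plus a clock give the tester full controllability and observability of the internal state, without bonding out the data bus itself.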
Have fun with this one