In any serious troubleshooting situation, it is usually wise to plan what tests are most likely to give you the answer quickly, rather than just charging off in random directions.
Intermittents are the toughest, most frustrating kind of troubleshooting problem.
Bench instruments augment an engineer's senses and open a window of perception onto the circuits he or she is troubleshooting. "Miscellaneous" is an old expression around our lab that means potpourri or catch-all. In this section, I'll throw into the "miscellaneous" category a collection of philosophical items, such as advice about planning your troubleshooting, and practical hints about computers and instruments.
Troubleshooting Intermittent Problems
The car that refuses to malfunction when you take it to the shop, the circuit that refuses to fail when you're looking at it--does it really fail only at 2 AM?--these are the problems that often require the most extreme efforts to solve.
1. Look for correlation of the problem with something. Does it correlate with the time of day? The line voltage? The phase of the moon? (Don't laugh.)
2. Get extra observers to help see what else may correlate with the problem. This extra help includes both more people to help you observe and more equipment to monitor more channels of information.
3. Try to make something happen. Applying heat or cold may give you a clue. Adding some vibration or mechanical shock could cause a marginal connection to open permanently, thus leading to identification of the problem and its solution. (Refer to the notes on The Soul of a New Machine in Section 5).
4. Set up a storage scope or a similar data-acquisition system to trap and save the situation at the instant of the failure. Depending on the nature of the instrument, you may be able to store the data before the event's trigger or after, or both. This may be especially useful in self-destructive cases.
5. Get one or more buddies to help you analyze the situation. Friends can help propose a failure mode, a scenario, or a new test that may give a clue.
6. As the problem may be extremely difficult, use extreme measures to spot it. Beg or borrow special equipment. Make duplicates of the circuit or equipment that is failing in hopes of finding more examples of the failure. In some cases, you are justified in slightly abusing the equipment in hopes of turning the intermittent problem into an all-the-time problem, which is often easier to solve.
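As a sketch of item 1, the hunt for correlation can start with nothing fancier than a log of failure timestamps. The failure times below are invented for illustration; in practice they would come from your data logger or storage scope.

```python
# Tally logged failure events by hour of day to look for a time-of-day
# correlation. A strong peak suggests something worth chasing: line-voltage
# sag, HVAC cycling, the janitor's vacuum cleaner...
from collections import Counter
from datetime import datetime

# Hypothetical logged failure events:
failure_times = [
    datetime(2024, 6, 3, 2, 12),
    datetime(2024, 6, 4, 2, 40),
    datetime(2024, 6, 5, 14, 5),
    datetime(2024, 6, 6, 2, 22),
    datetime(2024, 6, 7, 2, 55),
]

by_hour = Counter(t.hour for t in failure_times)
worst_hour, count = by_hour.most_common(1)[0]
print(f"Most failures at hour {worst_hour}: {count} of {len(failure_times)}")
```

With these made-up data, the 2 AM failures really do cluster at 2 AM, which is exactly the kind of clue you log everything to catch. The same tally works just as well on line voltage, temperature, or any other channel you can record alongside the failures.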
Sugar and Spice and Nothing Nice?
In case you haven't guessed, I'm not a big fan of digital computers and simulation.
When a computer tries to simulate an analog circuit, sometimes it does a good job; but when it doesn't, things get very sticky. Part of the problem is that some people put excessive confidence and belief in anything a computer says. Fortunately, my bosses are very skeptical people, and they agree that we must be cautious when a computer makes outrageous promises. Still, we all agree that computers promise some real advantages, if only we can overcome their adversities and problems.
In many cases, if you have trouble with the simulation of an analog circuit or system, you troubleshoot the simulation just as you would the circuit itself. You get voltage maps at various "times" and "temperatures," you insert various stimuli, watch to see what's happening, and modify or tweak the circuit just like a "real" circuit. But, just like the Mario brothers, you can encounter problems in Computerland:
1. You might actually have a bad circuit.
2. You might forget to ask the computer the right question.
3. You might have mistyped a value or instruction or something.
The easiest mistake of this sort is to try to add a 3.3 M resistor to your circuit: SPICE thinks you mean 3.3 milliohms, not 3.3 megohms, because SPICE's scale factors are case-insensitive and "M" always means milli. This problem has hooked almost everybody I know. I solved it by using "3300 K" (3300 kilohms in SPICE), or I may just type out "3.3MEG".
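The trap looks like this in a netlist fragment (node numbers are arbitrary; this is an illustrative snippet, not a complete deck):

```spice
* Scale factors in SPICE are case-insensitive: "M" and "m" both mean milli.
* R1 is the trap: 3.3 milliohms, not 3.3 megohms.
R1 1 0 3.3M
* R2 and R3 are both 3.3 megohms, written two safe ways.
R2 2 0 3.3MEG
R3 3 0 3300K
```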
4. You might have a bad "model" for a transistor or device. I've seen a typographical error in the program listing of a transistor's model tie a project in knots for months.
5. You might have neglected to include strays such as substrate capacitance, PC-board capacitance, or-something that most people forget-lead inductance.
6. You might get a failure to converge or an excessive run time. Or the computer might balk because the program is taking too many iterations.
Sometimes problems happen that only a computer expert can address. But when you ask the computer guru for advice, you might get no advice or--what's worse--bad advice. After all, many computer wizards know nothing whatsoever about linear circuits. If the wizard tells me, "Hey, don't worry about that," or, "Just change the voltage resolution from 0.1 mV to 10 mV," then I must explain to the wizard that, although that advice might make some computers happy, it gives me results that are completely useless. Talking to computer wizards is sometimes difficult.
Even if you do everything right, the computer can lie to you. Then you have to make a test to prove that you can get the right answer and the computer can't.
For example, one time we had a circuit with 60 transistors, and a diff-amp appeared to be oscillating even though it was clearly switched OFF. The computer experts told us that we had obviously made a mistake. So we disconnected and removed 58 of the transistors--there was nothing left but 2 transistors, and one of them was biased off by a full volt. And its collector current was "oscillating" at 100 kHz, between plus and minus 10 uA, even though nothing in the circuit was moving.
When we confronted the computer experts, they belatedly admitted there was an "internal timing error," which they proceeded to fix. But it certainly took us a difficult week to get them to admit that.
FIG. 1. (coming soon) I hurled this computer to its doom from atop National's 3-story parking garage. As the dust settled, I knew that computer would never lie to me again!
My boss tells me that I should not be so negative, that computers are a big part of our future. When he says that, I am tempted to go out and buy stock in companies that make Excedrin and antacids, because that also will be a big part of our future...
I have given a couple lectures at major conferences, with comments about SPICE and some of its problems. (Ref. 1 and 2). After the lecture, engineers from other companies have come up and told me, "Yeah, we have those kinds of problems, too..."
One guy gave me a tip: "Don't put a 50-ohm resistor in your circuit--put in 50.1 ohm, and it may converge better." In other cases, we discovered that a parallel resistor-capacitor combination that was connected only to ground was helping to give us convergence; when we "commented out" the R and C, we could not get convergence any more. Other people comment that if you change the name of a resistor, or its number, or its position on the list of components, convergence may be improved--or ruined. So this convergence bird is a very fragile and flighty thing.
My boss reminds me that some versions of SPICE are better than others at converging, and I shouldn't just be a complainer. But I am just reminding you that all kinds of computer simulation get criticized, and sometimes the criticism is valid--the complainer is not just imagining things (Ref. 3).
So if the computer persists in lying to you, just tell your boss that the computer has proved itself incompetent. Junk that digital piece of disaster!
What I really think you ought to do, instead of using digital simulations, is to make an analog-computer model--you'll have a lot less trouble. Be sure to scale all the transistors' capacitances at 100X or 1000X their normal values, so the circuit runs 100X or 1000X slower and the strays become negligible. That's what I do. I have seen it work when SPICE cannot be beaten into cooperation. You might call it an "analog computer," because that's exactly what it is. I will listen to alternative points of view but, be forewarned, with frosty skepticism.
Lies, Damned Lies, and Statistics
One thing that doesn't help me a darned bit is "statistics," at least in the sense that mathematicians use them. I find most statistical analyses worse than useless. What I do like to use is charts and graphs.
The data I took of the diodes' VF versus IF back in Section 6 were a little suspicious when I wrote down the numbers, but after I plotted the data, I knew there was something wrong. Then I just went back and took more data until I understood what the error was--AC current noise crashing into my experiment, causing rectification. If data arise from a well-behaved phenomenon and conform to a nice Gaussian distribution, then I don't care if people use their statistical analyses--it may not do a lot of harm. (Personally I think it does harm, because when you use the computer and rely on it like a crutch, you get used to believing it, and trusting it without thinking...) However, when the data get screwy, classical statistical analysis is worse than useless.
For example, one time a test engineer came to me with a big formal report. Of course, it didn't help things any that it arrived at 1:05 for a Production Release Meeting that was supposed to start at 1:00. But this was not just any hand-scrawled report. It was handsome, neat, and computerized; it looked professional and compelling.
The test engineer quoted many statistical items to show that his test system and statistical software were great, even if the ICs weren't. Finally he turned to the last page and explained that, according to the statistics, the ICs' outputs were completely incompetent, way out of spec, and thus the part could not be released. In fact, he observed, the median level of the output was 9 V, which was pretty absurd for the logical output of an LM1525-type switching regulator, which could only go to the LOW level of 0.2 V or the HIGH level of 18.4 V. How could the outputs have a median level of 9 V?? How do you get an R-S flip-flop to hang up at an output level halfway between the rails? Unlikely.... Then he pointed out some other statistics--the 3-sigma values of the output were +30 V and -8 V. Now, that is pretty bizarre for a circuit that has only a +20-V supply and ground (and it is not running as a switching regulator--it's just sitting there at DC). The meeting broke up before I could find the facts and protest, so that product was not released on schedule.
It turned out, of course, that the tester was running falsely: while the outputs were all supposed to be SET to +18.4 V, the flip-flop was actually in a random state, so half the time the outputs were at 18.4 V and half the time at 0.2 V. If you feed these data into a statistical program, it might indeed tell you that some of the outputs might be at 9 V, assuming the data came from a Gaussian distribution. But if you look at the data and think, it is obvious that the data came from a ridiculous situation. Rather than trying to ram the data into a statistical format, the engineer should have started checking his tester.
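The arithmetic of that fiasco is easy to reproduce. Here is a sketch with made-up readings, assuming an exact 50/50 split between the two logic levels (the real data apparently weren't exactly half-and-half, which is why my limits come out a bit different from the +30 V and -8 V in the report):

```python
# Feed bimodal data (outputs randomly stuck at either logic level) into
# Gaussian statistics and you get absurd numbers: a "mean" at a voltage
# no output ever sat at, and "3-sigma" limits far outside the supply rails.
from statistics import mean, pstdev

# Hypothetical readings: half the outputs at 18.4 V, half at 0.2 V.
readings = [18.4] * 50 + [0.2] * 50

m = mean(readings)    # 9.3 V -- halfway between the rails
s = pstdev(readings)  # 9.1 V
print(f"mean = {m:.1f} V, '3-sigma' limits = {m - 3*s:.1f} to {m + 3*s:.1f} V")
```

The "3-sigma" limits land way outside the 0-to-20-V supply range, which by itself should have told the test engineer to go check his tester instead of typesetting a Beautiful Report.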
Unfortunately, this engineer had so much confidence in his statistical program that he spent a whole week preparing the Beautiful Report. Did he go report to the design engineer that there were some problems? No. Did he check his data, check the tester? No. He just kept his computer cranking along, because he knew the computer analysis was the most important thing.
We did finally get the tester fixed, and we got the product out a little late, but obviously I was not a fan of that test engineer (nor his statistics) as long as he was at our company. And that is just one of a number of examples I trot out, when anybody tries to use statistics when they are inappropriate.
I do like to use scatter plots in two dimensions, to help me look for trends, and to look for "sports" that run against the trend. I don't look at a lot of data on good parts or good runs, but I study the hell out of bad parts and bad runs. And when I work with other test engineers who have computer programs that facilitate these plots, I support and encourage those guys to use those programs, and to look at their data, and to think about those data. Anything that facilitates thinking--that I support.
Keep It Cool, Fool...
A couple of years ago I was approached by an engineer who was trying to use one of our good voltage references, which had a typical long-term stability of about 20 ppm per 1000 hours at +125 C. He was using it around room temperature, and he was furious because he expected it to drift about 0.1 ppm per 1000 hours at room temp, and it was a lot worse than that. Why was our reference no good, he asked? I pointed out that amplifiers' drifts and references' drifts do not keep improving by a factor of 2 every time you cool them off another 11 degrees.
I'm not sure who led him to believe that, but in general, modern electronic components are not greatly improved by cooling or the absence of heating. In fact, those of us who remember the old vacuum-tube days remember that a good scope or voltmeter had an advantage if you kept it running nice and warm all the time, because all the resistors and components stayed dry and never got moist under humid conditions.
I won't say that the electrolytic capacitors might not have liked being a little cooler. But the mindless effort to improve reliability by keeping components as cool as possible has been overdone. I'm sure you can blame a lot of that foolishness on MIL-HDBK-217 and all its versions. In some businesses you have to conform to -217, no matter how silly it is, but in the industrial and instrument business, we don't really have to follow its every silly quirk and whim. One guy who is arguing strenuously against -217 is Charles Leonard of Boeing, and you may well enjoy his writing (Ref. 4).

So if something is drifting a little and you think you can make a big improvement by adding a fan and knocking its temperature down from +75 to +55 C, I'm cautioning you: you'll probably be disappointed, because there's not usually a lot of improvement to be had. It is conceivable that if you have a bad thermal pattern causing lots of gradients and convection, you can cut down that kind of thermal problem, but in general, there's not much to be gained unless parts are getting up near their max rated temperature or above +100 C. Even plastic parts can be reliable at +100 C. The ones I'm familiar with are.
There's Nothing Like an Analog Meter
Everybody knows that analog meters aren't as accurate as digital meters. Except... you can buy DVMs with only 0.8% accuracy, and analog meters better than that do exist.
Anyway, let's detail some problems with analog meters.
Even if an analog meter is accurately calibrated at full scale, it may be less accurate at smaller signals because of nonlinearity arising from imperfections in the meter's magnetic "circuits." You can beat that problem by making your own scale to correct for those nonlinearities. Then there's the problem of friction and hysteresis. The better meters have a "taut-band" suspension, which has negligible friction--but most cheap meters don't. Now, as we have all learned, you can neutralize most of the effects of friction by gently rapping on, tapping at, or vibrating the meter. It's a pain in the neck, but when you're desperate, it's good to know.
Even if you don't shake, rattle, or roll your meters, you should be aware that they are position-sensitive and can give a different reading if flat or upright or turned sideways. The worst part about analog meters is that if you drop them, any of these imperfections may greatly increase until the meter is nearly useless or dead. This is "position sensitivity" carried to an extreme. Ideally, you would use digital meters for every purpose. But analog meters have advantages, for example, when you have to look at a trend or watch for a derivative or an amplitude peak--especially in the presence of noise, which may clutter up the readings of a digital voltmeter. So, analog meters will be with us for a long time, especially in view of their need for no extra power supply, their isolation, and their low cost.
But beware of the impedance of meter movements. At high frequencies, they look like a stalled motor--a few hundred millihenries. However, if the needle starts swinging, you'll get an inductive kick of many henries. So, if you put an analog meter in the feedback path of an op amp, you'll need a moderate feedback capacitor across the meter.
Digital Meters--Not So Bad, and Sometimes Better than That
As I mentioned before, digital meters are always more accurate than analog meters... except for when they aren't. Recently, a manufacturer of power supplies decided to "modernize" its bench-type supplies by replacing the old analog meters with digital meters. Unfortunately, these meters came with an accuracy of +/- 5%. Having a 2-1/2-digit digital panel meter (DPM) with a resolution of 1 part in 200 but an accuracy of 1 part in 20 certainly is silly. Needless to say, I stopped buying power supplies from that manufacturer.
The steadiness and irrefutability of those glowing, unwavering digits is psychologically hard to rebut. I classify the readings of the DVM or DPM with any other computer's output: You have to learn to trust a computer or instrument when it's telling the truth, and to blow the whistle on it when it starts to tell something other than the truth.
For example, most slow DVMs have some kind of dual-slope or integrating conversion, so they're inherently quite linear, perhaps within 1 or 2 least-significant digits. Other DVMs claim the advantage of higher conversion speed; this higher speed may be of no use to the bench engineer, but it is useful when the DVM is part of an automated data-acquisition system. These faster instruments usually use a successive-approximation or recirculating-remainder conversion scheme, neither of which is inherently linear; both depend on well-trimmed components for linearity.
I have seen several DVMs that cost more than $1000 and were prejudiced against certain readings. One didn't like to convert 15 mV; it preferred to indicate 14 or 16.
One time I got a call from an engineer at one of the major instrument companies. He wondered why the Voltage-to-Frequency converter he made with an NSC LM331 was showing him poor linearity--worse than the guaranteed spec of 0.01%. I told him that was strange, because if it was true, it was the first LM331 with poor linearity out of the first couple million we had produced. I advised him to check the capacitors and the op-amp waveform, and to call me back, because if he had a part that didn't meet specs, I wanted to get my hands on it.
The next day he called me back, feeling very sheepish and embarrassed. He admitted he had been using a prototype DVM designed by his company, and because it was a prototype, it was not exactly under calibration control. It was his DVM that had gone out of linearity, not the LM331.
Normally I hate to use a DVM's autoranging mode. I have seen at least two (otherwise high-performance) DVMs that could not lock out the autorange feature. The worst aspect of these meters was that I couldn't tell where they would autorange from one range to another, so I couldn't tell where to look for their nonlinearity; yet I knew there was some nonlinearity in there somewhere. After an hour of searching, I found a couple of missing codes at some such preposterous place as 10.18577 V. And this on a $4000 DVM that the manufacturer claimed could not possibly have such an error--could not have more than 1 ppm of nonlinearity.
Another fancy DVM had the ability to display its own guaranteed maximum error, saying that its own error could not be more than +/- 0.0040% when measuring a 1 M-ohm resistor. But then it started indicating that one of my better 1.000000 M-ohm resistors was really 0.99980 M-ohm. How could I prove it was lying to me? Easy--I used jiujitsu: I employed its own force against itself. I got ten resistors each measuring exactly 100.000 k-ohm--the fancy machine and all the other DVMs in the lab agreed quite well on these resistors' values. When I put all 10 resistors in series, all the other meters in the lab agreed that they added up to 1.00000 M-ohm; the fancy but erroneous machine said 0.99980 M-ohm. Back to the manufacturer it went.
So, if you get in an argument with a digital meter, don't think that you must necessarily be wrong. You can usually get an opinion from another instrument to help prove where the truth lies. Don't automatically believe that a piece of "data" must be correct just because it's "digital." And be sure to hold onto the user's manual that comes with the instrument. It can tell you where the guaranteed error band of the DVM gets relatively bad, such as for very low resistances, for very high resistances, for low AC voltages, and for low or high frequencies....
Most digital voltmeters have a very high input impedance (10,000 M-ohm typ) for small signals. However, if you let the DVM autorange, at some level the meter will automatically change to a higher range where the input impedance becomes 10 M-ohm.
Some DVMs change at +2 V or 3 V, others at 10 or 12 or 15 V, and yet others at +/- 20 V.
As I mentioned in the section on equipment, I like to work with the DVMs that stay high-impedance up to at least 15 V. But, the important thing is to know the voltage at which the impedance changes. A friend reminded me that his technician had recently taken a week's worth of data that had to be retaken because he neglected to allow for the change of impedance. I think I'll go around our lab and put labels on each DVM.
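To see why that impedance change is worth a label on the meter, here is a sketch of the loading error a DVM's input resistance causes when you measure through a source resistance. The 100-k source resistance is a hypothetical value chosen for illustration.

```python
# Loading error: the DVM's input resistance Rin forms a divider with the
# source resistance Rs, so the meter reads low by Rs/(Rs+Rin).
def loading_error_pct(rs_ohms: float, rin_ohms: float) -> float:
    """Percent low-reading caused by the meter's finite input resistance."""
    return 100.0 * rs_ohms / (rs_ohms + rin_ohms)

RS = 100e3  # hypothetical 100-k source resistance

# On a high-impedance range (10,000 M-ohm typ) the error is negligible...
err_hi_z = loading_error_pct(RS, 10_000e6)
# ...but once the DVM autoranges to a 10 M-ohm input, it's about 1%.
err_lo_z = loading_error_pct(RS, 10e6)
print(f"{err_hi_z:.4f} %  vs  {err_lo_z:.2f} %")
```

A week's worth of data taken on the wrong side of that changeover, like the technician's in the story above, is a week's worth of readings roughly 1% low.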
Still, DVMs are very powerful and useful instruments, often with excellent accuracy and tremendous linearity and resolution--often as good as 1 ppm. I've counted some of these ultralinear meters as my friends for many years. I really do like machines--such as the HP3455, HP3456, and HP3457--that are inherently, repeatably linear; some of these DVMs are absolutely first-class.
One picky little detail: Even the best DVM is still subject to the adage, "Heat is the enemy of precision." For example, some DVMs have a few extra microvolts of warm-up drift, but only when you stand the box on its end or side. Some of them have a few microvolts of thermal wobble and wander when connected to a zero-volt signal (shorted leads), but only when you use banana plugs or heavy-gauge (16, 18, or 20 gauge) leads--not when you use fine wire (26 or 28 gauge). The fine-wired leads apparently do not draw as much heat from the front-panel binding posts. So, even the best DVM auto-zero circuit cannot correct for drifts outside its domain.
Most engineers know that DVMs add a resistive (10 M-ohm) load to your circuit and a capacitive load (50 to 1000 pF) that may cause your circuit to oscillate. But what's not as well known is that even the better DVMs may pump noise back through their input terminals and spray a little clock noise around your lab. So if you have a sensitive circuit that seems to be picking up a lot of noise from somewhere, turn off your DVM for a few seconds to see if the DVM is the culprit. If that's not it, turn off the function generator or the soldering iron. If it is the DVM's fault, you may want to add RC filters, RLC filters, or active filter-buffers with precision operational amplifiers to cut down on the noise being injected into your circuit. There is a little RC filter shown in Figure 2.4 of Section 2 that is useful for keeping the noises of the DVM from kicking back. Or, you might want to go to an analog meter, which--as we discussed on a previous page--does not have any tendency to oscillate or put out noise.
An analog meter with a battery-powered preamplifier will not generate much noise at all, by comparison to a DVM....
While I'm on the subject of instruments, I really enjoy using a good function generator to put out sines and triangle waves and square waves and pulses. I love my old Wavetek 191. But I certainly don't expect the signals to be absolutely undistorted--all these waveforms will distort a little, especially at high frequencies. So if I want my function generator to give me a clean sine wave, I put its output through an active filter at low frequencies or an LC filter at high frequencies. If I want a clean, crisp square wave, I will put the signal through a clipping amplifier or into a diode-limited attenuator ( FIG. 3). If I want a cleaner triangle than the function generator will give me, I just make a triangle generator from scratch.
But a function generator lets me down when some absent-minded person pushes one button too many and the output stops. (Usually, that absent-minded person is me.) It can take me five minutes to find what the problems are. I love all those powerful, versatile functions when I need them, but they drive me nuts when the wrong button gets pushed.
Similarly, a scope's trace can get lost and hide in the corner and sulk for many minutes on end if you don't realize that somebody (maybe your very own errant fat finger) pushed a treacherous button. When the digital scopes with their multiple layers of menus and submenus start playing that game, I find I need a buddy system--somebody to come and bail me out when I get hopelessly stuck. What menu is that dratted beamfinder on, anyway? But, scopes work awfully well these days. Just don't expect precision results after you drive the trace many centimeters off scale by turning up the gain to look at the bottom of a tall square wave. Most scopes aren't obligated to do that very well.
Similarly, be sure to keep the trimmers on your 10X probes well adjusted, and run a short ground path to your probes when you want to look at fast signals, as discussed in Section 2.
Troubleshoot As You Go
Some people like to build up a big system and turn on the power; and, voila, it doesn't work. Then they have to figure out what kind of things are wrong in the whole megillah. I prefer to build up modular chunks and to test each section as I get it built. Then if it works, that provides a pleasant positive kick, several times along the length of a project. But if it doesn't work, it gives me a chance to get it on track before I go charging ahead and get the whole thing finished. Sometimes it's just a missing capacitor. Other times, I've got the whole concept wrong, and the sooner I find that out, the better. So if you see one of my systems made up of 14 little 7-inch-square sections, all lashed together on a master framework, don't be surprised. I mean, if you can make a big system work the first time, more power to you. I often remind my technician, "This may not work the first time, but it will be really close. You may have to tweak an R here or a C there, but it won't be disastrously bad."

Similarly, when I have a circuit that does not work right, do I just want to get it working right? Not exactly. What I want is to learn what was wrong, and to learn what happens when I try changes. So I don't give my technician a long list of changes to make all at once. I tell him, make this change first and see if the gain gets better. If that doesn't work, make that change and then that one, and keep an eye on the gain and the phase. Then try this tweak on the output stage, and trim it for lower distortion at 10 kHz.... If he made all the changes at once, the performance might improve, but if we weren't sure which changes made the improvement, we wouldn't be learning much, would we?
The procedure for trimming Vout to 22 V within 1% tolerance is as follows:
If Vout is higher than 23.080 V, snip out R3 (if not, don't);
then, if Vout is higher than 22.470 V, snip out R4 (if not, don't);
then, if Vout is higher than 22.160 V, snip out R5 (if not, don't).
Obviously, you can adapt this scheme to almost any output voltage. Choosing the break-points and resistor values is only a little bit tricky.
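The decision logic of that procedure can be sketched in a few lines. The thresholds are the ones listed above; the voltage drop produced by each snip is an assumed placeholder value for this sketch, since the actual drops depend on the resistor values you engineer for your own circuit.

```python
# Sketch of a snip-trim procedure: measure, compare against each threshold
# in order, and snip the corresponding resistor only if the output is high.
def snip_plan(v_initial: float) -> list[str]:
    """Return which resistors to snip, applying the thresholds in order.
    The per-snip voltage change is an ASSUMED value for illustration only."""
    steps = [
        ("R3", 23.080, 1.00),   # (resistor, threshold V, assumed drop V)
        ("R4", 22.470, 0.50),
        ("R5", 22.160, 0.25),
    ]
    v = v_initial
    snips = []
    for name, threshold, drop in steps:
        if v > threshold:       # "if not, don't"
            snips.append(name)
            v -= drop           # model the snip's effect before the next test
    return snips

print(snip_plan(23.5))   # a unit starting at 23.5 V gets R3 and R4 snipped
print(snip_plan(22.0))   # a unit already at 22.0 V needs no snips at all
```

In real life, of course, you re-measure the actual output after each snip rather than modeling the drop; the point of the sketch is only the ordered, one-way nature of the decisions.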
Systems and Circuits
When a system is designed, it is usually partitioned into subsections that are assigned to different people or groups to engineer. Two very important ingredients in such a system are Planning and Communication. If the partitioning was done unfairly, then some parts of the system might be excessively easy to design, and other parts substantially impossible. We've all seen that happen, so we must be careful to prevent it from happening to our systems. For if all the subsystems work except for one, the whole project will probably fail.
The need for good communications is critical--good oral and written communications, to prevent confusion or false assumptions. After all, it's not realistic to expect the system design and every one of its definitions to start out perfect from the first day. The chief troubleshooter for the system should be the Program Manager, or whatever he is called. He'll have his computers, PERT charts, Gantt charts, and so forth, but, most valuable, he has his people, who must be alert for the signs of trouble.
These people have to be able to communicate the early signs of trouble, so the leader can get things fixed.
How to Trim without Trimming Potentiometers
Speaking of keeping circuits well trimmed, some people like to use trimming potentiometers to get a circuit trimmed "just right." Other people hate to, because the potentiometers are expensive or unreliable or drifty. Worst of all, if a circuit can be trimmed, it can also be mis-trimmed; some person may absentmindedly or misguidedly turn the potentiometer to one end of its range or to the wrong setting. How long will it take before that error is corrected? For just this reason, some people prefer fixed-voltage regulators, because they always have a valid output (+/- 5%) and can never get goofed up by a trimming potentiometer.
Other people need a tighter tolerance yet are nervous about the trimming potentiometer. You will find the solution in the snip-trim network in FIG. 4 (Ref. 5). This scheme will let you trim a regulator well within 1% without trimming potentiometers. Note that you could also use this technique to set the gain of integrators and the offset of amplifiers. It's not always easy to engineer the correct values for these trims, but it is possible. And nobody's going to go back and tweak the potentiometer and cause trouble if there's no potentiometer there to tweak.
A pet gripe of mine concerns engineers who design a circuit with an adjust range that's so wide that damage can occur. For example, FIG. 5a is a bad idea for a regulator for a 5-V logic supply because the TTL parts would be damaged if someone tweaks the pot to one end of its range. FIG. 5b is better.
What about Solderless Breadboards?
Here's a chunk of late-breaking miscellany--the topic is those solderless breadboards, which consist of a number of metal strips and solderless connectors hidden underneath a plastic panel with lots of holes in it. Schools often use them to introduce students to the joys of breadboarding, because you can easily connect things by just stuffing wires and components into the holes. The problems begin with capacitance. The breadboards usually have 2, 3, or 5 pF between adjacent strips. Even on a good day, only a wise engineer could plan a layout that all those capacitors, sprinkled throughout the circuit, wouldn't ruin.
The next serious problem with solderless breadboards is the long leads, which make adding effective power-supply bypass capacitors close to a chip difficult.
Next, I suspect that some of these panels, although they are not inexpensive, use cheap plastics such as nylon. On a warm, humid day, cheap plastics do not offer high insulation resistance. Nobody wants to talk about what kind of plastic the breadboards are made of.
Finally, Mr. Scott Bowman of Dublin, CA, points out that after you insert enough wires into any given hole, the solderless connector will scrape sufficient solder off the wire so that the scraps of solder will pile up and start to intermittently short out to an adjacent strip. Further, the adhesive that holds on the back panel tends to hold the solder scraps in place, so you can't clean the scraps out with a solvent or a blast of air.
I didn't even think about these solderless breadboards when I wrote my series, because I see them so rarely at work. They just have too many disadvantages to be good for any serious work. So, if you insist on using these slabs of trouble, you can't say I didn't warn you.
1. Robert A. Pease, "Practical Considerations for the Design of High-Volume Linear ICs," IEEE International Symposium on Circuits and Systems, April 1990.
2. Robert A. Pease, "Band-Gap Reference Circuits: Trials and Tribulations," IEEE Bipolar Circuits and Technology Meeting, September 1990.
3. Robert A. Pease, "What's all this SPICEy Stuff Anyhow?" Electronic Design, December 1990.
4. Charles Leonard, "Is reliability prediction methodology for the birds?," Power Conversion and Intelligent Motion, November 1988, p. 4.
5. Robert A. Pease, "A New Production Technique for Trimming Voltage Regulators," Electronics, May 10, 1979, p. 141. (Also available as Linear Brief LB-46 in the Linear Applications Databook, National Semiconductor Corp., 1980-1990)