On-line Diagnostic Testing
In response to a comment of May 13, 2011 to Middle East Query – Diagnostic Testing Timing, let's dive deeper into the data. Below I have reproduced slide number 281 from the CDFI (Cable Diagnostic Focused Initiative) Regional meeting presented by NEETRAC (The National Electric Energy Testing Research & Application Center at Georgia Tech's College of Engineering) and hosted by American Electric Power (AEP). The meeting was held on October 13-14, 2009 in Columbus, Ohio, U.S.A. The full set of presentation slides is available by clicking here. The figure (from slide 281) shows the failure results tracked for more than three years on 114 feeder cable miles of EPR, XLPE, and PILC cable tested using online PD. After the testing was completed, the cables and attached accessories were allowed to fail; that is, no rehabilitation actions were taken. There were about 85 accessory failures and about 90 cable failures.
False Positive: Testing indicates the existence of an incipient fault in a cable or accessory, but the presumed incipient fault does not progress to a fault during the observation period.
False Negative: Testing fails to indicate the existence of an incipient fault in a cable or accessory, and the unidentified incipient fault progresses to a fault during the observation period.
The online PD testing indicated the need for action (i.e., imminent failure) on 45 accessories. Of those 45, only 14 (31%) actually failed, making the false-positive rate 69%. The results on the cables were marginally better: of the 52 cables diagnosed as "bad," 23 (about 44%) actually failed, for a false-positive rate of 56%. For both accessories and cables, the number of faults that occurred on plant that had been deemed "good" by the testing firm far outnumbered those identified as "bad": there were about 71 and 67 false-negative failures for accessories and cables respectively.
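The false-positive arithmetic above can be sketched in a few lines of Python; the counts are taken directly from the figure discussed in the text:

```python
# Counts from the CDFI slide discussed above.
accessories_flagged = 45   # accessories flagged for action by online PD
accessories_failed = 14    # flagged accessories that actually failed
cables_flagged = 52        # cables diagnosed "bad"
cables_failed = 23         # flagged cables that actually failed

# False-positive rate: fraction of flagged components that never failed.
fp_accessories = 1 - accessories_failed / accessories_flagged
fp_cables = 1 - cables_failed / cables_flagged

print(f"Accessory false positives: {fp_accessories:.0%}")  # 69%
print(f"Cable false positives: {fp_cables:.0%}")           # 56%
```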
Not only did the observations show that the testing was unable to provide reasonable discrimination between bad and good; the raw number of failures that occurred in the presumably "good" sub-population was about 3 to 5 times higher. Because the researchers did not provide population statistics beyond the total mileage of cable installed, the relative false-negative performance cannot be determined with precision. However, I can make some estimates. If the average three-phase feeder run length were 1,760 feet (typical for North America) and there were 2.2 components per cable segment (also typical), there would have been approximately 343 cable segments (or about 114 three-phase cables, termination to termination) and about 750 accessories. The relative failure rate over the three-year period would have been 11% (i.e., 85/750) for accessories and 26% (i.e., 90/343) for cables. My estimates of the false negatives are 9.5% (i.e., (85-14)/750) for accessories and 19.5% (i.e., (90-23)/343) for cables.
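The population estimates above follow mechanically from the stated assumptions; a short sketch makes the chain of arithmetic explicit (the run length and components-per-segment figures are the assumptions named in the text, and the post rounds the resulting counts to 343 segments and 750 accessories):

```python
# Assumed population parameters, as stated in the text.
feet_per_mile = 5280
miles_tested = 114            # feeder cable miles covered by the testing
run_length_ft = 1760          # assumed average three-phase feeder run length
components_per_segment = 2.2  # assumed accessories per cable segment

segments = miles_tested * feet_per_mile / run_length_ft   # ≈ 342 cable segments
accessories = segments * components_per_segment           # ≈ 752 accessories

# Observed failures over the three-year period.
accessory_failures, cable_failures = 85, 90
accessory_flagged_failed, cable_flagged_failed = 14, 23

acc_failure_rate = accessory_failures / accessories       # ≈ 11%
cable_failure_rate = cable_failures / segments            # ≈ 26%
acc_false_neg = (accessory_failures - accessory_flagged_failed) / accessories
cable_false_neg = (cable_failures - cable_flagged_failed) / segments

print(f"Accessory failure rate: {acc_failure_rate:.1%}, false negatives: {acc_false_neg:.1%}")
print(f"Cable failure rate: {cable_failure_rate:.1%}, false negatives: {cable_false_neg:.1%}")
```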
Amazingly, these profoundly dismal results are spun by testing proponents as proof that a testing program is a fruitful endeavor. It is no wonder to me that people get sucked into tulip and real-estate bubbles and Ponzi schemes; rarely has anyone been so duped. Alas, wishing that a diagnostic provides useful information does not make it so.
There are two immutable reasons and their “anti-synergy” that explain why the current generation of diagnostics cannot work. These two reasons are:
1. The economics of aged circuit rehabilitation, and
2. The second law of thermodynamics.
Further, without some technological breakthrough that reduces the cost of applying diagnostics by an order of magnitude, it is unlikely these immutable and anti-synergetic forces will ever be reconciled. To inoculate yourself against these ill-conceived schemes, read and understand the DEIS (Dielectric and Electrical Insulation Society) feature article, "Diagnostic Testing of Stochastic Cables," published in the March/April 2009 issue of IEEE's Electrical Insulation Magazine.