Abstract

To ensure that the changing utility environment does not adversely affect the reliability of electric power supplied to customers, several state regulatory agencies have started to prescribe reliability standards (minimum reliability levels) to be maintained by electric power distribution companies. The standards are based on reliability indexes computed from historical outage data. The reliability indexes vary from year to year because of the statistical variation in the number of customer interruptions and the duration of such interruptions. To be effective, the reliability standards adopted must identify feeders that consistently perform poorly, while being insensitive to those that only occasionally have poor reliability. This paper employs a duration-based Monte Carlo simulation to explore the predicted impact of various reliability standards on a large practical distribution system. The sensitivity of different standards to differences in system size and component failure rate is also explored.
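The kind of duration-based Monte Carlo study the abstract describes can be sketched as follows: simulate many years of feeder outages, compute the standard reliability indexes (SAIFI and SAIDI) for each year, and estimate how often a feeder would violate a prescribed standard. This is a minimal illustrative sketch, not the paper's model; the Poisson/exponential outage assumptions, parameter values, and the SAIDI threshold are all hypothetical.

```python
import math
import random

def poisson(lam, rng):
    """Poisson random variate via Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_year(n_customers, failure_rate, mean_repair_hours, rng):
    """Simulate one year of outages on a single feeder.

    Returns (SAIFI, SAIDI) for that year. Assumes each outage interrupts
    all customers on the feeder, a Poisson-distributed outage count, and
    exponentially distributed repair times -- illustrative assumptions only.
    """
    n_outages = poisson(failure_rate, rng)
    total_interruptions = n_outages * n_customers
    total_customer_hours = sum(
        rng.expovariate(1.0 / mean_repair_hours) * n_customers
        for _ in range(n_outages)
    )
    saifi = total_interruptions / n_customers   # interruptions per customer-year
    saidi = total_customer_hours / n_customers  # outage hours per customer-year
    return saifi, saidi

# Estimate how often a feeder of average reliability would violate a
# hypothetical SAIDI standard, illustrating the year-to-year statistical
# variation the abstract refers to.
rng = random.Random(42)
years = 5000
threshold = 4.0  # hypothetical standard: 4 outage hours per customer-year
violations = sum(
    simulate_year(n_customers=500, failure_rate=1.5,
                  mean_repair_hours=2.0, rng=rng)[1] > threshold
    for _ in range(years)
)
print(f"Estimated P(SAIDI > {threshold} h/yr): {violations / years:.3f}")
```

Repeating this for feeders of different sizes and failure rates gives the kind of sensitivity comparison the paper performs: a standard that flags such an average feeder in a large fraction of years is reacting to statistical variation rather than to consistently poor performance.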
