Predictive maintenance in power transformers

Date
2019
Authors
Atherfold, J.M.
Abstract
Power transformers are the link between power generation systems and the clients they supply with power. They are among the most important components in a power distribution network, and hence their reliability is of the utmost importance. Maintenance data from power transformers takes the form of Dissolved Gas Analysis (DGA) time series. This research consolidates industry knowledge on the maintenance of power transformers with time series forecasting techniques into a coherent system for the predictive maintenance of power transformers. In addition, the generalisability of forecasting models is investigated by measuring the performance of single models across multiple transformers, and hence multiple data sets. The data preprocessing method utilised an exponential smoothing technique specifically designed for the type of raw data received (aperiodically sampled, noisy data). Additional features were engineered based on research in the field and added to the data set. The preprocessing technique is novel in the context of DGA forecasting. The forecasting method developed utilises a Least-Squares Support Vector Machine (LS-SVM), with the model's hyper-parameters chosen and optimised using a Particle Swarm Optimiser (PSO). The performance of the developed models was measured against various benchmarks, including previously developed univariate forecasting techniques; built-in Support Vector Regressors; Naive and Mean forecasts; and ARIMA and Exponential Smoothing forecasters. Two sets of experiments were run (LS-SVM(1) and LS-SVM(2)), which differed in how the Training, Validation, and Testing sets were chosen: LS-SVM(1) held out an entire transformer for Testing, while LS-SVM(2) held out the final h points of each transformer, h ∈ {1, …, 4}. These experiments were run for two input vectors: the Original input vector, and an input vector Augmented with the additionally engineered features. The statistical significance of the difference between the error distributions produced by the two input vectors was assessed and found to be highly feature-dependent. The LS-SVM(2) model outperformed all other models when the distributions of the Testing errors were considered. In terms of generalisability, the models trained across all transformers outperformed the per-transformer models that treated the data as univariate time series. Recommendations were made to potentially improve the performance of future models, such as changing how the Training, Validation, and Testing sets are chosen; changing the fitness function used in the PSO; and training individual models per transformer.
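The preprocessing step smooths aperiodically sampled, noisy DGA readings, which a fixed-weight exponential smoother does not handle well. The report's exact smoother is not reproduced here; the sketch below shows a generic gap-aware exponentially weighted average in which the weight given to each new observation grows with the time elapsed since the previous sample. The function name, the decay constant tau, and the example H2 readings are illustrative assumptions, not the report's configuration.

```python
import numpy as np

def irregular_ewma(times, values, tau):
    """Exponentially weighted smoothing for aperiodically sampled data.

    The smoothing weight for each observation depends on the time elapsed
    since the previous sample, so long gaps discount the old state more.
    `tau` is the decay time constant (same units as `times`). This is a
    generic gap-aware EWMA, not necessarily the exact smoother used in
    the report.
    """
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    smoothed = np.empty_like(values)
    smoothed[0] = values[0]
    for i in range(1, len(values)):
        dt = times[i] - times[i - 1]
        alpha = 1.0 - np.exp(-dt / tau)   # weight grows with the sampling gap
        smoothed[i] = alpha * values[i] + (1.0 - alpha) * smoothed[i - 1]
    return smoothed

# Example: hypothetical noisy hydrogen (H2) readings taken at uneven intervals (days)
t = [0, 3, 10, 11, 25, 40]
h2_ppm = [12.0, 15.5, 14.2, 30.1, 22.8, 25.0]
print(irregular_ewma(t, h2_ppm, tau=14.0))
```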
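The forecaster itself is an LS-SVM, which, unlike a standard SVM, is trained by solving a single linear system rather than a quadratic programme. The sketch below is a minimal LS-SVM regressor in the standard dual formulation with an RBF kernel; the kernel choice, the lag-window construction of the inputs, and the toy series are assumptions for illustration rather than the report's exact setup.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF (Gaussian) kernel matrix between the rows of A and B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma, sigma):
    """Train an LS-SVM regressor by solving its (n+1)x(n+1) linear system.

    gamma is the regularisation parameter and sigma the RBF kernel width;
    these are the kind of hyper-parameters the report tunes with a PSO.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # support values (alphas) and bias

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    """Predict with the dual expansion f(x) = sum_i alpha_i k(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: one-step-ahead forecast from a window of p lagged values
def make_lagged(series, p):
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = np.array(series[p:])
    return X, y

series = np.sin(np.linspace(0, 6, 40)) + 0.05 * np.random.randn(40)
X, y = make_lagged(series, p=3)
alpha, b = lssvm_fit(X[:-1], y[:-1], gamma=10.0, sigma=1.0)
print(lssvm_predict(X[:-1], alpha, b, 1.0, X[-1:]), y[-1])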
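The LS-SVM hyper-parameters (for example the regularisation constant and kernel width above) are selected with a PSO. The following is a minimal box-bounded PSO sketch; the inertia and acceleration constants, particle count, and the sphere-function sanity check are illustrative assumptions, and in the report's setting the objective would be a Validation-set error of the trained forecaster rather than the test function shown.

```python
import numpy as np

def pso_minimise(objective, bounds, n_particles=20, n_iters=50, seed=0):
    """Minimal particle swarm optimiser over a box-bounded search space.

    `bounds` is a list of (low, high) pairs, one per dimension. For
    hyper-parameter tuning the dimensions would be, e.g., (gamma, sigma)
    and `objective` the Validation error of an LS-SVM trained with them;
    here the objective is kept generic.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    dim = len(bounds)

    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest_pos = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    g = np.argmin(pbest_val)
    gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]

    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = (w * vel
               + c1 * r1 * (pbest_pos - pos)
               + c2 * r2 * (gbest_pos - pos))
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]
    return gbest_pos, gbest_val

# Sanity check on a known function: the sphere function has its minimum at the origin.
print(pso_minimise(lambda x: np.sum(x**2), bounds=[(-5, 5), (-5, 5)]))
```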
Description
A research report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Computer Science, February 2019