Finite element model updating
Mthembu, Linda Simo
This thesis focuses on engineering systems, specifically structural systems, that are approximated by finite element models (FEMs). Initial FEMs often have poor accuracy, and improved or updated models are sought. From the literature we note that a common set of challenges persists in all FEM work: which aspects of the model are most uncertain, how the model can be updated efficiently, and how we can know that the chosen model is the best for the system at hand. This is the finite element model updating problem. These challenges are reinforced by the number of different FEMs that can be proposed for any one system and by the difficulty of determining the best model among them; moreover, all of these challenges apply to every possible FEM.

To address these challenges we propose that the FEM updating problem be analyzed in a multi-model context. Implicit in this proposal is that updating one model in isolation is not very informative. This context requires that all methods proposed in this thesis be general enough to apply to any set of FEMs.

To address the challenge of identifying the most uncertain parameters of a FEM, we propose an evolution-based procedure: population based incremental learning (PBIL). The main assumption of this method is that the uncertain model parameters can be represented as a vector; PBIL then probabilistically selects and updates the most uncertain parameters from this vector. To verify the consistency of the PBIL method, it is tested on two different objective functions and under two different measurement datasets.

The second challenge, finding an efficient way to update a FEM, is also addressed via an evolution-based procedure. In the proposed multi-model framework, efficiency means updating models quickly and without bias. We thus propose the updating of multiple FEMs using particle swarm optimization (PSO).
This approach allows all models to be updated and evaluated simultaneously under one scheme. The result is interaction between the models as they are updated, and an ordering of the models by accuracy. Simulations of a real beam are carried out on a number of models and two objective functions.

To determine whether the chosen model is the best in the multi-model setting, we propose using the Bayesian model evidence statistic. The model evidence is calculated using the nested sampling algorithm, and Jeffreys' scale is used to evaluate the significance of differences in model evidence. Simulations on two real systems, using multiple models for each, are performed. The proposed method concisely shows and justifies the resulting model ordering.
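As a rough illustration of the PBIL selection scheme described above, the sketch below evolves a probability vector over a binary mask of candidate parameters. The "true" uncertainty pattern and the Hamming-distance objective are hypothetical stand-ins for a FEM error measure; the learning rate and population size are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" uncertainty pattern over 8 candidate FEM parameters
# (1 = parameter is uncertain). A real objective would measure model/data
# mismatch; Hamming distance to this pattern stands in for it here.
TRUE_MASK = np.array([1, 0, 1, 1, 0, 0, 1, 0])

def objective(mask):
    return int(np.sum(mask != TRUE_MASK))  # lower is better

def pbil(n_bits=8, pop_size=50, generations=100, learn_rate=0.1):
    """Evolve a probability vector toward the best-scoring bit pattern."""
    p = np.full(n_bits, 0.5)                           # start fully undecided
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample population
        scores = [objective(ind) for ind in pop]
        elite = pop[int(np.argmin(scores))]            # best individual this generation
        p = (1 - learn_rate) * p + learn_rate * elite  # shift probabilities toward elite
    return (p > 0.5).astype(int)

print(pbil())
```

The returned mask marks the parameters the procedure has come to regard as uncertain; in the thesis setting, these would be the FEM parameters selected for updating.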
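The evidence computation can be illustrated with a minimal nested sampling sketch on a toy one-dimensional problem: a uniform prior on [-5, 5] and a standard-normal likelihood, for which the evidence is known analytically (Z ≈ 0.1, log Z ≈ -2.30). This is an assumption-laden sketch, not the thesis implementation; the `jeffreys` helper uses one common rendering of Jeffreys' scale boundaries in log10 units.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta):
    # Toy stand-in for a FEM/data misfit likelihood: standard normal at 0
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def nested_sampling(n_live=200, n_iter=1200, lo=-5.0, hi=5.0):
    """Estimate log-evidence log Z under a uniform prior on [lo, hi]."""
    live = rng.uniform(lo, hi, n_live)
    ll = log_likelihood(live)
    log_z = -np.inf
    for i in range(1, n_iter + 1):
        worst = ll.argmin()
        # prior-volume shrinkage X_i ~ exp(-i / n_live); weight w_i = X_{i-1} - X_i
        log_w = -(i - 1) / n_live + np.log1p(-np.exp(-1.0 / n_live))
        log_z = np.logaddexp(log_z, log_w + ll[worst])
        # replace the worst live point by rejection sampling above its likelihood
        while True:
            cand = rng.uniform(lo, hi)
            if log_likelihood(cand) > ll[worst]:
                break
        live[worst], ll[worst] = cand, log_likelihood(cand)
    # add the remaining live points' contribution over the final prior volume
    log_z = np.logaddexp(
        log_z, np.logaddexp.reduce(ll) - np.log(n_live) - n_iter / n_live)
    return log_z

def jeffreys(delta_log10_z):
    """Interpret a log10 evidence ratio on Jeffreys' scale (common variant)."""
    bands = [(0.5, "barely worth mentioning"), (1.0, "substantial"),
             (1.5, "strong"), (2.0, "very strong")]
    for bound, label in bands:
        if delta_log10_z < bound:
            return label
    return "decisive"

# Analytic check: Z = (1/10) * integral of N(theta; 0, 1) over [-5, 5] ~ 0.1
print(nested_sampling())
```

In the multi-model setting, one would compute such an evidence estimate per candidate FEM, rank the models by evidence, and pass pairwise log-evidence differences to a Jeffreys-style scale to judge whether the ranking is significant.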