

Guest Editorial

ASME J. Risk Uncertainty Part B. 2017;4(1):010301-010301-2. doi:10.1115/1.4037447.

The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, erroneous model predictions could have catastrophic consequences. Therefore, the uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

Commentary by Dr. Valentin Fuster

Research Papers

ASME J. Risk Uncertainty Part B. 2017;4(1):011001-011001-10. doi:10.1115/1.4037557.

We demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model except an intercept, a linear term, and a quadratic term involving the square of the vorticity. The shrunk eddy viscosity model is implemented in a RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The prohibitive cost of running a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior beliefs regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear EVMs and CEVMs. We find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which it was not calibrated.
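The core loop described above — a cheap surrogate evaluated inside a Metropolis–Hastings MCMC sampler, with a feasibility check standing in for the SVM classifier's physical-bounds constraint — can be sketched in miniature. Everything below (the one-parameter surrogate, the measurement, the bounds) is a hypothetical illustration, not the paper's actual model:

```python
import math
import random

random.seed(0)

def surrogate(theta):
    # Hypothetical cheap surrogate standing in for the RANS solver:
    # predicted vorticity at one probe as a function of one model parameter.
    return 2.0 * theta + 0.5 * theta ** 2

def feasible(theta):
    # Stand-in for the SVM classifier that excludes nonphysical parameters.
    return 0.0 < theta < 5.0

y_obs, sigma = 3.0, 0.2  # hypothetical vorticity measurement and noise level

def log_post(theta):
    # Gaussian log-likelihood; -inf outside the feasible region.
    if not feasible(theta):
        return -math.inf
    return -0.5 * ((surrogate(theta) - y_obs) / sigma) ** 2

# Metropolis-Hastings random walk over the parameter.
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

burned = samples[5000:]          # discard burn-in
mean = sum(burned) / len(burned) # posterior mean estimate
```

With these toy numbers the posterior concentrates near the root of 0.5θ² + 2θ = 3, i.e., θ ≈ 1.16.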

ASME J. Risk Uncertainty Part B. 2017;4(1):011002-011002-8. doi:10.1115/1.4037452.

Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for computational fluid dynamics (CFD) applications. A particular obstacle for uncertainty quantification in CFD problems is the large model discrepancies associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the quantities of interest (QoI). High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multimodel strategy to account for the influence of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model discrepancies associated with the lower-fidelity model in the parameter space. A Gaussian process (GP) is adopted to construct the model discrepancy function, and a Bayesian approach is used to infer the discrepancies and corresponding uncertainties in the regions of the parameter space where high-fidelity simulations are not performed. Several examples relevant to CFD applications are presented to demonstrate the merits of the proposed strategy. Simulation results suggest that, by combining low- and high-fidelity models, the proposed approach produces better results than either model can achieve individually.
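The multi-fidelity idea — learn the discrepancy between low- and high-fidelity models at a few expensive design points, then correct the cheap model everywhere else — can be sketched as follows. Here a Gaussian-kernel smoother is a deliberately simple stand-in for the GP mean, and both "models" are hypothetical one-dimensional functions:

```python
import math

def low_fidelity(x):
    # Hypothetical cheap model (coarse approximation of the true response).
    return math.sin(x)

def high_fidelity(x):
    # Hypothetical expensive model, affordable only at a few design points.
    return math.sin(x) + 0.3 * x

# Small ensemble of high-fidelity runs and the observed discrepancies.
x_hf = [0.0, 1.0, 2.0, 3.0]
delta = [high_fidelity(x) - low_fidelity(x) for x in x_hf]

def predict_discrepancy(x, h=0.8):
    # Gaussian-kernel smoother as a simple stand-in for the GP posterior mean.
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in x_hf]
    return sum(wi * di for wi, di in zip(w, delta)) / sum(w)

def corrected(x):
    # Low-fidelity prediction plus the inferred discrepancy.
    return low_fidelity(x) + predict_discrepancy(x)
```

A full GP would additionally return a predictive variance, quantifying the uncertainty of the discrepancy away from the high-fidelity samples.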

ASME J. Risk Uncertainty Part B. 2017;4(1):011003-011003-10. doi:10.1115/1.4037454.

In a Bayesian network (BN), how a node of interest is affected by an observation at another node is a central concern, especially in backward inference. This challenge motivates the proposed global sensitivity analysis (GSA) for BNs, which calculates the Sobol’ sensitivity index to quantify the contribution of an observation node toward the uncertainty of the node of interest. In backward inference, a low sensitivity index indicates that the observation cannot reduce the uncertainty of the node of interest, so a more appropriate observation node, providing a higher sensitivity index, should be measured instead. This GSA for BNs confronts two challenges. First, the computation of the Sobol’ index requires a deterministic function, while the BN is a stochastic model. This paper uses an auxiliary variable method to convert the path between two nodes in the BN into a deterministic function, thus making the Sobol’ index computation feasible. Second, the computation of the Sobol’ index can be expensive, especially if the model inputs are correlated, which is common in a BN. This paper uses an efficient algorithm proposed by the authors to estimate the Sobol’ index directly from input–output samples of the prior distribution of the BN, thus making the proposed GSA for BNs computationally affordable. This paper also extends this algorithm so that the uncertainty reduction of the node of interest at a given observation value can be estimated. This estimate uses only the prior distribution samples, thus providing quantitative guidance for effective observation and updating.
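A given-data estimator of the kind referenced above — computing a first-order Sobol’ index directly from existing input–output samples, with no extra model runs — can be sketched by binning the input and taking the variance of the binwise conditional means. The two-input linear model below is a hypothetical toy, not a BN from the paper:

```python
import random

random.seed(1)

# Prior samples of a toy model whose output depends strongly on x1
# and weakly on x2 (names and coefficients are illustrative).
n = 20000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [4.0 * a + 0.5 * b for a, b in zip(x1, x2)]

def first_order_sobol(x, y, bins=20):
    # Given-data estimator: Var of the binwise conditional means over Var(y).
    mean_y = sum(y) / len(y)
    var_y = sum((v - mean_y) ** 2 for v in y) / len(y)
    sums = [[0.0, 0] for _ in range(bins)]
    for xi, yi in zip(x, y):
        b = min(int(xi * bins), bins - 1)  # inputs assumed scaled to [0, 1)
        sums[b][0] += yi
        sums[b][1] += 1
    cond = [(s / c, c) for s, c in sums if c > 0]
    var_cond = sum(c * (m - mean_y) ** 2 for m, c in cond) / len(y)
    return var_cond / var_y

s1 = first_order_sobol(x1, y)  # should be close to 1 (dominant input)
s2 = first_order_sobol(x2, y)  # should be close to 0 (weak input)
```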

ASME J. Risk Uncertainty Part B. 2017;4(1):011004-011004-17. doi:10.1115/1.4037455.

The research objective herein is to understand the relationships between the interatomic potential parameters and properties used in the training and validation of potentials, specifically using a recently developed modified embedded-atom method (MEAM) potential for saturated hydrocarbons (C–H system). This potential was parameterized to a training set that included bond distances, bond angles, and atomization energies at 0 K of a series of alkane structures from methane to n-octane. In this work, the parameters of the MEAM potential were explored through a fractional factorial design and a Latin hypercube design to better understand how individual MEAM parameters affected several properties of molecules (energy, bond distances, bond angles, and dihedral angles) and also to quantify the relationship/correlation between various molecules in terms of these properties. The generalized methodology presented shows quantitative approaches that can be used in selecting the appropriate parameters for the interatomic potential, selecting the bounds for these parameters (for constrained optimization), selecting the responses for the training set, selecting the weights for various responses in the objective function, and setting up the single/multi-objective optimization process itself. The significance of the approach applied in this study is not only the application to the C–H system but that the broader framework can also be easily applied to any number of systems to understand the significance of parameters, their relationships to properties, and the subsequent steps for designing interatomic potentials under uncertainty.
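Of the two designs mentioned above, the Latin hypercube design is easy to show in miniature: one stratified sample per interval along each parameter axis, with strata randomly paired across dimensions. The three parameters and their bounds below are hypothetical placeholders, not actual MEAM parameters:

```python
import random

random.seed(2)

def latin_hypercube(n, bounds):
    # One point per stratum along each axis; strata shuffled per dimension
    # so each marginal is evenly covered while pairings stay random.
    cols = []
    for lo, hi in bounds:
        perm = list(range(n))
        random.shuffle(perm)
        cols.append([lo + (hi - lo) * (p + random.random()) / n for p in perm])
    return [[col[i] for col in cols] for i in range(n)]

# e.g., sample three hypothetical potential parameters over assumed bounds
pts = latin_hypercube(10, [(0.5, 2.0), (1.0, 4.0), (-1.0, 1.0)])
```

Each column of `pts` then hits every one of the ten strata of its parameter range exactly once.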

ASME J. Risk Uncertainty Part B. 2017;4(1):011005-011005-19. doi:10.1115/1.4037457.

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
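The consistency requirement above — the push-forward of the posterior must match the observed density — admits a simple sample-based sketch: reweight prior samples by the ratio of the observed density to the push-forward density of the prior, then accept/reject. The map, the observed density, and the histogram density estimate below are all hypothetical simplifications; the OED information-gain step is omitted:

```python
import random

random.seed(3)

def model(lam):
    # Hypothetical parameter-to-observable map.
    return lam ** 2

n, bins = 50000, 25
prior = [random.uniform(0.0, 1.0) for _ in range(n)]
q = [model(l) for l in prior]

# Histogram estimate of the push-forward density of the prior on [0, 1).
counts = [0] * bins
for v in q:
    counts[min(int(v * bins), bins - 1)] += 1
push = [c * bins / n for c in counts]

def obs_density(x):
    # Hypothetical observed density on the QoI: uniform on [0.2, 0.4).
    return 5.0 if 0.2 <= x < 0.4 else 0.0

# Consistent-Bayes update: weight each prior sample by the ratio of the
# observed density to the push-forward density, then accept/reject.
ratios = [obs_density(v) / max(push[min(int(v * bins), bins - 1)], 1e-12)
          for v in q]
m = max(ratios)
posterior = [l for l, r in zip(prior, ratios) if random.random() < r / m]
```

By construction, pushing the accepted samples back through `model` reproduces the observed density on the QoI (here, their images land in [0.2, 0.4)).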

ASME J. Risk Uncertainty Part B. 2017;4(1):011006-011006-8. doi:10.1115/1.4037459.

Searching for local minima, saddle points, and minimum energy paths (MEPs) on the potential energy surface (PES) is challenging in computational materials science because of the complexity of PES in high-dimensional space and the numerical approximation errors in calculating the potential energy. In this work, a local minimum and saddle point searching method is developed based on kriging metamodels of PES. The searching algorithm is performed on both kriging metamodels as the approximated PES and the calculated one from density functional theory (DFT). As the searching advances, the kriging metamodels are further refined to include new data points. To overcome the dimensionality problem in classical kriging, a distributed kriging approach is proposed, where clusters of data are formed and one metamodel is constructed within each cluster. When the approximated PES is used during the searching, each predicted potential energy value is an aggregation of the ones from those metamodels. The dimension of each metamodel is further reduced based on the observed symmetry in materials systems. The uncertainty associated with the ground-state potential energy is quantified using the statistical mean-squared error in kriging to improve the robustness of the searching method.
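A minimal kriging metamodel of a potential energy surface, with the mean-squared error that the search uses as its uncertainty measure, can be sketched in one dimension. This is plain zero-mean kriging on a toy double-well energy (a stand-in for DFT samples), not the paper's distributed, clustered scheme:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small kriging system.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel(a, b, ell=1.0):
    # Squared-exponential covariance with length scale ell.
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def kriging(xs, ys, nugget=1e-8):
    # Fit a simple (zero-mean) kriging metamodel to energy samples.
    K = [[kernel(a, b) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    def predict(x):
        ks = [kernel(x, xi) for xi in xs]
        mu = sum(k * a for k, a in zip(ks, alpha))
        # Mean-squared error: the uncertainty measure used by the search.
        var = kernel(x, x) - sum(k * s for k, s in zip(ks, solve(K, ks)))
        return mu, var
    return predict

# Hypothetical 1D potential-energy samples (stand-ins for DFT evaluations).
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [(x ** 2 - 1.0) ** 2 for x in xs]  # toy double-well PES
pes = kriging(xs, ys)
```

The searching algorithm would query `pes` for cheap energy predictions, and refine the metamodel with a new DFT point wherever the returned variance is too large.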

ASME J. Risk Uncertainty Part B. 2017;4(1):011007-011007-7. doi:10.1115/1.4037460.

The nonlinear stochastic behavior of a nonconservative acousto-elastic system is the focus of the present work. The deterministic acousto-elastic system consists of a spinning disk in a compressible-fluid-filled enclosure. The nonlinear rotating plate dynamics is coupled with the linear acoustic oscillations of the surrounding fluid, and the coupled field equations are discretized and solved at various rotation speeds. The deterministic system reveals the presence of a supercritical Hopf bifurcation when a specific coupled mode undergoes a flutter instability at a particular rotation speed. The effect of randomness associated with the damping parameters on the coupled dynamics is investigated and quantified, and the stochastic bifurcation behavior is studied. The quantification of the parametric randomness is undertaken by means of a spectral-projection-based polynomial chaos expansion (PCE) technique. From the marginal probability density functions (PDFs), it is observed that the stochastic system exhibits stochastic phenomenological bifurcations (P-bifurcations). The study provides insights into the behavior of the stochastic system during its P-bifurcation with reference to the deterministic Hopf bifurcation.
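The spectral-projection PCE step can be sketched for a scalar case: expand a response in probabilists' Hermite polynomials of a standard normal germ and project out the coefficients. The exponential response below is a hypothetical stand-in for the system response to a random damping parameter, and plain Monte Carlo replaces the quadrature usually used for the projection:

```python
import math
import random

random.seed(4)

def hermite(k, x):
    # Probabilists' Hermite polynomials He_k via the standard recurrence
    # He_{n+1}(x) = x He_n(x) - n He_{n-1}(x).
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def response(xi):
    # Hypothetical system response as a function of a standard normal
    # germ xi parameterizing the random damping coefficient.
    return math.exp(0.3 * xi)

# Spectral projection (here by Monte Carlo): c_k = E[g(xi) He_k(xi)] / k!
n, order = 100000, 4
xis = [random.gauss(0.0, 1.0) for _ in range(n)]
coeffs = []
for k in range(order + 1):
    fact = math.factorial(k)
    coeffs.append(sum(response(x) * hermite(k, x) for x in xis) / (n * fact))

def pce(xi):
    # Truncated PCE surrogate of the response.
    return sum(c * hermite(k, xi) for k, c in enumerate(coeffs))
```

Marginal PDFs of the response (and hence P-bifurcation diagnostics) can then be sampled cheaply from the `pce` surrogate instead of the full solver.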

ASME J. Risk Uncertainty Part B. 2017;4(1):011008-011008-13. doi:10.1115/1.4037450.

The transitional Markov chain Monte Carlo (TMCMC) algorithm is an efficient algorithm for performing Markov chain Monte Carlo (MCMC) in the context of Bayesian uncertainty quantification on parallel computing architectures. However, the features that make its sampling efficient are also responsible for introducing bias into the sampling. We demonstrate that the Markov chains of each subsample in TMCMC may result in uneven chain lengths that distort the intermediate target distributions and introduce bias accumulation in each stage of the TMCMC algorithm. We remedy this drawback of TMCMC by proposing uniform chain lengths, with or without burn-in, so that the algorithm emphasizes sequential importance sampling (SIS) over MCMC. The proposed Bayesian annealed sequential importance sampling (BASIS) removes the bias of the original TMCMC and at the same time increases its parallel efficiency. We demonstrate the advantages and drawbacks of BASIS in the modeling of bridge dynamics using finite elements and of a disk-wall collision using discrete element methods.
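The annealed-SIS structure described above — temper from prior toward posterior, reweight, resample, then rejuvenate every particle with a chain of the same length — can be sketched on a one-dimensional toy problem. The likelihood, tempering schedule, and proposal scale are hypothetical choices, not the paper's settings:

```python
import math
import random

random.seed(5)

def log_like(theta):
    # Hypothetical Gaussian likelihood centered at 1.0 with sd 0.1.
    return -0.5 * ((theta - 1.0) / 0.1) ** 2

# One annealed SIS run: bridge the prior (beta=0) toward the posterior
# (beta=1), giving every resampled particle the SAME chain length.
particles = [random.uniform(-3.0, 3.0) for _ in range(2000)]
beta, dbeta, chain_len = 0.0, 0.25, 5
while beta < 1.0:
    dbeta = min(dbeta, 1.0 - beta)
    # Importance weights for the tempering increment.
    w = [math.exp(dbeta * log_like(p)) for p in particles]
    # Multinomial resampling proportional to the weights.
    particles = random.choices(particles, weights=w, k=len(particles))
    beta += dbeta
    # Uniform-length MCMC moves (the key BASIS modification of TMCMC).
    for i, p in enumerate(particles):
        for _ in range(chain_len):
            q = p + random.gauss(0.0, 0.2)
            if math.log(random.random()) < beta * (log_like(q) - log_like(p)):
                p = q
        particles[i] = p

mean = sum(particles) / len(particles)
```

With a flat prior over [-3, 3], the particle ensemble should settle on the posterior, approximately N(1, 0.1).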

Select Articles from Part A: Civil Engineering

Technical Papers

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;4(1):. doi:10.1061/AJRUA6.0000936.
This paper presents a methodology for analyzing wind pressure data on cladding and components of low-rise buildings. The aerodynamic force acting on a specified area is obtained by summing up pressure time series measured at that area’s pressure taps times their respective tributary areas. This operation is carried out for all sums of tributary areas that make up rectangles with aspect ratios not exceeding four. The peak of the resulting area-averaged time series is extrapolated to a realistic storm duration by the translation method. The envelope of peaks over all wind directions is compared with current specifications. Results for one low-rise building for one terrain condition indicate that these specifications can seriously underestimate pressures on gable roofs and walls. Comparison of the proposed methodology with an alternative method for assignment of tributary areas and area averaging is shown as well.
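The area-averaging step — a tributary-area-weighted sum of tap pressure time series, followed by extraction of the peak — can be sketched directly. The tap records and areas below are hypothetical, and the translation-method extrapolation to a full storm duration is omitted:

```python
# Hypothetical pressure-tap records (pressure coefficients) and tributary
# areas for one rectangular panel on the roof.
taps = {
    "t1": {"area": 0.5, "cp": [-0.8, -1.2, -0.9, -2.1, -1.0]},
    "t2": {"area": 0.3, "cp": [-0.7, -1.0, -1.4, -1.8, -0.9]},
    "t3": {"area": 0.2, "cp": [-0.9, -1.1, -1.0, -2.4, -1.1]},
}

total_area = sum(t["area"] for t in taps.values())
n_steps = len(next(iter(taps.values()))["cp"])

# Area-averaged time series: tributary-area-weighted sum of tap pressures.
series = [sum(t["area"] * t["cp"][k] for t in taps.values()) / total_area
          for k in range(n_steps)]

# Peak (largest-magnitude suction) of the averaged series.
peak = min(series)
```

In the full methodology this peak would then be extrapolated to a realistic storm duration and enveloped over all wind directions.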

Topics:
Structures , Wind pressure , Cladding systems (Building)
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;4(1):. doi:10.1061/AJRUA6.0000938.
Risk identification is adversely affected by persistent discrepancies in the definition and application of risks and related notions, such as hazards and impacts. A paradigm shift is beginning to take effect, proposing the preliminary identification of risk sources to ameliorate the aforementioned adversities. However, apart from identifying risk sources from the outset, the bulk of the project risk-related research already conducted, from which risk sources could be derived, is still not free of discrepancies and falls short of use. In this paper, a new linguistic clustering algorithm, using the k-means++ procedure together with the semantic tools of stop word removal and word stemming, is developed and codified. The algorithm is then applied to a vast set of risk notions drawn from an exhaustive review of the relevant literature. The clustered and semantically processed results of the application are then used for the deduction of risk sources. Thus, this paper provides a compact, general, and encompassing master set of risk sources, discretized among distinct overhead categories.
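The preprocessing-plus-seeding pipeline can be sketched as follows: stop word removal, crude suffix stripping in place of a proper stemmer, bag-of-words vectors, and k-means++ seeding (each new center drawn with probability proportional to squared distance from the nearest existing center). The phrases, stop word list, and suffix rules are illustrative only:

```python
import random

random.seed(6)

STOPWORDS = {"of", "the", "to", "a", "in", "and", "due"}

def stem(word):
    # Crude suffix stripping as a stand-in for a proper stemmer.
    for suf in ("ing", "ation", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def tokenize(phrase):
    return [stem(w) for w in phrase.lower().split() if w not in STOPWORDS]

phrases = [  # hypothetical risk notions harvested from the literature
    "delay of material delivery",
    "material delivery delays",
    "escalation of material prices",
    "design changes due to the owner",
    "frequent design change requests",
]
vocab = sorted({t for p in phrases for t in tokenize(p)})

def vector(phrase):
    # Bag-of-words count vector over the processed vocabulary.
    toks = tokenize(phrase)
    return [float(toks.count(v)) for v in vocab]

vecs = [vector(p) for p in phrases]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeanspp_seeds(vecs, k):
    # k-means++ seeding: new centers drawn proportionally to the squared
    # distance from the nearest already-chosen center.
    seeds = [random.choice(vecs)]
    while len(seeds) < k:
        d2 = [min(dist2(v, s) for s in seeds) for v in vecs]
        seeds.append(random.choices(vecs, weights=d2, k=1)[0])
    return seeds

seeds = kmeanspp_seeds(vecs, 2)
```

Standard Lloyd iterations would then refine these seeds into the final clusters from which risk sources are deduced.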

Topics:
Algorithms , Semantics , Risk , Hazards

Case Studies

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;4(1):. doi:10.1061/AJRUA6.0000935.
This study investigates availability-based, reliability-centered maintenance scheduling for the domestic (building-integrated) hot water (DHW) portion of HVAC systems. The keeping system availability (KSA) method is adopted, which provides maintenance scheduling by incorporating the effect of the maintenance activities. This method was originally developed for maintenance scheduling in power plants, in which the continual ability to generate power is a critical issue. The approach is applied here to the DHW system of HVACs, which is likewise critical for the provision of hot water in buildings during the long cold seasons in Canada. The mean time to failure (MTTF) and mean time to repair (MTTR) are used to measure the availability of the DHW system. Components with different maintenance timings are sorted according to the effect of maintenance on the availability of the system. Finally, a combination of maintenance schedules for the components of the DHW system is provided to ensure its availability while avoiding overmaintenance.
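The MTTF/MTTR availability measure used above has a compact form: inherent availability is MTTF / (MTTF + MTTR), and for components in series the system availability is the product. The component names and hour values below are illustrative, and the availability-based sort is a simple stand-in for the KSA ranking step:

```python
def availability(mttf_hours, mttr_hours):
    # Inherent availability from mean time to failure and mean time to repair.
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical DHW components: (MTTF, MTTR) in hours, illustrative only.
components = {
    "boiler":       (8000.0, 48.0),
    "circ_pump":    (12000.0, 8.0),
    "mixing_valve": (20000.0, 4.0),
}

avail = {name: availability(*v) for name, v in components.items()}

# Series assumption: every component must be up for hot water delivery.
system_availability = 1.0
for a in avail.values():
    system_availability *= a

# Rank components for maintenance attention, least available first.
ranking = sorted(avail, key=avail.get)
```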

Topics:
Maintenance , Reliability , Hot water

Technical Papers

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2018;4(1):. doi:10.1061/AJRUA6.0000949.
This study investigates the use of big data analytics in uncertainty quantification and applies the proposed framework to structural diagnosis and prognosis. With smart sensor technology making progress and low-cost online monitoring becoming increasingly possible, large quantities of data can be acquired during monitoring, thus exceeding the capacity of traditional data analytics techniques. The authors explore a software application technique to parallelize data analytics and efficiently handle the high volume, velocity, and variety of sensor data. Next, both forward and inverse problems in uncertainty quantification are investigated with this efficient computational approach. The authors use Bayesian methods for the inverse problem of diagnosis and parallelize numerical integration techniques such as Markov-chain Monte Carlo simulation and particle filter. To predict damage growth and the structure’s remaining useful life (forward problem), Monte Carlo simulation is used to propagate the uncertainties (both aleatory and epistemic) to the future state. The software approach is again applied to drive the parallelization of multiple finite-element analysis (FEA) runs, thus greatly saving on the computational cost. The proposed techniques are illustrated for the efficient diagnosis and prognosis of alkali-silica reactions in a concrete structure.
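The forward-propagation pattern — sample the uncertain inputs, run many independent model evaluations in parallel, and aggregate — can be sketched with a thread pool. A closed-form surrogate stands in for the finite-element run, and the input distributions are hypothetical; a production version would distribute process- or cluster-level workers rather than threads:

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(7)

def damage_model(inputs):
    # Hypothetical surrogate for one finite-element run: maps uncertain
    # inputs (reaction rate, humidity factor) to a damage index at year 10.
    rate, humidity = inputs
    return rate * humidity * 10.0

# Sample the uncertain inputs (aleatory and epistemic combined for brevity).
samples = [(random.uniform(0.01, 0.03), random.uniform(0.6, 1.0))
           for _ in range(1000)]

# Forward propagation: run the model over all samples in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    damage = list(pool.map(damage_model, samples))

mean_damage = sum(damage) / len(damage)
```

Because the runs are independent, the same `map` pattern scales to any parallel backend; the histogram of `damage` approximates the propagated uncertainty in the future damage state.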

Topics:
Uncertainty quantification
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;4(1):. doi:10.1061/AJRUA6.0000948.
Timely completion of dam and hydroelectric power plant (HEPP) projects is indispensable for the countries constructing them due to their economic, political, and social impacts. Robust and stable schedules should be created at the beginning of these projects in order to realistically estimate project durations considering uncertainties and variations. This paper proposes a buffer sizing methodology based on fuzzy risk assessment which can be used to calculate time buffers accurately for concrete gravity dam and HEPP projects by considering the vulnerability of activities to various risk factors as well as their interdependencies. A generic schedule is developed and 89 potential causes of delay/risk factors are identified for the concrete gravity dam and HEPP projects. Risk assessment is conducted at the activity level. The inputs of the model are frequency and severity of risk factors, and the output is estimated time buffer as a percentage of original duration. Implementation of the model is illustrated by an example project. Results show that outputs of the model can be used for scheduling, estimation of time buffers, and risk management of concrete gravity dam and HEPP projects. Although the model and its outputs are specific for concrete gravity dams, the buffer sizing methodology based on fuzzy risk assessment can easily be adapted to other types of construction projects.
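The frequency/severity-to-buffer mapping can be sketched with a minimal Mamdani-style fuzzy system: triangular memberships, min-combined rule strengths, and a weighted-average defuzzification into a buffer percentage. The membership shapes, rule base, and output percentages below are illustrative inventions, not the paper's 89-factor model:

```python
def triangular(x, a, b, c):
    # Triangular membership function with support (a, c) and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def buffer_percentage(frequency, severity):
    # Inputs on a 0-1 scale; "low"/"high" fuzzy sets per input.
    low_f = triangular(frequency, -0.5, 0.0, 0.6)
    high_f = triangular(frequency, 0.4, 1.0, 1.5)
    low_s = triangular(severity, -0.5, 0.0, 0.6)
    high_s = triangular(severity, 0.4, 1.0, 1.5)
    # Rule strengths by min, defuzzified to a buffer % by weighted average.
    rules = [
        (min(low_f, low_s), 2.0),     # low risk  -> small buffer
        (min(low_f, high_s), 8.0),
        (min(high_f, low_s), 8.0),
        (min(high_f, high_s), 15.0),  # high risk -> large buffer
    ]
    total = sum(w for w, _ in rules)
    return sum(w * b for w, b in rules) / total if total else 0.0

# e.g., an activity exposed to a frequent, moderately severe delay cause
buf = buffer_percentage(0.8, 0.5)
```

The resulting percentage would then be applied to the activity's original duration to size its time buffer.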

Topics:
Gravity (Force) , Dams , Concretes , Polishing equipment , Risk assessment , Hydroelectric power stations
