Guest Editorial

ASME J. Risk Uncertainty Part B. 2017;4(1):010301-010301-2. doi:10.1115/1.4037447.

The importance of uncertainty has been recognized in a wide range of modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. Because model predictions are now heavily relied upon in simulation-based system design, spanning new materials, vehicles, mechanical and civil structures, and even new drugs, erroneous predictions can have catastrophic consequences. The uncertainty and risk associated with model errors should therefore be quantified to support robust systems engineering.

Commentary by Dr. Valentin Fuster

Research Papers

ASME J. Risk Uncertainty Part B. 2017;4(1):011001-011001-10. doi:10.1115/1.4037557.

We demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in a RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The prohibitive cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and cubic EVMs. We find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which it was not calibrated.
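The calibration loop described in this abstract pairs MCMC with a cheap surrogate and a classifier that screens out nonphysical parameters. A minimal sketch of that pattern, with a hypothetical one-parameter surrogate and a simple bounds check standing in for the JIC model and the SVM classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate ("curve-fit") standing in for the RANS simulator:
# predicts a vorticity-like observable as a function of one model parameter.
def surrogate(theta):
    return 2.0 * theta + 0.5 * theta**2

# Synthetic "measurements" at an assumed true parameter value of 1.0.
theta_true, sigma = 1.0, 0.1
data = surrogate(theta_true) + sigma * rng.normal(size=20)

def log_likelihood(theta):
    return -0.5 * np.sum((data - surrogate(theta))**2) / sigma**2

# Stand-in for the SVM classifier prior: exclude nonphysical parameters.
def physical(theta):
    return 0.0 <= theta <= 5.0

# Random-walk Metropolis, always evaluating the cheap surrogate.
theta, samples = 0.5, []
ll = log_likelihood(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()
    if physical(prop):
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])   # discard burn-in
```

The accepted samples quantify the parameter uncertainty, as the 3D posterior does in the paper.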

ASME J. Risk Uncertainty Part B. 2017;4(1):011002-011002-8. doi:10.1115/1.4037452.

Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for computational fluid dynamics (CFD) applications. A particular obstacle for uncertainty quantifications in CFD problems is the large model discrepancies associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distribution for the quantities of interest (QoI). High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multimodel strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model discrepancies associated with the lower-fidelity model in the parameter space. A Gaussian process (GP) is adopted to construct the model discrepancy function, and a Bayesian approach is used to infer the discrepancies and corresponding uncertainties in the regions of the parameter space where the high-fidelity simulations are not performed. Several examples of relevance to CFD applications are performed to demonstrate the merits of the proposed strategy. Simulation results suggest that, by combining low- and high-fidelity models, the proposed approach produces better results than what either model can achieve individually.
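The core of the multimodel strategy, learning the low-to-high-fidelity discrepancy with a Gaussian process and adding it back to the cheap model, can be sketched as follows; the two "fidelity" functions and the RBF kernel length scale are illustrative assumptions, not the paper's CFD models:

```python
import numpy as np

# Hypothetical low- and high-fidelity models of one input parameter x.
def low_fidelity(x):
    return np.sin(x)

def high_fidelity(x):
    return np.sin(x) + 0.3 * x   # the low-fidelity model misses the 0.3*x trend

# A few expensive high-fidelity runs give training data for the discrepancy.
X_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
delta_train = high_fidelity(X_train) - low_fidelity(X_train)

# Gaussian-process (RBF kernel) interpolation of the discrepancy function.
def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, delta_train)

def corrected_model(x):
    x = np.atleast_1d(x)
    return low_fidelity(x) + rbf(x, X_train) @ alpha

# Check at a point where no high-fidelity run was performed.
x_test = np.array([2.5])
err_low = abs(high_fidelity(x_test) - low_fidelity(x_test))[0]
err_corr = abs(high_fidelity(x_test) - corrected_model(x_test))[0]
```

The corrected model inherits the low-fidelity model's cheap evaluations while the GP fills in the discrepancy between high-fidelity samples.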

ASME J. Risk Uncertainty Part B. 2017;4(1):011003-011003-10. doi:10.1115/1.4037454.

In a Bayesian network (BN), how a node of interest is affected by the observation at another node is a main concern, especially in backward inference. This challenge motivates the proposed global sensitivity analysis (GSA) for BNs, which calculates the Sobol’ sensitivity index to quantify the contribution of an observation node toward the uncertainty of the node of interest. In backward inference, a low sensitivity index indicates that the observation cannot reduce the uncertainty of the node of interest, so a more appropriate observation node, one with a higher sensitivity index, should be measured instead. This GSA for BNs confronts two challenges. First, the computation of the Sobol’ index requires a deterministic function, while the BN is a stochastic model. This paper uses an auxiliary variable method to convert the path between two nodes in the BN to a deterministic function, thus making the Sobol’ index computation feasible. Second, the computation of the Sobol’ index can be expensive, especially if the model inputs are correlated, which is common in a BN. This paper uses an efficient algorithm proposed by the authors to directly estimate the Sobol’ index from input–output samples of the prior distribution of the BN, thus making the proposed GSA for BNs computationally affordable. This paper also extends this algorithm so that the uncertainty reduction of the node of interest at a given observation value can be estimated. This estimate uses only the prior distribution samples, thus providing quantitative guidance for effective observation and updating.
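A given-data estimator of the first-order Sobol' index, computed directly from input–output samples as the abstract describes, can be sketched with a simple binning approach; the linear test function and auxiliary noise variable below are hypothetical stand-ins for a BN path:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stand-in for a BN path converted to a deterministic function
# via an auxiliary variable: y depends strongly on x1 and weakly on x2.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
u = rng.normal(size=n)            # auxiliary variable absorbing the BN's noise
y = 2.0 * x1 + 0.5 * x2 + 0.1 * u

def first_order_sobol(x, y, bins=50):
    """Given-data estimator: variance of conditional means over bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean())**2, weights=counts)
    return var_cond / y.var()

s1 = first_order_sobol(x1, y)   # analytic value ~ 4 / 4.26 ~ 0.94
s2 = first_order_sobol(x2, y)   # analytic value ~ 0.25 / 4.26 ~ 0.06
```

Because the estimator works on prior input–output samples alone, no additional model runs per input are needed, which is the efficiency argument made in the abstract.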

ASME J. Risk Uncertainty Part B. 2017;4(1):011004-011004-17. doi:10.1115/1.4037455.

The research objective herein is to understand the relationships between the interatomic potential parameters and properties used in the training and validation of potentials, specifically using a recently developed modified embedded-atom method (MEAM) potential for saturated hydrocarbons (C–H system). This potential was parameterized to a training set that included bond distances, bond angles, and atomization energies at 0 K of a series of alkane structures from methane to n-octane. In this work, the parameters of the MEAM potential were explored through a fractional factorial design and a Latin hypercube design to better understand how individual MEAM parameters affected several properties of molecules (energy, bond distances, bond angles, and dihedral angles) and also to quantify the relationship/correlation between various molecules in terms of these properties. The generalized methodology presented shows quantitative approaches that can be used in selecting the appropriate parameters for the interatomic potential, selecting the bounds for these parameters (for constrained optimization), selecting the responses for the training set, selecting the weights for various responses in the objective function, and setting up the single/multi-objective optimization process itself. The significance of the approach applied in this study is not only the application to the C–H system but that the broader framework can also be easily applied to any number of systems to understand the significance of parameters, their relationships to properties, and the subsequent steps for designing interatomic potentials under uncertainty.
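A Latin hypercube design of the kind used here stratifies each parameter range into equal-probability intervals and randomly pairs the strata across parameters. A minimal sketch, with illustrative (not MEAM-specific) parameter bounds:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """One random point in each of n_samples equal-probability strata,
    with the strata independently permuted across parameters."""
    strata = np.tile(np.arange(n_samples), (n_params, 1))
    perm = rng.permuted(strata, axis=1).T           # (n_samples, n_params)
    return (perm + rng.uniform(size=(n_samples, n_params))) / n_samples

rng = np.random.default_rng(0)
design = latin_hypercube(20, 3, rng)                # 20 runs, 3 parameters

# Scale the unit-cube design to hypothetical bounds for three potential
# parameters (names and ranges are illustrative only, not MEAM values).
bounds = np.array([[0.5, 2.0],    # e.g., a scaling parameter
                   [1.0, 6.0],    # e.g., a decay exponent
                   [0.0, 1.0]])   # e.g., a weighting factor
scaled = bounds[:, 0] + design * (bounds[:, 1] - bounds[:, 0])
```

Each column visits every stratum exactly once, so all 20 runs jointly cover each parameter's full range, unlike plain random sampling.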

ASME J. Risk Uncertainty Part B. 2017;4(1):011005-011005-19. doi:10.1115/1.4037457.

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
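The consistent Bayesian update underlying this OED formulation reweights prior samples by the ratio of the observed density to the push-forward of the prior, so that the posterior's push-forward matches the observed density. A sketch on a toy linear map, with all densities assumed Gaussian so the result is checkable:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig)**2) / (sig * np.sqrt(2 * np.pi))

# Hypothetical forward map Q and a standard-normal prior on the parameter.
Q = lambda lam: 2.0 * lam
prior_samples = rng.normal(size=200_000)

# Push-forward density of the prior through Q, here fitted with a normal
# (moment matching); in general this is a density estimate from samples.
q_prior = Q(prior_samples)
pf_mu, pf_sig = q_prior.mean(), q_prior.std()

# Observed density on the observable (the data of the stochastic inverse problem).
obs_mu, obs_sig = 1.0, 0.5

# Consistent-Bayes rejection sampling: accept prior draws with probability
# proportional to r = observed(Q(lam)) / push-forward(Q(lam)).
r = normal_pdf(q_prior, obs_mu, obs_sig) / normal_pdf(q_prior, pf_mu, pf_sig)
accept = rng.uniform(size=r.size) < r / r.max()
posterior_samples = prior_samples[accept]

# The push-forward of the posterior should reproduce the observed density.
push_mu = Q(posterior_samples).mean()
push_sig = Q(posterior_samples).std()
```

An OED loop on top of this would compare candidate observation sets by the expected information gain of the resulting posteriors over the prior.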

ASME J. Risk Uncertainty Part B. 2017;4(1):011006-011006-8. doi:10.1115/1.4037459.

Searching for local minima, saddle points, and minimum energy paths (MEPs) on the potential energy surface (PES) is challenging in computational materials science because of the complexity of PES in high-dimensional space and the numerical approximation errors in calculating the potential energy. In this work, a local minimum and saddle point searching method is developed based on kriging metamodels of PES. The searching algorithm is performed on both kriging metamodels as the approximated PES and the calculated one from density functional theory (DFT). As the searching advances, the kriging metamodels are further refined to include new data points. To overcome the dimensionality problem in classical kriging, a distributed kriging approach is proposed, where clusters of data are formed and one metamodel is constructed within each cluster. When the approximated PES is used during the searching, each predicted potential energy value is an aggregation of the ones from those metamodels. The dimension of each metamodel is further reduced based on the observed symmetry in materials systems. The uncertainty associated with the ground-state potential energy is quantified using the statistical mean-squared error in kriging to improve the robustness of the searching method.
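Kriging supplies both a predicted potential energy and a mean-squared error, and the searching loop refines the metamodel where that error is largest. A one-dimensional sketch with a hypothetical surrogate PES in place of DFT:

```python
import numpy as np

# Hypothetical 1D stand-in for a potential energy surface (not DFT).
def pes(x):
    return np.cos(2 * x) + 0.5 * x

X = np.array([0.0, 1.5, 3.0])        # initial expensive evaluations
y = pes(X)

def kriging(x_pred, X, y, ell=0.8, var=1.0, jitter=1e-10):
    """Simple kriging (zero-mean GP): predictive mean and mean-squared error."""
    k = lambda a, b: var * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + jitter * np.eye(len(X))
    Ks = k(x_pred, X)
    mean = Ks @ np.linalg.solve(K, y)
    mse = var - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mean, np.maximum(mse, 0.0)

grid = np.linspace(0.0, 3.0, 61)
mean, mse = kriging(grid, X, y)

# Refine the metamodel where the kriging MSE is largest, as the searching
# loop in the abstract does before continuing on the approximated PES.
x_new = grid[np.argmax(mse)]
X2 = np.append(X, x_new)
mean2, mse2 = kriging(grid, X2, pes(X2))
```

The MSE quantifies the ground-state energy uncertainty; adding the new point drives it to zero there and shrinks it nearby.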

ASME J. Risk Uncertainty Part B. 2017;4(1):011007-011007-7. doi:10.1115/1.4037460.

The nonlinear stochastic behavior of a nonconservative acousto-elastic system is the focus of the present work. The deterministic acousto-elastic system consists of a spinning disk in a compressible-fluid-filled enclosure. The nonlinear rotating plate dynamics is coupled with the linear acoustic oscillations of the surrounding fluid, and the coupled field equations are discretized and solved at various rotation speeds. The deterministic system reveals the presence of a supercritical Hopf bifurcation when a specific coupled mode undergoes a flutter instability at a particular rotation speed. The effect of randomness in the damping parameters on the coupled dynamics is investigated and quantified, and the stochastic bifurcation behavior is studied. The quantification of the parametric randomness has been undertaken by means of a spectral-projection-based polynomial chaos expansion (PCE) technique. From the marginal probability density functions (PDFs), it is observed that the stochastic system exhibits stochastic phenomenological bifurcations (P-bifurcations). The study provides insights into the behavior of the stochastic system during its P-bifurcation with reference to the deterministic Hopf bifurcation.
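Spectral-projection PCE computes each chaos coefficient as a weighted quadrature of the response against an orthogonal polynomial. A sketch in one Gaussian dimension using probabilists' Hermite polynomials, with an exponential toy response in place of the acousto-elastic model:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Spectral projection of a response y = f(xi), xi ~ N(0,1), onto
# probabilists' Hermite polynomials He_k: c_k = E[f(xi) He_k(xi)] / k!.
# The exponential response is an illustrative stand-in for the system's
# response to a random damping parameter, not the paper's model.
f = np.exp
order = 8

nodes, weights = hermegauss(30)       # quadrature for weight exp(-x^2 / 2)
weights = weights / weights.sum()     # normalize to an N(0,1) expectation

coeffs = np.array([
    np.sum(weights * f(nodes) * hermeval(nodes, np.eye(order + 1)[k]))
    / math.factorial(k)
    for k in range(order + 1)
])

# Mean and variance follow directly from the PCE coefficients
# (for f = exp, the exact values are e^0.5 and e^2 - e).
pce_mean = coeffs[0]
pce_var = sum(coeffs[k]**2 * math.factorial(k) for k in range(1, order + 1))
```

Marginal PDFs (for P-bifurcation diagnostics) would then be obtained cheaply by sampling xi and evaluating the truncated expansion.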

ASME J. Risk Uncertainty Part B. 2017;4(1):011008-011008-13. doi:10.1115/1.4037450.

The transitional Markov chain Monte Carlo (TMCMC) algorithm is an efficient way of performing Markov chain Monte Carlo (MCMC) for Bayesian uncertainty quantification on parallel computing architectures. However, the features responsible for its efficient sampling are also responsible for introducing bias into the sampling. We demonstrate that the Markov chains of each subsample in TMCMC may have uneven chain lengths that distort the intermediate target distributions, so that bias accumulates at each stage of the TMCMC algorithm. We remedy this drawback of TMCMC by proposing uniform chain lengths, with or without burn-in, so that the algorithm emphasizes sequential importance sampling (SIS) over MCMC. The proposed Bayesian annealed sequential importance sampling (BASIS) removes the bias of the original TMCMC and at the same time increases its parallel efficiency. We demonstrate the advantages and drawbacks of BASIS in modeling of bridge dynamics using finite elements and a disk-wall collision using discrete element methods.
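The BASIS idea, annealing from prior to posterior through importance reweighting, resampling, and a uniform-length MCMC move per particle, can be sketched on a toy Gaussian problem; the schedule, step size, and likelihood below are illustrative choices, not the paper's bridge or disk-wall models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian problem: N(0,1) prior on theta, Gaussian likelihood at 2.0.
def log_prior(theta):
    return -0.5 * theta**2

def log_like(theta):
    return -0.5 * ((theta - 2.0) / 0.5)**2

n = 2000
particles = rng.normal(size=n)            # draws from the prior
betas = np.linspace(0.0, 1.0, 11)         # annealing schedule

for b0, b1 in zip(betas[:-1], betas[1:]):
    # Importance reweighting for the tempered-likelihood increment.
    logw = (b1 - b0) * log_like(particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)   # resample
    # One Metropolis move per particle: uniform chain length, no burn-in,
    # emphasizing SIS over MCMC as the abstract describes.
    prop = particles + 0.3 * rng.normal(size=n)
    d = (log_prior(prop) + b1 * log_like(prop)
         - log_prior(particles) - b1 * log_like(particles))
    accept = np.log(rng.uniform(size=n)) < d
    particles = np.where(accept, prop, particles)

post_mean = particles.mean()   # exact posterior mean is 1.6 for this toy
```

Because every particle performs the same number of MCMC steps, the intermediate target distributions are not distorted by uneven chain lengths, and the per-stage work parallelizes evenly.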

Select Articles from Part A: Civil Engineering

Technical Papers

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2016;3(3):. doi:10.1061/AJRUA6.0000899.
Abstract

Traffic congestion is a serious challenge that urban transportation systems are facing. The negative impacts of congestion, including road rage, air pollution, safety issues, and traffic delays, are well recognized. Variable speed limit (VSL) systems are one countermeasure for reducing congestion and smoothing traffic flow on roadways. The impact of unexpected delays on road users is quantified through travel time reliability (TTR) measures. In this study, a bilevel optimization problem was introduced to determine the location, speed limit reduction, start time, and duration of a limited number of VSL signs while maximizing travel time reliability on selected critical paths of a network. The upper-level problem focuses on TTR optimization, whereas the lower-level problem assigns traffic to the network using a dynamic traffic assignment simulation tool. A heuristic approach, simulated annealing, was used to solve the problem. The application of the methodology to a real roadway network is shown and the results are discussed. The proposed methodology could assist traffic agencies in deciding how to allocate their limited resources across the network to maximize benefits.
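The simulated annealing step used to solve the upper-level problem can be sketched on a toy stand-in: a 0/1 vector of candidate VSL locations scored by a hypothetical benefit function, in place of the dynamic traffic assignment simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the bilevel objective: choose VSL sign
# locations x (a 0/1 vector over candidate links) to maximize a toy
# reliability score; the real lower level would run a traffic simulation.
n_links, n_signs = 20, 4
benefit = rng.uniform(size=n_links)       # hypothetical per-link benefit

def score(x):
    # Benefit of active signs, penalized for deviating from the budget.
    return benefit @ x - 0.1 * (x.sum() - n_signs)**2

def neighbor(x):
    y = x.copy()
    y[rng.integers(n_links)] ^= 1         # toggle one candidate location
    return y

x = (rng.uniform(size=n_links) < 0.2).astype(int)
best, best_score = x.copy(), score(x)
T = 1.0                                   # initial temperature
for step in range(3000):
    y = neighbor(x)
    d = score(y) - score(x)
    if d > 0 or rng.uniform() < np.exp(d / T):
        x = y                             # accept uphill, or downhill w.p. e^{d/T}
    if score(x) > best_score:
        best, best_score = x.copy(), score(x)
    T *= 0.999                            # geometric cooling
```

Because this toy objective is separable, the exact optimum is the best-k prefix of the sorted benefits, which makes the annealer's result easy to verify.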

Topics:
Reliability, Simulation, Optimization
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;3(3):. doi:10.1061/AJRUA6.0000909.
Abstract

Windstorms, in an average year, are responsible for the most insured losses in the United States. Despite the significant losses, estimation of peak wind loading on structures remains difficult due to the inherent uncertainties in the estimation process. These uncertainties have usually been grouped into what has been termed the wind loading chain, which includes elements from the wind climate (e.g., storm type, wind directionality), terrain (e.g., roughness length, surrounding buildings), and aerodynamic (e.g., building shape, orientation) and dynamic (e.g., stiffness) effects. A lack of knowledge in a particular link (i.e., a weak link) of the chain can lead to the unreliability of the entire structure. Current projections suggest the frequency and intensity of some environmental extremes, including wind speeds, will be affected by a changing climate, adding another layer of uncertainty to the wind climate link regarding the treatment of future (i.e., design) extreme wind loading for structures. This paper develops an approach for uncertainty characterization of extreme wind speeds with a worked example applied to nonhurricane events. The objective is to improve understanding of the wind climate and terrain links in the wind load chain by identifying, quantifying, and attributing uncertainties in these links, including those from climate change. This objective is achieved by (1) identifying new data-driven sources to better understand uncertainties; (2) developing a probabilistic approach to incorporate data and its associated uncertainties into the extreme wind speed estimation process, including projections of extreme winds in future climate states; and (3) quantifying and attributing uncertainties to the links of the chain considered.
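A common building block for the extreme wind speed estimation described here is fitting an extreme-value distribution to annual maxima and reading off return levels. A sketch using a method-of-moments Gumbel fit on synthetic data; the record length and parameters are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "annual maximum wind speed" record (m/s); in practice these
# would come from observations and downscaled climate projections.
true_loc, true_scale = 25.0, 4.0
annual_max = true_loc + true_scale * rng.gumbel(size=60)

# Method-of-moments Gumbel fit:
# mean = loc + gamma * scale,  std = pi * scale / sqrt(6).
gamma = 0.5772156649                       # Euler-Mascheroni constant
scale = annual_max.std(ddof=1) * np.sqrt(6) / np.pi
loc = annual_max.mean() - gamma * scale

def return_level(T):
    """Wind speed exceeded on average once every T years."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

v50 = return_level(50.0)   # a design-relevant 50-year wind speed estimate
```

Resampling the record, or perturbing it with climate-informed shifts, would then propagate the wind-climate-link uncertainty into the return-level estimate.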

Topics:
Wind velocity, Climate change
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;3(3):. doi:10.1061/AJRUA6.0000904.
Abstract

In some regions, sea level rise due to climate change is expected to increase saltwater intrusion in coastal aquifers, leading to increased salt levels in drinking water wells relying on these supplies. Seawater contains elevated concentrations of bromide, which has been shown to increase the formation and alter the speciation of disinfection by-products (DBPs) during the treatment process. DBPs have been associated with increased risk of cancer and negative reproductive outcomes, and they are regulated under drinking water standards to protect human health. This paper incorporates statistical simulation of changes in source water bromide concentrations resulting from potential increased saltwater intrusion to assess the associated impact on trihalomethane (THM) formation and speciation. Additionally, the health risk associated with these changes is determined using cancer slope factors and odds ratios. The analysis indicates that coastal utilities treating affected groundwater sources will likely meet regulatory levels for THMs, but even small changes in saltwater intrusion can have significant effects on finished water concentrations and may exceed desired health risk threshold levels due to the extent of bromination in the THMs. As a result of climate change, drinking water utilities using coastal groundwater or estuaries should consider the implications of treating high-bromide source waters. Additionally, extra consideration should be taken by surface water utilities considering mixing with groundwater sources, as elevated source water bromide could pose additional challenges for health risk despite meeting regulatory requirements for THMs.
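The risk calculation described here chains concentration through intake to a slope-factor-based cancer risk. A Monte Carlo sketch with illustrative (assumed, not sourced) concentrations, exposure factors, and slope factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal THM concentration in finished water (mg/L), shifted
# upward by a bromide-intrusion scenario; all values are illustrative only.
conc_base = rng.lognormal(mean=np.log(0.04), sigma=0.4, size=n)
conc_intrusion = conc_base * 1.5    # assumed 50% increase from a bromination shift

# Chronic-daily-intake style calculation with assumed exposure parameters:
# intake = C * IR / BW, risk = intake * slope factor.
IR = 2.0       # L/day water ingestion (assumed)
BW = 70.0      # kg body weight (assumed)
SF = 0.0079    # (mg/kg-day)^-1, a slope-factor-like constant (assumed)

risk_base = conc_base * IR / BW * SF
risk_intr = conc_intrusion * IR / BW * SF

# Probability of exceeding a 1e-5 lifetime-risk threshold under each scenario.
p_exceed_base = np.mean(risk_base > 1e-5)
p_exceed_intr = np.mean(risk_intr > 1e-5)
```

The same Monte Carlo structure supports the paper's point: a modest concentration shift can move a large fraction of the distribution past a health-risk threshold even while regulatory THM limits are still met.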

Topics:
Public utilities, Groundwater, Shorelines, Climate change, Health risk assessment

Corrections

Technical Papers

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2017;3(3):. doi:10.1061/AJRUA6.0000902.
Abstract

The conventional simulation model used in the prediction of long-term infrastructure development systems, such as public–private partnership (PPP)–build-operate-transfer (BOT) projects, assumes single probabilistic values for all of the input variables. Traditionally, all the input risks and uncertainties in Monte Carlo simulation (MCS) are modeled based on probability theory, and the result is expressed as a probability distribution function (PDF) and a cumulative distribution function (CDF), which are used for analysis and decision making. In reality, however, some of the variables are estimated based on expert judgment while others are derived from historical data. Further, the parameters of the probability distributions for the simulation model inputs are subject to change and difficult to predict. Therefore, a simulation model that can handle both fuzzy and probabilistic input variables is needed. Fuzzy randomness, a recent extension of classical probability theory, provides additional features for combining fuzzy and probabilistic data to overcome the aforementioned shortcomings. The fuzzy randomness–Monte Carlo simulation (FR-MCS) technique is a hybrid simulation method for risk and uncertainty evaluation. The proposed approach permits any type of risk and uncertainty in the input values to be explicitly defined prior to analysis and decision making. It extends the practical use of conventional MCS by providing the capability of choosing between fuzzy sets and probability distributions to quantify the input risks and uncertainties in a simulation. A new algorithm for generating fuzzy random variables, based on the α-cut, is developed as part of the proposed FR-MCS technique. FR-MCS output results are represented by fuzzy probability, and the decision variables are modeled by fuzzy CDFs. The FR-MCS technique is demonstrated in a PPP-BOT case study, and its results are compared with those obtained from conventional MCS. It is shown that the FR-MCS technique facilitates decision making for both the public- and private-sector decision makers involved in PPP-BOT projects by determining a negotiation bound for negotiable concession items (NCIs) instead of the precise values used in conventional MCS results. This approach can prevent prolonged and costly negotiations in the development phase of PPP-BOT projects by providing more flexibility for decision makers; both parties could take advantage of this technique at the negotiation table.
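The α-cut propagation at the heart of FR-MCS can be sketched as follows: probabilistic inputs are sampled as usual, while each fuzzy input is replaced at every membership level α by an interval, yielding interval-valued (fuzzy) probabilities for the decision variable. The cash-flow model and all numbers are illustrative stand-ins for a PPP-BOT analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Illustrative BOT-style model output: profit = demand * price - cost.
# 'demand' is probabilistic (lognormal, from historical data); unit 'price'
# is a triangular fuzzy number (low, mode, high) from expert judgment.
demand = rng.lognormal(mean=np.log(100.0), sigma=0.2, size=n)
price_tri = (8.0, 10.0, 11.0)    # hypothetical triangular fuzzy number
cost = 850.0                     # assumed fixed cost

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    lo, mode, hi = tri
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

# For each alpha level, propagate the interval endpoints through the model
# (profit is monotone in price, so the endpoints bound the output), giving
# lower/upper bounds on P(profit > 0) -- a fuzzy probability.
alphas = np.linspace(0.0, 1.0, 5)
bounds = []
for a in alphas:
    p_lo, p_hi = alpha_cut(price_tri, a)
    prob_lo = np.mean(demand * p_lo - cost > 0.0)
    prob_hi = np.mean(demand * p_hi - cost > 0.0)
    bounds.append((a, prob_lo, prob_hi))
```

At α = 1 the interval collapses to the mode and the bounds coincide with a conventional MCS result; lower α levels widen the bounds, which is the negotiation-bound reading the abstract describes for NCIs.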

Topics:
Simulation, Chaos
