

Abstract: Let X̄1, …, X̄n denote a set of n independent, identically distributed k-dimensional absolutely continuous random variables. A general class of complete orderings of such random vectors is supplied by viewing them as concomitants of an auxiliary random variable. The resulting definitions of multivariate order statistics subsume and extend orderings that have been previously proposed, such as norm ordering and N-conditional ordering. Analogous concepts of multivariate record values and multivariate generalized order statistics are also described.
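As a minimal illustration of the concomitant construction, the sketch below orders a small multivariate sample by an auxiliary scalar, here the Euclidean norm, which recovers norm ordering as one member of the general class; the sample size and dimension are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))          # n = 10 samples of a 3-dimensional vector

# Auxiliary variable: the Euclidean norm (norm ordering is one member of the
# general class of concomitant-based orderings described above).
aux = np.linalg.norm(X, axis=1)

order = np.argsort(aux)               # ranks induced by the auxiliary variable
X_ordered = X[order]                  # multivariate "order statistics" (concomitants)
```

Replacing the norm by any other scalar function of the vector yields a different complete ordering from the same recipe.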

Abstract: In this paper we show how discrete and continuous variables can be combined using parametric conditional families of distributions, and how the likelihood weighting method can be used for propagating uncertainty through the network in an efficient manner. To illustrate the method we use, as an example, the damage assessment of reinforced concrete structures of buildings, and we formalize the steps to be followed when modeling probabilistic networks. We start with one set of conditional probabilities. Then, we examine this set for uniqueness, consistency, and parsimony. We also show that cycles can be removed because they lead to redundant probability information. This redundancy may cause inconsistency; hence the probabilities must be checked for consistency. This examination may require a reduction to an equivalent set in standard canonical form, from which one can always construct a Bayesian network, which is the most convenient model. We also perform a sensitivity analysis, which shows that the model is robust.
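The likelihood weighting step can be sketched on a toy two-node network; this is an illustrative stand-in, not the damage-assessment model of the paper, and the probabilities below are assumptions:

```python
import random

# Toy network A -> B with binary variables. Evidence is fixed at B = 1 and
# weighted rather than sampled, which is the essence of likelihood weighting.
P_A = 0.3                       # P(A = 1), assumed prior
P_B_given_A = {1: 0.9, 0: 0.2}  # P(B = 1 | A), assumed CPT

def likelihood_weighting(evidence_b, n_samples=100_000, seed=1):
    random.seed(seed)
    num = den = 0.0
    for _ in range(n_samples):
        a = 1 if random.random() < P_A else 0   # sample the non-evidence node
        w = P_B_given_A[a] if evidence_b == 1 else 1.0 - P_B_given_A[a]
        num += w * a                             # weight by evidence likelihood
        den += w
    return num / den                             # estimate of P(A = 1 | B = 1)

est = likelihood_weighting(evidence_b=1)
# Exact value for comparison: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) = 27/41
```

The same loop extends to mixed discrete/continuous nodes by replacing the CPT lookup with a conditional density evaluation.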

Abstract: The paper deals with the problem of estimating the S–N field based on samples of different lengths and testing the hypothesis of length independence of fatigue lifetimes. A Weibull model developed by Castillo et al. is used to discuss the problem and analyze two data samples of prestressing wires and prestressing strands. The analysis shows that in the first case the length independence assumption cannot be accepted, while in the second case length independence seems to be a reasonable assumption. This shows that every case must be analyzed separately and that assuming length independence can lead to unsafe designs.

Abstract: The aim of this paper is to give a general model for predicting fatigue behavior for any stress level and range based on laboratory tests. More precisely, an existing Weibull model is generalized and adapted to this type of prediction via compatibility and functional equations techniques. A wide family of models is obtained, for which inference, hypothesis testing, model choice and model validation problems are dealt with. Laboratory testing strategies adequate for the estimation process are also briefly discussed. The relevant result is that testing four groups of specimens at four different stress levels is sufficient to estimate the parameters of the model and, consequently, to give the Wöhler fields for any stress level. Together, the proposed model and laboratory tests allow any fatigue analysis to be performed, including a possible solution of the fatigue damage accumulation problem. One example of application is given to illustrate the proposed methods.

Abstract: Fatigue lifetimes depend on several physical constraints; a realistic model for analyzing fatigue lifetime data should therefore take these constraints into account. These physical considerations lead to a functional solution in the form of two five-parameter models for the analysis of fatigue lifetime data, whose parameters have clear physical interpretations. However, standard estimation methods, such as maximum likelihood, do not produce satisfactory results because: (a) the range of the distribution depends on the parameters; (b) the parameters appear nonlinearly in the likelihood gradient equations, hence their solution requires multidimensional searches which may lead to convergence problems; and (c) the maximum likelihood estimates may not exist, because the likelihood can be made infinite for some values of the parameters. Castillo and Hadi [5] consider only one of the two models and use the elemental percentile method to estimate the parameters and quantiles. This paper considers the other model. The parameters and quantiles are estimated by the elemental percentile method and are easy to compute. A simulation study shows that the estimators perform well under different values of the parameters. The method is also illustrated by fitting the model to real-life fatigue lifetime data.
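The idea behind the elemental percentile method can be sketched for a plain two-parameter Weibull, a deliberate simplification of the paper's five-parameter models: each pair of order statistics, equated to its plotting-position percentiles, yields an elemental parameter estimate, and the elemental estimates are aggregated robustly by the median.

```python
import itertools, math, random, statistics

random.seed(2)
n = 50
c_true, b_true = 2.0, 10.0
# Simulated Weibull(c, b) sample via inverse transform: x = b * (-ln U)^(1/c)
x = sorted(b_true * (-math.log(random.random())) ** (1 / c_true) for _ in range(n))

p = [(k + 0.5) / n for k in range(n)]            # plotting positions
y = [math.log(-math.log(1 - pk)) for pk in p]    # Weibull-linearized percentiles

# Elemental estimates: solve y = c*(ln x - ln b) for each pair of order statistics
c_est, b_est = [], []
for i, j in itertools.combinations(range(0, n, 5), 2):
    if x[i] == x[j]:
        continue
    c_ij = (y[j] - y[i]) / (math.log(x[j]) - math.log(x[i]))
    if c_ij <= 0:
        continue
    c_est.append(c_ij)
    b_est.append(math.exp(math.log(x[i]) - y[i] / c_ij))

c_hat = statistics.median(c_est)   # robust aggregation of elemental estimates
b_hat = statistics.median(b_est)
```

No likelihood maximization is involved, which is what sidesteps the unbounded-likelihood and convergence difficulties listed above.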

Abstract: In this paper a review of some decomposition techniques previously given by the authors to solve bi-level problems is presented within a unified formulation, and a new variant is investigated. Different reliability-based optimization problems in engineering works are formulated and solved: (a) the failure-probability safety-factor problem, which makes the classical approach, based on safety factors, compatible with the modern probability-based approach; (b) a modern reliability-based approach in which the design is based on minimizing initial/construction costs subject to failure-probability and safety-factor bounds for all failure modes; (c) minimizing the expected total cost of a structure, including maintenance and construction, which depend on the failure probabilities; and (d) a mixed model minimizing the expected total cost with added failure-probability and safety-factor bounds for all failure modes. In these four problems, the objective consists of selecting the values of the design variables that minimize the corresponding cost functions subject to reliability conditions together with geometric and code constraints. The solution becomes complex because the evaluation of failure probabilities using first-order reliability methods (FORM) involves one optimization problem per failure mode, so decomposition methods are used to solve the problem. The proposed methods use standard optimization frameworks to obtain the reliability indices and to solve the global problem within a decomposition scheme. An advantage of these approaches is that the optimization procedure and the reliability calculations are decoupled. In addition, a sensitivity analysis is performed using a method that consists of transforming the data parameters into artificial variables and using the associated dual variables. To illustrate the methods, a breakwater design example is used.
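The FORM sub-problem mentioned above, one optimization per failure mode, can be sketched as a minimum-distance problem in standard normal space; the limit-state function below is an illustrative assumption, not a structural model from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative linear limit state in standard normal space: failure when g(u) <= 0.
def g(u):
    return 3.0 - u[0] - u[1]

# FORM: the reliability index beta is the distance from the origin to the
# limit-state surface g(u) = 0.
res = minimize(lambda u: np.dot(u, u), x0=[1.0, 1.0],
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)
pf = norm.cdf(-beta)          # first-order failure-probability estimate
# For this linear g, the exact value is beta = 3/sqrt(2)
```

In the decomposition schemes of the paper, one such sub-problem per failure mode is solved in each outer iteration of the cost-minimization master problem.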

Highlights
•Examples of random variables used in civil engineering models are provided.
•Consistency of some important mathematical models used in civil engineering is analyzed.
•Dimensional problems of some models are identified and solved by providing alternatives.
•How an extreme value analysis has to be performed is clarified.
•Multivariate random models are classified as under-, over- or uniquely determined.
•Rules are given to build models without the reported inconsistencies.

Abstract: An existing, well-established Weibull model for the statistical assessment of stress-life fatigue data is shown to be applicable to strain–life fatigue analysis, as an alternative to the classical Coffin–Manson approach, with clear advantages. The model deals directly with the total strain without needing to separate its elastic and plastic components, provides an analytical statistical definition of the problem, and allows dealing with run-out data. Finally, an example of application is given to illustrate the method.

Abstract: Functional networks are used to solve some nonlinear regression problems. One particular problem is how to find the optimal transformations of the response and/or the explanatory variables and obtain the best possible functional relation between the response and predictor variables. After a brief introduction to functional networks, two specific transformation models based on functional networks are proposed. Unlike in neural networks, where the selection of the network topology is arbitrary, the selection of the initial topology of a functional network is problem driven. This important feature of functional networks is illustrated for each of the two proposed models. An equivalent but simpler network may be obtained from the initial topology using functional equations. The resultant model is then checked for uniqueness of representation. When the functions specified by the transformations are unknown in form, families of linearly independent functions are used as approximations. Two different parametric criteria are used for learning these functions: constrained least squares and maximum canonical correlation. Model selection criteria are used to avoid the problem of overfitting. Finally, the performance of the proposed method is assessed and compared to other methods using a simulation study as well as several real-life data sets.
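The approximation of an unknown network function by linearly independent basis functions can be sketched with ordinary least squares, a simplified, unconstrained stand-in for the constrained least-squares criterion; the data and the polynomial basis are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.1, 2.0, 40)
y = 1.5 * x + 0.5 * x**2 + rng.normal(scale=0.01, size=x.size)  # synthetic data

# Unknown function approximated in the span of linearly independent basis
# functions {x, x^2, x^3}; coefficients learned by least squares.
Phi = np.column_stack([x, x**2, x**3])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

In a functional network the same fit is done per neuron function, with the functional-equation structure tying the coefficients of different neurons together.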

Abstract: This paper deals with the problem of predicting traffic flows and updating these predictions when information about OD pairs and/or link flows becomes available. To this end, a Bayesian network is built which is able to take into account the random character of the level of total mean flow and the variability of OD pair flows, together with the random violation of the balance equations for OD pairs and link flows due to extra incoming or exiting traffic at links or to measurement errors. Bayesian networks provide the joint density of all unobserved variables and, in particular, the corresponding conditional and marginal densities, which allow not only joint predictions but also probability intervals. The influence of congested traffic can also be taken into consideration by combining traffic assignment rules (such as SUE) with the proposed Bayesian network model. Some examples illustrate the model and show its practical applicability.
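The kind of joint update such a network delivers can be sketched with plain multivariate normal conditioning on an observed link flow; the numbers below are toy assumptions, not a calibrated traffic model:

```python
import numpy as np

# Two OD flows and one observed link flow; the link carries both OD flows
# plus noise, which produces the covariances below.
mu = np.array([10.0, 20.0, 30.0])            # means of [od1, od2, link]
cov = np.array([[4.0, 0.0, 4.0],
                [0.0, 9.0, 9.0],
                [4.0, 9.0, 14.0]])           # var(link) = 4 + 9 + 1 (noise)

obs = np.array([36.0])                        # observed link count
i, j = [0, 1], [2]                            # unobserved / observed indices
S12 = cov[np.ix_(i, j)]
S22inv = np.linalg.inv(cov[np.ix_(j, j)])
mu_post = mu[i] + (S12 @ S22inv @ (obs - mu[j]))        # updated OD means
cov_post = cov[np.ix_(i, i)] - S12 @ S22inv @ cov[np.ix_(j, i)]
```

The posterior covariance shrinks relative to the prior, which is what yields the probability intervals mentioned in the abstract.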

Abstract: This paper deals with the problem of observability of linear systems of equations and inequalities, in which some of the unknowns and some of the right-hand side constants are known and one seeks the subset of the remaining unknowns and constants that can be uniquely determined. A general methodology for solving this problem, first for linear equations and then for inequalities, is presented. The proposed methods are illustrated by their application to the telephone network problem.
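For the equation case, a minimal sketch of the observability test, under the assumption that known unknowns are moved to the right-hand side: a remaining unknown is uniquely determined iff its coordinate vanishes in every null-space direction of the reduced matrix. The system below is illustrative, not the telephone-network example:

```python
import numpy as np

# Unknowns x1..x4; x1 is known. Equations: x1+x2+x3 = b1, x2-x3 = b2;
# nothing constrains x4, so it should come out unobservable.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0, 0.0]])
known = {0}

free = [k for k in range(A.shape[1]) if k not in known]
A_red = A[:, free]                 # known columns would move to the RHS

# Null space of the reduced matrix via SVD.
_, s, Vt = np.linalg.svd(A_red)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]             # rows span the null space

# Observable = coordinate is zero in every null-space direction.
observable = [free[k] for k in range(len(free))
              if np.all(np.abs(null_basis[:, k]) < 1e-10)]
```

Here `observable` contains the indices of x2 and x3, while x4 is excluded, matching the construction of the system.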

Abstract: Adding new corridors to a highway network is a multicriteria decision process in which a variety of social, environmental and economic factors must be evaluated and weighted for a large number of corridor alternatives. This paper proposes a new bi-level continuous location model for the expansion of a highway network by adding several highway corridors within a geographical region. The upper-level problem determines the location of the highway corridors, taking into account budgetary and technological restrictions, while the lower-level problem models the users' behavior in the resulting transport network (choices of route and transport system). The proposed model takes into account the demand in the area served by the new highway corridors, the available budget and user behavior. The model uses geographical information to estimate the length-dependent costs (such as pavement and construction costs) and the cost of earth movement. The proposed method is tested using the standard particle swarm optimization algorithm and applied to the Castilla–La Mancha geographic database. The methodology has also been extended to a multiobjective approach in order to handle uncertainty in demand.
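A minimal particle swarm optimization loop of the kind used to search corridor locations can be sketched as follows; the quadratic cost function is an illustrative stand-in for the corridor construction cost, and the coefficients are standard PSO defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def cost(p):                      # stand-in for the corridor cost surface
    return np.sum((p - np.array([2.0, -1.0])) ** 2, axis=1)

n, dim, iters = 30, 2, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

In the bi-level setting, each cost evaluation would itself solve the lower-level user-behavior problem, which is what makes derivative-free swarm search attractive.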

Abstract: The paper deals with the timetabling problem of a mixed multiple- and single-tracked railway network. Out of all the solutions minimizing the maximum relative travel time, the one minimizing the sum of the relative travel times is selected. User preferences are taken into account in the optimization problems; that is, the desired departure times of travellers are used instead of artificially planned departure times. To find the global optimum of the optimization problem, an algorithm based on the bisection rule is used to provide sharp upper bounds of the objective function, together with a strategy that drastically reduces the number of binary variables to be evaluated by considering only those that really matter. These two strategies together permit the memory requirements and the computation time to be reduced, the latter exponentially with the number of trains (several orders of magnitude for existing networks), when compared with other methods. Several application examples are presented to illustrate the possibilities and advantages of the proposed method. The model is applied to the existing Madrid–Sevilla high-speed line (double track), together with several extensions to Toledo, Valencia, Albacete, and Málaga, which are contemplated in the future plans of the Spanish high-speed train network. The results show that the computation time is reduced drastically, and that in some corridors single-tracked lines would suffice instead of double-tracked lines.
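The bisection strategy on the maximum relative travel time can be sketched generically; the feasibility oracle below is a hypothetical threshold standing in for the paper's timetable-feasibility subproblem:

```python
# Bisection on the bound T of the maximum relative travel time: find the
# smallest T for which a feasible timetable exists. Here the oracle is an
# assumed threshold; in the paper it is a binary feasibility problem.
def feasible(T):
    return T >= 1.37          # hypothetical minimal achievable bound

lo, hi = 1.0, 2.0             # bracket: infeasible at lo, feasible at hi
while hi - lo > 1e-6:
    mid = 0.5 * (lo + hi)
    if feasible(mid):
        hi = mid              # tighten the sharp upper bound
    else:
        lo = mid
# hi is now a sharp upper bound on the optimal maximum relative travel time
```

Each halving discards half of the candidate range, so the number of oracle calls grows only logarithmically with the required precision.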

Abstract: This paper deals with the problem of local sensitivity analysis in ordered parameter models. In addition to order restrictions, constraints imposed on the parameters by the model and/or the data are considered. Measures are presented for assessing how much a change in the data modifies the results and conclusions of a statistical analysis of these models. The sensitivity measures are derived using recent results in mathematical programming. The estimation problem is formulated as a primal nonlinear programming problem, and the sensitivities of the parameter estimates, as well as those of the objective function, with respect to the data are obtained. These measures are very effective in revealing influential observations in this type of model and in evaluating the changes due to changes in data values. The methods are illustrated by their application to a wide variety of examples of order-restricted models, including ordered exponential family parameters, ordered multinomial parameters, ordered linear model parameters, ordered and data-constrained parameters, and ordered functions of parameters.

Abstract: The paper presents a powerful method for estimating extreme probabilities of a target variable Z = h(X) which is a monotone function of a set of basic variables X = (X1, …, Xn). To this aim, a sample of (X1, …, Xn) is simulated in such a way that the corresponding values of Z lie in the tail of interest, and is used to fit a Pareto distribution to the associated exceedances. For cases where this method is difficult to apply, an alternative method is proposed which leads to a low rejection proportion of sample values when compared with the Monte Carlo method. Both methods are shown to be very useful for sensitivity analysis in Bayesian networks or uncertainty in risk analysis, when very large confidence intervals for the marginal/conditional probabilities are required. The methods are illustrated with several examples, and one application to a real case is used to illustrate the whole process.
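The peaks-over-threshold step can be sketched with a generalized Pareto fit to simulated exceedances; the target function, threshold level and sample sizes below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
# Target Z = h(X): an illustrative monotone function of two basic variables.
x = rng.normal(size=(200_000, 2))
z = x[:, 0] + 0.5 * x[:, 1]

u = np.quantile(z, 0.99)                    # high threshold into the tail
exc = z[z > u] - u                          # exceedances over the threshold
xi, _, sigma = genpareto.fit(exc, floc=0)   # peaks-over-threshold GPD fit

# Extreme-probability estimate: P(Z > q) ~= P(Z > u) * GPD survival at q - u
q = np.quantile(z, 0.999)
p_hat = 0.01 * genpareto.sf(q - u, xi, loc=0, scale=sigma)
```

The fitted tail then extrapolates beyond the largest simulated value, which plain Monte Carlo counting cannot do.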

Abstract: This paper presents a study of the influence of perturbations in the parameters of a functional network. A quantitative measure is introduced, related to the change in the mean squared error when noise is applied to the network parameters. This measure, based on statistical sensitivity, provides a fault tolerance estimate for a functional network and allows the performance degradation of this kind of system to be predicted. It can therefore be used to evaluate performance differences between the training process carried out on a computer and its hardware implementation. The experimental results obtained for different functional network architectures and a feedforward multilayer neural network confirm the validity of the proposed model.
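The statistical-sensitivity measure can be sketched for a trivial linear model: perturb the trained parameters with zero-mean noise and record the mean increase in MSE. This is a simplified version of the measure, with model, noise level and trial count all assumed:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 1.0                          # noiseless target
theta = np.array([2.0, 1.0])               # "trained" parameters (slope, intercept)

def mse(params):
    pred = params[0] * x + params[1]
    return np.mean((pred - y) ** 2)

# Mean MSE increase under zero-mean Gaussian parameter noise of std sigma,
# estimated by Monte Carlo.
sigma = 0.05
trials = [mse(theta + rng.normal(scale=sigma, size=2)) for _ in range(5000)]
sensitivity = np.mean(trials) - mse(theta)
```

A larger value signals a network whose performance degrades faster under hardware-induced parameter noise.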

Abstract: The paper presents a method for calculating failure probabilities, with applications to reliability analysis. The method is based on transforming the initial set of variables, together with the limit condition set, to an n-dimensional uniform random variable in the unit hypercube, and calculating the associated probability using a recursive method based on the Gauss–Legendre quadrature formulas to evaluate the resulting multiple integrals. An example of application is used to illustrate the proposed method.
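The recursive quadrature idea can be sketched in two dimensions: after the transformation to uniforms, an outer Gauss–Legendre rule integrates the measure of the inner failure set. The limit condition below is an illustrative assumption whose exact probability is 1/8:

```python
import numpy as np

# Failure region on the unit square: {u1 + u2 > 1.5}. The failure probability
# is the outer integral over u1 of the length of the inner failure interval
# in u2, computed with a Gauss-Legendre rule mapped from [-1, 1] to [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(64)
u1 = 0.5 * (nodes + 1.0)
w = 0.5 * weights

inner = np.clip(u1 - 0.5, 0.0, 1.0)    # measure of {u2 : u1 + u2 > 1.5}
p_failure = np.sum(w * inner)          # exact answer: 1/8
```

In n dimensions the inner measure is itself evaluated by the same rule one level down, which is the recursion the abstract refers to.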

Abstract: For a correct strength characterization of brittle materials, not only the maximum stress at fracture but also the geometry of the specimens has to be considered, thus taking into account the variable stress state and the size effect. Additionally, fracture may occur due to different fracture modes, for example surface or edge defects. The authors propose a maximum likelihood estimator to obtain the cumulative distribution functions of strength for surface and edge flaw populations separately, both being three-parameter Weibull cdfs referred to an elemental surface area or elemental edge length, respectively. The method has been applied to simulated 3-point bending test data. The estimated Weibull parameters have been used to compute the cdfs of strength for specimens of different sizes, also providing confidence bounds calculated by means of the bootstrap method. Finally, fracture data from 4-point bending tests on silicon carbide have been evaluated with the proposed method.
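A generic three-parameter Weibull maximum likelihood fit can be sketched with scipy; this omits the paper's separation into surface- and edge-flaw populations and the referral of the cdf to an elemental area or edge length, and the parameter values are simulated assumptions:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
c_true, loc_true, scale_true = 2.5, 100.0, 50.0   # shape, threshold, scale (MPa)
strength = weibull_min.rvs(c_true, loc=loc_true, scale=scale_true,
                           size=500, random_state=rng)

# Three-parameter Weibull fit by maximum likelihood: shape, threshold (loc)
# and scale are all estimated from the simulated strength data.
c_hat, loc_hat, scale_hat = weibull_min.fit(strength)
```

Bootstrap confidence bounds, as in the paper, would repeat this fit on resampled strength data and take percentiles of the resulting parameter estimates.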
